Title
Maximum a Posteriori Policy Optimisation
Abstract
We introduce a new algorithm for reinforcement learning called Maximum a-posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative-entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.
1 INTRODUCTION
Model free reinforcement learning algorithms can acquire sophisticated behaviours by interacting with the environment while receiving simple rewards. Recent experiments (Mnih et al., 2015; Jaderberg et al., 2016; Heess et al., 2017) successfully combined these algorithms with powerful deep neural-network approximators while benefiting from the increase of compute capacity.
Unfortunately, the generality and flexibility of these algorithms comes at a price: They can require a large number of samples and – especially in continuous action spaces – suffer from high gradient variance. Taken together these issues can lead to unstable learning and/or slow convergence. Nonetheless, recent years have seen significant progress, with improvements to different aspects of learning algorithms including stability, data-efficiency and speed, enabling notable results on a variety of domains, including locomotion (Heess et al., 2017; Peng et al., 2016), multi-agent behaviour (Bansal et al., 2017) and classical control (Duan et al., 2016).
Two types of algorithms currently dominate scalable learning for continuous control problems: First, Trust-Region Policy Optimisation (TRPO; Schulman et al. 2015) and the derivative family of Proximal Policy Optimisation algorithms (PPO; Schulman et al. 2017b). These policy-gradient algorithms are on-policy by design, reducing gradient variance through large batches and limiting the allowed change in parameters. They are robust, applicable to high-dimensional problems, and require moderate parameter tuning, making them a popular first choice (Ho & Ermon, 2016). However, as on-policy algorithms, they suffer from poor sample efficiency.
In contrast, off-policy value-gradient algorithms such as the Deep Deterministic Policy Gradient (DDPG, Silver et al. 2014; Lillicrap et al. 2016), Stochastic Value Gradient (SVG, Heess et al. 2015), and the related Normalized Advantage Function formulation (NAF, Gu et al. 2016b) rely on experience replay and learned (action-)value functions. These algorithms exhibit much better data efficiency, approaching the regime where experiments with real robots are possible (Gu et al., 2016a; Andrychowicz et al., 2017). While also popular, these algorithms can be difficult to tune, especially for high-dimensional domains like general robot manipulation tasks.
In this paper we propose a novel off-policy algorithm that benefits from the best properties of both classes. It exhibits the scalability, robustness and hyperparameter insensitivity of on-policy algorithms, while offering the data-efficiency of off-policy, value-based methods.
To derive our algorithm, we take advantage of the duality between control and estimation by using Expectation Maximisation (EM), a powerful tool from the probabilistic estimation toolbox, in order to solve control problems. This duality can be understood as replacing the question “what are the actions which maximise future rewards?” with the question “assuming future success in maximising
rewards, what are the actions most likely to have been taken?”. By using this estimation objective we have more control over the policy change in both E and M steps, yielding robust learning. We show below that several algorithms, including TRPO, can be directly related to this perspective. We leverage the fast convergence properties of EM-style coordinate ascent by alternating a nonparametric data-based E-step which re-weights state-action samples, with a supervised, parametric M-step using deep neural networks.
We evaluate our algorithm on a broad spectrum of continuous control problems including a 56 DoF humanoid body. All experiments used the same optimisation hyperparameters 1. Our algorithm shows remarkable data efficiency often solving the tasks we consider an order of magnitude faster than the state-of-the-art. A video of some resulting behaviours can be found here dropbox.com/s/pgcmjst7t0zwm4y/MPO.mp4.
2 BACKGROUND AND NOTATION
2.1 RELATED WORK
Casting Reinforcement Learning (RL) as an inference problem has a long history dating back at least two decades (Dayan & Hinton, 1997). The framework presented here is inspired by a variational inference perspective on RL that has previously been utilised in multiple studies; c.f. Dayan & Hinton (1997); Neumann (2011); Deisenroth et al. (2013); Rawlik et al. (2012); Levine & Koltun (2013); Florensa et al. (2017).
Particular attention has been paid to obtaining maximum entropy policies as the solution to an inference problem. The penalisation of determinism can be seen as encouraging both robustness and simplicity. Among these are methods that perform trajectory optimisation using either linearised dynamics (Todorov, 2008; Toussaint, 2009; Levine & Koltun, 2013) or general dynamics as in path integral control (Kappen, 2005; Theodorou et al., 2010). In contrast to these algorithms, here we do not assume the availability of a transition model and avoid on-policy optimisation. A number of other authors have considered the same perspective but in a model-free RL setting (Neumann, 2011; Peters et al., 2010a; Florensa et al., 2017; Daniel et al., 2016) or inverse RL problems (Ziebart et al., 2008). These algorithms are more directly related to our work and can be cast in the same (EM-like) alternating optimisation scheme on which we base our algorithm. However, they typically lack the maximisation (M)-step – with the prominent exception of REPS, AC-REPS, PI2-GPS and MDGPS (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016) to which our algorithm is closely related as outlined below. An interesting recent addition to these approaches is an EM-perspective on the PoWER algorithm (Roux, 2016) which uses the same iterative policy improvement employed here, but commits to parametric inference distributions and avoids an exponential reward transformation, resulting in a harder to optimise lower bound.
As an alternative to these policy gradient inspired algorithms, the class of recent algorithms for soft Q-learning (e.g. Rawlik et al. (2012); Haarnoja et al. (2017); Fox et al. (2016)) parameterise and estimate a so called "soft" Q-function directly, implicitly inducing a maximum entropy policy. This perspective can also be extended to hierarchical policies (Florensa et al., 2017), and has recently been used to establish connections between Q-learning and policy gradient methods (O'Donoghue et al., 2016; Schulman et al., 2017a). In contrast, we here rely on a parametric policy; our bound and derivation are, however, closely related to the definition of the soft (entropy regularised) Q-function.
A line of work that is directly related to the "RL as inference" perspective has focused on using information theoretic regularisers such as the entropy of the policy or the Kullback-Leibler divergence (KL) between policies to stabilise standard RL objectives. In fact, most state-of-the-art policy gradient algorithms fall into this category. For example see the entropy regularization terms used in Mnih et al. (2016) or the KL constraints employed by work on trust-region based methods (Schulman et al., 2015; 2017b; Gu et al., 2017; Wang et al., 2017). The latter methods introduce a trust region constraint, defined by the KL divergence between the new policy and the old policy, so that the expected KL divergence over state space is bounded. From the perspective of this paper these trust-region based methods can be seen as optimising a parametric E-step, as in our algorithm, but are "missing" an explicit M-step.
1With the exception of the number of samples collected between updates.
Finally, the connection between RL and inference has been invoked to motivate work on exploration. The most prominent examples for this are formed by work on Boltzmann exploration such as Kaelbling et al. (1996); Perkins & Precup (2002); Sutton (1990); O’Donoghue et al. (2017), which can be connected back to soft Q-learning (and thus to our approach) as shown in Haarnoja et al. (2017).
2.2 MARKOV DECISION PROCESSES
We consider the problem of finding an optimal policy π for a discounted reinforcement learning (RL) problem; formally characterized by a Markov decision process (MDP). The MDP consists of: continuous states s, actions a, transition probabilities p(st+1|st, at) – specifying the probability of transitioning from state st to st+1 under action at –, a reward function r(s, a) ∈ R as well as the discounting factor γ ∈ [0, 1). The policy π(a|s,θ) (with parameters θ) is assumed to specify a probability distribution over action choices given any state and – together with the transition probabilities – gives rise to the stationary distribution µπ(s).
Using these basic quantities we can now define the notion of a Markov sequence or trajectory $\tau_\pi = \{(s_0, a_0), \ldots, (s_T, a_T)\}$ sampled by following the policy $\pi$; i.e. $\tau_\pi \sim p_\pi(\tau)$ with $p_\pi(\tau) = p(s_0) \prod_{t>0} p(s_{t+1}|s_t, a_t)\,\pi(a_t|s_t)$; and the expected return $\mathbb{E}_{\tau_\pi}\big[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\big]$. We will use the shorthand $r_t = r(s_t, a_t)$.
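For concreteness, here is a minimal Python sketch (ours, with an arbitrary example reward sequence) of the discounted return just defined:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t for one sampled trajectory."""
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * np.asarray(rewards)))

# Hypothetical reward sequence r_0, r_1, r_2 with gamma = 0.9:
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))  # 1.0 + 0.9*0.0 + 0.81*2.0 = 2.62
```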
3 MAXIMUM A POSTERIORI POLICY OPTIMISATION
Our approach is motivated by the well established connection between RL and probabilistic inference. This connection casts the reinforcement learning problem as that of inference in a particular probabilistic model. Conventional formulations of RL aim to find a trajectory that maximizes expected reward. In contrast, inference formulations start from a prior distribution over trajectories, condition on a desired outcome such as achieving a goal state, and then estimate the posterior distribution over trajectories consistent with this outcome.
A finite-horizon undiscounted reward formulation can be cast as an inference problem by constructing a suitable probabilistic model via a likelihood function $p(O = 1|\tau) \propto \exp(\sum_t r_t/\alpha)$, where $\alpha$ is a temperature parameter. Intuitively, O can be interpreted as the event of obtaining maximum reward by choosing an action; or the event of succeeding at the RL task (Toussaint, 2009; Neumann, 2011). With this definition we can define the following lower bound on the likelihood of optimality for the policy π:
$$\log p_\pi(O=1) = \log\int p_\pi(\tau)\,p(O=1|\tau)\,d\tau \;\geq\; \int q(\tau)\Big[\log p(O=1|\tau) + \log\frac{p_\pi(\tau)}{q(\tau)}\Big]d\tau \quad (1)$$
$$= \mathbb{E}_q\Big[\sum_t r_t/\alpha\Big] - \mathrm{KL}\big(q(\tau)\,\|\,p_\pi(\tau)\big) = \mathcal{J}(q,\pi), \quad (2)$$
where pπ is the trajectory distribution induced by policy π(a|s) as described in section 2.2 and q(τ) is an auxiliary distribution over trajectories that will be discussed in more detail below. The lower bound J is the evidence lower bound (ELBO) which plays an important role in the probabilistic modeling literature. It is worth already noting here that optimizing (2) with respect to q can be seen as a regularized RL problem.
An important motivation for transforming a RL problem into an inference problem is that this allows us to draw from the rich toolbox of inference methods: For instance, J can be optimized with the family of expectation maximization (EM) algorithms which alternate between improving J with respect to q and π. In this paper we follow classical (Dayan & Hinton, 1997) and more recent works (e.g. Peters et al. 2010b; Levine & Koltun 2013; Daniel et al. 2016; Wirth et al. 2016) and cast policy search as a particular instance of this family. Our algorithm then combines properties of existing approaches in this family with properties of recent off-policy algorithms for neural networks.
The algorithm alternates between two phases which we refer to as E and M step in reference to an EM-algorithm. The E-step improves J with respect to q. Existing EM policy search approaches perform this step typically by reweighting trajectories with sample returns (Kober & Peters, 2009) or via local trajectory optimization (Levine & Koltun, 2013). We show how off-policy deep RL
techniques and value-function approximation can be used to make this step both scalable as well as data efficient. The M-step then updates the parametric policy in a supervised learning step using the reweighted state-action samples from the E-step as targets.
These choices lead to the following desirable properties: (a) low-variance estimates of the expected return via function approximation; (b) low-sample complexity of value function estimate via robust off-policy learning; (c) minimal parametric assumption about the form of the trajectory distribution in the E-step; (d) policy updates via supervised learning in the M step; (e) robust updates via hard trust-region constraints in both the E and the M step.
3.1 POLICY IMPROVEMENT
The derivation of our algorithm then starts from the infinite-horizon analogue of the KL-regularized expected reward objective from Equation (2). In particular, we consider variational distributions $q(\tau)$ that factor in the same way as $p_\pi$, i.e. $q(\tau) = p(s_0)\prod_{t>0} p(s_{t+1}|s_t,a_t)\,q(a_t|s_t)$, which yields:
$$\mathcal{J}(q,\theta) = \mathbb{E}_q\Big[\sum_{t=0}^{\infty}\gamma^t\big[r_t - \alpha\,\mathrm{KL}\big(q(a|s_t)\,\|\,\pi(a|s_t,\theta)\big)\big]\Big] + \log p(\theta). \quad (3)$$
Note that due to the assumption about the structure of q(τ) the KL over trajectories decomposes into a KL over the individual state-conditional action distributions. This objective has also been considered e.g. by Haarnoja et al. (2017); Schulman et al. (2017a). The additional log p(θ) term is a prior over policy parameters and can be motivated by a maximum a-posteriori estimation problem (see appendix for more details).
We also define the regularized Q-value function associated with (3) as
$$Q^q_\theta(s,a) = r_0 + \mathbb{E}_{q(\tau),\,s_0=s,\,a_0=a}\Big[\sum_{t\geq 1}^{\infty}\gamma^t\big[r_t - \alpha\,\mathrm{KL}(q_t\,\|\,\pi_t)\big]\Big], \quad (4)$$
with $\mathrm{KL}(q_t\,\|\,\pi_t) = \mathrm{KL}\big(q(a|s_t)\,\|\,\pi(a|s_t,\theta)\big)$. Note that $\mathrm{KL}(q_0\,\|\,\pi_0)$ and $p(\theta)$ are not part of the Q-function as they are not a function of the action.
We observe that optimizing $\mathcal{J}$ with respect to q is equivalent to solving an expected reward RL problem with augmented reward $\tilde r_t = r_t - \alpha\log\frac{q(a_t|s_t)}{\pi(a_t|s_t,\theta)}$. In this view π represents a default policy towards which q is regularized – i.e. the current best policy. The MPO algorithm treats π as the primary object of interest. In this case q serves as an auxiliary distribution that allows optimizing $\mathcal{J}$ via alternating coordinate ascent in q and $\pi_\theta$, analogous to the expectation-maximization algorithm in the probabilistic modelling literature. In our case, the E-step optimizes $\mathcal{J}$ with respect to q while the M-step optimizes $\mathcal{J}$ with respect to π. Different optimizations in the E-step and M-step lead to different algorithms. In particular, we note that for the case where p(θ) is an uninformative prior a variant of our algorithm has a monotonic improvement guarantee, as shown in Appendix A.
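Spelling this out with the definitions above (our restatement of the identity behind Equation (3), not an additional result of the paper): since the KL is the expectation of the log-ratio under $q(\cdot|s_t)$, which is already taken inside $\mathbb{E}_q$,
$$\mathcal{J}(q,\theta) = \mathbb{E}_q\Big[\sum_{t=0}^{\infty}\gamma^t\big(r_t - \alpha\,\mathrm{KL}(q(\cdot|s_t)\,\|\,\pi(\cdot|s_t,\theta))\big)\Big] + \log p(\theta) = \mathbb{E}_q\Big[\sum_{t=0}^{\infty}\gamma^t\,\tilde r_t\Big] + \log p(\theta), \qquad \tilde r_t = r_t - \alpha\log\frac{q(a_t|s_t)}{\pi(a_t|s_t,\theta)}.$$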
3.2 E-STEP
In the E-step of iteration i we perform a partial maximization of J (q,θ) with respect to q given θ = θi. We start by setting q = πθi and estimate the unregularized action-value function:
$$Q^q_{\theta_i}(s,a) = Q_{\theta_i}(s,a) = \mathbb{E}_{\tau_{\pi_i},\,s_0=s,\,a_0=a}\Big[\sum_{t=0}^{\infty}\gamma^t r_t\Big], \quad (5)$$
since $\mathrm{KL}(q\,\|\,\pi_i) = 0$. In practice we estimate $Q_{\theta_i}$ from off-policy data (we refer to Section 4 for details about the policy evaluation step). This greatly increases the data efficiency of our algorithm. Given $Q_{\theta_i}$ we improve the lower bound $\mathcal{J}$ w.r.t. q by first expanding $Q_{\theta_i}(s,a)$ via the regularized Bellman operator $T^{\pi,q} = \mathbb{E}_{q(a|s)}\big[r(s,a) - \alpha\,\mathrm{KL}(q\,\|\,\pi_i) + \gamma\,\mathbb{E}_{p(s'|s,a)}[V_{\theta_i}(s')]\big]$, and optimize the "one-step" KL regularised objective
$$\max_q \bar{\mathcal{J}}_s(q,\theta_i) = \max_q T^{\pi,q} Q_{\theta_i}(s,a) = \max_q \mathbb{E}_{\mu(s)}\Big[\mathbb{E}_{q(\cdot|s)}[Q_{\theta_i}(s,a)] - \alpha\,\mathrm{KL}(q\,\|\,\pi_i)\Big], \quad (6)$$
since $V_{\theta_i}(s) = \mathbb{E}_{q(a|s)}[Q_{\theta_i}(s,a)]$ and thus $Q_{\theta_i}(s,a) = r(s,a) + \gamma V_{\theta_i}(s)$.
Maximizing Equation (6), thus obtaining $q_i = \arg\max_q \bar{\mathcal{J}}(q,\theta_i)$, does not fully optimize $\mathcal{J}$ since we treat $Q_{\theta_i}$ as constant with respect to q. An intuitive interpretation of $q_i$ is that it chooses the soft-optimal action for one step and then resorts to executing policy π. In the language of the EM algorithm this optimization implements a partial E-step. In practice we also choose $\mu_q$ to be the stationary distribution as given through samples from the replay buffer.
CONSTRAINED E-STEP
The reward and the KL terms are on an arbitrary relative scale. This can make it difficult to choose α. We therefore replace the soft KL regularization with a hard constraint with parameter ε, i.e.,
$$\max_q \mathbb{E}_{\mu(s)}\Big[\mathbb{E}_{q(a|s)}\big[Q_{\theta_i}(s,a)\big]\Big] \quad \text{s.t.}\quad \mathbb{E}_{\mu(s)}\big[\mathrm{KL}\big(q(a|s)\,\|\,\pi(a|s,\theta_i)\big)\big] < \epsilon. \quad (7)$$
If we choose to explicitly parameterize q(a|s) – option 1 below – the resulting optimisation is similar to that performed by the recent TRPO algorithm for continuous control (Schulman et al., 2015); only in an off-policy setting. Analogously, the unconstrained objective (6) is similar to the objective used by PPO (Schulman et al., 2017b). We note, however, that the KL is reversed when compared to the KL used by TRPO and PPO.
To implement (7) we need to choose a form for the variational policy q(a|s). Two options arise:
1. We can use a parametric variational distribution $q(a|s,\theta^q)$, with parameters $\theta^q$, and optimise Equation (7) via the likelihood ratio or action-value gradients. This leads to an algorithm similar to TRPO/PPO and an explicit M-step becomes unnecessary (see Alg. 3).
2. We can choose a non-parametric representation of q(a|s) given by one probability factor per sample. To achieve generalization in state space we then fit a parametric policy in the M-step.
Fitting a parametric policy in the M-step is a supervised learning problem, allowing us to employ various regularization techniques at that point. It also makes it easier to enforce the hard KL constraint.
NON PARAMETRIC VARIATIONAL DISTRIBUTION
In the non-parametric case we can obtain the optimal sample based q distribution – the solution to Equation (7) – in closed form (see the appendix for a full derivation), as,
$$q_i(a|s) \propto \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(s,a)}{\eta^*}\Big), \quad (8)$$
where we can obtain η∗ by minimising the following convex dual function,
$$g(\eta) = \eta\epsilon + \eta\int \mu(s)\log\int \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(s,a)}{\eta}\Big)\,da\,ds, \quad (9)$$
after the optimisation of which we can evaluate qi(a|s) on given samples. This optimization problem is similar to the one solved by relative entropy policy search (REPS) (Peters et al., 2010a) with the difference that we optimise only for the conditional variational distribution q(a|s) instead of a joint distribution q(a, s) – effectively fixing µq(s) to the stationary distribution given by previously collected experience – and we use the Q function of the old policy to evaluate the integral over a. While this might seem unimportant it is crucial as it allows us to estimate the integral over actions with multiple samples without additional environment interaction. This greatly reduces the variance of the estimate and allows for fully off-policy learning at the cost of performing only a partial optimization of J as described above.
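To make the sample-based E-step concrete, the following numpy/scipy sketch (ours, not the authors' implementation) minimises the dual (9) for η* and evaluates the weights (8) on actions that are assumed to have already been sampled from π(a|s, θ_i) and scored by the Q-function; the value of ε and the array layout are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def nonparametric_e_step(q_values, epsilon=0.1):
    """Sample-based E-step weights, a sketch of Eqs. (8)-(9).

    q_values: array [num_states, num_action_samples] with Q(s, a_j) for actions a_j
    sampled from pi(a|s, theta_i), so integrals over actions become sample means.
    epsilon:  the KL bound from Eq. (7).
    """
    def dual(eta_arr):
        eta = eta_arr[0]
        q_max = q_values.max(axis=1, keepdims=True)
        # log E_{a~pi}[exp(Q/eta)] per state, with a max-shift for numerical stability.
        log_mean_exp = q_max[:, 0] / eta + np.log(
            np.mean(np.exp((q_values - q_max) / eta), axis=1))
        return eta * epsilon + eta * np.mean(log_mean_exp)

    # Minimise the convex dual g(eta) over eta > 0 to obtain eta*.
    res = minimize(dual, x0=np.array([1.0]), bounds=[(1e-6, None)], method="L-BFGS-B")
    eta_star = res.x[0]

    # q_i(a_j|s) is proportional to exp(Q(s, a_j)/eta*) for actions sampled from pi.
    logits = q_values / eta_star
    logits -= logits.max(axis=1, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights, eta_star
```

The resulting per-sample weights are exactly the targets used by the parametric M-step described next.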
3.3 M-STEP
Given qi from the E-step we can optimize the lower bound J with respect to θ to obtain an updated policy θi+1 = arg maxθ J (qi,θ). Dropping terms independent of θ this entails solving for the solution of
$$\max_\theta \mathcal{J}(q_i,\theta) = \max_\theta \mathbb{E}_{\mu_q(s)}\Big[\mathbb{E}_{q(a|s)}\big[\log\pi(a|s,\theta)\big]\Big] + \log p(\theta), \quad (10)$$
which corresponds to a weighted maximum a-posteriori estimation (MAP) problem where samples are weighted by the variational distribution from the E-step. Since this is essentially a supervised learning step we can choose any policy representation in combination with any prior for regularisation. In this paper we set p(θ) to a Gaussian prior around the current policy, i.e., $p(\theta) \approx \mathcal{N}\big(\mu = \theta_i,\ \Sigma = \frac{F_{\theta_i}}{\lambda}\big)$, where $\theta_i$ are the parameters of the current policy distribution, $F_{\theta_i}$ is the empirical Fisher information matrix and λ is a positive scalar. As shown in the appendix this suggests the following generalized M-step:
$$\max_\pi \mathbb{E}_{\mu_q(s)}\Big[\mathbb{E}_{q(a|s)}\big[\log\pi(a|s,\theta)\big] - \lambda\,\mathrm{KL}\big(\pi(a|s,\theta_i)\,\|\,\pi(a|s,\theta)\big)\Big] \quad (11)$$
which can be re-written as the hard constrained version:
$$\max_\pi \mathbb{E}_{\mu_q(s)}\Big[\mathbb{E}_{q(a|s)}\big[\log\pi(a|s,\theta)\big]\Big] \quad \text{s.t.}\quad \mathbb{E}_{\mu_q(s)}\big[\mathrm{KL}\big(\pi(a|s,\theta_i)\,\|\,\pi(a|s,\theta)\big)\big] < \epsilon. \quad (12)$$
This additional constraint minimises the risk of overfitting the samples, i.e. it helps us to obtain a policy that generalises beyond the state-action samples used for the optimisation. In practice we have found the KL constraint in the M-step to greatly increase the stability of the algorithm. We also note that in the E-step we are using the reverse, mode-seeking, KL while in the M-step we are using the forward, moment-matching, KL, which reduces the tendency of the entropy of the parametric policy to collapse. This is in contrast to other RL algorithms that use the M-projection without a KL constraint to fit a parametric policy (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016). Using a KL constraint in the M-step has also been shown to be effective for stochastic search algorithms (Abdolmaleki et al., 2017).
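A minimal PyTorch sketch of the resulting weighted maximum-likelihood fit (ours, not the authors' code); `policy_net` is assumed to return the mean and Cholesky factor of a Gaussian as in Appendix D, `weights` are the E-step weights for the sampled state-action pairs, and `lam` plays the role of λ in Equation (11):

```python
import torch
from torch.distributions import MultivariateNormal, kl_divergence

def m_step_loss(policy_net, old_policy_net, states, actions, weights, lam=1.0):
    """Weighted MAP / maximum-likelihood M-step with a soft KL penalty, cf. Eqs. (10)-(11)."""
    mean, scale_tril = policy_net(states)
    pi_new = MultivariateNormal(mean, scale_tril=scale_tril)

    with torch.no_grad():
        old_mean, old_scale_tril = old_policy_net(states)
    pi_old = MultivariateNormal(old_mean, scale_tril=old_scale_tril)

    # Weighted log-likelihood of the E-step samples (forward, moment-matching fit).
    weighted_logp = (weights * pi_new.log_prob(actions)).mean()

    # KL(pi_old || pi_new), averaged over the sampled states.
    kl = kl_divergence(pi_old, pi_new).mean()

    return -(weighted_logp - lam * kl)
```

In the hard-constrained version (12), `lam` would itself be adapted (e.g. by dual gradient descent) so that the measured KL stays below ε.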
4 POLICY EVALUATION
Our method is directly applicable in an off-policy setting. For this, we have to rely on a stable policy evaluation operator to obtain a parametric representation of the Q-function $Q_\theta(s,a)$. We make use of the policy evaluation operator from the Retrace algorithm (Munos et al., 2016), which we found to yield stable policy evaluation in practice2. Concretely, we fit the Q-function $Q_{\theta_i}(s,a,\phi)$, represented by a neural network with parameters φ, by minimising the squared loss:
$$\min_\phi L(\phi) = \min_\phi \mathbb{E}_{\mu_b(s),\,b(a|s)}\Big[\big(Q_{\theta_i}(s_t,a_t,\phi) - Q^{\mathrm{ret}}_t\big)^2\Big], \;\text{with}$$
$$Q^{\mathrm{ret}}_t = Q_{\phi'}(s_t,a_t) + \sum_{j=t}^{\infty}\gamma^{j-t}\Big(\prod_{k=t+1}^{j} c_k\Big)\Big[r(s_j,a_j) + \mathbb{E}_{\pi(a|s_{j+1})}\big[Q_{\phi'}(s_{j+1},a)\big] - Q_{\phi'}(s_j,a_j)\Big],$$
$$c_k = \min\Big(1, \frac{\pi(a_k|s_k)}{b(a_k|s_k)}\Big), \quad (13)$$
2We note that, despite this empirical finding, Retrace may not be guaranteed to be stable with function approximation (Touati et al., 2017).
where $Q_{\phi'}(s,a)$ denotes the output of a target Q-network, with parameters φ′, that we copy from the current parameters φ after each M-step. We truncate the infinite sum after N steps by bootstrapping with $Q_{\phi'}$ (rather than considering a λ-return). Additionally, b(a|s) denotes the probabilities of an arbitrary behaviour policy. In our case we use an experience replay buffer and hence b is given by the action probabilities stored in the buffer, which correspond to the action probabilities at the time of action selection.
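As an illustration of how the truncated targets can be computed for a sampled sub-trajectory, here is a small numpy sketch (ours, not the authors' code). It follows the standard Retrace operator of Munos et al. (2016), including the discount on the next-state expectation, and the input arrays are assumptions about how the replay data is laid out:

```python
import numpy as np

def retrace_targets(rewards, q_target, v_target, pi_probs, b_probs, gamma=0.99):
    """Truncated Retrace targets for one sub-trajectory of length N.

    rewards[j]  = r(s_j, a_j)
    q_target[j] = Q_{phi'}(s_j, a_j)                        (target network, taken actions)
    v_target[j] = E_{pi(a|s_{j+1})}[Q_{phi'}(s_{j+1}, a)]   (expected value at next state)
    pi_probs[j] = pi(a_j|s_j),  b_probs[j] = b(a_j|s_j)     (from the replay buffer)
    Returns Q^ret_t for every t in the sub-trajectory, bootstrapping at the window end.
    """
    n = len(rewards)
    c = np.minimum(1.0, np.asarray(pi_probs) / np.asarray(b_probs))  # truncated importance weights
    deltas = np.asarray(rewards) + gamma * np.asarray(v_target) - np.asarray(q_target)

    q_ret = np.zeros(n)
    acc = 0.0
    # Accumulate corrections backwards: acc_t = delta_t + gamma * c_{t+1} * acc_{t+1}.
    for t in reversed(range(n)):
        c_next = c[t + 1] if t + 1 < n else 0.0  # truncation: no correction beyond the window
        acc = deltas[t] + gamma * c_next * acc
        q_ret[t] = q_target[t] + acc
    return q_ret
```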
5 EXPERIMENTS
For our experiments we evaluate our MPO algorithm across a wide range of tasks. Specifically, we start by looking at the continuous control tasks of the DeepMind Control Suite (Tassa et al. (2018), see Figure 1), and then consider the challenging parkour environments recently published in Heess et al. (2017). In both cases we use a Gaussian distribution for the policy whose mean and covariance are parameterized by a neural network (see appendix for details). In addition, we present initial experiments for discrete control using ATARI environments using a categorical policy distribution (whose logits are again parameterized by a neural network) in the appendix.
5.1 EVALUATION ON CONTROL SUITE
The suite of continuous control tasks that we are evaluating against contains 18 tasks, comprising a wide range of domains including well known tasks from the literature. For example, the classical cart-pole and acrobot dynamical systems, 2D and Humanoid walking as well as simple low-dimensional planar reaching and manipulation tasks. This suite of tasks was built in Python on top of MuJoCo and will also be open sourced to the public by the time of publication.
While we include plots depicting the performance of our algorithm on all tasks below, comparing it against state-of-the-art algorithms in terms of data-efficiency, we want to start by directing the attention of the reader to a more detailed evaluation on three of the harder tasks from the suite.
5.1.1 DETAILED ANALYSIS ON WALKER-2D, ACROBOT, HOPPER
We start by looking at the results for the classical Acrobot task (two degrees of freedom, one continuous action dimension) as well as the 2D walker (which has 12 degrees of freedom and thus a 12-dimensional action space and a 21-dimensional state space) and the hopper standing task. The reward in the Acrobot task is the distance of the robot's end-effector to an upright position of the underactuated system. For the walker task it is given by the forward velocity, whereas in the hopper the requirement is to stand still.
Figure 2 shows the results for this task obtained by applying our algorithm MPO as well as several ablations – in which different parts were removed from the MPO optimization – and two baselines: our implementation of Proximal Policy Optimization (PPO) (Schulman et al., 2017b) and DDPG. The hyperparameters for MPO were kept fixed for all experiments in the paper (see the appendix for hyperparameter settings).
As a first observation, we can see that MPO gives stable learning on all tasks and, thanks to its fully off-policy implementation, is significantly more sample efficient than the on-policy PPO baseline. Furthermore, we can observe that changing from the non-parametric variational distribution to a parametric distribution3 (which, as described above, can be related to PPO) results in only a minor asymptotic performance loss but slowed down optimisation and thus hampered sample efficiency, which can be attributed to the fact that the parametric q distribution required a stricter KL constraint. Removing the automatically tuned KL constraint and replacing it with a manually set entropy regulariser then yields an off-policy actor-critic method with Retrace. This policy gradient method still uses the idea of estimating the integral over actions – and thus, for a gradient based optimiser, its likelihood ratio derivative – via multiple action samples (as judged by a Q-Retrace critic). This idea has previously been coined as using the expected policy gradient (EPG) (Ciosek & Whiteson, 2017) and we hence denote the corresponding algorithm with EPG + Retrace, which no longer follows the intuitions of the MPO perspective. EPG + Retrace performed well when the correct entropy regularisation scale is used. This, however, required task-specific tuning (c.f. Figure 4 where this hyperparameter was set to the one that performed best on average across tasks). Finally, using only a single sample to estimate the integral (and hence the likelihood ratio gradient) results in an actor-critic variant with Retrace that is the least performant off-policy algorithm in our comparison.
5.1.2 COMPLETE RESULTS ON THE CONTROL SUITE
The results for MPO (non-parametric) – and a comparison to an implementation of state-of-the-art algorithms from the literature in our framework – on all the environments from the control suite that we tested on are shown in Figure 4. All tasks have rewards that are scaled to be between 0 and 1000. We note that in order to ensure a fair comparison all algorithms ran with exactly the same network configuration, used a single learner (no distributed computation), used the same optimizer and were tuned w.r.t. their hyperparameters for best performance across all tasks. We refer to the appendix for a complete description of the hyperparameters. Our comparison is made in terms of data-efficiency.
From the plot a few trends are readily apparent: i) We can clearly observe the advantage in terms of data-efficiency that methods relying on a Q-critic obtain over the PPO baseline. This difference is so extreme that in several instances the PPO baseline converges an order of magnitude slower than the off-policy algorithms; we thus indicate the asymptotic performance of PPO and DDPG (which also improved significantly later during training in some instances) with a colored star in the plot; ii) the difference between the MPO results and the (expected) policy gradient (EPG) with entropy regularisation confirms our suspicion from Section 5.1.1: finding a good setting for the entropy regulariser that transfers across environments without additional constraints on the policy distribution is very difficult, leading to instabilities in the learning curves. In contrast to this the MPO results appear to be stable across all environments; iii) Finally, in terms of data-efficiency the methods utilising Retrace obtain a clear advantage over DDPG. The single learner vanilla DDPG implementation learns the lower dimensional environments quickly but suffers in terms of learning speed in environments with sparse rewards (finger, acrobot) and higher dimensional action spaces. Overall, MPO is able to solve all environments using surprisingly moderate amounts of data. On average less than 1000 trajectories (or $10^6$ samples) are needed to reach the best performance.
3We note that we use a value function baseline Eπ[Q(s, ·)] in this setup. See appendix for details.
5.2 HIGH-DIMENSIONAL CONTINUOUS CONTROL
Next we turn to evaluating our algorithm on two higher-dimensional continuous control problems; humanoid and walker. To make computation time bearable in these more complicated domains we utilize a parallel variant of our algorithm: in this implementation K learners are all independently collecting data from an instance of the environment. Updates are performed at the end of each collected trajectory using distributed synchronous gradient descent on a shared set of policy and Q-function parameters (we refer to the appendix for an algorithm description). The results of this experiment are depicted in Figure 3.
For the Humanoid running domain we can observe a similar trend to the experiments from the previous section: MPO quickly finds a stable running policy, outperforming all other algorithms in terms of sample efficiency also in this high-dimensional control problem.
The case for the Walker-2D parkour domain (where we compare against a PPO baseline) is even more striking: where standard PPO requires approximately 1M trajectories to find a good policy, MPO finds a solution that is asymptotically no worse than the PPO solution in about 70k trajectories (or 60M samples), resulting in an order of magnitude improvement. In addition to the walker experiment we have also evaluated MPO on the Parkour domain using a humanoid body (with 22 degrees of freedom), which was learned successfully (not shown in the plot, please see the supplementary video).
5.3 DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE) where we used the same hyperparameter settings for the KL constraints as for the continuous control experiments. The results of this experiment can be found in the Appendix.
6 CONCLUSION
We have presented a new off-policy reinforcement learning algorithm called Maximum a-posteriori Policy Optimisation (MPO). The algorithm is motivated by the connection between RL and inference and it consists of an alternating optimisation scheme that has a direct relation to several existing algorithms from the literature. Overall, we arrive at a novel, off-policy algorithm that is highly data efficient, robust to hyperparameter choices and applicable to complex control problems. We demonstrated the effectiveness of MPO on a large set of continuous control problems.
A PROOF OF MONOTONIC IMPROVEMENT FOR THE KL-REGULARIZED POLICY OPTIMIZATION PROCEDURE
In this section we prove a monotonic improvement guarantee for KL-regularized policy optimization via alternating updates on π and q under the assumption that the prior on θ is uninformative.
A.1 REGULARIZED REINFORCEMENT LEARNING
Let π be an arbitrary policy. For any other policy q such that, for all x, a, {π(a|x) > 0} =⇒ {q(a|x) > 0}, define the π-regularized reward for policy q:
$$r^{\pi,q}_\alpha(x,a) = r(x,a) - \alpha\log\frac{q(a|x)}{\pi(a|x)},$$
where α > 0.
Bellman operators: Define the π-regularized Bellman operator for policy q
$$T^{\pi,q}_\alpha V(x) = \mathbb{E}_{a\sim q(\cdot|x)}\Big[r^{\pi,q}_\alpha(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)} V(y)\Big],$$
and the non-regularized Bellman operator for policy q
$$T^{q} V(x) = \mathbb{E}_{a\sim q(\cdot|x)}\Big[r(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)} V(y)\Big].$$
Value function: Define the π-regularized value function for policy q as
$$V^{\pi,q}_\alpha(x) = \mathbb{E}_q\Big[\sum_{t\geq 0}\gamma^t r^{\pi,q}_\alpha(x_t,a_t)\,\Big|\,x_0=x, q\Big],$$
and the non-regularized value function
$$V^{q}(x) = \mathbb{E}_q\Big[\sum_{t\geq 0}\gamma^t r(x_t,a_t)\,\Big|\,x_0=x, q\Big].$$
Proposition 1. For any q, π, V, we have $V^{\pi,q}_\alpha \leq V^{q}$ and $T^{\pi,q}_\alpha V \leq T^{q} V$. Indeed
$$\mathbb{E}_q\Big[\log\frac{q(a_t|x_t)}{\pi(a_t|x_t)}\Big] = \mathrm{KL}\big(q(\cdot|x_t)\,\|\,\pi(\cdot|x_t)\big) \geq 0.$$
Optimal value function and policy: Define the optimal regularized value function $V^{\pi,*}_\alpha(x) = \max_q V^{\pi,q}_\alpha(x)$, and the optimal (non-regularized) value function $V^{*}(x) = \max_q V^{q}(x)$. The optimal policy of the π-regularized problem is $q^{\pi,*}_\alpha(\cdot|x) = \arg\max_q V^{\pi,q}_\alpha(x)$ and the optimal policy of the non-regularized problem is $q^{*}(\cdot|x) = \arg\max_q V^{q}$.
Proposition 2. $V^{\pi,q}_\alpha$ is the unique fixed point of $T^{\pi,q}_\alpha$, and $V^{q}$ is the unique fixed point of $T^{q}$. Thus we have the following Bellman equations: for all $x \in \mathcal{X}$,
$$V^{\pi,q}_\alpha(x) = \sum_a q(a|x)\Big[r^{\pi,q}_\alpha(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}\big[V^{\pi,q}_\alpha(y)\big]\Big] \quad (14)$$
$$V^{q}(x) = \sum_a q(a|x)\Big[r(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}\big[V^{q}(y)\big]\Big] \quad (15)$$
$$V^{\pi,*}_\alpha(x) = r^{\pi,q^{\pi,*}_\alpha}_\alpha(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}\big[V^{\pi,*}_\alpha(y)\big] \quad \text{for all } a\in\mathcal{A}, \quad (16)$$
$$V^{*}(x) = \max_{a\in\mathcal{A}}\Big[r(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}\big[V^{*}(y)\big]\Big]. \quad (17)$$
Notice that (16) holds for all actions $a\in\mathcal{A}$, and not only in expectation w.r.t. $a\sim q(\cdot|x)$.
A.2 REGULARIZED JOINT POLICY GRADIENT
We now consider a parametrized policy πθ and consider maximizing the regularized joint policy optimization problem for a given initial state x0 (this could be a distribution over initial states). Thus we want to find a parameter θ that (locally) maximizes
$$\mathcal{J}(\theta,q) = V^{\pi_\theta,q}(x_0) = \mathbb{E}_q\Big[\sum_{t\geq 0}\gamma^t\big(r(x_t,a_t) - \alpha\,\mathrm{KL}\big(q(\cdot|x_t)\,\|\,\pi_\theta(\cdot|x_t)\big)\big)\,\Big|\,x_0, q\Big].$$
We start with an initial parameter $\theta_0$ and define a sequence of policies $\pi_i = \pi_{\theta_i}$ parametrized by $\theta_i$, in the following way:
• Given $\theta_i$, define $q_i = \arg\max_q T^{\pi_{\theta_i},q}_\alpha V^{\pi_{\theta_i}}$,
• Define $\theta_{i+1}$ as
$$\theta_{i+1} = \theta_i - \beta\,\nabla_\theta\,\mathbb{E}_{\pi_i}\Big[\sum_{t\geq 0}\gamma^t\,\mathrm{KL}\big(q_i(\cdot|x_t)\,\|\,\pi_\theta(\cdot|x_t)\big)\big|_{\theta=\theta_i}\,\Big|\,x_0, \pi_i\Big]. \quad (18)$$
Proposition 3. We have the following properties:
• The policy $q_i$ satisfies:
$$q_i(a|x) = \frac{\pi_i(a|x)\,e^{\frac{1}{\alpha}Q^{\pi_i}(x,a)}}{\mathbb{E}_{b\sim\pi_i(\cdot|x)}\big[e^{\frac{1}{\alpha}Q^{\pi_i}(x,b)}\big]}, \quad (19)$$
where $Q^{\pi}(x,a) = r(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}V^{\pi}(y)$.
• We have $V^{\pi_i,q_i}_\alpha \geq V^{\pi_i}$. (20)
• For β sufficiently small, we have
$$\mathcal{J}(\theta_{i+1}, q_{i+1}) \geq \mathcal{J}(\theta_i, q_i) + c\,g_i, \quad (21)$$
where c is a numerical constant, and $g_i$ is the norm of the gradient (minimized by the algorithm):
$$g_i = \Big\|\nabla_\theta\,\mathbb{E}_{\pi_i}\Big[\sum_{t\geq 0}\gamma^t\,\mathrm{KL}\big(q_i(\cdot|x_t)\,\|\,\pi_\theta(\cdot|x_t)\big)\big|_{\theta=\theta_i}\,\Big|\,x_0, \pi_i\Big]\Big\|.$$
Thus we build a sequence of policies $(\pi_{\theta_i}, q_i)$ whose values $\mathcal{J}(\theta_i, q_i)$ are non-decreasing and thus converge to a local maximum. In addition, the improvement is lower-bounded by a constant times the norm of the gradient, so the algorithm keeps improving the performance until the gradient vanishes (when we reach the limit of the capacity of our representation).
Proof. We have
$$q_i(\cdot|x) = \arg\max_q \mathbb{E}_{a\sim q(\cdot|x)}\Big[\underbrace{r(x,a) + \gamma\,\mathbb{E}_{y\sim p(\cdot|x,a)}V^{\pi_i}(y)}_{Q^{\pi_i}(x,a)} - \alpha\log\frac{q(a|x)}{\pi_i(a|x)}\Big],$$
from which we deduce (19). Now, from the definition of $q_i$, we have
$$T^{\pi_i,q_i}_\alpha V^{\pi_i} \geq T^{\pi_i,\pi_i}_\alpha V^{\pi_i} = T^{\pi_i} V^{\pi_i} = V^{\pi_i}.$$
Now, since $T^{\pi_i,q_i}_\alpha$ is a monotone operator (i.e. if $V_1 \geq V_2$ elementwise, then $T^{\pi_i,q_i}_\alpha V_1 \geq T^{\pi_i,q_i}_\alpha V_2$) and its fixed point is $V^{\pi_i,q_i}_\alpha$, we have
$$V^{\pi_i,q_i}_\alpha = \lim_{t\to\infty}\big(T^{\pi_i,q_i}_\alpha\big)^t V^{\pi_i} \geq V^{\pi_i},$$
which proves (20).
Now, in order to prove (21) we derive the following steps.
Step 1: From the definition of $q_{i+1}$ we have, for any x,
$$\mathbb{E}_{a\sim q_{i+1}}\big[Q^{\pi_{i+1}}(x,a)\big] - \alpha\,\mathrm{KL}\big(q_{i+1}(\cdot|x)\,\|\,\pi_{i+1}(\cdot|x)\big) \geq \mathbb{E}_{a\sim q_i}\big[Q^{\pi_{i+1}}(x,a)\big] - \alpha\,\mathrm{KL}\big(q_i(\cdot|x)\,\|\,\pi_{i+1}(\cdot|x)\big). \quad (22)$$
Writing the functional that we minimize as $f(\pi,q,\theta) = \mathbb{E}_\pi\big[\sum_{t\geq 0}\gamma^t\,\mathrm{KL}\big(q(\cdot|x_t)\,\|\,\pi_\theta(\cdot|x_t)\big)\,\big|\,x_0,\pi\big]$, the update rule is $\theta_{i+1} = \theta_i - \beta\,\nabla_\theta f(\pi_i,q_i,\theta_i)$. Thus we have that, for sufficiently small β,
$$f(\pi_i,q_i,\theta_{i+1}) \leq f(\pi_i,q_i,\theta_i) - \beta g_i, \quad (23)$$
where $g_i = \tfrac{1}{2}\|\nabla_\theta f(\pi_i,q_i,\theta_i)\|$.
Step 2: Now define $\mathcal{F}$:
$$\mathcal{F}(\pi,q,\theta,\pi') = \mathbb{E}_\pi\Big[\sum_{t\geq 0}\gamma^t\Big(\mathbb{E}_{a\sim q}\big[Q^{\pi'}(x_t,a)\big] - \alpha\,\mathrm{KL}\big(q(\cdot|x_t)\,\|\,\pi_\theta(\cdot|x_t)\big)\Big)\,\Big|\,x_0,\pi\Big] = \delta_{x_0}(I-\gamma P^{\pi})^{-1} T^{\pi_\theta,q}_\alpha V^{\pi'} = \delta_{x_0}(I-\gamma P^{\pi})^{-1} T^{q} V^{\pi'} - f(\pi,q,\theta),$$
where $\delta_{x_0}$ is a Dirac (in the row vector $x_0$), and $P^{\pi}$ is the transition matrix for policy π.
From (22) and (23) we deduce that
$$\mathcal{F}(\pi_i,q_{i+1},\theta_{i+1},\pi_{i+1}) \geq \mathcal{F}(\pi_i,q_i,\theta_{i+1},\pi_{i+1}) \geq \mathcal{F}(\pi_i,q_i,\theta_i,\pi_{i+1}) + \beta g_i.$$
We deduce
$$\begin{aligned}
\mathcal{F}(\pi_i,q_{i+1},\theta_{i+1},\pi_i)
&\geq \mathcal{F}(\pi_i,q_i,\theta_i,\pi_i) + \beta g_i + \mathcal{F}(\pi_i,q_{i+1},\theta_{i+1},\pi_i) - \mathcal{F}(\pi_i,q_{i+1},\theta_{i+1},\pi_{i+1}) + \mathcal{F}(\pi_i,q_i,\theta_i,\pi_{i+1}) - \mathcal{F}(\pi_i,q_i,\theta_i,\pi_i) \\
&= \mathcal{F}(\pi_i,q_i,\theta_i,\pi_i) + \beta g_i + \underbrace{\mathbb{E}_{\pi_i}\Big[\sum_{t\geq 0}\gamma^t\Big(\mathbb{E}_{a\sim q_{i+1}}\big[Q^{\pi_i}(x_t,a)-Q^{\pi_{i+1}}(x_t,a)\big] - \mathbb{E}_{a\sim q_i}\big[Q^{\pi_i}(x_t,a)-Q^{\pi_{i+1}}(x_t,a)\big]\Big)\Big]}_{=O(\beta^2)\ \text{since}\ \pi_i=\pi_{i+1}+O(\beta)\ \text{and}\ q_i=q_{i+1}+O(\beta)}.
\end{aligned}$$
This rewrites as
$$\delta_{x_0}(I-\gamma P^{\pi_i})^{-1}\big(T^{q_{i+1},\pi_{i+1}}_\alpha V^{\pi_i} - T^{q_i,\pi_i}_\alpha V^{\pi_i}\big) \geq \beta g_i + O(\beta^2). \quad (24)$$
Step 3: Now a bit of algebra. For two stochastic matrices P and P', we have
$$\begin{aligned}
(I-\gamma P)^{-1} &= (I-\gamma P')^{-1} + \gamma(I-\gamma P)^{-1}(P-P')(I-\gamma P')^{-1} \\
&= (I-\gamma P')^{-1} + \gamma\big[(I-\gamma P')^{-1} + \gamma(I-\gamma P)^{-1}(P-P')(I-\gamma P')^{-1}\big](P-P')(I-\gamma P')^{-1} \\
&= (I-\gamma P')^{-1} + \gamma(I-\gamma P')^{-1}(P-P')(I-\gamma P')^{-1} + \gamma^2(I-\gamma P)^{-1}(P-P')(I-\gamma P')^{-1}(P-P')(I-\gamma P')^{-1}.
\end{aligned}$$
Applying this equality to the transition matrices $P^{\pi_i}$ and $P^{\pi_{i+1}}$, and since $\|P^{\pi_{i+1}} - P^{\pi_i}\| = O(\beta)$, we have:
$$\begin{aligned}
V^{q_{i+1},\pi_{i+1}}_\alpha &= (I-\gamma P^{\pi_{i+1}})^{-1} r^{q_{i+1},\pi_{i+1}}_\alpha \\
&= (I-\gamma P^{\pi_i})^{-1} r^{q_{i+1},\pi_{i+1}}_\alpha + \gamma(I-\gamma P^{\pi_i})^{-1}(P^{\pi_{i+1}} - P^{\pi_i})(I-\gamma P^{\pi_i})^{-1} r^{q_{i+1},\pi_{i+1}}_\alpha + O(\beta^2) \\
&= (I-\gamma P^{\pi_i})^{-1} r^{q_i,\pi_i}_\alpha + (I-\gamma P^{\pi_i})^{-1}\big(r^{q_{i+1},\pi_{i+1}}_\alpha - r^{q_i,\pi_i}_\alpha + \gamma P^{\pi_{i+1}} - \gamma P^{\pi_i}\big)(I-\gamma P^{\pi_i})^{-1} r^{q_i,\pi_i}_\alpha + O(\beta^2) \\
&= V^{q_i,\pi_i}_\alpha + (I-\gamma P^{\pi_i})^{-1}\big(T^{q_{i+1},\pi_{i+1}}_\alpha V^{\pi_i} - T^{q_i,\pi_i}_\alpha V^{\pi_i}\big) + O(\beta^2).
\end{aligned}$$
Finally, using (24), we deduce that
$$\begin{aligned}
\mathcal{J}(\theta_{i+1},q_{i+1}) = V^{q_{i+1},\pi_{i+1}}_\alpha(x_0) &= V^{q_i,\pi_i}_\alpha(x_0) + \delta_{x_0}(I-\gamma P^{\pi_i})^{-1}\big(T^{q_{i+1},\pi_{i+1}}_\alpha V^{\pi_i} - T^{q_i,\pi_i}_\alpha V^{\pi_i}\big) + O(\beta^2) \\
&\geq \mathcal{J}(\theta_i,q_i) + \beta g_i + O(\beta^2) \\
&\geq \mathcal{J}(\theta_i,q_i) + \tfrac{1}{2}\beta g_i,
\end{aligned}$$
for small enough β.
B ADDITIONAL EXPERIMENT: DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE). For this experiment we used the same hyperparameter settings for the KL constraints as for the continuous control experiments as well as the same learning rate, and merely altered the network architecture to the standard network structure used by DQN (Mnih et al., 2015) – and created a separate network with the same architecture, but predicting the parameters of the policy distribution. A comparison between our algorithm and well established baselines from the literature, in terms of the mean performance, is listed in Table 1. While we do not obtain state-of-the-art performance in this experiment, the fact that MPO is competitive, out-of-the-box, in these domains suggests that combining the ideas presented in this paper with recent advances for RL with discrete actions (Bellemare et al., 2017) could be a fruitful avenue for future work.
C EXPERIMENT DETAILS
In this section we give the details on the hyperparameters used for each experiment. All the continuous control experiments use a feed-forward network except for Parkour-2d, where we used the same network architecture as in Heess et al. (2017). Other hyperparameters for MPO with a non-parametric variational distribution were set as follows,
Hyperparameters for MPO with parametric variational distribution were as follows,
D DERIVATION OF UPDATE RULES FOR A GAUSSIAN POLICY
For continuous control we assume that the policy is given by a Gaussian distribution with a full covariance matrix, i.e., $\pi(a|s,\theta) = \mathcal{N}(\mu, \Sigma)$. Our neural network outputs the mean $\mu = \mu(s)$ and Cholesky factor $A = A(s)$, such that $\Sigma = AA^T$. The lower triangular factor A has positive diagonal elements enforced by the softplus transform $A_{ii} \leftarrow \log(1 + \exp(A_{ii}))$.
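The following PyTorch sketch illustrates one way to implement this parameterisation; it is our own illustrative code, and the torso architecture, layer sizes and activation are assumptions rather than the paper's network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import MultivariateNormal

class GaussianPolicyHead(nn.Module):
    """Full-covariance Gaussian policy with a softplus-constrained Cholesky factor."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(state_dim, hidden), nn.ELU(),
                                   nn.Linear(hidden, hidden), nn.ELU())
        self.mean_head = nn.Linear(hidden, action_dim)
        # One output per entry of the lower-triangular factor A.
        self.chol_head = nn.Linear(hidden, action_dim * (action_dim + 1) // 2)
        self.action_dim = action_dim
        self.register_buffer("tril_idx", torch.tril_indices(action_dim, action_dim))

    def forward(self, states):
        h = self.torso(states)
        mean = self.mean_head(h)
        entries = self.chol_head(h)
        A = torch.zeros(states.shape[0], self.action_dim, self.action_dim,
                        device=states.device, dtype=states.dtype)
        A[:, self.tril_idx[0], self.tril_idx[1]] = entries
        # Enforce a positive diagonal: A_ii <- log(1 + exp(A_ii)).
        diag = torch.diagonal(A, dim1=-2, dim2=-1)
        A = A - torch.diag_embed(diag) + torch.diag_embed(F.softplus(diag))
        return MultivariateNormal(mean, scale_tril=A)
```

At sampling time, `dist.sample()` draws actions and `dist.log_prob(a)` gives the densities needed for the E-step weights and the Retrace importance ratios.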
D.1 NON-PARAMETRIC VARIATIONAL DISTRIBUTION
In this section we provide the derivations and implementation details for the non-parametric variational distribution case for both E-step and M-step.
D.2 E-STEP
The E-step with a non-parametric variational distribution solves the following program, where we have replaced expectations with integrals to simplify the following derivations:
$$\max_q \int \mu_q(s)\int q(a|s)\,Q_{\theta_i}(s,a)\,da\,ds$$
$$\text{s.t.}\quad \int \mu_q(s)\,\mathrm{KL}\big(q(a|s)\,\|\,\pi(a|s,\theta_i)\big)\,ds < \epsilon, \qquad \int\!\!\int \mu_q(s)\,q(a|s)\,da\,ds = 1.$$
First we write the Lagrangian equation, i.e,
$$L(q,\eta,\gamma) = \int \mu_q(s)\int q(a|s)\,Q_{\theta_i}(s,a)\,da\,ds + \eta\Big(\epsilon - \int \mu_q(s)\int q(a|s)\log\frac{q(a|s)}{\pi(a|s,\theta_i)}\,da\,ds\Big) + \gamma\Big(1 - \int\!\!\int \mu_q(s)\,q(a|s)\,da\,ds\Big).$$
Next we maximise the Lagrangian L w.r.t the primal variable q. The derivative w.r.t q reads,
$$\partial_q L(q,\eta,\gamma) = Q_{\theta_i}(a,s) - \eta\log q(a|s) + \eta\log\pi(a|s,\theta_i) - (\eta - \gamma).$$
Setting it to zero and rearranging terms we get
$$q(a|s) = \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(a,s)}{\eta}\Big)\exp\Big(-\frac{\eta-\gamma}{\eta}\Big).$$
However the last exponential term is a normalisation constant for q. Therefore we can write
$$\exp\Big(\frac{\eta-\gamma}{\eta}\Big) = \int \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(a,s)}{\eta}\Big)da,$$
$$\gamma = \eta - \eta\log\Big(\int \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(a,s)}{\eta}\Big)da\Big).$$
Note that we can write γ in terms of π and η. At this point we can derive the dual function,
$$g(\eta) = \eta\epsilon + \eta\int \mu_q(s)\log\Big(\int \pi(a|s,\theta_i)\exp\Big(\frac{Q_{\theta_i}(a,s)}{\eta}\Big)da\Big)ds.$$
D.3 M-STEP
To obtain the KL constraint in the M step we set p(θ) to a Gaussian prior around the current policy, i.e,
$$p(\theta) \approx \mathcal{N}\Big(\mu = \theta_i,\ \Sigma = \frac{F_{\theta_i}}{\lambda}\Big),$$
where $\theta_i$ are the parameters of the current policy distribution, $F_{\theta_i}$ is the empirical Fisher information matrix and λ is a positive scalar.
With this, and dropping constant terms our optimization program becomes
$$\max_\pi \int \mu_q(s)\int q(a|s)\log\pi(a|s,\theta)\,da\,ds - \lambda\,(\theta-\theta_i)^T F_{\theta_i}^{-1}(\theta-\theta_i). \quad (25)$$
We can observe that $(\theta-\theta_i)^T F_{\theta_i}^{-1}(\theta-\theta_i)$ is the second-order Taylor approximation of $\int \mu_q(s)\,\mathrm{KL}\big(\pi(a|s,\theta_i)\,\|\,\pi(a|s,\theta)\big)ds$, which leads us to the generalized M-step objective:
$$\max_\pi \int \mu_q(s)\int q(a|s)\log\pi(a|s,\theta)\,da\,ds - \lambda\int \mu_q(s)\,\mathrm{KL}\big(\pi(a|s,\theta_i)\,\|\,\pi(a|s,\theta)\big)ds \quad (26)$$
which corresponds to Equation (11) from the main text, where expectations are replaced by integrals.
After obtaining the non-parametric variational distribution, in the M-step with a Gaussian policy we empirically observed that better results could be achieved by decoupling the KL constraint into two terms such that we can constrain the contribution of the mean and covariance separately, i.e.
$$\int \mu_q(s)\,\mathrm{KL}\big(\pi(a|s,\theta_i)\,\|\,\pi(a|s,\theta)\big)ds = C_\mu + C_\Sigma, \quad (27)$$
where
$$C_\mu = \int \mu_q(s)\,\tfrac{1}{2}(\mu-\mu_i)^T\Sigma^{-1}(\mu-\mu_i)\,ds, \qquad
C_\Sigma = \int \mu_q(s)\,\tfrac{1}{2}\Big(\mathrm{tr}(\Sigma^{-1}\Sigma_i) - n + \ln\frac{|\Sigma|}{|\Sigma_i|}\Big)ds.$$
This decoupling allows us to set different ε values for each component, i.e., $\epsilon_\mu$ for the mean and $\epsilon_\Sigma$ for the covariance matrix. Different ε lead to different learning rates. The effectiveness of this decoupling has also been shown in Abdolmaleki et al. (2017). We always set a much smaller ε for the covariance than for the mean. The intuition is that while we would like the distribution to move fast in the action space, we also want to maintain exploration to avoid premature convergence.
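To make the two terms concrete for a single state, here is a small numpy sketch (ours, not the authors' code) that splits KL(π_i‖π) for full-covariance Gaussians into the mean and covariance contributions of Eq. (27):

```python
import numpy as np

def decoupled_gaussian_kl(mu_old, sigma_old, mu_new, sigma_new):
    """Split KL(N(mu_old, sigma_old) || N(mu_new, sigma_new)) into mean and covariance parts.

    C_mu depends only on the change of the mean (through the new covariance),
    C_sigma only on the change of the covariance; C_mu + C_sigma equals the full KL.
    """
    n = mu_old.shape[0]
    sigma_new_inv = np.linalg.inv(sigma_new)

    diff = mu_new - mu_old
    c_mu = 0.5 * diff @ sigma_new_inv @ diff

    _, logdet_new = np.linalg.slogdet(sigma_new)
    _, logdet_old = np.linalg.slogdet(sigma_old)
    c_sigma = 0.5 * (np.trace(sigma_new_inv @ sigma_old) - n + logdet_new - logdet_old)

    return c_mu, c_sigma
```

Averaging these two quantities over states sampled from the replay buffer gives Monte Carlo estimates of C_µ and C_Σ.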
In order to solve the constrained optimisation in the M-step, we first write the generalised Lagrangian equation, i.e,
$$L(\theta,\eta_\mu,\eta_\Sigma) = \int \mu_q(s)\int q(a|s)\log\pi(a|s,\theta)\,da\,ds + \eta_\mu(\epsilon_\mu - C_\mu) + \eta_\Sigma(\epsilon_\Sigma - C_\Sigma),$$
where $\eta_\mu$ and $\eta_\Sigma$ are Lagrangian multipliers. Following prior work on constrained optimisation, we formulate the following primal problem,
$$\max_\theta \min_{\eta_\mu>0,\,\eta_\Sigma>0} L(\theta,\eta_\mu,\eta_\Sigma).$$
In order to solve for θ we iteratively solve the inner and outer optimisation programs independently: We fix the Lagrangian multipliers to their current value and optimise for θ (outer maximisation) and then fix the parameters θ to their current value and optimise for the Lagrangian multipliers (inner minimisation). We continue this procedure until policy parameters θ and Lagrangian multipliers converge. Please note that the same approach can be employed to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL.
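As an illustration of the inner minimisation, the following sketch (ours; the learning rate and clipping threshold are assumptions) performs one gradient step on the Lagrangian multipliers, driving each multiplier up when its KL term violates the corresponding bound:

```python
def update_multipliers(eta_mu, eta_sigma, c_mu, c_sigma, eps_mu, eps_sigma, lr=0.01):
    """One inner-minimisation step on the multipliers of L(theta, eta_mu, eta_sigma).

    The Lagrangian contains eta * (eps - C), so d/d_eta = (eps - C); a gradient-descent
    step on eta therefore increases eta when C > eps and decreases it otherwise,
    with a small positive floor to keep the multipliers valid.
    """
    eta_mu = max(1e-8, eta_mu - lr * (eps_mu - c_mu))
    eta_sigma = max(1e-8, eta_sigma - lr * (eps_sigma - c_sigma))
    return eta_mu, eta_sigma
```

The outer maximisation over θ then uses the current multiplier values as fixed weights on the two KL terms.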
D.4 PARAMETRIC VARIATIONAL DISTRIBUTION
In this case we assume our variational distribution is also a Gaussian distribution over the action space and uses the same structure as our policy π.
Similar to the non-parametric case for a Gaussian distribution in the M-step we also use a decoupled KL but this time in the E-step for a Gaussian variational distribution. Using the same reasoning as in the previous section we can obtain the following generalized Lagrangian equation:
$$L(\theta^q,\eta_\mu,\eta_\Sigma) = \int \mu_q(s)\int q(a|s;\theta^q)\,A_i(a,s)\,da\,ds + \eta_\mu(\epsilon_\mu - C_\mu) + \eta_\Sigma(\epsilon_\Sigma - C_\Sigma),$$
where $\eta_\mu$ and $\eta_\Sigma$ are Lagrangian multipliers, and where we use the advantage function A(a, s) instead of the Q-function Q(a, s), as it empirically gave better performance. Please note that the KL in the E-step is different from the one used in the M-step. Following prior work on constrained optimisation, we can formulate the following primal problem,
$$\max_{\theta^q} \min_{\eta_\mu>0,\,\eta_\Sigma>0} L(\theta^q,\eta_\mu,\eta_\Sigma).$$
In order to solve for $\theta^q$ we iteratively solve the inner and outer optimisation programs independently: we fix the Lagrangian multipliers to their current value and optimise for $\theta^q$ (outer maximisation), in this case using the likelihood ratio gradient to compute the gradient w.r.t. $\theta^q$. Subsequently we fix the parameters $\theta^q$ to their current value and optimise for the Lagrangian multipliers (inner minimisation). We iteratively continue this procedure until the policy parameters $\theta^q$ and the Lagrangian multipliers converge. Please note that the same approach can be used to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL. As our policy has the same structure as the parametric variational distribution, the M-step in this case reduces to setting the policy parameters θ to the parameters $\theta^q$ obtained in the E-step, i.e.,
$$\theta_{i+1} = \theta^q.$$
E IMPLEMENTATION DETAILS
While we ran most of our experiments using a single learner, we implemented a scalable variant of the presented method in which multiple workers collect data independently in an instance of the considered environment, compute gradients and send them to a chief (or parameter server) that performs a parameter update by averaging gradients. That is, we use distributed synchronous gradient descent. These procedures are described in Algorithms 1 and 2 for the non-parametric case and Algorithm 3 for the parametric case.
Algorithm 1 MPO (chief)
1: Input: G, the number of gradients to average
2: while True do
3:   initialize N = 0
4:   initialize gradient stores sφ = {}, sθ = {}, sη = {}, sηµ = {}, sηΣ = {}
5:   while N < G do
6:     receive next gradient from worker w
7:     sφ = sφ + [δφ^w]
8:     sθ = sθ + [δθ^w]
9:     sη = sη + [δη^w]
10:    sηµ = sηµ + [δηµ^w]
11:    sηΣ = sηΣ + [δηΣ^w]
12:    N = N + 1
13:  update parameters with the average gradients from sφ, sθ, sη, sηµ, sηΣ
14:  send new parameters to workers
Algorithm 2 MPO (worker) – non-parametric variational distribution
1: Input: ε, εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s, θi), η, ηµ, ηΣ
4: for each worker do
5:   while Lcurr < Lmax do
6:     update replay buffer B with L trajectories from the environment
7:     k = 0
8:     // Find better policy by gradient descent
9:     while k < 1000 do
10:      sample a mini-batch B of N (s, a, r) pairs from replay
11:      sample M additional actions for each state from B using π(a|s, θi) for estimating integrals
12:      compute gradients, estimating integrals using samples
13:      // Q-function gradient:
14:      δφ = ∂φ L′φ(φ)
15:      // E-step gradient:
16:      δη = ∂η g(η)
17:      Let: q(a|s) ∝ π(a|s, θi) exp(Qθi(a, s, φ′)/η)
18:      // M-step gradient:
19:      [δηµ, δηΣ] = α ∂ηµ,ηΣ L(θk, ηµ, ηΣ)
20:      δθ = ∂θ L(θ, ηµk+1, ηΣk+1)
21:      send gradients to chief worker
22:      wait for gradient update by chief
23:      fetch new parameters φ, θ, η, ηµ, ηΣ
24:      k = k + 1
25:    i = i + 1, Lcurr = Lcurr + L
26:    θi = θ, φ′ = φ
Algorithm 3 MPO (worker) – parametric variational distribution
1: Input: εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s, θi), η, ηµ, ηΣ
4: for each worker do
5:   while Lcurr < Lmax do
6:     update replay buffer B with L trajectories from the environment
7:     k = 0
8:     // Find better policy by gradient descent
9:     while k < 1000 do
10:      sample a mini-batch B of N (s, a, r) pairs from replay
11:      sample M additional actions for each state from B using π(a|s, θk) for estimating integrals
12:      compute gradients, estimating integrals using samples
13:      // Q-function gradient:
14:      δφ = ∂φ L′φ(φ)
15:      // E-step gradient:
16:      [δηµ, δηΣ] = α ∂ηµ,ηΣ L(θk, ηµ, ηΣ)
17:      δθ = ∂θ L(θ, ηµk+1, ηΣk+1)
18:      // M-step: in practice there is no explicit M-step in this case, as the policy and the variational distribution q share the same structure
19:      send gradients to chief worker
20:      wait for gradient update by chief
21:      fetch new parameters φ, θ, η, ηµ, ηΣ
22:      k = k + 1
23:    i = i + 1, Lcurr = Lcurr + L
24:    θi = θ, φ′ = φ

Review Questions
1. What is the main contribution of the paper regarding reinforcement learning?
2. How does the proposed algorithm compare to other inference-based RL algorithms?
3. What are the strengths and weaknesses of the provided theoretical analysis?
4. Do you have any questions about the formulation of the objective function or its optimization?
5. Are there any concerns regarding the addition of a one-step KL regularization term?
6. How do the two KL constraints for the E and M steps impact performance, and is it beneficial to use different epsilons for each?
7. What are some potential follow-up experiments that could provide further insights into the algorithm's behavior? | Review | Review
The paper presents a new algorithm for inference-based reinforcement learning for deep RL. The algorithm decomposes the policy update in two steps, an E and an M-step. In the E-step, the algorithm estimates a variational distribution q which is subsequentially used for the M-step to obtain a new policy. Two versions of the algorithm are presented, using a parametric or a non-parametric (sample-based) distribution for q. The algorithm is used in combination with the retrace algorithm to estimate the q-function, which is also needed in the policy update.
This is a well written paper presenting an interesting algorithm. The algorithm is similar to other inference-based RL algorithms, but is the first application of inference-based RL to deep reinforcement learning. The results look very promising and define a new state of the art for deep reinforcement learning in continuous control, which is a very active topic right now. Hence, I think the paper should be accepted.
I do have a few comments / corrections / questions about the paper:
- There are several approaches that already use a combination of the KL-constraint with reverse KL on a non-parametric distribution and subsequently an M-projection to obtain again a parametric distribution, see HiREPS, non-parametric REPS [Hoof2017, JMLR] or AC-REPS [Wirth2016, AAAI]. These algorithms do not use the inference-based view but the trust region justification. As in the non-parametric case the asymptotic performance guarantees from the EM framework are gone, why is it beneficial to formulate it with EM instead of directly with a trust region on the expected reward?
- It is not clear to me whether the algorithm really optimizes the original maximum a posteriori objective defined in Equation 1. First, alpha changes every iteration of the algorithm while the objective assumes that alpha is constant. This means that we change the objective all the time which is theoretically a bit weird. Moreover, the presented algorithm also changes the prior all the time (in order to introduce the 2nd trust region) in the M-step. Again, this changes the objective, so it is unclear to me what exactly is maximised in the end. Would it not be cleaner to start with the average reward objective (no prior or alpha) and then introduce both trust regions just out of the motivation that we need trust regions in policy search? Then the objective is clearly defined.
- I did not get whether the additional "one-step KL regularisation" is obtained from the lower bound or just added as additional regularisation? Could you explain?
- The algorithm has now 2 KL constraints, for E and M step. Is the epsilon for both the same or can we achieve better performance by using different epsilons?
- I think the following experiments would be very informative:
- MPO without trust region in M-step
- MPO without retrace algorithm for getting the Q-value
- test different epsilons for E and M step |
ICLR | Title
Maximum a Posteriori Policy Optimisation
Abstract
We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.
1 INTRODUCTION
Model free reinforcement learning algorithms can acquire sophisticated behaviours by interacting with the environment while receiving simple rewards. Recent experiments (Mnih et al., 2015; Jaderberg et al., 2016; Heess et al., 2017) successfully combined these algorithms with powerful deep neural-network approximators while benefiting from the increase of compute capacity.
Unfortunately, the generality and flexibility of these algorithms comes at a price: They can require a large number of samples and – especially in continuous action spaces – suffer from high gradient variance. Taken together these issues can lead to unstable learning and/or slow convergence. Nonetheless, recent years have seen significant progress, with improvements to different aspects of learning algorithms including stability, data-efficiency and speed, enabling notable results on a variety of domains, including locomotion (Heess et al., 2017; Peng et al., 2016), multi-agent behaviour (Bansal et al., 2017) and classical control (Duan et al., 2016).
Two types of algorithms currently dominate scalable learning for continuous control problems: First, Trust-Region Policy Optimisation (TRPO; Schulman et al. 2015) and the derivative family of Proximal Policy Optimisation algorithms (PPO; Schulman et al. 2017b). These policy-gradient algorithms are on-policy by design, reducing gradient variance through large batches and limiting the allowed change in parameters. They are robust, applicable to high-dimensional problems, and require moderate parameter tuning, making them a popular first choice (Ho & Ermon, 2016). However, as on-policy algorithms, they suffer from poor sample efficiency.
In contrast, off-policy value-gradient algorithms such as the Deep Deterministic Policy Gradient (DDPG, Silver et al. 2014; Lillicrap et al. 2016), Stochastic Value Gradient (SVG, Heess et al. 2015), and the related Normalized Advantage Function formulation (NAF, Gu et al. 2016b) rely on experience replay and learned (action-)value functions. These algorithms exhibit much better data efficiency, approaching the regime where experiments with real robots are possible (Gu et al., 2016a; Andrychowicz et al., 2017). While also popular, these algorithms can be difficult to tune, especially for high-dimensional domains like general robot manipulation tasks.
In this paper we propose a novel off-policy algorithm that benefits from the best properties of both classes. It exhibits the scalability, robustness and hyperparameter insensitivity of on-policy algorithms, while offering the data-efficiency of off-policy, value-based methods.
To derive our algorithm, we take advantage of the duality between control and estimation by using Expectation Maximisation (EM), a powerful tool from the probabilistic estimation toolbox, in order to solve control problems. This duality can be understood as replacing the question “what are the actions which maximise future rewards?” with the question “assuming future success in maximising
rewards, what are the actions most likely to have been taken?”. By using this estimation objective we have more control over the policy change in both E and M steps, yielding robust learning. We show below that several algorithms, including TRPO, can be directly related to this perspective. We leverage the fast convergence properties of EM-style coordinate ascent by alternating a nonparametric data-based E-step which re-weights state-action samples, with a supervised, parametric M-step using deep neural networks.
We evaluate our algorithm on a broad spectrum of continuous control problems including a 56 DoF humanoid body. All experiments used the same optimisation hyperparameters 1. Our algorithm shows remarkable data efficiency often solving the tasks we consider an order of magnitude faster than the state-of-the-art. A video of some resulting behaviours can be found here dropbox.com/s/pgcmjst7t0zwm4y/MPO.mp4.
2 BACKGROUND AND NOTATION
2.1 RELATED WORK
Casting Reinforcement Learning (RL) as an inference problem has a long history dating back at least two decades (Dayan & Hinton, 1997). The framework presented here is inspired by a variational inference perspective on RL that has previously been utilised in multiple studies; c.f. Dayan & Hinton (1997); Neumann (2011); Deisenroth et al. (2013); Rawlik et al. (2012); Levine & Koltun (2013); Florensa et al. (2017).
Particular attention has been paid to obtaining maximum entropy policies as the solution to an inference problem. The penalisation of determinism can be seen as encouraging both robustness and simplicity. Among these are methods that perform trajectory optimisation using either linearised dynamics (Todorov, 2008; Toussaint, 2009; Levine & Koltun, 2013) or general dynamics as in path integral control (Kappen, 2005; Theodorou et al., 2010). In contrast to these algorithms, here we do not assume the availability of a transition model and avoid on-policy optimisation. A number of other authors have considered the same perspective but in a model-free RL setting (Neumann, 2011; Peters et al., 2010a; Florensa et al., 2017; Daniel et al., 2016) or inverse RL problems (Ziebart et al., 2008). These algorithms are more directly related to our work and can be cast in the same (EM-like) alternating optimisation scheme on which we base our algorithm. However, they typically lack the maximisation (M)-step – with the prominent exception of REPS, AC-REPS, PI2-GPS and MDGPS (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016) to which our algorithm is closely related as outlined below. An interesting recent addition to these approaches is an EM-perspective on the PoWER algorithm (Roux, 2016) which uses the same iterative policy improvement employed here, but commits to parametric inference distributions and avoids an exponential reward transformation, resulting in a harder to optimise lower bound.
As an alternative to these policy gradient inspired algorithms, the class of recent algorithms for soft Q-learning (e.g. Rawlik et al. (2012); Haarnoja et al. (2017); Fox et al. (2016)) parameterise and estimate a so called “soft” Q-function directly, implicitly inducing a maximum entropy policy. This perspective can also be extended to hierarchical policies (Florensa et al., 2017), and has recently been used to establish connections between Q-learning and policy gradient methods (O’Donoghue et al., 2016; Schulman et al., 2017a). In contrast, we here rely on a parametric policy; our bound and derivation are, however, closely related to the definition of the soft (entropy regularised) Q-function.
A line of work, that is directly related to the “RL as inference” perspective, has focused on using information theoretic regularisers such as the entropy of the policy or the Kullback-Leibler divergence (KL) between policies to stabilise standard RL objectives. In fact, most state-of-the-art policy gradient algorithms fall into this category. For example see the entropy regularization terms used in Mnih et al. (2016) or the KL constraints employed by work on trust-region based methods (Schulman et al., 2015; 2017b; Gu et al., 2017; Wang et al., 2017). The latter methods introduce a trust region constraint, defined by the KL divergence between the new policy and the old policy, so that the expected KL divergence over state space is bounded. From the perspective of this paper these trust-region based methods can be seen as optimising a parametric E-step, as in our algorithm, but are “missing” an explicit M-step.
1With the exception of the number of samples collected between updates.
Finally, the connection between RL and inference has been invoked to motivate work on exploration. The most prominent examples for this are formed by work on Boltzmann exploration such as Kaelbling et al. (1996); Perkins & Precup (2002); Sutton (1990); O’Donoghue et al. (2017), which can be connected back to soft Q-learning (and thus to our approach) as shown in Haarnoja et al. (2017).
2.2 MARKOV DECISION PROCESSES
We consider the problem of finding an optimal policy π for a discounted reinforcement learning (RL) problem; formally characterized by a Markov decision process (MDP). The MDP consists of: continuous states s, actions a, transition probabilities p(st+1|st, at) – specifying the probability of transitioning from state st to st+1 under action at –, a reward function r(s, a) ∈ R as well as the discounting factor γ ∈ [0, 1). The policy π(a|s,θ) (with parameters θ) is assumed to specify a probability distribution over action choices given any state and – together with the transition probabilities – gives rise to the stationary distribution µπ(s).
Using these basic quantities we can now define the notion of a Markov sequence or trajectory τπ = {(s0, a0), . . . , (sT, aT)} sampled by following the policy π; i.e. τπ ∼ pπ(τ) with pπ(τ) = p(s0) ∏_{t>0} p(st+1|st, at) π(at|st); and the expected return Eτπ[ ∑_{t=0}^{∞} γ^t r(st, at) ]. We will use the shorthand rt = r(st, at).
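To make these quantities concrete, the following minimal Python sketch samples one trajectory and accumulates its discounted return; the `env` object with a reset/step interface and the `policy` callable are hypothetical placeholders for illustration, not part of the paper's implementation.

```python
def sampled_return(env, policy, gamma=0.99, max_steps=1000):
    """Roll out one trajectory tau_pi and return the Monte-Carlo estimate
    of the discounted return sum_t gamma^t r(s_t, a_t)."""
    s = env.reset()
    ret, discount = 0.0, 1.0
    for _ in range(max_steps):
        a = policy(s)                 # a_t ~ pi(.|s_t, theta)
        s, r, done = env.step(a)      # s_{t+1} ~ p(.|s_t, a_t), r_t = r(s_t, a_t)
        ret += discount * r
        discount *= gamma
        if done:
            break
    return ret
```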
3 MAXIMUM A POSTERIORI POLICY OPTIMISATION
Our approach is motivated by the well established connection between RL and probabilistic inference. This connection casts the reinforcement learning problem as that of inference in a particular probabilistic model. Conventional formulations of RL aim to find a trajectory that maximizes expected reward. In contrast, inference formulations start from a prior distribution over trajectories, condition on a desired outcome such as achieving a goal state, and then estimate the posterior distribution over trajectories consistent with this outcome.
A finite-horizon undiscounted reward formulation can be cast as an inference problem by constructing a suitable probabilistic model via a likelihood function p(O = 1|τ) ∝ exp( ∑_t rt/α ), where α is a temperature parameter. Intuitively, O can be interpreted as the event of obtaining maximum reward by choosing an action; or the event of succeeding at the RL task (Toussaint, 2009; Neumann, 2011). With this definition we can define the following lower bound on the likelihood of optimality for the policy π:
log pπ(O = 1) = log ∫ pπ(τ) p(O = 1|τ) dτ ≥ ∫ q(τ) [ log p(O = 1|τ) + log ( pπ(τ) / q(τ) ) ] dτ (1)
= Eq[ ∑_t rt/α ] − KL( q(τ) || pπ(τ) ) = J(q, π), (2)
where pπ is the trajectory distribution induced by policy π(a|s) as described in section 2.2 and q(τ) is an auxiliary distribution over trajectories that will be discussed in more detail below. The lower bound J is the evidence lower bound (ELBO) which plays an important role in the probabilistic modeling literature. It is worth already noting here that optimizing (2) with respect to q can be seen as a regularized RL problem.
An important motivation for transforming an RL problem into an inference problem is that this allows us to draw from the rich toolbox of inference methods: For instance, J can be optimized with the family of expectation maximization (EM) algorithms which alternate between improving J with respect to q and π. In this paper we follow classical (Dayan & Hinton, 1997) and more recent works (e.g. Peters et al. 2010b; Levine & Koltun 2013; Daniel et al. 2016; Wirth et al. 2016) and cast policy search as a particular instance of this family. Our algorithm then combines properties of existing approaches in this family with properties of recent off-policy algorithms for neural networks.
The algorithm alternates between two phases which we refer to as E and M step in reference to an EM-algorithm. The E-step improves J with respect to q. Existing EM policy search approaches perform this step typically by reweighting trajectories with sample returns (Kober & Peters, 2009) or via local trajectory optimization (Levine & Koltun, 2013). We show how off-policy deep RL
techniques and value-function approximation can be used to make this step both scalable as well as data efficient. The M-step then updates the parametric policy in a supervised learning step using the reweighted state-action samples from the E-step as targets.
These choices lead to the following desirable properties: (a) low-variance estimates of the expected return via function approximation; (b) low-sample complexity of value function estimate via robust off-policy learning; (c) minimal parametric assumption about the form of the trajectory distribution in the E-step; (d) policy updates via supervised learning in the M step; (e) robust updates via hard trust-region constraints in both the E and the M step.
3.1 POLICY IMPROVEMENT
The derivation of our algorithm then starts from the infinite-horizon analogue of the KL-regularized expected reward objective from Equation (2). In particular, we consider variational distributions q(τ) that factor in the same way as pπ , i.e. q(τ) = p(s0) ∏ t>0 p(st+1|st, at)q(at|st) which yields:
J(q, θ) = Eq[ ∑_{t=0}^{∞} γ^t [ rt − α KL( q(a|st) || π(a|st,θ) ) ] ] + log p(θ). (3)
Note that due to the assumption about the structure of q(τ) the KL over trajectories decomposes into a KL over the individual state-conditional action distributions. This objective has also been considered e.g. by Haarnoja et al. (2017); Schulman et al. (2017a). The additional log p(θ) term is a prior over policy parameters and can be motivated by a maximum a-posteriori estimation problem (see appendix for more details).
We also define the regularized Q-value function associated with (3) as
Qqθ(s, a) = r0 + Eq(τ),s0=s,a0=a[ ∑_{t≥1} γ^t [ rt − α KL(qt || πt) ] ], (4) with KL(qt || πt) = KL( q(a|st) || π(a|st,θ) ). Note that KL(q0 || π0) and p(θ) are not part of the Q-function as they are not a function of the action.
We observe that optimizing J with respect to q is equivalent to solving an expected reward RL problem with augmented reward r̃t = rt − α log( q(at|st) / π(at|st,θ) ). In this view π represents a default policy towards which q is regularized – i.e. the current best policy. The MPO algorithm treats π as the primary object of interest. In this case q serves as an auxiliary distribution that allows optimizing J via alternate coordinate ascent in q and πθ, analogous to the expectation-maximization algorithm in the probabilistic modelling literature. In our case, the E-step optimizes J with respect to q while the M-step optimizes J with respect to π. Different optimizations in the E-step and M-step lead to different algorithms. In particular, we note that for the case where p(θ) is an uninformative prior a variant of our algorithm has a monotonic improvement guarantee as shown in Appendix A.
3.2 E-STEP
In the E-step of iteration i we perform a partial maximization of J (q,θ) with respect to q given θ = θi. We start by setting q = πθi and estimate the unregularized action-value function:
Qqθi(s, a) = Qθi(s, a) = Eτπi,s0=s,a0=a[ ∑_t γ^t rt ], (5)
since KL(q||πi) = 0. In practice we estimate Qθi from off-policy data (we refer to Section 4 for details about the policy evaluation step). This greatly increases the data efficiency of our algorithm. Given Qθi we improve the lower bound J w.r.t. q by first expanding Qθi(s, a) via the regularized Bellman operator Tπ,q = Eq(a|s) [ r(s, a) − αKL(q‖πi) + γEp(s′|s,a)[Vθi(s′)]], and optimize the
“one-step” KL regularised objective
max_q J̄s(q, θi) = max_q Tπ,q Qθi(s, a) = max_q Eµ(s)[ Eq(·|s)[Qθi(s, a)] − α KL(q || πi) ], (6)
since Vθi(s) = Eq(a|s)[Qθi(s, a)] and thus Qθi(s, a) = r(s, a) + γ Vθi(s).
Maximizing Equation (6), thus obtaining qi = arg max_q J̄(q, θi), does not fully optimize J since we treat Qθi as constant with respect to q. An intuitive interpretation of qi is that it chooses the soft-optimal action for one step and then resorts to executing policy π. In the language of the EM algorithm this optimization implements a partial E-step. In practice we also choose µq to be the stationary distribution as given through samples from the replay buffer.
CONSTRAINED E-STEP
The reward and the KL terms are on an arbitrary relative scale. This can make it difficult to choose α. We therefore replace the soft KL regularization with a hard constraint with parameter ε, i.e.,
max_q Eµ(s)[ Eq(a|s)[ Qθi(s, a) ] ] s.t. Eµ(s)[ KL(q(a|s), π(a|s,θi)) ] < ε. (7)
If we choose to explicitly parameterize q(a|s) – option 1 below – the resulting optimisation is similar to that performed by the recent TRPO algorithm for continuous control (Schulman et al., 2015); only in an off-policy setting. Analogously, the unconstrained objective (6) is similar to the objective used by PPO (Schulman et al., 2017b). We note, however, that the KL is reversed when compared to the KL used by TRPO and PPO.
To implement (7) we need to choose a form for the variational policy q(a|s). Two options arise:
1. We can use a parametric variational distribution q(a|s,θq), with parameters θq , and optimise Equation (7) via the likelihood ratio or action-value gradients. This leads to an algorithm similar to TRPO/PPO and an explicit M-step becomes unnecessary (see. Alg. 3).
2. We can choose a non-parametric representation of q(a|s) given by one probability factor per sample. To achieve generalization in state space we then fit a parametric policy in the M-step.
Fitting a parametric policy in the M-step is a supervised learning problem, allowing us to employ various regularization techniques at that point. It also makes it easier to enforce the hard KL constraint.
NON PARAMETRIC VARIATIONAL DISTRIBUTION
In the non-parametric case we can obtain the optimal sample based q distribution – the solution to Equation (7) – in closed form (see the appendix for a full derivation), as,
qi(a|s) ∝ π(a|s,θi) exp( Qθi(s, a) / η* ), (8)
where we can obtain η∗ by minimising the following convex dual function,
g(η) = ηε + η ∫ µ(s) log ∫ π(a|s,θi) exp( Qθi(s, a) / η ) da ds, (9)
after the optimisation of which we can evaluate qi(a|s) on given samples. This optimization problem is similar to the one solved by relative entropy policy search (REPS) (Peters et al., 2010a) with the difference that we optimise only for the conditional variational distribution q(a|s) instead of a joint distribution q(a, s) – effectively fixing µq(s) to the stationary distribution given by previously collected experience – and we use the Q function of the old policy to evaluate the integral over a. While this might seem unimportant it is crucial as it allows us to estimate the integral over actions with multiple samples without additional environment interaction. This greatly reduces the variance of the estimate and allows for fully off-policy learning at the cost of performing only a partial optimization of J as described above.
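In a sample-based implementation this E-step amounts to a per-state softmax re-weighting of the sampled actions, with the temperature η* obtained by minimising the dual in Equation (9). The following numpy/scipy sketch assumes Q-values have already been evaluated for M actions sampled from π(a|s,θi) at each of N replay states; the array shapes, the ε value and the initialisation are illustrative assumptions rather than the exact settings used in the experiments.

```python
import numpy as np
from scipy.optimize import minimize

def non_parametric_e_step(q_values, epsilon=0.1, eta_init=1.0):
    """q_values: array [N, M] with Q(s_j, a_jk) for actions a_jk ~ pi(.|s_j, theta_i).
    Returns per-state weights q_i(a_jk|s_j) and the optimal temperature eta*."""
    q_max = q_values.max(axis=1, keepdims=True)  # subtract the max for numerical stability

    def dual(eta):
        # g(eta) = eta*eps + eta * E_s[ log E_{a~pi}[ exp(Q(s, a)/eta) ] ]
        inner = np.log(np.mean(np.exp((q_values - q_max) / eta), axis=1))
        return eta * epsilon + np.mean(q_max[:, 0] + eta * inner)

    res = minimize(lambda x: dual(x[0]), x0=np.array([eta_init]),
                   bounds=[(1e-6, None)], method="L-BFGS-B")
    eta_star = float(res.x[0])
    weights = np.exp((q_values - q_max) / eta_star)
    weights /= weights.sum(axis=1, keepdims=True)  # normalise per state, as in Eq. (8)
    return weights, eta_star
```

The normalised weights then serve directly as the targets of the M-step fit described next.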
3.3 M-STEP
Given qi from the E-step we can optimize the lower bound J with respect to θ to obtain an updated policy θi+1 = arg maxθ J (qi,θ). Dropping terms independent of θ this entails solving for the solution of
max_θ J(qi, θ) = max_θ Eµq(s)[ Eq(a|s)[ log π(a|s,θ) ] ] + log p(θ), (10)
which corresponds to a weighted maximum a-posteriori estimation (MAP) problem where samples are weighted by the variational distribution from the E-step. Since this is essentially a supervised learning step we can choose any policy representation in combination with any prior for regularisation. In this paper we set p(θ) to a Gaussian prior around the current policy, i.e., p(θ) ≈ N( µ = θi, Σ = Fθi / λ ), where θi are the parameters of the current policy distribution, Fθi is the empirical Fisher information matrix and λ is a positive scalar. As shown in the appendix this suggests the following generalized M-step:
max_π Eµq(s)[ Eq(a|s)[ log π(a|s,θ) ] − λ KL( π(a|s,θi), π(a|s,θ) ) ] (11)
which can be re-written as the hard constrained version:
max_π Eµq(s)[ Eq(a|s)[ log π(a|s,θ) ] ] s.t. Eµq(s)[ KL(π(a|s,θi), π(a|s,θ)) ] < ε. (12)
This additional constraint minimises the risk of overfitting the samples, i.e. it helps us to obtain a policy that generalises beyond the state-action samples used for the optimisation. In practice we have found the KL constraint in the M-step to greatly increase stability of the algorithm. We also note that in the E-step we are using the reverse, mode-seeking, KL while in the M-step we are using the forward, moment-matching, KL, which reduces the tendency of the entropy of the parametric policy to collapse. This is in contrast to other RL algorithms that use an M-projection without a KL constraint to fit a parametric policy (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016). Using a KL constraint in the M-step has also been shown to be effective for stochastic search algorithms (Abdolmaleki et al., 2017).
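As an illustration, the resulting M-step objective for a diagonal-Gaussian policy can be written as the following numpy loss for a single batch; `lam` plays the role of the multiplier in Equation (11) (in the constrained variant it would be adapted online), and the diagonal covariance is a simplification of the full-covariance policy used in the paper.

```python
import numpy as np

def gaussian_log_prob(a, mean, log_std):
    """log N(a; mean, diag(exp(log_std))^2), summed over action dimensions."""
    var = np.exp(2.0 * log_std)
    return -0.5 * np.sum((a - mean) ** 2 / var + 2.0 * log_std + np.log(2.0 * np.pi), axis=-1)

def m_step_loss(actions, weights, mean_new, log_std_new, mean_old, log_std_old, lam=1.0):
    """Negative of Eq. (11): weighted log-likelihood of the E-step samples minus
    lam * KL(pi_old || pi_new), returned as a loss for a gradient-based optimiser."""
    weighted_ll = np.sum(weights * gaussian_log_prob(actions, mean_new, log_std_new))
    var_old, var_new = np.exp(2.0 * log_std_old), np.exp(2.0 * log_std_new)
    kl_old_new = 0.5 * np.sum(var_old / var_new + (mean_new - mean_old) ** 2 / var_new
                              - 1.0 + 2.0 * (log_std_new - log_std_old), axis=-1)
    return -(weighted_ll - lam * np.mean(kl_old_new))
```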
4 POLICY EVALUATION
Our method is directly applicable in an off-policy setting. For this, we have to rely on a stable policy evaluation operator to obtain a parametric representation of the Q-function Qθ(s, a). We make use of the policy evaluation operator from the Retrace algorithm (Munos et al., 2016), which we found to yield stable policy evaluation in practice2. Concretely, we fit the Q-function Qθi(s, a, φ), represented by a neural network with parameters φ, by minimising the squared loss:
min_φ L(φ) = min_φ Eµb(s),b(a|s)[ ( Qθi(st, at, φ) − Q^ret_t )^2 ], with
Q^ret_t = Qφ′(st, at) + ∑_{j=t}^{∞} γ^{j−t} ( ∏_{k=t+1}^{j} ck ) [ r(sj, aj) + Eπ(a|sj+1)[Qφ′(sj+1, a)] − Qφ′(sj, aj) ],
ck = min( 1, π(ak|sk) / b(ak|sk) ), (13)
2We note that, despite this empirical finding, Retrace may not be guaranteed to be stable with function approximation (Touati et al., 2017).
where Qφ′(s, a) denotes the output of a target Q-network, with parameters φ′, that we copy from the current parameters φ after each M-step. We truncate the infinite sum after N steps by bootstrapping with Qφ′ (rather than considering a λ return). Additionally, b(a|s) denotes the probabilities of an arbitrary behaviour policy. In our case we use an experience replay buffer and hence b is given by the action probabilities stored in the buffer, which correspond to the action probabilities at the time of action selection.
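For reference, a minimal numpy sketch of the truncated Retrace target from Equation (13), computed backwards over an N-step segment drawn from the replay buffer; the shape convention (one extra bootstrap value at the end of the segment) is an illustrative assumption rather than the paper's exact implementation.

```python
import numpy as np

def retrace_targets(rewards, q_old, v_old, pi_probs, b_probs, gamma=0.99):
    """rewards, q_old, pi_probs, b_probs have length N; v_old has length N + 1.
    q_old[t] = Q_phi'(s_t, a_t) and v_old[t] = E_pi[Q_phi'(s_t, .)] come from the
    target network; v_old[N] bootstraps the truncated tail of the sum in Eq. (13)."""
    n = len(rewards)
    c = np.minimum(1.0, np.asarray(pi_probs) / np.asarray(b_probs))  # truncated IS weights
    q_ret = np.zeros(n)
    acc = 0.0
    for t in reversed(range(n)):
        delta = rewards[t] + gamma * v_old[t + 1] - q_old[t]
        acc = delta + (gamma * c[t + 1] * acc if t + 1 < n else 0.0)
        q_ret[t] = q_old[t] + acc
    return q_ret
```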
5 EXPERIMENTS
For our experiments we evaluate our MPO algorithm across a wide range of tasks. Specifically, we start by looking at the continuous control tasks of the DeepMind Control Suite (Tassa et al. (2018), see Figure 1), and then consider the challenging parkour environments recently published in Heess et al. (2017). In both cases we use a Gaussian distribution for the policy whose mean and covariance are parameterized by a neural network (see appendix for details). In addition, we present initial experiments for discrete control using ATARI environments using a categorical policy distribution (whose logits are again parameterized by a neural network) in the appendix.
5.1 EVALUATION ON CONTROL SUITE
The suite of continuous control tasks that we are evaluating against contains 18 tasks, comprising a wide range of domains including well known tasks from the literature. For example, the classical cart-pole and acrobot dynamical systems, 2D and Humanoid walking as well as simple low-dimensional planar reaching and manipulation tasks. This suite of tasks was built in python on top of mujoco and will also be open sourced to the public by the time of publication.
We include plots depicting the performance of our algorithm on all tasks below, comparing it against state-of-the-art algorithms in terms of data-efficiency. We want to start, however, by directing the attention of the reader to a more detailed evaluation on three of the harder tasks from the suite.
5.1.1 DETAILED ANALYSIS ON WALKER-2D, ACROBOT, HOPPER
We start by looking at the results for the classical Acrobot task (two degrees of freedom, one continuous action dimension) as well as the 2D walker (which has 12 degrees of freedom and thus a 12 dimensional action space and a 21 dimensional state space) and the hopper standing task. The reward in the Acrobot task is the distance of the robot's end-effector to an upright position of the underactuated system. For the walker task it is given by the forward velocity, whereas in the hopper the requirement is to stand still.
Figure 2 shows the results for this task obtained by applying our algorithm MPO as well as several ablations – in which different parts were removed from the MPO optimization – and two baselines: our implementation of Proximal Policy Optimization (PPO) (Schulman et al., 2017b) and DDPG. The hyperparameters for MPO were kept fixed for all experiments in the paper (see the appendix for hyperparameter settings).
As a first observation, we can see that MPO gives stable learning on all tasks and, thanks to its fully off-policy implementation, is significantly more sample efficient than the on-policy PPO baseline. Furthermore, we can observe that changing from the non-parametric variational distribution to a parametric distribution3 (which, as described above, can be related to PPO) results in only a minor asymptotic performance loss but slowed down optimisation and thus hampered sample efficiency; which can be attributed to the fact that the parametric q distribution required a stricter KL constraint. Removing the automatically tuned KL constraint and replacing it with a manually set entropy regulariser then yields an off-policy actor-critic method with Retrace. This policy gradient method still uses the idea of estimating the integral over actions – and thus, for a gradient based optimiser, its likelihood ratio derivative – via multiple action samples (as judged by a Q-Retrace critic). This idea has previously been coined as using the expected policy gradient (EPG) (Ciosek & Whiteson, 2017) and we hence denote the corresponding algorithm with EPG + Retrace, which no-longer follows the intuitions of the MPO perspective. EPG + Retrace performed well when the correct entropy regularisation scale is used. This, however, required task specific tuning (c.f. Figure 4 where this hyperparameter was set to the one that performed best in average across tasks). Finally using only a single sample to estimate the integral (and hence the likelihood ratio gradient) results in an actor-critic variant with Retrace that is the least performant off-policy algorithm in our comparison.
5.1.2 COMPLETE RESULTS ON THE CONTROL SUITE
The results for MPO (non-parameteric) – and a comparison to an implementation of state-of-the-art algorithms from the literature in our framework – on all the environments from the control suite that we tested on are shown in Figure 4. All tasks have rewards that are scaled to be between 0 and 1000. We note that in order to ensure a fair comparison all algorithms ran with exactly the same network configuration, used a single learner (no distributed computation), used the same optimizer and were tuned w.r.t. their hyperparameters for best performance across all tasks. We refer to the appendix for a complete description of the hyperparameters. Our comparison is made in terms of data-efficiency.
From the plot a few trends are readily apparent: i) We can clearly observe the advantage in terms of data-efficiency that methods relying on a Q-critic obtain over the PPO baseline. This difference is so extreme that in several instances the PPO baseline converges an order of magnitude slower than the off-policy algorithms and we thus indicate the asymptotic performance of each algorithm of PPO and DDPG (which also improved significantly later during training in some instances) with a colored star in the plot; ii) the difference between the MPO results and the (expected) policy gradient (EPG) with entropy regularisation confirm our suspicion from Section 5.1.1: finding a good setting for the entropy regulariser that transfers across environments without additional constraints on the policy distribution is very difficult, leading to instabilities in the learning curves. In contrast to this the MPO results appear to be stable across all environments; iii) Finally, in terms of data-efficiency the methods utilising Retrace obtain a clear advantage over DDPG. The single learner vanilla DDPG implementation learns the lower dimensional environments quickly but suffers in terms of learning
3We note that we use a value function baseline Eπ[Q(s, ·)] in this setup. See appendix for details.
speed in environments with sparse rewards (finger, acrobot) and higher dimensional action spaces. Overall, MPO is able to solve all environments using surprisingly moderate amounts of data. On average less than 1000 trajectories (or 106 samples) are needed to reach the best performance.
5.2 HIGH-DIMENSIONAL CONTINUOUS CONTROL
Next we turn to evaluating our algorithm on two higher-dimensional continuous control problems; humanoid and walker. To make computation time bearable in these more complicated domains we utilize a parallel variant of our algorithm: in this implementation K learners are all independently collecting data from an instance of the environment. Updates are performed at the end of each collected trajectory using distributed synchronous gradient descent on a shared set of policy and Q-function parameters (we refer to the appendix for an algorithm description). The results of this experiment are depicted in Figure 3.
For the Humanoid running domain we can observe a similar trend to the experiments from the previous section: MPO quickly finds a stable running policy, outperforming all other algorithms in terms of sample efficiency also in this high-dimensional control problem.
The case for the Walker-2D parkour domain (where we compare against a PPO baseline) is even more striking: where standard PPO requires approximately 1M trajectories to find a good policy, MPO finds a solution that is asymptotically no worse than the PPO solution in about 70k trajectories (or 60M samples), resulting in an order of magnitude improvement. In addition to the walker experiment we have also evaluated MPO on the Parkour domain using a humanoid body (with 22 degrees of freedom) which was learned successfully (not shown in the plot, please see the supplementary video).
5.3 DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE), where we used the same hyperparameter settings for the KL constraints as for the continuous control experiments. The results of this experiment can be found in the Appendix.
6 CONCLUSION
We have presented a new off-policy reinforcement learning algorithm called Maximum a-posteriori Policy Optimisation (MPO). The algorithm is motivated by the connection between RL and inference and it consists of an alternating optimisation scheme that has a direct relation to several existing algorithms from the literature. Overall, we arrive at a novel, off-policy algorithm that is highly data efficient, robust to hyperparameter choices and applicable to complex control problems. We demonstrated the effectiveness of MPO on a large set of continuous control problems.
A PROOF OF MONOTONIC IMPROVEMENT FOR THE KL-REGULARIZED POLICY OPTIMIZATION PROCEDURE
In this section we prove a monotonic improvement guarantee for KL-regularized policy optimization via alternating updates on π and q under the assumption that the prior on θ is uninformative.
A.1 REGULARIZED REINFORCEMENT LEARNING
Let π be an arbitrary policy. For any other policy q such that, for all x, a, {π(a|x) > 0} =⇒ {q(a|x) > 0}, define the π-regularized reward for policy q:
r^{π,q}_α(x, a) = r(x, a) − α log( q(a|x) / π(a|x) ),
where α > 0.
Bellman operators: Define the π-regularized Bellman operator for policy q Tπ,qα V (x) = Ea∼q(·|x) [ rπ,qα (x, a) + γEy∼p(·|x,a)V (y) ] ,
and the non-regularized Bellman operator for policy q T qV (x) = Ea∼q(·|x) [ r(x, a) + γEy∼p(·|x,a)V (y) ] .
Value function: Define the π-regularized value function for policy q as V π,qα (x) = Eq [∑ t≥0 γtrπ,qα (xt, at)|x0 = x, q ] .
and the non-regularized value function V q(x) = Eq [∑ t≥0 γtr(xt, at)|x0 = x, q ] .
Proposition 1. For any q, π, V , we have V π,qα ≤ V q and Tπ,qα V ≤ T qV . Indeed
Eq[ log( q(at|xt) / π(at|xt) ) ] = KL( q(·|xt) || π(·|xt) ) ≥ 0.
Optimal value function and policy Define the optimal regularized value function: V π,∗α (x) = maxq V π,q α (x), and the optimal (non-regularized) value function: V ∗(x) = maxq V q(x).
The optimal policy of the π-regularized problem qπ,∗α (·|x) = arg maxq V π,qα (x) and the optimal policy of the non-regularized problem q∗(·|x) = arg maxq V q . Proposition 2. We have that V π,qα is the unique fixed point of Tπ,qα , and V q is the unique fixed point of T q . Thus we have the following Bellman equations: For all x ∈ X ,
V^{π,q}_α(x) = ∑_a q(a|x) [ r^{π,q}_α(x, a) + γ Ey∼p(·|x,a)[ V^{π,q}_α(y) ] ] (14)
V^q(x) = ∑_a q(a|x) [ r(x, a) + γ Ey∼p(·|x,a)[ V^q(y) ] ] (15)
V^{π,∗}_α(x) = r^{π,q^{π,∗}_α}_α(x, a) + γ Ey∼p(·|x,a)[ V^{π,∗}_α(y) ] for all a ∈ A, (16)
V^∗(x) = max_{a∈A}[ r(x, a) + γ Ey∼p(·|x,a)[ V^∗(y) ] ]. (17)
Notice that (16) holds for all actions a ∈ A, and not in expectation w.r.t. a ∼ q(·|x) only.
A.2 REGULARIZED JOINT POLICY GRADIENT
We now consider a parametrized policy πθ and consider maximizing the regularized joint policy optimization problem for a given initial state x0 (this could be a distribution over initial states). Thus we want to find a parameter θ that (locally) maximizes
J (θ, q) = V πθ,q(x0) = Eq [∑ t≥0 γt ( r(xt, at)− αKL(q(·|xt)‖πθ(·|xt)) )∣∣x0, q]. We start with an initial parameter θ0 and define a sequence of policies πi = πθi parametrized by θi, in the following way:
• Given θi, define qi = arg max_q T^{πθi,q}_α V^{πθi},
• Define θi+1 as θi+1 = θi − β ∇θ Eπi[ ∑_{t≥0} γ^t KL( qi(·|xt) || πθ(·|xt) ) |θ=θi | x0, πi ]. (18)
Proposition 3. We have the following properties:
• The policy qi satisfies:
qi(a|x) = πi(a|x) exp( Q^{πi}(x, a) / α ) / Eb∼πi(·|x)[ exp( Q^{πi}(x, b) / α ) ], (19)
where Qπ(x, a) = r(x, a) + γEy∼p(·|x,a)V π(y).
• We have V πi,qiα ≥ V πi . (20)
• For η sufficiently small, we have J (θi+1, qi+1) ≥ J (θi, qi) + cgi, (21)
where c is a numerical constant, and gi is the norm of the gradient (minimized by the algorithm):
gi = ‖ ∇θ Eπi[ ∑_{t≥0} γ^t KL( qi(·|xt) || πθ(·|xt) ) |θ=θi | x0, πi ] ‖. Thus we build a sequence of policies (πθi, qi) whose values J(θi, qi) are non-decreasing and thus converge to a local maximum. In addition, the improvement is lower-bounded by a constant times the norm of the gradient, thus the algorithm keeps improving the performance until the gradient vanishes (when we reach the limit of the capacity of our representation).
Proof. We have
qi(·|x) = arg max_q Ea∼q(·|x)[ Q^{πi}(x, a) − α log( q(a|x) / πi(a|x) ) ], with Q^{πi}(x, a) = r(x, a) + γ Ey∼p(·|x,a)[ V^{πi}(y) ],
from which we deduce (19). Now, from the definition of qi, we have
Tπi,qiα V πi ≥ Tπi,πiα V πi = TπiV πi = V πi .
Now, since Tπi,qiα is a monotone operator (i.e. if V1 ≥ V2 elementwise, then Tπi,qiα V1 ≥ Tπi,qiα V2) and its fixed point is V πi,qiα , we have
V πi,qiα = lim t→∞ (Tπi,qiα ) tV πi ≥ V πi ,
which proves (20).
Now, in order to prove (21) we derive the following steps.
Step 1: From the definition of qi+1 we have, for any x, Ea∼qi+1 [ Qπi+1(x, a) ] −αKL ( qi+1(·|x)‖πi+1(·|x) ) ≥ Ea∼qi [ Qπi+1(x, a) ] −αKL ( qi(·|x)‖πi+1(·|x) ) .
(22)
Writing the functional that we minimize f(π, q, θ) = Eπ [∑ t≥0 γtKL ( q(·|xt)‖πθ(·|xt) )∣∣x0, π], the update rule is θi+1 = θi − β∇θf(πi, qi, θi). Thus we have that for sufficiently small β,
f(πi, qi, θi+1) ≤ f(πi, qi, θi) − β gi, (23) where gi = (1/2) ‖ ∇θ f(πi, qi, θi) ‖. Step 2: Now define F:
F(π, q, θ, π′) = Eπ [∑ t≥0 γt ( Ea∼q [ Qπ ′ (xt, a) ] − αKL ( q(·|xt)‖πθ(·|xt) ))∣∣x0, π] = δx0(I − γPπ)−1Tπθ,qα V π ′
= δx0(I − γPπ)−1T qV π ′ − f(π, q, θ),
where δx0 is a Dirac (in the row vector x0), and P π is the transition matrix for policy π.
From (22) and (23) we deduce that F(πi, qi+1, θi+1, πi+1) ≥ F(πi, qi, θi+1, πi+1)
≥ F(πi, qi, θi, πi+1) + βgi.
We deduce F(πi, qi+1, θi+1, πi)
≥ F(πi, qi, θi, πi) + βgi +F(πi, qi+1, θi+1, πi)−F(πi, qi+1, θi+1, πi+1) + F(πi, qi, θi, πi+1)−F(πi, qi, θi, πi)
= F(πi, qi, θi, πi) + βgi + Eπi [∑ t≥0 γt ( Ea∼qi+1 [ Qπi(xt, a)−Qπi+1(xt, a) ] − Ea∼qi [ Qπi(xt, a)−Qπi+1(xt, a) ])] ︸ ︷︷ ︸
=O(β2) since πi=πi+1+O(β) and qi=qi+1+O(β)
This rewrites: δx0(I − γπi)−1 ( T qi+1,πi+1α V πi − T qi,πiα V πi ) ≥ ηgi +O(β2). (24)
Step 3: Now a bit of algebra. For two stochastic matrices P and P ′, we have (I − γP )−1
= (I − γP ′)−1 + γ(I − γP )−1(P − P ′)(I − γP ′)−1 = (I − γP ′)−1 + γ [ (I − γP ′)−1 + γ(I − γP )−1(P − P ′)(I − γP ′)−1 ] (P − P ′)(I − γP ′)−1
= (I − γP ′)−1 + γ(I − γP ′)−1(P − P ′)(I − γP ′)−1
+γ2(I − γP )−1(P − P ′)(I − γP ′)−1(P − P ′)(I − γP ′)−1.
Applying this equality to the transition matrices Pπk and Pπk+1 and since ‖Pπk+1 −Pπk‖ = O(η), we have:
V qi+1,πi+1α
= (I − γPπi)−1rqi+1,πi+1α = (I − γPπi)−1rqi+1,πi+1α + γ(I − γPπi)−1(Pπi+1 − Pπi)(I − γPπi)−1rqi+1,πi+1α +O(β2) = (I − γPπi)−1rqi,πiα + (I − γPπi)−1(rqi+1,πi+1α − rqi,πiα + γPπi+1 − γPπi)(I − γPπi)−1rqi,πiα +O(β2) = V qi,πiα + (I − γPπi)−1(T qi+1,πi+1α V πi − T qi,πiα V πk) +O(β2).
Finally, using (24), we deduce that
J (θi+1, qi+1) = V qi+1,πi+1α (x0) = V qi,πiα (x0) + δx0(I − γPπi)−1(T qi+1,πi+1α V πi − T qi,πiα V πi) +O(β2) ≥ J (θi, qi) + ηgi +O(β2)
≥ J(θi, qi) + (1/2) η gi,
for small enough η.
B ADDITIONAL EXPERIMENT: DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE). For this experiment we used the same hyperparameter settings for the KL constraints as for the continuous control experiments, as well as the same learning rate, and merely altered the network architecture to the standard network structure used by DQN (Mnih et al., 2015) – and created a separate network with the same architecture, but predicting the parameters of the policy distribution. A comparison between our algorithm and well established baselines from the literature, in terms of the mean performance, is listed in Table 1. While we do not obtain state-of-the-art performance in this experiment, the fact that MPO is competitive, out-of-the-box, in these domains suggests that combining the ideas presented in this paper with recent advances for RL with discrete actions (Bellemare et al., 2017) could be a fruitful avenue for future work.
C EXPERIMENT DETAILS
In this section we give the details on the hyperparameters used for each experiment. All the continuous control experiments use a feed-forward network, except for Parkour-2d where we used the same network architecture as in Heess et al. (2017). Other hyperparameters for MPO with the non-parametric variational distribution were set as follows,
Hyperparameters for MPO with the parametric variational distribution were as follows,
D DERIVATION OF UPDATE RULES FOR A GAUSSIAN POLICY
For continuous control we assume that the policy is given by a Gaussian distribution with a full covariance matrix, i.e., π(a|s,θ) = N(µ, Σ). Our neural network outputs the mean µ = µ(s) and Cholesky factor A = A(s), such that Σ = AA^T. The lower triangular factor A has positive diagonal elements enforced by the softplus transform Aii ← log(1 + exp(Aii)).
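A minimal numpy sketch of this parameterisation for a single state; the split of the network head into a mean vector and a flattened lower-triangular block is an assumption for illustration.

```python
import numpy as np

def gaussian_from_head(mean_head, chol_head_flat, action_dim):
    """mean_head: [action_dim]; chol_head_flat: [action_dim * (action_dim + 1) // 2].
    Builds mu and Sigma = A A^T with a softplus-positive diagonal for the Cholesky factor A."""
    A = np.zeros((action_dim, action_dim))
    rows, cols = np.tril_indices(action_dim)
    A[rows, cols] = chol_head_flat                      # fill the lower triangle
    diag = np.arange(action_dim)
    A[diag, diag] = np.log1p(np.exp(A[diag, diag]))     # softplus: A_ii <- log(1 + exp(A_ii))
    return mean_head, A @ A.T
```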
D.1 NON-PARAMETRIC VARIATIONAL DISTRIBUTION
In this section we provide the derivations and implementation details for the non-parametric variational distribution case for both E-step and M-step.
D.2 E-STEP
The E-step with a non-parametric variational distribution solves the following program, where we have replaced expectations with integrals to simplify the following derivations:
max_q ∫ µq(s) ∫ q(a|s) Qθi(s, a) da ds
s.t. ∫ µq(s) KL(q(a|s), π(a|s,θi)) ds < ε,
∫∫ µq(s) q(a|s) da ds = 1.
First we write the Lagrangian equation, i.e,
L(q, η, γ) = ∫ µq(s) ∫ q(a|s) Qθi(s, a) da ds + η ( ε − ∫ µq(s) ∫ q(a|s) log( q(a|s) / π(a|s,θi) ) da ds ) + γ ( 1 − ∫∫ µq(s) q(a|s) da ds ).
Next we maximise the Lagrangian L w.r.t the primal variable q. The derivative w.r.t q reads,
∂qL(q, η, γ) = Qθi(a, s)− η log q(a|s) + η log π(a|s,θi)− (η − γ).
Setting it to zero and rearranging terms we get
q(a|s) = π(a|s,θi) exp( Qθi(a, s) / η ) exp( −(η − γ) / η ).
However the last exponential term is a normalisation constant for q. Therefore we can write,
exp( −(η − γ) / η ) = ∫ π(a|s,θi) exp( Qθi(a, s) / η ) da,
γ = η − η log( ∫ π(a|s,θi) exp( Qθi(a, s) / η ) da ).
Note that we could write γ based on π and η. At this point we can derive the dual function,
g(η) = ηε + η ∫ µq(s) log( ∫ π(a|s,θi) exp( Q(a, s) / η ) da ) ds.
D.3 M-STEP
To obtain the KL constraint in the M step we set p(θ) to a Gaussian prior around the current policy, i.e,
p(θ) ≈ N( µ = θi, Σ = Fθi / λ ),
where θi are the parameters of the current policy distribution, Fθi is the empirical Fisher information matrix and λ is a positive scalar.
With this, and dropping constant terms our optimization program becomes
max_π ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds − λ (θ − θi)^T F^{−1}_{θi} (θ − θi). (25)
We can observe that (θ − θi)^T F^{−1}_{θi} (θ − θi) is the second order Taylor approximation of ∫ µq(s) KL(π(a|s,θi), π(a|s,θ)) ds, which leads us to the generalized M-step objective:
max_π ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds − λ ∫ µq(s) KL(π(a|s,θi), π(a|s,θ)) ds (26)
which corresponds to Equation (11) from the main text, where expectations are replaced by integrals.
After obtaining the non parametric variational distribution in the M step with a Gaussian policy we empirically observed that better results could be achieved by decoupling the KL constraint into two terms such that we can constrain the contribution of the mean and covariance separately i.e.
∫ µq(s) KL(π(a|s,θi), π(a|s,θ)) ds = Cµ + CΣ, (27)
where Cµ = ∫ µq(s) (1/2) (µ − µi)^T Σ^{−1} (µ − µi) ds,
CΣ = ∫ µq(s) (1/2) ( tr(Σ^{−1} Σi) − n + ln( det Σ / det Σi ) ) ds.
This decoupling allows us to set different ε values for each component, i.e., εµ for the mean and εΣ for the covariance matrix. Different ε lead to different learning rates. The effectiveness of this decoupling has also been shown in Abdolmaleki et al. (2017). We always set a much smaller ε for the covariance than for the mean. The intuition is that while we would like the distribution to move fast in the action space, we also want to keep the exploration to avoid premature convergence.
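For reference, a minimal numpy sketch of this decomposition for a single state (in an actual implementation the integral over µq(s) becomes an average over sampled states):

```python
import numpy as np

def decoupled_gaussian_kl(mu_old, sigma_old, mu_new, sigma_new):
    """Split KL(N(mu_old, sigma_old) || N(mu_new, sigma_new)) into a mean term C_mu
    and a covariance term C_Sigma so that each can be bounded by its own epsilon."""
    n = mu_old.shape[0]
    sigma_new_inv = np.linalg.inv(sigma_new)
    diff = mu_new - mu_old
    c_mu = 0.5 * diff @ sigma_new_inv @ diff
    _, logdet_new = np.linalg.slogdet(sigma_new)
    _, logdet_old = np.linalg.slogdet(sigma_old)
    c_sigma = 0.5 * (np.trace(sigma_new_inv @ sigma_old) - n + logdet_new - logdet_old)
    return c_mu, c_sigma   # c_mu + c_sigma equals the full Gaussian KL
```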
In order to solve the constrained optimisation in the M-step, we first write the generalised Lagrangian equation, i.e,
L(θ, ηµ, ηΣ) = ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds + ηµ( εµ − Cµ ) + ηΣ( εΣ − CΣ ),
where ηµ and ηΣ are Lagrangian multipliers. Following prior work on constrained optimisation, we formulate the following primal problem,
max_θ min_{ηµ>0, ηΣ>0} L(θ, ηµ, ηΣ).
In order to solve for θ we iteratively solve the inner and outer optimisation programs independently: We fix the Lagrangian multipliers to their current value and optimise for θ (outer maximisation) and then fix the parameters θ to their current value and optimise for the Lagrangian multipliers (inner minimisation). We continue this procedure until policy parameters θ and Lagrangian multipliers converge. Please note that the same approach can be employed to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL.
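The inner minimisation over the multipliers can be implemented as a simple projected gradient step: since ∂L/∂ηµ = εµ − Cµ, the multiplier grows while its constraint is violated and decays towards zero once the constraint is satisfied. A schematic Python sketch (the learning rate and the projection at zero are illustrative choices, not the paper's exact settings):

```python
def multiplier_step(eta_mu, eta_sigma, c_mu, c_sigma, eps_mu, eps_sigma, lr=1e-2):
    """One gradient step of the inner minimisation over the Lagrange multipliers.
    c_mu, c_sigma are the current (sample-based) KL components; eps_* are the bounds."""
    eta_mu = max(0.0, eta_mu - lr * (eps_mu - c_mu))           # grows if C_mu > eps_mu
    eta_sigma = max(0.0, eta_sigma - lr * (eps_sigma - c_sigma))
    return eta_mu, eta_sigma
```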
D.4 PARAMETRIC VARIATIONAL DISTRIBUTION
In this case we assume our variational distribution also uses a Gaussian distribution over the action space and use the same structure as our policy π.
Similar to the non-parametric case for a Gaussian distribution in the M-step we also use a decoupled KL but this time in the E-step for a Gaussian variational distribution. Using the same reasoning as in the previous section we can obtain the following generalized Lagrangian equation:
L(θq, ηµ, ηΣ) = ∫ µq(s) ∫ q(a|s; θq) Ai(a, s) da ds + ηµ( εµ − Cµ ) + ηΣ( εΣ − CΣ ),
where ηµ and ηΣ are Lagrangian multipliers, and where we use the advantage function A(a, s) instead of the Q-function Q(a, s), as it empirically gave better performance. Please note that the KL in the E-step is different from the one used in the M-step. Following prior work on constrained optimisation, we can formulate the following primal problem,
max_{θq} min_{ηµ>0, ηΣ>0} L(θq, ηµ, ηΣ).
In order to solve for θq we iteratively solve the inner and outer optimisation programs independently. To do that, we fix the Lagrangian multipliers to their current value and optimise for θq (outer maximisation); in this case we use the likelihood ratio gradient to compute the gradient w.r.t. θq. Subsequently we fix the parameters θq to their current value and optimise for the Lagrangian multipliers (inner minimisation). We iteratively continue this procedure until the policy parameters θq and the Lagrangian multipliers converge. Please note that the same approach can be used to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL. As our policy has the same structure as the parametric variational distribution, the M-step in this case reduces to setting the policy parameters θ to the parameters θq obtained in the E-step, i.e.,
θi+1 = θq.
E IMPLEMENTATION DETAILS
While we ran most of our experiments using a single learner, we implemented a scalable variant of the presented method in which multiple workers collect data independently in an instance of the considered environment, compute gradients and send them to a chief (or parameter server) that performs parameter update by averaging gradients. That is we use distributed synchronous gradient descent. These procedures are described in Algorithms 1 and 2 for the non-parametric case and 3 for the parametric case.
Algorithm 1 MPO (chief)
1: Input: G, the number of gradients to average
2: while True do
3: initialize N = 0
4: initialize gradient stores sφ = {}, sθ = {}, sη = {}, sηµ = {}, sηΣ = {}
5: while N < G do
6: receive next gradient from worker w
7: sφ = sφ + [δφ^w]
8: sθ = sθ + [δθ^w]
9: sη = sη + [δη^w]
10: sηµ = sηµ + [δηµ^w]
11: sηΣ = sηΣ + [δηΣ^w]
12: N = N + 1
13: update parameters with the average gradient from sφ, sθ, sη, sηµ, sηΣ
14: send new parameters to workers
Algorithm 2 MPO (worker) - Non parametric variational distribution
1: Input: ε, εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s,θi), η, ηµ, ηΣ
4: for each worker do
5: while Lcurr < Lmax do
6: update replay buffer B with L trajectories from the environment
7: k = 0
8: // Find better policy by gradient descent
9: while k < 1000 do
10: sample a mini-batch B of N (s, a, r) pairs from replay
11: sample M additional actions for each state from B, π(a|s,θi), for estimating integrals
12: compute gradients, estimating integrals using samples
13: // Q-function gradient:
14: δφ = ∂φ L′φ(φ)
15: // E-Step gradient:
16: δη = ∂ηg(η)
17: Let: q(a|s) ∝ π(a|s,θi) exp( Qθi(a, s, φ′) / η )
18: // M-Step gradient:
19: [δηµ , δηΣ ] = α∂ηµ,ηΣL(θk, ηµ, ηΣ)
20: δθ = ∂θL(θ, ηµk+1, ηΣk+1)
21: send gradients to chief worker
22: wait for gradient update by chief
23: fetch new parameters φ, θ, η, ηµ, ηΣ
24: k = k + 1
25: i = i + 1, Lcurr = Lcurr + L
26: θi = θ, φ′ = φ
Algorithm 3 MPO (worker) - parametric variational distribution
1: Input: εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s,θi), η, ηµ, ηΣ
4: for each worker do
5: while Lcurr < Lmax do
6: update replay buffer B with L trajectories from the environment
7: k = 0
8: // Find better policy by gradient descent
9: while k < 1000 do
10: sample a mini-batch B of N (s, a, r) pairs from replay
11: sample M additional actions for each state from B, π(a|s,θk), for estimating integrals
12: compute gradients, estimating integrals using samples
13: // Q-function gradient:
14: δφ = ∂φ L′φ(φ)
15: // E-Step gradient:
16: [δηµ, δηΣ] = α ∂ηµ,ηΣ L(θk, ηµ, ηΣ)
17: δθ = ∂θ L(θ, ηµ_{k+1}, ηΣ_{k+1})
18: // M-Step gradient: in practice there is no M-step in this case, as the policy and the variational distribution q use the same structure.
19: send gradients to chief worker
20: wait for gradient update by chief
21: fetch new parameters φ, θ, η, ηµ, ηΣ
22: k = k + 1
23: i = i + 1, Lcurr = Lcurr + L
24: θi = θ, φ′ = φ

1. What is the focus of the paper, and how does it contribute to the field of policy-as-inference approaches?
2. What are the strengths of the proposed method, particularly in terms of its empirical results?
3. What are the weaknesses or limitations of the paper, such as the challenge of normalizing the non-parametric q(a|s) in equation (6)?
4. How does the reviewer assess the clarity and motivation of the presentation?
5. What are the questions raised by the reviewer regarding the replacement of the KL divergence regularizer with a "hard" constraint, and the use of Lagrange multipliers in the optimization process?

Review
This is an interesting policy-as-inference approach, presented in a reasonably clear and well-motivated way. I have a couple questions which somewhat echo questions of other commenters here. Unfortunately, I am not sufficiently familiar with the relevant recent policy learning literature to judge novelty. However, as best I am aware the empirical results presented here seem quite impressive for off-policy learning.
- When is it possible to normalize the non-parametric q(a|s) in equation (6)? It seems to me this will be challenging in most any situation where the action space is continuous. Is this guaranteed to be Gaussian? If so, I don’t understand why.
– In equations (5) and (10), a KL divergence regularizer is replaced by a “hard” constraint. However, for optimization purposes, in C.3 the hard constraint is then replaced by a soft constraint (with Lagrange multipliers), which depend on values of epsilon. Are these values of epsilon easy to pick in practice? If so, why are they easier to pick than e.g. the lambda value in eq (10)? |
ICLR | Title
Maximum a Posteriori Policy Optimisation
Abstract
We introduce a new algorithm for reinforcement learning called Maximum aposteriori Policy Optimisation (MPO) based on coordinate ascent on a relativeentropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings.
1 INTRODUCTION
Model free reinforcement learning algorithms can acquire sophisticated behaviours by interacting with the environment while receiving simple rewards. Recent experiments (Mnih et al., 2015; Jaderberg et al., 2016; Heess et al., 2017) successfully combined these algorithms with powerful deep neural-network approximators while benefiting from the increase of compute capacity.
Unfortunately, the generality and flexibility of these algorithms comes at a price: They can require a large number of samples and – especially in continuous action spaces – suffer from high gradient variance. Taken together these issues can lead to unstable learning and/or slow convergence. Nonetheless, recent years have seen significant progress, with improvements to different aspects of learning algorithms including stability, data-efficiency and speed, enabling notable results on a variety of domains, including locomotion (Heess et al., 2017; Peng et al., 2016), multi-agent behaviour (Bansal et al., 2017) and classical control (Duan et al., 2016).
Two types of algorithms currently dominate scalable learning for continuous control problems: First, Trust-Region Policy Optimisation (TRPO; Schulman et al. 2015) and the derivative family of Proximal Policy Optimisation algorithms (PPO; Schulman et al. 2017b). These policy-gradient algorithms are on-policy by design, reducing gradient variance through large batches and limiting the allowed change in parameters. They are robust, applicable to high-dimensional problems, and require moderate parameter tuning, making them a popular first choice (Ho & Ermon, 2016). However, as on-policy algorithms, they suffer from poor sample efficiency.
In contrast, off-policy value-gradient algorithms such as the Deep Deterministic Policy Gradient (DDPG, Silver et al. 2014; Lillicrap et al. 2016), Stochastic Value Gradient (SVG, Heess et al. 2015), and the related Normalized Advantage Function formulation (NAF, Gu et al. 2016b) rely on experience replay and learned (action-)value functions. These algorithms exhibit much better data efficiency, approaching the regime where experiments with real robots are possible (Gu et al., 2016a; Andrychowicz et al., 2017). While also popular, these algorithms can be difficult to tune, especially for high-dimensional domains like general robot manipulation tasks.
In this paper we propose a novel off-policy algorithm that benefits from the best properties of both classes. It exhibits the scalability, robustness and hyperparameter insensitivity of on-policy algorithms, while offering the data-efficiency of off-policy, value-based methods.
To derive our algorithm, we take advantage of the duality between control and estimation by using Expectation Maximisation (EM), a powerful tool from the probabilistic estimation toolbox, in order to solve control problems. This duality can be understood as replacing the question “what are the actions which maximise future rewards?” with the question “assuming future success in maximising
rewards, what are the actions most likely to have been taken?”. By using this estimation objective we have more control over the policy change in both E and M steps, yielding robust learning. We show below that several algorithms, including TRPO, can be directly related to this perspective. We leverage the fast convergence properties of EM-style coordinate ascent by alternating a nonparametric data-based E-step which re-weights state-action samples, with a supervised, parametric M-step using deep neural networks.
We evaluate our algorithm on a broad spectrum of continuous control problems including a 56 DoF humanoid body. All experiments used the same optimisation hyperparameters 1. Our algorithm shows remarkable data efficiency often solving the tasks we consider an order of magnitude faster than the state-of-the-art. A video of some resulting behaviours can be found here dropbox.com/s/pgcmjst7t0zwm4y/MPO.mp4.
2 BACKGROUND AND NOTATION
2.1 RELATED WORK
Casting Reinforcement Learning (RL) as an inference problem has a long history dating back at least two decades (Dayan & Hinton, 1997). The framework presented here is inspired by a variational inference perspective on RL that has previously been utilised in multiple studies; c.f. Dayan & Hinton (1997); Neumann (2011); Deisenroth et al. (2013); Rawlik et al. (2012); Levine & Koltun (2013); Florensa et al. (2017).
Particular attention has been paid to obtaining maximum entropy policies as the solution to an inference problem. The penalisation of determinism can be seen encouraging both robustness and simplicity. Among these are methods that perform trajectory optimisation using either linearised dynamics (Todorov, 2008; Toussaint, 2009; Levine & Koltun, 2013) or general dynamics as in path integral control (Kappen, 2005; Theodorou et al., 2010). In contrast to these algorithms, here we do not assume the availability of a transition model and avoid on-policy optimisation. A number of other authors have considered the same perspective but in a model-free RL setting (Neumann, 2011; Peters et al., 2010a; Florensa et al., 2017; Daniel et al., 2016) or inverse RL problems (Ziebart et al., 2008). These algorithms are more directly related to our work and can be cast in the same (EM-like) alternating optimisation scheme on which we base our algorithm. However, they typically lack the maximisation (M)-step – with the prominent exception of REPS, AC-REPS, PI2-GPS and MDGPS (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016) to which our algorithm is closely related as outlined below. An interesting recent addition to these approaches is an EM-perspective on the PoWER algorithm (Roux, 2016) which uses the same iterative policy improvement employed here, but commits to parametric inference distributions and avoids an exponential reward transformation, resulting in a harder to optimise lower bound.
As an alternative to these policy gradient inspired algorithms, the class of recent algorithms for soft Q-learning (e.g. Rawlik et al. (2012); Haarnoja et al. (2017); Fox et al. (2016) parameterise and estimate a so called “soft” Q-function directly, implicitly inducing a maximum entropy policy. A perspective that can also be extended to hierarchical policies (Florensa et al., 2017), and has recently been used to establish connections between Q-learning and policy gradient methods (O’Donoghue et al., 2016; Schulman et al., 2017a). In contrast, we here rely on a parametric policy, our bound and derivation is however closely related to the definition of the soft (entropy regularised) Q-function.
A line of work, that is directly related to the “RL as inference” perspective, has focused on using information theoretic regularisers such as the entropy of the policy or the Kullback-Leibler divergence (KL) between policies to stabilise standard RL objectives. In fact, most state-of-the-art policy gradient algorithms fall into this category. For example see the entropy regularization terms used in Mnih et al. (2016) or the KL constraints employed by work on trust-region based methods (Schulman et al., 2015; 2017b; Gu et al., 2017; Wang et al., 2017). The latter methods introduce a trust region constraint, defined by the KL divergence between the new policy and the old policy, so that the expected KL divergence over state space is bounded. From the perspective of this paper these trust-region based methods can be seen as optimising a parametric E-step, as in our algorithm, but are “missing” an explicit M-step.
1With the exception of the number of samples collected between updates.
Finally, the connection between RL and inference has been invoked to motivate work on exploration. The most prominent examples for this are formed by work on Boltzmann exploration such as Kaelbling et al. (1996); Perkins & Precup (2002); Sutton (1990); O’Donoghue et al. (2017), which can be connected back to soft Q-learning (and thus to our approach) as shown in Haarnoja et al. (2017).
2.2 MARKOV DECISION PROCESSES
We consider the problem of finding an optimal policy π for a discounted reinforcement learning (RL) problem; formally characterized by a Markov decision process (MDP). The MDP consists of: continuous states s, actions a, transition probabilities p(st+1|st, at) – specifying the probability of transitioning from state st to st+1 under action at –, a reward function r(s, a) ∈ R as well as the discounting factor γ ∈ [0, 1). The policy π(a|s,θ) (with parameters θ) is assumed to specify a probability distribution over action choices given any state and – together with the transition probabilities – gives rise to the stationary distribution µπ(s).
Using these basic quantities we can now define the notion of a Markov sequence or trajectory τπ = {(s0, a0) . . . (sT , aT )} sampled by following the policy π; i.e. τπ ∼ pπ(τ) with pπ(τ) = p(s0) ∏ t>0 p(st+1|st, at)π(at|st); and the expected return Eτπ [ ∑∞ t=0 γ
tr(st, st)]. We will use the shorthand rt = r(st, at).
3 MAXIMUM A POSTERIORI POLICY OPTIMISATION
Our approach is motivated by the well established connection between RL and probabilistic inference. This connection casts the reinforcement learning problem as that of inference in a particular probabilistic model. Conventional formulations of RL aim to find a trajectory that maximizes expected reward. In contrast, inference formulations start from a prior distribution over trajectories, condition a desired outcome such as achieving a goal state, and then estimate the posterior distribution over trajectories consistent with this outcome.
A finite-horizon undiscounted reward formulation can be cast as inference problem by constructing a suitable probabilistic model via a likelihood function p(O = 1|τ) ∝ exp( ∑ t rt/α), where α is a temperature parameter. Intuitively, O can be interpreted as the event of obtaining maximum reward by choosing an action; or the event of succeeding at the RL task (Toussaint, 2009; Neumann, 2011). With this definition we can define the following lower bound on the likelihood of optimality for the policy π:
log pπ(O = 1) = log ∫ pπ(τ)p(O = 1|τ)dτ ≥ ∫ q(τ) [ log p(O = 1|τ) + log pπ(τ)
q(τ)
] dτ (1)
= Eq [∑
t
rt/α ] −KL ( q(τ)||pπ(τ) ) = J (q, π), (2)
where pπ is the trajectory distribution induced by policy π(a|s) as described in section 2.2 and q(τ) is an auxiliary distribution over trajectories that will discussed in more detail below. The lower bound J is the evidence lower bound (ELBO) which plays an important role in the probabilistic modeling literature. It is worth already noting here that optimizing (2) with respect to q can be seen as a regularized RL problem.
An important motivation for transforming a RL problem into an inference problem is that this allows us draw from the rich toolbox of inference methods: For instance, J can be optimized with the familiy of expectation maximization (EM) algorithms which alternate between improving J with respect to q and π. In this paper we follow classical (Dayan & Hinton, 1997) and more recent works (e.g. Peters et al. 2010b; Levine & Koltun 2013; Daniel et al. 2016; Wirth et al. 2016) and cast policy search as a particular instance of this family. Our algorithm then combines properties of existing approaches in this family with properties of recent off-policy algorithms for neural networks.
The algorithm alternates between two phases which we refer to as E and M step in reference to an EM-algorithm. The E-step improves J with respect to q. Existing EM policy search approaches perform this step typically by reweighting trajectories with sample returns (Kober & Peters, 2009) or via local trajectory optimization (Levine & Koltun, 2013). We show how off-policy deep RL
techniques and value-function approximation can be used to make this step both scalable as well as data efficient. The M-step then updates the parametric policy in a supervised learning step using the reweighted state-action samples from the E-step as targets.
These choices lead to the following desirable properties: (a) low-variance estimates of the expected return via function approximation; (b) low-sample complexity of value function estimate via robust off-policy learning; (c) minimal parametric assumption about the form of the trajectory distribution in the E-step; (d) policy updates via supervised learning in the M step; (e) robust updates via hard trust-region constraints in both the E and the M step.
3.1 POLICY IMPROVEMENT
The derivation of our algorithm then starts from the infinite-horizon analogue of the KL-regularized expected reward objective from Equation (2). In particular, we consider variational distributions q(τ) that factor in the same way as pπ , i.e. q(τ) = p(s0) ∏ t>0 p(st+1|st, at)q(at|st) which yields:
J (q,θ) = Eq [ ∞∑ t=0 γt [ rt − αKL ( q(a|st)‖π(a|st,θ) )]] + log p(θ). (3)
Note that due to the assumption about the structure of q(τ) the KL over trajectories decomposes into a KL over the individual state-conditional action distributions. This objective has also been considered e.g. by Haarnoja et al. (2017); Schulman et al. (2017a). The additional log p(θ) term is a prior over policy parameters and can be motivated by a maximum a-posteriori estimation problem (see appendix for more details).
We also define the regularized Q-value function associated with (3) as
Q^q_θ(s, a) = r0 + E_{q(τ), s0=s, a0=a}[ ∑_{t≥1} γ^t ( rt − αKL(qt‖πt) ) ],  (4)
with KL(qt‖πt) = KL( q(a|st) ‖ π(a|st,θ) ). Note that KL(q0‖π0) and p(θ) are not part of the Q-function as they are not a function of the action.
We observe that optimizing J with respect to q is equivalent to solving an expected reward RL problem with augmented reward r̃t = rt − α log( q(at|st) / π(at|st,θ) ). In this view π represents a default policy towards which q is regularized – i.e. the current best policy. The MPO algorithm treats π as the primary object of interest. In this case q serves as an auxiliary distribution that allows optimizing J via alternate coordinate ascent in q and πθ, analogous to the expectation-maximization algorithm in the probabilistic modelling literature. In our case, the E-step optimizes J with respect to q while the M-step optimizes J with respect to π. Different optimizations in the E-step and M-step lead to different algorithms. In particular, we note that for the case where p(θ) is an uninformative prior a variant of our algorithm has a monotonic improvement guarantee as shown in Appendix A.
3.2 E-STEP
In the E-step of iteration i we perform a partial maximization of J (q,θ) with respect to q given θ = θi. We start by setting q = πθi and estimate the unregularized action-value function:
Qqθi(s, a) = Qθi(s, a) = Eτπi ,s0=s,a0=a [ ∞∑ t γtrt ] , (5)
since KL(q||πi) = 0. In practice we estimate Qθi from off-policy data (we refer to Section 4 for details about the policy evaluation step). This greatly increases the data efficiency of our algorithm. Given Qθi we improve the lower bound J w.r.t. q by first expanding Qθi(s, a) via the regularized Bellman operator Tπ,q = Eq(a|s) [ r(s, a) − αKL(q‖πi) + γEp(s′|s,a)[Vθi(s′)]], and optimize the
“one-step” KL regularised objective
max_q J̄s(q, θi) = max_q T^{π,q} Qθi(s, a) = max_q Eµ(s)[ E_{q(·|s)}[Qθi(s, a)] − αKL(q‖πi) ],  (6)
since Vθi(s) = E_{q(a|s)}[Qθi(s, a)] and thus Qθi(s, a) = r(s, a) + γ E_{p(s′|s,a)}[Vθi(s′)].
Maximizing Equation (6), thus obtaining qi = arg max_q J̄(q, θi), does not fully optimize J since we treat Qθi as constant with respect to q. An intuitive interpretation of qi is that it chooses the soft-optimal action for one step and then resorts to executing policy π. In the language of the EM algorithm this optimization implements a partial E-step. In practice we also choose µq to be the stationary distribution as given through samples from the replay buffer.
CONSTRAINED E-STEP
The reward and the KL terms are on an arbitrary relative scale. This can make it difficult to choose α. We therefore replace the soft KL regularization with a hard constraint with parameter ε, i.e.,
max_q Eµ(s)[ E_{q(a|s)}[ Qθi(s, a) ] ]  s.t.  Eµ(s)[ KL(q(a|s), π(a|s,θi)) ] < ε.  (7)
If we choose to explicitly parameterize q(a|s) – option 1 below – the resulting optimisation is similar to that performed by the recent TRPO algorithm for continuous control (Schulman et al., 2015); only in an off-policy setting. Analogously, the unconstrained objective (6) is similar to the objective used by PPO (Schulman et al., 2017b). We note, however, that the KL is reversed when compared to the KL used by TRPO and PPO.
To implement (7) we need to choose a form for the variational policy q(a|s). Two options arise:
1. We can use a parametric variational distribution q(a|s,θq), with parameters θq, and optimise Equation (7) via the likelihood ratio or action-value gradients. This leads to an algorithm similar to TRPO/PPO and an explicit M-step becomes unnecessary (see Alg. 3).
2. We can choose a non-parametric representation of q(a|s) given by one probability factor per sample. To achieve generalization in state space we then fit a parametric policy in the M-step.
Fitting a parametric policy in the M-step is a supervised learning problem, allowing us to employ various regularization techniques at that point. It also makes it easier to enforce the hard KL constraint.
NON PARAMETRIC VARIATIONAL DISTRIBUTION
In the non-parametric case we can obtain the optimal sample based q distribution – the solution to Equation (7) – in closed form (see the appendix for a full derivation), as,
qi(a|s) ∝ π(a|s,θi) exp( Qθi(s, a) / η∗ ),  (8)
where we can obtain η∗ by minimising the following convex dual function,
g(η) = ηε + η ∫ µ(s) log ( ∫ π(a|s,θi) exp( Qθi(s, a) / η ) da ) ds,  (9)
after the optimisation of which we can evaluate qi(a|s) on given samples. This optimization problem is similar to the one solved by relative entropy policy search (REPS) (Peters et al., 2010a) with the difference that we optimise only for the conditional variational distribution q(a|s) instead of a joint distribution q(a, s) – effectively fixing µq(s) to the stationary distribution given by previously collected experience – and we use the Q function of the old policy to evaluate the integral over a. While this might seem unimportant it is crucial as it allows us to estimate the integral over actions with multiple samples without additional environment interaction. This greatly reduces the variance of the estimate and allows for fully off-policy learning at the cost of performing only a partial optimization of J as described above.
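To make this sample-based E-step concrete, the sketch below (our own illustration, not the authors' reference implementation; the function names and the use of scipy.optimize are assumptions) minimises an empirical estimate of the dual in Equation (9) over the temperature η and then forms the non-parametric weights of Equation (8) from M actions sampled per state.

```python
import numpy as np
from scipy.optimize import minimize

def dual(eta, q_values, eps):
    """Empirical dual g(eta) of Eq. (9).
    q_values: array [B, M] of Q(s_n, a_nm) for M actions sampled from pi(.|s_n, theta_i)."""
    max_q = q_values.max(axis=1, keepdims=True)
    # Numerically stable estimate of log E_pi[exp(Q/eta)] per state.
    log_mean_exp = np.log(np.mean(np.exp((q_values - max_q) / eta), axis=1)) + max_q[:, 0] / eta
    return eta * eps + eta * np.mean(log_mean_exp)

def nonparametric_e_step(q_values, eps=0.1):
    # Minimise the convex dual over eta > 0 to obtain eta*.
    res = minimize(lambda e: dual(e[0], q_values, eps), x0=[1.0],
                   bounds=[(1e-6, None)], method="L-BFGS-B")
    eta_star = res.x[0]
    # q_i(a|s) is proportional to pi(a|s) exp(Q/eta*); since actions were sampled from pi,
    # the per-state weights reduce to a softmax of Q/eta* over the sampled actions.
    w = np.exp((q_values - q_values.max(axis=1, keepdims=True)) / eta_star)
    w /= w.sum(axis=1, keepdims=True)
    return w, eta_star
```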
3.3 M-STEP
Given qi from the E-step we can optimize the lower bound J with respect to θ to obtain an updated policy θi+1 = arg maxθ J (qi,θ). Dropping terms independent of θ this entails solving for the solution of
max_θ J (qi, θ) = max_θ Eµq(s)[ E_{q(a|s)}[ log π(a|s,θ) ] ] + log p(θ),  (10)
which corresponds to a weighted maximum a-posteriori estimation (MAP) problem where samples are weighted by the variational distribution from the E-step. Since this is essentially a supervised learning step we can choose any policy representation in combination with any prior for regularisation. In this paper we set p(θ) to a Gaussian prior around the current policy, i.e., p(θ) ≈ N( µ = θi, Σ = Fθi / λ ), where θi are the parameters of the current policy distribution, Fθi is the empirical Fisher information matrix and λ is a positive scalar. As shown in the appendix this suggests the following generalized M-step:
max_π Eµq(s)[ E_{q(a|s)}[ log π(a|s,θ) ] − λ KL( π(a|s,θi), π(a|s,θ) ) ]  (11)
which can be re-written as the hard constrained version:
max_π Eµq(s)[ E_{q(a|s)}[ log π(a|s,θ) ] ]  s.t.  Eµq(s)[ KL(π(a|s,θi), π(a|s,θ)) ] < ε.  (12)
This additional constraint minimises the risk of overfitting the samples, i.e. it helps us to obtain a policy that generalises beyond the state-action samples used for the optimisation. In practice we have found the KL constraint in the M-step to greatly increase stability of the algorithm. We also note that in the E-step we are using the reverse, mode-seeking, KL while in the M-step we are using the forward, moment-matching, KL which reduces the tendency of the entropy of the parametric policy to collapse. This is in contrast to other RL algorithms that use the M-projection without a KL constraint to fit a parametric policy (Peters et al., 2010a; Wirth et al., 2016; Chebotar et al., 2016; Montgomery & Levine, 2016). Using a KL constraint in the M-step has also been shown to be effective for stochastic search algorithms (Abdolmaleki et al., 2017).
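For intuition, the sketch below (our own illustration, not the authors' reference implementation) expresses this M-step as a weighted maximum-likelihood loss for a full-covariance Gaussian policy with a forward-KL trust region handled in Lagrangian form; the function name, the single (η, ε) pair instead of the decoupled constraints, and the PyTorch-style interface are assumptions.

```python
import torch
from torch.distributions import MultivariateNormal, kl_divergence

def m_step_loss(mean, chol, old_mean, old_chol, actions, weights, eta, eps):
    """Weighted MLE M-step (cf. Eq. 11/12).
    mean [B, d], chol [B, d, d]: current policy head outputs for a batch of states.
    actions [B, M, d]: the M actions sampled per state in the E-step; weights [B, M]: q_i(a|s)."""
    pi = MultivariateNormal(mean, scale_tril=chol)
    pi_old = MultivariateNormal(old_mean.detach(), scale_tril=old_chol.detach())
    # Weighted log-likelihood of the E-step samples under the new policy.
    logp = pi.log_prob(actions.transpose(0, 1)).transpose(0, 1)        # [B, M]
    weighted_mle = (weights * logp).sum(dim=1).mean()
    # Forward (moment-matching) KL(pi_old || pi) acts as the trust region of Eq. (12).
    kl = kl_divergence(pi_old, pi).mean()
    return -weighted_mle + eta * (kl - eps)   # eta is a Lagrange multiplier, updated by dual descent
```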
4 POLICY EVALUATION
Our method is directly applicable in an off-policy setting. For this, we have to rely on a stable policy evaluation operator to obtain a parametric representation of the Q-function Qθ(s, a). We make use of the policy evaluation operator from the Retrace algorithm of Munos et al. (2016), which we found to yield stable policy evaluation in practice2. Concretely, we fit the Q-function Qθi(s, a, φ), as represented by a neural network with parameters φ, by minimising the squared loss:
min_φ L(φ) = min_φ E_{µb(s), b(a|s)}[ ( Qθi(st, at, φ) − Q^ret_t )^2 ],  with
Q^ret_t = Qφ′(st, at) + ∑_{j=t}^∞ γ^{j−t} ( ∏_{k=t+1}^{j} ck ) [ r(sj, aj) + γ E_{π(a|sj+1)}[Qφ′(sj+1, a)] − Qφ′(sj, aj) ],
ck = min( 1, π(ak|sk) / b(ak|sk) ),  (13)
2We note that, despite this empirical finding, Retrace may not be guaranteed to be stable with function approximation (Touati et al., 2017).
where Qφ′(s, a) denotes the output of a target Q-network, with parameters φ′, that we copy from the current parameters φ after each M-step. We truncate the infinite sum after N steps by bootstrapping with Qφ′ (rather than considering a λ return). Additionally, b(a|s) denotes the probabilities of an arbitrary behaviour policy. In our case we use an experience replay buffer and hence b is given by the action probabilities stored in the buffer; which correspond to the action probabilities at the time of action selection.
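To make the truncated target computation concrete, the following sketch (our own illustration; the function name, argument layout, and backward-recursion formulation are assumptions) computes N-step Retrace targets for one replayed segment, using the standard Retrace temporal-difference error with a discounted bootstrap.

```python
import numpy as np

def retrace_targets(q_old, v_old, v_boot, rewards, pi_probs, b_probs, gamma=0.99):
    """Sketch of truncated Retrace targets (cf. Eq. 13) for a segment of length N.
    q_old[t]  : Q_{phi'}(s_t, a_t) from the target network.
    v_old[t]  : E_pi[Q_{phi'}(s_t, .)], estimated with actions sampled from the current policy.
    v_boot    : bootstrap value E_pi[Q_{phi'}(s_N, .)] used to truncate the infinite sum.
    pi_probs, b_probs : pi(a_t|s_t) and the behaviour probabilities stored in the replay buffer."""
    N = len(rewards)
    c = np.minimum(1.0, pi_probs / b_probs)          # truncated importance weights c_k
    next_v = np.append(v_old[1:], v_boot)
    delta = rewards + gamma * next_v - q_old         # Retrace temporal-difference errors
    q_ret = np.empty(N)
    running = 0.0
    for t in reversed(range(N)):                     # backward recursion over the segment
        running = delta[t] + (gamma * c[t + 1] * running if t + 1 < N else 0.0)
        q_ret[t] = q_old[t] + running
    return q_ret
```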
5 EXPERIMENTS
For our experiments we evaluate our MPO algorithm across a wide range of tasks. Specifically, we start by looking at the continuous control tasks of the DeepMind Control Suite (Tassa et al. (2018), see Figure 1), and then consider the challenging parkour environments recently published in Heess et al. (2017). In both cases we use a Gaussian distribution for the policy whose mean and covariance are parameterized by a neural network (see appendix for details). In addition, we present initial experiments for discrete control using ATARI environments using a categorical policy distribution (whose logits are again parameterized by a neural network) in the appendix.
5.1 EVALUATION ON CONTROL SUITE
The suite of continuous control tasks that we are evaluating against contains 18 tasks, comprising a wide range of domains including well known tasks from the literature. For example, the classical cart-pole and acrobot dynamical systems, 2D and Humanoid walking as well as simple low-dimensional planar reaching and manipulation tasks. This suite of tasks was built in python on top of mujoco and will also be open sourced to the public by the time of publication.
We include plots below depicting the performance of our algorithm on all tasks, comparing it against state-of-the-art algorithms in terms of data-efficiency. However, we want to start by directing the attention of the reader to a more detailed evaluation on three of the harder tasks from the suite.
5.1.1 DETAILED ANALYSIS ON WALKER-2D, ACROBOT, HOPPER
We start by looking at the results for the classical Acrobot task (two degrees of freedom, one continuous action dimension) as well as the 2D walker (which has 12 degrees of freedom and thus a 12 dimensional action space and a 21 dimensional state space) and the hopper standing task. The reward in the Acrobot task is the distance of the robot's end-effector to an upright position of the underactuated system. For the walker task it is given by the forward velocity, whereas in the hopper the requirement is to stand still.
Figure 2 shows the results for this task obtained by applying our algorithm MPO as well as several ablations – in which different parts were removed from the MPO optimization – and two baselines: our implementation of Proximal Policy Optimization (PPO) (Schulman et al., 2017b) and DDPG. The hyperparameters for MPO were kept fixed for all experiments in the paper (see the appendix for hyperparameter settings).
As a first observation, we can see that MPO gives stable learning on all tasks and, thanks to its fully off-policy implementation, is significantly more sample efficient than the on-policy PPO baseline. Furthermore, we can observe that changing from the non-parametric variational distribution to a parametric distribution3 (which, as described above, can be related to PPO) results in only a minor asymptotic performance loss but slowed down optimisation and thus hampered sample efficiency; which can be attributed to the fact that the parametric q distribution required a stricter KL constraint. Removing the automatically tuned KL constraint and replacing it with a manually set entropy regulariser then yields an off-policy actor-critic method with Retrace. This policy gradient method still uses the idea of estimating the integral over actions – and thus, for a gradient based optimiser, its likelihood ratio derivative – via multiple action samples (as judged by a Q-Retrace critic). This idea has previously been coined as using the expected policy gradient (EPG) (Ciosek & Whiteson, 2017) and we hence denote the corresponding algorithm with EPG + Retrace, which no-longer follows the intuitions of the MPO perspective. EPG + Retrace performed well when the correct entropy regularisation scale is used. This, however, required task specific tuning (c.f. Figure 4 where this hyperparameter was set to the one that performed best in average across tasks). Finally using only a single sample to estimate the integral (and hence the likelihood ratio gradient) results in an actor-critic variant with Retrace that is the least performant off-policy algorithm in our comparison.
5.1.2 COMPLETE RESULTS ON THE CONTROL SUITE
The results for MPO (non-parameteric) – and a comparison to an implementation of state-of-the-art algorithms from the literature in our framework – on all the environments from the control suite that we tested on are shown in Figure 4. All tasks have rewards that are scaled to be between 0 and 1000. We note that in order to ensure a fair comparison all algorithms ran with exactly the same network configuration, used a single learner (no distributed computation), used the same optimizer and were tuned w.r.t. their hyperparameters for best performance across all tasks. We refer to the appendix for a complete description of the hyperparameters. Our comparison is made in terms of data-efficiency.
From the plot a few trends are readily apparent: i) We can clearly observe the advantage in terms of data-efficiency that methods relying on a Q-critic obtain over the PPO baseline. This difference is so extreme that in several instances the PPO baseline converges an order of magnitude slower than the off-policy algorithms and we thus indicate the asymptotic performance of each algorithm of PPO and DDPG (which also improved significantly later during training in some instances) with a colored star in the plot; ii) the difference between the MPO results and the (expected) policy gradient (EPG) with entropy regularisation confirm our suspicion from Section 5.1.1: finding a good setting for the entropy regulariser that transfers across environments without additional constraints on the policy distribution is very difficult, leading to instabilities in the learning curves. In contrast to this the MPO results appear to be stable across all environments; iii) Finally, in terms of data-efficiency the methods utilising Retrace obtain a clear advantage over DDPG. The single learner vanilla DDPG implementation learns the lower dimensional environments quickly but suffers in terms of learning
3We note that we use a value function baseline Eπ[Q(s, ·)] in this setup. See appendix for details.
speed in environments with sparse rewards (finger, acrobot) and higher dimensional action spaces. Overall, MPO is able to solve all environments using surprisingly moderate amounts of data. On average less than 1000 trajectories (or 106 samples) are needed to reach the best performance.
5.2 HIGH-DIMENSIONAL CONTINUOUS CONTROL
Next we turn to evaluating our algorithm on two higher-dimensional continuous control problems; humanoid and walker. To make computation time bearable in these more complicated domains we utilize a parallel variant of our algorithm: in this implementation K learners are all independently collecting data from an instance of the environment. Updates are performed at the end of each collected trajectory using distributed synchronous gradient descent on a shared set of policy and Q-function parameters (we refer to the appendix for an algorithm description). The results of this experiment are depicted in Figure 3.
For the Humanoid running domain we can observe a similar trend to the experiments from the previous section: MPO quickly finds a stable running policy, outperforming all other algorithms in terms of sample efficiency also in this high-dimensional control problem.
The case for the Walker-2D parkour domain (where we compare against a PPO baseline) is even more striking: where standard PPO requires approximately 1M trajectories to find a good policy, MPO finds a solution that is asymptotically no worse than the PPO solution in about 70k trajectories (or 60M samples), resulting in an order of magnitude improvement. In addition to the walker experiment we have also evaluated MPO on the Parkour domain using a humanoid body (with 22 degrees of freedom) which was learned successfully (not shown in the plot, please see the supplementary video).
5.3 DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE) where we used the same hyperparameter settings for the KL constraints as for the continuous control experiments. The results of this experiment can be found in the Appendix.
6 CONCLUSION
We have presented a new off-policy reinforcement learning algorithm called Maximum a-posteriori Policy Optimisation (MPO). The algorithm is motivated by the connection between RL and inference and it consists of an alternating optimisation scheme that has a direct relation to several existing algorithms from the literature. Overall, we arrive at a novel, off-policy algorithm that is highly data efficient, robust to hyperparameter choices and applicable to complex control problems. We demonstrated the effectiveness of MPO on a large set of continuous control problems.
A PROOF OF MONOTONIC IMPROVEMENT FOR THE KL-REGULARIZED POLICY OPTIMIZATION PROCEDURE
In this section we prove a monotonic improvement guarantee for KL-regularized policy optimization via alternating updates on π and q under the assumption that the prior on θ is uninformative.
A.1 REGULARIZED REINFORCEMENT LEARNING
Let π be an arbitrary policy. For any other policy q such that, for all x, a, {π(a|x) > 0} =⇒ {q(a|x) > 0}, define the π-regularized reward for policy q:
rπ,qα (x, a) = r(x, a)− α log q(a|x) π(a|x) ,
where α > 0.
Bellman operators: Define the π-regularized Bellman operator for policy q Tπ,qα V (x) = Ea∼q(·|x) [ rπ,qα (x, a) + γEy∼p(·|x,a)V (y) ] ,
and the non-regularized Bellman operator for policy q T qV (x) = Ea∼q(·|x) [ r(x, a) + γEy∼p(·|x,a)V (y) ] .
Value function: Define the π-regularized value function for policy q as V π,qα (x) = Eq [∑ t≥0 γtrπ,qα (xt, at)|x0 = x, q ] .
and the non-regularized value function V q(x) = Eq [∑ t≥0 γtr(xt, at)|x0 = x, q ] .
Proposition 1. For any q, π, V , we have V π,qα ≤ V q and Tπ,qα V ≤ T qV . Indeed
Eq[ log( q(at|xt) / π(at|xt) ) ] = KL( q(·|xt) ‖ π(·|xt) ) ≥ 0.
Optimal value function and policy Define the optimal regularized value function: V π,∗α (x) = maxq V π,q α (x), and the optimal (non-regularized) value function: V ∗(x) = maxq V q(x).
The optimal policy of the π-regularized problem qπ,∗α (·|x) = arg maxq V π,qα (x) and the optimal policy of the non-regularized problem q∗(·|x) = arg maxq V q . Proposition 2. We have that V π,qα is the unique fixed point of Tπ,qα , and V q is the unique fixed point of T q . Thus we have the following Bellman equations: For all x ∈ X ,
V^{π,q}_α(x) = ∑_a q(a|x) [ r^{π,q}_α(x, a) + γ E_{y∼p(·|x,a)}[ V^{π,q}_α(y) ] ]  (14)
V^q(x) = ∑_a q(a|x) [ r(x, a) + γ E_{y∼p(·|x,a)}[ V^q(y) ] ]  (15)
V^{π,∗}_α(x) = r^{π, q^{π,∗}_α}_α(x, a) + γ E_{y∼p(·|x,a)}[ V^{π,∗}_α(y) ]  for all a ∈ A,  (16)
V^∗(x) = max_{a∈A} [ r(x, a) + γ E_{y∼p(·|x,a)}[ V^∗(y) ] ].  (17)
Notice that (16) holds for all actions a ∈ A, and not in expectation w.r.t. a ∼ q(·|x) only.
A.2 REGULARIZED JOINT POLICY GRADIENT
We now consider a parametrized policy πθ and consider maximizing the regularized joint policy optimization problem for a given initial state x0 (this could be a distribution over initial states). Thus we want to find a parameter θ that (locally) maximizes
J (θ, q) = V πθ,q(x0) = Eq [∑ t≥0 γt ( r(xt, at)− αKL(q(·|xt)‖πθ(·|xt)) )∣∣x0, q]. We start with an initial parameter θ0 and define a sequence of policies πi = πθi parametrized by θi, in the following way:
• Given θi, define qi = arg max_q T^{πθi, q}_α V^{πθi},
• Define θi+1 as θi+1 = θi − β ∇θ Eπi[ ∑_{t≥0} γ^t KL( qi(·|xt) ‖ πθ(·|xt) ) |_{θ=θi} | x0, πi ].  (18)
Proposition 3. We have the following properties:
• The policy qi satisfies:
qi(a|x) = πi(a|x) e^{(1/α) Q^{πi}(x,a)} / E_{b∼πi(·|x)}[ e^{(1/α) Q^{πi}(x,b)} ],  (19)
where Qπ(x, a) = r(x, a) + γEy∼p(·|x,a)V π(y).
• We have V πi,qiα ≥ V πi . (20)
• For β sufficiently small, we have J (θi+1, qi+1) ≥ J (θi, qi) + c·gi,  (21)
where c is a numerical constant, and gi is the norm of the gradient (minimized by the algorithm):
gi = ‖ ∇θ Eπi[ ∑_{t≥0} γ^t KL( qi(·|xt) ‖ πθ(·|xt) ) |_{θ=θi} | x0, πi ] ‖.
Thus we build a sequence of policies (πθi, qi) whose values J (θi, qi) are non-decreasing and thus converge to a local maximum. In addition, the improvement is lower-bounded by a constant times the norm of the gradient, thus the algorithm keeps improving the performance until the gradient vanishes (when we reach the limit of the capacity of our representation).
Proof. We have
qi(·|x) = arg max_q E_{a∼q(·|x)}[ Q^{πi}(x, a) − α log( q(a|x) / πi(a|x) ) ],  where Q^{πi}(x, a) = r(x, a) + γ E_{y∼p(·|x,a)} V^{πi}(y),
from which we deduce (19). Now, from the definition of qi, we have
Tπi,qiα V πi ≥ Tπi,πiα V πi = TπiV πi = V πi .
Now, since Tπi,qiα is a monotone operator (i.e. if V1 ≥ V2 elementwise, then Tπi,qiα V1 ≥ Tπi,qiα V2) and its fixed point is V πi,qiα , we have
V πi,qiα = lim t→∞ (Tπi,qiα ) tV πi ≥ V πi ,
which proves (20).
Now, in order to prove (21) we derive the following steps.
Step 1: From the definition of qi+1 we have, for any x, Ea∼qi+1 [ Qπi+1(x, a) ] −αKL ( qi+1(·|x)‖πi+1(·|x) ) ≥ Ea∼qi [ Qπi+1(x, a) ] −αKL ( qi(·|x)‖πi+1(·|x) ) .
(22)
Writing the functional that we minimize f(π, q, θ) = Eπ [∑ t≥0 γtKL ( q(·|xt)‖πθ(·|xt) )∣∣x0, π], the update rule is θi+1 = θi − β∇θf(πi, qi, θi). Thus we have that for sufficiently small β,
f(πi, qi, θi+1) ≤ f(πi, qi, θi) − βgi,  (23)
where gi = (1/2) ‖∇θ f(πi, qi, θi)‖. Step 2: Now define F:
F(π, q, θ, π′) = Eπ[ ∑_{t≥0} γ^t ( E_{a∼q}[ Q^{π′}(xt, a) ] − α KL( q(·|xt) ‖ πθ(·|xt) ) ) | x0, π ] = δ_{x0}(I − γP^π)^{−1} T^{πθ,q}_α V^{π′}
= δ_{x0}(I − γP^π)^{−1} T^q V^{π′} − f(π, q, θ),
where δx0 is a Dirac (in the row vector x0), and P π is the transition matrix for policy π.
From (22) and (23) we deduce that
F(πi, qi+1, θi+1, πi+1) ≥ F(πi, qi, θi+1, πi+1) ≥ F(πi, qi, θi, πi+1) + βgi.
We deduce
F(πi, qi+1, θi+1, πi) ≥ F(πi, qi, θi, πi) + βgi + F(πi, qi+1, θi+1, πi) − F(πi, qi+1, θi+1, πi+1) + F(πi, qi, θi, πi+1) − F(πi, qi, θi, πi)
= F(πi, qi, θi, πi) + βgi + Eπi[ ∑_{t≥0} γ^t ( E_{a∼qi+1}[ Q^{πi}(xt, a) − Q^{πi+1}(xt, a) ] − E_{a∼qi}[ Q^{πi}(xt, a) − Q^{πi+1}(xt, a) ] ) ],
where the last term is O(β²) since πi+1 = πi + O(β) and qi+1 = qi + O(β).
This rewrites as: δ_{x0}(I − γP^{πi})^{−1} ( T^{qi+1,πi+1}_α V^{πi} − T^{qi,πi}_α V^{πi} ) ≥ βgi + O(β²).  (24)
Step 3: Now a bit of algebra. For two stochastic matrices P and P′, we have
(I − γP)^{−1} = (I − γP′)^{−1} + γ(I − γP)^{−1}(P − P′)(I − γP′)^{−1}
= (I − γP′)^{−1} + γ[ (I − γP′)^{−1} + γ(I − γP)^{−1}(P − P′)(I − γP′)^{−1} ](P − P′)(I − γP′)^{−1}
= (I − γP′)^{−1} + γ(I − γP′)^{−1}(P − P′)(I − γP′)^{−1} + γ²(I − γP)^{−1}(P − P′)(I − γP′)^{−1}(P − P′)(I − γP′)^{−1}.
Applying this equality to the transition matrices P^{πi} and P^{πi+1}, and since ‖P^{πi+1} − P^{πi}‖ = O(β), we have:
V^{qi+1,πi+1}_α = (I − γP^{πi+1})^{−1} r^{qi+1,πi+1}_α
= (I − γP^{πi})^{−1} r^{qi+1,πi+1}_α + γ(I − γP^{πi})^{−1}(P^{πi+1} − P^{πi})(I − γP^{πi})^{−1} r^{qi+1,πi+1}_α + O(β²)
= (I − γP^{πi})^{−1} r^{qi,πi}_α + (I − γP^{πi})^{−1}( r^{qi+1,πi+1}_α − r^{qi,πi}_α + γP^{πi+1} − γP^{πi} )(I − γP^{πi})^{−1} r^{qi,πi}_α + O(β²)
= V^{qi,πi}_α + (I − γP^{πi})^{−1}( T^{qi+1,πi+1}_α V^{πi} − T^{qi,πi}_α V^{πi} ) + O(β²).
Finally, using (24), we deduce that
J (θi+1, qi+1) = V^{qi+1,πi+1}_α(x0) = V^{qi,πi}_α(x0) + δ_{x0}(I − γP^{πi})^{−1}( T^{qi+1,πi+1}_α V^{πi} − T^{qi,πi}_α V^{πi} ) + O(β²)
≥ J (θi, qi) + βgi + O(β²)
≥ J (θi, qi) + (1/2) βgi,
for small enough β.
B ADDITIONAL EXPERIMENT: DISCRETE CONTROL
As a proof of concept – showcasing the robustness of our algorithm and its hyperparameters – we performed an experiment on a subset of the games contained in the "Arcade Learning Environment" (ALE). For this experiment we used the same hyperparameter settings for the KL constraints as for the continuous control experiments as well as the same learning rate and merely altered the network architecture to the standard network structure used by DQN (Mnih et al., 2015) – and created a separate network with the same architecture, but predicting the parameters of the policy distribution. A comparison between our algorithm and well-established baselines from the literature, in terms of the mean performance, is listed in Table 1. While we do not obtain state-of-the-art performance in this experiment, the fact that MPO is competitive out-of-the-box in these domains suggests that combining the ideas presented in this paper with recent advances for RL with discrete actions (Bellemare et al., 2017) could be a fruitful avenue for future work.
C EXPERIMENT DETAILS
In this section we give the details on the hyperparameters used for each experiment. All the continuous control experiments use a feed-forward network except for Parkour-2d, where we used the same network architecture as in Heess et al. (2017). Other hyperparameters for MPO with a non-parametric variational distribution were set as follows,
Hyperparameters for MPO with parametric variational distribution were as follows,
D DERIVATION OF UPDATE RULES FOR A GAUSSIAN POLICY
For continuous control we assume that the policy is given by a Gaussian distribution with a full covariance matrix, i.e., π(a|s,θ) = N (µ,Σ). Our neural network outputs the mean µ = µ(s) and Cholesky factor A = A(s), such that Σ = AA^T. The lower triangular factor A has positive diagonal elements enforced by the softplus transform Aii ← log(1 + exp(Aii)).
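A minimal sketch of this parameterisation (our own illustration; the function name and the flat-vector layout of the network outputs are assumptions) is:

```python
import numpy as np

def gaussian_policy_head(raw_mean, raw_tril, d):
    """Build the Gaussian policy parameters from network outputs:
    raw_mean: [d] mean; raw_tril: [d*(d+1)/2] entries of the lower-triangular Cholesky factor A."""
    A = np.zeros((d, d))
    A[np.tril_indices(d)] = raw_tril               # fill the lower triangle from network outputs
    diag = np.log1p(np.exp(np.diag(A)))            # softplus: A_ii <- log(1 + exp(A_ii)) keeps the diagonal positive
    A[np.diag_indices(d)] = diag
    sigma = A @ A.T                                # full covariance Sigma = A A^T
    return raw_mean, A, sigma
```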
D.1 NON-PARAMETRIC VARIATIONAL DISTRIBUTION
In this section we provide the derivations and implementation details for the non-parametric variational distribution case for both E-step and M-step.
D.2 E-STEP
The E-step with a non-parametric variational solves the following program, where we have replaced expectations with integrals to simplify the following derivations:
max_q ∫ µq(s) ∫ q(a|s) Qθi(s, a) da ds
s.t. ∫ µq(s) KL( q(a|s), π(a|s,θi) ) ds < ε,
∫∫ µq(s) q(a|s) da ds = 1.
First we write the Lagrangian equation, i.e,
L(q, η, γ) = ∫ µq(s) ∫ q(a|s) Qθi(s, a) da ds + η ( ε − ∫ µq(s) ∫ q(a|s) log( q(a|s) / π(a|s,θi) ) da ds ) + γ ( 1 − ∫∫ µq(s) q(a|s) da ds ).
Next we maximise the Lagrangian L w.r.t the primal variable q. The derivative w.r.t q reads,
∂qL(q, η, γ) = Qθi(a, s)− η log q(a|s) + η log π(a|s,θi)− (η − γ).
Setting it to zero and rearranging terms we get
q(a|s) = π(a|s,θi) exp( Qθi(a, s) / η ) exp( −(η − γ) / η ).
However the last exponential term is a normalisation constant for q. Therefore we can write,
exp( (η − γ) / η ) = ∫ π(a|s,θi) exp( Qθi(a, s) / η ) da,
γ = η − η log ( ∫ π(a|s,θi) exp( Qθi(a, s) / η ) da ).
Note that we could write γ based on π and η. At this point we can derive the dual function,
g(η) = ηε + η ∫ µq(s) log ( ∫ π(a|s,θi) exp( Qθi(a, s) / η ) da ) ds.
D.3 M-STEP
To obtain the KL constraint in the M step we set p(θ) to a Gaussian prior around the current policy, i.e,
p(θ) ≈ N ( µ = θi, Σ = Fθi / λ ),
where θi are the parameters of the current policy distribution, Fθi is the empirical Fisher information matrix and λ is a positive scalar.
With this, and dropping constant terms our optimization program becomes
max_π ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds − λ(θ − θi)^T F^{−1}_{θi} (θ − θi).  (25)
We can observe that (θ − θi)TF−1θi (θ − θi) is the second order Taylor approximation of∫ µq(s)KL(π(a|s,θi), π(a|s,θ))ds which leads us to the generalized M-step objective:
max_π ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds − λ ∫ µq(s) KL( π(a|s,θi), π(a|s,θ) ) ds  (26)
which corresponds to Equation (11) from the main text, where expectations are replaced by integrals.
After obtaining the non parametric variational distribution in the M step with a Gaussian policy we empirically observed that better results could be achieved by decoupling the KL constraint into two terms such that we can constrain the contribution of the mean and covariance separately i.e.
∫ µq(s) KL( π(a|s,θi), π(a|s,θ) ) ds = Cµ + CΣ,  (27)
where Cµ = ∫ µq(s) ½ (µ − µi)^T Σ^{−1} (µ − µi) ds,
CΣ = ∫ µq(s) ½ ( tr(Σ^{−1}Σi) − n + ln( det Σ / det Σi ) ) ds.
This decoupling allows us to set different ε values for each component, i.e., εµ, εΣ for the mean and the covariance matrix, respectively. Different ε lead to different learning rates. The effectiveness of this decoupling has also been shown in Abdolmaleki et al. (2017). We always set a much smaller epsilon for the covariance than for the mean. The intuition is that while we would like the distribution to move fast in the action space, we also want to keep the exploration to avoid premature convergence.
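The two decoupled terms have a simple closed form for Gaussians; the sketch below (our own illustration, for a single state; the function name is an assumption) evaluates them so that each can be compared against its own εµ or εΣ.

```python
import numpy as np

def decoupled_gaussian_kl(mu_old, sigma_old, mu, sigma):
    """Decoupled KL terms of Eq. (27) between N(mu_old, sigma_old) and N(mu, sigma):
    C_mu penalises movement of the mean, C_sigma penalises changes of the covariance."""
    n = mu.shape[0]
    sigma_inv = np.linalg.inv(sigma)
    c_mu = 0.5 * (mu - mu_old).T @ sigma_inv @ (mu - mu_old)
    _, logdet = np.linalg.slogdet(sigma)
    _, logdet_old = np.linalg.slogdet(sigma_old)
    c_sigma = 0.5 * (np.trace(sigma_inv @ sigma_old) - n + logdet - logdet_old)
    return c_mu, c_sigma
```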
In order to solve the constrained optimisation in the M-step, we first write the generalised Lagrangian equation, i.e,
L(θ, ηµ, ηΣ) = ∫ µq(s) ∫ q(a|s) log π(a|s,θ) da ds + ηµ( εµ − Cµ ) + ηΣ( εΣ − CΣ ),
where ηµ and ηΣ are Lagrangian multipliers. Following prior work on constrained optimisation, we formulate the following primal problem,
max θ min ηµ>0,ηΣ>0 L(θ, ηµ, ηΣ).
In order to solve for θ we iteratively solve the inner and outer optimisation programs independently: We fix the Lagrangian multipliers to their current value and optimise for θ (outer maximisation) and then fix the parameters θ to their current value and optimise for the Lagrangian multipliers (inner minimisation). We continue this procedure until policy parameters θ and Lagrangian multipliers converge. Please note that the same approach can be employed to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL.
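The inner minimisation over the multipliers admits a very simple projected gradient update; the sketch below is our own illustration (the function name and learning rate are assumptions), showing how ηµ and ηΣ grow when the corresponding constraint is violated and shrink otherwise while staying positive.

```python
def update_multipliers(eta_mu, eta_sigma, c_mu, c_sigma, eps_mu, eps_sigma, lr=0.01):
    """One projected gradient-descent step on the Lagrangian w.r.t. the multipliers.
    dL/d eta = (eps - C); if the KL term C exceeds eps, the multiplier increases."""
    eta_mu = max(1e-8, eta_mu - lr * (eps_mu - c_mu))
    eta_sigma = max(1e-8, eta_sigma - lr * (eps_sigma - c_sigma))
    return eta_mu, eta_sigma
```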
D.4 PARAMETRIC VARIATIONAL DISTRIBUTION
In this case we assume our variational distribution also uses a Gaussian distribution over the action space and use the same structure as our policy π.
Similar to the non-parametric case for a Gaussian distribution in the M-step we also use a decoupled KL but this time in the E-step for a Gaussian variational distribution. Using the same reasoning as in the previous section we can obtain the following generalized Lagrangian equation:
L(θq, ηµ, ηΣ) = ∫ µq(s) ∫ q(a|s;θq) Ai(a, s) da ds + ηµ( εµ − Cµ ) + ηΣ( εΣ − CΣ ),
where ηµ and ηΣ are Lagrangian multipliers and we use the advantage function A(a, s) instead of the Q-function Q(a, s), as it empirically gave better performance. Please note that the KL in the E-step is different from the one used in the M-step. Following prior work on constrained optimisation, we can formulate the following primal problem,
max_{θq} min_{ηµ>0, ηΣ>0} L(θq, ηµ, ηΣ)
In order to solve for θq we iteratively solve the inner and outer optimisation programs independently. In order to do that we fix the Lagrangian multipliers to their current value and optimise for θq (outer maximisation); in this case we use the likelihood ratio gradient to compute the gradient w.r.t. θq. Subsequently we fix the parameters θq to their current value and optimise for the Lagrangian multipliers (inner minimisation). We iteratively continue this procedure until the policy parameters θq and the Lagrangian multipliers converge. Please note that the same approach can be used to bound the KL explicitly instead of decoupling the contribution of mean and covariance matrix to the KL. As our policy has the same structure as the parametric variational distribution, the M-step in this case reduces to setting the policy parameters θ to the parameters θq obtained in the E-step, i.e.,
θi+1 = θq.
E IMPLEMENTATION DETAILS
While we ran most of our experiments using a single learner, we implemented a scalable variant of the presented method in which multiple workers collect data independently in an instance of the considered environment, compute gradients and send them to a chief (or parameter server) that performs parameter update by averaging gradients. That is we use distributed synchronous gradient descent. These procedures are described in Algorithms 1 and 2 for the non-parametric case and 3 for the parametric case.
Algorithm 1 MPO (chief)
1: Input: G, the number of gradients to average
2: while True do
3:   initialize N = 0
4:   initialize gradient stores sφ = {}, sθ = {}, sη = {}, sηµ = {}, sηΣ = {}
5:   while N < G do
6:     receive next gradient from worker w
7:     sφ = sφ + [δφ^w]
8:     sθ = sθ + [δθ^w]
9:     sη = sη + [δη^w]
10:    sηµ = sηµ + [δηµ^w]
11:    sηΣ = sηΣ + [δηΣ^w]
12:  update parameters with average gradients from
13:    sφ, sθ, sη, sηµ, sηΣ
14:  send new parameters to workers
Algorithm 2 MPO (worker) - Non-parametric variational distribution
1: Input: ε, εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s,θi), η, ηµ, ηΣ
4: for each worker do
5:   while Lcurr < Lmax do
6:     update replay buffer B with L trajectories from the environment
7:     k = 0
8:     // Find better policy by gradient descent
9:     while k < 1000 do
10:      sample a mini-batch B of N (s, a, r) pairs from replay
11:      sample M additional actions for each state from B, π(a|s,θi), for estimating integrals
12:      compute gradients, estimating integrals using samples
13:      // Q-function gradient:
14:      δφ = ∂φ L′φ(φ)
15:      // E-step gradient:
16:      δη = ∂η g(η)
17:      Let: q(a|s) ∝ π(a|s,θi) exp( Qθi(a, s, φ′) / η )
18:      // M-step gradient:
19:      [δηµ, δηΣ] = α ∂ηµ,ηΣ L(θk, ηµ, ηΣ)
20:      δθ = ∂θ L(θ, ηµ_{k+1}, ηΣ_{k+1})
21:      send gradients to chief worker
22:      wait for gradient update by chief
23:      fetch new parameters φ, θ, η, ηµ, ηΣ
24:      k = k + 1
25:    i = i + 1, Lcurr = Lcurr + L
26:    θi = θ, φ′ = φ
Algorithm 3 MPO (worker) - Parametric variational distribution
1: Input: εΣ, εµ, Lmax
2: i = 0, Lcurr = 0
3: Initialise Qωi(a, s), π(a|s,θi), η, ηµ, ηΣ
4: for each worker do
5:   while Lcurr < Lmax do
6:     update replay buffer B with L trajectories from the environment
7:     k = 0
8:     // Find better policy by gradient descent
9:     while k < 1000 do
10:      sample a mini-batch B of N (s, a, r) pairs from replay
11:      sample M additional actions for each state from B, π(a|s,θk), for estimating integrals
12:      compute gradients, estimating integrals using samples
13:      // Q-function gradient:
14:      δφ = ∂φ L′φ(φ)
15:      // E-step gradient:
16:      [δηµ, δηΣ] = α ∂ηµ,ηΣ L(θk, ηµ, ηΣ)
17:      δθ = ∂θ L(θ, ηµ_{k+1}, ηΣ_{k+1})
18:      // M-step gradient: in practice there is no M-step in this case as the policy and the variational distribution q use the same structure.
19:      send gradients to chief worker
20:      wait for gradient update by chief
21:      fetch new parameters φ, θ, η, ηµ, ηΣ
22:      k = k + 1
23:    i = i + 1, Lcurr = Lcurr + L
24:    θi = θ, φ′ = φ | 1. What are the strengths and weaknesses of the proposed approach in the paper regarding its theoretical foundation?
2. Are there any concerns regarding the technical aspects of the paper, particularly in terms of policy evaluation and off-policy stability?
3. How does the reviewer assess the clarity and depth of the analysis in the paper, especially regarding the shifting between different formulations?
4. What kind of evidence or validation does the reviewer expect to support the paper's claims about its performance compared to other methods? | Review | Review
This paper studies a new off-policy policy optimization algorithm using a relative entropy objective and uses an EM algorithm to solve it. The general idea is not new, namely, formulating the MDP problem as a probabilistic inference problem.
There are some technical questions:
1. For parametric EM case, there is asymptotic convergence guarantee to local optima case; However, for nonparametric EM case, there is no guarantee for that. This is the biggest concern I have for the theoretical justification of the paper.
2. In section 4, it is said that the Retrace algorithm from Munos et al. (2016) is used for policy evaluation. This is not true. The Retrace algorithm is, per se, a value iteration algorithm. I think the author could say using the policy evaluation version of Retrace, or use the truncated importance weights technique as used in the Retrace algorithm, which is more accurate.
Besides, a minor point: Retrace algorithm is not off-policy stable with function approximation, as shown in several recent papers, such as
“Convergent Tree-Backup and Retrace with Function Approximation”. But this is a minor point if the author doesn’t emphasize too much about off-policy stability.
3. The shift from the unconstrained multiplier formulation in Eq. 9 to the constrained optimization formulation in Eq. 10 should be clarified. Usually, an in-depth analysis of the relation between the choice of \lambda in the multiplier formulation and the \epsilon in the constraint should be discussed, which is necessary for further theoretical analysis.
4. The experimental conclusions are conducted without sound evidence. For example, the author claims the method to be 'highly data efficient' compared with existing approaches, however, there is no strong evidence supporting this claim.
Overall, although the motivation of this paper is interesting, I think there is still a lot of details to improve. |
ICLR | Title
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
Abstract
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
N/A
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
1 INTRODUCTION
The successful results of deep neural networks (DNNs) on supervised classification tasks heavily rely on accurate and high-quality label information. However, annotating large-scale datasets is an extremely expensive and time-consuming task. Because obtaining high-quality annotations is very difficult, in most conventional works large-scale training data have instead been obtained using crowd-sourcing platforms (Yu et al., 2018), which inevitably leads to noisy labels in the annotated samples.
While there are numerous methods that can deal with noisy labeled data, recent methods actively adopt the small loss criterion, which enables the construction of classification models that are not susceptible to noise corruption. In this learning scheme, a neural network is trained using easy samples first in the early stages of training. Harder samples are then gradually selected to train mature models as training proceeds. Jiang et al. (2018) suggested collaborative learning models, in which a mentor network delivers the data-driven curriculum loss to a student network. Han et al. (2018); Yu et al. (2019) proposed dual networks to generate gradient information jointly using easy samples and employed this information to allow the networks to teach each other. Wei et al. (2020) adopted a disagreement strategy, which determines the gradient information to update based on disagreement values between dual networks. Han et al. (2020) implemented accumulated gradients to allow the optimization process to escape from over-parameterization and to obtain more generalized results. In this paper, we tackle the major issues raised by the aforementioned methods based on the small-loss criterion, as follows.
In comprehensive experiments, the aforementioned methods gain empirical insight regarding network behavior under noisy labels. However, theoretical and quantitative explanations have not been closely investigated. In contrast, we give strong theoretical/empirical explanations to understand the network under noisy labels. In particular, we present an in-depth analysis of the small loss criterion in a probabilistic sense. We exploit the stochastic properties of noisy labeled data and develop probabilistic descriptions of data under the small loss criterion, as follows. Let P be a probability measure for the pre-softmax logits of the training samples, l be an objective function for classification, and 1{·} be an indicator function. Then, our central object to deal with is a truncated measure defined as
X ∼ µ|ζ = 1_{X; l(X)>ζ} P / P[l(X) > ζ],   Y ∼ ξ|ζ = 1_{Y; l(Y)≤ζ} P / P[l(Y) ≤ ζ],   (1)
where X and Y , which are sampled from µ|ζ and ξ|ζ, denote uncertain and certain samples defined in the pre-softmax feature space1 (i.e.,Rd), respectively. In equation 1, µ and ξ denote the probability measures of uncertain and certain samples, respectively, and ζ is a constant. Most previous works have focused on the usage of Y and the sampling strategy of ζ, but poor generalization capabilities based on the abundance of uncertain samples X has not been thoroughly investigated, even though these samples potentially contain important information. To understand the effect of noisy labels on the generalized bounds, we provide the concentration inequality of uncertain measure µ, which renders the probabilistic relation between µ and ξ and learnability of the network under noisy labels.
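A minimal sketch of this small-loss split (our own illustration; choosing ζ as the ρ-quantile of the mini-batch losses so that ρN samples are treated as certain is an assumption consistent with Algorithm 1) is:

```python
import numpy as np

def split_certain_uncertain(losses, rho=0.5):
    """Split a mini-batch into certain (Y ~ xi) and uncertain (X ~ mu) indices as in Eq. (1).
    losses: per-sample classification losses; zeta is the rho-quantile of these losses."""
    zeta = np.quantile(losses, rho)
    certain_idx = np.where(losses <= zeta)[0]     # small-loss samples
    uncertain_idx = np.where(losses > zeta)[0]    # remaining, potentially noisy samples
    return certain_idx, uncertain_idx
```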
While most conventional methods Han et al. (2018); Wei et al. (2020); Li et al. (2019a); Yu et al. (2019) require additional dual networks to guide misinformed noisy samples, the scalability is not guaranteed due to the existence of dual architectures, which have the same number of parameters as the base network. To alleviate this problem, we build a statistical machinery, which should be fully non-parametric, simple to implement, and computationally efficient to reduce the computational complexity of conventional approaches, while maintaining the concept of small-loss criterion. Based on the empirical observation of ill-behaved certain/uncertain samples, we propose the gradient flow in the Wasserstein space, which can be induced by simulating non-parametric stochastic differential equation (SDE) with respect to the Ornstein-Ulenbeck type to control the ill-behaved dynamics. The reason for selecting these dynamics will be thoroughly discussed in the following sections.
Thus, key contributions of our work are as follows.
• We theoretically verify that there exists a strong correlation between model confidence and the statistical distance between X and Y. We empirically observe that the classification accuracy worsens when the upper-bound ε of the 2-Wasserstein distance W2(µ, ξ) ≤ ε (i.e., the distributional distance between certain and uncertain samples) drastically increases. Due to the empirical nature of the upper-bound ε, it can be used as an estimator to determine whether a network suffers from over-parameterization.
• Based on empirical observations, we develop a simple, non-parametric, and computationally efficient stochastic model to control the observed ill-behaved sample dynamics. As a primal object, we propose the stochastic dynamics of the gradient flow (i.e., the Ornstein-Uhlenbeck process) and simulate a simple, non-parametric stochastic differential equation. Thus, our method does not require any additional learning parameters.
• We provide important theoretical results. First, the controllable upper-bound ε with the inverse exponential ratio is induced, which indicates that our method can efficiently control the diverging effect of Wasserstein distance. Second, the concentration inequality of transported uncertain measure is presented, which clearly renders the probabilistic relation between µ and ξ.
2 RELATED WORK
Curriculum Learning & Small-loss Criterion. To handle noisy labels, Han et al. (2018); Yu et al. (2019); Jiang et al. (2018); Wei et al. (2020); Lyu & Tsang (2020a); Han et al. (2020) adopted curriculum learning or sample selection frameworks. However, these methods only consider a small number of selected samples; a large portion of the training samples is gradually eliminated and excluded by the end of training, which inevitably leads to poor generalization capabilities. By contrast, our method can extract useful information from unselected samples X ∼ µ (i.e., uncertain samples) and enhance these samples (e.g., X′ ∼ Fµ) for more accurate classification. Chen et al. (2019) iteratively apply cross-validation to randomly partitioned noisy labeled data to identify most samples that have correct labels. To generate such partitions, they adopt the small-loss criterion for selecting samples.
Loss Correction & Label Correction. Patrini et al. (2017a); Hendrycks et al. (2018); Ren et al. (2018) either explicitly or implicitly transformed noisy labels into clean labels by correcting classification losses. Unlike these methods, our method transforms the holistic information from uncertain samples into certain samples, which implicitly reduces the effects of potentially noisy labels. Because correcting label noise by modifying the loss dynamics does not perform well in extreme noise environments, Arazo et al. (2019) adopt a label augmentation method called MixUp (Zhang et al., 2018).
1Due to the technical difficulties, we define our central objects on pre-softmax space rather than label space, i.e., the space of σ(X), σ(Y ), where σ indicates softmax function. Please refer to Appendix for more details.
Distillation. Li et al. (2019b) updated mean teacher parameters by calculating the exponential moving average of student parameters to mitigate the impact of gradients induced by noisy labels. Lukasik et al. (2020) deeply investigated the effects of label smearing for noisy labels and linked label smoothing to loss correction in a distillation framework. Similar to these methods, our method leverages the useful properties of distillation models. We set ν as a pivot measure, which guides our normalization functional Fµ for uncertain measures. This is similar to self-distillation because uncertain training samples are forced to be normalized to those of past states.
Other methods. Lee et al. (2019) induced a robust generative classifier based on pre-trained deep models. Similar to our method, Damodaran et al. (2019) designed a constraint on the Wasserstein space and adopted an adversarial framework for classification models of noisy labeled data by implementing semantic Wasserstein distance. Pleiss et al. (2020) identify noisy labeled samples by considering AUM statistics which exploits differences in training dynamics of clean and mislabeled samples. In most recent work, Li et al. (2019a) adopts semi-supervised learning (SSL) methods to deal with noisy labels where the student network utilizes both labeled/unlabeled samples to perform semi-supervised learning guided by the other teacher network.
3 DISTRIBUTIONAL NORMALIZATION
Because our main target object is a probability measure (distribution), we first define an objective function in a distributional sense. Let l be the cross-entropy loss and r̂ be a corrupted label random vector obtained from a clean label r (which is independent of X) through an unknown label transition matrix Q. Then, a conventional objective function for classification with noisy labels can be defined as follows:
min µ J [µ] = min µ EX∼µ,r̂|Q [l(X; r̂)] . (2)
However, due to the significant changes in label information, the conventional objective function defined in equation 2 cannot be used for accurate classification. Instead of directly using uncertain samples X ∼ µ as in previous works, we normalize µ in the form of a metric ball and present a holistic constraint. For a clear mathematical description, we first introduce the following definition. Definition 1. (Wasserstein ambiguity set) Let P2(Rd) = {µ : Eµd2E(x0, x) <∞,∀x0 ∈ Rd} be a 2-Wasserstein space, where d denotes the number of classes, dE is Euclidean distance defined on Rd. Then, we define a Wasserstein ambiguity set (i.e., metric ball) in this space as follows:
BW2(ν, ε) = { µ ∈ P2 ( Rd ) :W2(µ, ν) ≤ ε } , (3)
whereW2 denotes the 2-Wasserstein distance and ν is the pivot measure. Then, we propose a new objective function by imposing geometric constraints on µ as follows:
min_{Fµ∈BW2(ν,ε), ξ} J [Fµ] + J [ξ] = min_θ E_{X∼Fµθ, r̂}[l(X; r̂)] + E_{Y∼ξθ, r̂}[l(Y; r̂)],  (4)
where F : P2(Rd) → P2(Rd) is a functional for probability measures, which assures the constraint on Fµ (i.e., Fµ ∈ BW2(ν, ε)) and our main objective. The right-hand side of equation 4 is the equivalent vectorial form of the distributional form on the left-hand side. While our main objects are defined on the pre-softmax space, both probability measures µθ and ξθ are parameterized by a neural network with parameters θ. This newly proposed objective function uses the geometrically enhanced version of an uncertain measure Fµ with a certain measure ξ. In equation 4, the probability measure ν is defined as follows: ν = arg min J [ξk⋆], where ξk denotes a certain measure at the current k-th iteration and k⋆ ∈ Ik−1 = {1, · · · , k − 1}. In other words, our method finds the best probability measure that represents all certain samples so far at training time, where the uncertain measures are transported to lie in the Wasserstein ball centered on ν. In equation 4, the Wasserstein constraint on Fµ enforces uncertain measures to statistically resemble ν from a geometric perspective (i.e., W2(ν, Fµ) ≤ ε). Now, an important question naturally stems from the aforementioned analysis: how can we select the optimal radius ε? Clearly, finding an F that induces a small ε ≈ 0 is suboptimal because Fµ ≈ ν and using the objective function J [Fµ ≈ ν] can lead to the following critical problem. As the optimization process proceeds, enhanced uncertain samples X′ ∼ Fµ contribute less and less, because they are statistically identical to ν, meaning our objective in equation 4 would receive little benefit from these transported uncertain samples. By contrast, if we adopt a large radius for ε, enhanced uncertain samples will be statistically and geometrically unrelated to ν, which causes the normalized measure Fµ to yield large losses and violates our objective.
To overcome two problems above and select the radius, we make a detour, i.e., a Gaussian measure, for cutting the path between ν and Fµ (i.e., ν → N (mν ,Σν)→ Fµ) rather than directly calculating the geodesic between ν and Fµ (i.e., ν → Fµ). Specifically, we decompose the original constraint in equation 4 into two terms using the triangle inequality of the Wasserstein distance:
W2(ν, Fµ) ≤ ε = W2(ν, N (mν, Σν)) + W2(N (mν, Σν), Fµ),  (5)
where the first term (d1) is the intrinsic statistics term and the second term (d2) is the Wasserstein normalization term.
The first intrinsic statistics term sets a detour point as a Gaussian measure, for which the mean and covariance are the same as those for ν (i.e., mν = EY∼ν[Y] and Σν = CovY∼ν[Y]). The Wasserstein upper bound of this term is only dependent on the statistical structure of ν because (mν, Σν) is dependent on ν. Thus, this term induces a data-dependent, non-zero constant upper bound whenever ν ≠ N and can prevent the upper-bound from collapsing to ε → 0, regardless of F. This gives a huge advantage when dealing with ε because the first term can be considered a fixed constant during the training. The second normalization term represents our central objective. F facilitates geometric manipulation in the Wasserstein space and prevents the uncertain measure µ from diverging, where µ is normalized onto the Wasserstein ambiguity set BW2(ν, ε) in Fig. 1. The theoretical and numerical advantages of setting the detour measure to be Gaussian are explained in the following section.
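For intuition about the Gaussian detour, the 2-Wasserstein distance between two Gaussian measures has a closed form (the Bures metric on covariances); the following sketch is our own illustration and is not part of the authors' method, but it is the kind of quantity such a decomposition makes easy to monitor.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    rs1 = np.real(sqrtm(S1))
    cross = np.real(sqrtm(rs1 @ S2 @ rs1))
    bures = np.trace(S1 + S2 - 2.0 * cross)          # Bures term on the covariances
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + max(bures, 0.0)))
```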
3.1 WASSERSTEIN NORMALIZATION
In the previous section, we present a novel objective function that imposes a geometric constraint on µ such that the transformed measure Fµ lies in BW2(ν, ε) for ν. Now, we specify F and relate it to the Gaussian measure (generally Gibbs measure). For simplicity, we denote Nν = N (mν ,Σν). Proposition 1. F : R+×P2 → P2 is a functional on the probability measure such thatF [t, µ] = µt, where dµt = ptdNν , dNν = dqtdx, and µt is a solution to the following continuity equations:
∂tµt = ∇ · (µtvt) , (6)
which is read as ∂tp(t, x) = ∇ · (p(t, x)∇ log q(t, x)) in a distributional sense. Then, a uniquely defined functional Ft[·] = F [t, ·] normalizes µ onto BW2 (Nν , e−tK2 (µ)), where K2(µ) > 0 is a constant that depends on µ.
It is well known that the solution to equation 6 induces a geodesic in the 2-Wasserstein space (Villani (2008)), which is the shortest path from µ = µt=0 to Nν . The functional Ft generates a path for µt, in which the distance is exponentially decayed according to the auxiliary variable t and constant K2, meaningW2(Nν ,Ftµ) ≤ K2e−t. This theoretical results indicates that the Wasserstein distance of second term in equation 5 can be reduced/controlled with exponential ratio. Thus, by setting a different t, our method can efficiently control the diverging distance in equation 5. Unfortunately, it is typically intractable to compute the partial differential equation (PDE) in equation 6.
Algorithm 1 Wasserstein Distributional Normalization
Require: α ∈ [0, 0.2], ρ ∈ [0.1, 0.65], T = 64, ∆t = 10−4, τ = 0.001
for k = 1 to K (i.e., the total number of training iterations) do
  1) Select uncertain (1 − ρ)N and certain ρN samples from the mini-batch N.
     {Y^n_k}_{n≤ρN} ∼ ξk, {X^n_k}_{n≤(1−ρ)N} ∼ µk
  2) Update the most certain measure ν.
     if J [ξk] < J [ν] then ν ← ξk, mν ← E[Yk], and Σν ← Cov[Yk] end if
  3) Update the moving geodesic average N (mα, Σα).
     Solve the Riccati equation T ΣνT = Σξk.
     Σα = ((1 − α)Id + αT) Σν ((1 − α)Id + αT) and mα = (1 − α)mν + αmξk
  4) Simulate the discrete SDE for T steps.
     for t = 0 to T − 1 do
       X^n_{k,t+1} = X^n_{k,t} − ∇φ(X^n_{k,t}; mα)∆t + √(2τ^{−1}) Σ^α_ν dW^n_t   s.t. {X^n_{k,t=0}} ∼ µk, {X^n_{k,t=T}} ∼ FTµk
     end for
  5) Update the network with the objective function.
     J [Fµk] + J [ξk] = E_{FTµk}[l(Xk,T; r̂)] + E_{ξk}[l(Yk; r̂)]
end for
end for
To solve this problem, we adopt particle-based stochastic dynamics, which enables tractable computation. There exists a unique iterative form corresponding to the PDE in equation 6, called the multi-dimensional Ornstein-Uhlenbeck process, which can be approximated using particle-based dynamics. In particular, we draw N(1 − ρ) uncertain samples from a single batch of N samples using equation 1 for a hyper-parameter 0 ≤ ρ ≤ 1. We then simulate a discrete stochastic differential equation (SDE) for each particle using the Euler-Maruyama scheme as follows:
X^n_{t+1} = X^n_t − ∇φ(X^n_t; mν)∆t + √(2τ^{−1}∆t Σν) Z^n, Z^n ∼ N(0, I), (7)
where φ(Xt; mν) = (τ/2) d²_E(Xt, mν), n ∈ {1, · · · , N(1 − ϱ)}, dE is the Euclidean distance, and N is the mini-batch size. We selected the OU process as our stochastic dynamics for the following reasons. First, we want a computationally efficient, non-parametric method to estimate and minimize the second term of equation 5. The SDE in equation 7 corresponding to the OU process has a simple form, with fixed drift and diffusion terms that are invariant over time, which admits a non-parametric simulation. Because simulating equation 7 amounts to a simple non-parametric loop in the implementation, our method is computationally efficient compared to baseline methods such as Han et al. (2018). Second, when estimating the empirical upper bound of the Wasserstein distance, the OU process admits an explicit expression known as Mehler's formula, which can be estimated efficiently (please refer to the Appendix for more details). The overall procedure for our method is summarized in Algorithm 1.
3.2 WASSERSTEIN MOVING GEODESIC AVERAGE
In our experiments, we observe that the best measure ν is not updated for a few epochs after training begins. This is problematic because ν diverges significantly from the current certain measure ξk; equivalently, the normalized measure Fµk diverges from ξk, meaning XT and Y become increasingly statistically inconsistent. To alleviate this statistical distortion, we change the detour measure from Nν to another Gaussian measure that captures the statistics of both ξk and ν. Inspired by the moving average of Gaussian parameters in batch normalization (Ioffe & Szegedy (2015)), we propose the Wasserstein moving geodesic average. Specifically, we replace the Gaussian parameters {mν, Σν} with {mα, Σα} such that mα = (1 − α)mν + αm_{ξk} and Σα = ((1 − α)Id + αT) Σν ((1 − α)Id + αT), where T is a solution to the Riccati equation T Σν T = Σ_{ξk}. Our final detour Gaussian measure is therefore set to N^α_ν := N(m(α), Σ(α)) with 0 ≤ α ≤ 1.²
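The moving geodesic average only involves Gaussian parameters, so it can be computed in closed form. The following NumPy sketch (the helper name is ours, not from the paper's code) solves the Riccati equation T Σν T = Σξk via matrix square roots and returns the interpolated detour parameters (mα, Σα).

```python
import numpy as np
from scipy.linalg import sqrtm

def moving_geodesic_average(m_nu, S_nu, m_xi, S_xi, alpha=0.2):
    """Wasserstein moving geodesic average of N(m_nu, S_nu) and N(m_xi, S_xi)."""
    S_nu_half = np.real(sqrtm(S_nu))
    S_nu_half_inv = np.linalg.inv(S_nu_half)
    # Closed-form symmetric solution of the Riccati equation T S_nu T = S_xi.
    T = S_nu_half_inv @ np.real(sqrtm(S_nu_half @ S_xi @ S_nu_half)) @ S_nu_half_inv
    A = (1.0 - alpha) * np.eye(len(m_nu)) + alpha * T
    m_alpha = (1.0 - alpha) * m_nu + alpha * m_xi
    S_alpha = A @ S_nu @ A
    return m_alpha, S_alpha
```

With α = 0 this reduces to the original detour measure Nν, and with α = 1 it recovers the Gaussian fitted to the current certain batch ξk.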
4 THEORETICAL ANALYSIS
In equation 5, we select the detour point as a Gaussian measure because this measure can provide a statistical structure, which is similar to that of the optimal ν. In addition to this heuristic motivation, setting a detour point as a Gaussian measure (Gibbs measure) also provides theoretical advantages, e.g., the theoretical upper bound of the Wasserstein constraint terms. In this section, we investigate the explicit upper bounds of two terms in equation 5, which are naturally induced by the SDE.
2Please refer to Appendix C.4 for more details.
Proposition 2. There exists a scalar 0 < β < ∞, depending on ν, such that the following inequality holds:
W2(ν, Ftµ) ≤ ε = K1(ν) ∨ [e^{−t}K2(µ) + K2(ν)], (8)
where λmax(Σν) denotes the maximum eigenvalue of the covariance matrix Σν and K1(ν) = √(dβ λmax(Σν)) + ‖EνY‖2 is a finite constant (0 < K1 < ∞) that depends only on ν.
Intuitively, K2(µ) can be interpreted as an indicator of how diffuse the uncertain measure µ is, whereas the term e^{−t}K2(µ) controls the upper bound of the Wasserstein distance through the variable t. The other term, K2(ν), does not vanish even for very large t, which ensures a non-collapsing upper bound ε.
Proposition 3. (Concentration inequality for the normalized uncertain measure). Assume that there exist constants T ∈ [1/η, ∞), η ≥ 0, such that the following inequality holds:
E_{FTµ}[f²] − (E_{FTµ}[f])² ≤ (1 + η) E_{FTµ}[∇fᵀA∇f], f ∈ C∞0(Rd), (9)
for A ∈ Sym+d and D(A,Σν) ≤ aη for some a > 0 with any metric D defined on Sym + d . In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
FTµ( |σ − Eν[σ]| ≥ δ ) ≤ 6 exp(−√2 δ^{3/2} / K2(µ)), (10)
where σ denotes a soft-max function.
In equation 10, we show that the label information induced by the normalized uncertain measure is close to that of the most certain measure, Eν[σ], where the upper bound depends exponentially on the initial diffuseness of µ (i.e., K2(µ)). Because the upper bound of the probability inequality does not collapse to zero and FTµ is concentrated around the most certain labels (i.e., Eν[σ]), the uncertain samples XT ∼ FTµ help our method avoid over-parameterization.
4.1 EMPIRICAL UNDERSTANDINGS
We investigate the theoretical upper bound of the Wasserstein ambiguity (i.e., radius of the Wasserstein ball) for Fµ and its corresponding probability inequality. To provide more in-depth insights into the proposed method, we approximate the upper bound and demonstrate that our Wasserstein normalization actually makes neural networks more robust to label noise.
As we verified previously, according to Proposition 2, the following inequality holds:
W2(Ftµ, ν) ≤ ε = K1(ν) ∨ (K2(ν) +K2(Ftµ)) . (11)
Because the first term K1(ν) is a constant that depends only on ν and is generally small compared to the second term for t ≤ T, we only examine the behavior of the second term K2(ν) + K2(Ftµ), which can be efficiently approximated in a simple form. Because our detour measure is Gaussian, the following inequality holds for any h ∈ C∞0(Rd)³:
K̂2(µ) = lim_{s→0} (1/s) E_{X∼µ, Z∼N(0,I)}[ h( e^{−s}X + √(1 − e^{−2s}) (Σν^{1/2}Z + mν) ) − h(X) ] ≤ K2(µ), (12)
where the inequality becomes an equality if h is selected to attain the supremum over the set C∞0. For the approximation, we simply take h(X) = ‖X‖² as a test function. In this case, the following inequality naturally holds: ε̂ = K̂2(ν) + K̂2(Fµ) ≤ K2(ν) + K2(Fµ) ≤ K1(ν) ∨ (K2(ν) + K2(Fµ)) = ε. Thus, ε̂ can be considered an approximation of the theoretical upper bound ε suggested in Proposition 2. Below, we investigate the effects of Wasserstein normalization based on K̂2(µ) in equation 12.
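The approximation ε̂ = K̂2(ν) + K̂2(Fµ) can be estimated directly from mini-batch logits. The sketch below is our own illustration; the finite step s and the test function h(X) = ‖X‖² follow the choices stated above, while the helper name and the covariance jitter are assumptions.

```python
import numpy as np

def k2_hat(samples, m_nu, S_nu, s=1e-3, rng=None):
    """Monte-Carlo estimate of equation 12 with h(x) = ||x||^2 and a small s."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(S_nu + 1e-8 * np.eye(S_nu.shape[0]))  # factor of Sigma_nu
    Z = rng.standard_normal(samples.shape)
    shifted = (np.exp(-s) * samples
               + np.sqrt(1.0 - np.exp(-2.0 * s)) * (Z @ L.T + m_nu))
    h = lambda v: np.sum(v ** 2, axis=1)
    return np.mean(h(shifted) - h(samples)) / s

# eps_hat = k2_hat(certain_logits, m_nu, S_nu) + k2_hat(transported_logits, m_nu, S_nu)
```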
(1) The proposed WDN ensures that the Wasserstein ambiguity is bounded. We examine the relation between ε̂ and test accuracy in an experiment using the CIFAR-10 dataset with symmetric noise at a ratio of 0.5. Fig.2 presents the landscape for the log10-scaled cumulative average of ε̂ and test accuracy over epochs. The red dotted lines represent the landscape of the vanilla network with cross-entropy loss, where ε̂k = K̂2(νk)+K̂2(Ft=0µk) and k is the epoch index. In this case, the time constant t is set to zero, because Wasserstein normalization is not employed for the vanilla network. The black lines indicate the landscape of the proposed method, where ε̂k = K̂2(νk) + K̂2(Ft=Tµk)
3Please refer to Appendix C.2 for additional details.
in this case. It is noteworthy that the test accuracy of the vanilla network begins to decrease after 13 epochs (red dotted vertical lines in the top-right plot), whereas the Wasserstein ambiguity (i.e., the upper bound of the Wasserstein distance) increases quadratically in the top-left plot. These experimental results verify that, in vanilla networks, the distance between the uncertain measure and the most certain measure ν becomes large in the 2-Wasserstein space without any constraints. They also indicate a definite relationship between Wasserstein ambiguity and test accuracy. In the proposed WDN, the Wasserstein ambiguity can be efficiently bounded (i.e., lim supk ε̂k ≈ 2.15) while the test accuracy continues to increase, even after 13 epochs. For a detailed analysis, we compute the deviation of the empirical upper bound as ∆̂k = ε̂k − ε̂k−1. In the gray regions, the deviation for the vanilla network is greater than 2.5 × 10−2, i.e., ∆̂k > 2.5 × 10−2, and its test accuracy begins to drop, as shown in Fig. 2. In contrast, the maximum deviation of the proposed WDN is bounded above by a very small value (supk ∆̂k ≤ 8 × 10−3). (2) The proposed WDN helps networks escape from over-parameterization. To analyze the behavior of deep neural networks under over-parameterization with and without the proposed WDN, we design several variants of the WDN that begin at delayed epochs. The green, orange, and blue curves in the second row of Fig. 2 represent the landscapes when our WDN is applied after kd ∈ {10, 15, 20} epochs, respectively. In this experiment, the upper bound ε̂k is defined as
ε̂k = K̂2(νk) + K̂2(Ft=0 µk) if k < kd, and ε̂k = K̂2(νk) + K̂2(Ft=T µk) if k ≥ kd. (13)
Consider kd = 20, which is represented by the blue dotted vertical lines. Before our WDN is applied (i.e., k < kd), the network suffers from over-parameterization, which induces a significant performance drop, as indicated by the blue curve in the bottom-right plot. However, the network rapidly recovers to normal accuracy once Wasserstein normalization is applied (i.e., k ≥ kd). Similar behavior can be observed in the green and orange curves. In particular, the orange curve fluctuates less than the blue curve in terms of test accuracy. This indicates that the proposed WDN can help a network escape from over-parameterization by imposing geometric constraints in the Wasserstein space.
(3) The proposed WDN derives data-dependent bounds according to different noise levels. Another interesting point in Fig. 2 is that all curves, excluding the red curve, converge to specific values 2.15 = ε := lim infk ε̂k ≤ lim supk ε̂k := ε̄ = 2.2. The upper bound ε̄ is neither overly enlarged nor collapsed to zero, while the lower bound ε is fixed for all curves. We argue that this behavior stems from the geometric characteristics of the proposed method, where the first term in equation 5, namely W2(ν, Nν) ∝ K̂2(ν), is a non-zero, data-dependent term that is minimized by the proposed geometric constraint. Therefore, we can derive the following relationship:
[W2(ν, Fµ) ≤ W2(ν, Nν) + W2(Nν, Fµ)]⇓ ∝ [K̂2(ν) + K̂2(Fµ) = ε̂]⇓. (14)
This empirical observation verifies that a detour point set as a Gaussian measure induces the data-dependent bound (ε, ε̄), where the bound can vary according to different
noise levels and efficiently leverage data-dependent statistics. Fig.2 indicates that classification models with more stable data-dependent bounds also induce more stable convergence in test accuracy.
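In practice, the deviation ∆̂k of the empirical upper bound can serve as a simple over-parameterization monitor during training. A minimal sketch follows; the 2.5 × 10−2 threshold echoes the observation above and should be treated as dataset-dependent, and the function name is ours.

```python
def overparam_warning(eps_history, threshold=2.5e-2):
    """Return True when the latest deviation of the empirical upper bound
    exceeds the threshold, i.e., when the network is likely entering the
    over-parameterized regime observed for the vanilla network."""
    if len(eps_history) < 2:
        return False
    return (eps_history[-1] - eps_history[-2]) > threshold
```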
5 EXPERIMENTS
5.1 EXPERIMENTS ON THE CIFAR-10/100 DATASET
We used settings similar to those proposed by Laine & Aila (2016) and Han et al. (2018) for our experiments on the CIFAR-10/100 datasets. We used a 9-layer CNN as the baseline architecture with a batch size of 128. We used the Adam optimizer with (β1, β2) = (0.9, 0.99), where the learning rate linearly decreased from 10−3 to 10−5.

Synthetic Noise. We injected label noise into clean datasets using a noise transition matrix Qi,j = Pr(r̂ = j | r = i), where a noisy label r̂ is obtained from a true clean label r. We defined Qi,j following the approach of Han et al. (2018). For symmetric noise, we used the polynomial ϱ = −1.11r² + 1.78r + 0.04 for 0.2 ≤ r ≤ 0.65, where r is the noise ratio. For asymmetric noise, we set ϱ to 0.35. To select the enhanced detour measure, we set α to 0.2 for the Wasserstein moving geodesic average in all experiments. We trained our classification model for 500 epochs because the test accuracy of our method continued to increase, whereas those of the other methods did not. We compared our method with other state-of-the-art methods, including [MentorNet, Jiang et al. (2018)], [Co-teaching, Han et al. (2018)], [Co-teaching+, Yu et al. (2019)], [GCE, Zhang & Sabuncu (2018)], [RoG, Lee et al. (2019)], [JoCoR, Wei et al. (2020)], [NPCL, Lyu & Tsang (2020b)], [SIGUA, Han et al. (2020)], and [DivideMix, Li et al. (2019a)]. As shown in Table 1, the proposed WDN significantly outperformed the other baseline methods. Note that our WDN uses a simple Gaussian measure as the target pivot measure; thus, there are potential risks when handling highly concentrated and non-smooth types of noise (e.g., asymmetric noise). Nevertheless, the proposed WDN still produced accurate results, even with asymmetric noise; in this case, a variant of our WDN (i.e., WDNcot) exhibited the best performance.

Open-set Noise. In this experiment, we considered the open-set noisy scenario suggested by Wang et al. (2018), where a large number of training images were sampled from the CIFAR-100 dataset but labeled according to the classes of the CIFAR-10 dataset. We used the same 9-layer CNN as in the previous experiment. For the hyper-parameters, we set ϱ and α to 0.5 and 0.2, respectively. As shown in Table 2, our method achieved state-of-the-art accuracy.

Collaboration with Other Methods. Because our core methodology is based on the small-loss criterion, our method can collaborate with co-teaching methods. In Han et al. (2018), only certain samples (Y ∼ ξ) were used to update the colleague networks, where the number of uncertain samples gradually decreased until it reached a predetermined portion. To enhance potentially poor statistics for co-teaching, we taught the dual networks with a set of samples (Y, XT), where XT ∼ FTµ are uncertain samples enhanced using equation 7.
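For reproducibility, the synthetic-noise protocol described above can be written compactly. The following sketch builds a symmetric noise transition matrix and corrupts clean labels; the uniform off-diagonal form of Q follows the standard convention of Han et al. (2018), and the helper name is our own.

```python
import numpy as np

def inject_symmetric_noise(labels, noise_ratio, num_classes, rng=None):
    """Corrupt clean labels r with Q[i, j] = Pr(r_hat = j | r = i)."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.full((num_classes, num_classes), noise_ratio / (num_classes - 1))
    np.fill_diagonal(Q, 1.0 - noise_ratio)
    noisy = np.array([rng.choice(num_classes, p=Q[y]) for y in labels])
    return noisy, Q

# Selection ratio used in Section 5.1 for symmetric noise of ratio r:
# rho = -1.11 * r**2 + 1.78 * r + 0.04   (valid for 0.2 <= r <= 0.65)
```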
Table 1 shows the test accuracy results for the proposed collaboration model with a co-teaching network (WDNcot). This collaboration model achieved the most accurate performance on the CIFAR-100 dataset with asymmetric noise, which verifies that our WDN can be integrated into existing methods to improve their performance significantly, particularly when the density of pre-logits is highly concentrated. Fig. 3 reveals that co-teaching quickly falls into over-parameterization and suffers a drastic drop in accuracy after the 15th epoch. WDNcot also exhibits a slight accuracy drop; however, it surpassed the baseline co-teaching method by a large margin (+7%) during training. This demonstrates that our enhanced samples XT can alleviate the over-parameterization issues faced by conventional co-teaching models, which helps improve their accuracy significantly.
5.2 EXPERIMENTS ON A REAL-WORLD DATASET
To evaluate our method on real-world datasets, we employed the Clothing1M dataset presented by Xiao et al. (2015), which consists of 1M noisy, labeled, large-scale clothing images from 14 classes collected from shopping websites. It contains 50K, 10K, and 14K clean images for training, testing, and validation, respectively. We used only the noisy set for training and the clean set for testing. We set α = 0.2 and ϱ = 0.1. For a fair comparison, we followed the settings suggested in previous works. We used a pre-trained ResNet-50 as the baseline architecture with a batch size of 48. For pre-processing, we applied a random center crop, random flipping, and normalization to 224 × 224 pixels. We adopted the Adam optimizer with a learning rate starting at 10−5 that linearly decayed to 5 × 10−6 at 24K iterations. Regarding the baseline methods, we compared the proposed method to [GCE, Zhang & Sabuncu (2018)], [D2L, Ma et al. (2018)], [FW, Patrini et al. (2017b)], [WAR, Damodaran et al. (2019)], [SL, Wang et al. (2019)], [JOFL, Tanaka et al. (2018)], [DMI, Xu et al. (2019)], [PENCIL, Yi & Wu (2019)], and [MLNT, Li et al. (2019b)]. Table 3 shows that our method achieved competitive performance compared with the other baseline methods.
5.3 COMPUTATIONAL COST
Because Co-teaching, JoCoR, and DivideMix use additional networks, their number of network parameters (8.86M) is twice that of the vanilla network (4.43M). In Table 4, we compare the average training time for the first 5 epochs across various baseline methods under symmetric noise on the CIFAR-10 dataset. Non-parametric methods such as GCE and WDN require less than 12% additional time, whereas methods that require additional networks spend more time. The average time can vary across different experimental environments; in Table 4, we measured the time using publicly available code provided by the authors.
6 CONCLUSION
We proposed a novel method called WDN for the accurate classification of noisy labels. The proposed method normalizes uncertain measures to data-dependent Gaussian measures by imposing geometric constraints in the 2-Wasserstein space. We simulated a discrete SDE using the Euler–Maruyama scheme, which makes our method fast, computationally efficient, and non-parametric. In the theoretical analysis, we derived an explicit upper bound for the proposed Wasserstein normalization and experimentally demonstrated a strong relationship between this upper bound and over-parameterization. We conducted experiments on both the CIFAR-10/100 and Clothing1M datasets. The results demonstrated that the proposed WDN significantly outperforms other state-of-the-art methods.
A OPEN-SOURCE DATASET
Transition matrix for CIFAR-10/100. For the experiment summarized in Table 1, we implemented open-source code to generate the noise transition matrix discussed by Han et al. (2018), as well as the 9-layered CNN architecture (https://github.com/bhanML/Co-teaching).
Open-set noise. For the experiment summarized in Table 2, we used the same dataset for open-set noisy labels presented by Lee et al. (2019) (https://github.com/pokaxpoka/ RoGNoisyLabel).
Clothing1M. For the experiment summarized in Table 3, we used the open-source dataset presented by Xiao et al. (2015) (https://github.com/Cysu/noisy_label).
B COMPARISONS TO RELATED WORKS
Methodology       | Parametric | Class-dependency | Distillation | Sample-weight | Sample-selection
DivideMix         | ✓          | ✗                | ✗            | ✗             | ✓
Co-teaching       | ✓          | ✗                | ✓            | ✗             | ✓
JoCoR             | ✓          | ✗                | ✓            | ✗             | ✓
MLNT              | ✓          | ✓                | ✓            | ✗             | ✗
Ren et al. (2018) | ✗          | ✗                | ✗            | ✓             | ✗
NPCL              | ✗          | ✗                | ✗            | ✓             | ✗
GCE               | ✗          | ✗                | ✗            | ✓             | ✗
WDN               | ✗          | ✗                | ✗            | ✗             | ✗
Table B indicates that none of the previous methodologies conceptually subsumes our method.
Because the solution to the Fokker–Planck equation can be explicitly calculated without any additional parameters, our method is fully non-parametric (in terms of parameters beyond those of the original neural network). By contrast, co-teaching is parametric because it requires a clone network whose additional parameters are copies of those in the original network. Similarly, MLNT requires an additional teacher network for training, which also contains a large number of parameters.
Many methods based on the small-loss criterion select only certain samples, whereas our method uses a combination of ϱN certain and (1 − ϱ)N normalized uncertain samples. Therefore, our method can fully leverage each batch of the training dataset, where (1 − ϱ)N + ϱN = N. Additionally, our method does not assume any class-dependent prior knowledge. Rather than considering class-wise prior knowledge, our method uses holistic information from both certain and uncertain samples (i.e., Y and XT) in the logit space. Other meta-class-based models, such as MLNT, assume class-wise meta prior knowledge from a teacher network.
Arazo et al. (2019) assumed a beta-mixture model as the label distribution on the label space. However, owing to the non-deterministic nature of noisy label distributions, this approach sometimes fails to train under extremely non-uniform types of noise; for example, Arazo et al. (2019) reported a failure case on the Clothing1M dataset. The fundamental assumption on the noise model of mixup may be improved in future work. Similar to this method, our work has trouble when dealing with synthetic asymmetric noise at high ratios, where a relatively large performance drop is observed in Table 1 (although our method still produces the second-best performance in the table).
In the most recent work, Li et al. (2019a) also adopt co-training by implementing an additional dual network, but with a more sophisticated methodology called co-divide/co-guessing based on semi-supervised learning (SSL). We expect that the Wasserstein distance between the labeled and unlabeled probability measures is well controlled in their method, and we believe that applying OT/Markov theory (as in our paper) to their method would broaden the understanding of the LNL problem.
In contrast to sample weight methods such as GCE and NPCL, which require prior knowledge regarding the cardinality of the training samples to be weighted, our method is free from such assumptions because our Wasserstein normalization is applied in a batch-wise manner.
C TECHNICAL DIFFICULTY FOR APPLYING GENERAL OPTIMAL TRANSPORT/MARKOV THEORY TO LABEL SPACE.
Let X, Y be uncertain and certain samples in the pre-softmax feature space, and assume that we instead impose the distributional constraint on the label space (the space of σ(X), σ(Y), where σ denotes the softmax function). This space is not suitable for defining an objective function such as equation 5. All samples in this label space are of the form σ(X) = [a1, a2, · · · , ad] with Σ_{i=1}^d ai = 1; thus, the label space is the d-dimensional affine simplex Ud, which is a subset of Euclidean space, Ud ⊂ Rd. In this case, the definition of the Wasserstein space in equation 4 is not applicable, because dE is not a true metric on Ud. Moreover, the Wasserstein space P2(Ud) has rarely been investigated in the mathematical literature, which prevents us from using the technical details, assumptions, and theories developed for P2(Rd) that form the theoretical ground of our work. However, if we view this problem from a slightly different point of view and take the pre-softmax space Rd, with P2(Rd), as our base space, then all the technical issues that arise when applying OT tools in P2(Ud) can be avoided. Because the softmax is a non-parametric one-to-one function connecting the pre-softmax feature space Rd to Ud, there exists a unique label in Ud as the image of each manipulated uncertain sample. Even though our objects are defined on the pre-softmax space, the theoretical analysis in Proposition 3 involves the softmax function to evaluate the concentration inequality of the proposed transformation F as it acts on the label space Ud.
D MATHEMATICAL BACKGROUND
In this section, we introduce important definitions, notations, and propositions used in our proofs and the main paper.
D.1 NOTATION
We denote f#µ as the push-forward of µ through f. C∞0(Rd) denotes the set of smooth (C∞) functions with compact support in Rd. For the Lp-norm of a function f, we write ‖f‖p,ν = (∫ |f|p dν)^{1/p}. The Hessian matrix of a function f is denoted as Hess[f] = [∂i∂jf]ᵈ_{i,j}. Sym+d denotes the space of positive semi-definite symmetric matrices of size d × d. ‖f‖Lip denotes the Lipschitz norm of the function f. For any matrix A ∈ Md, ‖A‖op denotes the operator norm of A.
D.2 DIFFUSION-INVARIANCE AND HYPER-CONTRACTIVITY
Definition 2. The Markov semigroup (Pt)t≥0 in Rd acting on a function f ∈ C∞0 is defined as follows:
Ptf(x) = ∫ f(x′)pt(x, dx ′), (15)
where pt(x, dx′) is a transition kernel that is the probability measure for all t ≥ 0. Definition 3. (Diffusion Operator) Given a Markov semi-group Pt at time t, the diffusion operator (i.e., infinitesimal generator) L of Pt is defined as
Lg(y) = lim_{t→0} (1/t)(Ptg(y) − g(y)) = Σ_{i,j} (∂²/∂yi∂yj)[Bij(y)g(y)] − Σ_i Ai(y) (∂/∂yi)g(y), (16)
where B and A are matrix- and vector-valued measurable functions, respectively; Bij denotes the (i, j)-th component function of B and Ai denotes the i-th component function of A. Definition 4. (Diffusion-invariant Measure) Given the diffusion operator L, the probability measure µ is said to be invariant for L when EX∼µ[Lf(X)] = 0 for any f ∈ C∞0. Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010)) The Gaussian measure Nν := N(mν, Σν) with mean mν and covariance Σν is an invariant measure for the following diffusion operator L:
where B and A are matrix and vector-valued measurable functions, respectively. Bij denotes the (i, j)-th function of B and Ai denotes the i-th component function of A. Definition 4. (Diffusion-invariant Measure) Given the diffusion operator L, the probability measure µ is considered to be invariant measure to L when EX∼µ[Lf(X)] = 0 for any f ∈ C∞0 . Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010).) The Gaussian measure Nν := N (mν ,Σν) with a mean mν and covariance Σν is an invariant measure according to the following diffusion-operator L:
Lf(x) = ΣνHess[f ](x)− (x−mν)T ∇f(x), ∀f ∈ C∞0 (Rd), (17) where Bij(x) := [Σν ]ij is a constant function, and Ai(x) := xi −miν .
This generator serves as our main tool for the geometric analysis of the upper bound ε. In Section 4.1 of the main paper, we introduced an approximate upper bound K̂2(µ) without a general description of the underlying inequality. We now introduce the underlying mathematics for equation 12. Because our detour measure is Gaussian, there is a unique semi-group Pt, called the multi-dimensional Ornstein–Uhlenbeck semi-group, that is invariant to Nν. Specifically, Pt is defined as follows:
Psh(X) = E_{Z∼N(0,I)}[ h( e^{−s}X + √(1 − e^{−2s}) (Σν^{1/2}Z + mν) ) ], ∀h ∈ C∞0. (18)
The invariance property of Pt relative to our detour measure is naturally induced by the following Proposition: Proposition 4. We define C : Rd → Rd and C(X) = AX + b such that A ∈ Sym+d ,b ∈ Rd, and select an arbitrary smooth h ∈ C∞0 (Rd). We then define the diffusion Markov semi-group Psh as follows:
Psh(X) = EZ∼N [ h ( e−sX + √ 1− e−2sC(Z) )] . (19)
Then, N(A², b) is invariant with respect to Ps, meaning the following equality holds for every h and s ≥ 0:
∫_{Rd} [Psh(X) − h(X)] dN(A², b)(X) = 0. (20)
Proof. For simplicity, we denote N(A², b) := NC. Then
∫ Psh(X) dNC(X) = ∫∫ h( e^{−s}X + √(1 − e^{−2s}) C(Z) ) dNC(X) dN(Z)
 = ∫∫ h ∘ C( e^{−s}Z′ + √(1 − e^{−2s}) Z ) dN(Z′) dN(Z). (21)
The second equality holds because C is linear in Rd. Let e^{−s} = cos θ and √(1 − e^{−2s}) = sin θ for some 0 ≤ θ ≤ 2π. Then, we define φ as φ(Z′, Z) = e^{−s}Z′ + √(1 − e^{−2s}) Z = cos(θ)Z′ + sin(θ)Z, and π(Z′, Z) = Z. Based on the rotation invariance of the standard Gaussian measure, one can induce the following equality:
(N ⊗ N) ∘ (C ∘ φ)^{−1} = ((N ⊗ N) ∘ φ^{−1}) ∘ C^{−1} = N ∘ C^{−1}. (22)
Moreover, we know that dN[C^{−1}(X)] = dNC(X) = ((2π)^d |A²|)^{−1/2} e^{−0.5(X−b)ᵀA^{−2}(X−b)}. By combining equation 21 and equation 22, one can derive the following result:
∫ h ∘ C( e^{−s}Z′ + √(1 − e^{−2s}) Z ) d[N ⊗ N] = ∫ h(X) d[(N ⊗ N) ∘ φ^{−1} ∘ C^{−1}](X)
 = ∫ h(X) d[N ∘ C^{−1}](X) = ∫ h(X) dN[C^{−1}(X)]
 = ∫ h(X) dNC(X). (23)
Proposition 4 demonstrates the invariance property of the defined semi-group. If we set A = Σν^{1/2} and b = mν, then we recover equation 18.
We are now ready to define the approximation of K2(µ) in terms of semi-group invariance. Specifically, for any real-valued smooth h, we define the following inequality:
K̂2(µ) = E_{X∼µ}[Lh(X)] = lim_{s→0} E_{X∼µ}[ (1/s)(Psh(X) − h(X)) ]
 = lim_{s→0} (1/s) E_{X∼µ, Z∼N(0,I)}[ h( e^{−s}X + √(1 − e^{−2s}) (Σν^{1/2}Z + mν) ) − h(X) ] ≤ K2(µ). (24)
This inequality holds if h is selected to induce a supremum over the set C∞0 , where suph K̂2(µ, h) = suph EX∼µ[Lh(X)] = K2(µ). Although a more sophisticated design for the test function h will induce a tighter upper bound for K̂2, we determined that the L2-norm is generally sufficient.
Definition 5. (Diffuseness of the probability measure) We define the integral operator K2 : W2(Rd)→ R+ as follows:
K2(µ) = √( sup_{f∈C∞0} ∫_{Rd} |Lf(x)| dµ(x) ). (25)
According to Definition 4, we know that ∫ Lf(X)dNν(X) = 0 for any f . Based on this observation, it is intuitive that K2 estimates how the probability measure ν is distorted in terms of diffusion invariance. While this measure takes a supremum over the function space C∞0 , it searches for a function that enables the estimation of maximal distortion. Because the value of K2 is entirely dependent on the structure of µ, K2 can be considered as a constant for the sake of simplicity if the uncertain measure µ is fixed over one iteration of training. Definition 6. (Diffusion carré du champ) Let f, g ∈ C∞0 (Rd). Then, we define a bilinear form Γc in C∞0 (Rd)× C∞0 (Rd) as
Γe(f, g) = (1/2)[ LΓe−1(fg) − Γe−1(fLg) − Γe−1(gLf) ], e ≥ 1. (26)
We also denote Γ(f) ≡ Γ(f, f). The bilinear form Γ can be considered as a generalization of the integration by the parts formula, where ∫ fLg + Γ(f)dµ = 0 for the invariant measure µ of L.
Definition 7. (Curvature-Dimension condition, Ambrosio et al. (2015)) We can say that the infinitesimal generator L induces the CD(ρ,∞) curvature-dimension condition if it satisfies Γ1(f) ≤ ρΓ2(f) for all f ∈ C∞0 .
Because our diffusion operator generates a semi-group with respect to the Gibbs measure, the curvature-dimension condition can be calculated explicitly. Through simple calculations, the first-order (e = 1) diffusion carré du champ can be induced as follows:
Γ1(f) = ( [∇f]ᵀΣν∇f )². (27)
Similarly, the second-order (c = 2) diffusion carré du champ is calculated as follows:
Γ2(f) = (1/2)[ L(Γ1(f²)) − 2Γ1(f, L(f)) ] = Tr([Σν∇²f]²) + ( [∇f]ᵀΣν∇f )² = Tr([Σν∇²f]²) + Γ1(f), (28)
for an arbitrary f ∈ C∞0(Rd). Because Tr([Σν∇²f]²) is non-negative, we can infer that Γ1 ≤ Γ2. In this case, the diffusion operator L defined in Lemma 1 induces the CD(ρ = 1, ∞) curvature-dimension condition. For other diffusion operators, please refer to Bolley & Gentil (2010). Proposition 5. (Decay of Fisher information along a Markov semigroup, Bakry et al. (2013)) If we assume the curvature-dimension condition CD(ρ, ∞), then I(µt|Nν) ≤ e^{−2ρt} I(µ|Nν).
The exponential decay of the Fisher information in Proposition 5 is a core property of the exponential decay of the Wasserstein distance, which will be used in the proof of Proposition 2.
D.3 FOKKER-PLANK EQUATION, SDE
Definition 8. (Over-damped Langevin Dynamics) We have dXt = −∇φ(Xt;mν)dt+ √ 2τ−1ΣνdWt, (29)
where φ(Xt; mν) = (τ/2) d²(Xt, mν), Wt denotes Brownian motion, and d denotes the Euclidean distance. The particle Xt is distributed as Xt ∼ pt. The probability density p(x, t) converges, as t → ∞, to the Gaussian density of X∞ = Σν^{1/2}Z + mν, i.e., p∞(x) = q(x) ∝ e^{−(x−mν)ᵀΣν^{−1}(x−mν)}.
In the classical SDE literature, it is known that E[ sup_{0≤t≤T} |X̂t − Xt| ] ≤ G(T)(Nϱ)^{−1/2}, where G(T) is a constant that depends only on T and X̂ denotes the true solution of the SDE in equation 29. Because the number of uncertain samples satisfies Nϱ > 40 in our experiments, our method exhibits acceptable convergence.
D.4 GAUSSIAN WASSERSTEIN SUBSPACES
It is known that the space of non-degenerate Gaussian measures (i.e., those whose covariance matrices are positive-definite) forms a subspace of the 2-Wasserstein space, denoted as W2,g ≅ Sym+d × Rd. Because the 2-Wasserstein space can be considered a Riemannian manifold equipped with Riemannian metrics (Villani (2008)), W2,g can be endowed with a Riemannian structure that also induces the Wasserstein metric (McCann (1997)). In the Riemannian sub-manifold of Gaussian measures, the geodesic between two points γ(0) = NA and γ(1) = NB is defined as follows (Malagò et al. (2018)):
γ(α) = Nt = N (m(α),Σ(α)), (30)
where m(α) = (1 − α)mA + αmB and Σ(α) = [(1− α)I + αT ] ΣA [(1− α)I + αT ], where T ΣAT = ΣB . In Section 3.2, we set (mA,ΣA) → (mν ,Σν) and (mB ,ΣB) → (mξk ,Σξk). Regardless of how ν is updated, the statistical information regarding the current certain measure ξk is considered in the detour Gaussian measure, which yields a much smoother geometric constraint on µ.
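For reference, the 2-Wasserstein distance restricted to this Gaussian subspace also has a well-known closed form (the Bures metric), which is convenient for sanity-checking the quantities above. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form W2 between N(m1, S1) and N(m2, S2):
    W2^2 = ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    root = np.real(sqrtm(S1))
    cross = np.real(sqrtm(root @ S2 @ root))
    bures = np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + max(bures, 0.0)))
```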
E PROOFS
Proposition 6. Let Γ(µ, ν) be the set of couplings between µ and ν, and assume that the noisy label r̂ is independent of X. For the functional J[µ] = E_{X∼µ} l(X; r̂), we define D(µ, ν) as
D(µ, ν) = inf_{γ∈Γ(µ,ν)} |J[µ] − J[ν]|, (31)
where D : P2 × P2 → R. Then, D is a metric defined on P2 that is weaker than the Wasserstein metric, with D(µ, ν) ≤ αW2(µ, ν) for α = c0^{−1} r̂ + c1^{−1}(1 − r̂) and some constants c0, c1 > 0.
Proof.
|J[ν] − J[µ]| = |Eµ[l(X; r̂)] − Eν[l(Z; r̂)]|
 = |E_{µ⊗ν}[ r̂(log σ(X) − log σ(Z)) − (1 − r̂)(log(1 − σ(X)) − log(1 − σ(Z))) ]|
 ≤ E| r̂ E_{µ⊗ν}[log σ(X) − log σ(Z)] | + E| (1 − r̂) E_{µ⊗ν}[log(1 − σ(X)) − log(1 − σ(Z))] |
 ≤ E r̂ E_{µ⊗ν}|log σ(X) − log σ(Z)| + E(1 − r̂) E_{µ⊗ν}|log(1 − σ(X)) − log(1 − σ(Z))|
 ≤ c0^{−1} E(r̂) E_{µ⊗ν}|X − Z| + c1^{−1} E(1 − r̂) E_{µ⊗ν}|Z − X|
 = E[c0^{−1} r̂ + c1^{−1}(1 − r̂)] E_{µ⊗ν}|X − Z|. (32)
By taking the infimum of the aforementioned inequality over the set of couplings γ(µ, ν), we obtain the following inequality:
D(ν, µ) = inf_{γ(µ,ν)} |J[ν] − J[µ]| ≤ E[c0^{−1} r̂ + c1^{−1}(1 − r̂)] inf_{γ(µ,ν)} Eγ|X − Z|
 = E[c0^{−1} r̂ + c1^{−1}(1 − r̂)] W1(µ, ν) ≤ E[c0^{−1} r̂ + c1^{−1}(1 − r̂)] W2(µ, ν), (33)
which completes the proof.
Proposition 6 follows from the Lipschitzness of the functional J , where D searches for the best coupling to derive the minimal loss difference between two probability measures. This proposition indicates that inf |J [ν]− J [Fµ]| is bounded by the Wasserstein distance, which justifies our geometric constraint presented in equation 4. It should be noted that the prior assumption regarding noisy labels is essential for Lipschitzness. Proposition 7. Let F : R+ × P2 be a functional on probability measures such that F [t, µ] = µt, where dµt = ptdNν , dNν = dqtdx, and let µt be a solution of the continuity equation in the 2-Wasserstein space defined as follows:
∂tµt = ∇ · (µt∇Φt) , (34)
which is represented as ∂tp(t, x) = ∇ · (p(t, x)∇ log q(t, x)) in a distributional sense. Then, the functional Ft[·] = F[t, ·] is uniquely defined and normalizes µ onto BW2(Nν, e^{−t}K2(µ)), where K2(µ) < ∞ is the integral operator of Definition 5 applied to µ.
Proof. We assume that the probability measure µt is absolutely continuous with respect to the detour Gaussian measure N(mν, Σν) = Nν, i.e., µt ≪ Nν. In this case, by the Radon–Nikodym theorem, there is a corresponding unique probability density q(t, x) = qt(x) ∈ C∞0 such that dµt = qt dNν. Lemma 2. (WI-inequality, Otto & Villani (2000)) If the stationary state of µt with respect to Pt satisfies limt→∞ Eµ[Ptf] = 0 for any f ∈ C∞0, then the following inequality holds:
(d/dt+) W2(µ, µt) ≤ √(I(µt|Nν)). (35)
By integrating both sides of the inequality in Lemma 2 with respect to t ∈ (0, ∞), the following inequality can be obtained:
W2(µt, Nν) = ∫_0^∞ (d/dt+) W2(µt, Nν) dt ≤ ∫_0^∞ √(I(µt|Nν)) dt. (36)
In the aforementioned inequality, we replace the Fisher information with the diffusion generator L as follows:
W2(µ, Nν) ≤ ∫_0^∞ √(I(µt|Nν)) dt = ∫_0^∞ √( ∫ [Ptq]^{−1} Γ(Ptq) dNν ) dt = ∫_0^∞ √( ∫ L(− log Ptq) dµt ) dt. (37)
The second equality above is derived by leveraging the properties of the bilinear operator Γ (Bakry et al. (2013); Villani (2008)) with respect to the diffusion operator L, which satisfy:
∫ [Ptq]^{−1} Γ(Ptq) dNν = − ∫ L(log Ptq) qt dNν = ∫ L(− log Ptq) dµt ≥ 0. (38)
For simplicity, we denote |g| = g+ for any g ∈ C∞0. According to Proposition 5, we can relate Ftµ = µt to its initial term µ = µt=0 as follows:
∫_0^∞ √( ∫ L(− log Ptq)(X) d[Ftµ](X) ) dt ≤ ∫_0^∞ √( e^{−2ρt} ∫ L(− log Pt=0 q)(X) dµ(X) ) dt
 ≤ ∫_0^∞ √( e^{−2ρt} sup_{g∈C∞0} ∫ L+g(Z) q dNν(Z) ) dt
 = ∫_0^∞ √(e^{−2ρt}) dt · √( sup_{g∈C∞0} ∫ L+g(X) dµ(X) )
 = ρ^{−1} K2(µ). (39)
The second inequality is naturally induced because the proposed objective function is defined to select the maximal element over the set of functions g ∈ C∞0 and Lg ≤ L+g. If the integration interval is set to (0, s), then we can induce W2(µ, Ftµ) ≤ (1/ρ)(1 − e^{−s}) K2(µ). Our diffusion operator induces ρ = 1, which completes the proof.
Proposition 8. There is a scalar 0 < β < ∞, dependent on ν, such that the following inequality holds:
W2(ν, Ftµ) ≤ [√(dβ λmax(Σν)) + ‖EνY‖2] ∨ [e^{−t}K2(µ) + K2(ν)]. (40)
As a motivation for setting the detour measure to Nν, we mentioned the natural property of the non-collapsing Wasserstein distance, W2(ν, Nν) ≠ 0. However, it is unclear from a geometric perspective exactly how the upper bound (i.e., W2(ν, Nν) ≤ ?) can be induced from the intrinsic statistics term (i.e., d1 in Fig. 1). Specifically, when the covariance matrices of ν and Nν are identical, it is difficult to determine a theoretical upper bound without additional tools. The first part of this proof focuses on resolving this important issue. The second part of the proof follows naturally from Proposition 1. Please note that, in the following proposition, the parameter for the Wasserstein moving average is set to α = 0 for clarity.
Proof. Before proceeding with the first part of the proof, we define a constant β as follows:
β = sup_{1≤j≤d} ∫_0^1 (1/s) E_{Ys}[v²_{s,j}(Ys)] ds. (41)
If we assume a mild condition such that mins,j inf1≤j≤dO(vs,j) ≥ O( √ s), then the integral term in β is finite and well-defined. This value will directly yield the upper bound of the Kullback–Leibler (KL) divergence of ν. First, we introduce the following inequality.
Lemma 3. (de Bruijn’s identity, Johnson & Suhov (2001); Nourdin et al. (2014)) We let Y ∼ ν, Z ∼ N (0, I) denote a standard Gaussian random variable, and let define Ys = √ sY + √ 1− sΣ 1 2 ν Z with the score function defined as vs(x) = ∇ log ps(x) with respect to the random variable Ys. Then, the following equality holds:
KL(ν|N (0,Σν)) = ∫ 1
0
Tr
( 1
2s ΣνEps∼Ys [vs(Ys)vs(Ys)T ]
) ds. (42)
From equation 42, we can derive the relation between the KL-divergence and the constant β defined earlier:
∫_0^1 (1/2s) Tr( Σν E[vs(Ys)vs(Ys)ᵀ] ) ds ≤ ∫_0^1 (1/2s) Tr( Σν E[vs,i vs,j]ᵈ_{i,j} ) ds
 ≤ ∫_0^1 (1/2) λmax(Σν) Σ_{j=1}^d E[ v²_{s,j}(Ys)/s ] ds ≤ (1/2) λmax(Σν) ∫_0^1 Σ_{j=1}^d β ds = (1/2) λmax(Σν) d β. (43)
The second inequality holds based on the following elementary property of symmetric positive-definite matrices:
Tr(AB) ≤ ‖A‖op Tr(B) = λmax(A) Tr(B), ∀A, B ∈ Sym+d. (44)
It should be noted that because the distribution of ν is compactly supported (i.e., supp(q) is compact), the maximum eigenvalue of the covariance Σν is finite. The other relations are induced by the aforementioned definition. Next, we relate the KL-divergence and 2-Wasserstein distance naturally.
Definition 9. (Talagrand inequality for Gaussian measures, Otto & Villani (2000)) For any nondegenerate Gaussian measure N with a mean 0, the following inequality is satisfied:
W2(ν,N ) ≤ √ 2KL(ν|N ), ∀ν ∈ P2(Rd). (45)
By combining Definition 9 and equation 43, we can derive the following expression:
W2(ν, N(0, Σν)) ≤ √(2 KL(ν|N(0, Σν))) ≤ √(dβ λmax(Σν)) < ∞. (46)
According to the triangle inequality for the 2-Wasserstein distance, we obtain:
W2(ν,N (mν ,Σν)) ≤ W2(ν,N (0,Σν)) +W2(N (mν ,Σν),N (0,Σν)) (47)
In Appendix C.3, we showed that the geodesic distance between two Gaussian measures with the same covariance is equal to the Euclidean distance between their means. Therefore, we obtain the following equality:
W2(N(mν, Σν), N(0, Σν)) = W2(ι_{mν}#[N(0, Σν)], N(0, Σν)) = ‖mν − 0‖2 = ‖EνY‖2, (48)
where ιa(X) = X + a for any vector a ∈ supp(q). Now, by adding the two inequalities defined earlier, we can obtain
W2(ν,N (mν ,Σν)) ≤ ‖EνY ‖2 + √ dβλmax(Σν), (49)
where it is easily shown that the upper-bound is only dependent on the statistical structure of ν. Specifically, the term ‖EνY ‖2 represents the center of mass for a density of ν and √ dβλmax(Σν) is related to the covariance structure of ν.
By applying Proposition 8 to both Ftµ and ν, we can easily recover equation 5 as follows:
W2(ν, Ftµ) ≤ ε = W2(ν, N(mν, Σν)) + W2(N(mν, Σν), Ftµ)
 ≤ ( [ ‖EνY‖2 + √(dβ λmax(Σν)) ] ∧ K2(ν) ) + e^{−t}K2(µ)
 ≤ [ √(dβ λmax(Σν)) + ‖EνY‖2 ] ∨ [ e^{−t}K2(µ) + K2(ν) ]. (50)
The second inequality is easily obtained as (a ∧ b) + c ≤ a ∨ (b + c) for any a, b, c ≥ 0, which completes the proof.
Proposition 9. (Concentration inequality for uncertain measures). Assume that there exist constants s? ∈ [1/η, ∞), η ≥ 0, such that the following inequality is satisfied:
E_{Fs?µ}[f²] − (E_{Fs?µ}[f])² ≤ (1 + η) E_{Fs?µ}[∇fᵀA∇f], (51)
for A ∈ Sym+d , D(A,Σν) ≤ aη for some a > 0, and for any metric D defined on Sym + d . In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
Fs?µ( |σ − Eν[σ]| ≥ δ ) ≤ 6 exp(−√2 δ^{3/2} / K2), (52)
where κ denotes the Lipschitz constant of σ.
Proof. Before proceeding with the main proof, we first prove the existence of s?. The limit of the interval with respect to η converges to a singleton {∞} as I = limη→0[ 1η ,∞). In this case, equation 51 is the same as the Poincaré inequality for a Gaussian measure Nν , which can be written as
lim_{η→0} ( E_{Fs?µ}[f²] − (E_{Fs?µ}[f])² ) ≤ lim_{η→0} (1 + η) E_{Fs?µ}[∇fᵀA∇f] = E_{Fs?µ}[∇fᵀΣν∇f]. (53)
While the Poincaré inequality in equation 53 is uniquely defined, we can find at least one value s? satisfying equation 51. Let X(t, w) = Xt(w) denote the stochastic process with respect to qt(x) defined in the proof of Proposition 2. Additionally, let c = Eν [σ]− EFs?µ[σ]. Then, we can obtain the following inequality:
c = Eν[σ] − E_{Fs?µ}[σ] = κ( Eν[σ/κ] − E_{Fs?µ}[σ/κ] ) ≤ κ sup_{g∈Lip1}( Eν g − E_{Fs?µ} g )
 ≤ κ W1(Fs?µ, ν) ≤ κ W2(Fs?µ, ν) ≤ κ K2(µ)/(1 + η). (54)
The first inequality is induced by the assumed κ-Lipschitzness of the function σ, and the second inequality is induced by the Kantorovich–Rubinstein theorem. The third inequality is natural because Wa(·, ·) ≤ Wb(·, ·) for any 1 ≤ a ≤ b < ∞. Because equation 51 is equivalent to the Poincaré inequality for the measure Fs?µ, it satisfies the Bakry–Émery curvature-dimension condition CD(1 + η, ∞); thus, as shown in the proof of Proposition 2 (i.e., equation 39), the last inequality is induced. Additionally, based on the concentration inequality for Fs?µ [Proposition 4.4.2, Bakry et al. (2013)], we can derive the following probability inequality:
Fs?µ[ σ(Xs?(w)) ≥ E_{Fs?µ}[σ] + δ ] ≤ 3 exp(−δ/(√(1 + η) κ)), (55)
where the Poincaré constant for Fs?µ is naturally 1 + η and ‖σ‖Lip = κ. Next, we will derive the desired form from equation 55. First, we introduce the following inequality.
σ(Xs?) ≥ E_{Fs?µ}[σ] + δ ≥ Eν[σ] + δ − κK2/(1 + η). (56)
The last inequality is directly induced by equation 54 because −c ≥ −κK2/(1 + η). Because η, κ, and K2 are constants with respect to w, the following set inclusion is obtained naturally:
S1 = {w : σ(Xs?(w)) ≥ E_{Fs?µ}[σ] + δ} ⊇ {w : σ(Xs?(w)) ≥ Eν[σ] + δ − κK2/(1 + η)} = S2. (57)
To obtain a modified version of the original probability inequality, we evaluate the probability measure Fs?µ[·] on the sets S1 and S2:
3 exp(−δ/(√(1 + η) κ)) ≥ Fs?µ( {w : σ(Xs?(w)) ≥ E_{Fs?µ}[σ] + δ} ) ≥ Fs?µ( {w : σ(Xs?(w)) ≥ Eν[σ] + δ − κK2/(1 + η)} ). (58)
The concentration inequality around Eν [σ] is obtained by combining the inequalities induced by σ and −σ as follows:
(1/2) Fs?µ( ⋃_{h∈{σ,−σ}} {w : h(Xs?(w)) − Eν[h] ≥ ±(δ − κK2/(1 + η))} ) = Fs?µ( {w : |σ(Xs?(w)) − Eν[σ]| ≥ δ − κK2/(1 + η)} ) ≤ 6 exp(−δ/(√(1 + η) κ)). (59)
The inequality in equation 59 is the general form containing the relation between the upper bound of the probability and (η, κ,K2). While this form is quite complicated and highly technical, we choose not to present all the detailed expressions of equation 59 in the main paper. Rather than that, we re-write it in a much simplified form for clarity. Specifically, by setting κK2/(1 + η) = 0.5δ and rescaling δ to 2δ, the aforementioned inequality in equation 59 can be converted into the following simpler form:
Fs?µ( {w : |σ(Xs?(w)) − Eν[σ]| ≥ δ} ) ≤ 6 exp(−√2 δ^{3/2} / (κK2)). (60)
Finally, if we set σ = Softmax, then the Lipschitz constant is induced as κ = 1. This proof is completed by setting s? := T . | 1. What is the focus of the paper regarding learning with noisy or corrupted labels?
2. What is the novel approach proposed by the author to address this challenge?
3. What are the strengths of the paper, particularly in its numerical results?
4. What are the weaknesses of the paper, especially regarding its theoretical analysis and clarity?
5. How does the reviewer suggest improving the paper's clarity and accessibility for a broader audience? | Review | Review
This paper aims to deal with learning from noisy/corrupted labels based on the small-loss criterion. If I understood well, the idea is to consider a new loss function on the Wasserstein space to learn the certain and uncertain data distributions. This loss function is based on a kind of penalty term ensuring that the uncertain labels lie in a Wasserstein ball whose radius is automatically tuned to get the best possible result. This construction crucially relies on the use of the Wasserstein gradient flow associated with Gaussian distributions. To conclude, a series of experiments show that the new methodology proposed in the paper leads to state-of-the-art results.
I think that the idea is nice and I was very impressed by the numerical results reported by the authors. However, it took me a while to just understand the problem and the setting considered by the authors! My second main criticism concerns the theoretical results presented in this paper. While I think that it is important that methods are justified by rigorous results, the ones presented in this paper are just not understandable by most people. Two of the main underlying issues in my opinion are that the notions and objects which are used in the paper are not introduced with much rigor (or even not at all) and the results lack clarity. To be honest, I did not catch half of the sentences of the paper.
I advise the authors to make a in-depth revision of their paper, introducing more carefully their method so it can be understand by a broader audience. One solution in my opinion, is to reduce the theoretical notion and results to their strict minimum. I think that a lot of them are unnecessary for the introduction of the proposed methodology. |
ICLR | Title
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
Abstract
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
N/A
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
1 INTRODUCTION
The successful results of deep neural networks (DNNs) on supervised classification tasks heavily rely on accurate and high-quality label information. However, annotating large-scale datasets is extremely expensive and a time-consuming task. Because obtaining high-quality datasets is very difficult, in most conventional works, training data have been obtained alternatively using crowd-sourcing platforms Yu et al. (2018) to obtain large-scaled datasets, which leads inevitable noisy labels in the annotated samples.
While there are numerous methods that can deal with noisy labeled data, recent methods actively adopt the small loss criterion, which enables to construct classification models that are not susceptible to noise corruption. In this learning scheme, a neural network is trained using easy samples first in the early stages of training. Harder samples are then gradually selected to train mature models as training proceeds. Jiang et al. (2018) suggested collaborative learning models, in which a mentor network delivers the data-driven curriculum loss to a student network. Han et al. (2018); Yu et al. (2019) proposed dual networks to generate gradient information jointly using easy samples and employed this information to allow the networks to teach each other. Wei et al. (2020) adopted a disagreement strategy, which determines the gradient information to update based on disagreement values between dual networks. Han et al. (2020) implemented accumulated gradients to escape optimization processes from over-parameterization and to obtain more generalized results. In this paper, we tackle to solve major issues raised from the aforementioned methods based on the small-loss criterion, as follows.
In comprehensive experiments, the aforementioned methods gain empirical insight regarding network behavior under noisy labels. However, theoretical and quantitative explanation have not been closely investigated. In contrast, we give strong theoretical/empirical explanations to understand the network under noisy labels. In particular, we present an in-depth analysis of small loss criteria in a probabilistic sense. We exploit the stochastic properties of noisy labeled data and develop probabilistic descriptions of data under the small loss criteria, as follows. Let P be a probability measure for the pre-softmax logits of the training samples, l be an objective function for classification, and 1{·} be an indicator function. Then, our central object to deal with is a truncated measure defined as
X ∼ µ|ζ = 1{X;l(X)>ζ}P P[l(X) > ζ] , Y ∼ ξ|ζ = 1{X;l(Y )≤ζ}P P[l(Y ) ≤ ζ] , (1)
where X and Y , which are sampled from µ|ζ and ξ|ζ, denote uncertain and certain samples defined in the pre-softmax feature space1 (i.e.,Rd), respectively. In equation 1, µ and ξ denote the probability measures of uncertain and certain samples, respectively, and ζ is a constant. Most previous works have focused on the usage of Y and the sampling strategy of ζ, but poor generalization capabilities based on the abundance of uncertain samples X has not been thoroughly investigated, even though these samples potentially contain important information. To understand the effect of noisy labels on the generalized bounds, we provide the concentration inequality of uncertain measure µ, which renders the probabilistic relation between µ and ξ and learnability of the network under noisy labels.
While most conventional methods Han et al. (2018); Wei et al. (2020); Li et al. (2019a); Yu et al. (2019) require additional dual networks to guide misinformed noisy samples, the scalability is not guaranteed due to the existence of dual architectures, which have the same number of parameters as the base network. To alleviate this problem, we build a statistical machinery, which should be fully non-parametric, simple to implement, and computationally efficient to reduce the computational complexity of conventional approaches, while maintaining the concept of small-loss criterion. Based on the empirical observation of ill-behaved certain/uncertain samples, we propose the gradient flow in the Wasserstein space, which can be induced by simulating non-parametric stochastic differential equation (SDE) with respect to the Ornstein-Ulenbeck type to control the ill-behaved dynamics. The reason for selecting these dynamics will be thoroughly discussed in the following sections.
Thus, key contributions of our work are as follows.
• We theoretically verified that there exists a strong correlation between model confidence and statistical distance between X and Y . We empirically investigate that the classification accuracy worsens when the upper-bound of 2-Wasserstein distance W2(µ, ξ) ≤ ε (i.e., distributional distance between certain and uncertain samples) drastically increase. Due to the empirical nature of upper-bound ε, it can be used as an estimator to determine if a network suffers from over-parameterization.
• Based on empirical observations, we develop a simple, non-parametric, and computationally efficient stochastic model to control the observed ill-behaved sample dynamics. As a primal object, we propose the stochastic dynamics of gradient flow (i.e.,, Ornstein-Ulenbeck process) to simulate simple/non-parametric stochastic differential equation. Thus, our method do not require any additional learning parameters.
• We provide important theoretical results. First, the controllable upper-bound ε with the inverse exponential ratio is induced, which indicates that our method can efficiently control the diverging effect of Wasserstein distance. Second, the concentration inequality of transported uncertain measure is presented, which clearly renders the probabilistic relation between µ and ξ.
2 RELATED WORK
Curriculum Learning & Small-loss Criterion. To handle noisy labels, Han et al. (2018); Yu et al. (2019); Jiang et al. (2018); Wei et al. (2020); Lyu & Tsang (2020a); Han et al. (2020) adopted curriculum learning or sample selection frameworks. However, these methods only consider a small number of selected samples, where large portion of samples are excluded at the end of the training. This inevitably leads to poor generalization capabilities. However, this conflicts with sample selection methods because a large portion of training samples are gradually eliminated. By contrast, our method can extract useful information from unselected samples X ∼ µ (i.e., uncertain samples) and enhance these samples (e.g., X ′ ∼ Fµ) for more accurate classification. Chen et al. (2019) iteratively apply cross-validation to randomly partitioned noisy labeled data to identify most samples that have correct labels. To generate such partitions, they adopt small-loss criterion for selecting samples.
Loss Correction & Label Correction. Patrini et al. (2017a); Hendrycks et al. (2018); Ren et al. (2018) either explicitly or implicitly transformed noisy labels into clean labels by correcting classification losses. Unlike these methods, our method transforms the holistic information from uncertain samples into certain samples, which implicitly reduces the effects of potentially noisy labels. While correction of label noisy by modifying the loss-dynamics do not perform well under extreme noise environments, Arazo et al. (2019) adopt label augmentation method called MixUp Zhang et al. (2018).
1Due to the technical difficulties, we define our central objects on pre-softmax space rather than label space, i.e., the space of σ(X), σ(Y ), where σ indicates softmax function. Please refer to Appendix for more details.
Distillation. Li et al. (2019b) updated mean teacher parameters by calculating the exponential moving average of student parameters to mitigate the impact of gradients induced by noisy labels. Lukasik et al. (2020) deeply investigated the effects of label smearing for noisy labels and linked label smoothing to loss correction in a distillation framework. Similar to these methods, our method leverages the useful properties of distillation models. We set ν as a pivot measure, which guides our normalization functional Fµ for uncertain measures. This is similar to self-distillation because uncertain training samples are forced to be normalized to those of past states.
Other methods. Lee et al. (2019) induced a robust generative classifier based on pre-trained deep models. Similar to our method, Damodaran et al. (2019) designed a constraint on the Wasserstein space and adopted an adversarial framework for classification models of noisy labeled data by implementing semantic Wasserstein distance. Pleiss et al. (2020) identify noisy labeled samples by considering AUM statistics which exploits differences in training dynamics of clean and mislabeled samples. In most recent work, Li et al. (2019a) adopts semi-supervised learning (SSL) methods to deal with noisy labels where the student network utilizes both labeled/unlabeled samples to perform semi-supervised learning guided by the other teacher network.
3 DISTRIBUTIONAL NORMALIZATION
Because our main target object is a probability measure (distribution), we first define an objective function in a distributional sense. Let l be cross entropy and r̂ be a corrupted label random vector for an unknown label transition matrix from a clean label r which is independent of X , with label transition matrix Q. Then, a conventional objective function for classification with noisy labels can be defined as follows:
min µ J [µ] = min µ EX∼µ,r̂|Q [l(X; r̂)] . (2)
However, due to the significant changes in label information, the conventional objective function defined in equation 2 cannot be used for accurate classification. Instead of directly using uncertain samples X ∼ µ as in previous works, we normalize µ in the form of a metric ball and present a holistic constraint. For a clear mathematical description, we first introduce the following definition. Definition 1. (Wasserstein ambiguity set) Let P2(Rd) = {µ : Eµd2E(x0, x) <∞,∀x0 ∈ Rd} be a 2-Wasserstein space, where d denotes the number of classes, dE is Euclidean distance defined on Rd. Then, we define a Wasserstein ambiguity set (i.e., metric ball) in this space as follows:
BW2(ν, ε) = { µ ∈ P2 ( Rd ) :W2(µ, ν) ≤ ε } , (3)
whereW2 denotes the 2-Wasserstein distance and ν is the pivot measure. Then, we propose a new objective function by imposing geometric constraints on µ as follows:
min Fµ∈BW2 (ν,ε),ξ J [Fµ] + J [ξ] = min θ EX∼Fµθ,r̂[l(X; r̂)] + EX∼ξθ,r̂[l(Y ; r̂)], (4)
where F : P2(Rd)→ P2(Rd) is a functional for probability measures, which assures the constraint on Fµ (i.e., Fµ ∈ BW2(ν, ε)) and our main objective. The right-hand side of equation equation 4 is equivalent vectorial form of distributional form in left-hand side. While our main objects are defined on pre-softmax, both probability measures µθ and ξθ is parameterized by neural network with parameters θ. This newly proposed objective function uses the geometrically enhanced version of an uncertain measure Fµ with a certain measure ξ. In equation 4, probability measure ν is defined as follows: ν = arg minJ [ξk? ], where ξk denotes a certain measure at the current k-th iteration and k? ∈ Ik−1 = {1, · · · , k − 1}. In other words, our method finds the best probability measure that represents all certain samples so far at training time, where the uncertain measures are transported to be lying in the Wasserstein ball centered on ν. In equation 4, the Wasserstein constraint on Fµ enforces uncertain measures statistically resemble ν from a geometric perspective (i.e.,W2(ν,Fµ) ≤ ε). Now, an important question naturally stems from the aforementioned analysis: how can we select the optimal radius ε? Clearly, finding an F that induces a small ε ≈ 0 is suboptimal because Fµ ≈ ν and using objective function J [Fµ ≈ ν] can lead to the following critical problem. As the optimization process proceeds, enhanced uncertain samples X ′ ∼ Fµ contribute less and less, because it is statistically identical to ν, meaning our objective in equation 4 would receive little benefits from these transported uncertain samples. By contrast, if we adopt a large radius for ε, enhanced uncertain samples will be statistically and geometrically unrelated to ν, which causes the normalized measure Fµ to yield large losses and violates our objective.
To overcome the two problems above and select the radius, we take a detour through a Gaussian measure, splitting the path between ν and $\mathcal{F}\mu$ (i.e., $\nu \to \mathcal{N}(m_\nu, \Sigma_\nu) \to \mathcal{F}\mu$) rather than directly following the geodesic between ν and $\mathcal{F}\mu$ (i.e., $\nu \to \mathcal{F}\mu$). Specifically, we decompose the original constraint in equation 4 into two terms using the triangle inequality of the Wasserstein distance:
$$W_2(\nu, \mathcal{F}\mu) \le \varepsilon = \underbrace{W_2\big(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)\big)}_{d_1:\ \text{Intrinsic statistics}} + \underbrace{W_2\big(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{F}\mu\big)}_{d_2:\ \text{Wasserstein Normalization}}. \qquad (5)$$
The first term, intrinsic statistics, sets the detour point as a Gaussian measure whose mean and covariance are those of ν (i.e., $m_\nu = \mathbb{E}_{Y\sim\nu}[Y]$ and $\Sigma_\nu = \mathrm{Cov}_{Y\sim\nu}[Y]$). The Wasserstein upper bound of this term depends only on the statistical structure of ν, because $(m_\nu, \Sigma_\nu)$ is determined by ν. Thus, this term induces a data-dependent, non-zero constant upper bound whenever $\nu \neq \mathcal{N}$ and prevents the upper bound from collapsing to ε → 0, regardless of $\mathcal{F}$. This is a significant advantage when dealing with ε, because the first term can be treated as a fixed constant during training. The second term, Wasserstein normalization, represents our central objective: $\mathcal{F}$ facilitates geometric manipulation in the Wasserstein space and prevents the uncertain measure µ from diverging, as µ is normalized onto the Wasserstein ambiguity set $B_{W_2}(\nu, \varepsilon)$ in Fig. 1. The theoretical and numerical advantages of choosing a Gaussian detour measure are explained in the following section.
3.1 WASSERSTEIN NORMALIZATION
In the previous section, we presented a novel objective function that imposes a geometric constraint on µ such that the transformed measure $\mathcal{F}\mu$ lies in $B_{W_2}(\nu, \varepsilon)$ for ν. Now, we specify $\mathcal{F}$ and relate it to the Gaussian measure (more generally, a Gibbs measure). For simplicity, we denote $\mathcal{N}_\nu = \mathcal{N}(m_\nu, \Sigma_\nu)$.
Proposition 1. $\mathcal{F}: \mathbb{R}^+ \times \mathcal{P}_2 \to \mathcal{P}_2$ is a functional on probability measures such that $\mathcal{F}[t, \mu] = \mu_t$, where $d\mu_t = p_t\, d\mathcal{N}_\nu$, $d\mathcal{N}_\nu = q_t\, dx$, and $\mu_t$ is a solution to the following continuity equation:
$$\partial_t \mu_t = \nabla \cdot (\mu_t v_t), \qquad (6)$$
which reads $\partial_t p(t,x) = \nabla \cdot \big(p(t,x)\, \nabla \log q(t,x)\big)$ in the distributional sense. Then, the uniquely defined functional $\mathcal{F}_t[\cdot] = \mathcal{F}[t, \cdot]$ normalizes µ onto $B_{W_2}(\mathcal{N}_\nu, e^{-t} K_2(\mu))$, where $K_2(\mu) > 0$ is a constant that depends on µ.
It is well known that the solution to equation 6 induces a geodesic in the 2-Wasserstein space (Villani (2008)), which is the shortest path from $\mu = \mu_{t=0}$ to $\mathcal{N}_\nu$. The functional $\mathcal{F}_t$ generates a path for $\mu_t$ along which the distance decays exponentially in the auxiliary variable t with constant $K_2$, meaning $W_2(\mathcal{N}_\nu, \mathcal{F}_t\mu) \le K_2 e^{-t}$. This theoretical result indicates that the Wasserstein distance in the second term of equation 5 can be reduced and controlled at an exponential rate. Thus, by choosing t, our method can efficiently control the diverging distance in equation 5. Unfortunately, it is typically intractable to solve the partial differential equation (PDE) in equation 6 directly.
Algorithm 1 Wasserstein Distributional Normalization
Require: α ∈ [0, 0.2], ρ ∈ [0.1, 0.65], T = 64, ∆t = 10⁻⁴, τ = 0.001
for k = 1 to K (the total number of training iterations) do
  1) Select uncertain (1 − ρ)N and certain ρN samples from the mini-batch of size N:
     {Y_k^n}_{n ≤ ρN} ∼ ξ_k,  {X_k^n}_{n ≤ (1−ρ)N} ∼ µ_k.
  2) Update the most certain measure ν:
     if J[ξ_k] < J[ν] then ν ← ξ_k, m_ν ← E[Y_k], Σ_ν ← Cov[Y_k] end if
  3) Update the moving geodesic average N(m_α, Σ_α):
     solve the Riccati equation T Σ_ν T = Σ_{ξ_k};
     set Σ_α = ((1 − α)I_d + αT) Σ_ν ((1 − α)I_d + αT) and m_α = (1 − α)m_ν + α m_{ξ_k}.
  4) Simulate the discrete SDE for T steps:
     for t = 0 to T − 1 do
       X_{k,t+1}^n = X_{k,t}^n − ∇φ(X_{k,t}^n; m_α)∆t + √(2τ⁻¹) Σ_α dW_t^n,  with {X_{k,t=0}^n} ∼ µ_k and {X_{k,t=T}^n} ∼ F_T µ_k
     end for
  5) Update the network with the objective function:
     J[F µ_k] + J[ξ_k] = E_{F_T µ_k}[l(X_{k,T}; r̂)] + E_{ξ_k}[l(Y_k; r̂)].
end for
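For readers who prefer code to pseudocode, the following minimal sketch illustrates steps 1 and 2 of Algorithm 1: the small-loss split of a mini-batch into certain and uncertain samples and the update of the pivot measure ν. The per-sample cross-entropy computation, the NumPy representation of the Gaussian statistics, and all variable names are our own assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def small_loss_split(logits, noisy_labels, rho):
    """Split a mini-batch into certain (small-loss) and uncertain (large-loss) samples."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    losses = -log_probs[np.arange(len(noisy_labels)), noisy_labels]   # l(x; r_hat) per sample
    order = np.argsort(losses)                                        # ascending: smallest loss first
    n_certain = int(rho * len(losses))
    return order[:n_certain], order[n_certain:]                       # certain_idx, uncertain_idx

def update_pivot(certain_logits, certain_loss, state):
    """Keep the certain batch with the lowest loss seen so far as the pivot measure nu."""
    if certain_loss < state["best_loss"]:
        state["best_loss"] = certain_loss
        state["pivot_mean"] = certain_logits.mean(axis=0)             # m_nu
        state["pivot_cov"] = np.cov(certain_logits, rowvar=False)     # Sigma_nu
    return state

# toy usage
state = {"best_loss": np.inf, "pivot_mean": None, "pivot_cov": None}
logits = np.random.randn(128, 10)
labels = np.random.randint(0, 10, size=128)
cert_idx, unc_idx = small_loss_split(logits, labels, rho=0.5)
state = update_pivot(logits[cert_idx], certain_loss=2.1, state=state)
```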
To solve this problem, we adopt particle-based stochastic dynamics, which enable tractable computation. There exists a unique iterative form corresponding to the PDE in equation 6, the multi-dimensional Ornstein-Uhlenbeck process, which can be approximated using particle-based dynamics. In particular, we draw N(1 − ρ) uncertain samples from a single batch of N samples using equation 1, for a hyper-parameter 0 ≤ ρ ≤ 1. We then simulate a discrete stochastic differential equation (SDE) for each particle using the Euler-Maruyama scheme as follows:
$$X_{t+1}^n = X_t^n - \nabla\phi(X_t^n; m_\nu)\,\Delta_t + \sqrt{2\tau^{-1}\Delta_t}\,\Sigma\, Z_I^n, \qquad (7)$$
where $\phi(X_t; m_\nu) = \frac{\tau}{2} d_E^2(X_t, m_\nu)$, $n \in \{1, \cdots, N(1-\rho)\}$, $d_E$ is the Euclidean distance, and N is the mini-batch size. We selected the OU process as our stochastic dynamics for the following reasons. First, we want a computationally efficient, non-parametric method to estimate and minimize the second term of equation 5. The SDE in equation 7 corresponding to the OU process has a simple form with fixed drift and diffusion terms that are invariant over time, which allows a non-parametric implementation of the SDE simulation. Because simulating equation 7 amounts to a simple non-parametric loop, our method is computationally very efficient compared to baseline methods such as Han et al. (2018). Second, when estimating the empirical upper bound of the Wasserstein distance, the OU process admits an explicit expression, Mehler's formula, which can be estimated efficiently (please refer to the Appendix for details). The overall procedure of our method is summarized in Algorithm 1.
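A minimal NumPy sketch of the discrete OU update in equation 7 is given below. It takes the drift to be the gradient of φ(x; m) = (τ/2)·d_E²(x, m), i.e. τ(x − m), and shapes the noise with the detour covariance as in step 4 of Algorithm 1; the step size, temperature, and our use of the matrix square root of Σ for the correlated noise are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def ou_normalize(X0, m, Sigma, n_steps=64, dt=1e-4, tau=1e-3, rng=None):
    """Euler-Maruyama simulation of the OU-type SDE for uncertain pre-softmax samples.

    X0:    (n, d) uncertain samples X ~ mu
    m:     (d,)   mean of the detour Gaussian (m_nu, or the moving-average m_alpha)
    Sigma: (d, d) covariance of the detour Gaussian
    Returns the transported samples, read as X_T ~ F_T mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = X0.copy()
    root_Sigma = np.real(sqrtm(Sigma))            # our reading of the Sigma-shaped noise
    for _ in range(n_steps):
        drift = tau * (X - m)                     # grad of phi(x; m) = (tau/2) * ||x - m||^2
        noise = rng.standard_normal(X.shape) @ root_Sigma.T
        X = X - drift * dt + np.sqrt(2.0 * dt / tau) * noise
    return X
```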
3.2 WASSERSTEIN MOVING GEODESIC AVERAGE
In our experiments, we observe that the best measure ν is not updated for a few epochs after training begins. This is problematic because ν diverges significantly from the current certain measure $\xi_k$; equivalently, the normalized measure $\mathcal{F}\mu_k$ diverges from $\xi_k$, so $X_T$ and Y become increasingly statistically inconsistent. To alleviate this statistical distortion, we modify the detour measure from $\mathcal{N}_\nu$ to another Gaussian measure that captures the statistics of both $\xi_k$ and ν. Inspired by the moving average of Gaussian parameters in batch normalization (Ioffe & Szegedy (2015)), we propose the Wasserstein moving geodesic average. Specifically, we replace the Gaussian parameters $\{m_\nu, \Sigma_\nu\}$ with $\{m_\alpha, \Sigma_\alpha\}$ such that $m_\alpha = (1-\alpha)m_\nu + \alpha m_{\xi_k}$ and $\Sigma_\alpha = ((1-\alpha)I_d + \alpha\mathcal{T})\, \Sigma_\nu\, ((1-\alpha)I_d + \alpha\mathcal{T})$, where $\mathcal{T}$ is a solution to the Riccati equation $\mathcal{T}\Sigma_\nu\mathcal{T} = \Sigma_{\xi_k}$. Our final detour Gaussian measure is therefore set to $\mathcal{N}_\nu^\alpha := \mathcal{N}(m(\alpha), \Sigma(\alpha))$ with $0 \le \alpha \le 1$.²
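The Riccati equation T Σ_ν T = Σ_ξ admits the closed-form symmetric positive-definite solution T = Σ_ν^{-1/2}(Σ_ν^{1/2} Σ_ξ Σ_ν^{1/2})^{1/2} Σ_ν^{-1/2}, so the moving-geodesic parameters (m_α, Σ_α) can be computed directly. The sketch below does this under the assumption of non-degenerate covariances; the small ridge term is our own numerical-stability addition.

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein_moving_geodesic(m_nu, S_nu, m_xi, S_xi, alpha=0.2, ridge=1e-6):
    """Interpolate between N(m_nu, S_nu) and N(m_xi, S_xi) along the W2 geodesic."""
    d = S_nu.shape[0]
    S_nu = S_nu + ridge * np.eye(d)               # keep covariances non-degenerate
    S_xi = S_xi + ridge * np.eye(d)
    root = np.real(sqrtm(S_nu))
    root_inv = np.linalg.inv(root)
    T = root_inv @ np.real(sqrtm(root @ S_xi @ root)) @ root_inv   # T S_nu T = S_xi
    A = (1.0 - alpha) * np.eye(d) + alpha * T
    m_alpha = (1.0 - alpha) * m_nu + alpha * m_xi
    S_alpha = A @ S_nu @ A
    return m_alpha, S_alpha
```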
4 THEORETICAL ANALYSIS
In equation 5, we select the detour point as a Gaussian measure because this measure can provide a statistical structure, which is similar to that of the optimal ν. In addition to this heuristic motivation, setting a detour point as a Gaussian measure (Gibbs measure) also provides theoretical advantages, e.g., the theoretical upper bound of the Wasserstein constraint terms. In this section, we investigate the explicit upper bounds of two terms in equation 5, which are naturally induced by the SDE.
2Please refer to Appendix C.4 for more details.
Proposition 2. A scalar 0 < β < ∞ exists, depending on ν, such that the following inequality holds:
$$W_2(\nu, \mathcal{F}_t\mu) \le \varepsilon = K_1(\nu) \vee \big[ e^{-t}K_2(\mu) + K_2(\nu) \big], \qquad (8)$$
where $\lambda_{\max}(\Sigma_\nu)$ denotes the maximum eigenvalue of the covariance matrix $\Sigma_\nu$, and for some constant $0 < K_1 < \infty$ we have $K_1(\nu) = \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2$, which depends only on ν.
Intuitively, $K_2(\mu)$ can be interpreted as an indicator of how diffuse the uncertain measure µ is, whereas the term $e^{-t}K_2(\mu)$ controls the upper bound of the Wasserstein distance through the variable t. The other term, $K_2(\nu)$, does not vanish even for very large t, which ensures a non-collapsing upper bound ε.
Proposition 3. (Concentration inequality for the normalized uncertain measure.) Assume that there exist constants $T \in [\tfrac{1}{\eta}, \infty)$, $\eta \ge 0$ such that the following inequality holds:
$$\mathbb{E}_{\mathcal{F}_T\mu}[f^2] - \big[\mathbb{E}_{\mathcal{F}_T\mu}[f]\big]^2 \le (1+\eta)\, \mathbb{E}_{\mathcal{F}_T\mu}\big[A\nabla f^T \nabla f\big], \quad f \in C_0^\infty(\mathbb{R}^d), \qquad (9)$$
for $A \in \mathrm{Sym}_d^+$ and $D(A, \Sigma_\nu) \le a\eta$ for some $a > 0$, with any metric D defined on $\mathrm{Sym}_d^+$. In this case, there is a δ such that the following probability inequality for the uncertain measure is induced:
$$\mathcal{F}_T\mu\big( |\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta \big) \le 6 e^{-\frac{\sqrt{2}\,\delta^{3/2}}{K_2(\mu)}}, \qquad (10)$$
where σ denotes a soft-max function.
In equation 10, we show that the label information induced by the normalized uncertain measure is close to that of the most certain measure, $\mathbb{E}_\nu[\sigma]$, where the upper bound depends exponentially on the initial diffuseness of µ (i.e., $K_2(\mu)$). Because the upper bound of the probability inequality does not collapse to zero and $\mathcal{F}_T\mu$ is concentrated around the most certain labels (i.e., $\mathbb{E}_\nu[\sigma]$), the uncertain samples $X_T \sim \mathcal{F}_T\mu$ help our method avoid over-parameterization.
4.1 EMPIRICAL UNDERSTANDINGS
We investigate the theoretical upper bound of the Wasserstein ambiguity (i.e., radius of the Wasserstein ball) for Fµ and its corresponding probability inequality. To provide more in-depth insights into the proposed method, we approximate the upper bound and demonstrate that our Wasserstein normalization actually makes neural networks more robust to label noise.
As we verified previously, according to Proposition 2, the following inequality holds:
$$W_2(\mathcal{F}_t\mu, \nu) \le \varepsilon = K_1(\nu) \vee \big(K_2(\nu) + K_2(\mathcal{F}_t\mu)\big). \qquad (11)$$
Because the first term $K_1(\nu)$ is constant, depends only on ν, and is generally small compared to the second term for $t \le T$, we only examine the behavior of the second term $K_2(\nu) + K_2(\mathcal{F}_t\mu)$, which can be efficiently approximated in a simple form. Because our detour measure is Gaussian, the following inequality holds for any $h \in C_0^\infty(\mathbb{R}^d)$³:
$$\hat{K}_2(\mu) = \lim_{s\to 0} \frac{1}{s}\, \mathbb{E}_{X\sim\mu,\, Z\sim\mathcal{N}_I}\Big[ h\big( e^{-s}X + \sqrt{1-e^{-2s}}\,(\Sigma_\nu^{\frac{1}{2}}Z + m_\nu) \big) - h(X) \Big] \le K_2(\mu), \qquad (12)$$
where equality holds if h is selected to attain the supremum over the set $C_0^\infty$. For the approximation, we simply take $h(X) = \|X\|^2$ as a test function. In this case, the following inequality naturally holds: $\hat{\varepsilon} = \hat{K}_2(\nu) + \hat{K}_2(\mathcal{F}\mu) \le K_2(\nu) + K_2(\mathcal{F}\mu) \le K_1(\nu) \vee (K_2(\nu) + K_2(\mathcal{F}\mu)) = \varepsilon$. Thus, $\hat{\varepsilon}$ can be considered an approximation of the theoretical upper bound ε suggested in Proposition 2. Subsequently, we investigate the effects of Wasserstein normalization based on $\hat{K}_2(\mu)$ in equation 12.
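Equation 12 can be approximated by a finite difference of the Mehler-type formula at a small s. The Monte Carlo sketch below does this with the test function h(X) = ||X||², the choice used in this section; the step size s and the single-sample perturbation scheme are our own illustrative assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def k2_hat(X, m_nu, S_nu, s=1e-3, rng=None):
    """Finite-difference Monte Carlo estimate of K2_hat in equation 12 with h(X) = ||X||^2.

    X: (n, d) samples from the measure whose diffuseness is estimated (mu, F_t mu, or nu).
    """
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal(X.shape)
    shifted = Z @ np.real(sqrtm(S_nu)).T + m_nu                 # Sigma_nu^{1/2} Z + m_nu
    perturbed = np.exp(-s) * X + np.sqrt(1.0 - np.exp(-2.0 * s)) * shifted
    h = lambda Y: np.sum(Y ** 2, axis=1)                        # test function h(X) = ||X||^2
    return np.mean(h(perturbed) - h(X)) / s
```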
(1) The proposed WDN ensures that the Wasserstein ambiguity is bounded. We examine the relation between ε̂ and test accuracy in an experiment using the CIFAR-10 dataset with symmetric noise at a ratio of 0.5. Fig.2 presents the landscape for the log10-scaled cumulative average of ε̂ and test accuracy over epochs. The red dotted lines represent the landscape of the vanilla network with cross-entropy loss, where ε̂k = K̂2(νk)+K̂2(Ft=0µk) and k is the epoch index. In this case, the time constant t is set to zero, because Wasserstein normalization is not employed for the vanilla network. The black lines indicate the landscape of the proposed method, where ε̂k = K̂2(νk) + K̂2(Ft=Tµk)
3Please refer to Appendix C.2 for additional details.
in this case. It is noteworthy that the test accuracy of the vanilla network begins to decrease after 13 epochs (red dotted vertical lines in the top-right plot), whereas the Wasserstein ambiguity (i.e., the upper bound of the Wasserstein distance) increases quadratically in the top-left plot. These experimental results verify that, without any constraint, the distance between the uncertain measure and the most certain measure (i.e., ν) becomes large in the 2-Wasserstein space for vanilla networks. They also indicate a definite relationship between Wasserstein ambiguity and test accuracy. In the proposed WDN, the Wasserstein ambiguity can be efficiently bounded (i.e., $\limsup_k \hat\varepsilon_k \approx 2.15$) while the test accuracy continues to increase, even after 13 epochs. For a detailed analysis, we compute the deviation of the empirical upper bound as $\hat\Delta_k = \hat\varepsilon_k - \hat\varepsilon_{k-1}$. In the gray regions, the deviation for the vanilla network is greater than $2.5\times10^{-2}$, i.e., $\hat\Delta_k > 2.5\times10^{-2}$, and its test accuracy begins to drop, as shown in Fig. 2. In contrast to the vanilla network, the maximum deviation of the proposed WDN is bounded above by a very small value ($\sup_k \hat\Delta_k \le 8\times10^{-3}$).
(2) The proposed WDN helps networks escape from over-parameterization. To analyze the behavior of deep neural networks under over-parameterization with and without the proposed WDN, we design several delayed variants of the WDN, which begin at later epochs. The green, orange, and blue curves in the second row of Fig. 2 represent the landscapes when our WDN is applied after $k_d \in \{10, 15, 20\}$ epochs, respectively. In this experiment, the upper bound $\hat\varepsilon_k$ is defined as
$$\hat{\varepsilon}_k = \begin{cases} \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=0}\mu_k), & \text{if } k < k_d, \\ \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=T}\mu_k), & \text{if } k \ge k_d. \end{cases} \qquad (13)$$
Consider $k_d = 20$, which is represented by the blue dotted vertical lines. Before our WDN is applied (i.e., $k < k_d$), the network suffers from over-parameterization, which induces a significant performance drop, as indicated by the blue curve in the bottom-right plot. However, the network rapidly recovers to normal accuracy once Wasserstein normalization is applied (i.e., $k \ge k_d$). Please note that similar behavior can be observed in the green and orange curves. In particular, the orange curve produces fewer fluctuations than the blue curve in terms of test accuracy. This indicates that the proposed WDN can help a network escape from over-parameterization by imposing the proposed geometric constraints in the Wasserstein space.
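Reproducing the diagnostics in Fig. 2 only requires the per-epoch values K̂₂(ν_k) and K̂₂(F_t µ_k). The short sketch below accumulates the cumulative average of ε̂_k and the deviation Δ̂_k = ε̂_k − ε̂_{k−1} used to flag the onset of over-parameterization; the 2.5×10⁻² threshold is taken from the text, while the bookkeeping itself is an assumed choice.

```python
def epsilon_monitor(k2_nu_per_epoch, k2_fmu_per_epoch, threshold=2.5e-2):
    """Track the empirical Wasserstein ambiguity and its deviation across epochs."""
    eps_hist, flags = [], []
    for k2_nu, k2_fmu in zip(k2_nu_per_epoch, k2_fmu_per_epoch):
        eps_hist.append(k2_nu + k2_fmu)                       # eps_hat_k
        delta = eps_hist[-1] - eps_hist[-2] if len(eps_hist) > 1 else 0.0
        flags.append(delta > threshold)                       # large jump => possible overfitting
    cum_avg = [sum(eps_hist[:i + 1]) / (i + 1) for i in range(len(eps_hist))]
    return cum_avg, flags
```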
(3) The proposed WDN derives data-dependent bounds according to different noise levels. Another interesting point in Fig. 2 is that all curves, excluding the red curve, converge to specific values $2.15 = \underline{\varepsilon} := \liminf_k \hat\varepsilon_k \le \limsup_k \hat\varepsilon_k =: \bar{\varepsilon} = 2.2$. The upper bound $\bar\varepsilon$ is neither overly enlarged nor collapsed to zero, while the lower bound $\underline\varepsilon$ is the same for all curves. We argue that this behavior stems from the geometric characteristics of the proposed method, where the first term in equation 5, namely $W_2(\nu, \mathcal{N}_\nu) \propto \hat K_2(\nu)$, is a non-zero data-dependent term that is minimized by the proposed geometric constraint. Therefore, we can derive the following relationship:
$$\big[\,W_2(\nu,\mathcal{F}\mu) \le W_2(\nu,\mathcal{N}_\nu) + W_2(\mathcal{N}_\nu,\mathcal{F}\mu)\,\big]\!\Downarrow\ \propto\ \big[\,\hat K_2(\nu) + \hat K_2(\mathcal{F}\mu) = \hat\varepsilon\,\big]\!\Downarrow. \qquad (14)$$
This empirical observation verifies that a detour point set as a Gaussian measure can induce the data-dependent bound $(\underline\varepsilon, \bar\varepsilon)$, where this bound can vary according to different noise levels and efficiently leverages data-dependent statistics. Fig. 2 indicates that classification models with more stable data-dependent bounds also exhibit more stable convergence in test accuracy.
5 EXPERIMENTS
5.1 EXPERIMENTS ON THE CIFAR-10/100 DATASET
We used settings similar to those proposed by Laine & Aila (2016); Han et al. (2018) for our experiments on the CIFAR-10/100 datasets. We used a 9-layered CNN as a baseline architecture with a batch size of 128. We used the Adam optimizer with (β1, β2) = (0.9, 0.99), where the learning rate linearly decreased from 10⁻³ to 10⁻⁵.
Synthetic Noise. We injected label noise into clean datasets using a noise transition matrix $Q_{i,j} = \Pr(\hat r = j \mid r = i)$, where a noisy label r̂ is obtained from a true clean label r. We defined $Q_{i,j}$ following Han et al. (2018). For symmetric noise, we used the polynomial ρ = −1.11r² + 1.78r + 0.04 for 0.2 ≤ r ≤ 0.65, where r is the noise ratio. For asymmetric noise, we set ρ to 0.35. To select the enhanced detour measure, we set α to 0.2 for the Wasserstein moving geodesic average in all experiments. We trained our classification model for 500 epochs because the test accuracy of our method continued to increase, whereas those of the other methods did not. We compared our method with other state-of-the-art methods, including [MentorNet, Jiang et al. (2018)], [Co-teaching, Han et al. (2018)], [Co-teaching+, Yu et al. (2019)], [GCE, Zhang & Sabuncu (2018)], [RoG, Lee et al. (2019)], [JoCoR, Wei et al. (2020)], [NPCL, Lyu & Tsang (2020b)], [SIGUA, Han et al. (2020)], and [DivideMix, Li et al. (2019a)]. As shown in Table 1, the proposed WDN significantly outperformed the other baseline methods. Please note that our WDN uses a simple Gaussian measure as the target pivot measure; thus, there are potential risks when handling highly concentrated and non-smooth types of noise (e.g., asymmetric noise). Nevertheless, the proposed WDN still produced accurate results, even with asymmetric noise. In this case, a variant of our WDN (i.e., WDNcot) exhibited the best performance.
Open-set Noise. In this experiment, we considered the open-set noisy scenario suggested by Wang et al. (2018), where a large number of training images were sampled from the CIFAR-100 dataset; however, these images were still labeled according to the classes in the CIFAR-10 dataset. We used the same 9-layered CNN as in our previous experiment. For hyper-parameters, we set ρ and α to 0.5 and 0.2, respectively. As shown in Table 2, our method achieved state-of-the-art accuracy.
Collaboration with Other Methods. Because our core methodology is based on the small-loss criterion, our method can collaborate with co-teaching methods. In Han et al. (2018), only certain samples (Y ∼ ξ) were used for updating colleague networks, where the number of uncertain samples gradually decreased until it reached a predetermined portion. To enhance potentially bad statistics for co-teaching, we taught the dual networks using the combined set of samples (Y, XT), where XT ∼ FTµ are uncertain samples enhanced using equation 7.
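For completeness, a hedged sketch of the symmetric-noise injection described above is shown below: each clean label is kept with probability 1 − r and flipped uniformly to one of the other classes otherwise, which is one common construction of the transition matrix Q. The asymmetric (pair-flip) variant and the exact schedule for ρ are omitted, and the helper names are our own.

```python
import numpy as np

def symmetric_noise_matrix(num_classes, r):
    """Transition matrix Q with Q[i, i] = 1 - r and off-diagonal mass r spread uniformly."""
    Q = np.full((num_classes, num_classes), r / (num_classes - 1))
    np.fill_diagonal(Q, 1.0 - r)
    return Q

def corrupt_labels(clean_labels, Q, rng=None):
    """Sample noisy labels r_hat ~ Q[r, :] for each clean label r."""
    rng = np.random.default_rng() if rng is None else rng
    return np.array([rng.choice(len(Q), p=Q[y]) for y in clean_labels])

Q = symmetric_noise_matrix(num_classes=10, r=0.5)
noisy = corrupt_labels(np.random.randint(0, 10, size=50_000), Q)
```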
Table 1 shows the test accuracy results for the proposed collaboration model with a co-teaching network (WDNcot). This collaboration model achieved the most accurate performance for the CIFAR100 dataset with asymmetric noise, which verifies that our WDN can be integrated into existing methods to improve their performance significantly, particularly when the density of pre-logits is highly-concentrated. Fig.3 reveals that co-teaching quickly falls into over-parameterization and induces drastic drop in accuracy after the 15th-epoch. WDNcot also exhibits a slight accuracy drop. However, it surpassed the baseline co-teaching method by a large margin (+7%) during training. This demonstrates that our enhanced samples XT can alleviate the over-parameterization issues faced by conventional co-teaching models, which helps improve their accuracy significantly.
5.2 EXPERIMENTS ON A REAL-WORLD DATASET
To evaluate our method on real-world datasets, we employed the Clothing1M dataset presented by Xiao et al. (2015), which consists of 1M noisy, labeled, and large-scale cloth images with 14 classes collected from shopping websites. It contains 50K, 10K, and 14K clean images for training, testing, and validation, respectively. We only used the noisy set for training; for testing, we used the clean set. We set α = 0.2 and ρ = 0.1. For a fair comparison, we followed the settings suggested in previous works. We used a pre-trained ResNet50 as the baseline architecture with a batch size of 48. For the pre-processing steps, we applied a random center crop, random flipping, and normalization to 224 × 224 pixels. We adopted the Adam optimizer with a learning rate starting at 10⁻⁵ that linearly decayed to 5 × 10⁻⁶ at 24K iterations. Regarding the baseline methods, we compared the proposed method to [GCE, Zhang & Sabuncu (2018)], [D2L, Ma et al. (2018)], [FW, Patrini et al. (2017b)], [WAR, Damodaran et al. (2019)], [SL, Wang et al. (2019)], [JOFL, Tanaka et al. (2018)], [DMI, Xu et al. (2019)], [PENCIL, Yi & Wu (2019)], and [MLNT, Li et al. (2019b)]. Table 3 reveals that our method achieved competitive performance in comparison with the other baseline methods.
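A sketch of the Clothing1M input pipeline and optimizer schedule described above, written with torchvision-style transforms, is given below. The paper only lists the operations, so the exact resize/crop composition, the ImageNet normalization statistics, and the linear-decay implementation are our assumptions.

```python
import torch
from torchvision import transforms, models

train_tf = transforms.Compose([
    transforms.Resize(256),                                  # assumed resize before cropping
    transforms.RandomCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],         # ImageNet statistics (assumed)
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, 14)         # 14 Clothing1M classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
total_iters = 24_000
# linear decay of the learning rate from 1e-5 to 5e-6 over 24K iterations
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda it: 1.0 - 0.5 * min(it, total_iters) / total_iters)
```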
5.3 COMPUTATIONAL COST
Because Co-teaching, JoCoR, and DivideMix use additional networks, the number of network parameters is twice (8.86M) as many as that of the Vanilla network (4.43M ). In Table 4, we compare the average training time for first 5-epochs over various baseline methods under symmetric noise on the CIFAR-10 dataset. While non-parametric methods such as GCE and WDN require less than 12% additional time, other methods that require additional networks spent more time than non-parametric methods. The averaging time can vary according to different experimental environments. In table 4, we measure the time using publicly available code provided by authors.
6 CONCLUSION
We proposed a novel method called WDN for accurate classification of noisy labels. The proposed method normalizes uncertain measures to data-dependent Gaussian measures by imposing geometric constraints in the 2-Wasserstein space. We simulated discrete SDE using the Euler-Maruyama scheme, which makes our method fast, computationally efficient, and non-parametric. In theoretical analysis, we derived the explicit upper-bound of the proposed Wasserstein normalization and experimentally demonstrated a strong relationship between this upper-bound and the over-parameterization. We conducted experiments both on the CIFAR-10/100 and Clothing1M datasets. The results demonstrated that the proposed WDN significantly outperforms other state-of-the-art methods.
A OPEN-SOURCE DATASET
Transition matrix for CIFAR-10/100. For the experiment summarized in Table 1, we implemented open-source code to generate the noise transition matrix discussed by Han et al. (2018), as well as the 9-layered CNN architecture (https://github.com/bhanML/Co-teaching).
Open-set noise. For the experiment summarized in Table 2, we used the same dataset for open-set noisy labels presented by Lee et al. (2019) (https://github.com/pokaxpoka/ RoGNoisyLabel).
Clothing1M. For the experiment summarized in Table 3, we used the open-source dataset presented by Xiao et al. (2015) (https://github.com/Cysu/noisy_label).
B COMPARISONS TO RELATED WORKS
Methodology          Parametric   Class-dependency   Distillation   Sample-weight   Sample-selection
DivideMix            ✓            ✗                  ✗              ✗               ✓
Co-teaching          ✓            ✗                  ✓              ✗               ✓
JoCoR                ✓            ✗                  ✓              ✗               ✓
MLNT                 ✓            ✓                  ✓              ✗               ✗
Ren et al. (2018)    ✗            ✗                  ✗              ✓               ✗
NPCL                 ✗            ✗                  ✗              ✓               ✗
GCE                  ✗            ✗                  ✗              ✓               ✗
WDN                  ✗            ✗                  ✗              ✗               ✗
Table B indicates that no previous methodologies can conceptually include our method.
Because the solution to the Fokker-Planck equation can be explicitly calculated without any additional parameters, our method is fully non-parametric (in terms of parameters beyond those of the original neural network). By contrast, co-teaching is parametric because it requires a clone network whose additional parameters are copies of those in the original network. Similarly, MLNT requires an additional teacher network for training, which also contains a large number of parameters.
Many methods based on the small-loss criterion select only certain samples, whereas our method uses the combination of ρN certain and (1 − ρ)N normalized uncertain samples. Therefore, our method fully leverages each batch of training data, since (1 − ρ)N + ρN = N. Additionally, our method does not assume any class-dependent prior knowledge. Rather than considering class-wise prior knowledge, our method uses holistic information from both certain and uncertain samples (i.e., Y and XT) in the logit space. Other meta-class-based models, such as MLNT, assume class-wise meta prior knowledge from a teacher network.
Arazo et al. (2019) assume a beta-mixture model as the label distribution on the label space. However, due to the non-deterministic nature of noisy label distributions, this approach sometimes fails to train under extremely non-uniform types of noise; for example, Arazo et al. (2019) report a failure case on the Clothing1M dataset. It seems that the fundamental assumption on the noise model of mixup will be improved in future work. Similarly, our method has trouble dealing with synthetic asymmetric noise at high ratios, where a relatively large performance drop is observed in Table 1 (although our method still produces the second-best performance in the table).
In the most recent work, Li et al. (2019a) also adopt co-training by implementing an additional dual network, but with a more sophisticated methodology called co-divide/co-guessing based on SSL. We conjecture that the Wasserstein distance between the labeled and unlabeled probability measures is well controlled in their method. We believe that applying the OT/Markov theory used in this paper to their method would broaden the understanding of the LNL problem.
In contrast to sample-weighting methods such as GCE and NPCL, which require prior knowledge of the cardinality of the training samples to be weighted, our method is free from such assumptions because our Wasserstein normalization is applied in a batch-wise manner.
C TECHNICAL DIFFICULTY FOR APPLYING GENERAL OPTIMAL TRANSPORT/MARKOV THEORY TO LABEL SPACE.
Let X, Y be uncertain and certain samples in the pre-softmax feature space, and assume that we consider the distributional constraint on the label space (the space of σ(X), σ(Y), where σ denotes the softmax function). This space is not suitable for defining an objective function such as equation 5: every sample in this label space has the form $\sigma(X) = [a_1, a_2, \cdots, a_d]$ with $\sum_{i=1}^d a_i = 1$, so the label space is the d-dimensional affine simplex $U^d$, a subset of Euclidean space, $U^d \subset \mathbb{R}^d$. In this case, the definition of the Wasserstein space in equation 4 is not applicable, since $d_E$ is not a true metric on $U^d$. The Wasserstein space $\mathcal{P}_2(U^d)$ is rarely investigated in the mathematical literature, which prevents us from using the technical details, assumptions, and theory developed for $\mathcal{P}_2(\mathbb{R}^d)$, which form the theoretical ground of our work. However, if we view this problem from a slightly different perspective and consider the pre-softmax space $\mathbb{R}^d$ and $\mathcal{P}_2(\mathbb{R}^d)$ as our base space, the technical issues that arise when trying to use OT tools in $\mathcal{P}_2(U^d)$ can be overcome or ignored. Since the softmax is a non-parametric one-to-one function connecting the pre-softmax feature space $\mathbb{R}^d$ to $U^d$, there exists a unique label in $U^d$ as the image of each manipulated uncertain sample. Even though our objects are defined on the pre-softmax space, the theoretical analysis in Proposition 3 involves the softmax function to evaluate the concentration inequality of the proposed transformation $\mathcal{F}$ acting on the label space $U^d$.
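As a quick illustration of why working on the pre-softmax space is harmless for the label space, the snippet below applies the softmax to arbitrarily manipulated logits and checks that the outputs still lie on the affine simplex U^d (non-negative rows summing to one); the perturbation used is arbitrary and purely illustrative.

```python
import numpy as np

def softmax(X):
    e = np.exp(X - X.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

logits = np.random.randn(5, 10)                       # pre-softmax samples in R^d
transported = logits + 0.3 * np.random.randn(5, 10)   # any manipulation performed in R^d

probs = softmax(transported)
assert np.all(probs >= 0) and np.allclose(probs.sum(axis=1), 1.0)   # valid points of U^d
```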
D MATHEMATICAL BACKGROUND
In this section, we introduce important definitions, notations, and propositions used in our proofs and the main paper.
D.1 NOTATION
We denote by $f_\#\mu$ the push-forward of µ through f. $C_0^\infty(\mathbb{R}^d)$ denotes the set of smooth functions with compact support in $\mathbb{R}^d$. For the $L^p$-norm of a function f, we write $\|f\|_{p,\nu} = (\int |f|^p d\nu)^{1/p}$. The Hessian matrix of f is denoted $\mathrm{Hess}[f] = [\partial_i\partial_j f]_{i,j}^d$. $\mathrm{Sym}_d^+$ denotes the space of positive semi-definite symmetric matrices of size d × d. $\|f\|_{\mathrm{Lip}}$ denotes the Lipschitz norm of f. For any matrix $A \in M_d$, $\|A\|_{op}$ denotes the operator norm of A.
D.2 DIFFUSION-INVARIANCE AND HYPER-CONTRACTIVITY
Definition 2. The Markov semigroup (Pt)t≥0 in Rd acting on a function f ∈ C∞0 is defined as follows:
Ptf(x) = ∫ f(x′)pt(x, dx ′), (15)
where pt(x, dx′) is a transition kernel that is the probability measure for all t ≥ 0. Definition 3. (Diffusion Operator) Given a Markov semi-group Pt at time t, the diffusion operator (i.e., infinitesimal generator) L of Pt is defined as
$$\mathcal{L}g(y) = \lim_{t\to 0}\frac{1}{t}\big(P_t g(y) - g(y)\big) = \sum_{i,j}\frac{\partial^2}{\partial y_i \partial y_j} B_{ij}(y)\, g(y) - \sum_i A_i(y)\frac{\partial}{\partial y_i} g(y), \qquad (16)$$
where B and A are matrix- and vector-valued measurable functions, respectively; $B_{ij}$ denotes the (i, j)-th component function of B and $A_i$ the i-th component function of A.
Definition 4. (Diffusion-invariant measure) Given the diffusion operator $\mathcal{L}$, the probability measure µ is said to be invariant for $\mathcal{L}$ when $\mathbb{E}_{X\sim\mu}[\mathcal{L}f(X)] = 0$ for any $f \in C_0^\infty$.
Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010).) The Gaussian measure $\mathcal{N}_\nu := \mathcal{N}(m_\nu, \Sigma_\nu)$ with mean $m_\nu$ and covariance $\Sigma_\nu$ is an invariant measure for the following diffusion operator $\mathcal{L}$:
$$\mathcal{L}f(x) = \Sigma_\nu\, \mathrm{Hess}[f](x) - (x - m_\nu)^T\nabla f(x), \quad \forall f \in C_0^\infty(\mathbb{R}^d), \qquad (17)$$
where $B_{ij}(x) := [\Sigma_\nu]_{ij}$ is a constant function and $A_i(x) := x_i - m_\nu^i$.
This generator serves as our main tool for the geometric analysis of the upper bound ε. In Section 4.1 of the main paper, we introduced the approximate upper bound $\hat K_2(\mu)$ without a general description of the underlying inequality. We now introduce the mathematics underlying equation 12. Because our detour measure is Gaussian, there is a unique semi-group $P_t$, the multidimensional Ornstein-Uhlenbeck semi-group, that is invariant with respect to $\mathcal{N}_\nu$. Specifically, $P_t$ is defined as follows:
$$P_s h(X) = \mathbb{E}_{Z\sim\mathcal{N}_I}\Big[ h\big( e^{-s}X + \sqrt{1-e^{-2s}}\,(\Sigma_\nu^{\frac{1}{2}}Z + m_\nu) \big) \Big], \quad \forall h \in C_0^\infty. \qquad (18)$$
The invariance of $P_t$ with respect to our detour measure follows from the next proposition.
Proposition 4. Define $C: \mathbb{R}^d \to \mathbb{R}^d$ by $C(X) = AX + b$ with $A \in \mathrm{Sym}_d^+$, $b \in \mathbb{R}^d$, and select an arbitrary smooth $h \in C_0^\infty(\mathbb{R}^d)$. We then define the diffusion Markov semi-group $P_s h$ as follows:
$$P_s h(X) = \mathbb{E}_{Z\sim\mathcal{N}}\Big[ h\big( e^{-s}X + \sqrt{1-e^{-2s}}\, C(Z) \big) \Big]. \qquad (19)$$
Then, $\mathcal{N}(A^2, b)$ is invariant with respect to $P_s$, meaning the following equality holds for every h and $s \ge 0$:
$$\int_{\mathbb{R}^d} \big[P_s h(X) - h(X)\big]\, d\mathcal{N}(A^2, b)(X) = 0. \qquad (20)$$
Proof. For simplicity, we denote $\mathcal{N}(A^2, b) := \mathcal{N}_C$.
$$\int P_s h(X)\, d\mathcal{N}_C(X) = \int\!\!\int h\big(e^{-s}X + \sqrt{1-e^{-2s}}\, C(Z)\big)\, d\mathcal{N}_C(X)\, d\mathcal{N}(Z) = \int\!\!\int h \circ C\big(e^{-s}Z' + \sqrt{1-e^{-2s}}\, Z\big)\, d\mathcal{N}(Z')\, d\mathcal{N}(Z). \qquad (21)$$
The second equality holds because C is linear on $\mathbb{R}^d$. Let $e^{-s} = \cos\theta$ and $\sqrt{1-e^{-2s}} = \sin\theta$ for some $0 \le \theta \le 2\pi$. Then, we define $\phi(Z', Z) = e^{-s}Z' + \sqrt{1-e^{-2s}}\, Z = \cos(\theta)Z' + \sin(\theta)Z$ and $\pi(Z', Z) = Z$. Based on the rotation invariance of the standard Gaussian measure, one can derive the following equality:
$$(\mathcal{N}\otimes\mathcal{N}) \circ (C\circ\phi)^{-1} = \big((\mathcal{N}\otimes\mathcal{N})\circ\phi^{-1}\big)\circ C^{-1} = \mathcal{N}\circ C^{-1}. \qquad (22)$$
Moreover, $d\mathcal{N}[C^{-1}(X)] = d\mathcal{N}_C(X) = \big((2\pi)^d |A^2|\big)^{-\frac{1}{2}} e^{-0.5 (X-b)^T A^{-2}(X-b)}$. By combining equation 21 and equation 22, one can derive the following result:
$$\int h\circ C\big(e^{-s}Z' + \sqrt{1-e^{-2s}}Z\big)\, d[\mathcal{N}\otimes\mathcal{N}] = \int h(X)\, d\big[(\mathcal{N}\otimes\mathcal{N})\circ\phi^{-1}\circ C^{-1}\big](X) = \int h(X)\, d[\mathcal{N}\circ C^{-1}](X) = \int h(X)\, d\mathcal{N}[C^{-1}(X)] = \int h(X)\, d\mathcal{N}_C(X). \qquad (23)$$
Proposition 4 demonstrates the invariance property of the defined semi-group. If we set $A = \Sigma_\nu^{\frac{1}{2}}$ and $b = m_\nu$, we recover equation 18.
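The invariance statement can be sanity-checked numerically: under X ~ N(b, A²), the Monte Carlo average of P_s h(X) should match that of h(X) for any s. The sketch below performs this check for the centered case b = 0 (where C is linear, matching the assumption used in the proof); the sample size, the choice of A, and the quadratic test function are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, s = 3, 200_000, 0.7
A = np.diag([1.0, 2.0, 0.5])                 # A in Sym_d^+
b = np.zeros(d)                              # centered case, C(Z) = A Z
h = lambda x: np.sum(x ** 2, axis=1)         # smooth test function

X = rng.standard_normal((n, d)) @ A + b      # X ~ N(b, A^2)
Z = rng.standard_normal((n, d))
CZ = Z @ A + b                               # C(Z)
Psh = h(np.exp(-s) * X + np.sqrt(1 - np.exp(-2 * s)) * CZ)

print(abs(Psh.mean() - h(X).mean()))         # should be close to 0 (invariance)
```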
We are now ready to define the approximation of $K_2(\mu)$ in terms of semi-group invariance. Specifically, for any real-valued smooth h, we have the following inequality:
$$\hat K_2(\mu) = \mathbb{E}_{X\sim\mu}[\mathcal{L}h(X)] = \lim_{s\to 0}\mathbb{E}_{X\sim\mu}\Big[\frac{1}{s}\big(P_s h(X) - h(X)\big)\Big] = \lim_{s\to 0}\frac{1}{s}\mathbb{E}_{X\sim\mu,\,Z\sim\mathcal{N}_I}\Big[h\big(e^{-s}X + \sqrt{1-e^{-2s}}(\Sigma_\nu^{\frac{1}{2}}Z + m_\nu)\big) - h(X)\Big] \le K_2(\mu). \qquad (24)$$
This inequality becomes an equality if h is selected to attain the supremum over the set $C_0^\infty$, where $\sup_h \hat K_2(\mu, h) = \sup_h \mathbb{E}_{X\sim\mu}[\mathcal{L}h(X)] = K_2(\mu)$. Although a more sophisticated design of the test function h would induce a tighter upper bound for $\hat K_2$, we found that the $L^2$-norm is generally sufficient.
Definition 5. (Diffuseness of the probability measure) We define the integral operator $K_2: \mathcal{W}_2(\mathbb{R}^d) \to \mathbb{R}^+$ as follows:
$$K_2(\mu) = \sqrt{\sup_{f\in C_0^\infty}\int_{\mathbb{R}^d} |\mathcal{L}f(x)|\, d\mu(x)}. \qquad (25)$$
According to Definition 4, we know that $\int \mathcal{L}f(X)\, d\mathcal{N}_\nu(X) = 0$ for any f. Based on this observation, it is intuitive that $K_2$ estimates how a probability measure µ is distorted with respect to diffusion invariance. Since this measure takes a supremum over the function space $C_0^\infty$, it searches for a function that enables the estimation of the maximal distortion. Because the value of $K_2$ depends entirely on the structure of µ, $K_2$ can be considered a constant, for simplicity, whenever the uncertain measure µ is fixed over one training iteration.
Definition 6. (Diffusion carré du champ) Let $f, g \in C_0^\infty(\mathbb{R}^d)$. Then, we define a bilinear form $\Gamma_e$ on $C_0^\infty(\mathbb{R}^d)\times C_0^\infty(\mathbb{R}^d)$ as
$$\Gamma_e(f, g) = \frac{1}{2}\big[\mathcal{L}\Gamma_{e-1}(fg) - \Gamma_{e-1}(f\mathcal{L}g) - \Gamma_{e-1}(g\mathcal{L}f)\big], \quad e \ge 1. \qquad (26)$$
We also denote $\Gamma(f) \equiv \Gamma(f, f)$. The bilinear form Γ can be considered a generalization of the integration by parts formula, where $\int f\mathcal{L}g + \Gamma(f)\, d\mu = 0$ for the invariant measure µ of $\mathcal{L}$.
Definition 7. (Curvature-Dimension condition, Ambrosio et al. (2015)) We can say that the infinitesimal generator L induces the CD(ρ,∞) curvature-dimension condition if it satisfies Γ1(f) ≤ ρΓ2(f) for all f ∈ C∞0 .
Because our diffusion operator generates a semi-group with respect to the Gibbs measure, the curvature-dimension condition can be calculated explicitly. Through simple calculations, the first-order (e = 1) diffusion carré du champ is
$$\Gamma_1(f) = \big([\nabla f]^T \Sigma_\nu \nabla f\big)^2. \qquad (27)$$
Similarly, the second-order (e = 2) diffusion carré du champ is
$$\Gamma_2(f) = \frac{1}{2}\Big[\mathcal{L}\big(\Gamma_1(f^2)\big) - 2\Gamma_1\big(f, \mathcal{L}(f)\big)\Big] = \mathrm{Tr}\Big(\big[\Sigma_\nu\nabla^2 f\big]^2\Big) + \big([\nabla f]^T\Sigma_\nu\nabla f\big)^2 = \mathrm{Tr}\Big(\big[\Sigma_\nu\nabla^2 f\big]^2\Big) + \Gamma_1(f), \qquad (28)$$
for an arbitrary $f \in C_0^\infty(\mathbb{R}^d)$. Since $\mathrm{Tr}\big(\big[\Sigma_\nu\nabla^2 f\big]^2\big)$ is non-negative, we can infer that $\Gamma_1 \le \Gamma_2$. In this case, the diffusion operator $\mathcal{L}$ defined in Lemma 1 satisfies the CD(ρ = 1, ∞) curvature-dimension condition. For other diffusion operators, please refer to Bolley & Gentil (2010).
Proposition 5. (Decay of Fisher information along a Markov semigroup, Bakry et al. (2013).) If we assume the curvature-dimension condition CD(ρ, ∞), then $I(\mu_t|\mathcal{N}_\nu) \le e^{-2\rho t} I(\mu|\mathcal{N}_\nu)$.
The exponential decay of the Fisher information in Proposition 5 is a core property of the exponential decay of the Wasserstein distance, which will be used in the proof of Proposition 2.
D.3 FOKKER-PLANK EQUATION, SDE
Definition 8. (Over-damped Langevin dynamics) We have
$$dX_t = -\nabla\phi(X_t; m_\nu)\, dt + \sqrt{2\tau^{-1}\Sigma_\nu}\, dW_t, \qquad (29)$$
where $\phi(X_t; m_\nu) = \frac{\tau}{2} d^2(X_t, m_\nu)$, $W_t$ denotes Brownian motion, and d denotes the Euclidean distance. The particle $X_t$ is distributed as $X_t \sim p_t$. The probability density $\lim_{t\to\infty} p(x, t)$ converges to the Gaussian density, $X_\infty = \sqrt{\Sigma_\nu}(Z + m_\nu) \sim p_\infty(x) = q(x) \propto e^{-d(x, m_\nu)^T \Sigma_\nu^{-1} d(x, m_\nu)}$.
In the classical SDE literature, it is known that $\mathbb{E}\big[\sup_{0\le t\le T} |\hat X_t - X_t|\big] \le G(T)(N\rho)^{-\frac{1}{2}}$, where G(T) is a constant that depends only on T and $\hat X$ denotes the true solution of the SDE in equation 29. When the number of uncertain samples satisfies $N\rho > 40$, our method exhibits acceptable convergence.
D.4 GAUSSIAN WASSERSTEIN SUBSPACES
It is known that the space of non-degenerate Gaussian measures (i.e., covariance matrices are positivedefinite) forms a subspace in the 2-Wasserstein space denoted asW2,g ∼= Sym+d × Rd. Because the 2-Wasserstein space can be considered as a Riemannian manifold equipped with Riemannian metrics Villani (2008), W2,g can be endowed with a Riemannian structure that also induces the Wasserstein metric (McCann (1997)). In the Riemannian sub-manifold of Gaussian measures, the geodesic between two points γ(0) = NA and γ(1) = NB is defined as follows Malagò et al. (2018):
$$\gamma(\alpha) = \mathcal{N}_\alpha = \mathcal{N}(m(\alpha), \Sigma(\alpha)), \qquad (30)$$
where m(α) = (1 − α)mA + αmB and Σ(α) = [(1− α)I + αT ] ΣA [(1− α)I + αT ], where T ΣAT = ΣB . In Section 3.2, we set (mA,ΣA) → (mν ,Σν) and (mB ,ΣB) → (mξk ,Σξk). Regardless of how ν is updated, the statistical information regarding the current certain measure ξk is considered in the detour Gaussian measure, which yields a much smoother geometric constraint on µ.
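The geodesic in equation 30 can be traced numerically with the same Riccati solution used in Section 3.2. The snippet below evaluates N(m(α), Σ(α)) on a grid of α values, assuming non-degenerate covariances; it mirrors the moving-geodesic construction above and is only an illustrative sketch.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_geodesic(mA, SA, mB, SB, alphas):
    """Points N(m(alpha), Sigma(alpha)) on the W2 geodesic between two Gaussians."""
    root = np.real(sqrtm(SA))
    root_inv = np.linalg.inv(root)
    T = root_inv @ np.real(sqrtm(root @ SB @ root)) @ root_inv    # T SA T = SB
    path = []
    for a in alphas:
        M = (1 - a) * np.eye(len(mA)) + a * T
        path.append(((1 - a) * mA + a * mB, M @ SA @ M))
    return path

path = gaussian_geodesic(np.zeros(2), np.eye(2), np.ones(2), 2 * np.eye(2),
                         alphas=np.linspace(0, 1, 5))
```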
E PROOFS
Proposition 6. Let Γ(µ, ν) be the set of couplings between µ and ν, and assume that the noisy label r̂ is independent of X. For the functional $J[\mu] = \mathbb{E}_{X\sim\mu}\, l(X;\hat r)$, we define $D(\mu, \nu)$ as
$$D(\mu, \nu) = \inf_{\gamma\in\Gamma(\mu,\nu)} \big|J[\mu] - J[\nu]\big|, \qquad (31)$$
where $D: \mathcal{P}_2\times\mathcal{P}_2 \to \mathbb{R}$. Then, D is a metric on $\mathcal{P}_2$ that is weaker than the Wasserstein metric, in the sense that $D(\mu, \nu) \le \alpha W_2(\mu, \nu)$ for $\alpha = c_0^{-1}\hat r + c_1^{-1}(1 - \hat r)$ and some constants $c_0, c_1 > 0$.
Proof.
$$\begin{aligned} |J[\nu] - J[\mu]| &= \big|\mathbb{E}_\mu[l(X;\hat r)] - \mathbb{E}_\nu[l(Z;\hat r)]\big| \\ &= \big|\mathbb{E}_{\mu\otimes\nu}\big[\hat r\big(\log\sigma(X) - \log\sigma(Z)\big) - (1-\hat r)\big(\log(1-\sigma(X)) - \log(1-\sigma(Z))\big)\big]\big| \\ &\le \mathbb{E}\big|\hat r\,\mathbb{E}_{\mu\otimes\nu}[\log\sigma(X) - \log\sigma(Z)]\big| + \mathbb{E}\big|(1-\hat r)\,\mathbb{E}_{\mu\otimes\nu}[\log(1-\sigma(X)) - \log(1-\sigma(Z))]\big| \\ &\le \mathbb{E}\hat r\,\mathbb{E}_{\mu\otimes\nu}\big|\log\sigma(X) - \log\sigma(Z)\big| + \mathbb{E}(1-\hat r)\,\mathbb{E}_{\mu\otimes\nu}\big|\log(1-\sigma(X)) - \log(1-\sigma(Z))\big| \\ &\le c_0^{-1}\mathbb{E}(\hat r)\,\mathbb{E}_{\mu\otimes\nu}|X - Z| + c_1^{-1}\mathbb{E}(1-\hat r)\,\mathbb{E}_{\mu\otimes\nu}|Z - X| = \mathbb{E}\big[c_0^{-1}\hat r + c_1^{-1}(1-\hat r)\big]\,\mathbb{E}_{\mu\otimes\nu}|X - Z|. \end{aligned} \qquad (32)$$
By taking the infimum of the above inequality over the set of couplings $\gamma(\mu,\nu)$, we obtain the following inequality:
$$D(\nu,\mu) = \inf_{\gamma(\mu,\nu)} |J[\nu] - J[\mu]| \le \mathbb{E}\big[c_0^{-1}\hat r + c_1^{-1}(1-\hat r)\big]\inf_{\gamma(\mu,\nu)}\mathbb{E}_\gamma|X - Z| = \mathbb{E}\big[c_0^{-1}\hat r + c_1^{-1}(1-\hat r)\big]\, W_1(\mu,\nu) \le \mathbb{E}\big[c_0^{-1}\hat r + c_1^{-1}(1-\hat r)\big]\, W_2(\mu,\nu), \qquad (33)$$
which completes the proof.
Proposition 6 follows from the Lipschitzness of the functional J, where D searches for the best coupling to derive the minimal loss difference between two probability measures. This proposition indicates that $\inf|J[\nu] - J[\mathcal{F}\mu]|$ is bounded by the Wasserstein distance, which justifies the geometric constraint presented in equation 4. It should be noted that the prior assumption regarding noisy labels is essential for Lipschitzness.
Proposition 7. Let $\mathcal{F}: \mathbb{R}^+\times\mathcal{P}_2$ be a functional on probability measures such that $\mathcal{F}[t,\mu] = \mu_t$, where $d\mu_t = p_t\, d\mathcal{N}_\nu$, $d\mathcal{N}_\nu = q_t\, dx$, and let $\mu_t$ be a solution of the continuity equation in the 2-Wasserstein space defined as
$$\partial_t\mu_t = \nabla\cdot(\mu_t\nabla\Phi_t), \qquad (34)$$
which is represented as $\partial_t p(t,x) = \nabla\cdot\big(p(t,x)\nabla\log q(t,x)\big)$ in a distributional sense. Then, the functional $\mathcal{F}_t[\cdot] = \mathcal{F}[t,\cdot]$ is uniquely defined and normalizes µ onto $B_{W_2}(\mathcal{N}_\nu, e^{-t}K_2(\mu))$, where $K_2(\mu) \le \infty$ is the integral operator in Definition 5 with respect to µ.
Proof. We assume that the probability measure $\mu_t$ is absolutely continuous with respect to the detour Gaussian measure $\mathcal{N}(m_\nu,\Sigma_\nu) = \mathcal{N}_\nu$, i.e., $\mu_t \ll \mathcal{N}_\nu$. In this case, according to the Radon-Nikodym theorem, there is a corresponding unique probability density $q(t, x) = q_t(x) \in C_0^\infty$ such that $d\mu_t = q_t\, d\mathcal{N}_\nu$.
Lemma 2. (WI-inequality, Otto & Villani (2000)) If the stationary state of $\mu_t$ with respect to $P_t$ satisfies $\lim_{t\to\infty}\mathbb{E}_\mu[P_t f] = 0$ for any $f\in C_0^\infty$, then the following inequality holds:
$$\frac{d}{dt^+}W_2(\mu, \mu_t) \le \sqrt{I(\mu_t|\mathcal{N}_\nu)}. \qquad (35)$$
By integrating both sides of the inequality in Lemma 2 with respect to $t\in(0,\infty)$, the following inequality is obtained:
$$W_2(\mu_t, \mathcal{N}_\nu) = \int_0^\infty \frac{d}{dt^+}W_2(\mu_t,\mathcal{N}_\nu)\, dt \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\, dt. \qquad (36)$$
In the aforementioned inequality, we replace the Fisher information with the diffusion generator $\mathcal{L}$ as follows:
$$W_2(\mu, \mathcal{N}_\nu) \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\, dt = \int_0^\infty\sqrt{\int [P_t q]^{-1}\Gamma(P_t q)\, d\mathcal{N}_\nu}\, dt = \int_0^\infty\sqrt{\int \mathcal{L}(-\log P_t q)\, d\mu_t}\, dt. \qquad (37)$$
The second equality above is derived by leveraging the properties of the bilinear operator Γ (Bakry et al. (2013); Villani (2008)) with respect to the diffusion operator $\mathcal{L}$, namely
$$\int [P_t q]^{-1}\Gamma(P_t q)\, d\mathcal{N}_\nu = -\int\mathcal{L}(\log P_t q)\, q_t\, d\mathcal{N}_\nu = \int\mathcal{L}(-\log P_t q)\, d\mu_t \ge 0. \qquad (38)$$
For simplicity, we denote $|g| = g^+$ for any $g\in C_0^\infty$. According to Proposition 5, we can relate $\mathcal{F}_t\mu = \mu_t$ to its initial term $\mu = \mu_{t=0}$ as follows:
$$\begin{aligned} \int_0^\infty\sqrt{\int \mathcal{L}(-\log P_t q)(X)\, d[\mathcal{F}_t\mu](X)}\, dt &\le \int_0^\infty\sqrt{e^{-2\rho t}\int \mathcal{L}(-\log P_{t=0} q)(X)\, d\mu(X)}\, dt \\ &\le \int_0^\infty\sqrt{e^{-2\rho t}\sup_{g\in C_0^\infty}\int \mathcal{L}^+ g(Z)\, q\, d\mathcal{N}_\nu(Z)}\, dt \\ &= \int_0^\infty\sqrt{e^{-2\rho t}}\, dt\; \sqrt{\sup_{g\in C_0^\infty}\int \mathcal{L}^+ g(X)\, d\mu(X)} = \rho^{-1}K_2(\mu). \end{aligned} \qquad (39)$$
The second inequality is naturally induced because the objective is defined to select the maximal element over the set of functions $g\in C_0^\infty$ and $\mathcal{L}g \le \mathcal{L}^+ g$. If the integration interval is set to $(0, s)$, we obtain $W_2(\mu, \mathcal{F}_t\mu) \le \frac{1}{\rho}(1 - e^{-s})K_2(\mu)$. Our diffusion operator induces $\rho = 1$, which completes the proof.
Proposition 8. There is a scalar 0 < β < ∞, dependent on ν, such that the following inequality holds:
$$W_2(\nu, \mathcal{F}_t\mu) \le \Big[\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\Big] \vee \Big[e^{-t}K_2(\mu) + K_2(\nu)\Big]. \qquad (40)$$
As a motivation for setting the detour measure to $\mathcal{N}_\nu$, we mentioned the natural property of the non-collapsing Wasserstein distance, $W_2(\nu, \mathcal{N}_\nu) \neq 0$. However, it is unclear from a geometric perspective exactly how the upper bound on the intrinsic statistics term (i.e., $d_1$ in Fig. 1) can be induced. Specifically, in the situation where the covariance matrices of ν and $\mathcal{N}_\nu$ are identical, it is difficult to determine a theoretical upper bound without additional tools. The first part of this proof focuses on resolving this issue. The second part of the proof follows naturally from Proposition 7. Please note that in the following proposition, the parameter for the Wasserstein moving average is set to α = 0 for clarity.
Proof. Before proceeding with the first part of the proof, we define a constant β as follows:
$$\beta = \sup_{1\le j\le d}\int_0^1 \frac{1}{s}\,\mathbb{E}_{Y_s}\big[v_{s,j}^2(Y_s)\big]\, ds. \qquad (41)$$
If we assume the mild condition $\min_s\inf_{1\le j\le d} O(v_{s,j}) \ge O(\sqrt{s})$, then the integral in β is finite and well-defined. This value directly yields the upper bound of the Kullback–Leibler (KL) divergence of ν. First, we introduce the following identity.
Lemma 3. (de Bruijn's identity, Johnson & Suhov (2001); Nourdin et al. (2014)) Let $Y \sim \nu$, let $Z \sim \mathcal{N}(0, I)$ denote a standard Gaussian random variable, and define $Y_s = \sqrt{s}\,Y + \sqrt{1-s}\,\Sigma_\nu^{\frac{1}{2}}Z$ with the score function $v_s(x) = \nabla\log p_s(x)$ of the random variable $Y_s$. Then, the following equality holds:
$$KL\big(\nu \,\big|\, \mathcal{N}(0,\Sigma_\nu)\big) = \int_0^1 \mathrm{Tr}\Big(\frac{1}{2s}\,\Sigma_\nu\,\mathbb{E}_{p_s\sim Y_s}\big[v_s(Y_s)v_s(Y_s)^T\big]\Big)\, ds. \qquad (42)$$
From equation 42, we can derive the relation between the KL-divergence and the constant β defined earlier:
$$\int_0^1 \frac{1}{2s}\mathrm{Tr}\big(\Sigma_\nu\,\mathbb{E}_x[v_s(Y_s)v_s(Y_s)^T]\big)\, ds \le \int_0^1\frac{1}{2s}\mathrm{Tr}\big(\Sigma_\nu\,\mathbb{E}_x[v_{s,i}v_{s,j}]_{i,j}^d\big)\, ds \le \int_0^1\frac{1}{2}\lambda_{\max}(\Sigma_\nu)\sum_{j=1}^d\mathbb{E}\Big[\frac{v_{s,j}^2(Y_s)}{s}\Big]\, ds \le \frac{1}{2}\lambda_{\max}(\Sigma_\nu)\int_0^1\sum_{j=1}^d\beta\, ds = \frac{1}{2}\lambda_{\max}(\Sigma_\nu)\, d\beta. \qquad (43)$$
The second inequality holds based on the following property of symmetric positive-definite matrices:
$$\mathrm{Tr}(AB) \le \|A\|_{op}\,\mathrm{Tr}(B) = \lambda_{\max}(A)\,\mathrm{Tr}(B), \quad \forall A, B\in\mathrm{Sym}_d^+. \qquad (44)$$
It should be noted that, because the distribution of ν is compactly supported (i.e., supp(q) is compact), the maximum eigenvalue of the covariance $\Sigma_\nu$ is finite. The other relations follow from the definitions above. Next, we relate the KL-divergence and the 2-Wasserstein distance.
Definition 9. (Talagrand inequality for Gaussian measures, Otto & Villani (2000)) For any non-degenerate Gaussian measure $\mathcal{N}$ with mean 0, the following inequality is satisfied:
$$W_2(\nu, \mathcal{N}) \le \sqrt{2\, KL(\nu|\mathcal{N})}, \quad \forall\nu\in\mathcal{P}_2(\mathbb{R}^d). \qquad (45)$$
By combining Definition 9 and equation 43, we derive
$$W_2\big(\nu, \mathcal{N}(0,\Sigma_\nu)\big) \le \sqrt{2\,KL(\nu|\mathcal{N}(0,\Sigma_\nu))} \le \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} < \infty. \qquad (46)$$
According to the triangle inequality for the 2-Wasserstein distance, we obtain
$$W_2\big(\nu,\mathcal{N}(m_\nu,\Sigma_\nu)\big) \le W_2\big(\nu,\mathcal{N}(0,\Sigma_\nu)\big) + W_2\big(\mathcal{N}(m_\nu,\Sigma_\nu),\mathcal{N}(0,\Sigma_\nu)\big). \qquad (47)$$
In Appendix C.3, we showed that the geodesic distance between two Gaussian measures with the same covariance equals the Euclidean distance between their means. Therefore, we obtain the following equality:
$$W_2\big(\mathcal{N}(m_\nu,\Sigma_\nu),\mathcal{N}(0,\Sigma_\nu)\big) = W_2\big(\iota_{m_\nu\#}[\mathcal{N}(0,\Sigma_\nu)],\, \mathcal{N}(0,\Sigma_\nu)\big) = \|m_\nu - 0\|_2 = \|\mathbb{E}_\nu Y\|_2, \qquad (48)$$
where $\iota_a(X) = X + a$ for any vector $a \in \mathrm{supp}(q)$. Now, by adding the two inequalities above, we obtain
$$W_2\big(\nu, \mathcal{N}(m_\nu,\Sigma_\nu)\big) \le \|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}, \qquad (49)$$
where the upper bound depends only on the statistical structure of ν. Specifically, $\|\mathbb{E}_\nu Y\|_2$ represents the center of mass of the density of ν, and $\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}$ is related to the covariance structure of ν.
By applying Proposition 7 to $\mathcal{F}_t\mu$ and ν together with the bound above, we can easily recover equation 5 as follows:
$$\begin{aligned} W_2(\nu, \mathcal{F}_t\mu) \le \varepsilon &= W_2\big(\nu, \mathcal{N}(m_\nu,\Sigma_\nu)\big) + W_2\big(\mathcal{N}(m_\nu,\Sigma_\nu), \mathcal{F}_t\mu\big) \\ &\le \Big(\big[\|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}\big] \wedge K_2(\nu)\Big) + e^{-t}K_2(\mu) \\ &\le \Big[\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\Big] \vee \Big[e^{-t}K_2(\mu) + K_2(\nu)\Big]. \end{aligned} \qquad (50)$$
The last inequality follows from $(a \wedge b) + c \le a \vee (b + c)$ for any $a, b, c \ge 0$, which completes the proof.
Proposition 9. (Concentration inequality for uncertain measures.) Assume that there exist constants $s^\star \in [\tfrac{1}{\eta}, \infty)$, $\eta \ge 0$ such that the following inequality is satisfied:
$$\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f^2] - \big[\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f]\big]^2 \le (1+\eta)\,\mathbb{E}_{\mathcal{F}_{s^\star}\mu}\big[A\nabla f^T\nabla f\big], \qquad (51)$$
for $A\in\mathrm{Sym}_d^+$ with $D(A,\Sigma_\nu) \le a\eta$ for some $a > 0$ and any metric D defined on $\mathrm{Sym}_d^+$. In this case, there is a δ such that the following probability inequality for the uncertain measure is induced:
$$\mathcal{F}_{s^\star}\mu\big(|\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta\big) \le 6 e^{-\frac{\sqrt{2}\,\delta^{3/2}}{K_2}}, \qquad (52)$$
where κ denotes the Lipschitz constant of σ.
Proof. Before proceeding with the main proof, we first prove the existence of $s^\star$. The interval converges to the singleton $\{\infty\}$ in the limit, $I = \lim_{\eta\to 0}[\tfrac{1}{\eta}, \infty)$. In this case, equation 51 coincides with the Poincaré inequality for the Gaussian measure $\mathcal{N}_\nu$, which can be written as
$$\lim_{\eta\to 0}\,\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f^2] - \big[\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f]\big]^2 \le \lim_{\eta\to 0}(1+\eta)\,\mathbb{E}_{\mathcal{F}_{s^\star}\mu}\big[A\nabla f^T\nabla f\big] = \mathbb{E}_{\mathcal{F}_{s^\star}\mu}\big[\Sigma_\nu\nabla f^T\nabla f\big]. \qquad (53)$$
Since the Poincaré inequality in equation 53 is uniquely defined, we can find at least one value $s^\star$ satisfying equation 51. Let $X(t, w) = X_t(w)$ denote the stochastic process with respect to $q_t(x)$ defined in the proof of Proposition 2. Additionally, let $c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma]$. Then, we obtain the following inequality:
$$c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] = \kappa\Big(\mathbb{E}_\nu\Big[\frac{\sigma}{\kappa}\Big] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}\Big[\frac{\sigma}{\kappa}\Big]\Big) \le \kappa\sup_{g\in\mathrm{Lip}_1}\big(\mathbb{E}_\nu g - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}g\big) \le \kappa\, W_1(\mathcal{F}_{s^\star}\mu, \nu) \le \kappa\, W_2(\mathcal{F}_{s^\star}\mu, \nu) \le \frac{\kappa K_2(\mu)}{1+\eta}. \qquad (54)$$
The first inequality is induced by the assumed κ-Lipschitzness of the function σ, and the second inequality by the Kantorovich-Rubinstein theorem. The third inequality is natural because $W_a(\cdot,\cdot) \le W_b(\cdot,\cdot)$ for any $1 \le a \le b < \infty$. Because equation 51 is equivalent to the Poincaré inequality for the measure $\mathcal{F}_{s^\star}\mu$, it satisfies the Bakry-Émery curvature-dimension condition CD(1 + η, ∞). Thus, as shown in the proof of Proposition 2 (i.e., equation 39), the last inequality follows. Additionally, based on the concentration inequality for $\mathcal{F}_{s^\star}\mu$ [Proposition 4.4.2, Bakry et al. (2013)], we can derive the following probability inequality:
$$\mathcal{F}_{s^\star}\mu\big[\sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\big] \le 3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}, \qquad (55)$$
where the Poincaré constant for $\mathcal{F}_{s^\star}\mu$ is naturally $1 + \eta$ and $\|\sigma\|_{\mathrm{Lip}} = \kappa$. Next, we derive the desired form from equation 55. First, we introduce the following inequality:
$$\sigma(X_{s^\star}) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta \ge \mathbb{E}_\nu[\sigma] + \delta - \frac{\kappa}{1+\eta}K_2. \qquad (56)$$
The last inequality is directly induced by equation 54 because $-c \ge -\frac{\kappa}{1+\eta}K_2$. Since η, κ, and $K_2$ are constants with respect to w, the following set inclusion is obtained naturally:
$$\mathcal{S}_1 = \Big\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\Big\} \supseteq \Big\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1+\eta}K_2\Big\} = \mathcal{S}_2. \qquad (57)$$
For the modified version of the original probability inequality, we evaluate the probability measure $\mathcal{F}_{s^\star}\mu[\cdot]$ on the sets $\mathcal{S}_1, \mathcal{S}_2$, which gives
$$3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}} \ge \mathcal{F}_{s^\star}\mu\Big(\big\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\big\}\Big) \ge \mathcal{F}_{s^\star}\mu\Big(\big\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1+\eta}K_2\big\}\Big). \qquad (58)$$
The concentration inequality around $\mathbb{E}_\nu[\sigma]$ is obtained by combining the inequalities induced by σ and −σ as follows:
$$\frac{1}{2}\mathcal{F}_{s^\star}\mu\Big(\bigcup_{h\in\{\sigma,-\sigma\}}\big\{w : h(X_{s^\star}(w)) - \mathbb{E}_\nu[h] \ge \pm\big(\delta - \tfrac{\kappa}{1+\eta}K_2\big)\big\}\Big) = \mathcal{F}_{s^\star}\mu\Big(\big\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta - \tfrac{\kappa}{1+\eta}K_2\big\}\Big) \le 6e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}. \qquad (59)$$
The inequality in equation 59 is the general form relating the upper bound of the probability to (η, κ, $K_2$). Because this form is quite complicated and highly technical, we choose not to present all of its detailed expressions in the main paper; rather, we rewrite it in a simplified form for clarity. Specifically, by setting $\kappa K_2/(1+\eta) = 0.5\delta$ and rescaling δ to 2δ, the inequality in equation 59 can be converted into the following simpler form:
$$\mathcal{F}_{s^\star}\mu\Big(\big\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta\big\}\Big) \le 6e^{-\frac{\sqrt{2}\,\delta^{3/2}}{\kappa K_2}}. \qquad (60)$$
Finally, if we set σ = Softmax, the Lipschitz constant is κ = 1. The proof is completed by setting $s^\star := T$. | 1. What is the main contribution of the paper, and what are the novel aspects introduced by the author regarding objective functions and logits?
2. What are the strengths of the paper, particularly in theoretical analysis and validation?
3. Do you have any questions or concerns about the paper's approach to normalizing uncertain measures, and how it relates to practical problems?
4. How does the reviewer assess the clarity and explanatory power of the paper's content, particularly regarding the motivation behind Wasserstein Normalization?
5. Are there any concerns regarding the training process, such as the potential slowness of the SDE grad flow, and how does the reviewer evaluate the effectiveness of the proposed approach? | Review | Review
Summary
The paper introduces a novel objective function by imposing geometric constraints on the logits of uncertain samples. The authors' approach is to map the distribution of logits of uncertain samples onto the 2-Wasserstein ball centered on the measure of certain samples. To overcome the dilemma of selecting the ball radius, the authors propose a surrogate objective, namely Wasserstein Normalization. An SDE gradient flow is proposed for solving the Wasserstein normalization. The paper also keeps the Gaussian parameters as a moving average during training, in the spirit of batch normalization. The paper validates the method both theoretically and empirically.
Contributions
i) Proposal of a novel Wasserstein Normalization objective for uncertain samples.
ii) Theoretically giving a verifiable upper bound of the constraint term.
iii) Extensive experiments on synthetic and real-world datasets.
Issues:
i) The motivation of normalizing the uncertain measures into the 2-Wasserstein ball of the certain measure is unclear to me, and it seems irrelevant to the practical problem. Yes, I agree that we should prevent the over-parameterized network from overfitting the uncertain samples, since the majority of them are noisy samples (the network tends to fit noisy samples slowly). Based on this observation, the idea of discarding/relabeling uncertain samples is widely adopted in this field, and the paper's approach belongs to this category. But the motivation of Wasserstein Normalization for uncertain samples is weak, and I cannot directly see its benefits. Imagine the network has high confidence for certain samples (which is generally true in practice); then the mean of the Gaussian m is the uniform categorical distribution. The paper's approach is just injecting noise onto the network output of uncertain samples.
ii) Does the simulation of the SDE make the training much slower, since we need to calculate the gradient flow for every single sample during every iteration of the inner loop?
iii) Could the authors better explain the dilemma of selecting the ball radius ε in Sec 3? I cannot see why the benefits decrease when ε ≈ 0.
ICLR | Title
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
Abstract
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
N/A
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
1 INTRODUCTION
The successful results of deep neural networks (DNNs) on supervised classification tasks heavily rely on accurate and high-quality label information. However, annotating large-scale datasets is extremely expensive and a time-consuming task. Because obtaining high-quality datasets is very difficult, in most conventional works, training data have been obtained alternatively using crowd-sourcing platforms Yu et al. (2018) to obtain large-scaled datasets, which leads inevitable noisy labels in the annotated samples.
While there are numerous methods that can deal with noisy labeled data, recent methods actively adopt the small loss criterion, which enables to construct classification models that are not susceptible to noise corruption. In this learning scheme, a neural network is trained using easy samples first in the early stages of training. Harder samples are then gradually selected to train mature models as training proceeds. Jiang et al. (2018) suggested collaborative learning models, in which a mentor network delivers the data-driven curriculum loss to a student network. Han et al. (2018); Yu et al. (2019) proposed dual networks to generate gradient information jointly using easy samples and employed this information to allow the networks to teach each other. Wei et al. (2020) adopted a disagreement strategy, which determines the gradient information to update based on disagreement values between dual networks. Han et al. (2020) implemented accumulated gradients to escape optimization processes from over-parameterization and to obtain more generalized results. In this paper, we tackle to solve major issues raised from the aforementioned methods based on the small-loss criterion, as follows.
In comprehensive experiments, the aforementioned methods gain empirical insight regarding network behavior under noisy labels. However, theoretical and quantitative explanation have not been closely investigated. In contrast, we give strong theoretical/empirical explanations to understand the network under noisy labels. In particular, we present an in-depth analysis of small loss criteria in a probabilistic sense. We exploit the stochastic properties of noisy labeled data and develop probabilistic descriptions of data under the small loss criteria, as follows. Let P be a probability measure for the pre-softmax logits of the training samples, l be an objective function for classification, and 1{·} be an indicator function. Then, our central object to deal with is a truncated measure defined as
$$X \sim \mu|\zeta = \frac{\mathbf{1}_{\{X:\, l(X) > \zeta\}}\, \mathbb{P}}{\mathbb{P}[l(X) > \zeta]}, \qquad Y \sim \xi|\zeta = \frac{\mathbf{1}_{\{Y:\, l(Y) \le \zeta\}}\, \mathbb{P}}{\mathbb{P}[l(Y) \le \zeta]}, \qquad (1)$$
where X and Y , which are sampled from µ|ζ and ξ|ζ, denote uncertain and certain samples defined in the pre-softmax feature space1 (i.e.,Rd), respectively. In equation 1, µ and ξ denote the probability measures of uncertain and certain samples, respectively, and ζ is a constant. Most previous works have focused on the usage of Y and the sampling strategy of ζ, but poor generalization capabilities based on the abundance of uncertain samples X has not been thoroughly investigated, even though these samples potentially contain important information. To understand the effect of noisy labels on the generalized bounds, we provide the concentration inequality of uncertain measure µ, which renders the probabilistic relation between µ and ξ and learnability of the network under noisy labels.
While most conventional methods Han et al. (2018); Wei et al. (2020); Li et al. (2019a); Yu et al. (2019) require additional dual networks to guide misinformed noisy samples, their scalability is not guaranteed due to the existence of dual architectures, which have the same number of parameters as the base network. To alleviate this problem, we build statistical machinery that is fully non-parametric, simple to implement, and computationally efficient, reducing the computational complexity of conventional approaches while maintaining the concept of the small-loss criterion. Based on the empirical observation of ill-behaved certain/uncertain samples, we propose a gradient flow in the Wasserstein space, which can be induced by simulating a non-parametric stochastic differential equation (SDE) of Ornstein-Uhlenbeck type to control the ill-behaved dynamics. The reason for selecting these dynamics will be thoroughly discussed in the following sections.
Thus, key contributions of our work are as follows.
• We theoretically verified that there exists a strong correlation between model confidence and statistical distance between X and Y . We empirically investigate that the classification accuracy worsens when the upper-bound of 2-Wasserstein distance W2(µ, ξ) ≤ ε (i.e., distributional distance between certain and uncertain samples) drastically increase. Due to the empirical nature of upper-bound ε, it can be used as an estimator to determine if a network suffers from over-parameterization.
• Based on empirical observations, we develop a simple, non-parametric, and computationally efficient stochastic model to control the observed ill-behaved sample dynamics. As a primal object, we propose the stochastic dynamics of the gradient flow (i.e., the Ornstein-Uhlenbeck process) and simulate a simple, non-parametric stochastic differential equation. Thus, our method does not require any additional learning parameters.
• We provide important theoretical results. First, the controllable upper-bound ε with the inverse exponential ratio is induced, which indicates that our method can efficiently control the diverging effect of Wasserstein distance. Second, the concentration inequality of transported uncertain measure is presented, which clearly renders the probabilistic relation between µ and ξ.
2 RELATED WORK
Curriculum Learning & Small-loss Criterion. To handle noisy labels, Han et al. (2018); Yu et al. (2019); Jiang et al. (2018); Wei et al. (2020); Lyu & Tsang (2020a); Han et al. (2020) adopted curriculum learning or sample selection frameworks. However, these methods only consider a small number of selected samples, where large portion of samples are excluded at the end of the training. This inevitably leads to poor generalization capabilities. However, this conflicts with sample selection methods because a large portion of training samples are gradually eliminated. By contrast, our method can extract useful information from unselected samples X ∼ µ (i.e., uncertain samples) and enhance these samples (e.g., X ′ ∼ Fµ) for more accurate classification. Chen et al. (2019) iteratively apply cross-validation to randomly partitioned noisy labeled data to identify most samples that have correct labels. To generate such partitions, they adopt small-loss criterion for selecting samples.
Loss Correction & Label Correction. Patrini et al. (2017a); Hendrycks et al. (2018); Ren et al. (2018) either explicitly or implicitly transformed noisy labels into clean labels by correcting classification losses. Unlike these methods, our method transfers the holistic information from uncertain samples into certain samples, which implicitly reduces the effects of potentially noisy labels. Because correcting label noise by modifying the loss dynamics does not perform well in extreme noise environments, Arazo et al. (2019) adopt a label augmentation method called MixUp Zhang et al. (2018).
1Due to technical difficulties, we define our central objects on the pre-softmax space rather than the label space, i.e., the space of σ(X), σ(Y ), where σ indicates the softmax function. Please refer to the Appendix for more details.
Distillation. Li et al. (2019b) updated mean-teacher parameters by calculating the exponential moving average of student parameters to mitigate the impact of gradients induced by noisy labels. Lukasik et al. (2020) deeply investigated the effects of label smoothing for noisy labels and linked label smoothing to loss correction in a distillation framework. Similar to these methods, our method leverages the useful properties of distillation models. We set ν as a pivot measure, which guides our normalization functional Fµ for uncertain measures. This is similar to self-distillation because uncertain training samples are forced to be normalized toward those of past states.
Other methods. Lee et al. (2019) induced a robust generative classifier based on pre-trained deep models. Similar to our method, Damodaran et al. (2019) designed a constraint on the Wasserstein space and adopted an adversarial framework for classification models of noisy labeled data by implementing a semantic Wasserstein distance. Pleiss et al. (2020) identify noisy labeled samples by considering AUM statistics, which exploit differences in the training dynamics of clean and mislabeled samples. In the most recent work, Li et al. (2019a) adopt semi-supervised learning (SSL) methods to deal with noisy labels, where a student network utilizes both labeled and unlabeled samples to perform semi-supervised learning guided by another teacher network.
3 DISTRIBUTIONAL NORMALIZATION
Because our main target object is a probability measure (distribution), we first define an objective function in a distributional sense. Let l be the cross-entropy loss and r̂ be a corrupted label random vector obtained from a clean label r, which is independent of X, through an unknown label transition matrix Q. Then, a conventional objective function for classification with noisy labels can be defined as follows:
$$\min_{\mu} J[\mu] = \min_{\mu} \mathbb{E}_{X\sim\mu,\,\hat{r}|Q}\left[l(X;\hat{r})\right]. \qquad (2)$$
However, due to the significant changes in label information, the conventional objective function defined in equation 2 cannot be used for accurate classification. Instead of directly using uncertain samples X ∼ µ as in previous works, we normalize µ in the form of a metric ball and present a holistic constraint. For a clear mathematical description, we first introduce the following definition. Definition 1. (Wasserstein ambiguity set) Let $\mathcal{P}_2(\mathbb{R}^d) = \{\mu : \mathbb{E}_{\mu} d_E^2(x_0, x) < \infty,\ \forall x_0 \in \mathbb{R}^d\}$ be a 2-Wasserstein space, where d denotes the number of classes and $d_E$ is the Euclidean distance defined on $\mathbb{R}^d$. Then, we define a Wasserstein ambiguity set (i.e., metric ball) in this space as follows:
$$B_{W_2}(\nu, \varepsilon) = \left\{ \mu \in \mathcal{P}_2\!\left(\mathbb{R}^d\right) : \mathcal{W}_2(\mu, \nu) \le \varepsilon \right\}, \qquad (3)$$
where $\mathcal{W}_2$ denotes the 2-Wasserstein distance and ν is the pivot measure. Then, we propose a new objective function by imposing geometric constraints on µ as follows:
$$\min_{\mathcal{F}\mu \in B_{W_2}(\nu,\varepsilon),\ \xi} J[\mathcal{F}\mu] + J[\xi] = \min_{\theta} \mathbb{E}_{X\sim\mathcal{F}\mu_\theta,\,\hat{r}}\left[l(X;\hat{r})\right] + \mathbb{E}_{Y\sim\xi_\theta,\,\hat{r}}\left[l(Y;\hat{r})\right], \qquad (4)$$
where $\mathcal{F} : \mathcal{P}_2(\mathbb{R}^d) \to \mathcal{P}_2(\mathbb{R}^d)$ is a functional on probability measures, which ensures the constraint on Fµ (i.e., $\mathcal{F}\mu \in B_{W_2}(\nu, \varepsilon)$) and is our main object. The right-hand side of equation 4 is the vectorial form equivalent to the distributional form on the left-hand side. While our main objects are defined on the pre-softmax space, both probability measures µθ and ξθ are parameterized by a neural network with parameters θ. This newly proposed objective function uses the geometrically enhanced version of an uncertain measure, Fµ, together with a certain measure ξ. In equation 4, the probability measure ν is defined as $\nu = \arg\min_{k^\star \in I_{k-1}} J[\xi_{k^\star}]$, where ξk denotes a certain measure at the current k-th iteration and $I_{k-1} = \{1, \cdots, k-1\}$. In other words, our method finds the best probability measure representing all certain samples observed so far during training, and the uncertain measures are transported to lie in the Wasserstein ball centered on ν. In equation 4, the Wasserstein constraint on Fµ enforces that uncertain measures statistically resemble ν from a geometric perspective (i.e., $\mathcal{W}_2(\nu, \mathcal{F}\mu) \le \varepsilon$). Now, an important question naturally stems from the aforementioned analysis: how can we select the optimal radius ε? Clearly, finding an F that induces a small ε ≈ 0 is suboptimal because Fµ ≈ ν, and using the objective function J[Fµ ≈ ν] can lead to the following critical problem: as the optimization proceeds, enhanced uncertain samples X′ ∼ Fµ contribute less and less, because they become statistically identical to ν, meaning our objective in equation 4 would receive little benefit from these transported uncertain samples. By contrast, if we adopt a large radius ε, enhanced uncertain samples will be statistically and geometrically unrelated to ν, which causes the normalized measure Fµ to yield large losses and violates our objective.
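As a side note, the Wasserstein constraint above can be monitored empirically: for two equal-size sets of pre-softmax samples with uniform weights, the 2-Wasserstein distance reduces to an optimal assignment problem. The sketch below is our own monitoring utility (not part of the proposed training procedure) and assumes equally many samples in both sets.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def empirical_w2(samples_a, samples_b):
    """Exact 2-Wasserstein distance between two uniform empirical measures
    with the same number of points, via the Hungarian algorithm."""
    cost = cdist(samples_a, samples_b, metric="sqeuclidean")  # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)                  # optimal one-to-one coupling
    return float(np.sqrt(cost[rows, cols].mean()))
```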
To overcome the two problems above and select the radius, we make a detour through a Gaussian measure, splitting the path between ν and Fµ into two legs (i.e., ν → N(mν, Σν) → Fµ) rather than directly calculating the geodesic between ν and Fµ (i.e., ν → Fµ). Specifically, we decompose the original constraint in equation 4 into two terms using the triangle inequality of the Wasserstein distance:
$$\mathcal{W}_2(\nu, \mathcal{F}\mu) \le \varepsilon = \underbrace{\mathcal{W}_2\!\left(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)\right)}_{d_1:\ \text{Intrinsic statistics}} + \underbrace{\mathcal{W}_2\!\left(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{F}\mu\right)}_{d_2:\ \text{Wasserstein Normalization}}. \qquad (5)$$
The first, intrinsic statistics term sets a detour point as a Gaussian measure whose mean and covariance are those of ν (i.e., $m_\nu = \mathbb{E}_{Y\sim\nu}[Y]$ and $\Sigma_\nu = \mathrm{Cov}_{Y\sim\nu}[Y]$). The Wasserstein upper bound of this term depends only on the statistical structure of ν because (mν, Σν) is determined by ν. Thus, this term induces a data-dependent, non-zero constant upper bound whenever ν ≠ N and prevents the upper bound from collapsing to ε → 0, regardless of F. This gives a huge advantage when dealing with ε because the first term can be considered a fixed constant during training. The second, normalization term represents our central objective. F facilitates geometric manipulation in the Wasserstein space and prevents the uncertain measure µ from diverging, where µ is normalized onto the Wasserstein ambiguity set $B_{W_2}(\nu, \varepsilon)$ in Fig. 1. The theoretical and numerical advantages of setting the detour measure as a Gaussian are explained in the following section.
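When both endpoints are (approximated by) Gaussian measures, the 2-Wasserstein distance has a closed form, which is convenient for reasoning about the intrinsic-statistics term d1. The helper below is a hedged sketch using that standard Gaussian formula; it is not taken from the paper's implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form W2 distance between N(m1, S1) and N(m2, S2):
    W2^2 = ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S1^(1/2) S2 S1^(1/2))^(1/2))."""
    S1_half = np.real(sqrtm(S1))
    cross = np.real(sqrtm(S1_half @ S2 @ S1_half))
    bures = np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + max(bures, 0.0)))
```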
3.1 WASSERSTEIN NORMALIZATION
In the previous section, we presented a novel objective function that imposes a geometric constraint on µ such that the transformed measure Fµ lies in $B_{W_2}(\nu, \varepsilon)$ for ν. Now, we specify F and relate it to the Gaussian measure (more generally, a Gibbs measure). For simplicity, we denote $\mathcal{N}_\nu = \mathcal{N}(m_\nu, \Sigma_\nu)$. Proposition 1. $\mathcal{F} : \mathbb{R}^+\times\mathcal{P}_2 \to \mathcal{P}_2$ is a functional on the probability measure such that $\mathcal{F}[t, \mu] = \mu_t$, where $d\mu_t = p_t\, d\mathcal{N}_\nu$, $d\mathcal{N}_\nu = q_t\, dx$, and µt is a solution to the following continuity equation:
$$\partial_t \mu_t = \nabla \cdot (\mu_t v_t), \qquad (6)$$
which reads as $\partial_t p(t, x) = \nabla \cdot (p(t, x)\nabla \log q(t, x))$ in a distributional sense. Then, the uniquely defined functional $\mathcal{F}_t[\cdot] = \mathcal{F}[t, \cdot]$ normalizes µ onto $B_{W_2}(\mathcal{N}_\nu, e^{-t}K_2(\mu))$, where $K_2(\mu) > 0$ is a constant that depends on µ.
It is well known that the solution to equation 6 induces a geodesic in the 2-Wasserstein space (Villani (2008)), which is the shortest path from µ = µt=0 to Nν. The functional Ft generates a path for µt along which the distance decays exponentially according to the auxiliary variable t and the constant K2, meaning $\mathcal{W}_2(\mathcal{N}_\nu, \mathcal{F}_t\mu) \le K_2 e^{-t}$. This theoretical result indicates that the Wasserstein distance in the second term of equation 5 can be reduced and controlled at an exponential rate. Thus, by setting a different t, our method can efficiently control the diverging distance in equation 5. Unfortunately, it is typically intractable to compute the partial differential equation (PDE) in equation 6.
Algorithm 1 Wasserstein Distributional Normalization
Require: α ∈ [0, 0.2], ρ ∈ [0.1, 0.65], T = 64, ∆t = 10^{-4}, τ = 0.001
for k = 1 to K (i.e., the total number of training iterations) do
  1) Select (1 − ρ)N uncertain and ρN certain samples from the mini-batch of size N:
     $\{Y_k^n\}_{n \le \rho N} \sim \xi_k$, $\{X_k^n\}_{n \le (1-\rho)N} \sim \mu_k$
  2) Update the most certain measure ν:
     if $J[\xi_k] < J[\nu]$ then $\nu \leftarrow \xi_k$, $m_\nu \leftarrow \mathbb{E}[Y_k]$, and $\Sigma_\nu \leftarrow \mathrm{Cov}[Y_k]$ end if
  3) Update the moving geodesic average $\mathcal{N}(m_\alpha, \Sigma_\alpha)$:
     solve the Riccati equation $\mathcal{T}\Sigma_\nu\mathcal{T} = \Sigma_{\xi_k}$;
     $\Sigma_\alpha = ((1-\alpha)I_d + \alpha\mathcal{T})\,\Sigma_\nu\,((1-\alpha)I_d + \alpha\mathcal{T})$ and $m_\alpha = (1-\alpha)m_\nu + \alpha m_{\xi_k}$
  4) Simulate the discrete SDE for T steps:
     for t = 0 to T − 1 do
       $X^n_{k,t+1} = X^n_{k,t} - \nabla\phi(X^n_{k,t}; m_\alpha)\Delta_t + \sqrt{2\tau^{-1}\Delta_t\,\Sigma_\alpha}\, Z^n_t$  s.t. $\{X^n_{k,t=0}\} \sim \mu_k$, $\{X^n_{k,t=T}\} \sim \mathcal{F}_T\mu_k$
     end for
  5) Update the network with the objective function:
     $J[\mathcal{F}\mu_k] + J[\xi_k] = \mathbb{E}_{\mathcal{F}_T\mu_k}[l(X_{k,T}; \hat{r})] + \mathbb{E}_{\xi_k}[l(Y_k; \hat{r})]$
end for
To solve this problem, we adopt particle-based stochastic dynamics, which enables tractable computation. There exists a unique iterative form corresponding to the PDE in equation 6, called the multi-dimensional Ornstein-Uhlenbeck process, which can be approximated using particle-based dynamics. In particular, we draw N(1 − ρ) uncertain samples from a single batch of N samples using equation 1, for a hyper-parameter 0 ≤ ρ ≤ 1. We then simulate a discrete stochastic differential equation (SDE) for each particle using the Euler-Maruyama scheme as follows:
$$X^n_{t+1} = X^n_t - \nabla\phi\!\left(X^n_t; m_\nu\right)\Delta_t + \sqrt{2\tau^{-1}\Delta_t\,\Sigma}\, Z^n_I, \qquad (7)$$
where $\phi(X_t; m_\nu) = \frac{\tau}{2} d_E^2(X_t, m_\nu)$, $n \in \{1, \cdots, N(1-\rho)\}$, $d_E$ is the Euclidean distance, and N is the mini-batch size. We selected the OU process as our stochastic dynamics for the following reasons. First, we want a computationally efficient, non-parametric method to estimate and minimize the second term of equation 5. The SDE in equation 7 corresponding to the OU process has a simple form with fixed drift and diffusion terms that are invariant over time, which allows a non-parametric simulation of the SDE. Because simulating equation 7 amounts to simple non-parametric for-loops in the implementation, our method is computationally very efficient compared to other baseline methods such as Han et al. (2018). Second, when estimating the empirical upper bound of the Wasserstein distance, the OU process allows us to use an explicit expression, called Mehler's formula, which can be efficiently estimated (please refer to the Appendix for more details). The overall procedure of our method is summarized in Algorithm 1.
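A minimal sketch of step 4 of Algorithm 1 is given below: it simulates the discrete OU-type SDE of equation 7 with the Euler-Maruyama scheme, using the moving-geodesic Gaussian parameters (m_α, Σ_α). The function name and the Cholesky-based covariance square root are our own assumptions; a small diagonal jitter may be needed in practice to keep Σ_α positive definite.

```python
import torch

def wasserstein_normalize(X, m_alpha, Sigma_alpha, T=64, dt=1e-4, tau=1e-3):
    """Transport uncertain pre-softmax samples X toward the detour Gaussian by
    simulating equation 7 for T Euler-Maruyama steps (a sketch)."""
    L = torch.linalg.cholesky(Sigma_alpha)            # Sigma_alpha^(1/2) factor
    Xt = X.clone()
    for _ in range(T):
        drift = -tau * (Xt - m_alpha)                 # -grad phi, with phi = (tau/2) ||x - m||^2
        noise = torch.randn_like(Xt) @ L.T            # draws ~ N(0, Sigma_alpha)
        Xt = Xt + drift * dt + (2.0 * dt / tau) ** 0.5 * noise
    return Xt                                         # approximately X_T ~ F_T mu
```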
3.2 WASSERSTEIN MOVING GEODESIC AVERAGE
In our experiments, we observe that the best measure ν is not updated for a few epochs after training begins. This is problematic because ν diverges significantly from the current certain measure ξk, which is equivalent to the normalized measure Fµk diverging from ξk, meaning XT and Y become increasingly statistically inconsistent. To alleviate this statistical distortion, we modify the detour measure from Nν to another Gaussian measure that captures the statistics of both ξk and ν. Inspired by the moving average of Gaussian parameters in batch normalization Ioffe & Szegedy (2015), we propose the Wasserstein moving geodesic average. Specifically, we replace the Gaussian parameters {mν, Σν} with {mα, Σα} such that $m_\alpha = (1-\alpha)m_\nu + \alpha m_{\xi_k}$ and $\Sigma_\alpha = ((1-\alpha)I_d + \alpha\mathcal{T})\,\Sigma_\nu\,((1-\alpha)I_d + \alpha\mathcal{T})$, where $\mathcal{T}$ is a solution to the Riccati equation $\mathcal{T}\Sigma_\nu\mathcal{T} = \Sigma_{\xi_k}$. Therefore, our final detour Gaussian measure is set to $\mathcal{N}^\alpha_\nu := \mathcal{N}(m(\alpha), \Sigma(\alpha))$ with $0 \le \alpha \le 1$.2
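The moving geodesic average can be computed in closed form because the Riccati equation T Σν T = Σξ admits the explicit solution T = Σν^{-1/2} (Σν^{1/2} Σξ Σν^{1/2})^{1/2} Σν^{-1/2}. The sketch below assumes Σν is positive definite; the function and variable names are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm

def moving_geodesic_average(m_nu, S_nu, m_xi, S_xi, alpha=0.2):
    """Wasserstein moving geodesic average between N(m_nu, S_nu) and N(m_xi, S_xi)."""
    S_nu_half = np.real(sqrtm(S_nu))
    S_nu_half_inv = np.linalg.inv(S_nu_half)
    T = S_nu_half_inv @ np.real(sqrtm(S_nu_half @ S_xi @ S_nu_half)) @ S_nu_half_inv
    A = (1.0 - alpha) * np.eye(len(m_nu)) + alpha * T   # (1 - alpha) I + alpha T
    Sigma_alpha = A @ S_nu @ A
    m_alpha = (1.0 - alpha) * m_nu + alpha * m_xi
    return m_alpha, Sigma_alpha
```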
4 THEORETICAL ANALYSIS
In equation 5, we select the detour point as a Gaussian measure because this measure can provide a statistical structure, which is similar to that of the optimal ν. In addition to this heuristic motivation, setting a detour point as a Gaussian measure (Gibbs measure) also provides theoretical advantages, e.g., the theoretical upper bound of the Wasserstein constraint terms. In this section, we investigate the explicit upper bounds of two terms in equation 5, which are naturally induced by the SDE.
2Please refer to Appendix C.4 for more details.
Proposition 2. A scalar 0 < β < ∞ exists and depends on ν, resulting in the following inequality:
$$\mathcal{W}_2(\nu, \mathcal{F}_t\mu) \le \varepsilon = K_1(\nu) \vee \left[ e^{-t}K_2(\mu) + K_2(\nu) \right], \qquad (8)$$
where $\lambda_{\max}(\Sigma_\nu)$ denotes the maximum eigenvalue of the covariance matrix $\Sigma_\nu$, and for some constant $0 < K_1 < \infty$ we have $K_1(\nu) = \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2$, which depends only on ν.
Intuitively, K2(µ) can be interpreted as an indicator that tells us how the uncertain measure µ is diffused, whereas the designed term e−tK2(µ) controls the upper bound of the Wasserstein distance using a variable t. The other term K2(ν) does not vanish even with a very large t, which assures a non-collapsing upper-bound ε.
Proposition 3. (Concentration inequality for the normalized uncertain measure). Assume that there are some constants $T \in [\frac{1}{\eta}, \infty)$, $\eta \ge 0$ such that the following inequality holds:
$$\mathbb{E}_{\mathcal{F}_T\mu}[f^2] - \left[\mathbb{E}_{\mathcal{F}_T\mu}[f]\right]^2 \le (1+\eta)\,\mathbb{E}_{\mathcal{F}_T\mu}\!\left[A\nabla f^T \nabla f\right], \quad f \in C_0^\infty(\mathbb{R}^d), \qquad (9)$$
for $A \in \mathrm{Sym}^+_d$ and $D(A, \Sigma_\nu) \le a\eta$ for some $a > 0$, with any metric D defined on $\mathrm{Sym}^+_d$. In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
$$\mathcal{F}_T\mu\!\left(|\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta\right) \le 6 e^{-\frac{\sqrt{2}\,\delta^{3/2}}{K_2(\mu)}}, \qquad (10)$$
where σ denotes a soft-max function.
In equation 10, we show that the label information induced by the normalized uncertain measure is close to that of the most certain measure, $\mathbb{E}_\nu[\sigma]$, where the upper bound depends exponentially on the initial diffuseness of µ (i.e., $K_2(\mu)$). Because the upper bound of the probability inequality does not collapse to zero and FTµ is concentrated around the most certain labels (i.e., $\mathbb{E}_\nu[\sigma]$), the uncertain samples $X_T \sim \mathcal{F}_T\mu$ help our method avoid over-parameterization.
4.1 EMPIRICAL UNDERSTANDINGS
We investigate the theoretical upper bound of the Wasserstein ambiguity (i.e., radius of the Wasserstein ball) for Fµ and its corresponding probability inequality. To provide more in-depth insights into the proposed method, we approximate the upper bound and demonstrate that our Wasserstein normalization actually makes neural networks more robust to label noise.
As we verified previously, according to Proposition 2, the following inequality holds:
$$\mathcal{W}_2(\mathcal{F}_t\mu, \nu) \le \varepsilon = K_1(\nu) \vee \left(K_2(\nu) + K_2(\mathcal{F}_t\mu)\right). \qquad (11)$$
Because the first term $K_1(\nu)$ is a constant that depends on ν and is generally small compared to the second term for $t \le T$, we only examine the behavior of the second term $K_2(\nu) + K_2(\mathcal{F}_t\mu)$, which can be efficiently approximated in a simple form. Because our detour measure is Gaussian, we have the following inequality for any $h \in C_0^\infty(\mathbb{R}^d)$3:
$$\hat{K}_2(\mu) = \lim_{s\to 0}\frac{1}{s}\,\mathbb{E}_{X\sim\mu,\,Z\sim\mathcal{N}_I}\!\left[ h\!\left( e^{-s}X + \sqrt{1-e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu) \right) - h(X) \right] \le K_2(\mu), \qquad (12)$$
where this equality holds if h is selected to induce a supremum over the set C∞0 . For approximation, we simply consider h(X) = ‖X‖2 as a test function. In this case, the following inequality naturally holds: ε̂ = K̂2(ν) + K̂2(Fµ) ≤ K2(ν) + K2(Fµ) ≤ K1(ν) ∨ (K2(ν) + K2(Fµ)) = ε. Thus, ε̂ can be considered as an approximation of the theoretical upper bound ε suggested in Proposition 2. Subsequently, we investigate the effects of Wasserstein normalization based on K̂2(µ) in equation 12.
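A possible way to evaluate the proxy ε̂ in practice is a finite-difference Monte-Carlo estimate of equation 12 with a small s and the L2-norm test function; the sketch below is our own approximation (the choice of s and the number of Gaussian draws are assumptions, not values reported in the paper).

```python
import torch

def estimate_K2(X, m_nu, Sigma_nu, s=1e-3, n_draws=32):
    """Finite-difference Monte-Carlo estimate of K2_hat(mu) from equation 12,
    using the test function h(x) = ||x||_2 (a sketch, not the exact supremum)."""
    L = torch.linalg.cholesky(Sigma_nu)
    h = lambda v: v.norm(dim=-1)
    shrink = torch.exp(torch.tensor(-s))
    spread = torch.sqrt(1.0 - torch.exp(torch.tensor(-2.0 * s)))
    diffs = []
    for _ in range(n_draws):                            # average over draws Z ~ N(0, I)
        Z = torch.randn_like(X) @ L.T + m_nu            # Sigma_nu^(1/2) Z + m_nu
        diffs.append((h(shrink * X + spread * Z) - h(X)).mean())
    return (torch.stack(diffs).mean() / s).item()       # (1/s) E[ h(P_s X) - h(X) ]
```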
(1) The proposed WDN ensures that the Wasserstein ambiguity is bounded. We examine the relation between ε̂ and test accuracy in an experiment using the CIFAR-10 dataset with symmetric noise at a ratio of 0.5. Fig.2 presents the landscape of the log10-scaled cumulative average of ε̂ and the test accuracy over epochs. The red dotted lines represent the landscape of the vanilla network with cross-entropy loss, where $\hat{\varepsilon}_k = \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=0}\mu_k)$ and k is the epoch index. In this case, the time constant t is set to zero, because Wasserstein normalization is not employed for the vanilla network. The black lines indicate the landscape of the proposed method, where $\hat{\varepsilon}_k = \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=T}\mu_k)$
3Please refer to Appendix C.2 for additional details.
in this case. It is noteworthy that the test accuracy of the vanilla network begins to decrease after 13 epochs (red dotted vertical lines in the top-right plot), whereas the Wasserstein ambiguity (i.e., the upper bound of the Wasserstein distance) increases quadratically in the top-left plot. These experimental results verify that, without any constraints, the distance between the uncertain and the most certain measure (i.e., ν) becomes large in the 2-Wasserstein space for vanilla networks. They also indicate a definite relationship between Wasserstein ambiguity and test accuracy. In the proposed WDN, the Wasserstein ambiguity can be efficiently bounded (i.e., $\limsup_k \hat{\varepsilon}_k \approx 2.15$) while the test accuracy continues to increase, even after 13 epochs. For a detailed analysis, we compute the deviation of the empirical upper bound as $\hat{\Delta}_k = \hat{\varepsilon}_k - \hat{\varepsilon}_{k-1}$. In the gray regions, the deviation for the vanilla network is greater than $2.5\times 10^{-2}$, i.e., $\hat{\Delta}_k > 2.5\times 10^{-2}$; its test accuracy then begins to drop, as shown in Fig.2. In contrast to the vanilla network, the maximum deviation of the proposed WDN is bounded above by a very small value ($\sup_k \hat{\Delta}_k \le 8\times 10^{-3}$). (2) The proposed WDN helps networks escape from over-parameterization. To analyze the behavior of deep neural networks under over-parameterization with and without the proposed WDN, we design several variants of the WDN, which begin at delayed epochs. The green, orange, and blue curves in the second row of Fig.2 represent the landscapes when our WDN is applied after $k_d \in \{10, 15, 20\}$ epochs, respectively. In this experiment, the upper bound $\hat{\varepsilon}_k$ is defined as
$$\hat{\varepsilon}_k = \begin{cases} \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=0}\mu_k), & \text{if } k < k_d, \\ \hat{K}_2(\nu_k) + \hat{K}_2(\mathcal{F}_{t=T}\mu_k), & \text{if } k \ge k_d. \end{cases} \qquad (13)$$
Consider $k_d = 20$, which is represented by the blue dotted vertical lines. Before our WDN is applied (i.e., $k < k_d$), the network suffers from over-parameterization, which induces a significant performance drop, as indicated by the blue curve in the bottom-right plot. However, the network rapidly recovers to normal accuracy once Wasserstein normalization is applied (i.e., $k \ge k_d$). Please note that similar behavior can be observed in the green and orange curves. In particular, the orange curve shows fewer fluctuations than the blue curve in terms of test accuracy. This indicates that the proposed WDN can help a network escape from over-parameterization by imposing geometric constraints on the Wasserstein space.
(3) The proposed WDN can derive data-dependent bounds according to different noise levels. Another interesting point in Fig.2 is that all curves, excluding the red curve, converge to specific numbers $2.15 = \underline{\varepsilon} := \liminf_k \hat{\varepsilon}_k \le \limsup_k \hat{\varepsilon}_k := \bar{\varepsilon} = 2.2$. The upper bound $\bar{\varepsilon}$ is neither overly enlarged nor collapsed to zero, while the lower bound $\underline{\varepsilon}$ is fixed for all curves. We argue that this behavior stems from the geometric characteristics of the proposed method, where the first term in equation 5, namely $\mathcal{W}_2(\nu, \mathcal{N}_\nu) \propto \hat{K}_2(\nu)$, is a non-zero data-dependent term that is minimized by the proposed geometric constraint. Therefore, we can derive the following relationship:
$$\left[\mathcal{W}_2(\nu, \mathcal{F}\mu) \le \mathcal{W}_2(\nu, \mathcal{N}_\nu) + \mathcal{W}_2(\mathcal{N}_\nu, \mathcal{F}\mu)\right]\!\Downarrow\ \propto\ \left[\hat{K}_2(\nu) + \hat{K}_2(\mathcal{F}\mu) = \hat{\varepsilon}\right]\!\Downarrow. \qquad (14)$$
This empirical observation verifies that a detour point set as a Gaussian measure can induce the data-dependent bound $(\underline{\varepsilon}, \bar{\varepsilon})$, where the bound varies according to different noise levels and efficiently leverages data-dependent statistics. Fig.2 indicates that classification models with more stable data-dependent bounds also exhibit more stable convergence in test accuracy.
5 EXPERIMENTS
5.1 EXPERIMENTS ON THE CIFAR-10/100 DATASET
We used settings similar to those proposed by Laine & Aila (2016); Han et al. (2018) for our experiments on the CIFAR-10/100 datasets. We used a 9-layered CNN as the baseline architecture with a batch size of 128. We used the Adam optimizer with $(\beta_1, \beta_2) = (0.9, 0.99)$, where the learning rate linearly decreased from $10^{-3}$ to $10^{-5}$.

Synthetic Noise. We injected label noise into clean datasets using a noise transition matrix $Q_{i,j} = \Pr(\hat{r} = j\,|\,r = i)$, where a noisy label r̂ is obtained from a true clean label r. We defined $Q_{i,j}$ by following the approach discussed by Han et al. (2018). For symmetric noise, we used the polynomial $\rho = -1.11r^2 + 1.78r + 0.04$ for $0.2 \le r \le 0.65$, where r is the noise ratio. For asymmetric noise, we set ρ to 0.35. To select the enhanced detour measure, we set α to 0.2 for the Wasserstein moving geodesic average in all experiments. We trained our classification model over 500 epochs because the test accuracy of our method continued increasing, whereas those of the other methods did not. We compared our method with other state-of-the-art methods, including [MentorNet, Jiang et al. (2018)], [Co-teaching, Han et al. (2018)], [Co-teaching+, Yu et al. (2019)], [GCE, Zhang & Sabuncu (2018)], [RoG, Lee et al. (2019)], [JoCoR, Wei et al. (2020)], [NPCL, Lyu & Tsang (2020b)], [SIGUA, Han et al. (2020)], and [DivideMix, Li et al. (2019a)]. As shown in Table 1, the proposed WDN significantly outperformed the other baseline methods. Please note that our WDN utilizes a simple Gaussian measure as a target pivot measure; thus, there are potential risks when handling highly concentrated and non-smooth types of noise (e.g., asymmetric noise). Nevertheless, the proposed WDN still produced accurate results, even with asymmetric noise. In this case, a variant of our WDN (i.e., WDNcot) exhibited the best performance.

Open-set Noise. In this experiment, we considered the open-set noisy scenario suggested by Wang et al. (2018), where a large number of training images were sampled from the CIFAR-100 dataset but were still labeled according to the classes of the CIFAR-10 dataset. We used the same 9-layered CNN as in our previous experiment. For the hyper-parameters, we set ρ and α to 0.5 and 0.2, respectively. As shown in Table 2, our method achieved state-of-the-art accuracy.

Collaboration with Other Methods. Because our core methodology is based on the small-loss criterion, our method can collaborate with co-teaching methods. In Han et al. (2018), only certain samples (Y ∼ ξ) were used for updating the colleague networks, where the number of uncertain samples gradually decreased until it reached a predetermined portion. To enhance the potentially bad statistics used for co-teaching, we taught the dual networks by considering the set of samples (Y, XT), where $X_T \sim \mathcal{F}_T\mu$ are uncertain samples enhanced using equation 7.
Table 1 shows the test accuracy results for the proposed collaboration model with a co-teaching network (WDNcot). This collaboration model achieved the most accurate performance on the CIFAR-100 dataset with asymmetric noise, which verifies that our WDN can be integrated into existing methods to improve their performance significantly, particularly when the density of the pre-logits is highly concentrated. Fig.3 reveals that co-teaching quickly falls into over-parameterization and exhibits a drastic drop in accuracy after the 15th epoch. WDNcot also exhibits a slight accuracy drop, but it surpasses the baseline co-teaching method by a large margin (+7%) during training. This demonstrates that our enhanced samples XT can alleviate the over-parameterization issues faced by conventional co-teaching models, which helps improve their accuracy significantly.
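For completeness, the symmetric label-noise injection used in these experiments can be sketched as follows; the uniform off-diagonal transition matrix is the standard construction of Han et al. (2018), while the function name and the random-seed handling are our own assumptions.

```python
import numpy as np

def inject_symmetric_noise(labels, noise_ratio, num_classes, seed=0):
    """Corrupt clean labels with a symmetric transition matrix Q, where
    Q[i, i] = 1 - noise_ratio and the remaining mass is spread uniformly."""
    rng = np.random.default_rng(seed)
    Q = np.full((num_classes, num_classes), noise_ratio / (num_classes - 1))
    np.fill_diagonal(Q, 1.0 - noise_ratio)
    noisy_labels = np.array([rng.choice(num_classes, p=Q[y]) for y in labels])
    return noisy_labels, Q
```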
5.2 EXPERIMENTS ON A REAL-WORLD DATASET
To evaluate our method on real-world datasets, we employed the Clothing1M dataset presented by Xiao et al. (2015), which consists of 1M noisy, labeled, large-scale cloth images with 14 classes collected from shopping websites. It contains 50K, 10K, and 14K clean images for training, testing, and validation, respectively. We only used the noisy set for training; for testing, we used the clean set. We set α = 0.2 and ρ = 0.1. For a fair comparison, we followed the settings suggested in previous works. We used a pre-trained ResNet50 as the baseline architecture with a batch size of 48. For the pre-processing steps, we applied a random center crop, random flipping, and normalization to 224 × 224 pixels. We adopted the Adam optimizer with a learning rate starting at $10^{-5}$ that linearly decayed to $5\times 10^{-6}$ at 24K iterations. Regarding the baseline methods, we compared the proposed method to [GCE, Zhang & Sabuncu (2018)], [D2L, Ma et al. (2018)], [FW, Patrini et al. (2017b)], [WAR, Damodaran et al. (2019)], [SL, Wang et al. (2019)], [JOFL, Tanaka et al. (2018)], [DMI, Xu et al. (2019)], [PENCIL, Yi & Wu (2019)], and [MLNT, Li et al. (2019b)]. Table 3 reveals that our method achieved competitive performance in comparison with the other baseline methods.
5.3 COMPUTATIONAL COST
Because Co-teaching, JoCoR, and DivideMix use additional networks, their number of network parameters (8.86M) is twice that of the vanilla network (4.43M). In Table 4, we compare the average training time for the first 5 epochs of various baseline methods under symmetric noise on the CIFAR-10 dataset. Non-parametric methods such as GCE and WDN require less than 12% additional time, whereas methods that require additional networks spend considerably more time. The average time can vary under different experimental environments; in Table 4, we measured the time using the publicly available code provided by the authors.
6 CONCLUSION
We proposed a novel method called WDN for the accurate classification of noisy labels. The proposed method normalizes uncertain measures to data-dependent Gaussian measures by imposing geometric constraints in the 2-Wasserstein space. We simulated a discrete SDE using the Euler-Maruyama scheme, which makes our method fast, computationally efficient, and non-parametric. In our theoretical analysis, we derived the explicit upper bound of the proposed Wasserstein normalization and experimentally demonstrated a strong relationship between this upper bound and over-parameterization. We conducted experiments on both the CIFAR-10/100 and Clothing1M datasets. The results demonstrated that the proposed WDN significantly outperforms other state-of-the-art methods.
A OPEN-SOURCE DATASET
Transition matrix for CIFAR-10/100. For the experiment summarized in Table 1, we implemented open-source code to generate the noise transition matrix discussed by Han et al. (2018), as well as the 9-layered CNN architecture (https://github.com/bhanML/Co-teaching).
Open-set noise. For the experiment summarized in Table 2, we used the same dataset for open-set noisy labels presented by Lee et al. (2019) (https://github.com/pokaxpoka/ RoGNoisyLabel).
Clothing1M. For the experiment summarized in Table 3, we used the open-source dataset presented by Xiao et al. (2015) (https://github.com/Cysu/noisy_label).
B COMPARISONS TO RELATED WORKS
Methodology        | Parametric | Class-dependency | Distillation | Sample-weight | Sample-selection
DivideMix          | ✓          | ✗                | ✗            | ✗             | ✓
Co-teaching        | ✓          | ✗                | ✓            | ✗             | ✓
JoCoR              | ✓          | ✗                | ✓            | ✗             | ✓
MLNT               | ✓          | ✓                | ✓            | ✗             | ✗
Ren et al. (2018)  | ✗          | ✗                | ✗            | ✓             | ✗
NPCL               | ✗          | ✗                | ✗            | ✓             | ✗
GCE                | ✗          | ✗                | ✗            | ✓             | ✗
WDN                | ✗          | ✗                | ✗            | ✗             | ✗
Table B indicates that no previous methodologies can conceptually include our method.
Because the solution to the Fokker-Planck equation can be explicitly calculated without any additional parameters, our method is fully non-parametric (in terms of additional parameters beyond those required by the original neural network). By contrast, co-teaching is parametric because it requires a clone network whose additional parameters are copies of those in the original network. Similarly, MLNT requires an additional teacher network for training, which also contains a large number of parameters.
Many methods based on the small-loss criterion select only certain samples, whereas our method uses the combination of ρN certain and (1 − ρ)N normalized uncertain samples. Therefore, our method can fully leverage the batches of training datasets, where (1 − ρ)N + ρN = N. Additionally, our method does not assume any class-dependent prior knowledge. Rather than considering class-wise prior knowledge, our method uses holistic information from both certain and uncertain samples (i.e., Y and XT) in the logit space. Other meta-class-based models, such as MLNT, assume class-wise meta prior knowledge from a teacher network.
Arazo et al. (2019) assumed a beta-mixture model as the label distribution on the label space. However, due to the non-deterministic nature of noisy label distributions, this approach sometimes fails to train with extremely non-uniform types of noise; for example, Arazo et al. (2019) reported a failure case on the Clothing1M dataset. It seems that the fundamental assumption on the noise model of MixUp will be improved in future work. Similar to this method, our work has trouble when dealing with synthetic asymmetric noise with a high ratio, where a relatively large performance drop is observed in Table 1 (although our method produces the second-best performance in the table).
In the most recent work, Li et al. (2019a) also adopt co-training by implementing an additional dual network, but with a much more sophisticated methodology called Co-divide/guessing based on SSL. We conjecture that the Wasserstein distance between the labeled and unlabeled probability measures is well controlled in their method, and we think that applying the OT/Markov theory (as in our paper) to their method would broaden the understanding of the LNL problem.
In contrast to sample weight methods such as GCE and NPCL, which require prior knowledge regarding the cardinality of the training samples to be weighted, our method is free from such assumptions because our Wasserstein normalization is applied in a batch-wise manner.
C TECHNICAL DIFFICULTY FOR APPLYING GENERAL OPTIMAL TRANSPORT/MARKOV THEORY TO LABEL SPACE.
Let X, Y be uncertain and certain samples in the pre-softmax feature space, and assume that we consider the distributional constraint on the label space (the space of σ(X), σ(Y), where σ denotes the softmax function). This space is not suitable for defining an objective function such as equation 5. All samples in this label space are of the form $\sigma(X) = [a_1, a_2, \cdots, a_d]$ with $\sum_{i=1}^d a_i = 1$; thus, the label space is the d-dimensional affine simplex $U_d$, which is a subset of Euclidean space, $U_d \subset \mathbb{R}^d$. In this case, the definition of the Wasserstein space in equation 4 is unacceptable because $d_E$ is not a true metric on $U_d$. Moreover, the Wasserstein space $\mathcal{P}_2(U_d)$ is rarely investigated in the mathematical literature, which makes it impossible to use the technical details, assumptions, and theories developed for $\mathcal{P}_2(\mathbb{R}^d)$, which form the theoretical ground of our work. However, if we look at this problem from a slightly different point of view, for example by considering the pre-softmax space $\mathbb{R}^d$ and $\mathcal{P}_2(\mathbb{R}^d)$ as our base space, all the technical issues arising when trying to use OT tools in $\mathcal{P}_2(U_d)$ can be overcome or ignored. Because the softmax is a non-parametric one-to-one function connecting the pre-softmax feature space $\mathbb{R}^d$ to $U_d$, there exists a unique label in $U_d$ as the mapped point of each manipulated uncertain sample. Even though our objects are defined on the pre-softmax space, the theoretical analysis in Proposition 3 contains the softmax function to evaluate the concentration inequality of the proposed transformation F as it acts on the label space $U_d$.
D MATHEMATICAL BACKGROUND
In this section, we introduce important definitions, notations, and propositions used in our proofs and the main paper.
D.1 NOTATION
We denote $f_\#\mu$ as the push-forward of µ through f. $C_0^\infty(\mathbb{R}^d)$ denotes the set of $\infty$-class functions with compact support in $\mathbb{R}^d$. For the $L_p$-norm of a function f, we write $\|f\|_{p,\nu} = (\int |f|^p d\nu)^{1/p}$. The Hessian matrix of a function f is denoted as $\mathrm{Hess}[f] = [\partial_i\partial_j f]_{i,j}^d$. $\mathrm{Sym}^+_d$ denotes the space of positive semi-definite symmetric matrices of size $d\times d$. $\|f\|_{\mathrm{Lip}}$ denotes the Lipschitz norm of the function f. For any matrix $A \in M_d$, we let $\|A\|_{op}$ denote the operator norm of A.
D.2 DIFFUSION-INVARIANCE AND HYPER-CONTRACTIVITY
Definition 2. The Markov semigroup $(P_t)_{t\ge 0}$ in $\mathbb{R}^d$ acting on a function $f \in C_0^\infty$ is defined as follows:
$$P_t f(x) = \int f(x')\, p_t(x, dx'), \qquad (15)$$
where $p_t(x, dx')$ is a transition kernel that is a probability measure for all $t \ge 0$.
Definition 3. (Diffusion Operator) Given a Markov semi-group $P_t$ at time t, the diffusion operator (i.e., infinitesimal generator) L of $P_t$ is defined as
$$Lg(y) = \lim_{t\to 0}\frac{1}{t}\left(P_t g(y) - g(y)\right) = \sum_{i,j}\frac{\partial^2}{\partial y_i \partial y_j} B_{ij}(y) g(y) - \sum_i A_i(y)\frac{\partial}{\partial y_i} g(y), \qquad (16)$$
where B and A are matrix- and vector-valued measurable functions, respectively; $B_{ij}$ denotes the (i, j)-th component function of B and $A_i$ denotes the i-th component function of A.
Definition 4. (Diffusion-invariant Measure) Given the diffusion operator L, the probability measure µ is said to be an invariant measure for L when $\mathbb{E}_{X\sim\mu}[Lf(X)] = 0$ for any $f \in C_0^\infty$.
Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010).) The Gaussian measure $\mathcal{N}_\nu := \mathcal{N}(m_\nu, \Sigma_\nu)$ with mean $m_\nu$ and covariance $\Sigma_\nu$ is an invariant measure for the following diffusion operator L:
$$Lf(x) = \Sigma_\nu \mathrm{Hess}[f](x) - (x - m_\nu)^T \nabla f(x), \quad \forall f \in C_0^\infty(\mathbb{R}^d), \qquad (17)$$
where $B_{ij}(x) := [\Sigma_\nu]_{ij}$ is a constant function and $A_i(x) := x_i - m^i_\nu$.
This generator serves as our main tool for the geometric analysis of the upper bound ε. In Section 4.1 of the main paper, we introduced an approximate upper bound $\hat{K}_2(\mu)$ without a general description of the underlying inequality. We now introduce the underlying mathematics for equation 12. Because our detour measure is Gaussian, there is a unique semi-group $P_t h$, called the multi-dimensional Ornstein-Uhlenbeck semi-group, that is invariant to $\mathcal{N}_\nu$. Specifically, $P_t$ is defined as follows:
$$P_s h(X) = \mathbb{E}_{Z\sim\mathcal{N}_I}\!\left[ h\!\left( e^{-s}X + \sqrt{1-e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu) \right) \right], \quad \forall h \in C_0^\infty. \qquad (18)$$
The invariance property of $P_t$ relative to our detour measure is naturally induced by the following proposition.
Proposition 4. We define $C : \mathbb{R}^d \to \mathbb{R}^d$, $C(X) = AX + \mathbf{b}$ such that $A \in \mathrm{Sym}^+_d$, $\mathbf{b} \in \mathbb{R}^d$, and select an arbitrary smooth $h \in C_0^\infty(\mathbb{R}^d)$. We then define the diffusion Markov semi-group $P_s h$ as follows:
$$P_s h(X) = \mathbb{E}_{Z\sim\mathcal{N}}\!\left[ h\!\left( e^{-s}X + \sqrt{1-e^{-2s}}\,C(Z) \right) \right]. \qquad (19)$$
Then, $\mathcal{N}(A^2, \mathbf{b})$ is invariant with respect to $P_s$, meaning the following equality holds for every h and $s \ge 0$:
$$\int_{\mathbb{R}^d} \left[P_s h(X) - h(X)\right] d\mathcal{N}(A^2, \mathbf{b})(X) = 0. \qquad (20)$$
Proof. For simplicity, we denote $\mathcal{N}(A^2, \mathbf{b}) := \mathcal{N}_C$.
$$\int P_s h(X)\, d\mathcal{N}_C(X) = \int\!\!\int h\!\left(e^{-s}X + \sqrt{1-e^{-2s}}\,C(Z)\right) d\mathcal{N}_C(X)\, d\mathcal{N}(Z) = \int\!\!\int h \circ C\!\left(e^{-s}Z' + \sqrt{1-e^{-2s}}\,Z\right) d\mathcal{N}(Z')\, d\mathcal{N}(Z). \qquad (21)$$
The second equality holds because C is linear in $\mathbb{R}^d$. Let $e^{-s} = \cos\theta$ and $\sqrt{1-e^{-2s}} = \sin\theta$ for some $0 \le \theta \le 2\pi$. Then, we define φ as $\varphi(Z', Z) = e^{-s}Z' + \sqrt{1-e^{-2s}}\,Z = \cos(\theta)Z' + \sin(\theta)Z$, and $\pi(Z', Z) = Z$. Based on the rotation invariance of the standard Gaussian measure, one can induce the following equality:
$$(\mathcal{N}\otimes\mathcal{N}) \circ (C \circ \varphi)^{-1} = \left((\mathcal{N}\otimes\mathcal{N}) \circ \varphi^{-1}\right) \circ C^{-1} = \mathcal{N} \circ C^{-1}. \qquad (22)$$
However, we know that $d\mathcal{N}[C^{-1}(X)] = d\mathcal{N}_C(X) = \left((2\pi)^d |A^2|\right)^{-\frac12} e^{-0.5 (X-\mathbf{b})^T A^{-2} (X-\mathbf{b})}$. By combining equation 21 and equation 22, one can derive the following result:
$$\int h \circ C\!\left(e^{-s}Z' + \sqrt{1-e^{-2s}}\,Z\right) d[\mathcal{N}\otimes\mathcal{N}] = \int h(X)\, d\!\left[(\mathcal{N}\otimes\mathcal{N}) \circ \varphi^{-1} \circ C^{-1}\right](X) = \int h(X)\, d[\mathcal{N}\circ C^{-1}](X) = \int h(X)\, d\mathcal{N}[C^{-1}(X)] = \int h(X)\, d\mathcal{N}_C(X). \qquad (23)$$
Proposition 4 demonstrates the invariance property of the defined semi-group. If we set $A = \Sigma_\nu^{1/2}$ and $\mathbf{b} = m_\nu$, then we recover equation 18.
We are now ready to define the approximation of $K_2(\mu)$ in terms of semi-group invariance. Specifically, for any real-valued smooth h, we define the following inequality:
$$\hat{K}_2(\mu) = \mathbb{E}_{X\sim\mu}[Lh(X)] = \lim_{s\to 0}\mathbb{E}_{X\sim\mu}\!\left[\frac{1}{s}\left(P_s h(X) - h(X)\right)\right] = \lim_{s\to 0}\frac{1}{s}\,\mathbb{E}_{X\sim\mu,\,Z\sim\mathcal{N}_I}\!\left[ h\!\left( e^{-s}X + \sqrt{1-e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu) \right) - h(X) \right] \le K_2(\mu). \qquad (24)$$
This inequality holds with equality if h is selected to induce the supremum over the set $C_0^\infty$, where $\sup_h \hat{K}_2(\mu, h) = \sup_h \mathbb{E}_{X\sim\mu}[Lh(X)] = K_2(\mu)$. Although a more sophisticated design of the test function h would induce a tighter bound for $\hat{K}_2$, we found that the $L_2$-norm is generally sufficient.
Definition 5. (Diffuseness of the probability measure) We define the integral operator $K_2 : \mathcal{W}_2(\mathbb{R}^d) \to \mathbb{R}^+$ as follows:
$$K_2(\mu) = \sqrt{\sup_{f\in C_0^\infty} \int_{\mathbb{R}^d} |Lf(x)|\, d\mu(x)}. \qquad (25)$$
According to Definition 4, we know that $\int Lf(X)\, d\mathcal{N}_\nu(X) = 0$ for any f. Based on this observation, it is intuitive that $K_2$ estimates how far the probability measure is distorted in terms of diffusion invariance. Because this quantity takes a supremum over the function space $C_0^\infty$, it searches for a function that enables the estimation of maximal distortion. Because the value of $K_2$ is entirely dependent on the structure of µ, $K_2$ can be considered a constant for simplicity if the uncertain measure µ is fixed over one training iteration.
Definition 6. (Diffusion carré du champ) Let $f, g \in C_0^\infty(\mathbb{R}^d)$. Then, we define a bilinear form $\Gamma_e$ on $C_0^\infty(\mathbb{R}^d)\times C_0^\infty(\mathbb{R}^d)$ as
$$\Gamma_e(f, g) = \frac12\left[L\Gamma_{e-1}(f, g) - \Gamma_{e-1}(f, Lg) - \Gamma_{e-1}(g, Lf)\right], \quad e \ge 1. \qquad (26)$$
We also denote $\Gamma(f) \equiv \Gamma(f, f)$. The bilinear form Γ can be considered a generalization of the integration-by-parts formula, where $\int fLg + \Gamma(f, g)\, d\mu = 0$ for the invariant measure µ of L.
Definition 7. (Curvature-Dimension condition, Ambrosio et al. (2015)) We can say that the infinitesimal generator L induces the CD(ρ,∞) curvature-dimension condition if it satisfies Γ1(f) ≤ ρΓ2(f) for all f ∈ C∞0 .
Because our diffusion operator generates a semi-group with respect to the Gibbs measure, the curvature-dimension condition can be calculated explicitly. Through simple calculations, the first-order (e = 1) diffusion carré du champ is induced as follows:
$$\Gamma_1(f) = \left([\nabla f]^T \Sigma_\nu \nabla f\right)^2. \qquad (27)$$
Similarly, the second-order (e = 2) diffusion carré du champ is calculated as follows:
$$\Gamma_2(f) = \frac12\left[ L\!\left(\Gamma_1(f^2)\right) - 2\Gamma_1(f, L(f)) \right] = \mathrm{Tr}\!\left(\left[\Sigma_\nu \nabla^2 f\right]^2\right) + \left([\nabla f]^T \Sigma_\nu \nabla f\right)^2 = \mathrm{Tr}\!\left(\left[\Sigma_\nu \nabla^2 f\right]^2\right) + \Gamma_1(f), \qquad (28)$$
for an arbitrary $f \in C_0^\infty(\mathbb{R}^d)$. Because $\mathrm{Tr}([\Sigma_\nu \nabla^2 f]^2)$ is non-negative, we can infer that $\Gamma_1 \le \Gamma_2$. In this case, the diffusion operator L defined in Lemma 1 induces the $CD(\rho = 1, \infty)$ curvature-dimension condition. For other diffusion operators, please refer to Bolley & Gentil (2010).
Proposition 5. (Decay of Fisher information along a Markov semigroup, Bakry et al. (2013).) If we assume the curvature-dimension condition $CD(\rho, \infty)$, then $I(\mu_t|\mathcal{N}_\nu) \le e^{-2\rho t} I(\mu|\mathcal{N}_\nu)$.
The exponential decay of the Fisher information in Proposition 5 is a core property of the exponential decay of the Wasserstein distance, which will be used in the proof of Proposition 2.
D.3 FOKKER-PLANCK EQUATION, SDE
Definition 8. (Over-damped Langevin Dynamics) We have
$$dX_t = -\nabla\phi(X_t; m_\nu)\, dt + \sqrt{2\tau^{-1}\Sigma_\nu}\, dW_t, \qquad (29)$$
where $\phi(X_t; m_\nu) = \frac{\tau}{2} d^2(X_t, m_\nu)$, $W_t$ denotes Brownian motion, and d denotes the Euclidean distance. The particle $X_t$ is distributed as $X_t \sim p_t$. The probability density $\lim_{t\to\infty} p(x, t)$ of $X_\infty$ converges to the Gaussian density $X_\infty = \sqrt{\Sigma_\nu}(Z + m_\nu) \sim p_\infty(x) = q(x) \propto e^{-d(x, m_\nu)^T \Sigma_\nu^{-1} d(x, m_\nu)}$.
In the classical SDE literature, it is stated that $\mathbb{E}\big[\sup_{0\le t\le T}|\hat{X}_t - X_t|\big] \le G(T)(N\rho)^{-\frac12}$, where G(T) is a constant that depends only on T and $\hat{X}$ denotes the true solution of the SDE in equation 29. When the number of uncertain samples satisfies $N\rho > 40$, our method exhibits acceptable convergence.
D.4 GAUSSIAN WASSERSTEIN SUBSPACES
It is known that the space of non-degenerate Gaussian measures (i.e., those whose covariance matrices are positive-definite) forms a subspace of the 2-Wasserstein space, denoted as $\mathcal{W}_{2,g} \cong \mathrm{Sym}^+_d \times \mathbb{R}^d$. Because the 2-Wasserstein space can be considered a Riemannian manifold equipped with Riemannian metrics Villani (2008), $\mathcal{W}_{2,g}$ can be endowed with a Riemannian structure that also induces the Wasserstein metric (McCann (1997)). In the Riemannian sub-manifold of Gaussian measures, the geodesic between two points $\gamma(0) = \mathcal{N}_A$ and $\gamma(1) = \mathcal{N}_B$ is defined as follows Malagò et al. (2018):
$$\gamma(\alpha) = \mathcal{N}_\alpha = \mathcal{N}(m(\alpha), \Sigma(\alpha)), \qquad (30)$$
where $m(\alpha) = (1-\alpha)m_A + \alpha m_B$ and $\Sigma(\alpha) = \left[(1-\alpha)I + \alpha\mathcal{T}\right]\Sigma_A\left[(1-\alpha)I + \alpha\mathcal{T}\right]$, with $\mathcal{T}\Sigma_A\mathcal{T} = \Sigma_B$. In Section 3.2, we set $(m_A, \Sigma_A) \to (m_\nu, \Sigma_\nu)$ and $(m_B, \Sigma_B) \to (m_{\xi_k}, \Sigma_{\xi_k})$. Regardless of how ν is updated, the statistical information of the current certain measure $\xi_k$ is incorporated in the detour Gaussian measure, which yields a much smoother geometric constraint on µ.
E PROOFS
Proposition 6. Let $\Gamma(\mu, \nu)$ be the set of couplings between µ and ν, and assume that the noisy label r̂ is independent of X. For the functional $J[\mu] = \mathbb{E}_{X\sim\mu}\, l(X; \hat{r})$, we define $D(\mu, \nu)$ as:
$$D(\mu, \nu) = \inf_{\gamma\in\Gamma(\mu,\nu)} |J[\mu] - J[\nu]|, \qquad (31)$$
where $D : \mathcal{P}_2\times\mathcal{P}_2 \to \mathbb{R}$. Then, D is a metric on $\mathcal{P}_2$ that is weaker than the Wasserstein metric, with $D(\mu, \nu) \le \alpha\,\mathcal{W}_2(\mu, \nu)$ for $\alpha = c_0^{-1}\hat{r} + c_1^{-1}(1-\hat{r})$ and some constants $c_0, c_1 > 0$.
Proof.
$$\begin{aligned} |J[\nu] - J[\mu]| &= \left|\mathbb{E}_\mu[l(X;\hat{r})] - \mathbb{E}_\nu[l(Z;\hat{r})]\right| \\ &= \left|\mathbb{E}_{\mu\otimes\nu}\!\left[\hat{r}\left(\log\sigma(X) - \log\sigma(Z)\right) - (1-\hat{r})\left(\log(1-\sigma(X)) - \log(1-\sigma(Z))\right)\right]\right| \\ &\le \mathbb{E}\left|\hat{r}\,\mathbb{E}_{\mu\otimes\nu}[\log\sigma(X) - \log\sigma(Z)]\right| + \mathbb{E}\left|(1-\hat{r})\,\mathbb{E}_{\mu\otimes\nu}[\log(1-\sigma(X)) - \log(1-\sigma(Z))]\right| \\ &\le \mathbb{E}\hat{r}\,\mathbb{E}_{\mu\otimes\nu}\left|\log\sigma(X) - \log\sigma(Z)\right| + \mathbb{E}(1-\hat{r})\,\mathbb{E}_{\mu\otimes\nu}\left|\log(1-\sigma(X)) - \log(1-\sigma(Z))\right| \\ &\le c_0^{-1}\mathbb{E}(\hat{r})\,\mathbb{E}_{\mu\otimes\nu}|X - Z| + c_1^{-1}\mathbb{E}(1-\hat{r})\,\mathbb{E}_{\mu\otimes\nu}|Z - X| \\ &= \mathbb{E}\!\left[c_0^{-1}\hat{r} + c_1^{-1}(1-\hat{r})\right]\mathbb{E}_{\mu\otimes\nu}|X - Z| \end{aligned} \qquad (32)$$
By taking the infimum of the above inequality over the set of couplings $\gamma(\mu, \nu)$, we obtain the following inequality:
$$D(\nu, \mu) = \inf_{\gamma(\mu,\nu)} |J[\nu] - J[\mu]| \le \mathbb{E}\!\left[c_0^{-1}\hat{r} + c_1^{-1}(1-\hat{r})\right] \inf_{\gamma(\mu,\nu)} \mathbb{E}_\gamma|X - Z| = \mathbb{E}\!\left[c_0^{-1}\hat{r} + c_1^{-1}(1-\hat{r})\right]\mathcal{W}_1(\mu, \nu) \le \mathbb{E}\!\left[c_0^{-1}\hat{r} + c_1^{-1}(1-\hat{r})\right]\mathcal{W}_2(\mu, \nu), \qquad (33)$$
which completes the proof.
Proposition 6 follows from the Lipschitzness of the functional J, where D searches for the best coupling to derive the minimal loss difference between two probability measures. This proposition indicates that $\inf |J[\nu] - J[\mathcal{F}\mu]|$ is bounded by the Wasserstein distance, which justifies our geometric constraint presented in equation 4. It should be noted that the prior assumption regarding noisy labels is essential for the Lipschitzness.
Proposition 7. Let $\mathcal{F} : \mathbb{R}^+\times\mathcal{P}_2$ be a functional on probability measures such that $\mathcal{F}[t, \mu] = \mu_t$, where $d\mu_t = p_t\, d\mathcal{N}_\nu$, $d\mathcal{N}_\nu = q_t\, dx$, and let $\mu_t$ be a solution of the continuity equation in the 2-Wasserstein space defined as follows:
$$\partial_t \mu_t = \nabla\cdot(\mu_t \nabla\Phi_t), \qquad (34)$$
which is represented as $\partial_t p(t, x) = \nabla\cdot(p(t, x)\nabla\log q(t, x))$ in a distributional sense. Then, the functional $\mathcal{F}_t[\cdot] = \mathcal{F}[t, \cdot]$ is uniquely defined and normalizes µ onto $B_{W_2}(\mathcal{N}_\nu, e^{-t}K_2(\mu))$, where $K_2(\mu) \le \infty$ is the integral operator in Definition 5 applied to µ.
Proof. We assume that the probability measure $\mu_t$ is absolutely continuous with respect to the detour Gaussian measure $\mathcal{N}(m_\nu, \Sigma_\nu) = \mathcal{N}_\nu$, i.e., $\mu_t \ll \mathcal{N}_\nu$. In this case, according to the Radon-Nikodym theorem, there is a corresponding unique probability density $q(t, x) = q_t(x) \in C_0^\infty$ such that $d\mu_t = q_t\, d\mathcal{N}_\nu$.
Lemma 2. (WI-inequality, Otto & Villani (2000)) If the stationary state of $\mu_t$ with respect to $P_t$ satisfies $\lim_{t\to\infty}\mathbb{E}_\mu[P_t f] = 0$ for any $f \in C_0^\infty$, then the following inequality holds:
$$\frac{d}{dt^+}\mathcal{W}_2(\mu, \mu_t) \le \sqrt{I(\mu_t|\mathcal{N}_\nu)}. \qquad (35)$$
By integrating both sides of the inequality in Lemma 2 with respect to $t \in (0, \infty)$, the following inequality can be obtained:
$$\mathcal{W}_2(\mu_t, \mathcal{N}_\nu) = \int_0^\infty \frac{d}{dt^+}\mathcal{W}_2(\mu_t, \mathcal{N}_\nu)\, dt \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\, dt. \qquad (36)$$
In the aforementioned inequality, we replace the Fisher information with the diffusion generator L as follows:
$$\mathcal{W}_2(\mu, \mathcal{N}_\nu) \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\, dt = \int_0^\infty \sqrt{\int [P_t q]^{-1}\Gamma(P_t q)\, d\mathcal{N}_\nu}\, dt = \int_0^\infty \sqrt{\int L(-\log P_t q)\, d\mu_t}\, dt. \qquad (37)$$
The second equality above is derived by leveraging the properties of the bilinear operator Γ (Bakry et al. (2013); Villani (2008)) with respect to the diffusion operator L, which is defined as follows:
$$\int [P_t q]^{-1}\Gamma(P_t q)\, d\mathcal{N}_\nu = -\int L(\log P_t q)\, q_t\, d\mathcal{N}_\nu = \int L(-\log P_t q)\, d\mu_t \ge 0. \qquad (38)$$
For simplicity, we denote $|g| = g^+$ for any $g \in C_0^\infty$. According to Proposition 5, we can relate $\mathcal{F}_t\mu = \mu_t$ to its initial term $\mu = \mu_{t=0}$ as follows:
$$\begin{aligned} \int_0^\infty \sqrt{\int L(-\log P_t q)(X)\, d[\mathcal{F}_t\mu](X)}\, dt &\le \int_0^\infty \sqrt{e^{-2\rho t}\int L(-\log P_{t=0} q)(X)\, d\mu(X)}\, dt \\ &\le \int_0^\infty \sqrt{e^{-2\rho t}\sup_{g\in C_0^\infty}\int L^+ g(Z)\, q\, d\mathcal{N}_\nu(Z)}\, dt \\ &= \int_0^\infty \sqrt{e^{-2\rho t}}\, dt\ \sqrt{\sup_{g\in C_0^\infty}\int L^+ g(X)\, d\mu(X)} \\ &= \rho^{-1}K_2(\mu). \end{aligned} \qquad (39)$$
The second inequality is naturally induced because the proposed objective function is defined to select the maximal element over the set of functions $g \in C_0^\infty$ and $Lg \le L^+g$. If the integration interval is set to $(0, s)$, then we can induce $\mathcal{W}_2(\mu, \mathcal{F}_t\mu) \le \frac{1}{\rho}(1 - e^{-s})K_2(\mu)$. Our diffusion operator induces ρ = 1, which completes the proof.
Proposition 8. There is a scalar $0 < \beta < \infty$ dependent on ν such that the following inequality holds:
$$\mathcal{W}_2(\nu, \mathcal{F}_t\mu) \le \left[\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\right] \vee \left[e^{-t}K_2(\mu) + K_2(\nu)\right]. \qquad (40)$$
As a motivation for setting the detour measure to $\mathcal{N}_\nu$, we mentioned the natural property of the non-collapsing Wasserstein distance, $\mathcal{W}_2(\nu, \mathcal{N}_\nu) \ne 0$. However, it is unclear from a geometric perspective exactly how an upper bound on $\mathcal{W}_2(\nu, \mathcal{N}_\nu)$ can be induced for the intrinsic statistics term (i.e., $d_1$ in Fig.1). Specifically, in the situation where the covariance matrices of ν and $\mathcal{N}_\nu$ are identical, it is difficult to determine a theoretical upper bound without additional tools. The first part of this proof focuses on resolving this important issue. The second part of the proof is naturally induced by Proposition 1. Please note that in the following proposition, the parameter for the Wasserstein moving average is set to α = 0 for clarity.
Proof. Before proceeding with the first part of the proof, we define a constant β as follows:
$$\beta = \sup_{1\le j\le d}\int_0^1 \frac{1}{s}\,\mathbb{E}_{Y_s}\, v_{s,j}^2(Y_s)\, ds. \qquad (41)$$
If we assume a mild condition such that $\inf_{1\le j\le d} O(v_{s,j}) \ge O(\sqrt{s})$, then the integral term in β is finite and well-defined. This value will directly yield the upper bound of the Kullback-Leibler (KL) divergence of ν. First, we introduce the following inequality.
Lemma 3. (de Bruijn's identity, Johnson & Suhov (2001); Nourdin et al. (2014)) Let $Y \sim \nu$, let $Z \sim \mathcal{N}(0, I)$ denote a standard Gaussian random variable, and define $Y_s = \sqrt{s}\,Y + \sqrt{1-s}\,\Sigma_\nu^{1/2} Z$ with the score function $v_s(x) = \nabla\log p_s(x)$ of the random variable $Y_s$. Then, the following equality holds:
$$KL(\nu|\mathcal{N}(0, \Sigma_\nu)) = \int_0^1 \mathrm{Tr}\!\left(\frac{1}{2s}\,\Sigma_\nu\,\mathbb{E}_{Y_s\sim p_s}\!\left[v_s(Y_s)v_s(Y_s)^T\right]\right) ds. \qquad (42)$$
From equation 42, we can derive the relation between the KL-divergence and the constant β defined earlier:
$$\int_0^1 \frac{1}{2s}\mathrm{Tr}\!\left(\Sigma_\nu\,\mathbb{E}\!\left[v_s(Y_s)v_s(Y_s)^T\right]\right) ds \le \int_0^1 \frac{1}{2s}\mathrm{Tr}\!\left(\Sigma_\nu\,\mathbb{E}\!\left[v_{s,i}v_{s,j}\right]_{i,j}^d\right) ds \le \int_0^1 \frac12\lambda_{\max}(\Sigma_\nu)\sum_{j=1}^d \mathbb{E}\!\left[\frac{v_{s,j}^2(Y_s)}{s}\right] ds \le \frac12\lambda_{\max}(\Sigma_\nu)\int_0^1 \sum_{j=1}^d \beta\, ds = \frac12\lambda_{\max}(\Sigma_\nu)\, d\beta. \qquad (43)$$
The second inequality holds based on the following elementary property of symmetric positive-definite matrices:
$$\mathrm{Tr}(AB) \le \|A\|_{op}\mathrm{Tr}(B) = \lambda_{\max}(A)\mathrm{Tr}(B), \quad \forall A, B \in \mathrm{Sym}^+_d. \qquad (44)$$
It should be noted that because the distribution of ν is compactly supported (i.e., supp(q) is compact), the maximum eigenvalue of the covariance Σν is finite. The other relations are induced by the aforementioned definition. Next, we relate the KL-divergence and 2-Wasserstein distance naturally.
Definition 9. (Talagrand inequality for Gaussian measures, Otto & Villani (2000)) For any non-degenerate Gaussian measure $\mathcal{N}$ with mean 0, the following inequality is satisfied:
$$\mathcal{W}_2(\nu, \mathcal{N}) \le \sqrt{2KL(\nu|\mathcal{N})}, \quad \forall\nu\in\mathcal{P}_2(\mathbb{R}^d). \qquad (45)$$
By combining Definition 9 and equation 43, we can derive the following expression:
$$\mathcal{W}_2(\nu, \mathcal{N}(0, \Sigma_\nu)) \le \sqrt{2KL(\nu|\mathcal{N}(0, \Sigma_\nu))} \le \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} < \infty. \qquad (46)$$
According to the triangle inequality for the 2-Wasserstein distance, we obtain:
$$\mathcal{W}_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) \le \mathcal{W}_2(\nu, \mathcal{N}(0, \Sigma_\nu)) + \mathcal{W}_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{N}(0, \Sigma_\nu)). \qquad (47)$$
In Appendix C.3, we investigated that the geodesic distance between two Gaussian measures having the same covariance is equivalent to the Euclidean distance between two means. Therefore, we can obtain the following equality:
$$\mathcal{W}_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{N}(0, \Sigma_\nu)) = \mathcal{W}_2\!\left(\iota_{m_\nu\#}[\mathcal{N}(0, \Sigma_\nu)], \mathcal{N}(0, \Sigma_\nu)\right) = \|m_\nu - 0\|_2 = \|\mathbb{E}_\nu Y\|_2, \qquad (48)$$
where $\iota_a(X) = X + a$ for any vector $a \in \mathrm{supp}(q)$. Now, by adding the two inequalities defined earlier, we obtain
$$\mathcal{W}_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) \le \|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}, \qquad (49)$$
where it is easily seen that the upper bound depends only on the statistical structure of ν. Specifically, the term $\|\mathbb{E}_\nu Y\|_2$ represents the center of mass of the density of ν, and $\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}$ is related to the covariance structure of ν.
By applying Proposition 7 to both $\mathcal{F}_t\mu$ and ν, we can easily recover equation 5 as follows:
$$\begin{aligned} \mathcal{W}_2(\nu, \mathcal{F}_t\mu) \le \varepsilon &= \mathcal{W}_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) + \mathcal{W}_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{F}_t\mu) \\ &\le \left(\left[\|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)}\right] \wedge K_2(\nu)\right) + e^{-t}K_2(\mu) \\ &\le \left[\sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\right] \vee \left[e^{-t}K_2(\mu) + K_2(\nu)\right]. \end{aligned} \qquad (50)$$
The second inequality is easily obtained as (a ∧ b) + c ≤ a ∨ (b + c) for any a, b, c ≥ 0, which completes the proof.
Proposition 9. (Concentration inequality for uncertain measures). Assume that there are some constants $s^\star \in [\frac{1}{\eta}, \infty)$, $\eta \ge 0$ such that the following inequality is satisfied:
$$\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f^2] - \left[\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f]\right]^2 \le (1+\eta)\,\mathbb{E}_{\mathcal{F}_{s^\star}\mu}\!\left[A\nabla f^T\nabla f\right], \qquad (51)$$
for $A \in \mathrm{Sym}^+_d$, $D(A, \Sigma_\nu) \le a\eta$ for some $a > 0$, and any metric D defined on $\mathrm{Sym}^+_d$. In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
$$\mathcal{F}_{s^\star}\mu\!\left(|\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta\right) \le 6 e^{-\frac{\sqrt{2}\,\delta^{3/2}}{\kappa K_2}}, \qquad (52)$$
where κ denotes the Lipschitz constant of σ.
Proof. Before proceeding with the main proof, we first prove the existence of $s^\star$. The limit of the interval with respect to η converges to the singleton $\{\infty\}$, as $I = \lim_{\eta\to 0}[\frac{1}{\eta}, \infty)$. In this case, equation 51 coincides with the Poincaré inequality for the Gaussian measure $\mathcal{N}_\nu$, which can be written as
$$\lim_{\eta\to 0}\ \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f^2] - \left[\mathbb{E}_{\mathcal{F}_{s^\star}\mu}[f]\right]^2 \le \lim_{\eta\to 0}(1+\eta)\,\mathbb{E}_{\mathcal{F}_{s^\star}\mu}\!\left[A\nabla f^T\nabla f\right] = \mathbb{E}_{\mathcal{F}_{s^\star}\mu}\!\left[\Sigma_\nu\nabla f^T\nabla f\right]. \qquad (53)$$
Because the Poincaré inequality in equation 53 is uniquely defined, we can find at least one value $s^\star$ satisfying equation 51. Let $X(t, w) = X_t(w)$ denote the stochastic process with respect to $q_t(x)$ defined in the proof of Proposition 2. Additionally, let $c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma]$. Then, we can obtain the following inequality:
$$c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] = \kappa\left(\mathbb{E}_\nu\!\left[\tfrac{\sigma}{\kappa}\right] - \mathbb{E}_{\mathcal{F}_{s^\star}\mu}\!\left[\tfrac{\sigma}{\kappa}\right]\right) \le \kappa\sup_{g\in\mathrm{Lip}_1}\left(\mathbb{E}_\nu g - \mathbb{E}_{\mathcal{F}_{s^\star}\mu} g\right) \le \kappa\,\mathcal{W}_1(\mathcal{F}_{s^\star}\mu, \nu) \le \kappa\,\mathcal{W}_2(\mathcal{F}_{s^\star}\mu, \nu) \le \frac{\kappa K_2(\mu)}{1+\eta}. \qquad (54)$$
The first inequality is induced by the assumption of κ-Lipschitzness of the function σ, and the second inequality is induced by the Kantorovich-Rubinstein theorem. The third inequality is natural because $\mathcal{W}_a(\cdot, \cdot) \le \mathcal{W}_b(\cdot, \cdot)$ for any $1 \le a \le b < \infty$. Because equation 51 is equivalent to the Poincaré inequality for the measure $\mathcal{F}_{s^\star}\mu$, it satisfies the Bakry-Émery curvature-dimension condition $CD(1+\eta, \infty)$. Thus, as shown in the proof of Proposition 2 (i.e., equation 39), the last inequality is induced. Additionally, based on the concentration inequality for $\mathcal{F}_{s^\star}\mu$ [Proposition 4.4.2, Bakry et al. (2013)], we can derive the following probability inequality:
$$\mathcal{F}_{s^\star}\mu\!\left[\sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\right] \le 3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}, \qquad (55)$$
where the Poincaré constant for $\mathcal{F}_{s^\star}\mu$ is naturally $1+\eta$ and $\|\sigma\|_{\mathrm{Lip}} = \kappa$. Next, we derive the desired form from equation 55. First, we introduce the following inequality.
$$\sigma(X_{s^\star}) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta \ge \mathbb{E}_\nu[\sigma] + \delta - \frac{\kappa}{1+\eta}K_2. \qquad (56)$$
The last inequality is directly induced by equation 54 because $-c \ge -\frac{\kappa}{1+\eta}K_2$. Because η, κ, and $K_2$ are constants with respect to w, the following set inclusion is obtained naturally:
$$\mathcal{S}_1 = \left\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\right\} \supseteq \left\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \frac{\kappa}{1+\eta}K_2\right\} = \mathcal{S}_2. \qquad (57)$$
To obtain the modified version of the original probability inequality, we evaluate the probability measure $\mathcal{F}_{s^\star}\mu[\cdot]$ on the sets $\mathcal{S}_1, \mathcal{S}_2$, which gives
$$3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}} \ge \mathcal{F}_{s^\star}\mu\!\left(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{\mathcal{F}_{s^\star}\mu}[\sigma] + \delta\}\right) \ge \mathcal{F}_{s^\star}\mu\!\left(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1+\eta}K_2\}\right). \qquad (58)$$
The concentration inequality around $\mathbb{E}_\nu[\sigma]$ is obtained by combining the inequalities induced by σ and −σ as follows:
$$\frac12\,\mathcal{F}_{s^\star}\mu\!\left(\bigcup_{h\in\{\sigma, -\sigma\}}\left\{w : h(X_{s^\star}(w)) - \mathbb{E}_\nu[h] \ge \delta - \tfrac{\kappa}{1+\eta}K_2\right\}\right) = \mathcal{F}_{s^\star}\mu\!\left(\left\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta - \tfrac{\kappa}{1+\eta}K_2\right\}\right) \le 6e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}. \qquad (59)$$
The inequality in equation 59 is the general form containing the relation between the upper bound of the probability and (η, κ, $K_2$). Because this form is quite complicated and highly technical, we choose not to present all the detailed expressions of equation 59 in the main paper; rather, we re-write it in a much simplified form for clarity. Specifically, by setting $\kappa K_2/(1+\eta) = 0.5\delta$ and rescaling δ to 2δ, the aforementioned inequality in equation 59 can be converted into the following simpler form:
$$\mathcal{F}_{s^\star}\mu\!\left(\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta\}\right) \le 6e^{-\frac{\sqrt{2}\,\delta^{3/2}}{\kappa K_2}}. \qquad (60)$$
Finally, if we set σ = Softmax, then the Lipschitz constant is κ = 1. This proof is completed by setting $s^\star := T$.
1. What is the main contribution of the paper regarding label noise problems?
2. What are the strengths of the proposed method, particularly its performance on various datasets?
3. What are the weaknesses of the paper, especially regarding its writing and clarity?
4. Do you have any questions or concerns about the method's approach to small loss criteria, distributional normalization, and stochastic dynamics?
5. How does the reviewer assess the paper's ability to handle asymmetric noise settings?
6. Are there any specific parts of the paper that need improvement or further explanation? | Review | Review
The paper is a contribution that aims at solving the label noise problem. In this setting, the labels are possibly corrupted, yielding a potentially significant underperformance of a (neural network) classifier when minimizing the empirical risk. This problem is ubiquitous and important in real-life scenarios. The paper builds on the idea of the small-loss criterion, which favors learning on certain samples in the beginning of the learning process and gradually incorporates uncertain samples along iterations. The paper proposes a novel type of distributional normalization based on the Wasserstein distance. It projects uncertain samples onto a Wasserstein ball defined wrt. the certain samples. This process is done with particle-based stochastic dynamics, based on an Ornstein-Uhlenbeck process. A theoretical analysis is given, along with results on classical datasets in the symmetric noise setting, open-set noise, and a real-world dataset (Clothing1M), for which it achieves very good performance compared to state-of-the-art competing methods.
While the overall idea is interesting, and the results described in the paper seem impressive and state-of-the-art on many of the tasks considered, I believe that the paper is badly written and very hard to follow. In the reviewer’s opinion, the major problems arise from:
a lack of introduction on the label noise problem, and a more comprehensive overview of the so called small loss criterion and its rationales;
despite an apparently rigorous mathematical framework, there are a lot of vague assertions (see minor remarks) and arbitrary choices (two examples: certain and uncertain distributions are looked upon in the pre-softmax logits, why? Is the continuity equation of Proposition 7 the only choice to model \mathcal{F}?). Some parts are really unclear (Why take a detour? What is the role of the Wasserstein moving average? Is solving the SDE corresponding to the OU process the only solution to perform the normalization? There are many algorithms in the literature that allow to estimate an OT mapping without solving this SDE)
Finally, whereas symmetric noise is considered in the experiments, I wonder how much the method is amenable to the more realistic setting of asymmetric noise, which is more likely to occur in real-life scenarios.
As a conclusion, I think there is a lot of interesting ideas and material in this paper, but the writing should be improved and clarified. I am willing to revise my note positively given that those aspects are taken into account in a revision of the paper.
Minor comments:
why taking the pre-softmax activations to build \ mu and \epsilon ?
some sentences are rather vague : ‘a functional that can manipulate \mu to induce geometric properties...’ ‘the pivot measure can be represented as a mini-batch containing the best combination of certain samples’ ‘the other iteration ‘ of which process ?
Définition 1: ‘d’ is used for the distance symbol and the dimensions of the input signal
Why Eq. 7 is a proposition ? It is rather a modeling choice
Typo p8 CIAFAR-10 -> CIFAR-10
After response to authors
I sincerely thank the authors for providing a detailed answer to my concerns. I changed my note to 6. I believe the paper is interesting and shows strong empirical evidence that the method is worth considering. I am not giving a higher grade because, in my opinion, the writing of the paper could be significantly improved.
ICLR | Title
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
Abstract
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
1 INTRODUCTION
The successful results of deep neural networks (DNNs) on supervised classification tasks rely heavily on accurate and high-quality label information. However, annotating large-scale datasets is extremely expensive and time-consuming. Because obtaining high-quality datasets is very difficult, most conventional works have alternatively obtained training data through crowd-sourcing platforms Yu et al. (2018) to build large-scale datasets, which inevitably introduces noisy labels into the annotated samples.
While there are numerous methods that can deal with noisy labeled data, recent methods actively adopt the small-loss criterion, which enables the construction of classification models that are not susceptible to noise corruption. In this learning scheme, a neural network is first trained on easy samples in the early stages of training; harder samples are then gradually selected to train mature models as training proceeds. Jiang et al. (2018) suggested collaborative learning models, in which a mentor network delivers a data-driven curriculum loss to a student network. Han et al. (2018); Yu et al. (2019) proposed dual networks that jointly generate gradient information on easy samples and use this information to let the networks teach each other. Wei et al. (2020) adopted a disagreement strategy, which determines the gradient information to update based on disagreement values between dual networks. Han et al. (2020) implemented accumulated gradients to let the optimization process escape over-parameterization and obtain more generalized results. In this paper, we tackle the major issues raised by the aforementioned methods based on the small-loss criterion, as follows.
Through comprehensive experiments, the aforementioned methods gain empirical insight into network behavior under noisy labels. However, theoretical and quantitative explanations have not been closely investigated. In contrast, we give strong theoretical and empirical explanations of network behavior under noisy labels. In particular, we present an in-depth analysis of the small-loss criterion in a probabilistic sense. We exploit the stochastic properties of noisy labeled data and develop probabilistic descriptions of data under the small-loss criterion, as follows. Let P be a probability measure for the pre-softmax logits of the training samples, l be an objective function for classification, and 1{·} be an indicator function. Then, our central object is the truncated measure defined as
X \sim \mu|\zeta = \frac{\mathbf{1}_{\{l(X) > \zeta\}}\, P}{P[l(X) > \zeta]}, \qquad Y \sim \xi|\zeta = \frac{\mathbf{1}_{\{l(Y) \le \zeta\}}\, P}{P[l(Y) \le \zeta]}, \qquad (1)
where X and Y, which are sampled from µ|ζ and ξ|ζ, denote uncertain and certain samples defined in the pre-softmax feature space1 (i.e., R^d), respectively. In equation 1, µ and ξ denote the probability measures of uncertain and certain samples, respectively, and ζ is a constant. Most previous works have focused on the use of Y and on the sampling strategy for ζ, but the poor generalization caused by the abundance of uncertain samples X has not been thoroughly investigated, even though these samples potentially contain important information. To understand the effect of noisy labels on the generalization bounds, we provide a concentration inequality for the uncertain measure µ, which renders the probabilistic relation between µ and ξ and the learnability of the network under noisy labels.
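As an illustration of equation 1, the sketch below shows one way the small-loss split could be implemented for a mini-batch; the function name, the use of NumPy, and the convention that exactly ρN samples are kept as certain are our own assumptions, not the paper's implementation.

```python
import numpy as np

def split_certain_uncertain(losses, rho):
    """Small-loss split of a mini-batch (Eq. 1): the rho*N samples with the
    smallest per-sample loss are treated as certain (Y ~ xi|zeta), the rest
    as uncertain (X ~ mu|zeta); zeta is the implied loss threshold.
    `losses` is a length-N array of per-sample classification losses."""
    n = len(losses)
    n_certain = int(round(rho * n))
    order = np.argsort(losses)                      # ascending loss
    certain_idx, uncertain_idx = order[:n_certain], order[n_certain:]
    zeta = losses[order[n_certain - 1]] if n_certain > 0 else -np.inf
    return certain_idx, uncertain_idx, zeta
```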
While most conventional methods Han et al. (2018); Wei et al. (2020); Li et al. (2019a); Yu et al. (2019) require additional dual networks to guide misinformed noisy samples, their scalability is not guaranteed because the dual architectures have the same number of parameters as the base network. To alleviate this problem, we build statistical machinery that is fully non-parametric, simple to implement, and computationally efficient, reducing the computational complexity of conventional approaches while maintaining the concept of the small-loss criterion. Based on the empirical observation of ill-behaved certain/uncertain samples, we propose a gradient flow in the Wasserstein space, which can be induced by simulating a non-parametric stochastic differential equation (SDE) of the Ornstein-Uhlenbeck type to control the ill-behaved dynamics. The reasons for selecting these dynamics are thoroughly discussed in the following sections.
Thus, the key contributions of our work are as follows.
• We theoretically verify that there exists a strong correlation between model confidence and the statistical distance between X and Y. We empirically observe that classification accuracy worsens when the upper bound ε of the 2-Wasserstein distance W2(µ, ξ) ≤ ε (i.e., the distributional distance between certain and uncertain samples) drastically increases. Due to the empirical nature of the upper bound ε, it can be used as an estimator to determine whether a network suffers from over-parameterization.
• Based on these empirical observations, we develop a simple, non-parametric, and computationally efficient stochastic model to control the observed ill-behaved sample dynamics. As the primary object, we propose the stochastic dynamics of a gradient flow (i.e., an Ornstein-Uhlenbeck process), simulated as a simple, non-parametric stochastic differential equation. Thus, our method does not require any additional learning parameters.
• We provide important theoretical results. First, we derive a controllable upper bound ε that decays at an inverse exponential rate, which indicates that our method can efficiently control the diverging Wasserstein distance. Second, we present a concentration inequality for the transported uncertain measure, which clearly renders the probabilistic relation between µ and ξ.
2 RELATED WORK
Curriculum Learning & Small-loss Criterion. To handle noisy labels, Han et al. (2018); Yu et al. (2019); Jiang et al. (2018); Wei et al. (2020); Lyu & Tsang (2020a); Han et al. (2020) adopted curriculum learning or sample-selection frameworks. However, these methods only consider a small number of selected samples, and a large portion of samples is excluded by the end of training, which inevitably leads to poor generalization. By contrast, our method can extract useful information from the unselected samples X ∼ µ (i.e., uncertain samples) and enhance them (e.g., X ′ ∼ Fµ) for more accurate classification. Chen et al. (2019) iteratively apply cross-validation to randomly partitioned noisy labeled data to identify the samples that most likely have correct labels; to generate such partitions, they adopt the small-loss criterion.
Loss Correction & Label Correction. Patrini et al. (2017a); Hendrycks et al. (2018); Ren et al. (2018) either explicitly or implicitly transformed noisy labels into clean labels by correcting classification losses. Unlike these methods, our method transfers the holistic information from uncertain samples into certain samples, which implicitly reduces the effects of potentially noisy labels. Because correcting label noise by modifying the loss dynamics does not perform well in extreme noise environments, Arazo et al. (2019) adopted the label augmentation method MixUp Zhang et al. (2018).
1 Due to technical difficulties, we define our central objects on the pre-softmax space rather than the label space, i.e., the space of σ(X), σ(Y), where σ denotes the softmax function. Please refer to the Appendix for more details.
Distillation. Li et al. (2019b) updated mean-teacher parameters by computing the exponential moving average of student parameters to mitigate the impact of gradients induced by noisy labels. Lukasik et al. (2020) deeply investigated the effects of label smoothing under noisy labels and linked label smoothing to loss correction in a distillation framework. Similar to these methods, our method leverages the useful properties of distillation models. We set ν as a pivot measure, which guides our normalization functional Fµ for uncertain measures. This is similar to self-distillation because uncertain training samples are forced to be normalized towards those of past states.
Other methods. Lee et al. (2019) induced a robust generative classifier based on pre-trained deep models. Similar to our method, Damodaran et al. (2019) designed a constraint on the Wasserstein space and adopted an adversarial framework for classification models under noisy labels by implementing a semantic Wasserstein distance. Pleiss et al. (2020) identify noisy labeled samples by considering AUM statistics, which exploit differences in the training dynamics of clean and mislabeled samples. In the most recent work, Li et al. (2019a) adopt semi-supervised learning (SSL) to deal with noisy labels, where a student network utilizes both labeled and unlabeled samples while guided by the other, teacher network.
3 DISTRIBUTIONAL NORMALIZATION
Because our main target object is a probability measure (distribution), we first define an objective function in a distributional sense. Let l be the cross entropy, r be a clean label that is independent of X, and r̂ be the corrupted label random vector obtained from r through an unknown label transition matrix Q. Then, a conventional objective function for classification with noisy labels can be defined as follows:
\min_{\mu} J[\mu] = \min_{\mu} \mathbb{E}_{X \sim \mu,\, \hat r | Q}\left[ l(X; \hat r) \right]. \qquad (2)
However, due to the significant corruption of the label information, the conventional objective function defined in equation 2 cannot be used for accurate classification. Instead of directly using uncertain samples X ∼ µ as in previous works, we normalize µ into a metric ball and impose a holistic constraint. For a clear mathematical description, we first introduce the following definition. Definition 1. (Wasserstein ambiguity set) Let P_2(R^d) = \{\mu : \mathbb{E}_\mu d_E^2(x_0, x) < \infty, \forall x_0 \in R^d\} be the 2-Wasserstein space, where d denotes the number of classes and d_E is the Euclidean distance on R^d. Then, we define a Wasserstein ambiguity set (i.e., metric ball) in this space as follows:
B_{W_2}(\nu, \varepsilon) = \left\{ \mu \in P_2(\mathbb{R}^d) : W_2(\mu, \nu) \le \varepsilon \right\}, \qquad (3)
where W_2 denotes the 2-Wasserstein distance and ν is the pivot measure. Then, we propose a new objective function by imposing geometric constraints on µ as follows:
\min_{F\mu \in B_{W_2}(\nu, \varepsilon),\, \xi} J[F\mu] + J[\xi] = \min_{\theta} \mathbb{E}_{X \sim F\mu_\theta,\, \hat r}[l(X; \hat r)] + \mathbb{E}_{Y \sim \xi_\theta,\, \hat r}[l(Y; \hat r)], \qquad (4)
where F : P_2(R^d) → P_2(R^d) is a functional on probability measures that enforces the constraint Fµ ∈ B_{W_2}(ν, ε), which is our main objective. The right-hand side of equation 4 is the vectorial form equivalent to the distributional form on the left-hand side. While our main objects are defined on the pre-softmax space, both probability measures µ_θ and ξ_θ are parameterized by a neural network with parameters θ. This newly proposed objective function uses the geometrically enhanced version Fµ of the uncertain measure together with the certain measure ξ. In equation 4, the probability measure ν is defined as ν = arg min J[ξ_{k⋆}], where ξ_k denotes the certain measure at the current k-th iteration and k⋆ ∈ I_{k−1} = {1, · · · , k − 1}. In other words, our method finds the probability measure that best represents all certain samples seen so far during training, and the uncertain measures are transported so as to lie in the Wasserstein ball centered at ν. In equation 4, the Wasserstein constraint on Fµ enforces uncertain measures to statistically resemble ν from a geometric perspective (i.e., W_2(ν, Fµ) ≤ ε). Now, an important question naturally stems from the aforementioned analysis: how can we select the optimal radius ε? Clearly, finding an F that induces a small ε ≈ 0 is suboptimal because then Fµ ≈ ν, and using the objective J[Fµ ≈ ν] leads to the following critical problem: as the optimization proceeds, the enhanced uncertain samples X ′ ∼ Fµ contribute less and less, because they are statistically identical to ν, meaning our objective in equation 4 would receive little benefit from the transported uncertain samples. By contrast, if we adopt a large radius ε, the enhanced uncertain samples will be statistically and geometrically unrelated to ν, which causes the normalized measure Fµ to yield large losses and violates our objective.
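To make the form of equation 4 concrete, the following is a minimal sketch of how the two cross-entropy terms could be evaluated once the certain logits and the enhanced uncertain logits are available; the function name and the NumPy implementation are illustrative assumptions and not the authors' code.

```python
import numpy as np

def wdn_objective(certain_logits, certain_labels, enhanced_uncertain_logits, uncertain_labels):
    """Evaluate J[F mu] + J[xi] from Eq. (4): cross-entropy on the certain logits
    Y ~ xi plus cross-entropy on the normalized uncertain logits X' ~ F mu.
    Labels are the (possibly noisy) integer class labels r_hat."""
    def cross_entropy(logits, labels):
        z = logits - logits.max(axis=1, keepdims=True)
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    return (cross_entropy(enhanced_uncertain_logits, uncertain_labels)
            + cross_entropy(certain_logits, certain_labels))
```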
To overcome the two problems above and select the radius, we take a detour through a Gaussian measure, cutting the path between ν and Fµ into ν → N(m_ν, Σ_ν) → Fµ rather than directly computing the geodesic ν → Fµ. Specifically, we decompose the original constraint in equation 4 into two terms using the triangle inequality of the Wasserstein distance:
W_2(\nu, F\mu) \le \varepsilon = \underbrace{W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu))}_{d_1:\ \text{intrinsic statistics}} + \underbrace{W_2(\mathcal{N}(m_\nu, \Sigma_\nu), F\mu)}_{d_2:\ \text{Wasserstein normalization}}. \qquad (5)
The first term, intrinsic statistics, sets the detour point as a Gaussian measure whose mean and covariance are those of ν (i.e., m_ν = E_{Y∼ν}[Y] and Σ_ν = Cov_{Y∼ν}[Y]). The Wasserstein upper bound of this term depends only on the statistical structure of ν, because (m_ν, Σ_ν) is determined by ν. Thus, this term induces a data-dependent, non-zero constant upper bound whenever ν ≠ N and prevents the upper bound from collapsing to ε → 0, regardless of F. This is a great advantage when dealing with ε, because the first term can be treated as a fixed constant during training. The second term, Wasserstein normalization, represents our central objective: F facilitates geometric manipulation in the Wasserstein space and prevents the uncertain measure µ from diverging, where µ is normalized onto the Wasserstein ambiguity set B_{W_2}(ν, ε), as shown in Fig. 1. The theoretical and numerical advantages of choosing a Gaussian detour measure are explained in the following section.
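For intuition about distances involving the Gaussian detour point, the 2-Wasserstein distance between two Gaussian measures has a well-known closed form (the Bures metric). The sketch below is only a sanity-check utility under that assumption; it is not the estimator the paper uses for the intrinsic-statistics term (which is bounded via K_2 in Section 4.1), and the function name is our own.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2):
    W2^2 = ||m1 - m2||^2 + Tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    s2_half = sqrtm(S2).real
    cross = sqrtm(s2_half @ S1 @ s2_half).real
    bures_sq = np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(np.sum((m1 - m2) ** 2) + max(bures_sq, 0.0)))
```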
3.1 WASSERSTEIN NORMALIZATION
In the previous section, we presented a novel objective function that imposes a geometric constraint on µ such that the transformed measure Fµ lies in B_{W_2}(ν, ε). Now, we specify F and relate it to the Gaussian measure (more generally, a Gibbs measure). For simplicity, we denote N_ν = N(m_ν, Σ_ν). Proposition 1. Let F : R_+ × P_2 → P_2 be a functional on probability measures such that F[t, µ] = µ_t, where dµ_t = p_t dN_ν, dN_ν = q_t dx, and µ_t is a solution to the following continuity equation:
\partial_t \mu_t = \nabla \cdot (\mu_t v_t), \qquad (6)
which reads ∂_t p(t, x) = ∇ · (p(t, x) ∇ log q(t, x)) in a distributional sense. Then, the uniquely defined functional F_t[·] = F[t, ·] normalizes µ onto B_{W_2}(N_ν, e^{−t} K_2(µ)), where K_2(µ) > 0 is a constant that depends on µ.
It is well known that the solution to equation 6 induces a geodesic in the 2-Wasserstein space (Villani (2008)), which is the shortest path from µ = µ_{t=0} to N_ν. The functional F_t generates a path for µ_t along which the distance decays exponentially in the auxiliary variable t with constant K_2, meaning W_2(N_ν, F_t µ) ≤ K_2 e^{−t}. This theoretical result indicates that the Wasserstein distance in the second term of equation 5 can be reduced and controlled at an exponential rate. Thus, by choosing different t, our method can efficiently control the diverging distance in equation 5. Unfortunately, it is typically intractable to compute the partial differential equation (PDE) in equation 6.
Algorithm 1 Wasserstein Distributional Normalization
Require: α ∈ [0, 0.2], ρ ∈ [0.1, 0.65], T = 64, ∆_t = 10^{-4}, τ = 0.001
for k = 1 to K (the total number of training iterations) do
  1) Select (1 − ρ)N uncertain and ρN certain samples from the mini-batch of size N:
     {Y_k^n}_{n ≤ ρN} ∼ ξ_k, {X_k^n}_{n ≤ (1−ρ)N} ∼ µ_k.
  2) Update the most certain measure ν:
     if J[ξ_k] < J[ν] then ν ← ξ_k, m_ν ← E[Y_k], Σ_ν ← Cov[Y_k] end if.
  3) Update the moving geodesic average N(m_α, Σ_α):
     solve the Riccati equation T Σ_ν T = Σ_{ξ_k};
     Σ_α = ((1 − α)I_d + αT) Σ_ν ((1 − α)I_d + αT) and m_α = (1 − α)m_ν + α m_{ξ_k}.
  4) Simulate the discrete SDE for T steps:
     for t = 0 to T − 1 do
       X_{k,t+1}^n = X_{k,t}^n − ∇φ(X_{k,t}^n; m_α) ∆_t + \sqrt{2τ^{−1}}\, Σ_α\, dW_t^n,
       s.t. {X_{k,t=0}^n} ∼ µ_k, {X_{k,t=T}^n} ∼ F_T µ_k
     end for.
  5) Update the network with the objective function:
     J[Fµ_k] + J[ξ_k] = E_{F_T µ_k}[l(X_{k,T}; r̂)] + E_{ξ_k}[l(Y_k; r̂)].
end for
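The sketch below traces one iteration of Algorithm 1 in NumPy, assuming the pre-softmax logits and per-sample losses of the current mini-batch are given. The function and variable names, the covariance regularization, and the use of the square root of Σ_α in the diffusion term are our own assumptions for a runnable illustration; they are not the authors' released implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def riccati_map(cov_nu, cov_xi):
    """Solve T @ cov_nu @ T = cov_xi for a symmetric positive-definite T
    (the optimal-transport map between the two centred Gaussians)."""
    a_half = sqrtm(cov_nu).real
    a_half_inv = np.linalg.inv(a_half)
    middle = sqrtm(a_half @ cov_xi @ a_half).real
    return a_half_inv @ middle @ a_half_inv

def wdn_step(logits, losses, rho=0.5, alpha=0.2, T_steps=64, dt=1e-4, tau=1e-3, pivot=None):
    """One training-iteration sketch of Algorithm 1 on mini-batch logits.
    `pivot` carries the running most-certain measure (mean loss, mean, covariance);
    returns enhanced uncertain logits X_T, certain logits Y, and the updated pivot."""
    n, d = logits.shape
    # 1) small-loss split into certain (Y ~ xi_k) and uncertain (X ~ mu_k) samples
    order = np.argsort(losses)
    n_certain = int(round(rho * n))
    Y, X = logits[order[:n_certain]], logits[order[n_certain:]]

    # 2) keep the most certain measure nu seen so far (smallest mean certain loss)
    cur_loss = float(losses[order[:n_certain]].mean())
    if pivot is None or cur_loss < pivot["loss"]:
        pivot = {"loss": cur_loss, "mean": Y.mean(0),
                 "cov": np.cov(Y.T) + 1e-6 * np.eye(d)}

    # 3) Wasserstein moving geodesic average between nu and the current xi_k
    cov_xi = np.cov(Y.T) + 1e-6 * np.eye(d)
    T_map = riccati_map(pivot["cov"], cov_xi)
    A = (1 - alpha) * np.eye(d) + alpha * T_map
    m_a = (1 - alpha) * pivot["mean"] + alpha * Y.mean(0)
    S_a_half = sqrtm(A @ pivot["cov"] @ A).real

    # 4) Euler-Maruyama simulation of the OU process (Eq. 7) on uncertain particles;
    #    grad phi(X; m) = tau * (X - m); diffusion uses sqrt(Sigma_alpha) (assumption)
    for _ in range(T_steps):
        drift = -tau * (X - m_a)
        noise = np.random.randn(*X.shape) @ S_a_half
        X = X + drift * dt + np.sqrt(2.0 * dt / tau) * noise

    # 5) the network would then be updated with the losses on (X, Y); see Eq. (4)
    return X, Y, pivot
```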
To solve this problem, we adopt particle-based stochastic dynamics, which enable tractable computation. The PDE in equation 6 corresponds to a unique iterative form, the multi-dimensional Ornstein-Uhlenbeck process, which can be approximated using particle-based dynamics. In particular, we draw N(1 − ρ) uncertain samples from a single batch of N samples using equation 1, for a hyper-parameter 0 ≤ ρ ≤ 1. We then simulate a discrete stochastic differential equation (SDE) for each particle using the Euler-Maruyama scheme as follows:
X_{t+1}^n = X_t^n − ∇φ(X_t^n; m_ν)\, ∆_t + \sqrt{2τ^{−1} ∆_t}\, Σ\, Z_I^n, \qquad (7)
where φ(X_t; m_ν) = (τ/2)\, d_E^2(X_t, m_ν), n ∈ {1, · · · , N(1 − ρ)}, d_E is the Euclidean distance, and N is the mini-batch size. We selected the OU process as our stochastic dynamics for the following reasons. First, we want a computationally efficient, non-parametric method to estimate and minimize the second term of equation 5. The SDE in equation 7 corresponding to the OU process has a simple form with fixed drift and diffusion terms that are invariant over time, which allows a non-parametric implementation of the simulation. Since simulating equation 7 reduces to non-parametric for-loops in the implementation, our method is computationally very efficient compared to baseline methods such as Han et al. (2018). Second, when estimating the empirical upper bound of the Wasserstein distance, the OU process admits an explicit form, Mehler's formula, which can be estimated efficiently (please refer to the Appendix for more details). The overall procedure of our method is summarized in Algorithm 1.
3.2 WASSERSTEIN MOVING GEODESIC AVERAGE
In our experiments, we observe that the best measure ν is not updated for a few epochs after training begins. This is problematic because ν then diverges significantly from the current certain measure ξ_k, which means the normalized measure Fµ_k also diverges from ξ_k, so X_T and Y become increasingly statistically inconsistent. To alleviate this statistical distortion, we modify the detour measure from N_ν to another Gaussian measure that captures the statistics of both ξ_k and ν. Inspired by the moving average of Gaussian parameters in batch normalization Ioffe & Szegedy (2015), we propose the Wasserstein moving geodesic average. Specifically, we replace the Gaussian parameters {m_ν, Σ_ν} with {m_α, Σ_α}, where m_α = (1 − α)m_ν + α m_{ξ_k} and Σ_α = ((1 − α)I_d + αT) Σ_ν ((1 − α)I_d + αT), with T the solution to the Riccati equation T Σ_ν T = Σ_{ξ_k}. Our final detour Gaussian measure is therefore N_ν^α := N(m(α), Σ(α)), 0 ≤ α ≤ 1.
4 THEORETICAL ANALYSIS
In equation 5, we select the detour point as a Gaussian measure because this measure can provide a statistical structure, which is similar to that of the optimal ν. In addition to this heuristic motivation, setting a detour point as a Gaussian measure (Gibbs measure) also provides theoretical advantages, e.g., the theoretical upper bound of the Wasserstein constraint terms. In this section, we investigate the explicit upper bounds of two terms in equation 5, which are naturally induced by the SDE.
2Please refer to Appendix C.4 for more details.
Proposition 2. There exists a scalar 0 < β < ∞, depending on ν, such that the following inequality holds:
W_2(ν, F_t µ) ≤ ε = K_1(ν) ∨ \left[e^{−t} K_2(µ) + K_2(ν)\right], \qquad (8)
where λ_{max}(Σ_ν) denotes the maximum eigenvalue of the covariance matrix Σ_ν and, for some constant 0 < K_1 < ∞, we have K_1(ν) = \sqrt{d β λ_{max}(Σ_ν)} + ‖E_ν Y‖_2, which depends only on ν.
Intuitively, K2(µ) can be interpreted as an indicator that tells us how the uncertain measure µ is diffused, whereas the designed term e−tK2(µ) controls the upper bound of the Wasserstein distance using a variable t. The other term K2(ν) does not vanish even with a very large t, which assures a non-collapsing upper-bound ε.
Proposition 3. (Concentration inequality for the normalized uncertain measure). Assume that there are constants T ∈ [1/η, ∞), η ≥ 0 such that the following inequality holds:
\mathbb{E}_{F_T µ}[f^2] − \left(\mathbb{E}_{F_T µ}[f]\right)^2 ≤ (1 + η)\, \mathbb{E}_{F_T µ}[A ∇f^T ∇f], \quad f ∈ C_0^\infty(\mathbb{R}^d), \qquad (9)
for A ∈ Sym_d^+ and D(A, Σ_ν) ≤ aη for some a > 0 and any metric D defined on Sym_d^+. In this case, there is a δ such that the following probability inequality for the uncertain measure holds:
F_T µ\left(|σ − E_ν[σ]| ≥ δ\right) ≤ 6\, e^{−\frac{\sqrt{2}\, δ^{3/2}}{K_2(µ)}}, \qquad (10)
where σ denotes a soft-max function.
In equation 10, we show that the label information induced by the normalized uncertain measure is close to that of most certain measure Eν [σ], where the upper bound is exponentially relative to the initial diffuseness of µ (i.e.,K2(µ)). Because the upper bound of the probability inequality does not collapse to zero and FTµ is concentrated around the most certain labels (i.e.,Eν [σ]), the uncertain sample XT ∼ FTµ helps our method avoid over-parameterization.
4.1 EMPIRICAL UNDERSTANDINGS
We investigate the theoretical upper bound of the Wasserstein ambiguity (i.e., radius of the Wasserstein ball) for Fµ and its corresponding probability inequality. To provide more in-depth insights into the proposed method, we approximate the upper bound and demonstrate that our Wasserstein normalization actually makes neural networks more robust to label noise.
As we verified previously, according to Proposition 2, the following inequality holds:
W_2(F_t µ, ν) ≤ ε = K_1(ν) ∨ \left(K_2(ν) + K_2(F_t µ)\right). \qquad (11)
Because the first term K1(ν) is constant, dependent on ν, and generally small compared to the second term with t ≤ T , we only examine the behavior of the second term K2(ν) +K2(Ftµ), which can be efficiently approximated using a simple form. Because our detour measure is Gaussian, we have the following inequality for any h ∈ C∞0 (Rd)3:
\hat K_2(\mu) = \lim_{s \to 0} \frac{1}{s}\, \mathbb{E}_{X,\, Z \sim \mathcal{N}_I}\!\left[ h\!\left(e^{-s}X + \sqrt{1 - e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu)\right) - h(X) \right] \le K_2(\mu), \qquad (12)
where the equality holds if h is chosen to attain the supremum over the set C_0^∞. For the approximation, we simply take h(X) = ‖X‖^2 as the test function. In this case, the following chain of inequalities holds: ε̂ = K̂_2(ν) + K̂_2(Fµ) ≤ K_2(ν) + K_2(Fµ) ≤ K_1(ν) ∨ (K_2(ν) + K_2(Fµ)) = ε. Thus, ε̂ can be considered an approximation of the theoretical upper bound ε suggested in Proposition 2. Subsequently, we investigate the effects of Wasserstein normalization through K̂_2(µ) in equation 12.
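As a concrete illustration of how the empirical estimator K̂_2 in equation 12 could be computed, the sketch below uses a small fixed s in place of the limit s → 0 and h(x) = ‖x‖², via the Ornstein-Uhlenbeck semigroup (a Mehler-type formula). The function name, the Monte-Carlo averaging, and the choice of s are our own assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def k2_hat(X, m_nu, cov_nu, s=1e-3, n_mc=10):
    """Monte-Carlo approximation of K2_hat(mu) in Eq. (12) with the test
    function h(x) = ||x||^2. X is an (N, d) array of samples from mu;
    (m_nu, cov_nu) parameterise the detour Gaussian; the small, fixed s
    stands in for the limit s -> 0."""
    S_half = sqrtm(cov_nu).real
    h = lambda x: np.sum(x ** 2, axis=1)
    estimates = []
    for _ in range(n_mc):
        Z = np.random.randn(*X.shape)
        X_s = np.exp(-s) * X + np.sqrt(1.0 - np.exp(-2.0 * s)) * (Z @ S_half + m_nu)
        estimates.append(np.mean(h(X_s) - h(X)) / s)
    return float(np.mean(estimates))

# epsilon_hat for an epoch would then be k2_hat on certain samples plus k2_hat on
# the enhanced uncertain samples, as in Section 4.1 (our reading of the text).
```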
(1) The proposed WDN ensures that the Wasserstein ambiguity is bounded. We examine the relation between ε̂ and test accuracy in an experiment on the CIFAR-10 dataset with symmetric noise at a ratio of 0.5. Fig. 2 presents the landscape of the log10-scaled cumulative average of ε̂ and the test accuracy over epochs. The red dotted lines represent the landscape of the vanilla network with cross-entropy loss, where ε̂_k = K̂_2(ν_k) + K̂_2(F_{t=0} µ_k) and k is the epoch index. In this case, the time constant t is set to zero because Wasserstein normalization is not employed for the vanilla network. The black lines indicate the landscape of the proposed method, where ε̂_k = K̂_2(ν_k) + K̂_2(F_{t=T} µ_k)
3Please refer to Appendix C.2 for additional details.
in this case. It is noteworthy that the test accuracy of the vanilla network begins to decrease after 13 epochs (red dotted vertical lines in the top-right plot), whereas the Wasserstein ambiguity (i.e., the upper bound of the Wasserstein distance) increases quadratically in the top-left plot. These experimental results verify that, without any constraint, the distance between the uncertain measure and the most certain measure ν becomes large in the 2-Wasserstein space for the vanilla network. They also indicate a clear relationship between Wasserstein ambiguity and test accuracy. In the proposed WDN, the Wasserstein ambiguity is efficiently bounded (i.e., lim sup_k ε̂_k ≈ 2.15) and the test accuracy keeps increasing, even after 13 epochs. For a detailed analysis, we compute the deviation of the empirical upper bound as ∆̂_k = ε̂_k − ε̂_{k−1}. In the gray regions, the deviation for the vanilla network exceeds 2.5 × 10^{−2} (i.e., ∆̂_k > 2.5 × 10^{−2}), and its test accuracy begins to drop, as shown in Fig. 2. In contrast to the vanilla network, the maximum deviation of the proposed WDN is bounded above by a very small value (sup_k ∆̂_k ≤ 8 × 10^{−3}). (2) The proposed WDN helps networks escape from over-parameterization. To analyze the behavior of deep neural networks under over-parameterization with and without the proposed WDN, we design several variants of the WDN that begin at delayed epochs. The green, orange, and blue curves in the second row of Fig. 2 represent the landscapes when our WDN is applied after k_d ∈ {10, 15, 20} epochs, respectively. In this experiment, the upper bound ε̂_k is defined as
\hat\varepsilon_k = \begin{cases} \hat K_2(\nu_k) + \hat K_2(F_{t=0}\mu_k), & \text{if } k < k_d, \\ \hat K_2(\nu_k) + \hat K_2(F_{t=T}\mu_k), & \text{if } k \ge k_d. \end{cases} \qquad (13)
Consider k_d = 20, which is represented by the blue dotted vertical lines. Before our WDN is applied (i.e., k < k_d), the network suffers from over-parameterization, which induces a significant performance drop, as indicated by the blue curve in the bottom-right plot. However, the network rapidly recovers to normal accuracy after Wasserstein normalization begins (i.e., k ≥ k_d). Note that similar behavior can be observed in the green and orange curves. In particular, the orange curve produces fewer fluctuations than the blue curve in terms of test accuracy. This indicates that the proposed WDN can help a network escape from over-parameterization by imposing the proposed geometric constraints on the Wasserstein space.
(3) The proposed WDN can derive data-dependent bounds according to different noise levels. Another interesting point in Fig. 2 is that all curves except the red one converge to specific values: 2.15 = ε := lim inf_k ε̂_k ≤ lim sup_k ε̂_k := ε̄ = 2.2. The upper bound ε̄ is neither overly enlarged nor collapsed to zero, while the lower bound ε is the same for all curves. We argue that this behavior stems from the geometric characteristics of the proposed method, where the first term in equation 5, namely W_2(ν, N_ν) ∝ K̂_2(ν), is a non-zero data-dependent term that is minimized by the proposed geometric constraint. Therefore, we can derive the following relationship:
\left[W_2(\nu, F\mu) \le W_2(\nu, \mathcal{N}_\nu) + W_2(\mathcal{N}_\nu, F\mu)\right]\!\downarrow\; \propto\; \left[\hat K_2(\nu) + \hat K_2(F\mu) = \hat\varepsilon\right]\!\downarrow. \qquad (14)
This empirical observation verifies that a detour point set as a Gaussian measure induces the data-dependent bounds (ε, ε̄), which can vary according to different noise levels and efficiently leverage data-dependent statistics. Fig. 2 indicates that classification models with more stable data-dependent bounds also exhibit more stable convergence in test accuracy.
5 EXPERIMENTS
5.1 EXPERIMENTS ON THE CIFAR-10/100 DATASET
We used settings similar to those proposed by Laine & Aila (2016); Han et al. (2018) for our experiments on the CIFAR-10/100 datasets. We used a 9-layered CNN as the baseline architecture with a batch size of 128. We used the Adam optimizer with (β1, β2) = (0.9, 0.99), where the learning rate linearly decreased from 10^{−3} to 10^{−5}.

Synthetic Noise. We injected label noise into clean datasets using a noise transition matrix Q_{i,j} = Pr(r̂ = j | r = i), where a noisy label r̂ is obtained from a true clean label r. We defined Q_{i,j} following the approach of Han et al. (2018); a sketch of one common noise-injection convention is given after this paragraph. For symmetric noise, we used the polynomial ρ = −1.11r^2 + 1.78r + 0.04 for 0.2 ≤ r ≤ 0.65, where r is the noise ratio. For asymmetric noise, we set ρ to 0.35. To select the enhanced detour measure, we set α to 0.2 for the Wasserstein moving geodesic average in all experiments. We trained our classification model over 500 epochs because the test accuracy of our method continued increasing, whereas those of the other methods did not. We compared our method with other state-of-the-art methods, including [MentorNet, Jiang et al. (2018)], [Co-teaching, Han et al. (2018)], [Co-teaching+, Yu et al. (2019)], [GCE, Zhang & Sabuncu (2018)], [RoG, Lee et al. (2019)], [JoCoR, Wei et al. (2020)], [NPCL, Lyu & Tsang (2020b)], [SIGUA, Han et al. (2020)], and [DivideMix, Li et al. (2019a)]. As shown in Table 1, the proposed WDN significantly outperformed the other baseline methods. Note that our WDN utilizes a simple Gaussian measure as the target pivot measure. Thus, there are potential risks when handling highly concentrated and non-smooth types of noise (e.g., asymmetric noise). Nevertheless, the proposed WDN still produced accurate results, even with asymmetric noise; in this case, a variant of our WDN (i.e., WDNcot) exhibited the best performance.

Open-set Noise. In this experiment, we considered the open-set noisy scenario suggested by Wang et al. (2018), in which a large number of training images were sampled from the CIFAR-100 dataset but were still labeled according to the classes of the CIFAR-10 dataset. We used the 9-layered CNN from our previous experiment. For hyper-parameters, we set ρ and α to 0.5 and 0.2, respectively. As shown in Table 2, our method achieved state-of-the-art accuracy.

Collaboration with Other Methods. Because our core methodology is based on the small-loss criterion, our method can collaborate with co-teaching methods. In Han et al. (2018), only certain samples (Y ∼ ξ) were used for updating the colleague networks, where the number of uncertain samples gradually decreased until it reached a predetermined portion. To enhance the potentially poor statistics used for co-teaching, we taught the dual networks with the set of samples (Y, X_T), where X_T ∼ F_T µ are uncertain samples enhanced using equation 7.
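The sketch below shows one common convention for injecting symmetric label noise; the exact transition-matrix convention of Han et al. (2018) used in the paper may differ in detail, and the function name and seeding are our own assumptions.

```python
import numpy as np

def inject_symmetric_noise(labels, noise_ratio, num_classes, seed=0):
    """Flip each clean label to a uniformly chosen *other* class with probability
    `noise_ratio`; this is one common convention for the symmetric transition
    matrix Q_{i,j} = Pr(r_hat = j | r = i)."""
    rng = np.random.default_rng(seed)
    noisy = np.array(labels, copy=True)
    flip = rng.random(len(noisy)) < noise_ratio
    for i in np.where(flip)[0]:
        candidates = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(candidates)
    return noisy
```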
Table 1 shows the test accuracy results for the proposed collaboration model with a co-teaching network (WDNcot). This collaboration model achieved the most accurate performance on the CIFAR-100 dataset with asymmetric noise, which verifies that our WDN can be integrated into existing methods to improve their performance significantly, particularly when the density of the pre-logits is highly concentrated. Fig. 3 reveals that co-teaching quickly falls into over-parameterization and suffers a drastic drop in accuracy after the 15th epoch. WDNcot also exhibits a slight accuracy drop, but it surpassed the baseline co-teaching method by a large margin (+7%) during training. This demonstrates that our enhanced samples X_T can alleviate the over-parameterization issues faced by conventional co-teaching models, helping improve their accuracy significantly.
5.2 EXPERIMENTS ON A REAL-WORLD DATASET
To evaluate our method on real-world datasets, we employed the Clothing1M dataset presented by Xiao et al. (2015), which consists of 1M noisy, labeled, large-scale clothing images in 14 classes collected from shopping websites. It contains 50K, 10K, and 14K clean images for training, testing, and validation, respectively. We used only the noisy set for training and the clean set for testing. We set α = 0.2 and ρ = 0.1. For fair comparison, we followed the settings suggested in previous works. We used a pre-trained ResNet-50 as the baseline architecture with a batch size of 48. For pre-processing, we applied a random center crop, random flipping, and normalization to 224 × 224 pixels. We adopted the Adam optimizer with a learning rate starting at 10^{−5} that linearly decayed to 5 × 10^{−6} at 24K iterations. Regarding the baselines, we compared the proposed method to [GCE, Zhang & Sabuncu (2018)], [D2L, Ma et al. (2018)], [FW, Patrini et al. (2017b)], [WAR, Damodaran et al. (2019)], [SL, Wang et al. (2019)], [JOFL, Tanaka et al. (2018)], [DMI, Xu et al. (2019)], [PENCIL, Yi & Wu (2019)], and [MLNT, Li et al. (2019b)]. Table 3 shows that our method achieved competitive performance in comparison with the other baseline methods.
5.3 COMPUTATIONAL COST
Because Co-teaching, JoCoR, and DivideMix use additional networks, their number of network parameters (8.86M) is twice that of the vanilla network (4.43M). In Table 4, we compare the average training time for the first 5 epochs across various baseline methods under symmetric noise on the CIFAR-10 dataset. While non-parametric methods such as GCE and WDN require less than 12% additional time, methods that require additional networks take more time than the non-parametric methods. The average time can vary across experimental environments; in Table 4, we measure the time using the publicly available code provided by the respective authors.
6 CONCLUSION
We proposed a novel method called WDN for accurate classification under noisy labels. The proposed method normalizes uncertain measures towards data-dependent Gaussian measures by imposing geometric constraints in the 2-Wasserstein space. We simulated the discrete SDE using the Euler-Maruyama scheme, which makes our method fast, computationally efficient, and non-parametric. In our theoretical analysis, we derived an explicit upper bound for the proposed Wasserstein normalization and experimentally demonstrated a strong relationship between this upper bound and over-parameterization. We conducted experiments on both the CIFAR-10/100 and Clothing1M datasets. The results demonstrate that the proposed WDN significantly outperforms other state-of-the-art methods.
A OPEN-SOURCE DATASET
Transition matrix for CIFAR-10/100. For the experiment summarized in Table 1, we implemented open-source code to generate the noise transition matrix discussed by Han et al. (2018), as well as the 9-layered CNN architecture (https://github.com/bhanML/Co-teaching).
Open-set noise. For the experiment summarized in Table 2, we used the same dataset for open-set noisy labels presented by Lee et al. (2019) (https://github.com/pokaxpoka/ RoGNoisyLabel).
Clothing1M. For the experiment summarized in Table 3, we used the open-source dataset presented by Xiao et al. (2015) (https://github.com/Cysu/noisy_label).
B COMPARISONS TO RELATED WORKS
Methodology        | Parametric | Class-dependency | Distillation | Sample-weight | Sample-selection
DivideMix          | ✓          | ✗                | ✗            | ✗             | ✓
Co-teaching        | ✓          | ✗                | ✓            | ✗             | ✓
JoCoR              | ✓          | ✗                | ✓            | ✗             | ✓
MLNT               | ✓          | ✓                | ✓            | ✗             | ✗
Ren et al. (2018)  | ✗          | ✗                | ✗            | ✓             | ✗
NPCL               | ✗          | ✗                | ✗            | ✓             | ✗
GCE                | ✗          | ✗                | ✗            | ✓             | ✗
WDN                | ✗          | ✗                | ✗            | ✗             | ✗
Table B indicates that no previous methodologies can conceptually include our method.
Because the solution to the Fokker-Planck equation can be computed explicitly without any additional parameters, our method is fully non-parametric (in terms of additional parameters beyond those required by the original neural network). By contrast, co-teaching is parametric because it requires a clone network whose additional parameters are copies of those in the original network. Similarly, MLNT requires an additional teacher network for training, which also contains a number of parameters.
Many methods based on the small-loss criterion select only certain samples, whereas our method uses the combination of ρN certain and (1 − ρ)N normalized uncertain samples. Therefore, our method can fully leverage the training batches, where (1 − ρ)N + ρN = N. Additionally, our method does not assume any class-dependent prior knowledge. Rather than considering class-wise prior knowledge, our method uses holistic information from both certain and uncertain samples (i.e., Y and X_T) in the logit space. Other meta-class-based models, such as MLNT, assume class-wise meta prior knowledge from a teacher network.
Arazo et al. (2019) assumed a beta-mixture model as the label distribution on the label space. However, due to the non-deterministic nature of noisy label distributions, this model sometimes fails to train under extremely non-uniform types of noise; for example, Arazo et al. (2019) reported a failure case on the Clothing1M dataset. The fundamental assumption on the noise model of MixUp will likely be improved in future work. Similar to this method, our work has trouble dealing with synthetic asymmetric noise at high ratios, where a relatively large performance drop is observed in Table 1 (although our method still produces the second-best performance in the table).
In the most recent work, Li et al. (2019a) also adopt co-training by implementing an additional dual network, together with a more sophisticated methodology called co-divide/co-guessing based on SSL. We conjecture that the Wasserstein distance between the labeled and unlabeled probability measures is well controlled in their method. We believe that applying OT/Markov theory (as in our paper) to their method would broaden the understanding of the learning-with-noisy-labels problem.
In contrast to sample-weighting methods such as GCE and NPCL, which require prior knowledge of the number of training samples to be weighted, our method is free from such assumptions because the Wasserstein normalization is applied in a batch-wise manner.
C TECHNICAL DIFFICULTY FOR APPLYING GENERAL OPTIMAL TRANSPORT/MARKOV THEORY TO LABEL SPACE.
Let X, Y be uncertain and certain samples in the pre-softmax feature space, and suppose we were to impose the distributional constraint on the label space (the space of σ(X), σ(Y), where σ denotes the softmax function). This space is not appropriate for defining an objective function such as equation 5: every sample in the label space has the form σ(X) = [a_1, a_2, · · · , a_d] with ∑_{i=1}^d a_i = 1, so the label space is the d-dimensional affine simplex U_d, a subset of Euclidean space, U_d ⊂ R^d. In this case, the definition of the Wasserstein space in equation 4 is not applicable, since d_E is not a natural metric on U_d, and the Wasserstein space P_2(U_d) is rarely investigated in the mathematical literature, which prevents us from using the technical tools, assumptions, and theory developed for P_2(R^d) that form the theoretical grounds of our work. However, viewing the problem from a slightly different angle and taking the pre-softmax space R^d with P_2(R^d) as our base space, all the technical issues that arise when trying to use OT tools in P_2(U_d) can be avoided. Since the softmax is a non-parametric, one-to-one function connecting the pre-softmax feature space R^d to U_d, each manipulated uncertain sample has a unique image (label) in U_d. Even though our objects are defined on the pre-softmax space, the theoretical analysis in Proposition 3 involves the softmax function in order to evaluate the concentration inequality of the proposed transformation F as it acts on the label space U_d.
D MATHEMATICAL BACKGROUND
In this section, we introduce important definitions, notations, and propositions used in our proofs and the main paper.
D.1 NOTATION
We denote f_\#µ as the push-forward of µ through f. C_0^∞(R^d) denotes the set of C^∞ functions with compact support in R^d. For the L^p-norm of a function f, we write ‖f‖_{p,ν} = (\int |f|^p dν)^{1/p}. The Hessian matrix of f is denoted Hess[f] = [∂_i ∂_j f]_{i,j=1}^d. Sym_d^+ denotes the space of positive semi-definite symmetric d × d matrices. ‖f‖_{Lip} denotes the Lipschitz norm of f. For any matrix A ∈ M_d, ‖A‖_{op} denotes the operator norm of A.
D.2 DIFFUSION-INVARIANCE AND HYPER-CONTRACTIVITY
Definition 2. The Markov semigroup (Pt)t≥0 in Rd acting on a function f ∈ C∞0 is defined as follows:
P_t f(x) = \int f(x') \, p_t(x, dx'), \qquad (15)
where pt(x, dx′) is a transition kernel that is the probability measure for all t ≥ 0. Definition 3. (Diffusion Operator) Given a Markov semi-group Pt at time t, the diffusion operator (i.e., infinitesimal generator) L of Pt is defined as
L g(y) = \lim_{t \to 0} \frac{1}{t}\left(P_t g(y) − g(y)\right) = \sum_{i,j} \frac{\partial^2}{\partial y_i \partial y_j} B_{ij}(y)\, g(y) − \sum_i A_i(y) \frac{\partial}{\partial y_i} g(y), \qquad (16)
where B and A are matrix and vector-valued measurable functions, respectively. Bij denotes the (i, j)-th function of B and Ai denotes the i-th component function of A. Definition 4. (Diffusion-invariant Measure) Given the diffusion operator L, the probability measure µ is considered to be invariant measure to L when EX∼µ[Lf(X)] = 0 for any f ∈ C∞0 . Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010).) The Gaussian measure Nν := N (mν ,Σν) with a mean mν and covariance Σν is an invariant measure according to the following diffusion-operator L:
L f(x) = \Sigma_\nu \mathrm{Hess}[f](x) − (x − m_\nu)^T \nabla f(x), \quad \forall f \in C_0^\infty(\mathbb{R}^d), \qquad (17)
where B_{ij}(x) := [\Sigma_\nu]_{ij} is a constant function and A_i(x) := x_i − m_\nu^i.
This generator serves as our main tool for the geometric analysis of the upper bound ε. In Section 4.1 in the main paper, we introduced an approximate upper-bound K̂2(µ) without any general description of the inequality involved. We now introduce the underlying mathematics for equation 12. Because our detour measure is Gaussian, there is a unique semi-group Pth called the multidimensional Ornstein-Ulenbeck semi-group that is invariant to Nν . Specifically, Pt is defined as follows:
P_s h(X) = \mathbb{E}_{Z \sim \mathcal{N}_I}\left[ h\left(e^{-s}X + \sqrt{1 − e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu)\right)\right], \quad \forall h \in C_0^\infty. \qquad (18)
The invariance property of Pt relative to our detour measure is naturally induced by the following Proposition: Proposition 4. We define C : Rd → Rd and C(X) = AX + b such that A ∈ Sym+d ,b ∈ Rd, and select an arbitrary smooth h ∈ C∞0 (Rd). We then define the diffusion Markov semi-group Psh as follows:
P_s h(X) = \mathbb{E}_{Z \sim \mathcal{N}}\left[ h\left(e^{-s}X + \sqrt{1 − e^{-2s}}\, C(Z)\right)\right]. \qquad (19)
Then, N(A^2, b) is invariant with respect to P_s, meaning the following equality holds for every h and s ≥ 0:
\int_{\mathbb{R}^d} [P_s h(X) − h(X)]\, d\mathcal{N}(A^2, b)(X) = 0. \qquad (20)
Proof. For simplicity, we denote N(A^2, b) := N_C. Then
\int P_s h(X)\, dN_C(X) = \int\!\!\int h\left(e^{-s}X + \sqrt{1 − e^{-2s}}\, C(Z)\right) dN_C(X)\, d\mathcal{N}(Z) = \int\!\!\int h \circ C\left(e^{-s}Z' + \sqrt{1 − e^{-2s}}\, Z\right) d\mathcal{N}(Z')\, d\mathcal{N}(Z). \qquad (21)
The second equality holds because C is linear on R^d. Let e^{−s} = cos θ and \sqrt{1 − e^{−2s}} = sin θ for some 0 ≤ θ ≤ 2π. Then, we define φ as φ(Z', Z) = e^{−s}Z' + \sqrt{1 − e^{−2s}}\, Z = cos(θ)Z' + sin(θ)Z, and π(Z', Z) = Z. By the rotation invariance of the standard Gaussian measure, one can deduce the following equality.
(\mathcal{N} \otimes \mathcal{N}) \circ (C \circ φ)^{-1} = ((\mathcal{N} \otimes \mathcal{N}) \circ φ^{-1}) \circ C^{-1} = \mathcal{N} \circ C^{-1}. \qquad (22)
However, we know that d\mathcal{N}[C^{-1}(X)] = dN_C(X) = \left((2\pi)^d |A^2|\right)^{-1/2} e^{-0.5 (X−b)^T A^{-2} (X−b)}. By combining equation 21 and equation 22, one can derive the following result:
\int h \circ C\left(e^{-s}Z' + \sqrt{1 − e^{-2s}}\, Z\right) d[\mathcal{N} \otimes \mathcal{N}] = \int h(X)\, d\left[(\mathcal{N} \otimes \mathcal{N}) \circ φ^{-1} \circ C^{-1}\right](X) = \int h(X)\, d[\mathcal{N} \circ C^{-1}](X) = \int h(X)\, d\mathcal{N}[C^{-1}(X)] = \int h(X)\, dN_C(X). \qquad (23)
Proposition 4 demonstrates the invariance property of the defined semi-group. If we setA = Σ 1 2 ν ,b = mν , then we can recover equation 18.
We are now ready to define the approximation of K2(µ) in terms of semi-group invariance. Specifically, for any real-valued smooth h, we define the following inequality:
\hat K_2(\mu) = \mathbb{E}_{X \sim \mu}[L h(X)] = \lim_{s \to 0} \mathbb{E}_{X \sim \mu}\left[\frac{1}{s}\left(P_s h(X) − h(X)\right)\right] = \lim_{s \to 0} \frac{1}{s}\, \mathbb{E}_{X,\, Z \sim \mathcal{N}_I}\left[ h\left(e^{-s}X + \sqrt{1 − e^{-2s}}\,(\Sigma_\nu^{1/2} Z + m_\nu)\right) − h(X)\right] \le K_2(\mu). \qquad (24)
This inequality holds if h is selected to induce a supremum over the set C∞0 , where suph K̂2(µ, h) = suph EX∼µ[Lh(X)] = K2(µ). Although a more sophisticated design for the test function h will induce a tighter upper bound for K̂2, we determined that the L2-norm is generally sufficient.
Definition 5. (Diffuseness of the probability measure) We define the integral operator K2 : W2(Rd)→ R+ as follows:
K_2(\mu) = \sqrt{\sup_{f \in C_0^\infty} \int_{\mathbb{R}^d} |L f(x)|\, d\mu(x)}. \qquad (25)
According to Definition 4, we know that ∫ Lf(X)dNν(X) = 0 for any f . Based on this observation, it is intuitive that K2 estimates how the probability measure ν is distorted in terms of diffusion invariance. While this measure takes a supremum over the function space C∞0 , it searches for a function that enables the estimation of maximal distortion. Because the value of K2 is entirely dependent on the structure of µ, K2 can be considered as a constant for the sake of simplicity if the uncertain measure µ is fixed over one iteration of training. Definition 6. (Diffusion carré du champ) Let f, g ∈ C∞0 (Rd). Then, we define a bilinear form Γc in C∞0 (Rd)× C∞0 (Rd) as
\Gamma_e(f, g) = \frac{1}{2}\left[L\,\Gamma_{e-1}(fg) − \Gamma_{e-1}(f\, Lg) − \Gamma_{e-1}(g\, Lf)\right], \quad e \ge 1. \qquad (26)
We also denote Γ(f) ≡ Γ(f, f). The bilinear form Γ can be considered as a generalization of the integration by the parts formula, where ∫ fLg + Γ(f)dµ = 0 for the invariant measure µ of L.
Definition 7. (Curvature-Dimension condition, Ambrosio et al. (2015)) We can say that the infinitesimal generator L induces the CD(ρ,∞) curvature-dimension condition if it satisfies Γ1(f) ≤ ρΓ2(f) for all f ∈ C∞0 .
Because our diffusion operator generates a semi-group with respect to the Gibbs measure, the curvature-dimension condition can be calculated explicitly. Through simple calculations, the firstorder (c = 1) diffusion carré du champ can be induced as follows:
\Gamma_1(f) = \left([\nabla f]^T \Sigma_\nu \nabla f\right)^2. \qquad (27)
Similarly, the second-order (c = 2) diffusion carré du champ is calculated as follows:
\Gamma_2(f) = \frac{1}{2}\left[L\left(\Gamma_1(f^2)\right) − 2\Gamma_1(f, L(f))\right] = \mathrm{Tr}\!\left(\left[\Sigma_\nu \nabla^2 f\right]^2\right) + \left([\nabla f]^T \Sigma_\nu \nabla f\right)^2 = \mathrm{Tr}\!\left(\left[\Sigma_\nu \nabla^2 f\right]^2\right) + \Gamma_1(f), \qquad (28)
for an arbitrary f ∈ C_0^∞(R^d). Since Tr([Σ_ν ∇^2 f]^2) is non-negative, we can infer that Γ_1 ≤ Γ_2. In this case, the diffusion operator L defined in Lemma 1 satisfies the CD(ρ = 1, ∞) curvature-dimension condition. For other diffusion operators, please refer to Bolley & Gentil (2010). Proposition 5. (Decay of Fisher information along a Markov semigroup, Bakry et al. (2013).) Under the curvature-dimension condition CD(ρ, ∞), I(µ_t | N_ν) ≤ e^{−2ρt} I(µ | N_ν).
The exponential decay of the Fisher information in Proposition 5 is a core property of the exponential decay of the Wasserstein distance, which will be used in the proof of Proposition 2.
D.3 FOKKER-PLANK EQUATION, SDE
Definition 8. (Over-damped Langevin Dynamics) Consider
dX_t = −∇φ(X_t; m_ν)\, dt + \sqrt{2τ^{−1}}\, Σ_ν\, dW_t, \qquad (29)
where φ(X_t; m_ν) = (τ/2)\, d^2(X_t, m_ν), W_t denotes Brownian motion, and d denotes the Euclidean distance. The particle X_t is distributed as X_t ∼ p_t. As t → ∞, the probability density p(x, t) converges to the Gaussian density of X_∞ = Σ_ν^{1/2} Z + m_ν, i.e., p_∞(x) = q(x) ∝ e^{−d(x, m_ν)^T Σ_ν^{−1} d(x, m_ν)}.
In the classical SDE literature, it is known that E[\sup_{0 ≤ t ≤ T} |X̂_t − X_t|] ≤ G(T)\,(Nρ)^{−1/2}, where G(T) is a constant that depends only on T and X̂ denotes the true solution of the SDE in equation 29. As long as the number of uncertain samples satisfies Nρ > 40, our method exhibits acceptable convergence.
D.4 GAUSSIAN WASSERSTEIN SUBSPACES
It is known that the space of non-degenerate Gaussian measures (i.e., those with positive-definite covariance matrices) forms a subspace of the 2-Wasserstein space, denoted W_{2,g} ≅ Sym_d^+ × R^d. Because the 2-Wasserstein space can be considered a Riemannian manifold equipped with Riemannian metrics Villani (2008), W_{2,g} can be endowed with a Riemannian structure that also induces the Wasserstein metric (McCann (1997)). In the Riemannian sub-manifold of Gaussian measures, the geodesic between two points γ(0) = N_A and γ(1) = N_B is defined as follows Malagò et al. (2018):
γ(α) = N_α = N(m(α), Σ(α)), \qquad (30)
where m(α) = (1 − α)m_A + α m_B and Σ(α) = [(1 − α)I + αT]\, Σ_A\, [(1 − α)I + αT], with T Σ_A T = Σ_B. In Section 3.2, we set (m_A, Σ_A) → (m_ν, Σ_ν) and (m_B, Σ_B) → (m_{ξ_k}, Σ_{ξ_k}). Regardless of how ν is updated, the statistical information of the current certain measure ξ_k is incorporated into the detour Gaussian measure, which yields a much smoother geometric constraint on µ.
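The Gaussian geodesic of equation 30 can be evaluated directly from the two endpoint parameters; the sketch below is a minimal illustration under the assumption that both covariances are positive-definite, and the function name is our own.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_geodesic(m_a, S_a, m_b, S_b, alpha):
    """Point gamma(alpha) on the 2-Wasserstein geodesic between N(m_a, S_a)
    and N(m_b, S_b) (Eq. 30): m(alpha) = (1-alpha) m_a + alpha m_b and
    Sigma(alpha) = ((1-alpha)I + alpha T) S_a ((1-alpha)I + alpha T),
    where T solves T S_a T = S_b."""
    d = len(m_a)
    a_half = sqrtm(S_a).real
    a_half_inv = np.linalg.inv(a_half)
    T = a_half_inv @ sqrtm(a_half @ S_b @ a_half).real @ a_half_inv
    A = (1.0 - alpha) * np.eye(d) + alpha * T
    return (1.0 - alpha) * m_a + alpha * m_b, A @ S_a @ A
```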
E PROOFS
Proposition 6. Let Γ(µ, ν) be the set of couplings between µ and ν, and assume that the noisy label r̂ is independent of X. For the functional J[µ] = E_{X∼µ}\, l(X; r̂), we define D(µ, ν) as:
D(\mu, \nu) = \inf_{\gamma \in \Gamma(\mu, \nu)} \left|J[\mu] − J[\nu]\right|, \qquad (31)
where D : P_2 × P_2 → R. Then, D is a metric on P_2 that is weaker than the Wasserstein metric, i.e., D(µ, ν) ≤ α W_2(µ, ν) for α = c_0^{−1} r̂ + c_1^{−1}(1 − r̂) and some constants c_0, c_1 > 0.
Proof.
|J[\nu] − J[\mu]| = \left|\mathbb{E}_\mu[l(X; \hat r)] − \mathbb{E}_\nu[l(Z; \hat r)]\right|
= \left|\mathbb{E}_{\mu \otimes \nu}\left[\hat r\left(\log σ(X) − \log σ(Z)\right) − (1 − \hat r)\left(\log(1 − σ(X)) − \log(1 − σ(Z))\right)\right]\right|
\le \mathbb{E}\left|\hat r\, \mathbb{E}_{\mu \otimes \nu}[\log σ(X) − \log σ(Z)]\right| + \mathbb{E}\left|(1 − \hat r)\, \mathbb{E}_{\mu \otimes \nu}[\log(1 − σ(X)) − \log(1 − σ(Z))]\right|
\le \mathbb{E}\hat r\, \mathbb{E}_{\mu \otimes \nu}\left|\log σ(X) − \log σ(Z)\right| + \mathbb{E}(1 − \hat r)\, \mathbb{E}_{\mu \otimes \nu}\left|\log(1 − σ(X)) − \log(1 − σ(Z))\right|
\le c_0^{−1}\, \mathbb{E}(\hat r)\, \mathbb{E}_{\mu \otimes \nu}|X − Z| + c_1^{−1}\, \mathbb{E}(1 − \hat r)\, \mathbb{E}_{\mu \otimes \nu}|Z − X|
= \mathbb{E}\left[c_0^{−1}\hat r + c_1^{−1}(1 − \hat r)\right] \mathbb{E}_{\mu \otimes \nu}|X − Z| \qquad (32)
By taking the infimum of the aforementioned inequality with set of couplings γ(µ, ν), we obtain the following inequality:
D(\nu, \mu) = \inf_{\gamma(\mu, \nu)} |J[\nu] − J[\mu]| \le \mathbb{E}\left[c_0^{−1}\hat r + c_1^{−1}(1 − \hat r)\right] \inf_{\gamma(\mu, \nu)} \mathbb{E}_\gamma |X − Z| = \mathbb{E}\left[c_0^{−1}\hat r + c_1^{−1}(1 − \hat r)\right] W_1(\mu, \nu) \le \mathbb{E}\left[c_0^{−1}\hat r + c_1^{−1}(1 − \hat r)\right] W_2(\mu, \nu), \qquad (33)
which completes the proof.
Proposition 6 follows from the Lipschitzness of the functional J , where D searches for the best coupling to derive the minimal loss difference between two probability measures. This proposition indicates that inf |J [ν]− J [Fµ]| is bounded by the Wasserstein distance, which justifies our geometric constraint presented in equation 4. It should be noted that the prior assumption regarding noisy labels is essential for Lipschitzness. Proposition 7. Let F : R+ × P2 be a functional on probability measures such that F [t, µ] = µt, where dµt = ptdNν , dNν = dqtdx, and let µt be a solution of the continuity equation in the 2-Wasserstein space defined as follows:
∂tµt = ∇ · (µt∇Φt) , (34)
which is represented as ∂tp(t, x) = ∇ · (p(t, x)∇ log q(t, x)) in a distributional sense. Then, the functional Ft[·] = F [t, ·] is defined unique and normalizes µ onto BW2 (Nν , e−tK2 (µ)), where K2(µ) ≤ ∞ is an integral operator in Definition 5 with respect to µ.
Proof. We assume that the probability measure µ_t is absolutely continuous with respect to the detour Gaussian measure N(m_ν, Σ_ν) = N_ν, i.e., µ_t ≪ N_ν. In this case, by the Radon-Nikodym theorem, there exists a unique probability density q(t, x) = q_t(x) ∈ C_0^∞ such that dµ_t = q_t dN_ν. Lemma 2. (WI-inequality, Otto & Villani (2000)) If the stationary state of µ_t with respect to P_t satisfies lim_{t→∞} E_µ[P_t f] = 0 for any f ∈ C_0^∞, then the following inequality holds:
\frac{d}{dt^+} W_2(\mu, \mu_t) \le \sqrt{I(\mu_t | \mathcal{N}_\nu)}. \qquad (35)
By integrating both sides of the inequality in Lemma 2 with respect to t ∈ (0,∞), the following inequality can be obtained:
W_2(\mu_t, \mathcal{N}_\nu) = \int_0^\infty \frac{d}{dt^+} W_2(\mu_t, \mathcal{N}_\nu)\, dt \le \int_0^\infty \sqrt{I(\mu_t | \mathcal{N}_\nu)}\, dt. \qquad (36)
In the aforementioned inequality, we replace the Fisher information with the diffusion generator L as follows:
W_2(\mu, \mathcal{N}_\nu) \le \int_0^\infty \sqrt{I(\mu_t | \mathcal{N}_\nu)}\, dt = \int_0^\infty \sqrt{\int [P_t q]^{-1} \Gamma(P_t q)\, d\mathcal{N}_\nu}\, dt = \int_0^\infty \sqrt{\int L(−\log P_t q)\, d\mu_t}\, dt. \qquad (37)
The second equality above is derived by leveraging the properties of the bilinear operator Γ (Bakry et al. (2013); Villani (2008)) with respect to the diffusion operator L:
\int [P_t q]^{-1} \Gamma(P_t q)\, d\mathcal{N}_\nu = −\int L(\log P_t q)\, q_t\, d\mathcal{N}_\nu = \int L(−\log P_t q)\, d\mu_t \ge 0. \qquad (38)
For simplicity, we denote |g| = g^+ for any g ∈ C_0^∞. According to Proposition 5, we can relate F_t µ = µ_t to its initial value µ = µ_{t=0} as follows:
\int_0^\infty \sqrt{\int L(−\log P_t q)(X)\, d[F_t \mu](X)}\, dt \le \int_0^\infty \sqrt{e^{-2\rho t} \int L(−\log P_{t=0} q)(X)\, d\mu(X)}\, dt \le \int_0^\infty \sqrt{e^{-2\rho t} \sup_{g \in C_0^\infty} \int L^+ g(Z)\, q\, d\mathcal{N}_\nu(Z)}\, dt = \int_0^\infty \sqrt{e^{-2\rho t}}\, dt\; \sqrt{\sup_{g \in C_0^\infty} \int L^+ g(X)\, d\mu(X)} = \rho^{-1} K_2(\mu). \qquad (39)
The second inequality holds because the objective selects the maximal element over the set of functions g ∈ C_0^∞ and Lg ≤ L^+ g. If the integration interval is set to (0, s), we obtain W_2(µ, F_t µ) ≤ (1/ρ)(1 − e^{−s}) K_2(µ). Our diffusion operator induces ρ = 1, which completes the proof.
Proposition 8. There is a scalar 0 < β < ∞, depending on ν, such that the following inequality holds:
W_2(\nu, F_t \mu) \le \left[\sqrt{d \beta \lambda_{max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\right] \vee \left[e^{-t} K_2(\mu) + K_2(\nu)\right]. \qquad (40)
As motivation for setting the detour measure to N_ν, we mentioned the natural property of the non-collapsing Wasserstein distance, W_2(ν, N_ν) ≠ 0. However, it is unclear from a geometric perspective how an upper bound for the intrinsic-statistics term (i.e., d_1 in Fig. 1) can be induced. In particular, when the covariance matrices of ν and N_ν are identical, it is difficult to determine a theoretical upper bound without additional tools. The first part of this proof focuses on resolving this issue; the second part follows naturally from Proposition 1. Note that in the following proposition, the Wasserstein moving average parameter is set to α = 0 for clarity.
Proof. Before proceeding with the first part of the proof, we define a constant β as follows:
\beta = \sup_{1 \le j \le d} \int_0^1 \frac{1}{s}\, \mathbb{E}_{Y_s} v_{s,j}^2(Y_s)\, ds. \qquad (41)
If we assume a mild condition such that mins,j inf1≤j≤dO(vs,j) ≥ O( √ s), then the integral term in β is finite and well-defined. This value will directly yield the upper bound of the Kullback–Leibler (KL) divergence of ν. First, we introduce the following inequality.
Lemma 3. (de Bruijn’s identity, Johnson & Suhov (2001); Nourdin et al. (2014)) We let Y ∼ ν, Z ∼ N (0, I) denote a standard Gaussian random variable, and let define Ys = √ sY + √ 1− sΣ 1 2 ν Z with the score function defined as vs(x) = ∇ log ps(x) with respect to the random variable Ys. Then, the following equality holds:
KL(\nu | \mathcal{N}(0, \Sigma_\nu)) = \int_0^1 \mathrm{Tr}\!\left(\frac{1}{2s}\, \Sigma_\nu\, \mathbb{E}_{p_s \sim Y_s}\left[v_s(Y_s) v_s(Y_s)^T\right]\right) ds. \qquad (42)
From equation 42, we can derive the relation between the KL-divergence and the constant β defined earlier:
\int_0^1 \frac{1}{2s} \mathrm{Tr}\left(\Sigma_\nu \mathbb{E}_x[v_s(Y_s) v_s(Y_s)^T]\right) ds \le \int_0^1 \frac{1}{2s} \mathrm{Tr}\left(\Sigma_\nu \mathbb{E}_x[v_{s,i} v_{s,j}]_{i,j}^d\right) ds \le \int_0^1 \frac{1}{2} \lambda_{max}(\Sigma_\nu) \sum_{j=1}^d \mathbb{E}\left[\frac{v_{s,j}^2(Y_s)}{s}\right] ds \le \frac{1}{2} \lambda_{max}(\Sigma_\nu) \int_0^1 \sum_{j=1}^d \beta\, ds = \frac{1}{2} \lambda_{max}(\Sigma_\nu)\, d\, \beta. \qquad (43)
The second inequality holds based on the following element property of symmetric positive-definite matrices:
\mathrm{Tr}(AB) \le \|A\|_{op} \mathrm{Tr}(B) = \lambda_{max}(A)\, \mathrm{Tr}(B), \quad \forall A, B \in Sym_d^+. \qquad (44)
It should be noted that because the distribution of ν is compactly supported (i.e., supp(q) is compact), the maximum eigenvalue of the covariance Σν is finite. The other relations are induced by the aforementioned definition. Next, we relate the KL-divergence and 2-Wasserstein distance naturally.
Definition 9. (Talagrand inequality for Gaussian measures, Otto & Villani (2000)) For any nondegenerate Gaussian measure N with a mean 0, the following inequality is satisfied:
W_2(\nu, \mathcal{N}) \le \sqrt{2\, KL(\nu | \mathcal{N})}, \quad \forall \nu \in P_2(\mathbb{R}^d). \qquad (45)
By combining Definition 9 and equation 43, we obtain
W_2(\nu, \mathcal{N}(0, \Sigma_\nu)) \le \sqrt{2\, KL(\nu | \mathcal{N}(0, \Sigma_\nu))} \le \sqrt{d \beta \lambda_{max}(\Sigma_\nu)} < \infty. \qquad (46)
According to the triangle inequality for the 2-Wasserstein distance, we obtain:
W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) \le W_2(\nu, \mathcal{N}(0, \Sigma_\nu)) + W_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{N}(0, \Sigma_\nu)). \qquad (47)
In Appendix C.3, we showed that the geodesic distance between two Gaussian measures with the same covariance equals the Euclidean distance between their means. Therefore, we obtain the following equality:
W_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{N}(0, \Sigma_\nu)) = W_2(\iota_{m_\nu \#}[\mathcal{N}(0, \Sigma_\nu)], \mathcal{N}(0, \Sigma_\nu)) = \|m_\nu − 0\|_2 = \|\mathbb{E}_\nu Y\|_2, \qquad (48)
where ιa(X) = X + a for any vector a ∈ supp(q). Now, by adding the two inequalities defined earlier, we can obtain
W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) \le \|\mathbb{E}_\nu Y\|_2 + \sqrt{d \beta \lambda_{max}(\Sigma_\nu)}, \qquad (49)
where the upper bound depends only on the statistical structure of ν. Specifically, the term ‖E_ν Y‖_2 represents the center of mass of the density of ν, and \sqrt{d β λ_{max}(Σ_ν)} relates to the covariance structure of ν.
By applying Proposition 7 to both F_t µ and ν, we can recover equation 5 as follows:
W_2(\nu, F_t \mu) \le \varepsilon = W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) + W_2(\mathcal{N}(m_\nu, \Sigma_\nu), F_t \mu) \le \left(\left[\|\mathbb{E}_\nu Y\|_2 + \sqrt{d \beta \lambda_{max}(\Sigma_\nu)}\right] \wedge K_2(\nu)\right) + e^{-t} K_2(\mu) \le \left[\sqrt{d \beta \lambda_{max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\right] \vee \left[e^{-t} K_2(\mu) + K_2(\nu)\right]. \qquad (50)
The second inequality is easily obtained as (a ∧ b) + c ≤ a ∨ (b + c) for any a, b, c ≥ 0, which completes the proof.
Proposition 9. (Concentration inequality for uncertain measures). Assume that there are constants s^⋆ ∈ [1/η, ∞), η ≥ 0 such that the following inequality is satisfied:
\mathbb{E}_{F_{s^⋆}\mu}[f^2] − \left(\mathbb{E}_{F_{s^⋆}\mu}[f]\right)^2 \le (1 + \eta)\, \mathbb{E}_{F_{s^⋆}\mu}[A \nabla f^T \nabla f], \qquad (51)
for A ∈ Sym_d^+ and D(A, Σ_ν) ≤ aη for some a > 0 and any metric D defined on Sym_d^+. In this case, there is a δ such that the following probability inequality for the uncertain measure holds:
F_{s^⋆}\mu\left(|σ − \mathbb{E}_\nu[σ]| \ge δ\right) \le 6\, e^{−\frac{\sqrt{2}\, δ^{3/2}}{K_2}}, \qquad (52)
where κ denotes the Lipschitz constant of σ.
Proof. Before proceeding with the main proof, we first establish the existence of s^⋆. As η → 0, the interval I = lim_{η→0}[1/η, ∞) converges to the singleton {∞}. In this case, equation 51 becomes the Poincaré inequality for the Gaussian measure N_ν, which can be written as
\lim_{\eta \to 0} \mathbb{E}_{F_{s^⋆}\mu}[f^2] − \left(\mathbb{E}_{F_{s^⋆}\mu}[f]\right)^2 \le \lim_{\eta \to 0} (1 + \eta)\, \mathbb{E}_{F_{s^⋆}\mu}[A \nabla f^T \nabla f] = \mathbb{E}_{F_{s^⋆}\mu}[\Sigma_\nu \nabla f^T \nabla f]. \qquad (53)
While the Poincaré inequality in equation 53 is uniquely defined, we can find at least one value s? satisfying equation 51. Let X(t, w) = Xt(w) denote the stochastic process with respect to qt(x) defined in the proof of Proposition 2. Additionally, let c = Eν [σ]− EFs?µ[σ]. Then, we can obtain the following inequality:
c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{F_{s^\star}\mu}[\sigma] = \kappa\Big(\mathbb{E}_\nu\big[\tfrac{\sigma}{\kappa}\big] - \mathbb{E}_{F_{s^\star}\mu}\big[\tfrac{\sigma}{\kappa}\big]\Big) \le \kappa \sup_{g\in \mathrm{Lip}_1}\big(\mathbb{E}_\nu g - \mathbb{E}_{F_{s^\star}\mu} g\big) \le \kappa W_1(F_{s^\star}\mu, \nu) \le \kappa W_2(F_{s^\star}\mu, \nu) \le \frac{\kappa K_2(\mu)}{1+\eta}. (54)
The first inequality is induced by the assumed κ-Lipschitzness of the function σ, and the second inequality is induced by the Kantorovich-Rubinstein theorem. The third inequality is natural because W_a(\cdot,\cdot) \le W_b(\cdot,\cdot) for any 1 \le a \le b < \infty. Because equation 51 is equivalent to the Poincaré inequality for the measure F_{s^\star}\mu, it satisfies the Bakry-Émery curvature-dimension condition CD(1+\eta, \infty). Thus, as shown in the proof of Proposition 2 (i.e., equation 39), the last inequality is induced. Additionally, based on the concentration inequality for F_{s^\star}\mu [Proposition 4.4.2, Bakry et al. (2013)], we can derive the following probability inequality:
F_{s^\star}\mu\big[\sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\big] \le 3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}, (55)
where the Poincaré constant for F_{s^\star}\mu is naturally 1+\eta and \|\sigma\|_{\mathrm{Lip}} = \kappa. Next, we will derive the desired form from equation 55. First, we introduce the following inequality.
\sigma(X_{s^\star}) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta \ge \mathbb{E}_\nu[\sigma] + \delta - \frac{\kappa}{1+\eta}K_2 (56)
The last inequality is directly induced by equation 54 because -c \ge -\frac{\kappa}{1+\eta}K_2. Since η, κ, and K_2 are constants with respect to w, the following set inclusion can be obtained naturally:
\mathcal{S}_1 = \{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\} \supseteq \{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1+\eta}K_2\} = \mathcal{S}_2. (57)
For the modified version of the original probability inequality, we take the probability measure F_{s^\star}\mu[\cdot] for the sets \mathcal{S}_1, \mathcal{S}_2, which is written as
3e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}} \ge F_{s^\star}\mu\big(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\}\big) \ge F_{s^\star}\mu\big(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1+\eta}K_2\}\big). (58)
The concentration inequality around \mathbb{E}_\nu[\sigma] is obtained by combining the inequalities induced by σ and −σ as follows:
\frac{1}{2} F_{s^\star}\mu\Big(\bigcup_{h\in\{\sigma,-\sigma\}}\{w : h(X_{s^\star}(w)) - \mathbb{E}_\nu[h] \ge \delta - \tfrac{\kappa}{1+\eta}K_2\}\Big) = F_{s^\star}\mu\big(\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta - \tfrac{\kappa}{1+\eta}K_2\}\big) \le 6e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}. (59)
The inequality in equation 59 is the general form relating the upper bound of the probability to (η, κ, K_2). Because this form is quite complicated and highly technical, we choose not to present all the detailed expressions of equation 59 in the main paper. Instead, we rewrite it in a much simplified form for clarity. Specifically, by setting κK_2/(1+η) = 0.5δ and rescaling δ to 2δ, the aforementioned inequality in equation 59 can be converted into the following simpler form:
F_{s^\star}\mu\big(\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta\}\big) \le 6e^{-\frac{\sqrt{2}\delta^{3/2}}{\kappa K_2}}. (60)
Finally, if we set σ = Softmax, then the Lipschitz constant is induced as κ = 1. This proof is completed by setting s? := T . | 1. What is the main contribution of the paper regarding computational efficiency and accuracy?
2. How does the proposed algorithm differ from other state-of-the-art approaches in terms of its assumptions and flexibility?
3. What are the strengths of the paper's technical aspects, particularly in its concentration results?
4. Are there any concerns or limitations regarding the applicability or generalization of the proposed method? | Review | Review
This paper proposes a computationally efficient Wasserstein distributional normalization algorithm for accurate classification of noisy labels. An explicit upper bound for the Wasserstein-2 distance is derived, and such a bound can be used as an estimator to determine if a network is over-parameterized. Empirical results on CIFAR-10/100 and Clothing1M suggest that the proposed algorithm outperforms other SOTA approaches.
Overall, this paper is very well-written and easy to follow. The problem is well-motivated, the proofs look solid, and the claims are well supported by experiments.
I am positive with respect to acceptance of this paper. First, compared to previous works, the proposed method (WDN) is fully non-parametric, sufficiently leverages the batches of training datasets, and does not assume any prior knowledge on class dependency. Hence WDN is flexible and potentially has a smaller generalization error (compared to existing methods based on sample selection). Second, I would like to emphasize the technical quality of the paper. The concentration results in this paper are by themselves quite elegant.
ICLR | Title
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
Abstract
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and enhance this relation to exploit useful information, even from uncertain samples. To this end, we impose geometric constraints on the uncertain samples by normalizing them into the Wasserstein ball centered on certain samples. Experimental results demonstrate that our WDN outperforms other state-of-the-art methods on the Clothing1M and CIFAR-10/100 datasets, which have diverse noisy labels. The proposed WDN is highly compatible with existing classification methods, meaning it can be easily plugged into various methods to improve their accuracy significantly.
1 INTRODUCTION
The successful results of deep neural networks (DNNs) on supervised classification tasks heavily rely on accurate and high-quality label information. However, annotating large-scale datasets is extremely expensive and time-consuming. Because obtaining high-quality annotations is very difficult, in most conventional works training data have been obtained using crowd-sourcing platforms Yu et al. (2018), which inevitably leads to noisy labels in the annotated samples.
While there are numerous methods that can deal with noisy labeled data, recent methods actively adopt the small-loss criterion, which makes it possible to construct classification models that are not susceptible to noise corruption. In this learning scheme, a neural network is trained using easy samples first in the early stages of training. Harder samples are then gradually selected to train mature models as training proceeds. Jiang et al. (2018) suggested collaborative learning models, in which a mentor network delivers the data-driven curriculum loss to a student network. Han et al. (2018); Yu et al. (2019) proposed dual networks to generate gradient information jointly using easy samples and employed this information to allow the networks to teach each other. Wei et al. (2020) adopted a disagreement strategy, which determines the gradient information to update based on disagreement values between dual networks. Han et al. (2020) implemented accumulated gradients to help the optimization process escape from over-parameterization and to obtain more generalized results. In this paper, we tackle the major issues raised by the aforementioned small-loss-criterion methods, as follows.
Through comprehensive experiments, the aforementioned methods gained empirical insight into network behavior under noisy labels. However, theoretical and quantitative explanations have not been closely investigated. In contrast, we give strong theoretical and empirical explanations to understand the network under noisy labels. In particular, we present an in-depth analysis of the small-loss criterion in a probabilistic sense. We exploit the stochastic properties of noisy labeled data and develop probabilistic descriptions of data under the small-loss criterion, as follows. Let P be a probability measure for the pre-softmax logits of the training samples, l be an objective function for classification, and 1{·} be an indicator function. Then, our central object to deal with is a truncated measure defined as
X \sim \mu|\zeta = \frac{\mathbb{1}_{\{X:\, l(X) > \zeta\}}P}{P[l(X) > \zeta]}, \qquad Y \sim \xi|\zeta = \frac{\mathbb{1}_{\{Y:\, l(Y) \le \zeta\}}P}{P[l(Y) \le \zeta]}, (1)
where X and Y, which are sampled from µ|ζ and ξ|ζ, denote uncertain and certain samples defined in the pre-softmax feature space¹ (i.e., Rd), respectively. In equation 1, µ and ξ denote the probability measures of uncertain and certain samples, respectively, and ζ is a constant. Most previous works have focused on the usage of Y and the sampling strategy for ζ, but the poor generalization caused by the abundance of discarded uncertain samples X has not been thoroughly investigated, even though these samples potentially contain important information. To understand the effect of noisy labels on the generalization bounds, we provide a concentration inequality for the uncertain measure µ, which renders the probabilistic relation between µ and ξ and the learnability of the network under noisy labels.
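As a concrete, illustrative sketch of the split in equation 1 (added here, not part of the original text), the following snippet separates a mini-batch of pre-softmax logits into certain and uncertain subsets by thresholding the per-sample loss at a quantile ζ; the helper names and the quantile-based choice of ζ are our assumptions.

import numpy as np

def cross_entropy(logits, labels):
    # Per-sample cross-entropy computed from pre-softmax logits and integer labels.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

def split_by_small_loss(logits, labels, rho=0.5):
    # Certain samples Y: the rho-fraction with the smallest loss (l <= zeta).
    # Uncertain samples X: the remaining (1 - rho)-fraction (l > zeta).
    losses = cross_entropy(logits, labels)
    zeta = np.quantile(losses, rho)
    certain = losses <= zeta
    return logits[certain], labels[certain], logits[~certain], labels[~certain]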
Because most conventional methods Han et al. (2018); Wei et al. (2020); Li et al. (2019a); Yu et al. (2019) require additional dual networks to guide misinformed noisy samples, their scalability is not guaranteed: the dual architectures have the same number of parameters as the base network. To alleviate this problem, we build statistical machinery that is fully non-parametric, simple to implement, and computationally efficient, reducing the computational complexity of conventional approaches while maintaining the concept of the small-loss criterion. Based on the empirical observation of ill-behaved certain/uncertain samples, we propose a gradient flow in the Wasserstein space, which can be induced by simulating a non-parametric stochastic differential equation (SDE) of Ornstein-Uhlenbeck type to control the ill-behaved dynamics. The reason for selecting these dynamics is thoroughly discussed in the following sections.
Thus, key contributions of our work are as follows.
• We theoretically verify that there exists a strong correlation between model confidence and the statistical distance between X and Y. We empirically show that the classification accuracy worsens when the upper bound ε of the 2-Wasserstein distance W2(µ, ξ) ≤ ε (i.e., the distributional distance between certain and uncertain samples) drastically increases. Due to the empirical nature of the upper bound ε, it can be used as an estimator to determine whether a network suffers from over-parameterization.
• Based on these empirical observations, we develop a simple, non-parametric, and computationally efficient stochastic model to control the observed ill-behaved sample dynamics. As a primal object, we propose the stochastic dynamics of a gradient flow (i.e., an Ornstein-Uhlenbeck process) and simulate the corresponding simple, non-parametric stochastic differential equation. Thus, our method does not require any additional learning parameters.
• We provide important theoretical results. First, a controllable upper bound ε with an inverse exponential ratio is induced, which indicates that our method can efficiently control the diverging Wasserstein distance. Second, a concentration inequality for the transported uncertain measure is presented, which clearly renders the probabilistic relation between µ and ξ.
2 RELATED WORK
Curriculum Learning & Small-loss Criterion. To handle noisy labels, Han et al. (2018); Yu et al. (2019); Jiang et al. (2018); Wei et al. (2020); Lyu & Tsang (2020a); Han et al. (2020) adopted curriculum learning or sample selection frameworks. However, these methods only consider a small number of selected samples: a large portion of the training samples is gradually eliminated and excluded by the end of training, which inevitably leads to poor generalization capabilities. By contrast, our method can extract useful information from unselected samples X ∼ µ (i.e., uncertain samples) and enhance these samples (e.g., X′ ∼ Fµ) for more accurate classification. Chen et al. (2019) iteratively apply cross-validation to randomly partitioned noisy labeled data to identify most of the samples that have correct labels; to generate such partitions, they adopt the small-loss criterion for selecting samples.
Loss Correction & Label Correction. Patrini et al. (2017a); Hendrycks et al. (2018); Ren et al. (2018) either explicitly or implicitly transformed noisy labels into clean labels by correcting classification losses. Unlike these methods, our method transforms the holistic information from uncertain samples into certain samples, which implicitly reduces the effects of potentially noisy labels. Because correcting label noise by modifying the loss dynamics does not perform well in extreme noise environments, Arazo et al. (2019) adopt a label augmentation method called MixUp Zhang et al. (2018).
1Due to the technical difficulties, we define our central objects on pre-softmax space rather than label space, i.e., the space of σ(X), σ(Y ), where σ indicates softmax function. Please refer to Appendix for more details.
Distillation. Li et al. (2019b) updated mean teacher parameters by calculating the exponential moving average of student parameters to mitigate the impact of gradients induced by noisy labels. Lukasik et al. (2020) deeply investigated the effects of label smoothing for noisy labels and linked label smoothing to loss correction in a distillation framework. Similar to these methods, our method leverages the useful properties of distillation models. We set ν as a pivot measure, which guides our normalization functional Fµ for uncertain measures. This is similar to self-distillation because uncertain training samples are forced to be normalized to those of past states.
Other methods. Lee et al. (2019) induced a robust generative classifier based on pre-trained deep models. Similar to our method, Damodaran et al. (2019) designed a constraint on the Wasserstein space and adopted an adversarial framework for classification models of noisy labeled data by implementing semantic Wasserstein distance. Pleiss et al. (2020) identify noisy labeled samples by considering AUM statistics which exploits differences in training dynamics of clean and mislabeled samples. In most recent work, Li et al. (2019a) adopts semi-supervised learning (SSL) methods to deal with noisy labels where the student network utilizes both labeled/unlabeled samples to perform semi-supervised learning guided by the other teacher network.
3 DISTRIBUTIONAL NORMALIZATION
Because our main target object is a probability measure (distribution), we first define an objective function in a distributional sense. Let l be the cross-entropy loss and r̂ be a corrupted label random vector obtained from a clean label r (independent of X) through an unknown label transition matrix Q. Then, a conventional objective function for classification with noisy labels can be defined as follows:
min µ J [µ] = min µ EX∼µ,r̂|Q [l(X; r̂)] . (2)
However, due to the significant changes in label information, the conventional objective function defined in equation 2 cannot be used for accurate classification. Instead of directly using uncertain samples X ∼ µ as in previous works, we normalize µ in the form of a metric ball and present a holistic constraint. For a clear mathematical description, we first introduce the following definition. Definition 1. (Wasserstein ambiguity set) Let P2(Rd) = {µ : Eµd2E(x0, x) <∞,∀x0 ∈ Rd} be a 2-Wasserstein space, where d denotes the number of classes, dE is Euclidean distance defined on Rd. Then, we define a Wasserstein ambiguity set (i.e., metric ball) in this space as follows:
BW2(ν, ε) = { µ ∈ P2 ( Rd ) :W2(µ, ν) ≤ ε } , (3)
whereW2 denotes the 2-Wasserstein distance and ν is the pivot measure. Then, we propose a new objective function by imposing geometric constraints on µ as follows:
\min_{F\mu \in B_{W_2}(\nu,\varepsilon),\, \xi} J[F\mu] + J[\xi] = \min_\theta \mathbb{E}_{X\sim F\mu_\theta, \hat{r}}[l(X; \hat{r})] + \mathbb{E}_{Y\sim \xi_\theta, \hat{r}}[l(Y; \hat{r})], (4)
where F : P2(Rd) → P2(Rd) is a functional on probability measures, which enforces the constraint on Fµ (i.e., Fµ ∈ BW2(ν, ε)) and constitutes our main objective. The right-hand side of equation 4 is the equivalent vectorial form of the distributional form on the left-hand side. Although our main objects are defined on the pre-softmax space, both probability measures µθ and ξθ are parameterized by a neural network with parameters θ. This newly proposed objective function uses the geometrically enhanced version Fµ of an uncertain measure together with a certain measure ξ. In equation 4, the probability measure ν is defined as follows: ν = arg min J[ξk⋆], where ξk denotes a certain measure at the current k-th iteration and k⋆ ∈ Ik−1 = {1, · · · , k − 1}. In other words, our method finds the best probability measure that represents all certain samples seen so far during training, and the uncertain measures are transported so that they lie in the Wasserstein ball centered on ν. In equation 4, the Wasserstein constraint on Fµ enforces uncertain measures to statistically resemble ν from a geometric perspective (i.e., W2(ν, Fµ) ≤ ε). Now, an important question naturally stems from the aforementioned analysis: how can we select the optimal radius ε? Clearly, finding an F that induces a small ε ≈ 0 is suboptimal, because then Fµ ≈ ν and using the objective function J[Fµ ≈ ν] can lead to the following critical problem: as the optimization process proceeds, enhanced uncertain samples X′ ∼ Fµ contribute less and less, because they are statistically identical to ν, meaning our objective in equation 4 would receive little benefit from these transported uncertain samples. By contrast, if we adopt a large radius ε, enhanced uncertain samples will be statistically and geometrically unrelated to ν, which causes the normalized measure Fµ to yield large losses and violates our objective.
To overcome the two problems above and select the radius, we make a detour through a Gaussian measure, cutting the path between ν and Fµ (i.e., ν → N(mν, Σν) → Fµ) rather than directly following the geodesic between ν and Fµ (i.e., ν → Fµ). Specifically, we decompose the original constraint in equation 4 into two terms using the triangle inequality of the Wasserstein distance:
W_2(\nu, F\mu) \le \varepsilon = \underbrace{W_2(\nu, \mathcal{N}(m_\nu,\Sigma_\nu))}_{d_1:\ \text{Intrinsic statistics}} + \underbrace{W_2(\mathcal{N}(m_\nu,\Sigma_\nu), F\mu)}_{d_2:\ \text{Wasserstein Normalization}}. (5)
The first, intrinsic statistics term sets a detour point as a Gaussian measure whose mean and covariance are the same as those of ν (i.e., mν = EY∼ν[Y] and Σν = CovY∼ν[Y]). The Wasserstein upper bound of this term depends only on the statistical structure of ν, because (mν, Σν) is determined by ν. Thus, this term induces a data-dependent, non-zero constant upper bound whenever ν ≠ Nν and prevents the upper bound from collapsing to ε → 0, regardless of F. This is a huge advantage when dealing with ε, because the first term can be considered a fixed constant during training. The second, normalization term represents our central objective. F facilitates geometric manipulation in the Wasserstein space and prevents the uncertain measure µ from diverging, where µ is normalized onto the Wasserstein ambiguity set BW2(ν, ε) as illustrated in Fig. 1. The theoretical and numerical advantages of setting the detour measure to a Gaussian are explained in the following section.
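The decomposition in equation 5 can also be monitored numerically during training. The sketch below is our own illustration (not the authors' code); it assumes the POT optimal-transport library is installed, and the toy arrays merely stand in for certain-sample and normalized uncertain-sample logits.

import numpy as np
import ot  # POT: Python Optimal Transport

def empirical_w2(A, B):
    # 2-Wasserstein distance between two empirical point clouds (rows are samples).
    M = ot.dist(A, B, metric='sqeuclidean')
    return np.sqrt(ot.emd2(ot.unif(len(A)), ot.unif(len(B)), M))

rng = np.random.default_rng(0)
nu = rng.standard_t(df=5, size=(256, 8))               # stand-in for certain-sample logits ~ nu
f_mu = rng.normal(1.0, 2.0, size=(256, 8))             # stand-in for normalized uncertain logits ~ F mu
m_nu, Sigma_nu = nu.mean(0), np.cov(nu, rowvar=False)
detour = rng.multivariate_normal(m_nu, Sigma_nu, 256)  # samples from the detour measure N(m_nu, Sigma_nu)

d1 = empirical_w2(nu, detour)     # intrinsic statistics term of equation 5
d2 = empirical_w2(detour, f_mu)   # Wasserstein normalization term of equation 5
print(d1 + d2, ">=", empirical_w2(nu, f_mu))           # triangle inequality of equation 5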
3.1 WASSERSTEIN NORMALIZATION
In the previous section, we present a novel objective function that imposes a geometric constraint on µ such that the transformed measure Fµ lies in BW2(ν, ε) for ν. Now, we specify F and relate it to the Gaussian measure (generally Gibbs measure). For simplicity, we denote Nν = N (mν ,Σν). Proposition 1. F : R+×P2 → P2 is a functional on the probability measure such thatF [t, µ] = µt, where dµt = ptdNν , dNν = dqtdx, and µt is a solution to the following continuity equations:
∂tµt = ∇ · (µtvt) , (6)
which is read as ∂tp(t, x) = ∇ · (p(t, x)∇ log q(t, x)) in a distributional sense. Then, a uniquely defined functional Ft[·] = F [t, ·] normalizes µ onto BW2 (Nν , e−tK2 (µ)), where K2(µ) > 0 is a constant that depends on µ.
It is well known that the solution to equation 6 induces a geodesic in the 2-Wasserstein space (Villani (2008)), which is the shortest path from µ = µt=0 to Nν. The functional Ft generates a path for µt along which the distance decays exponentially in the auxiliary variable t with constant K2, meaning W2(Nν, Ftµ) ≤ K2e−t. This theoretical result indicates that the Wasserstein distance in the second term of equation 5 can be reduced and controlled at an exponential rate. Thus, by setting a different t, our method can efficiently control the diverging distance in equation 5. Unfortunately, it is typically intractable to compute the partial differential equation (PDE) in equation 6.
Algorithm 1 Wasserstein Distributional Normalization
Require: α ∈ [0, 0.2], ρ ∈ [0.1, 0.65], T = 64, ∆t = 10−4, τ = 0.001
for k = 1 to K (i.e., the total number of training iterations) do
  1) Select (1 − ρ)N uncertain and ρN certain samples from the mini-batch of size N:
     {Y_k^n}_{n≤ρN} ∼ ξ_k, {X_k^n}_{n≤(1−ρ)N} ∼ µ_k
  2) Update the most certain measure ν:
     if J[ξ_k] < J[ν] then ν ← ξ_k, m_ν ← E[Y_k], and Σ_ν ← Cov[Y_k] end if
  3) Update the moving geodesic average N(m_α, Σ_α):
     Solve the Riccati equation T Σ_ν T = Σ_{ξ_k}.
     Σ_α = ((1 − α)I_d + αT) Σ_ν ((1 − α)I_d + αT) and m_α = (1 − α)m_ν + α m_{ξ_k}
  4) Simulate the discrete SDE for T steps:
     for t = 0 to T − 1 do
       X_{k,t+1}^n = X_{k,t}^n − ∇φ(X_{k,t}^n; m_α)∆t + \sqrt{2τ^{-1}Σ_α}\, dW_t^n, s.t. {X_{k,t=0}^n} ∼ µ_k, {X_{k,t=T}^n} ∼ F_T µ_k
     end for
  5) Update the network with the objective function:
     J[Fµ_k] + J[ξ_k] = E_{F_T µ_k}[l(X_{k,T}; r̂)] + E_{ξ_k}[l(Y_k; r̂)]
end for
To solve this problem, we adopt particle-based stochastic dynamics, which enable tractable computation. There exists a unique iterative form corresponding to the PDE in equation 6, the multi-dimensional Ornstein-Uhlenbeck process, which can be approximated using particle-based dynamics. In particular, we draw N(1 − ρ) uncertain samples from a single batch of N samples using equation 1 for a hyper-parameter 0 ≤ ρ ≤ 1. We then simulate a discrete stochastic differential equation (SDE) for each particle using the Euler-Maruyama scheme as follows:
X_{t+1}^n = X_t^n - \nabla\phi(X_t^n; m_\nu)\Delta_t + \sqrt{2\tau^{-1}\Delta_t\,\Sigma}\, Z^n, \quad Z^n \sim \mathcal{N}(0, I), (7)
where φ(Xt; mν) = (τ/2) d_E²(Xt, mν), n ∈ {1, · · · , N(1 − ρ)}, d_E is the Euclidean distance, and N is the mini-batch size. We selected the OU process as our stochastic dynamics for the following reasons. First, we want a computationally efficient, non-parametric method to estimate and minimize the second term of equation 5. The SDE in equation 7 corresponding to the OU process has a simple form with fixed drift and diffusion terms that are invariant over time, which allows a non-parametric simulation of the SDE. Because the simulation of equation 7 reduces to simple non-parametric for-loops in the implementation, our method is computationally very efficient compared to other baseline methods such as Han et al. (2018). Second, when estimating the empirical upper bound of the Wasserstein distance, the OU process admits an explicit form, Mehler's formula, which can be estimated efficiently (please refer to the Appendix for more details). The overall procedure for our method is summarized in Algorithm 1.
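A minimal sketch of the particle update in equation 7 follows (our illustration under stated assumptions, not the authors' implementation); it drives each uncertain logit toward the detour mean (mν, or mα when the moving geodesic average of Algorithm 1 is used) with the quadratic potential and adds covariance-scaled Gaussian noise. Default hyper-parameters follow Algorithm 1.

import numpy as np

def wasserstein_normalize(X, m_nu, Sigma_nu, T=64, dt=1e-4, tau=1e-3, seed=0):
    """Simulate the discrete Ornstein-Uhlenbeck SDE of equation 7 for T steps.

    X: (n, d) uncertain pre-softmax logits, m_nu: (d,), Sigma_nu: (d, d).
    """
    rng = np.random.default_rng(seed)
    # Symmetric square root of the covariance for the diffusion term.
    w, V = np.linalg.eigh(Sigma_nu)
    sqrt_Sigma = (V * np.sqrt(np.clip(w, 0, None))) @ V.T
    X = X.copy()
    for _ in range(T):
        drift = -tau * (X - m_nu) * dt               # -grad phi(X; m_nu) * dt, phi = (tau/2)||X - m_nu||^2
        noise = rng.normal(size=X.shape) @ sqrt_Sigma.T
        X = X + drift + np.sqrt(2.0 * dt / tau) * noise
    return X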
3.2 WASSERSTEIN MOVING GEODESIC AVERAGE
In our experiments, we observe that the best measure ν is not updated for a few epochs after training begins. This is problematic because ν then diverges significantly from the current certain measure ξk, which is equivalent to the normalized measure Fµk diverging from ξk, meaning XT and Y become increasingly statistically inconsistent. To alleviate this statistical distortion, we modify the detour measure from Nν to another Gaussian measure, which allows us to capture the statistics of both ξk and ν. Inspired by the moving average of Gaussian parameters in batch normalization Ioffe & Szegedy (2015), we propose the Wasserstein moving geodesic average. Specifically, we replace the Gaussian parameters {mν, Σν} with {mα, Σα} such that mα = (1 − α)mν + αmξk and Σα = ((1 − α)Id + αT) Σν ((1 − α)Id + αT), where T is the solution to the Riccati equation T Σν T = Σξk. Therefore, our final detour Gaussian measure is set to Nνα := N(m(α), Σ(α)), 0 ≤ α ≤ 1.²
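A sketch of this update is given below (our assumed implementation, not the authors' code). The Riccati equation T Σν T = Σξ admits the closed-form symmetric solution T = Σν^{-1/2}(Σν^{1/2} Σξ Σν^{1/2})^{1/2} Σν^{-1/2}, which is then used to blend the detour Gaussian.

import numpy as np

def psd_sqrt(A, inverse=False):
    # Symmetric (inverse) square root of a positive semi-definite matrix.
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 1e-12, None)
    s = 1.0 / np.sqrt(w) if inverse else np.sqrt(w)
    return (V * s) @ V.T

def moving_geodesic_average(m_nu, Sigma_nu, m_xi, Sigma_xi, alpha=0.2):
    # Solve the Riccati equation T Sigma_nu T = Sigma_xi in closed form.
    root, inv_root = psd_sqrt(Sigma_nu), psd_sqrt(Sigma_nu, inverse=True)
    T = inv_root @ psd_sqrt(root @ Sigma_xi @ root) @ inv_root
    I = np.eye(len(Sigma_nu))
    Sigma_a = ((1 - alpha) * I + alpha * T) @ Sigma_nu @ ((1 - alpha) * I + alpha * T)
    m_a = (1 - alpha) * m_nu + alpha * m_xi
    return m_a, Sigma_a

For α = 0 the detour stays at (mν, Σν), while for α = 1 it reaches (mξ, Σξ), matching the endpoints of the Wasserstein geodesic between the two Gaussian measures.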
4 THEORETICAL ANALYSIS
In equation 5, we select the detour point as a Gaussian measure because this measure can provide a statistical structure, which is similar to that of the optimal ν. In addition to this heuristic motivation, setting a detour point as a Gaussian measure (Gibbs measure) also provides theoretical advantages, e.g., the theoretical upper bound of the Wasserstein constraint terms. In this section, we investigate the explicit upper bounds of two terms in equation 5, which are naturally induced by the SDE.
2Please refer to Appendix D.4 for more details.
Proposition 2. A scalar 0 < β < ∞ exists and depends on ν, resulting in the following inequality:
W_2(\nu, F_t\mu) \le \varepsilon = K_1(\nu) \vee \big[e^{-t}K_2(\mu) + K_2(\nu)\big], (8)
where λ_max(Σν) denotes the maximum eigenvalue of the covariance matrix Σν and, for some constant 0 < K_1 < ∞, we have K_1(\nu) = \sqrt{d\beta\lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2, which depends only on ν.
Intuitively, K2(µ) can be interpreted as an indicator that tells us how the uncertain measure µ is diffused, whereas the designed term e−tK2(µ) controls the upper bound of the Wasserstein distance using a variable t. The other term K2(ν) does not vanish even with a very large t, which assures a non-collapsing upper-bound ε.
Proposition 3. (Concentration inequality for the normalized uncertain measure). Assume that there are some constants T ∈ [ 1η ,∞), η ≥ 0 such that the following inequality holds:
EFTµ[f2]− [EFTµ[f ]]2 ≤ (1 + η)EFTµ[A∇fT∇f ], f ∈ C∞0 (Rd), (9)
for A ∈ Sym+d and D(A,Σν) ≤ aη for some a > 0 with any metric D defined on Sym + d . In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
F_T\mu\big(|\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta\big) \le 6e^{-\frac{\sqrt{2}\delta^{3/2}}{K_2(\mu)}}, (10)
where σ denotes a soft-max function.
In equation 10, we show that the label information induced by the normalized uncertain measure is close to that of most certain measure Eν [σ], where the upper bound is exponentially relative to the initial diffuseness of µ (i.e.,K2(µ)). Because the upper bound of the probability inequality does not collapse to zero and FTµ is concentrated around the most certain labels (i.e.,Eν [σ]), the uncertain sample XT ∼ FTµ helps our method avoid over-parameterization.
4.1 EMPIRICAL UNDERSTANDINGS
We investigate the theoretical upper bound of the Wasserstein ambiguity (i.e., radius of the Wasserstein ball) for Fµ and its corresponding probability inequality. To provide more in-depth insights into the proposed method, we approximate the upper bound and demonstrate that our Wasserstein normalization actually makes neural networks more robust to label noise.
As we verified previously, according to Proposition 2, the following inequality holds:
W2(Ftµ, ν) ≤ ε = K1(ν) ∨ (K2(ν) +K2(Ftµ)) . (11)
Because the first term K1(ν) is constant, dependent on ν, and generally small compared to the second term with t ≤ T, we only examine the behavior of the second term K2(ν) + K2(Ftµ), which can be efficiently approximated in a simple form. Because our detour measure is Gaussian, we have the following inequality for any h ∈ C∞0(Rd):³
\hat{K}_2(\mu) = \lim_{s\to 0}\frac{1}{s}\,\mathbb{E}_{X\sim\mu,\, Z\sim\mathcal{N}_I}\Big[h\big(e^{-s}X + \sqrt{1-e^{-2s}}(\Sigma_\nu^{1/2}Z + m_\nu)\big) - h(X)\Big] \le K_2(\mu), (12)
where this equality holds if h is selected to induce a supremum over the set C∞0 . For approximation, we simply consider h(X) = ‖X‖2 as a test function. In this case, the following inequality naturally holds: ε̂ = K̂2(ν) + K̂2(Fµ) ≤ K2(ν) + K2(Fµ) ≤ K1(ν) ∨ (K2(ν) + K2(Fµ)) = ε. Thus, ε̂ can be considered as an approximation of the theoretical upper bound ε suggested in Proposition 2. Subsequently, we investigate the effects of Wasserstein normalization based on K̂2(µ) in equation 12.
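The limit in equation 12 can be approximated by Monte Carlo with a small finite s, as in the following sketch (our illustration; the L2-norm test function follows the text, while the specific value of s is our assumption).

import numpy as np

def k2_hat(X, m_nu, Sigma_nu, s=1e-3, seed=0):
    """Monte Carlo estimate of K2_hat(mu) from samples X ~ mu (equation 12)."""
    rng = np.random.default_rng(seed)
    w, V = np.linalg.eigh(Sigma_nu)
    sqrt_Sigma = (V * np.sqrt(np.clip(w, 0, None))) @ V.T
    Z = rng.normal(size=X.shape)
    shifted = np.exp(-s) * X + np.sqrt(1.0 - np.exp(-2.0 * s)) * (Z @ sqrt_Sigma.T + m_nu)
    h = lambda A: np.linalg.norm(A, axis=1)   # test function h(X) = ||X||_2
    return np.mean(h(shifted) - h(X)) / s

# epsilon_hat of Section 4.1 is then k2_hat evaluated on the certain samples (nu)
# plus k2_hat evaluated on the transported uncertain samples (F_T mu).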
(1) The proposed WDN ensures that the Wasserstein ambiguity is bounded. We examine the relation between ε̂ and test accuracy in an experiment using the CIFAR-10 dataset with symmetric noise at a ratio of 0.5. Fig.2 presents the landscape for the log10-scaled cumulative average of ε̂ and test accuracy over epochs. The red dotted lines represent the landscape of the vanilla network with cross-entropy loss, where ε̂k = K̂2(νk)+K̂2(Ft=0µk) and k is the epoch index. In this case, the time constant t is set to zero, because Wasserstein normalization is not employed for the vanilla network. The black lines indicate the landscape of the proposed method, where ε̂k = K̂2(νk) + K̂2(Ft=Tµk)
3Please refer to Appendix C.2 for additional details.
in this case. It is noteworthy that the test accuracy of the vanilla network begins to decrease after 13- epochs (red-dotted vertical lines in the top-right plot), whereas the Wasserstein ambiguity (i.e., upper bound of the Wasserstein distance) increases quadratically in the top-left plot. These experimental results verify that the distance between uncertain and most certain measure (i.e., ν) becomes large in the 2-Wasserstein space without any constraints in vanilla networks. They also indicate a definite relationship between Wasserstein ambiguity and test accuracy. In the proposed WDN, Wasserstein ambiguity can be efficiently bounded (i.e., lim supk ε̂k ≈ 2.15) as the test accuracy continues to increase, even after 13-epochs. For detailed analysis, we compute the deviation of an empirical upper bound as follows: ∆̂k = ε̂k − ε̂k−1. In the gray regions, the deviation for the vanilla network is grater than 2.5× 10−2, i.e., ∆k > 2.5× 10−2. Then, its test accuracy begins to drop, as shown in Fig.2. In contrast to the vanilla network, the maximum deviation of the proposed WDN is bounded above by a very small value (supk ∆̂k ≤ 8× 10−3). (2) The proposed WDN helps networks to escape from over-parameterization. To analyze the behavior of deep neural networks under over-parameterization with and without the proposed WDN, we design several variants of the WDN, which begin at delayed epochs. The green, orange, and blue curves in the second row of Fig.2 represent the landscapes, when our WDN is applied after kd ∈ {10, 15, 20} epochs, respectively. In this experiment, the upper bound ε̂k is defined as
\hat{\varepsilon}_k = \begin{cases} \hat{K}_2(\nu_k) + \hat{K}_2(F_{t=0}\mu_k), & \text{if } k < k_d, \\ \hat{K}_2(\nu_k) + \hat{K}_2(F_{t=T}\mu_k), & \text{if } k \ge k_d. \end{cases} (13)
Consider kd = 20, which is represented by the blue dotted vertical lines. Before our WDN is applied (i.e., k < kd), the network suffers from over-parameterization, which induces a significant performance drop, as indicated by the blue curve in the bottom-right plot. However, the network rapidly recovers to normal accuracy following Wasserstein normalization (i.e., k ≥ kd). Please note that similar behavior can be observed in the green and orange curves. In particular, the orange curve produces less fluctuations than the blue curve in terms of test accuracy. This indicates that the proposed WDN can help a network escape from over-parameterization by imposing geometric constraints on the Wasserstein space with proposed method.
(3) The proposed WDN can derive data-dependent bounds according to different noise levels. Another interesting point in Fig.2 is that all curves, excluding the red curve, converge to specific numbers 2.15 = ε := lim infk ε̂k ≤ lim supk ε̂k := ε̄ = 2.2. The upper bound ε̄ is neither overly enlarged nor collapsed to zero, while the lower bound ε is fixed for all curves. We argue that this behavior stems from the geometric characteristics of the proposed method, where the first term in equation 5, namelyW2(ν,Nν) ∝ K̂2(ν), is a non-zero data-dependent term that is minimized by the proposed geometric constraint. Therefore, we can derive the following relationship:
[W2(ν,Fµ) ≤ W2(ν,Nν) +W2(Nν ,Fµ)]⇓ ∝ [K̂2(ν) + K̂2(Fµ) = ε̂]⇓. (14) This empirical observation verifies that a detour point, which is set as a Gaussian measure, can induce the data-dependent bound (ε, ε̄), where our data-dependent bound can vary according to different
noise levels and efficiently leverage data-dependent statistics. Fig.2 indicates that classification models with more stable data-dependent bounds also induce more stable convergence in test accuracy.
5 EXPERIMENTS
5.1 EXPERIMENTS ON THE CIFAR-10/100 DATASET
We used settings similar to those proposed by Laine & Aila (2016); Han et al. (2018) for our experiments on the CIFAR-10/100 datasets. We used a 9-layered CNN as a baseline architecture with a batch size of 128. We used the Adam optimizer with (β1, β2) = (0.9, 0.99), where the learning rate linearly decreased from 10−3 to 10−5.
Synthetic Noise. We injected label noise into clean datasets using a noise transition matrix Qi,j = Pr(r̂ = j|r = i), where a noisy label r̂ is obtained from a true clean label r. We defined Qi,j by following the approach discussed by Han et al. (2018) (a short sketch of this injection procedure is shown below). For symmetric noise, we used the polynomial ρ = −1.11r² + 1.78r + 0.04 for 0.2 ≤ r ≤ 0.65, where r is the noise ratio. For asymmetric noise, we set ρ to 0.35. To select the enhanced detour measure, we set α to 0.2 for the Wasserstein moving geodesic average in all experiments. We trained our classification model over 500 epochs because the test accuracy of our method continued increasing, whereas those of the other methods did not. We compared our method with other state-of-the-art methods, including [MentorNet, Jiang et al. (2018)], [Co-teaching, Han et al. (2018)], [Co-teaching+, Yu et al. (2019)], [GCE, Zhang & Sabuncu (2018)], [RoG, Lee et al. (2019)], [JoCoR, Wei et al. (2020)], [NPCL, Lyu & Tsang (2020b)], [SIGUA, Han et al. (2020)], and [DivideMix, Li et al. (2019a)]. As shown in Table 1, the proposed WDN significantly outperformed the other baseline methods. Please note that our WDN utilizes a simple Gaussian measure as a target pivot measure; thus, there are potential risks when handling highly concentrated and non-smooth types of noise (e.g., asymmetric noise). Nevertheless, the proposed WDN still produced accurate results, even with asymmetric noise; in this case, a variant of our WDN (i.e., WDNcot) exhibited the best performance.
Open-set Noise. In this experiment, we considered the open-set noisy scenario suggested by Wang et al. (2018), where a large number of training images were sampled from the CIFAR-100 dataset; however, these images were still labeled according to the classes in the CIFAR-10 dataset. We used the same 9-layered CNN as in our previous experiment. For hyper-parameters, we set ρ and α to 0.5 and 0.2, respectively. As shown in Table 2, our method achieved state-of-the-art accuracy.
Collaboration with Other Methods. Because our core methodology is based on the small-loss criterion, our method can collaborate with co-teaching methods. In Han et al. (2018), only certain samples (Y ∼ ξ) were used for updating the colleague networks, where the number of uncertain samples gradually decreased until it reached a predetermined portion. To enhance potentially bad statistics for co-teaching, we taught the dual networks by considering the set of samples (Y, XT), where XT ∼ FTµ are uncertain samples enhanced using equation 7.
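The symmetric-noise injection referenced above can be sketched as follows (our simplified illustration; the exact transition matrices follow Han et al. (2018)): each clean label is flipped to a uniformly chosen different class with probability equal to the noise ratio.

import numpy as np

def inject_symmetric_noise(labels, noise_ratio, num_classes, seed=0):
    """Flip each label to a uniformly chosen different class with probability noise_ratio."""
    rng = np.random.default_rng(seed)
    Q = np.full((num_classes, num_classes), noise_ratio / (num_classes - 1))
    np.fill_diagonal(Q, 1.0 - noise_ratio)          # Q[i, j] = Pr(noisy = j | clean = i)
    noisy = np.array([rng.choice(num_classes, p=Q[y]) for y in labels])
    return noisy, Q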
Table 1 shows the test accuracy results for the proposed collaboration model with a co-teaching network (WDNcot). This collaboration model achieved the most accurate performance for the CIFAR100 dataset with asymmetric noise, which verifies that our WDN can be integrated into existing methods to improve their performance significantly, particularly when the density of pre-logits is highly-concentrated. Fig.3 reveals that co-teaching quickly falls into over-parameterization and induces drastic drop in accuracy after the 15th-epoch. WDNcot also exhibits a slight accuracy drop. However, it surpassed the baseline co-teaching method by a large margin (+7%) during training. This demonstrates that our enhanced samples XT can alleviate the over-parameterization issues faced by conventional co-teaching models, which helps improve their accuracy significantly.
5.2 EXPERIMENTS ON A REAL-WORLD DATASET
To evaluate our method on real-world datasets, we employed the Clothing1M dataset presented by Xiao et al. (2015), which consists of 1M noisy, labeled, and large-scale cloth images with 14 classes collected from shopping websites. It contains 50K, 10K, and 14K clean images for training, testing, and validation, respectively. We only used the noisy set for training; for testing, we used the clean set. We set α = 0.2 and ρ = 0.1. For fair comparison, we followed the settings suggested in previous works. We used a pre-trained ResNet50 as the baseline architecture with a batch size of 48. For the pre-processing steps, we applied a random center crop, random flipping, and normalization to 224×224 pixels. We adopted the Adam optimizer with a learning rate starting at 10−5 that linearly decayed to 5×10−6 at 24K iterations. Regarding the baseline methods, we compared the proposed method to [GCE, Zhang & Sabuncu (2018)], [D2L, Ma et al. (2018)], [FW, Patrini et al. (2017b)], [WAR, Damodaran et al. (2019)], [SL, Wang et al. (2019)], [JOFL, Tanaka et al. (2018)], [DMI, Xu et al. (2019)], [PENCIL, Yi & Wu (2019)], and [MLNT, Li et al. (2019b)]. Table 3 reveals that our method achieved competitive performance in comparison with the other baseline methods.
5.3 COMPUTATIONAL COST
Because Co-teaching, JoCoR, and DivideMix use additional networks, the number of network parameters is twice (8.86M) as many as that of the Vanilla network (4.43M ). In Table 4, we compare the average training time for first 5-epochs over various baseline methods under symmetric noise on the CIFAR-10 dataset. While non-parametric methods such as GCE and WDN require less than 12% additional time, other methods that require additional networks spent more time than non-parametric methods. The averaging time can vary according to different experimental environments. In table 4, we measure the time using publicly available code provided by authors.
6 CONCLUSION
We proposed a novel method called WDN for accurate classification of noisy labels. The proposed method normalizes uncertain measures to data-dependent Gaussian measures by imposing geometric constraints in the 2-Wasserstein space. We simulated discrete SDE using the Euler-Maruyama scheme, which makes our method fast, computationally efficient, and non-parametric. In theoretical analysis, we derived the explicit upper-bound of the proposed Wasserstein normalization and experimentally demonstrated a strong relationship between this upper-bound and the over-parameterization. We conducted experiments both on the CIFAR-10/100 and Clothing1M datasets. The results demonstrated that the proposed WDN significantly outperforms other state-of-the-art methods.
A OPEN-SOURCE DATASET
Transition matrix for CIFAR-10/100. For the experiment summarized in Table 1, we implemented open-source code to generate the noise transition matrix discussed by Han et al. (2018), as well as the 9-layered CNN architecture (https://github.com/bhanML/Co-teaching).
Open-set noise. For the experiment summarized in Table 2, we used the same dataset for open-set noisy labels presented by Lee et al. (2019) (https://github.com/pokaxpoka/ RoGNoisyLabel).
Clothing1M. For the experiment summarized in Table 3, we used the open-source dataset presented by Xiao et al. (2015) (https://github.com/Cysu/noisy_label).
B COMPARISONS TO RELATED WORKS
Methodology        | Parametric | Class-dependency | Distillation | Sample-weight | Sample-selection
DivideMix          |     ✓      |        ✗         |      ✗       |       ✗       |        ✓
Co-teaching        |     ✓      |        ✗         |      ✓       |       ✗       |        ✓
JoCoR              |     ✓      |        ✗         |      ✓       |       ✗       |        ✓
MLNT               |     ✓      |        ✓         |      ✓       |       ✗       |        ✗
Ren et al. (2018)  |     ✗      |        ✗         |      ✗       |       ✓       |        ✗
NPCL               |     ✗      |        ✗         |      ✗       |       ✓       |        ✗
GCE                |     ✗      |        ✗         |      ✗       |       ✓       |        ✗
WDN                |     ✗      |        ✗         |      ✗       |       ✗       |        ✗
Table B indicates that no previous methodologies can conceptually include our method.
Because the solution to the Fokker-Planck equation can be explicitly calculated without any additional parameters, our method is fully non-parametric (in terms of additional parameters beyond those required by the original neural network). By contrast, co-teaching is parametric because it requires a clone network with additional parameters that are copies of those in the original network. Similarly, MLNT requires an additional teacher network for training, which also contains a number of parameters.
Many methods based on the small-loss criterion select only certain samples, whereas our method uses the combination of ρN certain and (1 − ρ)N normalized uncertain samples. Therefore, our method can fully leverage each batch of the training dataset, where (1 − ρ)N + ρN = N. Additionally, our method does not assume any class-dependent prior knowledge. Rather than considering class-wise prior knowledge, our method uses holistic information from both certain and uncertain samples (i.e., Y and XT) in the logit space. Other meta-class-based models, such as MLNT, assume class-wise meta prior knowledge from a teacher network.
Arazo et al. (2019) assumed a beta-mixture model as the label distribution on the label space. However, due to the non-deterministic nature of noisy label distributions, this sometimes fails to train with extremely non-uniform types of noise; for example, Arazo et al. (2019) reported a failure case on the Clothing1M dataset. It seems that the fundamental assumption on the noise model of mixup will be improved in future work. Similar to this method, our work has trouble when dealing with synthetic asymmetric noise at a high ratio, where a relatively large performance drop is observed in Table 1 (although our method produces the second-best performance in the table).
The most recent work, Li et al. (2019a), also adopts co-training by implementing an additional dual network, together with a more sophisticated co-divide/co-guessing methodology based on SSL. We expect that the Wasserstein distance between the labeled and unlabeled probability measures is well-controlled in their method. We think that applying OT/Markov theory (as in our paper) to their method would broaden the understanding of the LNL problem.
In contrast to sample weight methods such as GCE and NPCL, which require prior knowledge regarding the cardinality of the training samples to be weighted, our method is free from such assumptions because our Wasserstein normalization is applied in a batch-wise manner.
C TECHNICAL DIFFICULTY FOR APPLYING GENERAL OPTIMAL TRANSPORT/MARKOV THEORY TO LABEL SPACE.
Let X, Y be uncertain and certain samples in the pre-softmax feature space, and assume that we consider the distributional constraint on the label space (the space of σ(X), σ(Y), where σ denotes the soft-max function). This space is not suitable for defining an objective function such as equation 5: all samples in this label space are of the form σ(X) = [a1, a2, · · · , ad] with \sum_{i=1}^d a_i = 1, so the label space is the affine simplex Ud, which is a subset of Euclidean space, Ud ⊂ Rd. In this case, the definition of the Wasserstein space in equation 4 is not applicable, because dE is not a true metric on Ud. Moreover, the Wasserstein space P2(Ud) has rarely been investigated in the mathematical literature, which makes it impossible to use the technical details, assumptions, and theories developed for P2(Rd), which are the theoretical grounds of our work. However, if we look at this problem from a slightly different point of view and consider the pre-softmax space Rd, with P2(Rd) as our base space, all the technical issues that arise when trying to use OT tools in P2(Ud) can be overcome or ignored. Since the soft-max is a non-parametric one-to-one function connecting the pre-softmax feature space Rd to Ud, there exists a unique label in Ud as the mapped point of each manipulated uncertain sample. Even though our objects are defined on the pre-softmax space, the theoretical analysis in Proposition 3 involves the soft-max function in order to evaluate the concentration inequality of the proposed transformation F as it acts on the label space Ud.
D MATHEMATICAL BACKGROUND
In this section, we introduce important definitions, notations, and propositions used in our proofs and the main paper.
D.1 NOTATION
We denote f#µ as a push-forward of µ through f . C∞0 (Rd) denotes the set of∞-class functions with compact support in Rd. For the Lp-norm of the function f , we denote ‖f‖p,ν = ( ∫ |f |pdν) 1 p . The Hessian matrix of the function f is denoted as Hess[f ] = [∂i∂jf ]di,j . Sym + d denotes the space for semi-definite positive symmetric matrices of size d× d. ‖f‖Lip denotes the Lipschitz norm of the function f . For any matrix A ∈Md, we let ‖A‖op denote the operator norm of A.
D.2 DIFFUSION-INVARIANCE AND HYPER-CONTRACTIVITY
Definition 2. The Markov semigroup (Pt)t≥0 in Rd acting on a function f ∈ C∞0 is defined as follows:
Ptf(x) = ∫ f(x′)pt(x, dx ′), (15)
where pt(x, dx′) is a transition kernel that is the probability measure for all t ≥ 0. Definition 3. (Diffusion Operator) Given a Markov semi-group Pt at time t, the diffusion operator (i.e., infinitesimal generator) L of Pt is defined as
Lg(y) = \lim_{t\to 0}\frac{1}{t}\big(P_t g(y) - g(y)\big) = \sum_{i,j}\frac{\partial^2}{\partial y_i \partial y_j}B_{ij}(y)g(y) - \sum_i A_i(y)\frac{\partial}{\partial y_i}g(y), (16)
where B and A are matrix and vector-valued measurable functions, respectively. Bij denotes the (i, j)-th function of B and Ai denotes the i-th component function of A. Definition 4. (Diffusion-invariant Measure) Given the diffusion operator L, the probability measure µ is considered to be invariant measure to L when EX∼µ[Lf(X)] = 0 for any f ∈ C∞0 . Lemma 1. (Infinitesimal generator for the multivariate Gaussian measure, Bolley & Gentil (2010).) The Gaussian measure Nν := N (mν ,Σν) with a mean mν and covariance Σν is an invariant measure according to the following diffusion-operator L:
Lf(x) = ΣνHess[f ](x)− (x−mν)T ∇f(x), ∀f ∈ C∞0 (Rd), (17) where Bij(x) := [Σν ]ij is a constant function, and Ai(x) := xi −miν .
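As a quick numerical illustration of the diffusion-invariance in Definition 4 and Lemma 1 (added by us, not part of the original text), the following sketch evaluates Lf for the test function f(x) = ||x||² and checks that its expectation under N(mν, Σν) is approximately zero; we read the first term of equation 17 as the trace contraction of Σν with the Hessian, which is an assumption of this sketch.

import numpy as np

rng = np.random.default_rng(0)
d = 5
m_nu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma_nu = A @ A.T + np.eye(d)            # a positive-definite covariance

X = rng.multivariate_normal(m_nu, Sigma_nu, size=200000)
# For f(x) = ||x||^2: Hess[f] = 2I and grad f(x) = 2x, so
# Lf(x) = Tr(Sigma_nu * 2I) - (x - m_nu)^T (2x).
Lf = 2.0 * np.trace(Sigma_nu) - 2.0 * np.sum((X - m_nu) * X, axis=1)
print(Lf.mean())   # approximately 0: N(m_nu, Sigma_nu) is invariant for L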
This generator serves as our main tool for the geometric analysis of the upper bound ε. In Section 4.1 in the main paper, we introduced an approximate upper bound K̂2(µ) without any general description of the inequality involved. We now introduce the underlying mathematics for equation 12. Because our detour measure is Gaussian, there is a unique semi-group Pt, called the multidimensional Ornstein-Uhlenbeck semi-group, that is invariant to Nν. Specifically, Pt is defined as follows:
Psh(X) = EZ∼NI [ h ( e−sX + √ 1− e−2s(Σ 1 2 ν Z + mν) )] , ∀h ∈ C∞0 . (18)
The invariance property of Pt relative to our detour measure is naturally induced by the following Proposition: Proposition 4. We define C : Rd → Rd and C(X) = AX + b such that A ∈ Sym+d ,b ∈ Rd, and select an arbitrary smooth h ∈ C∞0 (Rd). We then define the diffusion Markov semi-group Psh as follows:
Psh(X) = EZ∼N [ h ( e−sX + √ 1− e−2sC(Z) )] . (19)
Then, N(A², b) is invariant with respect to Ps, meaning the following equality holds for every h and s ≥ 0:
\int_{\mathbb{R}^d}\big[P_s h(X) - h(X)\big]\,d\mathcal{N}(A^2, \mathbf{b})(X) = 0. (20)
Proof. For simplicity, we denote N(A², b) := N_C.
\int P_s h(X)\,d\mathcal{N}_C(X) = \int\!\!\int h\big(e^{-s}X + \sqrt{1-e^{-2s}}\,C(Z)\big)\,d\mathcal{N}_C(X)\,d\mathcal{N}(Z) = \int\!\!\int h \circ C\big(e^{-s}Z' + \sqrt{1-e^{-2s}}\,Z\big)\,d\mathcal{N}(Z')\,d\mathcal{N}(Z). (21)
The second equality holds because C is linear in Rd. Let e^{-s} = \cos\theta and \sqrt{1-e^{-2s}} = \sin\theta for any 0 ≤ θ ≤ 2π. Then, we define φ as φ(Z', Z) = e^{-s}Z' + \sqrt{1-e^{-2s}}\,Z = \cos(\theta)Z' + \sin(\theta)Z, and π(Z', Z) = Z.
(\mathcal{N}\otimes\mathcal{N}) \circ (C\circ\phi)^{-1} = \big((\mathcal{N}\otimes\mathcal{N})\circ\phi^{-1}\big)\circ C^{-1} = \mathcal{N}\circ C^{-1}. (22)
However, we know that d\mathcal{N}[C^{-1}(X)] = d\mathcal{N}_C(X) = \big((2\pi)^d|A^2|\big)^{-\frac{1}{2}} e^{-0.5(X-\mathbf{b})^T A^{-2}(X-\mathbf{b})}. By combining equation 21 and equation 22, one can derive the following result:
\int h\circ C\big(e^{-s}Z' + \sqrt{1-e^{-2s}}Z\big)\,d[\mathcal{N}\otimes\mathcal{N}] = \int h(X)\,d\big[(\mathcal{N}\otimes\mathcal{N})\circ\phi^{-1}\circ C^{-1}\big](X) = \int h(X)\,d[\mathcal{N}\circ C^{-1}](X) = \int h(X)\,d\mathcal{N}[C^{-1}(X)] = \int h(X)\,d\mathcal{N}_C(X). (23)
Proposition 4 demonstrates the invariance property of the defined semi-group. If we setA = Σ 1 2 ν ,b = mν , then we can recover equation 18.
We are now ready to define the approximation of K2(µ) in terms of semi-group invariance. Specifically, for any real-valued smooth h, we have the following inequality:
\hat{K}_2(\mu) = \mathbb{E}_{X\sim\mu}[Lh(X)] = \lim_{s\to 0}\mathbb{E}_{X\sim\mu}\Big[\frac{1}{s}\big(P_s h(X) - h(X)\big)\Big] = \lim_{s\to 0}\frac{1}{s}\mathbb{E}_{X\sim\mu,\,Z\sim\mathcal{N}_I}\Big[h\big(e^{-s}X + \sqrt{1-e^{-2s}}(\Sigma_\nu^{1/2}Z + m_\nu)\big) - h(X)\Big] \le K_2(\mu). (24)
This inequality holds if h is selected to induce a supremum over the set C∞0 , where suph K̂2(µ, h) = suph EX∼µ[Lh(X)] = K2(µ). Although a more sophisticated design for the test function h will induce a tighter upper bound for K̂2, we determined that the L2-norm is generally sufficient.
Definition 5. (Diffuseness of the probability measure) We define the integral operator K2 : W2(Rd)→ R+ as follows:
K2(µ) = √ sup f∈C∞0 ∫ Rd |Lf(x)| dµ(x). (25)
According to Definition 4, we know that ∫ Lf(X)dNν(X) = 0 for any f . Based on this observation, it is intuitive that K2 estimates how the probability measure ν is distorted in terms of diffusion invariance. While this measure takes a supremum over the function space C∞0 , it searches for a function that enables the estimation of maximal distortion. Because the value of K2 is entirely dependent on the structure of µ, K2 can be considered as a constant for the sake of simplicity if the uncertain measure µ is fixed over one iteration of training. Definition 6. (Diffusion carré du champ) Let f, g ∈ C∞0 (Rd). Then, we define a bilinear form Γc in C∞0 (Rd)× C∞0 (Rd) as
Γe(f, g) = 1
2 [LΓe−1(fg)− Γe−1(fLg)− Γe−1(gLf)], e ≥ 1. (26)
We also denote Γ(f) ≡ Γ(f, f). The bilinear form Γ can be considered as a generalization of the integration by the parts formula, where ∫ fLg + Γ(f)dµ = 0 for the invariant measure µ of L.
Definition 7. (Curvature-Dimension condition, Ambrosio et al. (2015)) We can say that the infinitesimal generator L induces the CD(ρ,∞) curvature-dimension condition if it satisfies Γ1(f) ≤ ρΓ2(f) for all f ∈ C∞0 .
Because our diffusion operator generates a semi-group with respect to the Gibbs measure, the curvature-dimension condition can be calculated explicitly. Through simple calculations, the firstorder (c = 1) diffusion carré du champ can be induced as follows:
Γ1(f) = ( [∇f ]TΣν∇f )2 . (27)
Similarly, the second-order (c = 2) diffusion carré du champ is calculated as follows:
Γ2(f) = 1
2
[ L ( Γ1(f 2) ) − 2Γ1 (f,L(f)) ] = Tr ([ Σν∇2f ]2) + ( [∇f ]TΣν∇f )2 = Tr ([ Σν∇2f ]2) + Γ1(f),
(28)
for an arbitrary f ∈ C∞0 (Rd). While Tr ([ Σ∇2f ]2)
is non-negative, we can infer that Γ1 ≤ Γ2. In this case, the diffusion operator L defined in Lemma 1 induces the CD(ρ = 1,∞) curvaturedimension condition. For the other diffusion operators, please refer to Bolley & Gentil (2010). Proposition 5. (Decay of Fisher information along a Markov semigroup, Bakry et al. (2013).) If we assume the curvature-dimension condition CD(ρ,∞), then I(µt|Nν) ≤ e−2ρtI(µ|Nν).
The exponential decay of the Fisher information in Proposition 5 is a core property of the exponential decay of the Wasserstein distance, which will be used in the proof of Proposition 2.
D.3 FOKKER-PLANK EQUATION, SDE
Definition 8. (Over-damped Langevin Dynamics) We have dXt = −∇φ(Xt;mν)dt+ √ 2τ−1ΣνdWt, (29)
where φ (Xt;mν) = τ2d 2 (Xt,mν), Wt denotes Brownian motion, and d denotes Euclidean distance. The particle Xt is distributed in Xt ∼ pt. The probability density limt→∞ p(x, t) with respect to X∞ converges to the Gaussian density X∞ = √ Σν(Z + mν) ∼ p∞(x) = q(x) ∝ e−d(x,mν) TΣ−1ν d(x,mν).
In classical SDE literature, it is stated that E [ sup0≤t≤T ∣∣∣X̂t −Xt∣∣∣] ≤ G(N%)− 12 , where G(T ) is some constant that depends only on T and X̂ denotes the true solution of the SDE in equation 29. While the number of uncertain samples is greater than N% > 40, our method exhibits acceptable convergence.
D.4 GAUSSIAN WASSERSTEIN SUBSPACES
It is known that the space of non-degenerate Gaussian measures (i.e., covariance matrices are positivedefinite) forms a subspace in the 2-Wasserstein space denoted asW2,g ∼= Sym+d × Rd. Because the 2-Wasserstein space can be considered as a Riemannian manifold equipped with Riemannian metrics Villani (2008), W2,g can be endowed with a Riemannian structure that also induces the Wasserstein metric (McCann (1997)). In the Riemannian sub-manifold of Gaussian measures, the geodesic between two points γ(0) = NA and γ(1) = NB is defined as follows Malagò et al. (2018):
γ(α) = Nt = N (m(α),Σ(α)), (30)
where m(α) = (1 − α)mA + αmB and Σ(α) = [(1− α)I + αT ] ΣA [(1− α)I + αT ], where T ΣAT = ΣB . In Section 3.2, we set (mA,ΣA) → (mν ,Σν) and (mB ,ΣB) → (mξk ,Σξk). Regardless of how ν is updated, the statistical information regarding the current certain measure ξk is considered in the detour Gaussian measure, which yields a much smoother geometric constraint on µ.
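A small numerical sketch (ours, not from the paper) of the closed-form 2-Wasserstein distance between Gaussian measures in this subspace; with identical covariances it reduces to the Euclidean distance between the means, which is the mean-translation property invoked in the proofs (equation 48).

import numpy as np

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between N(m1, S1) and N(m2, S2)."""
    w, V = np.linalg.eigh(S1)
    root1 = (V * np.sqrt(np.clip(w, 0, None))) @ V.T
    cross = root1 @ S2 @ root1
    wc, Vc = np.linalg.eigh(cross)
    cross_root = (Vc * np.sqrt(np.clip(wc, 0, None))) @ Vc.T
    bures2 = np.trace(S1 + S2 - 2.0 * cross_root)
    return np.sqrt(np.sum((m1 - m2) ** 2) + max(bures2, 0.0))

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); S = A @ A.T + np.eye(4)
m1, m2 = rng.normal(size=4), rng.normal(size=4)
print(gaussian_w2(m1, S, m2, S), np.linalg.norm(m1 - m2))  # equal covariances: the two values agree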
E PROOFS
Proposition 6. Let Γ(µ, ν) be the set of couplings between µ and ν, and assume that the noisy label r̂ is independent of X. For the functional J[µ] = \mathbb{E}_{X\sim\mu}\, l(X; \hat{r}), we define D(µ, ν) as:
D(\mu, \nu) = \inf_{\gamma\in\Gamma(\mu,\nu)} |J[\mu] - J[\nu]|, (31)
where D : P2 ×P2 → R. Then, D is the metric defined on P2, which is weaker than the Wasserstein metric, where D(µ, ν) ≤ αW2(µ, ν) for α = c−10 r̂ + c −1 1 (1− r̂) and some constants c0, c1 > 0.
Proof.
|J [ν]− J [µ]| = |Eµ[l(X; r̂)]− Eν [l(Z; r̂)]| = |Eµ⊗ν [r̂ (log σ(X)− log σ(Z))− (1− r̂) (log(1− σ(X))− log(1− σ(Z)))]| ≤ E |r̂Eµ⊗ν [log σ(X)− log σ(Z)]|+ E |(1− r̂)Eµ⊗ν [log(1− σ(X))− log(1− σ(Z))]| ≤ Er̂Eµ⊗ν |log σ(X)− log σ(Z)|+ E(1− r̂)Eµ⊗ν |log(1− σ(X))− log(1− σ(Z))| ≤ c−10 E(r̂)Eµ⊗ν |X − Z|+ c −1 1 E(1− r̂)Eµ⊗ν |Z −X| = E[c−10 r̂ + c −1 1 (1− r̂)]Eµ⊗ν |X − Z| (32)
By taking the infimum of the aforementioned inequality over the set of couplings γ(µ, ν), we obtain the following inequality:
D(\nu, \mu) = \inf_{\gamma(\mu,\nu)} |J[\nu] - J[\mu]| \le \mathbb{E}[c_0^{-1}Y + c_1^{-1}(1-Y)]\inf_{\gamma(\mu,\nu)}\mathbb{E}_\gamma|X - Z| = \mathbb{E}[c_0^{-1}Y + c_1^{-1}(1-Y)]\,W_1(\mu,\nu) \le \mathbb{E}[c_0^{-1}Y + c_1^{-1}(1-Y)]\,W_2(\mu,\nu), (33)
which completes the proof.
Proposition 6 follows from the Lipschitzness of the functional J , where D searches for the best coupling to derive the minimal loss difference between two probability measures. This proposition indicates that inf |J [ν]− J [Fµ]| is bounded by the Wasserstein distance, which justifies our geometric constraint presented in equation 4. It should be noted that the prior assumption regarding noisy labels is essential for Lipschitzness. Proposition 7. Let F : R+ × P2 be a functional on probability measures such that F [t, µ] = µt, where dµt = ptdNν , dNν = dqtdx, and let µt be a solution of the continuity equation in the 2-Wasserstein space defined as follows:
∂tµt = ∇ · (µt∇Φt) , (34)
which is represented as ∂tp(t, x) = ∇ · (p(t, x)∇ log q(t, x)) in a distributional sense. Then, the functional Ft[·] = F [t, ·] is defined unique and normalizes µ onto BW2 (Nν , e−tK2 (µ)), where K2(µ) ≤ ∞ is an integral operator in Definition 5 with respect to µ.
Proof. We assume that the probability measure µt is absolutely continuous with respect to the detour Gaussian measure N (mν ,Σν) = Nν , µt Nν . In this case, according to the Radon-Nikodym theorem, there is a corresponding unique probability density q(t, x) = qt(x) ∈ C∞0 such that dµt = qtdNν . Lemma 2. (WI-inequality, Otto & Villani (2000)) If the stationary state of µt with respect to Pt satisfies limt→∞ Eµ[Ptf ] = 0 for any f ∈ C∞0 , then the following inequality holds:
\frac{d}{dt^+}W_2(\mu, \mu_t) \le \sqrt{I(\mu_t|\mathcal{N}_\nu)}. (35)
By integrating both sides of the inequality in Lemma 2 with respect to t ∈ (0,∞), the following inequality can be obtained:
W_2(\mu_t, \mathcal{N}_\nu) = \int_0^\infty \frac{d}{dt^+}W_2(\mu_t, \mathcal{N}_\nu)\,dt \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\,dt. (36)
In the aforementioned inequality, we replace the Fisher information with the diffusion generator L as follows:
W_2(\mu, \mathcal{N}_\nu) \le \int_0^\infty \sqrt{I(\mu_t|\mathcal{N}_\nu)}\,dt = \int_0^\infty\sqrt{\int [P_t q]^{-1}\Gamma(P_t q)\,d\mathcal{N}_\nu}\,dt = \int_0^\infty\sqrt{\int L(-\log P_t q)\,d\mu_t}\,dt. (37)
The second equality above is derived by leveraging the properties of the bilinear operator Γ (Bakry et al. (2013); Villani (2008)) with respect to the diffusion operator L, which is defined as follows:∫
[Ptq] −1Γ(Ptq)dNν = − ∫ L(logPtq)qtdNν = ∫ L(− logPtq)dµt ≥ 0. (38)
For simplicity, we denote |g| = g^+ for any g ∈ C∞0. According to Proposition 5, we can relate Ftµ = µt to its initial term µ = µt=0 as follows:
\int_0^\infty \sqrt{\int L(-\log P_t q)(X)\,d[F_t\mu](X)}\,dt \le \int_0^\infty\sqrt{e^{-2\rho t}\int L(-\log P_{t=0} q)(X)\,d\mu(X)}\,dt \le \int_0^\infty\sqrt{e^{-2\rho t}\sup_{g\in C_0^\infty}\int L^+ g(Z)\,q\,d\mathcal{N}_\nu(Z)}\,dt = \int_0^\infty\sqrt{e^{-2\rho t}}\,dt\,\sqrt{\sup_{g\in C_0^\infty}\int L^+ g(X)\,d\mu(X)} = \rho^{-1}K_2(\mu). (39)
The second inequality is naturally induced, because the proposed objective is defined by selecting the maximal element over the set of functions g ∈ C∞0 and Lg ≤ L+g. If the integral interval is set to (0, s), then we can induce W2(µ, Ftµ) ≤ (1/ρ)(1 − e^{-s})K2(µ). Our diffusion operator induces ρ = 1, which completes the proof.
Proposition 8. There is a scalar 0 < β < ∞ dependent on ν such that the following inequality holds: W2(ν,Ftµ) ≤ [√ dβλmax(Σν) + ‖EνY ‖2 ] ∨ [ e−tK2(µ) +K2(ν) ] . (40)
As a motivation for setting a detour measure to Nν, we mentioned the natural property of the non-collapsing Wasserstein distance, W2(ν, Nν) ≠ 0. However, it is unclear from a geometric perspective exactly how the upper bound (i.e., W2(ν, Nν) ≤ ?) can be induced based on the intrinsic statistics term (i.e., d1 in Fig. 1). Specifically, in the situation where the covariance matrices of ν and Nν are identical, it is difficult to determine a theoretical upper bound without additional tools. The first part of this proof focuses on resolving this important issue. The second part of the proof is naturally induced by Proposition 1. Please note that in the following proposition, the parameter for the Wasserstein moving average is set to α = 0 for clarity.
Proof. Before proceeding with the first part of the proof, we define a constant β as follows:
$$\beta = \sup_{1 \le j \le d} \int_0^1 \frac{1}{s}\, \mathbb{E}_{Y_s}\, v_{s,j}^2(Y_s)\, ds. \tag{41}$$
If we assume a mild condition such that $\min_{s} \inf_{1 \le j \le d} O(v_{s,j}) \ge O(\sqrt{s})$, then the integral term in β is finite and well-defined. This value will directly yield the upper bound of the Kullback–Leibler (KL) divergence of ν. First, we introduce the following inequality.
Lemma 3. (de Bruijn’s identity, Johnson & Suhov (2001); Nourdin et al. (2014)) Let Y ∼ ν, let Z ∼ N(0, I) denote a standard Gaussian random variable, and define $Y_s = \sqrt{s}\, Y + \sqrt{1-s}\, \Sigma_\nu^{1/2} Z$, with the score function defined as $v_s(x) = \nabla \log p_s(x)$ with respect to the random variable Y_s. Then, the following equality holds:
$$\mathrm{KL}(\nu\,|\,\mathcal{N}(0, \Sigma_\nu)) = \int_0^1 \mathrm{Tr}\Big(\frac{1}{2s}\, \Sigma_\nu\, \mathbb{E}_{Y_s \sim p_s}\big[v_s(Y_s)\, v_s(Y_s)^T\big]\Big)\, ds. \tag{42}$$
From equation 42, we can derive the relations between the KL-divergence and the constant β defined earlier.
$$
\begin{aligned}
\int_0^1 \frac{1}{2s} \mathrm{Tr}\big(\Sigma_\nu\, \mathbb{E}_x[v_s(Y_s) v_s(Y_s)^T]\big)\, ds &\le \int_0^1 \frac{1}{2s} \mathrm{Tr}\big(\Sigma_\nu\, \mathbb{E}_x[v_{s,i} v_{s,j}]_{i,j}^d\big)\, ds \\
&\le \int_0^1 \frac{1}{2} \lambda_{\max}(\Sigma_\nu) \sum_{j=1}^d \mathbb{E}\Big[\frac{v_{s,j}^2(Y_s)}{s}\Big]\, ds \le \frac{1}{2} \lambda_{\max}(\Sigma_\nu) \int_0^1 \sum_{j=1}^d \beta\, ds = \frac{1}{2} \lambda_{\max}(\Sigma_\nu)\, d\beta.
\end{aligned} \tag{43}
$$
The second inequality holds based on the following elementary property of symmetric positive-definite matrices:
$$\mathrm{Tr}(AB) \le \|A\|_{\mathrm{op}}\, \mathrm{Tr}(B) = \lambda_{\max}(A)\, \mathrm{Tr}(B), \quad \forall A, B \in \mathrm{Sym}_d^+. \tag{44}$$
It should be noted that because the distribution of ν is compactly supported (i.e., supp(q) is compact), the maximum eigenvalue of the covariance Σν is finite. The other relations are induced by the aforementioned definition. Next, we relate the KL-divergence and 2-Wasserstein distance naturally.
Definition 9. (Talagrand inequality for Gaussian measures, Otto & Villani (2000)) For any nondegenerate Gaussian measure N with a mean 0, the following inequality is satisfied:
$$W_2(\nu, \mathcal{N}) \le \sqrt{2\, \mathrm{KL}(\nu\,|\,\mathcal{N})}, \quad \forall \nu \in P_2(\mathbb{R}^d). \tag{45}$$
By combining Definition 9 and equation 43, we can derive the following expression:
$$W_2(\nu, \mathcal{N}(0, \Sigma_\nu)) \le \sqrt{2\, \mathrm{KL}(\nu\,|\,\mathcal{N}(0, \Sigma_\nu))} \le \sqrt{d\beta\, \lambda_{\max}(\Sigma_\nu)} < \infty. \tag{46}$$
According to the triangle inequality for the 2-Wasserstein distance, we obtain:
W2(ν,N (mν ,Σν)) ≤ W2(ν,N (0,Σν)) +W2(N (mν ,Σν),N (0,Σν)) (47)
In Appendix C.3, we investigated that the geodesic distance between two Gaussian measures having the same covariance is equivalent to the Euclidean distance between two means. Therefore, we can obtain the following equality:
$$W_2(\mathcal{N}(m_\nu, \Sigma_\nu), \mathcal{N}(0, \Sigma_\nu)) = W_2\big(\iota_{m_\nu\#}[\mathcal{N}(0, \Sigma_\nu)], \mathcal{N}(0, \Sigma_\nu)\big) = \|m_\nu - 0\|_2 = \|\mathbb{E}_\nu Y\|_2, \tag{48}$$
where ιa(X) = X + a for any vector a ∈ supp(q). Now, by adding the two inequalities defined earlier, we can obtain
$$W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) \le \|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\, \lambda_{\max}(\Sigma_\nu)}, \tag{49}$$
where it is easily shown that the upper bound depends only on the statistical structure of ν. Specifically, the term ‖E_ν Y‖_2 represents the center of mass of the density of ν, and √(dβ λ_max(Σ_ν)) is related to the covariance structure of ν.
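The shift identity in equation 48 (and the Gaussian 2-Wasserstein distances used above) can be checked directly with the closed-form Bures–Wasserstein formula; the sketch below is purely illustrative, with an arbitrary random covariance standing in for Σν.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m0, S0, m1, S1):
    """Closed-form 2-Wasserstein distance between N(m0, S0) and N(m1, S1)."""
    S0_half = sqrtm(S0)
    cross = sqrtm(S0_half @ S1 @ S0_half)
    bures = np.trace(S0 + S1 - 2 * np.real(cross))
    return np.sqrt(np.sum((m0 - m1) ** 2) + max(bures, 0.0))

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))
Sigma = A @ A.T + np.eye(d)      # an arbitrary covariance, playing the role of Sigma_nu
m_nu = rng.standard_normal(d)    # the mean E_nu[Y]

# Equation 48: with identical covariances the Bures term vanishes, so W2 reduces
# to the Euclidean distance between the two means.
print(w2_gaussian(m_nu, Sigma, np.zeros(d), Sigma))   # ~ ||m_nu||_2
print(np.linalg.norm(m_nu))
```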
By applying Proposition 8 to both F_tµ and ν, we can easily recover equation 5 as follows:
$$
\begin{aligned}
W_2(\nu, F_t\mu) \le \varepsilon &= W_2(\nu, \mathcal{N}(m_\nu, \Sigma_\nu)) + W_2(\mathcal{N}(m_\nu, \Sigma_\nu), F_t\mu) \\
&\le \Big(\big[\|\mathbb{E}_\nu Y\|_2 + \sqrt{d\beta\, \lambda_{\max}(\Sigma_\nu)}\big] \wedge K_2(\nu)\Big) + e^{-t} K_2(\mu) \\
&\le \Big[\sqrt{d\beta\, \lambda_{\max}(\Sigma_\nu)} + \|\mathbb{E}_\nu Y\|_2\Big] \vee \Big[e^{-t} K_2(\mu) + K_2(\nu)\Big].
\end{aligned} \tag{50}
$$
The second inequality is easily obtained as (a ∧ b) + c ≤ a ∨ (b + c) for any a, b, c ≥ 0, which completes the proof.
Proposition 9. (Concentration inequality for uncertain measures). Assume that there are some constants $s^\star \in [\frac{1}{\eta}, \infty)$, η ≥ 0 such that the following inequality is satisfied:
$$\mathbb{E}_{F_{s^\star}\mu}[f^2] - \big[\mathbb{E}_{F_{s^\star}\mu}[f]\big]^2 \le (1 + \eta)\, \mathbb{E}_{F_{s^\star}\mu}[A\, \nabla f^T \nabla f], \tag{51}$$
for $A \in \mathrm{Sym}_d^+$, $D(A, \Sigma_\nu) \le a\eta$ for some a > 0, and for any metric D defined on $\mathrm{Sym}_d^+$. In this case, there is a δ such that the following probability inequality for an uncertain measure is induced:
$$F_{s^\star}\mu\big(|\sigma - \mathbb{E}_\nu[\sigma]| \ge \delta\big) \le 6\, e^{-\frac{\sqrt{2}\,\delta^{3/2}}{K_2}}, \tag{52}$$
where κ denotes the Lipschitz constant of σ.
Proof. Before proceeding with the main proof, we first prove the existence of $s^\star$. The limit of the interval with respect to η converges to a singleton {∞} as $I = \lim_{\eta \to 0} [\frac{1}{\eta}, \infty)$. In this case, equation 51 is the same as the Poincaré inequality for a Gaussian measure $\mathcal{N}_\nu$, which can be written as
$$\lim_{\eta \to 0}\ \mathbb{E}_{F_{s^\star}\mu}[f^2] - \big[\mathbb{E}_{F_{s^\star}\mu}[f]\big]^2 \le \lim_{\eta \to 0}\ (1 + \eta)\, \mathbb{E}_{F_{s^\star}\mu}[A\, \nabla f^T \nabla f] = \mathbb{E}_{F_{s^\star}\mu}[\Sigma_\nu\, \nabla f^T \nabla f]. \tag{53}$$
Since the Poincaré inequality in equation 53 is uniquely defined, we can find at least one value $s^\star$ satisfying equation 51. Let $X(t, w) = X_t(w)$ denote the stochastic process with respect to $q_t(x)$ defined in the proof of Proposition 2. Additionally, let $c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{F_{s^\star}\mu}[\sigma]$. Then, we can obtain the following inequality:
$$
c = \mathbb{E}_\nu[\sigma] - \mathbb{E}_{F_{s^\star}\mu}[\sigma] = \kappa\Big(\mathbb{E}_\nu\big[\tfrac{\sigma}{\kappa}\big] - \mathbb{E}_{F_{s^\star}\mu}\big[\tfrac{\sigma}{\kappa}\big]\Big) \le \kappa \sup_{g \in \mathrm{Lip}_1}\big(\mathbb{E}_\nu g - \mathbb{E}_{F_{s^\star}\mu} g\big) \le \kappa\, W_1(F_{s^\star}\mu, \nu) \le \kappa\, W_2(F_{s^\star}\mu, \nu) \le \frac{\kappa\, K_2(\mu)}{1 + \eta}. \tag{54}
$$
The first inequality is induced by the assumption regarding the κ-Lipschitzness of the function σ, and the second inequality is induced by the Kantorovich–Rubinstein theorem. The third inequality is natural because $W_a(\cdot,\cdot) \le W_b(\cdot,\cdot)$ for any $1 \le a \le b < \infty$. Because equation 51 is equivalent to the Poincaré inequality for the measure $F_{s^\star}\mu$, it satisfies the Bakry–Émery curvature-dimension condition $CD(1 + \eta, \infty)$. Thus, as shown in the proof of Proposition 2 (i.e., equation 39), the last inequality is induced. Additionally, based on the concentration inequality of $F_{s^\star}\mu$ [Proposition 4.4.2, Bakry et al. (2013)], we can derive the following probability inequality:
$$F_{s^\star}\mu\big[\sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\big] \le 3\, e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}, \tag{55}$$
where the Poincaré constant for $F_{s^\star}\mu$ is naturally $1 + \eta$ and $\|\sigma\|_{\mathrm{Lip}} = \kappa$. Next, we will derive the desired form from equation 55. First, we introduce the following inequality.
$$\sigma(X_{s^\star}) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta \ge \mathbb{E}_\nu[\sigma] + \delta - \frac{\kappa}{1 + \eta} K_2. \tag{56}$$
The last inequality is directly induced by equation 54 because $-c \ge -\frac{\kappa}{1+\eta} K_2$. Since η, κ, and $K_2$ are constants with respect to w, the following set inclusion can be obtained naturally:
$$S_1 = \{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\} \supseteq \{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1 + \eta} K_2\} = S_2. \tag{57}$$
For the modified version of the original probability inequality, we evaluate the probability measure $F_{s^\star}\mu[\cdot]$ on the sets $S_1, S_2$, which yields
$$3\, e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}} \ge F_{s^\star}\mu\big(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_{F_{s^\star}\mu}[\sigma] + \delta\}\big) \ge F_{s^\star}\mu\big(\{w : \sigma(X_{s^\star}(w)) \ge \mathbb{E}_\nu[\sigma] + \delta - \tfrac{\kappa}{1 + \eta} K_2\}\big). \tag{58}$$
The concentration inequality around Eν [σ] is obtained by combining the inequalities induced by σ and −σ as follows:
$$
\frac{1}{2} F_{s^\star}\mu\Big(\bigcup_{h \in \{\sigma, -\sigma\}} \{w : h(X_{s^\star}(w)) - \mathbb{E}_\nu[h] \ge \delta - \tfrac{\kappa}{1 + \eta} K_2\}\Big) = F_{s^\star}\mu\Big(\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta - \tfrac{\kappa}{1 + \eta} K_2\}\Big) \le 6\, e^{-\frac{\delta}{\sqrt{1+\eta}\,\kappa}}. \tag{59}
$$
The inequality in equation 59 is the general form containing the relation between the upper bound of the probability and (η, κ, K2). Because this form is quite complicated and highly technical, we choose not to present all the detailed expressions of equation 59 in the main paper. Instead, we rewrite it in a much simpler form for clarity. Specifically, by setting $\kappa K_2/(1 + \eta) = 0.5\delta$ and rescaling δ to 2δ, the aforementioned inequality in equation 59 can be converted into the following simpler form:
$$F_{s^\star}\mu\big(\{w : |\sigma(X_{s^\star}(w)) - \mathbb{E}_\nu[\sigma]| \ge \delta\}\big) \le 6\, e^{-\frac{\sqrt{2}\,\delta^{3/2}}{\kappa K_2}}. \tag{60}$$
Finally, if we set σ = Softmax, then the Lipschitz constant is κ = 1. This proof is completed by setting $s^\star := T$. | 1. What is the main contribution of the paper regarding handling noisy labels in deep learning?
2. What are the strengths and weaknesses of the proposed method compared to other baselines?
3. How does the reviewer assess the clarity and potential errors in the manuscript?
4. What is the intuition behind the proposed approach, and are there any implicit assumptions made?
5. Minor suggestions for improving the paper include a more informative title, using different Greek letters for notation, and clarifying the mapping of the transport map F. | Review | Review
Note: initial score was 4; updated to 5 after taking a brief look at the experimental sections; will check again later to take a closer look at the updated "motivation" part.
0. disclaimer. It is possible that I failed to understand a significant portion of this manuscript; I had a very hard time trying to understand the notations and the writing overall. Please correct me if I am wrong.
1. paper summary. The paper aims to give a better way of handling the noisy label data, by providing a way to better utilize the information given by uncertain (or high-loss) samples, which are often simply filtered out in previous works. The proposed method is as follows: (1) select the "lowest-loss mini-batch" among the certain samples (i.e., low-loss) Y, with some empirical measure ν. (2) Construct an approximate optimal transport map F from the measure of uncertain samples μ to a Gaussian measure $\mathcal{N}(\mathbf{m}_\nu,\Sigma_\nu)$ in the feature space, having the same first and second moments as ν. (3) Minimize the sum of losses on the certain samples and transported uncertain samples. Authors also introduce several techniques to circumvent
2. review summary. The proposed method seems to be advantageous over the considered baselines, but I am not sure if the method would outperform other baselines as well. Also, the clarity of the manuscript is not quite good (in my opinion).
3. missing baselines. A wide range of algorithms has been proposed recently to make use of the estimated-to-be-mislabeled subset of samples (or uncertain sample, as authors put it) data. For instance, the ICLR 2020 paper by Li et al. ("DivideMix: Learning with noisy labels as semi-supervised learning") proposes to drop the labels from the uncertain samples and use semi-supervised learning approaches. The paper also refers to several related approaches, which could/should be compared or discussed against the optimal transport based method proposed by the authors. Other popular baselines would be INCV and the beta-mixture modeling by Arazo et al., which are slightly different in spirit. A recent NeurIPS 2020 paper (which should be considered as a concurrent work) is worth discussing, for the benefit of the readers.
I strongly recommend adding (at least) an empirical comparison to the Li et al., as it shares a larger goal (in the sense that utilizing the information from mislabeled data) yet implemented with a different philosophy. Understanding when/how one approach outperforms the other should be fantastic.
4. clarity and (potential) errors. Some parts of the manuscript were not clear enough to me, or perhaps are written in an overly convoluted manner. Clarifying the following bits may help the readers (and me) follow the content more easily.
Eq. (2) follows the description "Specifically, with a probability 1 − δ, the generalization bound for clean datasets is defined as follows:". However, I am not sure whether it is a "definition of the generalization bound." Rather, it should be one of many generalization bounds that one can prove.
Eq. (3) contains the expression $\mathbb{E}_{X\sim\mu,\hat{r}|T}[l(X;\hat{r})]$. I am not sure if the definition of T appears in the main text. I guess that it is the label transition matrix? Also, on which space is the minimum taken over? Is it over any distribution μ on $\mathbb{R}^d$, or is it more like over any distribution that can be generated by partitioning P? Any vagueness must be removed, as the authors also provide theoretical results.
In definition 1, what distance measure d(⋅,⋅) are authors using? (also, using same d for distance and dimensions adds unnecessary difficulty). If the authors are not using more than two different notions of distances, why don't you just write it directly, like $\mathbb{E}_\mu\|x_0 - x\|_2^2$?
In Eq. (5), again it is not clear what the minimum is being taken over. Is it over all partitioned-distribution μ, ξ, or also over all possible transport maps F?
5. Rationale. I am curious what the general intuition behind the proposed approach is. Why is the "minimum transportation cost" map F a sensible method to transport the high-loss samples? Are we making any implicit assumptions on the nature of label-flipping operations, or perhaps the learning dynamics itself?
minor suggestions.
The title can be more informative. I am not sure if the current title "Wasserstein distributional normalization" gives any information about the task under consideration, or
The notation ϑ looks like the model parameter at a first glance. How about changing it to other greek letters?
In proposition 1, there should be a typo: $F : \mathbb{R}_+ \times P_2$. The mapping F maps to where? |
ICLR | Title
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Abstract
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated. 1
Traditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are difficult to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.
In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited as they have been designed for, and are natural solutions to, such sequential modelling tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice these models are known not to scale well to such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one of the reasons that Oord et al. (2016) profit from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.
In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.2 Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating the amount of computational resources in modeling different levels of abstraction. In particular, we can potentially allocate very limited resource to the module responsible for sample level alignments
1Statistics based on the average speaking rate of a set of TED talk speakers http://sixminutes.dlugan.com/speaking-rate/
2Code https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples https://soundcloud.com/samplernn/sets
operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resources in modeling dependencies which vary very slowly in audio, for example identity of phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.
Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer term dependencies in audio waveforms while training on short sequences which results in memory efficiency during training.
2. We extensively explore and compare variants of models achieving the above effect.
3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation also has been conducted to test these generative models.
2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT } (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
$$p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i) \tag{1}$$
RNNs are commonly used to model sequential data which can be formulated as:
$$h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \tag{2}$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \tag{3}$$
with H being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.
SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.
2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS(k) (“Frame Size”) samples at the kth level up in the hierarchy at a time (frames denoted by f (k)). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward.
The variable number of frames we condition upon up to timestep t−1 is expressed by a fixed length hidden state or memory $h_t^{(k)}$ where t is related to clock rate at that tier. The RNN makes a memory update at timestep t as a function of the previous memory $h_{t-1}^{(k)}$ and an input $\mathrm{inp}_t^{(k)}$. This input for top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of conditioning vector from higher tier and current input frame. See Eqs. 4–5.
Because different modules operate at different temporal resolutions, we need to upsample each vector c at the output of a module into a series of r(k) vectors (where r(k) is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of r(k) separate linear projections.
Here we are formalizing the frame-level module in tier k. Note that following equations are exclusive to tier k and timestep t for that specific tier. To increase the readability, unless necessary, superscript (k) is not shown for t, $\mathrm{inp}^{(k)}$, $W_x^{(k)}$, $h^{(k)}$, $\mathcal{H}^{(k)}$, $W_j^{(k)}$, and $r^{(k)}$.
$$
\mathrm{inp}_t = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}; & 1 < k < K \\ f_t^{(k=K)}; & k = K \end{cases} \tag{4}
$$
$$h_t = \mathcal{H}(h_{t-1}, \mathrm{inp}_t) \tag{5}$$
$$c_{(t-1)*r+j}^{(k)} = W_j h_t; \quad 1 \le j \le r \tag{6}$$
Our approach of upsampling with r(k) linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called “perforated” upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.
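For concreteness, the equivalence can be checked numerically; the sketch below is illustrative only (the dimensions and matrices are arbitrary placeholders, not the configuration used in the experiments): interleaving the outputs of r separate linear projections of each conditioning vector coincides with zero-stuffing the sequence by a factor of r and applying one linear convolution whose taps are those projection matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_out, r = 5, 3, 4, 2             # frames, input dim, output dim, upsampling ratio
h = rng.standard_normal((T, d_in))         # frame-level RNN outputs h_t
W = rng.standard_normal((r, d_out, d_in))  # r separate linear projections W_j

# (a) r separate linear projections, interleaved (Eq. 6)
proj = np.stack([h @ W[j].T for j in range(r)], axis=1).reshape(T * r, d_out)

# (b) "perforated" upsampling: insert zeros, then apply one linear convolution
z = np.zeros((T * r, d_in))
z[::r] = h                                 # zero-stuffing by a factor of r
conv = np.zeros((T * r, d_out))
for j in range(r):                         # convolution filter of length r with taps W_j
    conv[j:] += z[: T * r - j] @ W[j].T

print(np.allclose(proj, conv))             # True: the two constructions coincide
```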
2.2 SAMPLE-LEVEL MODULE
The lowest module (tier k = 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample $x_{i+1}$, conditioned on the FS(1) preceding samples as well as a vector $c_i^{(k=2)}$ from the next higher module which encodes information about the sequence prior to that frame. As FS(1) is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming $e_i$ represents $x_i$ after passing through the embedding layer (section 2.2.1), the conditional distribution in Eq. 1 can be computed as follows; for further clarity, two consecutive sample-level frames are shown. In addition, $W_x$ in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.
$$f_{i-1}^{(1)} = \mathrm{flatten}([e_{i-FS(1)}, \ldots, e_{i-1}]) \tag{7}$$
$$f_i^{(1)} = \mathrm{flatten}([e_{i-FS(1)+1}, \ldots, e_i])$$
$$\mathrm{inp}_i^{(1)} = W_x^{(1)} f_i^{(1)} + c_i^{(2)} \tag{8}$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(\mathrm{inp}_i^{(1)})) \tag{9}$$
We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing
each window of FS(1) samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.
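A minimal sketch of one step of this sample-level tier is given below; it is illustrative only (layer sizes, initialisations and names are placeholders rather than the configuration reported in Section 3): the FS(1) preceding quantized samples are embedded and flattened (Eq. 7), combined linearly with the conditioning vector from the tier above (Eq. 8), and passed through an MLP ending in a q-way softmax (Eq. 9).

```python
import numpy as np

rng = np.random.default_rng(0)
q, emb_dim, FS1, dim = 256, 16, 2, 64                  # illustrative sizes, not the paper's

E = rng.standard_normal((q, emb_dim)) * 0.1            # embedding table for quantized values
W_x = rng.standard_normal((dim, FS1 * emb_dim)) * 0.1  # W_x^{(1)} in Eq. 8
W1 = rng.standard_normal((dim, dim)) * 0.1             # one MLP hidden layer
W_out = rng.standard_normal((q, dim)) * 0.1            # final q-way softmax layer

def sample_level_step(prev_samples, c):
    """prev_samples: the FS(1) preceding quantized samples; c: conditioning vector from tier 2."""
    f = E[prev_samples].reshape(-1)                    # embed and flatten (Eq. 7)
    inp = W_x @ f + c                                  # Eq. 8
    h = np.maximum(0.0, W1 @ inp)                      # ReLU MLP layer
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()                                 # q-way softmax over x_{i+1} (Eq. 9)

probs = sample_level_step(np.array([120, 131]), rng.standard_normal(dim))
print(probs.shape, probs.sum())                        # (256,) 1.0
```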
2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of xi (that is, the output layer of the MLP is a q-way Softmax).
To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian Mixture Models (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sample) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.
In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.
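As an illustration of this preprocessing step (a sketch under the assumption of waveforms rescaled to [−1, 1]; it is not the exact pipeline from the released code), linear 8-bit quantization and the corresponding dequantization can be written as:

```python
import numpy as np

def linear_quantize(x, q=256):
    """Map a waveform in [-1, 1] to integer bins {0, ..., q-1} (8-bit for q=256)."""
    x = np.clip(x, -1.0, 1.0)
    return np.minimum((0.5 * (x + 1.0) * q).astype(np.int64), q - 1)

def linear_dequantize(bins, q=256):
    """Map integer bins back to bin centres in [-1, 1]."""
    return (bins + 0.5) / q * 2.0 - 1.0

wave = np.sin(np.linspace(0, 2 * np.pi, 8))
bins = linear_quantize(wave)
print(bins)                                            # integers in [0, 255], fed to the embedding layer
print(np.abs(wave - linear_dequantize(bins)).max())    # error is at most half a bin width (1/256)
```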
2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with “Multi-Softmax” (see Table 4), where the prediction of each sample xi depends only on the conditioning vector c from Eq. 9. In this configuration, the model outputs an entire frame of FS(1) samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.
2.3 TRUNCATED BPTT
Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.
Table 3 shows that by increasing the subsequence length, performance substantially increases alongside train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures.
Despite the aforementioned fact, this generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2–3.3.) This is due to the fast updates from TBPTT and specialized frame-level modules (Section 2.1), with top tiers designed to model a lower resolution of the signal while leaving the process of filling in the details to lower tiers.
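The training loop described above can be sketched as follows; this is a schematic illustration written with PyTorch and dummy data rather than the authors' implementation, with a single GRU standing in for a full tier: the recurrent state is carried across 512-sample subsequences (statefulness) while gradients are cut at the subsequence boundaries (truncation).

```python
import torch
import torch.nn as nn

# Toy stand-in for one SampleRNN tier: a GRU over (already embedded) inputs, dummy data.
rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
head = nn.Linear(32, 256)                          # q-way output layer
params = list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

long_seq = torch.randn(4, 8 * 512, 16)             # a batch of long training sequences
targets = torch.randint(0, 256, (4, 8 * 512))      # quantized next-sample targets
subseq = 512                                       # truncation length (32 ms at 16 kHz)

h = None                                           # recurrent state, carried across subsequences
for start in range(0, long_seq.size(1), subseq):
    x = long_seq[:, start:start + subseq]
    y = targets[:, start:start + subseq]
    out, h = rnn(x, h)
    loss = nn.functional.cross_entropy(head(out).reshape(-1, 256), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_value_(params, 1.0)   # hard gradient clipping to [-1, 1]
    opt.step()
    h = h.detach()   # keep the hidden state (statefulness) but stop gradients at the boundary
```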
3 EXPERIMENTS AND RESULTS
In this section we are introducing three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and their preprocessing is as follows:
Blizzard which is a dataset presented by Prahallad et al. (2013) for speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%. Onomatopoeia3, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, is human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Diversity of sound type and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%. Music dataset is the collection of all 32 Beethoven’s piano sonatas publicly available on https://archive.org/ amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%.
See Fig. 2 for a visual demonstration of examples from datasets and generated samples. For all the datasets we are using a 16 kHz sample rate and 16 bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8 seconds long sequences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length and corresponding cost values (for the predictions over the added 0s) would be ignored when computing the gradients.
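A possible sketch of this preprocessing and masking logic is shown below (file handling omitted; the names and exact masking mechanics are assumptions for illustration): long recordings are cut into fixed 8-second chunks, and variable-length sequences are zero-padded together with a cost mask so the padded positions do not contribute to the gradient.

```python
import numpy as np

SR = 16000

def chunk(wave, seconds=8):
    """Cut a long recording into non-overlapping fixed-length chunks (Blizzard/Music)."""
    n = seconds * SR
    usable = (len(wave) // n) * n
    return wave[:usable].reshape(-1, n)

def pad_batch(sequences):
    """Zero-pad variable-length sequences (Onomatopoeia) and return a cost mask."""
    max_len = max(len(s) for s in sequences)
    batch = np.zeros((len(sequences), max_len), dtype=np.float32)
    mask = np.zeros((len(sequences), max_len), dtype=np.float32)
    for i, s in enumerate(sequences):
        batch[i, :len(s)] = s
        mask[i, :len(s)] = 1.0        # positions with mask 0 are ignored in the NLL
    return batch, mask

print(chunk(np.zeros(20 * SR)).shape)                  # (2, 128000)
print(pad_batch([np.ones(3), np.ones(5)])[1])
```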
We particularly explored two gated variants of RNNs—GRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.
As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input, binning was applied per audio sample.
All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) (β1 = 0.9, β2 = 0.999, and ε = 1e−8) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized by a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections and other weight matrices were initialized similarly to He et al. (2015). In the final model, we found GRU to work best (slightly better than LSTM). 1024 was the number of hidden units for all GRUs (1 layer per tier for the 3-tier and 3 layers for the 2-tier model) and MLPs (3 fully connected layers with ReLU activation with output dimension being 1024 for the first two layers and 256 for the final layer before softmax). Also FS(1) = FS(2) = 2 and FS(3) = 8 were found to result in the lowest NLL.
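Since all quantitative comparisons below are NLLs in bits per audio sample, it may help to recall that this is just the softmax cross-entropy converted from nats to bits; the helper below is a hypothetical illustration, not part of the released code. A uniform predictor over the 256 bins scores exactly 8 bits per sample, which puts the reported values around 1.4 bits in perspective.

```python
import numpy as np

def nll_bits_per_sample(probs_of_targets):
    """Convert per-sample likelihoods under the 256-way softmax to bits per audio sample."""
    nats = -np.mean(np.log(probs_of_targets))
    return nats / np.log(2.0)

# A uniform predictor over 256 bins gives exactly 8 bits per sample.
print(nll_bits_per_sample(np.full(1000, 1.0 / 256)))   # 8.0
```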
3.1 WAVENET RE-IMPLEMENTATION
We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly but owing to missing details of architecture and hyperparameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,
3Courtesy of Ubisoft
Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set. Subsequence Length 32 64 128 256 512
NLL Validation 1.575 1.468 1.412 1.391 1.364
while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices, e.g. number of convolution filters in each dilated convolution layer, length of target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input or with target sequence length of size T with input of size receptive field + T - 1), batch-size, etc. might make our implementation different from what the authors have done in the original WaveNet model. Hence, we note here that although we did our best at exactly reproducing their results, there would very likely be different choice of hyper-parameters between our implementation and the one of the authors.
For our WaveNet implementation, we have used 4 dilated convolution blocks each having 10 dilated convolution layers with dilation 1, 2, 4, 8 up to 512. Hence, our network has a receptive field of 4092 acoustic samples i.e. the parameters of multinomial distribution of sample at time step t, p(xi) = fθ(xi−1, xi−2, . . . xi−4092) where θ is model parameters. We train on target sequence length of 1600 and use batch size of 8. Each dilated convolution filter has size 2 and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to gated nonlinearity). We trained this model using Adam optimizer with a fixed global learning rate of 0.001 for Blizzard dataset and 0.0001 for Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.
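For reference, the receptive field quoted above follows directly from the dilation schedule; with the usual counting convention the figure comes out as 4093 samples (the ~4092 in the text differs only by an off-by-one convention), i.e. roughly 256 ms at 16 kHz. The snippet below is a simple illustrative calculation, not part of the implementation.

```python
# 4 blocks of 10 dilated conv layers, filter size 2, dilations 1, 2, 4, ..., 512.
blocks, layers, kernel = 4, 10, 2
dilations = [2 ** i for i in range(layers)] * blocks
receptive_field = 1 + sum((kernel - 1) * d for d in dilations)
print(receptive_field, receptive_field / 16000)   # 4093 samples, ~0.256 s
```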
3.2 HUMAN EVALUATION
Apart from reporting NLL, we conducted AB preference tests for random samples from four models trained on the Blizzard dataset. For unconditional generation of speech which at best sounds like mumbling, this type of test is the one which is more suited. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded as the quality of samples were definitely lower and also to keep the number of pair comparison tests manageable. We will release the samples that have been used in this test too.
All the samples were set to have the same volume. Every user is then shown a set of twenty pairs of samples with one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and had the option of choosing between the two model or choosing not to prefer any of them. Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).
Results in Fig. 3 clearly points out that SampleRNN (3-tier) is a winner by a huge margin in terms of preference by human raters, then SampleRNN (2-tier) and afterward two other models, which matches with the performance comparison in Table 1.
The same evaluation was conducted for Music dataset except for an additional filtering process of samples. Specific to only this dataset, we observed that a batch of generated samples from competing models (this time restricted to RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 is dedicated to result of human evaluation on Music dataset.
3.3 QUANTIFYING INFORMATION RETENTION
For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, respectively, with mean fundamental frequency of 125.3 and 201.8Hz. Each speaker has roughly 10 hours of audio in the dataset that has been preprocessed similar to Blizzard. We observed that it learned to stay consistent generating samples from the same speaker without having any knowledge about the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia that sometimes mixes two different categories of sounds.
Another experiment was conducted to test the effect of memory and study the effective memory horizon. We inject 1 second of silence in the middle of sampling procedure in order to see if it will remember to generate from the same speaker or not. Initially when sampling we let the model generate 2 seconds of audio as it normally do. From 2 to 3 seconds instead of feeding back the generated sample at that timestep a silent token (zero amplitude) would be fed. From 3 to 5 seconds again we sample normally; feeding back the generated token.
We did classification based on mean fundamental frequency of speakers for the first and last 2 seconds. In 83% of samples SampleRNN generated from the same person in two separate segments.
This is in contrast to a model with fixed past window like WaveNet where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch which has 50% chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers).
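The speaker-consistency check described above relies only on a mean fundamental-frequency estimate per segment; the sketch below illustrates one crude way such a classifier could look (an autocorrelation-based f0 estimate on a short toy signal, thresholded at the midpoint of the two speakers' mean f0). It is an assumption made for illustration, not the authors' actual procedure.

```python
import numpy as np

SR = 16000

def mean_f0(wave, fmin=80.0, fmax=300.0):
    """Crude fundamental-frequency estimate from the autocorrelation peak (illustrative only)."""
    wave = wave - wave.mean()
    ac = np.correlate(wave, wave, mode="full")[len(wave) - 1:]
    lo, hi = int(SR / fmax), int(SR / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return SR / lag

# Classify a segment as the male (125.3 Hz) or female (201.8 Hz) speaker by thresholding
# the f0 estimate at the midpoint of the two means; a toy 0.25 s tone stands in for audio.
t = np.arange(SR // 4) / SR
segment = np.sign(np.sin(2 * np.pi * 200.0 * t))
speaker = "female" if mean_f0(segment) > (125.3 + 201.8) / 2 else "male"
print(mean_f0(segment), speaker)
```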
4 RELATED WORK
Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.
The idea of having part of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).
Chung et al. (2015) also attempt to model raw audio waveforms which is in contrast to traditional approaches which use spectral features as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009).
Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons, and makes it interesting to compare the effect of adding higher-level RNN stages working at a low resolution. Similar to this work, our models generate one acoustic sample at a time conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike this model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with hierarchical structure and using stateful RNNs, i.e. we will always propagate hidden states to the next training sequence although the gradient of the loss will not take into account the samples in previous training sequence.
5 DISCUSSION AND CONCLUSION
We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which typically has been done until recently with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates will help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters.
Success in this application, with a general-purpose solution as proposed here, opens up room for more improvement when specific domain knowledge is applied. This method, however, proposed with audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure.
ACKNOWLEDGMENTS
The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016)4 and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnologı́a (CONACyT) as well as the Secretarı́a de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.
4http://deeplearning.net/software/theano/
APPENDIX A
A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID
SampleRNN-WaveNet model has two modules operating at two different clock-rate. The slower clock-rate module (frame-level module) sees one frame (each of which has size FS) at a time while the faster clock-rate component(sample-level component) sees one acoustic sample at a time i.e. the ratio of clock-rates for these two modules would be the size of a single frame. Number of sequential steps for frame-level component would be FS times lower. We repeat the output of each step of frame-level component FS times so that number of time-steps for output of both the components match. The output of both these modules are concatenated for every time-step which is further operated by non-linearities for every time-step independently before generating the final output.
In our experiments, we kept size of a single frame (FS) to be 128. We tried two variants of this model: 1. fully convolutional WaveNet and 2. RNN-WaveNet. In fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in original WaveNet model. In RNN-WaveNet, we use high capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in RNN-WaveNet has receptive field of size 509 samples from the past.
Although these models are designed with the intention of combining the two models to harness their best features, preliminary experiments show that this variant is not meeting our expectations at the moment which directs us to a possible future work. | 1. What are the contributions and novelties of the proposed SampleRNN?
2. How does the SampleRNN perform compared to other models, specifically WaveNet and LSTM-RNN?
3. What are the limitations of the paper, particularly in describing the proposed model?
4. Is the comparison between SampleRNN and WaveNet convincing enough?
5. How can the authors improve the paper to provide more detailed descriptions of the proposed model? | Review | Review
The paper proposed a novel SampleRNN to directly model waveform signals and achieved better performance both in terms of objective test NLL and subjective A/B tests.
As mentioned in the discussions, the current status of the paper lacks plenty of details in describing their model. Hopefully, this will be addressed in the final version.
The authors attempted to compare with wavenet model, but they didn't manage to get a model better than the baseline LSTM-RNN, which makes all the comparisons to wavenets less convincing. Hence, instead of wasting time and space comparing to wavenet, detailing the proposed model would be better. |
ICLR | Title
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Abstract
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure is able to capture underlying sources of variations in the temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicate that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated. 1
Traditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are difficult to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.
In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited as they have been designed for, and are natural solutions to, such sequential modelling tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice these models are known not to scale well to such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one of the reasons that Oord et al. (2016) profit from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.
In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented while keeping all the computations tractable.2 Since our model has different modules operating at different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating the amount of computational resources in modeling different levels of abstraction. In particular, we can potentially allocate very limited resource to the module responsible for sample level alignments
1Statistics based on the average speaking rate of a set of TED talk speakers http://sixminutes.dlugan.com/speaking-rate/
2Code https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples https://soundcloud.com/samplernn/sets
operating at the clock-rate equivalent to sample-rate of the audio, while allocating more resources in modeling dependencies which vary very slowly in audio, for example identity of phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.
Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer term dependencies in audio waveforms while training on short sequences which results in memory efficiency during training.
2. We extensively explore and compare variants of models achieving the above effect.
3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation also has been conducted to test these generative models.
2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT } (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
$$p(X) = \prod_{i=0}^{T-1} p(x_{i+1} \mid x_1, \ldots, x_i) \tag{1}$$
RNNs are commonly used to model sequential data which can be formulated as:
$$h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \tag{2}$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \tag{3}$$
with H being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.
SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.
2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS(k) (“Frame Size”) samples at the kth level up in the hierarchy at a time (frames denoted by f (k)). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward.
The variable number of frames we condition upon up to timestep t−1 is expressed by a fixed length hidden state or memory $h_t^{(k)}$ where t is related to clock rate at that tier. The RNN makes a memory update at timestep t as a function of the previous memory $h_{t-1}^{(k)}$ and an input $\mathrm{inp}_t^{(k)}$. This input for top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of conditioning vector from higher tier and current input frame. See Eqs. 4–5.
Because different modules operate at different temporal resolutions, we need to upsample each vector c at the output of a module into a series of r(k) vectors (where r(k) is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of r(k) separate linear projections.
Here we are formalizing the frame-level module in tier k. Note that following equations are exclusive to tier k and timestep t for that specific tier. To increase the readability, unless necessary, superscript (k) is not shown for t, $\mathrm{inp}^{(k)}$, $W_x^{(k)}$, $h^{(k)}$, $\mathcal{H}^{(k)}$, $W_j^{(k)}$, and $r^{(k)}$.
$$
\mathrm{inp}_t = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}; & 1 < k < K \\ f_t^{(k=K)}; & k = K \end{cases} \tag{4}
$$
$$h_t = \mathcal{H}(h_{t-1}, \mathrm{inp}_t) \tag{5}$$
$$c_{(t-1)*r+j}^{(k)} = W_j h_t; \quad 1 \le j \le r \tag{6}$$
Our approach of upsampling with r(k) linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called “perforated” upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.
2.2 SAMPLE-LEVEL MODULE
The lowest module (tier k = 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample $x_{i+1}$, conditioned on the FS(1) preceding samples as well as a vector $c_i^{(k=2)}$ from the next higher module which encodes information about the sequence prior to that frame. As FS(1) is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming $e_i$ represents $x_i$ after passing through the embedding layer (section 2.2.1), the conditional distribution in Eq. 1 can be computed as follows; for further clarity, two consecutive sample-level frames are shown. In addition, $W_x$ in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.
$$f_{i-1}^{(1)} = \mathrm{flatten}([e_{i-FS(1)}, \ldots, e_{i-1}]) \tag{7}$$
$$f_i^{(1)} = \mathrm{flatten}([e_{i-FS(1)+1}, \ldots, e_i])$$
$$\mathrm{inp}_i^{(1)} = W_x^{(1)} f_i^{(1)} + c_i^{(2)} \tag{8}$$
$$p(x_{i+1} \mid x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(\mathrm{inp}_i^{(1)})) \tag{9}$$
We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing
each window of FS(1) samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.
2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of xi (that is, the output layer of the MLP is a q-way Softmax).
To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian Mixture Models (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sample) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.
In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.
2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with “Multi-Softmax” (see Table 4), where the prediction of each sample xi depends only on the conditioning vector c from Eq. 9. In this configuration, the model outputs an entire frame of FS(1) samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.
2.3 TRUNCATED BPTT
Training recurrent neural networks on long sequences can be very computationally expensive. Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent connections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.
Table 3 shows that by increasing the subsequence length, performance substantially increases alongside train-time memory usage and convergence time. Yet it is noteworthy that our best models have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single phoneme of human speech, while generated samples exhibit longer word-like structures.
Despite the aforementioned fact, this generative model can mimic the existing long-term structure of the data, which results in more natural and coherent samples that are preferred by human listeners. (More on this in Sections 3.2–3.3.) This is due to the fast updates from TBPTT and specialized frame-level modules (Section 2.1), with top tiers designed to model a lower resolution of the signal while leaving the process of filling in the details to lower tiers.
3 EXPERIMENTS AND RESULTS
In this section we are introducing three datasets which have been chosen to evaluate the proposed architecture for modeling raw acoustic sequences. The description of each dataset and their preprocessing is as follows:
Blizzard which is a dataset presented by Prahallad et al. (2013) for speech synthesis task, contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%. Onomatopoeia3, a relatively small dataset with 6,738 sequences adding up to 3.5 hours, is human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Diversity of sound type and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%. Music dataset is the collection of all 32 Beethoven’s piano sonatas publicly available on https://archive.org/ amounting to 10 hours of non-vocal audio. The training/validation/test split is 88%-6%-6%.
See Fig. 2 for a visual demonstration of examples from datasets and generated samples. For all the datasets we are using a 16 kHz sample rate and 16 bit depth. For the Blizzard and Music datasets, preprocessing simply amounts to chunking the long audio files into 8 seconds long sequences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is few seconds long, ranging from 1 to 11 seconds. To train the models on this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the same length and corresponding cost values (for the predictions over the added 0s) would be ignored when computing the gradients.
We particularly explored two gated variants of RNNs—GRUs and LSTMs. For the case of LSTMs, the forget gate bias is initialized with a large positive value of 3, as recommended by Zaremba (2015) and Gers (2001), which has been shown to be beneficial for learning long-term dependencies.
As for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input, binning was applied per audio sample.
All the models have been trained with teacher forcing and stochastic gradient descent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gradients were hard-clipped to remain in the [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba, 2014) (β1 = 0.9, β2 = 0.999, and ϵ = 1e−8) with an initial learning rate of 0.001 were used to adjust the parameters. For training each model, random search over hyper-parameter values (Bergstra & Bengio, 2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma, 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. The size of the embedding layer was 256, initialized from a standard normal distribution. Orthogonal weight matrices were used for hidden-to-hidden connections, and other weight matrices were initialized similarly to He et al. (2015). In the final model, we found GRUs to work best (slightly better than LSTMs). The number of hidden units was 1024 for all GRUs (1 layer per tier for the 3-tier model and 3 layers for the 2-tier model) and MLPs (3 fully connected layers with ReLU activation, with output dimension 1024 for the first two layers and 256 for the final layer before the softmax). Also, FS(1) = FS(2) = 2 and FS(3) = 8 were found to result in the lowest NLL.
3.1 WAVENET RE-IMPLEMENTATION
We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly but owing to missing details of architecture and hyperparameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,
3Courtesy of Ubisoft
Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.
Subsequence Length | 32    | 64    | 128   | 256   | 512
NLL Validation     | 1.575 | 1.468 | 1.412 | 1.391 | 1.364
while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, design choices such as the number of convolution filters in each dilated convolution layer, the length of the target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input, or with a target sequence of length T and an input of size receptive field + T - 1), the batch size, etc. might make our implementation different from what the authors did in the original WaveNet model. Hence, we note here that although we did our best at reproducing their results, there are very likely differences in the choice of hyper-parameters between our implementation and that of the authors.
For our WaveNet implementation, we have used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilations 1, 2, 4, 8, up to 512. Hence, our network has a receptive field of 4092 acoustic samples, i.e., the parameters of the multinomial distribution of the sample at time step i are p(x_i) = f_θ(x_{i−1}, x_{i−2}, . . . , x_{i−4092}), where θ denotes the model parameters. We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2 and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated nonlinearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X. We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.
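As a quick sanity check of the receptive-field arithmetic above, the short Python snippet below computes the receptive field of stacked dilated-convolution blocks; it is an illustrative calculation rather than part of either implementation.

```python
def receptive_field(num_blocks=4, layers_per_block=10, filter_size=2):
    """Receptive field (in samples) of stacked blocks of dilated convolutions
    with dilations 1, 2, 4, ..., 2**(layers_per_block - 1)."""
    per_block = sum((filter_size - 1) * 2 ** i for i in range(layers_per_block))
    return num_blocks * per_block + 1

print(receptive_field())  # ~4093 samples, i.e. roughly 250 ms at 16 kHz
```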
3.2 HUMAN EVALUATION
Apart from reporting NLL, we conducted AB preference tests on random samples from four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the most suitable. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded because the quality of their samples was clearly lower, and also to keep the number of pairwise comparison tests manageable. We will also release the samples that have been used in this test.
All the samples were set to have the same volume. Each user was then shown a set of twenty pairs of samples, one random pair at a time. Each pair had samples from two different models. The human evaluator was asked to listen to the samples and had the option of choosing between the two models or choosing not to prefer either of them. Hence, we obtain a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).
The results in Fig. 3 clearly show that SampleRNN (3-tier) is the winner by a huge margin in terms of preference by human raters, followed by SampleRNN (2-tier) and then the two other models, which matches the performance comparison in Table 1.
The same evaluation was conducted for the Music dataset, except for an additional filtering process of the samples. Specific to this dataset only, we observed that a batch of generated samples from the competing models (this time restricted to the RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) was either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 shows the results of the human evaluation on the Music dataset.
3.3 QUANTIFYING INFORMATION RETENTION
For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with the best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, with mean fundamental frequencies of 125.3 and 201.8 Hz, respectively. Each speaker has roughly 10 hours of audio in the dataset, which has been preprocessed similarly to Blizzard. We observed that the model learned to stay consistent, generating samples from the same speaker without having any knowledge about the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia dataset, where the model sometimes mixes two different categories of sounds.
Another experiment was conducted to test the effect of memory and to study the effective memory horizon. We inject 1 second of silence in the middle of the sampling procedure in order to see whether the model will remember to generate from the same speaker or not. Initially, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.
We did classification based on mean fundamental frequency of speakers for the first and last 2 seconds. In 83% of samples SampleRNN generated from the same person in two separate segments.
This is in contrast to a model with a fixed past window, like WaveNet, where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which gives a 50% chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers).
4 RELATED WORK
Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.
The idea of having part of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).
Chung et al. (2015) also attempt to model raw audio waveforms which is in contrast to traditional approaches which use spectral features as in Tokuda et al. (2013), Bertrand et al. (2008), and Lee et al. (2009).
Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons; this also makes it interesting to compare the effect of adding higher-level RNN stages working at a lower resolution. Similar to this work, our models generate one acoustic sample at a time conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike this model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with a hierarchical structure and stateful RNNs, i.e., we always propagate hidden states to the next training sequence, although the gradient of the loss does not take into account the samples in the previous training sequence.
5 DISCUSSION AND CONCLUSION
We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which typically has been done until recently with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates will help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters.
Success in this application, with a general-purpose solution as proposed here, opens up room for more improvement when specific domain knowledge is applied. This method, however, proposed with audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure.
ACKNOWLEDGMENTS
The authors would like to thank João Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016)4 and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Québec, Compute Canada, the Canada Research Chairs and CIFAR. Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnologı́a (CONACyT) as well as the Secretarı́a de Educación Pública (SEP) for their support. This work was a collaboration with Ubisoft.
4http://deeplearning.net/software/theano/
APPENDIX A
A MODEL VARIANT: SAMPLERNN-WAVENET HYBRID
The SampleRNN-WaveNet model has two modules operating at two different clock-rates. The slower clock-rate module (frame-level module) sees one frame (each of which has size FS) at a time, while the faster clock-rate component (sample-level component) sees one acoustic sample at a time, i.e., the ratio of clock-rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is FS times lower. We repeat the output of each step of the frame-level component FS times so that the numbers of time-steps of the outputs of both components match. The outputs of these two modules are concatenated at every time-step, and then operated on by non-linearities for every time-step independently before generating the final output.
In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. a fully convolutional WaveNet and 2. an RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in the original WaveNet model. In the RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in the RNN-WaveNet has a receptive field of 509 samples from the past.
Although these models are designed with the intention of combining the two approaches to harness their best features, preliminary experiments show that this variant does not yet meet our expectations, which points to possible future work. | 1. What is the main contribution of the paper, and how does it compare to previous works?
2. What are the strengths and weaknesses of the proposed SampleRNN architecture?
3. How does the author evaluate the performance of the model, and what are the results?
4. What is the significance of the subsequence length used for truncated BPTT, and how does it affect performance?
5. Why did the authors choose to use upsampling with r separate linear projections, and what are the advantages or disadvantages of this method?
6. How does the proposed model handle long-range temporal correlations, and why does the 2-tier SampleRNN outperform the 3-tier model in certain cases?
7. What are the limitations of the Wavenet reimplementation, and how does it compare to the original model?
8. Are there any other methods or techniques that could have been used to improve the performance of the model? | Review | Review
The paper introduces SampleRNN, a hierarchical recurrent neural network model of raw audio. The model is trained end-to-end and evaluated using log-likelihood and by human judgement of unconditional samples, on three different datasets covering speech and music. This evaluation shows the proposed model to compare favourably to the baselines.
It is shown that the subsequence length used for truncated BPTT affects performance significantly, but interestingly, a subsequence length of 512 samples (~32 ms) is sufficient to get good results, even though the features of the data that are modelled span much longer timescales. This is an interesting and somewhat unintuitive result that I think warrants a bit more discussion.
The authors have attempted to reimplement WaveNet, an alternative model of raw audio that is fully convolutional. They were unable to reproduce the exact model architecture from the original paper, but have attempted to build an instance of the model with a receptive field of about 250ms that could be trained in a reasonable time using their computational resources, which is commendable.
The architecture of the Wavenet model is described in detail, but I found it challenging to find the same details for the proposed SampleRNN architecture (e.g. which value of "r" is used for the different tiers, how many units per layer, ...). I think a comparison in terms of computational cost, training time and number of parameters would also be very informative.
Surprisingly, Table 1 shows a vanilla RNN (LSTM) substantially outperforming this model in terms of likelihood, which is quite suspicious as LSTMs tend to have effective receptive fields of a few hundred timesteps at best. One would expect the much larger receptive field of the Wavenet model to be reflected in the likelihood scores to some extent. Similarly, Figure 3 shows the vanilla RNN outperforming the Wavenet reimplementation in human evaluation on the Blizzard dataset. This raises questions about the implementation of the latter. Some discussion about this result and whether the authors expected it or not would be very welcome.
Table 1 and Figure 4 also show the 2-tier SampleRNN outperforming the 3-tier model in terms of likelihood and human rating respectively, which is very counterintuitive as one would expect longer-range temporal correlations to be even more relevant for music than for speech. This is not discussed at all, I think it would be useful to comment on why this could be happening.
Overall, this is an interesting attempt to tackle modelling very long sequences with long-range temporal correlations and the results are quite convincing, even if the same can't always be said of the comparison with the baselines. It would be interesting to see how the model performs for conditional generation, seeing as it can then be more easily compared objectively to models like Wavenet in that domain.
Other remarks:
- upsampling the output of the models is done with r separate linear projections. This choice of upsampling method is not motivated. Why not just use linear interpolation or nearest neighbour upsampling? What is the advantage of learning this operation? Don't the r linear projections end up learning largely the same thing, give or take some noise?
- The third paragraph of Section 2.1.1 indicates that 8-bit linear PCM was used. This is in contrast to Wavenet, for which an 8-bit mu-law encoding was used, and this supposedly improves the audio fidelity of the samples. Did you try this as well?
- Section 2.1 mentions the discretisation of the input and the use of a softmax to model this discretised input, without any reference to prior work that made the same observation. A reference is given in 2.1.1, but it should probably be moved up a bit to avoid giving the impression that this is a novel observation. |
ICLR | Title
SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Abstract
In this paper we propose a novel model for unconditional audio generation based on generating one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely autoregressive multilayer perceptrons, and stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in temporal sequences over very long time spans, on three datasets of different nature. Human evaluation on the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
1 INTRODUCTION
Audio generation is a challenging task at the core of many problems of interest, such as text-to-speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated. 1
Traditionally, the high dimensionality of the raw audio signal is dealt with by first compressing it into spectral or hand-engineered features and defining the generative model over these features. However, when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in complicated signal processing pipelines that are difficult to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems.
In this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are a natural fit as they have been designed for, and are well suited to, such sequential modeling tasks (see Graves (2013), Karpathy (2015), and Siegelmann (1999)). However, in practice these models are known not to scale well to the very high temporal resolution encountered when generating acoustic signals one sample at a time, e.g., 16,000 times per second. This is one of the reasons that Oord et al. (2016) profit from other neural modules, such as the one presented by Yu & Koltun (2015), to show extremely good performance.
In this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented, while keeping all the computations tractable.2 Since our model has different modules operating at different clock-rates (in contrast to WaveNet), we have the flexibility of allocating computational resources to modeling different levels of abstraction. In particular, we can potentially allocate very limited resources to the module responsible for sample-level alignments,
1Statistics based on the average speaking rate of a set of TED talk speakers http://sixminutes. dlugan.com/speaking-rate/
2Code https://github.com/soroushmehr/sampleRNN_ICLR2017 and samples https:// soundcloud.com/samplernn/sets
operating at a clock-rate equivalent to the sample rate of the audio, while allocating more resources to modeling dependencies that vary very slowly in audio, for example the identity of the phoneme being spoken. This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.
Hence, our contribution is threefold:
1. We present a novel method that utilizes RNNs at different scales to model longer term dependencies in audio waveforms while training on short sequences which results in memory efficiency during training.
2. We extensively explore and compare variants of models achieving the above effect.
3. We study and empirically evaluate the impact of different components of our model on three audio datasets. Human evaluation has also been conducted to test these generative models.
2 SAMPLERNN MODEL
In this paper we propose SampleRNN (shown in Fig. 1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2, . . . , xT } (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:
p(X) = \prod_{i=0}^{T-1} p(x_{i+1} | x_1, \ldots, x_i) \quad (1)
RNNs are commonly used to model sequential data which can be formulated as:
h_t = \mathcal{H}(h_{t-1}, x_{i=t}) \quad (2)
p(x_{i+1} | x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(h_t)) \quad (3)
with H being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al., 2014), Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber, 1997), or their deep variations (Section 3). However, raw audio signals are challenging to model because they contain structure at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.
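For concreteness, a minimal sketch of this plain sample-level RNN baseline (Eqs. 2–3) is given below: a GRU over embedded quantized samples followed by an MLP and a q-way softmax. All layer sizes are placeholder assumptions.

```python
import torch.nn as nn

class SampleLevelRNN(nn.Module):
    """Baseline of Eqs. 2-3: h_t = H(h_{t-1}, x_t), p(x_{t+1} | x_{<=t}) = Softmax(MLP(h_t))."""
    def __init__(self, q=256, emb=256, hidden=1024):
        super().__init__()
        self.embed = nn.Embedding(q, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, q))

    def forward(self, x, h=None):          # x: (batch, time) of quantized samples
        out, h = self.rnn(self.embed(x), h)
        return self.mlp(out), h            # logits over the q quantization levels
```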
SampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher module operates on an increasingly longer timescale and a lower temporal resolution. Each module conditions the module below it, with the lowest module outputting sample-level predictions. The entire hierarchy is trained jointly end-to-end by backpropagation.
2.1 FRAME-LEVEL MODULES
Rather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS(k) (“Frame Size”) samples at the kth level up in the hierarchy at a time (frames denoted by f (k)). Each frame-level module is a deep RNN which summarizes the history of its inputs into a conditioning vector for the next module downward.
The variable number of frames we condition upon, up to timestep t−1, is expressed by a fixed-length hidden state or memory h_t^{(k)}, where t is related to the clock rate at that tier. The RNN makes a memory update at timestep t as a function of the previous memory h_{t−1}^{(k)} and an input inp_t^{(k)}. This input for the top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of the conditioning vector from the higher tier and the current input frame. See Eqs. 4–5.
Because different modules operate at different temporal resolutions, we need to upsample each vector c at the output of a module into a series of r(k) vectors (where r(k) is the ratio between the temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq. 6). We do this with a set of r(k) separate linear projections.
Here we formalize the frame-level module at tier k. Note that the following equations are specific to tier k and to timestep t of that tier. To increase readability, unless necessary, the superscript (k) is not shown for t, inp^{(k)}, W_x^{(k)}, h^{(k)}, \mathcal{H}^{(k)}, W_j^{(k)}, and r^{(k)}.
inp_t = \begin{cases} W_x f_t^{(k)} + c_t^{(k+1)}; & 1 < k < K \\ f_t^{(k=K)}; & k = K \end{cases} \quad (4)
h_t = \mathcal{H}(h_{t-1}, inp_t) \quad (5)
c_{(t-1) \cdot r + j}^{(k)} = W_j h_t; \quad 1 \le j \le r \quad (6)
Our approach of upsampling with r(k) linear projections is exactly equivalent to upsampling by adding zeros and then applying a linear convolution. This is sometimes called “perforated” upsampling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in Dosovitskiy et al. (2016) and is a fairly common upsampling technique.
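To illustrate Eq. 6, the sketch below maps each conditioning vector h_t to r output vectors using r linear projections stacked into a single weight matrix; this is one way to render the described operation, not the authors' exact code.

```python
import torch.nn as nn

class LearnedUpsample(nn.Module):
    """Eq. 6: c_{(t-1)*r + j} = W_j h_t for j = 1..r; equivalent to zero-stuffing
    followed by a linear convolution ("perforated" upsampling)."""
    def __init__(self, dim, r):
        super().__init__()
        self.r = r
        self.proj = nn.Linear(dim, r * dim)      # W_1, ..., W_r stacked row-wise

    def forward(self, h):                        # h: (batch, T, dim)
        batch, T, dim = h.shape
        c = self.proj(h)                         # (batch, T, r * dim)
        return c.view(batch, T * self.r, dim)    # (batch, T * r, dim)
```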
2.2 SAMPLE-LEVEL MODULE
The lowest module (tier k = 1; Eqs. 7–9) in the SampleRNN hierarchy outputs a distribution over a sample x_{i+1}, conditioned on the FS(1) preceding samples as well as a vector c_i^{(k=2)} from the next higher module which encodes information about the sequence prior to that frame. As FS(1) is usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than an RNN, which slightly speeds up the training. Assuming e_i represents x_i after passing through the embedding layer (Section 2.2.1), the conditional distribution in Eq. 1 can be obtained as follows; for further clarity, two consecutive sample-level frames are shown. In addition, W_x in Eq. 8 is simply used to linearly combine a frame and the conditioning vector from above.
f_{i-1}^{(1)} = \mathrm{flatten}([e_{i-FS^{(1)}}, \ldots, e_{i-1}]) \quad (7)
f_i^{(1)} = \mathrm{flatten}([e_{i-FS^{(1)}+1}, \ldots, e_i])
inp_i^{(1)} = W_x^{(1)} f_i^{(1)} + c_i^{(2)} \quad (8)
p(x_{i+1} | x_1, \ldots, x_i) = \mathrm{Softmax}(\mathrm{MLP}(inp_i^{(1)})) \quad (9)
We use a Softmax because we found that better results were obtained by discretizing the audio signals (also see van den Oord et al. (2016)) and outputting a Multinoulli distribution rather than using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing
each window of FS(1) samples and predicting the next sample. At generation time, the MLP is run repeatedly to generate one sample at a time. Table 1 shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of SampleRNN makes a big difference.
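A compact sketch of this sample-level module (Eqs. 7–9) is shown below: the FS(1) most recent embedded samples are flattened, combined linearly with the conditioning vector from the tier above, and passed through an MLP with a q-way softmax. All layer sizes are placeholder assumptions.

```python
import torch.nn as nn

class SampleLevelMLP(nn.Module):
    """Eqs. 7-9: p(x_{i+1} | ...) = Softmax(MLP(W_x flatten(e_{i-FS+1..i}) + c_i))."""
    def __init__(self, q=256, emb=256, frame_size=2, dim=1024):
        super().__init__()
        self.embed = nn.Embedding(q, emb)
        self.w_x = nn.Linear(frame_size * emb, dim)             # Eq. 8
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, q))             # Eq. 9

    def forward(self, prev_samples, c):    # prev_samples: (batch, FS), c: (batch, dim)
        f = self.embed(prev_samples).flatten(1)                 # Eq. 7 (flattened embeddings)
        return self.mlp(self.w_x(f) + c)                        # logits over q quantized values
```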
2.2.1 OUTPUT QUANTIZATION
The sample-level module models its output as a q-way discrete distribution over possible quantized values of xi (that is, the output layer of the MLP is a q-way Softmax).
To demonstrate the importance of a discrete output distribution, we apply the same architecture on real-valued data by replacing the q-way Softmax with a Gaussian mixture model (GMM) output distribution. Table 2 shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable from random noise.
In this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8. Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sample) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.
In addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see Table 4). The embedding step maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.
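The sketch below shows one straightforward realization of the linear quantization to q = 256 levels together with the embedding lookup applied before the sample-level MLP; the assumed input range of [-1, 1] and the embedding size are illustrative choices.

```python
import torch

Q = 256

def linear_quantize(x, low=-1.0, high=1.0):
    """Map real-valued samples in [low, high] to integer bins {0, ..., Q-1}."""
    x = x.clamp(low, high)
    return (((x - low) / (high - low)) * (Q - 1)).round().long()

def dequantize(bins, low=-1.0, high=1.0):
    return bins.float() / (Q - 1) * (high - low) + low

embed = torch.nn.Embedding(Q, 256)                      # learned vector per quantization level
e = embed(linear_quantize(torch.rand(4, 512) * 2 - 1))  # (4, 512, 256) embedded samples
```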
2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS
To demonstrate the importance of a sample-level autoregressive module, we try replacing it with “Multi-Softmax” (see Table 4), where the prediction of each sample xi depends only on the conditioning vector c from Eq. 9. In this configuration, the model outputs an entire frame of FS(1) samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores significantly worse in terms of log-likelihood and fails to generate convincing samples. This suggests that modeling the joint distribution of the acoustic samples inside each frame is very important in order to obtain good acoustic generation. We found this to be true even when the frame size is reduced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.
1. What is the main contribution of the paper in the field of audio generation?
2. What are the strengths of the proposed approach, particularly in comparison to WaveNet?
3. What are the weaknesses of the paper, especially regarding the modeling choices and experimental setup?
4. How does the reviewer assess the novelty and significance of the proposed method?
5. Are there any concerns regarding the comparisons made with WaveNet? | Review | Review
Pros:
The authors are presenting an RNN-based alternative to wavenet, for generating audio a sample at a time.
RNNs are a natural candidate for this task so this is an interesting alternative. Furthermore the authors claim to make a significant improvement in the quality of the produced samples.
Another novelty here is that they use a quantitative likelihood-based measure to assess the model, in addition to the AB human comparisons used in the wavenet work.
Cons:
The paper is lacking equations that detail the model. This can be remedied in the camera-ready version.
The paper is lacking detailed explanations of the modeling choices:
- It's not clear why an MLP is used in the bottom layer instead of (another) RNN.
- It's not clear why r linear projections are used for up-sampling, instead of feeding the same state to all r samples, or use a more powerful type of transformation.
As the authors admit, their wavenet implementation is probably not as good as the original one, which makes the comparisons questionable.
Despite the cons and given that more modeling details are provided, I think this paper will be a valuable contribution. |
ICLR | Title
Variational Latent Branching Model for Off-Policy Evaluation
Abstract
Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to roll out simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover only a limited part of the state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data as possible, by smoothing out the information flow between the variational (encoding) and generative (decoding) parts of VLBM. Moreover, we also introduce the branching architecture to improve the model’s robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, in which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general.
1 INTRODUCTION
Off-policy evaluation (OPE) allows for evaluation of reinforcement learning (RL) policies without online interactions. It is applicable to many domains where on-policy data collection could be prevented due to efficiency and safety concerns, e.g., healthcare (Gao et al., 2022c;a; Tang & Wiens, 2021), recommendation systems (Mehrotra et al., 2018; Li et al., 2011), education (Mandel et al., 2014), social science (Segal et al., 2018) and optimal control (Silver et al., 2016; Vinyals et al., 2019; Gao et al., 2020a; 2019; 2020b). Recently, as reported in the deep OPE (DOPE) benchmark (Fu et al., 2020b), model-based OPE methods, leveraging feed-forward (Fu et al., 2020b) and auto-regressive (AR) (Zhang et al., 2020a) architectures, have shown promising results toward estimating the return of target policies, by fitting transition functions of MDPs. However, model-based OPE methods remain challenged as they can only be trained using offline trajectory data, which often offers limited coverage of state and action space. Thus, they may perform sub-optimally on tasks where parts of the dynamics are not fully explored (Fu et al., 2020b). Moreover, different initialization of the model weights could lead to varied evaluation performance (Hanin & Rolnick, 2018; Rossi et al., 2019), reducing the robustness of downstream OPE estimations. Some approaches in RL policy optimization literature use latent models trained to capture a compact space from which the dynamics underlying MDPs are extrapolated; this allows learning expressive representations over the state-action space. However, such approaches usually require online data collections as the focus is on quickly navigating to the high-reward regions (Rybkin et al., 2021), as well as on improving coverage of the explored state and action space (Zhang et al., 2019; Hafner et al., 2019; 2020a) or sample efficiency (Lee et al., 2020).
In this work, we propose the variational latent branching model (VLBM), aiming to learn a compact and disentangled latent representation space from offline trajectories, which can better capture the
∗Duke University, USA. Emails: {qitong.gao, miroslav.pajic}@duke.edu. †North Carolina State University, USA. Emails: {ggao5, mchi}@ncsu.edu
Code available at https://github.com/gaoqitong/vlbm.
dynamics underlying environments. VLBM enriches the architectures and optimization objectives for existing latent modeling frameworks, allowing them to learn from a fixed set of offline trajectories. Specifically, VLBM considers learning variational (encoding) and generative (decoding) distributions, both represented by long short-term memories (LSTMs) with reparameterization (Kingma & Welling, 2013), to encode the state-action pairs and enforce the transitions over the latent space, respectively. To train such models, we optimize over the evidence lower bound (ELBO) jointly with a recurrent state alignment (RSA) term defined over the LSTM states; this ensures that the information encoded into the latent space can be effectively teased out by the decoder. Then, we introduce the branching architecture that allows for multiple decoders to jointly infer from the latent space and reach a consensus, from which the next state and reward are generated. This is designed to mitigate the side effects of model-based methods where different weight initializations could lead to varied performance (Fu et al., 2020b; Hanin & Rolnick, 2018; Rossi et al., 2019).
We focus on using the VLBM to facilitate OPE since it allows us to better distinguish the improvements made upon learning the dynamics underlying the MDP used for estimating policy returns, as opposed to RL training, where performance can be affected by multiple factors, e.g., techniques used for exploration and policy optimization. Moreover, model-based OPE methods are helpful for evaluating the safety and efficacy of RL-based controllers before deployment in the real world (Gao et al., 2022b), e.g., how a surgical robot would react to states that are critical to a successful procedure. The key contributions of this paper are summarized as follows: (i) to the best of our knowledge, the VLBM is the first method that leverages variational inference for OPE. It can be trained using offline trajectories and capture environment dynamics over a latent space, as well as estimate returns of target (evaluation) policies accurately. (ii) The design of the RSA loss term and the branching architecture can effectively smooth the information flow in the latent space shared by the encoder and decoder, increasing the expressiveness and robustness of the model. This is empirically shown in experiments by comparing with ablation baselines. (iii) Our method generally outperforms existing model-based and model-free OPE methods for evaluating policies over various D4RL environments (Fu et al., 2020a). Specifically, we follow the guidelines provided by the DOPE benchmark (Fu et al., 2020b), which contains challenging OPE tasks where the training trajectories include varying levels of coverage of the state-action space, and target policies are designed toward resulting in state-action distributions different from the ones induced by behavioral policies.
2 VARIATIONAL LATENT BRANCHING MODEL
In this section, we first introduce the objective of OPE and the variational latent model (VLM) we consider. Then, we propose the recurrent state alignment (RSA) term as well as the branching architecture that constitute the variational latent branching model (VLBM).
2.1 OPE OBJECTIVE
We first introduce the MDP used to characterize the environment. Specifically, an MDP can be defined as a tuple M = (S,A,P, R, s0, γ), where S is the set of states, A the set of actions, P : S × A → S is the transition distribution usually captured by probabilities p(st|st−1, at−1), R : S ×A → R is the reward function, s0 is the initial state sampled from the initial state distribution p(s0), γ ∈ [0, 1) is the discounting factor. Finally, the agent interacts with the MDP following some policy π(a|s) which defines the probabilities of taking action a at state s. Then, the goal of OPE can be formulated as follows. Given trajectories collected by a behavioral policy β, ρβ = {[(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT )](0), [(s0, a0, r0, s1), . . . ](1), . . . |at ∼ β(at|st)}1, estimate the expected total return over the unknown state-action visitation distribution ρπ of the target (evaluation) policy π – i.e., for T being the horizon,
\mathbb{E}_{(s,a) \sim \rho^{\pi}, r \sim R} \left[ \sum_{t=0}^{T} \gamma^t R(s_t, a_t) \right]. \quad (1)
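To make the estimator explicit, the following minimal sketch shows how Eq. 1 is typically approximated with a learned model: the target policy is rolled out inside the fitted dynamics model and the discounted returns are averaged. The model and policy interfaces here are assumptions for illustration only.

```python
def estimate_return(model, policy, num_rollouts=50, horizon=1000, gamma=0.995):
    """Monte-Carlo estimate of Eq. 1 using trajectories simulated by a learned model."""
    total = 0.0
    for _ in range(num_rollouts):
        s = model.reset()                  # s_0 sampled from the learned initial-state model
        ret, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy.act(s)              # a_t ~ pi(a_t | s_t)
            s, r = model.step(s, a)        # next state and reward from the learned dynamics
            ret += discount * r
            discount *= gamma
        total += ret
    return total / num_rollouts
```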
2.2 VARIATIONAL LATENT MODEL
We consider the VLM consisting of a prior p(z) over the latent variables z ∈ Z ⊂ Rl, with Z representing the latent space and l the dimension, along with a variational encoder qψ(zt|zt−1, at−1, st)
1We slightly abuse the notation ρβ , to represent either the trajectories or state-action visitation distribution under the behavioral policy, depending on the context.
and a generative decoder pϕ(zt, st, rt−1|zt−1, at−1), parameterized by ψ and ϕ respectively. Basics of variational inference are introduced in Appendix F.
Latent Prior p(z0). The prior specifies the distribution from which the latent variable of the initial stage, z0, is sampled. We configure p(z0) to follow a Gaussian with zero mean and identity covariance matrix, which is a common choice under the variational inference framework (Kingma & Welling, 2013; Lee et al., 2020).
Variational Encoder for Inference q_ψ(z_t | z_{t−1}, a_{t−1}, s_t). The encoder is used to approximate the intractable posterior, p(z_t | z_{t−1}, a_{t−1}, s_t) = p(z_{t−1}, a_{t−1}, z_t, s_t) / \int_{z_t \in \mathcal{Z}} p(z_{t−1}, a_{t−1}, z_t, s_t) \, dz_t, where the denominator requires integrating over the unknown latent space. Specifically, the encoder can be decomposed into two parts, given that
q_ψ(z_{0:T} | s_{0:T}, a_{0:T−1}) = q_ψ(z_0 | s_0) \prod_{t=1}^{T} q_ψ(z_t | z_{t−1}, a_{t−1}, s_t); \quad (2)
here, q_ψ(z_0 | s_0) encodes the initial state s_0 into the corresponding latent variable z_0; then, q_ψ(z_t | z_{t−1}, a_{t−1}, s_t) enforces the transition from z_{t−1} to z_t conditioned on a_{t−1} and s_t. Both distributions are diagonal Gaussians2, with means and diagonals of the covariance matrices determined by a multi-layered perceptron (MLP) (Bishop, 2006) and a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), respectively. The weights of both neural networks are referred to as ψ in general.
Consequently, the inference process for zt can be summarized as
z_0^ψ \sim q_ψ(z_0 | s_0), \quad h_t^ψ = f_ψ(h_{t−1}^ψ, z_{t−1}^ψ, a_{t−1}, s_t), \quad z_t^ψ \sim q_ψ(z_t | h_t^ψ), \quad (3)
where f_ψ represents the LSTM layer and h_t^ψ the LSTM recurrent (hidden) state. Note that we use ψ in superscripts to distinguish the variables involved in this inference process from those of the generative process introduced below. Moreover, reparameterization can be used to sample z_0^ψ and z_t^ψ, such that gradients of sampling can be back-propagated, as introduced in (Kingma & Welling, 2013). An overview of the inference and generative processes is illustrated in Fig. 1.
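A minimal sketch of the inference process in Eq. 3 is given below: an LSTM cell produces the parameters of the diagonal Gaussian q_ψ(z_t | h_t^ψ), and z_t is drawn with the reparameterization trick. Layer sizes and module names are assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class VariationalEncoder(nn.Module):
    def __init__(self, s_dim, a_dim, z_dim, hidden=128):
        super().__init__()
        self.init_net = nn.Linear(s_dim, 2 * z_dim)              # parameters of q(z_0 | s_0)
        self.lstm = nn.LSTMCell(z_dim + a_dim + s_dim, hidden)   # f_psi
        self.head = nn.Linear(hidden, 2 * z_dim)                 # parameters of q(z_t | h_t)

    @staticmethod
    def sample(stats):
        mu, log_std = stats.chunk(2, dim=-1)
        return mu + log_std.exp() * torch.randn_like(mu)         # reparameterization trick

    def init(self, s0):
        return self.sample(self.init_net(s0))                    # z_0 ~ q(z_0 | s_0)

    def step(self, z_prev, a_prev, s_t, hc=None):
        hc = self.lstm(torch.cat([z_prev, a_prev, s_t], dim=-1), hc)
        return self.sample(self.head(hc[0])), hc                 # z_t and the new LSTM state
```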
Generative Decoder for Sampling pϕ(z_t, s_t, r_{t−1}|z_{t−1}, a_{t−1}). The decoder is used to interact with the target policies and acts as a synthetic environment during policy evaluation, from which the expected returns can be estimated as the mean return of simulated trajectories. The decoder can be represented by the product of three diagonal Gaussian distributions, given that
p_\phi(z_{1:T}, s_{0:T}, r_{0:T-1}|z_0, \pi) = \prod_{t=0}^{T} p_\phi(s_t|z_t) \prod_{t=1}^{T} p_\phi(z_t|z_{t-1}, a_{t-1})\, p_\phi(r_{t-1}|z_t), \qquad (4)
with a_t ∼ π(a_t|s_t) at each time step. Specifically, pϕ(z_t|z_{t−1}, a_{t−1}) has its mean and covariance determined by an LSTM, enforcing the transition from z_{t−1} to z_t in the latent space given action a_{t−1}. In what follows, pϕ(s_t|z_t) and pϕ(r_{t−1}|z_t) generate the current state s_t and reward r_{t−1} given z_t, whose means and covariances are determined by MLPs. As a result, the generative process starts with sampling the initial latent variable from the latent prior, i.e., z^ϕ_0 ∼ p(z_0). Then, the initial state s^ϕ_0 ∼ pϕ(s_0|z^ϕ_0) and action a_0 ∼ π(a_0|s^ϕ_0) are obtained from pϕ and the target policy π, respectively; the rest of the generative process can be summarized as
h^\phi_t = f_\phi(h^\phi_{t-1}, z^\phi_{t-1}, a_{t-1}), \qquad \tilde{h}^\phi_t = g_\phi(h^\phi_t), \qquad z^\phi_t \sim p_\phi(z_t|\tilde{h}^\phi_t),
s^\phi_t \sim p_\phi(s_t|z^\phi_t), \qquad r^\phi_{t-1} \sim p_\phi(r_{t-1}|z^\phi_t), \qquad a_t \sim \pi(a_t|s^\phi_t), \qquad (5)
2Assume that different dimensions of the states are uncorrelated with each other. Otherwise, the states can be projected onto an orthogonal basis, such that the off-diagonal elements of the covariance matrix will be zero.
where f_ϕ is the LSTM layer producing the recurrent state h^ϕ_t. Then, an MLP g_ϕ maps h^ϕ_t to h̃^ϕ_t, which will be used for the recurrent state alignment (RSA) introduced below, to augment the information flow between the inference and generative processes.
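For symmetry with the encoder sketch above, a minimal sketch of one generative step in (5) follows; again, the class name, layer sizes, and head shapes are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeDecoder(nn.Module):
    """One step of (5): latent transition via an LSTM cell, then state/reward heads."""
    def __init__(self, state_dim, action_dim, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTMCell(latent_dim + action_dim, hidden_dim)   # f_phi
        self.g = nn.Linear(hidden_dim, hidden_dim)                     # g_phi, produces h_tilde
        self.z_head = nn.Linear(hidden_dim, 2 * latent_dim)            # p_phi(z_t | h_tilde_t)
        self.s_head = nn.Linear(latent_dim, 2 * state_dim)             # p_phi(s_t | z_t)
        self.r_head = nn.Linear(latent_dim, 2)                         # p_phi(r_{t-1} | z_t)

    @staticmethod
    def _gaussian(params):
        mu, pre_std = params.chunk(2, dim=-1)
        return torch.distributions.Normal(mu, F.softplus(pre_std) + 1e-4)

    def step(self, z_prev, a_prev, hc):
        h, c = self.lstm(torch.cat([z_prev, a_prev], dim=-1), hc)
        h_tilde = self.g(h)
        z = self._gaussian(self.z_head(h_tilde)).rsample()
        s_dist = self._gaussian(self.s_head(z))
        r_dist = self._gaussian(self.r_head(z))
        return z, s_dist, r_dist, h_tilde, (h, c)
```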
Furthermore, to train the elements in the encoder (3) and decoder (5), one can maximize the evidence lower bound (ELBO), a lower bound of the joint log-likelihood p(s0:T , r0:T−1), following
\mathcal{L}_{ELBO}(\psi, \phi) = \mathbb{E}_{q_\psi}\Big[\sum_{t=0}^{T} \log p_\phi(s_t|z_t) + \sum_{t=1}^{T} \log p_\phi(r_{t-1}|z_t) - KL\big(q_\psi(z_0|s_0)\,||\,p(z_0)\big) - \sum_{t=1}^{T} KL\big(q_\psi(z_t|z_{t-1}, a_{t-1}, s_t)\,||\,p_\phi(z_t|z_{t-1}, a_{t-1})\big)\Big]; \qquad (6)
here, the first two terms represent the log-likelihood of reconstructing the states and rewards, and the last two terms regularize the approximated posterior. The proof can be found in Appendix E.
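As a sketch of how (6) can be evaluated in practice, the snippet below uses the analytic KL divergence between diagonal Gaussians; it assumes the per-step distributions have already been collected during a joint forward pass of the encoder and decoder, which is an assumption about how the surrounding training loop is organized.

```python
import torch
from torch.distributions import kl_divergence

def elbo_loss(s_dists, r_dists, q0, prior0, q_dists, p_dists, states, rewards):
    """Negative ELBO of Eq. (6). s_dists[t] reconstructs s_t, r_dists[t] reconstructs r_{t-1};
    q_dists/p_dists are the encoder/decoder latent-transition distributions for t = 1..T."""
    recon = sum(d.log_prob(s).sum(-1) for d, s in zip(s_dists, states))
    recon = recon + sum(d.log_prob(r).sum(-1) for d, r in zip(r_dists, rewards))
    kl = kl_divergence(q0, prior0).sum(-1)
    kl = kl + sum(kl_divergence(q, p).sum(-1) for q, p in zip(q_dists, p_dists))
    return -(recon - kl).mean()   # minimize the negative ELBO
```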
2.3 RECURRENT STATE ALIGNMENT
The latent model discussed above is somewhat reminiscent of the ones used in model-based RL policy training methods, e.g., the recurrent state space model (RSSM) used in PlaNet (Hafner et al., 2019) and Dreamer (Hafner et al., 2020a;b), as well as similar ones in Lee et al. (2020); Lu et al. (2022). Such methods rely on a growing experience buffer for training, which is collected online by the target policy that is being concurrently updated (with exploration noise added); however, OPE aims to extrapolate returns from a fixed set of offline trajectories, which may provide limited coverage of the state and action space. Consequently, directly applying the VLM to OPE can lead to subpar performance empirically; see results in Sec. 3. Moreover, the encoder plays a key role in capturing the temporal transitions between latent variables, i.e., qψ(z_t|z_{t−1}, a_{t−1}, s_t) from (2), yet it is absent from the generative process, as the decoder leverages a separate network to determine the latent transitions, i.e., pϕ(z_t|z_{t−1}, a_{t−1}). Furthermore, from the ELBO (6) above it can be seen that only the KL-divergence terms regularize these two parts, which may not be sufficient for OPE given the limited offline trajectories. As a result, we introduce the RSA term as part of the training objective, to further regularize qψ(z_t|z_{t−1}, a_{t−1}, s_t) and pϕ(z_t|z_{t−1}, a_{t−1}). A graphical illustration of RSA can be found in Fig. 2.^3
Specifically, RSA is defined as the mean pairwise squared error between h^ψ_t from the encoder (3) and h̃^ϕ_t from the decoder (5), i.e.,

\mathcal{L}_{RSA}(\tilde{h}^\phi_t, h^\psi_t; \psi, \phi) = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T} \frac{2}{M(M-1)} \Bigg[\sum_{j=1}^{M-1}\sum_{k=j+1}^{M} \Big((\tilde{h}^\phi_t[j] - \tilde{h}^\phi_t[k]) - (h^\psi_t[j] - h^\psi_t[k])\Big)^2\Bigg]; \qquad (7)
here, we assume that both LSTM recurrent states have the same dimension, i.e., h̃^ϕ_t, h^ψ_t ∈ R^M, with h^{(·)}_t[j] referring to the j-th element of the recurrent state, and N the number of training trajectories.
Here, we choose the pairwise squared loss over the classic mean squared error (MSE), because MSE could be too strong to regularize h^ψ_t and h̃^ϕ_t, which support the inference and generative processes respectively and are not supposed to be exactly the same. In contrast, the pairwise loss (7) can
3Rewards and actions are omitted for conciseness of the presentation.
promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. Note that this design choice is justified in Sec. 3 through an ablation study comparing against models trained with MSE. In general, the pairwise loss has also been adopted in many domains for similar purposes, e.g., object detection (Gould et al., 2009; Rocco et al., 2018), ranking systems (Doughty et al., 2018; Saquil et al., 2021) and contrastive learning (Wang et al., 2021; Chen et al., 2020). Similarly, we apply the pairwise loss over h^ψ_t and h̃^ϕ_t, instead of directly over h^ψ_t and h^ϕ_t, as the mapping g_ϕ (from equation 5) could serve as a regularization layer that ensures optimality over L_RSA without changing h^ψ_t and h^ϕ_t significantly.
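Below is a sketch of the pairwise RSA loss in (7) over a batch of recurrent states; the tensor layout (N, T+1, M) and the exact normalization, which follows the "mean pairwise squared error" description above, are assumptions for illustration.

```python
import torch

def rsa_loss(h_dec_tilde, h_enc):
    """Mean pairwise squared error of Eq. (7) between decoder states h_tilde^phi_t and
    encoder states h^psi_t; both tensors have shape (N, T+1, M)."""
    d_dec = h_dec_tilde.unsqueeze(-1) - h_dec_tilde.unsqueeze(-2)   # (N, T+1, M, M)
    d_enc = h_enc.unsqueeze(-1) - h_enc.unsqueeze(-2)
    sq = (d_dec - d_enc) ** 2
    m = h_enc.shape[-1]
    # keep each unordered pair (j < k) once, average over pairs, sum over time, mean over trajectories
    mask = torch.triu(torch.ones(m, m, dtype=torch.bool), diagonal=1)
    per_step = sq[..., mask].sum(-1) / (m * (m - 1) / 2)            # (N, T+1)
    return per_step.sum(-1).mean()
```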
As a result, the objective for training the VLM, following the architectures specified in (3) and (5), can be formulated as

\max_{\psi,\phi} \mathcal{L}_{VLM}(\psi, \phi) = \max_{\psi,\phi}\Big(\mathcal{L}_{ELBO}(\psi, \phi) - C \cdot \mathcal{L}_{RSA}(\tilde{h}^\phi_t, h^\psi_t; \psi, \phi)\Big), \qquad (8)

with C > 0, C ∈ R, being the constant balancing the scale of the ELBO and RSA terms.
2.4 BRANCHING FOR GENERATIVE DECODER
The performance of model-based methods can vary with different design factors (Fu et al., 2020b; Hanin & Rolnick, 2018). Specifically, Rossi et al. (2019) found that the convergence speed and optimality of variational models are sensitive to the choice of weight initialization technique. Moreover, under the typical variational inference setup followed by the VLM above, the latent transitions reconstructed by the decoder, pϕ(z_t|z_{t−1}, a_{t−1}), are only trained through the regularization losses in (6) and (7), yet are fully responsible for rolling out trajectories during evaluation. Consequently, in this sub-section we introduce the branching architecture for the decoder, with the goal of minimizing the impact of random weight initialization of the networks, and allowing the decoder to best reconstruct the latent transitions pϕ(z_t|z_{t−1}, a_{t−1}) as well as the s_t's and r_{t−1}'s correctly. Specifically, the branching architecture leverages an ensemble of B ∈ Z^+ decoders to tease out information from the latent space formulated by the encoder, with final predictions sampled from a mixture of the Gaussian output distributions in (5). Note that the classic ensemble setup, i.e., training and averaging over B VLMs end-to-end, is not considered; in that case B different latent spaces would exist, each still associated with a single decoder, leaving the challenges above unresolved. This design choice is justified by ablation studies in Sec. 3, which compare VLBM against a (classic) ensemble of VLMs.
Branching Architecture. Consider the generative process involving B branches of the decoders parameterized by {ϕ_1, . . . , ϕ_B}. The forward architecture over a single step is illustrated in Fig. 2.^4 Specifically, the procedure of sampling z^{ϕ_b}_t and s^{ϕ_b}_t for each b ∈ [1, B] follows from (5). Recall that by definition p_{ϕ_b}(s_t|z^{ϕ_b}_t) follows a multivariate Gaussian with mean and diagonal of the covariance matrix determined by the corresponding MLPs, i.e., µ(s^{ϕ_b}_t) = ϕ^{MLP}_{b,µ}(z^{ϕ_b}_t) and Σ_{diag}(s^{ϕ_b}_t) = ϕ^{MLP}_{b,Σ}(z^{ϕ_b}_t). In what follows, the final outcome s^ϕ_t can be sampled following a diagonal Gaussian with mean and variance determined by weighted averaging across all branches using weights w_b's, i.e.,

s^\phi_t \sim p_\phi(s_t|z^{\phi_1}_t, \dots, z^{\phi_B}_t) = \mathcal{N}\Big(\mu = \sum_b w_b \cdot \mu(s^{\phi_b}_t),\ \Sigma_{diag} = \sum_b w_b^2 \cdot \Sigma_{diag}(s^{\phi_b}_t)\Big). \qquad (9)
The objective below can be used to jointly update w_b's, ψ and ϕ_b's, i.e.,

\max_{\psi,\phi,w} \mathcal{L}_{VLBM}(\psi, \phi_1, \dots, \phi_B, w_1, \dots, w_B)
= \max_{\psi,\phi,w}\Big(\sum_{t=0}^{T} \log p_\phi(s^\phi_t|z^{\phi_1}_t, \dots, z^{\phi_B}_t) - C_1 \cdot \sum_b \mathcal{L}_{RSA}(\tilde{h}^{\phi_b}_t, h^\psi_t; \psi, \phi_b) + C_2 \sum_b \mathcal{L}_{ELBO}(\psi, \phi_b)\Big),
\text{s.t. } w_1, \dots, w_B > 0,\ \sum_b w_b = 1, \text{ and constants } C_1, C_2 > 0. \qquad (10)
Though the first term above already propagates through all w_b's and ϕ_b's, the third term and the constraints over the w_b's regularize ϕ_b in each individual branch, such that they are all trained toward maximizing
4For simplicity, the parts generating rewards are omitted without loss of generality.
the likelihood p_{ϕ_b}(s^{ϕ_b}_t|z^{ϕ_b}_t). Pseudo-code for training and evaluating the VLBM can be found in Appendix C. Further, in practice, one can define w_b = v_b^2 / (ϵ + \sum_b v_b^2), with v_b ∈ R the learnable variables and 0 < ϵ ≪ 1, ϵ ∈ R, a constant ensuring that the denominator is greater than zero, to convert (10) into an unconstrained optimization problem and solve it using gradient descent. Lastly, note that complementary latent modeling methods, e.g., latent overshooting from Hafner et al. (2019), could be adopted in (10). However, we keep the objective straightforward, so that the source of performance improvements can be isolated.
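The sketch below illustrates how the branch weights from (9)–(10) can be parameterized with unconstrained variables v_b and used to mix the per-branch Gaussian heads; tensor shapes and the class name are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BranchMixer(nn.Module):
    """Weighted averaging over B decoder branches, Eq. (9), with w_b = v_b^2 / (eps + sum_b v_b^2)."""
    def __init__(self, num_branches, eps=1e-6):
        super().__init__()
        self.v = nn.Parameter(torch.ones(num_branches))
        self.eps = eps

    def weights(self):
        v2 = self.v ** 2
        return v2 / (self.eps + v2.sum())

    def forward(self, branch_means, branch_vars):
        # branch_means, branch_vars: tensors of shape (B, batch, state_dim)
        w = self.weights().view(-1, 1, 1)
        mean = (w * branch_means).sum(0)
        var = ((w ** 2) * branch_vars).sum(0)
        return torch.distributions.Normal(mean, var.sqrt())
```

Because the weights are squared and normalized, the constraints in (10) hold by construction, so standard gradient descent can be applied directly.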
3 EXPERIMENTS
To evaluate the VLBM, we follow the guidelines from the deep OPE (DOPE) benchmark (Fu et al., 2020b). Specifically, we follow the D4RL branch in DOPE and use the Gym-Mujoco and Adroit suites as the test base (Fu et al., 2020a). Such environments have long horizons and high-dimensional state and action spaces, which are usually challenging for model-based methods. The provided offline trajectories for training are collected using behavioral policies of varied quality, including limited exploration, human teleoperation, etc., which can result in different levels of coverage over the state-action space. Also, the target (evaluation) policies are generated using online RL training, aiming to reduce the similarity between behavioral and target policies; this introduces another challenge, as during evaluation the agent may visit states unseen in the training trajectories.
Environmental and Training Setup. A total of 8 environments are provided by the Gym-Mujoco and Adroit suites (Fu et al., 2020b;a). Moreover, each environment is provided with 5 (for Gym-Mujoco) or 3 (for Adroit) training datasets collected using different behavioral policies, resulting in a total of 32 env-dataset tasks^5 – a full list can be found in Appendix A. DOPE also provides 11 target policies for each environment, whose performance is to be evaluated by the OPE methods. They in general result in varied scales of returns, as shown in the x-axes of Fig. 7. Moreover, we consider the decoder to have B = 10 branches, i.e., {p_{ϕ_1}, . . . , p_{ϕ_10}}. The dimension of the latent space is set to 16, i.e., z ∈ Z ⊂ R^16. Other implementation details can be found in Appendix A.
Baselines and Evaluation Metrics. In addition to the five baselines reported from DOPE, i.e., importance sampling (IS) (Precup, 2000), doubly robust (DR) (Thomas & Brunskill, 2016), variational power method (VPM) (Wen et al., 2020), distribution correction estimation (DICE) (Yang et al., 2020), and fitted Q-evaluation (FQE) (Le et al., 2019), the effectiveness of VLBM is also compared against the state-of-the-art model-based OPE method leveraging the auto-regressive (AR) architecture (Zhang et al., 2020a). Specifically, for each task we train an ensemble of 10 AR models, for fair comparison against VLBM which leverages the branching architecture; see Appendix A for details of the AR ensemble setup. Following the DOPE benchmark (Fu et al., 2020b), our evaluation metrics include rank correlation, regret@1, and mean absolute error (MAE). VLBM and all baselines are trained using 3 different random seeds over each task, leading to the results reported below.
Ablation. Four ablation baselines are also considered, i.e., VLM, VLM+RSA, VLM+RSA(MSE) and VLM+RSA Ensemble. Specifically, VLM refers to the model introduced in Sec. 2.2, trained toward maximizing only the ELBO, i.e., (6). Note that, arguably, VLM could be seen as a generalization of directly applying the latent models proposed in existing RL policy optimization literature (Lee et al., 2020; Hafner et al., 2019; 2020a;b; Lu et al., 2022); details can be found in Sec. 4 below. The VLM+RSA ablation baseline follows the same model architecture as VLM, but is trained to optimize both the ELBO and the recurrent state alignment (RSA) term as introduced in (8), i.e., branching is not used, in contrast to VLBM. The design of these two baselines helps analyze the effectiveness of the RSA
5From now on the dataset names are abbreviated by their initials, e.g., Ant-M-R refers to Ant-Medium-Replay.
loss term and branching architecture introduced in Sec. 2.3 and 2.4. Moreover, VLM+RSA(MSE) uses mean squared error in place of the pairwise loss introduced in (7), and the VLM+RSA Ensemble applies a classic ensemble by averaging over B VLM+RSA models end-to-end, instead of branching from the decoder as in VLBM. These two ablation baselines help justify the use of the pairwise loss for RSA, and the benefit of the branching architecture over classic ensembles.
Results. Fig. 3 shows the mean overall performance attained by VLBM and the baselines over all 32 Gym-Mujoco and Adroit tasks. In general, VLBM leads to significantly increased rank correlations and decreased regret@1's over existing methods, with MAEs maintained at the state-of-the-art level. Specifically, VLBM achieves state-of-the-art performance in 31, 29, and 15 (out of 32) tasks in terms of rank correlation, regret@1 and MAE, respectively. Performance for each task can be found in Tables 1-6 at the end of the Appendices. Note that results for IS, VPM, DICE, DR, and FQE are obtained directly from the DOPE benchmark (Fu et al., 2020b), since the same experimental setup is considered. Figs. 4 and 5 visualize the mean performance for each Gym-Mujoco and Adroit environment respectively, over all the associated datasets. It can also be observed that the model-based and FQE baselines generally perform better than the other baselines, which is consistent with findings from DOPE.
The fact that VLM+RSA outperforms the VLM ablation baseline, as shown in Fig. 4, illustrates the need for the RSA loss term to smooth the flow of information between the encoder and decoder in the latent space. Moreover, one can observe that VLM+RSA(MSE) sometimes performs worse than VLM, and significantly worse than VLM+RSA in general. Specifically, it has been found that, compared to VLM and VLM+RSA respectively, VLM+RSA(MSE) significantly worsens at least two metrics in 7 and 12 (out of 20) Gym-Mujoco tasks; detailed performance over these tasks can be found in Tables 1-6 at the end of the Appendices. Such a finding backs up the design choice of using the pairwise loss for RSA instead of MSE, as MSE could be overly strong in regularizing the LSTM recurrent states of the encoder and decoder, while the pairwise loss only enforces structural similarities. Moreover, VLBM improves rank correlations and regrets greatly compared to VLM+RSA, illustrating the importance of the branching architecture. In the paragraph below, we show empirically the benefits brought in by branching over classic ensembles.
Branching versus Classic Ensembles. Fig. 4 shows that the VLM+RSA Ensemble does not improve performance over VLM+RSA in general, and even leads to worse overall rank correlations and regrets in the Walker2d and Hopper environments. This supports the rationale provided in Sec. 2.4 that each decoder still samples exclusively from a different latent space, and averaging over the output distributions may not help reduce the disturbance brought in by the modeling artifacts under the variational inference framework, e.g., random weight initializations (Hanin & Rolnick, 2018; Rossi et al., 2019). In contrast, the VLBM leverages the branching architecture, allowing all the branches to sample from the same latent space formulated by the encoder. Empirically, we find that the branching weights, w_b's in (9), allow VLBM to kill branches that are not helpful toward reconstructing the trajectories accurately, possibly overcoming bad initializations etc. Over all 32 tasks we consider, most VLBMs only keep 1-3 branches (out of 10), i.e., w_b < 10^{-5} for all other branches. The distribution of all w_b's, from VLBMs trained on the 32 tasks, is shown in Fig. 6; one can observe that most of the w_b's are close to zero, while the others generally fall in the range of (0, 0.25] and [0.75, 1).
AR ensembles also lead to compelling rank correlations and regrets, but attain much smaller margins in MAEs over the other baselines in general; see Fig. 3. From Fig. 7, one can observe that they tend to significantly under-estimate most of the high-performing policies. Scatter plots for the other tasks can be found in Appendix A, which also show this trend. The reason could be that their model architecture and training objectives are designed to directly learn the transitions of the MDP; thus, they may produce biased predictions when the target policies lead to visitation of states that are not substantially represented in the training data, since such data are obtained using behavioral policies that are sub-optimal. In contrast, the VLBM can leverage RSA and branching against such situations, thus outperforming AR ensembles in most of the OPE tasks in terms of all metrics we considered. Interestingly, Fig. 7 also shows that latent models could sometimes over-estimate the returns. For example, in Hopper-M-E and Walker2d-M-E, VLM tends to over-estimate most policies. The VLBM performs consistently well in Hopper-M-E, but is mildly affected by such an effect in Walker2d-M-E, though over fewer policies and with smaller margins. It has been found that variational inference may fall short in approximating true distributions that are asymmetric, and may produce biased estimations (Yao et al., 2018). Our hypothesis is thus that the dynamics defining certain environments may lead to asymmetry in the true posterior p(z_t|z_{t−1}, a_{t−1}, s_t), which could be hard to capture within the latent modeling framework we consider. A more comprehensive understanding of this behavior can be explored in future work. However, the VLBM still significantly outperforms VLM overall, and achieves top-performing rank correlations and regrets; such results illustrate the VLBM's improved robustness, resulting from its architectural design and choices of training objectives.
t-SNE Visualization of the Latent Space. Fig. 8 illustrates t-SNE visualization of the latent space by rolling out trajectories using all target policies respectively, followed by feeding the state-action pairs into the encoder of VLBM which maps them into the latent space. It shows the encoded state-action pairs induced from policies with similar performance are in general swirled and clustered together, illustrating that VLBM can learn expressive and disentangled representations of its inputs.
4 RELATED WORK
Latent Modeling in RL. Though variational inference has rarely been explored to facilitate model-based OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), and Dreamer (Hafner et al., 2020a;b). Below we discuss the connections and distinctions between VLBM and the latent models leveraged by them, with a detailed overview of these methods provided in Appendix G. Specifically, SLAC and SOLAR learn latent representations of the dynamics jointly with optimization of the target policies, using the latent information to improve sample efficiency. Similarly, LatCo performs trajectory optimization over the latent space to allow for temporarily bypassing dynamic constraints. As a result, latent models used in such methods are not designed toward rolling out trajectories independently, as opposed to the use of VLBM in this paper. PlaNet and Dreamer train the recurrent state space model (RSSM) using a growing experience dataset collected by the target policy that is being concurrently updated (with exploration noise added), which requires online data collection. In contrast, under the OPE setup, VLBM is trained over a fixed set of offline trajectories collected by unknown behavioral policies. Moreover, note that the VLM baseline is somewhat reminiscent of the RSSM and similar models as in Lee et al. (2020); Lu et al. (2022); however, the experiments above show that directly using VLM for OPE could lead to subpar performance. On the other hand, though MOPO (Yu et al., 2020), LOMPO (Rafailov et al., 2021) and COMBO (Yu et al., 2021) can learn from offline data, they focus on quantifying the uncertainty of the model's predictions of next states and rewards, followed by incorporating them into policy optimization objectives to penalize visiting regions where transitions are not fully captured; thus, such works are also orthogonal to the use case of OPE.
OPE. Classic OPE methods adopt IS to estimate expectations over the unknown visitation distribution of the target policy, resulting in weighted IS, step-wise IS and weighted step-wise IS (Precup, 2000). IS can lead to estimations with low (or zero) bias, but with high variance (Kostrikov & Nachum, 2020; Jiang & Li, 2016), which sparks a long line of research to address this challenge. DR methods propose to reduce variance by coupling IS with a value function approximator (Jiang & Li, 2016; Thomas & Brunskill, 2016; Farajtabar et al., 2018). However, the introduction of such approximations may increase bias, so the method proposed in Tang et al. (2019) attempts to balance the scale of bias and variance for DR. Unlike IS and DR methods that require the behavioral policies to be fully known, the DICE family of estimators (Zhang et al., 2020c;b; Yang et al., 2021; 2020; Nachum et al., 2019; Dai et al., 2020) and VPM (Wen et al., 2020) can be behavioral-agnostic; they directly capture marginalized IS weights as the ratio between the propensity of the target policy to visit particular state-action pairs and their likelihood of appearing in the logged data. There also exist FQE methods which extrapolate policy returns from approximated Q-functions (Hao et al., 2021; Le et al., 2019; Kostrikov & Nachum, 2020).
Existing model-based OPE methods are designed to directly fit MDP transitions using feed-forward (Fu et al., 2020b) or auto-regressive (Zhang et al., 2020a) models, and have shown promising results over model-free methods, as reported in a recent benchmark (Fu et al., 2020b). However, such model-based approaches could be sensitive to the initialization of weights (Hanin & Rolnick, 2018; Rossi et al., 2019) and produce biased predictions, due to the limited coverage over state and action space provided by offline trajectories (Fu et al., 2020b). Instead, VLBM mitigates such effects by capturing the dynamics over the latent space, such that states and rewards are evolved from a compact feature space over time. Moreover, RSA and the branching can lead to increased expressiveness and robustness, such that future states and rewards are predicted accurately. There also exist OPE methods proposed for specific applications (Chen et al., 2022; Saito et al., 2021; Gao et al., 2023; 2022b).
5 CONCLUSION AND FUTURE WORK
We have developed the VLBM, which can accurately capture the dynamics underlying environments from offline training data that provide limited coverage of the state and action space; this is achieved by using the RSA term to smooth out the information flow from the encoder to the decoders in the latent space, as well as the branching architecture which improves VLBM's robustness against random initializations. We have followed the evaluation guidelines provided by the DOPE benchmark, and experimental results have shown that the VLBM generally outperforms the state-of-the-art model-based OPE method using AR architectures, as well as other model-free methods. VLBM can also facilitate off-policy optimization, which can be explored in future work. Specifically, VLBM can serve as a synthetic environment on which optimal controllers (e.g., linear-quadratic regulators) can be deployed. On the other hand, similar to Dreamer and SLAC, policies can be updated jointly with training of VLBM, but without the need for online interactions with the environment during training.
ACKNOWLEDGMENTS
This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, and by the NSF CNS-1652544, CNS-1837499, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.
A ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS
Additional Results and Discussions. Rank correlations, regret@1 and MAEs for all 32 tasks are documented in Tables 1-6 below.^6 The mean and standard deviation (in subscripts) over 3 random seeds are reported. Note that in each column, the performance of multiple methods may be highlighted in bold, meaning they all achieve the best performance and do not significantly outperform each other. The fact that VLBM outperforms the ablation baselines in most cases suggests that the RSA loss term and branching architecture can effectively increase model expressiveness, and allow the dynamics underlying the MDP to be learned more accurately and robustly from offline data that provide limited exploration coverage. Yet, smaller margins are attained between the VLBM and VLM+RSA in Hopper-M-E and Hopper-M. This is likely because Hopper has a relatively lower-dimensional state space compared to the other three environments, such that the underlying dynamics can be sufficiently captured by the VLM+RSA. Figs. 10 and 11 show the correlation between estimated (y-axis) and true returns (x-axis) for all the OPE tasks we consider. It can be found that for Halfcheetah-R, -M-R, and -M, most of the model-based methods cannot significantly distinguish the returns across target policies. The cause could be that the offline trajectories provided for this task are relatively more challenging, compared to the other OPE tasks. Such an effect appears to affect IS, VPM, DICE, DR and FQE at a larger scale. It can be observed from the scatter plots reported in the DOPE benchmark (Fu et al., 2020b) that these methods could hardly tell the scale of returns across different target policies, as the dots almost form a horizontal line in each plot. However, the estimated returns from VLBM and IS still preserve the rank, which leads to high rank correlations and low regrets.
Implementation Details and Hyper-parameter. The model-based methods are evaluated by directly interacting with each target policy for 50 episodes, and the mean of discounted total returns (γ = 0.995) over all episodes is used as estimated performance for the policy. We choose the neural network architectures as follows. For the components involving LSTMs, which include qψ(zt|zt−1, at−1, st) and pϕ(zt|zt−1, at−1), their architecture include one LSTM layer with 64 nodes, followed by a dense layer with 64 nodes. All other components do not have LSTM layers involved, so they are constituted by a neural network with 2 dense layers, with 128 and 64 nodes respectively. The output layers that determine the mean and diagonal covariance of diagonal Gaussian distributions use linear and softplus activations, respectively. The ones that determine the mean of Bernoulli distributions (e.g., for capturing early termination of episodes) are configured to use sigmoid activations. VLBM and the two ablation baselines, VLM and VLM+RSA, are trained using offline trajectories provided by DOPE, with max_iter in Alg. 1 set to 1,000 and minibatch size set to 64. Adam optimizer is used to perform gradient descent. To determine the learning rate, we perform grid search among {0.003, 0.001, 0.0007, 0.0005, 0.0003, 0.0001, 0.00005}. Exponential decay is applied to the learning rate, which decays the learning rate by 0.997 every iteration. To train VLBM, we set the constants from equation 10 following C1 = C2, and perform grid search among
6Some VPM entries are absent since they were not reported in Fu et al. (2020b), nor the code is open-sourced.
{5, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. To train VLM+RSA, the constant C from equation 8 is determined by grid search over the same set of parameters above. L2-regularization with decay of 0.001 and batch normalization are applied to all hidden layers. Considering that some of the environments (e.g., Ant, Hopper, Walker2d, Pen) may terminate an episode before timeout if the state meets specific conditions, details on how VLBM captures such early termination behavior are introduced in Appendix D.
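For reference, the grid-search configuration described above can be summarized as follows; the values mirror the text, while the dictionary structure itself is only an illustrative assumption.

```python
# Hyperparameter grids described in Appendix A (structure of this dict is illustrative).
vlbm_search_space = {
    "learning_rate": [3e-3, 1e-3, 7e-4, 5e-4, 3e-4, 1e-4, 5e-5],
    "lr_decay_per_iter": 0.997,
    "C1_C2": [5, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001],  # with C1 = C2
    "batch_size": 64,
    "max_iter": 1000,
    "latent_dim": 16,
    "num_branches": 10,
    "l2_decay": 0.001,
}
```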
The DOPE Benchmark. The deep OPE (DOPE) benchmark (Fu et al., 2020b) provides standardized training and evaluation procedure for OPE works to follow, which facilitates fair and comprehensive comparisons among various OPE methods. Specifically, it utilizes existing environments and training trajectories provided by D4RL7 and RLUnplugged8, which are two benchmark suites for offline RL training, and additionally provide target policies for OPE methods to evaluate. In the D4RL branch, the training trajectories are originally collected from various sources including random exploration, human teleoperation, and RL-trained policies with limited exploration; thus, can provide varied levels of coverage over the state-action space. Moreover, the target policies are trained using online RL algorithms, which can in general lead to different state-action visitations than in the training trajectories. We leverage the D4RL branch as our test base, since the OPE tasks it provides are considered challenging, i.e., the limited coverage introduced by training data, as well as the discrepancy between the behavioral and target policies. Graphical illustrations of the Gym-Mujoco and Adroit environments considered are shown in Fig. 9. Details on the environments and datasets used are shown in Tables 7 and 8, from the perspectives of state and action dimensions, if episodes can be terminated before timeout, if controls are performed over continuous space, and the size of the offline trajectories used for training. In contrast, in the RLUnplugged branch, the training trajectories are always collected using online RL training, which can result in adequate coverage over the state-action space. The target policies are trained by applying offline RL over the training trajectories, so that behavioral and target policies can lead to similar state-action visitation distributions. As discussed in DOPE (Fu et al., 2020b), such tasks are suitable for studies where ideal data are needed, such as complexity comparisons.
Evaluation Metrics. Following (Fu et al., 2020b), we consider rank correlation, regret@1 and mean absolute error (MAE) as the evaluation metrics. Specifically, rank correlation measures the strength and direction of the monotonic association between the rank of OPE-estimated returns and that of the true returns over all target policies. It is captured by Spearman's correlation coefficient between the ordinal rankings of estimated and true returns. Regret@1 is captured by the difference between the true return of the policy with the highest OPE-estimated return and the return of the policy that actually produces the highest true return. In other words, regret@1 evaluates how much worse the policy with the highest OPE-estimated return would perform than the actual best policy. These two metrics evaluate how useful OPE would be for important applications such as policy selection. Finally, we also consider MAE, which is commonly used in estimation/regression tasks. Mathematical definitions of these metrics can be found in (Fu et al., 2020b).
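To make the metrics concrete, a sketch of how rank correlation, regret@1, and MAE can be computed from OPE estimates and ground-truth returns is given below, using SciPy's Spearman implementation; the function signature is an assumption for exposition.

```python
import numpy as np
from scipy.stats import spearmanr

def ope_metrics(estimated_returns, true_returns):
    """Rank correlation, regret@1, and MAE over a set of target policies."""
    est = np.asarray(estimated_returns, dtype=float)
    true = np.asarray(true_returns, dtype=float)
    rank_corr = spearmanr(est, true).correlation
    regret_at_1 = true.max() - true[est.argmax()]   # gap to the actual best policy
    mae = np.abs(est - true).mean()
    return rank_corr, regret_at_1, mae
```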
Implementation of AR Ensembles. For fair comparisons with VLBM, in experiments we train an ensemble of the state-of-the-art model-based OPE method, auto-regressive (AR) models (Zhang et al., 2020a), as one of the baselines. Specifically, we train an ensemble of 10 AR models to learn p(st+1, rt|st, at) following the auto-regressive manner, with each individual model following the design introduced in (Zhang et al., 2020a), i.e.,
s^{(j)}_{t+1} \sim p\big(s^{(j)}_{t+1}\,\big|\,s_t, a_t, s^{(1)}_{t+1}, \dots, s^{(j-1)}_{t+1}\big), \qquad (11)
with s^{(j)}_{t+1} representing the element located at the j-th dimension of the state variable, and D the dimension of the state space. The reward is treated as an additional dimension of the state, i.e., r_t ∼ p(r_t|s_t, a_t, s^{(1)}_{t+1}, . . . , s^{(D)}_{t+1}). However, the original literature (Zhang et al., 2020a) does not introduce in detail which specific ensemble architecture is used (e.g., overall averaging or weighted averaging). As a result, we choose the same weighted averaging procedure as used in VLBM branching, to sort out the influence of different ensemble architectures and facilitate fair comparisons. Specifically, a total of 10 AR models, parameterized by {θ_1, . . . , θ_10}, along with 10
7https://github.com/rail-berkeley/d4rl 8https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged
weight variables {w^θ_1, . . . , w^θ_{10} | \sum_i w^θ_i = 1}, are trained. Similar to the weighted averaging architecture used in VLBM, i.e., equation 9, the mean and variance of the prediction s^{(j)}_{t+1}, captured by a normal distribution N(µ, σ^2), follow

\mu = \sum_{i=1}^{10} w^\theta_i \cdot \mu_{\theta_i}(s^{(j)}_{t+1}), \qquad \sigma^2 = \sum_{i=1}^{10} (w^\theta_i)^2 \cdot \sigma^2_{\theta_i}(s^{(j)}_{t+1}), \qquad (12)

where µ_{θ_i}(s^{(j)}_{t+1}) and σ^2_{θ_i}(s^{(j)}_{t+1}) are the mean and variance produced by each individual AR model in the ensemble.
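A short sketch of the weighted averaging in (12) across the 10 AR models is given below; per-model means and variances are assumed to be stacked along the first axis, which is an implementation assumption.

```python
import numpy as np

def ar_ensemble_moments(means, variances, weights):
    """Eq. (12): mix per-model Gaussian moments with normalized weights.
    means, variances: arrays of shape (num_models, ...); weights sum to one."""
    w = np.asarray(weights).reshape(-1, *([1] * (means.ndim - 1)))
    mu = (w * means).sum(axis=0)
    var = ((w ** 2) * variances).sum(axis=0)
    return mu, var
```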
Training Resources. Training of the proposed method, and baselines, are facilitated by Nvidia Quadro RTX 6000, NVIDIA RTX A5000, and NVIDIA TITAN XP GPUs.
License. The use of DOPE9 and D4RL (Fu et al., 2020a) follow the Apache License 2.0.
9https://github.com/google-research/deep_ope
B MORE t-SNE VISUALIZATIONS
Figures 12 and 13 above visualize the latent spaces captured by two ablation baselines, VLM and VLM+RSA(MSE), respectively. It can be observed that the latent space captured by VLM is not as well disentangled as that of VLBM (shown in Figure 8), as the state-action pairs induced by policies with different levels of performance are generally clustered together without explicit boundaries. Such a finding illustrates the empirical importance of the RSA loss (7), as it can effectively regularize qψ(z_t|z_{t−1}, a_{t−1}, s_t) and allows the encoder to map the MDP states to an expressive and compact latent space from which the decoder can reconstruct states and rewards accurately. Moreover, Figure 13 shows that the latent representations of the state-action pairs captured by VLM+RSA(MSE) are distributed almost uniformly over the latent space. This justifies the rationale provided in Sec. 2.3 that MSE is too strong a regularizer for the hidden states of the encoder and decoder, and is also consistent with the results reported in Figure 3, where VLM+RSA(MSE) performs worse than VLM in general.
C ALGORITHMS FOR TRAINING AND EVALUATING VLBM
Algorithm 1 Train VLBM.
Input: Model weights ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B, offline trajectories ρ_β, and learning rate α.
Begin:
1: Initialize ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B
2: for iter in 1 : max_iter do
3:     Sample a trajectory [(s_0, a_0, r_0, s_1), . . . , (s_{T−1}, a_{T−1}, r_{T−1}, s_T)] ∼ ρ_β
4:     z^ψ_0 ∼ q_ψ(z_0|s_0)
5:     z^{ϕ_b}_0 ∼ p(z_0), for all b ∈ [1, B]
6:     Run a forward pass of VLBM following (3), (5) and (9) for t = 1 : T, and collect all variables needed to evaluate L_VLBM as specified in (10).
7:     ψ ← ψ + α∇_ψ L_VLBM
8:     for b in 1 : B do
9:         ϕ_b ← ϕ_b + α∇_{ϕ_b} L_VLBM
10:        w_b ← w_b + α∇_{w_b} L_VLBM
11:    end for
12: end for
Algorithm 2 Evaluate VLBM.
Input: Trained model weights ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B
Begin:
1: Initialize the list that stores the accumulated returns over all episodes, R = []
2: for epi in 1 : max_epi do
3:     Initialize the variable r = 0 that tracks the accumulated return for the current episode
4:     Initialize latent states from the prior, i.e., z^{ϕ_b}_0 ∼ p(z_0) for all b ∈ [1, B]
5:     Initialize LSTM hidden states h^{ϕ_b}_0 = 0 for all b ∈ [1, B]
6:     Sample s^{ϕ_b}_0 ∼ p_ϕ(s_0|z^{ϕ_b}_0) for all b ∈ [1, B] and generate the initial MDP state s^ϕ_0 following (9)
7:     for t in 1 : T do
8:         Determine the action following the target policy π, i.e., a_{t−1} ∼ π(a_{t−1}|s^ϕ_{t−1})
9:         for b in 1 : B do
10:            Update h^{ϕ_b}_t, h̃^{ϕ_b}_t, z^{ϕ_b}_t, s^{ϕ_b}_t, r^{ϕ_b}_{t−1} following (5).
11:        end for
12:        Generate the next state s^ϕ_t following (9), as well as the reward r^ϕ_{t−1} ∼ p_ϕ(r_{t−1}|z^{ϕ_1}_t, . . . , z^{ϕ_B}_t) = N(µ = Σ_b w_b · µ(r^{ϕ_b}_{t−1}), Σ_diag = Σ_b w_b^2 · Σ_diag(r^{ϕ_b}_{t−1}))
13:        Update r ← r + γ^{t−1} r^ϕ_{t−1}, with γ being the discounting factor
14:    end for
15:    Append r into R
16: end for
17: Average over all elements in R, which serves as the estimated return of π
D EARLY TERMINATION OF ENVIRONMENTS
Some of the environments considered, including Ant, Hopper, Walker2d and Pen, may terminate an episode before reaching the maximum number of steps if the state violates specific constraints. Below we introduce how VLM and VLBM can be enriched to capture such early termination behaviors.
VLM. For VLM, we introduce an additional component d^ϕ_t ∼ p_ϕ(d_t|z^ϕ_t) to the generative process in equation 5, where d^ϕ_t is a Bernoulli variable determining if an episode should be terminated at its t-th step. Specifically, p_ϕ(d_t|z^ϕ_t) follows a Bernoulli distribution, with mean determined by an MLP with sigmoid activation applied to the output layer. As a result, the generative process now follows

h^\phi_t = f_\phi(h^\phi_{t-1}, z^\phi_{t-1}, a_{t-1}), \qquad \tilde{h}^\phi_t = g_\phi(h^\phi_t), \qquad z^\phi_t \sim p_\phi(z_t|\tilde{h}^\phi_t),
s^\phi_t \sim p_\phi(s_t|z^\phi_t), \qquad r^\phi_{t-1} \sim p_\phi(r_{t-1}|z^\phi_t), \qquad d^\phi_t \sim p_\phi(d_t|z^\phi_t), \qquad a_t \sim \pi(a_t|s^\phi_t). \qquad (13)
Moreover, we add in a new term to VLM’s training objective, in order to update the component introduced above during training, i.e.,
\mathcal{L}^{early\_term}_{VLM}(\psi, \phi) = \mathcal{L}_{VLM}(\psi, \phi) + \sum_{t=0}^{T} \log p_\phi(d_t|z_t), \qquad (14)

with \mathcal{L}_{VLM}(\psi, \phi) being the original objective of VLM, as presented in equation 8.
VLBM. For VLBM, the termination of an episode is determined following

d^\phi_t \sim p_\phi(d_t|z^{\phi_1}_t, \dots, z^{\phi_B}_t) = \text{Bernoulli}\Big(\mu = \sum_b w_b \cdot \mu_d(d^{\phi_b}_t)\Big), \qquad (15)

where µ_d(d^{ϕ_b}_t) = ϕ^{MLP}_{b,µ_d}(z^{ϕ_b}_t) is the mean of d^{ϕ_b}_t produced by the b-th branch of the decoder, and ϕ^{MLP}_{b,µ_d} is the corresponding MLP that maps z^{ϕ_b}_t to µ_d(d^{ϕ_b}_t). To update the components involved in the procedure above, we introduce a new term to the VLBM's objective, i.e.,
\mathcal{L}^{early\_term}_{VLBM}(\psi, \phi_1, \dots, \phi_B, w_1, \dots, w_B) \qquad (16)
= \mathcal{L}_{VLBM}(\psi, \phi_1, \dots, \phi_B, w_1, \dots, w_B) + \sum_{t=0}^{T} \log p_\phi(d^\phi_t \,|\, z^{\phi_1}_t, \dots, z^{\phi_B}_t), \qquad (17)

with \mathcal{L}_{VLBM} being the original objective of VLBM, as presented in equation 10.
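A minimal sketch of the Bernoulli termination head described in (13)–(15) is given below; the layer sizes and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TerminationHead(nn.Module):
    """p_phi(d_t | z_t): Bernoulli over early termination, mean given by an MLP with sigmoid output."""
    def __init__(self, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),
        )

    def forward(self, z_t):
        return torch.distributions.Bernoulli(probs=self.net(z_t))
```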
E BOUND DERIVATION
We now derive the evidence lower bound (ELBO) for the joint log-likelihood distribution, i.e.,
\log p_\phi(s_{0:T}, r_{0:T-1}) \qquad (18)
= \log \int_{z_{1:T} \in \mathcal{Z}} p_\phi(s_{0:T}, z_{1:T}, r_{0:T-1})\, dz \qquad (19)
= \log \int_{z_{1:T} \in \mathcal{Z}} p_\phi(s_{0:T}, z_{1:T}, r_{0:T-1}) \frac{q_\psi(z_{0:T}|s_{0:T}, a_{0:T-1})}{q_\psi(z_{0:T}|s_{0:T}, a_{0:T-1})}\, dz \qquad (20)
\geq \mathbb{E}_{q_\psi}\big[\log p(z_0) + \log p_\phi(s_{0:T}, z_{1:T}, r_{0:T-1}|z_0) - \log q_\psi(z_{0:T}|s_{0:T}, a_{0:T-1})\big] \qquad (21)
= \mathbb{E}_{q_\psi}\Big[\log p(z_0) + \log p_\phi(s_0|z_0) + \sum_{t=1}^{T} \log p_\phi(s_t, z_t, r_{t-1}|z_{t-1}, a_{t-1}) - \log q_\psi(z_0|s_0) - \sum_{t=1}^{T} \log q_\psi(z_t|z_{t-1}, a_{t-1}, s_t)\Big] \qquad (22)
= \mathbb{E}_{q_\psi}\Big[\log p(z_0) - \log q_\psi(z_0|s_0) + \log p_\phi(s_0|z_0) + \sum_{t=1}^{T} \log\big(p_\phi(s_t|z_t)\, p_\phi(r_{t-1}|z_t)\, p_\phi(z_t|z_{t-1}, a_{t-1})\big) - \sum_{t=1}^{T} \log q_\psi(z_t|z_{t-1}, a_{t-1}, s_t)\Big] \qquad (23)
= \mathbb{E}_{q_\psi}\Big[\sum_{t=0}^{T} \log p_\phi(s_t|z_t) + \sum_{t=1}^{T} \log p_\phi(r_{t-1}|z_t) - KL\big(q_\psi(z_0|s_0)\,||\,p(z_0)\big) - \sum_{t=1}^{T} KL\big(q_\psi(z_t|z_{t-1}, a_{t-1}, s_t)\,||\,p_\phi(z_t|z_{t-1}, a_{t-1})\big)\Big]. \qquad (24)
Note that the transition from equation 20 to equation 21 follows Jensen’s inequality.
F BASICS OF VARIATIONAL INFERENCE
Classic variational auto-encoders (VAEs) are designed to generate synthetic data that share similar characteristics with the ones used for training (Kingma & Welling, 2013). Specifically, VAEs learn an approximated posterior qψ(z|x) and a generative model pϕ(x|z), over the prior p(z), with x being the data and z the latent variable. Its true posterior pϕ(z|x) is intractable, i.e.,
p_\phi(z|x) = \frac{p_\phi(x|z)\, p(z)}{p_\phi(x)}; \qquad (25)
since the marginal likelihood in the denominator, pϕ(x) = ∫ z pϕ(x|z)p(z)dz, requires integration over the unknown latent space. For the same reason, VAEs cannot be trained to directly maximize the marginal log-likelihood, max log pϕ(x). To resolve this, one could maximize a lower bound of pϕ(x), i.e.,
\max_{\psi,\phi}\ -KL\big(q_\psi(z|x)\,||\,p(z)\big) + \mathbb{E}_{q_\psi}[\log p_\phi(x|z)], \qquad (26)
which is the evidence lower bound (ELBO).
Reparameterization. During training, it is required to sample from qψ(z|x) and pϕ(x|z) constantly. The reparameterization technique is introduced in (Kingma & Welling, 2013), to ensure that the gradients can flow through such sampling process during back-propagation. For example, if both distributions (qψ(z|x) and pϕ(x|z)) follow diagonal Gaussians, with mean and diagonal covariance determined by MLPs, i.e.,
z \sim q_\psi(z|x) = \mathcal{N}\big(\mu = \psi^{MLP}_\mu(x),\ \Sigma = \psi^{MLP}_\Sigma(x)\big), \qquad (27)
x \sim p_\phi(x|z) = \mathcal{N}\big(\mu = \phi^{MLP}_\mu(z),\ \Sigma = \phi^{MLP}_\Sigma(z)\big); \qquad (28)
here, ψ^{MLP}_µ, ψ^{MLP}_Σ, ϕ^{MLP}_µ, ϕ^{MLP}_Σ are the MLPs that generate the means and covariances. The sampling processes above can be captured by reparameterization, i.e.,
z = \psi^{MLP}_\mu(x) + \psi^{MLP}_\Sigma(x) \cdot \epsilon, \qquad (29)
x = \phi^{MLP}_\mu(z) + \phi^{MLP}_\Sigma(z) \cdot \epsilon, \qquad (30)
with ϵ ∼ N (0, I). Consequently, the gradients over ψ and ϕ can be calculated following the chain rule, and used for back-propagation during training. We direct readers to (Kingma & Welling, 2013) for a comprehensive review of reparameterization.
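As a concrete instance of (29)–(30), a one-line PyTorch sketch of the reparameterized sampling step is given below; the function name is illustrative.

```python
import torch

def reparameterized_sample(mu, std):
    """z = mu + std * eps with eps ~ N(0, I); gradients flow through mu and std."""
    eps = torch.randn_like(std)
    return mu + std * eps
```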
G ADDITIONAL RELATED WORKS
Overview of latent-model based RL methods. In SLAC, latent representations are used to improve the sample efficiency of model-free RL training algorithms, by jointly modeling and learning dynamics and controls over the latent space. Similarly, SOLAR improves data efficiency for multi-task RL by first learning high-level latent representations of the environment, which can be shared across different tasks. Then, local dynamics models are inferred from the abstraction, with controls solved by linear-quadratic regulators. PlaNet and Dreamer further improve the architecture and training objectives of latent models, allowing them to look ahead multiple steps and plan for longer horizons. There also exists LatCo, which directly performs trajectory optimization over the latent space, allowing the agent to temporarily bypass dynamical constraints and quickly navigate to high-reward regions in the early training stage. To summarize, the methods above leverage latent representations to gain sufficient exploration coverage and quickly navigate to high-reward regions, improving sample efficiency for policy optimization. Note that they mostly require online interactions with the environment to formulate a growing experience replay buffer for policy learning, and thus have different goals than OPE, which requires learning from a fixed set of offline trajectories.
1. What is the focus of the paper regarding off-policy evaluation?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application to control tasks and other types of OPE tasks?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the paper's experimental results and comparisons with other works?
5. Is there any discussion or suggestion on using the proposed approach for policy optimization rather than just evaluation?
Summary Of The Paper
This work outlines a model-based approach for off-policy evaluation. It revolves around learning (from a static, offline dataset) a recurrent variational latent model of the environment. The authors outline two "tricks" to improve the approach. The first is termed "recurrent state alignment": an additional loss is used to encourage the hidden state representations in the decoder and the encoder to be similar. The second is that they use multiple decoders and, when using the model for OPE, average over the predictions of the decoders. They test their approach on off-policy benchmarks on control tasks, compare with other OPE baselines, and run ablations.
Strengths And Weaknesses
Strengths:
The paper is well-written.
Off-policy evaluation is an important topic for practical use of RL and of interest to the community.
Source code is provided.
The empirical results are compelling.
Weaknesses:
The environments used (control tasks from D4RL) are typically used more for off-policy optimization than for evaluation. It would be helpful to test the approach on other kinds of OPE tasks such as recommender systems / advertising, e.g., [1, 2].
It would be helpful to discuss (and ideally report experiments) on using this approach for policy optimization rather than just evaluation.
Minor:
Spelling mistake in section heading 2.
[1] Chen, Minmin, et al. "Off-Policy Actor-critic for Recommender Systems." Proceedings of the 16th ACM Conference on Recommender Systems. 2022.
[2] Saito, Y., Udagawa, T., Kiyohara, H., Mogi, K., Narita, Y., & Tateno, K. (2021, September). Evaluating the robustness of off-policy evaluation. In Fifteenth ACM Conference on Recommender Systems (pp. 114-123).
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is overall well-explained. However, I believe it would be helpful to provide a little more detail (e.g. pseudocode algorithm) on how the model is used for OPE.
Quality: The paper is of high quality.
Novelty: The use of variational autoencoders for models in RL has a long history (as cited here). However, the specific tricks introduced here to make the approach work off-policy are novel and of interest to the community. I would assign this work a moderate degree of novelty (not groundbreaking, but certainly not just a reproduction of earlier work).
Reproducibility: The code is made available by the authors (but I did not try running it), so I assume you would be able to reproduce their results. The paper describes the experiments in reasonable detail. |
ICLR | Title
Variational Latent Branching Model for Off-Policy Evaluation
Abstract
Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to rollout simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover limited state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data, by smoothing out the information flow between the variational (encoding) and generative (decoding) part of VLBM. Moreover, we also introduce the branching architecture to improve the model’s robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, from which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general.
1 INTRODUCTION
Off-policy evaluation (OPE) allows for evaluation of reinforcement learning (RL) policies without online interactions. It is applicable to many domains where on-policy data collection could be prevented due to efficiency and safety concerns, e.g., healthcare (Gao et al., 2022c;a; Tang & Wiens, 2021), recommendation systems (Mehrotra et al., 2018; Li et al., 2011), education (Mandel et al., 2014), social science (Segal et al., 2018) and optimal control (Silver et al., 2016; Vinyals et al., 2019; Gao et al., 2020a; 2019; 2020b). Recently, as reported in the deep OPE (DOPE) benchmark (Fu et al., 2020b), model-based OPE methods, leveraging feed-forward (Fu et al., 2020b) and auto-regressive (AR) (Zhang et al., 2020a) architectures, have shown promising results toward estimating the return of target policies, by fitting transition functions of MDPs. However, model-based OPE methods remain challenged as they can only be trained using offline trajectory data, which often offers limited coverage of state and action space. Thus, they may perform sub-optimally on tasks where parts of the dynamics are not fully explored (Fu et al., 2020b). Moreover, different initialization of the model weights could lead to varied evaluation performance (Hanin & Rolnick, 2018; Rossi et al., 2019), reducing the robustness of downstream OPE estimations. Some approaches in RL policy optimization literature use latent models trained to capture a compact space from which the dynamics underlying MDPs are extrapolated; this allows learning expressive representations over the state-action space. However, such approaches usually require online data collections as the focus is on quickly navigating to the high-reward regions (Rybkin et al., 2021), as well as on improving coverage of the explored state and action space (Zhang et al., 2019; Hafner et al., 2019; 2020a) or sample efficiency (Lee et al., 2020).
In this work, we propose the variational latent branching model (VLBM), aiming to learn a compact and disentangled latent representation space from offline trajectories, which can better capture the
∗Duke University, USA. Emails: {qitong.gao, miroslav.pajic}@duke.edu. †North Carolina State University, USA. Emails: {ggao5, mchi}@ncsu.edu
Code available at https://github.com/gaoqitong/vlbm.
dynamics underlying environments. VLBM enriches the architectures and optimization objectives for existing latent modeling frameworks, allowing them to learn from a fixed set of offline trajectories. Specifically, VLBM considers learning variational (encoding) and generative (decoding) distributions, both represented by long short-term memories (LSTMs) with reparameterization (Kingma & Welling, 2013), to encode the state-action pairs and enforce the transitions over the latent space, respectively. To train such models, we optimize over the evidence lower bound (ELBO) jointly with a recurrent state alignment (RSA) term defined over the LSTM states; this ensures that the information encoded into the latent space can be effectively teased out by the decoder. Then, we introduce the branching architecture that allows for multiple decoders to jointly infer from the latent space and reach a consensus, from which the next state and reward are generated. This is designed to mitigate the side effects of model-based methods where different weight initializations could lead to varied performance (Fu et al., 2020b; Hanin & Rolnick, 2018; Rossi et al., 2019).
We focus on using the VLBM to facilitate OPE since it allows to better distinguish the improvements made upon learning dynamics underlying the MDP used for estimating policy returns, as opposed to RL training where performance can be affected by multiple factors, e.g., techniques used for exploration and policy optimization. Moreover, model-based OPE methods is helpful for evaluating the safety and efficacy of RL-based controllers before deployments in the real world (Gao et al., 2022b), e.g., how a surgical robot would react to states that are critical to a successful procedure. The key contributions of this paper are summarized as follows: (i) to the best of our knowledge, the VLBM is the first method that leverages variational inference for OPE. It can be trained using offline trajectories and capture environment dynamics over latent space, as well as estimate returns of target (evaluation) policies accurately. (ii) The design of the RSA loss term and branching architecture can effectively smooth the information flow in the latent space shared by the encoder and decoder, increasing the expressiveness and robustness of the model. This is empirically shown in experiments by comparing with ablation baselines. (iii) Our method generally outperforms existing model-based and model-free OPE methods, for evaluating policies over various D4RL environments (Fu et al., 2020a). Specifically, we follow guidelines provided by the DOPE benchmark (Fu et al., 2020b), which contains challenging OPE tasks where the training trajectories include varying levels of coverage of the state-action space, and target policies are designed toward resulting in state-action distributions different from the ones induced by behavioral policies.
2 VARIATIONAL LATENT BRANCHING MODEL
In this section, we first introduce the objective of OPE and the variational latent model (VLM) we consider. Then, we propose the recurrent state alignment (RSA) term as well as the branching architecture that constitute the variational latent branching model (VLBM).
2.1 OPE OBJECTIVE
We first introduce the MDP used to characterize the environment. Specifically, an MDP can be defined as a tuple M = (S,A,P, R, s0, γ), where S is the set of states, A the set of actions, P : S × A → S is the transition distribution usually captured by probabilities p(st|st−1, at−1), R : S ×A → R is the reward function, s0 is the initial state sampled from the initial state distribution p(s0), γ ∈ [0, 1) is the discounting factor. Finally, the agent interacts with the MDP following some policy π(a|s) which defines the probabilities of taking action a at state s. Then, the goal of OPE can be formulated as follows. Given trajectories collected by a behavioral policy β, ρβ = {[(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT )](0), [(s0, a0, r0, s1), . . . ](1), . . . |at ∼ β(at|st)}1, estimate the expected total return over the unknown state-action visitation distribution ρπ of the target (evaluation) policy π – i.e., for T being the horizon,
\mathbb{E}_{(s,a)\sim\rho_\pi,\, r\sim R}\Big[\sum_{t=0}^{T} \gamma^{t} R(s_t, a_t)\Big]. \qquad (1)
2.2 VARIATIONAL LATENT MODEL
We consider the VLM consisting of a prior p(z) over the latent variables z ∈ Z ⊂ Rl, with Z representing the latent space and l the dimension, along with a variational encoder qψ(zt|zt−1, at−1, st)
1We slightly abuse the notation ρβ , to represent either the trajectories or state-action visitation distribution under the behavioral policy, depending on the context.
and a generative decoder pϕ(zt, st, rt−1|zt−1, at−1), parameterized by ψ and ϕ respectively. Basics of variational inference are introduced in Appendix F.
Latent Prior p(z0). The prior specifies the distribution from which the latent variable of the initial stage, z0, is sampled. We configure p(z0) to follow a Gaussian with zero mean and identity covariance matrix, which is a common choice under the variational inference framework (Kingma & Welling, 2013; Lee et al., 2020).
Variational Encoder for Inference qψ(z_t|z_{t−1}, a_{t−1}, s_t). The encoder is used to approximate the intractable posterior, p(z_t|z_{t−1}, a_{t−1}, s_t) = p(z_{t−1}, a_{t−1}, z_t, s_t) / ∫_{z_t∈Z} p(z_{t−1}, a_{t−1}, z_t, s_t) dz_t, where the denominator requires integrating over the unknown latent space. Specifically, the encoder can be decomposed into two parts, given that

q_\psi(z_{0:T}|s_{0:T}, a_{0:T-1}) = q_\psi(z_0|s_0) \prod_{t=1}^{T} q_\psi(z_t|z_{t-1}, a_{t-1}, s_t); \qquad (2)

here, qψ(z_0|s_0) encodes the initial state s_0 into the corresponding latent variable z_0; then, qψ(z_t|z_{t−1}, a_{t−1}, s_t) enforces the transition from z_{t−1} to z_t conditioned on a_{t−1} and s_t. Both distributions are diagonal Gaussians^2, with means and diagonals of the covariance matrices determined by a multi-layered perceptron (MLP) (Bishop, 2006) and a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), respectively. The weights of both neural networks are jointly referred to as ψ.
Consequently, the inference process for zt can be summarized as
z^\psi_0 \sim q_\psi(z_0|s_0), \qquad h^\psi_t = f_\psi(h^\psi_{t-1}, z^\psi_{t-1}, a_{t-1}, s_t), \qquad z^\psi_t \sim q_\psi(z_t|h^\psi_t), \qquad (3)
where f_ψ represents the LSTM layer and h^ψ_t the LSTM recurrent (hidden) state. Note that we use ψ in superscripts to distinguish the variables involved in this inference process from those of the generative process introduced below. Moreover, reparameterization can be used to sample z^ψ_0 and z^ψ_t, such that gradients of the sampling can be back-propagated, as introduced in (Kingma & Welling, 2013). An overview of the inference and generative processes is illustrated in Fig. 1.
Generative Decoder for Sampling pϕ(zt, st, rt−1|zt−1, at−1). The decoder is used to interact with the target policies and acts as a synthetic environment during policy evaluation, from which the expected returns can be estimated as the mean return of simulated trajectories. The decoder can be represented by the multiplication of three diagonal Gaussian distributions, given that
$p_\phi(z_{1:T}, s_{0:T}, r_{0:T-1}|z_0,\pi) = \prod_{t=0}^{T} p_\phi(s_t|z_t)\prod_{t=1}^{T} p_\phi(z_t|z_{t-1},a_{t-1})\,p_\phi(r_{t-1}|z_t),$ (4)
with at ∼ π(at|st) at each time step. Specifically, pϕ(zt|zt−1, at−1) has its mean and covariance determined by an LSTM, enforcing the transition from zt−1 to zt in the latent space given action at−1. In what follows, pϕ(st|zt) and pϕ(rt−1|zt) generate the current state st and reward rt−1 given zt, with their means and covariances determined by MLPs. As a result, the generative process starts with sampling the initial latent variable from the latent prior, i.e., z^ϕ_0 ∼ p(z0). Then, the initial state s^ϕ_0 ∼ pϕ(s0|z^ϕ_0) and action a0 ∼ π(a0|s^ϕ_0) are obtained from pϕ and the target policy π, respectively; the rest of the generative process can be summarized as
$h^\phi_t = f_\phi(h^\phi_{t-1}, z^\phi_{t-1}, a_{t-1}),\qquad \tilde h^\phi_t = g_\phi(h^\phi_t),\qquad z^\phi_t \sim p_\phi(z_t|\tilde h^\phi_t),$
$s^\phi_t \sim p_\phi(s_t|z^\phi_t),\qquad r^\phi_{t-1} \sim p_\phi(r_{t-1}|z^\phi_t),\qquad a_t \sim \pi(a_t|s^\phi_t),$ (5)
2We assume that different dimensions of the states are uncorrelated with each other; otherwise, the states can be projected onto an orthogonal basis, such that the off-diagonal elements of the covariance matrix are zero.
where fϕ is the LSTM layer producing the recurrent state h^ϕ_t. An MLP gϕ then maps h^ϕ_t to h̃^ϕ_t, which will be used for the recurrent state alignment (RSA) introduced below to augment the information flow between the inference and generative processes.
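A matching sketch of one step of the generative process in (5) is shown below, reusing sample_diag_gaussian from the encoder sketch above; taking gϕ to be a one-layer MLP and the reward to be one-dimensional are illustrative assumptions.

import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTMCell(latent_dim + action_dim, hidden_dim)            # f_phi
        self.g = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())    # g_phi
        self.z_head = nn.Linear(hidden_dim, 2 * latent_dim)                     # p(z_t | ~h_t)
        self.s_head = nn.Linear(latent_dim, 2 * state_dim)                      # p(s_t | z_t)
        self.r_head = nn.Linear(latent_dim, 2)                                  # p(r_{t-1} | z_t)

    def forward(self, z_prev, a_prev, hidden):
        h, c = self.lstm(torch.cat([z_prev, a_prev], dim=-1), hidden)
        h_tilde = self.g(h)                               # used later for the RSA term
        z = sample_diag_gaussian(self.z_head(h_tilde))
        s = sample_diag_gaussian(self.s_head(z))
        r = sample_diag_gaussian(self.r_head(z))
        return z, s, r, h_tilde, (h, c)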
Furthermore, to train the elements in the encoder (3) and decoder (5), one can maximize the evidence lower bound (ELBO), a lower bound of the joint log-likelihood p(s0:T , r0:T−1), following
$\mathcal{L}_{ELBO}(\psi,\phi) = \mathbb{E}_{q_\psi}\Big[\sum_{t=0}^{T}\log p_\phi(s_t|z_t) + \sum_{t=1}^{T}\log p_\phi(r_{t-1}|z_t) - KL\big(q_\psi(z_0|s_0)\,\|\,p(z_0)\big) - \sum_{t=1}^{T} KL\big(q_\psi(z_t|z_{t-1},a_{t-1},s_t)\,\|\,p_\phi(z_t|z_{t-1},a_{t-1})\big)\Big];$ (6)
here, the first two terms represent the log-likelihood of reconstructing the states and rewards, and the last two terms regularize the approximated posterior. The proof can be found in Appendix E.
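Assembling the per-trajectory ELBO in (6) from the distribution heads is mechanical; the sketch below assumes each head is exposed as a torch.distributions.Normal and is meant only to make the bookkeeping of the four terms explicit, not to reproduce the exact training code.

import torch
from torch.distributions import kl_divergence

def elbo(q_dists, p_trans_dists, s_dists, r_dists, states, rewards, prior):
    # q_dists:       [q(z_0|s_0), q(z_1|z_0,a_0,s_1), ..., q(z_T|...)]  (length T+1)
    # p_trans_dists: [p(z_1|z_0,a_0), ..., p(z_T|z_{T-1},a_{T-1})]      (length T)
    # s_dists/r_dists: decoder likelihoods for s_{0:T} and r_{0:T-1}
    recon = sum(d.log_prob(s).sum() for d, s in zip(s_dists, states))
    recon = recon + sum(d.log_prob(r).sum() for d, r in zip(r_dists, rewards))
    kl = kl_divergence(q_dists[0], prior).sum()
    kl = kl + sum(kl_divergence(q, p).sum() for q, p in zip(q_dists[1:], p_trans_dists))
    return recon - kl    # to be maximized, cf. Eq. (6)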
2.3 RECURRENT STATE ALIGNMENT
The latent model discussed above is somewhat reminiscent of the ones used in model-based RL policy training methods, e.g., the recurrent state space model (RSSM) used in PlaNet (Hafner et al., 2019) and Dreamer (Hafner et al., 2020a;b), as well as similar ones in Lee et al. (2020); Lu et al. (2022). Such methods rely on a growing experience buffer for training, which is collected online by the target policy that is being concurrently updated (with exploration noise added); however, OPE aims to extrapolate returns from a fixed set of offline trajectories, which may result in limited coverage of the state and action space. Consequently, directly applying the VLM for OPE can lead to subpar performance empirically; see results in Sec. 3. Moreover, the encoder above plays a key role in capturing the temporal transitions between latent variables, i.e., qψ(zt|zt−1, at−1, st) from (2); however, it is absent from the generative process, as the decoder leverages a separate network to determine the latent transitions, i.e., pϕ(zt|zt−1, at−1). Further, from the ELBO (6) above it can be seen that only the KL-divergence terms regularize these two parts, which may not be sufficient for OPE as limited offline trajectories are provided. As a result, we introduce the RSA term as part of the training objective, to further regularize qψ(zt|zt−1, at−1, st) and pϕ(zt|zt−1, at−1). A graphical illustration of RSA can be found in Fig. 2.3
Specifically, RSA is defined as the mean pairwise squared error between h^ψ_t from the encoder (3) and h̃^ϕ_t from the decoder (5), i.e.,
$\mathcal{L}_{RSA}(\tilde h^\phi_t, h^\psi_t;\psi,\phi) = \frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T}\frac{2}{M(M-1)}\bigg[\sum_{j=1}^{M-1}\sum_{k=j+1}^{M}\Big((\tilde h^\phi_t[j]-\tilde h^\phi_t[k]) - (h^\psi_t[j]-h^\psi_t[k])\Big)^{2}\bigg];$ (7)
here, we assume that both LSTM recurrent states have the same dimension, h̃^ϕ_t, h^ψ_t ∈ R^M, with h^{(·)}_t[j] referring to the j-th element of the recurrent state, and N the number of training trajectories.
Here, we choose the pairwise squared loss over the classic mean squared error (MSE), because MSE could be too strong a regularizer for h^ψ_t and h̃^ϕ_t, which support the inference and generative processes respectively and are not supposed to be exactly the same. In contrast, the pairwise loss (7) can
3Rewards and actions are omitted for conciseness of the presentation.
promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. Note that this design choice is justified in Sec. 3 through an ablation study comparing against models trained with MSE. In general, the pairwise loss has also been adopted in many domains for similar purposes, e.g., object detection (Gould et al., 2009; Rocco et al., 2018), ranking systems (Doughty et al., 2018; Saquil et al., 2021) and contrastive learning (Wang et al., 2021; Chen et al., 2020). Similarly, we apply the pairwise loss over h^ψ_t and h̃^ϕ_t, instead of directly over h^ψ_t and h^ϕ_t, as the mapping gϕ (from equation 5) could serve as a regularization layer that ensures optimality over L_RSA without changing h^ψ_t, h^ϕ_t significantly.
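In code, the pairwise term in (7) does not require an explicit double loop over j < k: with d = h̃^ϕ_t − h^ψ_t, the inner sum equals M·Σ_j d[j]² − (Σ_j d[j])², which the sketch below exploits. The tensor layout (N trajectories, T+1 steps, M units) is an assumption made for illustration.

import torch

def rsa_loss(h_dec, h_enc):
    # h_dec, h_enc: (N, T+1, M) stacked recurrent states ~h^phi_t and h^psi_t.
    n, t, m = h_dec.shape
    d = h_dec - h_enc
    # sum_{j<k} ((a_j - a_k) - (b_j - b_k))^2 = sum_{j<k} (d_j - d_k)^2
    #                                         = M * sum_j d_j^2 - (sum_j d_j)^2
    pairwise_sum = m * (d ** 2).sum(dim=-1) - d.sum(dim=-1) ** 2   # shape (N, T+1)
    num_pairs = m * (m - 1) / 2
    # Mean over pairs, sum over time, mean over the N trajectories, as in Eq. (7).
    return (pairwise_sum / num_pairs).sum(dim=1).mean()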
As a result, the objective for training the VLM, following architectures specified in (3) and (5), can be formulated as
$\max_{\psi,\phi}\ \mathcal{L}_{VLM}(\psi,\phi) = \max_{\psi,\phi}\Big(\mathcal{L}_{ELBO}(\psi,\phi) - C\cdot\mathcal{L}_{RSA}(\tilde h^\phi_t, h^\psi_t;\psi,\phi)\Big),$ (8)
with C > 0 and C ∈ R being the constant balancing the scale of the ELBO and RSA terms.
2.4 BRANCHING FOR GENERATIVE DECODER
The performance of model-based methods can vary upon different design factors (Fu et al., 2020b; Hanin & Rolnick, 2018). Specifically, Rossi et al. (2019) found that the convergence speed and optimality of variational models are sensitive to the choice of weight initialization techniques. Moreover, under the typical variational inference setup followed by the VLM above, the latent transitions reconstructed by the decoder, pϕ(zt|zt−1, at−1), are only trained through the regularization losses in (6) and (7), yet are fully responsible for rolling out trajectories during evaluation. Consequently, in this sub-section we introduce the branching architecture for the decoder, with the goal of minimizing the impact brought by random weight initialization of the networks, and allowing the decoder to best reconstruct the latent transitions pϕ(zt|zt−1, at−1) as well as the st's and rt−1's correctly. Specifically, the branching architecture leverages an ensemble of B ∈ Z+ decoders to tease out information from the latent space formulated by the encoder, with final predictions sampled from a mixture of the Gaussian output distributions from (5). Note that the classic setup of ensembles, i.e., training and averaging over B VLMs end-to-end, is not considered; in that case B different latent spaces exist, each of which is still associated with a single decoder, leaving the challenges above unresolved. This design choice is justified by ablation studies in Sec. 3, comparing VLBM against a (classic) ensemble of VLMs.
Branching Architecture. Consider the generative process involving B branches of decoders parameterized by {ϕ1, . . . , ϕB}. The forward architecture over a single step is illustrated in Fig. 2.4 Specifically, the procedure of sampling z^{ϕb}_t and s^{ϕb}_t for each b ∈ [1, B] follows from (5). Recall that by definition p_{ϕb}(st|z^{ϕb}_t) follows a multivariate Gaussian with mean and diagonal of the covariance matrix determined by the corresponding MLPs, i.e., µ(s^{ϕb}_t) = ϕ^{MLP}_{b,µ}(z^{ϕb}_t) and Σ_{diag}(s^{ϕb}_t) = ϕ^{MLP}_{b,Σ}(z^{ϕb}_t). In what follows, the final outcome s^ϕ_t can be sampled following a diagonal Gaussian with mean and variance determined by weighted averaging across all branches using weights wb's, i.e.,
$s^\phi_t \sim p_\phi(s_t|z^{\phi_1}_t,\dots,z^{\phi_B}_t) = \mathcal{N}\Big(\mu = \sum_b w_b\cdot\mu(s^{\phi_b}_t),\ \Sigma_{diag} = \sum_b w_b^{2}\cdot\Sigma_{diag}(s^{\phi_b}_t)\Big).$ (9)
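A sketch of the mixture in (9) is given below: per-branch means and diagonal variances are combined with weights wb that are assumed to already lie on the probability simplex (see the parameterization discussed after Eq. (10)).

import torch

def combine_branches(branch_means, branch_vars, w):
    # branch_means, branch_vars: (B, state_dim) per-branch Gaussian statistics.
    # w: (B,) non-negative branch weights summing to one.
    mu = (w.unsqueeze(-1) * branch_means).sum(dim=0)
    var = ((w ** 2).unsqueeze(-1) * branch_vars).sum(dim=0)
    return torch.distributions.Normal(mu, var.sqrt())   # diagonal Gaussian over s_t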
The objective below can be used to jointly update wb's, ψ and ϕb's, i.e.,
$\max_{\psi,\phi,w}\ \mathcal{L}_{VLBM}(\psi,\phi_1,\dots,\phi_B,w_1,\dots,w_B)$
$= \max_{\psi,\phi,w}\Big(\sum_{t=0}^{T}\log p_\phi(s^\phi_t|z^{\phi_1}_t,\dots,z^{\phi_B}_t) - C_1\cdot\sum_b \mathcal{L}_{RSA}(\tilde h^{\phi_b}_t, h^\psi_t;\psi,\phi_b) + C_2\sum_b \mathcal{L}_{ELBO}(\psi,\phi_b)\Big),$
$\text{s.t. } w_1,\dots,w_B > 0,\ \sum_b w_b = 1,\ \text{and constants } C_1, C_2 > 0.$ (10)
Though the first term above already propagates through all wb's and ϕb's, the third term and the constraints over wb's regularize ϕb in each individual branch such that they are all trained toward maximizing
4For simplicity, the parts generating rewards are omitted without loss of generality.
the likelihood p_{ϕb}(s^{ϕb}_t|z^{ϕb}_t). Pseudo-code for training and evaluating the VLBM can be found in Appendix C. Further, in practice, one can define $w_b = \frac{v_b^2}{\epsilon + \sum_b v_b^2}$, with vb ∈ R the learnable variables and 0 < ϵ ≪ 1 a constant ensuring that the denominator is greater than zero, to convert (10) into an unconstrained optimization and solve it using gradient descent. Lastly, note that complementary latent modeling methods, e.g., latent overshooting from Hafner et al. (2019), could be adopted in (10). However, we keep the objective straightforward, so that the source of performance improvements can be isolated.
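The unconstrained parameterization wb = vb²/(ε + Σb vb²) mentioned above can be implemented as a small module; a minimal sketch follows, where initializing vb to ones is an arbitrary choice.

import torch
import torch.nn as nn

class BranchWeights(nn.Module):
    def __init__(self, num_branches=10, eps=1e-8):
        super().__init__()
        self.v = nn.Parameter(torch.ones(num_branches))   # learnable v_b
        self.eps = eps

    def forward(self):
        sq = self.v ** 2
        return sq / (self.eps + sq.sum())   # w_b >= 0 and sums to (nearly) one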
3 EXPERIMENTS
To evaluate the VLBM, we follow the guidelines from the deep OPE (DOPE) benchmark (Fu et al., 2020b). Specifically, we follow the D4RL branch in DOPE and use the Gym-Mujoco and Adroit suites as the test base (Fu et al., 2020a). Such environments have long horizons and high-dimensional state and action spaces, which are usually challenging for model-based methods. The provided offline trajectories for training are collected using behavioral policies at varied scale, including limited exploration, human teleoperation, etc., which can result in different levels of coverage over the state-action space. Also, the target (evaluation) policies are generated using online RL training, aiming to reduce the similarity between behavioral and target policies; this introduces another challenge, as during evaluation the agent may visit states unseen in the training trajectories.
Environmental and Training Setup. A total of 8 environments are provided by the Gym-Mujoco and Adroit suites (Fu et al., 2020b;a). Moreover, each environment is provided with 5 (for Gym-Mujoco) or 3 (for Adroit) training datasets collected using different behavioral policies, resulting in a total of 32 env-dataset tasks5 – a full list can be found in Appendix A. DOPE also provides 11 target policies for each environment, whose performance is to be evaluated by the OPE methods. They in general result in varied scales of returns, as shown in the x-axes of Fig. 7. Moreover, we consider the decoder to have B = 10 branches, i.e., {pϕ1 , . . . , pϕ10}. The dimension of the latent space is set to 16, i.e., z ∈ Z ⊂ R16. Other implementation details can be found in Appendix A.
Baselines and Evaluation Metrics. In addition to the five baselines reported in DOPE, i.e., importance sampling (IS) (Precup, 2000), doubly robust (DR) (Thomas & Brunskill, 2016), variational power method (VPM) (Wen et al., 2020), distribution correction estimation (DICE) (Yang et al., 2020), and fitted Q-evaluation (FQE) (Le et al., 2019), the effectiveness of VLBM is also compared against the state-of-the-art model-based OPE method leveraging the auto-regressive (AR) architecture (Zhang et al., 2020a). Specifically, for each task we train an ensemble of 10 AR models, for fair comparison against VLBM which leverages the branching architecture; see Appendix A for details of the AR ensemble setup. Following the DOPE benchmark (Fu et al., 2020b), our evaluation metrics include rank correlation, regret@1, and mean absolute error (MAE). VLBM and all baselines are trained using 3 different random seeds over each task, leading to the results reported below.
Ablation. Four ablation baselines are also considered, i.e., VLM, VLM+RSA, VLM+RSA(MSE) and VLM+RSA Ensemble. Specifically, VLM refers to the model introduced in Sec. 2.2, trained toward maximizing only the ELBO, i.e., (6). Note that, arguably, VLM could be seen as the generalization of directly applying latent-models proposed in existing RL policy optimization literature (Lee et al., 2020; Hafner et al., 2019; 2020a;b; Lu et al., 2022); details can be found in Sec. 4 below. The VLM+RSA ablation baseline follows the same model architecture as VLM, but is trained to optimize over both ELBO and recurrent state alignment (RSA) as introduced in (8), i.e., branching is not used comparing to VLBM. The design of these two baselines can help analyze the effectiveness of the RSA
5From now on the dataset names are abbreviated by their initials, e.g., Ant-M-R refers to Ant-Medium-Replay.
loss term and branching architecture introduced in Sec. 2.3 and 2.4. Moreover, VLM+RSA(MSE) uses mean squared error to replace the pairwise loss introduced in (7), and the VLM+RSA Ensemble applies classic ensembles by averaging over B VLM+RSA models end-to-end, instead of branching from decoder as in VLBM. These two ablation baselines can help justify the use of pairwise loss for RSA, and the benefit of using branching architecture over classic ensembles.
Results. Fig. 3 shows the mean overall performance attained by VLBM and baselines over all the 32 Gym-Mujoco and Adroit tasks. In general, VLBM leads to significantly increased rank correlations and decreased regret@1's over existing methods, with MAEs maintained at the state-of-the-art level. Specifically, VLBM achieves state-of-the-art performance in 31, 29, and 15 (out of 32) tasks in terms of rank correlation, regret@1 and MAE, respectively. Performance for each task can be found in Tables 1-6 at the end of the Appendices. Note that results for IS, VPM, DICE, DR, and FQE are obtained directly from the DOPE benchmark (Fu et al., 2020b), since the same experimental setup is considered. Fig. 4 and 5 visualize the mean performance for each Gym-Mujoco and Adroit environment respectively, over all the associated datasets. It can also be observed that the model-based and FQE baselines generally perform better than the other baselines, which is consistent with findings from DOPE.
The fact that VLM+RSA outperforms the VLM ablation baseline, as shown in Fig. 4, illustrates the need for the RSA loss term to smooth the flow of information between the encoder and decoder in the latent space. Moreover, one can observe that VLM+RSA(MSE) sometimes performs worse than VLM, and significantly worse than VLM+RSA in general. Specifically, it has been found that, compared to VLM and VLM+RSA respectively, VLM+RSA(MSE) significantly worsens at least two metrics in 7 and 12 (out of 20) Gym-Mujoco tasks; detailed performance over these tasks can be found in Tables 1-6 at the end of the Appendices. Such a finding backs up the design choice of using the pairwise loss for RSA instead of MSE, as MSE could be an overly strong regularizer for the LSTM recurrent states of the encoder and decoder, while the pairwise loss only enforces structural similarities. Moreover, VLBM significantly improves rank correlations and regrets compared to VLM+RSA, illustrating the importance of the branching architecture. In the paragraph below, we show empirically the benefits brought in by branching over classic ensembles.
Branching versus Classic Ensembles. Fig. 4 shows that the VLM+RSA Ensemble does not improve performance over VLM+RSA in general, and even leads to worse overall rank correlations and regrets in the Walker2d and Hopper environments. This supports the rationale provided in Sec. 2.4 that each decoder still samples exclusively from a different latent space, and averaging over the output distributions may not help reduce the disturbance brought in by the modeling artifacts under the variational inference framework, e.g., random weight initializations (Hanin & Rolnick, 2018; Rossi et al., 2019). In contrast, the VLBM leverages the branching architecture, allowing all the branches to sample from the same latent space formulated by the encoder. Empirically, we find that the branching weights, wb's in (9), allow VLBM to prune branches that are not helpful toward reconstructing the trajectories accurately, possibly overcoming bad initializations etc. Over all the 32 tasks we consider, most VLBMs only keep 1-3 branches (out of 10), i.e., wb < 10−5 for all other branches. The distribution of all wb's, from VLBMs trained on the 32 tasks, is shown in Fig. 6; one can observe that most of the wb's are close to zero, while the others generally fall in the range of (0, 0.25] and [0.75, 1).
AR ensembles also lead to compelling rank correlations and regrets, but attain much smaller margins in MAEs over other baselines in general; see Fig. 3. From Fig. 7, one can observe that they tend to significantly under-estimate most of the high-performing policies. Scatter plots for the other tasks can be found in Appendix A, which also show this trend. The reason could be that their model architecture and training objectives are designed to directly learn the transitions of the MDP; thus, they may produce biased predictions when the target policies lead to visitation of states that are not substantially represented in the training data, since such data are obtained using behavioral policies that are sub-optimal. In contrast, the VLBM can leverage RSA and branching against such situations, thus outperforming AR ensembles in most of the OPE tasks in terms of all metrics we considered. Interestingly, Fig. 7 also shows that latent models could sometimes over-estimate the returns. For example, in Hopper-M-E and Walker2d-M-E, VLM tends to over-estimate most policies. The VLBM performs consistently well in Hopper-M-E, but is mildly affected by such an effect in Walker2d-M-E, though over fewer policies and with smaller margins. It has been found that variational inference may fall short in approximating true distributions that are asymmetric, and produce biased estimations (Yao et al., 2018). So the hypothesis would be that the dynamics used to define certain environments may lead to asymmetry in the true posterior p(zt|zt−1, at−1, st), which could be hard to capture by the latent modeling framework we consider. A more comprehensive understanding of such behavior can be explored in future work. However, the VLBM still significantly outperforms VLM overall, and achieves top-performing rank correlations and regrets; such results illustrate the VLBM's improved robustness as a result of its architectural design and choices of training objectives.
t-SNE Visualization of the Latent Space. Fig. 8 illustrates t-SNE visualization of the latent space by rolling out trajectories using all target policies respectively, followed by feeding the state-action pairs into the encoder of VLBM which maps them into the latent space. It shows the encoded state-action pairs induced from policies with similar performance are in general swirled and clustered together, illustrating that VLBM can learn expressive and disentangled representations of its inputs.
4 RELATED WORK
Latent Modeling in RL. Though variational inference has rarely been explored to facilitate modelbased OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), Dreamer (Hafner et al., 2020a;b). Below we discuss the connections and distinctions between VLBM and the latent models leveraged by them, with a detailed overview of these methods provided in Appendix G. Specifically, SLAC and SOLAR learn latent representations of the dynamics jointly with optimization of the target policies, using the latent information to improve sample efficiency. Similarly, LatCo performs trajectory optimization over the latent space to allow for temporarily bypassing dynamic constraints. As a result, latent models used in such methods are not designed toward rolling out trajectories independently, as opposed to the use of VLBM in this paper. PlaNet and Dreamer train the recurrent state space model (RSSM) using a growing experience dataset collected by the target policy that is being concurrently updated (with exploration noise added), which requires online data collection. In contrast, under the OPE setup, VLBM is trained over a fixed set of offline trajectories collected over unknown behavioral policies. Moreover, note that the VLM baseline is somewhat reminiscent of the RSSM and similar ones as in Lee et al. (2020); Lu et al. (2022), however, experiments above show that directly using VLM for OPE could lead to subpar performance. On the other hand, though MOPO (Yu et al., 2020), LOMPO (Rafailov et al., 2021) and COMBO (Yu et al., 2021) can learn from offline data, they focus on quantifying the uncertainty of model’s predictions toward next states and rewards, followed by incorporating them into policy optimization objectives to penalize for visiting regions where transitions are not fully captured; thus, such works are also orthogonal to the use case of OPE. OPE. Classic OPE methods adopt IS to estimate expectations over the unknown visitation distribution over the target policy, resulting in weighted IS, step-wise IS and weighted step-wise IS (Precup, 2000). IS can lead to estimations with low (or zero) bias, but with high variance (Kostrikov & Nachum, 2020; Jiang & Li, 2016), which sparks a long line of research to address this challenge. DR methods propose to reduce variance by coupling IS with a value function approximator (Jiang & Li, 2016; Thomas & Brunskill, 2016; Farajtabar et al., 2018). However, the introduction of such approximations may increase bias, so the method proposed in Tang et al. (2019) attempts to balance the scale of bias and variance for DR. Unlike IS and DR methods that require the behavioral policies to be fully known, DICE family of estimators (Zhang et al., 2020c;b; Yang et al., 2021; 2020; Nachum et al., 2019; Dai et al., 2020) and VPM (Wen et al., 2020) can be behavioral-agnostic; they directly capture marginalized IS weights as the ratio between the propensity of the target policy to visit particular state-action pairs, relative to their likelihood of appearing in the logged data. There also exist FQE methods which extrapolate policy returns from approximated Q-functions (Hao et al., 2021; Le et al., 2019; Kostrikov & Nachum, 2020). 
Existing model-based OPE methods are designed to directly fit MDP transitions using feed-forward (Fu et al., 2020b) or auto-regressive (Zhang et al., 2020a) models, and have shown promising results over model-free methods, as reported in a recent benchmark (Fu et al., 2020b). However, such model-based approaches could be sensitive to the initialization of weights (Hanin & Rolnick, 2018; Rossi et al., 2019) and produce biased predictions, due to the limited coverage over state and action space provided by offline trajectories (Fu et al., 2020b). Instead, VLBM mitigates such effects by capturing the dynamics over the latent space, such that states and rewards are evolved from a compact feature space over time. Moreover, RSA and the branching can lead to increased expressiveness and robustness, such that future states and rewards are predicted accurately. There also exist OPE methods proposed toward specific applications (Chen et al., 2022; Saito et al., 2021; Gao et al., 2023; 2022b).
5 CONCLUSION AND FUTURE WORK
We have developed the VLBM, which can accurately capture the dynamics underlying environments from offline training data that provide limited coverage of the state and action space; this is achieved by using the RSA term to smooth out the information flow from the encoder to the decoders in the latent space, as well as the branching architecture which improves VLBM's robustness against random initializations. We have followed the evaluation guidelines provided by the DOPE benchmark, and experimental results have shown that the VLBM generally outperforms the state-of-the-art model-based OPE method using AR architectures, as well as other model-free methods. VLBM can also facilitate off-policy optimization, which can be explored in future work. Specifically, VLBM can serve as a synthetic environment on which optimal controllers (e.g., linear-quadratic regulators) can be deployed. On the other hand, similar to Dreamer and SLAC, policies can be updated jointly with the training of VLBM, but without the need for online interactions with the environment during training.
ACKNOWLEDGMENTS
This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, and by the NSF CNS-1652544, CNS-1837499, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.
A ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS
Additional Results and Discussions. Rank correlations, regret@1 and MAEs for all 32 tasks are documented in Tables 1-6 below.6 The mean and standard deviation (in subscripts) over 3 random seeds are reported. Note that in each column, the performance of multiple methods may be highlighted in bold, meaning they all achieve the best performance and do not significantly outperform each other. The fact that VLBM outperforms the ablation baselines in most cases suggests that the RSA loss term and branching architecture can effectively increase model expressiveness, and allow the dynamics underlying the MDP to be learned more accurately and robustly from offline data that provide limited exploration coverage. Yet, smaller margins are attained between the VLBM and VLM+RSA in Hopper-M-E and Hopper-M. This is likely because Hopper has a relatively lower-dimensional state space compared to the other three environments, from which the underlying dynamics can be sufficiently captured by the VLM+RSA. Fig. 10 and 11 show the correlation between estimated (y-axis) and true returns (x-axis) for all the OPE tasks we consider. It can be found that for Halfcheetah-R, -M-R, and -M, most of the model-based methods cannot significantly distinguish the returns across target policies. The cause could be that the offline trajectories provided for this task are relatively more challenging, compared to the other OPE tasks. Such an effect appears to affect IS, VPM, DICE, DR and FQE at a larger scale. It can be observed from the scatter plots reported in the DOPE benchmark (Fu et al., 2020b) that these methods could hardly tell the scale of returns across different target policies, as the dots almost form a horizontal line in each plot. However, the estimated returns from VLBM and IS still preserve the rank, which leads to high rank correlations and low regrets.
Implementation Details and Hyper-parameter. The model-based methods are evaluated by directly interacting with each target policy for 50 episodes, and the mean of discounted total returns (γ = 0.995) over all episodes is used as estimated performance for the policy. We choose the neural network architectures as follows. For the components involving LSTMs, which include qψ(zt|zt−1, at−1, st) and pϕ(zt|zt−1, at−1), their architecture include one LSTM layer with 64 nodes, followed by a dense layer with 64 nodes. All other components do not have LSTM layers involved, so they are constituted by a neural network with 2 dense layers, with 128 and 64 nodes respectively. The output layers that determine the mean and diagonal covariance of diagonal Gaussian distributions use linear and softplus activations, respectively. The ones that determine the mean of Bernoulli distributions (e.g., for capturing early termination of episodes) are configured to use sigmoid activations. VLBM and the two ablation baselines, VLM and VLM+RSA, are trained using offline trajectories provided by DOPE, with max_iter in Alg. 1 set to 1,000 and minibatch size set to 64. Adam optimizer is used to perform gradient descent. To determine the learning rate, we perform grid search among {0.003, 0.001, 0.0007, 0.0005, 0.0003, 0.0001, 0.00005}. Exponential decay is applied to the learning rate, which decays the learning rate by 0.997 every iteration. To train VLBM, we set the constants from equation 10 following C1 = C2, and perform grid search among
6Some VPM entries are absent since they were not reported in Fu et al. (2020b), nor is the code open-sourced.
{5, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. To train VLM+RSA, the constant C from equation 8 is determined by grid search among the same set of parameters above. L2-regularization with decay of 0.001 and batch normalization are applied to all hidden layers. Note that some of the environments (e.g., Ant, Hopper, Walker2d, Pen) may terminate an episode, before timeout, if the state meets specific conditions; details on how VLBM captures such early termination behavior are introduced in Appendix D.
The DOPE Benchmark. The deep OPE (DOPE) benchmark (Fu et al., 2020b) provides standardized training and evaluation procedure for OPE works to follow, which facilitates fair and comprehensive comparisons among various OPE methods. Specifically, it utilizes existing environments and training trajectories provided by D4RL7 and RLUnplugged8, which are two benchmark suites for offline RL training, and additionally provide target policies for OPE methods to evaluate. In the D4RL branch, the training trajectories are originally collected from various sources including random exploration, human teleoperation, and RL-trained policies with limited exploration; thus, can provide varied levels of coverage over the state-action space. Moreover, the target policies are trained using online RL algorithms, which can in general lead to different state-action visitations than in the training trajectories. We leverage the D4RL branch as our test base, since the OPE tasks it provides are considered challenging, i.e., the limited coverage introduced by training data, as well as the discrepancy between the behavioral and target policies. Graphical illustrations of the Gym-Mujoco and Adroit environments considered are shown in Fig. 9. Details on the environments and datasets used are shown in Tables 7 and 8, from the perspectives of state and action dimensions, if episodes can be terminated before timeout, if controls are performed over continuous space, and the size of the offline trajectories used for training. In contrast, in the RLUnplugged branch, the training trajectories are always collected using online RL training, which can result in adequate coverage over the state-action space. The target policies are trained by applying offline RL over the training trajectories, so that behavioral and target policies can lead to similar state-action visitation distributions. As discussed in DOPE (Fu et al., 2020b), such tasks are suitable for studies where ideal data are needed, such as complexity comparisons.
Evaluation Metrics. Following (Fu et al., 2020b), we consider rank correlation, regret@1 and mean absolute error (MAE) as the evaluation metrics. Specifically, rank correlation measures the strength and direction of the monotonic association between the rank of OPE-estimated returns and true returns over all target policies. It is captured by Spearman's correlation coefficient between the ordinal rankings of estimated and true returns. Regret@1 is captured by the difference between the return of the policy corresponding to the highest return as estimated by OPE and the return of the policy that actually produces the highest true return. In other words, regret@1 evaluates how much worse the policy resulting in the highest OPE-estimated return would perform than the actual best policy. The two metrics above evaluate how useful OPE would be to facilitate important applications such as policy selection. Finally, we also consider MAE, which is commonly used in estimation/regression tasks. Mathematical definitions of these metrics can be found in (Fu et al., 2020b).
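Given the per-policy estimated and true returns, the three metrics above can be computed in a few lines; the sketch below uses numpy and scipy and simply follows the definitions, without reproducing the benchmark's exact code.

import numpy as np
from scipy.stats import spearmanr

def ope_metrics(estimated, true):
    # estimated, true: arrays of shape (num_policies,) holding per-policy returns.
    rank_corr = spearmanr(estimated, true).correlation
    regret_at_1 = true.max() - true[np.argmax(estimated)]
    mae = np.abs(estimated - true).mean()
    return rank_corr, regret_at_1, mae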
Implementation of AR Ensembles. For fair comparisons with VLBM, in experiments we train an ensemble of the state-of-the-art model-based OPE method, auto-regressive (AR) models (Zhang et al., 2020a), as one of the baselines. Specifically, we train an ensemble of 10 AR models to learn p(st+1, rt|st, at) following the auto-regressive manner, with each individual model following the design introduced in (Zhang et al., 2020a), i.e.,
$s^{(j)}_{t+1} \sim p\big(s^{(j)}_{t+1}\,\big|\,s_t, a_t, s^{(1)}_{t+1}, \dots, s^{(j-1)}_{t+1}\big),$ (11)
with $s^{(j)}_{t+1}$ representing the element located at the j-th dimension of the state variable, and D the dimension of the state space. The reward is treated as an additional dimension of the states, i.e., $r_t \sim p(r_t|s_t, a_t, s^{(1)}_{t+1}, \dots, s^{(D)}_{t+1})$. However, the original literature (Zhang et al., 2020a) does not introduce in detail which specific ensemble architecture is used (e.g., overall averaging or weighted averaging). As a result, we choose the same weighted averaging procedure as used in VLBM branching, to sort out the influence of different ensemble architectures and facilitate fair comparisons. Specifically, a total of 10 AR models, parameterized by {θ1, . . . , θ10}, along with 10
7https://github.com/rail-berkeley/d4rl 8https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged
weight variables $\{w^\theta_1, \dots, w^\theta_{10}\,|\,\sum_i w^\theta_i = 1\}$, are trained. Similar to the weighted averaging architecture used in VLBM, i.e., equation 9, the mean and variance of the prediction $s^{(j)}_{t+1}$, captured by the normal distribution $\mathcal{N}(\mu, \sigma^2)$, follow
$\mu = \sum_{i=1}^{10} w^\theta_i \cdot \mu_{\theta_i}(s^{(j)}_{t+1}),\qquad \sigma^2 = \sum_{i=1}^{10} (w^\theta_i)^2 \cdot \sigma^2_{\theta_i}(s^{(j)}_{t+1}),$ (12)
where $\mu_{\theta_i}(s^{(j)}_{t+1})$ and $\sigma^2_{\theta_i}(s^{(j)}_{t+1})$ are the mean and variance produced by each individual AR model in the ensemble.
Training Resources. Training of the proposed method, and baselines, are facilitated by Nvidia Quadro RTX 6000, NVIDIA RTX A5000, and NVIDIA TITAN XP GPUs.
License. The use of DOPE9 and D4RL (Fu et al., 2020a) follow the Apache License 2.0.
9https://github.com/google-research/deep_ope
B MORE t-SNE VISUALIZATIONS
Figures 12 and 13 above visualize the latent space captured by two ablation baselines, VLM and VLM+RSA(MSE), respectively. It can be observed that the latent space captured by VLM is not disentangled as well as that of VLBM (shown in Figure 8), as the state-action pairs induced by policies with different levels of performance generally cluster together without explicit boundaries. Such a finding illustrates the importance of the RSA loss (7) empirically, as it can effectively regularize qψ(zt|zt−1, at−1, st) and allows the encoder to map the MDP states to an expressive and compact latent space from which the decoder can reconstruct states and rewards accurately. Moreover, Figure 13 shows that the latent representations of the state-action pairs captured by VLM+RSA(MSE) are distributed almost uniformly over the latent space. This justifies the rationale provided in Sec. 2.3 that MSE is too strong a regularizer for the hidden states of the encoder and decoder, and is also consistent with the results reported in Figure 3 that VLM+RSA(MSE) performs worse than VLM in general.
C ALGORITHMS FOR TRAINING AND EVALUATING VLBM
Algorithm 1 Train VLBM.
Input: Model weights ψ, ϕ1, . . . , ϕB, w1, . . . , wB, offline trajectories ρβ, and learning rate α.
Begin:
1: Initialize ψ, ϕ1, . . . , ϕB, w1, . . . , wB
2: for iter in 1 : max_iter do
3:   Sample a trajectory [(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT)] ∼ ρβ
4:   z^ψ_0 ∼ qψ(z0|s0)
5:   z^{ϕb}_0 ∼ p(z0), for all b ∈ [1, B]
6:   Run the forward pass of VLBM following (3), (5) and (9) for t = 1 : T, and collect all variables needed to evaluate L_VLBM as specified in (10)
7:   ψ ← ψ + α∇ψ L_VLBM
8:   for b in 1 : B do
9:     ϕb ← ϕb + α∇_{ϕb} L_VLBM
10:    wb ← wb + α∇_{wb} L_VLBM
11:  end for
12: end for
Algorithm 2 Evaluate VLBM.
Input: Trained model weights ψ, ϕ1, . . . , ϕB, w1, . . . , wB
Begin:
1: Initialize the list that stores the accumulated returns over all episodes, R = []
2: for epi in 1 : max_epi do
3:   Initialize the variable r = 0 that tracks the accumulated return for the current episode
4:   Initialize latent states from the prior, i.e., z^{ϕb}_0 ∼ p(z0) for all b ∈ [1, B]
5:   Initialize LSTM hidden states h^{ϕb}_0 = 0 for all b ∈ [1, B]
6:   Sample s^{ϕb}_0 ∼ pϕ(s0|z^{ϕb}_0) for all b ∈ [1, B] and generate the initial MDP state s^ϕ_0 following (9)
7:   for t in 1 : T do
8:     Determine the action following the target policy π, i.e., a_{t−1} ∼ π(a_{t−1}|s^ϕ_{t−1})
9:     for b in 1 : B do
10:      Update h^{ϕb}_t, h̃^{ϕb}_t, z^{ϕb}_t, s^{ϕb}_t, r^{ϕb}_{t−1} following (5)
11:    end for
12:    Generate the next state s^ϕ_t following (9), as well as the reward $r^\phi_{t-1} \sim p_\phi(r_{t-1}|z^{\phi_1}_t, \dots, z^{\phi_B}_t) = \mathcal{N}\big(\mu = \sum_b w_b \cdot \mu(r^{\phi_b}_{t-1}),\ \Sigma_{diag} = \sum_b w_b^2 \cdot \Sigma_{diag}(r^{\phi_b}_{t-1})\big)$
13:    Update r ← r + γ^{t−1} r^ϕ_{t−1}, with γ being the discounting factor
14:  end for
15:  Append r into R
16: end for
17: Average over all elements in R, which serves as the estimated return over π
D EARLY TERMINATION OF ENVIRONMENTS
Some environments, including Ant, Hopper, Walker2d and Pen, may terminate an episode before reaching the maximum number of steps if the state violates specific constraints. Below we introduce how VLM and VLBM can be enriched to capture such early termination behaviors.
VLM. For VLM, we introduce an additional component d^ϕ_t ∼ pϕ(dt|z^ϕ_t) to the generative process in equation 5, where d^ϕ_t is a Bernoulli variable determining if an episode should be terminated at its t-th step. Specifically, pϕ(dt|z^ϕ_t) follows a Bernoulli distribution, with mean determined by an MLP with sigmoid activation applied to the output layer. As a result, the generative process now follows
$h^\phi_t = f_\phi(h^\phi_{t-1}, z^\phi_{t-1}, a_{t-1}),\qquad \tilde h^\phi_t = g_\phi(h^\phi_t),\qquad z^\phi_t \sim p_\phi(z_t|\tilde h^\phi_t),$
$s^\phi_t \sim p_\phi(s_t|z^\phi_t),\qquad r^\phi_{t-1} \sim p_\phi(r_{t-1}|z^\phi_t),\qquad d^\phi_t \sim p_\phi(d_t|z^\phi_t),\qquad a_t \sim \pi(a_t|s^\phi_t).$ (13)
Moreover, we add a new term to VLM's training objective, in order to update the component introduced above during training, i.e.,
$\mathcal{L}^{early\_term}_{VLM}(\psi,\phi) = \mathcal{L}_{VLM}(\psi,\phi) + \sum_{t=0}^{T}\log p_\phi(d_t|z_t),$ (14)
with $\mathcal{L}_{VLM}(\psi,\phi)$ being the original objective of VLM, as presented in equation 8.
VLBM. For VLBM, the termination of an episode is determined as follows, i.e.,
$d^\phi_t \sim p_\phi(d_t|z^{\phi_1}_t, \dots, z^{\phi_B}_t) = \mathrm{Bernoulli}\Big(\mu = \sum_b w_b \cdot \mu_d(d^{\phi_b}_t)\Big),$ (15)
where $\mu_d(d^{\phi_b}_t) = \phi^{MLP}_{b,\mu_d}(z^{\phi_b}_t)$ is the mean of $d^{\phi_b}_t$ produced from the b-th branch of the decoder, and $\phi^{MLP}_{b,\mu_d}$ is the corresponding MLP that maps $z^{\phi_b}_t$ to $\mu_d(d^{\phi_b}_t)$. To update the components involved in the procedure above, we introduce a new term to the VLBM's objective, i.e.,
$\mathcal{L}^{early\_term}_{VLBM}(\psi,\phi_1,\dots,\phi_B,w_1,\dots,w_B)$ (16)
$= \mathcal{L}_{VLBM}(\psi,\phi_1,\dots,\phi_B,w_1,\dots,w_B) + \sum_{t=0}^{T}\log p_\phi(d^\phi_t|z^{\phi_1}_t,\dots,z^{\phi_B}_t),$ (17)
with $\mathcal{L}_{VLBM}$ being the original objective of VLBM, as presented in equation 10.
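The termination handling in (13)-(15) amounts to one additional Bernoulli head per branch, combined with the same weights wb; a minimal sketch follows, where the layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class TerminationHead(nn.Module):
    def __init__(self, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)   # mean of the Bernoulli over d_t given z_t

def combined_termination(branch_means, w):
    # branch_means: (B, 1) per-branch Bernoulli means; w: (B,) simplex weights.
    p = (w.unsqueeze(-1) * branch_means).sum(dim=0).clamp(1e-6, 1 - 1e-6)
    return torch.distributions.Bernoulli(probs=p)   # cf. Eq. (15)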
E BOUND DERIVATION
We now derive the evidence lower bound (ELBO) for the joint log-likelihood distribution, i.e.,
$\log p_\phi(s_{0:T}, r_{0:T-1})$ (18)
$= \log \int_{z_{0:T}\in\mathcal{Z}} p_\phi(s_{0:T}, z_{0:T}, r_{0:T-1})\, dz$ (19)
$= \log \int_{z_{0:T}\in\mathcal{Z}} p_\phi(s_{0:T}, z_{0:T}, r_{0:T-1}) \frac{q_\psi(z_{0:T}|s_{0:T},a_{0:T-1})}{q_\psi(z_{0:T}|s_{0:T},a_{0:T-1})}\, dz$ (20)
$\geq \mathbb{E}_{q_\psi}\big[\log p(z_0) + \log p_\phi(s_{0:T}, z_{1:T}, r_{0:T-1}|z_0) - \log q_\psi(z_{0:T}|s_{0:T},a_{0:T-1})\big]$ (21)
$= \mathbb{E}_{q_\psi}\Big[\log p(z_0) + \log p_\phi(s_0|z_0) + \sum_{t=1}^{T}\log p_\phi(s_t, z_t, r_{t-1}|z_{t-1}, a_{t-1}) - \log q_\psi(z_0|s_0) - \sum_{t=1}^{T}\log q_\psi(z_t|z_{t-1},a_{t-1},s_t)\Big]$ (22)
$= \mathbb{E}_{q_\psi}\Big[\log p(z_0) - \log q_\psi(z_0|s_0) + \log p_\phi(s_0|z_0) + \sum_{t=1}^{T}\log\big(p_\phi(s_t|z_t)\,p_\phi(r_{t-1}|z_t)\,p_\phi(z_t|z_{t-1},a_{t-1})\big) - \sum_{t=1}^{T}\log q_\psi(z_t|z_{t-1},a_{t-1},s_t)\Big]$ (23)
$= \mathbb{E}_{q_\psi}\Big[\sum_{t=0}^{T}\log p_\phi(s_t|z_t) + \sum_{t=1}^{T}\log p_\phi(r_{t-1}|z_t) - KL\big(q_\psi(z_0|s_0)\,\|\,p(z_0)\big) - \sum_{t=1}^{T} KL\big(q_\psi(z_t|z_{t-1},a_{t-1},s_t)\,\|\,p_\phi(z_t|z_{t-1},a_{t-1})\big)\Big].$ (24)
Note that the transition from equation 20 to equation 21 follows Jensen’s inequality.
F BASICS OF VARIATIONAL INFERENCE
Classic variational auto-encoders (VAEs) are designed to generate synthetic data that share similar characteristics with the data used for training (Kingma & Welling, 2013). Specifically, VAEs learn an approximated posterior qψ(z|x) and a generative model pϕ(x|z), over the prior p(z), with x being the data and z the latent variable. Its true posterior pϕ(z|x) is intractable, i.e.,
$p_\phi(z|x) = \frac{p_\phi(x|z)\,p(z)}{p_\phi(x)};$ (25)
since the marginal likelihood in the denominator, $p_\phi(x) = \int_{z} p_\phi(x|z)p(z)\,dz$, requires integration over the unknown latent space. For the same reason, VAEs cannot be trained to directly maximize the marginal log-likelihood, $\max \log p_\phi(x)$. To resolve this, one could maximize a lower bound of $p_\phi(x)$, i.e.,
$\max_{\psi,\phi}\ -KL\big(q_\psi(z|x)\,\|\,p(z)\big) + \mathbb{E}_{q_\psi}\big[\log p_\phi(x|z)\big],$ (26)
which is the evidence lower bound (ELBO).
Reparameterization. During training, it is required to sample from qψ(z|x) and pϕ(x|z) constantly. The reparameterization technique is introduced in (Kingma & Welling, 2013), to ensure that the gradients can flow through such sampling process during back-propagation. For example, if both distributions (qψ(z|x) and pϕ(x|z)) follow diagonal Gaussians, with mean and diagonal covariance determined by MLPs, i.e.,
$z \sim q_\psi(z|x) = \mathcal{N}\big(\mu = \psi^{MLP}_{\mu}(x),\ \Sigma = \psi^{MLP}_{\Sigma}(x)\big),$ (27)
$x \sim p_\phi(x|z) = \mathcal{N}\big(\mu = \phi^{MLP}_{\mu}(z),\ \Sigma = \phi^{MLP}_{\Sigma}(z)\big);$ (28)
here, $\psi^{MLP}_{\mu}, \psi^{MLP}_{\Sigma}, \phi^{MLP}_{\mu}, \phi^{MLP}_{\Sigma}$ are the MLPs that generate the means and covariances. The sampling processes above can be captured by reparameterization, i.e.,
$z = \psi^{MLP}_{\mu}(x) + \psi^{MLP}_{\Sigma}(x)\cdot\epsilon,$ (29)
$x = \phi^{MLP}_{\mu}(z) + \phi^{MLP}_{\Sigma}(z)\cdot\epsilon,$ (30)
with ϵ ∼ N (0, I). Consequently, the gradients over ψ and ϕ can be calculated following the chain rule, and used for back-propagation during training. We direct readers to (Kingma & Welling, 2013) for a comprehensive review of reparameterization.
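Equations (29)-(30) translate directly into code; the minimal sketch below only illustrates that gradients indeed reach the distribution parameters through the sample.

import torch

def reparameterized_sample(mean, std):
    eps = torch.randn_like(std)     # eps ~ N(0, I)
    return mean + std * eps         # z = mu + sigma * eps

mean = torch.zeros(4, requires_grad=True)
log_std = torch.zeros(4, requires_grad=True)
z = reparameterized_sample(mean, log_std.exp())
z.sum().backward()                  # gradients flow to mean and log_std via the sample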
G ADDITIONAL RELATED WORKS
Overview of latent-model based RL methods. In SLAC, latent representations are used to improve the sample efficiency of model-free RL training algorithms, by jointly modeling and learning dynamics and controls over the latent space. Similarly, SOLAR improves data efficiency for multi-task RL by first learning high-level latent representations of the environment, which can be shared across different tasks. Then, local dynamics models are inferred from the abstraction, with controls solved by linear-quadratic regulators. PlaNet and Dreamer further improve the architecture and training objectives of latent models, allowing them to look ahead multiple steps and plan for longer horizons. There also exists LatCo, which directly performs trajectory optimization over the latent space, allowing the agent to temporarily bypass dynamical constraints and quickly navigate to high-reward regions in the early training stage. To summarize, the methods above leverage latent representations to gain sufficient exploration coverage and quickly navigate to high-reward regions, improving sample efficiency for policy optimization. Note that they mostly require online interactions with the environment to formulate a growing experience replay buffer for policy learning, which is a different goal than OPE, which requires learning from a fixed set of offline trajectories.

1. What is the main contribution of the paper regarding Off-Policy Evaluation?
2. What are the strengths and weaknesses of the proposed Recurrent Variational Auto-encoder?
3. Do you have any concerns about the claims and motivation for adding RSA and the branching architecture?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper proposes to tackle the Off-Policy Evaluation problem in a model-based fashion, i.e., contrary to importance sampling-based methods, a model is trained to directly generate future trajectories based on past information. The proposed model is a Recurrent Variational Auto-encoder, where both the encoder and decoder are embodied by LSTMs. The main contribution, besides leveraging variational inference for model-based OPE, is the addition of two features: Recurrent State Alignment (RSA) and branching architecture for the decoder. RSA adds a term in the loss so that the hidden LSTM states of the encoder and the decoder stay close to each other. The authors claim that this term augments the information flow and helps the decoder generate more realistic trajectories. The branching architecture is a kind of ensemble of decoders, where each branch is assigned a weight (summing to 1 over all branches) that is trained jointly with the parameters of the model. The authors claim that it helps alleviate the effect of initialization. The authors present results in the DOPE benchmark against model-based and model-free baselines and achieve superior performance overall.
Strengths And Weaknesses
Strengths:
The results on the DOPE benchmark are compelling
The ablation study is well conducted
Writing and presentation are pleasant and help get to the point quickly.
Weaknesses:
The claims and motivation for adding RSA and the branching architecture lack theoretical or empirical justification beyond the final results: would it be possible to characterize the behavior observed when MSE is used / RSA is not used, and explain why decoder initialization can lead to such largely decreased performance?
The intuitive statements made by the authors are sometimes vague: would it be possible to clarify terms such as "structure", "information flow" and "impact of initialization artifacts" in this particular context?
The results might not be reliable, as they are given on only 3 random seeds.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well-written overall, but motivation is sometimes elided. Quality: Results on DOPE are compelling, but evaluating only 3 seeds is not enough, especially considering the sometimes unexplained variations between variants. Novelty: This paper is the first to leverage variational inference for OPE, and the additional features seem novel. Reproducibility: the model is evaluated on the open-source DOPE benchmark and sufficient implementation details are given, but there seems to be no plan for code release.
ICLR | Title
Variational Latent Branching Model for Off-Policy Evaluation
Abstract
Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to rollout simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover limited state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data, by smoothing out the information flow between the variational (encoding) and generative (decoding) part of VLBM. Moreover, we also introduce the branching architecture to improve the model’s robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, from which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general.
1 INTRODUCTION
Off-policy evaluation (OPE) allows for evaluation of reinforcement learning (RL) policies without online interactions. It is applicable to many domains where on-policy data collection could be prevented due to efficiency and safety concerns, e.g., healthcare (Gao et al., 2022c;a; Tang & Wiens, 2021), recommendation systems (Mehrotra et al., 2018; Li et al., 2011), education (Mandel et al., 2014), social science (Segal et al., 2018) and optimal control (Silver et al., 2016; Vinyals et al., 2019; Gao et al., 2020a; 2019; 2020b). Recently, as reported in the deep OPE (DOPE) benchmark (Fu et al., 2020b), model-based OPE methods, leveraging feed-forward (Fu et al., 2020b) and auto-regressive (AR) (Zhang et al., 2020a) architectures, have shown promising results toward estimating the return of target policies, by fitting transition functions of MDPs. However, model-based OPE methods remain challenged as they can only be trained using offline trajectory data, which often offers limited coverage of state and action space. Thus, they may perform sub-optimally on tasks where parts of the dynamics are not fully explored (Fu et al., 2020b). Moreover, different initialization of the model weights could lead to varied evaluation performance (Hanin & Rolnick, 2018; Rossi et al., 2019), reducing the robustness of downstream OPE estimations. Some approaches in RL policy optimization literature use latent models trained to capture a compact space from which the dynamics underlying MDPs are extrapolated; this allows learning expressive representations over the state-action space. However, such approaches usually require online data collections as the focus is on quickly navigating to the high-reward regions (Rybkin et al., 2021), as well as on improving coverage of the explored state and action space (Zhang et al., 2019; Hafner et al., 2019; 2020a) or sample efficiency (Lee et al., 2020).
In this work, we propose the variational latent branching model (VLBM), aiming to learn a compact and disentangled latent representation space from offline trajectories, which can better capture the
∗Duke University, USA. Emails: {qitong.gao, miroslav.pajic}@duke.edu. †North Carolina State University, USA. Emails: {ggao5, mchi}@ncsu.edu
Code available at https://github.com/gaoqitong/vlbm.
dynamics underlying environments. VLBM enriches the architectures and optimization objectives for existing latent modeling frameworks, allowing them to learn from a fixed set of offline trajectories. Specifically, VLBM considers learning variational (encoding) and generative (decoding) distributions, both represented by long short-term memories (LSTMs) with reparameterization (Kingma & Welling, 2013), to encode the state-action pairs and enforce the transitions over the latent space, respectively. To train such models, we optimize over the evidence lower bound (ELBO) jointly with a recurrent state alignment (RSA) term defined over the LSTM states; this ensures that the information encoded into the latent space can be effectively teased out by the decoder. Then, we introduce the branching architecture that allows for multiple decoders to jointly infer from the latent space and reach a consensus, from which the next state and reward are generated. This is designed to mitigate the side effects of model-based methods where different weight initializations could lead to varied performance (Fu et al., 2020b; Hanin & Rolnick, 2018; Rossi et al., 2019).
We focus on using the VLBM to facilitate OPE since it allows to better distinguish the improvements made upon learning dynamics underlying the MDP used for estimating policy returns, as opposed to RL training where performance can be affected by multiple factors, e.g., techniques used for exploration and policy optimization. Moreover, model-based OPE methods is helpful for evaluating the safety and efficacy of RL-based controllers before deployments in the real world (Gao et al., 2022b), e.g., how a surgical robot would react to states that are critical to a successful procedure. The key contributions of this paper are summarized as follows: (i) to the best of our knowledge, the VLBM is the first method that leverages variational inference for OPE. It can be trained using offline trajectories and capture environment dynamics over latent space, as well as estimate returns of target (evaluation) policies accurately. (ii) The design of the RSA loss term and branching architecture can effectively smooth the information flow in the latent space shared by the encoder and decoder, increasing the expressiveness and robustness of the model. This is empirically shown in experiments by comparing with ablation baselines. (iii) Our method generally outperforms existing model-based and model-free OPE methods, for evaluating policies over various D4RL environments (Fu et al., 2020a). Specifically, we follow guidelines provided by the DOPE benchmark (Fu et al., 2020b), which contains challenging OPE tasks where the training trajectories include varying levels of coverage of the state-action space, and target policies are designed toward resulting in state-action distributions different from the ones induced by behavioral policies.
2 VARIATIONAL LATENT BRANCHING MODEL
In this section, we first introduce the objective of OPE and the variational latent model (VLM) we consider. Then, we propose the recurrent state alignment (RSA) term as well as the branching architecture that constitute the variational latent branching model (VLBM).
2.1 OPE OBJECTIVE
We first introduce the MDP used to characterize the environment. Specifically, an MDP can be defined as a tuple M = (S,A,P, R, s0, γ), where S is the set of states, A the set of actions, P : S × A → S is the transition distribution usually captured by probabilities p(st|st−1, at−1), R : S ×A → R is the reward function, s0 is the initial state sampled from the initial state distribution p(s0), γ ∈ [0, 1) is the discounting factor. Finally, the agent interacts with the MDP following some policy π(a|s) which defines the probabilities of taking action a at state s. Then, the goal of OPE can be formulated as follows. Given trajectories collected by a behavioral policy β, ρβ = {[(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT )](0), [(s0, a0, r0, s1), . . . ](1), . . . |at ∼ β(at|st)}1, estimate the expected total return over the unknown state-action visitation distribution ρπ of the target (evaluation) policy π – i.e., for T being the horizon,
E(s,a)∼ρπ,r∼R [∑T
t=0 γtR(st, at)
] . (1)
2.2 VARIATIONAL LATENT MODEL
We consider the VLM consisting of a prior p(z) over the latent variables z ∈ Z ⊂ Rl, with Z representing the latent space and l the dimension, along with a variational encoder qψ(zt|zt−1, at−1, st)
1We slightly abuse the notation ρβ , to represent either the trajectories or state-action visitation distribution under the behavioral policy, depending on the context.
and a generative decoder pϕ(zt, st, rt−1|zt−1, at−1), parameterized by ψ and ϕ respectively. Basics of variational inference are introduced in Appendix F.
Latent Prior p(z0). The prior specifies the distribution from which the latent variable of the initial stage, z0, is sampled. We configure p(z0) to follow a Gaussian with zero mean and identity covariance matrix, which is a common choice under the variational inference framework (Kingma & Welling, 2013; Lee et al., 2020).
Variational Encoder for Inference qψ(zt|zt−1, at−1, st). The encoder is used to approximate the intractable posterior, p(zt|zt−1, at−1, st) = p(zt−1,at−1,zt,st)∫ zt∈Z p(zt−1,at−1,zt,st)dzt , where the denominator requires integrating over the unknown latent space. Specifically, the encoder can be decomposed into two parts, given that
qψ(z0:T |s0:T , a0:T−1)
=qψ(z0|s0) T∏ t=1 qψ(zt|zt−1, at−1, st); (2)
here, qψ(z0|s0) encodes the initial state s0 in to the corresponding latent variable z0, then, qψ(zt|zt−1, at−1, st) enforces the transi-
tion from zt−1 to zt conditioned on at−1 and st. Both distributions are diagonal Gaussians2, with means and diagonal of covariance matrices determined by multi-layered perceptron (MLP) (Bishop, 2006) and long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) respectively. The weights for both neural networks are referred to as ψ in general.
Consequently, the inference process for zt can be summarized as
zψ0 ∼ qψ(z0|s0), h ψ t = fψ(h ψ t−1, z ψ t−1, at−1, st), z ψ t ∼ qψ(zt|h ψ t ), (3)
where fψ represents the LSTM layer and h ψ t the LSTM recurrent (hidden) state. Note that we use ψ in superscripts to distinguish the variables involved in this inference process, against the generative process introduced below. Moreover, reparameterization can be used to sample zψ0 and z ψ t , such that gradients of sampling can be back-propagated, as introduced in (Kingma & Welling, 2013). Overview of the inference and generative processes are illustrated in Fig. 1.
Generative Decoder for Sampling pϕ(zt, st, rt−1|zt−1, at−1). The decoder is used to interact with the target policies and acts as a synthetic environment during policy evaluation, from which the expected returns can be estimated as the mean return of simulated trajectories. The decoder can be represented by the multiplication of three diagonal Gaussian distributions, given that
pϕ(z1:T , s0:T , r0:T−1|z0, π) = T∏ t=0 pϕ(st|zt) T∏ t=1 pϕ(zt|zt−1, at−1)pϕ(rt−1|zt), (4)
with at ∼ π(at|st) at each time step. Specifically, pϕ(zt|zt−1, at−1) has its mean and covariance determined by an LSTM, enforcing the transition from zt−1 to zt in the latent space given action at−1. In what follows, pϕ(st|zt) and pϕ(rt−1|zt) generate the current state st and reward rt−1 given zt, whose mean and covariance are determined by MLPs. As a result, the generative process starts with sampling the initial latent variable from the latent prior, i.e., zϕ0 ∼ p(z0). Then, the initial state sϕ0 ∼ pϕ(s0|z ϕ 0 ) and action a0 ∼ π(a0|s ϕ 0 ) are obtained from pϕ and target policy π, respectively; the rest of generative process can be summarized as
h_t^ϕ = f_ϕ(h_{t−1}^ϕ, z_{t−1}^ϕ, a_{t−1}),   h̃_t^ϕ = g_ϕ(h_t^ϕ),   z_t^ϕ ∼ p_ϕ(h̃_t^ϕ),
s_t^ϕ ∼ p_ϕ(s_t|z_t^ϕ),   r_{t−1}^ϕ ∼ p_ϕ(r_{t−1}|z_t^ϕ),   a_t ∼ π(a_t|s_t^ϕ),   (5)
2Assume that different dimensions of the states are non-correlated with each other. Otherwise, the states can be projected to orthogonal basis, such that non-diagonal elements of the covariance matrix will be zeros.
where f_ϕ is the LSTM layer producing the recurrent state h_t^ϕ. Then, an MLP g_ϕ is used to generate the mapping between h_t^ϕ and h̃_t^ϕ that will be used for the recurrent state alignment (RSA) introduced below, to augment the information flow between the inference and generative processes.
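Analogously, one generative step of the decoder in (5) can be sketched as follows; module names and sizes are assumptions for illustration only, not a definitive implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBranch(nn.Module):
    def __init__(self, state_dim, action_dim, latent_dim=16, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTMCell(latent_dim + action_dim, hidden_dim)               # f_phi
        self.g = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())       # g_phi -> h~
        self.z_mu, self.z_std = nn.Linear(hidden_dim, latent_dim), nn.Linear(hidden_dim, latent_dim)
        self.s_net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
        self.s_mu, self.s_std = nn.Linear(64, state_dim), nn.Linear(64, state_dim)
        self.r_net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 64), nn.ReLU())
        self.r_mu, self.r_std = nn.Linear(64, 1), nn.Linear(64, 1)

    def step(self, z_prev, a_prev, hc=None):
        h, c = self.lstm(torch.cat([z_prev, a_prev], dim=-1), hc)
        h_tilde = self.g(h)                               # recurrent state used for RSA
        z_mu, z_std = self.z_mu(h_tilde), F.softplus(self.z_std(h_tilde))
        z_t = z_mu + z_std * torch.randn_like(z_std)      # p_phi(z_t | z_{t-1}, a_{t-1})
        s_feat, r_feat = self.s_net(z_t), self.r_net(z_t)
        s_mu, s_std = self.s_mu(s_feat), F.softplus(self.s_std(s_feat))
        r_mu, r_std = self.r_mu(r_feat), F.softplus(self.r_std(r_feat))
        return z_t, (s_mu, s_std), (r_mu, r_std), h_tilde, (h, c)
```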
Furthermore, to train the elements in the encoder (3) and decoder (5), one can maximize the evidence lower bound (ELBO), a lower bound of the joint log-likelihood p(s0:T , r0:T−1), following
L_ELBO(ψ, ϕ) = E_{q_ψ} [ ∑_{t=0}^{T} log p_ϕ(s_t|z_t) + ∑_{t=1}^{T} log p_ϕ(r_{t−1}|z_t) − KL( q_ψ(z_0|s_0) || p(z_0) )
− ∑_{t=1}^{T} KL( q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) || p_ϕ(z_t|z_{t−1}, a_{t−1}) ) ];   (6)
here, the first two terms represent the log-likelihood of reconstructing the states and rewards, and the last two terms regularize the approximated posterior. The proof can be found in Appendix E.
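For diagonal Gaussians, the reconstruction and KL terms in (6) have closed forms. The helpers below are a simplified sketch (PyTorch-style, with assumed tensor shapes), not the authors' exact implementation.

```python
import math
import torch

def gaussian_log_prob(x, mu, std):
    # log N(x; mu, diag(std^2)), summed over the last dimension
    var = std.pow(2)
    return (-0.5 * ((x - mu).pow(2) / var + var.log() + math.log(2 * math.pi))).sum(-1)

def kl_diag_gaussians(mu_q, std_q, mu_p, std_p):
    # KL( N(mu_q, diag(std_q^2)) || N(mu_p, diag(std_p^2)) ), summed over the last dimension
    var_q, var_p = std_q.pow(2), std_p.pow(2)
    return (torch.log(std_p / std_q) + (var_q + (mu_q - mu_p).pow(2)) / (2 * var_p) - 0.5).sum(-1)

# Per step t >= 1, the ELBO accumulates
#   gaussian_log_prob(s_t, s_mu_t, s_std_t) + gaussian_log_prob(r_{t-1}, r_mu_t, r_std_t)
#   - kl_diag_gaussians(enc_mu_t, enc_std_t, dec_mu_t, dec_std_t)
# and, at t = 0, it subtracts KL(q_psi(z_0|s_0) || N(0, I)).
```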
2.3 RECURRENT STATE ALIGNMENT
The latent model discussed above is somewhat reminiscent of the ones used in model-based RL policy training methods, e.g., the recurrent state space model (RSSM) used in PlaNet (Hafner et al., 2019) and Dreamer (Hafner et al., 2020a;b), as well as similar ones in Lee et al. (2020); Lu et al. (2022). Such methods rely on a growing experience buffer for training, which is collected online by the target policy that is being concurrently updated (with exploration noise added); however, OPE aims to extrapolate returns from a fixed set of offline trajectories which may result in limited coverage of the state and action space. Consequently, directly applying VLM for OPE can lead to subpar performance empirically; see results in Sec. 3. Moreover, the encoder above plays a key role in capturing the temporal transitions between latent variables, i.e., q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) from (2). However, it is absent in the generative process, as the decoder leverages a separate network to determine the latent transitions, i.e., p_ϕ(z_t|z_{t−1}, a_{t−1}). Furthermore, from the ELBO (6) above it can be seen that only the KL-divergence terms are used to regularize these two parts, which may not be sufficient for OPE as limited offline trajectories are provided. As a result, we introduce the RSA term as part of the training objective, to further regularize q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) and p_ϕ(z_t|z_{t−1}, a_{t−1}). A graphical illustration of RSA can be found in Fig. 2.3
Specifically, RSA is defined as the mean pairwise squared error between h_t^ψ from the encoder (3) and h̃_t^ϕ from the decoder (5), i.e.,

L_RSA(h̃_t^ϕ, h_t^ψ; ψ, ϕ) = (1/N) ∑_{i=1}^{N} ∑_{t=0}^{T} [ 2/(M(M−1)) ] ∑_{j=1}^{M−1} ∑_{k=j+1}^{M} ( (h̃_t^ϕ[j] − h̃_t^ϕ[k]) − (h_t^ψ[j] − h_t^ψ[k]) )^2;   (7)
here, we assume that both LSTM recurrent states have the same dimension, h̃_t^ϕ, h_t^ψ ∈ R^M, with h_t^(·)[j] referring to the j-th element of the recurrent state, and N the number of training trajectories.
Here, we choose the pairwise squared loss over the classic mean squared error (MSE), because MSE could be too strong a regularizer for h_t^ψ and h̃_t^ϕ, which support the inference and generative processes respectively and are not supposed to be exactly the same. In contrast, the pairwise loss (7) can promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. Note that this design choice is justified in Sec. 3 through an ablation study comparing against models trained with MSE. In general, the pairwise loss has also been adopted in many domains for similar purposes, e.g., object detection (Gould et al., 2009; Rocco et al., 2018), ranking systems (Doughty et al., 2018; Saquil et al., 2021) and contrastive learning (Wang et al., 2021; Chen et al., 2020). Similarly, we apply the pairwise loss over h_t^ψ and h̃_t^ϕ, instead of directly over h_t^ψ and h_t^ϕ, as the mapping g_ϕ (from equation 5) could serve as a regularization layer that ensures optimality over L_RSA without changing h_t^ψ, h_t^ϕ significantly.
3Rewards and actions are omitted for conciseness of the presentation.
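A sketch of how the RSA term (7) can be computed over a batch of trajectories is shown below; the tensor shapes and the normalization by the number of pairs follow the "mean pairwise squared error" description above and are assumptions on our part.

```python
import torch

def rsa_loss(h_dec_tilde: torch.Tensor, h_enc: torch.Tensor) -> torch.Tensor:
    """h_dec_tilde, h_enc: recurrent states of shape (N, T, M)."""
    # all pairwise element differences along the hidden dimension: shape (N, T, M, M)
    d_dec = h_dec_tilde.unsqueeze(-1) - h_dec_tilde.unsqueeze(-2)
    d_enc = h_enc.unsqueeze(-1) - h_enc.unsqueeze(-2)
    sq = (d_dec - d_enc).pow(2)
    M = h_enc.shape[-1]
    # each unordered pair (j, k), j < k, appears twice in the full M x M grid (diagonal is 0)
    per_step = sq.sum(dim=(-1, -2)) / 2.0            # sum over unique pairs, shape (N, T)
    per_step = per_step / (M * (M - 1) / 2.0)        # mean over pairs
    return per_step.sum(dim=1).mean(dim=0)           # sum over time, mean over trajectories
```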
As a result, the objective for training the VLM, following the architectures specified in (3) and (5), can be formulated as

max_{ψ,ϕ} L_VLM(ψ, ϕ) = max_{ψ,ϕ} ( L_ELBO(ψ, ϕ) − C · L_RSA(h̃_t^ϕ, h_t^ψ; ψ, ϕ) ),   (8)

with C ∈ R, C > 0, being the constant balancing the scale of the ELBO and RSA terms.
2.4 BRANCHING FOR GENERATIVE DECODER
The performance of model-based methods can vary upon different design factors (Fu et al., 2020b; Hanin & Rolnick, 2018). Specifically, Rossi et al. (2019) has found that the convergence speed and optimality of variational models are sensitive to the choice of weight initialization techniques. Moreover, under the typical variational inference setup followed by the VLM above, the latent transitions reconstructed by the decoder, p_ϕ(z_t|z_{t−1}, a_{t−1}), are only trained through the regularization losses in (6) and (7), but are fully responsible for rolling out trajectories during evaluation. Consequently, in this sub-section we introduce the branching architecture for the decoder, with the goal of minimizing the impact brought by random weight initialization of the networks, and allowing the decoder to best reconstruct the latent transitions p_ϕ(z_t|z_{t−1}, a_{t−1}) as well as the s_t's and r_{t−1}'s correctly. Specifically, the branching architecture leverages an ensemble of B ∈ Z^+ decoders to tease out information from the latent space formulated by the encoder, with final predictions sampled from a mixture of the Gaussian output distributions from (5). Note that the classic setup of ensembles is not considered, i.e., training and averaging over B VLMs end-to-end, because in that case B different latent spaces exist, each of which is still associated with a single decoder, leaving the challenges above unresolved. This design choice is justified by ablation studies in Sec. 3, comparing VLBM against a (classic) ensemble of VLMs.
Branching Architecture. Consider the generative process involving B branches of decoders parameterized by {ϕ_1, . . . , ϕ_B}. The forward architecture over a single step is illustrated in Fig. 2.4 Specifically, the procedure of sampling z_t^{ϕ_b} and s_t^{ϕ_b} for each b ∈ [1, B] follows from (5). Recall that, by definition, p_{ϕ_b}(s_t|z_t^{ϕ_b}) follows a multivariate Gaussian with mean and diagonal of the covariance matrix determined by the corresponding MLPs, i.e., μ(s_t^{ϕ_b}) = ϕ_{b,μ}^{MLP}(z_t^{ϕ_b}) and Σ_diag(s_t^{ϕ_b}) = ϕ_{b,Σ}^{MLP}(z_t^{ϕ_b}). In what follows, the final outcome s_t^ϕ can be sampled following a diagonal Gaussian with mean and variance determined by weighted averaging across all branches using weights w_b, i.e.,
s_t^ϕ ∼ p_ϕ(s_t|z_t^{ϕ_1}, . . . , z_t^{ϕ_B}) = N( μ = ∑_b w_b · μ(s_t^{ϕ_b}),  Σ_diag = ∑_b w_b^2 · Σ_diag(s_t^{ϕ_b}) ).   (9)
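The weighted mixture in (9) is straightforward to implement; the sketch below assumes each branch outputs a mean and a standard-deviation tensor, and that the weights have already been normalized.

```python
import torch

def mix_branches(mus, stds, w):
    """mus, stds: lists of B tensors of shape (batch, state_dim); w: tensor of shape (B,)."""
    mu = sum(w[b] * mus[b] for b in range(len(mus)))
    var = sum((w[b] ** 2) * stds[b].pow(2) for b in range(len(stds)))
    std = var.sqrt()
    return mu + std * torch.randn_like(std)   # sample s_t from the mixed diagonal Gaussian
```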
The objective below can be used to jointly update w_b's, ψ and ϕ_b's, i.e.,

max_{ψ,ϕ,w} L_VLBM(ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B)
= max_{ψ,ϕ,w} ( ∑_{t=0}^{T} log p_ϕ(s_t^ϕ|z_t^{ϕ_1}, . . . , z_t^{ϕ_B}) − C_1 · ∑_b L_RSA(h̃_t^{ϕ_b}, h_t^ψ; ψ, ϕ_b) + C_2 ∑_b L_ELBO(ψ, ϕ_b) ),
s.t. w_1, . . . , w_B > 0,  ∑_b w_b = 1, and constants C_1, C_2 > 0.   (10)
Though the first term above already propagates through all w_b's and ϕ_b's, the third term and the constraints over w_b's regularize ϕ_b in each individual branch such that they are all trained toward maximizing the likelihood p_{ϕ_b}(s_t^{ϕ_b}|z_t^{ϕ_b}). Pseudo-code for training and evaluating the VLBM can be found in Appendix C. Further, in practice, one can define w_b = v_b^2 / (ϵ + ∑_b v_b^2), with v_b ∈ R the learnable variables and 0 < ϵ ≪ 1, ϵ ∈ R, a constant ensuring the denominator is greater than zero, to convert (10) into an unconstrained optimization and solve it using gradient descent. Lastly, note that complementary latent modeling methods, e.g., latent overshooting from Hafner et al. (2019), could be adopted in (10). However, we keep the objective straightforward, so that the source of performance improvements can be isolated.
4For simplicity, the parts generating rewards are omitted without loss of generality.
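A minimal sketch of this unconstrained parameterization of the branching weights (assuming B = 10 branches, as in the experiments):

```python
import torch

v = torch.nn.Parameter(torch.randn(10))   # one learnable v_b per branch
eps = 1e-6

def branch_weights(v: torch.Tensor, eps: float) -> torch.Tensor:
    # w_b = v_b^2 / (eps + sum_b v_b^2): positive and summing to (approximately) one
    return v.pow(2) / (eps + v.pow(2).sum())
```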
3 EXPERIMENTS
To evaluate the VLBM, we follow the guidelines from the deep OPE (DOPE) benchmark (Fu et al., 2020b). Specifically, we follow the D4RL branch in DOPE and use the Gym-Mujoco and Adroit suites as the test base (Fu et al., 2020a). Such environments have long horizons and high-dimensional state and action spaces, which are usually challenging for model-based methods. The provided offline trajectories for training are collected using behavioral policies at varied scales, including limited exploration, human teleoperation, etc., which can result in different levels of coverage over the state-action space. Also, the target (evaluation) policies are generated using online RL training, aiming to reduce the similarity between behavioral and target policies; this introduces another challenge, in that during evaluation the agent may visit states unseen in the training trajectories.
Environmental and Training Setup. A total of 8 environments are provided by the Gym-Mujoco and Adroit suites (Fu et al., 2020b;a). Moreover, each environment is provided with 5 (for Gym-Mujoco) or 3 (for Adroit) training datasets collected using different behavioral policies, resulting in a total of 32 env-dataset tasks5 – a full list can be found in Appendix A. DOPE also provides 11 target policies for each environment, whose performance is to be evaluated by the OPE methods. They in general result in varied scales of returns, as shown on the x-axes of Fig. 7. Moreover, we consider the decoder to have B = 10 branches, i.e., {p_{ϕ_1}, . . . , p_{ϕ_10}}. The dimension of the latent space is set to 16, i.e., z ∈ Z ⊂ R^16. Other implementation details can be found in Appendix A.
Baselines and Evaluation Metrics. In addition to the five baselines reported from DOPE, i.e., importance sampling (IS) (Precup, 2000), doubly robust (DR) (Thomas & Brunskill, 2016), variational power method (VPM) (Wen et al., 2020), distribution correction estimation (DICE) (Yang et al., 2020), and fitted Q-evaluation (FQE) (Le et al., 2019), the effectiveness of VLBM is also compared against the state-of-the-art model-based OPE method leveraging the auto-regressive (AR) architecture (Zhang et al., 2020a). Specifically, for each task we train an ensemble of 10 AR models, for fair comparison against VLBM which leverages the branching architecture; see Appendix A for details of the AR ensemble setup. Following the DOPE benchmark (Fu et al., 2020b), our evaluation metrics include rank correlation, regret@1, and mean absolute error (MAE). VLBM and all baselines are trained using 3 different random seeds over each task, leading to the results reported below.
Ablation. Four ablation baselines are also considered, i.e., VLM, VLM+RSA, VLM+RSA(MSE) and VLM+RSA Ensemble. Specifically, VLM refers to the model introduced in Sec. 2.2, trained toward maximizing only the ELBO, i.e., (6). Note that, arguably, VLM could be seen as a generalization of directly applying latent models proposed in the existing RL policy optimization literature (Lee et al., 2020; Hafner et al., 2019; 2020a;b; Lu et al., 2022); details can be found in Sec. 4 below. The VLM+RSA ablation baseline follows the same model architecture as VLM, but is trained to optimize over both the ELBO and recurrent state alignment (RSA) as introduced in (8), i.e., branching is not used, in contrast to VLBM. The design of these two baselines can help analyze the effectiveness of the RSA
5From now on the dataset names are abbreviated by their initials, e.g., Ant-M-R refers to Ant-Medium-Replay.
loss term and branching architecture introduced in Sec. 2.3 and 2.4. Moreover, VLM+RSA(MSE) uses mean squared error to replace the pairwise loss introduced in (7), and the VLM+RSA Ensemble applies classic ensembles by averaging over B VLM+RSA models end-to-end, instead of branching from decoder as in VLBM. These two ablation baselines can help justify the use of pairwise loss for RSA, and the benefit of using branching architecture over classic ensembles.
Results. Fig. 3 shows the mean overall performance attained by VLBM and baselines over all the 32 Gym-Mujoco and Adroit tasks. In general, VLBM leads to significantly increased rank correlations and decreased regret@1's over existing methods, with MAEs maintained at the state-of-the-art level. Specifically, VLBM achieves state-of-the-art performance in 31, 29, and 15 (out of 32) tasks in terms of rank correlation, regret@1 and MAE, respectively. Performance for each task can be found in Tables 1-6 at the end of the Appendices. Note that results for IS, VPM, DICE, DR, and FQE are obtained directly from the DOPE benchmark (Fu et al., 2020b), since the same experimental setup is considered. Fig. 4 and 5 visualize the mean performance for each Gym-Mujoco and Adroit environment respectively, over all the associated datasets. It can also be observed that the model-based and FQE baselines generally perform better than the other baselines, which is consistent with findings from DOPE.
The fact that VLM+RSA outperforms the VLM ablation baseline, as shown in Fig. 4, illustrates the need for the RSA loss term to smooth the flow of information between the encoder and decoder in the latent space. Moreover, one can observe that VLM+RSA(MSE) sometimes performs worse than VLM, and significantly worse than VLM+RSA in general. Specifically, it has been found that, compared to VLM and VLM+RSA respectively, VLM+RSA(MSE) significantly worsens at least two metrics in 7 and 12 (out of 20) Gym-Mujoco tasks; detailed performance over these tasks can be found in Tables 1-6 at the end of the Appendices. Such a finding backs up the design choice of using the pairwise loss for RSA instead of MSE, as MSE could be overly strong in regularizing the LSTM recurrent states of the encoder and decoder, while the pairwise loss only enforces structural similarities. Moreover, VLBM significantly improves rank correlations and regrets compared to VLM+RSA, illustrating the importance of the branching architecture. In the paragraph below, we show empirically the benefits brought in by branching over classic ensembles.
Branching versus Classic Ensembles. Fig. 4 shows that the VLM+RSA Ensemble does not improve performance over VLM+RSA in general, and even leads to worse overall rank correlations and regrets in the Walker2d and Hopper environments. This supports the rationale provided in Sec. 2.4 that each decoder still samples from a different latent space exclusively, and averaging over the output distributions may not help reduce the disturbance brought in by the modeling artifacts under the variational inference framework, e.g., random weight initializations (Hanin & Rolnick, 2018; Rossi et al., 2019). In contrast, the VLBM leverages the branching architecture, allowing all the branches to sample from the same latent space formulated by the encoder. Empirically, we find that the branching weights, w_b's in (9), allow VLBM to prune branches that are not helpful toward reconstructing the trajectories accurately, possibly overcoming bad initializations etc. Over all the 32 tasks we consider, most VLBMs only keep 1-3 branches (out of 10), i.e., w_b < 10^{−5} for all other branches. The distribution of all w_b's, from VLBMs trained on the 32 tasks, is shown in Fig. 6; one can observe that most of the w_b's are close to zero, while the others generally fall in the ranges (0, 0.25] and [0.75, 1).
AR ensembles also lead to compelling rank correlations and regrets, but attain much smaller margins in MAEs over other baselines in general; see Fig. 3. From Fig. 7, one can observe that they tend to significantly under-estimate most of the high-performing policies. Scatter plots for the other tasks can be found in Appendix A, which also show this trend. The reason could be that their model architecture and training objectives are designed to directly learn the transitions of the MDP; thus, they may produce biased predictions when the target policies lead to visitation of states that are not substantially present in the training data, since such data are obtained using behavioral policies that are sub-optimal. In
contrast, the VLBM can leverage RSA and branching against such situations, thus outperforming AR ensembles in most of the OPE tasks in terms of all metrics we considered. Interestingly, Fig. 7 also shows that latent models could sometimes over-estimate the returns. For example, in Hopper-M-E and Walker2d-M-E, VLM tends to over-estimate most policies. The VLBM performs consistently well in Hopper-M-E, but is mildly affected by such an effect in Walker2d-M-E, though over fewer policies and smaller margins. It has been found that variational inference may fall short in approximating true distributions that are asymmetric, and produce biased estimations (Yao et al., 2018). So the hypothesis would be that the dynamics used to define certain environments may lead to asymmetry in the true posterior p(zt|zt−1, at−1, st), which could be hard to be captured by the latent modeling framework we consider. More comprehensive understanding of such behavior can be explored in future work. However, the VLBM still significantly outperforms VLM overall, and achieves top-performing rank correlations and regrets; such results illustrate the VLBM’s improved robustness as a result of its architectural design and choices over training objectives.
t-SNE Visualization of the Latent Space. Fig. 8 illustrates t-SNE visualization of the latent space by rolling out trajectories using all target policies respectively, followed by feeding the state-action pairs into the encoder of VLBM which maps them into the latent space. It shows the encoded state-action pairs induced from policies with similar performance are in general swirled and clustered together, illustrating that VLBM can learn expressive and disentangled representations of its inputs.
4 RELATED WORK
Latent Modeling in RL. Though variational inference has rarely been explored to facilitate model-based OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), and Dreamer (Hafner et al., 2020a;b). Below we discuss the connections and distinctions between VLBM and the latent models leveraged by them, with a detailed overview of these methods provided in Appendix G. Specifically, SLAC and SOLAR learn latent representations of the dynamics jointly with optimization of the target policies, using the latent information to improve sample efficiency. Similarly, LatCo performs trajectory optimization over the latent space to allow for temporarily bypassing dynamic constraints. As a result, latent models used in such methods are not designed toward rolling out trajectories independently, as opposed to the use of VLBM in this paper. PlaNet and Dreamer train the recurrent state space model (RSSM) using a growing experience dataset collected by the target policy that is being concurrently updated (with exploration noise added), which requires online data collection. In contrast, under the OPE setup, VLBM is trained over a fixed set of offline trajectories collected under unknown behavioral policies. Moreover, note that the VLM baseline is somewhat reminiscent of the RSSM and similar ones as in Lee et al. (2020); Lu et al. (2022); however, experiments above show that directly using VLM for OPE could lead to subpar performance. On the other hand, though MOPO (Yu et al., 2020), LOMPO (Rafailov et al., 2021) and COMBO (Yu et al., 2021) can learn from offline data, they focus on quantifying the uncertainty of the model's predictions toward next states and rewards, followed by incorporating them into policy optimization objectives to penalize for visiting regions where transitions are not fully captured; thus, such works are also orthogonal to the use case of OPE.
OPE. Classic OPE methods adopt IS to estimate expectations over the unknown visitation distribution of the target policy, resulting in weighted IS, step-wise IS and weighted step-wise IS (Precup, 2000). IS can lead to estimations with low (or zero) bias, but with high variance (Kostrikov & Nachum, 2020; Jiang & Li, 2016), which sparks a long line of research to address this challenge. DR methods propose to reduce variance by coupling IS with a value function approximator (Jiang & Li, 2016; Thomas & Brunskill, 2016; Farajtabar et al., 2018). However, the introduction of such approximations may increase bias, so the method proposed in Tang et al. (2019) attempts to balance the scale of bias and variance for DR. Unlike IS and DR methods that require the behavioral policies to be fully known, the DICE family of estimators (Zhang et al., 2020c;b; Yang et al., 2021; 2020; Nachum et al., 2019; Dai et al., 2020) and VPM (Wen et al., 2020) can be behavioral-agnostic; they directly capture marginalized IS weights as the ratio between the propensity of the target policy to visit particular state-action pairs, relative to their likelihood of appearing in the logged data. There also exist FQE methods which extrapolate policy returns from approximated Q-functions (Hao et al., 2021; Le et al., 2019; Kostrikov & Nachum, 2020).
Existing model-based OPE methods are designed to directly fit MDP transitions using feed-forward (Fu et al., 2020b) or auto-regressive (Zhang et al., 2020a) models, and have shown promising results over model-free methods as reported in a recent benchmark (Fu et al., 2020b). However, such model-based approaches could be sensitive to the initialization of weights (Hanin & Rolnick, 2018; Rossi et al., 2019) and produce biased predictions, due to the limited coverage over the state and action space provided by offline trajectories (Fu et al., 2020b). Instead, VLBM mitigates such effects by capturing the dynamics over the latent space, such that states and rewards are evolved from a compact feature space over time. Moreover, RSA and the branching can lead to increased expressiveness and robustness, such that future states and rewards are predicted accurately. There also exist OPE methods proposed for specific applications (Chen et al., 2022; Saito et al., 2021; Gao et al., 2023; 2022b).
5 CONCLUSION AND FUTURE WORK
We have developed the VLBM, which can accurately capture the dynamics underlying environments from offline training data that provide limited coverage of the state and action space; this is achieved by using the RSA term to smooth out the information flow from the encoder to the decoders in the latent space, as well as the branching architecture which improves VLBM's robustness against random initializations. We have followed the evaluation guidelines provided by the DOPE benchmark, and experimental results have shown that the VLBM generally outperforms the state-of-the-art model-based OPE method using AR architectures, as well as other model-free methods. VLBM can also facilitate off-policy optimization, which can be explored in future work. Specifically, VLBM can serve as a synthetic environment on which optimal controllers (e.g., linear-quadratic regulators) can be deployed. On the other hand, similar to Dreamer and SLAC, policies can be updated jointly with the training of VLBM, but without the need for online interactions with the environment during training.
ACKNOWLEDGMENTS
This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, and by the NSF CNS-1652544, CNS-1837499, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.
A ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS
Additional Results and Discussions. Rank correlations, regret@1 and MAEs for all 32 tasks are documented in Tables 1-6 below.6 The means and standard deviations (in subscripts) over 3 random seeds are reported. Note that in each column, the performance of multiple methods may be highlighted in bold, meaning they all achieve the best performance and do not significantly outperform each other. The fact that VLBM outperforms the ablation baselines in most cases suggests that the RSA loss term and branching architecture can effectively increase model expressiveness, and allow the dynamics underlying the MDP to be learned more accurately and robustly from offline data that provide limited exploration coverage. Yet, smaller margins are attained between the VLBM and VLM+RSA in Hopper-M-E and Hopper-M. This is likely because Hopper has a relatively lower-dimensional state space compared to the other three environments, from which the underlying dynamics can be sufficiently captured by the VLM+RSA. Fig. 10 and 11 show the correlation between estimated (y-axis) and true returns (x-axis) for all the OPE tasks we consider. It can be found that for Halfcheetah-R, -M-R and -M, most of the model-based methods cannot significantly distinguish the returns across target policies. The cause could be that the offline trajectories provided for these tasks are relatively more challenging, compared to the other OPE tasks. Such an effect appears to affect IS, VPM, DICE, DR and FQE at a larger scale. It can be observed from the scatter plots reported in the DOPE benchmark (Fu et al., 2020b) that these methods could hardly tell the scale of returns across different target policies, as the dots almost form a horizontal line in each plot. However, the estimated returns from VLBM and IS still preserve the rank, which leads to high rank correlations and low regrets.
Implementation Details and Hyper-parameters. The model-based methods are evaluated by directly interacting with each target policy for 50 episodes, and the mean of the discounted total returns (γ = 0.995) over all episodes is used as the estimated performance for the policy. We choose the neural network architectures as follows. For the components involving LSTMs, which include q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) and p_ϕ(z_t|z_{t−1}, a_{t−1}), the architecture includes one LSTM layer with 64 nodes, followed by a dense layer with 64 nodes. All other components do not have LSTM layers involved, so they are constituted by a neural network with 2 dense layers, with 128 and 64 nodes respectively. The output layers that determine the means and diagonal covariances of diagonal Gaussian distributions use linear and softplus activations, respectively. The ones that determine the means of Bernoulli distributions (e.g., for capturing early termination of episodes) are configured to use sigmoid activations. VLBM and the two ablation baselines, VLM and VLM+RSA, are trained using offline trajectories provided by DOPE, with max_iter in Alg. 1 set to 1,000 and the minibatch size set to 64. The Adam optimizer is used to perform gradient descent. To determine the learning rate, we perform grid search among {0.003, 0.001, 0.0007, 0.0005, 0.0003, 0.0001, 0.00005}. Exponential decay is applied, which decays the learning rate by a factor of 0.997 every iteration. To train VLBM, we set the constants from equation 10 following C1 = C2, and perform grid search among
6Some VPM entries are absent since they were not reported in Fu et al. (2020b), nor the code is open-sourced.
{5, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. To train VLM+RSA, the constant C from equation 8 is determined by grid search among the same set of values above. L2-regularization with a decay of 0.001 and batch normalization are applied to all hidden layers. Note that some of the environments (e.g., Ant, Hopper, Walker2d, Pen) may terminate an episode, before timeout, if the state meets specific conditions; details on how VLBM captures such early termination behavior are introduced in Appendix D.
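For reference, the optimizer setup just described can be sketched as below (Adam, exponential learning-rate decay with factor 0.997, 1,000 iterations); the loss placeholder and the use of Adam's weight_decay to approximate the L2 regularization are assumptions on our part.

```python
import torch

params = [torch.nn.Parameter(torch.randn(4, 4))]     # placeholder for the VLBM parameters
optimizer = torch.optim.Adam(params, lr=7e-4, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.997)

for iteration in range(1000):                          # max_iter = 1,000
    optimizer.zero_grad()
    loss = params[0].pow(2).sum()                      # stand-in for -L_VLBM over a minibatch
    loss.backward()
    optimizer.step()
    scheduler.step()                                   # lr <- 0.997 * lr
```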
The DOPE Benchmark. The deep OPE (DOPE) benchmark (Fu et al., 2020b) provides standardized training and evaluation procedures for OPE works to follow, which facilitates fair and comprehensive comparisons among various OPE methods. Specifically, it utilizes existing environments and training trajectories provided by D4RL7 and RLUnplugged8, which are two benchmark suites for offline RL training, and additionally provides target policies for OPE methods to evaluate. In the D4RL branch, the training trajectories are originally collected from various sources including random exploration, human teleoperation, and RL-trained policies with limited exploration; thus, they can provide varied levels of coverage over the state-action space. Moreover, the target policies are trained using online RL algorithms, which can in general lead to different state-action visitations than in the training trajectories. We leverage the D4RL branch as our test base, since the OPE tasks it provides are considered challenging, i.e., there is limited coverage introduced by the training data, as well as discrepancy between the behavioral and target policies. Graphical illustrations of the Gym-Mujoco and Adroit environments considered are shown in Fig. 9. Details on the environments and datasets used are shown in Tables 7 and 8, from the perspectives of state and action dimensions, whether episodes can be terminated before timeout, whether controls are performed over a continuous space, and the size of the offline trajectories used for training. In contrast, in the RLUnplugged branch, the training trajectories are always collected using online RL training, which can result in adequate coverage over the state-action space. The target policies are trained by applying offline RL over the training trajectories, so that behavioral and target policies can lead to similar state-action visitation distributions. As discussed in DOPE (Fu et al., 2020b), such tasks are suitable for studies where ideal data are needed, such as complexity comparisons.
Evaluation Metrics. Following (Fu et al., 2020b), we consider rank correlation, regret@1 and mean absolute error (MAE) as the evaluation metrics. Specifically, rank correlation measures the strength and direction of the monotonic association between the rank of OPE-estimated returns and true returns over all target policies. It is captured by Spearman's correlation coefficient between the ordinal rankings of estimated and true returns. Regret@1 is captured by the difference between the return of the policy corresponding to the highest return as estimated by OPE and the return of the policy that actually produces the highest true return. In other words, regret@1 evaluates how much worse the policy resulting in the highest OPE-estimated return would perform compared to the actual best policy. These two metrics evaluate how useful OPE would be to facilitate important applications such as policy selection. Finally, we also consider MAE, which is commonly used in estimation/regression tasks. Mathematical definitions of these metrics can be found in (Fu et al., 2020b).
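For reference, the three metrics can be computed as follows for a vector of OPE estimates and the corresponding true returns; this sketch computes Spearman's coefficient directly from ranks and assumes no ties among the values.

```python
import numpy as np

def spearman_rank_corr(x: np.ndarray, y: np.ndarray) -> float:
    # rank-based correlation; assumes no ties among the values
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx, ry = rx - rx.mean(), ry - ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

def ope_metrics(estimated: np.ndarray, true: np.ndarray):
    rank_corr = spearman_rank_corr(estimated, true)
    regret_at_1 = true.max() - true[np.argmax(estimated)]   # gap to the actual best policy
    mae = np.abs(estimated - true).mean()
    return rank_corr, regret_at_1, mae
```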
Implementation of AR Ensembles. For fair comparisons with VLBM, in experiments we train an ensemble of the state-of-the-art model-based OPE method, auto-regressive (AR) models (Zhang et al., 2020a), as one of the baselines. Specifically, we train an ensemble of 10 AR models to learn p(st+1, rt|st, at) following the auto-regressive manner, with each individual model following the design introduced in (Zhang et al., 2020a), i.e.,
s_{t+1}^{(j)} ∼ p(s_{t+1}^{(j)} | s_t, a_t, s_{t+1}^{(1)}, . . . , s_{t+1}^{(j−1)}),   (11)
with s_{t+1}^{(j)} representing the element located at the j-th dimension of the state variable, and D the dimension of the state space. The reward is treated as an additional dimension of the state, i.e., r_t ∼ p(r_t | s_t, a_t, s_{t+1}^{(1)}, . . . , s_{t+1}^{(D)}). However, the original literature (Zhang et al., 2020a) does not introduce in detail which specific ensemble architecture is used (e.g., overall averaging or weighted averaging). As a result, we choose the same weighted averaging procedure as used in VLBM branching, to sort out the influence of different ensemble architectures and facilitate fair comparisons. Specifically, a total of 10 AR models, parameterized by {θ_1, . . . , θ_10}, along with 10
7https://github.com/rail-berkeley/d4rl 8https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged
weight variables {w_1^θ, . . . , w_10^θ | ∑_i w_i^θ = 1}, are trained. Similar to the weighted averaging architecture used in VLBM, i.e., equation 9, the mean and variance of the prediction s_{t+1}^{(j)}, captured by a normal distribution N(μ, σ^2), follow

μ = ∑_{i=1}^{10} w_i^θ · μ_{θ_i}(s_{t+1}^{(j)}),   σ^2 = ∑_{i=1}^{10} (w_i^θ)^2 · σ_{θ_i}^2(s_{t+1}^{(j)}),   (12)

where μ_{θ_i}(s_{t+1}^{(j)}) and σ_{θ_i}^2(s_{t+1}^{(j)}) are the mean and variance produced from each individual AR model in the ensemble.
Training Resources. Training of the proposed method, and baselines, are facilitated by Nvidia Quadro RTX 6000, NVIDIA RTX A5000, and NVIDIA TITAN XP GPUs.
License. The use of DOPE9 and D4RL (Fu et al., 2020a) follow the Apache License 2.0.
9https://github.com/google-research/deep_ope
B MORE t-SNE VISUALIZATIONS
Figures 12 and 13 above visualize the latent space captured by two ablation baselines, VLM and VLM+RSA(MSE), respectively. It can be observed that the latent space captured by VLM is not disentangled well compared to VLBM (shown in Figure 8), as the state-action pairs induced by policies with different levels of performance generally cluster together without explicit boundaries. Such a finding illustrates the importance of the RSA loss (7) empirically, as it can effectively regularize q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) and allows the encoder to map the MDP states to an expressive and compact latent space from which the decoder can reconstruct states and rewards accurately. Moreover, Figure 13 shows that the latent representations of the state-action pairs captured by VLM+RSA(MSE) are distributed almost uniformly over the latent space. This justifies the rationale provided in Sec. 2.3 that MSE is too strong a regularizer for the hidden states of the encoder and decoder, and is also consistent with the results reported in Figure 3 showing that VLM+RSA(MSE) performs worse than VLM in general.
C ALGORITHMS FOR TRAINING AND EVALUATING VLBM
Algorithm 1 Train VLBM.
Input: model weights ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B, offline trajectories ρ_β, and learning rate α.
1: Initialize ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B
2: for iter in 1 : max_iter do
3:    Sample a trajectory [(s_0, a_0, r_0, s_1), . . . , (s_{T−1}, a_{T−1}, r_{T−1}, s_T)] ∼ ρ_β
4:    z_0^ψ ∼ q_ψ(z_0|s_0)
5:    z_0^{ϕ_b} ∼ p(z_0), for all b ∈ [1, B]
6:    Run the forward pass of VLBM following (3), (5) and (9) for t = 1 : T, and collect all variables needed to evaluate L_VLBM as specified in (10)
7:    ψ ← ψ + α∇_ψ L_VLBM
8:    for b in 1 : B do
9:       ϕ_b ← ϕ_b + α∇_{ϕ_b} L_VLBM
10:      w_b ← w_b + α∇_{w_b} L_VLBM
11:   end for
12: end for
Algorithm 2 Evaluate VLBM.
Input: trained model weights ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B.
1: Initialize the list that stores the accumulated returns over all episodes, R = []
2: for epi in 1 : max_epi do
3:    Initialize the variable r = 0 that tracks the accumulated return for the current episode
4:    Initialize latent states from the prior, i.e., z_0^{ϕ_b} ∼ p(z_0) for all b ∈ [1, B]
5:    Initialize LSTM hidden states h_0^{ϕ_b} = 0 for all b ∈ [1, B]
6:    Sample s_0^{ϕ_b} ∼ p_ϕ(s_0|z_0^{ϕ_b}) for all b ∈ [1, B] and generate the initial MDP state s_0^ϕ following (9)
7:    for t in 1 : T do
8:       Determine the action following the target policy π, i.e., a_{t−1} ∼ π(a_{t−1}|s_{t−1}^ϕ)
9:       for b in 1 : B do
10:         Update h_t^{ϕ_b}, h̃_t^{ϕ_b}, z_t^{ϕ_b}, s_t^{ϕ_b}, r_{t−1}^{ϕ_b} following (5)
11:      end for
12:      Generate the next state s_t^ϕ following (9), as well as the reward r_{t−1}^ϕ ∼ p_ϕ(r_{t−1}|z_t^{ϕ_1}, . . . , z_t^{ϕ_B}) = N( μ = ∑_b w_b · μ(r_{t−1}^{ϕ_b}),  Σ_diag = ∑_b w_b^2 · Σ_diag(r_{t−1}^{ϕ_b}) )
13:      Update r ← r + γ^{t−1} r_{t−1}^ϕ, with γ being the discounting factor
14:   end for
15:   Append r into R
16: end for
17: Average over all elements in R, which serves as the estimated return of π
D EARLY TERMINATION OF ENVIRONMENTS
Some of the environments considered, including Ant, Hopper, Walker2d and Pen, may terminate an episode before reaching the maximum number of steps if the state violates specific constraints. Below we introduce how VLM and VLBM can be enriched to capture such early-termination behaviors.
VLM. For VLM, we introduce an additional component d_t^ϕ ∼ p_ϕ(d_t|z_t^ϕ) to the generative process in equation 5, where d_t^ϕ is a Bernoulli variable determining if an episode should be terminated at its t-th step. Specifically, p_ϕ(d_t|z_t^ϕ) follows a Bernoulli distribution, with mean determined by an MLP with sigmoid activation applied to the output layer. As a result, the generative process now follows
h_t^ϕ = f_ϕ(h_{t−1}^ϕ, z_{t−1}^ϕ, a_{t−1}),   h̃_t^ϕ = g_ϕ(h_t^ϕ),   z_t^ϕ ∼ p_ϕ(h̃_t^ϕ),
s_t^ϕ ∼ p_ϕ(s_t|z_t^ϕ),   r_{t−1}^ϕ ∼ p_ϕ(r_{t−1}|z_t^ϕ),   d_t^ϕ ∼ p_ϕ(d_t|z_t^ϕ),   a_t ∼ π(a_t|s_t^ϕ).   (13)
Moreover, we add a new term to VLM's training objective, in order to update the component introduced above during training, i.e.,

L_VLM^{early_term}(ψ, ϕ) = L_VLM(ψ, ϕ) + ∑_{t=0}^{T} log p_ϕ(d_t|z_t),   (14)

with L_VLM(ψ, ϕ) being the original objective of VLM, as presented in equation 8.
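A sketch of the additional termination head from (13) and its log-likelihood term in (14) is shown below; layer sizes, names and the clamping constant are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TerminationHead(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())   # mean of the Bernoulli

    def forward(self, z_t):
        return self.net(z_t).squeeze(-1)

def termination_log_likelihood(p_done: torch.Tensor, done: torch.Tensor) -> torch.Tensor:
    # sum_t log p_phi(d_t | z_t) for observed termination flags done in {0, 1}
    dist = torch.distributions.Bernoulli(probs=p_done.clamp(1e-6, 1 - 1e-6))
    return dist.log_prob(done.float()).sum(-1)
```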
VLBM. For VLBM, the termination of an episode is determined following

d_t^ϕ ∼ p_ϕ(d_t|z_t^{ϕ_1}, . . . , z_t^{ϕ_B}) = Bernoulli( μ = ∑_b w_b · μ_d(d_t^{ϕ_b}) ),   (15)

where μ_d(d_t^{ϕ_b}) = ϕ_{b,μ_d}^{MLP}(z_t^{ϕ_b}) is the mean of d_t^{ϕ_b} produced from the b-th branch of the decoder, and ϕ_{b,μ_d}^{MLP} is the corresponding MLP that maps z_t^{ϕ_b} to μ_d(d_t^{ϕ_b}). To update the components involved in the procedure above, we introduce a new term to the VLBM's objective, i.e.,

L_VLBM^{early_term}(ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B)   (16)
= L_VLBM(ψ, ϕ_1, . . . , ϕ_B, w_1, . . . , w_B) + ∑_{t=0}^{T} log p_ϕ(d_t^ϕ|z_t^{ϕ_1}, . . . , z_t^{ϕ_B}),   (17)

with L_VLBM being the original objective of VLBM, as presented in equation 10.
E BOUND DERIVATION
We now derive the evidence lower bound (ELBO) for the joint log-likelihood distribution, i.e.,
log p_ϕ(s_{0:T}, r_{0:T−1})   (18)
= log ∫_{z_{1:T}∈Z} p_ϕ(s_{0:T}, z_{1:T}, r_{0:T−1}) dz   (19)
= log ∫_{z_{1:T}∈Z} p_ϕ(s_{0:T}, z_{1:T}, r_{0:T−1}) · [ q_ψ(z_{0:T}|s_{0:T}, a_{0:T−1}) / q_ψ(z_{0:T}|s_{0:T}, a_{0:T−1}) ] dz   (20)
≥ E_{q_ψ} [ log p(z_0) + log p_ϕ(s_{0:T}, z_{1:T}, r_{0:T−1}|z_0) − log q_ψ(z_{0:T}|s_{0:T}, a_{0:T−1}) ]   (21)
= E_{q_ψ} [ log p(z_0) + log p_ϕ(s_0|z_0) + ∑_{t=1}^{T} log p_ϕ(s_t, z_t, r_{t−1}|z_{t−1}, a_{t−1}) − log q_ψ(z_0|s_0) − ∑_{t=1}^{T} log q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) ]   (22)
= E_{q_ψ} [ log p(z_0) − log q_ψ(z_0|s_0) + log p_ϕ(s_0|z_0) + ∑_{t=1}^{T} log( p_ϕ(s_t|z_t) p_ϕ(r_{t−1}|z_t) p_ϕ(z_t|z_{t−1}, a_{t−1}) ) − ∑_{t=1}^{T} log q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) ]   (23)
= E_{q_ψ} [ ∑_{t=0}^{T} log p_ϕ(s_t|z_t) + ∑_{t=1}^{T} log p_ϕ(r_{t−1}|z_t) − KL( q_ψ(z_0|s_0) || p(z_0) ) − ∑_{t=1}^{T} KL( q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) || p_ϕ(z_t|z_{t−1}, a_{t−1}) ) ].   (24)
Note that the transition from equation 20 to equation 21 follows Jensen’s inequality.
F BASICS OF VARIATIONAL INFERENCE
Classic variational auto-encoders (VAEs) are designed to generate synthetic data that share similar characteristics with the data used for training (Kingma & Welling, 2013). Specifically, VAEs learn an approximated posterior q_ψ(z|x) and a generative model p_ϕ(x|z), over the prior p(z), with x being the data and z the latent variable. Its true posterior p_ϕ(z|x) is intractable, i.e.,
p_ϕ(z|x) = p_ϕ(x|z) p(z) / p_ϕ(x);   (25)
since the marginal likelihood in the denominator, p_ϕ(x) = ∫_z p_ϕ(x|z) p(z) dz, requires integration over the unknown latent space. For the same reason, VAEs cannot be trained to directly maximize the marginal log-likelihood, max log p_ϕ(x). To resolve this, one could maximize a lower bound of p_ϕ(x), i.e.,

max_{ψ,ϕ} −KL( q_ψ(z|x) || p(z) ) + E_{q_ψ}[ log p_ϕ(x|z) ],   (26)
which is the evidence lower bound (ELBO).
Reparameterization. During training, it is required to sample from qψ(z|x) and pϕ(x|z) constantly. The reparameterization technique is introduced in (Kingma & Welling, 2013), to ensure that the gradients can flow through such sampling process during back-propagation. For example, if both distributions (qψ(z|x) and pϕ(x|z)) follow diagonal Gaussians, with mean and diagonal covariance determined by MLPs, i.e.,
z ∼ q_ψ(z|x) = N( μ = ψ_μ^{MLP}(x), Σ = ψ_Σ^{MLP}(x) ),   (27)
x ∼ p_ϕ(x|z) = N( μ = ϕ_μ^{MLP}(z), Σ = ϕ_Σ^{MLP}(z) );   (28)
here, ψ_μ^{MLP}, ψ_Σ^{MLP}, ϕ_μ^{MLP}, ϕ_Σ^{MLP} are the MLPs that generate the means and covariances. The sampling processes above can be captured by reparameterization, i.e.,
z = ψ_μ^{MLP}(x) + ψ_Σ^{MLP}(x) · ϵ,   (29)
x = ϕ_μ^{MLP}(z) + ϕ_Σ^{MLP}(z) · ϵ,   (30)
with ϵ ∼ N (0, I). Consequently, the gradients over ψ and ϕ can be calculated following the chain rule, and used for back-propagation during training. We direct readers to (Kingma & Welling, 2013) for a comprehensive review of reparameterization.
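A minimal sketch of the reparameterized sampling in (29)-(30) is given below, treating the learned scale as the per-dimension standard deviation; the function and placeholder names are illustrative assumptions.

```python
import torch

def reparameterize(mu: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    eps = torch.randn_like(scale)   # eps ~ N(0, I)
    return mu + scale * eps         # gradients flow through mu and scale

# usage: z = reparameterize(psi_mu(x), psi_sigma(x)), with psi_mu, psi_sigma the MLP heads
```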
G ADDITIONAL RELATED WORKS
Overview of latent-model based RL methods. In SLAC, latent representations are used to improve the sample efficiency of model-free RL training algorithms, by jointly modeling and learning dynamics and controls over the latent space. Similarly, SOLAR improves data efficiency for multi-task RL by first learning high-level latent representations of the environment, which can be shared across different tasks. Then, local dynamics models are inferred from the abstraction, with controls solved by linear-quadratic regulators. PlaNet and Dreamer further improve the architecture and training objectives of latent models, allowing them to look ahead multiple steps and plan over longer horizons. There also exists LatCo, which directly performs trajectory optimization over the latent space, allowing the agent to temporarily bypass dynamical constraints and quickly navigate to the high-reward regions in the early training stage. To summarize, the methods above leverage latent representations to gain sufficient exploration coverage and quickly navigate to high-reward regions, improving sample efficiency for policy optimization. Note that they mostly require online interactions with the environment to formulate a growing experience replay buffer for policy learning, which is a different goal than OPE, which requires learning from a fixed set of offline trajectories.

1. What is the focus and contribution of the paper on off-policy evaluation in offline reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its conceptual contribution and empirical techniques?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What questions do you have regarding the paper, such as the motivation behind using a recurrent model instead of a Markovian world model? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes the application of a recurrent world model to the task of off-policy evaluation (OPE) in the offline RL setting. As contributions, the paper identifies two empirical techniques that help improve performance in the offline OPE setting: (1) recurrent state alignment (RSA); (2) a branching architecture. Finally, experiment results show improvement over baseline methods.
Strengths And Weaknesses
=== Strengths ===
The paper seems to propose an interesting application of learning a recurrent world model for the purpose of offline OPE. Intriguingly, applying such a world-model technique does not work optimally out-of-the-box and, as the authors suggest in the paper, requires additional techniques to work better. The paper identifies such important elements as the RSA loss and proposes the novel branching architecture, which can be useful for future investigation in model-based RL for OPE in general. Experiment results also seem solid.
=== Weakness ===
I think a major weakness of the paper lies in its conceptual contribution. One central question is: since all the tasks being evaluated are Mujoco continuous controls or Adroit tasks, arguably the environments are almost fully Markovian, then what is the principal reason why using a recurrent model works better than just using a Markovian world model? This is a key motivational question not addressed enough by the paper. I will expand more in the following.
Clarity, Quality, Novelty And Reproducibility
=== Clarity ===
The paper is written quite clearly.
=== Quality ===
The paper has good writing quality and has clear presentations of experiment results. The paper also has good technical quality in terms of empirical contributions, but may have flaws regarding its motivational and conceptual contributions.
=== Novelty ===
The paper is novel in its new branching architecture and provides another application of recurrent world model to offline RL OPE. Otherwise it feels like a fairly plain application of recurrent world model to OPE that seems to obtain better performance. Overall, the novelty is mediocre.
=== Reproduce ===
Maybe reproducible since the authors have provided code. |
ICLR | Title
Variational Latent Branching Model for Off-Policy Evaluation
Abstract
Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to rollout simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover limited state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data, by smoothing out the information flow between the variational (encoding) and generative (decoding) part of VLBM. Moreover, we also introduce the branching architecture to improve the model’s robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, from which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general.
1 INTRODUCTION
Off-policy evaluation (OPE) allows for evaluation of reinforcement learning (RL) policies without online interactions. It is applicable to many domains where on-policy data collection could be prevented due to efficiency and safety concerns, e.g., healthcare (Gao et al., 2022c;a; Tang & Wiens, 2021), recommendation systems (Mehrotra et al., 2018; Li et al., 2011), education (Mandel et al., 2014), social science (Segal et al., 2018) and optimal control (Silver et al., 2016; Vinyals et al., 2019; Gao et al., 2020a; 2019; 2020b). Recently, as reported in the deep OPE (DOPE) benchmark (Fu et al., 2020b), model-based OPE methods, leveraging feed-forward (Fu et al., 2020b) and auto-regressive (AR) (Zhang et al., 2020a) architectures, have shown promising results toward estimating the return of target policies, by fitting transition functions of MDPs. However, model-based OPE methods remain challenged as they can only be trained using offline trajectory data, which often offers limited coverage of state and action space. Thus, they may perform sub-optimally on tasks where parts of the dynamics are not fully explored (Fu et al., 2020b). Moreover, different initialization of the model weights could lead to varied evaluation performance (Hanin & Rolnick, 2018; Rossi et al., 2019), reducing the robustness of downstream OPE estimations. Some approaches in RL policy optimization literature use latent models trained to capture a compact space from which the dynamics underlying MDPs are extrapolated; this allows learning expressive representations over the state-action space. However, such approaches usually require online data collections as the focus is on quickly navigating to the high-reward regions (Rybkin et al., 2021), as well as on improving coverage of the explored state and action space (Zhang et al., 2019; Hafner et al., 2019; 2020a) or sample efficiency (Lee et al., 2020).
In this work, we propose the variational latent branching model (VLBM), aiming to learn a compact and disentangled latent representation space from offline trajectories, which can better capture the
∗Duke University, USA. Emails: {qitong.gao, miroslav.pajic}@duke.edu. †North Carolina State University, USA. Emails: {ggao5, mchi}@ncsu.edu
Code available at https://github.com/gaoqitong/vlbm.
dynamics underlying environments. VLBM enriches the architectures and optimization objectives for existing latent modeling frameworks, allowing them to learn from a fixed set of offline trajectories. Specifically, VLBM considers learning variational (encoding) and generative (decoding) distributions, both represented by long short-term memories (LSTMs) with reparameterization (Kingma & Welling, 2013), to encode the state-action pairs and enforce the transitions over the latent space, respectively. To train such models, we optimize over the evidence lower bound (ELBO) jointly with a recurrent state alignment (RSA) term defined over the LSTM states; this ensures that the information encoded into the latent space can be effectively teased out by the decoder. Then, we introduce the branching architecture that allows for multiple decoders to jointly infer from the latent space and reach a consensus, from which the next state and reward are generated. This is designed to mitigate the side effects of model-based methods where different weight initializations could lead to varied performance (Fu et al., 2020b; Hanin & Rolnick, 2018; Rossi et al., 2019).
We focus on using the VLBM to facilitate OPE since it allows to better distinguish the improvements made upon learning dynamics underlying the MDP used for estimating policy returns, as opposed to RL training where performance can be affected by multiple factors, e.g., techniques used for exploration and policy optimization. Moreover, model-based OPE methods is helpful for evaluating the safety and efficacy of RL-based controllers before deployments in the real world (Gao et al., 2022b), e.g., how a surgical robot would react to states that are critical to a successful procedure. The key contributions of this paper are summarized as follows: (i) to the best of our knowledge, the VLBM is the first method that leverages variational inference for OPE. It can be trained using offline trajectories and capture environment dynamics over latent space, as well as estimate returns of target (evaluation) policies accurately. (ii) The design of the RSA loss term and branching architecture can effectively smooth the information flow in the latent space shared by the encoder and decoder, increasing the expressiveness and robustness of the model. This is empirically shown in experiments by comparing with ablation baselines. (iii) Our method generally outperforms existing model-based and model-free OPE methods, for evaluating policies over various D4RL environments (Fu et al., 2020a). Specifically, we follow guidelines provided by the DOPE benchmark (Fu et al., 2020b), which contains challenging OPE tasks where the training trajectories include varying levels of coverage of the state-action space, and target policies are designed toward resulting in state-action distributions different from the ones induced by behavioral policies.
2 VARIATIONAL LATENT BRANCHING MODEL
In this section, we first introduce the objective of OPE and the variational latent model (VLM) we consider. Then, we propose the recurrent state alignment (RSA) term as well as the branching architecture that constitute the variational latent branching model (VLBM).
2.1 OPE OBJECTIVE
We first introduce the MDP used to characterize the environment. Specifically, an MDP can be defined as a tuple M = (S,A,P, R, s0, γ), where S is the set of states, A the set of actions, P : S × A → S is the transition distribution usually captured by probabilities p(st|st−1, at−1), R : S ×A → R is the reward function, s0 is the initial state sampled from the initial state distribution p(s0), γ ∈ [0, 1) is the discounting factor. Finally, the agent interacts with the MDP following some policy π(a|s) which defines the probabilities of taking action a at state s. Then, the goal of OPE can be formulated as follows. Given trajectories collected by a behavioral policy β, ρβ = {[(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT )](0), [(s0, a0, r0, s1), . . . ](1), . . . |at ∼ β(at|st)}1, estimate the expected total return over the unknown state-action visitation distribution ρπ of the target (evaluation) policy π – i.e., for T being the horizon,
E_{(s,a)∼ρ_π, r∼R} [ ∑_{t=0}^{T} γ^t R(s_t, a_t) ].   (1)
2.2 VARIATIONAL LATENT MODEL
We consider the VLM consisting of a prior p(z) over the latent variables z ∈ Z ⊂ Rl, with Z representing the latent space and l the dimension, along with a variational encoder qψ(zt|zt−1, at−1, st)
1We slightly abuse the notation ρβ , to represent either the trajectories or state-action visitation distribution under the behavioral policy, depending on the context.
and a generative decoder pϕ(zt, st, rt−1|zt−1, at−1), parameterized by ψ and ϕ respectively. Basics of variational inference are introduced in Appendix F.
Latent Prior p(z0). The prior specifies the distribution from which the latent variable of the initial stage, z0, is sampled. We configure p(z0) to follow a Gaussian with zero mean and identity covariance matrix, which is a common choice under the variational inference framework (Kingma & Welling, 2013; Lee et al., 2020).
Variational Encoder for Inference q_ψ(z_t|z_{t−1}, a_{t−1}, s_t). The encoder is used to approximate the intractable posterior, p(z_t|z_{t−1}, a_{t−1}, s_t) = p(z_{t−1}, a_{t−1}, z_t, s_t) / ∫_{z_t∈Z} p(z_{t−1}, a_{t−1}, z_t, s_t) dz_t, where the denominator requires integrating over the unknown latent space. Specifically, the encoder can be decomposed into two parts, given that
q_ψ(z_{0:T}|s_{0:T}, a_{0:T−1}) = q_ψ(z_0|s_0) ∏_{t=1}^{T} q_ψ(z_t|z_{t−1}, a_{t−1}, s_t);   (2)
here, q_ψ(z_0|s_0) encodes the initial state s_0 into the corresponding latent variable z_0; then, q_ψ(z_t|z_{t−1}, a_{t−1}, s_t) enforces the transition from z_{t−1} to z_t conditioned on a_{t−1} and s_t. Both distributions are diagonal Gaussians2, with means and diagonals of the covariance matrices determined by a multi-layered perceptron (MLP) (Bishop, 2006) and a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), respectively. The weights of both neural networks are referred to as ψ in general.
Consequently, the inference process for zt can be summarized as
z_0^ψ ∼ q_ψ(z_0|s_0),   h_t^ψ = f_ψ(h_{t−1}^ψ, z_{t−1}^ψ, a_{t−1}, s_t),   z_t^ψ ∼ q_ψ(z_t|h_t^ψ),   (3)
where f_ψ represents the LSTM layer and h_t^ψ the LSTM recurrent (hidden) state. Note that we use ψ in superscripts to distinguish the variables involved in this inference process from those of the generative process introduced below. Moreover, reparameterization can be used to sample z_0^ψ and z_t^ψ, such that the gradients of sampling can be back-propagated, as introduced in (Kingma & Welling, 2013). An overview of the inference and generative processes is illustrated in Fig. 1.
Generative Decoder for Sampling pϕ(zt, st, rt−1|zt−1, at−1). The decoder is used to interact with the target policies and acts as a synthetic environment during policy evaluation, from which the expected returns can be estimated as the mean return of simulated trajectories. The decoder can be represented by the multiplication of three diagonal Gaussian distributions, given that
p_ϕ(z_{1:T}, s_{0:T}, r_{0:T−1}|z_0, π) = ∏_{t=0}^{T} p_ϕ(s_t|z_t) ∏_{t=1}^{T} p_ϕ(z_t|z_{t−1}, a_{t−1}) p_ϕ(r_{t−1}|z_t),   (4)
with a_t ∼ π(a_t|s_t) at each time step. Specifically, p_ϕ(z_t|z_{t−1}, a_{t−1}) has its mean and covariance determined by an LSTM, enforcing the transition from z_{t−1} to z_t in the latent space given action a_{t−1}. In what follows, p_ϕ(s_t|z_t) and p_ϕ(r_{t−1}|z_t) generate the current state s_t and reward r_{t−1} given z_t, with means and covariances determined by MLPs. As a result, the generative process starts with sampling the initial latent variable from the latent prior, i.e., z_0^ϕ ∼ p(z_0). Then, the initial state s_0^ϕ ∼ p_ϕ(s_0|z_0^ϕ) and action a_0 ∼ π(a_0|s_0^ϕ) are obtained from p_ϕ and the target policy π, respectively; the rest of the generative process can be summarized as
h_t^ϕ = f_ϕ(h_{t−1}^ϕ, z_{t−1}^ϕ, a_{t−1}),   h̃_t^ϕ = g_ϕ(h_t^ϕ),   z_t^ϕ ∼ p_ϕ(h̃_t^ϕ),
s_t^ϕ ∼ p_ϕ(s_t|z_t^ϕ),   r_{t−1}^ϕ ∼ p_ϕ(r_{t−1}|z_t^ϕ),   a_t ∼ π(a_t|s_t^ϕ),   (5)
2Assume that different dimensions of the states are non-correlated with each other. Otherwise, the states can be projected to orthogonal basis, such that non-diagonal elements of the covariance matrix will be zeros.
where f_ϕ is the LSTM layer producing the recurrent state h_t^ϕ. Then, an MLP g_ϕ is used to generate the mapping between h_t^ϕ and h̃_t^ϕ that will be used for the recurrent state alignment (RSA) introduced below, to augment the information flow between the inference and generative processes.
Furthermore, to train the elements in the encoder (3) and decoder (5), one can maximize the evidence lower bound (ELBO), a lower bound of the joint log-likelihood log p(s0:T, r0:T−1), following
LELBO(ψ, ϕ) = E_{qψ}[ ∑_{t=0}^{T} log pϕ(st|zt) + ∑_{t=1}^{T} log pϕ(rt−1|zt) − KL( qψ(z0|s0) || p(z0) ) − ∑_{t=1}^{T} KL( qψ(zt|zt−1, at−1, st) || pϕ(zt|zt−1, at−1) ) ]; (6)
here, the first two terms represent the log-likelihood of reconstructing the states and rewards, and the last two terms regularize the approximated posterior. The proof can be found in Appendix E.
2.3 RECURRENT STATE ALIGNMENT
The latent model discussed above is somewhat reminiscent of the ones used in model-based RL policy training methods, e.g., the recurrent state space model (RSSM) used in PlaNet (Hafner et al., 2019) and Dreamer (Hafner et al., 2020a;b), as well as similar ones in Lee et al. (2020); Lu et al. (2022). Such methods rely on a growing experience buffer for training, which is collected online by the target policy that is being concurrently updated (with exploration noise added); however, OPE aims to extrapolate returns from a fixed set of offline trajectories, which may result in limited coverage of the state and action space. Consequently, directly applying VLM for OPE can lead to subpar performance empirically; see results in Sec. 3. Moreover, the encoder above plays a key role in capturing the temporal transitions between latent variables, i.e., qψ(zt|zt−1, at−1, st) from (2). However, it is absent in the generative process, as the decoder leverages a separate network to determine the latent transitions, i.e., pϕ(zt|zt−1, at−1). Moreover, from the ELBO (6) above it can be seen that only the KL-divergence terms are used to regularize these two parts, which may not be sufficient for OPE as limited offline trajectories are provided. As a result, we introduce the RSA term as part of the training objective, to further regularize qψ(zt|zt−1, at−1, st) and pϕ(zt|zt−1, at−1). A graphical illustration of RSA can be found in Fig. 2.³
Specifically, RSA is defined as the mean pairwise squared error between h^ψ_t from the encoder (3) and h̃^ϕ_t from the decoder (5), i.e.,
LRSA(h̃^ϕ_t, h^ψ_t; ψ, ϕ) = (1/N) ∑_{i=1}^{N} ∑_{t=0}^{T} [2 / (M(M−1))] ∑_{j=1}^{M−1} ∑_{k=j+1}^{M} ( (h̃^ϕ_t[j] − h̃^ϕ_t[k]) − (h^ψ_t[j] − h^ψ_t[k]) )²; (7)
here, we assume that both LSTM recurrent states have the same dimension, h̃^ϕ_t, h^ψ_t ∈ R^M, with h^(·)_t[j] referring to the j-th element of the recurrent state, and N the number of training trajectories.
Here, we choose the pairwise squared loss over the classic mean squared error (MSE), because MSE could be too strong a regularizer for h^ψ_t and h̃^ϕ_t, which support the inference and generative processes respectively and are not supposed to be exactly the same. In contrast, the pairwise loss (7) can promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. Note that this design choice has been justified in Sec. 3 through an ablation study comparing against models trained with MSE. In general, the pairwise loss has also been adopted in many domains for similar purposes, e.g., object detection (Gould et al., 2009; Rocco et al., 2018), ranking systems (Doughty et al., 2018; Saquil et al., 2021) and contrastive learning (Wang et al., 2021; Chen et al., 2020). Similarly, we apply the pairwise loss over h^ψ_t and h̃^ϕ_t, instead of directly over h^ψ_t and h^ϕ_t, as the mapping gϕ (from equation 5) could serve as a regularization layer, ensuring optimality over LRSA without changing h^ψ_t and h^ϕ_t significantly.
3Rewards and actions are omitted for conciseness of the presentation.
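A minimal sketch of the RSA term (7) in PyTorch is given below; the tensor shapes and the normalization by the number of pairs (consistent with the "mean pairwise" description above) are assumptions of the example.

import torch

def rsa_loss(h_tilde, h):
    # Mean pairwise squared error between decoder states h_tilde and encoder
    # states h; both tensors are assumed to have shape (N, T, M).
    gap_dec = h_tilde.unsqueeze(-1) - h_tilde.unsqueeze(-2)             # (N, T, M, M) pairwise gaps
    gap_enc = h.unsqueeze(-1) - h.unsqueeze(-2)
    sq = (gap_dec - gap_enc) ** 2
    M = h.shape[-1]
    mask = torch.triu(torch.ones(M, M, dtype=torch.bool), diagonal=1)   # keep each pair j < k once
    per_step = sq[..., mask].mean(dim=-1)                               # mean over M(M-1)/2 pairs
    return per_step.sum(dim=1).mean()                                   # sum over t, mean over N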
As a result, the objective for training the VLM, following architectures specified in (3) and (5), can be formulated as
max_{ψ,ϕ} LVLM(ψ, ϕ) = max_{ψ,ϕ} ( LELBO(ψ, ϕ) − C · LRSA(h̃^ϕ_t, h^ψ_t; ψ, ϕ) ), (8)
with C > 0 and C ∈ R being the constant balancing the scale of the ELBO and RSA terms.
2.4 BRANCHING FOR GENERATIVE DECODER
The performance of model-based methods can vary with different design factors (Fu et al., 2020b; Hanin & Rolnick, 2018). Specifically, Rossi et al. (2019) found that the convergence speed and optimality of variational models are sensitive to the choice of weight initialization techniques. Moreover, under the typical variational inference setup followed by the VLM above, the latent transitions reconstructed by the decoder, pϕ(zt|zt−1, at−1), are only trained through the regularization losses in (6) and (7), but are fully responsible for rolling out trajectories during evaluation. Consequently, in this sub-section we introduce the branching architecture for the decoder, with the goal of minimizing the impact brought by random weight initialization of the networks, and allowing the decoder to reconstruct the latent transitions pϕ(zt|zt−1, at−1), as well as the st's and rt−1's, as accurately as possible. Specifically, the branching architecture leverages an ensemble of B ∈ Z+ decoders to tease out information from the latent space formulated by the encoder, with final predictions sampled from a mixture of the Gaussian output distributions from (5). Note that the classic setup of ensembles is not considered, i.e., training and averaging over B VLMs end-to-end, because in this case B different latent spaces exist, each of which is still associated with a single decoder, leaving the challenges above unresolved. This design choice is justified by ablation studies in Sec. 3, by comparing VLBM against a (classic) ensemble of VLMs.
Branching Architecture. Consider the generative process involving B branches of the decoders parameterized by {ϕ1, . . . , ϕB}. The forward architecture over a single step is illustrated in Fig. 2.⁴ Specifically, the procedure of sampling z^{ϕb}_t and s^{ϕb}_t for each b ∈ [1, B] follows from (5). Recall that by definition pϕb(st|z^{ϕb}_t) follows a multivariate Gaussian with mean and diagonal of the covariance matrix determined by the corresponding MLPs, i.e., µ(s^{ϕb}_t) = ϕ^MLP_{b,µ}(z^{ϕb}_t) and Σdiag(s^{ϕb}_t) = ϕ^MLP_{b,Σ}(z^{ϕb}_t). Then, the final outcome s^ϕ_t can be sampled following a diagonal Gaussian with mean and variance determined by weighted averaging across all branches using weights wb's, i.e.,
s^ϕ_t ∼ pϕ(st | z^{ϕ1}_t, . . . , z^{ϕB}_t) = N( µ = ∑_b wb · µ(s^{ϕb}_t),  Σdiag = ∑_b wb² · Σdiag(s^{ϕb}_t) ). (9)
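A small sketch of the mixture step in (9) is shown below, combining the per-branch means and diagonal variances with the learned weights; the tensor shapes are assumptions of the example.

import torch

def mix_branches(mus, variances, w):
    # mus, variances: per-branch means and diagonal variances, shape (B, batch, dim);
    # w: branch weights of shape (B,), non-negative and summing to one.
    mu = (w.view(-1, 1, 1) * mus).sum(dim=0)
    var = ((w ** 2).view(-1, 1, 1) * variances).sum(dim=0)
    return mu + var.sqrt() * torch.randn_like(mu)        # reparameterized sample of s_t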
The objective below can be used to jointly update wb's, ψ and ϕb's, i.e.,
max_{ψ,ϕ,w} LVLBM(ψ, ϕ1, . . . , ϕB, w1, . . . , wB)
= max_{ψ,ϕ,w} ( ∑_{t=0}^{T} log pϕ(s^ϕ_t | z^{ϕ1}_t, . . . , z^{ϕB}_t) − C1 · ∑_b LRSA(h̃^{ϕb}_t, h^ψ_t; ψ, ϕb) + C2 · ∑_b LELBO(ψ, ϕb) ),
s.t. w1, . . . , wB > 0, ∑_b wb = 1, and constants C1, C2 > 0. (10)
Though the first term above already propagates through all wb's and ϕb's, the third term and the constraints over the wb's regularize the ϕb in each individual branch such that they are all trained toward maximizing the likelihood pϕb(s^{ϕb}_t | z^{ϕb}_t). Pseudo-code for training and evaluating the VLBM can be found in Appendix C. Further, in practice, one can define wb = v_b² / (ϵ + ∑_b v_b²), with vb ∈ R the learnable variables and ϵ ∈ R, 0 < ϵ ≪ 1, a constant ensuring the denominator is greater than zero, to convert (10) into an unconstrained optimization problem and solve it using gradient descent. Lastly, note that complementary latent modeling methods, e.g., latent overshooting from Hafner et al. (2019), could be adopted in (10). However, we keep the objective straightforward, so that the source of performance improvements can be isolated.
4For simplicity, the parts generating rewards are omitted without loss of generality.
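As a sketch (not the exact implementation), the unconstrained parameterization of the branching weights described in the paragraph above can be written as:

import torch

B = 10                                       # number of branches used in the experiments
v = torch.nn.Parameter(torch.randn(B))       # learnable variables v_b
eps = 1e-8                                   # small constant keeping the denominator positive
w = v ** 2 / (eps + (v ** 2).sum())          # w_b >= 0 and sums (approximately) to one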
3 EXPERIMENTS
To evaluate the VLBM, we follow the guidelines from the deep OPE (DOPE) benchmark (Fu et al., 2020b). Specifically, we follow the D4RL branch in DOPE and use the Gym-Mujoco and Adroit suites as the test base (Fu et al., 2020a). Such environments have long horizons and high-dimensional state and action spaces, which are usually challenging for model-based methods. The provided offline trajectories for training are collected using behavioral policies at varied scales, including limited exploration, human teleoperation, etc., which can result in different levels of
coverage over the state-action space. Also, the target (evaluation) policies are generated using online RL training, aiming to reduce the similarity between the behavioral and target policies; this introduces another challenge, in that during evaluation the agent may visit states unseen in the training trajectories.
Environmental and Training Setup. A total of 8 environments are provided by the Gym-Mujoco and Adroit suites (Fu et al., 2020b;a). Moreover, each environment is provided with 5 (for Gym-Mujoco) or 3 (for Adroit) training datasets collected using different behavioral policies, resulting in a total of 32 env-dataset tasks⁵ – a full list can be found in Appendix A. DOPE also provides 11 target policies for each environment, whose performance is to be evaluated by the OPE methods. They in general result in varied scales of returns, as shown in the x-axes of Fig. 7. Moreover, we consider the decoder to have B = 10 branches, i.e., {pϕ1, . . . , pϕ10}. The dimension of the latent space is set to 16, i.e., z ∈ Z ⊂ R^16. Other implementation details can be found in Appendix A. Baselines and Evaluation Metrics. In addition to the five baselines reported from DOPE, i.e., importance sampling (IS) (Precup, 2000), doubly robust (DR) (Thomas & Brunskill, 2016), variational power method (VPM) (Wen et al., 2020), distribution correction estimation (DICE) (Yang et al., 2020), and fitted Q-evaluation (FQE) (Le et al., 2019), the effectiveness of VLBM is also compared against the state-of-the-art model-based OPE method leveraging the auto-regressive (AR) architecture (Zhang et al., 2020a). Specifically, for each task we train an ensemble of 10 AR models, for fair comparisons against VLBM which leverages the branching architecture; see Appendix A for details of the AR ensemble setup. Following the DOPE benchmark (Fu et al., 2020b), our evaluation metrics include rank correlation, regret@1, and mean absolute error (MAE). VLBM and all baselines are trained using 3 different random seeds over each task, leading to the results reported below.
Ablation. Four ablation baselines are also considered, i.e., VLM, VLM+RSA, VLM+RSA(MSE) and VLM+RSA Ensemble. Specifically, VLM refers to the model introduced in Sec. 2.2, trained toward maximizing only the ELBO, i.e., (6). Note that, arguably, VLM could be seen as the generalization of directly applying latent-models proposed in existing RL policy optimization literature (Lee et al., 2020; Hafner et al., 2019; 2020a;b; Lu et al., 2022); details can be found in Sec. 4 below. The VLM+RSA ablation baseline follows the same model architecture as VLM, but is trained to optimize over both ELBO and recurrent state alignment (RSA) as introduced in (8), i.e., branching is not used comparing to VLBM. The design of these two baselines can help analyze the effectiveness of the RSA
5From now on the dataset names are abbreviated by their initials, e.g., Ant-M-R refers to Ant-Medium-Replay.
loss term and branching architecture introduced in Sec. 2.3 and 2.4. Moreover, VLM+RSA(MSE) uses mean squared error to replace the pairwise loss introduced in (7), and the VLM+RSA Ensemble applies classic ensembles by averaging over B VLM+RSA models end-to-end, instead of branching from decoder as in VLBM. These two ablation baselines can help justify the use of pairwise loss for RSA, and the benefit of using branching architecture over classic ensembles.
Results. Fig. 3 shows the mean overall performance attained by VLBM and baselines over all 32 Gym-Mujoco and Adroit tasks. In general VLBM leads to significantly increased rank correlations and decreased regret@1's over existing methods, with MAEs maintained at the state-of-the-art level. Specifically, VLBM achieves state-of-the-art performance in 31, 29, and 15 (out of 32) tasks in terms of rank correlation, regret@1 and MAE, respectively. Performance for each task can be found in Tables 1-6 at the end of the Appendices. Note that results for IS, VPM, DICE, DR, and FQE are obtained directly from the DOPE benchmark (Fu et al., 2020b), since the same experimental setup is considered. Fig. 4 and 5 visualize
the mean performance for each Gym-Mujoco and Adroit environment respectively, over all the associated datasets. It can be also observed that the model-based and FQE baselines generally perform better than the other baselines, which is consistent with findings from DOPE.
The fact that VLM+RSA outperforms the VLM ablation baseline, as shown in Fig. 4, illustrates the need for the RSA loss term to smooth the flow of information between the encoder and decoder in the latent space. Moreover, one can observe that VLM+RSA(MSE) sometimes performs worse than VLM, and significantly worse than VLM+RSA in general. Specifically, it has been found that, compared to VLM and VLM+RSA respectively, VLM+RSA(MSE) significantly worsens at least two metrics in 7 and 12 (out of 20) Gym-Mujoco tasks; detailed performance over these tasks can be found in Tables 1-6 at the end of the Appendices. Such a finding backs up the design choice of using the pairwise loss for RSA instead of MSE, as MSE could be too strong a regularizer for the LSTM recurrent states of the encoder and decoder, while the pairwise loss only enforces structural similarities. Moreover, VLBM improves rank correlations and regrets significantly compared to VLM+RSA, illustrating the importance of the branching architecture. In the paragraph below, we show empirically the benefits brought in by branching over classic ensembles.
Branching versus Classic Ensembles. Fig. 4 shows that the VLM+RSA Ensemble does not improve performance over VLM+RSA in general, and even leads to worse overall rank correlations and regrets in the Walker2d and Hopper environments. This supports the rationale provided in Sec. 2.4 that each decoder still samples from a different latent space exclusively, and averaging over the output distributions may not help reduce the disturbance brought in by the modeling artifacts under the variational inference framework, e.g., random weight initializations (Hanin & Rolnick, 2018; Rossi et al., 2019). In contrast, the VLBM leverages the branching architecture, allowing all the branches to sample from the same latent space formulated by the encoder. Empirically, we find that the branching weights, wb's in (9), allow VLBM to kill branches that are not helpful toward reconstructing the trajectories accurately, possibly overcoming bad initializations, etc. Over all 32 tasks we consider, most VLBMs only keep 1-3 branches (out of 10), i.e., wb < 10^−5 for all other branches. The distribution of all wb's, from VLBMs trained on the 32 tasks, is shown in Fig. 6; one can observe that most of the wb's are close to zero, while the others generally fall in the ranges (0, 0.25] and [0.75, 1).
AR ensembles also lead to compelling rank correlations and regrets, but attain much smaller margins in MAEs over other baselines in general; see Fig. 3. From Fig. 7, one can observe that they tend to significantly under-estimate most of the high-performing policies. Scatter plots for the other tasks can be found in Appendix A, which also show this trend. The reason could be that their model architecture and training objectives are designed to directly learn the transitions of the MDP; thus, they may produce biased predictions when the target policies lead to visitation of states that are not substantially represented in the training data, since such data are obtained using behavioral policies that are sub-optimal. In
contrast, the VLBM can leverage RSA and branching against such situations, thus outperforming AR ensembles in most of the OPE tasks in terms of all metrics we considered. Interestingly, Fig. 7 also shows that latent models could sometimes over-estimate the returns. For example, in Hopper-M-E and Walker2d-M-E, VLM tends to over-estimate most policies. The VLBM performs consistently well in Hopper-M-E, but is mildly affected by such an effect in Walker2d-M-E, though over fewer policies and smaller margins. It has been found that variational inference may fall short in approximating true distributions that are asymmetric, and produce biased estimations (Yao et al., 2018). So the hypothesis would be that the dynamics used to define certain environments may lead to asymmetry in the true posterior p(zt|zt−1, at−1, st), which could be hard to be captured by the latent modeling framework we consider. More comprehensive understanding of such behavior can be explored in future work. However, the VLBM still significantly outperforms VLM overall, and achieves top-performing rank correlations and regrets; such results illustrate the VLBM’s improved robustness as a result of its architectural design and choices over training objectives.
t-SNE Visualization of the Latent Space. Fig. 8 illustrates t-SNE visualization of the latent space by rolling out trajectories using all target policies respectively, followed by feeding the state-action pairs into the encoder of VLBM which maps them into the latent space. It shows the encoded state-action pairs induced from policies with similar performance are in general swirled and clustered together, illustrating that VLBM can learn expressive and disentangled representations of its inputs.
4 RELATED WORK
Latent Modeling in RL. Though variational inference has rarely been explored to facilitate modelbased OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), Dreamer (Hafner et al., 2020a;b). Below we discuss the connections and distinctions between VLBM and the latent models leveraged by them, with a detailed overview of these methods provided in Appendix G. Specifically, SLAC and SOLAR learn latent representations of the dynamics jointly with optimization of the target policies, using the latent information to improve sample efficiency. Similarly, LatCo performs trajectory optimization over the latent space to allow for temporarily bypassing dynamic constraints. As a result, latent models used in such methods are not designed toward rolling out trajectories independently, as opposed to the use of VLBM in this paper. PlaNet and Dreamer train the recurrent state space model (RSSM) using a growing experience dataset collected by the target policy that is being concurrently updated (with exploration noise added), which requires online data collection. In contrast, under the OPE setup, VLBM is trained over a fixed set of offline trajectories collected over unknown behavioral policies. Moreover, note that the VLM baseline is somewhat reminiscent of the RSSM and similar ones as in Lee et al. (2020); Lu et al. (2022), however, experiments above show that directly using VLM for OPE could lead to subpar performance. On the other hand, though MOPO (Yu et al., 2020), LOMPO (Rafailov et al., 2021) and COMBO (Yu et al., 2021) can learn from offline data, they focus on quantifying the uncertainty of model’s predictions toward next states and rewards, followed by incorporating them into policy optimization objectives to penalize for visiting regions where transitions are not fully captured; thus, such works are also orthogonal to the use case of OPE. OPE. Classic OPE methods adopt IS to estimate expectations over the unknown visitation distribution over the target policy, resulting in weighted IS, step-wise IS and weighted step-wise IS (Precup, 2000). IS can lead to estimations with low (or zero) bias, but with high variance (Kostrikov & Nachum, 2020; Jiang & Li, 2016), which sparks a long line of research to address this challenge. DR methods propose to reduce variance by coupling IS with a value function approximator (Jiang & Li, 2016; Thomas & Brunskill, 2016; Farajtabar et al., 2018). However, the introduction of such approximations may increase bias, so the method proposed in Tang et al. (2019) attempts to balance the scale of bias and variance for DR. Unlike IS and DR methods that require the behavioral policies to be fully known, DICE family of estimators (Zhang et al., 2020c;b; Yang et al., 2021; 2020; Nachum et al., 2019; Dai et al., 2020) and VPM (Wen et al., 2020) can be behavioral-agnostic; they directly capture marginalized IS weights as the ratio between the propensity of the target policy to visit particular state-action pairs, relative to their likelihood of appearing in the logged data. There also exist FQE methods which extrapolate policy returns from approximated Q-functions (Hao et al., 2021; Le et al., 2019; Kostrikov & Nachum, 2020). 
Existing model-based OPE methods are designed to directly fit MDP transitions using feed-forward (Fu et al., 2020b) or auto-regressive (Zhang et al., 2020a) models, and have shown promising results over model-free methods as reported in a recent benchmark (Fu et al., 2020b). However, such model-based approaches could be sensitive to the initialization of weights (Hanin & Rolnick, 2018; Rossi et al., 2019) and produce biased predictions, due to the limited coverage over the state and action space provided by offline trajectories (Fu et al., 2020b). Instead, VLBM mitigates such effects by capturing the dynamics over the latent space, such that states and rewards are evolved from a compact feature space over time. Moreover, RSA and the branching can lead to increased expressiveness and robustness, such that future states and rewards are predicted accurately. There also exist OPE methods proposed for specific applications (Chen et al., 2022; Saito et al., 2021; Gao et al., 2023; 2022b).
5 CONCLUSION AND FUTURE WORK
We have developed the VLBM, which can accurately capture the dynamics underlying environments from offline training data that provide limited coverage of the state and action space; this is achieved by using the RSA term to smooth out the information flow from the encoder to the decoders in the latent space, as well as the branching architecture, which improves VLBM's robustness against random initializations. We have followed the evaluation guidelines provided by the DOPE benchmark, and experimental results have shown that the VLBM generally outperforms the state-of-the-art model-based OPE method using AR architectures, as well as other model-free methods. VLBM can also facilitate off-policy optimization, which can be explored in future work. Specifically, VLBM can serve as a synthetic environment on which optimal controllers (e.g., linear-quadratic regulators) can be deployed. On the other hand, similar to Dreamer and SLAC, policies can be updated jointly with training of VLBM, but without the need for online interactions with the environment during training.
ACKNOWLEDGMENTS
This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, and by the NSF CNS-1652544, CNS-1837499, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.
A ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS
Additional Results and Discussions. Rank correlations, regret@1 and MAEs for all 32 tasks are documented in Tables 1- 6 below.6 The mean and standard deviation (in subscripts) over 3 random seeds are reported. Note that in each column, performance of multiple methods may be highlighted in bold, meaning they all achieve the best performance and do not significantly outperform each other. The fact that VLBM outperforms the ablation baselines in most cases suggests that the RSA loss term and branching architecture can effectively increase model expressiveness, and allow to learn the dynamics underlying the MDP more accurately and robustly from offline data that provide limited exploration coverage. Yet, smaller margins are attained between the VLBM and VLM+RSA in Hopper-M-E and Hopper-M. It is likely because Hopper has relatively lower dimensional state space compared to the other three environments, from which the underlying dynamics can be sufficiently captured by the VLM+RSA. Fig. 10 and 11 shows the correlation between estimated (y-axis) and true returns (x-axis) for all the OPE tasks we consider. It can be found that for Halfcheetah-R, -M-R, -M, most of the model-based methods cannot significantly distinguish the returns across target policies. The cause could be that the offline trajectories provided for this task are relatively more challenging, compared to the other OPE tasks. Such an effect appears to affect IS, VPM, DICE, DR and FQE at larger scale. It can be observed from the scatter plots reported in the DOPE benchmark (Fu et al., 2020b) that these methods could hardly tell the scale of returns across different target policies; as the dots almost form a horizontal line in each plot. However, the estimated returns from VLBM and IS still preserve the rank, which leads to high rank correlations and low regrets.
Implementation Details and Hyper-parameter. The model-based methods are evaluated by directly interacting with each target policy for 50 episodes, and the mean of discounted total returns (γ = 0.995) over all episodes is used as estimated performance for the policy. We choose the neural network architectures as follows. For the components involving LSTMs, which include qψ(zt|zt−1, at−1, st) and pϕ(zt|zt−1, at−1), their architecture include one LSTM layer with 64 nodes, followed by a dense layer with 64 nodes. All other components do not have LSTM layers involved, so they are constituted by a neural network with 2 dense layers, with 128 and 64 nodes respectively. The output layers that determine the mean and diagonal covariance of diagonal Gaussian distributions use linear and softplus activations, respectively. The ones that determine the mean of Bernoulli distributions (e.g., for capturing early termination of episodes) are configured to use sigmoid activations. VLBM and the two ablation baselines, VLM and VLM+RSA, are trained using offline trajectories provided by DOPE, with max_iter in Alg. 1 set to 1,000 and minibatch size set to 64. Adam optimizer is used to perform gradient descent. To determine the learning rate, we perform grid search among {0.003, 0.001, 0.0007, 0.0005, 0.0003, 0.0001, 0.00005}. Exponential decay is applied to the learning rate, which decays the learning rate by 0.997 every iteration. To train VLBM, we set the constants from equation 10 following C1 = C2, and perform grid search among
6Some VPM entries are absent since they were not reported in Fu et al. (2020b), nor the code is open-sourced.
{5, 1, 0.1, 0.05, 0.01, 0.005, 0.001, 0.0001}. To train VLM+RSA, the constant C from equation 8 is determined by grid search among the same set of parameters above. L2-regularization with decay of 0.001 and batch normalization are applied to all hidden layers. Consider that some of the environments (e.g., Ant, Hopper, Walker2d, Pen) may terminate an episode, before timeout, if the state meets specific conditions; details for VLBM to capture such early termination behavior is introduced in Appendix D.
The DOPE Benchmark. The deep OPE (DOPE) benchmark (Fu et al., 2020b) provides standardized training and evaluation procedure for OPE works to follow, which facilitates fair and comprehensive comparisons among various OPE methods. Specifically, it utilizes existing environments and training trajectories provided by D4RL7 and RLUnplugged8, which are two benchmark suites for offline RL training, and additionally provide target policies for OPE methods to evaluate. In the D4RL branch, the training trajectories are originally collected from various sources including random exploration, human teleoperation, and RL-trained policies with limited exploration; thus, can provide varied levels of coverage over the state-action space. Moreover, the target policies are trained using online RL algorithms, which can in general lead to different state-action visitations than in the training trajectories. We leverage the D4RL branch as our test base, since the OPE tasks it provides are considered challenging, i.e., the limited coverage introduced by training data, as well as the discrepancy between the behavioral and target policies. Graphical illustrations of the Gym-Mujoco and Adroit environments considered are shown in Fig. 9. Details on the environments and datasets used are shown in Tables 7 and 8, from the perspectives of state and action dimensions, if episodes can be terminated before timeout, if controls are performed over continuous space, and the size of the offline trajectories used for training. In contrast, in the RLUnplugged branch, the training trajectories are always collected using online RL training, which can result in adequate coverage over the state-action space. The target policies are trained by applying offline RL over the training trajectories, so that behavioral and target policies can lead to similar state-action visitation distributions. As discussed in DOPE (Fu et al., 2020b), such tasks are suitable for studies where ideal data are needed, such as complexity comparisons.
Evaluation Metrics. Following (Fu et al., 2020b), we consider rank correlation, regret@1 and mean absolute error (MAE) as the evaluation metrics. Specifically, rank correlation measures the strength and direction of monotonic association between the rank of OPE-estimated returns and true returns over all target policies. It is captured by Spearman's correlation coefficient between the ordinal rankings of the estimated and true returns. Regret@1 is captured by the difference between the true return of the actual best policy and the true return of the policy whose OPE-estimated return is highest. In other words, regret@1 evaluates how much worse the policy resulting in the highest OPE-estimated return would perform compared to the actual best policy. The two metrics above evaluate how useful OPE would be to facilitate important applications such as policy selection. Finally, we also consider MAE, which is commonly used in estimation/regression tasks. Mathematical definitions of these metrics can be found in (Fu et al., 2020b).
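As a concrete (though simplified) sketch, the three metrics can be computed from per-policy estimated and true returns as follows; the function and variable names are illustrative.

import numpy as np
from scipy.stats import spearmanr

def ope_metrics(estimated, true):
    # estimated, true: arrays of per-policy returns for the same set of target policies.
    estimated, true = np.asarray(estimated), np.asarray(true)
    rank_corr, _ = spearmanr(estimated, true)              # Spearman rank correlation
    regret_at_1 = true.max() - true[estimated.argmax()]    # gap to the actual best policy
    mae = np.abs(estimated - true).mean()                  # mean absolute error
    return rank_corr, regret_at_1, mae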
Implementation of AR Ensembles. For fair comparisons with VLBM, in experiments we train an ensemble of the state-of-the-art model-based OPE method, auto-regressive (AR) models (Zhang et al., 2020a), as one of the baselines. Specifically, we train an ensemble of 10 AR models to learn p(st+1, rt|st, at) in an auto-regressive manner, with each individual model following the design introduced in (Zhang et al., 2020a), i.e.,
s^{(j)}_{t+1} ∼ p(s^{(j)}_{t+1} | st, at, s^{(1)}_{t+1}, . . . , s^{(j−1)}_{t+1}), (11)
with s^{(j)}_{t+1} representing the element located at the j-th dimension of the state variable, and D the dimension of the state space. The reward is treated as an additional dimension of the state, i.e., rt ∼ p(rt | st, at, s^{(1)}_{t+1}, . . . , s^{(D)}_{t+1}). However, the original literature (Zhang et al., 2020a) does not detail which specific ensemble architecture is used (e.g., overall averaging or weighted averaging). As a result, we choose the same weighted averaging procedure as used in VLBM branching, to sort out the influence of different ensemble architectures and facilitate fair comparisons. Specifically, a total of 10 AR models, parameterized by {θ1, . . . , θ10}, along with 10
7https://github.com/rail-berkeley/d4rl 8https://github.com/deepmind/deepmind-research/tree/master/rl_unplugged
weight variables {w^θ_1, . . . , w^θ_10 | ∑_i w^θ_i = 1}, are trained. Similar to the weighted averaging architecture used in VLBM, i.e., equation 9, the mean and variance of the prediction s^{(j)}_{t+1}, captured by the normal distribution N(µ, σ²), follow
µ = ∑_{i=1}^{10} w^θ_i · µ_{θi}(s^{(j)}_{t+1}),  σ² = ∑_{i=1}^{10} (w^θ_i)² · σ²_{θi}(s^{(j)}_{t+1}), (12)
where µ_{θi}(s^{(j)}_{t+1}) and σ²_{θi}(s^{(j)}_{t+1}) are the mean and variance produced from each individual AR model in the ensemble.
Training Resources. Training of the proposed method, and baselines, are facilitated by Nvidia Quadro RTX 6000, NVIDIA RTX A5000, and NVIDIA TITAN XP GPUs.
License. The use of DOPE9 and D4RL (Fu et al., 2020a) follow the Apache License 2.0.
9https://github.com/google-research/deep_ope
B MORE t-SNE VISUALIZATIONS
Figures 12 and 13 above visualize the latent space captured by two ablation baselines, VLM and VLM+RSA(MSE), respectively. It can be observed that the latent space captured by VLM is not as well disentangled as that of VLBM (shown in Figure 8), as the state-action pairs induced by policies with different levels of performance generally cluster together without explicit boundaries. Such a finding illustrates the importance of the RSA loss (7) empirically, as it can effectively regularize qψ(zt|zt−1, at−1, st) and allows the encoder to map the MDP states to an expressive and compact latent space from which the decoder can reconstruct states and rewards accurately. Moreover, Figure 13 shows that the latent representations of the state-action pairs captured by VLM+RSA(MSE) are distributed almost uniformly over the latent space. This justifies the rationale provided in Sec. 2.3 that MSE is too strong a regularizer for the hidden states of the encoder and decoder, and is also consistent with the results reported in Figure 3 that VLM+RSA(MSE) performs worse than VLM in general.
C ALGORITHMS FOR TRAINING AND EVALUATING VLBM
Algorithm 1 Train VLBM.
Input: Model weights ψ, ϕ1, . . . , ϕB , w1, . . . , wB , offline trajectories ρβ , and learning rate α. Begin:
1: Initialize ψ, ϕ1, . . . , ϕB, w1, . . . , wB
2: for iter in 1 : max_iter do
3:   Sample a trajectory [(s0, a0, r0, s1), . . . , (sT−1, aT−1, rT−1, sT)] ∼ ρβ
4:   z^ψ_0 ∼ qψ(z0|s0)
5:   z^{ϕb}_0 ∼ p(z0), for all b ∈ [1, B]
6:   Run the forward pass of VLBM following (3), (5) and (9) for t = 1 : T, and collect all variables needed to evaluate LVLBM as specified in (10).
7:   ψ ← ψ + α∇ψ LVLBM
8:   for b in 1 : B do
9:     ϕb ← ϕb + α∇ϕb LVLBM
10:    wb ← wb + α∇wb LVLBM
11:  end for
12: end for
Algorithm 2 Evaluate VLBM.
Input: Trained model weights ψ, ϕ1, . . . , ϕB, w1, . . . , wB
Begin:
1: Initialize the list that stores the accumulated returns over all episodes, R = []
2: for epi in 1 : max_epi do
3:   Initialize the variable r = 0 that tracks the accumulated return for the current episode
4:   Initialize latent states from the prior, i.e., z^{ϕb}_0 ∼ p(z0) for all b ∈ [1, B]
5:   Initialize LSTM hidden states h^{ϕb}_0 = 0 for all b ∈ [1, B]
6:   Sample s^{ϕb}_0 ∼ pϕ(s0|z^{ϕb}_0) for all b ∈ [1, B] and generate the initial MDP state s^ϕ_0 following (9)
7:   for t in 1 : T do
8:     Determine the action following the target policy π, i.e., a_{t−1} ∼ π(a_{t−1}|s^ϕ_{t−1})
9:     for b in 1 : B do
10:      Update h^{ϕb}_t, h̃^{ϕb}_t, z^{ϕb}_t, s^{ϕb}_t, r^{ϕb}_{t−1} following (5).
11:    end for
12:    Generate the next state s^ϕ_t following (9), as well as the reward r^ϕ_{t−1} ∼ pϕ(r_{t−1}|z^{ϕ1}_t, . . . , z^{ϕB}_t) = N( µ = ∑_b wb · µ(r^{ϕb}_{t−1}), Σdiag = ∑_b wb² · Σdiag(r^{ϕb}_{t−1}) )
13:    Update r ← r + γ^{t−1} r^ϕ_{t−1}, with γ being the discounting factor
14:  end for
15:  Append r into R
16: end for
17: Average over all elements in R, which serves as the estimated return over π
D EARLY TERMINATION OF ENVIRONMENTS
Some Gym-Mujoco environments, including Ant, Hopper, Walker2d and Pen, may terminate an episode before reaching the maximum number of steps if the state violates specific constraints. Below we introduce how VLM and VLBM can be enriched to capture such early termination behaviors.
VLM For VLM, we introduce an additional component d^ϕ_t ∼ pϕ(dt|z^ϕ_t) to the generative process in equation 5, where d^ϕ_t is a Bernoulli variable determining if an episode should be terminated at its t-th step. Specifically, pϕ(dt|z^ϕ_t) follows a Bernoulli distribution, with mean determined by an MLP with sigmoid activation applied to the output layer. As a result, the generative process now follows
h^ϕ_t = fϕ(h^ϕ_{t−1}, z^ϕ_{t−1}, a_{t−1}),  h̃^ϕ_t = gϕ(h^ϕ_t),  z^ϕ_t ∼ pϕ(zt|h̃^ϕ_t),
s^ϕ_t ∼ pϕ(st|z^ϕ_t),  r^ϕ_{t−1} ∼ pϕ(rt−1|z^ϕ_t),  d^ϕ_t ∼ pϕ(dt|z^ϕ_t),  at ∼ π(at|s^ϕ_t). (13)
Moreover, we add a new term to VLM's training objective, in order to update the component introduced above during training, i.e.,
L^{early_term}_{VLM}(ψ, ϕ) = LVLM(ψ, ϕ) + ∑_{t=0}^{T} log pϕ(dt|zt), (14)
with LVLM(ψ, ϕ) being the original objective of VLM, as presented in equation 8.
VLBM For VLBM, the termination of an episode is determined following
d^ϕ_t ∼ pϕ(dt | z^{ϕ1}_t, . . . , z^{ϕB}_t) = Bernoulli( µ = ∑_b wb · µd(d^{ϕb}_t) ), (15)
where µd(d^{ϕb}_t) = ϕ^MLP_{b,µd}(z^{ϕb}_t) is the mean of d^{ϕb}_t produced from the b-th branch of the decoder, and ϕ^MLP_{b,µd} is the corresponding MLP that maps z^{ϕb}_t to µd(d^{ϕb}_t). To update the components involved in the procedure above, we introduce a new term to the VLBM's objective, i.e.,
L^{early_term}_{VLBM}(ψ, ϕ1, . . . , ϕB, w1, . . . , wB) (16)
= LVLBM(ψ, ϕ1, . . . , ϕB, w1, . . . , wB) + ∑_{t=0}^{T} log pϕ(d^ϕ_t | z^{ϕ1}_t, . . . , z^{ϕB}_t), (17)
with LVLBM being the original objective of VLBM, as presented in equation 10.
E BOUND DERIVATION
We now derive the evidence lower bound (ELBO) for the joint log-likelihood distribution, i.e.,
log pϕ(s0:T, r0:T−1) (18)
= log ∫_{z1:T ∈ Z} pϕ(s0:T, z1:T, r0:T−1) dz (19)
= log ∫_{z1:T ∈ Z} [ pϕ(s0:T, z1:T, r0:T−1) / qψ(z0:T | s0:T, a0:T−1) ] qψ(z0:T | s0:T, a0:T−1) dz (20)
≥ E_{qψ}[ log p(z0) + log pϕ(s0:T, z1:T, r0:T−1 | z0) − log qψ(z0:T | s0:T, a0:T−1) ] (21)
= E_{qψ}[ log p(z0) + log pϕ(s0|z0) + ∑_{t=1}^{T} log pϕ(st, zt, rt−1 | zt−1, at−1) − log qψ(z0|s0) − ∑_{t=1}^{T} log qψ(zt | zt−1, at−1, st) ] (22)
= E_{qψ}[ log p(z0) − log qψ(z0|s0) + log pϕ(s0|z0) + ∑_{t=1}^{T} log( pϕ(st|zt) pϕ(rt−1|zt) pϕ(zt|zt−1, at−1) ) − ∑_{t=1}^{T} log qψ(zt | zt−1, at−1, st) ] (23)
= E_{qψ}[ ∑_{t=0}^{T} log pϕ(st|zt) + ∑_{t=1}^{T} log pϕ(rt−1|zt) − KL( qψ(z0|s0) || p(z0) ) − ∑_{t=1}^{T} KL( qψ(zt|zt−1, at−1, st) || pϕ(zt|zt−1, at−1) ) ]. (24)
Note that the transition from equation 20 to equation 21 follows Jensen’s inequality.
F BASICS OF VARIATIONAL INFERENCE
Classic variational auto-encoders (VAEs) are designed to generate synthetic data that share similar characteristics with the ones used for training (Kingma & Welling, 2013). Specifically, VAEs learn an approximated posterior qψ(z|x) and a generative model pϕ(x|z), over the prior p(z), with x being the data and z the latent variable. Its true posterior pϕ(z|x) is intractable, i.e.,
pϕ(z|x) = pϕ(x|z) p(z) / pϕ(x); (25)
since the marginal likelihood in the denominator, pϕ(x) = ∫_z pϕ(x|z) p(z) dz, requires integration over the unknown latent space. For the same reason, VAEs cannot be trained to directly maximize the marginal log-likelihood, max log pϕ(x). To resolve this, one could maximize a lower bound of pϕ(x), i.e.,
max_{ψ,ϕ} −KL(qψ(z|x) || p(z)) + E_{qψ}[log pϕ(x|z)], (26)
which is the evidence lower bound (ELBO).
Reparameterization. During training, it is required to sample from qψ(z|x) and pϕ(x|z) constantly. The reparameterization technique is introduced in (Kingma & Welling, 2013), to ensure that the gradients can flow through such sampling process during back-propagation. For example, if both distributions (qψ(z|x) and pϕ(x|z)) follow diagonal Gaussians, with mean and diagonal covariance determined by MLPs, i.e.,
z ∼ qψ(z|x) = N( µ = ψ^MLP_µ(x), Σ = ψ^MLP_Σ(x) ), (27)
x ∼ pϕ(x|z) = N( µ = ϕ^MLP_µ(z), Σ = ϕ^MLP_Σ(z) ); (28)
here, ψ^MLP_µ, ψ^MLP_Σ, ϕ^MLP_µ, ϕ^MLP_Σ are the MLPs that generate the means and covariances. The sampling processes above can be captured by reparameterization, i.e.,
z = ψ^MLP_µ(x) + ψ^MLP_Σ(x) · ϵ, (29)
x = ϕ^MLP_µ(z) + ϕ^MLP_Σ(z) · ϵ, (30)
with ϵ ∼ N (0, I). Consequently, the gradients over ψ and ϕ can be calculated following the chain rule, and used for back-propagation during training. We direct readers to (Kingma & Welling, 2013) for a comprehensive review of reparameterization.
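As a one-line sketch of the trick in (29)-(30), a reparameterized sample can be drawn as follows; the helper name is illustrative.

import torch

def reparameterized_sample(mu, std):
    # z = mu + std * eps, eps ~ N(0, I); gradients flow through mu and std.
    return mu + std * torch.randn_like(mu)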
G ADDITIONAL RELATED WORKS
Overview of latent-model-based RL methods. In SLAC, latent representations are used to improve the sample efficiency of model-free RL training algorithms, by jointly modeling and learning dynamics and controls over the latent space. Similarly, SOLAR improves data efficiency for multi-task RL by first learning high-level latent representations of the environment, which can be shared across different tasks. Then, local dynamics models are inferred from the abstraction, with controls solved by linear-quadratic regulators. PlaNet and Dreamer further improve the architecture and training objectives of latent models, allowing them to look ahead multiple steps and plan over longer horizons. There also exists LatCo, which directly performs trajectory optimization over the latent space, allowing the agent to temporarily bypass dynamical constraints and quickly navigate to the high-reward regions in the early training stage. To summarize, the methods above leverage latent representations to gain sufficient exploration coverage and quickly navigate to high-reward regions, improving sample efficiency for policy optimization. Note that they mostly require online interactions with the environment to formulate a growing experience replay buffer for policy learning, which differs from OPE, which requires learning from a fixed set of offline trajectories. | 1. What is the focus and contribution of the paper on off-policy evaluation?
2. What are the strengths and weaknesses of the proposed variational latent branching model?
3. Do you have any concerns regarding the limitation of the technical novelty or the lack of explanation for the core issue of OPE?
4. Are there any typos or unclear points in the paper that need clarification?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on the model-based off-policy evaluation (OPE) setting, where there exist both limited coverage of the state-action space and sensitivity to initialization when training the transition model. To address these, this paper presents the variational latent branching model (VLBM) to obtain as much information as possible from the limited offline data and improve its robustness, through the variational inference framework with the Recurrent State Alignment and the Branching Architecture of decoders, respectively.
Strengths And Weaknesses
Strength:
The proposed method is very detailed and sounds reasonable, and the empirical results and ablation studies are sufficient to verify its effectiveness;
Weaknesses:
technical novelty is limited: the proposed method is basically a straightforward extension of the VAE model, in the sense that the latent variable is extended to a sequence of latent variables, where both the encoder and decoder are implemented with LSTMs.
The main concern of this paper is how to train a better transition model, but why this is the core issue of OPE needs further explanation. I think that it's better to evaluate the presented method across more model-based RL applications rather than OPE.
typos: In formula (4), the left side doesn't contain a_{t}, but the right side contains a_{t}.
In Fig.6, one can observe that only two weights are relatively large - what does this mean? How does this affect performance?
it is claimed that "From Fig. 7, one can observe that it tends to significantly under-estimate", why? It seems that there are both over-estimates and under-estimates in Fig.7.
in eq.(3), why is a diagonal Gaussian chosen to model the latent transition model? It could be over-simplified in some complex environments.
Clarity, Quality, Novelty And Reproducibility
fair |
ICLR | Title
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Abstract
Gradient inversion attack enables recovery of training samples from model updates in federated learning (FL) and constitutes a serious threat to data privacy. To mitigate this vulnerability, prior work proposed both principled defenses based on differential privacy, as well as heuristic defenses based on gradient compression as countermeasures. These defenses have so far been very effective, in particular those based on gradient compression that allow the model to maintain high accuracy while greatly reducing the attack’s effectiveness. In this work, we argue that such findings do not accurately reflect the privacy risk in FL, and show that existing defenses can be broken by a simple adaptive attack that trains a model using auxiliary data to learn how to invert gradients on both vision and language tasks.
1 INTRODUCTION
Federated learning (FL; (McMahan et al., 2017)) is a popular framework for distributed model training on sensitive user data. Instead of centrally storing the training data, FL operates in a serverclient setting where the server hosts the model and has no direct access to the data. The clients can apply the model on their private data and send gradient updates back to the server. This learning regime promises data privacy as users only share gradients but never any raw data. However, recent work (Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020) showed that despite these efforts, the server can still recover the training data from gradient updates, violating the promise of data privacy in FL. These so-called gradient inversion attacks operate by optimizing over the input space to find training samples whose gradient matches that of the observed gradient, and such attacks remain effective even when clients utilize secure aggregation (Bonawitz et al., 2016) to avoid revealing individual updates (Yin et al., 2021; Jeon et al., 2021).
As countermeasures against these gradient inversion attacks, prior work proposed both principled defenses based on differential privacy (Abadi et al., 2016), as well as heuristics that compress the gradient update through gradient pruning (Aji & Heafield, 2017) or sign compression (Bernstein et al., 2018). In particular, gradient compression defenses have so far enjoyed great success, severely hindering the effectiveness of existing optimization-based attacks (Zhu et al., 2019; Jeon et al., 2021) while maintaining close to the same level of accuracy for the trained model. As a result, these limitations seemingly diminish the threat of gradient inversion in practical FL applications.
In this paper we argue that evaluating defenses on existing optimization-based attacks may provide a false sense of security. To this end, we propose a simple learning-based attack—which we call Learning To Invert (LTI)—that trains a model to learn how to invert the gradient update to recover client samples; see Figure 1 for an illustration. We assume that the adversary (i.e., the server) has access to an auxiliary dataset whose distribution is similar to that of the private data, and use it to generate training samples for the gradient inversion model by querying the global model for gradients. Our attack is highly adaptable to different defenses since applying a defense simply amounts to training data augmentation for the gradient inversion model.
We empirically demonstrate that LTI can successfully circumvent defenses based on gradient perturbation (i.e., using differential privacy; (Abadi et al., 2016)), gradient pruning (Aji & Heafield, 2017) and sign compression (Bernstein et al., 2018) on both vision and language tasks.
• Vision: We evaluate on the CIFAR10 (Krizhevsky et al., 2009) classification dataset. LTI attains recovery accuracy close to that of the best optimization-based method when no defense is applied, and significantly outperforms all prior attacks under defense.
• NLP: We experiment with causal language model training on the WikiText (Merity et al., 2016) dataset, where LTI attains state-of-the-art performance in all settings, with or without defense.
Given the strong empirical performance of LTI and its adaptability to different learning tasks and defense mechanisms, we advocate for its use as a simple baseline for future studies on gradient inversion attacks in FL.
2 BACKGROUND
Federated learning. The objective of federated learning (McMahan et al., 2017) is to train a machine learning model in a distributed fashion without centralized collection of training data. In detail, let fw be the global model parameterized by w, and consider a supervised learning setting that optimizes w by minimizing a loss function ℓ over the training set Dtrain: ∑_{(x,y)∈Dtrain} ℓ(fw(x), y). In centralized learning this is typically done by computing a stochastic gradient (1/B) ∑_{i=1}^{B} ∇w ℓ(fw(xi), yi) over a randomly drawn batch of data (x1, y1), . . . , (xB, yB) and minimizing ℓ using gradient descent.
In FL, instead of centrally collecting Dtrain to draw a random batch during training, the training set Dtrain is distributed across multiple clients and the model fw is stored on a central server. At each iteration, the model parameter w is transmitted to each client to compute the per-sample gradients {∇wℓ(fw(xi), yi)}Bi=1 locally over a set of clients. The server and clients then execute a federated aggregation protocol to compute the average gradient for the gradient descent update. A major advantage of FL is data privacy since clients do not need to disclose their data explicitly, but rather only send their gradient ∇wℓ(fw(xi), yi) to the server. Techniques such as secure aggregation (Bonawitz et al., 2016) and differential privacy (Dwork et al., 2006; 2014) can further reduce the privacy leakage from sending this gradient update.
Gradient inversion attack. Despite the promise of data privacy in FL, recent work showed that the heuristic of sending gradient updates instead of training samples themselves in fact provides a false sense of security. Zhu et al. (2019) showed in their seminal paper that it is possible for the server to recover the full batch of training samples given aggregated gradients. These optimizationbased gradient inversion attacks operate by optimizing a set of dummy data x̃1, . . . , x̃B and labels ỹ1, . . . , ỹB to match their gradient to the observed gradient:
min_{x̃} ‖ ∑_{i=1}^{B} ∇w ℓ(fw(x̃i), ỹi) − ∑_{i=1}^{B} ∇w ℓ(fw(xi), yi) ‖²₂ . (1)
For image tasks, since Equation 1 is differentiable in x̃i and ỹi and the model parameter w is known to the server, the server can optimize Equation 1 using gradient-based search. Doing so yields recovered samples (x̃i, ỹi) that closely resemble actual samples (xi, yi) in the batch. In practice this approach is highly effective, and follow-up works proposed several optimizations to further improve its recovery accuracy (Geiping et al., 2020; Yin et al., 2021; Jeon et al., 2021).
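A minimal sketch of this gradient-matching optimization is given below, assuming the labels are known for simplicity; real attacks add regularizers (e.g., total variation) and label recovery, and the function names here are illustrative.

import torch

def invert_gradients(model, loss_fn, target_grads, x_shape, y, steps=1000, lr=0.1):
    # Optimize dummy inputs so that their gradient matches the observed gradient (Equation 1).
    params = list(model.parameters())
    x = torch.randn(x_shape, requires_grad=True)                # dummy data x~
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        grads = torch.autograd.grad(loss_fn(model(x), y), params, create_graph=True)
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        opt.zero_grad()
        match.backward()                                        # second-order backprop into x
        opt.step()
    return x.detach()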
For language tasks this optimization problem is considerably more complex since the samples x1, . . . ,xB are sequences of discrete tokens, and optimizing Equation 1 amounts to solving a discrete optimization problem. To circumvent this difficulty, Zhu et al. (2019) and Deng et al. (2021) instead optimize the token embeddings to match the observed gradient and then maps the recovered embeddings to their closest tokens in the embedding layer to recover the private text. In contrast, Gupta et al. (2022) leveraged the insight that gradient of the token embedding layer can be used to recover exactly the set of tokens present in the training sample, and then uses beam search to optimize the ordering of tokens for fluency to recover the private text.
Gradient inversion under the malicious server setting. The aforementioned gradient inversion attacks operate under the honest-but-curious setting where the server faithfully executes the federated learning protocol, but attempts to extract private information from the observed gradients. Fowl et al. (2021), Boenisch et al. (2021) and Fowl et al. (2022) consider a stronger malicious server threat model that allows the server to transmit arbitrary model parameters w to the clients. Under this threat model, it is possible to carefully craft the model parameters so that the training sample can be recovered exactly from its gradient even when the batch size B is large. While this setting is certainly realistic and relevant, our paper operates under the weaker honest-but-curious threat model.
3 LEARNING TO INVERT: LEARNING-BASED GRADIENT INVERSION ATTACKS
Motivation. The threat of gradient inversion attack has prompted prior work to employ defense mechanisms to mitigate this privacy risk in FL (Zhu et al., 2019; Jeon et al., 2021). Intuitively, such defenses reduce the amount of information contained in the gradient about the training sample by either perturbing the gradient with noise (Abadi et al., 2016) or compressing them (Aji & Heafield, 2017; Bernstein et al., 2018), making recovery much more difficult. However, doing so also reduces the amount of information a sample can provide for training the global model, and hence has a negative impact on the model’s performance. This is certainly true for principled defenses based on differential privacy (Dwork et al., 2006) such as gradient perturbation (Abadi et al., 2016), however, defenses based on gradient compression seemingly provide a much better privacy-utility trade-off, effectively preventing the attack with minor reduction in model performance (Zhu et al., 2019).
The empirical success of existing defenses seemingly diminish the threat of gradient inversion in FL, especially since gradient compression (Aji & Heafield, 2017; Bernstein et al., 2018) is already commonplace in practical FL applications to reduce communication cost. However, we argue that optimization-based attacks underestimate the power of the adversary: If the adversary has access to an auxiliary dataset Daux, they can train a gradient inversion model to recover Daux from its gradients computed on the global model. As we will establish later, this greatly empowers the adversary, exposing existing risks to federate learning.
Threat model. We consider the setting where the adversary is an honest-but-curious server, who executes the learning protocol faithfully but aims to extract private training data from the observed gradients. We also assume that the FL protocol does not leverage secure aggregation, so per-client gradients are revealed to the server. Under these assumptions, in each FL iteration the adversary
has the knowledge of model weights w and the gradients ∇wℓ(fw(x), y) for each sample (x, y) in the batch. Moreover, we assume the adversary has an auxiliary dataset Daux, which could be in-distribution or a mixture of in-distribution and out-of-distribution data. This assumption is similar to the setting in Jeon et al. (2021), which assumes a generative model that is trained from the in-distribution data, and is common in the study of other privacy attacks such as membership inference (Shokri et al., 2017).
Learning to invert (LTI). Since the adversary has knowledge of the model weights, he/she is able to generate the gradient ∇wℓ(fw(xaux), yaux) for each sample (xaux, yaux) in the auxiliary dataset. This allows the adversary to learn a gradient inversion model gθ, parameterized by θ, to predict the data point (xaux, yaux) from the gradient of the global model ∇wℓ(fw(xaux), yaux) by solving the following learning problem:
min_θ ∑_{(xaux, yaux) ∈ Daux} ℓattack( gθ( ∇w ℓ(fw(xaux), yaux) ), (xaux, yaux) ). (2)
In practice, ℓattack can be the cross-entropy (for discrete input) or squared-loss (for continuous-valued input) function and we find that using a multi-layer perceptron (MLP) (Bishop et al., 1995) for gθ is effective empirically. Importantly, when a defense mechanism such as gradient perturbation or gradient compression is applied, we can apply the same transformation to ∇wℓ(fw(xaux), yaux) to augment the training data for gθ to carry out an adaptive attack. We will show in section 4 that this simple approach is surprisingly effective at circumventing existing defenses.
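The following sketch illustrates this learning problem for image data, where the inversion MLP is trained to map (possibly defended) per-sample gradients back to the flattened input; layer sizes loosely follow the setup described in Sec. 4, but the loop structure, batching and names are assumptions of the example.

import torch
import torch.nn as nn

def train_lti(global_model, loss_fn, aux_loader, grad_dim, out_dim, defense=lambda g: g):
    # aux_loader is assumed to yield single auxiliary examples (x, y) with a leading batch dim of 1.
    inverter = nn.Sequential(nn.Linear(grad_dim, 3000), nn.ReLU(),
                             nn.Linear(3000, 3000), nn.ReLU(),
                             nn.Linear(3000, out_dim))
    opt = torch.optim.Adam(inverter.parameters(), lr=1e-4)
    params = list(global_model.parameters())
    for x, y in aux_loader:
        grads = torch.autograd.grad(loss_fn(global_model(x), y), params)   # per-sample gradient
        g = defense(torch.cat([p.flatten() for p in grads]))               # apply the same transform as the defense
        pred = inverter(g.unsqueeze(0))
        loss = ((pred - x.flatten().unsqueeze(0)) ** 2).mean()             # MSE for continuous inputs
        opt.zero_grad()
        loss.backward()
        opt.step()
    return inverter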
Dimensionality reduction for large models. One potential problem for LTI is that the gradients ∇wℓ(fw(xaux), yaux) can be extremely high-dimensional. For example, both ResNet18 (He et al., 2016) for vision tasks and a three-layer transformer (Vaswani et al., 2017) for language tasks have approximately 11 million trainable parameters. Such high-dimensional input to the model gθ can lead to memory issues, as the first layer of the MLP would have 11M × h parameters, where h denotes the size of the first hidden layer.
To address this issue, we use feature hashing (Weinberger et al., 2009) to reduce the dimensionality of the input gradient. To this end we create k bins, where k is much smaller than the size of gradient m, and assign each gradient dimension i ∈ [m] to a random bin r(i) ∈ [k]. For each bin, we sum up the gradient values that are assigned to this bin. As a result, we obtain a feature vector of size k for the inversion model gθ. In other words, we project the gradient ∇wℓ(fw(xaux), yaux) to P∇wℓ(fw(xaux), yaux) using the random projection matrix P given by:
P \in \{0, 1\}^{k \times m} \quad \text{s.t.} \quad \forall i:\; P_{r(i),i} = 1 \;\text{and}\; P_{j,i} = 0 \;\text{for all } j \neq r(i).
If r(i) is implemented with a pseudo-uniform hashing function, P does not need to be stored in memory, reducing the memory footprint of gθ to a constant independent of the gradient dimension.
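A minimal implementation of this hashing step is sketched below; the md5-based bin assignment and the bin count are illustrative choices, the only requirement being that r(i) is a deterministic, pseudo-uniform hash of the coordinate index.

```python
# Feature-hashing sketch: project an m-dimensional gradient into k bins without storing P.
import hashlib
import torch

def hash_bin(i: int, k: int, seed: int = 0) -> int:
    """Deterministic pseudo-uniform bin assignment r(i) for coordinate i."""
    h = hashlib.md5(f"{seed}-{i}".encode()).hexdigest()
    return int(h, 16) % k

def hash_gradient(grad: torch.Tensor, k: int, seed: int = 0) -> torch.Tensor:
    """Sum gradient coordinates that share a bin; the output has size k."""
    bins = torch.tensor([hash_bin(i, k, seed) for i in range(grad.numel())])
    out = torch.zeros(k, dtype=grad.dtype)
    out.index_add_(0, bins, grad.reshape(-1))
    return out

g = torch.randn(10_000)               # stand-in for a flattened model gradient
features = hash_gradient(g, k=256)    # 256-dimensional input for the inversion MLP
```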
4 EXPERIMENT
We evaluate LTI on vision and language tasks against several existing defenses to show that it vastly outperforms prior gradient inversion attacks. We consider the following defense mechanisms evaluated in prior work (Zhu et al., 2019; Jeon et al., 2021):
• None. The gradient shared between the server and clients is the full gradient without any defense. This is the most common setting that previous papers focus on.
• Sign compression (Bernstein et al., 2018) applies the sign function to each dimension of the gradient independently to compress the gradient to one bit per dimension.
• Gradient pruning with pruning rate α (Aji & Heafield, 2017) zeroes out the bottom α fraction of coordinates of ∇wℓ(fw(x), y) in terms of absolute value, which effectively compresses the gradient to (1 − α)m dimensions.
• Gradient perturbation with Gaussian standard deviation σ (Abadi et al., 2016) is a differentially private mechanism used commonly for training private models with SGD. An i.i.d. Gaussian random vector drawn from N(0, σ²) is added to the gradient, which one can show achieves ϵ-local differential privacy (Kasiviswanathan et al., 2011) with ϵ = O(1/σ).
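The following sketch expresses these defenses as simple transforms on a flattened gradient vector, which is also how we apply them to auxiliary gradients when training the inversion model; the α and σ values are example settings.

```python
# Defense transforms on a flattened gradient; applied identically to auxiliary gradients
# when training the inversion model (adaptive attack).
import torch

def sign_compress(grad: torch.Tensor) -> torch.Tensor:
    return torch.sign(grad)                       # +/-1 per dimension

def gradient_prune(grad: torch.Tensor, alpha: float = 0.99) -> torch.Tensor:
    # Zero out the bottom alpha fraction of coordinates by absolute value.
    keep = int((1.0 - alpha) * grad.numel())      # number of coordinates retained
    kept_idx = torch.topk(grad.abs(), keep).indices
    pruned = torch.zeros_like(grad)
    pruned[kept_idx] = grad[kept_idx]
    return pruned

def gradient_perturb(grad: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    return grad + sigma * torch.randn_like(grad)  # i.i.d. Gaussian noise

g = torch.randn(1000)
defended = {d.__name__: d(g) for d in (sign_compress, gradient_prune, gradient_perturb)}
```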
4.1 EVALUATION ON VISION TASK
For evaluating LTI on a vision task, we experiment with image classification on CIFAR10 (Krizhevsky et al., 2009). The target model fw is LeNet (LeCun et al., 1998) with 1.5 × 10^4 parameters trained using the cross-entropy loss.
Baselines. We compare our method with two baseline gradient inversion attacks: Inverting Gradients (IG; Geiping et al. (2020)), a representative optimization-based method, and Gradient Inversion with Generative Image Prior (GI-GIP; Jeon et al. (2021)), the state-of-the-art optimization-based method that uses a generative model to encode the data prior. We make minor modifications to these attacks to adapt them to various defenses; see appendix for details. The threat model for our attack is most similar to GI-GIP since both use an auxiliary dataset to encode the data prior.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use the train split of CIFAR10 as the auxiliary dataset for training the inversion model gθ and the test split for inverting gradients computed on the global model fw.
• Inversion model architecture. We use a three-layer MLP with hidden size 3000 for our inversion model gθ. The MLP takes the flattened gradient vector as input and outputs a 3072-dimensional vector representing the flattened image. The training objective ℓattack in Equation 2 is the mean squared error (MSE) between the output vector from the MLP and the flattened ground truth image.
• Training details. We use the Adam (Kingma & Ba, 2014) optimizer for training gθ. The model is trained for 200 epochs using training batch size 256. The initial learning rate is 10^{-4}, with the learning rate dropping to 10^{-5} after 150 epochs.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 2080 GPUs and each training run takes about 1.5 hours.
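A sketch of the corresponding model and optimizer configuration is given below; the gradient dimension is a hard-coded placeholder, and the exact layer layout of the "three-layer MLP" is our interpretation (three linear layers with two hidden layers of size 3000).

```python
# Sketch of the CIFAR10 inversion model and optimizer settings described above.
import torch
import torch.nn as nn

grad_dim = 15_000          # placeholder for the ~1.5e4-dimensional LeNet gradient
inverter = nn.Sequential(
    nn.Linear(grad_dim, 3000), nn.ReLU(),
    nn.Linear(3000, 3000), nn.ReLU(),
    nn.Linear(3000, 3 * 32 * 32),        # flattened 32x32x3 image (3072 values)
)
optimizer = torch.optim.Adam(inverter.parameters(), lr=1e-4)
# Drop the learning rate from 1e-4 to 1e-5 after 150 of the 200 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.1)
loss_fn = nn.MSELoss()     # l_attack between the predicted and ground-truth flattened image
```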
Evaluation methodology. We evaluate LTI and the aforementioned baselines on 1,000 random images from the CIFAR10 test split. To measure reconstruction quality, we use three metrics:
• Mean squared error (MSE) measures the average pixel-wise (squared) distance between the reconstructed image and the ground truth image. Lower is better.
• Peak signal-to-noise ratio (PSNR) measures the ratio between the maximum image pixel value and MSE. Higher is better.
• Learned perceptual image patch similarity (LPIPS) measures distance in the feature space of a VGG (Simonyan & Zisserman, 2014) model trained on ImageNet. Lower is better.
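For reference, MSE and PSNR can be computed as in the sketch below (LPIPS is omitted because it requires a pretrained VGG feature extractor); images are assumed to be float tensors scaled to [0, 1].

```python
# Pixel-level reconstruction metrics for images in [0, 1].
import torch

def mse(pred: torch.Tensor, target: torch.Tensor) -> float:
    return torch.mean((pred - target) ** 2).item()

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> float:
    err = torch.mean((pred - target) ** 2)
    return (10 * torch.log10(torch.tensor(max_val ** 2) / err)).item()

x_hat, x = torch.rand(3, 32, 32), torch.rand(3, 32, 32)
print(mse(x_hat, x), psnr(x_hat, x))
```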
4.1.1 MAIN RESULTS
Quantitative evaluation. Table 1 gives quantitative comparisons for IG, GI-GIP, and LTI against various defense mechanisms on CIFAR10. When no defense is applied, GI-GIP achieves the best performance according to all three metrics, whereas LTI performs almost equally well in terms of MSE and close to that of IG in terms of PSNR and LPIPS. However, when the gradient is augmented with a defense mechanism, both IG and GI-GIP have considerably worse performance with MSE
close to 0.1. By comparison, LTI outperforms both baselines significantly and consistently across all three defense mechanisms. For example, under gradient perturbation with σ = 0.1, which prior work believed to be sufficient for preventing gradient inversion attacks (Zhu et al., 2019; Jeon et al., 2021), MSE can be as low as 0.012 for LTI. Our result therefore provides considerable additional insight into the level of empirical privacy achieved by DP-SGD (Abadi et al., 2016), and suggests that the theoretical privacy leakage as predicted by the DP ϵ may be tighter than previously thought.
Qualitative evaluation. Figure 2 shows 4 random CIFAR10 test samples and their reconstructions under different defense mechanisms. Without any defense in place, all three methods recover a considerable amount of semantic information about the object of interest, with both GI-GIP and LTI faithfully reconstructing the training sample. Under the sign compression defense, IG completely fails to reconstruct all 4 samples, while GI-GIP only successfully reconstructs the second image. In contrast, LTI is able to recover the semantic information in all 4 samples. Results for gradient pruning and gradient perturbation yield similar conclusions. Additional samples are given in the appendix.
4.1.2 ABLATION STUDIES
Since LTI learns to invert gradients using the auxiliary dataset, its performance depends on the quantity and quality of data available to the adversary. We perform ablation studies to better understand this dependence by changing the auxiliary dataset size and its distribution.
Varying the auxiliary dataset size. We randomly subsample the CIFAR10 training set to construct auxiliary datasets of size {500, 5000, 15000, 25000, 35000, 45000, 50000} and evaluate the performance of LTI under various defenses. Figure 3(a) plots reconstruction MSE as a function of the auxiliary dataset size, which is monotonically decreasing as expected. Moreover, with just 5,000 samples for training the inversion model (second point in each curve), the performance is nearly as good as when training using the full CIFAR10 training set. Notably, even with an auxiliary dataset as small as 500 samples, reconstruction MSE is still lower than that of IG and GI-GIP in Table 1. Corresponding figures for PSNR and LPIPS in the appendix show similar findings.
Varying the auxiliary data distribution. Although access to a large set of in-distribution data may not be available in practice, it is plausible that the adversary can collect out-of-distribution samples for the auxiliary dataset. This is beneficial for the adversary since a model that learns how to invert out-of-distribution samples given their gradients may transfer to in-distribution data as well. To simulate this scenario, we divide CIFAR10 into two halves with disjoint classes, and construct the auxiliary dataset by combining a β fraction of samples from the first half and a 1 − β fraction of samples from the second half, for β ∈ {0, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1}. The target model fw is trained only on samples from the first half, and hence the auxiliary set has exactly the same distribution as the target model’s data when β = 1 and only out-of-distribution data when β = 0.
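The construction of the mixed auxiliary set can be sketched as follows; the class split and dataset sizes are illustrative, and the CIFAR10 loading itself is omitted.

```python
# Sketch of the beta-mixed auxiliary set: classes 0-4 are "in distribution"
# (the target model is trained on them), classes 5-9 are out of distribution.
import torch

def build_aux_indices(labels: torch.Tensor, beta: float, aux_size: int,
                      generator: torch.Generator) -> torch.Tensor:
    in_dist = torch.nonzero(labels < 5, as_tuple=True)[0]
    out_dist = torch.nonzero(labels >= 5, as_tuple=True)[0]
    n_in = int(beta * aux_size)
    pick_in = in_dist[torch.randperm(len(in_dist), generator=generator)[:n_in]]
    pick_out = out_dist[torch.randperm(len(out_dist), generator=generator)[:aux_size - n_in]]
    return torch.cat([pick_in, pick_out])

gen = torch.Generator().manual_seed(0)
labels = torch.randint(0, 10, (50_000,))           # stand-in for CIFAR10 train labels
aux_idx = build_aux_indices(labels, beta=0.1, aux_size=25_000, generator=gen)
```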
Figure 3(b) shows reconstruction MSE as a function of β; corresponding figures for PSNR and LPIPS are given in the appendix. We make the following observations:
1. Even if the auxiliary dataset only contains 250 in-distribution samples (β = 0.01; second point in each curve), the MSE of the inversion model is still lower than that of the best baseline in Table 1. For example, with the sign compression defense, LTI attains an MSE of ≤ 0.02, which is much lower than the MSE of 0.116 for IG and 0.091 for GI-GIP.
2. When the auxiliary dataset contains only out-of-distribution data (β = 0), the inversion model has very high reconstruction MSE, which suggests that methods for improving out-of-distribution generalization may be necessary for further improvement.
4.2 EVALUATION ON LANGUAGE TASK
For evaluating LTI on a language task, we experiment with causal language model training¹ for next-token prediction. The language model fw is a three-layer transformer (Vaswani et al., 2017) with a frozen token embedding layer. This is a common technique for language model fine-tuning (Sun et al., 2019), which also has privacy benefits since direct privacy leakage from the gradient magnitude of the token embedding layer can be prevented (Fowl et al., 2022; Gupta et al., 2022). As a result, the trainable model contains about 1.1 × 10^6 parameters. We train the language model on WikiText (Merity et al., 2016), where each training sample is limited to L = 16 tokens and the language model is trained to predict the next token x_l given x_{:l−1} for l = 1, . . . , L using the cross-entropy loss.
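To make the setup concrete, the sketch below builds a small causal transformer with a frozen embedding and computes the per-sample gradient that the server would observe; the vocabulary size, model width, and attention configuration are illustrative rather than the exact architecture used here.

```python
# Toy causal LM with a frozen token embedding; per-sample gradient of the trainable weights.
import torch
import torch.nn as nn

vocab, L, d = 1000, 16, 64
embed = nn.Embedding(vocab, d)
embed.weight.requires_grad_(False)                 # frozen embedding: its gradient is never shared
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, dim_feedforward=128, batch_first=True),
    num_layers=3,
)
head = nn.Linear(d, vocab)

tokens = torch.randint(0, vocab, (1, L))           # one private 16-token sample
causal_mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
hidden = encoder(embed(tokens), mask=causal_mask)
logits = head(hidden[:, :-1])                      # predict token l from tokens :l-1
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))

trainable = list(encoder.parameters()) + list(head.parameters())
grads = torch.autograd.grad(loss, trainable)       # the update the server observes
flat_grad = torch.cat([g.reshape(-1) for g in grads])
```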
Baseline. We compare LTI with TAG (Deng et al., 2021)—the state-of-the-art language-model gradient inversion attack that does not utilize the token embedding layer gradient². The objective function for TAG is a slight modification of Equation 1 that uses both the ℓ2 and ℓ1 distance between the observed gradient and the gradient of the dummy data. We also modify TAG slightly to adapt it to different defenses; see appendix for details.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use ~1.8 × 10^5 samples from the train split of WikiText as the auxiliary dataset, and 1,000 samples from the test split for evaluating the attack. In addition, we introduce a weaker variant of our attack that only assumes knowledge of the marginal token distribution for the language model training data. Instead of using the WikiText train split as auxiliary data, we sample random tokens according to the marginal token distribution to generate pseudo-data for training the inversion model. We show that this variant, which we denote LTI-P, can even outperform LTI with in-distribution auxiliary data due to access to infinite training data.
• Inversion model architecture. We train a two-layer MLP with ReLU activation, first hidden-layer size 600, and second hidden-layer size 1,000. The inversion model outputs L probability vectors, each with size equal to the vocabulary size (~50,000), and we train it using the cross-entropy loss to predict the L tokens given the target model gradient. We use feature hashing (Weinberger et al., 2009) to reduce the target model gradient to 10% of its original dimensions as input to the inversion model.
• Training details. We use Adam (Kingma & Ba, 2014) to train the inversion model over 20 epochs with batch size 64. Learning rates are selected separately for each defense from {10^{-3}, 10^{-4}, 10^{-5}}.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 3090 GPUs and each training run takes about 3 hours.

¹ We follow the task setup and code in https://github.com/JonasGeiping/breaching
² We do not compare against a more recent attack by Gupta et al. (2022) since it crucially depends on access to the token embedding layer gradient.
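The sketch below illustrates both the LTI-P pseudo-data generator and the text inversion model; the vocabulary size, hashed gradient dimension, and marginal token distribution are placeholders, and the layer layout reflects our reading of the architecture described above.

```python
# LTI-P pseudo-data sampling and a text inversion model with per-position cross-entropy.
import torch
import torch.nn as nn

vocab, L, k = 1000, 16, 2000          # k = hashed gradient dimension fed to the MLP
marginal = torch.ones(vocab) / vocab  # stand-in for the known marginal token distribution

def sample_pseudo_text(batch: int) -> torch.Tensor:
    """LTI-P training data: i.i.d. tokens drawn from the marginal distribution."""
    return torch.multinomial(marginal, batch * L, replacement=True).reshape(batch, L)

class TextInverter(nn.Module):
    """MLP mapping a hashed gradient to L per-position token distributions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(k, 600), nn.ReLU(),
                                 nn.Linear(600, 1000), nn.ReLU(),
                                 nn.Linear(1000, L * vocab))
    def forward(self, g):
        return self.net(g).reshape(-1, L, vocab)   # one logit vector per position

inverter = TextInverter()
g = torch.randn(4, k)                               # stand-in for hashed gradients
tokens = sample_pseudo_text(4)
logits = inverter(g)
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens.reshape(-1))
```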
Evaluation methodology. We evaluate LTI and the TAG baseline on 1,000 samples from the WikiText test set. To measure the quality of reconstructed text, we use four metrics:
• Accuracy (%) measures the average token-wise zero-one accuracy. Higher is better.
• Rouge-1 (%), Rouge-2 (%) and Rouge-L (%) measure the overlap of unigrams, bigrams, and the length of the longest common subsequence between the ground truth and the reconstructed text. Higher is better.
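Token accuracy and a unigram-overlap (Rouge-1-style) F-score can be computed as in the sketch below; this is a simplified re-implementation on token-id lists rather than the exact scoring package used for the reported numbers.

```python
# Simplified text-reconstruction metrics over lists of token ids.
from collections import Counter

def token_accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def rouge1_f(pred, truth):
    overlap = sum((Counter(pred) & Counter(truth)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(truth)
    return 2 * precision * recall / (precision + recall)

print(token_accuracy([1, 2, 3, 4], [1, 2, 9, 4]), rouge1_f([1, 2, 3, 4], [1, 2, 9, 4]))
```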
Results. Table 2 shows quantitative comparison between LTI (and its variant LTI-P) and TAG against various defenses. The overall trend is remarkably consistent: LTI and LTI-P outperform TAG in all four metrics for all defense settings, with LTI-P achieving state-of-the-art recovery accuracy by far. This result suggests that knowledge of the marginal token distribution encodes enough data prior for LTI-P to train the inversion model, and having access to infinite training data allows it to better generalize to the test set compared to LTI. In practice, it is very plausible that the marginal token distribution is known to the adversary, and hence LTI-P serves as a surprisingly simple and effective baseline for gradient inversion in NLP.
Figure 4 shows 3 random test samples from WikiText and their reconstructions using LTI-P and TAG, with tokens that are correctly reconstructed highlighted in blue. Without any defense, both TAG and LTI-P yield reasonably accurate reconstructions, with LTI-P faithfully reconstructing all but 1-2 tokens. With the sign compression defense applied, TAG fails to recover any token correctly, whereas LTI-P can faithfully recover almost half of the tokens in each sample. Results for gradient pruning and gradient perturbation yield similar conclusions, with TAG recovering a larger but still relatively insignificant set of tokens. Additional samples are given in the appendix.
5 CONCLUSION AND FUTURE WORK
We demonstrated the effectiveness of LTI—a simple learning-based gradient inversion attack—under realistic federated learning settings. For both vision and language tasks, LTI can match or exceed the performance of state-of-the-art optimization-based methods when no defense is applied, and significantly outperform all prior works under defenses based on gradient perturbation and gradient
compression. Given its simplicity and versatility, we advocate the use of LTI both as a strong baseline for future research as well as a diagnostic tool for evaluating privacy leakage in FL.
Negative societal impact. The concept of a gradient inversion attack can lead to negative consequences if used inappropriately. Our work showed that if FL is deployed without consideration for gradient inversion attacks, an adversary can leverage its vulnerabilities to compromise the data privacy of clients even under strong empirical defenses. However, we strongly emphasize that our work should not be interpreted as a tool for adversaries, but rather serve to inform the community about the risks of data privacy breach in FL and promote future research into safe practices.
Limitations. This paper serves as preliminary work towards understanding the effectiveness of learning-based gradient inversion attacks, and our method can be further improved along several directions. 1. For large models, our current approach is to hash the gradients into a lower-dimensional space to reduce memory cost. It may be possible to leverage the model’s architecture to design more effective dimensionality reduction techniques to further scale up the method. 2. Currently we only focus on the setting with batch size 1 and do not consider secure aggregation (Bonawitz et al., 2016)—a common technique in FL for amplifying privacy by aggregating the gradients from multiple clients before revealing them to the server. For LTI, the complexity of the MLP would increase with the batch size, which makes learning harder; more advanced model architectures and loss designs might help with the large-batch case. 3. LTI in its current form does not leverage additional data priors such as a smoothness prior for images or a fluency prior for text. We can readily incorporate these priors by modifying the inversion model’s loss function with total variation (for image data) or perplexity under a trained language model (for text data), which may further improve the performance of LTI.
A SUPPLEMENTARY MATERIAL FOR SECTION 4
A.1 MODIFICATIONS FOR BASELINE METHODS
Vision baselines. Both IG and GI-GIP use cosine distance instead of ℓ2 distance in Equation 1 for optimizing the dummy data. For the sign compression defense, this loss function does not optimize the correct objective since the dummy data’s gradient is not a vector with ±1 entries but rather a real-valued vector with the same sign. We replace cosine distance by the loss $\sum_{i=1}^{m} (\ell^i_{\text{sign}})^2$, where

\ell^i_{\text{sign}} = \max\left\{ -\nabla_{w_i} \ell(f_w(\tilde{x}), \tilde{y}) \cdot \mathrm{Sign}\big(\nabla_{w_i} \ell(f_w(x), y)\big),\; 0 \right\}. \quad (3)

One sanity check for this loss is that when $\nabla_{w_i} \ell(f_w(\tilde{x}), \tilde{y})$ has the same sign as $\nabla_{w_i} \ell(f_w(x), y)$, the minimum loss value of 0 is achieved. For the gradient pruning defense, optimizing the cosine distance between the dummy data gradient and the pruned ground truth gradient will force too many gradient values to 0, which is the incorrect value for the full ground truth gradient. Therefore we only compute cosine distance over the non-zero dimensions of the pruned gradient.
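A direct implementation of this sign-matching loss is sketched below; here `dummy_grad` stands in for the dummy data's gradient, which in the actual attack is produced by differentiating through the target model so that the loss can be backpropagated to the dummy data.

```python
# Sign-matching loss (Equation 3): zero whenever the dummy gradient agrees in sign
# with the observed sign vector, positive otherwise.
import torch

def sign_loss(dummy_grad: torch.Tensor, observed_sign: torch.Tensor) -> torch.Tensor:
    per_dim = torch.clamp(-dummy_grad * observed_sign, min=0.0)   # l^i_sign
    return (per_dim ** 2).sum()                                   # sum_i (l^i_sign)^2

dummy = torch.randn(10, requires_grad=True)        # stand-in for the dummy data's gradient
target_sign = torch.sign(torch.randn(10))          # observed sign-compressed gradient
loss = sign_loss(dummy, target_sign)
loss.backward()                                    # the loss is differentiable as required
```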
Language baselines. For TAG, we find that the loss function also needs to be modified slightly to accommodate the sign compression and gradient pruning defenses:
• Sign compression. Similar to the vision baselines, the ℓ2 and ℓ1 distances between the dummy data gradient and the ground truth gradient sign do not optimize the correct objective. We replace ∥ · ∥₂² and ∥ · ∥₁ by $\sum_{i=1}^{m} (\ell^i_{\text{sign}})^2$ and $\sum_{i=1}^{m} \ell^i_{\text{sign}}$, respectively, where $\ell^i_{\text{sign}}$ is defined in Equation 3.
• Gradient pruning. We make the same modification to TAG as in the vision baselines.
A.2 AUXILIARY DATASET ABLATION STUDIES
In section 4.1.2 we showed reconstruction MSE for LTI as a function of the auxiliary dataset size and the shift factor β. For completeness, we show the corresponding PSNR and LPIPS curves in Figure 6. Similar to Figure 3, when reducing the auxiliary dataset size (e.g., from 50,000 to 5,000) or reducing the proportion of in-distribution data (e.g., from β = 1 to β = 0.1), the performance of LTI does not worsen significantly.
A.3 ADDITIONAL SAMPLES
Figure 7 and Figure 8 show additional samples and their reconstructions under various defense mechanisms. The result is consistent with Figure 2 and Figure 4, where LTI shows consistently better reconstruction quality compared to baselines.
| 1. What is the focus of the paper, and what are the strengths and weaknesses of the proposed approach?
2. What are the assumptions made in the threat model, and how do they impact the practicality of the evaluation setup?
3. How does the paper address dimensionality reduction, and what are the limitations of the approach?
4. What are the implications of using an auxiliary dataset for training the gradient inversion model, and how does this impact the practicability of the method?
5. How does the paper evaluate its performance, and what are the limitations of the evaluation setup?
6. Are there any missing details in the paper that would impact the reproducibility of the results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes to recover training data from gradient updates in federated learning. Specifically, it leverages an auxiliary dataset to obtain the gradients for these samples. A neural network model is then trained on them, mapping each gradient to its input. This paper also utilizes an existing feature hashing method to reduce the dimensionality of the input gradient. The evaluation is conducted on one image classification dataset and one language task. The experimental results show that the proposed method has better attack performance than existing techniques against different defense techniques.
Strengths And Weaknesses
Strength
Important topic of gradient inversion attack
Easy to follow
Weaknesses
Unrealistic assumption
Limited novelty
Impractical evaluation setup
Limited evaluation
Detailed comments
In the threat model, the paper assumes "the FL protocol does not leverage secure aggregation", which is an unrealistic assumption. The purpose of secure aggregation is to avoid revealing private information to other parties in a distributed setting such as federated learning. The paper discards secure aggregation and builds an attack on top of such an unrealistic assumption. In addition, it assumes access to the gradient of each sample in the batch (in fact, it assumes the batch size is 1), which is impractical. It is not meaningful to study the problem in such a setting.
This paper leverages an auxiliary dataset to train a model for mapping the gradient to the input. Similar ideas have already been studied in many existing works. The dimensionality reduction issue is addressed by directly using an existing feature hashing technique. There is very limited technical contribution in this paper.
The paper uses the term "auxiliary dataset" to denote a dataset different from the training data used by the subject model. However, in the evaluation, the paper directly uses the training data that the subject model was trained on to construct the gradient inversion model, which is impractical.
The evaluation is conducted on both computer vision and natural language processing, which is good. However, only one dataset in each domain is evaluated. The model used on the computer vision task is LeNet, which is a very simple model structure. There is no evaluation on advanced and complex model structures.
Important details are missing in the paper. There is no data showing the performance of the subject model on corresponding tasks. The baseline GI-GIP uses ImageNet to train the generator in the original paper. It is unclear whether the authors directly use the trained generator from GI-GIP or retrain the generator on CIFAR-10. If it is the former case, the comparison is not considered fair.
Clarity, Quality, Novelty And Reproducibility
Clarity, Quality, Novelty
Please see the detailed comments in Strengths And Weaknesses.
Reproducibility
The submission includes the code. However, the baselines are missing from the code. Without the aforementioned details regarding the baselines, it is hard to reproduce the results.
ICLR | Title
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
| 1. What is the focus of the paper on federated learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding the attacking strategy and batch size limitation?
4. Do you have any questions about the research problem or the proposed method?
5. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates potential privacy risks in federated learning. The authors found that existing privacy defenses in FL can be broken via a simple adaptive attack. In particular, the proposed learning-based approach (Learning to Invert) trains an inversion model to reconstruct training samples from their gradients with the help of an auxiliary dataset. Experiments demonstrate the effectiveness of the proposed model.
Strengths And Weaknesses
Strength:
The research problem is very important and this paper provides a simple but effective attacking method to conduct gradient inversion attacks in federated learning.
This paper is easy to follow and clearly written.
The experimental results demonstrate the effectiveness of the proposed method.
Weaknesses
There are some concerns regarding this paper.
Auxiliary datasets are used to help learn the inversion model for privacy attacks in FL. In such cases, the task can be considered as a malicious server that wants to steal private information from some clients. The attacker's task is weird to me: to steal information from one client, why not just use a server and a client pair, and attack this client directly? Such an attacking strategy is not hard to achieve.
The batch size is set to 1. When the batch size increases, the proposed method can perform much worse. This may limit its application to many real-world tasks, where many FL methods would consider using large batch sizes. Meanwhile, it is somewhat unfair compared with other privacy attacks in FL.
Clarity, Quality, Novelty And Reproducibility
N.A |
ICLR | Title
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Abstract
Gradient inversion attack enables recovery of training samples from model updates in federated learning (FL) and constitutes a serious threat to data privacy. To mitigate this vulnerability, prior work proposed both principled defenses based on differential privacy, as well as heuristic defenses based on gradient compression as countermeasures. These defenses have so far been very effective, in particular those based on gradient compression that allow the model to maintain high accuracy while greatly reducing the attack’s effectiveness. In this work, we argue that such findings do not accurately reflect the privacy risk in FL, and show that existing defenses can be broken by a simple adaptive attack that trains a model using auxiliary data to learn how to invert gradients on both vision and language tasks.
1 INTRODUCTION
Federated learning (FL; (McMahan et al., 2017)) is a popular framework for distributed model training on sensitive user data. Instead of centrally storing the training data, FL operates in a serverclient setting where the server hosts the model and has no direct access to the data. The clients can apply the model on their private data and send gradient updates back to the server. This learning regime promises data privacy as users only share gradients but never any raw data. However, recent work (Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020) showed that despite these efforts, the server can still recover the training data from gradient updates, violating the promise of data privacy in FL. These so-called gradient inversion attacks operate by optimizing over the input space to find training samples whose gradient matches that of the observed gradient, and such attacks remain effective even when clients utilize secure aggregation (Bonawitz et al., 2016) to avoid revealing individual updates (Yin et al., 2021; Jeon et al., 2021).
As countermeasures against these gradient inversion attacks, prior work proposed both principled defenses based on differential privacy (Abadi et al., 2016), as well as heuristics that compress the gradient update through gradient pruning (Aji & Heafield, 2017) or sign compression (Bernstein et al., 2018). In particular, gradient compression defenses have so far enjoyed great success, severely hindering the effectiveness of existing optimization-based attacks (Zhu et al., 2019; Jeon et al., 2021) while maintaining close to the same level of accuracy for the trained model. As a result, these limitations seemingly diminish the threat of gradient inversion in practical FL applications.
In this paper we argue that evaluating defenses on existing optimization-based attacks may provide a false sense of security. To this end, we propose a simple learning-based attack—which we call Learning To Invert (LTI)—that trains a model to learn how to invert the gradient update to recover client samples; see Figure 1 for an illustration. We assume that the adversary (i.e., the server) has access to an auxiliary dataset whose distribution is similar to that of the private data, and use it to generate training samples for the gradient inversion model by querying the global model for gradients. Our attack is highly adaptable to different defenses since applying a defense simply amounts to training data augmentation for the gradient inversion model.
We empirically demonstrate that LTI can successfully circumvent defenses based on gradient perturbation (i.e., using differential privacy; (Abadi et al., 2016)), gradient pruning (Aji & Heafield, 2017) and sign compression (Bernstein et al., 2018) on both vision and language tasks.
• Vision: We evaluate on the CIFAR10 (Krizhevsky et al., 2009) classification dataset. LTI attains recovery accuracy close to that of the best optimization-based method when no defense is applied, and significantly outperforms all prior attacks under defense.
• NLP: We experiment with causal language model training on the WikiText (Merity et al., 2016) dataset, where LTI attains state-of-the-art performance in all settings, with or without defense.
Given the strong empirical performance of LTI and its adaptability to different learning tasks and defense mechanisms, we advocate for its use as a simple baseline for future studies on gradient inversion attacks in FL.
2 BACKGROUND
Federated learning. The objective of federated learning (McMahan et al., 2017) is to train a machine learning model in a distributed fashion without centralized collection of training data. In detail, let fw be the global model parameterized by w, and consider a supervised learning setting that optimizes w by minimizing a loss function ℓ over the training set Dtrain: $\sum_{(x,y)\in D_{\mathrm{train}}} \ell(f_w(x), y)$. In centralized learning this is typically done by computing a stochastic gradient $\frac{1}{B}\sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i)$ over a randomly drawn batch of data $(x_1, y_1), \ldots, (x_B, y_B)$ and minimizing ℓ using gradient descent.
In FL, instead of centrally collecting Dtrain to draw a random batch during training, the training set Dtrain is distributed across multiple clients and the model fw is stored on a central server. At each iteration, the model parameter w is transmitted to each client to compute the per-sample gradients {∇wℓ(fw(xi), yi)}Bi=1 locally over a set of clients. The server and clients then execute a federated aggregation protocol to compute the average gradient for the gradient descent update. A major advantage of FL is data privacy since clients do not need to disclose their data explicitly, but rather only send their gradient ∇wℓ(fw(xi), yi) to the server. Techniques such as secure aggregation (Bonawitz et al., 2016) and differential privacy (Dwork et al., 2006; 2014) can further reduce the privacy leakage from sending this gradient update.
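To make the quantities above concrete, the following minimal sketch (our own illustration, not code from any cited system; the toy model and shapes are assumptions) shows how a client would compute the per-sample gradient that is sent to the server, using PyTorch.

```python
import torch
import torch.nn as nn

# Toy stand-in for the global model f_w that the server ships to clients each round.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

def per_sample_gradient(model, x, y):
    """Client side: gradient of the loss on a single example (x, y) w.r.t. all weights,
    flattened into one vector -- this is what the server observes for that sample."""
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

x = torch.randn(3, 32, 32)   # one private client image (CIFAR10-sized)
y = torch.tensor(3)          # its label
g = per_sample_gradient(model, x, y)
print(g.shape)               # dimension m = number of trainable parameters
```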
Gradient inversion attack. Despite the promise of data privacy in FL, recent work showed that the heuristic of sending gradient updates instead of training samples themselves in fact provides a false sense of security. Zhu et al. (2019) showed in their seminal paper that it is possible for the server to recover the full batch of training samples given aggregated gradients. These optimizationbased gradient inversion attacks operate by optimizing a set of dummy data x̃1, . . . , x̃B and labels ỹ1, . . . , ỹB to match their gradient to the observed gradient:
$$\min_{\tilde{x}} \;\left\| \sum_{i=1}^{B} \nabla_w \ell(f_w(\tilde{x}_i), \tilde{y}_i) - \sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i) \right\|_2^2 . \qquad (1)$$
For image tasks, since Equation 1 is differentiable in x̃i and ỹi and the model parameter w is known to the server, the server can optimize Equation 1 using gradient-based search. Doing so yields recovered samples (x̃i, ỹi) that closely resemble actual samples (xi, yi) in the batch. In practice this approach is highly effective, and follow-up works proposed several optimizations to further improve its recovery accuracy (Geiping et al., 2020; Yin et al., 2021; Jeon et al., 2021).
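As a concrete illustration (a minimal sketch in the spirit of Equation 1, continuing the toy `model`, `loss_fn`, and `per_sample_gradient` from the earlier snippet; hyperparameters are placeholders and the label is assumed known), a single-sample gradient-matching attack can be written as:

```python
# Observed gradient for one private sample (x, y); the attacker only sees this vector.
observed_grad = per_sample_gradient(model, x, y).detach()

dummy_x = torch.randn(3, 32, 32, requires_grad=True)   # randomly initialised dummy image
opt = torch.optim.Adam([dummy_x], lr=0.1)

for step in range(1000):
    opt.zero_grad()
    loss = loss_fn(model(dummy_x.unsqueeze(0)), y.unsqueeze(0))
    # create_graph=True lets us differentiate through the gradient itself.
    dummy_grad = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    dummy_grad = torch.cat([g.reshape(-1) for g in dummy_grad])
    match = ((dummy_grad - observed_grad) ** 2).sum()   # squared l2 gradient distance
    match.backward()
    opt.step()
# dummy_x now approximates the private image x when the attack succeeds.
```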
For language tasks this optimization problem is considerably more complex since the samples x1, . . . , xB are sequences of discrete tokens, and optimizing Equation 1 amounts to solving a discrete optimization problem. To circumvent this difficulty, Zhu et al. (2019) and Deng et al. (2021) instead optimize the token embeddings to match the observed gradient and then map the recovered embeddings to their closest tokens in the embedding layer to recover the private text. In contrast, Gupta et al. (2022) leveraged the insight that the gradient of the token embedding layer can be used to recover exactly the set of tokens present in the training sample, and then used beam search to optimize the ordering of tokens for fluency to recover the private text.
Gradient inversion under the malicious server setting. The aforementioned gradient inversion attacks operate under the honest-but-curious setting where the server faithfully executes the federated learning protocol, but attempts to extract private information from the observed gradients. Fowl et al. (2021), Boenisch et al. (2021) and Fowl et al. (2022) consider a stronger malicious server threat model that allows the server to transmit arbitrary model parameters w to the clients. Under this threat model, it is possible to carefully craft the model parameters so that the training sample can be recovered exactly from its gradient even when the batch size B is large. While this setting is certainly realistic and relevant, our paper operates under the weaker honest-but-curious threat model.
3 LEARNING TO INVERT: LEARNING-BASED GRADIENT INVERSION ATTACKS
Motivation. The threat of gradient inversion attacks has prompted prior work to employ defense mechanisms to mitigate this privacy risk in FL (Zhu et al., 2019; Jeon et al., 2021). Intuitively, such defenses reduce the amount of information contained in the gradient about the training sample by either perturbing the gradient with noise (Abadi et al., 2016) or compressing it (Aji & Heafield, 2017; Bernstein et al., 2018), making recovery much more difficult. However, doing so also reduces the amount of information a sample can provide for training the global model, and hence has a negative impact on the model’s performance. This is certainly true for principled defenses based on differential privacy (Dwork et al., 2006) such as gradient perturbation (Abadi et al., 2016); however, defenses based on gradient compression seemingly provide a much better privacy-utility trade-off, effectively preventing the attack with only a minor reduction in model performance (Zhu et al., 2019).
The empirical success of existing defenses seemingly diminishes the threat of gradient inversion in FL, especially since gradient compression (Aji & Heafield, 2017; Bernstein et al., 2018) is already commonplace in practical FL applications to reduce communication cost. However, we argue that optimization-based attacks underestimate the power of the adversary: if the adversary has access to an auxiliary dataset Daux, they can train a gradient inversion model to recover Daux from its gradients computed on the global model. As we will establish later, this greatly empowers the adversary, exposing existing risks to federated learning.
Threat model. We consider the setting where the adversary is an honest-but-curious server, who executes the learning protocol faithfully but aims to extract private training data from the observed gradients. We also assume that the FL protocol does not leverage secure aggregation, so per-client gradients are revealed to the server. Under these assumptions, in each FL iteration the adversary
has the knowledge of model weights w and the gradients ∇wℓ(fw(x), y) for each sample (x, y) in the batch. Moreover, we assume the adversary has an auxiliary dataset Daux, which could be in-distribution or a mixture of in-distribution and out-of-distribution data. This assumption is similar to the setting in Jeon et al. (2021), which assumes a generative model that is trained from the in-distribution data, and is common in the study of other privacy attacks such as membership inference (Shokri et al., 2017).
Learning to invert (LTI). Since the adversary has knowledge of the model weights, he/she is able to generate the gradient ∇wℓ(fw(xaux), yaux) for each sample (xaux, yaux) in the auxiliary dataset. This allows the adversary to learn a gradient inversion model gθ, parameterized by θ, to predict the data point (xaux, yaux) from the gradient of the global model ∇wℓ(fw(xaux), yaux) by solving the following learning problem:
$$\min_{\theta} \sum_{(x_{\mathrm{aux}}, y_{\mathrm{aux}}) \in D_{\mathrm{aux}}} \ell_{\mathrm{attack}} \Big( g_\theta\big(\nabla_w \ell(f_w(x_{\mathrm{aux}}), y_{\mathrm{aux}})\big),\ (x_{\mathrm{aux}}, y_{\mathrm{aux}}) \Big). \qquad (2)$$
In practice, ℓattack can be the cross-entropy (for discrete input) or squared-loss (for continuous-valued input) function and we find that using a multi-layer perceptron (MLP) (Bishop et al., 1995) for gθ is effective empirically. Importantly, when a defense mechanism such as gradient perturbation or gradient compression is applied, we can apply the same transformation to ∇wℓ(fw(xaux), yaux) to augment the training data for gθ to carry out an adaptive attack. We will show in section 4 that this simple approach is surprisingly effective at circumventing existing defenses.
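The sketch below shows one way Equation 2 can be instantiated for image data: generate (gradient, sample) pairs from the auxiliary set and fit an MLP inversion model with a squared loss. It is a minimal illustration under our own assumptions (toy auxiliary data, sizes, and hyperparameters), again reusing `model` and `per_sample_gradient` from the earlier snippets, and is not the authors' exact implementation.

```python
# Learning to Invert (LTI), image case: g_theta maps a flattened gradient to a flattened image.
aux_data = [(torch.randn(3, 32, 32), torch.randint(0, 10, (1,)).squeeze(0))
            for _ in range(256)]                       # placeholder auxiliary set D_aux

grad_dim = per_sample_gradient(model, *aux_data[0]).numel()
inverter = nn.Sequential(                              # g_theta
    nn.Linear(grad_dim, 3000), nn.ReLU(),
    nn.Linear(3000, 3000), nn.ReLU(),
    nn.Linear(3000, 3 * 32 * 32),
)
opt = torch.optim.Adam(inverter.parameters(), lr=1e-4)

for epoch in range(10):
    for x_aux, y_aux in aux_data:
        g = per_sample_gradient(model, x_aux, y_aux).detach()
        # If the clients apply a defense, apply the same transform to g here
        # so that the inversion model is trained on defended gradients.
        pred = inverter(g)
        mse = ((pred - x_aux.reshape(-1)) ** 2).mean()  # l_attack: squared loss
        opt.zero_grad()
        mse.backward()
        opt.step()
```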
Dimensionality reduction for large models. One potential problem for LTI is that the gradients ∇wℓ(fw(xaux), yaux) can be extremely high-dimensional. For example, ResNet18 (He et al., 2016) for vision tasks has roughly 11 million trainable parameters, and the three-layer transformer (Vaswani et al., 2017) we use for language tasks has approximately 1.1 million. Such high-dimensional input to the model gθ can lead to memory issues: with an 11M-dimensional gradient, the first layer of the MLP alone would have 11M × h parameters, where h denotes the size of the first hidden layer.
To address this issue, we use feature hashing (Weinberger et al., 2009) to reduce the dimensionality of the input gradient. To this end we create k bins, where k is much smaller than the size of gradient m, and assign each gradient dimension i ∈ [m] to a random bin r(i) ∈ [k]. For each bin, we sum up the gradient values that are assigned to this bin. As a result, we obtain a feature vector of size k for the inversion model gθ. In other words, we project the gradient ∇wℓ(fw(xaux), yaux) to P∇wℓ(fw(xaux), yaux) using the random projection matrix P given by:
$$P \in \{0, 1\}^{k \times m} \quad \text{s.t.} \quad \forall i:\ P_{r(i),\,i} = 1 \ \text{and}\ P_{j,\,i} = 0 \ \text{for all } j \neq r(i).$$
If r(i) is implemented with a pseudo-uniform hashing function, P does not need to be stored in memory, reducing the memory footprint of gθ to a constant independent of the gradient dimension.
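A minimal sketch of this hashed projection (our own illustration; the hash is implemented as a seeded pseudo-random bin assignment, so P is never materialised):

```python
import torch

def hash_project(grad_vec, k, seed=0):
    """Feature hashing: sum the entries of an m-dimensional gradient into k random bins,
    i.e. compute P @ grad_vec without ever storing the 0/1 matrix P."""
    m = grad_vec.numel()
    gen = torch.Generator().manual_seed(seed)          # fixed seed = fixed hash r(i)
    bins = torch.randint(0, k, (m,), generator=gen)    # r(i) for every gradient index i
    out = torch.zeros(k, dtype=grad_vec.dtype)
    out.index_add_(0, bins, grad_vec)                  # out[r(i)] += grad_vec[i]
    return out

g = torch.randn(1_100_000)                 # e.g. an ~1.1M-dimensional gradient
print(hash_project(g, k=110_000).shape)    # reduced to 10% of the original size
```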
4 EXPERIMENT
We evaluate LTI on vision and language tasks against several existing defenses to show that it vastly outperforms prior gradient inversion attacks. We consider the following defense mechanisms evaluated in prior work (Zhu et al., 2019; Jeon et al., 2021):
• None. The gradient shared between the server and clients is the full gradient without any defense. This is the most common setting that previous papers focus on.
• Sign compression (Bernstein et al., 2018) applies the sign function to each dimension of the gradient independently to compress the gradient to one bit per dimension.
• Gradient pruning with pruning rate α (Aji & Heafield, 2017) zeroes out the bottom α fraction of coordinates of ∇wℓ(fw(x), y) in terms of absolute value, which effectively compresses the gradient to (1 − α)m dimensions.
• Gradient perturbation with Gaussian standard deviation σ (Abadi et al., 2016) is a differentially private mechanism used commonly for training private models with SGD. An i.i.d. Gaussian random vector N(0, σ²) is added to the gradient, which one can show achieves ϵ-local differential privacy (Kasiviswanathan et al., 2011) with ϵ = O(1/σ).
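For reference, the three non-trivial defenses can be simulated on a flattened gradient vector roughly as follows (a sketch under our reading of the cited defenses; function and parameter names are ours):

```python
import torch

def sign_compress(g):
    """Sign compression: keep only the sign of every coordinate (one bit per dimension)."""
    return torch.sign(g)

def gradient_prune(g, alpha):
    """Gradient pruning with rate alpha: zero out the bottom alpha fraction by magnitude."""
    keep = g.numel() - int(alpha * g.numel())          # number of coordinates kept
    out = torch.zeros_like(g)
    if keep > 0:
        idx = g.abs().topk(keep).indices
        out[idx] = g[idx]
    return out

def gaussian_perturb(g, sigma):
    """Gradient perturbation: add i.i.d. N(0, sigma^2) noise to every coordinate."""
    return g + sigma * torch.randn_like(g)
```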
4.1 EVALUATION ON VISION TASK
For evaluating LTI on a vision task, we experiment with image classification on CIFAR10 (Krizhevsky et al., 2009). The target model fw is LeNet (LeCun et al., 1998) with 1.5 × 10^4 parameters, trained using the cross-entropy loss.
Baselines. We compare our method with two baseline gradient inversion attacks: Inverting Gradients (IG; Geiping et al. (2020)), a representative optimization-based method, and Gradient Inversion with Generative Image Prior (GI-GIP; Jeon et al. (2021)), the state-of-the-art optimization-based method that uses a generative model to encode the data prior. We make minor modifications to these attacks to adapt them to various defenses; see appendix for details. The threat model for our attack is most similar to GI-GIP since both use an auxiliary dataset to encode the data prior.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use the train split of CIFAR10 as the auxiliary dataset for training the inversion model gθ, and the test split for inverting gradients computed on the global model fw.
• Inversion model architecture. We use a three-layer MLP with hidden size 3000 for our inversion model gθ. The MLP takes the flattened gradient vector as input and outputs a 3072-dimensional vector representing the flattened image. The training objective ℓattack in Equation 2 is the mean squared error (MSE) between the output vector from the MLP and the flattened ground truth image.
• Training details. We use the Adam (Kingma & Ba, 2014) optimizer for training gθ. The model is trained for 200 epochs using training batch size 256. The initial learning rate is 10^-4 and is dropped to 10^-5 after 150 epochs.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 2080 GPUs and each training run takes about 1.5 hours.
Evaluation methodology. We evaluate LTI and the aforementioned baselines on 1,000 random images from the CIFAR10 test split. To measure reconstruction quality, we use three metrics:
• Mean squared error (MSE) measures the average pixel-wise (squared) distance between the reconstructed image and the ground truth image. Lower is better.
• Peak signal-to-noise ratio (PSNR) measures the ratio between the maximum image pixel value and the MSE. Higher is better.
• Learned perceptual image patch similarity (LPIPS) measures distance in the feature space of a VGG (Simonyan & Zisserman, 2014) model trained on ImageNet. Lower is better.
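For images scaled to [0, 1], the first two metrics can be computed as in the short sketch below (our own helpers following the usual conventions; LPIPS additionally requires a pretrained VGG feature extractor and is omitted):

```python
import torch

def mse(x_hat, x):
    """Mean squared error between reconstruction and ground truth, both in [0, 1]."""
    return ((x_hat - x) ** 2).mean()

def psnr(x_hat, x, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    return 10.0 * torch.log10(max_val ** 2 / mse(x_hat, x))
```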
4.1.1 MAIN RESULTS
Quantitative evaluation. Table 1 gives quantitative comparisons for IG, GI-GIP, and LTI against various defense mechanisms on CIFAR10. When no defense is applied, GI-GIP achieves the best performance according to all three metrics, whereas LTI performs almost equally well in terms of MSE and close to that of IG in terms of PSNR and LPIPS. However, when the gradient is augmented with a defense mechanism, both IG and GI-GIP have considerably worse performance with MSE
close to 0.1. By comparison, LTI outperforms both baselines significantly and consistently across all three defense mechanisms. For example, under gradient perturbation with σ = 0.1, which prior work believed is sufficient for preventing gradient inversion attacks (Zhu et al., 2019; Jeon et al., 2021), MSE can be as low as 0.012 for LTI. Our result therefore provides considerable additional insight for the level of empirical privacy achieved by DP-SGD (Abadi et al., 2016), and suggests that the theoretical privacy leakage as predicted by DP ϵ may be tighter than previously thought.
Qualitative evaluation. Figure 2 shows 4 random CIFAR10 test samples and their reconstructions under different defense mechanisms. Without any defense in place, all three methods recover a considerable amount of semantic information about the object of interest, with both GI-GIP and LTI faithfully reconstructing the training sample. Under the sign compression defense, IG completely fails to reconstruct all 4 samples, while GI-GIP only successfully reconstructs the second image. In contrast, LTI is able to recover the semantic information in all 4 samples. Results for gradient pruning and gradient perturbation yield similar conclusions. Additional samples are given in the appendix.
4.1.2 ABLATION STUDIES
Since LTI learns to invert gradients using the auxiliary dataset, its performance depends on the quantity and quality of data available to the adversary. We perform ablation studies to better understand this dependence by changing the auxiliary dataset size and its distribution.
Varying the auxiliary dataset size. We randomly subsample the CIFAR10 training set to construct auxiliary datasets of size {500, 5000, 15000, 25000, 35000, 45000, 50000} and evaluate the performance of LTI under various defenses. Figure 3(a) plots reconstruction MSE as a function of the auxiliary dataset size, which is monotonically decreasing as expected. Moreover, with just 5,000 samples for training the inversion model (second point in each curve), the performance is nearly as good as when training on the full CIFAR10 training set. Notably, even with an auxiliary dataset as small as 500 samples, reconstruction MSE is still lower than that of IG and GI-GIP in Table 1. Corresponding figures for PSNR and LPIPS in the appendix show similar findings.
Varying the auxiliary data distribution. Although access to a large set of in-distribution data may not be available in practice, it is plausible that the adversary can collect out-of-distribution samples for the auxiliary dataset. This is beneficial for the adversary since a model that learns how to invert out-of-distribution samples given their gradients may transfer to in-distribution data as well. To simulate this scenario, we divide CIFAR10 into two halves with disjoint classes, and construct the auxiliary dataset by combining a β fraction of samples from the first half and a 1 − β fraction of samples from the second half for β ∈ {0, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1}. The target model fw is trained only on samples from the first half, and hence the auxiliary set has exactly the same distribution as the target model’s data when β = 1 and contains only out-of-distribution data when β = 0.
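A sketch of how such a β-mixed auxiliary set can be assembled (our own illustration; the index arrays are placeholders for the two CIFAR10 halves):

```python
import numpy as np

def mixed_auxiliary_indices(in_dist_idx, out_dist_idx, beta, size, seed=0):
    """Draw `size` auxiliary examples: a beta fraction from the in-distribution half of
    CIFAR10 (the classes the target model is trained on) and the rest from the other half."""
    rng = np.random.default_rng(seed)
    n_in = int(beta * size)
    chosen_in = rng.choice(in_dist_idx, n_in, replace=False)
    chosen_out = rng.choice(out_dist_idx, size - n_in, replace=False)
    return np.concatenate([chosen_in, chosen_out])
```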
Figure 3(b) shows reconstruction MSE as a function of β; corresponding figures for PSNR and LPIPS are given in the appendix. We make the following observations:
1. Even if the auxiliary dataset only contains 250 in-distribution samples (β = 0.01; second point in each curve), MSE of the inversion model is still lower than that of the best baseline in Table 1. For example, with the sign compression defense, LTI attains an MSE of ≤ 0.02, which is much lower than the MSE of 0.116 for IG and 0.091 for GI-GIP.
2. When the auxiliary dataset contains only out-of-distribution data (β = 0), the inversion model has very high reconstruction MSE, which suggests that methods for improving out-of-distribution generalization may be necessary for further improvement.
4.2 EVALUATION ON LANGUAGE TASK
For evaluating LTI on a language task, we experiment with causal language model training1 for next-token prediction. The language model fw is a three-layer transformer (Vaswani et al., 2017) with frozen token embedding layer. This is a common technique for language model fine-tuning (Sun et al., 2019), which also has privacy benefits since direct privacy leakage from the gradient magnitude of the token embedding layer can be prevented (Fowl et al., 2022; Gupta et al., 2022). As a result, the trainable model contains about 1.1 × 106 parameters. We train the language model on WikiText (Merity et al., 2016), where each training sample is limited to L = 16 tokens and the language model is trained to predict the next token xl given x:l−1 for l = 1, . . . , L using the cross-entropy loss.
Baseline. We compare LTI with TAG (Deng et al., 2021)—the state-of-the-art language model gradient inversion attack that does not utilize the token embedding layer gradient². The objective function for TAG is a slight modification of Equation 1 that uses both the ℓ2 and ℓ1 distance between the observed gradient and the gradient of the dummy data. We also modify TAG slightly to adapt it to different defenses; see appendix for details.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use ∼1.8 × 10^5 samples from the train split of WikiText as the auxiliary dataset, and 1,000 samples from the test split for evaluating the attack. In addition, we introduce a weaker variant of our attack that only assumes knowledge of the marginal token distribution for the language model training data. Instead of using the WikiText train split as auxiliary data, we sample random tokens according to the marginal token distribution to generate pseudo-data for training the inversion model. We show that this variant, which we denote LTI-P, can even outperform LTI with in-distribution auxiliary data due to its access to infinite training data.
¹ We follow the task setup and code in https://github.com/JonasGeiping/breaching
² We do not compare against a more recent attack by Gupta et al. (2022) since it crucially depends on access to the token embedding layer gradient.
• Inversion model architecture. We train a two-layer MLP with ReLU activation, first hidden-layer size 600, and second hidden-layer size 1,000. The inversion model outputs L probability vectors, each with size equal to the vocabulary size (∼50,000), and we train it using the cross-entropy loss to predict the L tokens given the target model gradient. We use feature hashing (Weinberger et al., 2009) to reduce the target model gradient to 10% of its original dimensions as input to the inversion model.
• Training details. We use Adam (Kingma & Ba, 2014) to train the inversion model over 20 epochs with batch size 64. Learning rates are selected separately for each defense from {10^-3, 10^-4, 10^-5}.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 3090 GPUs and each training run takes about 3 hours.
Evaluation methodology. We evaluate LTI and the TAG baseline on 1,000 samples from the WikiText test set. To measure the quality of reconstructed text, we use four metrics:
• Accuracy (%) measures the average token-wise zero-one accuracy. Higher is better.
• Rouge-1 (%), Rouge-2 (%) and Rouge-L (%) measure the overlap of unigrams, bigrams, and the length of the longest common subsequence between the ground truth and the reconstructed text. Higher is better.
Results. Table 2 shows quantitative comparison between LTI (and its variant LTI-P) and TAG against various defenses. The overall trend is remarkably consistent: LTI and LTI-P outperform TAG in all four metrics for all defense settings, with LTI-P achieving state-of-the-art recovery accuracy by far. This result suggests that knowledge of the marginal token distribution encodes enough data prior for LTI-P to train the inversion model, and having access to infinite training data allows it to better generalize to the test set compared to LTI. In practice, it is very plausible that the marginal token distribution is known to the adversary, and hence LTI-P serves as a surprisingly simple and effective baseline for gradient inversion in NLP.
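A minimal sketch of the LTI-P pseudo-data generation (our own illustration; the uniform marginal and sizes are placeholders — in practice the attacker would plug in the assumed marginal token distribution):

```python
import torch

def sample_pseudo_text(token_probs, seq_len, n_samples, seed=0):
    """LTI-P: draw token sequences i.i.d. from a marginal token distribution. Each
    pseudo-sequence is then passed through the target model to obtain a training
    gradient for the inversion model, exactly as with real auxiliary text."""
    gen = torch.Generator().manual_seed(seed)
    flat = torch.multinomial(token_probs, n_samples * seq_len,
                             replacement=True, generator=gen)
    return flat.reshape(n_samples, seq_len)

vocab_size, L = 50_000, 16
marginal = torch.ones(vocab_size) / vocab_size     # placeholder: uniform marginal
pseudo_batch = sample_pseudo_text(marginal, L, n_samples=64)
print(pseudo_batch.shape)                          # (64, 16) token ids
```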
Figure 4 shows 3 random test samples from WikiText and their reconstructions using LTI-P and TAG, with tokens that are correctly reconstructed highlighted in blue. Without any defense, both TAG and LTI-P yield reasonably accurate reconstructions, with LTI-P faithfully reconstructing all but 1-2 tokens. With the sign compression defense applied, TAG fails to recover any token correctly, whereas LTI-P can faithfully recover almost half of the tokens in each sample. Results for gradient pruning and gradient perturbation yield similar conclusions, with TAG recovering a larger but still relatively insignificant set of tokens. Additional samples are given in the appendix.
5 CONCLUSION AND FUTURE WORK
We demonstrated the effectiveness of LTI—a simple learning-based gradient inversion attack—under realistic federated learning settings. For both vision and language tasks, LTI can match or exceed the performance of state-of-the-art optimization-based methods when no defense is applied, and significantly outperform all prior works under defenses based on gradient perturbation and gradient
compression. Given its simplicity and versatility, we advocate the use of LTI both as a strong baseline for future research as well as a diagnostic tool for evaluating privacy leakage in FL.
Negative societal impact. The concept of a gradient inversion attack can lead to negative consequences if used inappropriately. Our work showed that if FL is deployed without consideration for gradient inversion attacks, an adversary can leverage its vulnerabilities to compromise the data privacy of clients even under strong empirical defenses. However, we strongly emphasize that our work should not be interpreted as a tool for adversaries, but rather serve to inform the community about the risks of data privacy breach in FL and promote future research into safe practices.
Limitations. This paper serves as preliminary work towards understanding the effectiveness of learning-based gradient inversion attacks, and our method can be further improved along several different directions. 1. For large models, our current approach is to hash the gradients into a lowerdimensional space to reduce memory cost. It may be possible to leverage the model’s architecture to design more effective dimensionality reduction techniques to further scale up the method. 2. Currently we only focus on the setting with batch size 1, which precludes the use of secure aggregation (Bonawitz et al., 2016)—a common technique in FL for amplifying privacy by aggregating the gradients from multiple clients before revealing it to the server. For LTI, the complexity of MLP would increase when the batch size increases, which makes the learning harder. More advanced model architecture and loss design might help with the large batch case. 3. LTI in its current form does not leverage additional data priors such as the smoothness prior for images and text fluency prior for text. We can readily incorporate these priors by modifying the inversion model’s loss function with total variation (for image data) or perplexity on a trained language model (for text data), which may further improve the performance of LTI.
A SUPPLEMENTARY MATERIAL FOR SECTION 4
A.1 MODIFICATIONS FOR BASELINE METHODS
Vision baselines. Both IG and GI-GIP use cosine distance instead of ℓ2 distance in Equation 1 for optimizing the dummy data. For the sign compression defense, this loss function does not optimize the correct objective since the dummy data’s gradient is not a vector with ±1 entries but rather a real-valued vector with the same sign. We replace cosine distance by the loss $\sum_{i=1}^{m} (\ell^{i}_{\mathrm{sign}})^2$, where

$$\ell^{i}_{\mathrm{sign}} = \max\left\{ -\nabla_{w_i} \ell(f_w(\tilde{x}), \tilde{y}) \cdot \mathrm{Sign}\big(\nabla_{w_i} \ell(f_w(x), y)\big),\ 0 \right\}. \qquad (3)$$
One sanity check for this loss is that when ∇wiℓ(fw(x̃), ỹ) has the same sign as that of ∇wiℓ(fw(x), y), the minimum loss value of 0 is achieved. For the gradient pruning defense, optimizing the cosine distance between the dummy data gradient and the pruned ground truth gradient would force too many gradient values to 0, which is the incorrect value for the full ground truth gradient. Therefore we only compute the cosine distance over the non-zero dimensions of the pruned gradient.
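The two modifications can be written compactly as below (a sketch following Equation 3 and the masking described above; tensor names are ours and both gradients are assumed to be flattened vectors):

```python
import torch

def sign_matching_loss(dummy_grad, true_grad_sign):
    """Sum of squared hinge terms from Equation 3: a coordinate contributes zero loss
    whenever the dummy gradient already has the same sign as the compressed gradient."""
    l_sign = torch.clamp(-dummy_grad * true_grad_sign, min=0.0)
    return (l_sign ** 2).sum()

def masked_cosine_distance(dummy_grad, pruned_true_grad):
    """Cosine distance restricted to the coordinates the pruning defense did not zero out."""
    mask = pruned_true_grad != 0
    a, b = dummy_grad[mask], pruned_true_grad[mask]
    cos = torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)
    return 1.0 - cos
```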
Language baselines. For TAG, we find that the loss function also needs to be modified slightly to accommodate the sign compression and gradient pruning defenses:
• Sign compression. Similar to the vision baselines, the ℓ2 and ℓ1 distances between the dummy data gradient and the ground truth gradient sign do not optimize the correct objective. We replace $\|\cdot\|_2^2$ and $\|\cdot\|_1$ by $\sum_{i=1}^{m} (\ell^{i}_{\mathrm{sign}})^2$ and $\sum_{i=1}^{m} \ell^{i}_{\mathrm{sign}}$, respectively, where $\ell^{i}_{\mathrm{sign}}$ is defined in Equation 3.
• Gradient pruning. We make the same modification to TAG as in the vision baselines.
A.2 AUXILIARY DATASET ABLATION STUDIES
In section 4.1.2 we showed reconstruction MSE for LTI as a function of the auxiliary dataset size and the shift factor β. For completeness, we show the corresponding PSNR and LPIPS curves in Figure 6. Similar to Figure 3, when reducing the auxiliary dataset size (e.g., from 50,000 to 5,000) or reducing the proportion of in-distribution data (e.g., from β = 1 to β = 0.1), the performance of LTI does not worsen significantly.
A.3 ADDITIONAL SAMPLES
Figure 7 and Figure 8 show additional samples and their reconstructions under various defense mechanisms. The result is consistent with Figure 2 and Figure 4, where LTI shows consistently better reconstruction quality compared to baselines.
| 1. What is the focus of the paper regarding Federated Learning?
2. What are the strengths and weaknesses of the proposed "Learning To Invert" approach?
3. Do you have any questions or concerns regarding the experimental setup and results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential improvements regarding the applicability and scalability of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a new gradient inversion method in Federated Learning. The proposed approach called "Learning To Invert" (LTI) directly attempts to learn the mapping between the gradient of a sample and the corresponding input sample using an auxiliary dataset. A simple multi-layer perceptron (MLP) is used to learn this mapping. Dimensionality reduction is applied to the gradients to make the MLP practically feasible. The paper asserts that current defense mechanisms such as Sign Compression, Gradient Pruning, and Gaussian Perturbation cannot defend against the proposed attack, because the same transformations can be applied by the server before it learns to reconstruct. The method is evaluated on vision and language tasks (CIFAR10, WikiText).
Strengths And Weaknesses
Strengths: 1. Evaluation of the proposed method for both vision and language tasks. 2. Comparison of LTI with current SOTA gradient inversion attacks. 3. Ablation results based on the auxiliary dataset (both degree of overlap and size of the dataset).
Weaknesses:
The attack works only in the presence of an auxiliary dataset (which has some overlap with the client distributions) at the server. Hence, the comparison with SOTA gradient inversion methods is not fair. For a fair comparison, other inversion methods should be updated to make use of the auxiliary dataset. When β = 0 (no overlap between the auxiliary and training sets), the MSE of the proposed method is the worst. Note that even for β = 0.01, there will be a significant number of common samples between the two sets, which explains the superiority of the proposed method. Ideally, Tables 1 & 2 and Figures 2 & 4 should be reported for low values of β (< 0.1) or with an auxiliary dataset that is completely different (some other natural image and NLP dataset that is different from the training data).
The main experimental setup is designed such that the training samples and samples in the auxiliary dataset are the SAME. Furthermore, the gradients are computed based on a single sample. So, effectively there is a one-to-one mapping between a sample and its gradient, which any network can easily learn (all one needs is an indexing table!). So, the results in Table 1 and Figure 2 are unsurprising - in fact, they are somewhat underwhelming because exact reconstruction should be possible in this setting unless there is loss of some information in the dimensionality reduction step.
There is no information about how the proposed approach will scale to higher fidelity data (say, images of 224 x 224 x 3 resolution). The MLP approach is likely to become practically infeasible for higher fidelity data.
The other key aspect that is missing is the FL round in which the reconstruction is attempted. Several works in the literature have shown that reconstruction is easier in the first few rounds (when training from scratch), while it becomes harder in the later rounds. That is why multiple rounds of local training is typically carried out before the collaboration starts and this serves as a good defense against gradient inversion attacks.
Clarity, Quality, Novelty And Reproducibility
The paper is well written. However, the novelty is limited and the experiments are not comprehensive. There are no concerns about the reproducibility.
ICLR | Title
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Abstract
Gradient inversion attack enables recovery of training samples from model updates in federated learning (FL) and constitutes a serious threat to data privacy. To mitigate this vulnerability, prior work proposed both principled defenses based on differential privacy, as well as heuristic defenses based on gradient compression as countermeasures. These defenses have so far been very effective, in particular those based on gradient compression that allow the model to maintain high accuracy while greatly reducing the attack’s effectiveness. In this work, we argue that such findings do not accurately reflect the privacy risk in FL, and show that existing defenses can be broken by a simple adaptive attack that trains a model using auxiliary data to learn how to invert gradients on both vision and language tasks.
1 INTRODUCTION
Federated learning (FL; (McMahan et al., 2017)) is a popular framework for distributed model training on sensitive user data. Instead of centrally storing the training data, FL operates in a serverclient setting where the server hosts the model and has no direct access to the data. The clients can apply the model on their private data and send gradient updates back to the server. This learning regime promises data privacy as users only share gradients but never any raw data. However, recent work (Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020) showed that despite these efforts, the server can still recover the training data from gradient updates, violating the promise of data privacy in FL. These so-called gradient inversion attacks operate by optimizing over the input space to find training samples whose gradient matches that of the observed gradient, and such attacks remain effective even when clients utilize secure aggregation (Bonawitz et al., 2016) to avoid revealing individual updates (Yin et al., 2021; Jeon et al., 2021).
As countermeasures against these gradient inversion attacks, prior work proposed both principled defenses based on differential privacy (Abadi et al., 2016), as well as heuristics that compress the gradient update through gradient pruning (Aji & Heafield, 2017) or sign compression (Bernstein et al., 2018). In particular, gradient compression defenses have so far enjoyed great success, severely hindering the effectiveness of existing optimization-based attacks (Zhu et al., 2019; Jeon et al., 2021) while maintaining close to the same level of accuracy for the trained model. As a result, these limitations seemingly diminish the threat of gradient inversion in practical FL applications.
In this paper we argue that evaluating defenses on existing optimization-based attacks may provide a false sense of security. To this end, we propose a simple learning-based attack—which we call Learning To Invert (LTI)—that trains a model to learn how to invert the gradient update to recover client samples; see Figure 1 for an illustration. We assume that the adversary (i.e., the server) has access to an auxiliary dataset whose distribution is similar to that of the private data, and use it to generate training samples for the gradient inversion model by querying the global model for gradients. Our attack is highly adaptable to different defenses since applying a defense simply amounts to training data augmentation for the gradient inversion model.
We empirically demonstrate that LTI can successfully circumvent defenses based on gradient perturbation (i.e., using differential privacy; (Abadi et al., 2016)), gradient pruning (Aji & Heafield, 2017) and sign compression (Bernstein et al., 2018) on both vision and language tasks.
• Vision: We evaluate on the CIFAR10 (Krizhevsky et al., 2009) classification dataset. LTI attains recovery accuracy close to that of the best optimization-based method when no defense is applied, and significantly outperforms all prior attacks under defense.
• NLP: We experiment with causal language model training on the WikiText (Merity et al., 2016) dataset, where LTI attains state-of-the-art performance in all settings, with or without defense.
Given the strong empirical performance of LTI and its adaptability to different learning tasks and defense mechanisms, we advocate for its use as a simple baseline for future studies on gradient inversion attacks in FL.
2 BACKGROUND
Federated learning. The objective of federated learning (McMahan et al., 2017) is to train a machine learning model in a distributed fashion without centralized collection of training data. In detail, let fw be the global model parameterized by w, and consider a supervised learning setting that optimizes w by minimizing a loss function ℓ over the training set Dtrain: $\sum_{(x,y)\in D_{\mathrm{train}}} \ell(f_w(x), y)$. In centralized learning this is typically done by computing a stochastic gradient $\frac{1}{B}\sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i)$ over a randomly drawn batch of data $(x_1, y_1), \ldots, (x_B, y_B)$ and minimizing ℓ using gradient descent.
In FL, instead of centrally collecting Dtrain to draw a random batch during training, the training set Dtrain is distributed across multiple clients and the model fw is stored on a central server. At each iteration, the model parameter w is transmitted to each client to compute the per-sample gradients {∇wℓ(fw(xi), yi)}Bi=1 locally over a set of clients. The server and clients then execute a federated aggregation protocol to compute the average gradient for the gradient descent update. A major advantage of FL is data privacy since clients do not need to disclose their data explicitly, but rather only send their gradient ∇wℓ(fw(xi), yi) to the server. Techniques such as secure aggregation (Bonawitz et al., 2016) and differential privacy (Dwork et al., 2006; 2014) can further reduce the privacy leakage from sending this gradient update.
Gradient inversion attack. Despite the promise of data privacy in FL, recent work showed that the heuristic of sending gradient updates instead of training samples themselves in fact provides a false sense of security. Zhu et al. (2019) showed in their seminal paper that it is possible for the server to recover the full batch of training samples given aggregated gradients. These optimizationbased gradient inversion attacks operate by optimizing a set of dummy data x̃1, . . . , x̃B and labels ỹ1, . . . , ỹB to match their gradient to the observed gradient:
$$\min_{\tilde{x}} \;\left\| \sum_{i=1}^{B} \nabla_w \ell(f_w(\tilde{x}_i), \tilde{y}_i) - \sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i) \right\|_2^2 . \qquad (1)$$
For image tasks, since Equation 1 is differentiable in x̃i and ỹi and the model parameter w is known to the server, the server can optimize Equation 1 using gradient-based search. Doing so yields recovered samples (x̃i, ỹi) that closely resemble actual samples (xi, yi) in the batch. In practice this approach is highly effective, and follow-up works proposed several optimizations to further improve its recovery accuracy (Geiping et al., 2020; Yin et al., 2021; Jeon et al., 2021).
For language tasks this optimization problem is considerably more complex since the samples x1, . . . , xB are sequences of discrete tokens, and optimizing Equation 1 amounts to solving a discrete optimization problem. To circumvent this difficulty, Zhu et al. (2019) and Deng et al. (2021) instead optimize the token embeddings to match the observed gradient and then map the recovered embeddings to their closest tokens in the embedding layer to recover the private text. In contrast, Gupta et al. (2022) leveraged the insight that the gradient of the token embedding layer can be used to recover exactly the set of tokens present in the training sample, and then used beam search to optimize the ordering of tokens for fluency to recover the private text.
Gradient inversion under the malicious server setting. The aforementioned gradient inversion attacks operate under the honest-but-curious setting where the server faithfully executes the federated learning protocol, but attempts to extract private information from the observed gradients. Fowl et al. (2021), Boenisch et al. (2021) and Fowl et al. (2022) consider a stronger malicious server threat model that allows the server to transmit arbitrary model parameters w to the clients. Under this threat model, it is possible to carefully craft the model parameters so that the training sample can be recovered exactly from its gradient even when the batch size B is large. While this setting is certainly realistic and relevant, our paper operates under the weaker honest-but-curious threat model.
3 LEARNING TO INVERT: LEARNING-BASED GRADIENT INVERSION ATTACKS
Motivation. The threat of gradient inversion attack has prompted prior work to employ defense mechanisms to mitigate this privacy risk in FL (Zhu et al., 2019; Jeon et al., 2021). Intuitively, such defenses reduce the amount of information contained in the gradient about the training sample by either perturbing the gradient with noise (Abadi et al., 2016) or compressing them (Aji & Heafield, 2017; Bernstein et al., 2018), making recovery much more difficult. However, doing so also reduces the amount of information a sample can provide for training the global model, and hence has a negative impact on the model’s performance. This is certainly true for principled defenses based on differential privacy (Dwork et al., 2006) such as gradient perturbation (Abadi et al., 2016), however, defenses based on gradient compression seemingly provide a much better privacy-utility trade-off, effectively preventing the attack with minor reduction in model performance (Zhu et al., 2019).
The empirical success of existing defenses seemingly diminishes the threat of gradient inversion in FL, especially since gradient compression (Aji & Heafield, 2017; Bernstein et al., 2018) is already commonplace in practical FL applications to reduce communication cost. However, we argue that optimization-based attacks underestimate the power of the adversary: if the adversary has access to an auxiliary dataset Daux, they can train a gradient inversion model to recover Daux from its gradients computed on the global model. As we will establish later, this greatly empowers the adversary, exposing existing risks to federated learning.
Threat model. We consider the setting where the adversary is an honest-but-curious server, who executes the learning protocol faithfully but aims to extract private training data from the observed gradients. We also assume that the FL protocol does not leverage secure aggregation, so per-client gradients are revealed to the server. Under these assumptions, in each FL iteration the adversary
has the knowledge of model weights w and the gradients ∇wℓ(fw(x), y) for each sample (x, y) in the batch. Moreover, we assume the adversary has an auxiliary dataset Daux, which could be in-distribution or a mixture of in-distribution and out-of-distribution data. This assumption is similar to the setting in Jeon et al. (2021), which assumes a generative model that is trained from the in-distribution data, and is common in the study of other privacy attacks such as membership inference (Shokri et al., 2017).
Learning to invert (LTI). Since the adversary has knowledge of the model weights, he/she is able to generate the gradient ∇wℓ(fw(xaux), yaux) for each sample (xaux, yaux) in the auxiliary dataset. This allows the adversary to learn a gradient inversion model gθ, parameterized by θ, to predict the data point (xaux, yaux) from the gradient of the global model ∇wℓ(fw(xaux), yaux) by solving the following learning problem:
$$\min_{\theta} \sum_{(x_{\mathrm{aux}}, y_{\mathrm{aux}}) \in D_{\mathrm{aux}}} \ell_{\mathrm{attack}} \Big( g_\theta\big(\nabla_w \ell(f_w(x_{\mathrm{aux}}), y_{\mathrm{aux}})\big),\ (x_{\mathrm{aux}}, y_{\mathrm{aux}}) \Big). \qquad (2)$$
In practice, ℓattack can be the cross-entropy (for discrete input) or squared-loss (for continuous-valued input) function and we find that using a multi-layer perceptron (MLP) (Bishop et al., 1995) for gθ is effective empirically. Importantly, when a defense mechanism such as gradient perturbation or gradient compression is applied, we can apply the same transformation to ∇wℓ(fw(xaux), yaux) to augment the training data for gθ to carry out an adaptive attack. We will show in section 4 that this simple approach is surprisingly effective at circumventing existing defenses.
Dimensionality reduction for large models. One potential problem for LTI is that the gradients ∇wℓ(fw(xaux), yaux) can be extremely high-dimensional. For example, ResNet18 (He et al., 2016) for vision tasks has roughly 11 million trainable parameters, and the three-layer transformer (Vaswani et al., 2017) we use for language tasks has approximately 1.1 million. Such high-dimensional input to the model gθ can lead to memory issues: with an 11M-dimensional gradient, the first layer of the MLP alone would have 11M × h parameters, where h denotes the size of the first hidden layer.
To address this issue, we use feature hashing (Weinberger et al., 2009) to reduce the dimensionality of the input gradient. To this end we create k bins, where k is much smaller than the size of gradient m, and assign each gradient dimension i ∈ [m] to a random bin r(i) ∈ [k]. For each bin, we sum up the gradient values that are assigned to this bin. As a result, we obtain a feature vector of size k for the inversion model gθ. In other words, we project the gradient ∇wℓ(fw(xaux), yaux) to P∇wℓ(fw(xaux), yaux) using the random projection matrix P given by:
$$P \in \{0, 1\}^{k \times m} \quad \text{s.t.} \quad \forall i:\ P_{r(i),\,i} = 1 \ \text{and}\ P_{j,\,i} = 0 \ \text{for all } j \neq r(i).$$
If r(i) is implemented with a pseudo-uniform hashing function, P does not need to be stored in memory, reducing the memory footprint of gθ to a constant independent of the gradient dimension.
4 EXPERIMENT
We evaluate LTI on vision and language tasks against several existing defenses to show that it vastly outperforms prior gradient inversion attacks. We consider the following defense mechanisms evaluated in prior work (Zhu et al., 2019; Jeon et al., 2021):
• None. The gradient shared between the server and clients is the full gradient without any defense. This is the most common setting that previous papers focus on.
• Sign compression (Bernstein et al., 2018) applies the sign function to each dimension of the gradient independently to compress the gradient to one bit per dimension.
• Gradient pruning with pruning rate α (Aji & Heafield, 2017) zeroes out the bottom α fraction of coordinates of ∇wℓ(fw(x), y) in terms of absolute value, which effectively compresses the gradient to (1 − α)m dimensions.
• Gradient perturbation with Gaussian standard deviation σ (Abadi et al., 2016) is a differentially private mechanism used commonly for training private models with SGD. An i.i.d. Gaussian random vector N(0, σ²) is added to the gradient, which one can show achieves ϵ-local differential privacy (Kasiviswanathan et al., 2011) with ϵ = O(1/σ).
4.1 EVALUATION ON VISION TASK
For evaluating LTI on a vision task, we experiment with image classification on CIFAR10 (Krizhevsky et al., 2009). The target model fw is LeNet (LeCun et al., 1998) with 1.5 × 10^4 parameters, trained using the cross-entropy loss.
Baselines. We compare our method with two baseline gradient inversion attacks: Inverting Gradients (IG; Geiping et al. (2020)), a representative optimization-based method, and Gradient Inversion with Generative Image Prior (GI-GIP; Jeon et al. (2021)), the state-of-the-art optimization-based method that uses a generative model to encode the data prior. We make minor modifications to these attacks to adapt them to various defenses; see appendix for details. The threat model for our attack is most similar to GI-GIP since both use an auxiliary dataset to encode the data prior.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use the train split of CIFAR10 as the auxiliary dataset for training the inversion model gθ, and the test split for inverting gradients computed on the global model fw.
• Inversion model architecture. We use a three-layer MLP with hidden size 3000 for our inversion model gθ. The MLP takes the flattened gradient vector as input and outputs a 3072-dimensional vector representing the flattened image. The training objective ℓattack in Equation 2 is the mean squared error (MSE) between the output vector from the MLP and the flattened ground truth image.
• Training details. We use the Adam (Kingma & Ba, 2014) optimizer for training gθ. The model is trained for 200 epochs using training batch size 256. The initial learning rate is 10^-4 and is dropped to 10^-5 after 150 epochs.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 2080 GPUs and each training run takes about 1.5 hours.
Evaluation methodology. We evaluate LTI and the aforementioned baselines on 1,000 random images from the CIFAR10 test split. To measure reconstruction quality, we use three metrics:
• Mean squared error (MSE) measures the average pixel-wise (squared) distance between the reconstructed image and the ground truth image. Lower is better.
• Peak signal-to-noise ratio (PSNR) measures the ratio between the maximum image pixel value and the MSE. Higher is better.
• Learned perceptual image patch similarity (LPIPS) measures distance in the feature space of a VGG (Simonyan & Zisserman, 2014) model trained on ImageNet. Lower is better.
4.1.1 MAIN RESULTS
Quantitative evaluation. Table 1 gives quantitative comparisons for IG, GI-GIP, and LTI against various defense mechanisms on CIFAR10. When no defense is applied, GI-GIP achieves the best performance according to all three metrics, whereas LTI performs almost equally well in terms of MSE and close to that of IG in terms of PSNR and LPIPS. However, when the gradient is augmented with a defense mechanism, both IG and GI-GIP have considerably worse performance with MSE
close to 0.1. By comparison, LTI outperforms both baselines significantly and consistently across all three defense mechanisms. For example, under gradient perturbation with σ = 0.1, which prior work believed is sufficient for preventing gradient inversion attacks (Zhu et al., 2019; Jeon et al., 2021), MSE can be as low as 0.012 for LTI. Our result therefore provides considerable additional insight for the level of empirical privacy achieved by DP-SGD (Abadi et al., 2016), and suggests that the theoretical privacy leakage as predicted by DP ϵ may be tighter than previously thought.
Qualitative evaluation. Figure 2 shows 4 random CIFAR10 test samples and their reconstructions under different defense mechanisms. Without any defense in place, all three methods recover a considerable amount of semantic information about the object of interest, with both GI-GIP and LTI faithfully reconstructing the training sample. Under the sign compression defense, IG completely fails to reconstruct all 4 samples, while GI-GIP only successfully reconstructs the second image. In contrast, LTI is able to recover the semantic information in all 4 samples. Results for gradient pruning and gradient perturbation yield similar conclusions. Additional samples are given in the appendix.
4.1.2 ABLATION STUDIES
Since LTI learns to invert gradients using the auxiliary dataset, its performance depends on the quantity and quality of data available to the adversary. We perform ablation studies to better understand this dependence by changing the auxiliary dataset size and its distribution.
Varying the auxiliary dataset size. We randomly subsample the CIFAR10 training set to construct auxiliary datasets of size {500, 5000, 15000, 25000, 35000, 45000, 50000} and evaluate the performance of LTI under various defenses. Figure 3(a) plots reconstruction MSE as a function of the auxiliary dataset size, which is monotonically decreasing as expected. Moreover, with just 5,000 samples for training the inversion model (second point in each curve), the performance is nearly as good as when training on the full CIFAR10 training set. Notably, even with an auxiliary dataset as small as 500 samples, reconstruction MSE is still lower than that of IG and GI-GIP in Table 1. Corresponding figures for PSNR and LPIPS in the appendix show similar findings.
Varying the auxiliary data distribution. Although access to a large set of in-distribution data may not be available in practice, it is plausible that the adversary can collect out-of-distribution samples for the auxiliary dataset. This is beneficial for the adversary since a model that learns how to invert out-of-distribution samples given their gradients may transfer to in-distribution data as well. To simulate this scenario, we divide CIFAR10 into two halves with disjoint classes, and construct the auxiliary dataset by combining a β fraction of samples from the first half and a 1 − β fraction of samples from the second half for β ∈ {0, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1}. The target model fw is trained only on samples from the first half, and hence the auxiliary set has the exact same distribution as the target model’s data when β = 1 and only has out-of-distribution data when β = 0.
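The construction of the shifted auxiliary set can be illustrated with a few lines of NumPy. The split into "first five classes vs. last five classes" and the total size of 25,000 are assumptions used only to make the sketch concrete; the paper does not specify which classes form each half.

```python
# Hypothetical construction of the β-mixed auxiliary set described above.
import numpy as np

def mixed_auxiliary_indices(labels, beta, size=25_000, seed=0):
    """labels: CIFAR10 training labels as a 1-D integer array."""
    rng = np.random.default_rng(seed)
    in_dist = np.where(labels < 5)[0]     # first half of the classes (assumed split)
    out_dist = np.where(labels >= 5)[0]   # second half (disjoint classes)
    n_in = int(beta * size)
    idx = np.concatenate([
        rng.choice(in_dist, n_in, replace=False),
        rng.choice(out_dist, size - n_in, replace=False),
    ])
    rng.shuffle(idx)
    return idx
```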
Figure 3(b) shows reconstruction MSE as a function of β; corresponding figures for PSNR and LPIPS are given in the appendix. We make the following observations:
1. Even if the auxiliary dataset only contains 250 in-distribution samples (β = 0.01; second point in each curve), MSE of the inversion model is still lower than that of the best baseline in Table 1. For example, with the sign compression defense, LTI attains an MSE of ≤ 0.02, which is much lower than the MSE of 0.116 for IG and 0.091 for GI-GIP.
2. When the auxiliary dataset contains only out-of-distribution data (β = 0), the inversion model has very high reconstruction MSE, which suggests that methods for improving out-of-distribution generalization may be necessary for further improvement.
4.2 EVALUATION ON LANGUAGE TASK
For evaluating LTI on a language task, we experiment with causal language model training1 for next-token prediction. The language model fw is a three-layer transformer (Vaswani et al., 2017) with a frozen token embedding layer. This is a common technique for language model fine-tuning (Sun et al., 2019), which also has privacy benefits since direct privacy leakage from the gradient magnitude of the token embedding layer can be prevented (Fowl et al., 2022; Gupta et al., 2022). As a result, the trainable model contains about 1.1 × 10^6 parameters. We train the language model on WikiText (Merity et al., 2016), where each training sample is limited to L = 16 tokens and the language model is trained to predict the next token x_l given x_{:l−1} for l = 1, . . . , L using the cross-entropy loss.
Baseline. We compare LTI with TAG (Deng et al., 2021)—the state-of-the-art language model gradient inversion attack without utilizing the token embedding layer gradient2. The objective function for TAG is a slight modification of Equation 1 that uses both the ℓ2 and ℓ1 distance between the observed gradient and the gradient of dummy data. We also modify TAG slightly to adapt it to different defenses; see appendix for details.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use ~1.8 × 10^5 samples from the train split of Wikitext as the auxiliary dataset, and 1,000 samples from the test split for evaluating the attack. In addition, we introduce a weaker variant of our attack that only assumes knowledge of the marginal token distribution for the language model training data. Instead of using the WikiText train split as auxiliary data, we sample random tokens according to the marginal token distribution to generate pseudo-data for training the inversion model. We show that this variant, which we denote LTI-P, can even outperform LTI with in-distribution auxiliary data due to access to infinite training data.
1 We follow the task setup and code in https://github.com/JonasGeiping/breaching
2 We do not compare against a more recent attack by Gupta et al. (2022) since it crucially depends on access to the token embedding layer gradient.
• Inversion model architecture. We train a two-layer MLP with ReLU activation and first hidden-layer size 600 and second hidden-layer size 1,000. The inversion model outputs L probability vectors, each with size equal to the vocabulary size (~50,000), and we train it using the cross-entropy loss to predict the L tokens given the target model gradient. We use feature hashing (Weinberger et al., 2009) to reduce the target model gradient to 10% of its original dimensions as input to the inversion model.
• Training details. We use Adam (Kingma & Ba, 2014) to train the inversion model over 20 epochs with batch size 64. Learning rates are selected separately for each defense from {10^−3, 10^−4, 10^−5}.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 3090 GPUs and each training run takes about 3 hours.
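A minimal sketch of this inversion head is given below. The hashed input dimension and the helper name token_loss are assumptions; the hidden sizes, the per-position prediction over the vocabulary, and the cross-entropy objective follow the description above.

```python
# Sketch of the language inversion model; hashed_dim is an assumed ~10% of the gradient size.
import torch
import torch.nn as nn

L, vocab = 16, 50_000
hashed_dim = 110_000            # ~10% of the ~1.1M-parameter gradient (assumption)

inversion_model = nn.Sequential(
    nn.Linear(hashed_dim, 600), nn.ReLU(),
    nn.Linear(600, 1_000), nn.ReLU(),
    nn.Linear(1_000, L * vocab),          # L logit vectors over the vocabulary (large layer)
)

def token_loss(hashed_grad, target_tokens):
    """hashed_grad: (B, hashed_dim); target_tokens: (B, L) integer token ids."""
    logits = inversion_model(hashed_grad).view(-1, vocab)       # (B*L, vocab)
    return nn.functional.cross_entropy(logits, target_tokens.reshape(-1))
```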
Evaluation methodology. We evaluate LTI and the TAG baseline on 1, 000 samples from the WikiText test set. To measure the quality of reconstructed text, we use four metrics:
• Accuracy (%) measures the average token-wise zero-one accuracy. Higher is better.
• Rouge-1 (%), Rouge-2 (%) and Rouge-L (%) measure the overlap of unigrams, bigrams, and the length of the longest common subsequence between the ground truth and the reconstructed text. Higher is better.
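The token-level metrics can be sketched as follows; this is an illustrative, simplified implementation (no clipping, stemming, or tokenisation subtleties), and a real evaluation would rely on a standard ROUGE package.

```python
# Simplified token-level metrics for reconstructed text (illustration only).
def token_accuracy(pred, truth):
    """Position-wise zero-one accuracy over two equal-length token lists."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def rouge_n_recall(pred, truth, n=1):
    """Fraction of reference n-grams that also appear in the prediction (unclipped)."""
    grams = lambda s: [tuple(s[i:i + n]) for i in range(len(s) - n + 1)]
    ref, hyp = grams(truth), grams(pred)
    return sum(g in hyp for g in ref) / max(len(ref), 1)
```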
Results. Table 2 shows quantitative comparison between LTI (and its variant LTI-P) and TAG against various defenses. The overall trend is remarkably consistent: LTI and LTI-P outperform TAG in all four metrics for all defense settings, with LTI-P achieving state-of-the-art recovery accuracy by far. This result suggests that knowledge of the marginal token distribution encodes enough data prior for LTI-P to train the inversion model, and having access to infinite training data allows it to better generalize to the test set compared to LTI. In practice, it is very plausible that the marginal token distribution is known to the adversary, and hence LTI-P serves as a surprisingly simple and effective baseline for gradient inversion in NLP.
Figure 4 shows 3 random test samples from WikiText and their reconstructions using LTI-P and TAG, with tokens that are correctly reconstructed highlighted in blue. Without any defense, both TAG and LTI-P yield reasonably accurate reconstructions, with LTI-P faithfully reconstructing all but 1-2 tokens. With the sign compression defense applied, TAG fails to recover any token correctly, whereas LTI-P can faithfully recover almost half of the tokens in each sample. Results for gradient pruning and gradient perturbation yield similar conclusions, with TAG recovering a larger but still relatively insignificant set of tokens. Additional samples are given in the appendix.
5 CONCLUSION AND FUTURE WORK
We demonstrated the effectiveness of LTI—a simple learning-based gradient inversion attack—under realistic federated learning settings. For both vision and language tasks, LTI can match or exceed the performance of state-of-the-art optimization-based methods when no defense is applied, and significantly outperform all prior works under defenses based on gradient perturbation and gradient
compression. Given its simplicity and versatility, we advocate the use of LTI both as a strong baseline for future research as well as a diagnostic tool for evaluating privacy leakage in FL.
Negative societal impact. The concept of a gradient inversion attack can lead to negative consequences if used inappropriately. Our work showed that if FL is deployed without consideration for gradient inversion attacks, an adversary can leverage its vulnerabilities to compromise the data privacy of clients even under strong empirical defenses. However, we strongly emphasize that our work should not be interpreted as a tool for adversaries, but rather serve to inform the community about the risks of data privacy breach in FL and promote future research into safe practices.
Limitations. This paper serves as preliminary work towards understanding the effectiveness of learning-based gradient inversion attacks, and our method can be further improved along several directions.
1. For large models, our current approach is to hash the gradients into a lower-dimensional space to reduce memory cost. It may be possible to leverage the model’s architecture to design more effective dimensionality reduction techniques to further scale up the method.
2. Currently we only focus on the setting with batch size 1, which precludes the use of secure aggregation (Bonawitz et al., 2016)—a common technique in FL for amplifying privacy by aggregating the gradients from multiple clients before revealing them to the server. For LTI, the complexity of the MLP would increase with the batch size, which makes learning harder. More advanced model architectures and loss designs might help with the large-batch case.
3. LTI in its current form does not leverage additional data priors such as the smoothness prior for images and the fluency prior for text. We can readily incorporate these priors by modifying the inversion model’s loss function with total variation (for image data) or perplexity under a trained language model (for text data), which may further improve the performance of LTI.
A SUPPLEMENTARY MATERIAL FOR SECTION 4
A.1 MODIFICATIONS FOR BASELINE METHODS
Vision baselines. Both IG and GI-GIP use cosine distance instead of ℓ2 distance in Equation 1 for optimizing the dummy data. For the sign compression defense, this loss function does not optimize the correct objective since the dummy data’s gradient is not a vector with ±1 entries but rather a real-valued vector with the same sign. We replace cosine distance by the loss $\sum_{i=1}^{m} \big( \ell_{\mathrm{sign}}^{i} \big)^2$, where

$$\ell_{\mathrm{sign}}^{i} = \max\left\{ -\nabla_{w_i}\ell(f_w(\tilde{x}), \tilde{y}) \cdot \mathrm{Sign}\big( \nabla_{w_i}\ell(f_w(x), y) \big),\ 0 \right\}. \qquad (3)$$
One sanity check for this loss is that when ∇wiℓ(fw(x̃), ỹ) has the same sign as that of ∇wiℓ(fw(x), y), the minimum loss value of 0 is achieved. For the gradient pruning defense, optimizing the cosine distance between the dummy data gradient and the pruned ground truth gradient will force too many gradient values to 0, which is the incorrect value for the full ground truth gradient. Therefore we only compute cosine distance over the non-zero dimensions of pruned gradient.
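A possible PyTorch rendering of this surrogate loss is shown below; the function name and tensor layout are our own, not taken from the baselines' released code.

```python
# Sign-compression surrogate loss from Equation 3 (illustrative implementation).
import torch

def sign_loss(dummy_grad, true_grad_sign, squared=True):
    """dummy_grad: gradient of the dummy data (flattened);
    true_grad_sign: observed ±1 signs of the ground-truth gradient."""
    l_sign = torch.clamp(-dummy_grad * true_grad_sign, min=0.0)   # 0 whenever the signs agree
    return (l_sign ** 2).sum() if squared else l_sign.sum()
```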
Language baselines. For TAG, we find that the loss function also needs to be modified slightly to accommodate the sign compression and gradient pruning defenses:
• Sign compression. Similar to the vision baselines, the ℓ2 and ℓ1 distances between the dummy data gradient and the ground truth gradient sign do not optimize the correct objective. We replace $\|\cdot\|_2^2$ and $\|\cdot\|_1$ by $\sum_{i=1}^{m} (\ell_{\mathrm{sign}}^{i})^2$ and $\sum_{i=1}^{m} \ell_{\mathrm{sign}}^{i}$, respectively, where $\ell_{\mathrm{sign}}^{i}$ is defined in Equation 3.
• Gradient pruning. We make the same modification to TAG as in the vision baselines.
A.2 AUXILIARY DATASET ABLATION STUDIES
In section 4.1.2 we showed reconstruction MSE for LTI as a function of the auxiliary dataset size and the shift factor β. For completeness, we show the corresponding PSNR and LPIPS curves in Figure 6. Similar to Figure 3, when reducing the auxiliary dataset size (e.g., from 50, 000 to 5, 000) or reducing the proportion of in-distribution data (e.g., from β = 1 to β = 0.1), the performance of LTI does not worsen significantly.
A.3 ADDITIONAL SAMPLES
Figure 7 and Figure 8 show additional samples and their reconstructions under various defense mechanisms. The result is consistent with Figure 2 and Figure 4, where LTI shows consistently better reconstruction quality compared to baselines.
| 1. What is the focus and contribution of the paper on gradient inversion attacks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to learn model parameters and require auxiliary data?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the effectiveness of the proposed method in reducing the number of auxiliary data while maintaining performance?
5. Do you have any suggestions for improving the learning function or algorithm used in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a simple learning-based gradient inversion attack. The new method is trained using auxiliary data and can learn how to invert gradients on both vision and language tasks. A new learning function is built to learn model parameters on the auxiliary data.
Strengths And Weaknesses
To learn the parameter theta, a large amount of auxiliary data is required. This is also a limitation of the proposed method. A challenging problem would be to reduce the amount of auxiliary data while maintaining performance.
The learning function of eq. (2) is simple yet powerful. An explicit algorithm listing is expected, showing the learning steps with data input/output and all the parameters used in the algorithm.
Clarity, Quality, Novelty And Reproducibility
The code and data are available.
ICLR | Title
Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
Abstract
Gradient inversion attack enables recovery of training samples from model updates in federated learning (FL) and constitutes a serious threat to data privacy. To mitigate this vulnerability, prior work proposed both principled defenses based on differential privacy, as well as heuristic defenses based on gradient compression as countermeasures. These defenses have so far been very effective, in particular those based on gradient compression that allow the model to maintain high accuracy while greatly reducing the attack’s effectiveness. In this work, we argue that such findings do not accurately reflect the privacy risk in FL, and show that existing defenses can be broken by a simple adaptive attack that trains a model using auxiliary data to learn how to invert gradients on both vision and language tasks.
1 INTRODUCTION
Federated learning (FL; (McMahan et al., 2017)) is a popular framework for distributed model training on sensitive user data. Instead of centrally storing the training data, FL operates in a server-client setting where the server hosts the model and has no direct access to the data. The clients can apply the model on their private data and send gradient updates back to the server. This learning regime promises data privacy as users only share gradients but never any raw data. However, recent work (Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020) showed that despite these efforts, the server can still recover the training data from gradient updates, violating the promise of data privacy in FL. These so-called gradient inversion attacks operate by optimizing over the input space to find training samples whose gradient matches the observed gradient, and such attacks remain effective even when clients utilize secure aggregation (Bonawitz et al., 2016) to avoid revealing individual updates (Yin et al., 2021; Jeon et al., 2021).
As countermeasures against these gradient inversion attacks, prior work proposed both principled defenses based on differential privacy (Abadi et al., 2016), as well as heuristics that compress the gradient update through gradient pruning (Aji & Heafield, 2017) or sign compression (Bernstein et al., 2018). In particular, gradient compression defenses have so far enjoyed great success, severely hindering the effectiveness of existing optimization-based attacks (Zhu et al., 2019; Jeon et al., 2021) while maintaining close to the same level of accuracy for the trained model. As a result, these limitations seemingly diminish the threat of gradient inversion in practical FL applications.
In this paper we argue that evaluating defenses on existing optimization-based attacks may provide a false sense of security. To this end, we propose a simple learning-based attack—which we call Learning To Invert (LTI)—that trains a model to learn how to invert the gradient update to recover client samples; see Figure 1 for an illustration. We assume that the adversary (i.e., the server) has access to an auxiliary dataset whose distribution is similar to that of the private data, and use it to generate training samples for the gradient inversion model by querying the global model for gradients. Our attack is highly adaptable to different defenses since applying a defense simply amounts to training data augmentation for the gradient inversion model.
We empirically demonstrate that LTI can successfully circumvent defenses based on gradient perturbation (i.e., using differential privacy; (Abadi et al., 2016)), gradient pruning (Aji & Heafield, 2017) and sign compression (Bernstein et al., 2018) on both vision and language tasks.
• Vision: We evaluate on the CIFAR10 (Krizhevsky et al., 2009) classification dataset. LTI attains recovery accuracy close to that of the best optimization-based method when no defense is applied, and significantly outperforms all prior attacks under defense.
• NLP: We experiment with causal language model training on the WikiText (Merity et al., 2016) dataset, where LTI attains state-of-the-art performance in all settings, with or without defense.
Given the strong empirical performance of LTI and its adaptability to different learning tasks and defense mechanisms, we advocate for its use as a simple baseline for future studies on gradient inversion attacks in FL.
2 BACKGROUND
Federated learning. The objective of federated learning (McMahan et al., 2017) is to train a machine learning model in a distributed fashion without centralized collection of training data. In detail, let fw be the global model parameterized by w, and consider a supervised learning setting that optimizes w by minimizing a loss function ℓ over the training set Dtrain: $\sum_{(x,y) \in D_{\mathrm{train}}} \ell(f_w(x), y)$. In centralized learning this is typically done by computing a stochastic gradient $\frac{1}{B} \sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i)$ over a randomly drawn batch of data $(x_1, y_1), \ldots, (x_B, y_B)$ and minimizing ℓ using gradient descent.
In FL, instead of centrally collecting Dtrain to draw a random batch during training, the training set Dtrain is distributed across multiple clients and the model fw is stored on a central server. At each iteration, the model parameter w is transmitted to each client to compute the per-sample gradients {∇wℓ(fw(xi), yi)}Bi=1 locally over a set of clients. The server and clients then execute a federated aggregation protocol to compute the average gradient for the gradient descent update. A major advantage of FL is data privacy since clients do not need to disclose their data explicitly, but rather only send their gradient ∇wℓ(fw(xi), yi) to the server. Techniques such as secure aggregation (Bonawitz et al., 2016) and differential privacy (Dwork et al., 2006; 2014) can further reduce the privacy leakage from sending this gradient update.
Gradient inversion attack. Despite the promise of data privacy in FL, recent work showed that the heuristic of sending gradient updates instead of training samples themselves in fact provides a false sense of security. Zhu et al. (2019) showed in their seminal paper that it is possible for the server to recover the full batch of training samples given aggregated gradients. These optimizationbased gradient inversion attacks operate by optimizing a set of dummy data x̃1, . . . , x̃B and labels ỹ1, . . . , ỹB to match their gradient to the observed gradient:
$$\min_{\tilde{x}} \left\| \sum_{i=1}^{B} \nabla_w \ell(f_w(\tilde{x}_i), \tilde{y}_i) - \sum_{i=1}^{B} \nabla_w \ell(f_w(x_i), y_i) \right\|_2^2. \qquad (1)$$
For image tasks, since Equation 1 is differentiable in x̃i and ỹi and the model parameter w is known to the server, the server can optimize Equation 1 using gradient-based search. Doing so yields recovered samples (x̃i, ỹi) that closely resemble actual samples (xi, yi) in the batch. In practice this approach is highly effective, and follow-up works proposed several optimizations to further improve its recovery accuracy (Geiping et al., 2020; Yin et al., 2021; Jeon et al., 2021).
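To make this concrete, the sketch below shows a schematic single-sample variant of the optimization in Equation 1. The dummy label is assumed known for simplicity (joint label recovery is omitted), and model, loss_fn, observed_grad, and the hyperparameters are placeholders rather than settings from any of the cited attacks.

```python
# Schematic optimization-based gradient inversion (Eq. 1) for batch size 1.
import torch

def invert_by_optimization(model, loss_fn, observed_grad, y_dummy, steps=1000, lr=0.1):
    x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)   # dummy image to optimize
    opt = torch.optim.Adam([x_dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        dummy_grad = torch.autograd.grad(loss_fn(model(x_dummy), y_dummy),
                                         model.parameters(), create_graph=True)
        rec_loss = sum(((dg - og) ** 2).sum()                  # squared ℓ2 gradient match
                       for dg, og in zip(dummy_grad, observed_grad))
        rec_loss.backward()
        opt.step()
    return x_dummy.detach()
```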
For language tasks this optimization problem is considerably more complex since the samples x1, . . . , xB are sequences of discrete tokens, and optimizing Equation 1 amounts to solving a discrete optimization problem. To circumvent this difficulty, Zhu et al. (2019) and Deng et al. (2021) instead optimize the token embeddings to match the observed gradient and then map the recovered embeddings to their closest tokens in the embedding layer to recover the private text. In contrast, Gupta et al. (2022) leveraged the insight that the gradient of the token embedding layer can be used to recover exactly the set of tokens present in the training sample, and then used beam search to optimize the ordering of tokens for fluency to recover the private text.
Gradient inversion under the malicious server setting. The aforementioned gradient inversion attacks operate under the honest-but-curious setting where the server faithfully executes the federated learning protocol, but attempts to extract private information from the observed gradients. Fowl et al. (2021), Boenisch et al. (2021) and Fowl et al. (2022) consider a stronger malicious server threat model that allows the server to transmit arbitrary model parameters w to the clients. Under this threat model, it is possible to carefully craft the model parameters so that the training sample can be recovered exactly from its gradient even when the batch size B is large. While this setting is certainly realistic and relevant, our paper operates under the weaker honest-but-curious threat model.
3 LEARNING TO INVERT: LEARNING-BASED GRADIENT INVERSION ATTACKS
Motivation. The threat of gradient inversion attack has prompted prior work to employ defense mechanisms to mitigate this privacy risk in FL (Zhu et al., 2019; Jeon et al., 2021). Intuitively, such defenses reduce the amount of information contained in the gradient about the training sample by either perturbing the gradient with noise (Abadi et al., 2016) or compressing them (Aji & Heafield, 2017; Bernstein et al., 2018), making recovery much more difficult. However, doing so also reduces the amount of information a sample can provide for training the global model, and hence has a negative impact on the model’s performance. This is certainly true for principled defenses based on differential privacy (Dwork et al., 2006) such as gradient perturbation (Abadi et al., 2016), however, defenses based on gradient compression seemingly provide a much better privacy-utility trade-off, effectively preventing the attack with minor reduction in model performance (Zhu et al., 2019).
The empirical success of existing defenses seemingly diminishes the threat of gradient inversion in FL, especially since gradient compression (Aji & Heafield, 2017; Bernstein et al., 2018) is already commonplace in practical FL applications to reduce communication cost. However, we argue that optimization-based attacks underestimate the power of the adversary: If the adversary has access to an auxiliary dataset Daux, they can train a gradient inversion model to recover Daux from its gradients computed on the global model. As we will establish later, this greatly empowers the adversary, exposing existing risks to federated learning.
Threat model. We consider the setting where the adversary is an honest-but-curious server, who executes the learning protocol faithfully but aims to extract private training data from the observed gradients. We also assume that the FL protocol does not leverage secure aggregation, so per-client gradients are revealed to the server. Under these assumptions, in each FL iteration the adversary
has the knowledge of model weights w and the gradients ∇wℓ(fw(x), y) for each sample (x, y) in the batch. Moreover, we assume the adversary has an auxiliary dataset Daux, which could be in-distribution or a mixture of in-distribution and out-of-distribution data. This assumption is similar to the setting in Jeon et al. (2021), which assumes a generative model that is trained from the in-distribution data, and is common in the study of other privacy attacks such as membership inference (Shokri et al., 2017).
Learning to invert (LTI). Since the adversary has knowledge of the model weights, he/she is able to generate the gradient ∇wℓ(fw(xaux), yaux) for each sample (xaux, yaux) in the auxiliary dataset. This allows the adversary to learn a gradient inversion model gθ, parameterized by θ, to predict the data point (xaux, yaux) from the gradient of the global model ∇wℓ(fw(xaux), yaux) by solving the following learning problem:
$$\min_{\theta} \sum_{(x_{\mathrm{aux}}, y_{\mathrm{aux}}) \in D_{\mathrm{aux}}} \ell_{\mathrm{attack}}\Big(g_\theta\big(\nabla_w \ell(f_w(x_{\mathrm{aux}}), y_{\mathrm{aux}})\big),\ (x_{\mathrm{aux}}, y_{\mathrm{aux}})\Big). \qquad (2)$$
In practice, ℓattack can be the cross-entropy (for discrete input) or squared-loss (for continuous-valued input) function and we find that using a multi-layer perceptron (MLP) (Bishop et al., 1995) for gθ is effective empirically. Importantly, when a defense mechanism such as gradient perturbation or gradient compression is applied, we can apply the same transformation to ∇wℓ(fw(xaux), yaux) to augment the training data for gθ to carry out an adaptive attack. We will show in section 4 that this simple approach is surprisingly effective at circumventing existing defenses.
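Concretely, the training pairs for gθ can be generated as in the sketch below; the names global_model, loss_fn, and defense are placeholders, and the returned pair is then fed to gθ under ℓattack exactly as in Equation 2 (the label target can be appended analogously).

```python
# Generating one (gradient, sample) training pair for the inversion model g_θ.
import torch

def make_training_pair(global_model, loss_fn, x_aux, y_aux, defense=None):
    loss = loss_fn(global_model(x_aux), y_aux)
    grad = torch.autograd.grad(loss, global_model.parameters())
    flat = torch.cat([g.flatten() for g in grad])      # flattened gradient vector
    if defense is not None:                            # adaptive attack: same defense as the clients
        flat = defense(flat)
    return flat.detach(), x_aux                        # input and target for ℓ_attack
```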
Dimensionality reduction for large models. One potential problem for LTI is that the gradients ∇wℓ(fw(xaux), yaux) can be extremely high-dimensional. For example, ResNet18 (He et al., 2016) for vision tasks has approximately 11 million trainable parameters, and even the three-layer transformer (Vaswani et al., 2017) we use for language tasks has roughly a million. Such high-dimensional input to the model gθ can lead to memory issues, as the first layer of the MLP would have 11M × h parameters for an 11M-dimensional gradient, where h denotes the size of the first hidden layer.
To address this issue, we use feature hashing (Weinberger et al., 2009) to reduce the dimensionality of the input gradient. To this end we create k bins, where k is much smaller than the size of gradient m, and assign each gradient dimension i ∈ [m] to a random bin r(i) ∈ [k]. For each bin, we sum up the gradient values that are assigned to this bin. As a result, we obtain a feature vector of size k for the inversion model gθ. In other words, we project the gradient ∇wℓ(fw(xaux), yaux) to P∇wℓ(fw(xaux), yaux) using the random projection matrix P given by:
$$P \in \{0, 1\}^{k \times m} \ \text{ s.t. } \ \forall i:\ P_{j,i} = 0 \ (\forall j \neq r(i)), \quad P_{r(i),i} = 1.$$
If r(i) is implemented with a pseudo-uniform hashing function, P does not need to be stored in memory, reducing the memory footprint of gθ to a constant independent of the gradient dimension.
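The hashing step can be written without ever materializing P, as in the sketch below; the specific pseudo-random assignment (a seeded torch.randint) is an assumption, since the text only requires a pseudo-uniform hash r(i).

```python
# Feature hashing of a flattened gradient into k bins (illustrative sketch).
import torch

def hash_gradient(flat_grad, k, seed=0):
    m = flat_grad.numel()
    gen = torch.Generator().manual_seed(seed)
    bins = torch.randint(0, k, (m,), generator=gen).to(flat_grad.device)   # r(i) per coordinate
    hashed = torch.zeros(k, dtype=flat_grad.dtype, device=flat_grad.device)
    hashed.index_add_(0, bins, flat_grad)                                  # sum values per bin
    return hashed
```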
4 EXPERIMENT
We evaluate LTI on vision and language tasks against several existing defenses to show that it vastly outperforms prior gradient inversion attacks. We consider the following defense mechanisms evaluated in prior work (Zhu et al., 2019; Jeon et al., 2021):
• None. The gradient shared between the server and clients is the full gradient without any defense. This is the most common setting that previous papers focus on.
• Sign compression (Bernstein et al., 2018) applies the sign function to each dimension of the gradient independently to compress the gradient to one bit per dimension.
• Gradient pruning with pruning rate α (Aji & Heafield, 2017) zeroes out the bottom α fraction of coordinates of ∇wℓ(fw(x), y) in terms of absolute value, which effectively compresses the gradient to (1 − α)m dimensions.
• Gradient perturbation with Gaussian standard deviation σ (Abadi et al., 2016) is a differentially private mechanism used commonly for training private models with SGD. An i.i.d. Gaussian random vector N(0, σ²) is added to the gradient, which one can show achieves ϵ-local differential privacy (Kasiviswanathan et al., 2011) with ϵ = O(1/σ).
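Reference implementations of the three defenses on a flattened gradient might look like the sketch below; the default parameter values are illustrative, not the exact settings used in the experiments.

```python
# Simple reference implementations of the defense mechanisms (illustration only).
import torch

def sign_compression(g):
    return torch.sign(g)

def gradient_pruning(g, alpha=0.99):
    """Zero out the alpha fraction of coordinates with smallest magnitude."""
    k = int(alpha * g.numel())
    if k == 0:
        return g.clone()
    thresh = g.abs().kthvalue(k).values            # k-th smallest magnitude
    return torch.where(g.abs() > thresh, g, torch.zeros_like(g))

def gradient_perturbation(g, sigma=0.1):
    return g + sigma * torch.randn_like(g)
```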
4.1 EVALUATION ON VISION TASK
For evaluating LTI on a vision task, we experiment with image classification on CIFAR10 (Krizhevsky et al., 2009). The target model fw is LeNet (LeCun et al., 1998) with 1.5 × 10^4 parameters trained using the cross-entropy loss.
Baselines. We compare our method with two baseline gradient inversion attacks: Inverting Gradients (IG; Geiping et al. (2020)), a representative optimization-based method, and Gradient Inversion with Generative Image Prior (GI-GIP; Jeon et al. (2021)), the state-of-the-art optimization-based method that uses a generative model to encode the data prior. We make minor modifications to these attacks to adapt them to various defenses; see appendix for details. The threat model for our attack is most similar to GI-GIP since both use an auxiliary dataset to encode the data prior.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use the train split of CIFAR10 as the auxiliary dataset for training the inversion model gθ and the test split for inverting gradients computed on the global model fw.
• Inversion model architecture. We use a three-layer MLP with hidden size 3000 for our inversion model gθ. The MLP takes the flattened gradient vector as input and outputs a 3072-dimensional vector representing the flattened image. The training objective ℓattack in Equation 2 is the mean squared error (MSE) between the output vector from the MLP and the flattened ground truth image.
• Training details. We use the Adam (Kingma & Ba, 2014) optimizer for training gθ. The model is trained for 200 epochs using training batch size 256. The initial learning rate is 10^−4 and is dropped to 10^−5 after 150 epochs.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 2080 GPUs and each training run takes about 1.5 hours.
Evaluation methodology. We evaluate LTI and the aforementioned baselines on 1, 000 random images from the CIFAR10 test split. To measure reconstruction quality, we use three metrics:
• Mean squared error (MSE) measures the average pixel-wise (squared) distance between the reconstructed image and the ground truth image. Lower is better.
• Peak signal-to-noise ratio (PSNR) measures the ratio between the maximum image pixel value and MSE. Higher is better.
• Learned perceptual image patch similarity (LPIPS) measures distance in the feature space of a VGG (Simonyan & Zisserman, 2014) model trained on ImageNet. Lower is better.
4.1.1 MAIN RESULTS
Quantitative evaluation. Table 1 gives quantitative comparisons for IG, GI-GIP, and LTI against various defense mechanisms on CIFAR10. When no defense is applied, GI-GIP achieves the best performance according to all three metrics, whereas LTI performs almost equally well in terms of MSE and close to that of IG in terms of PSNR and LPIPS. However, when the gradient is augmented with a defense mechanism, both IG and GI-GIP have considerably worse performance with MSE
close to 0.1. By comparison, LTI outperforms both baselines significantly and consistently across all three defense mechanisms. For example, under gradient perturbation with σ = 0.1, which prior work believed is sufficient for preventing gradient inversion attacks (Zhu et al., 2019; Jeon et al., 2021), MSE can be as low as 0.012 for LTI. Our result therefore provides considerable additional insight for the level of empirical privacy achieved by DP-SGD (Abadi et al., 2016), and suggests that the theoretical privacy leakage as predicted by DP ϵ may be tighter than previously thought.
Qualitative evaluation. Figure 2 shows 4 random CIFAR10 test samples and their reconstructions under different defense mechanisms. Without any defense in place, all three methods recover a considerable amount of semantic information about the object of interest, with both GI-GIP and LTI faithfully reconstructing the training sample. Under the sign compression defense, IG completely fails to reconstruct all 4 samples, while GI-GIP only successfully reconstructs the second image. In contrast, LTI is able to recover the semantic information in all 4 samples. Results for gradient pruning and gradient perturbation yield similar conclusions. Additional samples are given in the appendix.
4.1.2 ABLATION STUDIES
Since LTI learns to invert gradients using the auxiliary dataset, its performance depends on the quantity and quality of data available to the adversary. We perform ablation studies to better understand this dependence by changing the auxiliary dataset size and its distribution.
Varying the auxiliary dataset size. We randomly subsample the CIFAR10 training set to construct auxiliary datasets of size {500, 5000, 15000, 25000, 35000, 45000, 50000} and evaluate the performance of LTI under various defenses. Figure 3(a) plots reconstruction MSE as a function of the auxiliary dataset size, which is monotonically decreasing as expected. Moreover, with just 5,000 samples for training the inversion model (second point in each curve), the performance is nearly as good as when training using the full CIFAR10 training set. Notably, even with an auxiliary dataset as small as 500 samples, reconstruction MSE is still lower than that of IG and GI-GIP in Table 1. Corresponding figures for PSNR and LPIPS in the appendix show similar findings.
Varying the auxiliary data distribution. Although access to a large set of in-distribution data may not be available in practice, it is plausible that the adversary can collect out-of-distribution samples for the auxiliary dataset. This is beneficial for the adversary since a model that learns how to invert out-of-distribution samples given their gradients may transfer to in-distribution data as well. To simulate this scenario, we divide CIFAR10 into two halves with disjoint classes, and construct the auxiliary dataset by combining a β fraction of samples from the first half and a 1 − β fraction of samples from the second half for β ∈ {0, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 1}. The target model fw is trained only on samples from the first half, and hence the auxiliary set has the exact same distribution as the target model’s data when β = 1 and only has out-of-distribution data when β = 0.
Figure 3(b) shows reconstruction MSE as a function of β; corresponding figures for PSNR and LPIPS are given in the appendix. We make the following observations:
1. Even if the auxiliary dataset only contains 250 in-distribution samples (β = 0.01; second point in each curve), MSE of the inversion model is still lower than that of the best baseline in Table 1. For example, with the sign compression defense, LTI attains an MSE of ≤ 0.02, which is much lower than the MSE of 0.116 for IG and 0.091 for GI-GIP.
2. When the auxiliary dataset contains only out-of-distribution data (β = 0), the inversion model has very high reconstruction MSE, which suggests that methods for improving out-of-distribution generalization may be necessary for further improvement.
4.2 EVALUATION ON LANGUAGE TASK
For evaluating LTI on a language task, we experiment with causal language model training1 for next-token prediction. The language model fw is a three-layer transformer (Vaswani et al., 2017) with a frozen token embedding layer. This is a common technique for language model fine-tuning (Sun et al., 2019), which also has privacy benefits since direct privacy leakage from the gradient magnitude of the token embedding layer can be prevented (Fowl et al., 2022; Gupta et al., 2022). As a result, the trainable model contains about 1.1 × 10^6 parameters. We train the language model on WikiText (Merity et al., 2016), where each training sample is limited to L = 16 tokens and the language model is trained to predict the next token x_l given x_{:l−1} for l = 1, . . . , L using the cross-entropy loss.
Baseline. We compare LTI with TAG (Deng et al., 2021)—the state-of-the-art language model gradient inversion attack without utilizing the token embedding layer gradient2. The objective function for TAG is a slight modification of Equation 1 that uses both the ℓ2 and ℓ1 distance between the observed gradient and the gradient of dummy data. We also modify TAG slightly to adapt it to different defenses; see appendix for details.
Inversion model training. We follow the setup below for training the gradient inversion model gθ.
• Auxiliary dataset. We use ~1.8 × 10^5 samples from the train split of Wikitext as the auxiliary dataset, and 1,000 samples from the test split for evaluating the attack. In addition, we introduce a weaker variant of our attack that only assumes knowledge of the marginal token distribution for the language model training data. Instead of using the WikiText train split as auxiliary data, we sample random tokens according to the marginal token distribution to generate pseudo-data for training the inversion model. We show that this variant, which we denote LTI-P, can even outperform LTI with in-distribution auxiliary data due to access to infinite training data.
1 We follow the task setup and code in https://github.com/JonasGeiping/breaching
2 We do not compare against a more recent attack by Gupta et al. (2022) since it crucially depends on access to the token embedding layer gradient.
• Inversion model architecture. We train a two-layer MLP with ReLU activation and first hidden-layer size 600 and second hidden-layer size 1,000. The inversion model outputs L probability vectors, each with size equal to the vocabulary size (~50,000), and we train it using the cross-entropy loss to predict the L tokens given the target model gradient. We use feature hashing (Weinberger et al., 2009) to reduce the target model gradient to 10% of its original dimensions as input to the inversion model.
• Training details. We use Adam (Kingma & Ba, 2014) to train the inversion model over 20 epochs with batch size 64. Learning rates are selected separately for each defense from {10^−3, 10^−4, 10^−5}.
• Computation cost. Our experiments are conducted using NVIDIA GeForce RTX 3090 GPUs and each training run takes about 3 hours.
Evaluation methodology. We evaluate LTI and the TAG baseline on 1, 000 samples from the WikiText test set. To measure the quality of reconstructed text, we use four metrics:
• Accuracy (%) measures the average token-wise zero-one accuracy. Higher is better.
• Rouge-1 (%), Rouge-2 (%) and Rouge-L (%) measure the overlap of unigrams, bigrams, and the length of the longest common subsequence between the ground truth and the reconstructed text. Higher is better.
Results. Table 2 shows quantitative comparison between LTI (and its variant LTI-P) and TAG against various defenses. The overall trend is remarkably consistent: LTI and LTI-P outperform TAG in all four metrics for all defense settings, with LTI-P achieving state-of-the-art recovery accuracy by far. This result suggests that knowledge of the marginal token distribution encodes enough data prior for LTI-P to train the inversion model, and having access to infinite training data allows it to better generalize to the test set compared to LTI. In practice, it is very plausible that the marginal token distribution is known to the adversary, and hence LTI-P serves as a surprisingly simple and effective baseline for gradient inversion in NLP.
Figure 4 shows 3 random test samples from WikiText and their reconstructions using LTI-P and TAG, with tokens that are correctly reconstructed highlighted in blue. Without any defense, both TAG and LTI-P yield reasonably accurate reconstructions, with LTI-P faithfully reconstructing all but 1-2 tokens. With the sign compression defense applied, TAG fails to recover any token correctly, whereas LTI-P can faithfully recover almost half of the tokens in each sample. Results for gradient pruning and gradient perturbation yield similar conclusions, with TAG recovering a larger but still relatively insignificant set of tokens. Additional samples are given in the appendix.
5 CONCLUSION AND FUTURE WORK
We demonstrated the effectiveness of LTI—a simple learning-based gradient inversion attack—under realistic federated learning settings. For both vision and language tasks, LTI can match or exceed the performance of state-of-the-art optimization-based methods when no defense is applied, and significantly outperform all prior works under defenses based on gradient perturbation and gradient
compression. Given its simplicity and versatility, we advocate the use of LTI both as a strong baseline for future research as well as a diagnostic tool for evaluating privacy leakage in FL.
Negative societal impact. The concept of a gradient inversion attack can lead to negative consequences if used inappropriately. Our work showed that if FL is deployed without consideration for gradient inversion attacks, an adversary can leverage its vulnerabilities to compromise the data privacy of clients even under strong empirical defenses. However, we strongly emphasize that our work should not be interpreted as a tool for adversaries, but rather serve to inform the community about the risks of data privacy breach in FL and promote future research into safe practices.
Limitations. This paper serves as preliminary work towards understanding the effectiveness of learning-based gradient inversion attacks, and our method can be further improved along several directions.
1. For large models, our current approach is to hash the gradients into a lower-dimensional space to reduce memory cost. It may be possible to leverage the model’s architecture to design more effective dimensionality reduction techniques to further scale up the method.
2. Currently we only focus on the setting with batch size 1, which precludes the use of secure aggregation (Bonawitz et al., 2016)—a common technique in FL for amplifying privacy by aggregating the gradients from multiple clients before revealing them to the server. For LTI, the complexity of the MLP would increase with the batch size, which makes learning harder. More advanced model architectures and loss designs might help with the large-batch case.
3. LTI in its current form does not leverage additional data priors such as the smoothness prior for images and the fluency prior for text. We can readily incorporate these priors by modifying the inversion model’s loss function with total variation (for image data) or perplexity under a trained language model (for text data), which may further improve the performance of LTI.
A SUPPLEMENTARY MATERIAL FOR SECTION 4
A.1 MODIFICATIONS FOR BASELINE METHODS
Vision baselines. Both IG and GI-GIP use cosine distance instead of ℓ2 distance in Equation 1 for optimizing the dummy data. For the sign compression defense, this loss function does not optimize the correct objective since the dummy data’s gradient is not a vector with ±1 entries but rather a real-valued vector with the same sign. We replace cosine distance by the loss $\sum_{i=1}^{m} \big( \ell_{\mathrm{sign}}^{i} \big)^2$, where

$$\ell_{\mathrm{sign}}^{i} = \max\left\{ -\nabla_{w_i}\ell(f_w(\tilde{x}), \tilde{y}) \cdot \mathrm{Sign}\big( \nabla_{w_i}\ell(f_w(x), y) \big),\ 0 \right\}. \qquad (3)$$
One sanity check for this loss is that when ∇wiℓ(fw(x̃), ỹ) has the same sign as that of ∇wiℓ(fw(x), y), the minimum loss value of 0 is achieved. For the gradient pruning defense, optimizing the cosine distance between the dummy data gradient and the pruned ground truth gradient will force too many gradient values to 0, which is the incorrect value for the full ground truth gradient. Therefore we only compute cosine distance over the non-zero dimensions of pruned gradient.
Language baselines. For TAG, we find that the loss function also needs to be modified slightly to accommodate the sign compression and gradient pruning defenses:
• Sign compression. Similar to the vision baselines, the ℓ2 and ℓ1 distances between the dummy data gradient and the ground truth gradient sign do not optimize the correct objective. We replace $\|\cdot\|_2^2$ and $\|\cdot\|_1$ by $\sum_{i=1}^{m} (\ell_{\mathrm{sign}}^{i})^2$ and $\sum_{i=1}^{m} \ell_{\mathrm{sign}}^{i}$, respectively, where $\ell_{\mathrm{sign}}^{i}$ is defined in Equation 3.
• Gradient pruning. We make the same modification to TAG as in the vision baselines.
A.2 AUXILIARY DATASET ABLATION STUDIES
In section 4.1.2 we showed reconstruction MSE for LTI as a function of the auxiliary dataset size and the shift factor β. For completeness, we show the corresponding PSNR and LPIPS curves in Figure 6. Similar to Figure 3, when reducing the auxiliary dataset size (e.g., from 50, 000 to 5, 000) or reducing the proportion of in-distribution data (e.g., from β = 1 to β = 0.1), the performance of LTI does not worsen significantly.
A.3 ADDITIONAL SAMPLES
Figure 7 and Figure 8 show additional samples and their reconstructions under various defense mechanisms. The result is consistent with Figure 2 and Figure 4, where LTI shows consistently better reconstruction quality compared to baselines.
| 1. What is the focus and contribution of the paper on federated learning?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its effectiveness and limitations?
3. Do you have any concerns about the applicability of the method in real-world scenarios?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the training process and evaluation of the attacks and defenses? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new gradient inversion attack against federated learning. By assuming that the server has access to some auxiliary data, the main idea of the paper is to learn a gradient inversion model from the auxiliary data. The paper shows that compared with previous optimization-based approaches such as Inverting Gradients (IG), Gradient Inversion with Generative Image Prior (GI-GIP), and TAG, the proposed learning to invert (LTI) method obtains better reconstruction accuracy on both an image task (CIFAR-10) and a language task (Wikitext), under both gradient perturbation and compression based defenses.
Strengths And Weaknesses
Strengths
The idea of learning an inversion model using auxiliary data seems a simple yet effective approach.
Ablation studies show that even when the auxiliary data only consists of 500 images sampled from CIFAR-10 or contains 250 in-distribution samples, LTI still outperforms IG and GI-GIP, which is impressive.
Weaknesses
An important limitation of the proposed approach is that it only works for gradients computed from a single data sample. For a real FL system, even without using secure aggregation, a local update from a client is obtained by taking the gradient of a batch of data samples or through multiple gradient descent steps. Thus, the proposed approach is not sophisticated enough to be applied to real FL systems.
Another limitation is that the proposed attack method is only tested against gradient perturbation and gradient compression, which were not originally designed for countering gradient inversion. It would be useful to understand how the proposed method performs against more recent defenses, such as Soteria [1], that target gradient inversion.
[1] Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, and Yiran Chen. Soteria: Provable defense against privacy leakage in federated learning from representation perspective. CVPR 2021.
Clarity, Quality, Novelty And Reproducibility
Multiple global models are generated during the FL training process. An important detail that is unclear is which of them are used to train the gradient inversion module and which of them are used to evaluate the attacks and defenses. It is unlikely that the same model can be used for both purposes in practice, given the amount of time needed to train the gradient inversion module. Thus, the question is whether a module trained using early FL models can be effective for the inversion task against new FL models. A related question is whether it is easier or harder to perform inversion attacks at the earlier stage of FL training.
The idea of training a gradient-inversion module using a small amount of auxiliary data seems novel. |
ICLR | Title
Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection
Abstract
Positive-unlabeled (PU) learning aims at learning a binary classifier from only positive and unlabeled training data. Recent approaches addressed this problem via cost-sensitive learning by developing unbiased loss functions, and their performance was later improved by iterative pseudo-labeling solutions. However, such two-step procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To mitigate this issue we propose PUUPL, a new loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty in pseudo-labeling. Using an ensemble of neural networks and assigning pseudo-labels based on high-confidence predictions improves the reliability of pseudo-labels, increasing the predictive performance of our method and leading to new state-of-the-art results in PU learning. With extensive experiments, we show the effectiveness of our method over different datasets, modalities, and learning tasks, as well as improved robustness to misspecification of hyperparameters and biased positive data. The source code of the method and all the experiments are available in the supplementary material.
1 INTRODUCTION
Many real-world applications involve positive and unlabeled (PU) datasets in which only some of the data is labeled positive while the majority is unlabeled and contains both positives and negatives. PU learning aims to learn a binary classifier in this challenging setting without any labeled negative examples. Learning from PU data can reduce deployment costs in many deep learning applications that otherwise require annotations from experts such as medical image diagnosis (Armenian and Lilienfeld, 1974) and protein function prediction (Gligorijević et al., 2021), and it can even enable applications in settings where the measurement technology itself can not detect negative examples (Purcell et al., 2019).
Some recent approaches such as unbiased PU (Du Plessis et al., 2014, uPU) and non-negative PU (Kiryo et al., 2017, nnPU) formulate this problem as cost-sensitive learning. Others approach PU learning as a two-step procedure first identifying and labeling some reliable negative examples, and then re-training the model based on this newly constructed labeled dataset (Bekker and Davis, 2020). These approaches show similarities with pseudo-labeling in semi-supervised classification settings (Lee, 2013).
Such pseudo-labeling techniques are however especially vulnerable to incorrectly assigned labels of the selected examples as these errors will propagate and magnify in the retrained model, resulting in a negative feedback loop. Worse yet, since the true labels are unavailable in PU learning, this situation is hard to detect by any metrics computed on the training set. This erroneous selection of unreliable pseudo-labels occurs when wrong model predictions are associated with excessive model confidence. Such poor model calibration is accompanied by a distortion of the signal for the pseudo label selection (Van Engelen and Hoos, 2020).
In recent literature on pseudo-labeling, this problem is recognized and successfully addressed by explicitly estimating the prediction uncertainty (Abdar et al., 2021; Rizve et al., 2021; Arazo et al., 2020). While this is the case for semi-supervised classification, there does not yet exist a method that explores the use of uncertainty quantification for pseudo-labeling in a PU learning context.
Contributions: Motivated by this, we propose a novel, uncertainty-aware pseudo-labeling framework for PU learning that uses established uncertainty quantification techniques to identify reliable examples to pseudo-label (Fig. 1). In particular, our contributions are: (1) We introduce PUUPL (Positive-Unlabeled, Uncertainty-Aware Pseudo-Labeling), a simple uncertainty-aware pseudolabeling framework for PU learning. (2) PUUPL can use any loss function for PU learning, improving model performance while being robust to the specific data biases that the respective loss considers. (3) We evaluate our methods on a wide range of benchmarks and PU datasets, achieving state-of-the-art performance in PU learning. (4) Our extensive ablation studies provide new insights into uncertainty-aware pseudo-labeling for PU learning. Further, they show that our method is robust to the choices of hyperparameters, with 1% or less variability in test accuracy among different choices as well as distribution shifts between labeled positives in the train and test datasets. To the best of our knowledge, PUUPL is the first framework for PU learning which leverages uncertainty information during pseudo-labeling.
2 RELATED WORK
PU Learning PUL was introduced as a variant of binary classification (Liu et al., 2003) and is related to one-class learning (Ruff et al., 2018; Li et al., 2010), multi-positive learning (Xu et al., 2017), multi-task learning (Kaji et al., 2018), and semi-supervised learning (Chapelle et al., 2009). There exist three main research branches for PUL: two-step techniques, class prior incorporation, and biased PUL (Bekker and Davis, 2020). In this work, we combine Pseudo-Labeling which has similarities to two-step techniques, with biased PUL, also coined as reweighting methods, and refer to Bekker and Davis (2020) for a comprehensive overview of the field. In this context, Du Plessis et al. (2014) introduced the unbiased risk estimator uPU. Kiryo et al. (2017) showed this loss function is prone to overfitting in deep learning contexts as it lacks a lower bound and proposed the non-negative risk estimator nnPU as a remedy. Follow-up work on loss functions for PUL has focused on robustness w.r.t biases in the sampling process such as PUSB (Kato et al., 2019), PUbN (Hsieh et al., 2019) or PULNS (Luo et al., 2021).
Uncertainty-aware Pseudo-Labeling Pseudo-labeling follows the rationale that the model leverages its own predictions on unlabeled data as pseudo training targets to enable iterative semisupervised model training. The first such approach for deep learning was introduced by Lee (2013), simply selecting the class with the highest predicted probability as a pseudo label. One weakness of pseudo-labeling is that erroneously selected pseudo-labels can amplify for training, potentially leading to model degradation. This is grounded in poor model calibration distorting the signal for the pseudo label selection (Van Engelen and Hoos, 2020). Iscen et al. (2019) try to mitigate this issue using confidence and class weights. Shi et al. (2018) use confident scores based on the geometric neighborhood of the unlabeled samples while Arazo et al. (2020) effectively tackle this confirmation bias using Mixup (Zhang et al., 2017), Label Noise (Tanaka et al., 2018), and Entropy Regularization (Grandvalet et al., 2005). Rizve et al. (2021) introduced a pseudo-labeling framework using a weighting scheme for class balancing and MC dropout (Gal and Ghahramani, 2016) for calibration, while Beluch et al. (2018) found deep ensembles (Lakshminarayanan et al., 2017a) to yield the best model calibration in an active learning context, especially in low-label regimes. The commonality of these works is the explicit consideration of model uncertainty to improve pseudo-label selection, which motivates us to apply this in the context of PU learning.
Pseudo-Labeling for PU Learning Two-step approaches in PU learning first identify negative samples from the unlabeled dataset, and then train a binary classification model on the original dataset augmented with the newly identified negatives (Bekker and Davis, 2020). These approaches share similarities with pseudo-labeling but lack an iterative feedback loop after the completion of the second step.
A first attempt to combine pseudo-labeling with PU learning was made with Self-PU (Chen et al., 2020b), where self-paced learning, a confidence-weighting scheme based on the model predictions and a teacher-student distillation approach are combined. Via this complex training scheme, Self-PU was shown to marginally outperform recent baselines. With PUUPL, we propose an alternative PL strategy for PU learning that performs better in a simpler and more principled way using implicitly well-calibrated models to improve the pseudo-label selection.
Uncertainty-aware pseudo-labeling for PU learning To the best of our knowledge, we are the first to introduce an uncertainty-aware pseudo-labeling paradigm to PU learning. Although our method shares the same motivation as that from Rizve et al. (2021) for semi-supervised classification, we differ in several important aspects: (1) we specifically target PU data with a PU loss, (2) we quantify uncertainty with an ensemble instead of Monte Carlo dropout, (3) we use epistemic uncertainty instead of the predicted class probabilities for the selection, (4) we do not use temperature scaling, and (5) we use soft labels.
3 METHOD
We propose PUUPL (Positive Unlabeled, Uncertainty-aware Pseudo-Labeling), an iterative pseudo-labeling procedure to progressively select and label the most confident examples from unlabeled data. The pseudo-code for PUUPL is shown as Algorithm 1. Our method separates the training set Xtr into the sets P, U, and L, which contain the initial positives, the currently unlabeled, and the pseudo-labeled samples respectively. The set L is initially empty. At each pseudo-labeling iteration, we first train our model using all samples in P, U, and L until some convergence condition is met (Section 3.2). Then, samples in U are predicted and ranked w.r.t. their predictive uncertainty (Section 3.3), and the samples with the most confident predictions are assigned their predicted label and moved into the set L (Section 3.4). Similarly, samples in L are also predicted and the most uncertain samples are moved back to the unlabeled set U (Section 3.5). Next, the model is re-initialized to the same initial weights and a new pseudo-labeling iteration starts.
In the following, we first describe the notation used in this paper and then explain in detail the training procedure of PUUPL.
3.1 NOTATION
Consider input samples X with label y and superscripts ·tr, ·va and ·te for training, validation, and test data respectively. The initial training labels ytr are set to one for all samples in P and zero for all others in U . We group the indices of original positives, unlabeled, and pseudo-labeled samples
Algorithm 1 Pseudocode for the PUUPL Training Procedure
Input:
• Train, validation and test data Xtr, ytr, Xva, yva, Xte, yte
• Number K of networks in the ensemble (suggested K = 2)
• Maximum number T of pseudo-labels to assign at each round (suggested T = 1000)
• Maximum uncertainty threshold tl to assign pseudo-labels (suggested tl = 0.05)
• Minimum uncertainty threshold tu to remove pseudo-labels (suggested tu = 0.35)
Output: Model parameters θ∗
 1: P ← indices of positive samples in Xtr
 2: U ← indices of unlabeled samples in Xtr
 3: L ← ∅                                  ▷ Indices of pseudo-labeled samples
 4: θ0 ← Random weight initialization
 5: while not converged do
 6:     Initialize model weights to θ0     ▷ Training
 7:     Train an ensemble of K networks on Xtr, ytr using the loss in Eq. 1
 8:     Validate on Xva, yva and update weights θ∗ if accuracy improved
 9:     f̂ ← ensemble predictions for Xtr   ▷ Uncertainty
10:     Compute epistemic uncertainty ûe with f̂ via Eq. 6
11:     Lnew ← balanced set of examples to pseudo-label via Eq. 7 using ûeU, T and tl   ▷ Pseudo-labeling
12:     Unew ← examples to pseudo-unlabel via Eq. 10 using ûeL and tu
13:     L ← L ∪ Lnew \ Unew                ▷ Update indices
14:     U ← U \ Lnew ∪ Unew
15:     yLnew ← p̂Lnew                      ▷ Update pseudo-labels
16:     yUnew ← 0
17: end while
18: Restore the weights θ∗ that scored highest on the validation set
19: Compute accuracy on the held-out test set Xte, yte
20: return θ∗
in Xtr into the sets P, U, and L respectively. Our proposed model is an ensemble of K deep neural networks whose random initial weights are collectively denoted as θ0. The predictions of the k-th network for sample i are indicated with p̂ik = σ(f̂ik), with σ(·) the logistic function and f̂ik the predicted logits. The logits and predictions for a sample averaged across the networks in the ensemble are denoted by f̂i and p̂i respectively. We subscript data and predictions with i to index individual samples, and use an index set in the subscript to index all samples in the set (e.g., XtrU = {Xtri | i ∈ U} denotes the features of all unlabeled samples). We denote the total, epistemic, and aleatoric uncertainty of sample i as ûti, ûei, and ûai, respectively.
3.2 LOSS FUNCTION
We train our proposed model with a loss function L that is a convex combination of a loss LPU for the samples in the positive and unlabeled set (P ∪ U) and a loss LL for the samples in the pseudo-labeled set (L):
L = λ · LL + (1 − λ) · LPU    (1)
with λ ∈ (0, 1). The loss LL is the binary cross-entropy computed w.r.t. the assigned pseudo-labels y. Our model is agnostic to the specific PU loss LPU used. This allows our method to be easily adapted to different scenarios for which a PU loss was proposed and to improve over its performance, for example when coping with a selection bias in the positive examples (Kato et al., 2019) or the availability of a biased negative set (Hsieh et al., 2019). For the standard setting of PU learning, we use the non-negative correction nnPU of the PU loss (Kiryo et al., 2017):
LPU = π · ℓ(P, 1) + max{0, ℓ(U, −1) − π · ℓ(P, −1)}    (2)
with π the prior probability of a sample being positive, which we assume known and which can be estimated from PU data (du Plessis et al., 2016), and ℓ(S, y) the expected sigmoid loss of the samples in the set S with label y:
ℓ(S, y) = (1/|S|) ∑i∈S 1 / (1 + exp(y · p̂i))    (3)
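For illustration, the combined objective of Eq. 1 with the nnPU risk of Eq. 2 can be implemented along the following lines. This is a minimal PyTorch-style sketch under our own naming, not a reference implementation: the sigmoid loss is applied here to the raw model outputs, whereas Eq. 3 is written in terms of the averaged predictions p̂i, and the default λ = 0.15 merely mirrors the value reported later.

import torch
import torch.nn.functional as F

def sigmoid_loss(scores, y):
    # Eq. 3: mean sigmoid loss of a set of samples, with label y in {+1, -1}
    return torch.sigmoid(-y * scores).mean()

def nnpu_loss(scores_p, scores_u, prior):
    # Eq. 2: non-negative PU risk estimator (Kiryo et al., 2017)
    positive_risk = prior * sigmoid_loss(scores_p, +1.0)
    negative_risk = sigmoid_loss(scores_u, -1.0) - prior * sigmoid_loss(scores_p, -1.0)
    return positive_risk + torch.clamp(negative_risk, min=0.0)

def puupl_loss(scores_p, scores_u, scores_l, soft_targets_l, prior, lam=0.15):
    # Eq. 1: convex combination of the PU loss (on P and U) and the
    # binary cross-entropy on the current soft pseudo-labels (on L)
    l_pu = nnpu_loss(scores_p, scores_u, prior)
    if scores_l.numel() == 0:  # first iteration: the pseudo-labeled set L is still empty
        return l_pu
    l_l = F.binary_cross_entropy_with_logits(scores_l, soft_targets_l)
    return lam * l_l + (1.0 - lam) * l_pu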
3.3 MODEL UNCERTAINTY
To quantify the predictive uncertainty, we utilize a deep ensemble with K networks with the same architecture, each trained on the full training dataset (Lakshminarayanan et al., 2017b). Given the predictions p̂i1, . . . , p̂iK for a sample xi, we associate three types of uncertainties to xi’s predictions (Hüllermeier and Waegeman, 2021): the aleatoric uncertainty as the mean of the entropy of the predictions (Eq. 4), the total uncertainty as the entropy of the mean prediction (Eq. 5), and the epistemic uncertainty formulated as the difference between the two (Eq. 6).
ûai = − (1/K) ∑k=1…K [p̂ik log p̂ik + (1 − p̂ik) log(1 − p̂ik)]    (4)
ûti = −p̂i log p̂i − (1 − p̂i) log(1 − p̂i)    (5)
ûei = ûti − ûai    (6)
Epistemic uncertainty corresponds to the mutual information between the parameters of the model and the true label of the sample. Low epistemic uncertainty thus means that the model parameters would not change significantly if trained on the true label, suggesting that the prediction is indeed correct. Using such a prediction as a target in the cross-entropy loss would in turn provide a stronger, more explicit learning signal to the model, so that a correctly pseudo-labeled example provides a larger decrease in risk compared to using the same example without any label within the positive-unlabeled loss.
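The decomposition of Eqs. 4-6 for a binary deep ensemble reduces to a few lines of NumPy; the sketch below uses our own function name and array layout and is meant only to make the computation explicit.

import numpy as np

def ensemble_uncertainties(probs):
    # probs: array of shape (n_samples, K) with each member's predicted
    # positive-class probability p̂ik for every sample
    eps = 1e-12  # avoid log(0)
    entropy = lambda p: -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    aleatoric = entropy(probs).mean(axis=1)   # Eq. 4: mean entropy of the members
    total = entropy(probs.mean(axis=1))       # Eq. 5: entropy of the mean prediction
    epistemic = total - aleatoric             # Eq. 6: their difference (mutual information)
    return aleatoric, total, epistemic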
3.4 PSEUDO-LABELING
The estimated epistemic uncertainty (Eq. 6) is used to rank the samples of the unlabeled set U and to select reliable samples for pseudo-labeling. Let ρ(i) denote the uncertainty rank of sample i; the set Lnew of newly pseudo-labeled samples is then formed by taking the T samples with the lowest uncertainty from U, ensuring that their uncertainty is below the threshold tl:
Lnew = {i ∈ U | ρ(i) ≤ T ∧ ûei ≤ tl}    (7)
Previous works have shown that balancing the pseudo-label selection between the two classes, i.e., ensuring that the ratio of newly labeled positives and negatives is close to a given target ratio r, is beneficial (Rizve et al., 2021). In this case, the set Lnew should be partitioned according to the model’s predictions into a set Lnew+ of predicted positives and a set Lnew− of predicted negatives, and the most uncertain samples in the larger set should be discarded to reach the desired ratio r, which we fix to 1. We then assign soft pseudo-labels, i.e., the average prediction in the open interval (0, 1), to these samples:
yi = p̂i ∀i ∈ Lnew− ∪ Lnew+ (8)
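The selection of Eq. 7, the class balancing with target ratio r = 1, and the soft-label assignment of Eq. 8 can be sketched as follows; the thresholds mirror the suggested defaults and all names are illustrative rather than taken from the released code.

import numpy as np

def select_pseudo_labels(mean_probs_u, epistemic_u, T=1000, t_l=0.05):
    # return {index within U: soft pseudo-label} for a balanced, confident subset
    order = np.argsort(epistemic_u)                               # rank by epistemic uncertainty
    candidates = [i for i in order[:T] if epistemic_u[i] <= t_l]  # Eq. 7
    pos = [i for i in candidates if mean_probs_u[i] >= 0.5]
    neg = [i for i in candidates if mean_probs_u[i] < 0.5]
    n = min(len(pos), len(neg))                                   # target ratio r = 1
    selected = pos[:n] + neg[:n]   # drop the most uncertain samples of the larger class
    return {int(i): float(mean_probs_u[i]) for i in selected}     # Eq. 8: soft labels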
3.5 PSEUDO-UNLABELING
Similar to the way that low uncertainty on an unlabeled example indicates that the prediction can be trusted, high uncertainty on a pseudo-labeled example indicates that the assigned pseudo-label might not be correct after all. To avoid training on such possibly incorrect pseudo-labels, we move the pseudo-labeled examples with uncertainty above a threshold tu back into the unlabeled set:
Unew = {i ∈ L | ûei ≥ tu}    (9)
yi = 0  ∀i ∈ Unew    (10)
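Correspondingly, the pseudo-unlabeling step of Eqs. 9-10 is a single thresholding pass over the pseudo-labeled set; the sketch below uses the suggested default tu and illustrative names.

def pseudo_unlabel(l_indices, epistemic_l, labels, t_u=0.35):
    # move pseudo-labeled samples whose epistemic uncertainty exceeds t_u back to U
    moved = [idx for idx, u in zip(l_indices, epistemic_l) if u >= t_u]  # Eq. 9
    for idx in moved:
        labels[idx] = 0.0  # Eq. 10: reset the training target of un-labeled samples
    return moved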
4 EXPERIMENTS
4.1 EXPERIMENTAL PROTOCOL
To empirically compare our proposed framework to existing state-of-the-art losses and models, we followed standard protocols for PU learning (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019).
Datasets: We evaluated our method in the standard setting of the MNIST (Deng, 2012) and CIFAR-10 (Krizhevsky et al., 2009) datasets, as well as the Fashion MNIST (F-MNIST) (Xiao et al., 2017), STL-10 (Coates et al., 2011) and IMDb (Maas et al., 2011) datasets to show the applicability to different data modalities. Similar to previous studies (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019), positives were defined as odd digits in MNIST and vehicles in CIFAR-10 and STL-10, and we used trousers, dress, sandals, sneaker, and ankle boots for F-MNIST and positive reviews for IMDb. For STL-10 we used all available labeled and unlabeled data and the official ten cross-validation folds. For all other datasets, we reserved a validation set of 5,000 samples and used all other samples for training with 1,000 randomly chosen labeled positives, as is common practice, and evaluated on the canonical test set of each dataset. More details are provided in Appendix B.
Network architectures: To ensure a fair comparison with other works in PU learning (Chen et al., 2020b; Kiryo et al., 2017), we used the same architectures on the same datasets, namely a 13-layer convolutional neural network for the experiments on CIFAR-10 (Table A.3) and an MLP with four fully connected hidden layers of 300 neurons each and ReLU activation for MNIST and F-MNIST. For IMDb we used a bidirectional LSTM network with an MLP head whose architecture was optimized as part of the hyperparameter search.
Training: We trained all models with the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.999, and an exponential learning rate decay with γ = 0.99. We further used the nnPU loss (Kiryo et al., 2017) as LPU (Eq. 1) unless otherwise stated. As is common in the pseudo-labeling literature (Chen et al., 2020b; Rizve et al., 2021; Kato et al., 2019; Tanaka et al., 2018; Hu et al., 2021), we assume that a validation set with both positive and negative labels is available and use this validation set for early stopping, i.e., we stop the pseudo-labeling loop when the model’s accuracy on this set has stopped improving, and use the parameters that achieved the highest validation accuracy to compute the test performance. A later experiment shows how this requirement can be relaxed in practice.
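Assuming a PyTorch implementation, this optimisation setup corresponds to a configuration along the following lines; the learning rate is a placeholder, since its value was tuned per dataset with Hyperband.

import torch

def make_optimizer(model, lr=1e-4):  # lr is an illustrative placeholder
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
    return optimizer, scheduler  # call scheduler.step() once per epoch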
Hyperparameter tuning: We used the Hyperband algorithm (Li et al., 2017) to optimize all hyperparameters on the CIFAR-10 dataset with η = 3 and S = 4, using the validation accuracy as the criterion to be optimized. The configuration that achieved the highest validation accuracy was
then used as a basis for the ablation studies and fine-tuned on the remaining datasets to show that the pseudo-labeling hyperparameters (Table A.1) do not require tuning when transferred to other datasets and data modalities. Specifically, on the other datasets we only tuned hyperparameters related to network training, such as batch size, learning rate, weight decay, number of training epochs, dropout probability, and other details of the network architecture, by running only the first bracket of Hyperband with η = 3 and S = 3.
Evaluation: the best configuration found by Hyperband was trained five times with different random training/validation splits and evaluated on the test set to produce the final results, except for STL-10 where we used the official ten cross-validation folds.
4.2 RESULTS
The best performance achieved by our method is shown in Table 1 and compared with Self-PU (Chen et al., 2020b), DAN (Liu et al., 2019) and PAN (Hu et al., 2021). PUUPL was always able to improve over the baseline nnPU loss, with a larger gap for more difficult datasets such as CIFAR-10 (+0.98%) and IMDb (+1.89%), as well as over Self-PU (Chen et al., 2020b) and DAN (Liu et al., 2019), setting a new state-of-the-art for PU learning. Moreover, PUUPL is naturally very well calibrated despite the absence of explicit calibration on labeled data (Fig. 2), making its predictions inherently reliable. The best pseudo-labeling hyperparameters constitute the defaults we suggested in Algorithm 1 and are K = 2, T = 1000, tl = 0.05 and tu = 0.35. Note that the baseline nnPU scores reported in Table 1 were also obtained by training an ensemble of two networks with the nnPU loss, which possibly explains the discrepancy observed with Self-PU. The best network architecture for IMDb is shown in Table A.2.
4.3 ABLATION STUDIES
We performed ablation studies on the CIFAR-10 dataset by changing one parameter at a time of the best configuration found by Hyperband, training and evaluating with five different splits and reporting the test accuracy corresponding to the best validation score for each run. To limit the computational resources needed, we used at most 15 pseudo-labeling iterations.
Weights initialization: We confirmed the observation that it is beneficial to re-initialize the weights after each pseudo-labeling step (Arazo et al., 2020), with slightly better performance (+0.052%) achieved when the weights are re-initialized to the same values before every pseudo-labeling iteration (Fig. 3a). We believe this encourages the model to be consistent across pseudo-labeling rounds.
Pseudo-label assignment: Soft pseudo-labels were preferred over hard ones (+0.75%). We found that our model was very well calibrated, with ECEs as low as 0.05 on the labeled validation data (Fig. 2), indicating that the soft pseudo-labels it estimated were reliable training targets and that post hoc calibration was not necessary. Contrary to expectation, however, re-assigning all pseudo-labels at every iteration harmed performance (−0.12%); instead, pseudo-labels should be kept fixed after being assigned for the first time. A possible explanation is that fixed pseudo-labels prevent the model’s predictions from drifting too far away from the initial pseudo-labeling towards an incorrect assignment. It was also beneficial to assign the same number of positive and negative pseudo-labels (Fig. 3b) compared to keeping the same ratio π of positives and negatives found in the whole dataset (−0.20%) or not balancing the selection at all (−0.55%).
Uncertainty: Ranking predictions by aleatoric uncertainty was almost as good as ranking by epistemic uncertainty (−0.08%), while total uncertainty produced moderately worse rankings (−0.37%, Fig. 3c). An ensemble with only two networks achieved the best performance, while larger ensembles performed worse, and Monte Carlo dropout (−0.85%) was better than ensembles of five (−1.00%) and ten networks (−1.58%).
Early stopping: Finally, performing early stopping on the validation PU loss resulted in worse accuracy (−1.12%) compared to using the accuracy on positive-negative labels (Fig. 3d). Although considerable when compared to the impact of other algorithmic choices, such a performance drop indicates that PUUPL can still be used effectively in real-world scenarios with no labeled validation data available.
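For reference, the expected calibration error (ECE) quoted above can be computed on a labeled validation set with a simple binning scheme; the sketch below assumes 10 equal-width confidence bins, which is a common but not explicitly stated choice.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    # binary ECE: weighted average of |accuracy - confidence| over confidence bins
    confidences = np.maximum(probs, 1.0 - probs)      # confidence of the predicted class
    predictions = (probs >= 0.5).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = (predictions[mask] == labels[mask]).mean()
            conf = confidences[mask].mean()
            ece += mask.mean() * abs(acc - conf)
    return ece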
4.4 ROBUSTNESS
Following the same protocol as the ablation studies in Section 4.3, we tested the robustness of our method with respect to misspecifications of the continuous hyperparameters (Fig. 4).
Pseudo-labeling: Our method was fairly robust to the maximum number T of assigned pseudo-labels and the maximum uncertainty threshold tl for the pseudo-labels, with almost constant performance up to T = 1000 and tl = 0.1. The best performance was achieved by the combination of T = 1000 and tl = 0.05, but both of these experiments were performed while disabling the other constraint (i.e., setting T = ∞ when testing tl and vice-versa). Using only a constraint on T resulted in a reduction of −0.11%, while constraining tl alone resulted in a reduction of −1.04%. The results for tu were less conclusive as to the general trend, possibly because values lower than 0.35 require more than the 15 pseudo-labeling iterations we used for the experiment, and values above 0.4 did not show significant differences.
Misspecification of the class prior: The performance of our framework degraded slowly as the prior π moved further from the true value of 0.4, with a performance reduction of less than 2.5% in the range [0.3, 0.6] (Fig. 4a). Furthermore, the performance gap between PUUPL and nnPU widens as π is more severely misspecified. Modern losses for PU learning such as uPU and nnPU rely on a correct specification of the positive class prior π, obtained either from domain knowledge or from a priori estimation, which constitutes a whole research branch in PU learning (Chen et al., 2020a) and is a significant challenge in any practical PU application (Bekker and Davis, 2020; Chen et al., 2020a). We believe that the inclusion of epistemic uncertainty, the usage of soft labels, and the convex combination of two losses enable PUUPL to be considerably more robust to significant misspecification of the class prior π.
Loss combination: The best performing combination had λ = 0.15, with a modest performance reduction until λ = 0.5 (−0.25%, Fig. 4b). Values of 0.05 and below resulted in the same performance reduction of −0.5%, similarly to λ = 0.75, and performance was 1.09% worse at λ = 0.9. Too large a λ might facilitate the emergence of a harmful confirmation bias, but it is nonetheless important to train on the pseudo-labels, too, to avoid losing the information contained therein.
Number of training labeled positives: The performance of our method steadily increased and seemed to plateau at 91.4% between 3,000 and 6,000 labeled positives. The gap between nnPU and PUUPL is largest in the low labeled data region with a 1.44% gap at 250 labels, where we achieved 87.59% accuracy, shrinking to a gap of 0.52% with 3,000 labels, where our performance was 91.44% (Fig. 4c). This supports our intuition about the importance of uncertainty because, as the amount of labeled data decreases, uncertainty becomes more important to detect overfitting and to prevent the model from assigning incorrect pseudo-labels.
Positive bias: The most general assumption of PU learning is that the labeled examples are a biased sample from the positive distribution (Bekker and Davis, 2020). We tested PUUPL in such a biased setting where positives in the training and validation sets were an airplane with 50% chance, an automobile with 30% chance, a ship with 15% chance, and a truck with 5% chance, while in previous experiments the positives were evenly composed of airplanes, automobiles, ships and trucks. The test distribution was unchanged, meaning that test samples are half as likely to be airplanes compared to the training set, and five times more likely to be truck images. We also fixed all hyperparameters to the values identified previously, except for the loss LPU, where we used the nnPUSB loss (Kato et al., 2019) to handle the positive bias. The baseline with the nnPUSB loss performed better than the nnPU loss (+0.26%), but worse than PUUPL with the nnPU loss (−0.39%), highlighting the benefit of our uncertainty-aware approach. The best performance was however achieved with PUUPL on top of the nnPUSB loss (+0.21% compared to nnPU and −2.27% compared to the unbiased setting), showing that PUUPL can leverage the advantages of different PU losses and further improve on them (Table 2).
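The biased positive sampling used in this experiment can be reproduced with a weighted draw over the four vehicle classes; the snippet below is an illustrative sketch of that construction, with the number of draws and the random seed chosen arbitrarily.

import numpy as np

rng = np.random.default_rng(0)
classes = ["airplane", "automobile", "ship", "truck"]
train_bias = [0.50, 0.30, 0.15, 0.05]   # biased labeled positives (train/validation)
labeled_classes = rng.choice(classes, size=1000, p=train_bias)
# the test distribution is left unchanged, i.e. the four classes remain equally likely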
5 CONCLUSIONS
In this paper, we proposed an uncertainty-aware pseudo-labeling framework for PU learning which quantifies the epistemic uncertainty of an ensemble of networks and selects which examples to pseudo-label based on their predictive uncertainty. We demonstrated the benefits of our approach on different data modalities and biased settings, achieving state-of-the-art performance in all our benchmarks. We further conducted extensive ablation studies and investigated the robustness of our approach, showing it to be reliable in settings that are likely to be encountered in the real world, such as a bias in the positive data, the unavailability of labeled negatives as validation data and the misspecification of the class prior π.
Ethics statement: Most of the ethical concerns stem from the specific application and dataset. Here we have shown a certain robustness towards biased positive labels without providing a comprehensive assessment, therefore practitioners should always ensure, insofar as possible, that the obtained
predictions are "fair" (with "fairness" defined appropriately w.r.t. the target application) and do not systematically affect particular subsets of the population of interest.
Reproducibility statement: The source code for the framework is available in the supplementary material.
A NETWORK ARCHITECTURE AND HYPERPARAMETERS
Table A.1 reports the hyperparameters related to pseudo-labeling and their ranges. Table A.2 reports the network architecture used in the IMDb experiments, while Table A.3 reports the network used with CIFAR-10.
B DATASET INFORMATION
Table B.1 reports the number of samples for each split and each dataset. For the image datasets, we subtracted the mean pixel intensity in the training set and divided by the standard deviation. For IMDb we used pre-trained GloVe embeddings of size 200 on a corpus of six billion tokens. | 1. What is the focus of the paper regarding pseudo-labeling techniques in PU learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions or concerns about the algorithm design and its novelty?
4. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposed a method to learn with PU data which quantifies the epistemic uncertainty of an ensemble of networks and selects which examples to pseudo-label based on their predictive uncertainty. The authors propose to use the pseudo-labeling technique based on the uncertainty of the prediction, combined with early stopping.
Review
Trivial motivation:
In Abstract, the authors say that "two-steps procedures are vulnerable to incorrectly estimated pseudo-labels" and to mitigate this issue they propose this method. It is well known that many important PU learning methods are not two-step, such as nnPU.
Limited novelty:
First, I am not convinced that the proposed method is novel. The key techniques used are transferred from some existing methods (from Sec.3.2-3.6). Second, I don't understand the meaning of designing an algorithm like this. For example, it uses an unbiased risk (Kiryo et al., 2017), which can already guarantee the consistency of learning. But complicating the learning procedure makes this loss lose its original advantage, and an additional regularization method has to be added. And it labels the instance according to the epistemic uncertainty (Eq.4-6). It is unclear why this measure is used and what the advantages are. The algorithm design is almost at random, at least from the current description. In addition, I think the complex algorithm design does not result in significant performance improvements.
Minor concerns:
What is Eq.(3) for?
The writing of the paper can be improved, e.g., "two-steps procedures"->"two-step procedures" |
ICLR | Title
Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection
Abstract
Positive-unlabeled (PU) learning aims at learning a binary classifier from only positive and unlabeled training data. Recent approaches addressed this problem via cost-sensitive learning by developing unbiased loss functions, and their performance was later improved by iterative pseudo-labeling solutions. However, such two-step procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To mitigate this issue we propose PUUPL, a new loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty in pseudo-labeling. Using an ensemble of neural networks and assigning pseudo-labels based on high-confidence predictions improves the reliability of pseudo-labels, increasing the predictive performance of our method and leading to new state-of-the-art results in PU learning. With extensive experiments, we show the effectiveness of our method over different datasets, modalities, and learning tasks, as well as improved robustness over misspecifications of hyperparameters and biased positive data. The source code of the method and all the experiments are available in the supplementary material.
1 INTRODUCTION
Many real-world applications involve positive and unlabeled (PU) datasets in which only some of the data is labeled positive while the majority is unlabeled and contains both positives and negatives. PU learning aims to learn a binary classifier in this challenging setting without any labeled negative examples. Learning from PU data can reduce deployment costs in many deep learning applications that otherwise require annotations from experts such as medical image diagnosis (Armenian and Lilienfeld, 1974) and protein function prediction (Gligorijević et al., 2021), and it can even enable applications in settings where the measurement technology itself can not detect negative examples (Purcell et al., 2019).
Some recent approaches such as unbiased PU (Du Plessis et al., 2014, uPU) and non-negative PU (Kiryo et al., 2017, nnPU) formulate this problem as cost-sensitive learning. Others approach PU learning as a two-step procedure first identifying and labeling some reliable negative examples, and then re-training the model based on this newly constructed labeled dataset (Bekker and Davis, 2020). These approaches show similarities with pseudo-labeling in semi-supervised classification settings (Lee, 2013).
Such pseudo-labeling techniques are however especially vulnerable to incorrectly assigned labels of the selected examples as these errors will propagate and magnify in the retrained model, resulting in a negative feedback loop. Worse yet, since the true labels are unavailable in PU learning, this situation is hard to detect by any metrics computed on the training set. This erroneous selection of unreliable pseudo-labels occurs when wrong model predictions are associated with excessive model confidence. Such poor model calibration is accompanied by a distortion of the signal for the pseudo label selection (Van Engelen and Hoos, 2020).
In recent literature on pseudo-labeling, this problem is recognized and successfully addressed by explicitly estimating the prediction uncertainty (Abdar et al., 2021; Rizve et al., 2021; Arazo et al., 2020). While this is the case for semi-supervised classification, there does not yet exist a method that explores the use of uncertainty quantification for pseudo-labeling in a PU learning context.
Contributions: Motivated by this, we propose a novel, uncertainty-aware pseudo-labeling framework for PU learning that uses established uncertainty quantification techniques to identify reliable examples to pseudo-label (Fig. 1). In particular, our contributions are: (1) We introduce PUUPL (Positive-Unlabeled, Uncertainty-Aware Pseudo-Labeling), a simple uncertainty-aware pseudolabeling framework for PU learning. (2) PUUPL can use any loss function for PU learning, improving model performance while being robust to the specific data biases that the respective loss considers. (3) We evaluate our methods on a wide range of benchmarks and PU datasets, achieving state-of-the-art performance in PU learning. (4) Our extensive ablation studies provide new insights into uncertainty-aware pseudo-labeling for PU learning. Further, they show that our method is robust to the choices of hyperparameters, with 1% or less variability in test accuracy among different choices as well as distribution shifts between labeled positives in the train and test datasets. To the best of our knowledge, PUUPL is the first framework for PU learning which leverages uncertainty information during pseudo-labeling.
2 RELATED WORK
PU Learning PUL was introduced as a variant of binary classification (Liu et al., 2003) and is related to one-class learning (Ruff et al., 2018; Li et al., 2010), multi-positive learning (Xu et al., 2017), multi-task learning (Kaji et al., 2018), and semi-supervised learning (Chapelle et al., 2009). There exist three main research branches for PUL: two-step techniques, class prior incorporation, and biased PUL (Bekker and Davis, 2020). In this work, we combine Pseudo-Labeling which has similarities to two-step techniques, with biased PUL, also coined as reweighting methods, and refer to Bekker and Davis (2020) for a comprehensive overview of the field. In this context, Du Plessis et al. (2014) introduced the unbiased risk estimator uPU. Kiryo et al. (2017) showed this loss function is prone to overfitting in deep learning contexts as it lacks a lower bound and proposed the non-negative risk estimator nnPU as a remedy. Follow-up work on loss functions for PUL has focused on robustness w.r.t biases in the sampling process such as PUSB (Kato et al., 2019), PUbN (Hsieh et al., 2019) or PULNS (Luo et al., 2021).
Uncertainty-aware Pseudo-Labeling Pseudo-labeling follows the rationale that the model leverages its own predictions on unlabeled data as pseudo training targets to enable iterative semisupervised model training. The first such approach for deep learning was introduced by Lee (2013), simply selecting the class with the highest predicted probability as a pseudo label. One weakness of pseudo-labeling is that erroneously selected pseudo-labels can amplify for training, potentially leading to model degradation. This is grounded in poor model calibration distorting the signal for the pseudo label selection (Van Engelen and Hoos, 2020). Iscen et al. (2019) try to mitigate this issue using confidence and class weights. Shi et al. (2018) use confident scores based on the geometric neighborhood of the unlabeled samples while Arazo et al. (2020) effectively tackle this confirmation bias using Mixup (Zhang et al., 2017), Label Noise (Tanaka et al., 2018), and Entropy Regularization (Grandvalet et al., 2005). Rizve et al. (2021) introduced a pseudo-labeling framework using a weighting scheme for class balancing and MC dropout (Gal and Ghahramani, 2016) for calibration, while Beluch et al. (2018) found deep ensembles (Lakshminarayanan et al., 2017a) to yield the best model calibration in an active learning context, especially in low-label regimes. The commonality of these works is the explicit consideration of model uncertainty to improve pseudo-label selection, which motivates us to apply this in the context of PU learning.
Pseudo-Labeling for PU Learning Two-step approaches in PU learning first identify negative samples from the unlabeled dataset, and then train a binary classification model on the original dataset augmented with the newly identified negatives (Bekker and Davis, 2020). These approaches share similarities with pseudo-labeling but lack an iterative feedback loop after the completion of the second step.
A first attempt to combine pseudo-labeling with PU learning was made with Self-PU (Chen et al., 2020b), where self-paced learning, a confidence-weighting scheme based on the model predictions and a teacher-student distillation approach are combined. Via this complex training scheme, Self-PU was shown to marginally outperform recent baselines. With PUUPL, we propose an alternative PL strategy for PU learning that performs better in a simpler and more principled way using implicitly well-calibrated models to improve the pseudo-label selection.
Uncertainty-aware pseudo-labeling for PU learning To the best of our knowledge, we are the first to introduce an uncertainty-aware pseudo-labeling paradigm to PU learning. Although our method shares the same motivation as that from Rizve et al. (2021) for semi-supervised classification, we differ in several important aspects: (1) we specifically target PU data with a PU loss, (2) we quantify uncertainty with an ensemble instead of Monte Carlo dropout, (3) we use epistemic uncertainty instead of the predicted class probabilities for the selection, (4) we do not use temperature scaling, and (5) we use soft labels.
3 METHOD
We propose PUUPL (Positive Unlabeled, Uncertainty-aware Pseudo-Labeling), an iterative pseudolabeling procedure to progressively select and label the most confident examples from unlabeled data. The pseudo-code for PUUPL is shown as Algorithm 1. Our method separates the training set Xtr into the sets P , U , and L which contain the initial positives, the currently unlabeled, and the pseudo-labeled samples respectively. The set L is initially empty. At each pseudo-labeling iteration, we first train our model using all samples in P , U , and L until some convergence condition is met (Section 3.2). Then, samples in U are predicted and ranked w.r.t their predictive uncertainty (Section 3.3) and samples with the most confident score are assigned the predicted label and moved into the set L (Section 3.4). Similarly, samples in L are also predicted and the most uncertain samples are moved back to the unlabeled set U (Section 3.5). Next, the model is re-initialized to the same initial weights and a new pseudo-labeling iteration starts.
In the following, we first describe the notation used in this paper and then explain in detail the training procedure of PUUPL.
3.1 NOTATION
Consider input samples X with label y and superscripts ·tr, ·va and ·te for training, validation, and test data respectively. The initial training labels ytr are set to one for all samples in P and zero for all others in U . We group the indices of original positives, unlabeled, and pseudo-labeled samples
Algorithm 1 Pseudocode for the PUUPL Training Procedure
Input:
• Train, validation and test data Xtr, ytr, Xva, yva, Xte, yte
• Number K of networks in the ensemble (suggested K = 2)
• Maximum number T of pseudo-labels to assign at each round (suggested T = 1000)
• Maximum uncertainty threshold tl to assign pseudo-labels (suggested tl = 0.05)
• Minimum uncertainty threshold tu to remove pseudo-labels (suggested tu = 0.35)
Output: Model parameters θ∗
 1: P ← indices of positive samples in Xtr
 2: U ← indices of unlabeled samples in Xtr
 3: L ← ∅                                  ▷ Indices of pseudo-labeled samples
 4: θ0 ← Random weight initialization
 5: while not converged do
 6:     Initialize model weights to θ0     ▷ Training
 7:     Train an ensemble of K networks on Xtr, ytr using the loss in Eq. 1
 8:     Validate on Xva, yva and update weights θ∗ if accuracy improved
 9:     f̂ ← ensemble predictions for Xtr   ▷ Uncertainty
10:     Compute epistemic uncertainty ûe with f̂ via Eq. 6
11:     Lnew ← balanced set of examples to pseudo-label via Eq. 7 using ûeU, T and tl   ▷ Pseudo-labeling
12:     Unew ← examples to pseudo-unlabel via Eq. 10 using ûeL and tu
13:     L ← L ∪ Lnew \ Unew                ▷ Update indices
14:     U ← U \ Lnew ∪ Unew
15:     yLnew ← p̂Lnew                      ▷ Update pseudo-labels
16:     yUnew ← 0
17: end while
18: Restore the weights θ∗ that scored highest on the validation set
19: Compute accuracy on the held-out test set Xte, yte
20: return θ∗
in Xtr into the sets P , U , and L respectively. Our proposed model is an ensemble of K deep neural networks whose random initial weights are collectively denoted as θ0. The predictions of the k-th network for sample i are indicated with p̂ik = σ(f̂ik), with σ(·) the logistic function and f̂ik the predicted logits. The logits and predictions for a sample averaged across the networks in the ensemble are denoted by f̂i and p̂i respectively. We subscript data and predictions with i to index individual samples, and use an index set in the subscript to index all samples in the set (e.g., XtrU = {Xtri |i ∈ U} denotes the features of all unlabeled samples). We denote the total, epistemic and aleatoric uncertainty of sample i as ûti, û e i , and û a i , respectively.
3.2 LOSS FUNCTION
We train our proposed model with a loss function L that is a convex combination of a loss LPU for the samples in the positive and unlabeled set (P ∪ U ) and a loss LL for the samples in the pseudo-labeled set (L): L = λ · LL + (1− λ) · LPU (1) with λ ∈ (0, 1). The loss LL is the binary cross-entropy computed w.r.t the assigned pseudo-labels y. Our model is agnostic to the specific PU loss LPU used. This allows our method to be easily adapted to different scenarios for which a PU loss was proposed and improve over its performance, for example when coping with a selection bias in the positive examples (Kato et al., 2019) or the availability of a biased negative set (Hsieh et al., 2019). For the standard setting of PU learning, we use the non-negative correction nnPU of the PU loss (Kiryo et al., 2017):
LPU = π · ℓ(P, 1) + max{0, ℓ(U, −1) − π · ℓ(P, −1)}    (2)
with π the prior probability of a sample being positive, which we assume known and which can be estimated from PU data (du Plessis et al., 2016), and ℓ(S, y) the expected sigmoid loss of the samples in the set S with label y:
ℓ(S, y) = (1/|S|) ∑i∈S 1 / (1 + exp(y · p̂i))    (3)
3.3 MODEL UNCERTAINTY
To quantify the predictive uncertainty, we utilize a deep ensemble with K networks with the same architecture, each trained on the full training dataset (Lakshminarayanan et al., 2017b). Given the predictions p̂i1, . . . , p̂iK for a sample xi, we associate three types of uncertainties to xi’s predictions (Hüllermeier and Waegeman, 2021): the aleatoric uncertainty as the mean of the entropy of the predictions (Eq. 4), the total uncertainty as the entropy of the mean prediction (Eq. 5), and the epistemic uncertainty formulated as the difference between the two (Eq. 6).
ûai = − (1/K) ∑k=1…K [p̂ik log p̂ik + (1 − p̂ik) log(1 − p̂ik)]    (4)
ûti = −p̂i log p̂i − (1 − p̂i) log(1 − p̂i)    (5)
ûei = ûti − ûai    (6)
Epistemic uncertainty corresponds to the mutual information between the parameters of the model and the true label of the sample. Low epistemic uncertainty thus means that the model parameters would not change significantly if trained on the true label, suggesting that the prediction is indeed correct. Using such a prediction as a target in the cross-entropy loss would in turn provide a stronger, more explicit learning signal to the model, so that a correctly pseudo-labeled example provides a larger decrease in risk compared to using the same example without any label within the positive-unlabeled loss.
3.4 PSEUDO-LABELING
The estimated epistemic uncertainty (Eq. 6) is used to rank the samples of the unlabeled set U and to select reliable samples for pseudo-labeling. Let ρ(i) denote the uncertainty rank of sample i; the set Lnew of newly pseudo-labeled samples is then formed by taking the T samples with the lowest uncertainty from U, ensuring that their uncertainty is below the threshold tl:
Lnew = {i ∈ U | ρ(i) ≤ T ∧ ûei ≤ tl}    (7)
Previous works have shown that balancing the pseudo-label selection between the two classes, i.e., ensuring that the ratio of newly labeled positives and negatives is close to a given target ratio r, is beneficial (Rizve et al., 2021). In this case, the set Lnew should be partitioned according to the model’s predictions into a set Lnew+ of predicted positives and L new − of predicted negatives, and the most uncertain samples in the larger set should be discarded to reach the desired ratio r, which we fix to 1. We then assign soft pseudo-labels, i.e., the average prediction in the open interval (0, 1), to these samples:
yi = p̂i ∀i ∈ Lnew− ∪ Lnew+ (8)
3.5 PSEUDO-UNLABELING
Similar to the way that low uncertainty on an unlabeled example indicates that the prediction can be trusted, high uncertainty on a pseudo-labeled example indicates that the assigned pseudo-label might not be correct after all. To avoid training on such possibly incorrect pseudo-labels, we move the pseudo-labeled examples with uncertainty above a threshold tu back into the unlabeled set:
Unew = {i ∈ L | ûei ≥ tu}    (9)
yi = 0  ∀i ∈ Unew    (10)
4 EXPERIMENTS
4.1 EXPERIMENTAL PROTOCOL
To empirically compare our proposed framework to existing state-of-the-art losses and models, we followed standard protocols for PU learning (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019).
Datasets: We evaluated our method in the standard setting of MNIST (Deng, 2012) and CIFAR10 (Krizhevsky et al., 2009) datasets, as well as Fashion MNIST (F-MNIST) (Xiao et al., 2017), STL-10 (Coates et al., 2011) and IMDb (Maas et al., 2011) datasets to show the applicability to different data modalities. Similar to previous studies (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019), positives were defined as odd digits in MNIST, vehicles in CIFAR-10 and STL-10, and we used trousers, dress, sandals, sneaker, and ankle boots for F-MNIST and positive reviews for IMDb. For STL-10 we used all available labeled and unlabeled data and the official ten crossvalidation folds. For all other datasets, we reserved a validation set of 5,000 samples and use all other samples for training with 1,000 randomly chosen labeled positives, as is common practice, and evaluated on the canonical test set of each dataset. More details are provided in Appendix B
Network architectures: To ensure a fair comparison with other works in PU learning (Chen et al., 2020b; Kiryo et al., 2017) we used the same architectures on the same datasets, namely a 13-layers convolutional neural network for the experiments on CIFAR-10 (Table A.3) and a MLP with four fully connected hidden layers of 300 neurons each and ReLU activation for MNIST and F-MNIST. For IMDb we used a bidirectional LSTM network with a MLP head whose architecture was optimized as part of the hyperparameter search.
Training: We trained all models with the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.999, and an exponential learning rate decay with γ = 0.99. We further used the nnPU loss (Kiryo et al., 2017) as LPU (Eq. 1) unless otherwise stated. As is common in the pseudolabeling literature (Chen et al., 2020b; Rizve et al., 2021; Kato et al., 2019; Tanaka et al., 2018; Hu et al., 2021), we assume that a positive and negative labeled validation set is available and use this validation set for early stopping, i.e., stop the pseudo-labeling loop when the model’s accuracy on this set has stopped improving, and use the parameters that achieved the highest validation accuracy to compute the test performance. An experiment will show how this requirement can be relaxed in practice.
Hyperparameter tuning: We used the Hyperband algorithm (Li et al., 2017) to optimize all hyperparameters on the CIFAR-10 dataset with η = 3 and S = 4, using the validation accuracy as the criterion to be optimized. The configuration that achieved the highest validation accuracy was
then used as a basis for the ablation studies and fine-tuned on the remaining datasets to show that the pseudo-labeling hyperparameters (Table A.1) do not require tuning when transferred to other datasets and data modalities. Specifically, on the other datasets we only tuned hyperparameters related to network training, such as batch size, learning rate, weight decay, number of training epochs, dropout probability, and other details of the network architecture, by running only the first bracket of Hyperband with η = 3 and S = 3.
Evaluation: the best configuration found by Hyperband was trained five times with different random training/validation splits and evaluated on the test set to produce the final results, except for STL-10 where we used the official ten cross-validation folds.
4.2 RESULTS
The best performance achieved by our method is shown in Table 1 and compared with SelfPU (Chen et al., 2020b), DAN (Liu et al., 2019) and PAN (Hu et al., 2021). PUUPL was always able to improve over the baseline nnPU loss, with larger gap for more difficult datasets such as CIFAR-10 (+0.98%) and IMDb (+1.89%) as well as over SelfPU (Chen et al., 2020b) and DAN (Liu et al., 2019), setting a new state-of-the-art for PU learning. Moreover, PUUPL is naturally very well calibrated despite the absence of explicit calibration on labeled data (Fig. 2), making its predictions inherently reliable. The best pseudo-labeling hyperparameters constitute the defaults we suggested in Algorithm 1 and are K = 2, T = 1000, tl = 0.05 and tu = 0.35. Note that the baseline nnPU scores reported in Table 1 were also obtained by training an ensemble of two networks with the nnPU loss, thus possibly explaining the discrepancy observed with SelfPU. The best network architecture for IMDb is shown in Table A.2.
4.3 ABLATION STUDIES
We performed ablation studies on the CIFAR-10 dataset by changing one parameter at a time of the best configuration found by Hyperband, training and evaluating with five different splits and reporting the test accuracy corresponding to the best validation score for each run. To limit the computational resources needed, we used at most 15 pseudo-labeling iterations.
Weights initialization: We confirmed the observation that it is beneficial to re-initialize the weights after each pseudo-labeling step (Arazo et al., 2020), with slightly better performance (+0.052%) achieved when the weights are re-initialized to the same values before every pseudo-labeling iteration (Fig. 3a). We believe this encourages the model to be consistent across pseudo-labeling rounds.
Pseudo-label assignment: Soft pseudo-labels were preferred over hard ones (+0.75%). We found that our model was very well calibrated, with ECEs as low as 0.05 on the labeled validation data (Fig. 2), indicating that the soft pseudo-labels it estimated were reliable training targets and that post hoc calibration was not necessary. Contrary to expectation, however, re-assigning all pseudo-labels at every iteration harmed performance (−0.12%); instead, pseudo-labels should be kept fixed after being assigned for the first time. A possible explanation is that fixed pseudo-labels prevent the model’s predictions from drifting too far away from the initial pseudo-labeling towards an incorrect assignment. It was also beneficial to assign the same number of positive and negative pseudo-labels (Fig. 3b) compared to keeping the same ratio π of positives and negatives found in the whole dataset (−0.20%) or not balancing the selection at all (−0.55%).
Uncertainty: Ranking predictions by aleatoric uncertainty was almost as good as ranking by epistemic uncertainty (−0.08%), while total uncertainty produced moderately worse rankings (−0.37%, Fig. 3c). An ensemble with only two networks achieved the best performance, while larger ensembles performed worse, and Monte Carlo dropout (−0.85%) was better than ensembles of five (−1.00%) and ten networks (−1.58%).
Early stopping: Finally, performing early stopping on the validation PU loss resulted in worse accuracy (−1.12%) compared to using the accuracy on positive-negative labels (Fig. 3d). Although considerable when compared to the impact of other algorithmic choices, such a performance drop indicates that PUUPL can still be used effectively in real-world scenarios with no labeled validation data available.
4.4 ROBUSTNESS
Following the same protocol as the ablation studies in Section 4.3, we tested the robustness of our method with respect to misspecifications of the continuous hyperparameters (Fig. 4).
Pseudo-labeling: Our method was fairly robust to the maximum number T of assigned pseudo-labels and the maximum uncertainty threshold tl for the pseudo-labels, with almost constant performance up to T = 1000 and tl = 0.1. The best performance was achieved by the combination of T = 1000 and tl = 0.05, but both of these experiments were performed while disabling the other constraint (i.e., setting T = ∞ when testing tl and vice-versa). Using only a constraint on T resulted in a reduction of −0.11%, while constraining tl alone resulted in a reduction of −1.04%. The results for tu were less conclusive as to the general trend, possibly because values lower than 0.35 require more than the 15 pseudo-labeling iterations we used for the experiment, and values above 0.4 did not show significant differences.
Misspecification of the class prior: The performance of our framework degraded slowly as the prior π moved further from the true value of 0.4, with a performance reduction of less than 2.5% in the range [0.3, 0.6] (Fig. 4a). Furthermore, the performance gap between PUUPL and nnPU widens as π is more severely misspecified. Modern losses for PU learning such as uPU and nnPU rely on a correct specification of the positive class prior π, obtained either from domain knowledge or from a priori estimation, which constitutes a whole research branch in PU learning (Chen et al., 2020a) and is a significant challenge in any practical PU application (Bekker and Davis, 2020; Chen et al., 2020a). We believe that the inclusion of epistemic uncertainty, the usage of soft labels, and the convex combination of two losses enable PUUPL to be considerably more robust to significant misspecification of the class prior π.
Loss combination: The best performing combination had λ = 0.15, with a modest performance reduction until λ = 0.5 (−0.25%, Fig. 4b). Values of 0.05 and below resulted in the same performance reduction of −0.5%, similarly to λ = 0.75, and performance was 1.09% worse at λ = 0.9. Too large a λ might facilitate the emergence of a harmful confirmation bias, but it is nonetheless important to train on the pseudo-labels, too, to avoid losing the information contained therein.
Number of training labeled positives: The performance of our method steadily increased and seemed to plateau at 91.4% between 3,000 and 6,000 labeled positives. The gap between nnPU and PUUPL is largest in the low labeled data region with a 1.44% gap at 250 labels, where we achieved 87.59% accuracy, shrinking to a gap of 0.52% with 3,000 labels, where our performance was 91.44% (Fig. 4c). This supports our intuition about the importance of uncertainty because, as the amount of labeled data decreases, uncertainty becomes more important to detect overfitting and to prevent the model from assigning incorrect pseudo-labels.
Positive bias: The most general assumption of PU learning is that the labeled examples are a biased sample from the positive distribution (Bekker and Davis, 2020). We tested PUUPL in such a biased setting where positives in the training and validation sets were an airplane with 50% chance, an automobile with 30% chance, a ship with 15% chance, and a truck with 5% chance, while in previous experiments the positives were evenly composed of airplanes, automobiles, ships and trucks. The test distribution was unchanged, meaning that test samples are half as likely to be airplanes compared to the training set, and five times more likely to be truck images. We also fixed all hyperparameters to the values identified previously, except for the loss LPU, where we used the nnPUSB loss (Kato et al., 2019) to handle the positive bias. The baseline with the nnPUSB loss performed better than the nnPU loss (+0.26%), but worse than PUUPL with the nnPU loss (−0.39%), highlighting the benefit of our uncertainty-aware approach. The best performance was however achieved with PUUPL on top of the nnPUSB loss (+0.21% compared to nnPU and −2.27% compared to the unbiased setting), showing that PUUPL can leverage the advantages of different PU losses and further improve on them (Table 2).
5 CONCLUSIONS
In this paper, we proposed an uncertainty-aware pseudo-labeling framework for PU learning which quantifies the epistemic uncertainty of an ensemble of networks and selects which examples to pseudo-label based on their predictive uncertainty. We demonstrated the benefits of our approach on different data modalities and biased settings, achieving state-of-the-art performance in all our benchmarks. We further conducted extensive ablation studies and investigated the robustness of our approach, showing it to be reliable in settings that are likely to be encountered in the real world, such as a bias in the positive data, the unavailability of labeled negatives as validation data and the misspecification of the class prior π.
Ethics statement: Most of the ethical concerns stem from the specific application and dataset. Here we have shown a certain robustness towards biased positive labels without providing a comprehensive assessment, therefore practitioners should always ensure, insofar as possible, that the obtained
predictions are "fair" (with "fairness" defined appropriately w.r.t. the target application) and do not systematically affect particular subsets of the population of interest.
Reproducibility statement: The source code for the framework is available in the supplementary material.
A NETWORK ARCHITECTURE AND HYPERPARAMETERS
Table A.1 reports the hyperparameters related to pseudo-labeling and their ranges. Table A.2 reports the network architecture used in the IMDb experiments, while Table A.3 reports the network used with CIFAR-10.
B DATASET INFORMATION
Table B.1 reports the number of samples for each split and each dataset. For the image datasets, we subtracted the mean pixel intensity in the training set and divided by the standard deviation. For IMDb we used pre-trained GloVe embeddings of size 200 on a corpus of six billion tokens. | 1. What is the focus of the paper regarding pseudo-labels and PU learning?
2. What are the strengths of the proposed approach, particularly its simplicity and generalizability?
3. What are the weaknesses of the paper, especially regarding its novelty and lack of clarity in certain aspects?
4. Do you have any suggestions for improving the notation and experimental comparisons in the paper? | Summary Of The Paper
Review | Summary Of The Paper
The authors proposed to use pseudo-labels based on high confidence predictions to improve the classification performance of PU learning. Experiments show the effectiveness of the proposed method.
Review
Strengths
A simple uncertainty-aware pseudo-labeling framework for PU learning is proposed.
The proposed method is general and can be integrated into any loss function for PU learning.
This paper is easy to follow and the logic is clear.
Weaknesses
The novelty of this paper may not be enough. Specifically, the pseudo-labeling techniques have been well studied in a lot of weakly surprised learning scenarios such as label noise learning and semi-supervised learning.
The motivation of the metric used to quantify the uncertainty is not clear. The authors should give more explanations of the advantage of the proposed quantified metric.
The notations in Section 3.3 (which is the most important section) may cause confusion. For example, is it a sample x_i or example x_i? Is \hat{p}_{i} a vector or scalar? How about \hat{p}_{ik}?
Only four baseline methods are used in experiments. I think it is better to add more. |
ICLR | Title
Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection
Abstract
Positive-unlabeled (PU) learning aims at learning a binary classifier from only positive and unlabeled training data. Recent approaches addressed this problem via cost-sensitive learning by developing unbiased loss functions, and their performance was later improved by iterative pseudo-labeling solutions. However, such two-step procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To mitigate this issue we propose PUUPL, a new loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty in pseudo-labeling. Using an ensemble of neural networks and assigning pseudo-labels based on high-confidence predictions improves the reliability of pseudo-labels, increasing the predictive performance of our method and leading to new state-of-the-art results in PU learning. With extensive experiments, we show the effectiveness of our method over different datasets, modalities, and learning tasks, as well as improved robustness over misspecifications of hyperparameters and biased positive data. The source code of the method and all the experiments are available in the supplementary material.
1 INTRODUCTION
Many real-world applications involve positive and unlabeled (PU) datasets in which only some of the data is labeled positive while the majority is unlabeled and contains both positives and negatives. PU learning aims to learn a binary classifier in this challenging setting without any labeled negative examples. Learning from PU data can reduce deployment costs in many deep learning applications that otherwise require annotations from experts such as medical image diagnosis (Armenian and Lilienfeld, 1974) and protein function prediction (Gligorijević et al., 2021), and it can even enable applications in settings where the measurement technology itself can not detect negative examples (Purcell et al., 2019).
Some recent approaches such as unbiased PU (Du Plessis et al., 2014, uPU) and non-negative PU (Kiryo et al., 2017, nnPU) formulate this problem as cost-sensitive learning. Others approach PU learning as a two-step procedure first identifying and labeling some reliable negative examples, and then re-training the model based on this newly constructed labeled dataset (Bekker and Davis, 2020). These approaches show similarities with pseudo-labeling in semi-supervised classification settings (Lee, 2013).
Such pseudo-labeling techniques are however especially vulnerable to incorrectly assigned labels of the selected examples as these errors will propagate and magnify in the retrained model, resulting in a negative feedback loop. Worse yet, since the true labels are unavailable in PU learning, this situation is hard to detect by any metrics computed on the training set. This erroneous selection of unreliable pseudo-labels occurs when wrong model predictions are associated with excessive model confidence. Such poor model calibration is accompanied by a distortion of the signal for the pseudo label selection (Van Engelen and Hoos, 2020).
In recent literature on pseudo-labeling, this problem is recognized and successfully addressed by explicitly estimating the prediction uncertainty (Abdar et al., 2021; Rizve et al., 2021; Arazo et al., 2020). While this is the case for semi-supervised classification, there does not yet exist a method that explores the use of uncertainty quantification for pseudo-labeling in a PU learning context.
Contributions: Motivated by this, we propose a novel, uncertainty-aware pseudo-labeling framework for PU learning that uses established uncertainty quantification techniques to identify reliable examples to pseudo-label (Fig. 1). In particular, our contributions are: (1) We introduce PUUPL (Positive-Unlabeled, Uncertainty-Aware Pseudo-Labeling), a simple uncertainty-aware pseudolabeling framework for PU learning. (2) PUUPL can use any loss function for PU learning, improving model performance while being robust to the specific data biases that the respective loss considers. (3) We evaluate our methods on a wide range of benchmarks and PU datasets, achieving state-of-the-art performance in PU learning. (4) Our extensive ablation studies provide new insights into uncertainty-aware pseudo-labeling for PU learning. Further, they show that our method is robust to the choices of hyperparameters, with 1% or less variability in test accuracy among different choices as well as distribution shifts between labeled positives in the train and test datasets. To the best of our knowledge, PUUPL is the first framework for PU learning which leverages uncertainty information during pseudo-labeling.
2 RELATED WORK
PU Learning PUL was introduced as a variant of binary classification (Liu et al., 2003) and is related to one-class learning (Ruff et al., 2018; Li et al., 2010), multi-positive learning (Xu et al., 2017), multi-task learning (Kaji et al., 2018), and semi-supervised learning (Chapelle et al., 2009). There exist three main research branches for PUL: two-step techniques, class prior incorporation, and biased PUL (Bekker and Davis, 2020). In this work, we combine Pseudo-Labeling which has similarities to two-step techniques, with biased PUL, also coined as reweighting methods, and refer to Bekker and Davis (2020) for a comprehensive overview of the field. In this context, Du Plessis et al. (2014) introduced the unbiased risk estimator uPU. Kiryo et al. (2017) showed this loss function is prone to overfitting in deep learning contexts as it lacks a lower bound and proposed the non-negative risk estimator nnPU as a remedy. Follow-up work on loss functions for PUL has focused on robustness w.r.t biases in the sampling process such as PUSB (Kato et al., 2019), PUbN (Hsieh et al., 2019) or PULNS (Luo et al., 2021).
Uncertainty-aware Pseudo-Labeling Pseudo-labeling follows the rationale that the model leverages its own predictions on unlabeled data as pseudo training targets to enable iterative semisupervised model training. The first such approach for deep learning was introduced by Lee (2013), simply selecting the class with the highest predicted probability as a pseudo label. One weakness of pseudo-labeling is that erroneously selected pseudo-labels can amplify for training, potentially leading to model degradation. This is grounded in poor model calibration distorting the signal for the pseudo label selection (Van Engelen and Hoos, 2020). Iscen et al. (2019) try to mitigate this issue using confidence and class weights. Shi et al. (2018) use confident scores based on the geometric neighborhood of the unlabeled samples while Arazo et al. (2020) effectively tackle this confirmation bias using Mixup (Zhang et al., 2017), Label Noise (Tanaka et al., 2018), and Entropy Regularization (Grandvalet et al., 2005). Rizve et al. (2021) introduced a pseudo-labeling framework using a weighting scheme for class balancing and MC dropout (Gal and Ghahramani, 2016) for calibration, while Beluch et al. (2018) found deep ensembles (Lakshminarayanan et al., 2017a) to yield the best model calibration in an active learning context, especially in low-label regimes. The commonality of these works is the explicit consideration of model uncertainty to improve pseudo-label selection, which motivates us to apply this in the context of PU learning.
Pseudo-Labeling for PU Learning Two-step approaches in PU learning first identify negative samples from the unlabeled dataset, and then train a binary classification model on the original dataset augmented with the newly identified negatives (Bekker and Davis, 2020). These approaches share similarities with pseudo-labeling but lack an iterative feedback loop after the completion of the second step.
A first attempt to combine pseudo-labeling with PU learning was made with Self-PU (Chen et al., 2020b), where self-paced learning, a confidence-weighting scheme based on the model predictions and a teacher-student distillation approach are combined. Via this complex training scheme, Self-PU was shown to marginally outperform recent baselines. With PUUPL, we propose an alternative PL strategy for PU learning that performs better in a simpler and more principled way using implicitly well-calibrated models to improve the pseudo-label selection.
Uncertainty-aware pseudo-labeling for PU learning To the best of our knowledge, we are the first to introduce an uncertainty-aware pseudo-labeling paradigm to PU learning. Although our method shares the same motivation as that from Rizve et al. (2021) for semi-supervised classification, we differ in several important aspects: (1) we specifically target PU data with a PU loss, (2) we quantify uncertainty with an ensemble instead of Monte Carlo dropout, (3) we use epistemic uncertainty instead of the predicted class probabilities for the selection, (4) we do not use temperature scaling, and (5) we use soft labels.
3 METHOD
We propose PUUPL (Positive Unlabeled, Uncertainty-aware Pseudo-Labeling), an iterative pseudolabeling procedure to progressively select and label the most confident examples from unlabeled data. The pseudo-code for PUUPL is shown as Algorithm 1. Our method separates the training set Xtr into the sets P , U , and L which contain the initial positives, the currently unlabeled, and the pseudo-labeled samples respectively. The set L is initially empty. At each pseudo-labeling iteration, we first train our model using all samples in P , U , and L until some convergence condition is met (Section 3.2). Then, samples in U are predicted and ranked w.r.t their predictive uncertainty (Section 3.3) and samples with the most confident score are assigned the predicted label and moved into the set L (Section 3.4). Similarly, samples in L are also predicted and the most uncertain samples are moved back to the unlabeled set U (Section 3.5). Next, the model is re-initialized to the same initial weights and a new pseudo-labeling iteration starts.
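To make the structure of this loop concrete, the following Python sketch mirrors the procedure just described. It is only an illustration of the control flow, not the released code: every helper it calls (random_init, train_ensemble, accuracy, ensemble_predict, epistemic_uncertainty, select_pseudo_labels, pseudo_unlabel) is a placeholder for the corresponding step detailed in Sections 3.2-3.5 and Algorithm 1.

# Illustrative outer loop of PUUPL; the helpers below are placeholders, not defined here.
def puupl(x_tr, y_tr, P, U, x_va, y_va, max_rounds=15):
    L = set()                               # pseudo-labeled indices, initially empty
    theta0 = random_init()                  # the same initial weights are reused every round
    best_model, best_acc = None, -1.0
    for _ in range(max_rounds):
        model = train_ensemble(x_tr, y_tr, init=theta0)                # minimise Eq. 1
        acc = accuracy(model, x_va, y_va)
        if acc > best_acc:
            best_model, best_acc = model, acc
        probs = ensemble_predict(model, x_tr)                          # per-network probabilities
        u_epi = epistemic_uncertainty(probs)                           # Eq. 6
        new_L, soft_y = select_pseudo_labels(u_epi, probs.mean(0), U)  # Eqs. 7-8
        new_U = pseudo_unlabel(u_epi, L)                               # Eqs. 9-10
        L = (L | set(new_L)) - set(new_U)
        U = (U - set(new_L)) | set(new_U)
        for i, y_i in zip(new_L, soft_y):
            y_tr[i] = y_i                                              # soft pseudo-label
        for i in new_U:
            y_tr[i] = 0.0                                              # back to the unlabeled label
    return best_model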
In the following, we first describe the notation used in this paper and then explain in detail the training procedure of PUUPL.
3.1 NOTATION
Consider input samples X with label y and superscripts ·tr, ·va and ·te for training, validation, and test data respectively. The initial training labels ytr are set to one for all samples in P and zero for all others in U . We group the indices of original positives, unlabeled, and pseudo-labeled samples
Algorithm 1 Pseudocode for the PUUPL Training Procedure Input
• Train, validation and test data Xtr, ytr, Xva, yva, Xte, yte
• Number K of networks in the ensemble (suggested K = 2)
• Maximum number T of pseudo-labels to assign at each round (suggested T = 1000)
• Maximum uncertainty threshold tl to assign pseudo-labels (suggested tl = 0.05)
• Minimum uncertainty threshold tu to remove pseudo-labels (suggested tu = 0.35)
Output Model parameters θ∗
1: P ← indices of positive samples in Xtr
2: U ← indices of unlabeled samples in Xtr
3: L ← ∅  ▷ indices of pseudo-labeled samples
4: θ0 ← random weight initialization
5: while not converged do
6:   Initialize model weights to θ0  ▷ training
7:   Train an ensemble of K networks on Xtr, ytr using the loss in Eq. 1
8:   Validate on Xva, yva and update weights θ∗ if accuracy improved
9:   f̂ ← ensemble predictions for Xtr  ▷ uncertainty
10:  Compute epistemic uncertainty û^e with f̂ via Eq. 6
11:  L^new ← balanced set of examples to pseudo-label via Eq. 7 using û^e_U, T and t_l  ▷ pseudo-labeling
12:  U^new ← examples to pseudo-unlabel via Eq. 10 using û^e_L and t_u
13:  L ← L ∪ L^new \ U^new  ▷ update indices
14:  U ← U \ L^new ∪ U^new
15:  y_{L^new} ← p̂_{L^new}  ▷ update pseudo-labels
16:  y_{U^new} ← 0
17: end while
18: Restore the weights θ∗ that scored highest on the validation set
19: Compute accuracy on the held-out test set Xte, yte
20: return θ∗
in Xtr into the sets P, U, and L respectively. Our proposed model is an ensemble of K deep neural networks whose random initial weights are collectively denoted as θ0. The predictions of the k-th network for sample i are indicated with p̂_ik = σ(f̂_ik), with σ(·) the logistic function and f̂_ik the predicted logits. The logits and predictions for a sample averaged across the networks in the ensemble are denoted by f̂_i and p̂_i respectively. We subscript data and predictions with i to index individual samples, and use an index set in the subscript to index all samples in the set (e.g., Xtr_U = {Xtr_i | i ∈ U} denotes the features of all unlabeled samples). We denote the total, epistemic and aleatoric uncertainty of sample i as û^t_i, û^e_i, and û^a_i, respectively.
3.2 LOSS FUNCTION
We train our proposed model with a loss function L that is a convex combination of a loss L_PU for the samples in the positive and unlabeled set (P ∪ U) and a loss L_L for the samples in the pseudo-labeled set (L):
L = λ · L_L + (1 − λ) · L_PU    (1)
with λ ∈ (0, 1). The loss L_L is the binary cross-entropy computed w.r.t. the assigned pseudo-labels y. Our model is agnostic to the specific PU loss L_PU used. This allows our method to be easily adapted to different scenarios for which a PU loss was proposed and improve over its performance, for example when coping with a selection bias in the positive examples (Kato et al., 2019) or the availability of a biased negative set (Hsieh et al., 2019). For the standard setting of PU learning, we use the non-negative correction nnPU of the PU loss (Kiryo et al., 2017):
L_PU = π · ℓ(P, 1) + max{0, ℓ(U, −1) − π · ℓ(P, −1)}    (2)
with π the prior probability of a sample being positive, which we assume known and can be estimated from PU data (du Plessis et al., 2016), and ℓ(S, y) the expected sigmoid loss of samples in the set S with label y:
ℓ(S, y) = (1 / |S|) Σ_{i∈S} 1 / (1 + exp(y · p̂_i))    (3)
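The following PyTorch-style sketch illustrates Eqs. 1-3. It is only an illustration with assumed inputs (sigmoid outputs p for a mini-batch and boolean masks is_pos, is_unl, is_pl over the P, U and L subsets), not the implementation provided in the supplementary material.

import torch
import torch.nn.functional as F

def sigmoid_loss(p, y_sign):
    # Eq. 3: expected sigmoid loss of predictions p under label y_sign in {+1, -1}
    return torch.mean(1.0 / (1.0 + torch.exp(y_sign * p)))

def nnpu_loss(p, is_pos, is_unl, prior):
    # Eq. 2: non-negative PU risk (Kiryo et al., 2017)
    positive_risk = prior * sigmoid_loss(p[is_pos], +1.0)
    negative_risk = sigmoid_loss(p[is_unl], -1.0) - prior * sigmoid_loss(p[is_pos], -1.0)
    return positive_risk + torch.clamp(negative_risk, min=0.0)

def puupl_loss(p, y_pseudo, is_pos, is_unl, is_pl, prior, lam=0.15):
    # Eq. 1: convex combination of the PU risk and the BCE on soft pseudo-labels
    l_pu = nnpu_loss(p, is_pos, is_unl, prior)
    if is_pl.any():
        l_pl = F.binary_cross_entropy(p[is_pl], y_pseudo[is_pl])
    else:
        l_pl = torch.zeros(())  # no pseudo-labels assigned yet
    return lam * l_pl + (1.0 - lam) * l_pu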
3.3 MODEL UNCERTAINTY
To quantify the predictive uncertainty, we utilize a deep ensemble with K networks with the same architecture, each trained on the full training dataset (Lakshminarayanan et al., 2017b). Given the predictions p̂i1, . . . , p̂iK for a sample xi, we associate three types of uncertainties to xi’s predictions (Hüllermeier and Waegeman, 2021): the aleatoric uncertainty as the mean of the entropy of the predictions (Eq. 4), the total uncertainty as the entropy of the mean prediction (Eq. 5), and the epistemic uncertainty formulated as the difference between the two (Eq. 6).
û^a_i = −(1/K) Σ_{k=1}^{K} [p̂_ik log p̂_ik + (1 − p̂_ik) log(1 − p̂_ik)]    (4)
û^t_i = −p̂_i log p̂_i − (1 − p̂_i) log(1 − p̂_i)    (5)
û^e_i = û^t_i − û^a_i    (6)
Epistemic uncertainty corresponds to the mutual information between the parameters of the model and the true label of the sample. Low epistemic uncertainty thus means that the model parameters would not change significantly if trained on the true label, suggesting that the prediction is indeed correct. Using such a prediction as a target in the cross-entropy loss would in turn provide a stronger, more explicit learning signal to the model, so that a correctly pseudo-labeled example provides a larger decrease in risk compared to using the same example without any label within the positive-unlabeled loss.
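For concreteness, the decomposition in Eqs. 4-6 can be written directly in terms of the K per-network probabilities. The NumPy sketch below is only an illustration (the small epsilon for numerical stability is an assumption, not something specified above), not the released implementation.

import numpy as np

def binary_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def uncertainties(probs):
    # probs: array of shape (K, N), each row a network's predicted positive probability
    aleatoric = binary_entropy(probs).mean(axis=0)   # Eq. 4: mean of the entropies
    total = binary_entropy(probs.mean(axis=0))       # Eq. 5: entropy of the mean
    epistemic = total - aleatoric                    # Eq. 6
    return total, epistemic, aleatoric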
3.4 PSEUDO-LABELING
The estimated epistemic uncertainty (Eq. 6) is used to rank the samples of the unlabeled set U and to select reliable samples for pseudo-labeling. Let ρ(i) denote the rank of sample i when the unlabeled samples are sorted by increasing epistemic uncertainty; then the set L^new of newly pseudo-labeled samples is formed by taking the T samples with the lowest uncertainty from U, provided their uncertainty is also below the threshold t_l:
L^new = {i ∈ U | ρ(i) ≤ T ∧ û^e_i ≤ t_l}    (7)
Previous works have shown that balancing the pseudo-label selection between the two classes, i.e., ensuring that the ratio of newly labeled positives and negatives is close to a given target ratio r, is beneficial (Rizve et al., 2021). In this case, the set L^new should be partitioned according to the model’s predictions into a set L^new_+ of predicted positives and a set L^new_− of predicted negatives, and the most uncertain samples in the larger set should be discarded to reach the desired ratio r, which we fix to 1. We then assign soft pseudo-labels, i.e., the average prediction in the open interval (0, 1), to these samples:
y_i = p̂_i   ∀i ∈ L^new_− ∪ L^new_+    (8)
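A possible implementation of this balanced selection step is sketched below; it assumes the epistemic uncertainties and averaged predictions from Section 3.3, the variable names are illustrative, and it is not the released code.

import numpy as np

def select_pseudo_labels(u_epi, p_mean, unlabeled_idx, T=1000, t_l=0.05):
    # Eq. 7: the T most confident unlabeled samples below the uncertainty threshold,
    # balanced 1:1 between predicted positives and negatives (target ratio r = 1).
    unlabeled_idx = np.asarray(unlabeled_idx)
    confident = unlabeled_idx[u_epi[unlabeled_idx] <= t_l]
    ranked = confident[np.argsort(u_epi[confident])][:T]   # lowest uncertainty first
    pos = ranked[p_mean[ranked] >= 0.5]
    neg = ranked[p_mean[ranked] < 0.5]
    k = min(len(pos), len(neg))                            # discard surplus from the larger set
    selected = np.concatenate([pos[:k], neg[:k]])
    return selected, p_mean[selected]                      # Eq. 8: soft pseudo-labels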
3.5 PSEUDO-UNLABELING
Similar to the way that low uncertainty on an unlabeled example indicates that the prediction can be trusted, high uncertainty on a pseudo-labeled example indicates that the assigned pseudo-label might not be correct after all. To avoid training on such possibly incorrect pseudo-labels, we move the pseudo-labeled examples with uncertainty above a threshold tu back into the unlabeled set:
U^new = {i ∈ L | û^e_i ≥ t_u}    (9)
y_i = 0   ∀i ∈ U^new    (10)
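The corresponding removal step can be sketched in the same illustrative style (variable names are ours, not from the released implementation):

import numpy as np

def pseudo_unlabel(u_epi, pseudo_idx, y, t_u=0.35):
    # Eqs. 9-10: return pseudo-labeled samples whose uncertainty grew above t_u to the unlabeled set
    pseudo_idx = np.asarray(pseudo_idx)
    removed = pseudo_idx[u_epi[pseudo_idx] >= t_u]
    y[removed] = 0.0   # reset to the label used for unlabeled samples
    return removed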
4 EXPERIMENTS
4.1 EXPERIMENTAL PROTOCOL
To empirically compare our proposed framework to existing state-of-the-art losses and models, we followed standard protocols for PU learning (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019).
Datasets: We evaluated our method in the standard setting of MNIST (Deng, 2012) and CIFAR10 (Krizhevsky et al., 2009) datasets, as well as Fashion MNIST (F-MNIST) (Xiao et al., 2017), STL-10 (Coates et al., 2011) and IMDb (Maas et al., 2011) datasets to show the applicability to different data modalities. Similar to previous studies (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019), positives were defined as odd digits in MNIST, vehicles in CIFAR-10 and STL-10, and we used trousers, dress, sandals, sneaker, and ankle boots for F-MNIST and positive reviews for IMDb. For STL-10 we used all available labeled and unlabeled data and the official ten crossvalidation folds. For all other datasets, we reserved a validation set of 5,000 samples and use all other samples for training with 1,000 randomly chosen labeled positives, as is common practice, and evaluated on the canonical test set of each dataset. More details are provided in Appendix B
Network architectures: To ensure a fair comparison with other works in PU learning (Chen et al., 2020b; Kiryo et al., 2017) we used the same architectures on the same datasets, namely a 13-layers convolutional neural network for the experiments on CIFAR-10 (Table A.3) and a MLP with four fully connected hidden layers of 300 neurons each and ReLU activation for MNIST and F-MNIST. For IMDb we used a bidirectional LSTM network with a MLP head whose architecture was optimized as part of the hyperparameter search.
Training: We trained all models with the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.999, and an exponential learning rate decay with γ = 0.99. We further used the nnPU loss (Kiryo et al., 2017) as LPU (Eq. 1) unless otherwise stated. As is common in the pseudo-labeling literature (Chen et al., 2020b; Rizve et al., 2021; Kato et al., 2019; Tanaka et al., 2018; Hu et al., 2021), we assume that a validation set labeled with both positives and negatives is available and use it for early stopping, i.e., we stop the pseudo-labeling loop when the model’s accuracy on this set has stopped improving, and use the parameters that achieved the highest validation accuracy to compute the test performance. An experiment in Section 4.3 shows how this requirement can be relaxed in practice.
Hyperparameter tuning: We used the Hyperband algorithm (Li et al., 2017) to optimize all hyperparameters on the CIFAR-10 dataset with η = 3 and S = 4, using the validation accuracy as the criterion to be optimized. The configuration that achieved the highest validation accuracy was
then used as a basis for the ablation studies and fine-tuned on the remaining datasets to show that the pseudo-labeling hyperparameters (Table A.1) do not require tuning when transferred to other datasets and data modalities. Specifically, on the other datasets we only tuned hyperparameters related to network training such as batch size, learning rate, weight decay, number of training epochs, dropout probability and other details of the network architecture by running only the first bracket of Hyperband with η = 3 and S = 3.
Evaluation: the best configuration found by Hyperband was trained five times with different random training/validation splits and evaluated on the test set to produce the final results, except for STL-10 where we used the official ten cross-validation folds.
4.2 RESULTS
The best performance achieved by our method is shown in Table 1 and compared with SelfPU (Chen et al., 2020b), DAN (Liu et al., 2019) and PAN (Hu et al., 2021). PUUPL was always able to improve over the baseline nnPU loss, with larger gap for more difficult datasets such as CIFAR-10 (+0.98%) and IMDb (+1.89%) as well as over SelfPU (Chen et al., 2020b) and DAN (Liu et al., 2019), setting a new state-of-the-art for PU learning. Moreover, PUUPL is naturally very well calibrated despite the absence of explicit calibration on labeled data (Fig. 2), making its predictions inherently reliable. The best pseudo-labeling hyperparameters constitute the defaults we suggested in Algorithm 1 and are K = 2, T = 1000, tl = 0.05 and tu = 0.35. Note that the baseline nnPU scores reported in Table 1 were also obtained by training an ensemble of two networks with the nnPU loss, thus possibly explaining the discrepancy observed with SelfPU. The best network architecture for IMDb is shown in Table A.2.
4.3 ABLATION STUDIES
We performed ablation studies on the CIFAR-10 dataset by changing one parameter at a time of the best configuration found by Hyperband, training and evaluating with five different splits and reporting the test accuracy corresponding to the best validation score for each run. To limit the computational resources needed, we used at most 15 pseudo-labeling iterations.
Weights initialization: We confirmed the observation that it is beneficial to re-initialize the weights after each pseudo-labeling step (Arazo et al., 2020), with slightly better performance (+0.052%) achieved when the weights are re-initialized to the same values before every pseudo-labeling iteration (Fig. 3a). We believe this encourages the model to be consistent across pseudo-labeling rounds.
Pseudo-label assignment: Soft pseudo-labels were preferred over hard ones (+0.75%). We found that our model was very well calibrated with ECEs as low as 0.05 on the labeled validation data (Fig. 2), indicating that the soft pseudo-labels they estimated were reliable training targets and that post hoc calibration was not necessary. Contrary to expectation, however, re-assigning all pseudolabels at every iteration harmed performance (−0.12%); instead, pseudo-labels should be kept fixed after being assigned for the first time. A possible explanation is that fixed pseudo-labels prevent the
model’s predictions from drifting too far away from the initial pseudo-labeling towards an incorrect assignment. It was also beneficial to assign the same number of positive and negative pseudo-labels (Fig. 3b) compared to keeping the same ratio π of positives and negatives found in the whole dataset (−0.20%) or not balancing the selection at all (−0.55%).
Uncertainty: Ranking predictions by aleatoric uncertainty was almost as good as ranking by epistemic uncertainty (−0.08%), while total uncertainty produced moderately worse rankings (−0.37%, Fig. 3c). An ensemble with only two networks achieved the best performance, while larger ensembles performed worse, and Monte Carlo dropout (−0.85%) was better than ensembles of five (−1.00%) and ten networks (−1.58%).
Early stopping: Finally, performing early stopping on the validation PU loss resulted in worse accuracy (−1.12%) compared to using the accuracy on positive-negative labels (Fig. 3d). Although considerable when compared to the impact of other algorithmic choices, such a performance drop indicates that PUUPL can still be used effectively in real-world scenarios with no labeled validation data available.
4.4 ROBUSTNESS
Following the same protocol as the ablation studies in Section 4.3, we tested the robustness of our method with respect to misspecifications of the continuous hyperparameters (Fig. 4).
Pseudo-labeling: Our method was fairly robust to the maximum number T of assigned pseudolabels and the maximum uncertainty threshold tl for the pseudo-labels, with almost constant performance up to T = 1000 and tl = 0.1. The best performance was achieved by the combination having T = 1000 and tl = 0.05, but both of these experiments were performed while disabling the other constraint (i.e., setting T = inf when testing tl and vice-versa). Using only a constraint on T resulted in a reduction of −0.11%, while constraining tl alone resulted in a reduction of −1.04%. The results for tu were less conclusive as for the general trend, possibly because values lower than 0.35 require more than the 15 pseudo-labeling iterations we used for the experiment, and values above 0.4 did not show significant differences.
Misspecification of the class prior: The performance of our framework slowly degraded as the prior π moved further from the true value of 0.4, with a performance reduction of less than 2.5% in the range [0.3, 0.6] (Fig. 4a). Furthermore, the performance gap between PUUPL and nnPU widens as π is more severely misspecified. Modern losses for PU learning such as uPU and nnPU rely on the
correct estimation of the positive class prior π from domain knowledge or a priori estimation of π, which constitutes a whole research branch in PU learning (Chen et al., 2020a) and is a significant challenge in any practical PU application (Bekker and Davis, 2020; Chen et al., 2020a). We believe that the inclusion of epistemic uncertainty, the usage of soft labels and the convex combination of two losses enables PUUPL to be considerably more robust to significant misspecification of the class prior π.
Loss combination: The best performing combination had λ = 0.15, with modest performance reduction until λ = 0.5 (−0.25%, Fig. 4b). Values of 0.05 and below resulted in the same performance reduction of −0.5%, similarly to λ = 0.75, and performance was 1.09% worse at λ = 0.9. Too large a λ might facilitate the emergence of a harmful confirmation bias, but it is nonetheless important to train on the pseudo-labels, too, to avoid losing the information contained therein.
Number of training labeled positives: The performance of our method steadily increased and seemed to plateau at 91.4% between 3,000 and 6,000 labeled positives. The gap between nnPU and PUUPL is largest in the low labeled data region with a 1.44% gap at 250 labels, where we achieved 87.59% accuracy, shrinking to a gap of 0.52% with 3,000 labels, where our performance was 91.44% (Fig. 4c). This supports our intuition about the importance of uncertainty because, as the amount of labeled data decreases, uncertainty becomes more important to detect overfitting and to prevent the model from assigning incorrect pseudo-labels.
Positive bias: The most general assumption of PU learning is that the labeled examples are a biased sample from the positive distribution (Bekker and Davis, 2020). We tested PUUPL in such a biased setting where positives in the training and validation sets were with 50% chance an airplane, 30% chance an automobile, 15% chance a ship and 5% chance a truck, while in previous experiments the positives were evenly composed of airplanes, automobiles, ships and trucks. The test distribution was unchanged, meaning that test samples are half as likely to be airplanes compared to the training set, and five times more likely to be truck images. We also fixed all hyperparameters to the values identified previously, except for the loss LPU where we used the nnPUSB loss (Kato et al., 2019) to handle the positive bias. The baseline with the nnPUSB loss performed better than the nnPU loss (+0.26%), but worse than PUUPL with the nnPU loss (−0.39%), highlighting the benefit of our uncertainty-aware approach. The best performance was however achieved with PUUPL on top of the nnPUSB loss (+0.21% compared to nnPU and −2.27% compared to the unbiased setting), showing that PUUPL can leverage the advantages of different PU losses and further improve on them (Table 2).
5 CONCLUSIONS
In this paper, we proposed an uncertainty-aware pseudo-labeling framework for PU learning which quantifies the epistemic uncertainty of an ensemble of networks and selects which examples to pseudo-label based on their predictive uncertainty. We demonstrated the benefits of our approach on different data modalities and biased settings, achieving state-of-the-art performance in all our benchmarks. We further conducted extensive ablation studies and investigated the robustness of our approach, showing it to be reliable in settings that are likely to be encountered in the real world, such as a bias in the positive data, the unavailability of labeled negatives as validation data and the misspecification of the class prior π.
Ethics statement: Most of the ethical concerns stem from the specific application and dataset. Here we have shown a certain robustness towards biased positive labels without providing a comprehensive assessment, therefore practitioners should always ensure, insofar as possible, that the obtained
predictions are “fair” (with “fairness” defined appropriately w.r.t. the target application) and do not systematically affect particular subsets of the population of interest.
Reproducibility statement: The source code for the framework is available in the supplementary material.
A NETWORK ARCHITECTURE AND HYPERPARAMETERS
Table A.1 reports the hyperparameters related to pseudo-labeling and their ranges. Table A.2 reports the network architecture used in the IMDb experiments, while Table A.3 reports the network used with CIFAR-10.
B DATASET INFORMATION
Table B.1 reports the number of samples for each split and each dataset. For the image datasets, we subtracted the mean pixel intensity in the training set and divided by the standard deviation. For IMDb we used pre-trained GloVe embeddings of size 200 on a corpus of six billion tokens. | 1. What is the focus and contribution of the paper on positive-unlabeled learning?
2. What are the strengths of the proposed approach, particularly in utilizing epistemic uncertainty?
3. What are the weaknesses of the paper regarding its technical novelty and comparison with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes PUUPL, an uncertainty-aware pseudo-label selection method for positive-unlabeled (PU) learning. To improve the performance of pseudo-labeling, the authors suggest using the epistemic uncertainty (the difference between the entropy of the mean prediction and the mean of entropies of each prediction). In the experiments, PUUPL outperformed the existing state-of-the-art PU learning methods.
Review
The idea of using the epistemic uncertainty in PU learning is interesting. However, the technical novelty is not clear, and the paper lacks a simple baseline to compare against uncertainty-aware pseudo-labeling.
What is the difficulty of introducing uncertainty-aware pseudo-labeling techniques to PU learning? It seems that simply applying an uncertainty-aware pseudo-labeling technique results in the proposed method. That is, the technical novelty of the proposed method seems not strong. If the authors can clarify non-trivial technical contributions, it would help readers understand this paper better.
Comparison with uncertainty-unaware pseudo-labeling techniques helps understand the superiority of the proposed method. It is expected to illustrate that considering uncertainty is important when we use pseudo-labeling in PU learning. For example, nnPU with ordinary pseudo-labeling techniques could be compared with nnPU with the proposed uncertainty-aware pseudo-labeling.
In Section 3.2, there is the sentence "we assume cleanly labeled (positives and negatives) validation and test sets (see Section 3.6) as usual in literature." I checked the cited papers quickly, but could not find such a setting. If negative samples are available, we can use them (e.g., a half of them) in training. And, it might result in better performance thanks to negative samples. So, the assumption sounds not practical. Thus, the results reported in the experiments are not good evidence of the effectiveness of the proposed method. But, there is a chance that I miss the sentences supporting the authors' assumption. If the authors can show the sentences describing the assumption about the validation set, I will confirm it.
=== After rebuttal ===
Thank you for your responses.
The idea of using the epistemic uncertainty in PU learning is interesting. So, I hope that the authors will resolve my and the others' concerns in the next venue. |
ICLR | Title
Positive-Unlabeled Learning with Uncertainty-aware Pseudo-label Selection
Abstract
Positive-unlabeled (PU) learning aims at learning a binary classifier from only positive and unlabeled training data. Recent approaches addressed this problem via cost-sensitive learning by developing unbiased loss functions, and their performance was later improved by iterative pseudo-labeling solutions. However, such two-step procedures are vulnerable to incorrectly estimated pseudo-labels, as errors are propagated in later iterations when a new model is trained on erroneous predictions. To mitigate this issue we propose PUUPL, a new loss-agnostic training procedure for PU learning that incorporates epistemic uncertainty in pseudolabeling. Using an ensemble of neural networks and assigning pseudo-labels based on high confidence predictions improves the reliability of pseudo-labels, increasing the predictive performance of our method and leading to new state-ofthe-art results in PU learning. With extensive experiments, we show the effectiveness of our method over different datasets, modalities, and learning tasks, as well as improved robustness over misspecifications of hyperparameters and biased positive data. The source code of the method and all the experiments are available in the supplementary material.
1 INTRODUCTION
Many real-world applications involve positive and unlabeled (PU) datasets in which only some of the data is labeled positive while the majority is unlabeled and contains both positives and negatives. PU learning aims to learn a binary classifier in this challenging setting without any labeled negative examples. Learning from PU data can reduce deployment costs in many deep learning applications that otherwise require annotations from experts such as medical image diagnosis (Armenian and Lilienfeld, 1974) and protein function prediction (Gligorijević et al., 2021), and it can even enable applications in settings where the measurement technology itself can not detect negative examples (Purcell et al., 2019).
Some recent approaches such as unbiased PU (Du Plessis et al., 2014, uPU) and non-negative PU (Kiryo et al., 2017, nnPU) formulate this problem as cost-sensitive learning. Others approach PU learning as a two-step procedure first identifying and labeling some reliable negative examples, and then re-training the model based on this newly constructed labeled dataset (Bekker and Davis, 2020). These approaches show similarities with pseudo-labeling in semi-supervised classification settings (Lee, 2013).
Such pseudo-labeling techniques are however especially vulnerable to incorrectly assigned labels of the selected examples as these errors will propagate and magnify in the retrained model, resulting in a negative feedback loop. Worse yet, since the true labels are unavailable in PU learning, this situation is hard to detect by any metrics computed on the training set. This erroneous selection of unreliable pseudo-labels occurs when wrong model predictions are associated with excessive model confidence. Such poor model calibration is accompanied by a distortion of the signal for the pseudo label selection (Van Engelen and Hoos, 2020).
In recent literature on pseudo-labeling, this problem is recognized and successfully addressed by explicitly estimating the prediction uncertainty (Abdar et al., 2021; Rizve et al., 2021; Arazo et al., 2020). While this is the case for semi-supervised classification, there does not yet exist a method that explores the use of uncertainty quantification for pseudo-labeling in a PU learning context.
Contributions: Motivated by this, we propose a novel, uncertainty-aware pseudo-labeling framework for PU learning that uses established uncertainty quantification techniques to identify reliable examples to pseudo-label (Fig. 1). In particular, our contributions are: (1) We introduce PUUPL (Positive-Unlabeled, Uncertainty-Aware Pseudo-Labeling), a simple uncertainty-aware pseudolabeling framework for PU learning. (2) PUUPL can use any loss function for PU learning, improving model performance while being robust to the specific data biases that the respective loss considers. (3) We evaluate our methods on a wide range of benchmarks and PU datasets, achieving state-of-the-art performance in PU learning. (4) Our extensive ablation studies provide new insights into uncertainty-aware pseudo-labeling for PU learning. Further, they show that our method is robust to the choices of hyperparameters, with 1% or less variability in test accuracy among different choices as well as distribution shifts between labeled positives in the train and test datasets. To the best of our knowledge, PUUPL is the first framework for PU learning which leverages uncertainty information during pseudo-labeling.
2 RELATED WORK
PU Learning PUL was introduced as a variant of binary classification (Liu et al., 2003) and is related to one-class learning (Ruff et al., 2018; Li et al., 2010), multi-positive learning (Xu et al., 2017), multi-task learning (Kaji et al., 2018), and semi-supervised learning (Chapelle et al., 2009). There exist three main research branches for PUL: two-step techniques, class prior incorporation, and biased PUL (Bekker and Davis, 2020). In this work, we combine Pseudo-Labeling which has similarities to two-step techniques, with biased PUL, also coined as reweighting methods, and refer to Bekker and Davis (2020) for a comprehensive overview of the field. In this context, Du Plessis et al. (2014) introduced the unbiased risk estimator uPU. Kiryo et al. (2017) showed this loss function is prone to overfitting in deep learning contexts as it lacks a lower bound and proposed the non-negative risk estimator nnPU as a remedy. Follow-up work on loss functions for PUL has focused on robustness w.r.t biases in the sampling process such as PUSB (Kato et al., 2019), PUbN (Hsieh et al., 2019) or PULNS (Luo et al., 2021).
Uncertainty-aware Pseudo-Labeling Pseudo-labeling follows the rationale that the model leverages its own predictions on unlabeled data as pseudo training targets to enable iterative semisupervised model training. The first such approach for deep learning was introduced by Lee (2013), simply selecting the class with the highest predicted probability as a pseudo label. One weakness of pseudo-labeling is that erroneously selected pseudo-labels can amplify for training, potentially leading to model degradation. This is grounded in poor model calibration distorting the signal for the pseudo label selection (Van Engelen and Hoos, 2020). Iscen et al. (2019) try to mitigate this issue using confidence and class weights. Shi et al. (2018) use confident scores based on the geometric neighborhood of the unlabeled samples while Arazo et al. (2020) effectively tackle this confirmation bias using Mixup (Zhang et al., 2017), Label Noise (Tanaka et al., 2018), and Entropy Regularization (Grandvalet et al., 2005). Rizve et al. (2021) introduced a pseudo-labeling framework using a weighting scheme for class balancing and MC dropout (Gal and Ghahramani, 2016) for calibration, while Beluch et al. (2018) found deep ensembles (Lakshminarayanan et al., 2017a) to yield the best model calibration in an active learning context, especially in low-label regimes. The commonality of these works is the explicit consideration of model uncertainty to improve pseudo-label selection, which motivates us to apply this in the context of PU learning.
Pseudo-Labeling for PU Learning Two-step approaches in PU learning first identify negative samples from the unlabeled dataset, and then train a binary classification model on the original dataset augmented with the newly identified negatives (Bekker and Davis, 2020). These approaches share similarities with pseudo-labeling but lack an iterative feedback loop after the completion of the second step.
A first attempt to combine pseudo-labeling with PU learning was made with Self-PU (Chen et al., 2020b), where self-paced learning, a confidence-weighting scheme based on the model predictions and a teacher-student distillation approach are combined. Via this complex training scheme, Self-PU was shown to marginally outperform recent baselines. With PUUPL, we propose an alternative PL strategy for PU learning that performs better in a simpler and more principled way using implicitly well-calibrated models to improve the pseudo-label selection.
Uncertainty-aware pseudo-labeling for PU learning To the best of our knowledge, we are the first to introduce an uncertainty-aware pseudo-labeling paradigm to PU learning. Although our method shares the same motivation as that from Rizve et al. (2021) for semi-supervised classification, we differ in several important aspects: (1) we specifically target PU data with a PU loss, (2) we quantify uncertainty with an ensemble instead of Monte Carlo dropout, (3) we use epistemic uncertainty instead of the predicted class probabilities for the selection, (4) we do not use temperature scaling, and (5) we use soft labels.
3 METHOD
We propose PUUPL (Positive Unlabeled, Uncertainty-aware Pseudo-Labeling), an iterative pseudolabeling procedure to progressively select and label the most confident examples from unlabeled data. The pseudo-code for PUUPL is shown as Algorithm 1. Our method separates the training set Xtr into the sets P , U , and L which contain the initial positives, the currently unlabeled, and the pseudo-labeled samples respectively. The set L is initially empty. At each pseudo-labeling iteration, we first train our model using all samples in P , U , and L until some convergence condition is met (Section 3.2). Then, samples in U are predicted and ranked w.r.t their predictive uncertainty (Section 3.3) and samples with the most confident score are assigned the predicted label and moved into the set L (Section 3.4). Similarly, samples in L are also predicted and the most uncertain samples are moved back to the unlabeled set U (Section 3.5). Next, the model is re-initialized to the same initial weights and a new pseudo-labeling iteration starts.
In the following, we first describe the notation used in this paper and then explain in detail the training procedure of PUUPL.
3.1 NOTATION
Consider input samples X with label y and superscripts ·tr, ·va and ·te for training, validation, and test data respectively. The initial training labels ytr are set to one for all samples in P and zero for all others in U . We group the indices of original positives, unlabeled, and pseudo-labeled samples
Algorithm 1 Pseudocode for the PUUPL Training Procedure Input
• Train, validation and test data Xtr, ytr, Xva, yva, Xte, yte
• Number K of networks in the ensemble (suggested K = 2)
• Maximum number T of pseudo-labels to assign at each round (suggested T = 1000)
• Maximum uncertainty threshold tl to assign pseudo-labels (suggested tl = 0.05)
• Minimum uncertainty threshold tu to remove pseudo-labels (suggested tu = 0.35)
Output Model parameters θ∗
1: P ← indices of positive samples in Xtr
2: U ← indices of unlabeled samples in Xtr
3: L ← ∅  ▷ indices of pseudo-labeled samples
4: θ0 ← random weight initialization
5: while not converged do
6:   Initialize model weights to θ0  ▷ training
7:   Train an ensemble of K networks on Xtr, ytr using the loss in Eq. 1
8:   Validate on Xva, yva and update weights θ∗ if accuracy improved
9:   f̂ ← ensemble predictions for Xtr  ▷ uncertainty
10:  Compute epistemic uncertainty û^e with f̂ via Eq. 6
11:  L^new ← balanced set of examples to pseudo-label via Eq. 7 using û^e_U, T and t_l  ▷ pseudo-labeling
12:  U^new ← examples to pseudo-unlabel via Eq. 10 using û^e_L and t_u
13:  L ← L ∪ L^new \ U^new  ▷ update indices
14:  U ← U \ L^new ∪ U^new
15:  y_{L^new} ← p̂_{L^new}  ▷ update pseudo-labels
16:  y_{U^new} ← 0
17: end while
18: Restore the weights θ∗ that scored highest on the validation set
19: Compute accuracy on the held-out test set Xte, yte
20: return θ∗
in Xtr into the sets P, U, and L respectively. Our proposed model is an ensemble of K deep neural networks whose random initial weights are collectively denoted as θ0. The predictions of the k-th network for sample i are indicated with p̂_ik = σ(f̂_ik), with σ(·) the logistic function and f̂_ik the predicted logits. The logits and predictions for a sample averaged across the networks in the ensemble are denoted by f̂_i and p̂_i respectively. We subscript data and predictions with i to index individual samples, and use an index set in the subscript to index all samples in the set (e.g., Xtr_U = {Xtr_i | i ∈ U} denotes the features of all unlabeled samples). We denote the total, epistemic and aleatoric uncertainty of sample i as û^t_i, û^e_i, and û^a_i, respectively.
3.2 LOSS FUNCTION
We train our proposed model with a loss function L that is a convex combination of a loss L_PU for the samples in the positive and unlabeled set (P ∪ U) and a loss L_L for the samples in the pseudo-labeled set (L):
L = λ · L_L + (1 − λ) · L_PU    (1)
with λ ∈ (0, 1). The loss L_L is the binary cross-entropy computed w.r.t. the assigned pseudo-labels y. Our model is agnostic to the specific PU loss L_PU used. This allows our method to be easily adapted to different scenarios for which a PU loss was proposed and improve over its performance, for example when coping with a selection bias in the positive examples (Kato et al., 2019) or the availability of a biased negative set (Hsieh et al., 2019). For the standard setting of PU learning, we use the non-negative correction nnPU of the PU loss (Kiryo et al., 2017):
L_PU = π · ℓ(P, 1) + max{0, ℓ(U, −1) − π · ℓ(P, −1)}    (2)
with π the prior probability of a sample being positive, which we assume known and can be estimated from PU data (du Plessis et al., 2016), and ℓ(S, y) the expected sigmoid loss of samples in the set S with label y:
ℓ(S, y) = (1 / |S|) Σ_{i∈S} 1 / (1 + exp(y · p̂_i))    (3)
3.3 MODEL UNCERTAINTY
To quantify the predictive uncertainty, we utilize a deep ensemble with K networks with the same architecture, each trained on the full training dataset (Lakshminarayanan et al., 2017b). Given the predictions p̂i1, . . . , p̂iK for a sample xi, we associate three types of uncertainties to xi’s predictions (Hüllermeier and Waegeman, 2021): the aleatoric uncertainty as the mean of the entropy of the predictions (Eq. 4), the total uncertainty as the entropy of the mean prediction (Eq. 5), and the epistemic uncertainty formulated as the difference between the two (Eq. 6).
û^a_i = −(1/K) Σ_{k=1}^{K} [p̂_ik log p̂_ik + (1 − p̂_ik) log(1 − p̂_ik)]    (4)
û^t_i = −p̂_i log p̂_i − (1 − p̂_i) log(1 − p̂_i)    (5)
û^e_i = û^t_i − û^a_i    (6)
Epistemic uncertainty corresponds to the mutual information between the parameters of the model and the true label of the sample. Low epistemic uncertainty thus means that the model parameters would not change significantly if trained on the true label, suggesting that the prediction is indeed correct. Using such a prediction as a target in the cross-entropy loss would in turn provide a stronger, more explicit learning signal to the model, so that a correctly pseudo-labeled example provides a larger decrease in risk compared to using the same example without any label within the positive-unlabeled loss.
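As a small worked illustration (the numbers are ours, for intuition only): if a two-network ensemble disagrees strongly on a sample, say p̂_i1 = 0.9 and p̂_i2 = 0.1, the mean prediction is 0.5, so the total uncertainty is ln 2 ≈ 0.69 nats, the aleatoric part is ≈ 0.33, and the epistemic term ≈ 0.37 is large; if instead both networks predict 0.9, total and aleatoric coincide and the epistemic uncertainty is exactly 0. The short snippet below reproduces these numbers.

import numpy as np

def H(p):
    # binary entropy in nats
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

probs = np.array([0.9, 0.1])                 # two ensemble members that disagree
print(H(probs.mean()), H(probs).mean())      # total ~ 0.693, aleatoric ~ 0.325
print(H(probs.mean()) - H(probs).mean())     # epistemic ~ 0.368

probs = np.array([0.9, 0.9])                 # full agreement
print(H(probs.mean()) - H(probs).mean())     # epistemic = 0.0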
3.4 PSEUDO-LABELING
The estimated epistemic uncertainty (Eq. 6) is used to rank the samples of the unlabeled set U and to select reliable samples for pseudo-labeling. Let ρ(i) denote the rank of sample i when the unlabeled samples are sorted by increasing epistemic uncertainty; then the set L^new of newly pseudo-labeled samples is formed by taking the T samples with the lowest uncertainty from U, provided their uncertainty is also below the threshold t_l:
L^new = {i ∈ U | ρ(i) ≤ T ∧ û^e_i ≤ t_l}    (7)
Previous works have shown that balancing the pseudo-label selection between the two classes, i.e., ensuring that the ratio of newly labeled positives and negatives is close to a given target ratio r, is beneficial (Rizve et al., 2021). In this case, the set L^new should be partitioned according to the model’s predictions into a set L^new_+ of predicted positives and a set L^new_− of predicted negatives, and the most uncertain samples in the larger set should be discarded to reach the desired ratio r, which we fix to 1. We then assign soft pseudo-labels, i.e., the average prediction in the open interval (0, 1), to these samples:
y_i = p̂_i   ∀i ∈ L^new_− ∪ L^new_+    (8)
3.5 PSEUDO-UNLABELING
Similar to the way that low uncertainty on an unlabeled example indicates that the prediction can be trusted, high uncertainty on a pseudo-labeled example indicates that the assigned pseudo-label might not be correct after all. To avoid training on such possibly incorrect pseudo-labels, we move the pseudo-labeled examples with uncertainty above a threshold tu back into the unlabeled set:
U^new = {i ∈ L | û^e_i ≥ t_u}    (9)
y_i = 0   ∀i ∈ U^new    (10)
4 EXPERIMENTS
4.1 EXPERIMENTAL PROTOCOL
To empirically compare our proposed framework to existing state-of-the-art losses and models, we followed standard protocols for PU learning (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019).
Datasets: We evaluated our method in the standard setting of MNIST (Deng, 2012) and CIFAR10 (Krizhevsky et al., 2009) datasets, as well as Fashion MNIST (F-MNIST) (Xiao et al., 2017), STL-10 (Coates et al., 2011) and IMDb (Maas et al., 2011) datasets to show the applicability to different data modalities. Similar to previous studies (Chen et al., 2020b; Kiryo et al., 2017; Kato et al., 2019), positives were defined as odd digits in MNIST, vehicles in CIFAR-10 and STL-10, and we used trousers, dress, sandals, sneaker, and ankle boots for F-MNIST and positive reviews for IMDb. For STL-10 we used all available labeled and unlabeled data and the official ten crossvalidation folds. For all other datasets, we reserved a validation set of 5,000 samples and use all other samples for training with 1,000 randomly chosen labeled positives, as is common practice, and evaluated on the canonical test set of each dataset. More details are provided in Appendix B
Network architectures: To ensure a fair comparison with other works in PU learning (Chen et al., 2020b; Kiryo et al., 2017) we used the same architectures on the same datasets, namely a 13-layers convolutional neural network for the experiments on CIFAR-10 (Table A.3) and a MLP with four fully connected hidden layers of 300 neurons each and ReLU activation for MNIST and F-MNIST. For IMDb we used a bidirectional LSTM network with a MLP head whose architecture was optimized as part of the hyperparameter search.
Training: We trained all models with the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9 and β2 = 0.999, and an exponential learning rate decay with γ = 0.99. We further used the nnPU loss (Kiryo et al., 2017) as LPU (Eq. 1) unless otherwise stated. As is common in the pseudo-labeling literature (Chen et al., 2020b; Rizve et al., 2021; Kato et al., 2019; Tanaka et al., 2018; Hu et al., 2021), we assume that a validation set labeled with both positives and negatives is available and use it for early stopping, i.e., we stop the pseudo-labeling loop when the model’s accuracy on this set has stopped improving, and use the parameters that achieved the highest validation accuracy to compute the test performance. An experiment in Section 4.3 shows how this requirement can be relaxed in practice.
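A minimal PyTorch sketch of this optimizer and schedule configuration is shown below; the model, data and learning rate are placeholders, since the actual values were tuned per dataset with Hyperband.

import torch

model = torch.nn.Linear(10, 1)                         # placeholder for one ensemble member
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

for epoch in range(100):                               # the epoch count is dataset-specific
    optimizer.zero_grad()
    loss = model(torch.randn(32, 10)).pow(2).mean()    # dummy loss standing in for Eq. 1
    loss.backward()
    optimizer.step()
    scheduler.step()                                   # exponential learning-rate decay, gamma = 0.99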
Hyperparameter tuning: We used the Hyperband algorithm (Li et al., 2017) to optimize all hyperparameters on the CIFAR-10 dataset with η = 3 and S = 4, using the validation accuracy as the criterion to be optimized. The configuration that achieved the highest validation accuracy was
then used as a basis for the ablation studies and fine-tuned on the remaining datasets to show that the pseudo-labeling hyperparameters (Table A.1) do not require tuning when transferred to other datasets and data modalities. Specifically, on the other datasets we only tuned hyperparameters related to network training such as batch size, learning rate, weight decay, number of training epochs, dropout probability and other details of the network architecture by running only the first bracket of Hyperband with η = 3 and S = 3.
Evaluation: the best configuration found by Hyperband was trained five times with different random training/validation splits and evaluated on the test set to produce the final results, except for STL-10 where we used the official ten cross-validation folds.
4.2 RESULTS
The best performance achieved by our method is shown in Table 1 and compared with SelfPU (Chen et al., 2020b), DAN (Liu et al., 2019) and PAN (Hu et al., 2021). PUUPL was always able to improve over the baseline nnPU loss, with larger gap for more difficult datasets such as CIFAR-10 (+0.98%) and IMDb (+1.89%) as well as over SelfPU (Chen et al., 2020b) and DAN (Liu et al., 2019), setting a new state-of-the-art for PU learning. Moreover, PUUPL is naturally very well calibrated despite the absence of explicit calibration on labeled data (Fig. 2), making its predictions inherently reliable. The best pseudo-labeling hyperparameters constitute the defaults we suggested in Algorithm 1 and are K = 2, T = 1000, tl = 0.05 and tu = 0.35. Note that the baseline nnPU scores reported in Table 1 were also obtained by training an ensemble of two networks with the nnPU loss, thus possibly explaining the discrepancy observed with SelfPU. The best network architecture for IMDb is shown in Table A.2.
4.3 ABLATION STUDIES
We performed ablation studies on the CIFAR-10 dataset by changing one parameter at a time of the best configuration found by Hyperband, training and evaluating with five different splits and reporting the test accuracy corresponding to the best validation score for each run. To limit the computational resources needed, we used at most 15 pseudo-labeling iterations.
Weights initialization: We confirmed the observation that it is beneficial to re-initialize the weights after each pseudo-labeling step (Arazo et al., 2020), with slightly better performance (+0.052%) achieved when the weights are re-initialized to the same values before every pseudo-labeling iteration (Fig. 3a). We believe this encourages the model to be consistent across pseudo-labeling rounds.
Pseudo-label assignment: Soft pseudo-labels were preferred over hard ones (+0.75%). We found that our model was very well calibrated with ECEs as low as 0.05 on the labeled validation data (Fig. 2), indicating that the soft pseudo-labels they estimated were reliable training targets and that post hoc calibration was not necessary. Contrary to expectation, however, re-assigning all pseudolabels at every iteration harmed performance (−0.12%); instead, pseudo-labels should be kept fixed after being assigned for the first time. A possible explanation is that fixed pseudo-labels prevent the
model’s predictions from drifting too far away from the initial pseudo-labeling towards an incorrect assignment. It was also beneficial to assign the same number of positive and negative pseudo-labels (Fig. 3b) compared to keeping the same ratio π of positives and negatives found in the whole dataset (−0.20%) or not balancing the selection at all (−0.55%).
Uncertainty: Ranking predictions by aleatoric uncertainty was almost as good as ranking by epistemic uncertainty (−0.08%), while total uncertainty produced moderately worse rankings (−0.37%, Fig. 3c). An ensemble with only two networks achieved the best performance, while larger ensembles performed worse, and Monte Carlo dropout (−0.85%) was better than ensembles of five (−1.00%) and ten networks (−1.58%).
Early stopping: Finally, performing early stopping on the validation PU loss resulted in worse accuracy (−1.12%) compared to using the accuracy on positive-negative labels (Fig. 3d). Although considerable when compared to the impact of other algorithmic choices, such a performance drop indicates that PUUPL can still be used effectively in real-world scenarios with no labeled validation data available.
4.4 ROBUSTNESS
Following the same protocol as the ablation studies in Section 4.3, we tested the robustness of our method with respect to misspecifications of the continuous hyperparameters (Fig. 4).
Pseudo-labeling: Our method was fairly robust to the maximum number T of assigned pseudolabels and the maximum uncertainty threshold tl for the pseudo-labels, with almost constant performance up to T = 1000 and tl = 0.1. The best performance was achieved by the combination having T = 1000 and tl = 0.05, but both of these experiments were performed while disabling the other constraint (i.e., setting T = inf when testing tl and vice-versa). Using only a constraint on T resulted in a reduction of −0.11%, while constraining tl alone resulted in a reduction of −1.04%. The results for tu were less conclusive as for the general trend, possibly because values lower than 0.35 require more than the 15 pseudo-labeling iterations we used for the experiment, and values above 0.4 did not show significant differences.
Misspecification of the class prior: The performance of our framework slowly degraded as the prior π moved further from the true value of 0.4, with a performance reduction of less than 2.5% in the range [0.3, 0.6] (Fig. 4a). Furthermore, the performance gap between PUUPL and nnPU widens as π is more severely misspecified. Modern losses for PU learning such as uPU and nnPU rely on the
correct estimation of the positive class prior π from domain knowledge or a priori estimation of π, which constitutes a whole research branch in PU learning (Chen et al., 2020a) and is a significant challenge in any practical PU application (Bekker and Davis, 2020; Chen et al., 2020a). We believe that the inclusion of epistemic uncertainty, the usage of soft labels and the convex combination of two losses enables PUUPL to be considerably more robust to significant misspecification of the class prior π.
Loss combination: The best performing combination had λ = 0.15, with modest performance reduction until λ = 0.5 (−0.25%, Fig. 4b). Values of 0.05 and below resulted in the same performance reduction of −0.5%, similarly to λ = 0.75, and performance was 1.09% worse at λ = 0.9. Too large a λ might facilitate the emergence of a harmful confirmation bias, but it is nonetheless important to train on the pseudo-labels, too, to avoid losing the information contained therein.
Number of training labeled positives: The performance of our method steadily increased and seemed to plateau at 91.4% between 3,000 and 6,000 labeled positives. The gap between nnPU and PUUPL is largest in the low labeled data region with a 1.44% gap at 250 labels, where we achieved 87.59% accuracy, shrinking to a gap of 0.52% with 3,000 labels, where our performance was 91.44% (Fig. 4c). This supports our intuition about the importance of uncertainty because, as the amount of labeled data decreases, uncertainty becomes more important to detect overfitting and to prevent the model from assigning incorrect pseudo-labels.
Positive bias: The most general assumption of PU learning is that the labeled examples are a biased sample from the positive distribution (Bekker and Davis, 2020). We tested PUUPL in such a biased setting where positives in the training and validation sets were with 50% chance an airplane, 30% chance an automobile, 15% chance a ship, and 5% chance a truck, while in previous experiments the positives were evenly composed of airplanes, automobiles, ships and trucks. The test distribution was unchanged, meaning that test samples are half as likely to be airplanes compared to the training set, and five times more likely to be truck images. We also fixed all hyperparameters to the values identified previously, except for the loss LPU, where we used the nnPUSB loss (Kato et al., 2019) to handle the positive bias. The baseline with the nnPUSB loss performed better than with the nnPU loss (+0.26%), but worse than PUUPL with the nnPU loss (−0.39%), highlighting the benefit of our uncertainty-aware approach. The best performance was however achieved with PUUPL on top of the nnPUSB loss (+0.21% compared to nnPU and −2.27% compared to the unbiased setting), showing that PUUPL can leverage the advantages of different PU losses and further improve on them (Table 2).
5 CONCLUSIONS
In this paper, we proposed an uncertainty-aware pseudo-labeling framework for PU learning which quantifies the epistemic uncertainty of an ensemble of networks and selects which examples to pseudo-label based on their predictive uncertainty. We demonstrated the benefits of our approach on different data modalities and biased settings, achieving state-of-the-art performance in all our benchmarks. We further conducted extensive ablation studies and investigated the robustness of our approach, showing it to be reliable in settings that are likely to be encountered in the real world, such as a bias in the positive data, the unavailability of labeled negatives as validation data and the misspecification of the class prior π.
Ethics statement: Most of the ethical concerns stem from the specific application and dataset. Here we have shown a certain robustness towards biased positive labels without providing a comprehensive assessment; therefore, practitioners should always ensure, insofar as possible, that the obtained predictions are "fair" (with "fairness" defined appropriately w.r.t. the target application) and do not systematically affect particular subsets of the population of interest.
Reproducibility statement: The source code for the framework is available in the supplementary material.
A NETWORK ARCHITECTURE AND HYPERPARAMETERS
Table A.1 reports the hyperparameters related to pseudo-labeling and their ranges. Table A.2 reports the network architecture used in the IMDb experiments, while Table A.3 reports the network used with CIFAR-10.
B DATASET INFORMATION
Table B.1 reports the number of samples for each split and each dataset. For the image datasets, we subtracted the mean pixel intensity computed on the training set and divided by the standard deviation. For IMDb we used pre-trained GloVe embeddings of size 200, trained on a corpus of six billion tokens.
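As a small illustration, the image standardization described above can be written as follows; computing a single global mean and standard deviation on the training set is an assumption consistent with the text.

import numpy as np

def standardize(train_images, test_images):
    # Statistics are computed on the training split only and applied to both splits.
    mean, std = train_images.mean(), train_images.std()
    return (train_images - mean) / std, (test_images - mean) / std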
2. What are the strengths of the proposed approach, particularly in its uniqueness compared to previous methods?
3. What are the weaknesses of the paper, especially regarding its assumptions and comparisons with other works?
4. How does the reviewer assess the clarity and completeness of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the PU learning problem. It proposes a two-step approach that can estimate the pseudo-label uncertainty so that more reliable pseudo-labels can be assigned, which improves the predictive performance. The proposed estimation method is different from previous methods.
Review
This paper studies the PU learning problem. It proposes a two-step approach that can estimate the pseudo-label uncertainty so that more reliable pseudo-labels can be assigned, which improves the predictive performance. The proposed estimation method is different from previous methods.
My main concern is that two key assumptions are made in the paper, which are not realistic. The first one is that a positive and negative labeled validation set is available and used as the validation set for early stopping. The second is that the proportion of positive and negative samples is known. Both are problematic. The paper justifies the assumptions by listing some prior work that made one or the other assumption. However, I don’t think the fact that some prior work used these assumptions makes the assumptions right. In a real-life situation, these numbers are not available. There are many approaches that do not make any of the assumptions, but they are not compared, e.g., the recent system PAN, “Predictive adversarial learning from positive and unlabeled data,” AAAI-2021.
Prior work has used several techniques to score the unlabeled set and then label them. Since the paper that proposed the two-step approach, “partially supervised classification of text documents,” ICML-2002, several approaches have been proposed to pseudo-label the unlabeled set. I understand that the proposed approach is different and probably more advanced than those older approaches, but it will be more complete to have an experimental comparison with some earlier approaches to show the advantage of the proposed method.
The paper is easy to understand. |
ICLR | Title
Fairness of Federated Learning with Dynamic Participants
Abstract
The concept of fairness has been widely caught attention in Federated Learning (FL). While there are tremendous studies about various notations of fairness in FL in recent years, all of them only consider the case where the training process starts and ends at the time point for all participants. Actually, participants could be dynamic and they may join and leave the training process at different time points. However, participants who join the training process at different time points receive similar incentive benefits can be seen as a signal of unfairness. In this paper, we provide the first study on such fairness of FL for dynamic participants. First, we propose a new mathematical definition of the above fairness namely dynamic fairness. Briefly speaking, an algorithm is dynamically fair and satisfies that local agents who participate in the model training longer should receive more benefits than those who participate in the process shorter. Second, we develop a simple but novel method, which could be seen as a normalized version of Fedavg, and theoretically show that it is fairer than Fedavg. Moreover, we can combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive. Finally, empirically we propose a measure for dynamic fairness and demonstrate that our method can achieve a fairer performance under our definition of fairness through intensive experiments on three benchmark datasets.
1 INTRODUCTION
As one of the most fundamental learning frameworks for preserving the privacy of distributed data, Federated Learning (FL) (Konečnỳ et al., 2016) has prospered in the machine learning community in the last few years. In the canonical FL setting, there are several local agents, and each of them holds a dataset for local training. And there is a controller (server) which aggregates gradient vectors or local models from agents for global model updates. During the training process, the agents only communicate their gradients or local models to the server and the original data never leaves the local agents. Therefore, FL can protect the data information of each agent from leaking. To comply with the privacy regulations such as the General Data Protection Regulation (GDPR) (gdpr), variants of FL frameworks have been widely studied, and recently adopted in industry, such as Apple’s “FE&T” (Paulik et al., 2021), Google’s Gboard (gboard), and Alibaba’s FederatedScope (Xie et al., 2022).
While the recent advances in FL present a promising framework to learn from distributed data privately and efficiently, most of the current research mainly focuses on the central server’s benefits, i.e., developing methods to improve the convergence rate or the generalization performance in the FL setting, while ignoring local agents’ interests. However, such attention to the server’s benefits may cause fairness issues which make local agents less interested in participating in the model training. For instance, those methods usually apply thresholds such as bandwidth and transmission speed to selectively choose clients (Shi et al., 2021), which potentially leads to unfair client selection in the FL system. Local devices with low transmission speed might be neglected frequently during the training process, and eventually become never-represented or under-represented client groups. Moreover, some researchers have noticed that the participants sometimes suffer from unfair incentive rewards (Zhan et al., 2021). Kairouz et al. (2021) notice the free-rider problem in the FL system. In the free-rider scenario, clients who contribute less (e.g., better data quality vs. worse data quality)
in training the model receive the same resulting model as those who contribute more to the training. Distributing models with performance incommensurate to each participant’s contribution might discourage active clients from continuously collaborating in the model training.
To alleviate the unfairness issue, a tremendous amount of work has recently studied fair FL by considering various definitions of fairness, such as selection fairness (Zhou et al., 2021) and collaboration fairness (Lyu et al., 2020) (see the Related Work section for more details). However, all of these works only consider the case where all the participants are static, i.e., they join the training process at the same time point, while in practice such an assumption may not always hold, as the participants may be dynamic, i.e., different agents could join or leave the training at different time points. In such a dynamic scenario, there are additional fairness issues compared with the static one. Consider the following case as an example: suppose the agents can join at different time points and never leave before the training process ends. In this case, the agents who join the training earlier (and thus contribute more) will expect higher benefits than the ones who join later. Thus, participants who join the training process at different time points receiving similar incentive benefits can be seen as a signal of inequality. However, to our best knowledge, there is no previous work studying such fairness in FL.
In this paper, we provide the first study to alleviate the above fairness issue caused by dynamic participants by providing some new definitions, methods, and measures. Specifically, our contributions can be summarized as follows:
1. First, we provide a rigorous definition for the above fairness, namely dynamic fairness. Briefly speaking, we call an algorithm dynamically fair if its performance is commensurate with the length of each client’s participating time. Equivalently, it satisfies that the agents with longer participation time receive more benefits, which could be seen as between-group fairness. Besides that, we also provide criteria to compare the dynamic fairness of two algorithms.
2. Next, we propose several dynamically fair methods. First, we propose a simple but efficient method namely Normalized Fedavg. Generally speaking, our method could be thought of as a normalized version of Fedavg where we use the normalized SGD instead of SGD for local training. Interestingly, we theoretically show that our algorithm is fairer than the vanilla Fedavg (McMahan et al., 2017). To further improve the convergence rate practically, we propose a method namely Modified Normalized Fedavg.
3. Moreover, due to the simplicity of the idea, our method is compatible with other fair FL methods. Specifically, we combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive, i.e., we can additionally achieve within-group fairness.
4. Finally, we propose new measures for dynamic fairness and provide empirical studies of our methods. With extensive experiments on three datasets MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), we find that our methods are not only dynamically fair, but also achieve better fairness compared with Fedavg.
Due to the space limit, all the proofs, some additional sections, algorithms, and experiments of our methods are included in Appendix.
2 RELATED WORK
Existing studies have proposed several definitions of fairness in federated learning. Zhou et al. (2021) proposed the concept of selection fairness: a fair FL model should provide more participation opportunities for never-represented or under-represented client groups. The following literature tries to promote a fair client selection by introducing the sampling constraints to the FL model (Huang et al., 2020).
Different from selection fairness, Li et al. (2019) mentioned that one essential notion of fairness is to accomplish a relatively uniform accuracy distribution across devices, which is defined as standard accuracy parity (Zafar et al., 2017). In a previous study, Li et al. (2021) suggested that reducing the variation of model performance on different clients’ datasets can be seen as a reliable indicator of standard accuracy parity. However, the researchers ignored the importance of clients’ contributions in training an FL model. For instance, as Lyu et al. (2020) proposed in their literature, a client who contributes more to the federated system deserves a better performing local model than those who contributed less, which is defined as collaboration fairness. Lyu et al. (2020) proposed that the quality of each client’s uploaded gradients is sufficient to determine participants’ contribution.
One critique of the above scenario is that the concept of time is ignored. When training an FL model, all the clients need to incur some cost to participate in the training. For instance, if a company wants to build a profitable FL model, they have to invest not only money and data but also plenty of time since training and commercialization of the FL models take time. Yu et al. (2020) introduced the idea of regret, which refers to the difference between the incentive rewards clients have received and what they should receive while taking how long they have waited to receive the payoff into account.
However, each participant’s training time was ignored in all the above scenarios. In this paper, we proposed that in the long term, clients who join an FL model training longer should be rewarded with better model performance than those who participate in the training shorter since they contribute more time to the model training.
3 DYNAMIC FAIRNESS FOR FEDERATED LEARNING
In this section, we will formally define the fairness discussed in the Introduction. Before that, we provide an overview of the standard Federated Learning (FL) setting.
In FL, there are $m$ agents, where the $i$-th agent has a local dataset $D_i = \{x_{i,j}\}_{j=1}^{n_i}$ (the data samples could be either i.i.d. or non-i.i.d. sampled), and a central server. We also have a loss function $\ell$, and the central server aims to solve the following minimization problem:
$$\min_{w \in \mathbb{R}^d} F(w) = \sum_{i=1}^{m} p_i F_i(w), \qquad (1)$$
where $F_i(w) = \frac{1}{|D_i|}\sum_{x \in D_i} \ell(w; x)$ is the empirical risk function for the $i$-th agent on his/her dataset $D_i$ and $p_i$ is the weight for the $i$-th agent, for example $p_i = \frac{n_i}{\sum_j n_j}$. Dynamic Federated Learning Setting: While most of the previous work focuses on the case where all agents are static, i.e., all of them join the training process at the same time point (for simplicity, in this paper we assume one time step corresponds to one update of the global model), here we consider a dynamic setting of FL. For simplicity, we consider a dynamic setting with a finite number of time points. That is, there are $S$ time points $t_1, \cdots, t_S$, and at each time point $t_i$ there is a set of agents $v_i$ who join the training process (for simplicity, we denote $t_1$ as the time when the training process starts). Since the server cannot get the information of all participants, its goal at time point $t_M$ with $1 \le M \le S$ is now to minimize $\sum_{i \le M} L_{v_i}(w)$, where $L_{v_i}(w) = \sum_{j \in v_i} p^M_j F_j(w)$ and $p^M_j$ is the weight for the $j$-th agent at time point $M$; i.e., the objective function is the weighted sum of the empirical risk functions of all the agents who join at or before time point $t_M$. 1 It is notable that when $M = S$, the objective function is equivalent to the original one in (1).
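As a small illustration of the objective in (1), the following sketch shows the corresponding server-side weighted aggregation with weights $p_i = n_i / \sum_j n_j$; plain NumPy arrays stand in for model parameters, and all names are illustrative assumptions.

import numpy as np

def fedavg_aggregate(client_models, client_sizes):
    # client_models: list of parameter vectors; client_sizes: list of local dataset sizes n_i.
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()          # p_i = n_i / sum_j n_j
    return sum(p * w for p, w in zip(weights, client_models))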
As we mentioned earlier, in the above dynamic FL setting there could be additional fairness concerns. For example, consider two successive time points $t_1$ and $t_2$ with the associated participant sets $v_1$ and $v_2$ (we assume $t_2 > t_1$), and suppose the server conducts Fedavg to train the model. At time point $t_2$, from the perspective of agents in $v_1$ the algorithm itself may be unfair to them, as the agents in $v_2$ can directly use the current model (which has already been trained for several rounds using the data in $v_1$) without any cost. We can see that the above unfairness is ubiquitous in the dynamic FL setting. In this paper, we aim to mitigate such unfairness. However, before showing our method, we need to provide a mathematical definition for the above fairness.
Defining such fairness is challenging. The most direct way is to use the value of the empirical risk function for a different set of participants vi, i.e., the value of Lvi(w). However, such measurement
1Note that in this paper, we assume all the agents will never leave the training process before the training process ends. We leave it as future research to study the case where each agent could join and leave the training.
is unsatisfactory, as our fairness should ensure that agents gain more the longer they participate in the training, and the function value cannot reveal this relationship. In practice, we can actually use the "difference of accuracy" between different time points to measure the benefit, i.e., fairness. Consider an extreme case as an example, where half of the agents join at the beginning of the training, i.e., at $t_1$, and the other half join the training at the last time point $t_S$. When the training ends, we hope that the improvement of the accuracy for the first half of the agents is much greater than for the other half. Motivated by this, mathematically we can use the difference in the empirical risk function values to measure the improvement of test accuracy. Based on that, in the following Definition 1, we first define the benefit for group $v_i$ at the current time point $t$ as the difference between the empirical risk of group $v_i$ at its joining time point $t_i$ and at the current time point $t$. Definition 1 (Benefit). Under our dynamic FL setting, for a training algorithm $A$, the benefit at time point $t$ for a group $v_i$ joining the training at time point $t_i$ ($t > t_i$) is defined as $L^{v_i}_{t_i}(w_t) = L_{v_i}(w_{t_i}) - L_{v_i}(w_t)$, (2) where $w_{t_i}$ and $w_t$ are the trained models at time points $t_i$ and $t$, respectively. Moreover, we define the benefit agents in $v_i$ get at time point $t$ as $L_{v_i}(w_{t-1}) - L_{v_i}(w_t)$.
Based on the definition of benefit, we then propose our desired definition of a federated learning fairness criterion with dynamic participants. Generally speaking, we consider a training algorithm to be dynamically fair if the benefits of the agents who join earlier are higher than those of the agents who join later. Definition 2 (Absolute Dynamic Fairness). Under our dynamic FL setting, a training algorithm $A$ is absolutely dynamically fair if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
$$L^{v_i}_{t_i}(w_t) > L^{v_j}_{t_j}(w_t), \qquad (3)$$
where $w_t$ is the trained model at time point $t$ of the algorithm.
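For concreteness, the benefit in Definition 1 and the fairness condition in Definition 2 can be checked as in the following sketch, assuming that for each group we record its joining round and its loss $L_{v_i}(w_t)$ at every round; all names here are illustrative assumptions.

def benefit(loss_history, join_round, t):
    # loss_history[r] stores L_{v_i}(w_r); the benefit is L_{v_i}(w_{t_i}) - L_{v_i}(w_t).
    return loss_history[join_round] - loss_history[t]

def is_absolutely_dynamically_fair(groups, t):
    # groups: list of (join_round, loss_history) pairs sorted by joining round, earliest first.
    benefits = [benefit(history, jr, t) for jr, history in groups if jr < t]
    # Definition 2 requires earlier-joining groups to have strictly larger benefits.
    return all(early > late for early, late in zip(benefits, benefits[1:]))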
Note that the fairness we propose in Definition 2 is real-time, i.e., it holds regardless of whether $t$ is the last time point of training or a time point during training, as long as $t > t_j$. In practice, we not only want to design absolutely dynamically fair algorithms, but also expect to develop new fair algorithms that are fairer than the existing ones. In the following, we quantify such relative fairness between two algorithms, i.e., the algorithm that allows the group that participates longer to get more benefits will be considered fairer. Definition 3 (Relative Dynamic Fairness). Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms $A$ and $\tilde{A}$; we say algorithm $A$ is dynamically fairer than algorithm $\tilde{A}$ if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
$$L^{v_i}_{t_i}(w_t) - L^{v_j}_{t_j}(w_t) > L^{v_i}_{t_i}(\tilde{w}_t) - L^{v_j}_{t_j}(\tilde{w}_t), \qquad (4)$$
where $w_t$ and $\tilde{w}_t$ are the trained models at time point $t$ of algorithms $A$ and $\tilde{A}$, respectively.
It is notable that in Definition 3 we require both $A$ and $\tilde{A}$ to be absolutely dynamically fair. This is necessary, as relative dynamic fairness does not imply absolute dynamic fairness. Moreover, although there is no data distribution assumption in our previous definitions, we can see that they are more suitable for non-i.i.d. data across agents. This is because, if all the data are i.i.d. and $m$ and each $n_i$ ($i \in [m]$) are large enough, then $L_{v_i}(w) \approx L_{v_j}(w)$ for any groups $i$ and $j$, as both are approximately equal to the underlying population risk $\mathbb{E}_{x \sim P}[\ell(w; x)]$ by Hoeffding's inequality when the loss function is bounded, where $P$ is the underlying distribution of the data. In the ablation study of the experimental part we will also verify this empirically. Thus, in the following parts we will always consider the non-i.i.d. case.
Note that in Definition 1 we use the difference of the empirical loss at two time points to measure the benefit of an agent. However, there could be other ways to define the benefit, such as the relative difference. We leave considering these alternative definitions of benefit as future work.
4 ACHIEVING DYNAMIC FAIRNESS
In the previous section, we presented the dynamic fairness that we aim to study in this paper. Now we aim to develop methods that are absolutely dynamically fair. Moreover, we want them to be fairer than Fedavg (McMahan et al., 2017).
Before diving into details, let us go back to Fedavg to see why it may cause unfairness and how to improve its fairness. For simplicity, we consider the case where there are only two groups $v_1$ and $v_2$, which join the training at $t_1$ and $t_2$ respectively, and we assume there is only one agent in each group with the same amount of data; each agent performs one step of Gradient Descent (GD) locally and then sends the model to the server to be aggregated. Suppose we have already trained the model using the agents in $v_1$ for a long time, and we have now reached time point $t_2$ with model $w_{t_2}$. Now we consider time point $t_2 + 1$. We will show that the above variant of Fedavg is unfair:
Theorem 1. Under the above setting, Fedavg is not dynamically fair at the time point $t_2+1$ if $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is sufficiently large such that $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(\|\nabla_w L_{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(L_{v_1}(w_{t_2}) - L_{v_1}(w_1))$, and $\eta = O(1)$, where $\eta$ is the stepsize of GD for each agent.
Note that although in Theorem 1 we need to assume that $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is sufficiently large, such an assumption is quite natural. As we know, $w_{t_2}$ is the model trained via $L_{v_1}$ for several rounds, which implies that $\|\nabla_w L_{v_1}(w_{t_2})\|_2$ will be small. On the other side, since we obtain $w_{t_2}$ before $v_2$ joins and we assume the data in $v_1$ and $v_2$ are non-i.i.d. sampled, $w_{t_2}$ will be far from the minimizer of $L_{v_2}(w)$, i.e., $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is large. Moreover, $w_1$ is the initializer at time $t_1$; thus, when $w_1$ is close to the minimizer of $L_{v_1}$, $L_{v_1}(w_{t_2}) - L_{v_1}(w_1)$ could also be small.
In the following, we will intuitively explain why the above Fedavg is unfair. We assume both $L_{v_1}(w)$ and $L_{v_2}(w)$ are $L$-smooth, $\mu$-strongly convex and $1$-Lipschitz. Then, by the assumptions of smoothness and strong convexity, and the gradient descent step in each agent, we have
$$\left(\eta - \frac{\eta^2 L}{2}\right)\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 \le L_{v_2}(w_{t_2}) - L_{v_2}(w^2_{t_2+1}) \le \left(\eta - \frac{\eta^2 \mu}{2}\right)\|\nabla_w L_{v_2}(w_{t_2})\|_2^2, \qquad (5)$$
$$\left(\eta - \frac{\eta^2 L}{2}\right)\|\nabla_w L_{v_1}(w_{t_2})\|_2^2 \le L_{v_1}(w_{t_2}) - L_{v_1}(w^1_{t_2+1}) \le \left(\eta - \frac{\eta^2 \mu}{2}\right)\|\nabla_w L_{v_1}(w_{t_2})\|_2^2, \qquad (6)$$
where $\eta$ is the stepsize and $w^i_{t_2+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing GD. If the benefit from the aggregation step in the server is sufficiently small, then from (5) we can see that the benefit for $v_2$, which depends on $\Theta(\|\nabla_w L_{v_2}(w_{t_2})\|_2^2)$, could be very large. On the other side, for $v_1$, $\|\nabla_w L_{v_1}(w_{t_2})\|_2$ is very small, indicating that the benefit they get in this round is quite small. If they did not get a large benefit in the rounds before $v_2$ joined, then the total benefit for $v_1$ will be less than the benefit for $v_2$, i.e., the algorithm is unfair.
From the previous intuitive analysis we can see that, in order to make agents in $v_2$ get less benefit at time point $t_2 + 1$, we cannot use GD (or similarly SGD), as it could make the benefit depend on $\Theta(\|\nabla_w L_{v_2}(w_{t_2})\|_2^2)$, which is quite large. Equivalently, the $\ell_2$-norm of the gradient plays an important role in the benefit of the agents in $v_2$. Motivated by this, a natural way is to perform normalized gradient descent (NGD) instead of GD, i.e., $w^2_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2}$. In this case, when $L_{v_2}$ is $1$-Lipschitz and we use the same stepsize as above, we have
$$L_{v_2}(w_{t_2+1}) - L_{v_2}(w_{t_2}) \le \|w_{t_2+1} - w_{t_2}\|_2 \le 2\eta,$$
i.e., the benefit is now bounded by $2\eta$, which is much smaller than $\|\nabla_w L_{v_2}(w_{t_2})\|_2$. This indicates that, as long as the benefit of $v_1$ at $t_2$, i.e., $L_{v_1}(w_{t_1}) - L_{v_1}(w_{t_2})$, is greater than $2\eta$, the algorithm will be absolutely dynamically fair at $t_2 + 1$. Moreover, since we now limit the benefit for $v_2$, we can show that using NGD is fairer than implementing the above vanilla Fedavg.
Theorem 2. Consider the same setting as in Theorem 1 with normalized GD and fixed $w_1$, $w_{t_2}$ and $\eta$. Then, if $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(1)$, NGD is dynamically fairer than the above Fedavg at time point $t_2 + 1$.
Note that in the previous theorem we only considered the time point $t_2 + 1$; as we can see from the experimental part, our algorithm is fairer than Fedavg in practice at every time point. Moreover, the above result relies on the assumption that $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(1)$. Actually, when $\|\nabla_w L_{v_2}(w)\|_2$ is small enough, the group $v_2$ will get a smaller benefit; i.e., for any $q > 0$, by the properties of the loss function we have
$$\eta\|\nabla_w L_{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \le L_{v_2}(w_{t_2+q}) - L_{v_2}(\tilde{w}^2_{t_2+q+1}) \le \eta\|\nabla_w L_{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
$$\eta\|\nabla_w L_{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \le L_{v_1}(w_{t_2+q}) - L_{v_1}(\tilde{w}^1_{t_2+q+1}) \le \eta\|\nabla_w L_{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
where $\tilde{w}^i_{t_2+q+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing NGD and $w_{t_2+q+1} = \frac{\tilde{w}^1_{t_2+q+1} + \tilde{w}^2_{t_2+q+1}}{2}$. If we ignore the benefit of the aggregation step in the server, then from the previous two results we can see that the benefit group $v_i$ gets is $\Theta(\eta\|\nabla_w L_{v_i}(w_{t_2+q})\|_2)$. Thus, when $\eta$ and the two gradient norms are small, the benefits are also small and could be considered equal. In total, when $\|\nabla_w L_{v_i}(w_{t_2+q})\|_2$ is large, our previous algorithm will be dynamically fair if $L_{v_1}(w_{t_1}) - L_{v_1}(w_{t_2+q}) \ge \omega(\eta)$; and when $\|\nabla_w L_{v_i}(w_{t_2+q})\|_2$ becomes sufficiently small, both groups get almost the same benefit at time point $t_2 + q$, and therefore our algorithm is still dynamically fair.
Algorithm 1 Normalized Fedavg: two groups v_1, v_2 with joining time points t_1, t_2 (t_1 = 1, t_1 < t_2). |v| indicates the number of clients in group v, |B| is the local minibatch size, E is the number of local epochs, η is the learning rate, and C is a constant.
Server executes:
1: initialization: w_1
2: for each round t = 1, 2, ... do
3:   if t < t_2 then m ← max(C · |v_1|, 1) else m ← max(C · (|v_1| + |v_2|), 1)
4:   S_t ← (random set of m clients)
5:   for each client k ∈ S_t in parallel do
6:     w^k_{t+1} ← ClientUpdate(k, w_t)
7:     if t ≥ t_2 then
8:       w^k_{t+1} ← w_t − η_t (w_t − w^k_{t+1}) / ||w_t − w^k_{t+1}||   // Normalization
9:   w_{t+1} ← Σ_{k∈S_t} p^i_k w^k_{t+1}, where p^i_k is the weight with i = 1 when t < t_2 and i = 2 otherwise

ClientUpdate(k, w):   // Run on client k
1: B ← (split D_k into batches of size |B|)
2: for each local epoch i from 1 to E do
3:   for each batch b ∈ B do
4:     w ← w − η_t ∇ℓ(w; b)
5: return w to the server
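The key server-side step of Algorithm 1 (step 8) can be sketched as follows, with plain dictionaries of NumPy arrays standing in for per-layer model parameters; the structure is an illustrative assumption rather than our actual code.

import numpy as np

def normalize_client_update(w_global, w_client, lr):
    # Replace each layer's update by a unit-norm direction scaled by the learning rate.
    normalized = {}
    for name, w_g in w_global.items():
        delta = w_g - w_client[name]              # local model update for this layer
        norm = np.linalg.norm(delta) + 1e-12      # layer-wise norm (LN)
        normalized[name] = w_g - lr * delta / norm
    return normalized

def aggregate(client_models, weights):
    # Weighted average of the (normalized) client models (step 9).
    agg = {name: np.zeros_like(w) for name, w in client_models[0].items()}
    for w_k, p_k in zip(client_models, weights):
        for name in agg:
            agg[name] += p_k * w_k[name]
    return agg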
Based on our above idea of normalizing the gradient to limit the benefit for each new agent, we can modify the vanilla Fedavg to improve its dynamic fairness, i.e., we propose Normalized Fedavg in Algorithm 1 (for simplicity we only present the case where two groups are trained; the multi-group case can be easily generalized). Compared with previous normalized stochastic gradient descent (NSGD) methods (Zhao et al., 2020; Cutkosky & Mehta, 2020; You et al., 2019; Hazan et al., 2015), there are two critical differences. First, while existing work on NSGD normalizes the gradients at each iteration, in Algorithm 1 each agent still uses SGD to train the local model and then sends the model to the server; the server then normalizes these local model updates (step 8) and performs the aggregation step (step 9). This is because, in practice, we find that using NSGD directly in all agents makes the algorithm hard to converge; thus, before the normalization step, we still let each agent perform SGD (step 6). Second, whereas previous NSGD-based methods compute the global norm over all parameters of the model and then perform the normalization step, in Algorithm 1 we use the layer-wise norm (LN) for the normalization step, as using the global norm could lead to non-convergence (see the experiments in Section D.1 in the Appendix for details).
Although in practice we found Normalized Fedavg can indeed improve the dynamic fairness compared with Fedavg, its convergence rate is quite slow. The main reason is that the model update of local agents becomes quite small at the early stage after adding group v2. To address the issue, we simply modify the normalization update step (step 8 in Algorithm 1) by adding a decayed coefficient. We choose the model update norm G at the time point when group v2 joined in the training
Algorithm 2 The Modified Normalized Fedavg algorithm. β is a hyperparameter taking values in [0, 1]; its default value is 1. LN : (w^[0], ..., w^[L−1]) → (||w^[0]||_2, ..., ||w^[L−1]||_2) is the function that computes the model update norm at each layer.
Server executes:
1: initialization: w_0, β
2: for each round t = 1, 2, ... do
3:   if t < t_2 then m ← max(C · |v_1|, 1) else m ← max(C · (|v_1| + |v_2|), 1)
4:   S_t ← (random set of m clients)
5:   if t = t_2 + 1 then G ← LN(w_{t−1} − w_t)
6:   for each client k ∈ S_t in parallel do
7:     w^k_{t+1} ← ClientUpdate(k, w_t)
8:     if t ≥ t_2 + 1 then
       w^k_{t+1} ← w_t − ( β G/(1 + t − t_2) + (1 − β) LN(w_t − w^k_{t+1}) ) · (w_t − w^k_{t+1}) / LN(w_t − w^k_{t+1})
9:   w_{t+1} ← Σ_{k∈S_t} p^i_k w^k_{t+1}

ClientUpdate(k, w): same as Algorithm 1
as the initial value of this coefficient, and decay it as the training rounds increase. The modified formula for the local device model update is
$$w^k_{t+1} := w_t - \frac{G}{1 + t - t_2} \cdot \frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \qquad (7)$$
where $t \ge t_2 + 1$ and $G = \|w_{t_2} - w_{t_2+1}\|$. In order to further enhance the applicability of the algorithm, we introduce another hyperparameter $\beta$ to combine Fedavg and Normalized Fedavg:
$$w^k_{t+1} := w_t - \left(\beta \frac{G}{1 + t - t_2} + (1 - \beta)\|w_t - w^k_{t+1}\|_2\right) \frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \qquad (8)$$
where the value of $\beta$ ranges from 0 to 1. If the value of $\beta$ is close to 1, the algorithm will be more fair; and if the value is close to 0, the algorithm will converge faster and behave closer to Fedavg.
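For illustration, the update in (8) can be sketched as follows for a single layer; the single-layer view and plain NumPy are simplifying assumptions for the purpose of exposition.

import numpy as np

def modified_normalized_update(w_global, w_client, G, t, t2, beta=1.0):
    delta = w_global - w_client
    norm = np.linalg.norm(delta) + 1e-12
    # beta = 1 recovers the fully normalized update with the decayed coefficient G / (1 + t - t2);
    # beta = 0 recovers the plain Fedavg update.
    step = beta * G / (1 + t - t2) + (1 - beta) * norm
    return w_global - step * delta / norm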
Note that in the above methods we normalize the model update (gradients) to limit each agent's benefit. A natural question is whether we can use other ways. We know that, other than normalization, clipping is another commonly used operation in deep learning (e.g., poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose Clipping Fedavg (Algorithm 3). Through experiments, we find that clipping can also improve fairness; however, its improvement compared with Fedavg is quite limited. See Section B in the Appendix for details.
Actually, due to the simplicity of our idea, our methods are compatible with other fair FL methods. Specifically, we combine our method with the existing methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive, i.e., we can additionally achieve within-group fairness. See Section C in the Appendix for details.
5 EXPERIMENTS
In this section, we will study the practical performance of our proposed algorithms on several benchmark datasets.
Experimental Settings: To verify whether the normalization-based methods can indeed improve dynamic fairness, we design the Two-groups experiment to simulate the scenario in which some clients join first (group1) while some join in training at timepoint t2 (group2). In this type of experiment, each group contains 5 clients.
To make our experiments more convincing and applicable, we also design the Multi-groups experiment, in which more clients are added to the training at several different time points. In detail, there are S groups {v1, ..., vS} (S ≥ 2), and each group is added to the training at a specific time point {t1, ..., tS} (t1 = 1). In the Multi-groups experiment, each group contains 3 clients. For all experiments, all clients run 5 local epochs with a local batch size of 32 and η = 1e−2 in each round. Also, we define a parameter α to control the degree of non-i.i.d.-ness of the dataset, i.e., if two groups
join the training with α = 0.9 and the dataset has 10 classes, then one group has 90% of the data in 5 classes and the second group has 90% of the data in the other 5 classes. For the Two-groups experiment, we choose 10 clients in total and each group contains 5 clients. The second group is added to the training in round 10. For the Multi-groups experiment, we choose 30 clients in total and each group contains 3 clients. The time point at which each group joins the training is in the set {0, 10, 20, 30, 40, 50, 60, 70, 80, 90}.
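For illustration, the α-controlled non-i.i.d. split for two groups on a 10-class dataset can be sketched as follows; the partitioning logic is an assumption consistent with the description above, not necessarily the exact script we used.

import numpy as np

def split_two_groups(labels, alpha=0.9, num_classes=10, seed=0):
    rng = np.random.default_rng(seed)
    group1, group2 = [], []
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Group 1 keeps a fraction alpha of the first half of the classes and 1 - alpha of the rest.
        cut = int(alpha * len(idx)) if c < num_classes // 2 else int((1 - alpha) * len(idx))
        group1.extend(idx[:cut])
        group2.extend(idx[cut:])
    return np.array(group1), np.array(group2)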
Datasets and Models We use three classical datasets, MNIST (LeCun et al., 1998), Fashion MNIST (FMNIST) (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), and two popular models, LeNet (LeCun et al., 1998) and ResNet18 (He et al., 2016), to evaluate our algorithms. Like most works, we used LeNet on MNIST and FMNIST, and ResNet18 on CIFAR10, for evaluation.
Evaluation Metrics Based on our Definition 1, to better describe the benefits of all groups during the training process, we propose the group benefit (GB) as one of our experimental metrics. This metric captures the difference in loss value between the groups that join later and the group that joins first. A positive value of GB indicates that the algorithm is fair, and a larger value indicates better fairness of the algorithm. The metric is defined in (9). GB (train) and GB (test) are calculated from the train dataset and the test dataset, respectively. If there is only one group, we set GB to 0.
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L_{v_{i+1}}(w_t) - L_{v_i}(w_t)\right) - 1 \qquad (9)$$
where $n \in [2, N]$ denotes the number of currently running groups. It should be noted that the metric here does not exactly follow Definition 1; the reason can be found in Section E in the Appendix.
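The metric in (9) can be computed as in the following sketch, where group_losses is assumed to hold the current per-group losses [L_{v_1}(w_t), ..., L_{v_n}(w_t)] ordered by joining time; the names are illustrative assumptions.

import numpy as np

def group_benefit(group_losses):
    losses = np.asarray(group_losses, dtype=float)
    if len(losses) < 2:
        return 0.0                        # GB is defined as 0 when only one group is running
    diffs = losses[1:] - losses[:-1]      # L_{v_{i+1}}(w_t) - L_{v_i}(w_t)
    return float(np.mean(np.exp(diffs)) - 1.0)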
Main Experiment Results The experiment results are shown in Figures 1 and 2. It can be seen that Algorithms 1 and 2 exhibit much higher benefits than Fedavg, and their benefits eventually converge to a large positive value on all the datasets. In contrast, Fedavg makes the value of GB small or even negative (absolute unfairness). A noteworthy phenomenon is that for Fedavg, compared with its GB values that keep decreasing during training, the GB values increase slightly during testing for both datasets (MNIST and CIFAR10). However, the increased values are still much smaller than those of our methods. Remarkably, we find that Algorithm 2 is fairer than Algorithm 1 in Figure 1, but Figure 2 shows the exact opposite phenomenon. Therefore, for those two algorithms, we cannot conclude which one outperforms the other as measured by GB, which indicates that our strategies in Algorithm 2 do not significantly reduce fairness compared with Algorithm 1.
Figures 1 and 2 both illustrate a negative correlation between fairness and convergence rate. Algorithms 1 and 2 have better fairness performance, but their convergence rates are slower than that of Fedavg. Meanwhile, we can conclude that Algorithm 2 converges faster than Algorithm 1, which shows the effectiveness of our proposed Algorithm 2.
To summarize, both Algorithm 1 and Algorithm 2 demonstrate greater dynamic fairness than Fedavg. Besides, Algorithm 2 can achieve a faster convergence rate than Algorithm 1 while maintaining a similar level of dynamic fairness.
We defer the ablation study to Section D.1 in Appendix due to space limit.
6 CONCLUSION
In this paper, we focused on fairness in the setting of Federated Learning with dynamic participants, meaning that clients can join the training at different time points. We proposed a new definition of federated learning fairness, namely dynamic fairness, to guarantee higher benefits for local agents who participate in the FL model training for longer time periods than for those who participate for shorter periods. We developed normalization-based algorithms to guarantee dynamic fairness based on Fedavg. Furthermore, we improved the efficiency of Normalized Fedavg via several strategies. Intensive experiment results showed that our methods are dynamically fair and, specifically, that our algorithms are fairer than Fedavg.
A OMITTED PROOFS
Proof of Theorem 1. In Fedavg, the server computes $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta \nabla_w L_{v_2}(w_{t_2})$ and $w^1_{t_2+1} = w_{t_2} - \eta \nabla_w L_{v_1}(w_{t_2})$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L_{v_1}(w_{t_2+1}) = L_{v_1}(w_{t_2}) - \eta \nabla_w L_{v_1}(w_{t_2}) \cdot \frac{\nabla_w L_{v_1}(w_{t_2}) + \nabla_w L_{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right], \qquad (10)$$
$$L_{v_2}(w_{t_2+1}) = L_{v_2}(w_{t_2}) - \eta \nabla_w L_{v_2}(w_{t_2}) \cdot \frac{\nabla_w L_{v_1}(w_{t_2}) + \nabla_w L_{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right]. \qquad (11)$$
Thus we have
$$[L_{v_2}(w_{t_2}) - L_{v_2}(w_{t_2+1})] - [L_{v_1}(w_{t_2}) - L_{v_1}(w_{t_2+1})] \approx \eta\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 - \eta\|\nabla_w L_{v_1}(w_{t_2})\|_2^2.$$
Thus, based on Definition 1, the difference between the benefit for agents in $v_2$ and the benefit for agents in $v_1$ is approximately equal to
$$\eta\left[\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2^2\right] + \left[L_{v_1}(w_{t_2}) - L_{v_1}(w_1)\right].$$
Thus, when $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(\|\nabla_w L_{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge \Omega(L_{v_1}(w_{t_2}) - L_{v_1}(w_1))$, the benefit for $v_2$ is larger and the algorithm is no longer fair.
Proof of Theorem 2. In Fedavg, the server computes $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2}$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L_{v_1}(w_{t_2+1}) = L_{v_1}(w_{t_2}) - \frac{\eta}{2}\nabla_w L_{v_1}(w_{t_2}) \cdot \left(\frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right], \qquad (12)$$
$$L_{v_2}(w_{t_2+1}) = L_{v_2}(w_{t_2}) - \frac{\eta}{2}\nabla_w L_{v_2}(w_{t_2}) \cdot \left(\frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right]. \qquad (13)$$
Thus we have
$$[L_{v_2}(w_{t_2}) - L_{v_2}(w_{t_2+1})] - [L_{v_1}(w_{t_2}) - L_{v_1}(w_{t_2+1})] \approx \frac{\eta}{2}\left(\|\nabla_w L_{v_2}(w_{t_2})\|_2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L_{v_1}(w_{t_2}) \cdot \nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right).$$
Thus, based on Definition 1, the difference between the benefit for agents in $v_2$ and the benefit for agents in $v_1$ is
$$\frac{\eta}{2}\left(\|\nabla_w L_{v_2}(w_{t_2})\|_2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L_{v_1}(w_{t_2}) \cdot \nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + \left[L_{v_1}(w_{t_2}) - L_{v_1}(w_1)\right]. \qquad (14)$$
And this is smaller than the corresponding difference in the case of Theorem 1 with fixed $w_1$, $w_{t_2}$ and $\eta$ when $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \ge 2$. Thus, NGD is fairer than the Fedavg in Theorem 1.
B CLIPPING FEDAVG AND EXPERIMENT
Note that in the above methods we normalize the model update (gradients) to improve the algorithms' fairness. A natural question is whether we can use other ways. Other than normalization, clipping is another commonly used operation in deep learning (e.g., poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose the Clipping Fedavg algorithm, whose complete pseudo-code is given in Algorithm 3. The experiments in Figure 3 show that, over the whole training process, it is hard to argue that clipping can improve the fairness of the algorithm. We can see that although Algorithm 3 can maintain a high level of fairness in the early stage after the new group joins, it is not significantly different from Fedavg in the later stage. Therefore, we do not recommend using Algorithm 3 to improve fairness in practical applications.
Algorithm 3 The Clipping Fedavg algorithm. The running clients are indexed by k, and η is the learning rate. LM : (w^[0], ..., w^[L−1]) → (max(w^[0]), ..., max(w^[L−1])) is the function that computes the maximum value of the model update at each layer, and Clip_g : (w^[0], ..., w^[L−1]) → (threshold(w^[0], ±g), ..., threshold(w^[L−1], ±g)) is the function that clips the parameters at each layer with the specified value, where threshold(w, ±g) limits w to within ±g.
Server executes:
1: initialization: w_0
2: for each round t = 1, 2, ... do
3:   if t < t_2 then m ← max(C · |v_1|, 1) else m ← max(C · (|v_1| + |v_2|), 1)
4:   S_t ← (random set of m clients)
5:   if t = t_2 then G ← LM(w_{t−1} − w_t)
6:   for each client k ∈ S_t in parallel do
7:     w^k_{t+1} ← ClientUpdate(k, w_t)
8:     if t ≥ t_2 then w^k_{t+1} ← w_t − Clip_G(w_t − w^k_{t+1})   // Clipping
9:   w_{t+1} ← Σ_{k∈S_t} p_k w^k_{t+1}
10:  if t ≥ t_2 then compute the test losses L_{v_1}(w_t), L_{v_2}(w_t)

ClientUpdate(k, w): same as Algorithm 1
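The clipping step of Algorithm 3 can be sketched as follows, with plain dictionaries of NumPy arrays standing in for per-layer parameters and G holding the per-layer thresholds computed at round t_2; the names are illustrative assumptions.

import numpy as np

def clip_client_update(w_global, w_client, G):
    # Clip each layer's update entry-wise to [-G[name], +G[name]] before aggregation (step 8).
    clipped = {}
    for name, w_g in w_global.items():
        delta = np.clip(w_g - w_client[name], -G[name], G[name])
        clipped[name] = w_g - delta
    return clipped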
C FURTHER EXPANSION OF THE FAIRNESS DEFINITION
Additionally, we combine our method with the existing methods in fair FL for static participants. Via such an approach, we can minimize the discrepancy of benefits for the agents who join the training process at the same time point and guarantee fair treatment for them, i.e., we can achieve within-group fairness.
By combining the fairness definition of Li et al. (2019) with ours, we extend Definitions 2 and 3 to Definition 4, which includes between-group fairness (guaranteeing that agents with longer participation time benefit more) and within-group fairness (guaranteeing performance uniformity for agents with the same participation time).
Definition 4 (Dynamic Fairness, Extended). Under our dynamic FL setting, a training algorithm $A$ is absolutely dynamically fair if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
Between-group fairness: $L^{v_1}_{t_1}(w_t) > L^{v_2}_{t_2}(w_t)$ \qquad (15)
Within-group fairness: $\mathrm{std}_{k \in v}\{F_k(w)\} \to 0$ \qquad (16)
Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms $A$ and $\tilde{A}$; we say algorithm $A$ is dynamically fairer than algorithm $\tilde{A}$ if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
Between-group fairness: $L^{v_1}_{t_1}(w_t) - L^{v_2}_{t_2}(w_t) > L^{v_1}_{t_1}(\tilde{w}_t) - L^{v_2}_{t_2}(\tilde{w}_t)$ \qquad (17)
Within-group fairness: $\mathrm{std}_{k \in v_i}\{F_k(w)\} < \mathrm{std}_{k \in v_i}\{F_k(\tilde{w})\}$ \qquad (18)
Here $L^{v_i}_{t_i}(w_t)$ is defined in Definition 1, $\mathrm{std}_{k \in v_i}\{F_k(w)\}$ denotes the standard deviation of the test loss over all devices in group $v_i$, and $v_i$ is any one of the groups currently participating.
We further modified our algorithm by combining the above algorithms with q-Fedavg (Li et al., 2019), which is an excellent solution for within-group fairness. The pseudo-code of our modified algorithm is given in Algorithm 4.
Algorithm 4 We merge our methods (step 7 below) into q-Fedavg. The notation k is the index of running clients, w_t is the global model at the current round t, and η is the learning rate. LN : (w^[0], ..., w^[L−1]) → (||w^[0]||, ..., ||w^[L−1]||) is the function used to compute the model update norm at each layer. q is a hyperparameter of q-Fedavg, and its default value is 0.1.
Server executes:
1: initialization: w_0
2: for each round t = 1, 2, ... do
3:   if t < t_2 then m ← max(C · |v_1|, 1) else m ← max(C · (|v_1| + |v_2|), 1)
4:   S_t ← (random set of m clients)
5:   for each client k ∈ S_t in parallel do
6:     w^k_{t+1}, F_k(w_t) ← ClientUpdate(k, w_t)
7:     w^k_{t+1} ← our operation (based on Algorithms 1, 2, 3)
8:     △^k_t ← F^q_k(w_t) · (w_t − w^k_{t+1})
9:     h^k_t ← q F^{q−1}_k(w_t) LN(w_t − w^k_{t+1}) + F^q_k(w_t)
10:  w_{t+1} ← w_t − Σ_{v∈S_t} Σ_{k∈v} p_k △^k_t / h^k_t
11:  if t ≥ t_2 then compute the test losses L_{v_1}(w_t), L_{v_2}(w_t)

ClientUpdate(k, w): same as Algorithm 1
We now provide the experiment results for Algorithm 4.
First, we define an evaluation metric loss std (LS) for within-group fairness. This metric indicates the level of performance uniformity across clients within the same group. A lower value of the metric indicates higher uniformity. If only one group runs, we set LS to 0.
$$LS_t = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\sum_{k=1}^{|v_i|}\frac{\left(F^k_t - L^{v_i}_t\right)^2}{|v_i|}} \qquad (19)$$
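The LS metric in (19) can be computed as in the following sketch, assuming per_client_losses holds one array of client test losses per currently running group and that $L^{v_i}_t$ is the mean loss of group $v_i$; these names are illustrative assumptions.

import numpy as np

def loss_std(per_client_losses):
    if len(per_client_losses) < 2:
        return 0.0                        # LS is set to 0 when only one group is running
    stds = [np.std(np.asarray(losses)) for losses in per_client_losses]
    return float(np.mean(stds))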
Second, we verified that q-Fedavg (Li et al., 2019) is still valid under our Two-groups experiment conditions in Figure 4 (Section 5). We then selected q = 0.1, which best fits Algorithm 4.
Last, as shown in Figure 5, we find that Algorithm 4 can improve both between-group fairness (greater group benefit) and within-group fairness (lower LS).
D ADDITIONAL EXPERIMENTAL RESULTS
D.1 ABLATION STUDY
In the ablation study, we use the experiment results of the "Two-groups experiment" (MNIST) as the control group. We explore the effects of three key variables (α, global vs. layer norm, and β) on the experiment results.
Impact of α. Figure 6 shows that in a scenario with a low degree of non-i.i.d.-ness, fairness is not guaranteed regardless of whether normalization is implemented or not. Only with strongly non-i.i.d. data can normalization guarantee fairness and be fairer than Fedavg, which is consistent with our reasoning in the previous section.
Impact of global / layer norm. We investigate whether the global parameter norm of the model update or the per-layer parameter norm should be used in Algorithms 1 and 2. As seen in Figure 7(a), using the global norm over all parameters of the model update in Algorithm 2 prevents the model from converging. On the contrary, the layer norm (LN) allows the model to converge.
Impact of β for algorithm 2. Due to a necessary trade-off between fairness and convergence speed in practical applications, we expect the hyperparameter β to regulate the degree of fairness of algorithm 2. Figure 7(b) also demonstrates that the effect of adjusting β is consistent with our expectation.
D.2 EVALUATION OF OTHER MODELS
We redid the Two-groups experiment using linear regression and a 2-layer neural network and got the same results (Figure 8) as with LeNet. This shows that our method is not limited by the choice of model.
E SUPPLEMENTAL NOTION
Explanation of the implementation of the evaluation metric "Group Benefit": If we followed the benefit definition (2), then Equation (9) would have to be rewritten as
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L^{v_i}_{t_i}(w_t) - L^{v_{i+1}}_{t_{i+1}}(w_t)\right) - 1 = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\Big(\underbrace{\big(L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}})\big)}_{\text{our proposed algorithms do not change it}} + \big(L_{v_{i+1}}(w_t) - L_{v_i}(w_t)\big)\Big) - 1 \qquad (20)$$
However, we find that the former term $L_{v_i}(w_{t_i})$ in (2) is much larger than the latter term $L_{v_i}(w_t)$ in the actual experiments, which makes the actual benefit $L^{v_i}_{t_i}(w_t)$ always stay close to $L_{v_i}(w_{t_i})$ and remain almost constant. Moreover, we find that $L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}})$ in (20) is the same for our algorithms and Fedavg, so we remove the former term $\big(L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}})\big)$ from the metric (20) and change the metric to (9).
2. What are the strengths and weaknesses of the proposed approach, particularly in its definition of fairness and experimental results?
3. Do you have any concerns or questions regarding the writing quality and clarity of the paper?
4. How does the reviewer assess the novelty and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper looks at fairness in federated learning. Here, the salient modeling element of the paper is fairness across time: i.e., agents that have been in the mechanism longer should get more benefit from it. This is what the authors refer to as "dynamic" fairness.
Strengths And Weaknesses
Strengths: I think the question asked by the paper is important and interesting in two ways:
it looks at a realistic paradigm for learning that is used in practice, federated learning.
The paper takes fairness considerations in FL into a new dimension, which takes into account how long agents have been in the mechanism and contributing data
Weaknesses:
i) I am worried about the definition of fairness, and that things can go wrong with it. One example is when the authors say "the agents in v_2 can directly used the current model (which has already been trained for several rounds by using the data in v_1)", which seems to suggest that even if agents in v_2 significantly contribute to the model and provide significantly many data points (say, many more than the agents in v_1), v_2 should not see any benefit from doing so. It seems that the definition of fairness takes how long agents have been in the mechanism into account, without taking into account how much data they have contributed. As such, I can imagine a situation like the following:
in the first time step, 10 agents from group 1 show up.
in the second time step, n -> +\infty agents from group 2 show up. In this case, the data of the agents in group 1 effectively is insufficient to train any practical model. The model depends entirely (almost) on the data of group 2. Here, I am under the impression that the definition of fairness says that group 1 should reap more benefit by virtue of being first, even though they contributed almost no data and in practice should get no benefit. Since the main technique seems to use renormalized gradient descent, I think it implicitly suggests that issues like the one above are not taken into account as the "magnitude" of the data/gradient step are not taken into account.
ii) The results are hard to understand. For example:
Theorem 1 gives a very theoretical condition in terms of the gradients of the algorithm... but note that generally these gradients may be hard to understand and highly dependent on the algorithm, so it feels that the characterization in that sense is hard to use and get insights from.
In section 5, the set-up feels a bit unclear. One place that is hard to understand is the \alpha and the degree of non-i.i.d.-ness of the dataset.
iii) The experiments feel a bit weak, in that it seems there is a very small number of clients. I am also not clear on (forgive me if I missed this) on how much data each client provides. I think it would be interesting to extend these experiments to having a larger number of clients, closer to what happens in practice. Right now it is hard to see if any of the proposed insights scale up.
I also have some concerns about the results of the experiments: the figures clearly show that the benefit must be decreasing for the groups that arrive later, since the difference in value for the group that join first vs the groups that join later remains high. This seems to suggest that later groups get little benefit, so how is the train and text accuracy so high if the loss of the groups that arrive later is also high?
iv) My last main concern is the writing. There are already significant typos in the abstract only, e.g.:
"The concept of fairness has been widely caught attention in FL" --> "has caught/attracted wide attention"
"an algorithm is dynamically fair and satisfies (...)" --> "if it satisfies"? The rest of the sentence does not seem to make sense otherwise In many places the writing seems informal and too, and there are quite a few places where punctuation (in particular comma) is missing. I think the paper needs a careful re-read from typos and grammatical mistakes.
Clarity, Quality, Novelty And Reproducibility
See above -- the paper is not very clear and the writing has mistakes, making it hard to understand. |
ICLR | Title
Fairness of Federated Learning with Dynamic Participants
Abstract
The concept of fairness has been widely caught attention in Federated Learning (FL). While there are tremendous studies about various notations of fairness in FL in recent years, all of them only consider the case where the training process starts and ends at the time point for all participants. Actually, participants could be dynamic and they may join and leave the training process at different time points. However, participants who join the training process at different time points receive similar incentive benefits can be seen as a signal of unfairness. In this paper, we provide the first study on such fairness of FL for dynamic participants. First, we propose a new mathematical definition of the above fairness namely dynamic fairness. Briefly speaking, an algorithm is dynamically fair and satisfies that local agents who participate in the model training longer should receive more benefits than those who participate in the process shorter. Second, we develop a simple but novel method, which could be seen as a normalized version of Fedavg, and theoretically show that it is fairer than Fedavg. Moreover, we can combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive. Finally, empirically we propose a measure for dynamic fairness and demonstrate that our method can achieve a fairer performance under our definition of fairness through intensive experiments on three benchmark datasets.
1 INTRODUCTION
As one of the most fundamental learning frameworks for preserving the privacy of distributed data, Federated Learning (FL) (Konečnỳ et al., 2016) has prospered in the machine learning community in the last few years. In the canonical FL setting, there are several local agents, and each of them holds a dataset for local training. And there is a controller (server) which aggregates gradient vectors or local models from agents for global model updates. During the training process, the agents only communicate their gradients or local models to the server and the original data never leaves the local agents. Therefore, FL can protect the data information of each agent from leaking. To comply with the privacy regulations such as the General Data Protection Regulation (GDPR) (gdpr), variants of FL frameworks have been widely studied, and recently adopted in industry, such as Apple’s “FE&T” (Paulik et al., 2021), Google’s Gboard (gboard), and Alibaba’s FederatedScope (Xie et al., 2022).
While the recent advances in FL present a promising framework to learn from distributed data privately and efficiently, most of the current research mainly focuses on the central server’s benefits, i.e., developing methods to improve the convergence rate or the generalization performance in the FL setting, while ignores local agents’ interests. However, such attention to the server’s benefits may cause fairness issues which make local agents less interested in participating in the model training. For instance, those methods usually apply thresholds such as bandwidth and transmission speed to selectively choose clients (Shi et al., 2021), which potentially leads to unfair client selection in the FL system. Local devices with low transmission speed might be neglected frequently during the training process, and eventually become never-represented or under-represented client groups. Moreover, some researchers have noticed that the participants sometimes suffer from unfair incentive rewards (Zhan et al., 2021). Kairouz et al. (2021) notice the free-rider problem in the FL system. In the free-rider scenario, clients who contribute less (e.g., better data quality vs. worse data quality)
in training the model receive the same resulting model as those who contribute more to the training. Distributing models with performance incommensurate to each participant’s contribution might discourage active clients from continuously collaborating in the model training.
To mitigate the unfairness issue, there have been numerous studies on fair FL considering various definitions of fairness in recent years, such as selection fairness (Zhou et al., 2021) and collaboration fairness (Lyu et al., 2020) (see the Related Work section for more details). However, all of these works only consider the case where all the participants are static, i.e., they join the training process at the same time point, while in practice such an assumption may not always hold as the participants may be dynamic, i.e., different agents could join or leave the training at different time points. In such a dynamic scenario, there are additional fairness issues compared with the static one. Consider the following case as an example: suppose the agents can join at different time points and never leave before the training process ends. In this case, the agents who join the training earlier (and thus contribute more) will expect higher benefits than the ones who join later. Thus, participants who join the training process at different time points but receive similar incentive benefits can be seen as a signal of inequality. However, to the best of our knowledge, there is no previous work studying such fairness in FL.
In this paper, we provide the first study to alleviate the above fairness issue caused by dynamic participants by providing some new definitions, methods, and measures. Specifically, our contributions can be summarized as follows:
1. First, we provide a rigorous definition for the above fairness, namely dynamic fairness. Briefly speaking, we call an algorithm dynamically fair if its performance is commensurate with the length of each client’s participating time. Equivalently, it satisfies that the agents with longer participation time receive more benefits, which can be seen as between-group fairness. Besides that, we also provide criteria to compare the dynamic fairness of two algorithms.
2. Next, we propose several dynamically fair methods. First, we propose a simple but efficient method namely Normalized Fedavg. Generally speaking, our method could be thought of as a normalized version of Fedavg where we use the normalized SGD instead of SGD for local training. Interestingly, we theoretically show that our algorithm is fairer than the vanilla Fedavg (McMahan et al., 2017). To further improve the convergence rate practically, we propose a method namely Modified Normalized Fedavg.
3. Moreover, due to the simplicity of the idea, our method is compatible with other fair FL methods. Specifically, we combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of the benefits they receive, i.e., we can additionally achieve within-group fairness.
4. Finally, we propose new measures for dynamic fairness and provide empirical studies of our methods. With extensive experiments on three datasets MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), we find that our methods are not only dynamically fair, but also achieve better fairness compared with Fedavg.
Due to the space limit, all the proofs, some additional sections, algorithms, and experiments of our methods are included in the Appendix.
2 RELATED WORK
Existing studies have proposed several definitions of fairness in federated learning. Zhou et al. (2021) proposed the concept of selection fairness: a fair FL model should provide more participation opportunities for never-represented or under-represented client groups. The following literature tries to promote a fair client selection by introducing the sampling constraints to the FL model (Huang et al., 2020).
Different from selection fairness, Li et al. (2019) mentioned that one essential notion of fairness is to accomplish a relatively uniform accuracy distribution across devices, which is defined as standard accuracy parity (Zafar et al., 2017). In a previous study, Li et al. (2021) suggested that reducing the variation of model performance on different clients’ datasets can be seen as a reliable indicator of standard accuracy parity. However, these works ignore the importance of clients’ contributions in training an FL model. For instance, as Lyu et al. (2020) proposed, a client who contributes more to the federated system deserves a better-performing local model than those who contribute less, which is defined as collaboration fairness. Lyu et al. (2020) proposed that the quality of each client’s uploaded gradients is sufficient to determine a participant’s contribution.
One critique of the above scenario is that the concept of time is ignored. When training an FL model, all the clients need to incur some cost to participate in the training. For instance, if a company wants to build a profitable FL model, they have to invest not only money and data but also plenty of time since training and commercialization of the FL models take time. Yu et al. (2020) introduced the idea of regret, which refers to the difference between the incentive rewards clients have received and what they should receive while taking how long they have waited to receive the payoff into account.
However, each participant’s training time was ignored in all the above scenarios. In this paper, we propose that, in the long term, clients who join an FL model training for a longer time should be rewarded with better model performance than those who participate for a shorter time, since they contribute more time to the model training.
3 DYNAMIC FAIRNESS FOR FEDERATED LEARNING
In this section, we will formally define the fairness discussed in the Introduction. Before that, we provide an overview of the standard Federated Learning (FL) setting.
In FL, there are $m$ agents, where the $i$-th agent has a local dataset $D_i = \{x_{i,j}\}_{j=1}^{n_i}$ (the data samples could be either i.i.d. or non-i.i.d. sampled), and a central server. We also have a loss function $\ell$, and the central server aims to solve the following minimization problem:
$$\min_{w\in\mathbb{R}^d} F(w) = \sum_{i=1}^{m} p_i F_i(w), \qquad (1)$$
where $F_i(w) = \frac{1}{|D_i|}\sum_{x\in D_i} \ell(w;x)$ is the empirical risk function for the $i$-th agent on his/her dataset $D_i$ and $p_i$ is the weight for the $i$-th agent, for example $p_i = \frac{n_i}{\sum_i n_i}$.
Dynamic Federated Learning Setting: Most of the previous work focuses on the case where all agents are static, i.e., all of them join the training process at the same time point (for simplicity, in this paper we assume one time step corresponds to one update of the global model). Here we consider a dynamic setting of FL. For simplicity, we consider a dynamic setting with a finite number of time points. That is, there are $S$ time points $t_1, \cdots, t_S$, and for each time point $t_i$ there is a set of agents $v_i$ who join the training process (for simplicity, we denote $t_1$ as the time when the training process starts). Since the server cannot get the information of future participants, its goal at time point $t_M$ with $1 \leq M \leq S$ is to minimize $\sum_{i\leq M} L_{v_i}(w)$, where $L_{v_i}(w) = \sum_{j\in v_i} p^M_j F_j(w)$ and $p^M_j$ is the weight for the $j$-th agent at time point $t_M$, i.e., the objective function is the weighted sum of the empirical risk functions of all the agents who join at or before time point $t_M$. 1 It is notable that when $M = S$, the objective function is equivalent to the original one in (1).
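To make the dynamic objective concrete, the following minimal numpy sketch evaluates the server's objective at a given time point. The quadratic loss, the size-proportional weights, and all names are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def agent_risk(w, X, y):
    # Empirical risk F_i(w) for one agent; a squared loss is assumed for illustration.
    return np.mean((X @ w - y) ** 2)

def dynamic_objective(w, groups_joined):
    # Weighted empirical risk over all agents whose group joined at or before t_M.
    # Weights p_j are assumed proportional to local dataset sizes, as in Fedavg.
    agents = [agent for group in groups_joined for agent in group]
    sizes = np.array([len(y) for (_, y) in agents], dtype=float)
    weights = sizes / sizes.sum()
    return float(sum(p * agent_risk(w, X, y) for p, (X, y) in zip(weights, agents)))

# Toy usage: group v1 joins at t_1, group v2 at t_2 (synthetic data, illustrative only).
rng = np.random.default_rng(0)

def make_agent(n):
    return rng.normal(size=(n, 5)), rng.normal(size=n)

v1 = [make_agent(50) for _ in range(3)]
v2 = [make_agent(50) for _ in range(3)]
w = np.zeros(5)
print(dynamic_objective(w, [v1]))       # objective before t_2
print(dynamic_objective(w, [v1, v2]))   # objective once v2 has joined
```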
As we mentioned earlier, in the above dynamic FL setting there could be additional fairness concerns. For example, consider two successive time points $t_1$ and $t_2$ with the associated participant sets $v_1$ and $v_2$ (we assume $t_2 > t_1$), and suppose the server conducts Fedavg to train the model. At time point $t_2$, from the perspective of the agents in $v_1$, the algorithm itself may be unfair to them, as the agents in $v_2$ can directly use the current model (which has already been trained for several rounds using the data in $v_1$) without any cost. We can see that the above unfairness is ubiquitous in the dynamic FL setting. In this paper, we aim to mitigate such unfairness. However, before showing our method, we need to provide a mathematical definition of the above fairness.
Defining such fairness is challenging. The most direct way is to use the value of the empirical risk function for a different set of participants vi, i.e., the value of Lvi(w). However, such measurement
1Note that in this paper, we assume all the agents will never leave the training process before the training process ends. We leave it as future research to study the case where each agent could join and leave the training.
is unsatisfactory, as our fairness should ensure that the agents gain more as they stay in the training longer, and the function value cannot reveal this relationship. In practice, we can actually use the “difference of accuracy” between different time points to measure the benefit, i.e., fairness. Consider an extreme case as an example, where half of the agents join at the beginning of the training, i.e., at $t_1$, and the other half join the training at the last time point $t_S$. When the training ends, we hope that the improvement of the accuracy for the first half of the agents is much greater than for the other half. Motivated by this, mathematically we can use the difference in the empirical risk function values to measure the improvement of test accuracy. Based on that, in the following Definition 1, we first define the benefit for group $v_i$ at the current time point $t$ by the difference between the empirical risk of group $v_i$ at its joining time point $t_i$ and at the current time point $t$. Definition 1 (Benefit). Under our dynamic FL setting, for a training algorithm A, the benefit at time point $t$ for a group $v_i$ joining the training at time point $t_i$ ($t > t_i$) is defined as
$$L^{v_i}_{t_i}(w_t) = L_{v_i}(w_{t_i}) - L_{v_i}(w_t), \qquad (2)$$
where $w_{t_i}$ and $w_t$ are the trained models at time points $t_i$ and $t$, respectively. Moreover, we define the benefit the agents in $v_i$ get at time point $t$ as $L_{v_i}(w_{t-1}) - L_{v_i}(w_t)$.
Based on the definition of benefit, we then propose our desired definition of a federated learning fairness criterion with dynamic participants. Generally speaking, we consider a training algorithm to be dynamically fair if the benefits of the agents who join earlier are higher than those of the agents who join later. Definition 2 (Absolute Dynamic Fairness). Under our dynamic FL setting, a training algorithm A is absolutely dynamically fair if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
$$L^{v_i}_{t_i}(w_t) > L^{v_j}_{t_j}(w_t), \qquad (3)$$
where $w_t$ is the trained model at time point $t$ of the algorithm.
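The benefit in Definition 1 and the check in Definition 2 could be computed along the lines of the following sketch; the `loss_fn` interface, the group representation, and the strict-inequality check over all pairs are assumptions made for illustration.

```python
import numpy as np

def group_loss(w, group, loss_fn):
    # L_{v_i}(w): dataset-size-weighted empirical risk over the agents in group v_i.
    sizes = np.array([len(y) for (_, y) in group], dtype=float)
    weights = sizes / sizes.sum()
    return float(sum(p * loss_fn(w, X, y) for p, (X, y) in zip(weights, group)))

def benefit(group, w_at_join, w_now, loss_fn):
    # Definition 1: L_{v_i}(w_{t_i}) - L_{v_i}(w_t), i.e. how much the group's loss
    # dropped between the model it received when joining and the current model.
    return group_loss(w_at_join, group, loss_fn) - group_loss(w_now, group, loss_fn)

def absolutely_dynamically_fair(benefits_sorted_by_join_time):
    # Definition 2 at one time point: every earlier-joining group must have a
    # strictly larger benefit than every later-joining group.
    b = benefits_sorted_by_join_time
    return all(b[i] > b[j] for i in range(len(b)) for j in range(i + 1, len(b)))
```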
Note that the fairness we propose in Definition 2 is real-time, i.e., Definition 2 holds regardless of whether $t$ is the last time point of training or a time point during training, as long as $t > t_j$. In practice, we not only want to design absolutely dynamically fair algorithms, but also expect to develop new fair algorithms that are fairer than existing ones. In the following, we quantify such relative fairness between two algorithms, i.e., the algorithm that allows the group that participates longer to get more benefits is the fairer one. Definition 3 (Relative Dynamic Fairness). Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms A and Ã. We call algorithm A dynamically fairer than algorithm à if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
$$L^{v_i}_{t_i}(w_t) - L^{v_j}_{t_j}(w_t) > L^{v_i}_{t_i}(\tilde{w}_t) - L^{v_j}_{t_j}(\tilde{w}_t), \qquad (4)$$
where $w_t$ and $\tilde{w}_t$ are the trained models at time point $t$ of algorithms A and Ã, respectively.
It is notable that in Definition 3 we require both A and à to be absolutely dynamically fair. This is necessary, as relative dynamic fairness cannot imply absolute dynamic fairness. Moreover, although there is no data distribution assumption in our previous definitions, we can see that they are more suitable for non-i.i.d. data across agents. This is because if all the data are i.i.d. and $m$ and each $n_i$ ($i \in [m]$) are large enough, then we have $L_{v_i}(w) \approx L_{v_j}(w)$ for any groups $i$ and $j$, as both of them are approximately equal to the underlying population risk $\mathbb{E}_{x\sim P}[\ell(w;x)]$ by Hoeffding’s inequality if the loss function is bounded, where $P$ is the underlying distribution of the data. In the ablation study of the experimental part we will also verify this empirically. Thus, in the following parts we will always consider the non-i.i.d. case.
Note that in Definition 1 we use the difference of the empirical loss at two time points to measure the benefit of an agent. However, there could be other ways to define the benefits, such as the relative difference. We will leave them as future work to consider these definitions of benefit.
4 ACHIEVING DYNAMIC FAIRNESS
In the previous section, we presented the dynamic fairness that we aim to study in this paper. Now we aim to develop methods that are absolutely dynamically fair. Moreover, we want them to be fairer than Fedavg (McMahan et al., 2017).
Before diving into details, let us go back to Fedavg to see why it may cause unfairness and how to improve its fairness. For simplicity, we consider the case where there are only two groups $v_1$ and $v_2$ which join the training at $t_1$ and $t_2$, respectively. We assume there is only one agent in each group with the same amount of data, and each agent performs one step of Gradient Descent (GD) locally and then sends the model to the server to be aggregated. Suppose we have already trained the model with the agents in $v_1$ for a long time, and we now reach time point $t_2$ with model $w_{t_2}$. Now we consider time point $t_2 + 1$. We will show the above variant of Fedavg is unfair:
Theorem 1. Under the above setting, Fedavg is not dynamically fair at time point $t_2 + 1$ if $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is sufficiently large such that $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(\|\nabla_w L_{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(L_{v_1}(w_{t_2}) - L_{v_1}(w_1))$, and $\eta = O(1)$, where $\eta$ is the stepsize of GD for each agent.
Note that although in Theorem 1 we need to assume that $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is sufficiently large, such an assumption is quite natural. As we know, $w_{t_2}$ is the model trained via $L_{v_1}$ for several rounds, which implies that $\|\nabla_w L_{v_1}(w_{t_2})\|_2$ will be small enough. On the other side, since we obtain $w_{t_2}$ before $v_2$ joins and we assume the data in $v_1$ and $v_2$ are non-i.i.d. sampled, $w_{t_2}$ will be far from the minimizer of $L_{v_2}(w)$, i.e., $\|\nabla_w L_{v_2}(w_{t_2})\|_2$ is large. Moreover, $w_1$ is the initializer at time $t_1$; thus, when $w_1$ is close to the minimizer of $L_{v_1}$, $L_{v_1}(w_{t_2}) - L_{v_1}(w_1)$ could also be small.
In the following, we will intuitively explain why the previous Fedavg is unfair. We assume both $L_{v_1}(w)$ and $L_{v_2}(w)$ are $L$-smooth, $\mu$-strongly convex and 1-Lipschitz. Then, by the assumptions of smoothness and strong convexity and the gradient descent step in each agent, we have
$$\left(\eta - \tfrac{\eta^2 L}{2}\right)\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 \leq L_{v_2}(w_{t_2}) - L_{v_2}(w^2_{t_2+1}) \leq \left(\eta - \tfrac{\eta^2 \mu}{2}\right)\|\nabla_w L_{v_2}(w_{t_2})\|_2^2, \qquad (5)$$
$$\left(\eta - \tfrac{\eta^2 L}{2}\right)\|\nabla_w L_{v_1}(w_{t_2})\|_2^2 \leq L_{v_1}(w_{t_2}) - L_{v_1}(w^1_{t_2+1}) \leq \left(\eta - \tfrac{\eta^2 \mu}{2}\right)\|\nabla_w L_{v_1}(w_{t_2})\|_2^2, \qquad (6)$$
where $\eta$ is the stepsize and $w^i_{t_2+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing GD. If the benefit from the aggregation step in the server is sufficiently small, then from (5) we can see that the benefit for $v_2$, which depends on $\Theta(\|\nabla_w L_{v_2}(w_{t_2})\|_2^2)$, could be very large. On the other side, for $v_1$, $\|\nabla_w L_{v_1}(w_{t_2})\|_2$ is very small, indicating that the benefit they get in this round is quite small. If they did not get a large benefit in the rounds before $v_2$ joined, then the total benefit for $v_1$ will be less than the benefit for $v_2$, i.e., the algorithm is unfair.
From the previous intuitive analysis, we can see that in order to make the agents in $v_2$ get less benefit at time point $t_2 + 1$, we cannot use GD (or similarly SGD), as it could make the benefit depend on $\Theta(\|\nabla_w L_{v_2}(w_{t_2})\|_2^2)$, which is quite large. Equivalently, the $\ell_2$-norm of the gradient plays an important role in the benefit of the agents in $v_2$. Motivated by this, a natural remedy is to perform normalized gradient descent (NGD) instead of GD, i.e., $w^2_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2}$. In this case, when $L_{v_2}$ is 1-Lipschitz and we use the same stepsize as above, we have
$$L_{v_2}(w_{t_2+1}) - L_{v_2}(w_{t_2}) \leq \|w_{t_2+1} - w_{t_2}\|_2 \leq 2\eta,$$
i.e., the benefit is now bounded by $O(\eta)$, which is much smaller than $\|\nabla_w L_{v_2}(w_{t_2})\|_2$. This indicates that as long as the benefit of $v_1$ at $t_2$, i.e., $L_{v_1}(w_{t_1}) - L_{v_1}(w_{t_2})$, is greater than $2\eta$, the algorithm will be absolutely dynamically fair at $t_2 + 1$. Moreover, since we now limit the benefit for $v_2$, we can show that using NGD is fairer than implementing the above vanilla Fedavg.
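The contrast between a GD and an NGD local step can be illustrated as below; the toy gradient and step size are assumptions used only to show that the NGD step length, and hence the per-round benefit of a newly joined group with a large gradient, is capped at roughly $\eta$.

```python
import numpy as np

def gd_step(w, grad, eta):
    # Plain gradient descent: the induced loss decrease scales with ||grad||^2.
    return w - eta * grad

def ngd_step(w, grad, eta):
    # Normalized gradient descent: the step length is exactly eta, so for a
    # 1-Lipschitz loss the per-round benefit is bounded by O(eta) no matter how
    # large the gradient of the newly joined group is.
    norm = np.linalg.norm(grad)
    return w - eta * grad / max(norm, 1e-12)  # small floor avoids division by zero

# Toy illustration with an artificially large gradient (values are assumptions).
w = np.zeros(5)
large_grad = 50.0 * np.ones(5)   # mimics a freshly joined group far from its optimum
eta = 0.01
print(np.linalg.norm(gd_step(w, large_grad, eta) - w))   # ~1.12: large move, large benefit
print(np.linalg.norm(ngd_step(w, large_grad, eta) - w))  # 0.01: move capped at eta
```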
Theorem 2. Consider the same setting as in Theorem 1 with normalized GD and fixed $w_1$, $w_{t_2}$ and $\eta$. Then, if $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(1)$, NGD is dynamically fairer than the above Fedavg at time point $t_2 + 1$.
Note that in the previous theorem we only considered the time point $t_2 + 1$. As we can see from the experimental part, our algorithm is fairer than Fedavg in practice at each time point. Moreover, the above result relies on the assumption $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(1)$. Actually, when $\|\nabla_w L_{v_2}(w)\|_2$ is small enough, the group $v_2$ will get a smaller benefit, i.e., for any $q > 0$, by the properties of the loss function we have
$$\eta\|\nabla_w L_{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \leq L_{v_2}(w_{t_2+q}) - L_{v_2}(\tilde{w}^2_{t_2+q+1}) \leq \eta\|\nabla_w L_{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
$$\eta\|\nabla_w L_{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \leq L_{v_1}(w_{t_2+q}) - L_{v_1}(\tilde{w}^1_{t_2+q+1}) \leq \eta\|\nabla_w L_{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
where $\tilde{w}^i_{t_2+q+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing NGD and $w_{t_2+q+1} = \frac{\tilde{w}^1_{t_2+q+1} + \tilde{w}^2_{t_2+q+1}}{2}$. If we ignore the benefit of the aggregation step in the server, then from the previous two results we can see that the benefit the group $v_i$ gets is $\Theta(\eta\|\nabla_w L_{v_i}(w_{t_2+q})\|_2)$. Thus, when $\eta$ and the two gradient norms are small, the benefits are also small and can be considered equal. In total, when $\|\nabla_w L_{v_i}(w_{t_2+q})\|_2$ is large, then as long as $L_{v_1}(w_{t_1}) - L_{v_1}(w_t) \geq \omega(\eta)$ our previous algorithm will be dynamically fair; and when $\|\nabla_w L_{v_i}(w_{t_2+q})\|_2$ becomes sufficiently small, both groups get almost the same benefit at time point $t_2 + q$, so our algorithm is still dynamically fair.
Algorithm 1 Normalized Fedavg: Two groups $v_1$, $v_2$ with joining time points $t_1$, $t_2$ ($t_1 = 1$, $t_1 < t_2$). $|v|$ indicates the number of clients in group $v$, $|B|$ is the local minibatch size, $E$ is the number of local epochs, $\eta$ is the learning rate, and $C$ is a constant.

Server executes:
1: initialize $w_1$
2: for each round $t = 1, 2, \dots$ do
3:   if $t < t_2$ then $m \leftarrow \max(C \cdot |v_1|, 1)$
4:   else $m \leftarrow \max(C \cdot (|v_1| + |v_2|), 1)$
5:   $S_t \leftarrow$ (random set of $m$ clients)
6:   for each client $k \in S_t$ in parallel do
7:     $w^k_{t+1} \leftarrow$ ClientUpdate($k$, $w_t$)
8:   if $t \geq t_2$ then $w^k_{t+1} \leftarrow w_t - \eta_t \frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|}$  // Normalization
9:   $w_{t+1} \leftarrow \sum_{k \in S_t} p^i_k w^k_{t+1}$, where $p^i_k$ is the weight with $i = 1$ when $t < t_2$ and $i = 2$ otherwise

ClientUpdate($k$, $w$): // Run on client $k$
1: $\mathcal{B} \leftarrow$ (split $D_k$ into batches of size $|B|$)
2: for each local epoch $i$ from 1 to $E$ do
3:   for batch $b \in \mathcal{B}$ do
4:     $w \leftarrow w - \eta_t \nabla \ell(w; b)$
5: return $w$ to server
Based on the above idea of normalizing the gradient to limit the benefit of each newly joined agent, we can modify vanilla Fedavg to improve its dynamic fairness, i.e., we propose Normalized Fedavg in Algorithm 1 (for simplicity we only present the case with two groups; the multi-group case can be easily generalized). Compared with previous normalized stochastic gradient descent (NSGD) methods (Zhao et al., 2020; Cutkosky & Mehta, 2020; You et al., 2019; Hazan et al., 2015), there are two critical differences. First, while existing work on NSGD normalizes the gradients at each iteration, in Algorithm 1 each agent still uses SGD to train the local model and then sends the model to the server; the server normalizes these local model updates (step 8) and then performs the aggregation step (step 9). This is because in practice we find that directly using NSGD locally for all agents makes the algorithm hard to converge. Thus, before the normalization step, we still let each agent perform SGD (step 7). The second difference is that, while previous NSGD-based methods calculate the global norm over all parameters of the model and then perform the normalization step, in Algorithm 1 we use the layer-wise norm (LN) for the normalization step, as using the global norm could lead to non-convergence (see the experiments in Section D.1 in the Appendix for details).
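A sketch of the server-side normalization (step 8) and aggregation (step 9) of Algorithm 1, representing a model as a dict of per-layer arrays; the data structure and the zero-norm guard are assumptions, while the layer-wise normalization itself follows the description above.

```python
import numpy as np

def layerwise_normalized_update(w_global, w_client, eta):
    """Step 8 of Algorithm 1 (sketch): replace the client's model update by a
    unit-norm update per layer, scaled by eta.

    w_global, w_client: dicts mapping layer name -> np.ndarray of parameters.
    """
    new_client = {}
    for name in w_global:
        delta = w_global[name] - w_client[name]   # the client's model update
        norm = np.linalg.norm(delta)
        norm = norm if norm > 0 else 1.0          # guard against a zero update
        new_client[name] = w_global[name] - eta * delta / norm
    return new_client

def aggregate(client_models, weights):
    # Step 9: weighted average of the (normalized) client models.
    return {name: sum(p * m[name] for p, m in zip(weights, client_models))
            for name in client_models[0]}
```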
Although in practice we found Normalized Fedavg can indeed improve the dynamic fairness compared with Fedavg, its convergence rate is quite slow. The main reason is that the model update of local agents becomes quite small at the early stage after adding group v2. To address the issue, we simply modify the normalization update step (step 8 in Algorithm 1) by adding a decayed coefficient. We choose the model update norm G at the time point when group v2 joined in the training
Algorithm 2 The Modified Normalized Fedavg algorithm. $\beta$ is a hyperparameter taking values from 0 to 1; its default value is 1. $LN : (w^{[0]}, \dots, w^{[L-1]}) \rightarrow (\|w^{[0]}\|_2, \dots, \|w^{[L-1]}\|_2)$ is the function that computes the model update norm at each layer.

Server executes:
1: initialize $w_0$, $\beta$
2: for each round $t = 1, 2, \dots$ do
3:   if $t < t_2$ then $m \leftarrow \max(C \cdot |v_1|, 1)$ else $m \leftarrow \max(C \cdot (|v_1| + |v_2|), 1)$
4:   $S_t \leftarrow$ (random set of $m$ clients)
5:   if $t = t_2 + 1$ then $\mathcal{G} \leftarrow LN(w_{t-1} - w_t)$
6:   for each client $k \in S_t$ in parallel do
7:     $w^k_{t+1} \leftarrow$ ClientUpdate($k$, $w_t$)
8:   if $t \geq t_2 + 1$ then $w^k_{t+1} \leftarrow w_t - \left(\beta \frac{\mathcal{G}}{1 + t - t_2} + (1 - \beta) LN(w_t - w^k_{t+1})\right) \frac{w_t - w^k_{t+1}}{LN(w_t - w^k_{t+1})}$
9:   $w_{t+1} \leftarrow \sum_{k \in S_t} p^i_k w^k_{t+1}$

ClientUpdate($k$, $w$): same as Algorithm 1
as the initial value of this coefficient, and decay it as the training round increases. The modified formula for the local device model update is
$$w^k_{t+1} := w_t - \frac{G}{1 + t - t_2} \cdot \frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \qquad (7)$$
where $t \geq t_2 + 1$ and $G = \|w_{t_2} - w_{t_2+1}\|$. In order to further enhance the applicability of the algorithm, we introduce another hyperparameter $\beta$ to combine Fedavg and Normalized Fedavg:
$$w^k_{t+1} := w_t - \left(\beta \frac{G}{1 + t - t_2} + (1 - \beta)\|w_t - w^k_{t+1}\|_2\right) \frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \qquad (8)$$
where the value of $\beta$ is from 0 to 1. If the value of $\beta$ is close to 1, the algorithm will be fairer; if the value is close to 0, the algorithm will converge faster and behave closer to Fedavg.
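The modified update in (8) could be implemented on the server roughly as follows; the per-layer dict representation and the handling of $G$ as a per-layer constant recorded at round $t_2 + 1$ are assumptions consistent with Algorithm 2 but not taken from released code.

```python
import numpy as np

def modified_normalized_update(w_global, w_client, G, t, t2, beta=1.0):
    """Server-side step 8 of Algorithm 2 (sketch), applied per layer for t >= t2 + 1.

    w_global, w_client: dicts of layer name -> np.ndarray.
    G: dict of layer name -> float, the per-layer update norm recorded at round t2 + 1.
    beta = 1 recovers Normalized Fedavg with a decayed step; beta = 0 recovers Fedavg.
    """
    new_client = {}
    for name in w_global:
        delta = w_global[name] - w_client[name]
        norm = np.linalg.norm(delta)
        norm = norm if norm > 0 else 1.0
        step = beta * G[name] / (1 + t - t2) + (1 - beta) * norm   # coefficient in Eq. (8)
        new_client[name] = w_global[name] - step * delta / norm
    return new_client
```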
Note that in the above methods we normalize the model update (gradients) to limit each agent’s benefit. A natural question is whether we can use other operations. We know that, other than normalization, clipping is another commonly used operation in deep learning (e.g., for poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose Clipping Fedavg (Algorithm 3). We find through experiments that clipping can also improve fairness; however, its improvement compared with Fedavg is quite limited. See Section B in the Appendix for details.
Actually, due to the simplicity of our idea, our methods are compatible with other fair FL methods. Specifically, we combine our method with the existing methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of the benefits they receive, i.e., we can additionally achieve within-group fairness. See Section C in the Appendix for details.
5 EXPERIMENTS
In this section, we will study the practical performance of our proposed algorithms on several benchmark datasets.
Experimental Settings: To verify whether the normalization-based methods can indeed improve dynamic fairness, we design the Two-groups experiment to simulate the scenario in which some clients join first (group 1) while others join the training at time point $t_2$ (group 2). In this type of experiment, each group contains 5 clients.
To make our experiments more convincing and applicable, we also design the Multi-groups experiment, in which more clients are added to the training at several different time points. In detail, there are $S$ groups $\{v_1, \dots, v_S\}$ ($S \geq 2$), and each group is added to the training at a specific time point $\{t_1, \dots, t_S\}$ ($t_1 = 1$). In the Multi-groups experiment, each group contains 3 clients. For all experiments, all clients run 5 local epochs with a local batch size of 32 and $\eta = 10^{-2}$ in each round. Also, we define a parameter $\alpha$ to control the degree of non-i.i.d.-ness of the dataset, i.e., if two groups join the training with $\alpha = 0.9$ and the dataset has 10 classes, then one group holds 90% of the data in 5 of the classes and the second group holds 90% of the data in the other 5 classes. For the Two-groups experiment, we choose 10 clients in total and each group contains 5 clients; the second group is added to the training in round 10. For the Multi-groups experiment, we choose 30 clients in total and each group contains 3 clients. The time points at which the groups join the training are in the set $\{0, 10, 20, 30, 40, 50, 60, 70, 80, 90\}$.
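As an illustration of the $\alpha$-controlled non-i.i.d. split described above, the sketch below partitions a labelled dataset between two groups; the exact split used in our experiments may differ in details, so this is only an assumed reading of the description.

```python
import numpy as np

def two_group_noniid_split(labels, alpha=0.9, num_classes=10, seed=0):
    """Give group 1 a fraction alpha of the samples from the first half of the classes
    (and 1 - alpha of the rest), and the mirror image to group 2."""
    rng = np.random.default_rng(seed)
    g1_idx, g2_idx = [], []
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        cut = int(alpha * len(idx))
        if c < num_classes // 2:
            g1_idx.extend(idx[:cut]); g2_idx.extend(idx[cut:])
        else:
            g2_idx.extend(idx[:cut]); g1_idx.extend(idx[cut:])
    return np.array(g1_idx), np.array(g2_idx)

# Toy usage on synthetic labels (illustrative only).
labels = np.repeat(np.arange(10), 100)
g1, g2 = two_group_noniid_split(labels, alpha=0.9)
print(len(g1), len(g2))  # 500 and 500 samples, skewed toward different classes
```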
Datasets and Models: We use three classical datasets, MNIST (LeCun et al., 1998), Fashion MNIST (FMNIST) (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), and two popular models, LeNet (LeCun et al., 1998) and ResNet18 (He et al., 2016), to evaluate our algorithms. Like most works, we use LeNet on MNIST and FMNIST, and ResNet18 on CIFAR10 for evaluation, respectively.
Evaluation Metrics: Based on Definition 1, to better describe the benefits of all groups during the training process, we propose the group benefit (GB) as one of our experimental metrics. This metric captures the difference in loss values between the groups that join later and the groups that join earlier. A positive value of GB indicates that the algorithm is fair, and a larger value indicates better fairness. The metric is defined in (9). GB (train) and GB (test) are calculated on the training dataset and the test dataset, respectively. If there is only one group, we set GB to 0.
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L_{v_{i+1}}(w_t) - L_{v_i}(w_t)\right) - 1, \qquad (9)$$
where $n \in [2, N]$ denotes the number of currently running groups. It should be noted that the metric here does not exactly follow Definition 1; the reason is explained in Section E in the Appendix.
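For reference, the GB metric in (9) can be computed directly from the current group losses, as in the following sketch; the ordering convention (earliest-joining group first) is an assumption.

```python
import numpy as np

def group_benefit_metric(group_losses):
    """GB_t from Eq. (9). `group_losses` holds L_{v_i}(w_t) for the currently running
    groups, ordered from the earliest-joining group to the latest. Returns 0 if only
    one group is running, matching the convention in the text."""
    n = len(group_losses)
    if n < 2:
        return 0.0
    diffs = [group_losses[i + 1] - group_losses[i] for i in range(n - 1)]
    return float(np.mean(np.exp(diffs)) - 1.0)

# Example: a later-joining group with higher loss than an earlier one gives GB > 0.
print(group_benefit_metric([0.4, 0.9]))   # exp(0.5) - 1 ≈ 0.65, a dynamically fair signal
```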
Main Experiment Results: The experiment results are shown in Figures 1 and 2. It can be seen that Algorithms 1 and 2 exhibit much higher benefits than Fedavg, and their benefits eventually converge to a large positive value on all the datasets. In contrast, Fedavg makes the value of GB smaller
or even negative (absolute unfairness). A noteworthy phenomenon is that for Fedavg, compared with its GB values that keep decreasing during the training process, the GB values increase slightly during the testing process for both datasets (MNIST and CIFAR10). However, the increased values are still much smaller than those of our methods. Remarkably, we find that Algorithm 2 is fairer than Algorithm 1 in Figure 1, but Figure 2 shows the exact opposite phenomenon. Therefore, for these two algorithms, we cannot conclude which one outperforms the other as measured by GB, which indicates that the strategies in Algorithm 2 do not significantly reduce fairness compared with Algorithm 1.
Figures 1 and 2 both illustrate a negative correlation between fairness and convergence rate. Algorithms 1 and 2 have better fairness performance, but their convergence rates are slower than that of Fedavg. Meanwhile, we can conclude that Algorithm 2 converges faster than Algorithm 1, which shows the effectiveness of our proposed Algorithm 2.
To summarize, both Algorithms 1 and 2 demonstrate greater dynamic fairness than Fedavg. Besides, Algorithm 2 achieves a faster convergence rate than Algorithm 1 while maintaining a similar level of dynamic fairness.
We defer the ablation study to Section D.1 in the Appendix due to the space limit.
6 CONCLUSION
In this paper, we focused on fairness in the setting of Federated Learning with dynamic participants, meaning that clients can join the training at different time points. We proposed a new definition of federated learning fairness, namely dynamic fairness, to guarantee higher benefits for local agents who participate in the FL model training for longer time periods than for those who participate for shorter periods. We developed normalization-based algorithms built on Fedavg to guarantee dynamic fairness. Furthermore, we improved the efficiency of Normalized Fedavg via additional strategies. Intensive experimental results showed that our methods are dynamically fair and, specifically, fairer than Fedavg.
A OMITTED PROOFS
Proof of Theorem 1. In Fedavg, on the server side we compute $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta\nabla_w L_{v_2}(w_{t_2})$ and $w^1_{t_2+1} = w_{t_2} - \eta\nabla_w L_{v_1}(w_{t_2})$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L_{v_1}(w_{t_2+1}) = L_{v_1}(w_{t_2}) - \eta\nabla_w L_{v_1}(w_{t_2}) \cdot \frac{\nabla_w L_{v_1}(w_{t_2}) + \nabla_w L_{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right], \qquad (10)$$
$$L_{v_2}(w_{t_2+1}) = L_{v_2}(w_{t_2}) - \eta\nabla_w L_{v_2}(w_{t_2}) \cdot \frac{\nabla_w L_{v_1}(w_{t_2}) + \nabla_w L_{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right]. \qquad (11)$$
Thus we have
$$[L_{v_2}(w_{t_2}) - L_{v_2}(w_{t_2+1})] - [L_{v_1}(w_{t_2}) - L_{v_1}(w_{t_2+1})] \approx \frac{\eta}{2}\left(\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2^2\right).$$
Thus, based on Definition 1, the difference between the benefit for the agents in $v_2$ and the benefit for the agents in $v_1$ is approximately equal to
$$\frac{\eta}{2}\left[\|\nabla_w L_{v_2}(w_{t_2})\|_2^2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2^2\right] + \left[L_{v_1}(w_{t_2}) - L_{v_1}(w_1)\right].$$
Thus, when $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(\|\nabla_w L_{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq \Omega(L_{v_1}(w_{t_2}) - L_{v_1}(w_1))$, the benefit for $v_2$ is larger and the algorithm is no longer fair.
Proof of Theorem 2. With NGD, on the server side we compute $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta \frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2}$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L_{v_1}(w_{t_2+1}) = L_{v_1}(w_{t_2}) - \frac{\eta}{2}\nabla_w L_{v_1}(w_{t_2}) \cdot \left(\frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right], \qquad (12)$$
$$L_{v_2}(w_{t_2+1}) = L_{v_2}(w_{t_2}) - \frac{\eta}{2}\nabla_w L_{v_2}(w_{t_2}) \cdot \left(\frac{\nabla_w L_{v_1}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right]. \qquad (13)$$
Thus we have
$$[L_{v_2}(w_{t_2}) - L_{v_2}(w_{t_2+1})] - [L_{v_1}(w_{t_2}) - L_{v_1}(w_{t_2+1})] \approx \frac{\eta}{2}\left(\|\nabla_w L_{v_2}(w_{t_2})\|_2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L_{v_1}(w_{t_2}) \cdot \nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right).$$
Thus, based on Definition 1, the difference between the benefit for the agents in $v_2$ and the benefit for the agents in $v_1$ is
$$\frac{\eta}{2}\left(\|\nabla_w L_{v_2}(w_{t_2})\|_2 - \|\nabla_w L_{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L_{v_1}(w_{t_2}) \cdot \nabla_w L_{v_2}(w_{t_2})}{\|\nabla_w L_{v_1}(w_{t_2})\|_2\|\nabla_w L_{v_2}(w_{t_2})\|_2}\right) + \left[L_{v_1}(w_{t_2}) - L_{v_1}(w_1)\right]. \qquad (14)$$
This is smaller than the corresponding difference in the case of Theorem 1 with fixed $w_1$, $w_{t_2}$ and $\eta$ when $\|\nabla_w L_{v_2}(w_{t_2})\|_2 \geq 2$. Thus, NGD is fairer than the vanilla Fedavg of Theorem 1.
B CLIPPING FEDAVG AND EXPERIMENT
Note that in the above methods we normalize the model update (gradients) to improve the algorithms’ fairness. A natural question is whether we can use other operations. Other than normalization, clipping is another commonly used operation in deep learning (e.g., for poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose the Clipping Fedavg algorithm, whose complete pseudo-code is given in Algorithm 3. From the experiments in Figure 3, we find that, over the whole training stage, it is hard to argue that clipping can improve the fairness of the algorithm. We can see that although Algorithm 3 maintains a high level of fairness in the early stage after the new group joins, it is not significantly different from Fedavg in the later stage. Therefore, we do not recommend using Algorithm 3 to improve fairness in practical applications.
Algorithm 3 The Clipping Fedavg algorithm. The running clients are indexed by $k$, and $\eta$ is the learning rate. $LM : (w^{[0]}, \dots, w^{[L-1]}) \rightarrow (\max(w^{[0]}), \dots, \max(w^{[L-1]}))$ is the function that computes the maximum value of the model update at each layer, and $Clip_g : (w^{[0]}, \dots, w^{[L-1]}) \rightarrow (\mathrm{threshold}(w^{[0]}, \pm g), \dots, \mathrm{threshold}(w^{[L-1]}, \pm g))$ is the function that clips the parameters at each layer with a specified value, where $\mathrm{threshold}(w, \pm g)$ limits $w$ to the range $\pm g$.

Server executes:
1: initialize $w_0$
2: for each round $t = 1, 2, \dots$ do
3:   if $t < t_2$ then $m \leftarrow \max(C \cdot |v_1|, 1)$ else $m \leftarrow \max(C \cdot (|v_1| + |v_2|), 1)$
4:   $S_t \leftarrow$ (random set of $m$ clients)
5:   if $t = t_2$ then $\mathcal{G} \leftarrow LM(w_{t-1} - w_t)$
6:   for each client $k \in S_t$ in parallel do
7:     $w^k_{t+1} \leftarrow$ ClientUpdate($k$, $w_t$)
8:   if $t \geq t_2$ then $w^k_{t+1} \leftarrow w_t - Clip_{\mathcal{G}}(w_t - w^k_{t+1})$  // Clipping
9:   $w_{t+1} \leftarrow \sum_{k \in S_t} p_k w^k_{t+1}$
10:  if $t \geq t_2$ then compute the test losses $L_{v_1}(w_t)$, $L_{v_2}(w_t)$

ClientUpdate($k$, $w$): same as Algorithm 1
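For completeness, a sketch of the layer-wise clipping step of Algorithm 3; the dict-of-layers model representation and the use of the recorded per-layer maximum $\mathcal{G}$ as the clipping threshold are assumptions consistent with the pseudo-code above.

```python
import numpy as np

def clip_update(w_global, w_client, G):
    """Server-side clipping step of Algorithm 3 (sketch): clip each coordinate of
    the client's model update to [-G[layer], +G[layer]] before applying it.

    w_global, w_client: dicts of layer name -> np.ndarray; G: dict of layer name -> float.
    """
    new_client = {}
    for name in w_global:
        delta = w_global[name] - w_client[name]
        clipped = np.clip(delta, -G[name], G[name])
        new_client[name] = w_global[name] - clipped
    return new_client
```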
C FURTHER EXPANSION OF THE FAIRNESS DEFINITION
Additionally, we combine our method with the existing methods in fair FL for static participants. Via such an approach, we can minimize the discrepancy of benefits for the agents who join the training process at the same time point and guarantee fair treatment for them, i.e., we can achieve within-group fairness.
By combining the fairness definition of Li et al. (2019) and ours, we extend Definitions 2 and 3 to Definition 4, which includes between-group fairness (guaranteeing that agents with longer participation time benefit more) and within-group fairness (guaranteeing performance uniformity for agents with the same participation time).
Definition 4 (Dynamic Fairness, Extended). Under our dynamic FL setting, a training algorithm A is absolutely dynamically fair if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
Between-group fairness: $L^{v_1}_{t_1}(w_t) > L^{v_2}_{t_2}(w_t)$, (15)
Within-group fairness: $\mathrm{std}_{k\in v}\{F_k(w)\} \rightarrow 0$. (16)
Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms A and Ã. We call algorithm A dynamically fairer than algorithm à if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
Between-group fairness: $L^{v_1}_{t_1}(w_t) - L^{v_2}_{t_2}(w_t) > L^{v_1}_{t_1}(\tilde{w}_t) - L^{v_2}_{t_2}(\tilde{w}_t)$, (17)
Within-group fairness: $\mathrm{std}_{k\in v_i}\{F_k(w)\} < \mathrm{std}_{k\in v_i}\{F_k(\tilde{w})\}$. (18)
Here $L^{v_i}_{t_i}(w_t)$ is defined in Definition 1, $\mathrm{std}_{k\in v_i}\{F_k(w)\}$ denotes the standard deviation of the test losses of all devices in group $v_i$, and $v_i$ is any of the groups currently participating.
We further modified our algorithm by combining the above algorithms with q-Fedavg (Li et al., 2019), which is an effective solution for within-group fairness. The pseudo-code of our modified algorithm is given in Algorithm 4.
Algorithm 4 We merge our methods (step 7) into q-Fedavg. The notation $k$ is the index of running clients, $w_t$ is the global model at the current round $t$, and $\eta$ is the learning rate. $LN : (w^{[0]}, \dots, w^{[L-1]}) \rightarrow (\|w^{[0]}\|, \dots, \|w^{[L-1]}\|)$ is the function used to compute the model update norm at each layer. $q$ is a hyperparameter of q-Fedavg, and its default value is 0.1.

Server executes:
1: initialize $w_0$
2: for each round $t = 1, 2, \dots$ do
3:   if $t < t_2$ then $m \leftarrow \max(C \cdot |v_1|, 1)$ else $m \leftarrow \max(C \cdot (|v_1| + |v_2|), 1)$
4:   $S_t \leftarrow$ (random set of $m$ clients)
5:   for each client $k \in S_t$ in parallel do
6:     $w^k_{t+1}, F_k(w_t) \leftarrow$ ClientUpdate($k$, $w_t$)
7:     $w^k_{t+1} \leftarrow$ our operation (based on Algorithms 1, 2, 3)
8:     $\triangle^k_t = F^q_k(w_t) \cdot (w_t - w^k_{t+1})$
9:     $h^k_t = q F^{q-1}_k(w_t)\, LN(w_t - w^k_{t+1}) + F^q_k(w_t)$
10:  $w_{t+1} \leftarrow w_t - \sum_{v \in S_t}\sum_{k \in v} p_k \frac{\triangle^k_t}{h^k_t}$
11:  if $t \geq t_2$ then compute the test losses $L_{v_1}(w_t)$, $L_{v_2}(w_t)$

ClientUpdate($k$, $w$): same as Algorithm 1
We now provide the experiment results for Algorithm 4.
First, we define an evaluation metric loss std (LS) for within-group fairness. This metric indicates the level of performance uniformity across clients within the same group. A lower value of the metric indicates higher uniformity. If only one group runs, we set LS to 0.
$$LS_t = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\frac{\sum_{k=1}^{|v_i|}\left(F^k_t - L^{v_i}_t\right)^2}{|v_i|}} \qquad (19)$$
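The LS metric in (19) can be computed as in the sketch below; we assume the group loss $L^{v_i}_t$ is the mean of that group's per-client test losses, and that each running group is passed in as a list of client losses.

```python
import numpy as np

def loss_std_metric(per_group_client_losses):
    """LS_t from Eq. (19): average over the running groups of the standard deviation
    of client test losses within each group. Returns 0 if only one group is running,
    matching the convention in the text."""
    if len(per_group_client_losses) < 2:
        return 0.0
    # np.std uses the population formula (divide by |v_i|), matching Eq. (19).
    stds = [np.std(np.asarray(losses)) for losses in per_group_client_losses]
    return float(np.mean(stds))

# Example: two groups of three clients each (loss values are made up).
print(loss_std_metric([[0.30, 0.32, 0.28], [0.55, 0.70, 0.60]]))
```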
Second, we verified that q-Fedavg (Li et al., 2019) is still valid under our Two-groups experiment condition, as shown in Figure 4 (Section 5). We then selected $q = 0.1$, which best fits Algorithm 4.
Last, as shown in Figure 5, we find that Algorithm 4 can improve both between-group fairness (greater group benefit) and within-group fairness (lower LS).
D ADDITIONAL EXPERIMENTAL RESULTS
D.1 ABLATION STUDY
In the ablation study, we use the experiment results of the "Two-groups experiment" (MNIST) as the control group. We explore the effects of three key variables ($\alpha$, global vs. layer norm, and $\beta$) on the experiment results.
Impact of $\alpha$. Figure 6 shows that under a low level of non-i.i.d.-ness, fairness is not guaranteed regardless of whether normalization is implemented. Only under strong non-i.i.d.-ness can normalization guarantee fairness and be fairer than Fedavg, which is consistent with our discussion in the previous sections.
Impact of global / layer norm. We investigate whether the global parameter norm of the model update or the per-layer parameter norm should be used in Algorithms 1 and 2. As seen in Figure 7(a), the global norm over all parameters of the model update in Algorithm 2 prevents the model from converging. In contrast, the layer-wise norm (LN) allows the model to converge.
Impact of $\beta$ for Algorithm 2. Due to the necessary trade-off between fairness and convergence speed in practical applications, we expect the hyperparameter $\beta$ to regulate the degree of fairness of Algorithm 2. Figure 7(b) demonstrates that the effect of adjusting $\beta$ is consistent with our expectation.
D.2 EVALUATION OF OTHER MODELS
We redid the Two-groups experiment using linear regression and a 2-layer neural network and obtained the same results (Figure 8) as with LeNet. This shows that our method is not limited to a particular model.
E SUPPLEMENTAL NOTION
Explanation of the implementation of the evaluation metric "Group Benefit": If we follow the benefit definition (2), then equation (9) should be rewritten as
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L^{v_i}_{t_i}(w_t) - L^{v_{i+1}}_{t_{i+1}}(w_t)\right) - 1 = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\Big(\underbrace{\left(L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}})\right)}_{\text{our proposed algorithms do not change it}} + \left(L_{v_{i+1}}(w_t) - L_{v_i}(w_t)\right)\Big) - 1. \qquad (20)$$
However, we find that the former term $L_{v_i}(w_{t_i})$ in (2) is much larger than the latter term $L_{v_i}(w_t)$ in the actual experiments, which makes the actual benefit $L^{v_i}_{t_i}(w_t)$ always close to $L_{v_i}(w_{t_i})$ and essentially constant. Moreover, we find that $L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}})$ in (20) is the same for our algorithms and Fedavg, so we remove the former term $(L_{v_i}(w_{t_i}) - L_{v_{i+1}}(w_{t_{i+1}}))$ from the metric (20) and change the metric to (9).
Summary Of The Paper
This paper studies the so-called dynamic fairness for federated learning (FL), where the fairness is defined as the improvement of test accuracy of FL between the time the clients join and the current time. It studies normalized FedAvg and shows that it can be combined with other fairness methods in FL. Simulation results are provided.
Strengths And Weaknesses
Strength:
The problem of fairness when clients can dynamically join (and leave) FL is interesting.
Weakness:
The definition of dynamic fairness is not technically sound. It depends on the change (decrease) of loss function values after the new clients join the FL process. However, such change can easily be due to existing clients, not the newly joined ones. For example, we can imagine that a client joins FL at $t_0$ and does absolutely no training at all (hence is equivalent to not participating). This definition, however, is based on the change of loss function values from all existing clients, which cannot prevent the free-rider issue as the authors have mentioned.
A related issue is that the convergence of the FL model depends on the (non)IID-ness of the client datasets. This factor may overwhelm the participation fairness of clients.
The current model does not have clients leaving FL early, but it is mentioned in the abstract. How is early leaving handled in the proposed framework?
The proposed (modified) normalized FedAvg is incremental, and the combination with existing fair FL methods is straightforward.
Clarity, Quality, Novelty And Reproducibility
The paper's writing can be significantly improved. Starting from the first sentence of the abstract ("... has been widely caught attention"), the paper is difficult to read.
ICLR | Title
Fairness of Federated Learning with Dynamic Participants
Abstract
The concept of fairness has been widely caught attention in Federated Learning (FL). While there are tremendous studies about various notations of fairness in FL in recent years, all of them only consider the case where the training process starts and ends at the time point for all participants. Actually, participants could be dynamic and they may join and leave the training process at different time points. However, participants who join the training process at different time points receive similar incentive benefits can be seen as a signal of unfairness. In this paper, we provide the first study on such fairness of FL for dynamic participants. First, we propose a new mathematical definition of the above fairness namely dynamic fairness. Briefly speaking, an algorithm is dynamically fair and satisfies that local agents who participate in the model training longer should receive more benefits than those who participate in the process shorter. Second, we develop a simple but novel method, which could be seen as a normalized version of Fedavg, and theoretically show that it is fairer than Fedavg. Moreover, we can combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive. Finally, empirically we propose a measure for dynamic fairness and demonstrate that our method can achieve a fairer performance under our definition of fairness through intensive experiments on three benchmark datasets.
1 INTRODUCTION
As one of the most fundamental learning frameworks for preserving the privacy of distributed data, Federated Learning (FL) (Konečnỳ et al., 2016) has prospered in the machine learning community in the last few years. In the canonical FL setting, there are several local agents, and each of them holds a dataset for local training. And there is a controller (server) which aggregates gradient vectors or local models from agents for global model updates. During the training process, the agents only communicate their gradients or local models to the server and the original data never leaves the local agents. Therefore, FL can protect the data information of each agent from leaking. To comply with the privacy regulations such as the General Data Protection Regulation (GDPR) (gdpr), variants of FL frameworks have been widely studied, and recently adopted in industry, such as Apple’s “FE&T” (Paulik et al., 2021), Google’s Gboard (gboard), and Alibaba’s FederatedScope (Xie et al., 2022).
While the recent advances in FL present a promising framework to learn from distributed data privately and efficiently, most of the current research mainly focuses on the central server’s benefits, i.e., developing methods to improve the convergence rate or the generalization performance in the FL setting, while ignores local agents’ interests. However, such attention to the server’s benefits may cause fairness issues which make local agents less interested in participating in the model training. For instance, those methods usually apply thresholds such as bandwidth and transmission speed to selectively choose clients (Shi et al., 2021), which potentially leads to unfair client selection in the FL system. Local devices with low transmission speed might be neglected frequently during the training process, and eventually become never-represented or under-represented client groups. Moreover, some researchers have noticed that the participants sometimes suffer from unfair incentive rewards (Zhan et al., 2021). Kairouz et al. (2021) notice the free-rider problem in the FL system. In the free-rider scenario, clients who contribute less (e.g., better data quality vs. worse data quality)
in training the model receive the same resulting model as those who contribute more to the training. Distributing models with performance incommensurate to each participant’s contribution might discourage active clients from continuously collaborating in the model training.
To leverage the unfairness issue, there are tremendous work studies on Fair FL by considering various definitions of fairness recently, such as selection fairness (Zhou et al., 2021) and collaboration (Lyu et al., 2020) (see Related Work section for more details). However, all of these work only considers the case where all the participants are static, i.e., they join the training process at the same time point, while in practice such assumptions may not always hold as the participants may be dynamic, i.e., different agents could join or leave the training at different time points. In such a dynamic scenario, there are additional fairness issues compared with static ones. Consider the following case as an example, suppose the agents could join at different time points and they will never leave before the training process ends. In this case, the agents who join the training earlier (contributed more) will expect higher benefits than the ones who join later. Thus, participants who join the training process at different time points receive similar incentive benefits can be seen as a signal of inequality. However, to our best knowledge, there are no previous work studies on such fairness in FL.
In this paper, we provide the first study to alleviate the above fairness issue caused by dynamic participants by providing some new definitions, methods, and measures. Specifically, our contributions can be summarized as follows:
1. First, we provide a rigorous definition for the above fairness, namely dynamic fairness. Briefly speaking, we call an algorithm dynamically fair if its performance is commensurate to the length of each client’s participating time. Equivalently, it satisfies that the agents with longer participation time receive more benefits, which could be seen as betweengroup fairness. Besides that, we also provide criteria to compare the dynamic fairness of two algorithms.
2. Next, we propose several dynamically fair methods. First, we propose a simple but efficient method namely Normalized Fedavg. Generally speaking, our method could be thought of as a normalized version of Fedavg where we use the normalized SGD instead of SGD for local training. Interestingly, we theoretically show that our algorithm is fairer than the vanilla Fedavg (McMahan et al., 2017). To further improve the convergence rate practically, we propose a method namely Modified Normalized Fedavg.
3. Moreover, due to the simplicity of the idea, our method is compatible with other fair FL methods. Specifically, we combine our method with the previous methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive, i.e, we can achieve within-group fairness additionally.
4. Finally, we propose new measures for dynamic fairness and provide empirical studies of our methods. With extensive experiments on three datasets MNIST (LeCun et al., 1998), Fashion MNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), we find that our methods are not only dynamically fair, but also achieve better fairness compared with Fedavg.
Due to the space limit, all the proofs, some additional sections, algorithms, and experiments of our methods are included in Appendix.
2 RELATED WORK
Existing studies have proposed several definitions of fairness in federated learning. Zhou et al. (2021) proposed the concept of selection fairness: a fair FL model should provide more participation opportunities for never-represented or under-represented client groups. The following literature tries to promote a fair client selection by introducing the sampling constraints to the FL model (Huang et al., 2020).
Different from selection fairness, Li et al. (2019) mentioned that one essential notion of fairness is to accomplish a relatively uniform accuracy distribution across devices, which is defined as the standard accuracy parity (Zafar et al., 2017). In a previous study, Li et al. (2021) suggested reducing
the variation of model performance on different clients’ datasets can be seen as a reliable indicator for standard accuracy parity. However, the researchers ignored the importance of clients’ contributions in training an FL model. For instance, as Lyu et al. (2020) proposed in their literature, a client who contributes more to the federated system deserves a better performing local model than those who contributed less, which is defined as collaboration fairness. Lyu et al. (2020) proposed that the quality of each client’s uploaded gradients is sufficient to determine participants’ contribution.
One critique of the above scenario is that the concept of time is ignored. When training an FL model, all the clients need to incur some cost to participate in the training. For instance, if a company wants to build a profitable FL model, they have to invest not only money and data but also plenty of time since training and commercialization of the FL models take time. Yu et al. (2020) introduced the idea of regret, which refers to the difference between the incentive rewards clients have received and what they should receive while taking how long they have waited to receive the payoff into account.
However, each participant’s training time was ignored in all the above scenarios. In this paper, we proposed that in the long term, clients who join an FL model training longer should be rewarded with better model performance than those who participate in the training shorter since they contribute more time to the model training.
3 DYNAMIC FAIRNESS FOR FEDERATED LEARNING
In this section, we will formally define the fairness discussed in the Introduction. Before that, we provide an overview of the standard Federated Learning (FL) setting.
In FL, there are m agents where the i-th agent has a local dataset Di = {xi,j}nij=1 (the data samples could be either i.i.d. or non-i.i.d. sampled) and a central server. We also have a loss function ℓ and the central server aims to solve the following minimization problem:
min w∈Rd F (w) = m∑ i=1 piFi(w), (1)
where Fi(w) = 1|Di| ∑
x∈Di ℓ(w;x) is the empirical risk function for the i-th agent on his/her dataset Di and pi is the weight for the k-th agent, for example pi = ni∑ni . Dynamic Federated Learning Setting: While most of the previous work focus on the case where all agents are static, i.e., all of them join in the training process at the same time point (for simplicity, in this paper we assume one-time step responds to one update of the global model). Here we consider a dynamic setting of FL. For simplicity, we consider a dynamic setting with a finite number of time points. That is there are S time points t1, · · · , tS , and for each time point there is a set of agents vi who will join the training process (for simplicity here we denote t1 as the time when the training process starts). Since the server cannot get the information for all participants, now its goal is to minimize ∑ i≤M Lvi(w) at time point tM with 1 ≤ M ≤ S, where Lvi(w) = ∑ j∈vi p M j Fj(w), where pMj is the weight for the j-th agent at time point M , i.e., the objective function is the weighted sum of empirical risk functions of all the agents who join at time point ti. That is, it wants to minimize the empirical risk for all the agents who join at or before the time point tM . 1 It is notable that when M = S, then the objective function is equivalent to the original one in (1).
As we mentioned earlier, in the above dynamic FL setting there could be additional fairness concerns. For example, we consider two succeed time points t1 and t2 with the associated participant sets v1 and v2 (we assume t2 > t1), and the server conducts Fedavg to train the model. At time point t2, from the perspective of agents in v1 the algorithm itself may be unfair to them as the agents in v2 can directly use the current model (which has already been trained for several rounds by using the data in v1) without any cost. We can see the above unfairness is ubiquitous in the dynamic FL setting. In this paper, we aim to mitigate such unfairness. However, before showing our method, we need to provide a mathematical definition for the above fairness.
Defining such fairness is challenging. The most direct way is to use the value of the empirical risk function for a different set of participants vi, i.e., the value of Lvi(w). However, such measurement
1Note that in this paper, we assume all the agents will never leave the training process before the training process ends. We leave it as future research to study the case where each agent could join and leave the training.
is unsatisfactory as our fairness should ensure the agents gain more as they join in the training longer, and the function value cannot reveal this relationship. In practice actually, we can use the ”difference of accuracy” between different time points to measure the benefit, i.e., fairness. Consider an extreme case as an example, where half of the agents join at the beginning of the training, i.e., t1 and the other half join the training at the last time point tS . When the training ends, we hope that the improvement of the accuracy for the first half agents is much greater than for the other half agents. Motivated by this, mathematically we can use the difference in the empirical risk function values to measure the improvement of test accuracy. Based on that, in the following Definition 1, we first define the benefit for group vi at the current time point t by the difference between the empirical risk function of the group vi joining the training time point ti and the current time point t. Definition 1 (Benefit). Under our dynamic FL setting, for a training algorithm A, the benefit at timepoint t for a group vi joining training at timepoint ti (t > ti) is defined as Lviti (wt) = Lvi(wti)−Lvi(wt), (2) where wti and wt is the trained model at timepoint ti and t respectively. Moreover, we define the benefit agents in vi get in timepoint t as Lvi(wt−1)−Lvi(wt).
Based on the definition of benefit, we then propose our desired definition of federated learning fairness criterion with dynamic participants. Generally speaking, we consider a training algorithm is dynamically fair if the benefits of the agents who join earlier are higher than the ones who join later. Definition 2 (Absolute Dynamic Fairness). Under our dynamic FL setting, for a training algorithm A, it is absolutely dynamically fair if for any two different time points ti < tj and any t > tj we have
Lviti (wt) > L v2 tj (wt), (3)
where wt is the trained model at time point t of the algorithm.
Note that the fairness we propose in Definition 2 is real-time, i.e., the definition of fairness in Definition 2 holds regardless of whether t is the last time point of training or the time point in training, as long as t > tj . In practice, we not only want to design absolutely dynamically fair algorithms, but also expect to design develop new fair algorithms that are more fairer than the existing ones. In the following, we quantify such relative fairness between two algorithms, i.e., the algorithm that allows the group that participates longer to get more benefits will be more fair. Definition 3 (Relative Dynamic Fairness). Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms A and Ã, we call algorithm A is dynamically fairer than algorithm à if for any two different time points ti < tj and any t > tj we have
Lviti (wt)− L vj tj (wt) > L vi ti (w̃t)− L vj tj (w̃t), (4)
where wt and w̃t is the trained model at time point t of algorithm A and à respectively.
It is notable that in Definition 3 we require both A and à be absolutely dynamically fair. This is necessary as relative dynamic fairness cannot imply absolute dynamic fairness. Moreover, although there is no data distribution assumption in our previous definition, we can see they are more suitable to the non-i.i.d. data for different agents. This is due to that if all the data are i.i.d. and when m and each ni (i ∈ [m]) is large enough, then we have Lvi(w) ≈ Lvj (w) for any group i and j as both of them are approximately equal to the underlying population risk Ex∼P [ℓ(w;x)] by the Hoeffding’s inequality if the loss function is bounded, where P is the underlying distribution of the data. And in the ablation study of the experimental part we will also verify this empirically. Thus, in the following parts we will always consider the non-i.i.d. case.
Note that in Definition 1 we use the difference of the empirical loss at two time points to measure the benefit of an agent. However, there could be other ways to define the benefit, such as the relative difference. We leave these alternative definitions of benefit to future work.
4 ACHIEVING DYNAMIC FAIRNESS
In the previous section, we presented the dynamic fairness that we aim to study in this paper. We now aim to develop methods that are absolutely dynamically fair. Moreover, we want them to be fairer than Fedavg (McMahan et al., 2017).
Before diving into details, let us revisit Fedavg to see why it may cause unfairness and how to improve its fairness. For simplicity, we consider the case where there are only two groups $v_1$ and $v_2$, which join the training at $t_1$ and $t_2$ respectively; we assume there is only one agent in each group with the same amount of data, and each agent performs one step of Gradient Descent (GD) locally and then sends the model to the server for aggregation. Suppose we have already trained the model with the agents in $v_1$ for a long time and now reach time point $t_2$ with model $w_{t_2}$. We then consider time point $t_2 + 1$. We will show that the above variant of Fedavg is unfair:
Theorem 1. Under the above setting, Fedavg is not dynamically fair at time point $t_2 + 1$ if $\|\nabla_w L^{v_2}(w_{t_2})\|_2$ is sufficiently large such that $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(\|\nabla_w L^{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(L^{v_1}(w_{t_2}) - L^{v_1}(w_1))$ and $\eta = O(1)$, where $\eta$ is the stepsize of GD for each agent.
Note that although Theorem 1 requires $\|\nabla_w L^{v_2}(w_{t_2})\|_2$ to be sufficiently large, this assumption is quite natural. Since $w_{t_2}$ is the model trained on $L^{v_1}$ for several rounds, $\|\nabla_w L^{v_1}(w_{t_2})\|_2$ will be small. On the other hand, since $w_{t_2}$ is obtained before $v_2$ joins and the data in $v_1$ and $v_2$ are non-i.i.d., $w_{t_2}$ will be far from the minimizer of $L^{v_2}(w)$, i.e., $\|\nabla_w L^{v_2}(w_{t_2})\|_2$ is large. Moreover, $w_1$ is the initializer at time $t_1$; thus, when $w_1$ is close to the minimizer of $L^{v_1}$, $L^{v_1}(w_{t_2}) - L^{v_1}(w_1)$ could also be small.
In the following, we intuitively explain why the above Fedavg is unfair. We assume both $L^{v_1}(w)$ and $L^{v_2}(w)$ are $L$-smooth, $\mu$-strongly convex and 1-Lipschitz. Then, by the smoothness and strong convexity assumptions and the gradient descent step in each agent, we have
$$\left(\eta - \frac{\eta^2 L}{2}\right)\|\nabla_w L^{v_2}(w_{t_2})\|_2^2 \le L^{v_2}(w_{t_2}) - L^{v_2}(w^2_{t_2+1}) \le \left(\eta - \frac{\eta^2 \mu}{2}\right)\|\nabla_w L^{v_2}(w_{t_2})\|_2^2, \quad (5)$$
$$\left(\eta - \frac{\eta^2 L}{2}\right)\|\nabla_w L^{v_1}(w_{t_2})\|_2^2 \le L^{v_1}(w_{t_2}) - L^{v_1}(w^1_{t_2+1}) \le \left(\eta - \frac{\eta^2 \mu}{2}\right)\|\nabla_w L^{v_1}(w_{t_2})\|_2^2, \quad (6)$$
where $\eta$ is the stepsize, and $w^i_{t_2+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing GD. If the benefit from the aggregation step in the server is sufficiently small, then from (5) we can see that the benefit for $v_2$, which depends on $\Theta(\|\nabla_w L^{v_2}(w_{t_2})\|_2^2)$, could be very large. On the other hand, for $v_1$, $\|\nabla_w L^{v_1}(w_{t_2})\|_2$ is very small, indicating that the benefit it gets in this round is quite small. If $v_1$ did not obtain a large benefit in the earlier rounds before $v_2$ joined, then the total benefit for $v_1$ will be less than the benefit for $v_2$, i.e., the algorithm is unfair.
From the previous intuitive analysis, in order to make the agents in $v_2$ obtain less benefit at time point $t_2+1$, we cannot use GD (or, similarly, SGD), since it makes the benefit depend on $\Theta(\|\nabla_w L^{v_2}(w_{t_2})\|_2^2)$, which is quite large. Equivalently, the $\ell_2$-norm of the gradient plays an important role in the benefit of the agents in $v_2$. Motivated by this, a natural remedy is to perform normalized gradient descent (NGD) instead of GD, i.e., $w^2_{t_2+1} = w_{t_2} - \eta\frac{\nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta\frac{\nabla_w L^{v_1}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2}$. In this case, when $L^{v_2}$ is 1-Lipschitz and the stepsize is the same as above, we have
$$|L^{v_2}(w_{t_2+1}) - L^{v_2}(w_{t_2})| \le \|w_{t_2+1} - w_{t_2}\|_2 \le 2\eta,$$
i.e., the benefit is now bounded by $2\eta$, which is much smaller than $\|\nabla_w L^{v_2}(w_{t_2})\|_2$. This indicates that as long as the benefit of $v_1$ at $t_2$ satisfies $L^{v_1}(w_{t_1}) - L^{v_1}(w_{t_2}) > 2\eta$, the algorithm will be absolutely dynamically fair at $t_2+1$. Moreover, since we now limit the benefit for $v_2$, we can show that using NGD is fairer than implementing the above vanilla Fedavg.
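The effect can also be seen numerically in a toy sketch (ours, with a synthetic quadratic loss standing in for $L^{v_2}$): a plain GD step yields a benefit on the order of the squared gradient norm, while an NGD step yields a benefit of roughly the stepsize times the gradient norm (and at most about $\eta$ for a 1-Lipschitz loss).

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)               # current global model w_{t_2}
w_star = 5.0 * rng.normal(size=5)    # minimizer of the new group's loss (far away)

def loss(w):                         # synthetic quadratic loss for group v2
    return 0.5 * float(np.sum((w - w_star) ** 2))

def grad(w):
    return w - w_star

eta = 0.05
g = grad(w)

w_gd = w - eta * g                          # plain gradient descent step
w_ngd = w - eta * g / np.linalg.norm(g)     # normalized gradient descent step

print("gradient norm   :", np.linalg.norm(g))
print("benefit with GD :", loss(w) - loss(w_gd))    # grows with ||g||^2
print("benefit with NGD:", loss(w) - loss(w_ngd))   # stays near eta * ||g|| here
```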
Theorem 2. Consider the same setting as in Theorem 1 with normalized GD and fixed $w_1$, $w_{t_2}$ and $\eta$; then if $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(1)$, NGD is dynamically fairer than the above Fedavg at time point $t_2 + 1$.
Note that the previous theorem only considers the time point $t_2+1$; as shown in the experimental part, our algorithm is fairer than Fedavg in practice at every time point. Moreover, the above result relies on the assumption $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(1)$. In fact, when $\|\nabla_w L^{v_2}(w)\|_2$ is small enough, the group $v_2$ will obtain a smaller benefit: for any $q > 0$, by the properties of the loss function we have
$$\eta\|\nabla_w L^{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \le L^{v_2}(w_{t_2+q}) - L^{v_2}(\tilde{w}^2_{t_2+q+1}) \le \eta\|\nabla_w L^{v_2}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
$$\eta\|\nabla_w L^{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 L}{2} \le L^{v_1}(w_{t_2+q}) - L^{v_1}(\tilde{w}^1_{t_2+q+1}) \le \eta\|\nabla_w L^{v_1}(w_{t_2+q})\|_2 - \frac{\eta^2 \mu}{2},$$
where $\tilde{w}^i_{t_2+q+1}$ ($i = 1, 2$) is the local model of the $i$-th agent after performing NGD and $w_{t_2+q+1} = \frac{\tilde{w}^1_{t_2+q+1} + \tilde{w}^2_{t_2+q+1}}{2}$. If we ignore the benefit of the aggregation step in the server, the previous two bounds show that the benefit obtained by group $v_i$ is $\Theta(\eta\|\nabla_w L^{v_i}(w_{t_2+q})\|_2)$. Thus, when $\eta$ and the two gradient norms are small, the benefits are also small and can be considered nearly equal. In total, when $\|\nabla_w L^{v_i}(w_{t_2+q})\|_2$ is large, our algorithm is dynamically fair provided $L^{v_1}(w_{t_1}) - L^{v_1}(w_t) \ge \omega(\eta)$; and when $\|\nabla_w L^{v_i}(w_{t_2+q})\|_2$ becomes sufficiently small, both groups obtain almost the same benefit at time point $t_2+q$, so the algorithm remains dynamically fair.
Algorithm 1 Normalized Fedavg: Two groups v1, v2 with joining time point t1, t2 (t1 = 1, t1 < t2). |v| indicates the number of clients in group v, |B| is the local minibatch size, E is the number of local epochs, and η is the learning rate. C is a constant
Server executes:
1: initialization: w1
2: for each round t = 1, 2, . . . do
3:   if t < t2 do m ← max(C · |v1|, 1) else do m ← max(C · (|v1| + |v2|), 1)
4:   St ← (random set of m clients)
6:   for each client k ∈ St in parallel do
7:     w^k_{t+1} ← ClientUpdate(k, w_t)
8:     if t ≥ t2 do w^k_{t+1} ← w_t − η_t · (w_t − w^k_{t+1}) / ||w_t − w^k_{t+1}||   // Normalization
9:   w_{t+1} ← Σ_{k∈St} p^i_k w^k_{t+1}, where p^i_k is the weight, with i = 1 when t < t2 and i = 2 otherwise

ClientUpdate (k, w): // Run on client k
1: B ← (split Dk into batches of size |B|)
2: for each local epoch i from 1 to E do
3:   for batch b ∈ B do
6:     w ← w − η_t ∇ℓ(w; b)
7: return w to server
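For clarity, a minimal NumPy sketch of the server side of one round (steps 8-9 above) is given below; it is our own illustration, assumes flattened parameter vectors, and omits client sampling and the layer-wise norm used in the actual algorithm.

```python
import numpy as np

def normalized_fedavg_round(w_t, client_models, weights, eta, normalize=True, eps=1e-12):
    """One server round: optionally normalize each client's model update
    (step 8), then aggregate with weights p_k (step 9)."""
    adjusted = []
    for w_k in client_models:
        if normalize:
            update = w_t - w_k
            w_k = w_t - eta * update / (np.linalg.norm(update) + eps)
        adjusted.append(w_k)
    return sum(p * w_k for p, w_k in zip(weights, adjusted))

# usage sketch with four clients and uniform weights
w_t = np.zeros(10)
local_models = [w_t - 0.5 * np.random.randn(10) for _ in range(4)]
w_next = normalized_fedavg_round(w_t, local_models, [0.25] * 4, eta=0.01)
```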
Based on the above idea of normalizing the gradient to limit the benefit of each new agent, we can modify the vanilla Fedavg to improve its dynamic fairness, i.e., we propose Normalized Fedavg in Algorithm 1 (for simplicity we only present the case where only two groups are trained; the multi-group case can be easily generalized). Compared with previous normalized stochastic gradient descent (NSGD) methods (Zhao et al., 2020; Cutkosky & Mehta, 2020; You et al., 2019; Hazan et al., 2015), there are two critical differences. First, while existing work on NSGD normalizes the gradients at every iteration, in Algorithm 1 each agent still uses SGD to train its local model and then sends the model to the server (step 7); the server then normalizes these local model updates (step 8) and performs the aggregation step (step 9). This is because, in practice, we find that directly using NSGD locally for all agents makes the algorithm hard to converge. The second difference is that, whereas previous NSGD-based methods compute the global norm over all parameters of the model before normalizing, in Algorithm 1 we use the layer-wise norm (LN) for the normalization step, since using the global norm could lead to non-convergence (see the experiments in Section D.1 in the Appendix for details).
Although in practice we found Normalized Fedavg can indeed improve the dynamic fairness compared with Fedavg, its convergence rate is quite slow. The main reason is that the model update of local agents becomes quite small at the early stage after adding group v2. To address the issue, we simply modify the normalization update step (step 8 in Algorithm 1) by adding a decayed coefficient. We choose the model update norm G at the time point when group v2 joined in the training
Algorithm 2 The Modified Normalized Fedavg algorithm. The β is a hyperparameter and takes value from 0 ∼ 1. The default value of β is 1. LN : (w[0], ..., w[L−1])→ (||w[0]||2, ..., ||w[L−1]||2) is the function to compute the model update norm at each layer.
Server executes:
1: initialization: w0, β
2: for each round t = 1, 2, . . . do
3:   if t < t2 do m ← max(C · |v1|, 1) else do m ← max(C · (|v1| + |v2|), 1)
4:   St ← (random set of m clients)
5:   if t = t2 + 1 do G ← LN(w_{t−1} − w_t)
6:   for each client k ∈ St in parallel do
7:     w^k_{t+1} ← ClientUpdate(k, w_t)
8:     if t ≥ t2 + 1 do w^k_{t+1} ← w_t − ( β · G/(1 + t − t2) + (1 − β) · LN(w_t − w^k_{t+1}) ) · (w_t − w^k_{t+1}) / LN(w_t − w^k_{t+1})
9:   w_{t+1} ← Σ_{k∈St} p^i_k w^k_{t+1}
ClientUpdate (k, w): same as Algorithm 1
as the initial value of this coefficient, and decay it as the training round increases. The modified formula for the local device model update is
$$w^k_{t+1} := w_t - \frac{G}{1+t-t_2}\cdot\frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \quad (7)$$
where $t \ge t_2+1$ and $G = \|w_{t_2} - w_{t_2+1}\|$. To further enhance the applicability of the algorithm, we introduce another hyperparameter $\beta$ to combine Fedavg and Normalized Fedavg:
$$w^k_{t+1} := w_t - \left(\beta\,\frac{G}{1+t-t_2} + (1-\beta)\,\|w_t - w^k_{t+1}\|_2\right)\frac{w_t - w^k_{t+1}}{\|w_t - w^k_{t+1}\|_2}, \quad (8)$$
where $\beta$ takes values from 0 to 1. If $\beta$ is close to 1, the algorithm is fairer; if it is close to 0, the algorithm converges faster and behaves closer to Fedavg.
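A small sketch (ours) of the server-side rescaling of Equation (8) follows, assuming a single flattened parameter vector rather than the layer-wise norm LN used in Algorithm 2; G is the update norm recorded when group v2 joined.

```python
import numpy as np

def modified_update(w_t, w_k_local, G, t, t2, beta=1.0, eps=1e-12):
    """Equation (8): beta = 1 recovers the decayed normalized update of
    Equation (7); beta = 0 leaves the plain Fedavg update unchanged."""
    update = w_t - w_k_local
    norm = np.linalg.norm(update) + eps
    scale = beta * G / (1 + t - t2) + (1 - beta) * norm
    return w_t - scale * update / norm
```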
Note that in the above methods we normalize the model update (gradients) to limit each agent's benefit. A natural question is whether we can use other operations. We know that, other than normalization, clipping is another commonly used operation in deep learning (e.g., for poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose Clipping Fedavg (Algorithm 3). We find through experiments that clipping can also improve fairness; however, its improvement over Fedavg is quite limited. See Section B in the Appendix for details.
Actually, due to the simplicity of our idea, our methods are compatible with other fair FL methods. Specifically, we combine our method with the existing methods in fair FL for static participants to additionally guarantee fair treatment for local agents who join the training process at the same time point by minimizing the discrepancy of benefits they receive, i.e, we can achieve within-group fairness additionally. See Section C in Appendix for details.
5 EXPERIMENTS
In this section, we will study the practical performance of our proposed algorithms on several benchmark datasets.
Experimental Settings: To verify whether the normalization-based methods can indeed improve dynamic fairness, we design the Two-groups experiment to simulate the scenario in which some clients join first (group1) while some join in training at timepoint t2 (group2). In this type of experiment, each group contains 5 clients.
To make our experiments more convincing and applicable, we also design the Multi-groups experiment, in which more clients are added to the training at several different time points. In detail, there are S groups {v1, ..., vS} (S ≥ 2), and each group is added to the training at a specific time point {t1, ..., tS} (t1 = 1). In the Multi-groups experiment, each group contains 3 clients. For all experiments, all clients run 5 local epochs with a local batch size of 32 and η = 1e−2 in each round. Also, we define a parameter α to control the degree of non-i.i.d.-ness of the dataset, i.e., if two groups join the training with α = 0.9 and the dataset has 10 classes, then one group has 90% of its data in 5 classes and the second group has 90% of its data in the other 5 classes. For the Two-groups experiment, we choose 10 clients in total and each group contains 5 clients; the second group is added to the training in round 10. For the Multi-groups experiment, we choose 30 clients in total and each group contains 3 clients. The time points at which the groups join the training are in the set {0, 10, 20, 30, 40, 50, 60, 70, 80, 90}.
Datasets and Models We use three classical datasets, MNIST (LeCun et al., 1998), Fashion MNIST (FMNIST) (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009), and two popular models, LeNet (LeCun et al., 1998) and ResNet18 (He et al., 2016), to evaluate our algorithms. Like most works, we use LeNet on MNIST and FMNIST, and ResNet18 on CIFAR10, for evaluation.
Evaluation Metrics Based on our Definition 1, to better describe the benefits of all groups during the training process, we propose the group benefit (GB) as one of our experimental metrics. This metric shows the difference in the loss value between the group that joins later and the group that joins first. A positive value of GB indicates that the algorithm is fair, and a larger value indicates better fairness of the algorithm. The metric is defined in (9). GB (train) and GB (test) are calculated from the train dataset and the test dataset, respectively. If there is only one group, we set GB to 0.
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L^{v_{i+1}}(w_t) - L^{v_i}(w_t)\right) - 1 \quad (9)$$
where $n \in [2, N]$ denotes the number of currently running groups. It should be noted that the metric here does not exactly follow Definition 1; the reason is given in Section E in the Appendix.
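A minimal sketch (ours) of the GB metric of Equation (9), taking the per-group losses on the current model ordered by joining time; the loss values in the example are hypothetical.

```python
import numpy as np

def group_benefit(group_losses):
    """Equation (9): GB_t from L^{v_i}(w_t) of the currently running groups,
    ordered from earliest to latest joiner. Returns 0 if only one group runs."""
    n = len(group_losses)
    if n < 2:
        return 0.0
    diffs = [group_losses[i + 1] - group_losses[i] for i in range(n - 1)]
    return float(np.mean(np.exp(diffs)) - 1.0)

print(group_benefit([0.4, 0.9]))   # > 0: earlier group has lower loss (fair)
print(group_benefit([0.9, 0.4]))   # < 0: later group has lower loss (unfair)
```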
Main Experiment Results The experiment results are shown in Figures 1 and 2. It can be seen that Algorithms 1 and 2 exhibit much higher benefits than Fedavg, and their benefits eventually converge to a large positive value on all the datasets. In contrast, Fedavg makes the value of GB smaller or even negative (absolute unfairness). A noteworthy phenomenon is that for Fedavg, in contrast to its GB values that keep decreasing during training, the GB values increase slightly during testing for both datasets (MNIST and CIFAR10). However, the increased values are still much smaller than those of our methods. Remarkably, we find that Algorithm 2 is fairer than Algorithm 1 in Figure 1, but Figure 2 shows the exact opposite phenomenon. Therefore, for these two algorithms, we cannot conclude which one performs better as measured by GB, which indicates that the strategies in Algorithm 2 do not significantly reduce fairness compared with Algorithm 1.
Figures 1 and 2 both illustrate a negative correlation between fairness and convergence rate. Algorithms 1 and 2 have better fairness performance but their convergence rates are slower than that of Fedavg. Meanwhile, we can conclude that Algorithm 2 converges faster than Algorithm 1, which shows the effectiveness of our proposed Algorithm 2.
To summarize, both Algorithm 1 and Algorithm 2 demonstrate greater dynamic fairness than Fedavg. Besides, Algorithm 2 achieves a faster convergence rate than Algorithm 1 while maintaining a similar level of dynamic fairness.
We defer the ablation study to Section D.1 in Appendix due to space limit.
6 CONCLUSION
In this paper, we focused on fairness in the setting of Federated Learning with dynamic participants, meaning that clients can join the training at different time points. We proposed a new definition of federated learning fairness, namely dynamic fairness, to guarantee higher benefits for local agents who participate in the FL model training for longer time periods than for those who do not. We developed normalization-based algorithms built on Fedavg to guarantee dynamic fairness. Furthermore, we improved the efficiency of Normalized Fedavg via several strategies. Extensive experimental results showed that our methods are dynamically fair, and in particular, our algorithms are fairer than Fedavg.
A OMITTED PROOFS
Proof of Theorem 1. In Fedavg, the server computes $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta\nabla_w L^{v_2}(w_{t_2})$ and $w^1_{t_2+1} = w_{t_2} - \eta\nabla_w L^{v_1}(w_{t_2})$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L^{v_1}(w_{t_2+1}) = L^{v_1}(w_{t_2}) - \eta\nabla_w L^{v_1}(w_{t_2}) \cdot \frac{\nabla_w L^{v_1}(w_{t_2}) + \nabla_w L^{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right] \quad (10)$$
$$L^{v_2}(w_{t_2+1}) = L^{v_2}(w_{t_2}) - \eta\nabla_w L^{v_2}(w_{t_2}) \cdot \frac{\nabla_w L^{v_1}(w_{t_2}) + \nabla_w L^{v_2}(w_{t_2})}{2} + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right] \quad (11)$$
Thus we have
$$[L^{v_2}(w_{t_2}) - L^{v_2}(w_{t_2+1})] - [L^{v_1}(w_{t_2}) - L^{v_1}(w_{t_2+1})] \approx \frac{\eta}{2}\left(\|\nabla_w L^{v_2}(w_{t_2})\|_2^2 - \|\nabla_w L^{v_1}(w_{t_2})\|_2^2\right).$$
Thus, based on Definition 1, the difference between the benefit for agents in $v_2$ and the benefit for agents in $v_1$ is approximately equal to
$$\frac{\eta}{2}\left[\|\nabla_w L^{v_2}(w_{t_2})\|_2^2 - \|\nabla_w L^{v_1}(w_{t_2})\|_2^2\right] + \left[L^{v_1}(w_{t_2}) - L^{v_1}(w_1)\right].$$
Thus, when $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(\|\nabla_w L^{v_1}(w_{t_2})\|_2)$ and $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge \Omega(L^{v_1}(w_{t_2}) - L^{v_1}(w_1))$, the benefit for $v_2$ is larger and the algorithm is no longer fair.
Proof of Theorem 2. With normalized GD, the server computes $w_{t_2+1} = \frac{w^1_{t_2+1} + w^2_{t_2+1}}{2}$, where $w^2_{t_2+1} = w_{t_2} - \eta\frac{\nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_2}(w_{t_2})\|_2}$ and $w^1_{t_2+1} = w_{t_2} - \eta\frac{\nabla_w L^{v_1}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2}$ with stepsize $\eta$. Thus, from the first-order approximation we have
$$L^{v_1}(w_{t_2+1}) = L^{v_1}(w_{t_2}) - \frac{\eta}{2}\nabla_w L^{v_1}(w_{t_2}) \cdot \left(\frac{\nabla_w L^{v_1}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right] \quad (12)$$
$$L^{v_2}(w_{t_2+1}) = L^{v_2}(w_{t_2}) - \frac{\eta}{2}\nabla_w L^{v_2}(w_{t_2}) \cdot \left(\frac{\nabla_w L^{v_1}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2} + \frac{\nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_2}(w_{t_2})\|_2}\right) + o\left[\|w_{t_2+1} - w_{t_2}\|_2\right] \quad (13)$$
Thus we have
$$[L^{v_2}(w_{t_2}) - L^{v_2}(w_{t_2+1})] - [L^{v_1}(w_{t_2}) - L^{v_1}(w_{t_2+1})] \approx \frac{\eta}{2}\left(\|\nabla_w L^{v_2}(w_{t_2})\|_2 - \|\nabla_w L^{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L^{v_1}(w_{t_2}) \cdot \nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2\,\|\nabla_w L^{v_2}(w_{t_2})\|_2}\right).$$
Thus, based on Definition 1, the difference between the benefit for agents in $v_2$ and the benefit for agents in $v_1$ is
$$\frac{\eta}{2}\left(\|\nabla_w L^{v_2}(w_{t_2})\|_2 - \|\nabla_w L^{v_1}(w_{t_2})\|_2\right)\left(1 + \frac{\nabla_w L^{v_1}(w_{t_2}) \cdot \nabla_w L^{v_2}(w_{t_2})}{\|\nabla_w L^{v_1}(w_{t_2})\|_2\,\|\nabla_w L^{v_2}(w_{t_2})\|_2}\right) + \left[L^{v_1}(w_{t_2}) - L^{v_1}(w_1)\right]. \quad (14)$$
This is smaller than the corresponding difference in the setting of Theorem 1 with fixed $w_1$, $w_{t_2}$ and $\eta$ when $\|\nabla_w L^{v_2}(w_{t_2})\|_2 \ge 2$. Thus, NGD is fairer than the Fedavg variant in Theorem 1.
B CLIPPING FEDAVG AND EXPERIMENT
Note that in the above methods we normalize the model update (gradients) to improve the algorithms' fairness. A natural question is whether we can use other operations. Other than normalization, clipping is another commonly used operation in deep learning (e.g., poisoning attacks (Guo et al., 2021; Xie et al., 2021; Panda et al., 2022) and privacy (Truex et al., 2019; Lee & Kifer, 2018)). Motivated by this, we propose the Clipping Fedavg algorithm, whose complete pseudo-code is given in Algorithm 3. With the experiments in Figure 3, we find that over the whole training stage it is hard to argue that clipping improves the fairness of the algorithm. Although Algorithm 3 can maintain a high level of fairness in the early stage after a new group joins, it is not significantly different from Fedavg in the later stage. Therefore, we do not recommend using Algorithm 3 to improve fairness in practical applications.
Algorithm 3 The Clipping Fedavg algorithm. The running clients are indexed by k, and η is the learning rate. LM : (w[0], ..., w[L−1]) → (max(w[0]), ..., max(w[L−1])) is the function to compute the max value of the model update at each layer, and Clip_g : (w[0], ..., w[L−1]) → (threshold(w[0], ±g), ..., threshold(w[L−1], ±g)) is the function to clip the parameters at each layer with a specified value, where threshold(w, ±g) limits w to ±g.

Server executes:
1: initialization: w0
2: for each round t = 1, 2, . . . do
3:   if t < t2 do m ← max(C · |v1|, 1) else do m ← max(C · (|v1| + |v2|), 1)
4:   St ← (random set of m clients)
5:   if t = t2 do G ← LM(w_{t−1} − w_t)
6:   for each client k ∈ St in parallel do
7:     w^k_{t+1} ← ClientUpdate(k, w_t)
8:     if t ≥ t2 do w^k_{t+1} ← w_t − Clip_G(w_t − w^k_{t+1})   // Clipping
9:   w_{t+1} ← Σ_{k∈St} p_k w^k_{t+1}
10:  if t ≥ t2 do compute test loss: L^{v1}(w_t), L^{v2}(w_t)

ClientUpdate (k, w): same as Algorithm 1
C FURTHER EXPANSION OF THE FAIRNESS DEFINITION
Additionally, we combine our method with existing methods in fair FL for static participants. Via such an approach, we can minimize the discrepancy of benefits for the agents who join the training process at the same time point and guarantee fair treatment for them, i.e., we can achieve within-group fairness.
By combining the fairness definition of Li et al. (2019) and ours, we extend Definitions 2 and 3 to Definition 4, including between-group fairness (guaranteeing that agents with longer participating time benefit more) and within-group fairness (guaranteeing performance uniformity for agents with the same participating time).
Definition 4. Dynamic Fairness (Extended): Under our dynamic FL setting, for a training algorithm A, it is absolutely dynamically fair if for any two different time points ti < tj and any t > tj we have
Between-group fairness: $L^{v_i}_{t_i}(w_t) > L^{v_j}_{t_j}(w_t)$ (15)
Within-group fairness: $\mathrm{std}_{k \in v_i}\{F_k(w)\} \to 0$ (16)
Under our dynamic FL setting, consider two absolutely dynamically fair training algorithms A and Ã. We say algorithm A is dynamically fairer than algorithm à if for any two different time points $t_i < t_j$ and any $t > t_j$ we have
Between-group fairness: $L^{v_i}_{t_i}(w_t) - L^{v_j}_{t_j}(w_t) > L^{v_i}_{t_i}(\tilde{w}_t) - L^{v_j}_{t_j}(\tilde{w}_t)$ (17)
Within-group fairness: $\mathrm{std}_{k \in v_i}\{F_k(w)\} < \mathrm{std}_{k \in v_i}\{F_k(\tilde{w})\}$ (18)
Here $L^{v_i}_{t_i}(w_t)$ is defined in Definition 1, $\mathrm{std}_{k \in v_i}\{F_k(w)\}$ denotes the standard deviation of the test loss over all devices in group $v_i$, and $v_i$ is any one of the groups currently participating.
We further modify our algorithms by combining them with q-Fedavg (Li et al. (2019)), which is an excellent solution for within-group fairness. The pseudo-code of our modified algorithm is given in Algorithm 4.
Algorithm 4 We merge our methods (step 8) into q-Fedavg. The notation k is the index of running clients, w_t is the global model at the current round t, and η is the learning rate. LN : (w[0], ..., w[L−1]) → (||w[0]||, ..., ||w[L−1]||) is the function used to compute the model update norm at each layer. q is a hyperparameter of q-Fedavg, and its default value is 0.1.

Server executes:
1: initialization: w0
2: for each round t = 1, 2, . . . do
3:   if t < t2 do m ← max(C · |v1|, 1) else do m ← max(C · (|v1| + |v2|), 1)
4:   St ← (random set of m clients)
6:   for each client k ∈ St in parallel do
7:     w^k_{t+1}, F_k(w_t) ← ClientUpdate(k, w_t)
8:     w^k_{t+1} ← our operation (based on Algorithms 1, 2, 3)
9:     △^k_t = F^q_k(w_t) · (w_t − w^k_{t+1})
10:    h^k_t = q · F^{q−1}_k(w_t) · LN(w_t − w^k_{t+1}) + F^q_k(w_t)
11:  w_{t+1} ← w_t − Σ_{v∈St} Σ_{k∈v} p_k △^k_t / h^k_t
12:  if t ≥ t2 do compute test loss: L^{v1}(w_t), L^{v2}(w_t)

ClientUpdate (k, w): same as Algorithm 1
Then we provide the experiment results for Algorithm 4.
First, we define an evaluation metric loss std (LS) for within-group fairness. This metric indicates the level of performance uniformity across clients within the same group. A lower value of the metric indicates higher uniformity. If only one group runs, we set LS to 0.
$$LS_t = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\sum_{k=1}^{|v_i|}\frac{\left(F^k_t - L^{v_i}_t\right)^2}{|v_i|}} \quad (19)$$
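As an illustration (ours), the LS metric of Equation (19) can be computed from per-client test losses, assuming $L^{v_i}_t$ is the group-average loss; the values below are hypothetical.

```python
import numpy as np

def loss_std(per_client_losses_by_group):
    """Equation (19): average over groups of the standard deviation of the
    per-client test losses within each group (lower means more uniform)."""
    stds = [float(np.std(np.asarray(losses))) for losses in per_client_losses_by_group]
    return float(np.mean(stds)) if stds else 0.0

# two groups of three clients each
print(loss_std([[0.30, 0.32, 0.31], [0.55, 0.60, 0.58]]))
```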
Second, we verified that q-Fedavg (Li et al. (2019)) is still valid under our Two-groups experiment setting in Figure 4 (Section 5). We then selected q = 0.1, which best fits Algorithm 4.
Last, as shown in Figure 5, we find that Algorithm 4 can improve both between-group fairness (greater group benefit) and within-group fairness (lower LS).
D ADDITIONAL EXPERIMENTAL RESULTS
D.1 ABLATION STUDY
In ablation study, we use the experiment results of "Two-groups experiment" (MNIST) as the control group. We explore the effects of three key variables (α, global or layer norm, and β) on the experiment results.
Impact of α. Figure 6 shows that under a low level of non-i.i.d.-ness, fairness is not guaranteed regardless of whether normalization is applied. Only under strong non-i.i.d.-ness can normalization guarantee fairness and be fairer than Fedavg, which is consistent with our analysis in the previous section.
Impact of global / layer norm. We investigate whether the global parameter norm of the model update or the per-layer parameter norm should be used in algorithm 1 and 2. As seen in figure 7(a), the global norm for all parameters of model update in algorithm 2 prevents the model from converging. On the contrary, the layer norm (LN) is able to make the model converge.
Impact of β for algorithm 2. Due to a necessary trade-off between fairness and convergence speed in practical applications, we expect the hyperparameter β to regulate the degree of fairness of algorithm 2. Figure 7(b) also demonstrates that the effect of adjusting β is consistent with our expectation.
D.2 EVALUATION OF OTHER MODELS
We repeated the Two-groups experiment using linear regression and a 2-layer neural network and obtained the same results (Figure 8) as with LeNet. This shows that our method is not limited to a particular model.
E SUPPLEMENTAL NOTION
Explanation of implementing Evaluation Metric "Group Benefit" If we follow the benefit definition (2), then the equation (9) should be rewritten as
$$GB_t = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\left(L^{v_i}_{t_i}(w_t) - L^{v_{i+1}}_{t_{i+1}}(w_t)\right) - 1 = \frac{1}{n-1}\sum_{i=1}^{n-1}\exp\Big(\underbrace{\big(L^{v_i}(w_{t_i}) - L^{v_{i+1}}(w_{t_{i+1}})\big)}_{\text{our proposed algorithms do not change it}} + \big(L^{v_{i+1}}(w_t) - L^{v_i}(w_t)\big)\Big) - 1 \quad (20)$$
However, we find that the former term $L^{v_i}(w_{t_i})$ in (2) is much larger than the latter term $L^{v_i}(w_t)$ in the actual experiments, which makes the actual benefit $L^{v_i}_{t_i}(w_t)$ always stay close to $L^{v_i}(w_{t_i})$ and remain almost constant. Moreover, we find that $L^{v_i}(w_{t_i}) - L^{v_{i+1}}(w_{t_{i+1}})$ in (20) is the same for our algorithms and Fedavg, so we remove this term from the metric (20) and change the metric to (9). | 1. What is the focus of the paper regarding fairness in federated learning?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any concerns or questions about the paper's perspective on dynamic fairness?
5. Are there any areas where the writing can be improved in terms of clarity and quality? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies fairness in federated learning with dynamic clients (i.e., clients may join at different time points). In particular, the paper proposes a perspective that, to achieve fairness, clients that have been training longer should receive better performance improvement. It introduces a dynamic fairness definition to formalize it. The paper shows a variant of FedAvg is unfair and designs several algorithms to achieve the dynamic fairness. Empirical results on three standard datasets comparing against FedAvg are provided.
Strengths And Weaknesses
Strengths
The studied problem of fairness in federated learning is relevant.
The perspective of dynamic fairness, i.e., that clients that have been training longer should receive better benefits, is easy to understand.
Weaknesses
Motivation is not so clear. In particular, the introduction can be improved to make the setting of dynamic clients and yet fairness is important much more compelling.
The phrase "dynamic clients" in the abstract and introduction may be overclaiming. The abstract and introduction both mention
different agents could join or leave the training at different time points.
However, the paper focuses only on the agents/clients joining at different times.
The technical rigor can be improved. For instance, the paper claims to study the non-i.i.d. setting without formalizing the specific non-i.i.d. definition. As another example, Theorem 1 requires an assumption on $\|\nabla_w L_{t_2}(w_{t_2})\|$ (i.e., the norm of the gradient of the loss function) without rigorously justifying why.
The proposed perspective "clients that have been training longer should receive better benefits" is not sufficiently well justified. Note that while it is easy to understand what the perspective is trying to say (as mentioned as one of the strengths), it is unclear why this particular perspective is chosen or why it is important.
There are some places where the writing is unclear or difficult to parse the meaning. E.g., "we consider two succeed time points", "We can see the above unfairness is ubiquitous in the dynamic FL", "Before diving into details, let us back to the Fedavg to", "and now we achieve the time point" etc.
Clarity, Quality, Novelty And Reproducibility
Clarity and Quality:
The writing can be improved in terms of both clarity and quality (see the last weakness).
Novelty:
The perspective of clients that have been training longer should receive better benefits is not completely novel, as it seems to be achievable in [1,2] where the contribution/shapley value of a client not in the training iteration is set to zero and the total contribution of the clients are the cumulative contributions.
References
[1] T. Song, Y. Tong, and S. Wei, “Profit Allocation for Federated Learning,” in Proc. IEEE Big Data, 2019, pp. 2577–2586.
[2] T. Wang, J. Rausch, C. Zhang, R. Jia, and D. Song, “A Principled Approach to Data Valuation for Federated Learning,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 12500 LNCS, pp. 153–167, 2020.
Reproducibility:
The reproducibility is reasonable. |
ICLR | Title
Trust-consistent Visual Semantic Embedding for Image-Text Matching
Abstract
Visual Semantic Embedding (VSE), as a link between Computer Vision and Natural Language Processing, aims at jointly learning cross-modal embeddings to bridge the discrepancy across visual and textual spaces. In recent years, VSE has achieved great success in image-text matching benefiting from the outstanding representation power of deep learning. However, existing methods produce retrieved results only relying on the ranking of cross-modal similarities, even if the retrieved results are unreliable and uncertain. That is to say, they cannot self-evaluate the quality of retrieved results for trustworthy retrieval, resulting in ignoring the ubiquitous uncertainty in data and models. To address this problem, we propose a novel VSE-based method for image-text matching, namely Trust-consistent Visual Semantic Embedding (TcVSE), to embrace trustworthy retrieval and self-evaluation for image-text matching. To be specific, first, TcVSE models the evidence based on cross-modal similarities to capture accurate uncertainty. Second, a simple yet effective consistency module is presented to enforce subjective opinions of bidirectional VSE models (i2t+t2i) to be consistent for high reliability and accuracy. Finally, extensive comparison experiments are conducted to demonstrate the superiority of TcVSE on two widely-used benchmark datasets, i.e., Flickr30K and MS-COCO. Furthermore, some qualitative experiments are carried out to provide comprehensive and insightful analyses for the reliability and rationality of our method.
N/A
1 INTRODUCTION
Visual Semantic Embedding aims to learn a shared embedding space in which visual data coincide with their corresponding semantic textual descriptions, which is an important approach to understanding cross-modal semantic associations for downstream applications, such as image-text matching Faghri et al. (2017) and visual question-answering Malinowski et al. (2015). Thus, the key issue of VSE is how to eliminate the discrepancy between images and texts to learn a reliable common embedding space. To address this issue, numerous methods attempt to project visual and textual data into a latent common space. However, it remains an open problem how to self-evaluate the retrieval performance to achieve interpretable and reliable inference.
In this paper, we focus on image-text matching (ITM), one of the fundamental tasks of cross-modal learning, i.e., cross-modal retrieval, which expects to search the
most relevant sentences for a given image query (i2t) or retrieve the related images from a given sentence query (t2i) according to the pairwise visual-semantic similarities. Some early works based on VSE Kiros et al. (2014); Wang et al. (2016); Faghri et al. (2017) leverage the powerful feature extraction capability of deep neural networks (DNNs) to obtain the global representation of images and texts, such as VGG Simonyan & Zisserman (2014), ResNet He et al. (2016), and GRU Chung et al. (2014), etc., by maximizing the correlated cross-modal similarities. More granularly, recent
VSRN Li et al. (2019) performs reasoning with Graph Convolutional Neural networks (GCNs) Kipf & Welling (2016) to generate enhanced visual representations, which captures both objects and corresponding semantic relationships for better visual semantic embedding. VSE∞ Chen et al. (2021) presents an adaptive pooling strategy (GPO) that aggregates (region-based or grid-based) local features to lean a better common representation. Unlike the aforementioned VSE-based methods, some works Lee et al. (2018); Chen et al. (2020); Wu et al. (2019); Liu et al. (2020); Diao et al. (2021); Cheng et al. (2022); Li et al. (2022a) present a specific mechanism or model to explicitly learn and integrate the fine-grained relationships between image regions and word tokens for cross-modal similarity inference.
Although prior approaches could achieve promising performance, they are only able to estimate image-text similarities for cross-modal retrieval, wherein image-text pairs with high similarity are taken for granted as matched. Since the ubiquitous uncertainty in data and models, it is inevitable to produce unreliable retrieval results. Therefore, it requires revisiting the questions such as “Is this retrieval trustworthy?” to evaluate the uncertainty or unreliability of predictions. To this end, it is valuable and necessary to measure such uncertainty for self-evaluation, but less touched in existing image-text matching methods.
To address this problem, we propose a novel VSE framework, termed Trust-consistent Visual Semantic Embedding (TcVSE). Not only does TcVSE outperform prior works (Figure 1), but it is also more efficient, achieving trustworthy image-text matching. More specifically, (1) we employ Evidential Deep Learning (EDL) built on the Dempster-Shafer Theory of Evidence (DST) Yager & Liu (2008) and the Subjective Logical Theory Sensoy et al. (2018) (SL) into VSE models to capture the uncertainty, thus endowing the model with the ability to self-evaluate retrieval quality. Following the principles of DST and SL, we consider the pairwise similarity measured by VSE as a source of evidence and parameterize the evidence as a Dirichlet distribution, which not only models the density of query probabilities but also the uncertainty. (2) Unlike prior EDL methods, our TcVSE focuses on ITM instead of classification. Thus, our TcVSE should overcome two challenges to apply EDL on ITM, namely instance retrieval and bidirectional inference. To tackle the first challenge, we relax the instance-level retrieval to a K-way querying for training, thus enabling uncertainty estimation via cross-modal similarity. To counter the second challenge, two VSE branches (i2t and t2i) with EDL are proposed to learn bidirectional retrieval, however, the difference between the two tasks unavoidably leads to the gap between their uncertainty. To address the problem, we present a simple yet effective consistency module to enforce subjective opinions of different branches to be consistent for more reliable uncertainty estimation, thus embracing performance improvement. (3) Finally, we demonstrate the effectiveness and superiority of our method with extensive experiments on two widely used benchmark datasets, i.e., Flickr30K and MS-COCO. The comprehensive ablation studies and insightful analyses verify the reliability and practicability of our method.
2 TRUST-CONSISTENT VISUAL SEMANTIC EMBEDDING
In this section, we summarize our method in Section 2.1 and elaborate on how to estimate the evidence-based uncertainty for trustworthy image-text matching in Section 2.2. Moreover, we present a Consistent Module to make two VSE branches obtain consistent predictions on subjective opinions during evidential deep learning in Section 2.3.
2.1 OVERVIEW
To achieve trustworthy image-text matching, unlike most standard methods, TcVSE utilizes EDL and a consistent module to accurately measure the visual-textual similarity and additionally quantify the uncertainty of the VSE model for self-evaluation. Figure 2 shows the framework of our proposed method. We first define our Visual Semantic Embedding model for image-text matching as illustrated in Figure 2(a). Let (V, C) denote a visual and textual dataset, which contains a set of images V and a set of texts C. Feature Encoding: For any sample pair (u, c) in (V, C), their feature representations could be encoded by some deep backbone networks, e.g., Faster-RCNN for visual features and Bi-GRU for
textual features, respectively:
$$\mathbf{V}(u, \Theta_\phi): u \rightarrow \{x_i\}_{i=1}^{V},\ x_i \in \mathbb{R}^d, \qquad \mathbf{T}(c, \Theta_\psi): c \rightarrow \{r_j\}_{j=1}^{M},\ r_j \in \mathbb{R}^d$$
where $d$ is the dimensionality of the joint embedding space, $\mathbf{V}(*, \Theta_\phi)$ and $\mathbf{T}(*, \Theta_\psi)$ are the visual and textual backbones respectively, $\Theta_\phi$ and $\Theta_\psi$ are the corresponding model parameters, $\{x_i\}_{i=1}^{V}$ is a set of $V$ encoded local region features, and $\{r_j\}_{j=1}^{M}$ is a set of $M$ word token features, where $r_j \in \mathbb{R}^d$ is a word token feature and $M$ is the number of words in $c$. Following Hüllermeier & Waegeman (2021), we randomly discard the region features extracted by the backbone network (Faster-RCNN) to achieve data augmentation, which is different from the common augmentation of the raw image, e.g., cropping, rotation, etc. Meanwhile, "Mask", "Discard", or "Swap" operations are performed on the word tokens for text augmentation.
Similarity Representation: To obtain the global similarity, the encoded visual features $\{x_i\}_{i=1}^{V}$ and textual features $\{r_j\}_{j=1}^{M}$ are aggregated by max pooling into a common embedding space:
$$v = \mathrm{MaxPooling}\left(\{x_i\}_{i=1}^{V}\right), \qquad t = \mathrm{MaxPooling}\left(\{r_j\}_{j=1}^{M}\right).$$
Then, the similarity score of (v, t) is measured by the cosine similarity as follows:
$$S(v, t) = \frac{v^\top t}{\|v\| \cdot \|t\|}. \quad (1)$$
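A brief PyTorch sketch (ours) of the aggregation and similarity of Equation (1), assuming region and token features have already been extracted; the feature dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def embed(features):
    """Max-pool a set of local features of shape (num_items, d) into one global embedding."""
    return features.max(dim=0).values

def similarity(region_feats, token_feats):
    """Cosine similarity of Equation (1) between the pooled embeddings."""
    v = F.normalize(embed(region_feats), dim=0)
    t = F.normalize(embed(token_feats), dim=0)
    return torch.dot(v, t)

s = similarity(torch.randn(36, 1024), torch.randn(12, 1024))  # e.g., 36 regions, 12 tokens
```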
Learning with TcVSE: A VSE model aims at minimizing the visual-semantic distance in a common space, i.e., maximizing the similarity of matched visual and textual samples. Our TcVSE aims to achieve that goal while also endowing the VSE models with the reliable capability of uncertainty estimation. More specifically, TcVSE conducts a two-step learning process to optimize models. The first step is to optimize the uncertainty-aware loss Lu based on the cross-modal evidential deep learning. The second step is multiple optimizations for opinion-based consistency loss Lc. See Algorithm 1 for more details on the optimization process.
2.2 UNCERTAINTY ESTIMATION
In this section, we follow the notions and principles of evidential deep learning (EDL) Sensoy et al. (2018) to model the uncertainty of VSE models. To estimate uncertainty, the Dempster-Shafer
Theory of Evidence (DST) Yager & Liu (2008) and the theory of Subjective Logic (SL) Jsang (2016) are employed to build the learning paradigm of EDL. Existing EDL learns a deterministic model from the observable evidence supporting subjective opinions (i.e., model predictions). However, these methods mostly focus on unimodal classification and rarely touch image-text matching.
For image-text matching, VSE projects the visual and textual feature representations into a common space, thus making it possible to measure the similarity across different modalities. Different from existing EDL methods Sensoy et al. (2018), VSE does not have a nonlinear classifier to predict the evidence, which makes it difficult to quantify the uncertainty. To address this issue, our TcVSE relaxes the instance-level retrieval to a K-way querying, so the evidence could be estimated from the cross-modal similarities, i.e., $e_k = [g(s_{k1}), g(s_{k2}), \cdots, g(s_{kK})]$ for the $k$-th query, where $K$ is the number of matching events and $g(\cdot)$ is a function that transforms a similarity into a non-negative evidence (i.e., $e \in [0, +\infty)$) as below:
e = g(s) = ReLU(s/τ) or exp(s/τ), (2)
where $s$ is the visual-semantic similarity computed by Equation (1) and $0 < \tau < 1$ is a temperature parameter Wu et al. (2018). To model the uncertainty, the similarity-based evidence vector $e$ could be associated with the parameters of a Dirichlet distribution $\alpha = [\alpha_1, \cdots, \alpha_K]$ ($\alpha_k = e_k + 1$) built on SL theory, which provides an overall uncertainty mass $u$ and a belief mass $b_k$ for each singleton, i.e., each of the $K$ retrieval events of a query in image-text matching. These $K+1$ masses are defined as
$$b_k = \frac{e_k}{S} = \frac{\alpha_k - 1}{S} \quad \text{and} \quad u = \frac{K}{S}, \quad (3)$$
where $S = \sum_{k=1}^{K}(e_k + 1) = \sum_{k=1}^{K}\alpha_k$ and $\sum_{k=1}^{K} b_k + u = 1$. The belief masses $b = [b_1, b_2, \cdots, b_K]$ could be treated as subjective opinions corresponding to the parameters of the Dirichlet distribution $\alpha$, and $S$ could be considered the distribution strength.
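The following PyTorch sketch (our own reading of Equations (2)-(3), not the authors' released code) turns a mini-batch similarity matrix into evidence, Dirichlet parameters, belief masses, and the per-query uncertainty.

```python
import torch

def opinions_from_similarities(sim, tau=0.05):
    """sim: (K, K) similarity matrix, rows are queries, columns are candidates.
    Returns belief masses (K, K) and the overall uncertainty per query (K,)."""
    evidence = torch.exp(sim / tau)          # Equation (2), non-negative evidence
    alpha = evidence + 1.0                   # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)       # Dirichlet strength per query
    belief = evidence / S                    # Equation (3)
    uncertainty = sim.shape[1] / S.squeeze(1)
    return belief, uncertainty

sims = torch.rand(8, 8) * 2 - 1              # stand-in for cosine similarities
b, u = opinions_from_similarities(sims)
```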
Intuitively, ITM could be viewed as a process of retrieving counterparts with the highest matching probability from different modalities. Hence, the matching probability assignment over the retrieved samples of each query could be denoted as $p = [p_1, p_2, \cdots, p_K]$, where $\sum_{i=1}^{K} p_i = 1$. By using the Dirichlet distribution to model such a probability assignment, given an opinion, the expected probability of the $k$-th matched event can be written as $\mathbb{E}_{D(p|\alpha)}[p_k] = \int p_k D(p \mid \alpha)\,dp = \frac{\alpha_k}{S}$, where the Dirichlet distribution with parameters $\langle\alpha_1, \alpha_2, \cdots, \alpha_K\rangle$ parameterized over the evidence $\langle e_1, e_2, \cdots, e_K\rangle$ expresses the density of such probability assignments and simultaneously models the overall uncertainty Jsang (2016). The density function is given by
$$D(p \mid \alpha) = \begin{cases} \frac{1}{B(\alpha)} \prod_{j=1}^{K} p_j^{\alpha_j - 1} & \text{for } p \in \mathcal{S}_K \\ 0 & \text{otherwise} \end{cases}, \quad (4)$$
where B(α) is the K-dimensional multinomial beta function and SK is the K-dimensional unit simplex. For a deep classifier, the widely used loss function is cross-entropy, formally as
$$\mathcal{L}_{ce}(y, p) = -\sum_{j=1}^{K} y_j \log(p_j).$$
Considering the density function $D(p \mid \alpha)$ modeled by the Dirichlet distribution $\alpha$, the Bayes risk of $\mathcal{L}_{ce}$ can be computed by
$$\mathbb{E}_{D(p|\alpha)}[\mathcal{L}_{ce}] = \int \sum_{j=1}^{K} -y_j \log(p_j)\, \frac{1}{B(\alpha)} \prod_{j=1}^{K} p_j^{\alpha_j - 1}\, dp = \sum_{j=1}^{K} y_j \left(\psi(S) - \psi(\alpha_j)\right), \quad (5)$$
where ψ (·) is the digamma function. By minimizing such risk, it is possible to ensure that correctly labeled observations generate as strong evidence as possible. Since the number of annotated pairs for ITM training is much larger than the number of categories for multi-classification, we simply regard “K” as the size of one training mini-batch, wherein visual and textual samples have a one-to-one correspondence. Therefore, such risk can be considered as the equivalent of the uncertainty-aware cross-entropy Luce of TcVSE, which is defined as
$$\mathcal{L}_{uce}(\alpha) = \frac{1}{K}\sum_{i=1}^{K}\mathbb{E}_{D(p_i|\alpha_i)}\left[\mathcal{L}_{ce}(\mathbf{I}_K, p_i)\right], \quad (6)$$
where $\mathbf{I}_K$ is the identity matrix of size $K$. $\mathcal{L}_{uce}$ encourages VSE to generate as strong evidence as possible for positive pairs, which guarantees that the evidence of positive pairs is higher than that of negative pairs. Furthermore, to further polarize the predicted evidence, we introduce the Kullback-Leibler (KL) divergence to enforce the evidence of negative pairs to be zero. The penalization loss could be formulated as:
$$\mathcal{L}_{kl}(\alpha) = \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\left[D(p_i \mid \tilde{\alpha}_i) \,\|\, D(p_i \mid \langle 1, 1, \cdots, 1\rangle)\right] = \frac{1}{K}\sum_{i=1}^{K}\left[-\log\left(\Gamma(K) B(\tilde{\alpha}_i)\right) + \sum_{k=1}^{K}(\tilde{\alpha}_{ik} - 1)\left(\psi(\tilde{\alpha}_{ik}) - \psi(\tilde{S}_i)\right)\right], \quad (7)$$
where $\tilde{S}_i = \sum_{j=1}^{K}\tilde{\alpha}_{ij}$, $\tilde{\alpha}_i = \mathbf{I}_{K(i,:)} + (1 - \mathbf{I}_{K(i,:)}) \odot \alpha_i$, $\Gamma(\cdot)$ is the gamma function, and $\psi(\cdot)$ is the digamma function. Thus, the uncertainty-aware loss of one VSE branch (e.g., image-to-text) is given by
$$\mathcal{L}^{i2t}_{u} = \mathcal{L}^{i2t}_{uce} + \lambda\mathcal{L}^{i2t}_{kl}, \quad (8)$$
where $\lambda$ is a balance factor that dynamically increases with the number of epochs. This dynamic strategy prevents the optimizer from overemphasizing the KL divergence at the beginning of training; otherwise, the optimizer would be misled by immature opinions, leading to performance degradation. Finally, to simultaneously consider bidirectional retrieval, we jointly optimize the two VSE branches as below:
$$\mathcal{L}_u = \mathcal{L}^{i2t}_u + \mathcal{L}^{t2i}_u, \quad (9)$$
where $\mathcal{L}^{t2i}_u$ is the evidential loss of the t2i VSE branch, which could be computed like Equations (6) to (8).
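A condensed PyTorch sketch of the uncertainty-aware loss of one branch follows (our reading of Equations (5)-(8), not the authors' released code); the K queries of a mini-batch are treated as a K-way matching problem with an identity target, and lam plays the role of the annealed balance factor λ.

```python
import torch

def uncertainty_aware_loss(sim, tau=0.05, lam=1.0):
    """sim: (K, K) similarity matrix of one branch (rows: queries).
    Returns the branch loss L_uce + lam * L_kl of Equation (8)."""
    K = sim.shape[0]
    alpha = torch.exp(sim / tau) + 1.0                      # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)
    labels = torch.eye(K, device=sim.device)

    # Equation (6): expected cross-entropy under the Dirichlet.
    l_uce = (labels * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1).mean()

    # Equation (7): KL(Dir(alpha_tilde) || Dir(1,...,1)) over the non-target entries.
    alpha_tilde = labels + (1.0 - labels) * alpha
    S_tilde = alpha_tilde.sum(dim=1, keepdim=True)
    log_B = torch.lgamma(alpha_tilde).sum(dim=1) - torch.lgamma(S_tilde.squeeze(1))
    l_kl = (-(torch.lgamma(torch.tensor(float(K))) + log_B)
            + ((alpha_tilde - 1.0)
               * (torch.digamma(alpha_tilde) - torch.digamma(S_tilde))).sum(dim=1)).mean()
    return l_uce + lam * l_kl
```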
2.3 CONSISTENT MODULE
In our TcVSE, each branch focuses on different learning directions, due to the discrepancy between distinct retrieval tasks (one for image-to-text and another for text-to-image). Unfortunately, this will lead to various branches producing inconsistent uncertainty estimation, resulting in a performance drop. Specifically, given one query, one branch produces a prediction of low uncertainty, whereas the uncertainty of another branch might be higher as shown in Figure 2(b). Therefore, we introduce a consistency regularization to enforce the two VSE branches to produce consistent predictions on subjective opinions. To simplify presentation without losing generality, we only elaborate on the consistency loss of one direction (i.e., image-to-text) as follows:
$$\mathcal{L}^{i2t}_{c}\left(b^{i2t}, \hat{b}^{i2t}\right) = \frac{1}{K}\sum_{k=1}^{K}\left|b^{i2t}_{k} - \hat{b}^{i2t}_{k}\right|, \quad (10)$$
where $b^{i2t}$ and $\hat{b}^{i2t}$ are obtained from the i2t and t2i branches with Equation (3), respectively. Similarly, we could easily obtain the consistency loss in the other direction (e.g., text-to-image). Finally, the consistency loss $\mathcal{L}_c$ of our TcVSE could be formulated as:
$$\mathcal{L}_c = \frac{1}{K}\sum_{k=1}^{K}\left[\mathcal{L}^{i2t}_{c}\left(b^{i2t}_{k}, \hat{b}^{i2t}_{k}\right) + \mathcal{L}^{t2i}_{c}\left(\hat{b}^{t2i}_{k}, b^{t2i}_{k}\right)\right]. \quad (11)$$
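A short sketch of the consistency regularization of Equations (10)-(11) follows; it is our own illustration, and in particular the way the opposite branch's opinion about a query is read off (by transposing that branch's similarity matrix) is an assumption rather than a detail specified in the text.

```python
import torch

def beliefs(sim, tau=0.05):
    """Belief masses of Equation (3) from a (K, K) similarity matrix."""
    evidence = torch.exp(sim / tau)
    return evidence / (evidence + 1.0).sum(dim=1, keepdim=True)

def consistency_loss(sim_i2t, sim_t2i, tau=0.05):
    """Equations (10)-(11): L1 distance between the subjective opinions of the
    two branches, computed in both retrieval directions."""
    b_i2t = beliefs(sim_i2t, tau)             # i2t branch's own opinions
    b_i2t_hat = beliefs(sim_t2i.t(), tau)     # i2t-direction opinions from the t2i branch (assumed pairing)
    b_t2i = beliefs(sim_t2i, tau)
    b_t2i_hat = beliefs(sim_i2t.t(), tau)
    l_i2t = (b_i2t - b_i2t_hat).abs().mean()
    l_t2i = (b_t2i_hat - b_t2i).abs().mean()
    return l_i2t + l_t2i
```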
The optimization process for our TcVSE is summarized in Algorithm 1.
3 EXPERIMENT
To evaluate our TcVSE, we conduct extensive experiments on two widely used benchmark datasets for Image-Text Matching. Following Lee et al. (2018), we measure the performance of image-to-text and text-to-image retrieval by Recall@K (K=1,5,10), which is defined as the proportion of correct items retrieved in the top K samples of the query. In addition, we adopt the sum of all Recall results to evaluate the overall performance.
[Table panels: Visual Backbone: Faster-RCNN with Textual Backbone: Bi-GRU, and with Textual Backbone: Bert-base]
3.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets: The benchmark datasets used in our experiments are Flickr30K Young et al. (2014) and MS-COCO Lin et al. (2014). Flickr30K is an image-text dataset collected from Flickr website and contains 31,000 images with five semantically correlated captions each. Following Lee et al. (2018), we adopt the same dataset splits in our experiments, i.e., 29,000 training images, 1,000 validation images, and 1,000 testing images. MS-COCO consists of 123,287 images, and each image also has five annotated text descriptions. Following Lee et al. (2018), 113,287 images for training, 5000 images for validation, and the remaining 5000 images for testing.
Implementation Detail. In our TcVSE, like VSE∞ Chen et al. (2021), a Faster-RCNN Anderson et al. (2018) detector (with ResNet-101) and Bi-GRU (or Bert-base Devlin et al. (2018)) serve as our visual and textual backbones, respectively. For each image, the visual backbone extracts the region proposals with top-36 confidence scores and projects each region into a 2,048-dimensional feature vector. Following Chen et al. (2021), we randomly discard some region proposals of each image to achieve augmentation during training. For each text description, we randomly mask some words to achieve data augmentation for each description. The dimensionality of the common embedding space is 1024. Different from most methods, we use the uncertainty-aware loss based on EDL for training, which additionally endows the model with the ability to uncertainty estimation.
We employ the AdamW optimizer Loshchilov & Hutter (2017) with weight decay factor 10e-4 to train the VSE branches. The learning rate of the visual model is 5e-4. For the textual model, the initial learning rate is 5e-4, except for Bert-base with 5e-5, and decaying by 10% every 10 epochs. The mini-batch size K is 128 with 25 training epochs on both Flickr30K and MS-COCO.
3.2 COMPARISON WITH STATE-OF-THE-ART METHODS
For a comprehensive evaluation, we compare our TcVSE with 12 state-of-the-art baselines, including SCAN Lee et al. (2018), CAMP Wang et al. (2019), VSRN Li et al. (2019), IMRAM Chen et al. (2020), GSMN Liu et al. (2020), SGRAF(SAF+SGR) Diao et al. (2021), VSE++* Chen et al. (2021), VSE∞ Chen et al. (2021), NCR Huang et al. (2021), CGMN Cheng et al. (2022), URDA Li et al. (2022a) and VSRN++ Li et al. (2022a). VSE++* is the basic version based on VSE∞ using Average Pooling. We conduct abundant comparison experiments as shown in Tables 1 and 2.
Furthermore, we also provide the comparison results compared with the state-of-the-art VSE-based methods in Table 5 for a comprehensive evaluation.
Results on Flickr30K. We report the experimental results on Flickr30K in Table 1. From the table, one could find that our TcVSE with a single VSE branch achieves comparable results, e.g., TcVSE (i2t) outperforms all baselines with rSum=504.4 under Bi-GRU and rSum=517.7 under Bert-base. Thanks to our trust-consistent learning, our TcVSE (i2t+t2i) is superior to all compared methods. Under the textual Bert-base backbone, our TcVSE could outperform all baselines with either one or two branches. Specifically, TcVSE (i2t+t2i) achieves remarkable improvement with the best R@1=82.9% for sentence retrieval and R@1=63.9% for image retrieval.
Results on MS-COCO. We present the qualitative results on MS-COCO with 5-fold 1K and full 5K test images in Tables 1 and 2, respectively. With Bi-GRU, our TcVSE could achieve a competitive performance compared to the state-of-the-arts. More specifically, TcVSE (i2t+t2i) achieves the best R@1 80.6% for sentence retrieval. In addition, Bert-base could further boost our TcVSE remarkably, i.e., a relative improvement of about 3% on R@1 compared to the best baseline VSE∞. In brief, our TcVSE with either one VSE branch or two branches could remarkably outperform all baselines, which demonstrates the effectiveness of our method.
For the experiments on MS-COCO 5K test images, the performance improvement is even more pronounced in terms of sentence retrieval, with a relative improvement of 7.7% (Bi-GRU) and 12.9% (Bert-base) on R@1 compared to best baselines. Both one and two branches of our TcVSE (Bert-base) could achieve conspicuous performance improvement. Furthermore, the consistent module could further boost the performance of TcVSE with one branch, which indicates that our trust-consistent learning will produce complementary and trustworthy predictions for retrieval improvement.
3.3 ABLATION STUDY
In this section, extensive ablation studies are carried out on Flickr30K to verify the contribution of each component to image-text matching. The experimental results are as shown in Table 3. We could comprehensively analyze the results from the following three distinct aspects:
Effectiveness. To verify the effectiveness of our EDL, we replace our evidential loss with the Max of Hinge (MH) loss Faghri et al. (2017) to optimize our VSE, i.e., #7, VSE with MH loss. From Table 3, one could see that the other variants with EDL (i.e., #1–6) achieve better retrieval performance than VSE with the MH loss, which indicates that our VSE endowed with EDL could remarkably improve performance by capturing the uncertainty. Moreover, our consistency module could further improve the retrieval performance of the two branches, even when only one branch is used for inference. More specifically, the module relatively improves the performance by 1.65% (#1 vs. #2), 1.02% (#3 vs. #4), and 1.68% (#5 vs. #6) in terms of R@1 for sentence retrieval, respectively. By fusing the two branches, our TcVSE could achieve further improvement, e.g., the full version of our TcVSE (#1) relatively improves the single-branch versions #3 and #5 by 1.52% and 2.03% in terms of R@1 for sentence retrieval, respectively.
Complementarity. Two VSE branches are exploited to focus on different retrieval tasks, i.e., imageto-text and text-to-image matching. Obviously, such differences between tasks lead to distinct emphasis. Thus, aggregating the two VSE branches will take advantage of their complementary information, leading to further improvement, which has been verified by the results. Specifically, the variants with aggregation (i.e., #1 and #2) achieve better performance compared to the variants with single branches (i.e., #3-6).
Consistency. Thanks to our consistent module, the performance of our TcVSE could be improved even with only one single branch, i.e., #3 vs. #4, and #5 vs. #6. Hence, our consistent module could mutually promote the performance of different branches by eliminating the prediction discrepancy across different branches. Furthermore, the full version of TcVSE (#1) could achieve the best retrieval performance, which indicates that our consistent module not only mutually promotes the performance of each branch but also remains complementary information of different branches.
3.4 VISUALIZATION OF UNCERTAINTY
To visually illustrate the uncertainty estimation, we plot the distributions of the estimated uncertainty on the test sets of Flickr30K and MS-COCO. Since the intrinsic perturbation in data is uncontrollable and inconspicuous, it is hard to quantitatively evaluate the uncertainty estimated by the proposed method. To this end, we manually corrupt the inputs to amplify the unreliability of the data for easier observation, e.g., with the discard, swap, and mask operations used in Huang et al. (2021). Such data corruption could be seen as data augmentation. The proportion of corrupted image regions and words denotes the augmentation rate (AR). In the experiment, we investigate the uncertainty distribution quantified by our TcVSE (Bi-GRU) under three ARs (i.e., 0.0, 0.3, 0.6) as shown in Figure 3. From the figure, one could see that most retrievals under the low ARs have low uncertainty, i.e., they cluster on the left. On the contrary, the uncertainty of the retrievals gradually increases as the AR increases, as shown by most of the retrieval uncertainty gathering to the right in Figures 3(a) to 3(d). That is to say, as the AR increases, the correlation between image-text pairs is degraded, increasing the retrieval uncertainty, which is consistent with the fact that data disturbance increases unreliability/uncertainty. Therefore, our method could effectively capture the uncertainty.
3.5 QUALITATIVE RESULTS
Figures 4 and 5 illustrate some qualitative cross-modal results retrieved by our TcVSE. In the figures, we also report the estimated uncertainty and the ensemble similarity measured by TcVSE for intuitive analysis. Unlike prior visual-textual matching methods, our TcVSE could quantify the overall uncertainty for cross-modal retrieval given each query, thus providing self-evaluation scores for the retrieved results. That is to say, our TcVSE could not only compute the similarities across different modalities for cross-modal retrieval inference but also self-evaluate the reliability of the results in terms of uncertainty, improving the interpretability of retrieval. For example, in Figure 4(a-c), the
predicted uncertainty by our TcVSE could self-evaluate the retrieval quality, namely more incorrect retrieved results with high uncertainty.
For example, in the completely correct examples (i.e., Figure 4(a) and Figure 5(a)), the correct retrievals have high similarity and low overall uncertainty and can be viewed as trustworthy retrievals. In contrast, retrieved results with high uncertainty are usually unreliable even when their similarity is relatively high, e.g., in Figure 4(c) and Figure 5(d). More specifically, although these retrieved results have relatively high similarities compared to other correctly retrieved ones, they ignore or misunderstand some details in the queries, such as "one female" in Figure 4(c) and "skateboard" vs "rollerblade" in Figure 5(d). That is to say, it is very hard to evaluate the retrieval quality from the obtained similarities alone. Fortunately, our TcVSE could accurately estimate the uncertainty of the retrieved results, enabling self-evaluation of the retrieval quality.
4 CONCLUSION
In this paper, we revisit a practicable and meaningful problem in VSE-based image-text matching, i.e., “How to make retrieval trustworthy?”. To this end, we present a Trust-consistent Visual Semantic Embedding method (TcVSE) for image-text matching, thus endowing the VSE models with the ability to self-evaluate the retrieval quality for trustworthy retrieval. Specifically, first, cross-modal evidential deep learning is proposed to capture accurate uncertainty of image-text matching. Second, a consistency module is presented to enforce the subjective opinion of distinct branches to be consistent for high reliability. Finally, we conduct extensive experiments and analyses to verify the effectiveness and self-evaluation of TcVSE.
A APPENDIX
A.1 RELATED WORKS
Image-text matching. Most of the existing methods for image-text matching (ITM) could be roughly divided into two groups, i.e., the global-level methods represented by visual-semantic embedding (VSE) and the local-level methods with complex similarity inference. The global-level methods mainly aim to obtain good global representations from visual and textual modalities with the help of well-designed feature extraction, enhancement, or aggregation strategies, and then directly compute the similarity, e.g., VSE++ Faghri et al. (2017), VSRN Li et al. (2019), and VSE∞ Chen et al. (2021). The local-level methods desire to learn the latent fine-grained alignments across different modalities for more accurate similarity inference, e.g., SCAN Lee et al. (2018), IMRAM Chen et al. (2020), SGRAF Diao et al. (2021), UARDA Zhang et al. (2022), and so on. Different from the mentioned lightweight methods, further breakthroughs have been made in the performance of downstream cross-modal tasks with the rapid development of large-scale visual language pre-training models in recent years, e.g., UNICODER-VL Li et al. (2020), CLIP Radford et al. (2021), and MaskCLIP Dong et al. (2022). However, these models are usually accompanied by high training or fine-tuning costs. In this paper, our research belongs to the lightweight global-level methods.
e.g., classification Sensoy et al. (2018); Han et al. (2022), recognition Bao et al. (2021), and segmentation Zou et al. (2022). In this paper, we focus on estimating the uncertainty in image-text matching based on evidential deep learning.
A.2 DERIVATION
We provide detailed derivations for some of the formulas in the paper.
The derivation of Equation (5):
$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \int \sum_{j=1}^{K} -y_j \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = -\sum_{j=1}^{K} y_j \int \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = -\sum_{j=1}^{K} y_j\,\mathbb{E}\left[\log(p_j)\right].$$

From Minka (2003), $\mathbb{E}\left[\log(p_j)\right] = \psi(\alpha_j) - \psi(S)$, where $S = \sum_{k=1}^{K}\alpha_k$. Thus,

$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \sum_{j=1}^{K} y_j\left(\psi(S) - \psi(\alpha_j)\right).$$
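As a sanity check of the identity $\mathbb{E}[\log(p_j)] = \psi(\alpha_j) - \psi(S)$ used above, the following sketch (ours, not from the paper's code; the α values are arbitrary) compares a Monte Carlo estimate against the closed form:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 1.5, 3.0])           # arbitrary Dirichlet parameters
S = alpha.sum()

closed_form = digamma(alpha) - digamma(S)        # E[log p_j] = psi(alpha_j) - psi(S)

samples = rng.dirichlet(alpha, size=200_000)     # Monte Carlo estimate
mc_estimate = np.log(samples).mean(axis=0)

print(np.max(np.abs(closed_form - mc_estimate)))  # small, up to sampling noise
```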
The derivation of Equation (7):
$$\begin{aligned}
\mathcal{L}_{kl}(\boldsymbol{\alpha}) &= \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\left[ D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i) \,\big\|\, D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}\left[ \log \frac{D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i)}{D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle)} \right] \\
&= \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}\left[ \log\left( \frac{\Gamma\left(\sum_{j=1}^{K}\tilde{\alpha}_{ij}\right)}{\Gamma(K)\prod_{j=1}^{K}\Gamma(\tilde{\alpha}_{ij})} \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij}-1} \right) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ \log \frac{\Gamma\left(\sum_{j=1}^{K}\tilde{\alpha}_{ij}\right)}{\Gamma(K)\prod_{j=1}^{K}\Gamma(\tilde{\alpha}_{ij})} + \mathbb{E}\left[\log \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij}-1}\right] \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{j=1}^{K}(\tilde{\alpha}_{ij}-1)\,\mathbb{E}\left[\log p_{ij}\right] \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{j=1}^{K}(\tilde{\alpha}_{ij}-1)\left(\psi(\tilde{\alpha}_{ij}) - \psi(\tilde{S}_i)\right) \right]
\end{aligned}$$
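The closed form above can be checked numerically as well. A minimal sketch (our own; the α̃ values are made up) that evaluates KL[D(p | α̃) ‖ D(p | ⟨1, …, 1⟩)] with the derived expression:

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet_vs_uniform(alpha_tilde):
    """KL[ Dir(alpha_tilde) || Dir(1,...,1) ] via the closed form derived above."""
    K = len(alpha_tilde)
    S = alpha_tilde.sum()
    log_B = gammaln(alpha_tilde).sum() - gammaln(S)      # log B(alpha_tilde)
    term1 = -(gammaln(K) + log_B)                        # -log(Gamma(K) * B(alpha_tilde))
    term2 = ((alpha_tilde - 1.0) * (digamma(alpha_tilde) - digamma(S))).sum()
    return term1 + term2

print(kl_dirichlet_vs_uniform(np.array([1.0, 4.2, 2.7, 1.0])))  # non-negative penalty
```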
A.3 PSEUDOCODE
We provide the pseudocode of TcVSE (Algorithm 1) to help understand how TcVSE works.
A.4 PARAMETRIC ANALYSIS
TcVSE has two key hyper-parameters, i.e., τ in Equation (2) and MaxTimes in Algorithm 1. Thus, we conduct detailed parameter experiments (shown in Figure 6) to evaluate the impact of different
Algorithm 1: TcVSE: Trust-consistent Visual Semantic Embedding pseudocode
Input: A well-paired subset {(u_n, c_n)}_{n=1}^N of (V, C), temperature parameter τ.
Initialize: Initialize the parameters Θ of TcVSE.
while e < MaxEpoch do
    for x in Batches do
        /* First step */
        x′ = Augment(x)
        {e^{i2t}_k}_{k=1}^K ← VSE^{i2t}(x′)   \\ image-to-text
        {e^{t2i}_k}_{k=1}^K ← VSE^{t2i}(x′)   \\ text-to-image
        for each query do
            Dirichlet distribution D(p | α) ← e   \\ α = e + 1
        end
        Obtain the uncertainty-aware loss L_u with Equation (9)
        Θ = AdamW(L_u, Θ)
        /* Second step */
        for t < MaxTimes do
            Recompute {e^{i2t}_k}_{k=1}^K and {ê^{i2t}_k}_{k=1}^K   \\ image-to-text
            for each i2t query do
                Obtain subjective opinions b^{i2t}, b̂^{i2t} with Equation (3)
            end
            Recompute {ê^{t2i}_k}_{k=1}^K and {e^{t2i}_k}_{k=1}^K   \\ text-to-image
            for each t2i query do
                Obtain subjective opinions b̂^{t2i}, b^{t2i} with Equation (3)
            end
            Obtain the consistency loss L_c with Equation (11)
            Θ = AdamW(L_c, Θ)
        end
    end
end
Output: The learned parameters Θ
hyper-parameter settings and to obtain better parameter settings for TcVSE. From Figure 6(a), TcVSE with a too small τ cannot be optimized well and performs poorly. Moreover, the performance of TcVSE gradually decreases from the best value (τ = 0.03) as τ increases, so we recommend setting τ within 0.03∼0.05 to obtain stable and reliable performance. In all our experiments, τ is 0.05.
From Figure 6(b), as the MaxTimes of the consistency regularization increases, TcVSE achieves better performance thanks to more consistent predictions, which is reasonable. From the figures, one could find that when MaxTimes is set to 3∼6, the performance gap is not large. In our experiments, we set MaxTimes to 3.
A.5 SUPPLEMENTAL RESULTS
In this section, we supplement some experimental results. Specifically, we provide more detailed experimental results on the MS-COCO 5K test set and a comparison of our TcVSE with popular VSE-based methods (VSE++ Faghri et al. (2017), VSRN Li et al. (2019), LIWE Wehrmann et al. (2019), CVE Wang et al. (2020), VSE∞ Chen et al. (2021), VSRN++ Li et al. (2022a), and MV-VSE Li et al. (2022b)). From Figure 5, our TcVSE achieves competitive results compared with the state-of-the-art image-text matching methods. Meanwhile, as shown in Table 5, compared with these popular VSE-based methods, TcVSE clearly achieves the best performance.
Image Backbone: Faster-RCNN, Text Backbone: Bi-GRU
Image Backbone: Faster-RCNN, Text Backbone: Bert-base
A.6 MORE RETRIEVAL RESULTS | 1. What is the focus and contribution of the paper regarding trust-consistent visual-text matching?
2. What are the strengths of the proposed approach, particularly in terms of uncertainty modeling and reliability scoring?
3. What are the weaknesses of the paper, especially regarding performance and experimental setup?
4. Do you have any concerns about the qualitative results and dataset suitability for evaluating the task?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method for trust-consistent visual-text matching. The authors add evidential deep learning techniques to existing ranking-based contrastive learning losses, namely an uncertainty-aware loss and a consistency loss. The uncertainty modeling allows the model to output a normalized uncertainty score that indicates the reliability of the current prediction. Experiments on two datasets show the proposed method outperforms previous methods with the same backbone features.
Strengths And Weaknesses
Strength
The need for a reliable/trustworthy image retrieval model is valid. Previous works with unbounded ranking scores are difficult to deploy in real-world systems.
The writing is clear and the proposed method seems straightforward and easy to implement.
Weaknesses
The performance increase seems insignificant, especially with the non-ensemble runs. The i2t and t2i are identical branches (Figure 2.a), so essentially they are an ensemble of the ITM models (and the computation doubles). Maybe it would make more sense if the authors show a comparison of computational costs like FLOPs between the baselines and the proposed method.
The qualitative results are somewhat concerning to me, and I think the two datasets are not best suited to evaluate this task. See Figure 4.c. The top 1 and even all top 4 sentences look correct/plausible to me (GT says only 1 is correct). Therefore I’m not sure how this kind of ambiguous annotation could have impacted the model. Also, would removing data augmentation during training change the behavior of the model (I understand Figure 3 shows this on the test set)? Please elaborate on this.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well-written. I think the need for a reliable/trustworthy image retrieval model is valid and the proposed method is novel for image-text retrieval. The method seems straightforward and can be reproduced. |
ICLR | Title
Trust-consistent Visual Semantic Embedding for Image-Text Matching
Abstract
Visual Semantic Embedding (VSE), as a link between Computer Vision and Natural Language Processing, aims at jointly learning cross-modal embeddings to bridge the discrepancy across visual and textual spaces. In recent years, VSE has achieved great success in image-text matching benefiting from the outstanding representation power of deep learning. However, existing methods produce retrieved results only relying on the ranking of cross-modal similarities, even if the retrieved results are unreliable and uncertain. That is to say, they cannot self-evaluate the quality of retrieved results for trustworthy retrieval, resulting in ignoring the ubiquitous uncertainty in data and models. To address this problem, we propose a novel VSE-based method for image-text matching, namely Trust-consistent Visual Semantic Embedding (TcVSE), to embrace trustworthy retrieval and self-evaluation for image-text matching. To be specific, first, TcVSE models the evidence based on cross-modal similarities to capture accurate uncertainty. Second, a simple yet effective consistency module is presented to enforce subjective opinions of bidirectional VSE models (i2t+t2i) to be consistent for high reliability and accuracy. Finally, extensive comparison experiments are conducted to demonstrate the superiority of TcVSE on two widely-used benchmark datasets, i.e., Flickr30K and MS-COCO. Furthermore, some qualitative experiments are carried out to provide comprehensive and insightful analyses for the reliability and rationality of our method.
1 INTRODUCTION
Visual Semantic Embedding aims to learn a shared embedding space in which visual data coincide with their corresponding semantic textual descriptions, which is an important approach to understanding the cross-modal semantic association for downstream applications, such as image-text matching Faghri et al. (2017) and visual question-answering Malinowski et al. (2015). Thus, the key issue of VSE is how to eliminate the discrepancy between images and texts to learn a reliable common embedding space. To address this issue, numerous methods attempt to project visual and textual data into a latent common space. However, how to self-evaluate the retrieval quality to achieve interpretable and reliable inference remains largely unexplored.
In this paper, we focus on image-text matching (ITM), one of the fundamental tasks of cross-modal learning, i.e., cross-modal retrieval, which expects to search the
most relevant sentences for a given image query (i2t) or retrieve the related images from a given sentence query (t2i) according to the pairwise visual-semantic similarities. Some early works based on VSE Kiros et al. (2014); Wang et al. (2016); Faghri et al. (2017) leverage the powerful feature extraction capability of deep neural networks (DNNs) to obtain the global representation of images and texts, such as VGG Simonyan & Zisserman (2014), ResNet He et al. (2016), and GRU Chung et al. (2014), etc., by maximizing the correlated cross-modal similarities. More granularly, recent
VSRN Li et al. (2019) performs reasoning with Graph Convolutional Networks (GCNs) Kipf & Welling (2016) to generate enhanced visual representations, which capture both objects and the corresponding semantic relationships for better visual semantic embedding. VSE∞ Chen et al. (2021) presents an adaptive pooling strategy (GPO) that aggregates (region-based or grid-based) local features to learn a better common representation. Unlike the aforementioned VSE-based methods, some works Lee et al. (2018); Chen et al. (2020); Wu et al. (2019); Liu et al. (2020); Diao et al. (2021); Cheng et al. (2022); Li et al. (2022a) present a specific mechanism or model to explicitly learn and integrate the fine-grained relationships between image regions and word tokens for cross-modal similarity inference.
Although prior approaches achieve promising performance, they are only able to estimate image-text similarities for cross-modal retrieval, wherein image-text pairs with high similarity are taken for granted as matched. Due to the ubiquitous uncertainty in data and models, unreliable retrieval results are inevitable. Therefore, questions such as “Is this retrieval trustworthy?” need to be revisited to evaluate the uncertainty or unreliability of predictions. To this end, it is valuable and necessary to measure such uncertainty for self-evaluation, which is rarely touched on in existing image-text matching methods.
To address this problem, we propose a novel VSE framework, termed Trust-consistent Visual Semantic Embedding (TcVSE). Not only does TcVSE outperform prior works (Figure 1), but it is also more efficient, achieving trustworthy image-text matching. More specifically, (1) we introduce Evidential Deep Learning (EDL), built on the Dempster-Shafer Theory of Evidence (DST) Yager & Liu (2008) and Subjective Logic theory (SL) Sensoy et al. (2018), into VSE models to capture the uncertainty, thus endowing the model with the ability to self-evaluate retrieval quality. Following the principles of DST and SL, we consider the pairwise similarity measured by VSE as a source of evidence and parameterize the evidence as a Dirichlet distribution, which models not only the density of query probabilities but also the uncertainty. (2) Unlike prior EDL methods, our TcVSE focuses on ITM instead of classification. Thus, our TcVSE must overcome two challenges to apply EDL to ITM, namely instance retrieval and bidirectional inference. To tackle the first challenge, we relax the instance-level retrieval to a K-way querying for training, thus enabling uncertainty estimation via cross-modal similarity. To counter the second challenge, two VSE branches (i2t and t2i) with EDL are proposed to learn bidirectional retrieval; however, the difference between the two tasks unavoidably leads to a gap between their uncertainty estimates. To address this problem, we present a simple yet effective consistency module to enforce the subjective opinions of different branches to be consistent for more reliable uncertainty estimation, thus embracing performance improvement. (3) Finally, we demonstrate the effectiveness and superiority of our method with extensive experiments on two widely used benchmark datasets, i.e., Flickr30K and MS-COCO. The comprehensive ablation studies and insightful analyses verify the reliability and practicability of our method.
2 TRUST-CONSISTENT VISUAL SEMANTIC EMBEDDING
In this section, we summarize our method in Section 2.1 and elaborate on how to estimate the evidence-based uncertainty for trustworthy image-text matching in Section 2.2. Moreover, we present a Consistent Module to make two VSE branches obtain consistent predictions on subjective opinions during evidential deep learning in Section 2.3.
2.1 OVERVIEW
To achieve trustworthy image-text matching, unlike most standard methods, TcVSE utilizes EDL and a consistent module to accurately measure the visual-textual similarity and additionally quantify the uncertainty of the VSE model for self-evaluation. Figure 2 shows the framework of our proposed method. We first define our Visual Semantic Embedding model for image-text matching as illustrated in Figure 2(a). Let (V, C) denote a visual and textual dataset, which contains a set of images V and a set of texts C. Feature Encoding: For any sample pair (u, c) in (V, C), their feature representations could be encoded by some deep backbone networks, e.g., Faster-RCNN for visual features and Bi-GRU for
textual features, respectively:
$$\mathrm{V}(u, \Theta_\phi): u \rightarrow \{\mathbf{x}_i\}_{i=1}^{V},\ \mathbf{x}_i \in \mathbb{R}^d, \qquad \mathrm{T}(c, \Theta_\psi): c \rightarrow \{\mathbf{r}_j\}_{j=1}^{M},\ \mathbf{r}_j \in \mathbb{R}^d,$$
where d is the dimensionality of the joint embedding space, V(∗, Θϕ) and T(∗, Θψ) are the visual and textual backbones with the corresponding model parameters Θϕ and Θψ, $\{\mathbf{x}_i\}_{i=1}^{V}$ is the set of V encoded local region features, and $\{\mathbf{r}_j\}_{j=1}^{M}$ is the set of M word-token features, with M the number of words in c. Following Hüllermeier & Waegeman (2021), we randomly discard the region features extracted by the backbone network (Faster-RCNN) to achieve data augmentation, which is different from common augmentation of the raw image, e.g., cropping, rotation, etc. Meanwhile, “Mask”, “Discard”, or “Swap” operations are performed on the word tokens for text augmentation.
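For concreteness, a minimal sketch of the kind of feature-level augmentation described here (random region discard and random word masking); the discard and mask ratios and the helper names are our own placeholders, not the paper's exact settings:

```python
import torch

def augment_regions(regions, discard_ratio=0.2):
    """regions: (V, d) Faster-RCNN region features; randomly keep a subset of regions."""
    keep = torch.rand(regions.size(0)) >= discard_ratio
    if not keep.any():                          # always keep at least one region
        keep[torch.randint(regions.size(0), (1,))] = True
    return regions[keep]

def augment_tokens(token_ids, mask_id, mask_ratio=0.2):
    """token_ids: (M,) word indices; randomly replace some tokens with a [MASK] id."""
    token_ids = token_ids.clone()
    mask = torch.rand(token_ids.size(0)) < mask_ratio
    token_ids[mask] = mask_id
    return token_ids
```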
Similarity Representation: To obtain the global similarity, the encoded visual features {xi}Ni=1 and textual features {rj}Mj=1 would be aggregated by Max pooling into a common embedding space.
$$\mathbf{v} = \mathrm{MaxPooling}\left(\{\mathbf{x}_i\}_{i=1}^{N}\right), \quad \mathbf{t} = \mathrm{MaxPooling}\left(\{\mathbf{r}_j\}_{j=1}^{M}\right).$$
Then, the similarity score of (v, t) is measured by the cosine similarity as follows:
$$S(\mathbf{v}, \mathbf{t}) = \frac{\mathbf{v}^{\top}\mathbf{t}}{\|\mathbf{v}\|\cdot\|\mathbf{t}\|}. \quad (1)$$
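A minimal PyTorch sketch of the aggregation and the cosine similarity in Equation (1); the tensor shapes are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def global_embedding(local_feats):
    """local_feats: (batch, num_locals, d) local features -> (batch, d) via max pooling."""
    return local_feats.max(dim=1).values

def similarity_matrix(v, t):
    """v: (B, d) image embeddings, t: (B, d) text embeddings -> (B, B) cosine similarities."""
    v = F.normalize(v, dim=-1)
    t = F.normalize(t, dim=-1)
    return v @ t.t()
```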
Learning with TcVSE: A VSE model aims at minimizing the visual-semantic distance in a common space, i.e., maximizing the similarity of matched visual and textual samples. Our TcVSE aims to achieve that goal while also endowing the VSE models with the reliable capability of uncertainty estimation. More specifically, TcVSE conducts a two-step learning process to optimize models. The first step is to optimize the uncertainty-aware loss Lu based on the cross-modal evidential deep learning. The second step is multiple optimizations for opinion-based consistency loss Lc. See Algorithm 1 for more details on the optimization process.
2.2 UNCERTAINTY ESTIMATION
In this section, we follow the notions and principles of evidential deep learning (EDL) Sensoy et al. (2018) to model the uncertainty of VSE models. To estimate uncertainty, the Dempster-Shafer
Theory of Evidence (DST) Yager & Liu (2008) and the theory of Subjective Logic (SL) Jsang (2016) are employed to build the learning paradigm of EDL. Existing EDL learns a deterministic model from the observable evidence supporting subjective opinions (i.e., model predictions). However, these methods mostly focus on unimodal classification and rarely touch image-text matching.
For image-text matching, VSE projects the visual and textual feature representations into a common space, thus making it possible to measure the similarity across different modalities. Different from existing EDL methods Sensoy et al. (2018), VSE does not have a nonlinear classifier to predict the evidence, thus making it difficult to quantify the uncertainty. To address the issue, our TcVSE relaxes the instance-level retrieval to a K-way querying, so the evidence can be estimated from the cross-modal similarities, i.e., $\mathbf{e}_k = [g(s_{k1}), g(s_{k2}), \cdots, g(s_{kK})]$ for the k-th query, where K is the number of matching events and g(·) is a function that transforms a similarity into non-negative evidence (i.e., e ∈ [0,+∞)) as below:
e = g(s) = ReLU(s/τ) or exp(s/τ), (2)
where s is the visual-semantic similarity computed by Equation (1) and 0 < τ < 1 is a temperature parameter Wu et al. (2018). To model the uncertainty, the similarity-based evidence vector e could be associated with the parameters of a Dirichlet distribution α = [α_1, · · · , α_K] (α_k = e_k + 1) built on SL theory, which provides an overall uncertainty mass u and a belief mass b_k for each singleton, i.e., one of the K retrieval events of a query in image-text matching. These K+1 masses are defined as
$$b_k = \frac{e_k}{S} = \frac{\alpha_k - 1}{S} \quad \text{and} \quad u = \frac{K}{S}, \quad (3)$$
where $S = \sum_{k=1}^{K}(e_k + 1) = \sum_{k=1}^{K}\alpha_k$ and $\sum_{k=1}^{K} b_k + u = 1$. The belief masses $\mathbf{b} = [b_1, b_2, \cdots, b_K]$ could be treated as subjective opinions corresponding to the parameters of the Dirichlet distribution $\boldsymbol{\alpha}$, and $S$ could be considered as the distribution strength.
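Putting Equations (2) and (3) together, a small sketch (our own; it assumes one row of a batch-wise similarity matrix as input) of how the evidence, Dirichlet parameters, belief masses, and the uncertainty mass could be computed:

```python
import torch

def opinions_from_similarity(sim_row, tau=0.05, use_exp=True):
    """sim_row: (K,) similarities of one query against K candidates (Eqs. (2)-(3))."""
    evidence = torch.exp(sim_row / tau) if use_exp else torch.relu(sim_row / tau)
    alpha = evidence + 1.0                  # Dirichlet parameters
    S = alpha.sum()                         # Dirichlet strength
    belief = evidence / S                   # b_k = e_k / S = (alpha_k - 1) / S
    uncertainty = sim_row.numel() / S       # u = K / S
    return belief, uncertainty, alpha
```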
Intuitively, ITM could be viewed as a process of retrieving counterparts with the highest matching probability from different modalities. Hence, the matching probability assignment over the retrieved samples of each “Query” could be denoted as p = [p_1, p_2, · · · , p_K], where $\sum_{i=1}^{K} p_i = 1$. By using the Dirichlet distribution to model such a probability assignment, given an opinion, the expected probability of the k-th matched event can be written as $\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}[p_k] = \int p_k D(\mathbf{p}\mid\boldsymbol{\alpha})\,d\mathbf{p} = \frac{\alpha_k}{S}$, where the Dirichlet distribution with parameters ⟨α_1, α_2, · · · , α_K⟩ parameterized over the evidence ⟨e_1, e_2, · · · , e_K⟩ expresses the density of such a probability assignment and simultaneously models the overall uncertainty Jsang (2016). The density function is given by
$$D(\mathbf{p}\mid\boldsymbol{\alpha}) = \begin{cases} \dfrac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1} & \text{for } \mathbf{p}\in\mathcal{S}_K \\ 0 & \text{otherwise} \end{cases}, \quad (4)$$
where B(α) is the K-dimensional multinomial beta function and SK is the K-dimensional unit simplex. For a deep classifier, the widely used loss function is cross-entropy, formally as
$$\mathcal{L}_{ce}(\mathbf{y}, \mathbf{p}) = -\sum_{j=1}^{K} y_j \log(p_j).$$
Considering the density function D(p | α) modeled by the Dirichlet distribution α, the Bayes risk of Lce can be computed by
$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \int \sum_{j=1}^{K} -y_j \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = \sum_{j=1}^{K} y_j\left(\psi(S) - \psi(\alpha_j)\right), \quad (5)$$
where ψ (·) is the digamma function. By minimizing such risk, it is possible to ensure that correctly labeled observations generate as strong evidence as possible. Since the number of annotated pairs for ITM training is much larger than the number of categories for multi-classification, we simply regard “K” as the size of one training mini-batch, wherein visual and textual samples have a one-to-one correspondence. Therefore, such risk can be considered as the equivalent of the uncertainty-aware cross-entropy Luce of TcVSE, which is defined as
$$\mathcal{L}_{uce}(\boldsymbol{\alpha}) = \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}_{D(\mathbf{p}_i\mid\boldsymbol{\alpha}_i)}\left[\mathcal{L}_{ce}(\mathbf{I}_{K},\mathbf{P}_i)\right], \quad (6)$$
where $\mathbf{I}_K$ is an identity matrix of size K. $\mathcal{L}_{uce}$ encourages VSE to generate as strong evidence as possible for positive pairs, which guarantees that the evidence of positive pairs is higher than that of negative pairs. Furthermore, to further sharpen the predicted evidence, we introduce the Kullback-Leibler (KL) divergence to enforce the evidence of negative pairs to be zero. The penalization loss could be formulated as:
$$\begin{aligned}
\mathcal{L}_{kl}(\boldsymbol{\alpha}) &= \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\left[ D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i) \,\big\|\, D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{k=1}^{K}(\tilde{\alpha}_{ik}-1)\left(\psi(\tilde{\alpha}_{ik}) - \psi(\tilde{S}_i)\right) \right],
\end{aligned} \quad (7)$$
where $\tilde{S}_i = \sum_{j=1}^{K}\tilde{\alpha}_{ij}$, $\tilde{\boldsymbol{\alpha}}_i = \mathbf{I}_{K(i,:)} + (1 - \mathbf{I}_{K(i,:)}) \odot \boldsymbol{\alpha}_i$, $\Gamma(\cdot)$ is the gamma function, and $\psi(\cdot)$ is the digamma function. Thus, the uncertainty-aware loss of one VSE branch (e.g., image-to-text) is given by
$$\mathcal{L}^{i2t}_{u} = \mathcal{L}^{i2t}_{uce} + \lambda\,\mathcal{L}^{i2t}_{kl}, \quad (8)$$
where λ is a balance factor that dynamically increases with the number of epochs. This dynamic strategy prevents the optimizer from overemphasizing the KL divergence at the beginning of training; otherwise, the optimizer would be misled by immature opinions, leading to performance degradation. Finally, to simultaneously consider bidirectional retrieval, we jointly optimize the two VSE branches as below:
$$\mathcal{L}_{u} = \mathcal{L}^{i2t}_{u} + \mathcal{L}^{t2i}_{u}, \quad (9)$$
where $\mathcal{L}^{t2i}_{u}$ is the evidential loss of the t2i VSE branch, which can be computed like Equations (6) to (8).
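To make the loss concrete, a minimal PyTorch sketch of the uncertainty-aware loss of one branch (Equations (5)-(8)); the annealing schedule for λ is an assumption, since the text only states that it increases with the number of epochs:

```python
import torch

def branch_uncertainty_loss(alpha, epoch, max_epoch):
    """alpha: (K, K) Dirichlet parameters; row i is the i-th query, targets on the diagonal."""
    K = alpha.size(0)
    y = torch.eye(K, device=alpha.device)
    S = alpha.sum(dim=1, keepdim=True)
    # Eq. (5)/(6): Bayes risk of cross-entropy under the Dirichlet
    loss_uce = (y * (torch.digamma(S) - torch.digamma(alpha))).sum(dim=1).mean()
    # Eq. (7): KL of the misleading evidence against the uniform Dirichlet
    alpha_tilde = y + (1.0 - y) * alpha
    S_tilde = alpha_tilde.sum(dim=1)
    kl = (torch.lgamma(S_tilde)
          - torch.lgamma(alpha_tilde).sum(dim=1)
          - torch.lgamma(torch.tensor(float(K), device=alpha.device))
          + ((alpha_tilde - 1.0)
             * (torch.digamma(alpha_tilde) - torch.digamma(S_tilde.unsqueeze(1)))).sum(dim=1)
          ).mean()
    lam = min(1.0, epoch / max_epoch)       # assumed annealing of the balance factor
    return loss_uce + lam * kl              # Eq. (8); Eq. (9) sums this over both branches
```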
2.3 CONSISTENT MODULE
In our TcVSE, each branch focuses on different learning directions, due to the discrepancy between distinct retrieval tasks (one for image-to-text and another for text-to-image). Unfortunately, this will lead to various branches producing inconsistent uncertainty estimation, resulting in a performance drop. Specifically, given one query, one branch produces a prediction of low uncertainty, whereas the uncertainty of another branch might be higher as shown in Figure 2(b). Therefore, we introduce a consistency regularization to enforce the two VSE branches to produce consistent predictions on subjective opinions. To simplify presentation without losing generality, we only elaborate on the consistency loss of one direction (i.e., image-to-text) as follows:
$$\mathcal{L}^{i2t}_{c}\left(\mathbf{b}^{i2t}, \hat{\mathbf{b}}^{i2t}\right) = \frac{1}{K}\sum_{k=1}^{K}\left| b^{i2t}_{k} - \hat{b}^{i2t}_{k} \right|, \quad (10)$$
where $\mathbf{b}^{i2t}$ and $\hat{\mathbf{b}}^{i2t}$ are obtained from the i2t and t2i branches with Equation (3), respectively. Similarly, we can easily obtain the consistency loss in the other direction (i.e., text-to-image). Finally, the consistency loss $\mathcal{L}_c$ of our TcVSE can be formulated as:
$$\mathcal{L}_{c} = \frac{1}{K}\sum_{k=1}^{K}\left[ \mathcal{L}^{i2t}_{c}\left(\mathbf{b}^{i2t}_{k}, \hat{\mathbf{b}}^{i2t}_{k}\right) + \mathcal{L}^{t2i}_{c}\left(\hat{\mathbf{b}}^{t2i}_{k}, \mathbf{b}^{t2i}_{k}\right) \right]. \quad (11)$$
The optimization process for our TcVSE is summarized in Algorithm 1.
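A minimal sketch of the opinion-consistency term in Equations (10)-(11), assuming the belief matrices of the two branches (each row holding the subjective opinion of one query, as in Equation (3)) are given:

```python
import torch

def consistency_loss(b_i2t, b_hat_i2t, b_t2i, b_hat_t2i):
    """Each argument is a (K, K) belief matrix; Eqs. (10)-(11) reduce to mean absolute gaps."""
    l_i2t = (b_i2t - b_hat_i2t).abs().mean()   # image-to-text direction, Eq. (10)
    l_t2i = (b_hat_t2i - b_t2i).abs().mean()   # text-to-image direction
    return l_i2t + l_t2i                       # Eq. (11)
```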
3 EXPERIMENT
To evaluate our TcVSE, we conduct extensive experiments on two widely used benchmark datasets for Image-Text Matching. Following Lee et al. (2018), we measure the performance of image-to-text and text-to-image retrieval by Recall@K (K=1,5,10), which is defined as the proportion of correct items retrieved in the top K samples of the query. In addition, we adopt the sum of all Recall results to evaluate the overall performance.
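As a reference for the evaluation protocol, a small sketch of how Recall@K could be computed from a query-by-candidate similarity matrix; it assumes a simple 1:1 ground truth on the diagonal and omits the 1-image-vs-5-captions bookkeeping used on Flickr30K and MS-COCO:

```python
import numpy as np

def recall_at_k(sim, ks=(1, 5, 10)):
    """sim: (num_queries, num_candidates) similarity matrix, ground truth on the diagonal."""
    order = np.argsort(-sim, axis=1)                     # candidates sorted by similarity
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(sim.shape[0])])
    return {f"R@{k}": 100.0 * float(np.mean(ranks < k)) for k in ks}
```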
Visual Backbone: Faster-RCNN, Textual Backbone: Bi-GRU
Visual Backbone: Faster-RCNN, Textual Backbone: Bert-base
3.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets: The benchmark datasets used in our experiments are Flickr30K Young et al. (2014) and MS-COCO Lin et al. (2014). Flickr30K is an image-text dataset collected from the Flickr website and contains 31,000 images with five semantically correlated captions each. Following Lee et al. (2018), we adopt the same dataset splits in our experiments, i.e., 29,000 training images, 1,000 validation images, and 1,000 testing images. MS-COCO consists of 123,287 images, and each image also has five annotated text descriptions. Following Lee et al. (2018), we use 113,287 images for training, 5,000 images for validation, and the remaining 5,000 images for testing.
Implementation Detail. In our TcVSE, like VSE∞ Chen et al. (2021), a Faster-RCNN Anderson et al. (2018) detector (with ResNet-101) and Bi-GRU (or Bert-base Devlin et al. (2018)) serve as our visual and textual backbones, respectively. For each image, the visual backbone extracts the region proposals with top-36 confidence scores and projects each region into a 2,048-dimensional feature vector. Following Chen et al. (2021), we randomly discard some region proposals of each image to achieve augmentation during training. For each text description, we randomly mask some words to achieve data augmentation. The dimensionality of the common embedding space is 1024. Different from most methods, we use the uncertainty-aware loss based on EDL for training, which additionally endows the model with the ability to estimate uncertainty.
We employ the AdamW optimizer Loshchilov & Hutter (2017) with a weight decay factor of 10e-4 to train the VSE branches. The learning rate of the visual model is 5e-4. For the textual model, the initial learning rate is 5e-4 (5e-5 for Bert-base) and decays by 10% every 10 epochs. The mini-batch size K is 128, with 25 training epochs on both Flickr30K and MS-COCO.
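A sketch of the optimizer setup described above; the parameter grouping and the scheduler choice are our assumptions, and the weight-decay value follows the text:

```python
import torch

def build_optimizer(visual_params, text_params, text_is_bert=False):
    groups = [
        {"params": visual_params, "lr": 5e-4},
        {"params": text_params, "lr": 5e-5 if text_is_bert else 5e-4},
    ]
    optimizer = torch.optim.AdamW(groups, weight_decay=10e-4)   # "10e-4" as stated above
    # decay by 10% every 10 epochs, i.e., multiply the learning rate by 0.9
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)
    return optimizer, scheduler
```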
3.2 COMPARISON WITH STATE-OF-THE-ART METHODS
For a comprehensive evaluation, we compare our TcVSE with 12 state-of-the-art baselines, including SCAN Lee et al. (2018), CAMP Wang et al. (2019), VSRN Li et al. (2019), IMRAM Chen et al. (2020), GSMN Liu et al. (2020), SGRAF(SAF+SGR) Diao et al. (2021), VSE++* Chen et al. (2021), VSE∞ Chen et al. (2021), NCR Huang et al. (2021), CGMN Cheng et al. (2022), URDA Li et al. (2022a) and VSRN++ Li et al. (2022a). VSE++* is the basic version based on VSE∞ using Average Pooling. We conduct abundant comparison experiments as shown in Tables 1 and 2.
Furthermore, we also provide a comparison with the state-of-the-art VSE-based methods in Table 5 for a comprehensive evaluation.
Results on Flickr30K. We report the experimental results on Flickr30K in Table 1. From the table, one could find that our TcVSE with a single VSE branch achieves comparable results, e.g., TcVSE (i2t) outperforms all baselines with rSum=504.4 under Bi-GRU and rSum=517.7 under Bert-base. Thanks to our trust-consistent learning, our TcVSE (i2t+t2i) is superior to all compared methods. Under the textual Bert-base backbone, our TcVSE could outperform all baselines with either one or two branches. Specifically, TcVSE (i2t+t2i) achieves remarkable improvement with the best R@1=82.9% for sentence retrieval and R@1=63.9% for image retrieval.
Results on MS-COCO. We present the quantitative results on MS-COCO with 5-fold 1K and full 5K test images in Tables 1 and 2, respectively. With Bi-GRU, our TcVSE could achieve a competitive performance compared to the state-of-the-art. More specifically, TcVSE (i2t+t2i) achieves the best R@1 of 80.6% for sentence retrieval. In addition, Bert-base could further boost our TcVSE remarkably, i.e., a relative improvement of about 3% on R@1 compared to the best baseline VSE∞. In brief, our TcVSE with either one VSE branch or two branches could remarkably outperform all baselines, which demonstrates the effectiveness of our method.
For the experiments on MS-COCO 5K test images, the performance improvement is even more pronounced in terms of sentence retrieval, with a relative improvement of 7.7% (Bi-GRU) and 12.9% (Bert-base) on R@1 compared to best baselines. Both one and two branches of our TcVSE (Bert-base) could achieve conspicuous performance improvement. Furthermore, the consistent module could further boost the performance of TcVSE with one branch, which indicates that our trust-consistent learning will produce complementary and trustworthy predictions for retrieval improvement.
3.3 ABLATION STUDY
In this section, extensive ablation studies are carried out on Flickr30K to verify the contribution of each component to image-text matching. The experimental results are as shown in Table 3. We could comprehensively analyze the results from the following three distinct aspects:
Effectiveness. To verify the effectiveness of our EDL, we replace our evidential loss with the Max of Hinge loss (MH) Faghri et al. (2017) to optimize our VSE, i.e., #7 VSE with MH loss. From Table 3, one could see that the other variants with EDL (i.e., #1–6) achieve better retrieval performance than VSE with MH loss, which indicates that our VSE endowed with EDL could remarkably improve performance by capturing the uncertainty. Moreover, our consistency module could further improve the retrieval performance of the two branches, even when using only one branch for inference. More specifically, the module could relatively improve the performance by 1.65% (#1 vs. #2), 1.02% (#3 vs. #4), and 1.68% (#5 vs. #6) in terms of R@1 for sentence retrieval, respectively. By fusing the two branches, our TcVSE could achieve further improvement, e.g., the full version of our TcVSE (#1) could relatively improve the one-branch versions #3 and #5 by 1.52% and 2.03% in terms of R@1 for sentence retrieval, respectively.
Complementarity. The two VSE branches are exploited to focus on different retrieval tasks, i.e., image-to-text and text-to-image matching. Obviously, such differences between the tasks lead to distinct emphases. Thus, aggregating the two VSE branches takes advantage of their complementary information, leading to further improvement, which is verified by the results. Specifically, the variants with aggregation (i.e., #1 and #2) achieve better performance compared to the variants with single branches (i.e., #3-6).
Consistency. Thanks to our consistent module, the performance of our TcVSE could be improved even with only one single branch, i.e., #3 vs. #4, and #5 vs. #6. Hence, our consistent module could mutually promote the performance of different branches by eliminating the prediction discrepancy across different branches. Furthermore, the full version of TcVSE (#1) could achieve the best retrieval performance, which indicates that our consistent module not only mutually promotes the performance of each branch but also remains complementary information of different branches.
3.4 VISUALIZATION OF UNCERTAINTY
To visually illustrate the uncertainty estimation, we plot the distribution diagrams of the obtained uncertainty on the test sets of Flickr30K and MS-COCO. Since the intrinsic perturbation in data is uncontrollable and inconspicuous, it is hard to quantitatively evaluate the uncertainty estimated by the proposed method. To this end, we manually corrupt the inputs to amplify the unreliability of the data for easier observation, e.g., the discard, swap, and mask operations used in Huang et al. (2021). Such data corruption could be seen as data augmentation. The proportion of corrupted image regions and words denotes the augmentation rate (AR). In the experiment, we investigate the uncertainty distribution quantified by our TcVSE (Bi-GRU) under three ARs (i.e., 0.0, 0.3, 0.6) as shown in Figure 3. From the figure, one could see that most retrievals under the low ARs have low uncertainty, i.e., clustering on the left. On the contrary, the uncertainty of the retrievals gradually increases as the ARs increase, as shown by most of the retrieval uncertainty gathering to the right in Figures 3(a) to 3(d). That is to say, as the AR increases, the correlation between image-text pairs is degraded, resulting in increased retrieval uncertainty, which is consistent with the fact that data disturbance increases unreliability/uncertainty. Therefore, our method could effectively capture the uncertainty.
3.5 QUALITATIVE RESULTS
Figures 4 and 5 illustrate some qualitative cross-modal results retrieved by our TcVSE. In the figures, we also report the estimated uncertainty and the ensemble similarity measured by TcVSE for intuitive analysis. Unlike prior visual-textual matching methods, our TcVSE could quantify the overall uncertainty for cross-modal retrieval given each query, thus providing self-evaluation scores for the retrieved results. That is to say, our TcVSE could not only compute the similarities across different modalities for cross-modal retrieval inference but also self-evaluate the reliability of the results in terms of uncertainty, improving the interpretability of retrieval. For example, in Figure 4(a-c), the
predicted uncertainty from our TcVSE could self-evaluate the retrieval quality, i.e., incorrect retrieved results come with higher uncertainty.
For example, in the completely correct cases (i.e., Figure 4(a) and Figure 5(a)), the correct retrievals have high similarity and low overall uncertainty and can therefore be viewed as trustworthy. In contrast, retrieved results with high uncertainty, e.g., Figure 4(c) and Figure 5(d), are usually unreliable even when their similarity is relatively high. More specifically, although these retrieved results have relatively high similarities compared to other correctly retrieved ones, they ignore or misunderstand some details in the queries, such as "one female" in Figure 4(c) and "skateboard" vs. "rollerblade" in Figure 5(d). That is to say, it is very hard to evaluate retrieval quality from the similarities alone. Fortunately, our TcVSE accurately estimates the uncertainty of the retrieved results, enabling it to self-evaluate the retrieval quality.
4 CONCLUSION
In this paper, we revisit a practicable and meaningful problem in VSE-based image-text matching, i.e., “How to make retrieval trustworthy?”. To this end, we present a Trust-consistent Visual Semantic Embedding method (TcVSE) for image-text matching, thus endowing the VSE models with the ability to self-evaluate the retrieval quality for trustworthy retrieval. Specifically, first, cross-modal evidential deep learning is proposed to capture accurate uncertainty of image-text matching. Second, a consistency module is presented to enforce the subjective opinion of distinct branches to be consistent for high reliability. Finally, we conduct extensive experiments and analyses to verify the effectiveness and self-evaluation of TcVSE.
A APPENDIX
A.1 RELATED WORKS
Image-text matching. Most of the existing methods for image-text matching (ITM) could be roughly divided into two groups, i.e., the global-level methods represented by visual-semantic embedding (VSE) and the local-level methods with complex similarity inference. The global-level methods mainly aim to obtain good global representations from the visual and textual modalities with the help of well-designed feature extraction, enhancement, or aggregation strategies, and then directly compute the similarity, e.g., VSE++ Faghri et al. (2017), VSRN Li et al. (2019), and VSE∞ Chen et al. (2021). The local-level methods aim to learn the latent fine-grained alignments across different modalities for more accurate similarity inference, e.g., SCAN Lee et al. (2018), IMRAM Chen et al. (2020), SGRAF Diao et al. (2021), UARDA Zhang et al. (2022), and so on. Different from the aforementioned lightweight methods, further breakthroughs have been made in the performance of downstream cross-modal tasks with the rapid development of large-scale vision-language pre-training models in recent years, e.g., UNICODER-VL Li et al. (2020), CLIP Radford et al. (2021), and MaskCLIP Dong et al. (2022). However, these models are usually accompanied by high training or fine-tuning costs. In this paper, our research belongs to the lightweight global-level group.
Uncertainty-based learning. Deep learning has made promising progress in both academic research and industrial applications, but it is hard to quantify the uncertainty of deep models directly due to their deterministic network predictions. Bayesian neural networks (BNNs) have been used to model uncertainty in computer vision tasks by placing priors over the deterministic network weights, e.g., variational inference Kingma et al. (2015), approximations via dropout Gal & Ghahramani (2016); Gal et al. (2017), and so on. However, modeling uncertainty with BNNs is usually limited by the expensive sampling cost. Recently, Sensoy et al. (2018) proposed an uncertainty learning paradigm that combines evidence theory with DNNs, which places Dirichlet priors over discrete model predictions to directly model uncertainty with lower cost, and it has been successfully applied in various tasks,
e.g., classification Sensoy et al. (2018); Han et al. (2022), recognition Bao et al. (2021), and segmentation Zou et al. (2022). In this paper, we focus on estimating the uncertainty in image-text matching based on evidential deep learning.
A.2 DERIVATION
We provide detailed derivations for some of the formulas in the paper.
The derivation of Equation (5):
$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \int \sum_{j=1}^{K} -y_j \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = -\sum_{j=1}^{K} y_j \int \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = -\sum_{j=1}^{K} y_j\,\mathbb{E}\left[\log(p_j)\right].$$

From Minka (2003), $\mathbb{E}\left[\log(p_j)\right] = \psi(\alpha_j) - \psi(S)$, where $S = \sum_{k=1}^{K}\alpha_k$. Thus,

$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \sum_{j=1}^{K} y_j\left(\psi(S) - \psi(\alpha_j)\right).$$
The derivation of Equation (7):
$$\begin{aligned}
\mathcal{L}_{kl}(\boldsymbol{\alpha}) &= \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\left[ D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i) \,\big\|\, D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}\left[ \log \frac{D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i)}{D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle)} \right] \\
&= \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}\left[ \log\left( \frac{\Gamma\left(\sum_{j=1}^{K}\tilde{\alpha}_{ij}\right)}{\Gamma(K)\prod_{j=1}^{K}\Gamma(\tilde{\alpha}_{ij})} \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij}-1} \right) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ \log \frac{\Gamma\left(\sum_{j=1}^{K}\tilde{\alpha}_{ij}\right)}{\Gamma(K)\prod_{j=1}^{K}\Gamma(\tilde{\alpha}_{ij})} + \mathbb{E}\left[\log \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij}-1}\right] \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{j=1}^{K}(\tilde{\alpha}_{ij}-1)\,\mathbb{E}\left[\log p_{ij}\right] \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{j=1}^{K}(\tilde{\alpha}_{ij}-1)\left(\psi(\tilde{\alpha}_{ij}) - \psi(\tilde{S}_i)\right) \right]
\end{aligned}$$
A.3 PSEUDOCODE
We provide the pseudocode of TcVSE (Algorithm 1) to help understand how TcVSE works.
A.4 PARAMETRIC ANALYSIS
TcVSE has two key hyper-parameters, i.e., τ in Equation (2) and MaxTimes in Algorithm 1. Thus, we conduct detailed parameter experiments (shown in Figure 6) to evaluate the impact of different
Algorithm 1: TcVSE: Trust-consistent Visual Semantic Embedding pseudocode
Input: A well-paired subset {(u_n, c_n)}_{n=1}^N of (V, C), temperature parameter τ.
Initialize: Initialize the parameters Θ of TcVSE.
while e < MaxEpoch do
    for x in Batches do
        /* First step */
        x′ = Augment(x)
        {e^{i2t}_k}_{k=1}^K ← VSE^{i2t}(x′)   \\ image-to-text
        {e^{t2i}_k}_{k=1}^K ← VSE^{t2i}(x′)   \\ text-to-image
        for each query do
            Dirichlet distribution D(p | α) ← e   \\ α = e + 1
        end
        Obtain the uncertainty-aware loss L_u with Equation (9)
        Θ = AdamW(L_u, Θ)
        /* Second step */
        for t < MaxTimes do
            Recompute {e^{i2t}_k}_{k=1}^K and {ê^{i2t}_k}_{k=1}^K   \\ image-to-text
            for each i2t query do
                Obtain subjective opinions b^{i2t}, b̂^{i2t} with Equation (3)
            end
            Recompute {ê^{t2i}_k}_{k=1}^K and {e^{t2i}_k}_{k=1}^K   \\ text-to-image
            for each t2i query do
                Obtain subjective opinions b̂^{t2i}, b^{t2i} with Equation (3)
            end
            Obtain the consistency loss L_c with Equation (11)
            Θ = AdamW(L_c, Θ)
        end
    end
end
Output: The learned parameters Θ
hyper-parameter settings and to obtain better parameter settings for TcVSE. From Figure 6(a), TcVSE with a too small τ cannot be optimized well and performs poorly. Moreover, the performance of TcVSE gradually decreases from the best value (τ = 0.03) as τ increases, so we recommend setting τ within 0.03∼0.05 to obtain stable and reliable performance. In all our experiments, τ is 0.05.
From Figure 6(b), as the MaxTimes of the consistency regularization increases, TcVSE achieves better performance thanks to more consistent predictions, which is reasonable. From the figures, one could find that when MaxTimes is set to 3∼6, the performance gap is not large. In our experiments, we set MaxTimes to 3.
A.5 SUPPLEMENTAL RESULTS
In this section, we supplement some experimental results. Specifically, we provide more detailed experimental results on the MS-COCO 5K test set and a comparison of our TcVSE with popular VSE-based methods (VSE++ Faghri et al. (2017), VSRN Li et al. (2019), LIWE Wehrmann et al. (2019), CVE Wang et al. (2020), VSE∞ Chen et al. (2021), VSRN++ Li et al. (2022a), and MV-VSE Li et al. (2022b)). From Figure 5, our TcVSE achieves competitive results compared with the state-of-the-art image-text matching methods. Meanwhile, as shown in Table 5, compared with these popular VSE-based methods, TcVSE clearly achieves the best performance.
Image Backbone: Faster-RCNN, Text Backbone: Bi-GRU
Image Backbone: Faster-RCNN, Text Backbone: Bert-base
A.6 MORE RETRIEVAL RESULTS | 1. What is the focus and contribution of the paper on visual-semantic embedding?
2. What are the strengths of the proposed approach, particularly in terms of uncertainty estimation and performance improvement?
3. What are the weaknesses of the paper regarding its limitations and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces uncertainty estimation module into visual-semantic embedding model using Dempster–Shafer Theory of Evidence and Subjective Logic Theory, embracing trustworthy retrieval. The proposed method significantly improves the performance and is able to provide an uncertainty score.
Strengths And Weaknesses
Strength:
S1: Uncertainty is important for deep learning methods. This paper introduces uncertainty estimation into image-text embedding models using Dempster–Shafer Theory of Evidence and Subjective Logic Theory, which is interesting.
S2: The performance of the proposed model is notable, outperforming its counterparts.
Weaknesses:
W1: In addition to VSE framework, can other image-text models use the uncertainty estimation module? For example CLIP model.
W2: Some advanced baselines are missing, for example UNICODER [1] and the proposed approach is inferior to the models with large-scale training. So I am wondering whether the proposed approach can also be used for UNICODER.
[1] G. Li et al. Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modalPre-training. AAAI, 2020.
Clarity, Quality, Novelty And Reproducibility
The paper is clear and high-quality, and I can easily follow it. Also, I think the model is reproducible. |
ICLR | Title
Trust-consistent Visual Semantic Embedding for Image-Text Matching
Abstract
Visual Semantic Embedding (VSE), as a link between Computer Vision and Natural Language Processing, aims at jointly learning cross-modal embeddings to bridge the discrepancy across visual and textual spaces. In recent years, VSE has achieved great success in image-text matching benefiting from the outstanding representation power of deep learning. However, existing methods produce retrieved results only relying on the ranking of cross-modal similarities, even if the retrieved results are unreliable and uncertain. That is to say, they cannot self-evaluate the quality of retrieved results for trustworthy retrieval, resulting in ignoring the ubiquitous uncertainty in data and models. To address this problem, we propose a novel VSE-based method for image-text matching, namely Trust-consistent Visual Semantic Embedding (TcVSE), to embrace trustworthy retrieval and self-evaluation for image-text matching. To be specific, first, TcVSE models the evidence based on cross-modal similarities to capture accurate uncertainty. Second, a simple yet effective consistency module is presented to enforce subjective opinions of bidirectional VSE models (i2t+t2i) to be consistent for high reliability and accuracy. Finally, extensive comparison experiments are conducted to demonstrate the superiority of TcVSE on two widely-used benchmark datasets, i.e., Flickr30K and MS-COCO. Furthermore, some qualitative experiments are carried out to provide comprehensive and insightful analyses for the reliability and rationality of our method.
1 INTRODUCTION
Visual Semantic Embedding aims to learn a shared embedding space in which visual data coincide with their corresponding semantic textual descriptions, which is an important approach to understanding the cross-modal semantic association for downstream applications, such as image-text matching Faghri et al. (2017) and visual question-answering Malinowski et al. (2015). Thus, the key issue of VSE is how to eliminate the discrepancy between images and texts to learn a reliable common embedding space. To address this issue, numerous methods attempt to project visual and textual data into a latent common space. However, how to self-evaluate the retrieval quality to achieve interpretable and reliable inference remains largely unexplored.
In this paper, we focus on image-text matching (ITM), one of the fundamental tasks of cross-modal learning, i.e., cross-modal retrieval, which expects to search the
most relevant sentences for a given image query (i2t) or retrieve the related images from a given sentence query (t2i) according to the pairwise visual-semantic similarities. Some early works based on VSE Kiros et al. (2014); Wang et al. (2016); Faghri et al. (2017) leverage the powerful feature extraction capability of deep neural networks (DNNs) to obtain the global representation of images and texts, such as VGG Simonyan & Zisserman (2014), ResNet He et al. (2016), and GRU Chung et al. (2014), etc., by maximizing the correlated cross-modal similarities. More granularly, recent
VSRN Li et al. (2019) performs reasoning with Graph Convolutional Networks (GCNs) Kipf & Welling (2016) to generate enhanced visual representations, which capture both objects and the corresponding semantic relationships for better visual semantic embedding. VSE∞ Chen et al. (2021) presents an adaptive pooling strategy (GPO) that aggregates (region-based or grid-based) local features to learn a better common representation. Unlike the aforementioned VSE-based methods, some works Lee et al. (2018); Chen et al. (2020); Wu et al. (2019); Liu et al. (2020); Diao et al. (2021); Cheng et al. (2022); Li et al. (2022a) present a specific mechanism or model to explicitly learn and integrate the fine-grained relationships between image regions and word tokens for cross-modal similarity inference.
Although prior approaches achieve promising performance, they are only able to estimate image-text similarities for cross-modal retrieval, wherein image-text pairs with high similarity are taken for granted as matched. Due to the ubiquitous uncertainty in data and models, unreliable retrieval results are inevitable. Therefore, questions such as “Is this retrieval trustworthy?” need to be revisited to evaluate the uncertainty or unreliability of predictions. To this end, it is valuable and necessary to measure such uncertainty for self-evaluation, which is rarely touched on in existing image-text matching methods.
To address this problem, we propose a novel VSE framework, termed Trust-consistent Visual Semantic Embedding (TcVSE). Not only does TcVSE outperform prior works (Figure 1), but it is also more efficient, achieving trustworthy image-text matching. More specifically, (1) we introduce Evidential Deep Learning (EDL), built on the Dempster-Shafer Theory of Evidence (DST) Yager & Liu (2008) and Subjective Logic theory (SL) Sensoy et al. (2018), into VSE models to capture the uncertainty, thus endowing the model with the ability to self-evaluate retrieval quality. Following the principles of DST and SL, we consider the pairwise similarity measured by VSE as a source of evidence and parameterize the evidence as a Dirichlet distribution, which models not only the density of query probabilities but also the uncertainty. (2) Unlike prior EDL methods, our TcVSE focuses on ITM instead of classification. Thus, our TcVSE must overcome two challenges to apply EDL to ITM, namely instance retrieval and bidirectional inference. To tackle the first challenge, we relax the instance-level retrieval to a K-way querying for training, thus enabling uncertainty estimation via cross-modal similarity. To counter the second challenge, two VSE branches (i2t and t2i) with EDL are proposed to learn bidirectional retrieval; however, the difference between the two tasks unavoidably leads to a gap between their uncertainty estimates. To address this problem, we present a simple yet effective consistency module to enforce the subjective opinions of different branches to be consistent for more reliable uncertainty estimation, thus embracing performance improvement. (3) Finally, we demonstrate the effectiveness and superiority of our method with extensive experiments on two widely used benchmark datasets, i.e., Flickr30K and MS-COCO. The comprehensive ablation studies and insightful analyses verify the reliability and practicability of our method.
2 TRUST-CONSISTENT VISUAL SEMANTIC EMBEDDING
In this section, we summarize our method in Section 2.1 and elaborate on how to estimate the evidence-based uncertainty for trustworthy image-text matching in Section 2.2. Moreover, we present a Consistent Module to make two VSE branches obtain consistent predictions on subjective opinions during evidential deep learning in Section 2.3.
2.1 OVERVIEW
To achieve trustworthy image-text matching, unlike most standard methods, TcVSE utilizes EDL and a consistent module to accurately measure the visual-textual similarity and additionally quantify the uncertainty of the VSE model for self-evaluation. Figure 2 shows the framework of our proposed method. We first define our Visual Semantic Embedding model for image-text matching as illustrated in Figure 2(a). Let (V, C) denote a visual and textual dataset, which contains a set of images V and a set of texts C. Feature Encoding: For any sample pair (u, c) in (V, C), their feature representations could be encoded by some deep backbone networks, e.g., Faster-RCNN for visual features and Bi-GRU for
textual features, respectively:
$$\mathrm{V}(u, \Theta_\phi): u \rightarrow \{\mathbf{x}_i\}_{i=1}^{V},\ \mathbf{x}_i \in \mathbb{R}^d, \qquad \mathrm{T}(c, \Theta_\psi): c \rightarrow \{\mathbf{r}_j\}_{j=1}^{M},\ \mathbf{r}_j \in \mathbb{R}^d,$$
where d is the dimensionality of the joint embedding space, V(∗, Θϕ) and T(∗, Θψ) are the visual and textual backbones with the corresponding model parameters Θϕ and Θψ, $\{\mathbf{x}_i\}_{i=1}^{V}$ is the set of V encoded local region features, and $\{\mathbf{r}_j\}_{j=1}^{M}$ is the set of M word-token features, with M the number of words in c. Following Hüllermeier & Waegeman (2021), we randomly discard the region features extracted by the backbone network (Faster-RCNN) to achieve data augmentation, which is different from common augmentation of the raw image, e.g., cropping, rotation, etc. Meanwhile, “Mask”, “Discard”, or “Swap” operations are performed on the word tokens for text augmentation.
Similarity Representation: To obtain the global similarity, the encoded visual features {xi}Ni=1 and textual features {rj}Mj=1 would be aggregated by Max pooling into a common embedding space.
$$\mathbf{v} = \mathrm{MaxPooling}\left(\{\mathbf{x}_i\}_{i=1}^{N}\right), \quad \mathbf{t} = \mathrm{MaxPooling}\left(\{\mathbf{r}_j\}_{j=1}^{M}\right).$$
Then, the similarity score of (v, t) is measured by the cosine similarity as follows:
$$S(\mathbf{v}, \mathbf{t}) = \frac{\mathbf{v}^{\top}\mathbf{t}}{\|\mathbf{v}\|\cdot\|\mathbf{t}\|}. \quad (1)$$
Learning with TcVSE: A VSE model aims at minimizing the visual-semantic distance in a common space, i.e., maximizing the similarity of matched visual and textual samples. Our TcVSE aims to achieve that goal while also endowing the VSE models with the reliable capability of uncertainty estimation. More specifically, TcVSE conducts a two-step learning process to optimize models. The first step is to optimize the uncertainty-aware loss Lu based on the cross-modal evidential deep learning. The second step is multiple optimizations for opinion-based consistency loss Lc. See Algorithm 1 for more details on the optimization process.
2.2 UNCERTAINTY ESTIMATION
In this section, we follow the notions and principles of evidential deep learning (EDL) Sensoy et al. (2018) to model the uncertainty of VSE models. To estimate uncertainty, the Dempster-Shafer
Theory of Evidence (DST) Yager & Liu (2008) and the theory of Subjective Logic (SL) Jsang (2016) are employed to build the learning paradigm of EDL. Existing EDL learns a deterministic model from the observable evidence supporting subjective opinions (i.e., model predictions). However, these methods mostly focus on unimodal classification and rarely touch image-text matching.
For image-text matching, VSE projects the visual and textual feature representations into a common space, thus making it possible to measure the similarity across different modalities. Different from existing EDL methods Sensoy et al. (2018), VSE does not have a nonlinear classifier to predict the evidence, thus making it difficult to quantify the uncertainty. To address the issue, our TcVSE relaxes the instance-level retrieval to a K-way querying, so the evidence can be estimated from the cross-modal similarities, i.e., $\mathbf{e}_k = [g(s_{k1}), g(s_{k2}), \cdots, g(s_{kK})]$ for the k-th query, where K is the number of matching events and g(·) is a function that transforms a similarity into non-negative evidence (i.e., e ∈ [0,+∞)) as below:
e = g(s) = ReLU(s/τ) or exp(s/τ), (2)
where s is the visual-semantic similarity computed by Equation (1) and 0 < τ < 1 is a temperature parameter Wu et al. (2018). To model the uncertainty, the similarity-based evidence vector e could be associated with the parameters of a Dirichlet distribution α = [α_1, · · · , α_K] (α_k = e_k + 1) built on SL theory, which provides an overall uncertainty mass u and a belief mass b_k for each singleton, i.e., one of the K retrieval events of a query in image-text matching. These K+1 masses are defined as
$$b_k = \frac{e_k}{S} = \frac{\alpha_k - 1}{S} \quad \text{and} \quad u = \frac{K}{S}, \quad (3)$$
where $S = \sum_{k=1}^{K}(e_k + 1) = \sum_{k=1}^{K}\alpha_k$ and $\sum_{k=1}^{K} b_k + u = 1$. The belief masses $\mathbf{b} = [b_1, b_2, \cdots, b_K]$ could be treated as subjective opinions corresponding to the parameters of the Dirichlet distribution $\boldsymbol{\alpha}$, and $S$ could be considered as the distribution strength.
Intuitively, ITM could be viewed as a process of retrieving counterparts with the highest matching probability from different modalities. Hence, the matching probability assignment over the retrieved samples of each “Query” could be denoted as p = [p_1, p_2, · · · , p_K], where $\sum_{i=1}^{K} p_i = 1$. By using the Dirichlet distribution to model such a probability assignment, given an opinion, the expected probability of the k-th matched event can be written as $\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}[p_k] = \int p_k D(\mathbf{p}\mid\boldsymbol{\alpha})\,d\mathbf{p} = \frac{\alpha_k}{S}$, where the Dirichlet distribution with parameters ⟨α_1, α_2, · · · , α_K⟩ parameterized over the evidence ⟨e_1, e_2, · · · , e_K⟩ expresses the density of such a probability assignment and simultaneously models the overall uncertainty Jsang (2016). The density function is given by
$$D(\mathbf{p}\mid\boldsymbol{\alpha}) = \begin{cases} \dfrac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1} & \text{for } \mathbf{p}\in\mathcal{S}_K \\ 0 & \text{otherwise} \end{cases}, \quad (4)$$
where B(α) is the K-dimensional multinomial beta function and SK is the K-dimensional unit simplex. For a deep classifier, the widely used loss function is cross-entropy, formally as
$$\mathcal{L}_{ce}(\mathbf{y}, \mathbf{p}) = -\sum_{j=1}^{K} y_j \log(p_j).$$
Considering the density function D(p | α) modeled by the Dirichlet distribution α, the Bayes risk of Lce can be computed by
$$\mathbb{E}_{D(\mathbf{p}\mid\boldsymbol{\alpha})}\left[\mathcal{L}_{ce}\right] = \int \sum_{j=1}^{K} -y_j \log(p_j)\,\frac{1}{B(\boldsymbol{\alpha})}\prod_{j=1}^{K} p_j^{\alpha_j-1}\, d\mathbf{p} = \sum_{j=1}^{K} y_j\left(\psi(S) - \psi(\alpha_j)\right), \quad (5)$$
where ψ (·) is the digamma function. By minimizing such risk, it is possible to ensure that correctly labeled observations generate as strong evidence as possible. Since the number of annotated pairs for ITM training is much larger than the number of categories for multi-classification, we simply regard “K” as the size of one training mini-batch, wherein visual and textual samples have a one-to-one correspondence. Therefore, such risk can be considered as the equivalent of the uncertainty-aware cross-entropy Luce of TcVSE, which is defined as
$$\mathcal{L}_{uce}(\boldsymbol{\alpha}) = \frac{1}{K}\sum_{i=1}^{K} \mathbb{E}_{D(\mathbf{p}_i\mid\boldsymbol{\alpha}_i)}\left[\mathcal{L}_{ce}(\mathbf{I}_{K},\mathbf{P}_i)\right], \quad (6)$$
where $\mathbf{I}_K$ is an identity matrix of size K. $\mathcal{L}_{uce}$ encourages VSE to generate as strong evidence as possible for positive pairs, which guarantees that the evidence of positive pairs is higher than that of negative pairs. Furthermore, to further sharpen the predicted evidence, we introduce the Kullback-Leibler (KL) divergence to enforce the evidence of negative pairs to be zero. The penalization loss could be formulated as:
$$\begin{aligned}
\mathcal{L}_{kl}(\boldsymbol{\alpha}) &= \frac{1}{K}\sum_{i=1}^{K} \mathrm{KL}\left[ D(\mathbf{p}_i \mid \tilde{\boldsymbol{\alpha}}_i) \,\big\|\, D(\mathbf{p}_i \mid \langle 1,1,\cdots,1\rangle) \right] \\
&= \frac{1}{K}\sum_{i=1}^{K}\left[ -\log\left(\Gamma(K)B(\tilde{\boldsymbol{\alpha}}_i)\right) + \sum_{k=1}^{K}(\tilde{\alpha}_{ik}-1)\left(\psi(\tilde{\alpha}_{ik}) - \psi(\tilde{S}_i)\right) \right],
\end{aligned} \quad (7)$$
where $\tilde{S}_i = \sum_{j=1}^{K}\tilde{\alpha}_{ij}$, $\tilde{\boldsymbol{\alpha}}_i = \mathbf{I}_{K(i,:)} + (1 - \mathbf{I}_{K(i,:)}) \odot \boldsymbol{\alpha}_i$, $\Gamma(\cdot)$ is the gamma function, and $\psi(\cdot)$ is the digamma function. Thus, the uncertainty-aware loss of one VSE branch (e.g., image-to-text) is given by
$$\mathcal{L}^{i2t}_{u} = \mathcal{L}^{i2t}_{uce} + \lambda\,\mathcal{L}^{i2t}_{kl}, \quad (8)$$
where λ is a balance factor that dynamically increases with the number of epochs. This dynamic strategy prevents the optimizer from overemphasizing the KL divergence at the beginning of training; otherwise, the optimizer would be misled by immature opinions, leading to performance degradation. Finally, to simultaneously consider bidirectional retrieval, we jointly optimize the two VSE branches as below:
$$\mathcal{L}_{u} = \mathcal{L}^{i2t}_{u} + \mathcal{L}^{t2i}_{u}, \quad (9)$$
where $\mathcal{L}^{t2i}_{u}$ is the evidential loss of the t2i VSE branch, which can be computed like Equations (6) to (8).
2.3 CONSISTENT MODULE
In our TcVSE, each branch focuses on different learning directions, due to the discrepancy between distinct retrieval tasks (one for image-to-text and another for text-to-image). Unfortunately, this will lead to various branches producing inconsistent uncertainty estimation, resulting in a performance drop. Specifically, given one query, one branch produces a prediction of low uncertainty, whereas the uncertainty of another branch might be higher as shown in Figure 2(b). Therefore, we introduce a consistency regularization to enforce the two VSE branches to produce consistent predictions on subjective opinions. To simplify presentation without losing generality, we only elaborate on the consistency loss of one direction (i.e., image-to-text) as follows:
$$\mathcal{L}_c^{i2t}\left( b^{i2t}, \hat{b}^{i2t} \right) = \frac{1}{K} \sum_{k=1}^{K} \left| b_k^{i2t} - \hat{b}_k^{i2t} \right|, \quad (10)$$
where $b^{i2t}$ and $\hat{b}^{i2t}$ are obtained from the i2t and t2i branches with Equation (3), respectively. Similarly, we can easily obtain the consistency loss in the other direction (i.e., text-to-image). Finally, the consistency loss $\mathcal{L}_c$ of our TcVSE is formulated as:
$$\mathcal{L}_c = \frac{1}{K} \sum_{k=1}^{K} \left[ \mathcal{L}_c^{i2t}\left( b_k^{i2t}, \hat{b}_k^{i2t} \right) + \mathcal{L}_c^{t2i}\left( \hat{b}_k^{t2i}, b_k^{t2i} \right) \right]. \quad (11)$$
The optimization process for our TcVSE is summarized in Algorithm 1.
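As a rough sketch of Equations (10)–(11), the snippet below compares the belief masses that the two branches assign to the same mini-batch. How the opposite branch's similarities are re-indexed for a given query direction (here, by transposing its similarity matrix) is our assumption, not a detail stated in the paper.

```python
import numpy as np

def beliefs(sim, tau=0.05):
    """Belief masses b_k = e_k / S for every query (row) of a similarity matrix."""
    evidence = np.exp(sim / tau)
    S = evidence.sum(-1, keepdims=True) + sim.shape[-1]   # S = sum_k (e_k + 1)
    return evidence / S

def consistency_loss(sim_i2t, sim_t2i, tau=0.05):
    """L_c: mean absolute gap between the two branches' subjective opinions."""
    # Assumption: the opposite branch is re-indexed by transposing its similarity matrix.
    b_i2t, b_i2t_hat = beliefs(sim_i2t, tau), beliefs(sim_t2i.T, tau)   # image queries
    b_t2i, b_t2i_hat = beliefs(sim_t2i, tau), beliefs(sim_i2t.T, tau)   # text queries
    return np.abs(b_i2t - b_i2t_hat).mean() + np.abs(b_t2i - b_t2i_hat).mean()
```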
3 EXPERIMENT
To evaluate our TcVSE, we conduct extensive experiments on two widely used benchmark datasets for Image-Text Matching. Following Lee et al. (2018), we measure the performance of image-to-text and text-to-image retrieval by Recall@K (K=1,5,10), which is defined as the proportion of correct items retrieved in the top K samples of the query. In addition, we adopt the sum of all Recall results to evaluate the overall performance.
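The Recall@K protocol can be summarised by the sketch below, which assumes a single ground-truth index per query; on Flickr30K and MS-COCO each image has five captions, so the image-to-text hit test would check any of the five positives instead.

```python
import numpy as np

def recall_at_k(scores, gt_index, ks=(1, 5, 10)):
    """Recall@K: percentage of queries whose ground truth is in the top-K.

    scores:   (num_queries, num_candidates) similarity matrix.
    gt_index: (num_queries,) index of the correct candidate per query.
    """
    order = np.argsort(-scores, axis=1)                        # best candidates first
    hits = order == np.asarray(gt_index)[:, None]
    return {k: 100.0 * hits[:, :k].any(axis=1).mean() for k in ks}
```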
(The result tables are grouped by textual backbone: Bi-GRU and Bert-base, with a Faster-RCNN visual backbone in both settings.)
3.1 DATASETS AND IMPLEMENTATION DETAILS
Datasets: The benchmark datasets used in our experiments are Flickr30K Young et al. (2014) and MS-COCO Lin et al. (2014). Flickr30K is an image-text dataset collected from the Flickr website and contains 31,000 images, each with five semantically correlated captions. Following Lee et al. (2018), we adopt the same dataset splits in our experiments, i.e., 29,000 training images, 1,000 validation images, and 1,000 testing images. MS-COCO consists of 123,287 images, and each image also has five annotated text descriptions. Following Lee et al. (2018), we use 113,287 images for training, 5,000 images for validation, and the remaining 5,000 images for testing.
Implementation Detail. In our TcVSE, like VSE∞ Chen et al. (2021), a Faster-RCNN Anderson et al. (2018) detector (with ResNet-101) and a Bi-GRU (or Bert-base Devlin et al. (2018)) serve as our visual and textual backbones, respectively. For each image, the visual backbone extracts the region proposals with the top-36 confidence scores and projects each region into a 2,048-dimensional feature vector. Following Chen et al. (2021), we randomly discard some region proposals of each image to achieve augmentation during training. For each text description, we randomly mask some words to achieve data augmentation. The dimensionality of the common embedding space is 1,024. Different from most methods, we use the uncertainty-aware loss based on EDL for training, which additionally endows the model with the ability to estimate uncertainty.
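The feature-level augmentation described above can be sketched as follows; the drop/mask ratios and the mask token id are illustrative choices, not the exact values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def discard_regions(region_feats, drop_ratio=0.2):
    """Randomly drop a fraction of an image's region features (e.g., 36 x 2048)."""
    n = region_feats.shape[0]
    keep = rng.random(n) >= drop_ratio
    keep[rng.integers(n)] = True          # always keep at least one region
    return region_feats[keep]

def mask_words(token_ids, mask_id=0, mask_ratio=0.2):
    """Randomly replace a fraction of word tokens with a mask token."""
    token_ids = np.array(token_ids)
    mask = rng.random(token_ids.shape[0]) < mask_ratio
    token_ids[mask] = mask_id
    return token_ids
```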
We employ the AdamW optimizer Loshchilov & Hutter (2017) with a weight decay factor of 10e-4 to train the VSE branches. The learning rate of the visual model is 5e-4. For the textual model, the initial learning rate is 5e-4 (5e-5 for Bert-base), decayed by 10% every 10 epochs. The mini-batch size K is 128, with 25 training epochs on both Flickr30K and MS-COCO.
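Assuming a PyTorch implementation, this setup translates roughly to the snippet below; the encoders are stand-ins, and reading "decaying by 10%" as multiplying the learning rate by 0.9 is our interpretation.

```python
import torch
import torch.nn as nn

# Stand-in encoders; in practice these are the Faster-RCNN projection and Bi-GRU/Bert heads.
visual_encoder, textual_encoder = nn.Linear(2048, 1024), nn.Linear(768, 1024)

optimizer = torch.optim.AdamW(
    [{"params": visual_encoder.parameters(), "lr": 5e-4},
     {"params": textual_encoder.parameters(), "lr": 5e-4}],   # 5e-5 for Bert-base
    weight_decay=10e-4,                                        # the stated factor (= 1e-3)
)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)
```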
3.2 COMPARISON WITH STATE-OF-THE-ART METHODS
For a comprehensive evaluation, we compare our TcVSE with 12 state-of-the-art baselines, including SCAN Lee et al. (2018), CAMP Wang et al. (2019), VSRN Li et al. (2019), IMRAM Chen et al. (2020), GSMN Liu et al. (2020), SGRAF (SAF+SGR) Diao et al. (2021), VSE++* Chen et al. (2021), VSE∞ Chen et al. (2021), NCR Huang et al. (2021), CGMN Cheng et al. (2022), URDA Li et al. (2022a), and VSRN++ Li et al. (2022a). VSE++* is the basic version of VSE∞ using average pooling. We conduct extensive comparison experiments, as shown in Tables 1 and 2.
Furthermore, we also provide comparison results with state-of-the-art VSE-based methods in Table 5 for a comprehensive evaluation.
Results on Flickr30K. We report the experimental results on Flickr30K in Table 1. From the table, one can see that our TcVSE already performs strongly with a single VSE branch, e.g., TcVSE (i2t) outperforms all baselines with rSum=504.4 under Bi-GRU and rSum=517.7 under Bert-base. Thanks to our trust-consistent learning, our TcVSE (i2t+t2i) is superior to all compared methods. Under the Bert-base textual backbone, our TcVSE outperforms all baselines with either one or two branches. Specifically, TcVSE (i2t+t2i) achieves remarkable improvement with the best R@1=82.9% for sentence retrieval and R@1=63.9% for image retrieval.
Results on MS-COCO. We present the quantitative results on MS-COCO with 5-fold 1K and full 5K test images in Tables 1 and 2, respectively. With Bi-GRU, our TcVSE achieves competitive performance compared to the state of the art. More specifically, TcVSE (i2t+t2i) achieves the best R@1 of 80.6% for sentence retrieval. In addition, Bert-base further boosts our TcVSE remarkably, i.e., a relative improvement of about 3% on R@1 compared to the best baseline VSE∞. In brief, our TcVSE with either one or two VSE branches remarkably outperforms all baselines, which demonstrates the effectiveness of our method.
For the experiments on the MS-COCO 5K test images, the performance improvement is even more pronounced for sentence retrieval, with a relative improvement of 7.7% (Bi-GRU) and 12.9% (Bert-base) on R@1 compared to the best baselines. Both the one-branch and two-branch versions of our TcVSE (Bert-base) achieve notable performance improvements. Furthermore, the consistent module further boosts the performance of TcVSE with one branch, which indicates that our trust-consistent learning produces complementary and trustworthy predictions for retrieval improvement.
3.3 ABLATION STUDY
In this section, extensive ablation studies are carried out on Flickr30K to verify the contribution of each component to image-text matching. The experimental results are shown in Table 3. We analyze the results from the following three aspects:
Effectiveness. To verify the effectiveness of our EDL, we replace our evidential loss with the Max of Hinge Loss (MH) Faghri et al. (2017) to optimize our VSE, i.e., #7, VSE with the MH loss. From Table 3, one can see that the variants with EDL (i.e., #1–6) achieve better retrieval performance than VSE with the MH loss, which indicates that endowing VSE with EDL remarkably improves performance by capturing the uncertainty. Moreover, our consistency module further improves the retrieval performance of the two branches, even when only one branch is used for inference. More specifically, the module relatively improves the performance by 1.65% (#1 vs. #2), 1.02% (#3 vs. #4), and 1.68% (#5 vs. #6) in terms of R@1 for sentence retrieval, respectively. By fusing the two branches, our TcVSE achieves further improvement, e.g., the full version of our TcVSE (#1) relatively improves the one-branch versions #3 and #5 by 1.52% and 2.03% in terms of R@1 for sentence retrieval, respectively.
Complementarity. The two VSE branches are exploited to focus on different retrieval tasks, i.e., image-to-text and text-to-image matching. Such task differences naturally lead to distinct emphases. Thus, aggregating the two VSE branches takes advantage of their complementary information, leading to further improvement, which is verified by the results. Specifically, the variants with aggregation (i.e., #1 and #2) achieve better performance than the variants with single branches (i.e., #3–6).
Consistency. Thanks to our consistent module, the performance of our TcVSE is improved even with only a single branch, i.e., #3 vs. #4 and #5 vs. #6. Hence, our consistent module mutually promotes the performance of the different branches by eliminating the prediction discrepancy across them. Furthermore, the full version of TcVSE (#1) achieves the best retrieval performance, which indicates that our consistent module not only mutually promotes the performance of each branch but also retains the complementary information of the different branches.
3.4 VISUALIZATION OF UNCERTAINTY
To visually illustrate the uncertainty estimation, we plot the distribution diagrams of the obtained uncertainty on the test sets of Flickr30K and MS-COCO. Since the intrinsic perturbation in data is uncontrollable and inconspicuous, it is hard to quantitatively evaluate the uncertainty estimated by the proposed method. To this end, we manually corrupt the inputs to amplify the unreliability of the data for easier observation, e.g., the discard, swap, and mask operations used in Huang et al. (2021). Such data corruption can be seen as data augmentation. The proportion of corrupted image regions and words is denoted the augmentation rate (AR). In the experiment, we investigate the uncertainty distribution quantified by our TcVSE (Bi-GRU) under three ARs (i.e., 0.0, 0.3, 0.6), as shown in Figure 3. From the figure, one can see that most retrievals under low ARs have low uncertainty, i.e., they cluster on the left. On the contrary, the uncertainty of the retrievals gradually increases as the AR increases, as shown by most of the retrieval uncertainty gathering toward the right in Figures 3(a) to 3(d). That is to say, as the AR increases, the correlation between image-text pairs is degraded, which increases the retrieval uncertainty; this is consistent with the fact that data disturbance increases unreliability/uncertainty. Therefore, our method can effectively capture the uncertainty.
3.5 QUALITATIVE RESULTS
Figures 4 and 5 illustrate some qualitative cross-modal results retrieved by our TcVSE. In the figures, we also report the estimated uncertainty and the ensemble similarity measured by TcVSE for intuitive analysis. Unlike prior visual-textual matching methods, our TcVSE could quantify the overall uncertainty for cross-modal retrieval given each query, thus providing self-evaluation scores for the retrieved results. That is to say, our TcVSE could not only compute the similarities across different modalities for cross-modal retrieval inference but also self-evaluate the reliability of the results in terms of uncertainty, improving the interpretability of retrieval. For example, in Figure 4(a-c), the
predicted uncertainty by our TcVSE can self-evaluate the retrieval quality, i.e., incorrect retrieved results tend to come with high uncertainty.
For example, in the completely correct examples (i.e., Figure 4(a) and Figure 5(a)), the correct retrievals have high similarity and low overall uncertainty, and are therefore viewed as trustworthy. In contrast, in Figure 4(c) and Figure 5(d), retrieved results with high uncertainty are usually unreliable even when their similarity is relatively high. More specifically, although these retrieved results have relatively high similarities compared to other correctly retrieved ones, they ignore or misunderstand some details in the queries, such as "one female" in Figure 4(c) and "skateboard" vs. "rollerblade" in Figure 5(d). That is to say, it is very hard to evaluate the retrieval quality from the obtained similarities alone. Fortunately, our TcVSE can accurately estimate the uncertainty of the retrieved results, enabling self-evaluation of the retrieval quality.
4 CONCLUSION
In this paper, we revisit a practicable and meaningful problem in VSE-based image-text matching, i.e., “How to make retrieval trustworthy?”. To this end, we present a Trust-consistent Visual Semantic Embedding method (TcVSE) for image-text matching, thus endowing the VSE models with the ability to self-evaluate the retrieval quality for trustworthy retrieval. Specifically, first, cross-modal evidential deep learning is proposed to capture accurate uncertainty of image-text matching. Second, a consistency module is presented to enforce the subjective opinion of distinct branches to be consistent for high reliability. Finally, we conduct extensive experiments and analyses to verify the effectiveness and self-evaluation of TcVSE.
A APPENDIX
A.1 RELATED WORKS
Image-text matching. Most of the existing methods for image-text matching (ITM) can be roughly divided into two groups, i.e., the global-level methods represented by visual-semantic embedding (VSE) and the local-level methods with complex similarity inference. The global-level methods mainly aim to obtain good global representations of the visual and textual modalities with the help of well-designed feature extraction, enhancement, or aggregation strategies, and then directly compute the similarity, e.g., VSE++ Faghri et al. (2017), VSRN Li et al. (2019), and VSE∞ Chen et al. (2021). The local-level methods aim to learn latent fine-grained alignments across modalities for more accurate similarity inference, e.g., SCAN Lee et al. (2018), IMRAM Chen et al. (2020), SGRAF Diao et al. (2021), UARDA Zhang et al. (2022), and so on. Different from the aforementioned lightweight methods, further breakthroughs have been made on downstream cross-modal tasks with the rapid development of large-scale vision-language pre-training models in recent years, e.g., UNICODER-VL Li et al. (2020), CLIP Radford et al. (2021), and MaskCLIP Dong et al. (2022). However, such models are usually accompanied by high training or fine-tuning costs. In this paper, our research belongs to the lightweight global-level methods. Uncertainty-based learning. Deep learning has made promising progress in both academic research and industrial applications, but it is hard to quantify the uncertainty of deep models directly due to their deterministic network predictions. Bayesian neural networks (BNNs) have been used to model uncertainty in computer vision tasks by placing priors over the network weights, e.g., variational inference Kingma et al. (2015) and approximations via dropout Gal & Ghahramani (2016); Gal et al. (2017). However, modeling uncertainty with BNNs is usually limited by the expensive sampling cost. Recently, Sensoy et al. (2018) proposed an uncertainty learning paradigm that combines evidence theory with DNNs, which places Dirichlet priors over discrete model predictions to directly model uncertainty at lower cost; it has been successfully applied in various tasks,
e.g., classification Sensoy et al. (2018); Han et al. (2022), recognition Bao et al. (2021), and segmentation Zou et al. (2022). In this paper, we focus on estimating the uncertainty in image-text matching based on evidential deep learning.
A.2 DERIVATION
We provide detailed derivations for some of the formulas in the paper.
The derivation of Equation (5):
$$\begin{aligned} \mathbb{E}_{D(p \mid \alpha)}[\mathcal{L}_{ce}] &= \int \sum_{j=1}^{K} -y_j \log(p_j) \, \frac{1}{B(\alpha)} \prod_{j=1}^{K} p_j^{\alpha_j - 1} \, dp \\ &= \sum_{j=1}^{K} y_j \int -\log(p_j) \, \frac{1}{B(\alpha)} \prod_{j=1}^{K} p_j^{\alpha_j - 1} \, dp = \sum_{j=1}^{K} y_j \, \mathbb{E}\left[ -\log(p_j) \right]. \end{aligned}$$
From Minka (2003), $\mathbb{E}[\log(p_j)] = \psi(\alpha_j) - \psi(S)$, where $S = \sum_{k=1}^{K} \alpha_k$. Thus,
$$\mathbb{E}_{D(p \mid \alpha)}[\mathcal{L}_{ce}] = \sum_{j=1}^{K} y_j \left( \psi(S) - \psi(\alpha_j) \right).$$
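The identity $\mathbb{E}[\log(p_j)] = \psi(\alpha_j) - \psi(S)$ used above can be checked numerically with a quick Monte Carlo estimate (a sanity-check sketch, not part of the training code):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 1.5, 1.2])
samples = rng.dirichlet(alpha, size=200_000)

print(np.log(samples).mean(axis=0))           # Monte Carlo estimate of E[log p_j]
print(digamma(alpha) - digamma(alpha.sum()))  # psi(alpha_j) - psi(S)
```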
The derivation of Equation (7):
$$\begin{aligned} \mathcal{L}_{kl}(\alpha) &= \frac{1}{K} \sum_{i=1}^{K} \mathrm{KL}\left[ D(p_i \mid \tilde{\alpha}_i) \, \| \, D(p_i \mid \langle 1, 1, \cdots, 1 \rangle) \right] \\ &= \frac{1}{K} \sum_{i=1}^{K} \mathbb{E}\left[ \log \frac{D(p_i \mid \tilde{\alpha}_i)}{D(p_i \mid \langle 1, 1, \cdots, 1 \rangle)} \right] \\ &= \frac{1}{K} \sum_{i=1}^{K} \mathbb{E}\left[ \log \left( \frac{\Gamma\left( \sum_{j=1}^{K} \tilde{\alpha}_{ij} \right)}{\Gamma(K) \prod_{j=1}^{K} \Gamma(\tilde{\alpha}_{ij})} \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij} - 1} \right) \right] \\ &= \frac{1}{K} \sum_{i=1}^{K} \left[ \log \frac{\Gamma\left( \sum_{j=1}^{K} \tilde{\alpha}_{ij} \right)}{\Gamma(K) \prod_{j=1}^{K} \Gamma(\tilde{\alpha}_{ij})} + \mathbb{E}\left[ \log \prod_{j=1}^{K} p_{ij}^{\tilde{\alpha}_{ij} - 1} \right] \right] \\ &= \frac{1}{K} \sum_{i=1}^{K} \left[ -\log\left( \Gamma(K) B(\tilde{\alpha}_i) \right) + \sum_{j=1}^{K} (\tilde{\alpha}_{ij} - 1) \, \mathbb{E}[\log p_{ij}] \right] \\ &= \frac{1}{K} \sum_{i=1}^{K} \left[ -\log\left( \Gamma(K) B(\tilde{\alpha}_i) \right) + \sum_{k=1}^{K} (\tilde{\alpha}_{ik} - 1) \left( \psi(\tilde{\alpha}_{ik}) - \psi(\tilde{S}_i) \right) \right]. \end{aligned}$$
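The closed form can likewise be verified for a single query against a Monte Carlo estimate of $\mathbb{E}[\log D(p \mid \tilde{\alpha}) / D(p \mid \langle 1, \cdots, 1 \rangle)]$ (again only a sanity-check sketch; the $\tilde{\alpha}$ values are arbitrary):

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.stats import dirichlet

rng = np.random.default_rng(0)
K = 4
alpha_t = np.array([1.0, 3.0, 2.5, 6.0])   # evidence of the positive entry removed

# Closed form of Eq. (7) for one query.
log_B = gammaln(alpha_t).sum() - gammaln(alpha_t.sum())
closed = -(gammaln(K) + log_B) + \
         ((alpha_t - 1.0) * (digamma(alpha_t) - digamma(alpha_t.sum()))).sum()

# Monte Carlo estimate of KL(D(p | alpha_t) || D(p | <1,...,1>)).
samples = rng.dirichlet(alpha_t, size=200_000)
mc = (dirichlet.logpdf(samples.T, alpha_t) - dirichlet.logpdf(samples.T, np.ones(K))).mean()
print(closed, mc)   # the two values should agree closely
```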
A.3 PSEUDOCODE
We provide the pseudocode of TcVSE (Algorithm 1) to help understand how TcVSE works.
A.4 PARAMETRIC ANALYSIS
TcVSE has two key hyper-parameters, i.e., τ in Equation (2) and MaxTimes in Algorithm 1. Thus, we conduct detailed parameter experiments (shown in Figure 6) to evaluate the impact of different
Algorithm 1: TcVSE: Trust-consistent Visual Semantic Embedding pseudocode
Input: A well-paired subset {(u_n, c_n)}_{n=1}^{N} of (V, C), temperature parameter τ.
Initialize: Initialize the parameters Θ of TcVSE.
while e < MaxEpoch do
    for x in Batches do
        /* First step */
        x′ = Augment(x)
        {e_k^{i2t}}_{k=1}^{K} ← VSE^{i2t}(x′)   \\ image-to-text evidence
        {e_k^{t2i}}_{k=1}^{K} ← VSE^{t2i}(x′)   \\ text-to-image evidence
        for each query do
            Dirichlet distribution D(p | α) ← e   \\ α = e + 1
        end
        Obtain the uncertainty-aware loss L_u with Equation (9)
        Θ = AdamW(L_u, Θ)
        /* Second step */
        for t < MaxTimes do
            Recompute {e_k^{i2t}}_{k=1}^{K} and {ê_k^{i2t}}_{k=1}^{K}   \\ image-to-text
            for each i2t query do
                Obtain subjective opinions b^{i2t}, b̂^{i2t} with Equation (3)
            end
            Recompute {ê_k^{t2i}}_{k=1}^{K} and {e_k^{t2i}}_{k=1}^{K}   \\ text-to-image
            for each t2i query do
                Obtain subjective opinions b̂^{t2i}, b^{t2i} with Equation (3)
            end
            Obtain the consistency loss L_c with Equation (11)
            Θ = AdamW(L_c, Θ)
        end
    end
end
Output: The learned parameters Θ
hyper-parameter settings and obtain suitable parameter settings for TcVSE. From Figure 6(a), TcVSE with too small a τ is not optimized well and performs poorly. Moreover, the performance of TcVSE gradually decreases from the best (τ = 0.03) as τ increases, so we recommend setting τ within 0.03∼0.05 to obtain stable and reliable performance. In all our experiments, τ is 0.05.
From Figure 6(b), as the MaxTimes of the consistency regularization increases, the performance of TcVSE improves due to more consistent predictions, which is reasonable. From the figure, one can find that when MaxTimes is set to 3∼6, the performance gap is small. In our experiments, we set MaxTimes to 3.
A.5 SUPPLEMENTAL RESULTS
In this section, we supplement some experimental results. Specifically, we provide more detailed experimental results on the MS-COCO 5K test set and a comparison of our TcVSE with popular VSE-based methods (VSE++ Faghri et al. (2017), VSRN Li et al. (2019), LIWE Wehrmann et al. (2019), CVE Wang et al. (2020), VSE∞ Chen et al. (2021), VSRN++ Li et al. (2022a), and MV-VSE Li et al. (2022b)). From Figure 5, our TcVSE achieves competitive results compared with the state-of-the-art image-text matching methods. Meanwhile, as shown in Table 5, compared with these popular VSE-based methods, TcVSE clearly achieves the best performance.
(The supplemental result tables are grouped by textual backbone: Bi-GRU and Bert-base, with a Faster-RCNN image backbone in both settings.)
A.6 MORE RETRIEVAL RESULTS | 1. What is the main contribution of the paper regarding visual semantic embedding?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to achieve better performance than other VSE methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What questions or concerns does the reviewer have regarding the paper's definition and modeling of uncertainty in the image-text domain?
5. Does the reviewer have any suggestions for improving the readability and understanding of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studies the visual semantic embedding task. The main idea is to estimate the uncertainty in image-text retrieval learning. The paper claims that by minimizing the uncertainty, the model achieves better performance. Unfortunately, the authors do not explain what the uncertainty is anywhere in the paper. The paper claims to achieve better performance than the state-of-the-art models in the VSE domain on both the Flickr30k and COCO-5k datasets.
Strengths And Weaknesses
Pros:
The paper achieves better performance than other VSE approaches on benchmarks.
Cons:
What is the definition of the uncertainty discussed through the whole paper?
What are the benefits of modeling the uncertainty?
Why would we need to model the uncertainty on the image-text retrieval task?
To be honest, the paper becomes very hard to read and understand after Sect. 2.2. The main reason is that the authors didn't fully explain the benefits and motivations of modeling uncertainty in the image-text domain. The whole task lacks both a formal mathematical definition and a thorough textual explanation. Instead, this paper directly immerses the reader in implementation details and loss functions, which easily confuses the reader.
For Figure 1, I think the main contribution of this paper is the loss for estimating the uncertainty in image-text retrieval. I can understand that with this loss the model achieves higher performance than VSE∞. However, I don't understand why the approach achieves faster processing speed.
Question 5 raises an additional question about the architecture. This paper does not seem to mention the architectural differences between the proposed approach and the state-of-the-art models (e.g., VSE∞). I wonder whether the performance improvement is due to the architecture difference or the proposed uncertainty estimation loss?
It seems the whole paper is based on a cross-modal evidential learning framework. I would suggest the paper could first formally state the setting of this framework and emphasize the difference against the standard image text retrieval (contrastive learning) framework.
Unfortunately, this paper is also carelessly written, which aggravates the difficulty of understanding and grasping the main idea:
What is the image data augmentation mentioned in Sect. 2.1 page 3?
What is the meaning of 'However, almost these methods focus on unimodal classification'? Is it a typo for 'almost of'?
Eq. 2 what is s?
What is the meaning of K-way querying? What are the queries? And why it is K-way?
What is the evidence e? What is the function of the evidence?
BiGUR -> BiGRU in implementation detail.
IMT -> ITM in the last paragraph of the introduction.
Clarity, Quality, Novelty And Reproducibility
As I mentioned in Strength and Weaknesses. I think there is a large room for improving the readability of the draft. In terms of the quality, unfortunately, my rating of this paper is fogged by the unclarity of the paper writing. I might not give a high rating for the quality term. For novelty, to be honest, I haven't seen an approach approaching the estimating the uncertainty for image-text retrieval. If the paper writing can be improved and the results are compared and verified faithfully against the state-of-the-art approaches, I would think the paper is pretty novel. As the paper introduces a new training objective, it is unclear to me how to implement the proposed training objective. Although I know the image-text retrieval modality pretty well, I might lack the knowledge and background for the uncertainty estimation, which makes it harder for me to implement the proposed approach. |
2. What are the strengths of the proposed method, particularly in utilizing evidential deep learning?
3. What are the weaknesses of the paper, such as the lack of summary of related works and further discussions on data augmentation and pooling strategies?
4. Do you have any concerns or questions about the experimental results and their interpretation?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper aims to self-evaluate retrieval quality by capturing uncertainty. Almost all previous works use similarity as the only criterion for image-text retrieval or matching. Naturally, the pair with the highest pair-wise similarity is considered a correct retrieval, even if the retrieval is wrong or unreliable, and this is actually difficult to decide correctly. Motivated by this, the authors endow the model with this capability by utilizing evidential deep learning (EDL) to quantify uncertainty for cross-modal retrieval. Meanwhile, extensive experiments are conducted on two widely used cross-modal datasets (Flickr30K and MS-COCO) to verify the superiority and effectiveness of the proposed method. Moreover, some qualitative results (e.g., relatively high similarity with high uncertainty) are given in the experiments, which provide some insightful evaluation. Overall, this work is interesting and novel.
Strengths And Weaknesses
Strengths:
Utilizing evidential deep learning to quantify uncertainty in image-text matching is an interesting and practical motivation for trustworthy retrieval. The method introduces a new perspective for evaluating retrieval quality, i.e., uncertainty rather than similarity alone, which provides self-evaluation for cross-modal learning.
The authors solve some challenges of utilizing evidential deep learning from classification tasks to cross-modal retrieval, e.g., the difference between instance-level and category-level evidence quantification, and the optimization challenge of bi-directionality when applying EDL in VSE.
Moreover, the experiments are detailed and reasonable, especially some retrieved cases with high and low uncertainty are given for self-evaluation.
Weaknesses:
This paper lacks a summary of related works. Although the authors review some works in the INTRODUCTION (e.g., VSE-based methods, the methods using complex similarity inference, etc.), it is still not sufficient to comprehensively review the existing works.
The authors use a data augmentation strategy for image-text representation learning, but there is a lack of further discussions, such as the augmentation rate (AR.) mentioned in the paper.
This work conducts Max Pooling to aggregate features for visual semantic embedding. The recent work (i.e., VSE∞) proposed a Generalized Pooling Operator (GPO) to aggregate image and text features and achieves good performance. Could GPO or other pooling strategies replace Max Pooling to achieve higher performance?
As shown in Fig.6-b, the more times the consistency module is performed, the better the performance. Why didn't the authors choose more “MaxTimes” in the experiments?
In the paper, the authors employ K-way classification in mini-batches with a fixed size to simulate instance-level retrieval. However, the actual instance number is much larger than the batch size. Thus, when considering a much larger number of negative pairs in full instance-level retrieval, the big negatives may bring more uncertain information leading to negative effects. How does this work tackle this gap between simulated classification and real retrieval?
Clarity, Quality, Novelty And Reproducibility
Overall, this paper is well-written and easy to follow. The idea is interesting and could give some insights into the community. The implementation details are ample and reasonable. |
ICLR | Title
Lifelong Generative Modeling
Abstract
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning. It is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model. We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain the past data nor the past models. Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now. The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data. We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.
1 INTRODUCTION
Deep unsupervised generative learning allows us to take advantage of the massive amount of unlabeled data available in order to build models that efficiently compress and learn an approximation of the true data distribution. It has numerous applications such as image denoising, inpainting, super-resolution, structured prediction, clustering, pre-training and many more. However, something that is lacking in the modern ML toolbox is an efficient way to learn these deep generative models in a sequential, lifelong setting.
In a lot of real world scenarios we observe distributions sequentially. Examples of this include streaming data from sensors such as cameras and microphones or other similar time series data. A system can also be resource limited wherein all of the past data or learnt models cannot be stored. We are interested in the lifelong learning setting for generative models where data arrives sequentially in a stream and where the storage of all data is infeasible. Within the stream, instances are generated according to some non-observed distribution which changes at given time-points. We assume we know the time points at which the transitions occur and whether the latent distribution is a completely new one or one that has been observed before. We do not however know the underlying identity of the individual distributions. Our goal is to learn a generative model that can summarize all the distributions seen so far in the stream. We give an example of such a setting in figure 1(a) using MNIST LeCun & Cortes (2010), where we have three unique distributions and one that is repeated.
Since we only observe one distribution at a time we need to develop a strategy of retaining the previously learnt knowledge (i.e. the previously learnt distributions) and integrate it into future learning. To accumulate additional distributions in the current generative model we utilize a student-teacher architecture similar to that in distillation methods Hinton et al. (2015); Furlanello et al. (2016). The teacher contains a summary of all past distributions and is used to augment the data used to train the student model. The student model thus receives data samples from the currently observable distribution as well as synthetic data samples from previous distributions. This allows the student model to learn a distribution that summarizes the current as well as all previously observed distributions. Once a new distribution shift occurs the existing teacher model is discarded, the student becomes the teacher and a new student is instantiated.
We further leverage the generative model of the teacher by introducing a regularizer in the learning objective function of the student that brings the posterior distribution of the latter close to that of the
former. This allows us to build upon and extend the teacher’s generative model in the student each time the latter is re-instantiated (rather than re-learning it from scratch). By coupling this regularizer with a weight transfer from the teacher to the student we also allow for faster convergence of the student model. We empirically show that the regularizer allows us to learn a much larger set of distributions without catastrophic interference McCloskey & Cohen (1989).
We build our lifelong generative models over Variational Autoencoders (VAEs) Kingma & Welling (2014). VAEs learn the posterior distribution of a latent variable model using an encoder network; they generate data by sampling from a prior and decoding the sample through a conditional distribution learnt by a decoder network.
Using a vanilla VAE as a teacher to generate synthetic data for the student is problematic due to a couple of limitations of the VAE generative process. 1) Sampling the prior can select a point in the latent space that is in between two separate distributions, causing generation of unrealistic synthetic data and eventually leading to loss of previously learnt distributions. 2) Additionally, data points mapped to the posterior that are further away from the prior mean will be sampled less frequently resulting in an unbalanced sampling of the constituent distributions. Both limitations can be understood by visually inspecting the learnt posterior distribution of a standard VAE evaluated on test images from MNIST as shown in figure 1(b). To address the VAE’s sampling limitations we decompose the latent variable vector into a continuous and a discrete component. The discrete component is used to summarize the discriminative information of the individual generative distributions while the continuous caters for the remaining sample variability. By independently sampling the discrete and continuous components we preserve the distributional boundaries and circumvent the two problems above.
This sampling strategy, combined with the proposed regularizer allows us to learn and remember all the individual distributions observed in the past. In addition we are also able to generate samples from any of the past distributions at will; we call this property consistent sampling.
2 RELATED WORK
Past work in sequential learning of generative models has focused on learning Gaussian mixture models Singer & Warmuth (1999); Declercq & Piater (2008) or on variational methods such as Variational EM Ghahramani & Attias (2000). Work that is closer to ours is the online or sequential learning of generative models in a streaming setting. Variational methods have been adapted for a streaming setting, e.g: Streaming Variational Bayes Broderick et al. (2013), Streaming Variational Mixture models Tank et al. (2015), and the Population Posterior McInerney et al. (2015). However their learning objectives are very different from ours. The objective of these methods is to adjust the learnt model such that it reflects the current data distribution as accurately as possible, while forgetting the previously observed distributions. Instead we want to do lifelong learning and retain all previously observed distributions within our learnt model. As far as we know our work is the first one that tries to bring generative models, and in particular VAEs, into a lifelong setting where distributions are seen, learnt, and remembered sequentially.
VAEs rely on an encoder and a decoder neural network in order to learn the parameters of the posterior and likelihood. One of the central problems that arises when training a neural network in a sequential manner is catastrophic interference McCloskey & Cohen (1989). Catastrophic interference appears when we train neural networks in a sequential manner and model parameters start to become biased to the most recent samples observed, while forgetting what was learnt from older samples. This generally happens when we stop exposing the model to past data. There have been a number of attempts to solve the problem of catastrophic interference in neural networks. These range from distillation methods such as the original method Hinton et al. (2015) and ALTM Furlanello et al. (2016), to utilizing privileged information Lopez-Paz et al. (2016), as well as transfer learning approaches such as Learning Without Forgetting Li & Hoiem (2016) and methods that relay information from previously learnt hidden layers such as in Progressive Neural Networks Rusu et al. (2016) and Deep Block-Modular Neural Networks Terekhov et al. (2015). All of these methods necessitate the storage of previous models or data; our method does not.
The recent work of elastic weight consolidation (EWC) Kirkpatrick et al. (2017) utilizes the Fisher Information matrix (FIM) to avoid the problem of catastrophic interference. The FIM captures the sensitivity of the log-likelihood with respect to the model parameters; EWC leverages this (via a linear approximation of the FIM) to control the change of model parameter values between varying distributions. Intuitively, important parameters should not have their values changed, while non-important parameters are left unconstrained. Since EWC assumes the model parameters are distributed under an exponential family, it allows for the utilization of the FIM as a quadratic approximation Jeffreys (1946) to the Kullback-Leibler (KL) divergence. Our model makes no such distributional assumptions about the model parameters. Instead of constraining the parameters of the model as in EWC, we restrict the posterior representation of the student model to be close to that of the teacher for the previous distributions accumulated by the teacher. This allows the model parameters to vary as necessary in order to best fit the data.
3 BACKGROUND
We consider an unsupervised setting where we observe a sample X of K ≥ 1 realizations X = {x(0),x(1), ...,x(K)} from an unknown true distribution P ∗(x) with x ∈ RN . We assume that the data is generated by a random process involving a non-observed random variable z ∈ RM . In order to incorporate our prior knowledge we posit a prior P (z) over z. Our objective is to approximate the true underlying data distribution by a model Pθ(x) such that Pθ(x) ≈ P ∗(x). Given a latent variable model Pθ(x|z)P (z) we obtain the marginal likelihood Pθ(x) by integrating out the latent variable z from the joint distribution. The joint distribution can in turn be factorized using the conditional distribution Pθ(x|z) or the posterior Pθ(z|x).
Pθ(x) = ∫ Pθ(x, z)δz = ∫ Pθ(z|x)Pθ(x)δz = ∫ Pθ(x|z)P (z)δz (1)
We model the conditional distribution Pθ(x|z) by a decoder, typically a neural network. Very often the marginal likelihood Pθ(x) will be intractable because the integral in equation (1) does not have an analytical form nor an efficient estimator (Kingma (2017)). As a result the respective posterior distribution, Pθ(z|x), is also intractable. Variational inference side-steps the intractability of the posterior by approximating it with a tractable distribution Qφ(z|x) ≈ Pθ(z|x). VAEs use an encoder (generally a neural network) to model the approximate posterior Qφ(z|x) and optimize the parameters φ to minimize the reverse KL divergence KL[Qφ(z|x)||Pθ(z|x)] between the approximate posterior distribution Qφ(z|x) and the true posterior Pθ(z|x). Given that Qφ(z|x) is a powerful model (such that the KL divergence against the true posterior will be close to zero) we maximize the tractable Evidence Lower BOund (ELBO) to the intractable marginal likelihood. Lθ(x) ≤ Pθ(x) (full derivation available in the appendix)
ELBO: Lθ(x) = EQφ(z|x)[logPθ(x|z)]−KL[Qφ(z|x) || P (z)] (2)
By sharing the variational parameters φ of the encoder across the data points (amortized inference Gershman & Goodman (2014)), variational autoencoders avoid per-data optimization loops typically needed by mean-field approaches.
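For reference, the following is a minimal PyTorch sketch of the ELBO in equation (2) for a VAE with an isotropic Gaussian posterior and a Bernoulli likelihood; this is a generic single-sample Monte-Carlo estimate, not the specific architecture used in this paper.

```python
import torch
import torch.nn.functional as F

def elbo(x, recon_logits, mu, logvar):
    """Single-sample estimate of equation (2) for one minibatch.

    recon_logits: decoder output parameterising P_theta(x|z) as Bernoulli logits.
    mu, logvar:   encoder output parameterising Q_phi(z|x) = N(mu, diag(exp(logvar))).
    """
    log_px_z = -F.binary_cross_entropy_with_logits(
        recon_logits, x, reduction="none").sum(dim=-1)                # E_Q[log P(x|z)]
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)  # KL[Q(z|x) || N(0, I)]
    return (log_px_z - kl).mean()

def reparameterise(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I), keeping the graph differentiable."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```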
3.1 SEQUENTIAL GENERATIVE MODELING
The standard setting in maximum-likelihood generative modeling is to estimate the set of parameters θ that will maximize the marginal likelihood Pθ(x) for data sample X generated IID from a single true data distribution P ∗(x). In our work we assume the data are generated from multiple distributions P ∗i (x) such that P ∗(x) = ∑ i π ∗ i P ∗ i (x). In classical batch generative modelling, the individual data points are not associated with the specific generative distributions P ∗i (x). Instead, the whole sample X is considered to be generated from the mixture distribution P ∗(x). Latent variable models Pθ(x, z) = Pθ(x|z)P (z) (such as VAEs) capture the complex structures in P ∗(x) by conditioning the observed variables x on the latent variables z and combining these in (possibly infinite) mixtures Pθ(x) = ∫ Pθ(x|z)P (z)δz.
Our sequential setting is vastly different from the batch approach described above. We receive a stream of (possibly infinite) data X = {X1,X2, . . .} where the data samples Xi = {x(1)i ,x (2) i , . . . ,x (Ki) i } originate from the components P ∗i (x) of the generative distribution. At any given time we observe the latest sample Xi generated from a single component P ∗i (x) without access to any of the previous samples generated by the other components of P ∗(x). Our goal is to sequentially build an approximation Pθ(x) of the true mixture P ∗(x) by only observing data from a single component P ∗i (x) at a time.
4 MODEL
To enable lifelong generative learning we propose a dual model architecture based on a student-teacher model. The teacher and the student have rather different roles throughout the learning process: the teacher’s role is to preserve the memory of the previously learned tasks and to pass this knowledge onto the student; the student’s role is to learn the distributions over the new incoming data while accommodating for the knowledge obtained from the teacher. The dual model architecture is summarized in figure 2.
The top part represents the teacher model. At any given time the teacher contains a summary of all previous distributions within the learned parameters of the encoder QΦ(z|x) and the decoder PΘ(x|z). The teacher is used to generate synthetic samples x̂ from these past distributions by decoding samples from the prior ẑ ∼ P (z) through the decoder x̂ ∼ PΘ(x|ẑ). The generated synthetic samples x̂ are passed onto the student model as a form of knowledge transfer about the past distributions.
The bottom part of figure 2 represents the student, which is responsible for updating the parameters of the encoder Qφ(z|x) and decoder Pθ(x|z) models over the newly observed data. The student is exposed to a mixture of learning instances x sampled from x ∼ P (ω)P (x|ω), ω ∼ Ber(π); it sees synthetic instances generated by the teacher P (x|ω = 0) = PΘ(x|z), and real ones sampled from the currently active training distribution P (x|ω = 1) = P ∗(x). The mean π of the Bernoulli distribution controls the sampling proportion of the previously learnt distributions to the current one.
If we have seen k distinct distributions prior to the currently active one then π = k/(k+1). In this way we ensure that all the past distributions and the current one are equally represented in the training set used by the student model.
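A minimal sketch of how such a mixed minibatch could be assembled, assuming the frozen teacher exposes a decoder and a prior sampler; the function names, interfaces, and the exact proportioning convention are placeholders rather than the released implementation.

```python
import torch

def student_batch(real_batch, teacher_decode, sample_prior, k_seen):
    """Mix real data from the current distribution with teacher-generated samples.

    With k previously seen distributions, a fraction pi = k / (k + 1) of the batch is
    synthetic, so all distributions are (in expectation) equally represented.
    """
    pi = k_seen / (k_seen + 1.0)
    use_teacher = torch.bernoulli(torch.full((real_batch.shape[0],), pi)).bool()  # omega = 0
    mixed = real_batch.clone()
    if use_teacher.any():
        with torch.no_grad():                      # the teacher is frozen
            mixed[use_teacher] = teacher_decode(sample_prior(int(use_teacher.sum())))
    return mixed, use_teacher                      # flags where the consistency term applies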
Once a new distribution is signalled, the old teacher is dropped, the student model is frozen and becomes the new teacher (φ→ Φ,θ → Θ), and a new student is initiated with the latest weights φ and θ from the previous student (the new teacher).
4.1 TEACHER-STUDENT CONSISTENCY
Each new student instantiation uses the input data mix to learn a new approximate posterior Qφ(z|x). In addition to being initiated by the new teacher’s weights and receiving information about the teacher’s knowledge via the synthetic samples x̂, we further foster the lifelong learning idea by bringing the latent variable posterior induced by the student model closer to the respective posterior induced by the teacher model. We enforce the latter constraint only over the synthetic samples, ensuring that the previously learnt latent variable posteriors are preserved over the different models. In doing so, we alleviate the effect of catastrophic interference.
To achieve this, we complement the classical VAE objective (equation (2)) with a term minimizing the KL divergence KL[Qφ(z|x̂)||QΦ(z|x̂)] between the student’s and the teacher’s posteriors over the synthetic data x̂. The teacher’s encoder model, which already has the accumulated knowledge from the previous learning steps, is thus reused within the new student’s objective. Under certain mild assumptions, we show that this objective reparameterizes the student model’s posterior, while preserving the same learning objective as a standard VAE (appendix section 7.0.1).
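A sketch of how this cross-model term can be computed on the discrete latent, assuming both encoders return categorical logits; it illustrates the idea and is not the authors' code.

```python
import torch.nn.functional as F

def consistency_kl(student_logits, teacher_logits, is_synthetic):
    """KL[ Q_phi(z_d | x_hat) || Q_Phi(z_d | x_hat) ], applied only to teacher-generated samples."""
    log_qs = F.log_softmax(student_logits, dim=-1)
    log_qt = F.log_softmax(teacher_logits.detach(), dim=-1)   # the teacher is not updated
    kl = (log_qs.exp() * (log_qs - log_qt)).sum(dim=-1)       # per-sample reverse KL
    return (kl * is_synthetic.float()).mean()
```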
4.2 LATENT VARIABLE
A critical component of our model is the synthetic data generation by the teacher’s decoder x̂ ∼ PΘ(x|z). The synthetic samples need to be representative of all the previously observed distributions in order to provide the student with ample information about the learning history. The teacher generates these synthetic samples by first sampling the latent variable from the prior ẑ ∼ P (z) followed by the decoding step x̂ ∼ PΘ(x|ẑ). As we will describe shortly, the latent variable ẑ has a categorical component which corresponds to all the past distributions. This categorical component allows us to uniformly sample synthetic instances from all past distributions.
A simple unimodal prior distribution P (z), such as the isotropic Gaussian typically used in classical VAEs, results in an undersampling of the data points that are mapped to a posterior mean that is further away from the prior mean. Visualizing the 2d latent posterior of MNIST in figure 1(b) allows us to get a better intuition of this problem. If for example the prior mean corresponds to a point in latent space between two disparate distributions, the sample generated will not correspond to a sample from the real distribution. Since we use synthetic samples from the teacher in the student model, this aliased sample corresponding to the prior mean, will be reused over and over again, causing corruption in the learning process. In addition, we would under represent the respective true distributions in the learning input mix of the student and eventually lead to distribution loss.
We circumvent this in our model by decomposing the latent variable z into a discrete component zd ∈ RJ and a continuous component zc ∈ RF , z = [zd, zc]. The discrete component zd shall summarise the most discriminative information about each of the true generating distributions P ∗i (x). We use the uniform multivariate categorical prior zd ∼ Cat( 1J ) to represent it and the same parametric family for the approximate posterior QΦ(z|x). The continuous zc component is the global representation of the distributional variability and we use the multivariate standard normal as the prior zc ∼ N(0, I) and the isotropic multivariate normal N(µ, σ2I) for the approximate posterior.
When generating synthetic data, the teacher now independently samples from the discrete and continuous priors ẑd ∼ P (zd), ẑc ∼ P (zc) and uses the composition of these to condition the decoding step x̂ ∼ PΘ(x|ẑd, ẑc). Since the discrete representation ẑd is associated with the true generative distribution components P ∗i (x), uniformly sampling the discrete prior ensures that that the distributions are well represented in the synthetic mix that the student observes.
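A sketch of this generation step, with the discrete and continuous components sampled independently; the latent sizes and the decoder interface are placeholders.

```python
import torch
import torch.nn.functional as F

def generate_synthetic(teacher_decoder, n, num_modes=10, cont_dim=32):
    """Sample z_d ~ Cat(1/J) and z_c ~ N(0, I) independently, then decode x_hat ~ P_Theta(x | z_d, z_c)."""
    idx = torch.randint(0, num_modes, (n,))                   # uniform over the J discrete modes
    z_d = F.one_hot(idx, num_classes=num_modes).float()
    z_c = torch.randn(n, cont_dim)
    with torch.no_grad():
        return teacher_decoder(torch.cat([z_d, z_c], dim=-1))
```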
In general, the capacity of a categorical distribution is less than that of a continuous normal distribution. To prevent the VAE’s encoder from using primarily the continuous representation while disregarding the discrete one we further complement the learning objective by a term maximising the mutual information between the discrete representation and the data I(zd; x) = H(zd)−H(zd|x). H(zd) is used to denote the marginal entropy of zd and H(zd|x) denotes the conditional entropy of zd given x.1
4.3 LEARNING OBJECTIVE
The final learning objective for each of the student models is the maximization of the ELBO from equation (2), augmented by the negative of the cross-model consistency term introduced in section 4.1 and the mutual information term proposed in section 4.2.
EQφ [log Pθ(x|z)] − KL[Qφ(z|x) || P(z)]   (VAE ELBO)
− 1(ω = 0) KL[Qφ(zd|x) || QΦ(zd|x)]   (Consistency Regularizer)
+ λ I(zd; x)   (Mutual Info),   (3)
We sample the training instances x from x ∼ P (ω)P (x|ω),ω ∼ Ber(π) as described in section 4. Thus they can either be generated from the teacher model (ω = 0) or come from the training set of the currently active distribution (ω = 1). 1(.) is the indicator function which evaluates to 1 if its argument is true and zero otherwise; it makes sure that the consistency regularizer is applied only over the synthetic samples generated by the teacher. The λ hyper-parameter controls the importance of the mutual information regularizer. We present the analytical evaluation of the consistency regularizer in appendix section 7.0.1.
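The mutual-information term can be estimated from a minibatch of discrete posteriors; the sketch below uses the common estimator H(zd) ≈ entropy of the batch-averaged posterior and H(zd|x) ≈ mean per-sample posterior entropy, which is an assumption on our part rather than the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def mutual_information(d_logits, eps=1e-8):
    """Batch estimate of I(z_d; x) = H(z_d) - H(z_d | x) from categorical posterior logits [B, J]."""
    q = F.softmax(d_logits, dim=-1)                          # Q_phi(z_d | x) per sample
    h_cond = -(q * (q + eps).log()).sum(dim=-1).mean()       # H(z_d | x): mean posterior entropy
    marginal = q.mean(dim=0)                                 # aggregate posterior over the batch
    h_marg = -(marginal * (marginal + eps).log()).sum()      # H(z_d)
    return h_marg - h_cond
```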
5 EXPERIMENTS
We conducted a set of experiments to explore the behaviour and properties of the method we propose. We specifically concentrate on the benefits our model brings in the lifelong learning setting which is the main motivation of our work. We explain the settings of the individual experiments and their focus in the following three sections.
In all the experiments we use the notion of a distributional ‘interval’: the interval in which we observe samples from a single distribution P ∗i (x) before the transition to the next distribution P ∗i+1(x) occurs. The length of the intervals is in principle random and we developed a heuristic to generate these. We provide further details on this together with other technical details related to the network implementation and training common for all the experiments in the appendix.
5.1 FASHION MNIST : SEQUENTIAL GENERATION
In this experiment, we seek to establish the performance benefit that our augmented objective formulation in section 4.3 brings into the learning in contrast to the simple ELBO objective 2. We do so by training two models with identical student-teacher architectures as introduced in section 4, with one using the consistency and mutual information augmented objective (with consistency) and the other using the standard ELBO objective (without consistency). We also demonstrate the ability of our model to disambiguate distributional boundaries from the distributional variations.
We use Fashion MNIST Xiao et al. (2017) 2 to simulate our sequential learning setting. We treat each object as a different distribution and present the model with samples drawn from a single distribution at a time. We sequentially progress over the ten available distributions. When a distribution transition occurs (new object) we signal the model, make the latest student the new teacher and instantiate a new student model.
We quantify the performance of the generative models by computing the ELBO over the standard Fashion MNIST test set after every distributional transition. The test set contains objects from all of the individual distributions. We run this procedure ten times and report the average test ELBO over the ten repetitions in figure 3(c). We see that around the 3rd interval (the 3rd distributional transition), the negative ELBO of the with consistency model is systematically below (∼ 20 nats) that of the without consistency model. This confirms the benefits of our new objective formulation for reducing the effects of the catastrophic interference, a crucial property in our lifelong learning setting. In the same figure we also plot the ELBO of the baseline batch VAE. The batch VAE will always outperform our model because it has simultaneous access to all of the distributions during training.
1 A similar idea is leveraged in InfoGAN Chen et al. (2016).
2 We do a similar experiment over MNIST in the appendix.
After observing and training over all ten distributions we generate samples from the final students of the two models. We do this by fixing the discrete distribution zd to one-hot vectors over the whole categorical distribution, while randomly sampling the continuous prior zc ∼ N (0, I). We contrast samples generated from the model with consistency (figure 3(a)) to the model without consistency (figure 3(b)). Our model learns to separate ’style’ from the distributional boundaries. For example, in the last row of our with consistency model, we observe the various styles of shoes. The without consistency model mixes the distributions randomly. This illustrates the benefits that our augmented objective has for achieving consistent sampling from the individual distributional components.
5.2 ROTATED MNIST : LONG TERM DISTRIBUTION ACCUMULATION
In this experiment we dig deeper into the benefits our objective formulation brings for the lifelong learning setting. We expose the models to a much larger number of distributions and we explore how our augmented objective from 4.3 helps in preserving the previously learned knowledge. As in section 5.1, we compare models with and without consistency with identical teacher-student architectures. We measure the ability of the models to recall the previously learned information by looking at the consistency between the posterior of the student and the teacher models over the test data set
consistency: #{k : QΦ(zd|xk) == Qφ(zd|xk),xk ∈ Xtest} . (4)
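One plausible implementation of this measure compares, for every test point, which discrete mode the student's and the teacher's posteriors select; reading equation (4) as argmax agreement is our assumption.

```python
import torch

def posterior_consistency(student_logits, teacher_logits):
    """Fraction of test points whose student and teacher discrete posteriors pick the same mode."""
    agree = student_logits.argmax(dim=-1) == teacher_logits.argmax(dim=-1)
    return agree.float().mean().item()
```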
We use the MNIST dataset in which we rotate each of the original digit samples by angles ν = [30◦, 70◦, 130◦, 200◦, 250◦]. We treat each rotation of a single digit family as an individual distribution {P ∗i (x)}70i=1. Within each distributional interval, we sample the data by first sampling (uniformly with replacement) one of the 70 distributions and then sampling the data instances x from the selected distribution.
Figure 4(b) compares the consistency results of the two tested models throughout the learning process. Our model with the augmented objective clearly outperforms the model that uses the simple ELBO objective. This confirms the usefulness of the additional terms in our objective for preserving the previously learned knowledge in accordance with the lifelong learning paradigms. In addition, similarly as in experiment 5.1, figure 4(a) documents that the model with the augmented objective (thanks to reducing the effects of the catastrophic interference) achieves lower negative test ELBO systematically over the much longer course of learning (∼ 30 nats). We also visualise in figure 4(c) how the accumulation of knowledge speeds up the learning process. For each distributional interval we plot the norms of the model gradients across the learning iterations. We observe that for later distributional intervals the curves become steeper much quicker, reducing the gradients and reaching (lower) steady states much faster than in the early learning stages. This suggests that the latter models are able to learn quicker in our proposed architecture.
5.3 SVHN TO MNIST
In this experiment we explore the ability of our model to retain and transfer knowledge across completely different datasets. We use MNIST and SVHN Netzer et al. (2011) to demonstrate this. We treat all samples from SVHN as being generated by one distribution P ∗1 (x) and all the MNIST 3 samples as generated by another distribution P ∗2 (x) (irrespective of the specific digit).
We first train a student model (standard VAE) over the entire SVHN data set. Once done, we freeze the parameters of the encoder and the decoder and transfer the model into the teacher state (φ → Φ,θ → Θ). We then use this teacher to aid the learning of the new student over the mix of the teacher-generated synthetic SVHN samples x̂ and the true MNIST data.
We use the final student model to reconstruct samples from the two datasets by passing them through the learned encoding/decoding flow: x ∼ P ∗i (x) → z ∼ Qφ(z|x) → x̂ ∼ Pθ(x|z). We visualise examples of the true inputs x and the respective reconstructions x̂ in figure 5(a). We see that even though the only true data the final model received for training were from MNIST, it can still reconstruct SVHN data. This confirms the ability of our architecture to transition between complex distributions while still preserving the knowledge learned from the previously observed distributions.
Finally, in figure 5(b) and 5(c) we illustrate the data generated from an interpolation of a 2-dimensional continuous latent space. For this we specifically trained the models with the continuous latent variable zc ∈ R2. To generate the data, we fix the discrete categorical zd to one of the possible values {[0, 1], [1, 0]} and linearly interpolate the continuous zc over the range [−3, 3]. We then decode these to obtain the samples x̂ ∼ Pθ(x|zd, zc). The model learns a common continuous structure for the two distributions which can be followed by observing the development in the generated samples from top left to bottom right on both figure 5(b) and 5(c).
3 In order to work over both of these datasets we convert MNIST to RGB and resize it to 32x32 to make it consistent with the dimensions of SVHN.
6 CONCLUSION
In this work we propose a novel method for learning generative models over streaming data following the lifelong learning principles. The principal assumption for the data is that they are generated by multiple distributions and presented to the learner in a sequential manner (a set of observations from a single distribution followed by a distributional transition). A key limitation for the learning is that the method can only access data generated by the current distribution and has no access to any of the data generated by any of the previous distributions.
The proposed method is based on a dual student-teacher architecture where the teacher’s role is to preserve the past knowledge and aid the student in future learning. We argue for and augment the standard VAE’s ELBO objective by terms helping the teacher-student knowledge transfer. We demonstrate on a series of experiments the benefits this augmented objective brings in the lifelong learning settings by supporting the retention of previously learned knowledge (models) and limiting the usual effects of catastrophic interference.
In our future work we will explore the possibilities to extend our architecture to GAN-like Goodfellow et al. (2014) learning with the prospect to further improve the generative abilities of our method. GANs, however, do not use a metric for measuring the quality of the learned distributions such as the marginal likelihood or the ELBO in their objective and therefore the transfer of our architecture to these is not straightforward.
7 APPENDIX
7.0.1 UNDERSTANDING THE CONSISTENCY REGULARIZER
The analytical derivations of the consistency regularizer show that the regularizer can be interpreted as a transformation of the standard VAE regularizer. In the case of an isotropic gaussian posterior, the proposed regularizer scales the mean and variance of the student posterior by the variance of the teacher (corollary 7.0.2) and adds an extra ’volume’ term. This interpretation of the consistency regularizer shows that the proposed regularizer preserves the same learning objective as that of the standard VAE. Below we present the analytical form of the consistency regularizer with categorical and isotropic gaussian posteriors:
Corollary 7.0.1 We parameterize the learnt posterior of the teacher by Φi = exp(p^E_i) / Σ_{i=1}^{J} exp(p^E_i) and the posterior of the student by φi = exp(p^S_i) / Σ_{i=1}^{J} exp(p^S_i). We also redefine the normalizing constants as c^E = Σ_{i=1}^{J} exp(p^E_i) and c^S = Σ_{i=1}^{J} exp(p^S_i) for the teacher and student models respectively. The reverse KL divergence in equation 8 can now be re-written as:

KL(Qφ(zd|x) || QΦ(zd|x)) = Σ_{i=1}^{J} (exp(p^S_i)/c^S) log( (exp(p^S_i)/c^S) · (c^E/exp(p^E_i)) ) = H(p^S, p^S − p^E) = −H(p^S) + H(p^S, p^E)   (5)
where H( ) is the entropy operator and H( , ) is the cross-entropy operator.
Corollary 7.0.2 We assume the learnt posterior of the teacher is parameterized by a centered, isotropic gaussian with Φ = [µE = 0, ΣE = σE²I] and the posterior of our student by a non-centered isotropic gaussian with φ = [µS, ΣS = σS²I], then

KL(Qφ(z|x) || QΦ(z|x)) = 0.5 [ tr(ΣE⁻¹ΣS) + (µE − µS)ᵀ ΣE⁻¹ (µE − µS) − F + log( |ΣE| / |ΣS| ) ]
= 0.5 Σ_{j=1}^{F} [ (σS²(j) + µS²(j)) / σE²(j) − 1 + log σE²(j) − log σS²(j) ]
= KL(Qφ∗(z|x) || N(0, I)) − log |ΣE|   (6)

Via a reparameterization of the student’s parameters:

φ∗ = [µS∗, σS∗²],   µS∗(j) = µS(j) / σE²(j),   σS∗²(j) = σS²(j) / σE²(j)   (7)

It is also interesting to note that our posterior regularizer becomes the prior if:

lim_{σE² → 1} KL(Qφ(z|x) || QΦ(z|x)) = KL(Qφ(z|x) || N(0, I))
7.1 ELBO DERIVATION
Variational inference Hoffman et al. (2013) side-steps the intractability of the posterior distribution by approximating it with a tractable distribution QΦ(z|x); we then optimize the parameters Φ in order to bring this distribution close to Pθ(z|x). The form of this approximate distribution is fixed and is generally conjugate to the prior P (z). Variational inference converts the problem of posterior inference into an optimization problem over Φ. This allows us to utilize stochastic gradient descent to solve our problem. To be more concrete, variational inference tries to minimize the reverse Kullback-Leibler (KL) divergence between the variational posterior distribution QΦ(z|x) and the true posterior Pθ(z|x):
KL[QΦ(z|x) || Pθ(z|x)] = log Pθ(x) − EQΦ(z|x)[ log( Pθ(x, z) / QΦ(z|x) ) ]   (8)
where the expectation term is the lower bound Lθ.
Rearranging the terms in equation 8 and utilizing the fact that the KL divergence is a measure, we can derive the evidence lower bound Lθ (ELBO) which is the objective function we directly optimize:
log Pθ(x) ≥ EQΦ(z|x)[log Pθ(x|z)]−KL(QΦ(z|x) || P (z)) = Lθ (9)
In order to backpropagate it is necessary to remove the dependence on the stochastic variable z. To achieve this, we push the sampling operation outside of the computational graph for the normal distribution via the reparameterization trick Kingma & Welling (2014) and the gumbel-softmax reparameterization Maddison et al. (2016); Jang et al. (2017) for the discrete distribution. In essence the reparameterization trick allows us to introduce a distribution P(ε) that is not a function of the data or computational graph in order to move the gradient operator into the expectation:
∇ EQΦ(z|x)[ log( Pθ(x, z) / QΦ(z|x) ) ] ↦ EP(ε)[ ∇ log( Pθ(x, z) / QΦ(z|x) ) ]   (10)
7.2 MODEL RELATED
In this section we provide extra details of our model architecture.
7.2.1 MODEL ARCHITECTURE
We utilized two different architectures for our experiments. The first two utilize a standard deep neural network with two layers of 512 to map to the latent representation and two layers of 512 to map back to the reconstruction for the decoder. We used batch norm Ioffe & Szegedy (2015) and ELU activations for all the layers barring the layer projecting into the latent representation and the output layer.
The final experiment with the transfer from SVHN to MNIST utilizes a fully convolutional architecture with only strided convolutional layers in the encoder (where the number of filters are doubled at each layer). The final projection layer for the encoder maps the data to a [C=|zd|, 1, 1] output which is then reparameterized in the standard way. The decoder utilizes fractional strides for the convolutional-transpose (de-convolution) layers where we reduce the number of filters in half at each layer. The full architecture can be examined in our code repository [which will be de-anonymized after the review process]. All layers used batch norm Ioffe & Szegedy (2015) and ELU activations.
We utilized Adam Kingma & Ba (2015) to optimize all of our problems with a learning rate of 1e-4. When we utilized weight transfer we re-initialized the accumulated momentum vector of Adam as well as the aggregated mean and covariance of the Batch Norm layers. Our code is already available online under an MIT license at 4
7.2.2 GUMBEL REPARAMETERIZATION
Since we model our latent variable as a combination of a discrete and a continuous distribution we also use the Gumbel-Softmax reparameterization Maddison et al. (2016); Jang et al. (2017). The Gumbel-Softmax reparameterization over logits [linear output of the last layer in the encoder] p ∈ RM and an annealed temperature parameter τ ∈ R is defined as:
z = softmax( (log(p) + g) / τ );   g = −log(−log(u)),   u ∼ Unif(0, 1)   (11)
where u ∈ R^M and g ∈ R^M. As the temperature parameter τ → 0, z converges to a categorical.
4 https://github.com/<anonymized>
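A direct transcription of equation (11) follows; the small constants guard against log(0) and the temperature value is a placeholder.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(log_p, tau):
    """z = softmax((log p + g) / tau) with g = -log(-log(u)), u ~ Unif(0, 1)."""
    u = torch.rand_like(log_p)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)       # Gumbel(0, 1) noise
    return F.softmax((log_p + g) / tau, dim=-1)

# as tau -> 0 the relaxed sample approaches a one-hot categorical draw
log_p = torch.log(torch.tensor([0.2, 0.3, 0.5]))
print(gumbel_softmax_sample(log_p, tau=0.1))
```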
7.2.3 EXPANDABLE MODEL CAPACITY AND REPRESENTATIONS
Multilayer neural networks with sigmoidal activations have a VC dimension bounded between O(ρ²) Sontag (1998) and O(ρ⁴) Karpinski & Macintyre (1997), where ρ is the number of parameters. A model that is able to consistently add new information should also be able to expand its VC dimension by adding new parameters over time. Our formulation imposes no restrictions on the model architecture: i.e. new layers can be added freely to the new student model.
In addition we also allow the dimensionality of zd ∈ RJ , our discrete latent representation to grow in order to accommodate new distributions. This is possible because the KL divergence between two categorical distributions of different sizes can be evaluated by simply zero padding the teacher’s smaller discrete distribution. Since we also transfer weights between the teacher and the student model, we need to handle the case of expanding latent representations appropriately. In the event that we add a new distribution we copy all the weights besides the ones immediately surrounding the projection into and out of the latent distribution. These surrounding weights are reinitialized to their standard Glorot initializations Glorot & Bengio (2010).
7.3 FORWARD VS. REVERSE KL
In our setting we have the ability to utilize the zero forcing (reverse or mode-seeking) KL or the zero avoiding (forward) KL divergence. In general, if the true underlying posterior is multi-modal, it is preferable to operate with the reverse KL divergence (Murphy (2012) 21.2.2). In addition, utilizing the mode-seeking KL divergence generates more realistic results when operating over image data.
In order to validate this, we repeat the experiment in 5.1. We train two models: one with the forward KL posterior regularizer and one with the reverse. We evaluate the -ELBO mean and variance over ten trials. Empirically, we observed no difference between the different measures. This is demonstrated in figure 6.
7.4 NUMBER OF REQUIRED SAMPLES
Our method derives its sample complexity from standard VAEs. In practice we evaluate the number of required real and synthetic samples by utilizing early stopping. When the negative ELBO on the validation set stops decreasing for 50 steps we stop training the current model and transition to the next distribution interval. Using this and the fact that we keep equal proportions of all observed distributions in our minibatch, we can evaluate the number of synthetic and real samples used during the single distribution interval. We demonstrate this procedure on experiment 5.1 in figure 7.
We observe a rapid decrease of the number of required real samples as we assimilate more distributions into our model.
7.5 EXPERIMENTS RELATED
In this section we provide an extra experiment run on MNIST as well as some extra images from the rotated MNIST experiment.
7.5.1 MNIST : GENERATION AND ELBO
In this experiment, we seek to establish the performance benefit that the consistency regularizer brings into the learning process. We do so by evaluating the ELBO for a model with and without the consistency and mutual information regularizers. We also demonstrate the ability of the regularizers to disambiguate distributional boundaries and their inter-distributional variations. I.e. for MNIST this separates the MNIST digits from their inter-class variants (i.e drawing style).
We use MNIST to simulate our sequential learning setting. We treat each digit as a different distribution and present the model with samples drawn from a single distribution at a time. For the purpose of this experiment we sequentially progress over the ten distributions (i.e. interval sampling involves linearly iterating over all the distributions ).
When an interval transition occurs we signal the model, make the student the new teacher and instantiate a new student model. We contrast this to a model that utilizes the same graphical model, without our consistency and mutual information regularizers. We quantify the performance of the generative models by computing the ELBO over the standard MNIST test set at every interval. The test set contains digits from all of the individual distributions. We run this procedure ten times and report the average ELBO over the test set.
After observing all ten distributions we evaluate samples generated from the final student model. We do this by fixing the discrete distribution zd, while randomly sampling zc ∼ N (0, I). We contrast samples generated from the model with both regularizers (left-most image in 8) to the model without the regularizers (center image in 8). Our model learns to separate ’style’ from distributional boundaries. This is demonstrated by observing the digit ’2’: i.e. different samples of zc produce different styles of writing a ’2’.
7.5.2 ROTATED MNIST EXPERIMENT
We provide a larger sized image for the ELBO from experiment 5.2. We also visualize reconstructions from the rotated MNIST problem (visualized in figure 10). Finally in figure 11 we show the effects on the reconstructions when we do not use the mutual information regularizer. We believe this is due to the fact that the network utilizes the larger continuous representation to model the discriminative aspects of the observed distribution. | 1. What is the focus of the paper in terms of the problem it addresses?
2. What sets this paper apart from other variations of variational autoencoders?
3. How does the paper approach the problem of lifelong learning?
4. Is the construction of the generative model in line with standard practices in the field?
5. Are the derivations presented in the paper correct?
6. How does the reviewer assess the diversity and convincing nature of the experimental evaluation? | Review | Review
We have seen numerous variants of variational autoencoders, most of them introducing delta changes to the original architecture to address the same sort of modeling problems. This paper attacks a different kind of problem, namely lifelong learning. This key aspect of the paper, besides the fact that it constitutes a very important problem, also adds a strong element of freshness to the paper.
The construction of the generative model is correct, and commensurate with standard practice in the field of deep generative models. The derivations are correct, while the experimental evaluation is diverse and convincing. |
ICLR | Title
Lifelong Generative Modeling
Abstract
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning. It is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model. We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain the past data nor the past models. Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now. The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data. We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.
1 INTRODUCTION
Deep unsupervised generative learning allows us to take advantage of the massive amount of unlabeled data available in order to build models that efficiently compress and learn an approximation of the true data distribution. It has numerous applications such as image denoising, inpainting, super-resolution, structured prediction, clustering, pre-training and many more. However, something that is lacking in the modern ML toolbox is an efficient way to learn these deep generative models in a sequential, lifelong setting.
In a lot of real world scenarios we observe distributions sequentially. Examples of this include streaming data from sensors such as cameras and microphones or other similar time series data. A system can also be resource limited wherein all of the past data or learnt models cannot be stored. We are interested in the lifelong learning setting for generative models where data arrives sequentially in a stream and where the storage of all data is infeasible. Within the stream, instances are generated according to some non-observed distribution which changes at given time-points. We assume we know the time points at which the transitions occur and whether the latent distribution is a completely new one or one that has been observed before. We do not however know the underlying identity of the individual distributions. Our goal is to learn a generative model that can summarize all the distributions seen so far in the stream. We give an example of such a setting in figure 1(a) using MNIST LeCun & Cortes (2010), where we have three unique distributions and one that is repeated.
Since we only observe one distribution at a time we need to develop a strategy of retaining the previously learnt knowledge (i.e. the previously learnt distributions) and integrate it into future learning. To accumulate additional distributions in the current generative model we utilize a student-teacher architecture similar to that in distillation methods Hinton et al. (2015); Furlanello et al. (2016). The teacher contains a summary of all past distributions and is used to augment the data used to train the student model. The student model thus receives data samples from the currently observable distribution as well as synthetic data samples from previous distributions. This allows the student model to learn a distribution that summarizes the current as well as all previously observed distributions. Once a new distribution shift occurs the existing teacher model is discarded, the student becomes the teacher and a new student is instantiated.
We further leverage the generative model of the teacher by introducing a regularizer in the learning objective function of the student that brings the posterior distribution of the latter close to that of the
former. This allows us to build upon and extend the teacher’s generative model in the student each time the latter is re-instantiated (rather than re-learning it from scratch). By coupling this regularizer with a weight transfer from the teacher to the student we also allow for faster convergence of the student model. We empirically show that the regularizer allows us to learn a much larger set of distributions without catastrophic interference McCloskey & Cohen (1989).
We build our lifelong generative models over Variational Autoencoders (VAEs) Kingma & Welling (2014). VAEs learn the posterior distribution of a latent variable model using an encoder network; they generate data by sampling from a prior and decoding the sample through a conditional distribution learnt by a decoder network.
Using a vanilla VAE as a teacher to generate synthetic data for the student is problematic due to a couple of limitations of the VAE generative process. 1) Sampling the prior can select a point in the latent space that is in between two separate distributions, causing generation of unrealistic synthetic data and eventually leading to loss of previously learnt distributions. 2) Additionally, data points mapped to the posterior that are further away from the prior mean will be sampled less frequently resulting in an unbalanced sampling of the constituent distributions. Both limitations can be understood by visually inspecting the learnt posterior distribution of a standard VAE evaluated on test images from MNIST as shown in figure 1(b). To address the VAE’s sampling limitations we decompose the latent variable vector into a continuous and a discrete component. The discrete component is used to summarize the discriminative information of the individual generative distributions while the continuous caters for the remaining sample variability. By independently sampling the discrete and continuous components we preserve the distributional boundaries and circumvent the two problems above.
This sampling strategy, combined with the proposed regularizer allows us to learn and remember all the individual distributions observed in the past. In addition we are also able to generate samples from any of the past distributions at will; we call this property consistent sampling.
2 RELATED WORK
Past work in sequential learning of generative models has focused on learning Gaussian mixture models Singer & Warmuth (1999); Declercq & Piater (2008) or on variational methods such as Variational EM Ghahramani & Attias (2000). Work that is closer to ours is the online or sequential learning of generative models in a streaming setting. Variational methods have been adapted for a streaming setting, e.g: Streaming Variational Bayes Broderick et al. (2013), Streaming Variational Mixture models Tank et al. (2015), and the Population Posterior McInerney et al. (2015). However their learning objectives are very different from ours. The objective of these methods is to adjust the learnt model such that it reflects the current data distribution as accurately as possible, while forgetting the previously observed distributions. Instead we want to do lifelong learning and retain all previously observed distributions within our learnt model. As far as we know our work is the first one that tries to bring generative models, and in particular VAEs, into a lifelong setting where distributions are seen, learnt, and remembered sequentially.
VAEs rely on an encoder and a decoder neural network in order to learn the parameters of the posterior and likelihood. One of the central problems that arise when training a neural network in an sequential manner is that it causes the model to run into the problem of catastrophic interference McCloskey & Cohen (1989). Catastrophic interference appears when we train neural networks in a sequential manner and model parameters start to become biased to the most recent samples observed, while forgetting what was learnt from older samples. This generally happens when we stop exposing the model to past data. There have been a number of attempts to solve the problem of catastrophic interference in neural networks. These range from distillation methods such as the original method Hinton et al. (2015) and ALTM Furlanello et al. (2016), to utilizing privileged information Lopez-Paz et al. (2016), as well as transfer learning approaches such as Learning Without Forgetting Li & Hoiem (2016) and methods that relay information from previously learnt hidden layers such as in Progressive Neural Networks Rusu et al. (2016) and Deep Block-Modular Neural Networks Terekhov et al. (2015). All of these methods necessitate the storage of previous models or data; our method does not.
The recent work of elastic weight consolidation (EWC) Kirkpatrick et al. (2017) utilizes the Fisher Information matrix (FIM) to avoid the problem of catastrophic interference. The FIM captures the sensitivity of the log-likelihood with respect to the model parameters; EWC leverages this (via a linear approximation of the FIM) to control the change of model parameter values between varying distributions. Intuitively, important parameters should not have their values changed, while non-important parameters are left unconstrained. Since EWC assumes model parameters being distributed under an exponential family, it allows for the utilization of the FIM as a quadratic approximationJeffreys (1946) to the Kullback-Leibler (KL) divergence. Our model makes no such distributional assumptions about the model parameters. Instead of constraining the parameters of the model as in EWC, we restrict the posterior representation of the student model to be close to that of the teacher for the previous distributions accumulated by the teacher. This allows the model parameters to vary as necessary in order to best fit the data.
3 BACKGROUND
We consider an unsupervised setting where we observe a sample X of K ≥ 1 realizations X = {x(0),x(1), ...,x(K)} from an unknown true distribution P ∗(x) with x ∈ RN . We assume that the data is generated by a random process involving a non-observed random variable z ∈ RM . In order to incorporate our prior knowledge we posit a prior P (z) over z. Our objective is to approximate the true underlying data distribution by a model Pθ(x) such that Pθ(x) ≈ P ∗(x). Given a latent variable model Pθ(x|z)P (z) we obtain the marginal likelihood Pθ(x) by integrating out the latent variable z from the joint distribution. The joint distribution can in turn be factorized using the conditional distribution Pθ(x|z) or the posterior Pθ(z|x).
Pθ(x) = ∫ Pθ(x, z)δz = ∫ Pθ(z|x)Pθ(x)δz = ∫ Pθ(x|z)P (z)δz (1)
We model the conditional distribution Pθ(x|z) by a decoder, typically a neural network. Very often the marginal likelihood Pθ(x) will be intractable because the integral in equation (1) does not have an analytical form nor an efficient estimator (Kingma (2017)). As a result the respective posterior distribution, Pθ(z|x), is also intractable. Variational inference side-steps the intractability of the posterior by approximating it with a tractable distribution Qφ(z|x) ≈ Pθ(z|x). VAEs use an encoder (generally a neural network) to model the approximate posterior Qφ(z|x) and optimize the parameters φ to minimize the reverse KL divergence KL[Qφ(z|x)||Pθ(z|x)] between the approximate posterior distribution Qφ(z|x) and the true posterior Pθ(z|x). Given that Qφ(z|x) is a powerful model (such that the KL divergence against the true posterior will be close to zero) we maximize the tractable Evidence Lower BOund (ELBO) to the intractable marginal likelihood. Lθ(x) ≤ Pθ(x) (full derivation available in the appendix)
ELBO: Lθ(x) = EQφ(z|x)[logPθ(x|z)]−KL[Qφ(z|x) || P (z)] (2)
By sharing the variational parameters φ of the encoder across the data points (amortized inference Gershman & Goodman (2014)), variational autoencoders avoid per-data optimization loops typically needed by mean-field approaches.
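To make the amortized setup above concrete, the following is a minimal sketch (assuming a PyTorch-style implementation; the layer sizes, Bernoulli likelihood and variable names are our own choices, not taken from the paper) of computing the ELBO in equation (2) with a Gaussian encoder and a Bernoulli decoder:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianVAE(nn.Module):
    """Minimal VAE: Gaussian posterior Q_phi(z|x), Bernoulli likelihood P_theta(x|z)."""
    def __init__(self, x_dim=784, z_dim=32, h_dim=512):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ELU(),
                                 nn.Linear(h_dim, 2 * z_dim))   # -> [mu, log_var]
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ELU(),
                                 nn.Linear(h_dim, x_dim))       # -> Bernoulli logits

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, log_var

def elbo(model, x):
    """Equation (2): E_q[log P(x|z)] - KL[Q(z|x) || N(0, I)], per batch element."""
    logits, mu, log_var = model(x)
    log_px_z = -F.binary_cross_entropy_with_logits(logits, x, reduction='none').sum(-1)
    kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)  # closed-form Gaussian KL
    return log_px_z - kl
```

Training then maximizes elbo(model, x).mean() over minibatches (equivalently, minimizes its negative).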
3.1 SEQUENTIAL GENERATIVE MODELING
The standard setting in maximum-likelihood generative modeling is to estimate the set of parameters θ that will maximize the marginal likelihood Pθ(x) for data sample X generated IID from a single true data distribution P*(x). In our work we assume the data are generated from multiple distributions P*_i(x) such that P*(x) = Σ_i π*_i P*_i(x). In classical batch generative modelling, the individual data points are not associated with the specific generative distributions P*_i(x). Instead, the whole sample X is considered to be generated from the mixture distribution P*(x). Latent variable models Pθ(x, z) = Pθ(x|z)P(z) (such as VAEs) capture the complex structures in P*(x) by conditioning the observed variables x on the latent variables z and combining these in (possibly infinite) mixtures Pθ(x) = ∫ Pθ(x|z)P(z) dz.
Our sequential setting is vastly different from the batch approach described above. We receive a stream of (possibly infinite) data X = {X_1, X_2, . . .} where the data samples X_i = {x_i^(1), x_i^(2), . . . , x_i^(K_i)} originate from the components P*_i(x) of the generative distribution. At any given time we observe the latest sample X_i generated from a single component P*_i(x) without access to any of the previous samples generated by the other components of P*(x). Our goal is to sequentially build an approximation Pθ(x) of the true mixture P*(x) by only observing data from a single component P*_i(x) at a time.
4 MODEL
To enable lifelong generative learning we propose a dual model architecture based on a student-teacher model. The teacher and the student have rather different roles throughout the learning process: the teacher’s role is to preserve the memory of the previously learned tasks and to pass this knowledge onto the student; the student’s role is to learn the distributions over the new incoming data while accommodating for the knowledge obtained from the teacher. The dual model architecture is summarized in figure 2.
The top part represents the teacher model. At any given time the teacher contains a summary of all previous distributions within the learned parameters of the encoder QΦ(z|x) and the decoder PΘ(x|z). The teacher is used to generate synthetic samples x̂ from these past distributions by decoding samples from the prior ẑ ∼ P (z) through the decoder x̂ ∼ PΘ(x|ẑ). The generated synthetic samples x̂ are passed onto the student model as a form of knowledge transfer about the past distributions.
The bottom part of figure 2 represents the student, which is responsible for updating the parameters of the encoder Qφ(z|x) and decoder Pθ(x|z) models over the newly observed data. The student is exposed to a mixture of learning instances x sampled from x ∼ P(ω)P(x|ω), ω ∼ Ber(π); it sees synthetic instances generated by the teacher P(x|ω = 0) = PΘ(x|z), and real ones sampled from the currently active training distribution P(x|ω = 1) = P*(x). The mean π of the Bernoulli distribution controls the sampling proportion of the previously learnt distributions to the current one.
If we have seen k distinct distributions prior to the currently active one then π = k/(k+1). In this way we ensure that all the past distributions and the current one are equally represented in the training set used by the student model.
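A minimal sketch of this sampling scheme follows (names are ours; we assume a teacher_generate callable that produces synthetic batches, and we interpret π = k/(k+1) as the fraction of teacher-generated samples so that all k+1 distributions are equally represented, which matches the stated intent):

```python
import torch

def mixed_batch(teacher_generate, real_batch, num_seen_distributions):
    """Mix teacher-generated synthetic data with real data from the current distribution.

    pi = k / (k + 1), where k is the number of previously seen distributions; a sample is
    drawn from the teacher with probability pi and from the current real data otherwise.
    Assumes flattened data of shape (batch_size, dim).
    """
    k = num_seen_distributions
    pi = k / (k + 1.0)
    batch_size = real_batch.size(0)
    omega = torch.bernoulli(torch.full((batch_size,), 1.0 - pi))  # omega = 1 -> real sample
    synthetic = teacher_generate(batch_size)                      # x_hat ~ P_Theta(x|z), z ~ P(z)
    x = torch.where(omega.bool().unsqueeze(-1), real_batch, synthetic)
    return x, omega
```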
Once a new distribution is signalled, the old teacher is dropped, the student model is frozen and becomes the new teacher (φ→ Φ,θ → Θ), and a new student is initiated with the latest weights φ and θ from the previous student (the new teacher).
4.1 TEACHER-STUDENT CONSISTENCY
Each new student instantiation uses the input data mix to learn a new approximate posterior Qφ(z|x). In addition to being initiated by the new teacher’s weights and receiving information about the teacher’s knowledge via the synthetic samples x̂, we further foster the lifelong learning idea by bringing the latent variable posterior induced by the student model closer to the respective posterior induced by the teacher model. We enforce the latter constraint only over the synthetic samples, ensuring that the previously learnt latent variable posteriors are preserved over the different models. In doing so, we alleviate the effect of catastrophic interference.
To achieve this, we complement the classical VAE objective (equation (2)) with a term minimizing the KL divergence KL[Qφ(z|x̂)||QΦ(z|x̂)] between the student’s and the teacher’s posteriors over the synthetic data x̂. The teacher’s encoder model, which already has the accumulated knowledge from the previous learning steps, is thus reused within the new student’s objective. Under certain mild assumptions, we show that this objective reparameterizes the student model’s posterior, while preserving the same learning objective as a standard VAE (appendix section 7.0.1).
4.2 LATENT VARIABLE
A critical component of our model is the synthetic data generation by the teacher’s decoder x̂ ∼ PΘ(x|z). The synthetic samples need to be representative of all the previously observed distributions in order to provide the student with ample information about the learning history. The teacher generates these synthetic samples by first sampling the latent variable from the prior ẑ ∼ P (z) followed by the decoding step x̂ ∼ PΘ(x|ẑ). As we will describe shortly, the latent variable ẑ has a categorical component which corresponds to all the past distributions. This categorical component allows us to uniformly sample synthetic instances from all past distributions.
A simple unimodal prior distribution P(z), such as the isotropic Gaussian typically used in classical VAEs, results in an undersampling of the data points that are mapped to a posterior mean that is far away from the prior mean. Visualizing the 2d latent posterior of MNIST in figure 1(b) gives a better intuition of this problem. If, for example, the prior mean corresponds to a point in latent space between two disparate distributions, the sample generated will not correspond to a sample from the real distribution. Since we use synthetic samples from the teacher in the student model, this aliased sample corresponding to the prior mean will be reused over and over again, corrupting the learning process. In addition, we would under-represent the respective true distributions in the learning input mix of the student, which would eventually lead to distribution loss.
We circumvent this in our model by decomposing the latent variable z into a discrete component z_d ∈ R^J and a continuous component z_c ∈ R^F, z = [z_d, z_c]. The discrete component z_d summarises the most discriminative information about each of the true generating distributions P*_i(x). We use the uniform multivariate categorical prior z_d ∼ Cat(1/J) to represent it and the same parametric family for the approximate posterior QΦ(z|x). The continuous z_c component is the global representation of the distributional variability; we use the multivariate standard normal as the prior z_c ∼ N(0, I) and the isotropic multivariate normal N(µ, σ²I) for the approximate posterior.
When generating synthetic data, the teacher now independently samples from the discrete and continuous priors ẑ_d ∼ P(z_d), ẑ_c ∼ P(z_c) and uses the composition of these to condition the decoding step x̂ ∼ PΘ(x|ẑ_d, ẑ_c). Since the discrete representation ẑ_d is associated with the true generative distribution components P*_i(x), uniformly sampling the discrete prior ensures that the distributions are well represented in the synthetic mix that the student observes.
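A sketch of this generation step (the helper names are ours; we assume the teacher's decoder takes the concatenated [z_d, z_c] and outputs Bernoulli logits):

```python
import torch
import torch.nn.functional as F

def teacher_generate(decoder, batch_size, J, F_dim):
    """Generate synthetic samples by independently sampling the discrete and continuous priors.

    z_d ~ Cat(1/J), encoded as a one-hot vector, and z_c ~ N(0, I). Uniformly sampling z_d
    keeps every previously seen distribution equally represented in the synthetic mix.
    """
    idx = torch.randint(0, J, (batch_size,))           # z_d ~ Cat(1/J)
    z_d = F.one_hot(idx, num_classes=J).float()
    z_c = torch.randn(batch_size, F_dim)               # z_c ~ N(0, I)
    z = torch.cat([z_d, z_c], dim=-1)
    with torch.no_grad():
        x_hat = torch.sigmoid(decoder(z))              # x_hat ~ P_Theta(x | z_d, z_c)
    return x_hat
```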
In general, the capacity of a categorical distribution is less than that of a continuous normal distribution. To prevent the VAE’s encoder from using primarily the continuous representation while disregarding the discrete one we further complement the learning objective by a term maximising the mutual information between the discrete representation and the data I(zd; x) = H(zd)−H(zd|x). H(zd) is used to denote the marginal entropy of zd and H(zd|x) denotes the conditional entropy of zd given x.1
4.3 LEARNING OBJECTIVE
The final learning objective for each of the student models is the maximization of the ELBO from equation (2), augmented by the negative of the cross-model consistency term introduced in section 4.1 and the mutual information term proposed in section 4.2.
$$\underbrace{\mathbb{E}_{Q_\phi(z|x)}[\log P_\theta(x|z)] - KL[Q_\phi(z|x)\,\|\,P(z)]}_{\text{VAE ELBO}} \;-\; \underbrace{\mathbb{1}(\omega=0)\, KL[Q_\phi(z_d|x)\,\|\,Q_\Phi(z_d|x)]}_{\text{Consistency Regularizer}} \;+\; \underbrace{\lambda\, I(z_d;x)}_{\text{Mutual Info}}, \qquad (3)$$
We sample the training instances x from x ∼ P (ω)P (x|ω),ω ∼ Ber(π) as described in section 4. Thus they can either be generated from the teacher model (ω = 0) or come from the training set of the currently active distribution (ω = 1). 1(.) is the indicator function which evaluates to 1 if its argument is true and zero otherwise; it makes sure that the consistency regularizer is applied only over the synthetic samples generated by the teacher. The λ hyper-parameter controls the importance of the mutual information regularizer. We present the analytical evaluation of the consistency regularizer in appendix section 7.0.1.
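Putting the pieces together, the following is a sketch of the per-batch student objective in equation (3) (variable and method names such as encode/decode are ours; the mutual-information term is approximated here by the entropy of the batch-averaged categorical posterior minus the mean per-sample entropy, which is our assumption since the paper does not spell out the estimator):

```python
import torch
import torch.nn.functional as F

def student_loss(x, omega, student, teacher, lam=1.0, eps=1e-8):
    """Negative of equation (3): -(ELBO) + 1(omega==0)*KL[q_phi(z_d|x)||q_Phi(z_d|x)] - lam*I(z_d;x).

    `student`/`teacher` are assumed to expose:
      encode(x) -> (cat_logits, mu, log_var)   # discrete + continuous posterior parameters
      decode(z) -> Bernoulli logits over x
    """
    cat_logits, mu, log_var = student.encode(x)
    q_d = F.softmax(cat_logits, dim=-1)

    # --- VAE ELBO terms (equation (2)) ---
    z_d = F.gumbel_softmax(cat_logits, tau=1.0, hard=False)        # relaxed discrete sample
    z_c = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)     # Gaussian reparameterization
    logits_x = student.decode(torch.cat([z_d, z_c], dim=-1))
    recon = F.binary_cross_entropy_with_logits(logits_x, x, reduction='none').sum(-1)
    kl_c = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
    uniform = torch.full_like(q_d, 1.0 / q_d.size(-1))
    kl_d = (q_d * (torch.log(q_d + eps) - torch.log(uniform))).sum(-1)
    neg_elbo = recon + kl_c + kl_d

    # --- Consistency regularizer, applied only to teacher-generated samples (omega == 0) ---
    with torch.no_grad():
        t_logits, _, _ = teacher.encode(x)
    p_d = F.softmax(t_logits, dim=-1)
    kl_consist = (1.0 - omega) * (q_d * (torch.log(q_d + eps) - torch.log(p_d + eps))).sum(-1)

    # --- Mutual information I(z_d; x) = H(z_d) - H(z_d | x), batch estimate ---
    q_bar = q_d.mean(0)
    marg_ent = -(q_bar * torch.log(q_bar + eps)).sum()
    cond_ent = -(q_d * torch.log(q_d + eps)).sum(-1).mean()
    mutual_info = marg_ent - cond_ent

    return (neg_elbo + kl_consist).mean() - lam * mutual_info
```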
5 EXPERIMENTS
We conducted a set of experiments to explore the behaviour and properties of the method we propose. We specifically concentrate on the benefits our model brings in the lifelong learning setting which is the main motivation of our work. We explain the settings of the individual experiments and their focus in the following three sections.
In all the experiments we use the notion of a distributional ‘interval’: the interval in which we observe samples from a single distribution P ∗i (x) before the transition to the next distribution P ∗i+1(x) occurs. The length of the intervals is in principle random and we developed a heuristic to generate these. We provide further details on this together with other technical details related to the network implementation and training common for all the experiments in the appendix.
5.1 FASHION MNIST : SEQUENTIAL GENERATION
In this experiment, we seek to establish the performance benefit that our augmented objective formulation in section 4.3 brings to learning, in contrast to the simple ELBO objective (2). We do so by training two models with identical student-teacher architectures as introduced in section 4, with one using the consistency and mutual information augmented objective (with consistency) and the other using the standard ELBO objective (without consistency). We also demonstrate the ability of our model to disambiguate distributional boundaries from the distributional variations.
We use Fashion MNIST Xiao et al. (2017) 2 to simulate our sequential learning setting. We treat each object as a different distribution and present the model with samples drawn from a single distribution at a time. We sequentially progress over the ten available distributions. When a distribution transition occurs (new object) we signal the model, make the latest student the new teacher and instantiate a new student model.
We quantify the performance of the generative models by computing the ELBO over the standard Fashion MNIST test set after every distributional transition. The test set contains objects from all of the individual distributions. We run this procedure ten times and report the average test ELBO over the ten repetitions in figure 3(c). We see that around the 3rd interval (the 3rd distributional
1A similar idea is leveraged in InfoGAN Chen et al. (2016). 2We do a similar experiment over MNIST in the appendix
transition), the negative ELBO of the with consistency model is systematically below (∼ 20 nats ) that of the without consistency model. This confirms the benefits of our new objective formulation for reducing the effects of the catastrophic interference, a crucial property in our lifelong learning setting. In the same figure we also plot the ELBO of the baseline batch VAE. The batch VAE will always outperform our model because it has simultaneous access to all of the distributions during training.
After observing and training over all ten distributions we generate samples from the final students of the two models. We do this by fixing the discrete distribution zd to one-hot vectors over the whole categorical distribution, while randomly sampling the continuous prior zc ∼ N (0, I). We contrast samples generated from the model with consistency (figure 3(a)) to the model without consistency (figure 3(b)). Our model learns to separate ’style’ from the distributional boundaries. For example, in the last row of our with consistency model, we observe the various styles of shoes. The without consistency model mixes the distributions randomly. This illustrates the benefits that our augmented objective has for achieving consistent sampling from the individual distributional components.
5.2 ROTATED MNIST : LONG TERM DISTRIBUTION ACCUMULATION
In this experiment we dig deeper into the benefits our objective formulation brings for the lifelong learning setting. We expose the models to a much larger number of distributions and we explore how our augmented objective from section 4.3 helps in preserving the previously learned knowledge. As in section 5.1, we compare models with and without consistency with identical teacher-student architectures. We measure the ability of the models to recall the previously learned information by looking at the consistency between the posterior of the student and the teacher models over the test data set:
consistency: #{k : Q_Φ(z_d|x_k) = Q_φ(z_d|x_k), x_k ∈ X_test}.    (4)
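A sketch of how this count could be computed (we assume the encoders expose an encode() method that returns the categorical logits first, and that agreement between the two posteriors is measured via their argmax, which the text does not state explicitly):

```python
import torch

def posterior_consistency(student, teacher, test_loader):
    """Count test points on which the student's and teacher's discrete posteriors agree (equation (4))."""
    agree = 0
    with torch.no_grad():
        for x, _ in test_loader:
            s_logits, _, _ = student.encode(x)
            t_logits, _, _ = teacher.encode(x)
            agree += (s_logits.argmax(-1) == t_logits.argmax(-1)).sum().item()
    return agree
```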
We use the MNIST dataset in which we rotate each of the original digit samples by angles ν = [30◦, 70◦, 130◦, 200◦, 250◦]. We treat each rotation of a single digit family as an individual distribution, giving {P*_i(x)}, i = 1, ..., 70. Within each distributional interval, we sample the data by first sampling (uniformly with replacement) one of the 70 distributions and then sampling the data instances x from the selected distribution.
Figure 4(b) compares the consistency results of the two tested models throughout the learning process. Our model with the augmented objective clearly outperforms the model that uses the simple ELBO objective. This confirms the usefulness of the additional terms in our objective for preserving the previously learned knowledge in accordance with the lifelong learning paradigms. In addition, similarly as in experiment 5.1, figure 4(a) documents that the model with the augmented objective (thanks to reducing the effects of the catastrophic interference) achieves lower negative test ELBO systematically over the much longer course of learning (∼ 30 nats). We also visualise in figure 4(c) how the accumulation of knowledge speeds up the learning process. For each distributional interval we plot the norms of the model gradients across the learning iterations. We observe that for later distributional intervals the curves become steeper much quicker, reducing the gradients and reaching (lower) steady states much faster than in the early learning stages. This suggests that the later models are able to learn more quickly in our proposed architecture.
5.3 SVHN TO MNIST
In this experiment we explore the ability of our model to retain and transfer knowledge across completely different datasets. We use MNIST and SVHN Netzer et al. (2011) to demonstrate this. We treat all samples from SVHN as being generated by one distribution P*_1(x) and all the MNIST 3 samples as generated by another distribution P*_2(x) (irrespective of the specific digit).
We first train a student model (standard VAE) over the entire SVHN data set. Once done, we freeze the parameters of the encoder and the decoder and transfer the model into the teacher state (φ → Φ,θ → Θ). We then use this teacher to aid the learning of the new student over the mix of the teacher-generated synthetic SVHN samples x̂ and the true MNIST data.
We use the final student model to reconstruct samples from the two datasets by passing them through the learned encoding/decoding flow: x ∼ P ∗i (x) → z ∼ Qφ(z|x) → x̂ ∼ Pθ(x|z). We visualise examples of the true inputs x and the respective reconstructions x̂ in figure 5(a). We see that even though the only true data the final model received for training were from MNIST, it can still reconstruct SVHN data. This confirms the ability of our architecture to transition between complex distributions while still preserving the knowledge learned from the previously observed distributions.
Finally, in figure 5(b) and 5(c) we illustrate the data generated from an interpolation of a 2-dimensional continuous latent space. For this we specifically trained the models with the continuous latent variable zc ∈ R2. To generate the data, we fix the discrete categorical zd to one of the possible values {[0, 1], [1, 0]} and linearly interpolate the continuous zc over the range [−3, 3]. We then decode these to obtain the samples x̂ ∼ Pθ(x|zd, zc). The model learns a common
3In order to work over both of these datasets we convert MNIST to RGB and resize it to 32x32 to make it consistent with the dimensions of SVHN.
continuous structure for the two distributions which can be followed by observing the development in the generated samples from top left to bottom right on both figure 5(b) and 5(c).
6 CONCLUSION
In this work we propose a novel method for learning generative models over streaming data following the lifelong learning principles. The principal assumption for the data is that they are generated by multiple distributions and presented to the learner in a sequential manner (a set of observations from a single distribution followed by a distributional transition). A key limitation for the learning is that the method can only access data generated by the current distribution and has no access to any of the data generated by any of the previous distributions.
The proposed method is based on a dual student-teacher architecture where the teacher’s role is to preserve the past knowledge and aid the student in future learning. We argue for and augment the standard VAE’s ELBO objective by terms helping the teacher-student knowledge transfer. We demonstrate on a series of experiments the benefits this augmented objective brings in the lifelong learning settings by supporting the retention of previously learned knowledge (models) and limiting the usual effects of catastrophic interference.
In our future work we will explore the possibilities to extend our architecture to GAN-like Goodfellow et al. (2014) learning with the prospect to further improve the generative abilities of our method. GANs, however, do not use a metric for measuring the quality of the learned distributions such as the marginal likelihood or the ELBO in their objective and therefore the transfer of our architecture to these is not straightforward.
7 APPENDIX
7.0.1 UNDERSTANDING THE CONSISTENCY REGULARIZER
The analytical derivations of the consistency regularizer show that the regularizer can be interpreted as a transformation of the standard VAE regularizer. In the case of an isotropic gaussian posterior, the proposed regularizer scales the mean and variance of the student posterior by the variance of the teacher (corollary 7.0.2) and adds an extra 'volume' term. This interpretation of the consistency regularizer shows that the proposed regularizer preserves the same learning objective as that of the standard VAE. Below we present the analytical form of the consistency regularizer with categorical and isotropic gaussian posteriors:
Corollary 7.0.1 We parameterize the learnt posterior of the teacher by $\Phi_i = \frac{\exp(p^E_i)}{\sum_{j=1}^J \exp(p^E_j)}$ and the posterior of the student by $\phi_i = \frac{\exp(p^S_i)}{\sum_{j=1}^J \exp(p^S_j)}$. We also define the normalizing constants as $c^E = \sum_{i=1}^J \exp(p^E_i)$ and $c^S = \sum_{i=1}^J \exp(p^S_i)$ for the teacher and student models respectively. The reverse KL divergence in equation 8 can now be re-written as:

$$KL(Q_\phi(z_d|x)\,\|\,Q_\Phi(z_d|x)) = \sum_{i=1}^{J} \frac{\exp(p^S_i)}{c^S} \log\!\left( \frac{\exp(p^S_i)}{c^S} \cdot \frac{c^E}{\exp(p^E_i)} \right) = H(p^S, p^S - p^E) = -H(p^S) + H(p^S, p^E) \qquad (5)$$

where $H(\cdot)$ is the entropy operator and $H(\cdot,\cdot)$ is the cross-entropy operator.
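A quick numerical sanity check of this identity (a sketch; the logit values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p_S, p_E = rng.normal(size=5), rng.normal(size=5)   # arbitrary student / teacher logits
phi = np.exp(p_S) / np.exp(p_S).sum()               # student posterior
Phi = np.exp(p_E) / np.exp(p_E).sum()               # teacher posterior

kl = np.sum(phi * np.log(phi / Phi))                # KL(Q_phi || Q_Phi)
neg_ent = np.sum(phi * np.log(phi))                 # -H(p^S)
cross_ent = -np.sum(phi * np.log(Phi))              #  H(p^S, p^E)
assert np.isclose(kl, neg_ent + cross_ent)          # KL = -H(p^S) + H(p^S, p^E)
```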
Corollary 7.0.2 We assume the learnt posterior of the teacher is parameterized by a centered, isotropic gaussian with $\Phi = [\mu^E = 0, \Sigma^E = \sigma^{E2} I]$ and the posterior of our student by a non-centered isotropic gaussian with $\phi = [\mu^S, \Sigma^S = \sigma^{S2} I]$, then

$$KL(Q_\phi(z|x)\,\|\,Q_\Phi(z|x)) = 0.5 \left[ \mathrm{tr}(\Sigma^{E^{-1}} \Sigma^S) + (\mu^E - \mu^S)^T \Sigma^{E^{-1}} (\mu^E - \mu^S) - F + \log\!\left( \frac{|\Sigma^E|}{|\Sigma^S|} \right) \right]$$
$$= 0.5 \sum_{j=1}^{F} \left[ \frac{1}{\sigma^{E2}(j)} \left( \sigma^{S2}(j) + \mu^{S2}(j) \right) - 1 + \log \sigma^{E2}(j) - \log \sigma^{S2}(j) \right] = KL(Q_{\phi^*}(z|x)\,\|\,\mathcal{N}(0, I)) - \log|\Sigma^E| \qquad (6)$$

via a reparameterization of the student's parameters:

$$\phi^* = [\mu^{S*}, \sigma^{S*2}], \qquad \mu^{S*}(j) = \frac{\mu^S(j)}{\sigma^{E2}(j)}, \qquad \sigma^{S*2}(j) = \frac{\sigma^{S2}(j)}{\sigma^{E2}(j)} \qquad (7)$$
It is also interesting to note that our posterior regularizer becomes the standard KL against the prior when $\sigma^{E2} \to 1$:

$$\lim_{\sigma^{E2} \to 1} KL(Q_\phi(z|x)\,\|\,Q_\Phi(z|x)) = KL(Q_\phi(z|x)\,\|\,\mathcal{N}(0, I))$$
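For completeness, a small numerical check (a sketch with arbitrary values) of the closed-form diagonal-Gaussian KL used in the first equality of (6), compared against a Monte-Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
F_dim = 4
mu_S, sig_S = rng.normal(size=F_dim), np.exp(rng.normal(size=F_dim))  # student: N(mu_S, diag(sig_S^2))
sig_E = np.exp(rng.normal(size=F_dim))                                 # teacher: N(0, diag(sig_E^2))

# Closed form: 0.5 * sum_j [ (sig_S^2 + mu_S^2)/sig_E^2 - 1 + log sig_E^2 - log sig_S^2 ]
kl_closed = 0.5 * np.sum((sig_S**2 + mu_S**2) / sig_E**2 - 1.0
                         + np.log(sig_E**2) - np.log(sig_S**2))

# Monte-Carlo estimate of E_q[log q(z) - log p(z)]
z = mu_S + sig_S * rng.normal(size=(200000, F_dim))
log_q = -0.5 * ((((z - mu_S) / sig_S) ** 2) + np.log(2 * np.pi * sig_S**2)).sum(-1)
log_p = -0.5 * (((z / sig_E) ** 2) + np.log(2 * np.pi * sig_E**2)).sum(-1)
kl_mc = (log_q - log_p).mean()
print(kl_closed, kl_mc)   # the two estimates should agree to roughly two decimal places
```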
7.1 ELBO DERIVATION
Variational inference Hoffman et al. (2013) side-steps the intractability of the posterior distribution by approximating it with a tractable distribution QΦ(z|x); we then optimize the parameters Φ in order to bring this distribution close to Pθ(z|x). The form of this approximate distribution is fixed and is generally conjugate to the prior P(z). Variational inference converts the problem of posterior inference into an optimization problem over Φ. This allows us to utilize stochastic gradient descent to solve our problem. To be more concrete, variational inference tries to minimize the reverse Kullback-Leibler (KL) divergence between the variational posterior distribution QΦ(z|x) and the true posterior Pθ(z|x):
$$KL[Q_\Phi(z|x)\,\|\,P_\theta(z|x)] = \log P_\theta(x) - \underbrace{\mathbb{E}_{Q_\Phi(z|x)}\!\left[ \log \frac{P_\theta(x, z)}{Q_\Phi(z|x)} \right]}_{\mathcal{L}_\theta} \qquad (8)$$
Rearranging the terms in equation 8 and utilizing the fact that the KL divergence is non-negative, we can derive the evidence lower bound Lθ (ELBO), which is the objective function we directly optimize:
log Pθ(x) ≥ EQΦ(z|x)[log Pθ(x|z)]−KL(QΦ(z|x) || P (z)) = Lθ (9)
In order to backpropagate it is necessary to remove the stochastic sampling of z from the gradient path. To achieve this, we push the sampling operation outside of the computational graph for the normal distribution via the reparameterization trick Kingma & Welling (2014) and the gumbel-softmax reparameterization Maddison et al. (2016); Jang et al. (2017) for the discrete distribution. In essence the reparameterization trick allows us to introduce a noise distribution P(ε) that is not a function of the data or computational graph in order to move the gradient operator into the expectation:
$$\nabla\, \mathbb{E}_{Q_\Phi(z|x)}\!\left[ \log \frac{P_\theta(x, z)}{Q_\Phi(z|x)} \right] \;\mapsto\; \mathbb{E}_{P(\epsilon)}\!\left[ \nabla \log \frac{P_\theta(x, z)}{Q_\Phi(z|x)} \right] \qquad (10)$$
7.2 MODEL RELATED
In this section we provide extra details of our model architecture.
7.2.1 MODEL ARCHITECTURE
We utilized two different architectures for our experiments. The first two utilize a standard deep neural network with two layers of 512 to map to the latent representation and two layers of 512 to map back to the reconstruction for the decoder. We used batch norm Ioffe & Szegedy (2015) and ELU activations for all the layers barring the layer projecting into the latent representation and the output layer.
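As an illustration, the following is a sketch of this fully-connected layout (the input and latent dimensionalities and the split of the projection layer are our assumptions; the authors' repository remains the authoritative reference):

```python
import torch.nn as nn

def mlp_block(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.ELU())

x_dim, h_dim, J, F_dim = 784, 512, 10, 32        # J: discrete classes, F_dim: continuous dims

encoder = nn.Sequential(                          # two 512-unit layers, then the latent projection
    mlp_block(x_dim, h_dim),
    mlp_block(h_dim, h_dim),
    nn.Linear(h_dim, J + 2 * F_dim),              # [categorical logits | mu | log_var], no BN/ELU
)
decoder = nn.Sequential(                          # two 512-unit layers, then the reconstruction
    mlp_block(J + F_dim, h_dim),
    mlp_block(h_dim, h_dim),
    nn.Linear(h_dim, x_dim),                      # output layer, no BN/ELU
)
```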
The final experiment with the transfer from SVHN to MNIST utilizes a fully convolutional architecture with only strided convolutional layers in the encoder (where the number of filters are doubled at each layer). The final projection layer for the encoder maps the data to a [C=|zd|, 1, 1] output which is then reparameterized in the standard way. The decoder utilizes fractional strides for the convolutional-transpose (de-convolution) layers where we reduce the number of filters in half at each layer. The full architecture can be examined in our code repository [which will be de-anonymized after the review process]. All layers used batch norm Ioffe & Szegedy (2015) and ELU activations.
We utilized Adam Kingma & Ba (2015) to optimize all of our problems with a learning rate of 1e-4. When we utilized weight transfer we re-initialized the accumulated momentum vector of Adam as well as the aggregated mean and covariance of the Batch Norm layers. Our code is already available online under an MIT license at 4
7.2.2 GUMBEL REPARAMETERIZATION
Since we model our latent variable as a combination of a discrete and a continuous distribution we also use the Gumbel-Softmax reparameterization Maddison et al. (2016); Jang et al. (2017). The Gumbel-Softmax reparameterization over logits [linear output of the last layer in the encoder] p ∈ RM and an annealed temperature parameter τ ∈ R is defined as:
$$z = \mathrm{softmax}\!\left( \frac{\log(p) + g}{\tau} \right); \qquad g = -\log(-\log(u)), \quad u \sim \mathrm{Unif}(0, 1) \qquad (11)$$
where u ∈ R^M, g ∈ R^M. As the temperature parameter τ → 0, z converges to a one-hot sample from the categorical distribution.
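A direct sketch of equation (11) follows (here the encoder's linear output is treated as unnormalized log-probabilities, so log(p) in (11) corresponds to using the logits directly; in practice torch.nn.functional.gumbel_softmax provides an equivalent built-in):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau):
    """z = softmax((log p + g) / tau), with g = -log(-log(u)), u ~ Unif(0, 1)."""
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)   # Gumbel(0, 1) noise
    return F.softmax((logits + g) / tau, dim=-1)
```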
4 https://github.com/<anonymized>
7.2.3 EXPANDABLE MODEL CAPACITY AND REPRESENTATIONS
Multilayer neural networks with sigmoidal activations have a VC dimension bounded between O(ρ²) Sontag (1998) and O(ρ⁴) Karpinski & Macintyre (1997), where ρ is the number of parameters. A model that is able to consistently add new information should also be able to expand its VC dimension by adding new parameters over time. Our formulation imposes no restrictions on the model architecture: i.e. new layers can be added freely to the new student model.
In addition we also allow the dimensionality of zd ∈ RJ , our discrete latent representation to grow in order to accommodate new distributions. This is possible because the KL divergence between two categorical distributions of different sizes can be evaluated by simply zero padding the teacher’s smaller discrete distribution. Since we also transfer weights between the teacher and the student model, we need to handle the case of expanding latent representations appropriately. In the event that we add a new distribution we copy all the weights besides the ones immediately surrounding the projection into and out of the latent distribution. These surrounding weights are reinitialized to their standard Glorot initializations Glorot & Bengio (2010).
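A sketch of the zero-padding step for the consistency KL when the student's categorical dimension exceeds the teacher's (names are ours):

```python
import torch
import torch.nn.functional as F

def padded_consistency_kl(student_logits, teacher_logits, eps=1e-8):
    """KL[q_student || q_teacher] when the student's categorical is larger than the teacher's.

    The teacher's (smaller) distribution is zero-padded to the student's size; the small eps
    keeps the padded zero-mass entries from producing infinities in the divergence.
    """
    j_new, j_old = student_logits.size(-1), teacher_logits.size(-1)
    q = F.softmax(student_logits, dim=-1)
    p = F.softmax(teacher_logits, dim=-1)
    p = F.pad(p, (0, j_new - j_old), value=0.0)      # zero-pad the teacher's distribution
    return (q * (torch.log(q + eps) - torch.log(p + eps))).sum(-1)
```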
7.3 FORWARD VS. REVERSE KL
In our setting we have the ability to utilize the zero forcing (reverse or mode-seeking) KL or the zero avoiding (forward) KL divergence. In general, if the true underlying posterior is multi-modal, it is preferable to operate with the reverse KL divergence (Murphy (2012) 21.2.2). In addition, utilizing the mode-seeking KL divergence generates more realistic results when operating over image data.
In order to validate this, we repeat the experiment in 5.1. We train two models: one with the forward KL posterior regularizer and one with the reverse. We evaluate the mean and variance of the negative ELBO over ten trials. Empirically, we observed no difference between the two variants. This is demonstrated in figure 6.
7.4 NUMBER OF REQUIRED SAMPLES
Our method derives its sample complexity from standard VAEs. In practice we evaluate the number of required real and synthetic samples by utilizing early stopping. When the negative ELBO on the validation set stops decreasing for 50 steps we stop training the current model and transition to the next distribution interval. Using this and the fact that we keep equal proportions of all observed distributions in our minibatch, we can evaluate the number of synthetic and real samples used during the single distribution interval. We demonstrate this procedure on experiment 5.1 in figure 7.
We observe a rapid decrease of the number of required real samples as we assimilate more distributions into our model.
7.5 EXPERIMENTS RELATED
In this section we provide an extra experiment run on MNIST as well as some extra images from the rotated MNIST experiment.
7.5.1 MNIST : GENERATION AND ELBO
In this experiment, we seek to establish the performance benefit that the consistency regularizer brings into the learning process. We do so by evaluating the ELBO for a model with and without the consistency and mutual information regularizers. We also demonstrate the ability of the regularizers to disambiguate distributional boundaries from the intra-distributional variations; i.e. for MNIST this separates the MNIST digits from their within-class variants (i.e. drawing style).
We use MNIST to simulate our sequential learning setting. We treat each digit as a different distribution and present the model with samples drawn from a single distribution at a time. For the purpose of this experiment we sequentially progress over the ten distributions (i.e. interval sampling involves linearly iterating over all the distributions ).
When an interval transition occurs we signal the model, make the student the new teacher and instantiate a new student model. We contrast this to a model that utilizes the same graphical model, without our consistency and mutual information regularizers. We quantify the performance of the generative models by computing the ELBO over the standard MNIST test set at every interval. The test set contains digits from all of the individual distributions. We run this procedure ten times and report the average ELBO over the test set.
After observing all ten distributions we evaluate samples generated from the final student model. We do this by fixing the discrete distribution z_d, while randomly sampling z_c ∼ N(0, I). We contrast samples generated from the model with both regularizers (left-most image in figure 8) to the model without the regularizers (center image in figure 8). Our model learns to separate 'style' from distributional boundaries. This is demonstrated by observing the digit '2': i.e. different samples of z_c produce different styles of writing a '2'.
7.5.2 ROTATED MNIST EXPERIMENT
We provide a larger sized image for the ELBO from experiment 5.2. We also visualize reconstructions from the rotated MNIST problem (figure 10). Finally in figure 11 we show the effects on the reconstructions when we do not use the mutual information regularizer. We believe this is due to the fact that the network utilizes the larger continuous representation to model the discriminative aspects of the observed distribution.
1. How does the reviewer assess the motivation and experiments presented in the paper?
2. What are the reviewer's concerns regarding the technical section of the paper, particularly in terms of clarity and mathematical formulations?
3. How does the reviewer interpret the limitation of synthetic samples being representative of all previously observed distributions?
4. Does the reviewer have any questions or concerns about the sampling method used in the standard form of VAEs?
5. What are the reviewer's thoughts on the restriction of the posterior representation of the student model to be close to that of the teacher?
6. How does the reviewer think the approach would handle situations where the previous distributions are not close to the new one?
7. Are there any minor issues with the paper that the reviewer noticed, such as typos or inconsistencies in notation?
Review
- Second paragraph in Section 1: Nice motivation. I am not sure though whether the performed experiments are the most expressive for such motivation. For instance, is the experiment in Section 5.1 a common task in that sequential lifelong learning setting?
- Section 4, which is the main technical section of the paper, is quite full of lengthy descriptions that are a bit equivocal. I reckon each claim really needs to be supported by a corresponding unequivocal mathematical formulation.
- An example of the last point can be found in Section 4.2: "The synthetic samples need to be representative of all the previously observed distributions ...": It will be much clearer how such samples are representative if a formulation follows, and that did not happen in Section 4.2.
- "1) Sampling the prior can select a point in the latent space that is in between two separate distributions ...": I am not sure I got this drawback of using the standard form of VAEs. Could you please further elaborate on this?
- "we restrict the posterior representation of the student model to **be close to that of the teacher** for the previous distributions** accumulated by the teacher. This allows the model parameters to **vary as necessary** in order to best fit the data": What if the previous distributions are not that close to the new one?
- Distribution intervals: Will it be the case in reality that these intervals will be given? Otherwise, what are the solutions to that? Can they be estimated somehow (as a future work)?
Minor:
- "we observe a sample X of K": sample X of size K, I guess?
- "... form nor an efficient estimator Kingma (2017)": citation style.
- "we illustrates ..." |
Lifelong Generative Modeling
Abstract
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning. It is essential to the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model. We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain either the past data or the past models. Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now. The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data. We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario.
1 INTRODUCTION
Deep unsupervised generative learning allows us to take advantage of the massive amount of unlabeled data available in order to build models that efficiently compress and learn an approximation of the true data distribution. It has numerous applications such as image denoising, inpainting, super-resolution, structured prediction, clustering, pre-training and many more. However, something that is lacking in the modern ML toolbox is an efficient way to learn these deep generative models in a sequential, lifelong setting.
In a lot of real world scenarios we observe distributions sequentially. Examples of this include streaming data from sensors such as cameras and microphones or other similar time series data. A system can also be resource limited wherein all of the past data or learnt models cannot be stored. We are interested in the lifelong learning setting for generative models where data arrives sequentially in a stream and where the storage of all data is infeasible. Within the stream, instances are generated according to some non-observed distribution which changes at given time-points. We assume we know the time points at which the transitions occur and whether the latent distribution is a completely new one or one that has been observed before. We do not however know the underlying identity of the individual distributions. Our goal is to learn a generative model that can summarize all the distributions seen so far in the stream. We give an example of such a setting in figure 1(a) using MNIST LeCun & Cortes (2010), where we have three unique distributions and one that is repeated.
Since we only observe one distribution at a time we need to develop a strategy of retaining the previously learnt knowledge (i.e. the previously learnt distributions) and integrate it into future learning. To accumulate additional distributions in the current generative model we utilize a student-teacher architecture similar to that in distillation methods Hinton et al. (2015); Furlanello et al. (2016). The teacher contains a summary of all past distributions and is used to augment the data used to train the student model. The student model thus receives data samples from the currently observable distribution as well as synthetic data samples from previous distributions. This allows the student model to learn a distribution that summarizes the current as well as all previously observed distributions. Once a new distribution shift occurs the existing teacher model is discarded, the student becomes the teacher and a new student is instantiated.
We further leverage the generative model of the teacher by introducing a regularizer in the learning objective function of the student that brings the posterior distribution of the latter close to that of the
former. This allows us to build upon and extend the teacher’s generative model in the student each time the latter is re-instantiated (rather than re-learning it from scratch). By coupling this regularizer with a weight transfer from the teacher to the student we also allow for faster convergence of the student model. We empirically show that the regularizer allows us to learn a much larger set of distributions without catastrophic interference McCloskey & Cohen (1989).
We build our lifelong generative models over Variational Autoencoders (VAEs) Kingma & Welling (2014). VAEs learn the posterior distribution of a latent variable model using an encoder network; they generate data by sampling from a prior and decoding the sample through a conditional distribution learnt by a decoder network.
Using a vanilla VAE as a teacher to generate synthetic data for the student is problematic due to a couple of limitations of the VAE generative process. 1) Sampling the prior can select a point in the latent space that is in between two separate distributions, causing generation of unrealistic synthetic data and eventually leading to loss of previously learnt distributions. 2) Additionally, data points mapped to the posterior that are further away from the prior mean will be sampled less frequently resulting in an unbalanced sampling of the constituent distributions. Both limitations can be understood by visually inspecting the learnt posterior distribution of a standard VAE evaluated on test images from MNIST as shown in figure 1(b). To address the VAE’s sampling limitations we decompose the latent variable vector into a continuous and a discrete component. The discrete component is used to summarize the discriminative information of the individual generative distributions while the continuous caters for the remaining sample variability. By independently sampling the discrete and continuous components we preserve the distributional boundaries and circumvent the two problems above.
This sampling strategy, combined with the proposed regularizer allows us to learn and remember all the individual distributions observed in the past. In addition we are also able to generate samples from any of the past distributions at will; we call this property consistent sampling.
2 RELATED WORK
Past work in sequential learning of generative models has focused on learning Gaussian mixture models Singer & Warmuth (1999); Declercq & Piater (2008) or on variational methods such as Variational EM Ghahramani & Attias (2000). Work that is closer to ours is the online or sequential learning of generative models in a streaming setting. Variational methods have been adapted for a streaming setting, e.g: Streaming Variational Bayes Broderick et al. (2013), Streaming Variational Mixture models Tank et al. (2015), and the Population Posterior McInerney et al. (2015). However their learning objectives are very different from ours. The objective of these methods is to adjust the learnt model such that it reflects the current data distribution as accurately as possible, while forgetting the previously observed distributions. Instead we want to do lifelong learning and retain all previously observed distributions within our learnt model. As far as we know our work is the first one that tries to bring generative models, and in particular VAEs, into a lifelong setting where distributions are seen, learnt, and remembered sequentially.
VAEs rely on an encoder and a decoder neural network in order to learn the parameters of the posterior and likelihood. One of the central problems that arise when training a neural network in an sequential manner is that it causes the model to run into the problem of catastrophic interference McCloskey & Cohen (1989). Catastrophic interference appears when we train neural networks in a sequential manner and model parameters start to become biased to the most recent samples observed, while forgetting what was learnt from older samples. This generally happens when we stop exposing the model to past data. There have been a number of attempts to solve the problem of catastrophic interference in neural networks. These range from distillation methods such as the original method Hinton et al. (2015) and ALTM Furlanello et al. (2016), to utilizing privileged information Lopez-Paz et al. (2016), as well as transfer learning approaches such as Learning Without Forgetting Li & Hoiem (2016) and methods that relay information from previously learnt hidden layers such as in Progressive Neural Networks Rusu et al. (2016) and Deep Block-Modular Neural Networks Terekhov et al. (2015). All of these methods necessitate the storage of previous models or data; our method does not.
The recent work of elastic weight consolidation (EWC) Kirkpatrick et al. (2017) utilizes the Fisher Information matrix (FIM) to avoid the problem of catastrophic interference. The FIM captures the sensitivity of the log-likelihood with respect to the model parameters; EWC leverages this (via a linear approximation of the FIM) to control the change of model parameter values between varying distributions. Intuitively, important parameters should not have their values changed, while non-important parameters are left unconstrained. Since EWC assumes model parameters being distributed under an exponential family, it allows for the utilization of the FIM as a quadratic approximationJeffreys (1946) to the Kullback-Leibler (KL) divergence. Our model makes no such distributional assumptions about the model parameters. Instead of constraining the parameters of the model as in EWC, we restrict the posterior representation of the student model to be close to that of the teacher for the previous distributions accumulated by the teacher. This allows the model parameters to vary as necessary in order to best fit the data.
3 BACKGROUND
We consider an unsupervised setting where we observe a sample X of K ≥ 1 realizations X = {x(0),x(1), ...,x(K)} from an unknown true distribution P ∗(x) with x ∈ RN . We assume that the data is generated by a random process involving a non-observed random variable z ∈ RM . In order to incorporate our prior knowledge we posit a prior P (z) over z. Our objective is to approximate the true underlying data distribution by a model Pθ(x) such that Pθ(x) ≈ P ∗(x). Given a latent variable model Pθ(x|z)P (z) we obtain the marginal likelihood Pθ(x) by integrating out the latent variable z from the joint distribution. The joint distribution can in turn be factorized using the conditional distribution Pθ(x|z) or the posterior Pθ(z|x).
Pθ(x) = ∫ Pθ(x, z)δz = ∫ Pθ(z|x)Pθ(x)δz = ∫ Pθ(x|z)P (z)δz (1)
We model the conditional distribution Pθ(x|z) by a decoder, typically a neural network. Very often the marginal likelihood Pθ(x) will be intractable because the integral in equation (1) does not have an analytical form nor an efficient estimator (Kingma (2017)). As a result the respective posterior distribution, Pθ(z|x), is also intractable. Variational inference side-steps the intractability of the posterior by approximating it with a tractable distribution Qφ(z|x) ≈ Pθ(z|x). VAEs use an encoder (generally a neural network) to model the approximate posterior Qφ(z|x) and optimize the parameters φ to minimize the reverse KL divergence KL[Qφ(z|x)||Pθ(z|x)] between the approximate posterior distribution Qφ(z|x) and the true posterior Pθ(z|x). Given that Qφ(z|x) is a powerful model (such that the KL divergence against the true posterior will be close to zero) we maximize the tractable Evidence Lower BOund (ELBO) to the intractable marginal likelihood. Lθ(x) ≤ Pθ(x) (full derivation available in the appendix)
ELBO: Lθ(x) = EQφ(z|x)[logPθ(x|z)]−KL[Qφ(z|x) || P (z)] (2)
By sharing the variational parameters φ of the encoder across the data points (amortized inference Gershman & Goodman (2014)), variational autoencoders avoid per-data optimization loops typically needed by mean-field approaches.
3.1 SEQUENTIAL GENERATIVE MODELING
The standard setting in maximum-likelihood generative modeling is to estimate the set of parameters θ that will maximize the marginal likelihood Pθ(x) for data sample X generated IID from a single true data distribution P ∗(x). In our work we assume the data are generated from multiple distributions P ∗i (x) such that P ∗(x) = ∑ i π ∗ i P ∗ i (x). In classical batch generative modelling, the individual data points are not associated with the specific generative distributions P ∗i (x). Instead, the whole sample X is considered to be generated from the mixture distribution P ∗(x). Latent variable models Pθ(x, z) = Pθ(x|z)P (z) (such as VAEs) capture the complex structures in P ∗(x) by conditioning the observed variables x on the latent variables z and combining these in (possibly infinite) mixtures Pθ(x) = ∫ Pθ(x|z)P (z)δz.
Our sequential setting is vastly different from the batch approach described above. We receive a stream of (possibly infinite) data X = {X1,X2, . . .} where the data samples Xi = {x(1)i ,x (2) i , . . . ,x (Ki) i } originate from the components P ∗i (x) of the generative distribution. At any given time we observe the latest sample Xi generated from a single component P ∗i (x) without access to any of the previous samples generated by the other components of P ∗(x). Our goal is to sequentially build an approximation Pθ(x) of the true mixture P ∗(x) by only observing data from a single component P ∗i (x) at a time.
4 MODEL
To enable lifelong generative learning we propose a dual model architecture based on a student-teacher model. The teacher and the student have rather different roles throughout the learning process: the teacher’s role is to preserve the memory of the previously learned tasks and to pass this knowledge onto the student; the student’s role is to learn the distributions over the new incoming data while accommodating for the knowledge obtained from the teacher. The dual model architecture is summarized in figure 2.
The top part represents the teacher model. At any given time the teacher contains a summary of all previous distributions within the learned parameters of the encoder QΦ(z|x) and the decoder PΘ(x|z). The teacher is used to generate synthetic samples x̂ from these past distributions by decoding samples from the prior ẑ ∼ P (z) through the decoder x̂ ∼ PΘ(x|ẑ). The generated synthetic samples x̂ are passed onto the student model as a form of knowledge transfer about the past distributions.
The bottom part of figure 2 represents the student, which is responsible for updating the parameters of the encoder Qφ(z|x) and decoder Pθ(x|z) models over the newly observed data. The student is exposed to a mixture of learning instances x sampled from x ∼ P (ω)P (x|ω), ω ∼ Ber(π); it sees synthetic instances generated by the teacher P (x|ω = 0) = PΘ(x|z), and real ones sampled from the currently active training distribution P (x|ω = 1) = P ∗(x). The mean π of the Bernouli distribution controls the sampling proportion of the previously learnt distributions to the current one.
If we have seen k distinct distributions prior to the currently active one then π = kk+1 . In this way we ensure that all the past distributions and the current one are equally represented in the training set used by the student model.
Once a new distribution is signalled, the old teacher is dropped, the student model is frozen and becomes the new teacher (φ→ Φ,θ → Θ), and a new student is initiated with the latest weights φ and θ from the previous student (the new teacher).
4.1 TEACHER-STUDENT CONSISTENCY
Each new student instantiation uses the input data mix to learn a new approximate posterior Qφ(z|x). In addition to being initiated by the new teacher’s weights and receiving information about the teacher’s knowledge via the synthetic samples x̂, we further foster the lifelong learning idea by bringing the latent variable posterior induced by the student model closer to the respective posterior induced by the teacher model. We enforce the latter constraint only over the synthetic samples, ensuring that the previously learnt latent variable posteriors are preserved over the different models. In doing so, we alleviate the effect of catastrophic interference.
To achieve this, we complement the classical VAE objective (equation (2)) with a term minimizing the KL divergence KL[Qφ(z|x̂)||QΦ(z|x̂)] between the student’s and the teacher’s posteriors over the synthetic data x̂. The teacher’s encoder model, which already has the accumulated knowledge from the previous learning steps, is thus reused within the new student’s objective. Under certain mild assumptions, we show that this objective reparameterizes the student model’s posterior, while preserving the same learning objective as a standard VAE (appendix section 7.0.1).
4.2 LATENT VARIABLE
A critical component of our model is the synthetic data generation by the teacher’s decoder x̂ ∼ PΘ(x|z). The synthetic samples need to be representative of all the previously observed distributions in order to provide the student with ample information about the learning history. The teacher generates these synthetic samples by first sampling the latent variable from the prior ẑ ∼ P (z) followed by the decoding step x̂ ∼ PΘ(x|ẑ). As we will describe shortly, the latent variable ẑ has a categorical component which corresponds to all the past distributions. This categorical component allows us to uniformly sample synthetic instances from all past distributions.
A simple unimodal prior distribution P (z), such as the isotropic Gaussian typically used in classical VAEs, results in an undersampling of the data points that are mapped to a posterior mean that is further away from the prior mean. Visualizing the 2d latent posterior of MNIST in figure 1(b) allows us to get a better intuition of this problem. If for example the prior mean corresponds to a point in latent space between two disparate distributions, the sample generated will not correspond to a sample from the real distribution. Since we use synthetic samples from the teacher in the student model, this aliased sample corresponding to the prior mean, will be reused over and over again, causing corruption in the learning process. In addition, we would under represent the respective true distributions in the learning input mix of the student and eventually lead to distribution loss.
We circumvent this in our model by decomposing the latent variable z into a discrete component zd ∈ RJ and a continuous component zc ∈ RF , z = [zd, zc]. The discrete component zd shall summarise the most discriminative information about each of the true generating distributions P ∗i (x). We use the uniform multivariate categorical prior zd ∼ Cat( 1J ) to represent it and the same parametric family for the approximate posterior QΦ(z|x). The continuous zc component is the global representation of the distributional variability and we use the multivariate standard normal as the prior zc ∼ N(0, I) and the isotropic multivariate normal N(µ, σ2I) for the approximate posterior.
When generating synthetic data, the teacher now independently samples from the discrete and continuous priors ẑd ∼ P (zd), ẑc ∼ P (zc) and uses the composition of these to condition the decoding step x̂ ∼ PΘ(x|ẑd, ẑc). Since the discrete representation ẑd is associated with the true generative distribution components P ∗i (x), uniformly sampling the discrete prior ensures that that the distributions are well represented in the synthetic mix that the student observes.
In general, the capacity of a categorical distribution is less than that of a continuous normal distribution. To prevent the VAE’s encoder from using primarily the continuous representation while disregarding the discrete one we further complement the learning objective by a term maximising the mutual information between the discrete representation and the data I(zd; x) = H(zd)−H(zd|x). H(zd) is used to denote the marginal entropy of zd and H(zd|x) denotes the conditional entropy of zd given x.1
4.3 LEARNING OBJECTIVE
The final learning objective for each of the student models is the maximization of the ELBO from equation (2), augmented by the negative of the cross-model consistency term introduced in section 4.1 and the mutual information term proposed in section 4.2.
EQφ [log Pθ(x|z)]−KL[Qφ(z|x)||P (z)]︸ ︷︷ ︸ VAE ELBO −1(ω = 0)KL[Qφ(zd|x)||QΦ(zd|x)]︸ ︷︷ ︸ Consistency Regularizer +λI(zd; x)︸ ︷︷ ︸ Mutual Info ,
(3)
We sample the training instances x from x ∼ P (ω)P (x|ω),ω ∼ Ber(π) as described in section 4. Thus they can either be generated from the teacher model (ω = 0) or come from the training set of the currently active distribution (ω = 1). 1(.) is the indicator function which evaluates to 1 if its argument is true and zero otherwise; it makes sure that the consistency regularizer is applied only over the synthetic samples generated by the teacher. The λ hyper-parameter controls the importance of the mutual information regularizer. We present the analytical evaluation of the consistency regularizer in appendix section 7.0.1.
5 EXPERIMENTS
We conducted a set of experiments to explore the behaviour and properties of the method we propose. We specifically concentrate on the benefits our model brings in the lifelong learning setting which is the main motivation of our work. We explain the settings of the individual experiments and their focus in the following three sections.
In all the experiments we use the notion of a distributional ‘interval’: the interval in which we observe samples from a single distribution P ∗i (x) before the transition to the next distribution P ∗i+1(x) occurs. The length of the intervals is in principle random and we developed a heuristic to generate these. We provide further details on this together with other technical details related to the network implementation and training common for all the experiments in the appendix.
5.1 FASHION MNIST : SEQUENTIAL GENERATION
In this experiment, we seek to establish the performance benefit that our augmented objective formulation in section 4.3 brings into the learning in contrast to the simple ELBO objective 2. We do so by training two models with identical student-teacher architectures as introduced in section 4, with one using the consistency and mutual information augmented objective (with consistency) and the other using the standard ELBO objective (without consistency). We also demonstrate the ability of our model to disambiguate distributional boundaries from the distributional variations.
We use Fashion MNIST Xiao et al. (2017) 2 to simulate our sequential learning setting. We treat each object as a different distribution and present the model with samples drawn from a single distribution at a time. We sequentially progress over the ten available distributions. When a distribution transition occurs (new object) we signal the model, make the latest student the new teacher and instantiate a new student model.
We quantify the performance of the generative models by computing the ELBO over the standard Fashion MNIST test set after every distributional transition. The test set contains objects from all of the individual distributions. We run this procedure ten times and report the average test ELBO over the ten repetitions in figure 3(c). We see that around the 3rd interval (the 3rd distributional
1A similar idea is leveraged in InfoGAN Chen et al. (2016). 2We do a similar experiment over MNIST in the appendix
transition), the negative ELBO of the with consistency model is systematically below (∼ 20 nats ) that of the without consistency model. This confirms the benefits of our new objective formulation for reducing the effects of the catastrophic interference, a crucial property in our lifelong learning setting. In the same figure we also plot the ELBO of the baseline batch VAE. The batch VAE will always outperform our model because it has simultaneous access to all of the distributions during training.
After observing and training over all ten distributions we generate samples from the final students of the two models. We do this by fixing the discrete distribution zd to one-hot vectors over the whole categorical distribution, while randomly sampling the continuous prior zc ∼ N (0, I). We contrast samples generated from the model with consistency (figure 3(a)) to the model without consistency (figure 3(b)). Our model learns to separate ’style’ from the distributional boundaries. For example, in the last row of our with consistency model, we observe the various styles of shoes. The without consistency model mixes the distributions randomly. This illustrates the benefits that our augmented objective has for achieving consistent sampling from the individual distributional components.
5.2 ROTATED MNIST : LONG TERM DISTRIBUTION ACCUMULATION
In this experiment we dig deeper into the benefits our objective formulation brings for the lifelong learning setting. We expose the models to a much larger number of distributions and we explore how our augmented objective from 4.3 helps in preserving the previously learned knowledge. As in section 5.1, we compare models with and without consistency with identical teacher-student architectures. We measure the ability of the models to recall the previously learned information by looking at the consistency between the posterior of the student and the teacher models over the test data set
consistency = #{k : Q_Φ(z_d | x_k) = Q_φ(z_d | x_k), x_k ∈ X_test}   (4)
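As a concrete illustration, the consistency count of equation 4 could be computed as in the sketch below; we assume here that agreement between the categorical posteriors is measured by matching argmax components, and the function name and tensor interfaces are illustrative:

```python
import torch

def posterior_consistency(student_logits, teacher_logits):
    """Count of test points on which the student and teacher categorical
    posteriors over z_d agree (equation 4); agreement is taken here as
    matching argmax components of the two posteriors."""
    student_choice = student_logits.argmax(dim=1)
    teacher_choice = teacher_logits.argmax(dim=1)
    agree = (student_choice == teacher_choice)
    return agree.sum().item(), agree.float().mean().item()  # count and fraction
```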
We use the MNIST dataset in which we rotate each of the original digit samples by angles ν = [30◦, 70◦, 130◦, 200◦, 250◦]. We treat each rotation of a single digit family as an individual distribution {P*_i(x)}, i = 1, . . . , 70. Within each distributional interval, we sample the data by first sampling (uniformly with replacement) one of the 70 distributions and then sampling the data instances x from the selected distribution.
Figure 4(b) compares the consistency results of the two tested models throughout the learning process. Our model with the augmented objective clearly outperforms the model that uses the simple ELBO objective. This confirms the usefulness of the additional terms in our objective for preserving the previously learned knowledge, in accordance with the lifelong learning paradigm. In addition, similarly as in experiment 5.1, figure 4(a) documents that the model with the augmented objective (thanks to reducing the effects of the catastrophic interference) achieves lower negative test ELBO systematically over the much longer course of learning (∼ 30 nats). We also visualise in figure 4(c) how the accumulation of knowledge speeds up the learning process. For each distributional interval we plot the norms of the model gradients across the learning iterations. We observe that for later distributional intervals the curves become steeper much quicker, reducing the gradients and reaching (lower) steady states much faster than in the early learning stages. This suggests that later models are able to learn more quickly in our proposed architecture.
5.3 SVHN TO MNIST
In this experiment we explore the ability of our model to retain and transfer knowledge across completely different datasets. We use MNIST and SVHN Netzer et al. (2011) to demonstrate this. We treat all samples from SVHN as being generated by one distribution P*_1(x) and all the MNIST 3 samples as generated by another distribution P*_2(x) (irrespective of the specific digit).
We first train a student model (standard VAE) over the entire SVHN data set. Once done, we freeze the parameters of the encoder and the decoder and transfer the model into the teacher state (φ → Φ,θ → Θ). We then use this teacher to aid the learning of the new student over the mix of the teacher-generated synthetic SVHN samples x̂ and the true MNIST data.
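A minimal sketch of how such a mixed minibatch could be assembled is given below; the prior-sampling scheme, the 50/50 split and the decoder interface are illustrative assumptions rather than the exact recipe used in our implementation:

```python
import torch

def mixed_minibatch(teacher_decoder, real_batch, num_components, z_c_dim,
                    synth_fraction=0.5):
    """Builds a student training minibatch from a mix of real MNIST samples and
    synthetic SVHN-like samples x_hat generated by the frozen teacher. The
    teacher is sampled through its prior: a random one-hot z_d and z_c ~ N(0, I).
    Decoded samples are assumed to match the real data's shape (RGB, 32x32)."""
    n_synth = int(synth_fraction * real_batch.size(0))
    z_d = torch.nn.functional.one_hot(
        torch.randint(num_components, (n_synth,)), num_components).float()
    z_c = torch.randn(n_synth, z_c_dim)
    with torch.no_grad():
        x_synth = teacher_decoder(torch.cat([z_d, z_c], dim=1))
    return torch.cat([real_batch[: real_batch.size(0) - n_synth], x_synth], dim=0)
```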
We use the final student model to reconstruct samples from the two datasets by passing them through the learned encoding/decoding flow: x ∼ P*_i(x) → z ∼ Q_φ(z|x) → x̂ ∼ P_θ(x|z). We visualise examples of the true inputs x and the respective reconstructions x̂ in figure 5(a). We see that even though the only true data the final model received for training were from MNIST, it can still reconstruct SVHN data. This confirms the ability of our architecture to transition between complex distributions while still preserving the knowledge learned from the previously observed distributions.
Finally, in figure 5(b) and 5(c) we illustrate the data generated from an interpolation of a 2-dimensional continuous latent space. For this we specifically trained the models with the continuous latent variable z_c ∈ R^2. To generate the data, we fix the discrete categorical z_d to one of the possible values {[0, 1], [1, 0]} and linearly interpolate the continuous z_c over the range [−3, 3]. We then decode these to obtain the samples x̂ ∼ P_θ(x|z_d, z_c). The model learns a common continuous structure for the two distributions, which can be followed by observing the development in the generated samples from top left to bottom right on both figure 5(b) and 5(c).
3 In order to work over both of these datasets we convert MNIST to RGB and resize it to 32x32 to make it consistent with the dimensions of SVHN.
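The interpolation itself is straightforward; the following sketch shows one way it could be implemented, assuming a decoder that takes the concatenated latent [z_d, z_c]:

```python
import torch

def interpolate_grid(decoder, z_d_onehot, grid_size=10, lo=-3.0, hi=3.0):
    """Decodes a grid of samples obtained by fixing the categorical z_d and
    linearly interpolating the 2-dimensional continuous z_c over [lo, hi]^2."""
    axis = torch.linspace(lo, hi, grid_size)
    images = []
    for a in axis:
        for b in axis:
            z_c = torch.tensor([[a.item(), b.item()]])
            z = torch.cat([z_d_onehot.view(1, -1).float(), z_c], dim=1)
            with torch.no_grad():
                images.append(decoder(z))
    return torch.cat(images, dim=0)  # grid_size**2 decoded samples x_hat
```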
6 CONCLUSION
In this work we propose a novel method for learning generative models over streaming data following the lifelong learning principles. The principal assumption for the data is that they are generated by multiple distributions and presented to the learner in a sequential manner (a set of observations from a single distribution followed by a distributional transition). A key limitation for the learning is that the method can only access data generated by the current distribution and has no access to any of the data generated by any of the previous distributions.
The proposed method is based on a dual student-teacher architecture where the teacher’s role is to preserve the past knowledge and aid the student in future learning. We argue for and augment the standard VAE’s ELBO objective by terms helping the teacher-student knowledge transfer. We demonstrate on a series of experiments the benefits this augmented objective brings in the lifelong learning settings by supporting the retention of previously learned knowledge (models) and limiting the usual effects of catastrophic interference.
In future work we will explore extending our architecture to GAN-like learning (Goodfellow et al., 2014), with the prospect of further improving the generative abilities of our method. GANs, however, do not use a metric for measuring the quality of the learned distributions, such as the marginal likelihood or the ELBO, in their objective, and the transfer of our architecture to them is therefore not straightforward.
7 APPENDIX
7.0.1 UNDERSTANDING THE CONSISTENCY REGULARIZER
The analytical derivations of the consistency regularizer show that the regularizer can be interpreted as a transformation of the standard VAE regularizer. In the case of an isotropic gaussian posterior, the proposed regularizer scales the mean and variance of the student posterior by the variance of the teacher (corollary 7.0.2) and adds an extra 'volume' term. This interpretation of the consistency regularizer shows that the proposed regularizer preserves the same learning objective as that of the standard VAE. Below we present the analytical form of the consistency regularizer with categorical and isotropic gaussian posteriors:
Corollary 7.0.1 We parameterize the learnt posterior of the teacher by $\Phi_i = \frac{\exp(p^E_i)}{\sum_{i=1}^{J} \exp(p^E_i)}$ and the posterior of the student by $\phi_i = \frac{\exp(p^S_i)}{\sum_{i=1}^{J} \exp(p^S_i)}$. We also redefine the normalizing constants as $c^E = \sum_{i=1}^{J} \exp(p^E_i)$ and $c^S = \sum_{i=1}^{J} \exp(p^S_i)$ for the teacher and student models respectively. The reverse KL divergence in equation 8 can now be re-written as:

$$KL(Q_\phi(z_d|x)\,||\,Q_\Phi(z_d|x)) = \sum_{i=1}^{J} \frac{\exp(p^S_i)}{c^S} \log\!\left(\frac{\exp(p^S_i)\, c^E}{c^S\, \exp(p^E_i)}\right) = H(p^S, p^S - p^E) = -H(p^S) + H(p^S, p^E) \quad (5)$$

where $H(\cdot)$ is the entropy operator and $H(\cdot,\cdot)$ is the cross-entropy operator.
Corollary 7.0.2 We assume the learnt posterior of the teacher is parameterized by a centered, isotropic gaussian with $\Phi = [\mu^E = 0,\ \Sigma^E = (\sigma^E)^2 I]$ and the posterior of our student by a non-centered isotropic gaussian with $\phi = [\mu^S,\ \Sigma^S = (\sigma^S)^2 I]$; then

$$KL(Q_\phi(z|x)\,||\,Q_\Phi(z|x)) = 0.5\left[\mathrm{tr}\!\left((\Sigma^E)^{-1}\Sigma^S\right) + (\mu^E - \mu^S)^T (\Sigma^E)^{-1} (\mu^E - \mu^S) - F + \log\frac{|\Sigma^E|}{|\Sigma^S|}\right]$$
$$= 0.5 \sum_{j=1}^{F}\left[\frac{(\sigma^S)^2(j) + (\mu^S)^2(j)}{(\sigma^E)^2(j)} - 1 + \log(\sigma^E)^2(j) - \log(\sigma^S)^2(j)\right] = KL(Q_{\phi^*}(z|x)\,||\,\mathcal{N}(0, I)) - \log|\Sigma^E| \quad (6)$$

via a reparameterization of the student's parameters:

$$\phi^* = [\mu^{S*},\ (\sigma^{S*})^2], \qquad \mu^{S*}(j) = \frac{\mu^S(j)}{(\sigma^E)^2(j)}, \qquad (\sigma^{S*})^2(j) = \frac{(\sigma^S)^2(j)}{(\sigma^E)^2(j)} \quad (7)$$
It is also interesting to note that our posterior regularizer reduces to the standard prior regularizer in the limit $(\sigma^E)^2 \to 1$:

$$\lim_{(\sigma^E)^2 \to 1} KL(Q_\phi(z|x)\,||\,Q_\Phi(z|x)) = KL(Q_\phi(z|x)\,||\,\mathcal{N}(0, I))$$
7.1 ELBO DERIVATION
Variational inference Hoffman et al. (2013) side-steps the intractability of the posterior distribution by approximating it with a tractable distribution QΦ(z|x); we then optimize the parameters Φ in order to bring this distribution close to the true posterior Pθ(z|x). The form of this approximate distribution is fixed and is generally conjugate to the prior P(z). Variational inference converts the problem of posterior inference into an optimization problem over Φ, which allows us to utilize stochastic gradient descent to solve our problem. To be more concrete, variational inference tries to minimize the reverse Kullback-Leibler (KL) divergence between the variational posterior distribution QΦ(z|x) and the true posterior Pθ(z|x):
$$KL[Q_\Phi(z|x)\,||\,P_\theta(z|x)] = \log P_\theta(x) - \underbrace{\mathbb{E}_{Q_\Phi(z|x)}\!\left[\log \frac{P_\theta(x, z)}{Q_\Phi(z|x)}\right]}_{\mathcal{L}_\theta} \quad (8)$$
Rearranging the terms in equation 8 and utilizing the fact that the KL divergence is non-negative, we can derive the evidence lower bound $\mathcal{L}_\theta$ (ELBO), which is the objective function we directly optimize:

$$\log P_\theta(x) \geq \mathbb{E}_{Q_\Phi(z|x)}[\log P_\theta(x|z)] - KL(Q_\Phi(z|x)\,||\,P(z)) = \mathcal{L}_\theta \quad (9)$$
In order to backpropagate it is necessary to remove the dependence on the stochastic variable z. To achieve this, we push the sampling operation outside of the computational graph via the reparameterization trick Kingma & Welling (2014) for the normal distribution and the gumbel-softmax reparameterization Maddison et al. (2016); Jang et al. (2017) for the discrete distribution. In essence, the reparameterization trick allows us to introduce a distribution $P(\epsilon)$ that is not a function of the data or the computational graph in order to move the gradient operator into the expectation:
$$\nabla_\Phi\, \mathbb{E}_{Q_\Phi(z|x)}\!\left[\log \frac{P_\theta(x, z)}{Q_\Phi(z|x)}\right] \mapsto \mathbb{E}_{P(\epsilon)}\!\left[\nabla_\Phi \log \frac{P_\theta(x, z)}{Q_\Phi(z|x)}\right] \quad (10)$$
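For completeness, a minimal sketch of a single reparameterized ELBO evaluation for an isotropic-gaussian latent is given below; the encoder/decoder interfaces and the Bernoulli likelihood are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def reparameterized_elbo(x, encoder, decoder):
    """One reparameterized ELBO evaluation for an isotropic-gaussian latent:
    z = mu + sigma * eps with eps ~ N(0, I), so gradients flow through mu and
    log_var while the noise eps is sampled outside the computational graph.
    The encoder returns (mu, log_var); the decoder outputs Bernoulli means."""
    mu, log_var = encoder(x)                         # parameters of Q_Phi(z|x)
    eps = torch.randn_like(mu)                       # eps ~ P(eps), graph-independent
    z = mu + torch.exp(0.5 * log_var) * eps          # reparameterized sample
    recon = -F.binary_cross_entropy(decoder(z), x, reduction='sum')  # E_Q[log P_theta(x|z)]
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())   # KL(Q_Phi(z|x) || N(0, I))
    return recon - kl                                # ELBO; we minimise its negative
```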
7.2 MODEL RELATED
In this section we provide extra details of our model architecture.
7.2.1 MODEL ARCHITECTURE
We utilized two different architectures for our experiments. The first two experiments use a standard deep neural network with two fully-connected layers of 512 units mapping to the latent representation in the encoder and two layers of 512 units mapping back to the reconstruction in the decoder. We used batch norm Ioffe & Szegedy (2015) and ELU activations for all the layers barring the layer projecting into the latent representation and the output layer.
The final experiment with the transfer from SVHN to MNIST utilizes a fully convolutional architecture with only strided convolutional layers in the encoder (where the number of filters are doubled at each layer). The final projection layer for the encoder maps the data to a [C=|zd|, 1, 1] output which is then reparameterized in the standard way. The decoder utilizes fractional strides for the convolutional-transpose (de-convolution) layers where we reduce the number of filters in half at each layer. The full architecture can be examined in our code repository [which will be de-anonymized after the review process]. All layers used batch norm Ioffe & Szegedy (2015) and ELU activations.
We utilized Adam Kingma & Ba (2015) to optimize all of our problems with a learning rate of 1e-4. When we utilized weight transfer we re-initialized the accumulated momentum vector of Adam as well as the aggregated mean and covariance of the Batch Norm layers. Our code is already available online under an MIT license at 4
7.2.2 GUMBEL REPARAMETERIZATION
Since we model our latent variable as a combination of a discrete and a continuous distribution we also use the Gumbel-Softmax reparameterization Maddison et al. (2016); Jang et al. (2017). The Gumbel-Softmax reparameterization over logits [linear output of the last layer in the encoder] p ∈ RM and an annealed temperature parameter τ ∈ R is defined as:
$$z = \mathrm{softmax}\!\left(\frac{\log(p) + g}{\tau}\right); \qquad g = -\log(-\log(u)),\ u \sim \mathrm{Unif}(0, 1) \quad (11)$$

where $u \in \mathbb{R}^M$ and $g \in \mathbb{R}^M$. As the temperature parameter $\tau \to 0$, $z$ converges to a categorical.
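A minimal sketch of this relaxation is given below; we draw the Gumbel noise explicitly and apply the tempered softmax to the encoder logits (the small eps is added purely for numerical stability):

```python
import torch

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
    """Differentiable sample from the Gumbel-Softmax relaxation of equation 11;
    as tau -> 0 the output approaches a one-hot categorical sample."""
    u = torch.rand_like(logits)                      # u ~ Unif(0, 1)
    g = -torch.log(-torch.log(u + eps) + eps)        # Gumbel(0, 1) noise
    return torch.softmax((logits + g) / tau, dim=-1)
```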
4 https://github.com/<anonymized>
7.2.3 EXPANDABLE MODEL CAPACITY AND REPRESENTATIONS
Multilayer neural networks with sigmoidal activations have a VC dimension bounded between O(ρ²) (Sontag, 1998) and O(ρ⁴) (Karpinski & Macintyre, 1997), where ρ is the number of parameters. A model that is able to consistently add new information should also be able to expand its VC dimension by adding new parameters over time. Our formulation imposes no restrictions on the model architecture: i.e. new layers can be added freely to the new student model.
In addition we also allow the dimensionality of zd ∈ RJ , our discrete latent representation to grow in order to accommodate new distributions. This is possible because the KL divergence between two categorical distributions of different sizes can be evaluated by simply zero padding the teacher’s smaller discrete distribution. Since we also transfer weights between the teacher and the student model, we need to handle the case of expanding latent representations appropriately. In the event that we add a new distribution we copy all the weights besides the ones immediately surrounding the projection into and out of the latent distribution. These surrounding weights are reinitialized to their standard Glorot initializations Glorot & Bengio (2010).
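The following sketch illustrates one way the size mismatch could be handled when evaluating the KL divergence between the student's (larger) and the teacher's (smaller) categorical posteriors; the zero padding and the stabilizing eps are illustrative choices rather than the exact recipe:

```python
import torch
import torch.nn.functional as F

def padded_categorical_kl(student_probs, teacher_probs, eps=1e-8):
    """KL(student || teacher) between categorical posteriors of different sizes.
    The teacher's smaller distribution is zero-padded up to the student's size;
    the small eps keeps the padded entries numerically finite."""
    pad = student_probs.size(-1) - teacher_probs.size(-1)
    if pad > 0:
        teacher_probs = F.pad(teacher_probs, (0, pad), value=0.0)
    return torch.sum(student_probs * (torch.log(student_probs + eps)
                                      - torch.log(teacher_probs + eps)), dim=-1)
```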
7.3 FORWARD VS. REVERSE KL
In our setting we have the ability to utilize the zero forcing (reverse or mode-seeking) KL or the zero avoiding (forward) KL divergence. In general, if the true underlying posterior is multi-modal, it is preferable to operate with the reverse KL divergence (Murphy (2012) 21.2.2). In addition, utilizing the mode-seeking KL divergence generates more realistic results when operating over image data.
In order to validate this, we repeat the experiment in 5.1. We train two models: one with the forward KL posterior regularizer and one with the reverse. We evaluate the negative ELBO mean and variance over ten trials. Empirically, we observed no difference between the two variants. This is demonstrated in figure 6.
7.4 NUMBER OF REQUIRED SAMPLES
Our method derives its sample complexity from standard VAEs. In practice we evaluate the number of required real and synthetic samples by utilizing early stopping. When the negative ELBO on the validation set stops decreasing for 50 steps we stop training the current model and transition to the next distribution interval. Using this and the fact that we keep equal proportions of all observed distributions in our minibatch, we can evaluate the number of synthetic and real samples used during the single distribution interval. We demonstrate this procedure on experiment 5.1 in figure 7.
We observe a rapid decrease of the number of required real samples as we assimilate more distributions into our model.
7.5 EXPERIMENTS RELATED
In this section we provide an extra experiment run on MNIST as well as some extra images from the rotated MNIST experiment.
7.5.1 MNIST : GENERATION AND ELBO
In this experiment, we seek to establish the performance benefit that the consistency regularizer brings into the learning process. We do so by evaluating the ELBO for a model with and without the consistency and mutual information regularizers. We also demonstrate the ability of the regularizers to disambiguate distributional boundaries from their inter-distributional variations. That is, for MNIST this separates the MNIST digit classes from their intra-class variants (i.e., drawing style).
We use MNIST to simulate our sequential learning setting. We treat each digit as a different distribution and present the model with samples drawn from a single distribution at a time. For the purpose of this experiment we sequentially progress over the ten distributions (i.e. interval sampling involves linearly iterating over all the distributions ).
When an interval transition occurs we signal the model, make the student the new teacher and instantiate a new student model. We contrast this to a model that utilizes the same graphical model, without our consistency and mutual information regularizers. We quantify the performance of the generative models by computing the ELBO over the standard MNIST test set at every interval. The test set contains digits from all of the individual distributions. We run this procedure ten times and report the average ELBO over the test set.
After observing all ten distributions we evaluate samples generated from the final student model. We do this by fixing the discrete distribution z_d, while randomly sampling z_c ∼ N(0, I). We contrast samples generated from the model with both regularizers (left-most image in figure 8) to the model without the regularizers (center image in figure 8). Our model learns to separate 'style' from distributional boundaries. This is demonstrated by observing the digit '2': different samples of z_c produce different styles of writing a '2'.
7.5.2 ROTATED MNIST EXPERIMENT
We provide a larger sized image for the ELBO from experiment 5.2. We also visualize reconstructions from the rotated MNIST problem (visualized in figure 10). Finally in figure 11 we show the effects on the reconstructions when we do not use the mutual information regularizer. We believe this is due to the fact that the network utilizes the larger continuous representation to model the discriminative aspects of the observed distribution. | 1. What is the focus and contribution of the paper on adapting VAE training to streaming data?
2. What are the strengths and weaknesses of the proposed teacher-student framework and modified objective function?
3. Do you have any concerns regarding the experimental results and their interpretation?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
5. Are there any questions or issues that the reviewer has raised but not explicitly stated? | Review | Review
The paper proposed a teacher-student framework and a modified objective function to adapt VAE training to streaming data setting. The qualitative experimental result shows that the learned model can generate reasonable-looking samples. I'm not sure about what conclusion to make from the numerical result, as the test negative ELBO actually increased after decreasing initially. Why did it increase?
The modified objective function is a little ad-hoc, and it's unclear how to relate the overall objective function to Bayesian posterior inference (what exactly is the posterior that the encoder tries to approximate?). There is a term in the objective function that is synthetic data specific. Does that imply that the objective function is different depending on whether the data is synthetic or real? What is the motivation/justification of choosing KL(Q_student||Q_teacher) as regularisation instead of the other way around? Would that make a difference in the goodness of the learned model? If not, wouldn't KL(Q_teacher||Q_student) result in a reduction in the variance of gradients and therefore be a better choice?
Details on the minimum number of real samples per interval for the model to be able to learn is also missing. Also, how many synthetic samples per real samples are needed? How is the update with respect to synthetic sample scheduled? Given infinite amount of streaming data with a fixed number of classes/underlying distributions and interval length, and sample the class of each interval (uniformly) randomly, will the model/algorithm converge? Is there a minimum number of real examples that the student learner needs to see before it can be turned into a teacher?
Other question: How is the number of latent category J of the latent discrete distribution chosen?
Quality: The numerical experiment doesn't really compare to any other streaming benchmark and is a little unsatisfying. Without a streaming benchmark or a realistic motivating example in which the proposed scheme makes a significant difference, it's difficult to judge the contribution of this work.
Clarity: The manuscript is reasonably well-written. (minor: Paragraph 2, section 5, 'in principle' instead of 'in principal')
Originality: Average. The student-teacher framework by itself isn't novel. The modifications to the objective function appears to be novel as far as I am aware, but it doesn't require much special insights.
Significance: Below average. I think it will be very helpful if the authors can include a realistic motivating example where lifelong unsupervised learning is critical, and demonstrate that the proposed scheme makes a difference in the example. |
ICLR | Title
Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks
Abstract
Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures.
1 INTRODUCTION
Large deep neural networks have enabled breakthroughs in fields such as computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), natural language understanding (Mikolov et al., 2013) and reinforcement learning (Mnih et al., 2015). Hence, in recent times, significant effort has been directed towards enhancing network efficiency through data augmentation, regularization methods and novel training strategies (Zhong et al., 2017; Zhang et al., 2017; Huang et al., 2017; Noh et al., 2017; Wang et al., 2018; Han et al., 2016). In this work, we introduce a technique to improve performance by utilizing prior experiences of the network during training.
Humans are efficient learners with the ability to quickly understand and process diverse ideas. A hallmark of human intelligence is the capability to internalize these complex ideas by actively referencing past interpretations to continually adapt understanding. Our artificial agents should be able to do the same, learning and adapting quickly. This kind of fast and flexible learning is challenging, since the agent must effectively integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data.
In this work, we introduce a new retrospection loss that utilizes prior training experiences of a deep neural network to guide parameter updates and improve performance. The idea for the retrospection loss is simple - to ensure that the predictions at a training step are more similar to the ground truth than to the predictions from a previous training step. As training proceeds, minimizing the loss constrains the network parameters to continually evolve towards the optimal state by successive constriction of the predictions into tighter spaces around the goal. The proposed retrospection loss is simple, easy to implement and we empirically show that it works well across multiple tasks, input types and network architectures.
The key contributions of our work can be summarized as:
• We propose a new simple, easy to implement retrospective loss that is based on looking back at the trajectory of gradient descent and providing an earlier parameter state as guidance for further learning.
• We exhaustively experiment on a wide range of tasks including image classification (+ few-shot learning), GANs, speech recognition, text classification and consistently beat state-of-the-art methods on benchmark datasets with the addition of this loss term.
• To the best of our knowledge, this is the first such effort; our empirical studies showed a consistent improvement in performance across the tasks in our multiple trials, demonstrating the potential of this method to have a strong impact on practical use in real-world applications across domains.
2 RELATED WORK
The retrospection loss leverages the parameter state from a previous training step as guidance to compute the current gradient update. Correspondingly, one could find similarities with efforts in optimization, that utilize information from past training steps for future weight updates as well as methods that leverage guidance from other parameter states during training.
Techniques such as SVRG (Johnson & Zhang, 2013), SARAH(Nguyen et al., 2017), ProxSARAH (Pham et al., 2019) use gradients from earlier training steps to predict better weight updates. Other optimization methods like Momentum (Sutskever et al., 2013), Adam (Kingma & Ba, 2014) Nesterov Momentum (Jin et al., 2018) accumulate past gradients to accelerate weight updates in the right direction in order to achieve faster convergence. In contrast, our work introduces an additional training objective to guide convergence, and can be used to improve performance when used with different optimizer configurations, as shown in our results.
In reinforcement learning (RL), where techniques involve optimizing using moving (evolving) targets, methods for Q-learning and policy gradients benefit from using a guidance network during training. The DQN algorithm proposed by (Mnih et al., 2015) uses an additional target network (same as online network) for Q-value updates, where parameters are updated by copying from the online network at discrete steps. Double Q-learning (Hasselt, 2010) learns two Q functions, where each Q-function is updated with a value for the next state from the other Q-function. Policy gradient methods such as TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017) use a KL-divergence objective during training that constrains the loss to ensure deviation from a previously learned policy is small. In these techniques, leveraging a guidance during training results in improved convergence and sample efficiency. Note that all these efforts are constrained to the RL setting. Further, the objective in the RL setting is to control divergence from the guidance step to better handle moving targets. On the other hand, the proposed retrospection loss is formalized differently to address the supervised learning setting. To the best of our knowledge, this is the first such effort that uses an idea such as retrospection in supervised learning.
3 METHODOLOGY
We now present the formulation of our retrospective loss. Consider a neural network, g(·), parameterized by its weights θ. Let the optimal parameters of the neural networks at the end of training be given by θ∗. The current parameters of the network at time step T during training are given by θT . The objective of the retrospective loss is to leverage the past states during training, and cue the network to be closer to the ground truth than a past state at time step Tp. Given an input data-label pair (xi, yi), the retrospective loss is given by:
$$\mathcal{L}^T_{retrospective} = \kappa \cdot \|g_{\theta^T}(x_i) - y_i\| - \|g_{\theta^T}(x_i) - g_{\theta^{T_p}}(x_i)\| \quad (1)$$
The retrospective loss is designed such that minimizing it with respect to θ over the training steps would constrain the parameter state at each reference step θT to be more similar to θ∗ than the parameter state from the delayed time step θTp . The κ scaling term is required to obtain sufficient gradient signal in later stages of training when gθT (xi) is close to yi, and the first term becomes small.
Adding this loss term to an existing supervised learning task loss provides for efficient training, as shown in our experiments later. The retrospective loss is introduced to the training objective following a warm-up period wherein the neural network function can be considered stable for use of retrospective updates. The training objective at any training step T with the retrospective loss is hence defined as:
$$\mathcal{L} = \begin{cases} \mathcal{L}_{task} & T < I_W \\ \mathcal{L}_{task} + \mathcal{L}^T_{retrospective} & T \geq I_W \end{cases} \quad (2)$$
where $\mathcal{L}_{task}$ is the task-specific training objective and $I_W$ is the number of warm-up iterations. We simply use $T_p = F \cdot \lfloor T/F \rfloor$ as the time step for retrospection in this work, and show gains in efficiency of training. One could, however, mine for $T_p$ intelligently to further improve the performance. Here, $F$ is the retrospective update frequency, which upper-bounds how far the retrospective step $T_p$ can lag behind the current training step $T$ at which the retrospective loss is computed.
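A minimal PyTorch-style sketch of the retrospective term and its use is given below; it assumes a classification setting with softmax outputs and one-hot targets, and that out_prev comes from a frozen copy of the network whose weights are refreshed every F steps (as in Algorithm 1):

```python
import torch

def retrospective_loss(out_curr, out_prev, target, kappa=4.0):
    """Retrospective term of equation 1 with the L1 norm: pull the current
    predictions g_{theta^T}(x) towards the ground truth y while pushing them
    away from the predictions of the retrospective state theta^{T_p}."""
    pull = torch.norm(out_curr - target, p=1)
    push = torch.norm(out_curr - out_prev.detach(), p=1)
    return kappa * pull - push

# Sketch of its use inside a training step (after the warm-up period I_W):
#   loss = task_loss + retrospective_loss(model(x), prev_model(x), y_onehot)
# where prev_model is a frozen copy of model whose weights are copied over
# every F optimisation steps.
```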
Geometric Intuition. Figure 1 illustrates the geometric intuition of the working of the retrospective loss. By design (Eqn 1), Lretrospective is negative when the current parameter state is farther away from the retrospective step, Tp, than the optimal solution (which is the desirable objective).
Algorithm 1 Retrospective Training
1: Input: training set V, current model parameters θ^T, previous-state model parameters θ^{T_p}, update frequency F, number of warm-up iterations I_W
2: for Step = 1 to n do
3:     grad_task ← 0 (initialise the gradients w.r.t. the task-specific loss)
4:     grad_retrospective ← 0 (initialise the gradients w.r.t. the retrospective loss)
5:     Sample training data: a minibatch of B pairs (X_(i), Y_(i))
6:     L(θ^T, X_(i), Y_(i)) = L_task(θ^T(X_(i)), Y_(i))
7:     grad_task ← ∇ L(θ^T, X_(i), Y_(i))
8:     if Step > I_W then
9:         L(θ^T, θ^{T_p}, X_(i), Y_(i)) = L_retrospective(θ^T(X_(i)), θ^{T_p}(X_(i)), Y_(i))
10:        grad_retrospective ← ∇ L(θ^T, θ^{T_p}, X_(i), Y_(i))
11:    end if
12:    if Step % F == 0 then
13:        θ^{T_p} ← θ^T
14:    end if
15:    θ^T ← θ^T − η · (grad_task + grad_retrospective)
16: end for
One could view the loss term as dividing the parameter space into two regions: a polytope around the optimal θ∗ where Lretrospective < 0, and the region outside the polytope where Lretrospective > 0. Minimizing retrospective loss pushes the network towards parameters further inside the polytope, thus helping speed up the training process. As shown on the right subfigure in Figure 1, the polytope shrinks over time, since the retrospective support, Tp, is also updated to more recent parameter states. This helps further push the parameters into a near-optimal region around θ∗. The loss term helps in improved solution in most cases, and faster training in certain cases, as shown in our extensive empirical studies in Section 4. Algorithm 1 summarizes the methodology.
Connection with Triplet Loss. The triplet loss (Chechik et al., 2010; Schroff et al., 2015; Hoffer & Ailon, 2015) has been proposed and used extensively over the last few years to learn high-quality data embeddings, by considering a triplet of data points, $x_a$ (anchor point), $x_p$ (point from the positive/same class as the sample under consideration), and $x_n$ (point from the negative class/class different from the sample under consideration). The loss is then defined as:

$$\max\left(\|g_a - g_p\|^2 - \|g_a - g_n\|^2 + m,\ 0\right) \quad (3)$$
where $g$ is the neural network model, and $m$ is a minimum desired margin of separation. The triplet loss, inspired by contrastive loss (Hadsell et al., 2006), attempts to learn parameters $\theta$ of a neural network in such a way that data points belonging to the same class are pulled closer together than a data point from another class. One could view the proposed retrospection loss as a triplet loss in the parameter space. While the traditional triplet loss considers a triplet of data samples, we consider a triplet of parameters, $\theta^T$, $\theta^*$, and $\theta^{T_p}$. We believe, however, that retrospection captures the proposed loss better, since we consider previous parameter states in time.
Connection with Momentum. Viewing retrospection from the perspective of previous gradients in the training trajectory, one can connect it to the use of momentum, although more in a contrasting sense. The use of momentum and variants such as Nesterov momentum (Jin et al., 2018) in training neural networks relies on the past gradient, say at $\theta^{T-1}$, or the gradients over the previous few steps, at $\{\theta^{T-q}, \cdots, \theta^{T-1}\}$, $q > 0$, while updating the parameters in the current step. This assumes local
consistency of the direction of the gradient update in the training trajectory, and that one can use these previous directions to get a more robust estimate of the gradient step to be taken currently. In contrast, retrospection leverages the same idea from the opposite perspective, viz., consistency of the direction of the gradient update is only local, and hence the parameter state, θTp farther away from the current state θT , provides a cue of what the next parameter must be far from. This raises interesting discussions, and the possibility of analyzing retrospection as a thrust obtained from an undesirable parameter state, as opposed to momentum. We leave these as interesting directions of future work, and focus this work on proposing the method, and showing its effectiveness in training neural networks.
4 EXPERIMENTS AND RESULTS
We conduct experiments using retrospection on the following tasks: image classification (Sec 4.1), image generation (Sec 4.2), speech recognition (Sec 4.3), text classification (Sec 4.4) and fewshot image classification (Sec 4.5). During experimentation, the original (without retrospection) and retrospective (with retrospection) configurations are trained using same weight initialization, to ensure consistency of comparison. For all experiments, we use the L1-norm as the choice of norm in our implementation for the retrospective loss (Eqn 1). When retrospection is used without warm-up, the guidance parameters, θTp , are initialized at random.
4.1 IMAGE CLASSIFICATION
We perform image classification experiments using Fashion-MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009) datasets. The retrospection loss, for classification, uses activations of the softmax layer. The default hyperparameter configurations for retrospection include a warm-up period of zero epochs and a retrospective update frequency of fifty steps. The parameter, κ, is initialized at 4 and increased by 2% at each retrospective update. Quantitative results for image classification are compiled in Table 1.
Fashion-MNIST. For experiments on Fashion MNIST, we use LeNet (Lecun et al., 2001) and ResNet-20 (He et al., 2016) architectures. Models in each experiment are trained to convergence using the SGD optimizer (lr=0.1, momentum=0.5, mini-batch=32) running over 70,000 steps. Results in Figure 2 (a)-(b) show that using the retrospective loss results in improved training.
SVHN. For experiments on SVHN, we use VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) architectures. Models in each experiment are trained to convergence using the SGD optimizer (lr=0.001, momentum=0.9, mini-batch=100) running over 200,000 steps. Results in Figure 2 (c)-(d) show that using the retrospective loss results in more efficient training.
CIFAR-10. For experiments on CIFAR-10 (Krizhevsky, 2009), we use larger variants of ResNet, including ResNet-44, ResNet-56 and ResNet-110 (He et al., 2016). Models in each experiment are trained for 200 epochs, using the training configuration (mini-batch, lr policy) detailed in (He et al., 2016). Here, we observe that
using the retrospection loss in later stages of training results in the best improvement in performance. Correspondingly, the retrospective loss is introduced after a warm-up of 150 epochs and the retrospective update frequency is one epoch. The parameter, κ, is initialized at 4 and updated by 2% once every ten retrospective updates. Quantitative performance is reported in Table 1. For the sake of completeness, we also mention (in brackets) the error rates for the corresponding experiments as reported by the authors in the original work (He et al., 2016).
4.2 IMAGE GENERATION
Next, we perform experiment with Generative Adversarial Networks (GAN) using Fashion-MNIST (F-MNIST) (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009) datasets. Our study considers both unconditional (DCGAN, LSGAN) and conditional (ACGAN) variants of GANs. We adapt implementations from (dcg) for LSGAN (Mao et al., 2016), DCGAN (Radford et al., 2015) and from (acg) for ACGAN (Odena et al., 2016). For our experiments, we train the generator and discriminator for 100 epochs, with initial learning rate of 0.0002 on minibatches of size 64 using Adam optimizer. We report performance using Inception Score (Salimans et al., 2016), a standard metric for evaluating GANs. The inception score is calculated using implementation in (inc, 2018) with predictions for CIFAR-10 generated using network in (Szegedy et al., 2015) and features for F-MNIST using network in (Krizhevsky et al., 2012).
For all experiments, the retrospection loss is initialized without any warm-up period (zero epochs). The loss is computed on outputs of the discriminator and is used to train the generator model. For DCGAN (Radford et al., 2015) and LSGAN (Mao et al., 2016), we use the L2-norm as the choice of norm. The retrospective update happens six times in one epoch. The scaling parameter, κ, is initialized at 4 and is not changed during training. For ACGAN (Odena et al., 2016), which is conditional, the retrospective loss consists of both adversarial loss and class loss components. The L1-norm is used for the class component and the L2-norm is used for the adversarial component. Figure 3 presents comparative inception score plots when the various dataset-network pairs are trained with (without) the retrospection loss. Additionally, Figure 4 presents images generated over epochs when training ACGAN (Odena et al., 2016), with and without retrospection, on F-MNIST (Xiao et al., 2017).
4.3 SPEECH RECOGNITION
We perform speech recognition experiments using the Google Commands (Warden, 2017) dataset. The dataset consists of 65,000 utterances, where each utterance is about one-second long and belongs to one out of 30 classes. The classes correspond to voice commands such as yes, no, down, left, as pronounced by a few thousand different speakers. We follow (Zhang et al., 2017) to preprocess the utterances where we first extract normalized spectrograms from the original waveforms at a sampling rate of 16 kHz and subsequently we zero-pad the spectrograms to equalize their sizes at 160 x 101.
For this experiment, we compare LeNet(Lecun et al., 2001) and VGG-11(Simonyan & Zisserman, 2014) architecture, each of which is composed of two convolutional and two fully-connected layers.
We train each model for 30 epochs with minibatches of 100 examples, using Adam as the optimizer. Training starts with a learning rate of 3×10⁻³, which is divided by 10 every 10 epochs. The retrospective loss is introduced after a warm-up period of eight epochs, since we find this speeds up initial convergence. The retrospection update frequency is half an epoch. The loss scaling margin, κ,
is initialized at 4, and is increased by 1% at each retrospective update. Results in Table 2 highlight that training using the retrospection loss decreases error rate for both LeNet (Lecun et al., 2001) and VGG-11 (Simonyan & Zisserman, 2014) on both validation and testing sets.
4.4 TEXT CLASSIFICATION
We perform text classification experiments on the task of emotion detection in dyadic conversations. We baseline our experiments against DialogueRNN (Majumder et al., 2019), a recent state-of-the-art work, which is composed of an attentive network consisting of three Gated Recurrent Units(GRU). We perform experiments using AVEC (Schuller et al., 2012) and IEMOCAP (Busso et al., 2008) datasets. While the datasets are multi-modal (image and text), following (Majumder et al., 2019), we restrict scope of our experiments to using text. To feed into the network, the text data is preprocessed to obtain n-gram features as detailed in (Majumder et al., 2019). We follow the same traintest split and training configurations as in the original work. Performance comparison is reported against BiDialogueRNN+Att, the best performing variant from the original work.
For experiments on IEMOCAP, models in each experiment are trained for 60 epochs on cross-entropy objective with F1-Score and accuracy as performance metrics.
For retrospection, a warm-up of zero epochs is used. On AVEC, models in each experiment are trained for 100 epochs using MSE loss with MSE and Pearson score (r) as the performance metrics. Here, introducing the retrospection loss after a warm-up of seventy-five epochs produces the best performance. For experiments on both IEMOCAP and AVEC, the retrospective update frequency is one epoch. The loss scaling margin, κ, is set to 4 at initialization and is updated by 2% at each retrospective update. Experiments are conducted using the official
code repository (Co, 2019). Results in Table 3 show that using the retrospection loss when training DialogueRNN improves performance on both IECOMAP and AVEC datasets.
4.5 FEW-SHOT CLASSIFICATION
We conduct experiments on the task of few shot classification using the CUB-200 (Wah et al., 2011) dataset. The CUB-200 dataset consists of 11,788 images from 200 bird species. In few-shot learning, the ability of a model is measured by its performance on n-shot, k-way tasks where the model is given a query sample belonging to a new, previously unseen class and a support set, S, consisting of n examples each from k different unseen classes. The model then has to determine which of the support set classes the query sample belongs to. We restrict the scope of our experiments to the 5- way 5-shot setting and baseline against closerlook (Chen et al., 2019), a recent state-of-the-art work, and protonet (Snell et al., 2017) another popular work from the domain. Our experiments follow from (Chen et al., 2019) and implementations use code in (Chen, 2019). We conduct experiments with backbones of varying depths - Conv4, Conv6 and ResNet34, as presented in (Chen et al., 2019).
For our experiments, each model is trained on protonet (Snell et al., 2017) for 400 epochs and on closerlook (Chen et al., 2019) for 200 epochs.
Model     | protonet (original) | protonet (retrospective) | closerlook (original) | closerlook (retrospective)
Conv4     | 75.26 ± 1.05        | 77.42 ± 1.25             | 79.03 ± 0.63          | 79.95 ± 0.75
Conv6     | 80.71 ± 1.55        | 81.78 ± 1.40             | 81.05 ± 0.55          | 81.35 ± 0.30
ResNet34  | 88.75 ± 1.01        | 89.99 ± 1.13             | 82.23 ± 0.59          | 83.11 ± 0.55

Table 4: Classification performance using retrospection for few-shot classification on the CUB dataset
For Conv4 and Conv6 configurations on both closerlook and protonet, retrospection is introduced without any warm-up period (zero epochs). For ResNet34, a warm-up period of 280 epochs for protonet and 150 epochs for closerlook is used. For all experiments, the retrospective update frequency is one epoch. The scaling parameter, κ, is initialized at 4 and increased by 2% at each retrospective update. For closerlook, we report comparative performance with baseline++, the best performing variant. Results in Table 4 highlight that training with the retrospective loss results in improved classification accuracy for all backbone configurations on both closerlook and protonet. 1
5 ANALYSIS
In this section, we present ablation studies to analyse the impact of different hyperparameters: batch size, optimizer, retrospective update frequency (F) and the scaling parameter κ. The studies are conducted on the task of image classification on the F-MNIST (Xiao et al., 2017) dataset using the LeNet (Lecun et al., 2001) architecture. The default training configurations are used from Section 4.1. In all the studies, networks trained for each configuration are initialized with the same weights to ensure consistent comparison.
Impact of Batch Size We perform experiments to analyse the invariance of the proposed retrospection loss to batch size. For this study, we consider batch sizes - 32, 64, 128. Results presented in Figure 5 highlight that the retrospection loss results in improved training performance, which is achieved much faster, across all the different batch sizes.
Impact of Optimizer We perform experiments to analyse the invariance of the proposed retrospection loss to choice of optimizer. For this study, we use Adam (Kingma & Ba, 2014) and SGD optimizers. The classification performance when using Adam and SGD (momentum=0.5) are reported in Figure 6 (Row 2). The observed results highlight that the retrospective loss results in improved training performance across different optimizers.
Choice of Retrospective Update Frequency, F. We study the impact of different update frequencies (F) for the retrospective loss. We experiment with 150, 200, 250 steps. Results are presented in Figure 6 (Row 1), with the best performance achieved using F = 250 steps. All configurations of the retrospection loss outperform the configuration (in blue) trained without it. While experiments in the current work used randomized search to estimate update frequencies, retrospective mining can be an interesting future direction.
1Results in some experiments on the original configuration do not match values (are higher or lower) reported in (Chen et al., 2019) even after using official code and same training config. However, we ensure consistency of comparison by using the same initializations for original and retrospective configurations
Choice of scaling margin, κ. We conduct experiments using different initial values of the loss scaling margin, κ. For this analysis, the value of κ remains unchanged during the training. Results are presented in Figure 6 (Row 1), with the best performance achieved with κ = 4. All configurations produce better performance than with κ = 1.
6 CONCLUSION AND FUTURE WORK
In this work, we introduced a retrospective loss that utilizes parameter states from previous training steps to condition weight updates and guide the network towards convergence. We conduct experiments across multiple tasks, input types and architectures to empirically validate the effectiveness of the proposed loss. We perform ablation studies to analyze its behaviour. As an interesting future direction to explore the connection between retrospection and momentum, we conducted preliminary experiments on image classification to evaluate the impact of the retrospective loss on optimization. We contrast performance from three different configurations on image classification: (a) trained without retrospective loss (SGD); (b) trained without retrospective loss (SGD + momentum); and (c) with retrospective loss (SGD). Results in Figure 6 (Row 3) highlight that introducing retrospection improves performance (blue vs green); moreover, using the retrospective loss improves convergence even when SGD is optimized without momentum.
A EXPERIMENTAL RESULTS ON GRAPH NEURAL NETWORKS
We study the impact of the retrospection loss on the task of semi-supervised node classification using the CORA and CITESEER datasets (Sen et al., 2008). For our experiments, we use two different models: ARMA (Bianchi et al., 2019) (a recent state-of-the-art method) and GCN (Kipf & Welling, 2016), another popular variant. Our implementations follow from (Fey & Lenssen, 2019). Performance is reported by averaging results over 30 experimental runs, each of which involves training the model for 100 epochs. For all experiments, the retrospective loss is introduced without any warm-up period (zero epochs). The hyperparameters, F and κ, used for training on both datasets (CORA and CITESEER) for each of the two networks are: a) GCN: F = 2, κ = 4; b) ARMA: F = 1, κ = 3. Table 5 presents the quantitative impact of using the retrospection loss.
B ROBUSTNESS OF RESULTS
To ensure consistency of comparison, we reported performance in the main paper by initializing both retrospective and original experiments with the same weights. Now, for comprehensive analysis, we report experimental values for Image Classification, Speech Recognition and Text Classification tasks averaged over 10 runs. (Note that for few-shot learning, we already included this information in the main paper.) Table 6, Table 7 and Table 8 present the corresponding mean and standard deviation of the results for image, text and speech classification respectively. We also note that all the results in the submitted paper are in the same range as the mean ± std in the results below, although these were separately performed - showing the consistency.
C ABLATION STUDY: MOMENTUM
We also conducted an additional study to analyse the impact of choice of momentum values in the optimizer when using the retrospection loss in training. As in other ablation studies, we train LeNet for image classification on F-MNIST using SGD and experiment with different values of
the momentum parameter (0.5, 0.7, 0.9). The other parameter configurations remain the same as initially presented in Section 4.1 (lr=0.1, batch size=32). As highlighted by results in Table 9, the retrospection loss is independent of momentum value since retrospective training results in better performance than original training (w/o retrospection) for all the different momentum values.
D ABLATION STUDY: WARM-UP PERIOD
As in other ablation studies, we train LeNet on the task of FMNIST (60,000 images) image classification for 70,000 iterations with batch size = 32 using SGD (mom=0.9). The error rates with different warm-ups are presented in Table 10. We observed that on simpler datasets (like FMNIST) since networks start at a reasonable accuracy, retrospection is effective even when we introduce it with a very low warm-up period (Iw = 0).
Further, we observed that for tasks on more complex datasets with bigger networks, it is best to introduce the retrospection loss after training the network for some epochs when the network has started to converge to some extent, empirically around 50-75% of the training epochs. While introducing retrospection early also improves over baseline, later introduction of the retrospection loss further improves performance. Table 11 presents results obtained when we trained ResNet-56 on the task of image classification using CIFAR-10 for 200 epochs. Here, when the network is trained without retrospection (the original config as in the ResNet paper), we got an error rate of 6.86 (6.97 is reported in ResNet paper). However, on using retrospection, performance improved to 6.78 when the warm-up period (Iw) of 50 epochs was used and it further improved to 6.52 with a warm-up period of 150 epochs.
Next, we study the impact of the warm-up period on the task of image generation using GAN’s. For the GAN experiments in the main paper, we used the retrospection loss with a warm-up period of zero epochs which resulted in improved performance over the baselines. We believe that since GANs are inherently unstable and do not train to a fixed target, the warm-up period is unlikely to have a significant impact. Here, we present results from our study of the impact of different warmup periods when training GANs. Figure 7 plots the inception scores when DCGAN is trained on FMNIST dataset using retrospection loss with different warm-up periods (0, 10 and 30 epochs) and all other parameters are same as in Section 4.2. As highlighted by the results, retrospection trained with all the three warm-up configurations improves upon the baseline method (better max inception scores) trained without retrospection (blue). For training with the retrospection loss, the warm-up period does not have a significant impact on overall performance with warm-up of 10 epochs (green) and 30 epochs (red) producing almost similar peak values.
E ABLATION STUDY: NORM
In this work, we seek to present the retrospection loss as a general concept that encourages using past parameter states as guidance during training to improve performance. Hence, we conduct an ablation study to analyse the effect of using different norms for the retrospection loss. Table 12 and Table 13 present results of using the retrospection loss with the L1-norm and L2-norm on the task of image classification. As highlighted by the results, both configurations of the retrospection loss (L1-norm, L2-norm) improved performance as compared to training without retrospection, but using the L1-norm resulted in better performance.
Further, we even tried a KL-divergence based formulation of the retrospection loss. Consider an input (xi, yi) and network gθ parameterized by θ. Here gθ(xi) are the activations of the softmax layer and yi is the ground-truth class embedding. For the loss, we define: outcurr = gθT (xi) ; outprev = gθTp (xi) ; target = yi. For KL-divergence, we used the following formulation of the retrospective loss at a training step T:
Loss(KL) = −1 · KL_div(out_curr, out_prev) + CrossEntropy(out_curr, target)   (4)
In the above experiment on SVHN, we obtained 5.45 and 4.31 as error rates for VGG-11 and ResNet18 respectively.
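For reference, the formulation in equation 4 could be implemented as in the following sketch; the direction of the KL term and the batch-mean reduction are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def retrospective_loss_kl(logits_curr, logits_prev, target):
    """KL-divergence variant of equation 4: cross-entropy to the ground truth
    plus a repulsive term rewarding divergence from the previous state's
    softmax predictions (computed here as KL(curr || prev), batch-averaged)."""
    p_curr = F.softmax(logits_curr, dim=1)
    log_p_curr = F.log_softmax(logits_curr, dim=1)
    log_p_prev = F.log_softmax(logits_prev.detach(), dim=1)
    kl_curr_prev = (p_curr * (log_p_curr - log_p_prev)).sum(dim=1).mean()
    return F.cross_entropy(logits_curr, target) - kl_curr_prev
```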
While all our variants, L1-norm, L2-norm and KL-divergence, improved upon baselines, L1-norm resulted in better performance across tasks, except in unconditional GANs, where L2-norm is used to apply the retrospective loss on the adversarial predictions of the generator (Sec 4.2). One hypothesis is that when the L1-norm is used, the gradient is simply a dimension-wise sign (+ve vs -ve), which provides a clearer direction to gradient updates, especially when training to a fixed target embedding in predictive tasks. | 1. What is the focus and contribution of the paper regarding neural network optimization?
2. What are the strengths and weaknesses of the proposed loss function, particularly in terms of intuition and heuristics?
3. Do you have any concerns or questions regarding the methodology, definitions, and details in the paper?
4. How does the choice of norm affect convergence, and what is the motivation behind using L1 norm?
5. What is the impact of hyperparameter choices, such as learning rate schedules, on the performance of the proposed method?
6. Would it be beneficial to include a warm-up period for GAN experiments, and how would it affect the results?
7. Could you provide further investigation or analysis regarding the potential divergence of the proposed loss function in certain scenarios? | Review | Review
The paper proposes a new loss function which adds to the training objective another term that pulls the current parameters of a neural network further away from the parameters at a previous time step.
Intuitively, this aims to push the current parameters further to the local optimum.
On a variety of benchmarks, optimizing the proposed loss function achieves better results than just optimizing the training loss.
The paper is well written and easy to follow. However, I am not entirely convinced about the intuition of the proposed method and I think further investigation are necessary.
While the method is simple and general, it also seems to be rather heuristic and requires carefully chosen hyperparameters.
Having said that, the empirical evidence shows that the proposed loss function consistently improves performance.
The following details should be addressed further:
- I am a bit confused by the definition of the loss function. In Equation 1 it seems that the term on the left represents the training objective. If that is correct, then Equation 2's second case contains the training objective twice?
- F in Section 3 after Equation 2 is not properly defined
- Could it happen that the proposed loss function leads to divergence, for example if the parameter from a previous time step theta^Tp is close to the optimum theta_star?
- What is the motivation to use the L1 norm? How does this choice affect convergence compared to let's L2 norm?
- Section 4.1 typo in first paragraph: K instead of \kappa
- Section 4.1 the results would be more convincing if all networks were trained multiple times with a different random initialization and Table 1 would include the mean and std.
- Why is no warm-up period used for the GAN experiments?
- Section 4.3: why is \kappa increase by 1% for the speech recognition experiments where as by 2% for all other experiments?
- I suggest to increase the line width of all figures since they are somewhat hard to identify on a print version.
- Why is the momentum set to 0.5 for SGD in the ablation study? Most frameworks use a default value of 0.9.
- I would like to see the affect of the warm-up period to the performance in the ablation study.
- How does the choice of learning rate schedule, such as for example cosine annealing, affect the loss function?
post rebuttal
------------------
I thank the authors for clarifying my questions and providing additional experiments. I think that especially the additional ablation studies and reporting the mean and std of multiple trials make the contribution of the paper more convincing. Hence, I increased my score. |
ICLR | Title
Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks
Abstract
Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures.
1 INTRODUCTION
Large deep neural networks have enabled breakthroughs in fields such as computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), natural language understanding (Mikolov et al., 2013) and reinforcement learning (Mnih et al., 2015). Hence, in recent times, significant effort has been directed towards enhancing network efficiency through data augmentation, regularization methods and novel training strategies (Zhong et al., 2017) (Zhang et al., 2017), (Huang et al., 2017) (Noh et al., 2017), (Wang et al., 2018) (Han et al., 2016). In this work, we introduce a technique to improve performance by utilizing prior experiences of the network during training.
Humans are efficient learners with the ability to quickly understand and process diverse ideas. A hallmark of human intelligence is the capability to internalize these complex ideas by actively referencing past interpretations to continually adapt understanding. Our artificial agents should be able to do the same, learning and adapting quickly. This kind of fast and flexible learning is challenging, since the agent must effectively integrate its prior experience with a small amount of new information, while avoiding overfitting to the new data.
In this work, we introduce a new retrospection loss that utilizes prior training experiences of a deep neural network to guide parameter updates and improve performance. The idea for the retrospection loss is simple - to ensure that the predictions at a training step are more similar to the ground truth than to the predictions from a previous training step. As training proceeds, minimizing the loss constrains the network parameters to continually evolve towards the optimal state by successive constriction of the predictions into tighter spaces around the goal. The proposed retrospection loss is simple, easy to implement and we empirically show that it works well across multiple tasks, input types and network architectures.
The key contributions of our work can be summarized as:
• We propose a new simple, easy to implement retrospective loss that is based on looking back at the trajectory of gradient descent and providing an earlier parameter state as guidance for further learning.
• We exhaustively experiment on a wide range of tasks including image classification (+ few-shot learning), GANs, speech recognition, text classification and consistently beat state-of-the-art methods on benchmark datasets with the addition of this loss term.
• To the best of our knowledge, this is the first such effort; our empirical studies showed a consistent improvement in performance across the tasks in our multiple trials, demonstrating the potential of this method to have a strong impact on practical use in real-world applications across domains.
2 RELATED WORK
The retrospection loss leverages the parameter state from a previous training step as guidance to compute the current gradient update. Correspondingly, one could find similarities with efforts in optimization, that utilize information from past training steps for future weight updates as well as methods that leverage guidance from other parameter states during training.
Techniques such as SVRG (Johnson & Zhang, 2013), SARAH (Nguyen et al., 2017), and ProxSARAH (Pham et al., 2019) use gradients from earlier training steps to predict better weight updates. Other optimization methods like Momentum (Sutskever et al., 2013), Adam (Kingma & Ba, 2014), and Nesterov momentum (Jin et al., 2018) accumulate past gradients to accelerate weight updates in the right direction in order to achieve faster convergence. In contrast, our work introduces an additional training objective to guide convergence, and can improve performance under different optimizer configurations, as shown in our results.
In reinforcement learning (RL), where techniques involve optimizing using moving (evolving) targets, methods for Q-learning and policy gradients benefit from using a guidance network during training. The DQN algorithm proposed by (Mnih et al., 2015) uses an additional target network (same as online network) for Q-value updates, where parameters are updated by copying from the online network at discrete steps. Double Q-learning (Hasselt, 2010) learns two Q functions, where each Q-function is updated with a value for the next state from the other Q-function. Policy gradient methods such as TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017) use a KL-divergence objective during training that constrains the loss to ensure deviation from a previously learned policy is small. In these techniques, leveraging a guidance during training results in improved convergence and sample efficiency. Note that all these efforts are constrained to the RL setting. Further, the objective in the RL setting is to control divergence from the guidance step to better handle moving targets. On the other hand, the proposed retrospection loss is formalized differently to address the supervised learning setting. To the best of our knowledge, this is the first such effort that uses an idea such as retrospection in supervised learning.
3 METHODOLOGY
We now present the formulation of our retrospective loss. Consider a neural network, g(·), parameterized by its weights θ. Let the optimal parameters of the neural network at the end of training be given by θ*. The current parameters of the network at time step T during training are given by θ_T. The objective of the retrospective loss is to leverage the past states during training, and cue the network to be closer to the ground truth than a past state at time step T_p. Given an input data-label pair (x_i, y_i), the retrospective loss is given by:

$$L^{T}_{retrospective} = \kappa \cdot \left\| g_{\theta_T}(x_i) - y_i \right\| - \left\| g_{\theta_T}(x_i) - g_{\theta_{T_p}}(x_i) \right\| \qquad (1)$$

The retrospective loss is designed such that minimizing it with respect to θ over the training steps would constrain the parameter state at each reference step θ_T to be more similar to θ* than to the parameter state from the delayed time step θ_{T_p}. The κ scaling term is required to obtain a sufficient gradient signal in later stages of training, when g_{θ_T}(x_i) is close to y_i and the first term becomes small.
Adding this loss term to an existing supervised learning task loss provides for efficient training, as shown in our experiments later. The retrospective loss is introduced to the training objective following a warm-up period wherein the neural network function can be considered stable for use of retrospective updates. The training objective at any training step T with the retrospective loss is hence defined as:
$$L = \begin{cases} L_{task} & T < I_W \\ L_{task} + L^{T}_{retrospective} & T \geq I_W \end{cases} \qquad (2)$$

where L_task is the task-specific training objective and I_W is the number of warm-up iterations. We simply use $T_p = F \cdot \lfloor T/F \rfloor$ as the time step for retrospection in this work, and show gains in efficiency of training. One could however mine for T_p intelligently to further improve the performance. Here, F is the retrospective update frequency, which bounds how far the previous training step T_p can lag behind the current training step T at which the retrospective loss is computed.
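As a concrete illustration, here is a minimal PyTorch-style sketch of Eqn 1 (this is our own sketch, not the authors' released code; the mean reduction and the default κ = 4 follow the settings reported in Sec 4):

```python
import torch.nn.functional as F


def retrospective_loss(out_curr, out_prev, target, kappa=4.0):
    """Eqn 1: kappa * ||g_T(x) - y|| - ||g_T(x) - g_Tp(x)||, with the L1 norm.

    out_curr: predictions of the current model g_{theta_T}(x)
    out_prev: predictions of the retrospective snapshot g_{theta_Tp}(x)
    target:   ground-truth embedding (e.g., one-hot labels for classification)
    """
    pull = F.l1_loss(out_curr, target)              # pull towards the ground truth
    push = F.l1_loss(out_curr, out_prev.detach())   # push away from the past state
    return kappa * pull - push
```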
Geometric Intuition. Figure 1 illustrates the geometric intuition behind the retrospective loss. By design (Eqn 1), L_retrospective is negative when the current parameter state is farther away from the retrospective step, T_p, than from the optimal solution (which is the desirable objective).
Algorithm 1 Retrospective Training

1: Input: training set V, current model parameters θ_T, previous-state model parameters θ_{T_p}, update frequency F, number of warm-up iterations I_W
2: for Step = 1 to n do
3:    grad_task ← 0  (initialise the gradients w.r.t. the task-specific loss)
4:    grad_retrospective ← 0  (initialise the gradients w.r.t. the retrospective loss)
5:    Sample a minibatch of B training pairs (X(i), Y(i))
6:    L(θ_T, X(i), Y(i)) = L_task(θ_T(X(i)), Y(i))
7:    grad_task ← ∇ L(θ_T, X(i), Y(i))
8:    if Step > I_W then
9:       L(θ_T, θ_{T_p}, X(i), Y(i)) = L_retrospective(θ_T(X(i)), θ_{T_p}(X(i)), Y(i))
10:      grad_retrospective ← ∇ L(θ_T, θ_{T_p}, X(i), Y(i))
11:   end if
12:   if Step mod F == 0 then
13:      θ_{T_p} ← θ_T
14:   end if
15:   θ_T ← θ_T − η · (grad_task + grad_retrospective)
16: end for
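Below is a hedged PyTorch-style sketch of the training loop in Algorithm 1, reusing the retrospective_loss sketch above (the names model, loader, task_loss, and the use of deepcopy to hold θ_{T_p} are our own illustrative choices, not the authors' implementation):

```python
import copy
import torch


def train_with_retrospection(model, loader, task_loss, optimizer,
                             warmup_iters, update_freq, kappa=4.0):
    prev_model = copy.deepcopy(model)                  # snapshot holding theta_{T_p}
    step = 0
    for x, y in loader:
        out = model(x)
        loss = task_loss(out, y)                       # L_task
        if step >= warmup_iters:                       # add L_retrospective after warm-up
            with torch.no_grad():
                out_prev = prev_model(x)               # g_{theta_{T_p}}(x)
            loss = loss + retrospective_loss(out, out_prev, y, kappa)
        if step % update_freq == 0:                    # theta_{T_p} <- theta_T every F steps
            prev_model.load_state_dict(model.state_dict())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        step += 1
```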
One could view the loss term as dividing the parameter space into two regions: a polytope around the optimal θ* where L_retrospective < 0, and the region outside the polytope where L_retrospective > 0. Minimizing the retrospective loss pushes the network towards parameters further inside the polytope, thus helping speed up the training process. As shown in the right subfigure of Figure 1, the polytope shrinks over time, since the retrospective support, T_p, is also updated to more recent parameter states. This helps further push the parameters into a near-optimal region around θ*. The loss term leads to an improved solution in most cases, and faster training in certain cases, as shown in our extensive empirical studies in Section 4. Algorithm 1 summarizes the methodology.
Connection with Triplet Loss. The triplet loss (Chechik et al., 2010; Schroff et al., 2015; Hoffer & Ailon, 2015) has been proposed and used extensively over the last few years to learn high-quality data embeddings, by considering a triplet of data points: x_a (anchor point), x_p (point from the positive/same class as the sample under consideration), and x_n (point from the negative class, i.e., a class different from that of the sample under consideration). The loss is then defined as:

$$\max\left( \| g_a - g_p \|^2 - \| g_a - g_n \|^2 + m,\; 0 \right) \qquad (3)$$

where g is the neural network model, and m is a minimum desired margin of separation. The triplet loss, inspired by contrastive loss (Hadsell et al., 2006), attempts to learn parameters θ of a neural network in such a way that data points belonging to the same class are pulled closer together than a data point from another class. One could view the proposed retrospection loss as a triplet loss in the parameter space. While the traditional triplet loss considers a triplet of data samples, we consider a triplet of parameters: θ_T, θ*, and θ_{T_p}. We however believe that retrospection captures the proposed loss better, since we consider previous parameter states in time.
Connection with Momentum. Viewing retrospection from the perspective of previous gradients in the training trajectory, one can connect it to the use of momentum, although more in a contrasting sense. Momentum and variants such as Nesterov momentum (Jin et al., 2018) use the past gradient, say at θ_{T−1}, or the gradients over the previous few steps, at {θ_{T−q}, · · · , θ_{T−1}}, q > 0, while updating the parameters in the current step. This assumes local consistency of the direction of the gradient update in the training trajectory, and that one can use these previous directions to obtain a more robust estimate of the gradient step to be taken currently. In contrast, retrospection leverages the same idea from the opposite perspective, viz., that consistency of the direction of the gradient update is only local, and hence the parameter state θ_{T_p}, farther away from the current state θ_T, provides a cue of what the next parameter state must be far from. This raises interesting discussions, and the possibility of analyzing retrospection as a thrust obtained from an undesirable parameter state, as opposed to momentum. We leave these as interesting directions of future work, and focus this work on proposing the method and showing its effectiveness in training neural networks.
4 EXPERIMENTS AND RESULTS
We conduct experiments using retrospection on the following tasks: image classification (Sec 4.1), image generation (Sec 4.2), speech recognition (Sec 4.3), text classification (Sec 4.4) and few-shot image classification (Sec 4.5). During experimentation, the original (without retrospection) and retrospective (with retrospection) configurations are trained using the same weight initialization, to ensure consistency of comparison. For all experiments, we use the L1-norm as the choice of norm in our implementation of the retrospective loss (Eqn 1). When retrospection is used without warm-up, the guidance parameters, θ_{T_p}, are initialized at random.
4.1 IMAGE CLASSIFICATION
We perform image classification experiments using Fashion-MNIST (Xiao et al., 2017), SVHN (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009) datasets. The retrospection loss, for classification, uses activations of the softmax layer. The default hyperparameter configurations for retrospection include a warm-up period of zero epochs and a retrospective update frequency of fifty steps. The parameter, κ, is initialized at 4 and increased by 2% at each retrospective update. Quantitative results for image classification are compiled in Table 1.
Fashion-MNIST. For experiments on Fashion MNIST, we use LeNet (Lecun et al., 2001) and ResNet-20 (He et al., 2016) architectures. Models in each experiment are trained to convergence using the SGD optimizer (lr=0.1, momentum=0.5, mini-batch=32) running over 70,000 steps. Results in Figure 2 (a)-(b) show that using the retrospective loss results in improved training.
SVHN. For experiments on SVHN, we use VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) architectures. Models in each experiment are trained to convergence using the SGD optimizer (lr=0.001, momentum=0.9, mini-batch=100) running over 200,000 steps. Results in Figure 2 (c)-(d) show that using the retrospective loss results in more efficient training.
CIFAR-10. For experiments on CIFAR-10 (Krizhevsky, 2009), we use larger variants of ResNet including
ResNet - 44, 56, 110 (He et al., 2016). Models in each experiment are trained for 200 epochs, using the training configuration (mini-batch, lr policy) detailed in (He et al., 2016). Here, we observe that
using the retrospection loss in later stages of training results in best improvement in performance. Correspondingly, the retrospective loss is introduced after a warm-up of 150 epochs and the retrospective update frequency is one epoch. The parameter, κ, is initialized at 4 and updated by 2% once every ten retrospective updates. Quantitative performance is reported in Table 1. For sake of completion, we also mention (in brackets) the error rates for the corresponding experiments as reported by authors in the original work (He et al., 2016).
4.2 IMAGE GENERATION
Next, we perform experiment with Generative Adversarial Networks (GAN) using Fashion-MNIST (F-MNIST) (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009) datasets. Our study considers both unconditional (DCGAN, LSGAN) and conditional (ACGAN) variants of GANs. We adapt implementations from (dcg) for LSGAN (Mao et al., 2016), DCGAN (Radford et al., 2015) and from (acg) for ACGAN (Odena et al., 2016). For our experiments, we train the generator and discriminator for 100 epochs, with initial learning rate of 0.0002 on minibatches of size 64 using Adam optimizer. We report performance using Inception Score (Salimans et al., 2016), a standard metric for evaluating GANs. The inception score is calculated using implementation in (inc, 2018) with predictions for CIFAR-10 generated using network in (Szegedy et al., 2015) and features for F-MNIST using network in (Krizhevsky et al., 2012).
For all experiments, the retrospection loss is initialized without any warm-up period (zero epochs). The loss is computed on outputs of the discriminator and is used to train the generator model. For DCGAN (Radford et al., 2015) and LSGAN (Mao et al., 2016), the L2-norm is used as the choice of norm. The retrospective update happens six times in one epoch. The scaling parameter, κ, is initialized at 4 and is not changed during training. For ACGAN (Odena et al., 2016), which is conditional, the retrospective loss consists of both adversarial loss and class loss components. L1-norm is used for the class component and L2-norm is used for the adversarial component. Figure 3 presents comparative inception score plots when the various dataset-network pairs are trained with (without) the retrospection loss. Additionally, Figure 4 presents images generated over epochs when training ACGAN (Odena et al., 2016), with and without retrospection, on F-MNIST (Xiao et al., 2017).
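To make the generator-side usage concrete, the following sketch shows one way the retrospective term could be attached to an unconditional GAN generator update, following our reading of the description above (G, G_prev, D and adv_loss are placeholder names; the L2 norm is applied on the discriminator's adversarial predictions, and a "real" target is used as the generator's ground truth):

```python
import torch
import torch.nn.functional as F


def generator_loss_with_retrospection(G, G_prev, D, z, adv_loss, kappa=4.0):
    fake = G(z)                              # samples from the current generator
    with torch.no_grad():
        d_prev = D(G_prev(z))                # adversarial predictions for the snapshot generator
    d_curr = D(fake)                         # adversarial predictions for the current generator
    real_target = torch.ones_like(d_curr)    # the generator's "ground truth"
    retro = kappa * F.mse_loss(d_curr, real_target) - F.mse_loss(d_curr, d_prev)
    return adv_loss(d_curr, real_target) + retro
```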
4.3 SPEECH RECOGNITION
We perform speech recognition experiments using the Google Commands (Warden, 2017) dataset. The dataset consists of 65,000 utterances, where each utterance is about one-second long and belongs to one out of 30 classes. The classes correspond to voice commands such as yes, no, down, left, as pronounced by a few thousand different speakers. We follow (Zhang et al., 2017) to preprocess the utterances where we first extract normalized spectrograms from the original waveforms at a sampling rate of 16 kHz and subsequently we zero-pad the spectrograms to equalize their sizes at 160 x 101.
For this experiment, we compare LeNet(Lecun et al., 2001) and VGG-11(Simonyan & Zisserman, 2014) architecture, each of which is composed of two convolutional and two fully-connected layers.
We train each model for 30 epochs with minibatches of 100 examples, using Adam as the optimizer. Training starts with a learning rate of 3x10−3 and is divided by 10 every 10 epochs. The retrospective loss is introduced after a warm-up period of eight epochs, since we find it speeds up initial convergence. The retrospection update frequency is half epoch. The loss scaling margin, κ,
is initialized at 4, and is increased by 1% at each retrospective update. Results in Table 2 highlight that training using the retrospection loss decreases error rate for both LeNet (Lecun et al., 2001) and VGG-11 (Simonyan & Zisserman, 2014) on both validation and testing sets.
4.4 TEXT CLASSIFICATION
We perform text classification experiments on the task of emotion detection in dyadic conversations. We baseline our experiments against DialogueRNN (Majumder et al., 2019), a recent state-of-the-art work, which is composed of an attentive network consisting of three Gated Recurrent Units(GRU). We perform experiments using AVEC (Schuller et al., 2012) and IEMOCAP (Busso et al., 2008) datasets. While the datasets are multi-modal (image and text), following (Majumder et al., 2019), we restrict scope of our experiments to using text. To feed into the network, the text data is preprocessed to obtain n-gram features as detailed in (Majumder et al., 2019). We follow the same traintest split and training configurations as in the original work. Performance comparison is reported against BiDialogueRNN+Att, the best performing variant from the original work.
For experiments on IEMOCAP, models in each experiment are trained for 60 epochs on cross-entropy objective with F1-Score and accuracy as performance metrics.
For retrospection, a warm-up of zero epochs is used. On AVEC, models in each experiment are trained for 100 epochs using MSE loss, with MSE and Pearson score (r) as the performance metrics. Here, introducing the retrospection loss after a warm-up of seventy-five epochs produces the best performance. For experiments on both IEMOCAP and AVEC, the retrospective update frequency is one epoch. The loss scaling margin, κ, is set to 4 at initialization and is updated by 2% at each retrospective update. Experiments are conducted using the official
code repository (Co, 2019). Results in Table 3 show that using the retrospection loss when training DialogueRNN improves performance on both IECOMAP and AVEC datasets.
4.5 FEW-SHOT CLASSIFICATION
We conduct experiments on the task of few-shot classification using the CUB-200 (Wah et al., 2011) dataset. The CUB-200 dataset consists of 11,788 images from 200 bird species. In few-shot learning, the ability of a model is measured by its performance on n-shot, k-way tasks where the model is given a query sample belonging to a new, previously unseen class and a support set, S, consisting of n examples each from k different unseen classes. The model then has to determine which of the support set classes the query sample belongs to. We restrict the scope of our experiments to the 5-way 5-shot setting and baseline against closerlook (Chen et al., 2019), a recent state-of-the-art work, and protonet (Snell et al., 2017), another popular work from the domain. Our experiments follow from (Chen et al., 2019) and implementations use code in (Chen, 2019). We conduct experiments with backbones of varying depths - Conv4, Conv6 and ResNet34, as presented in (Chen et al., 2019).
For our experiments, each model is trained on protonet (Snell et al., 2017) for 400 epochs and on closerlook (Chen et al., 2019) for 200 epochs.
Model      protonet (original)   protonet (retrospective)   closerlook (original)   closerlook (retrospective)
Conv4      75.26 ± 1.05          77.42 ± 1.25               79.03 ± 0.63            79.95 ± 0.75
Conv6      80.71 ± 1.55          81.78 ± 1.40               81.05 ± 0.55            81.35 ± 0.30
ResNet34   88.75 ± 1.01          89.99 ± 1.13               82.23 ± 0.59            83.11 ± 0.55

Table 4: Classification performance using retrospection for few-shot classification on the CUB dataset
For Conv4 and Conv6 configurations on both closerlook and protonet, retrospection is introduced without any warm-up period (zero epochs). For ResNet34, a warm-up period of 280 epochs for protonet and 150 epochs for closerlook is used. For all experiments, the retrospective update frequency is one epoch each. The scaling parameter, κ, is initialized at 4 and increased by 2% at each retrospective update. For closerlook, we report comparative performance with baseline++, the best performing variant. Results in Table 4 highlight that training with the retrospective loss results in improved classification accuracy for all backbone configurations on both closerlook and protonet.1
5 ANALYSIS
In this section, we present ablation studies to analyse the impact of different hyperparameters - batch size, optimizer, retrospective update frequency (F) and the scaling parameter κ. The studies are conducted on the task of image classification on the F-MNIST (Xiao et al., 2017) dataset using the LeNet (Lecun et al., 2001) architecture. The default training configurations are used from Section 4.1. In all the studies, networks trained for each configuration are initialized with the same weights to ensure consistent comparison.
Impact of Batch Size We perform experiments to analyse the invariance of the proposed retrospection loss to batch size. For this study, we consider batch sizes - 32, 64, 128. Results presented in Figure 5 highlight that the retrospection loss results in improved training performance, which is achieved much faster, across all the different batch sizes.
Impact of Optimizer We perform experiments to analyse the invariance of the proposed retrospection loss to the choice of optimizer. For this study, we use the Adam (Kingma & Ba, 2014) and SGD optimizers. The classification performance when using Adam and SGD (momentum=0.5) is reported in Figure 6 (Row 2). The observed results highlight that the retrospective loss results in improved training performance across different optimizers.
Choice of Retrospective Update Frequency, F. We study the impact of different update frequencies (F) for the retrospective loss. We experiment with 150, 200, 250 steps. Results are presented in Figure 6 (Row 1) with the best performance achieved using F = 250 steps. All configurations of the retrospection loss outperform the configuration (in blue) trained without it. While experiments in the current work used randomized search to estimate update frequencies, retrospective mining can be an interesting future direction.
1Results in some experiments on the original configuration do not match the values (are higher or lower) reported in (Chen et al., 2019), even after using the official code and the same training config. However, we ensure consistency of comparison by using the same initializations for the original and retrospective configurations.
Choice of scaling margin, κ We conduct experiments using different initial values of the loss scaling margin, κ. For this analysis, the value of κ remains unchanged during the training. Results are presented in Figure 6 (Row 1) with best performance achieved with κ = 4. All configurations produce better performance than with κ = 1.
6 CONCLUSION AND FUTURE WORK
In this work, we introduced a retrospective loss that utilizes parameter states from previous training steps to condition weight updates and guide the network towards convergence. We conduct experiments across multiple tasks, input types and architectures to empirically validate the effectiveness of the proposed loss. We perform ablation studies to analyze its behaviour. As an interesting future direction to explore the connection between retrospection and momentum, we conducted preliminary experiments on image classification to evaluate the impact of the retrospective loss on optimization. We contrast performance from three different configurations on image classification: (a) trained without retrospective loss (SGD); (b) trained without retrospective loss (SGD + momentum); and (c) with retrospective loss (SGD). Results in Figure 6 (Row 3) highlight that introducing retrospection improves performance (blue vs green); moreover, using the retrospective loss improves convergence even when SGD is optimized without momentum.
A EXPERIMENTAL RESULTS ON GRAPH NEURAL NETWORKS
We study the impact of the retrospection loss on the task of semi-supervised node classification using the CORA and CITESEER datasets (Sen et al., 2008). For our experiments, we use two different models: ARMA (Bianchi et al., 2019) (a recent state-of-the-art method) and GCN (Kipf & Welling, 2016), another popular variant. Our implementations follow from (Fey & Lenssen, 2019). Performance is reported by averaging results over 30 experimental runs, each of which involves training the model for 100 epochs. For all experiments, the retrospective loss is introduced without any warm-up period (zero epochs). The hyperparameters, F and κ, used for training on both datasets (CORA and CITESEER) for each of the two networks are: a) GCN: F = 2, κ = 4; b) ARMA: F = 1, κ = 3. Table 5 presents the quantitative impact of using the retrospection loss.
B ROBUSTNESS OF RESULTS
To ensure consistency of comparison, we reported performance in the main paper by initializing both retrospective and original experiments with the same weights. Now, for a more comprehensive analysis, we report experimental values for the image classification, speech recognition and text classification tasks averaged over 10 runs. (Note that for few-shot learning, we already included this information in the main paper.) Table 6, Table 7 and Table 8 present the corresponding mean and standard deviation of the results for image, text and speech classification respectively. We also note that all the results in the submitted paper fall within the mean ± std ranges reported below, even though those runs were performed separately, showing the consistency of the method.
C ABLATION STUDY: MOMENTUM
We also conducted an additional study to analyse the impact of choice of momentum values in the optimizer when using the retrospection loss in training. As in other ablation studies, we train LeNet for image classification on F-MNIST using SGD and experiment with different values of
the momentum parameter (0.5, 0.7, 0.9). The other parameter configurations remain the same as initially presented in Section 4.1 (lr=0.1, batch size=32). As highlighted by results in Table 9, the retrospection loss is independent of momentum value since retrospective training results in better performance than original training (w/o retrospection) for all the different momentum values.
D ABLATION STUDY: WARM-UP PERIOD
As in other ablation studies, we train LeNet on the task of FMNIST (60,000 images) image classification for 70,000 iterations with batch size = 32 using SGD (mom=0.9). The error rates with different warm-ups are presented in Table 10. We observed that on simpler datasets (like FMNIST) since networks start at a reasonable accuracy, retrospection is effective even when we introduce it with a very low warm-up period (Iw = 0).
Further, we observed that for tasks on more complex datasets with bigger networks, it is best to introduce the retrospection loss after training the network for some epochs when the network has started to converge to some extent, empirically around 50-75% of the training epochs. While introducing retrospection early also improves over baseline, later introduction of the retrospection loss further improves performance. Table 11 presents results obtained when we trained ResNet-56 on the task of image classification using CIFAR-10 for 200 epochs. Here, when the network is trained without retrospection (the original config as in the ResNet paper), we got an error rate of 6.86 (6.97 is reported in ResNet paper). However, on using retrospection, performance improved to 6.78 when the warm-up period (Iw) of 50 epochs was used and it further improved to 6.52 with a warm-up period of 150 epochs.
Next, we study the impact of the warm-up period on the task of image generation using GAN’s. For the GAN experiments in the main paper, we used the retrospection loss with a warm-up period of zero epochs which resulted in improved performance over the baselines. We believe that since GANs are inherently unstable and do not train to a fixed target, the warm-up period is unlikely to have a significant impact. Here, we present results from our study of the impact of different warmup periods when training GANs. Figure 7 plots the inception scores when DCGAN is trained on FMNIST dataset using retrospection loss with different warm-up periods (0, 10 and 30 epochs) and all other parameters are same as in Section 4.2. As highlighted by the results, retrospection trained with all the three warm-up configurations improves upon the baseline method (better max inception scores) trained without retrospection (blue). For training with the retrospection loss, the warm-up period does not have a significant impact on overall performance with warm-up of 10 epochs (green) and 30 epochs (red) producing almost similar peak values.
E ABLATION STUDY: NORM
In this work, we seek to present the retrospection loss as a general concept that encourages using past parameter states as guidance during training to improve performance. Hence, we conduct an ablation study to analyse the effect of using different norms for the retrospection loss. Table 12 and Table 13 present results of using the retrospection loss with the L1-norm and L2-norm on the task of image classification. As highlighted by the results, both configurations of the retrospection loss (L1-norm, L2-norm) improved performance as compared to training without retrospection, but using the L1-norm resulted in better performance.
Further, we even tried a KL-divergence based formulation of the retrospection loss. Consider an input (x_i, y_i) and a network g_θ parameterized by θ. Here g_θ(x_i) are the activations of the softmax layer and y_i is the ground-truth class embedding. For the loss, we define: out_curr = g_{θ_T}(x_i); out_prev = g_{θ_{T_p}}(x_i); target = y_i. For KL-divergence, we used the following formulation of the retrospective loss at a training step T:

$$Loss_{KL} = -1 \cdot KL\_div(out_{curr}, out_{prev}) + CrossEntropy(out_{curr}, target) \qquad (4)$$
In the above experiment on SVHN, we obtained 5.45 and 4.31 as error rates for VGG-11 and ResNet18 respectively.
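For completeness, a sketch of how the KL-divergence variant in Eqn 4 could be written in PyTorch; this is our interpretation (taking the divergence as KL(out_curr ‖ out_prev) on softmax probabilities), not the authors' code:

```python
import torch
import torch.nn.functional as F


def retrospective_loss_kl(logits_curr, logits_prev, target):
    ce = F.cross_entropy(logits_curr, target)           # CrossEntropy(out_curr, target)
    log_p_curr = F.log_softmax(logits_curr, dim=-1)
    log_p_prev = F.log_softmax(logits_prev.detach(), dim=-1)
    # KL(out_curr || out_prev), averaged over the batch
    kl = (log_p_curr.exp() * (log_p_curr - log_p_prev)).sum(dim=-1).mean()
    return ce - kl                                       # Eqn 4: -KL + CE
```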
While all our variants, L1-norm, L2-norm and KL-divergence, improved upon baselines, L1-norm resulted in better performance across tasks, except in unconditional GANs, where L2-norm is used to apply the retrospective loss on the adversarial predictions of the generator (Sec 4.2). One hypothesis is that when the L1-norm is used, the gradient is simply a dimension-wise sign (+ve vs -ve), which provides a clearer direction to gradient updates, especially when training to a fixed target embedding in predictive tasks. | 1. What is the main contribution of the paper, and how does it optimize neural network training?
2. What are the strengths of the paper, particularly in terms of its experimental results?
3. What are the weaknesses of the paper regarding its writing and presentation?
4. Do you have any concerns about the retrospective loss, such as its hyperparameters or the choice of L-1 norm?
5. Are there any questions regarding the paper's experimental results or related work discussion? | Review | Review
This paper presents the retrospective loss to optimize neural network training. The idea behind the retrospective loss is to add a penalization term between the current model to the model from a few iterations before. Extensive experimental results on a wide range of datasets are provided to show the effectiveness of the retrospective loss.
The retrospective loss is additionally controlled by two hyperparameters, the strength parameter K and the update frequency T_p. This loss, measured in L-1 norm, is added to the training objective. The geometric intuition of the added loss term is that this pushes the model away from the model at iteration T_p. The paper argues that this shrinks the parameter space of the loss function.
One of the concerns regards the writing of the paper.
- Algorithm 1 and Figure 6 look very blurry, which I think are both below the publication standard.
- The introduction could be written to be more helpful, such as providing more context on why the obtained experimental results are important (e.g. getting state-of-the-art results on the datasets studied in the experiments)
- The Related Work section contrasts with previous work in a way that is not clear, because the precise contribution has not been stated at that point.
More detailed questions:
- What are the standard deviations for the experimental results (as you reported in Table 4 but not in other experiments)?
- I'm curious whether the use of L-1 norm is critical or not in the retrospective loss. |
ICLR | Title
Discrete Graph Structure Learning for Forecasting Multiple Time Series
Abstract
Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
1 INTRODUCTION
Time series data are widely studied in science and engineering that involve temporal measurements. Time series forecasting is concerned with the prediction of future values based on observed ones in the past. It has played important roles in climate studies, market analysis, traffic control, and energy grid management (Makridakis et al., 1997) and has inspired the development of various predictive models that capture the temporal dynamics of the underlying system. These models range from early autoregressive approaches (Hamilton, 1994; Asteriou & Hall, 2011) to the recent deep learning methods (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019).
Analysis of univariate time series (a single longitudinal variable) has been extended to multivariate time series and multiple (univariate or multivariate) time series. Multivariate forecasting models find strong predictive power in stressing the interdependency (and even causal relationship) among the variables. The vector autoregressive model (Hamilton, 1994) is an example of multivariate analysis, wherein the coefficient magnitudes offer hints into the Granger causality (Granger, 1969) of one variable to another.
For multiple time series, pairwise similarities or connections among them have also been explored to improve the forecasting accuracy (Yu et al., 2018). An example is the traffic network where each node denotes a time series recording captured by a particular sensor. The spatial connections of the roads offer insights into how traffic dynamics propagates along the network. Several graph neural
∗This work was done while C. Shang was an intern at MIT-IBM Watson AI Lab, IBM Research. †To whom correspondence should be addressed.
network (GNN) approaches (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019) have been proposed recently to leverage the graph structure for forecasting all time series simultaneously.
The graph structure however is not always available or it may be incomplete. There could be several reasons, including the difficulty in obtaining such information or a deliberate shielding for the protection of sensitive information. For example, a data set comprising sensory readings of the nation-wide energy grid is granted access to specific users without disclosure of the grid structure. Such practical situations incentivize the automatic learning of the hidden graph structure jointly with the forecasting model.
Because GNN approaches show promise in forecasting multiple interrelated time series, in this paper we are concerned with structure learning methods applicable to the downstream use of GNNs. A prominent example is the recent work of Franceschi et al. (2019) (named LDS), which is a meta-learning approach that treats the graph as a hyperparameter in a bilevel optimization framework (Franceschi et al., 2017). Specifically, let X_train and X_val denote the training and the validation sets of time series respectively, A ∈ {0, 1}^{n×n} denote the graph adjacency matrix of the n time series, w denote the parameters used in the GNN, and L and F denote the loss functions used during training and validation respectively (which may not be identical). LDS formulates the problem as learning the probability matrix θ ∈ [0, 1]^{n×n}, which parameterizes the element-wise Bernoulli distribution from which the adjacency matrix A is sampled:

$$\min_{\theta} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta)}\left[ F(A, w(\theta), X_{val}) \right], \quad \text{s.t.} \quad w(\theta) = \operatorname*{argmin}_{w} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta)}\left[ L(A, w, X_{train}) \right]. \qquad (1)$$
Formulation (1) gives a bilevel optimization problem. The constraint (which by itself is an optimization problem) defines the GNN weights as a function of the given graph, so that the objective is to optimize over such a graph only. Note that for differentiability, one does not directly operate on the discrete graph adjacency matrix A, but on the continuous probabilities θ instead.
LDS has two drawbacks. First, its computation is expensive. The derivative of w with respect to θ is computed by applying the chain rule on a recursive-dynamics surrogate of the inner optimization argmin. Applying the chain rule on this surrogate is equivalent to differentiating an RNN, which is either memory intensive if done in the reverse mode or time consuming if done in the forward mode, when unrolling a deep dynamics. Second, it is challenging to scale. The matrix θ has Θ(n^2) entries to optimize and thus the method is hard to scale to increasingly more time series.
In light of the challenges of LDS, we instead advocate a unilevel optimization:
$$\min_{w} \; \mathbb{E}_{A \sim \mathrm{Ber}(\theta(w))}\left[ F(A, w, X_{train}) \right]. \qquad (2)$$
Formulation (2) trains the GNN model as usual, except that the probability matrix θ (which parameterizes the distribution from which A is sampled) is itself parameterized. We absorb these parameters, together with the GNN parameters, into the notation w. We still use a validation set X_val for usual hyperparameter tuning, but these hyperparameters are not θ as treated by (1). In fact, formulation (1) may need a second validation set to tune other hyperparameters.
The major distinction of our approach from LDS is the parameterization θ(w), as opposed to an inner optimization w(θ). In our approach, a modeler owns the freedom to design the parameterization and better control the number of parameters as n2 increases. To this end, time series representation learning and link prediction techniques offer ample inspiration for modeling. In contrast, LDS is more agnostic as no modeling is needed. The effort, instead, lies in the nontrivial treatment of the inner optimization (in particular, its differentiation).
As such, our approach is advantageous in two regards. First, its computation is less expensive, because the gradient computation of a unilevel optimization is straightforward and efficient and implementations are mature. Second, it better scales, because the number of parameters does not grow quadratically with the number of time series.
We coin our approach GTS (short for “graph for time series”), signaling the usefulness of graph structure learning for enhancing time series forecasting. It is important to note that the end purpose of the graph is to improve forecasting quality, rather than identifying causal relationship of the series or recovering the ground-truth graph, if any. While causal discovery of multiple scalar variables is an
established field, identifying causality among multiple multivariate time series requires a nontrivial extension that spans beyond the current study. On the other hand, the graph, either learned or preexisting, serves as additional information that helps the model better capture global signals and apply on each series. There does not exist a golden measure for the quality of the learned graph except forecasting accuracy. For example, the traffic network does not necessarily offer the best pairwise relationship a GNN can exploit for forecasting traffic series. Nevertheless, to robustify GTS we incorporate regularization that penalizes significant departure from one’s prior belief. If a certain “ground-truth” graph is believed, the learned graph will be a healthy variation of it for a more accurate forecast.
2 RELATED WORK
Time series forecasting has been studied for decades by statisticians. It is out of the scope of this paper to comprehensively survey the literature, but we will focus more on late developments under the deep learning context. Early textbook methods include (vector) autoregressive models (Hamilton, 1994), autoregressive integrated moving average (ARIMA) (Asteriou & Hall, 2011), hidden Markov models (HMM) (Baum & Petrie, 1966), and Kalman filters (Zarchan & Musoff, 2000). Generally speaking, these are linear models that use a window of the past information to predict the next time step, although nonlinear versions with parameterization are subsequently developed.
A notable nonlinear extension was the RNN (Williams et al., 1986), which later evolved into LSTM (Hochreiter & Schmidhuber, 1997), BiLSTM (Schuster & Paliwal, 1997), and GRU (Cho et al., 2014), which addressed several limitations of the vanilla RNN, such as the vanishing gradient problem. These architectures are hard to parallelize because of the recurrent nature of the forward and backward computation. More recently, Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) were developed to address parallelization, by introducing attention mechanisms that simultaneously digested past (and future) information. Although these models are more heavily used for sequence data under the context of natural language processing, they are readily applicable for time series as well (Shih et al., 2019; Li et al., 2019).
Graph neural networks (Zhang et al., 2018; Zhou et al., 2018; Wu et al., 2019) emerged quickly in deep learning to handle graph-structured data. Typically, graph nodes are represented by feature vectors, but for the case of time series, a number of specialized architectures were recently developed; see, e.g., GCRN (Seo et al., 2016), DCRNN (Li et al., 2018), STGCN (Yu et al., 2018), and TGCN (Zhao et al., 2019). These architectures essentially combine the temporal recurrent processing with graph convolution to augment the representation learning of the individual time series.
Graph structure learning (not necessarily for time series) appears in various contexts and thus methods span a broad spectrum. One field of study is probabilistic graphical models and casual inference, whereby the directed acyclic structure is enforced. Gradient-based approaches in this context include NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), and GraN-DAG (Lachapelle et al., 2020). On the other hand, a general graph may still be useful without resorting to causality. LDS (Franceschi et al., 2019) is a meta-learning approach that demonstrates to improve the performance on node classification tasks. MTGNN (Wu et al., 2020) parameterizes the graph as a degree-k graph, which is learned end-to-end with a GNN for forecasting time series. We, on the other hand, allow a more general structural prior for the graph. NRI (Kipf et al., 2018) adopts a latent-variable approach and learns a latent graph for forecasting system dynamics. Our approach is closely related to NRI and we will compare with it in the following section after introducing the technical details.
3 METHOD
In this section, we present the proposed GTS method, elaborate the model parameterization, and describe the training technique. We also highlight the distinctions from NRI (Kipf et al., 2018).
Let us first settle the notations. Denote by X the training data, which is a three-dimensional tensor, with the three dimensions being feature, time, and the n series. Superscript refers to the series and subscript refers to time; that is, X^i denotes the i-th series for all features and time, and X_t denotes the t-th time step for all features and series. There are in total S time steps for training. The model will use a window of T steps to forecast the next τ steps. For each valid t, denote by
Figure 1: GTS architecture. (Schematic: a feature extractor and link predictor applied to the entire data X produce the learned structure θ; a graph A is sampled from θ; recurrent graph convolution layers map the windowed data at steps t+1 : t+T to forecasts at steps t+T+1 : t+T+τ.)
$\hat{X}_{t+T+1:t+T+\tau} = f(A, w, X_{t+1:t+T})$ the model, which forecasts $\hat{X}_{t+T+1:t+T+\tau}$ from observations $X_{t+1:t+T}$, through exploiting the graph structure A and being parameterized by w. Using $\ell$ to denote the loss function between the prediction and the ground truth, a typical training objective reads

$$\sum_t \ell\left( f(A, w, X_{t+1:t+T}),\; X_{t+T+1:t+T+\tau} \right). \qquad (3)$$

Three remaining details are the parameterization of A, the model f, and the loss $\ell$.
3.1 GRAPH STRUCTURE PARAMETERIZATION
The binary matrix A ∈ {0, 1}^{n×n} by itself is challenging to parameterize, because it requires a differentiable function that outputs discrete values 0/1. A natural idea is to let A be a random variable of the matrix Bernoulli distribution parameterized by θ ∈ [0, 1]^{n×n}, so that A_{ij} is independent for all the (i, j) pairs with A_{ij} ∼ Ber(θ_{ij}). Here, θ_{ij} is the success probability of a Bernoulli distribution. Then, the training objective (3) needs to be modified to

$$\mathbb{E}_{A \sim \mathrm{Ber}(\theta)}\left[ \sum_t \ell\left( f(A, w, X_{t+1:t+T}),\; X_{t+T+1:t+T+\tau} \right) \right]. \qquad (4)$$
As hinted in Section 1, we further parameterize θ as θ(w), because otherwise the n^2 degrees of freedom in θ render the optimization hard to scale. Such a parameterization, however, imposes a challenge on differentiability, if the expectation (4) is evaluated through a sample average: the gradient of (4) does not flow through A in a usual Bernoulli sampling. Hence, we apply the Gumbel reparameterization trick proposed by Jang et al. (2017) and Maddison et al. (2017): $A_{ij} = \mathrm{sigmoid}\left( \left( \log(\theta_{ij}/(1-\theta_{ij})) + (g^1_{ij} - g^2_{ij}) \right)/s \right)$, where $g^1_{ij}, g^2_{ij} \sim \mathrm{Gumbel}(0, 1)$ for all i, j. When the temperature s → 0, A_{ij} = 1 with probability θ_{ij} and 0 with the remaining probability. In practice, we anneal s progressively in training such that it tends to zero.
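A small sketch of the Gumbel reparameterization described above (illustrative only; the epsilon for numerical stability and the default temperature are assumptions):

```python
import torch


def sample_adjacency(theta, s=0.5, eps=1e-10):
    """Differentiable sample A ~ Ber(theta): sigmoid((log(theta/(1-theta)) + g1 - g2) / s)."""
    g1 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)   # Gumbel(0, 1) noise
    g2 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
    logits = torch.log(theta + eps) - torch.log(1.0 - theta + eps)
    return torch.sigmoid((logits + g1 - g2) / s)                      # soft A; hardens as s -> 0
```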
For the parameterization of θ, we use a feature extractor to yield a feature vector for each series and a link predictor that takes in a pair of feature vectors and outputs a link probability. The feature extractor maps a matrix X^i to a vector z_i for each i. Many sequence architectures can be applied; we opt for a simple one. Specifically, we perform convolution along the temporal dimension, vectorize along this dimension, and apply a fully connected layer to reduce the dimension; that is, z_i = FC(vec(Conv(X^i))). Note that the feature extractor is conducted on the entire sequence rather than a window of T time steps. Weights are shared among all series.

The link predictor maps a pair of vectors (z_i, z_j) to a scalar θ_{ij} ∈ [0, 1]. We concatenate the two vectors and apply two fully connected layers to achieve this; that is, θ_{ij} = FC(FC(z_i ‖ z_j)). The last activation needs to be a sigmoid. See the top part of Figure 1.
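A minimal sketch of this structure-learning head (the convolution kernel size, stride, and layer widths are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn


class StructureLearner(nn.Module):
    """z_i = FC(vec(Conv(X^i))); theta_ij = FC(FC(z_i || z_j)) with a sigmoid output."""

    def __init__(self, total_steps, channels=8, emb=64, hidden=256):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=10, stride=5)   # temporal convolution
        conv_len = (total_steps - 10) // 5 + 1
        self.fc = nn.Linear(channels * conv_len, emb)
        self.link = nn.Sequential(
            nn.Linear(2 * emb, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):                       # x: (n, 1, S) -- n series over S training steps
        z = self.fc(self.conv(x).flatten(1))    # per-series embeddings, (n, emb)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.link(pairs).squeeze(-1)     # theta: (n, n) edge probabilities
```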
3.2 GRAPH NEURAL NETWORK FORECASTING
The bottom part of Figure 1 is the forecasting model f. We use a sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) to map $X^i_{t+1:t+T}$ to $X^i_{t+T+1:t+T+\tau}$ for each series i. Seq2seq is
typically a recurrent model, but with a graph structure available among the series, we leverage recurrent graph convolution to handle all series simultaneously, as opposed to the usual recurrent mechanism that treats each series separately.
Specifically, for each time step t′, the seq2seq model takes X_{t′} for all series as input and updates the internal hidden state from H_{t′−1} to H_{t′}. The encoder part of the seq2seq performs recurrent updates from t′ = t + 1 to t′ = t + T, producing H_{t+T} as a summary of the input. The decoder part uses H_{t+T} to continue the recurrence and evolves the hidden state for another τ steps. Each hidden state H_{t′}, t′ = t + T + 1 : t + T + τ, simultaneously serves as the output X̂_{t′} and the input to the next time step.
The recurrence that accepts input and updates hidden states collectively for all series uses a graph convolution to replace the usual multiplication with a weight matrix. Several existing architectures serve this purpose (e.g., GCRN (Seo et al., 2016), STGCN (Yu et al., 2018), and T-GCN (Zhao et al., 2019)), but we use the diffusion convolutional GRU defined in DCRNN (Li et al., 2018) because it is designed for directed graphs:
$$\begin{aligned}
R_{t'} &= \mathrm{sigmoid}(W_R \star_A [X_{t'} \,\|\, H_{t'-1}] + b_R), \\
C_{t'} &= \tanh(W_C \star_A [X_{t'} \,\|\, (R_{t'} \odot H_{t'-1})] + b_C), \\
U_{t'} &= \mathrm{sigmoid}(W_U \star_A [X_{t'} \,\|\, H_{t'-1}] + b_U), \\
H_{t'} &= U_{t'} \odot H_{t'-1} + (1 - U_{t'}) \odot C_{t'},
\end{aligned}$$

where the graph convolution $\star_A$ is defined as

$$W_Q \star_A Y = \sum_{k=0}^{K} \left( w^Q_{k,1} (D_O^{-1} A)^k + w^Q_{k,2} (D_I^{-1} A^T)^k \right) Y,$$

with $D_O$ and $D_I$ being the out-degree and in-degree matrices and ‖ being concatenation along the feature dimension. Here, $w^Q_{k,1}$, $w^Q_{k,2}$, $b_Q$ for Q = R, U, C are model parameters and the diffusion degree K is a hyperparameter.
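For illustration, a dense-matrix sketch of the diffusion convolution ⋆_A used inside the GRU cell (weight shapes and the handling of zero-degree nodes are our own simplifications, not the DCRNN reference implementation):

```python
import torch


def diffusion_conv(A, Y, w_out, w_in):
    """W ⋆_A Y = sum_k (w_{k,1} (D_O^{-1} A)^k + w_{k,2} (D_I^{-1} A^T)^k) Y.

    A: (n, n) adjacency; Y: (n, d) inputs; w_out, w_in: lists of K+1 (d, d_out) weights.
    """
    P_out = A / A.sum(dim=1, keepdim=True).clamp(min=1e-10)      # D_O^{-1} A
    P_in = A.T / A.T.sum(dim=1, keepdim=True).clamp(min=1e-10)   # D_I^{-1} A^T
    out = torch.zeros(Y.size(0), w_out[0].size(1))
    y_out, y_in = Y, Y
    for k in range(len(w_out)):                                  # k = 0, ..., K
        out = out + y_out @ w_out[k] + y_in @ w_in[k]
        y_out, y_in = P_out @ y_out, P_in @ y_in                 # one more diffusion step
    return out
```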
We remark that as a subsequent experiment corroborates, this GNN model can be replaced by other similar ones (e.g., T-GCN), such that the forecast performance remains similar while still being superior over all baselines. In comparison, the more crucial part of our proposal is the structure learning component (presented in the preceding subsection), without which it falls back to a model either using no graphs or needing a supplied one, both performing less well.
3.3 TRAINING, OPTIONALLY WITH A PRIORI KNOWLEDGE OF THE GRAPH
The base training loss (per window) is the mean absolute error between the forecast and the ground truth
$$\ell^t_{base}(\hat{X}_{t+T+1:t+T+\tau}, X_{t+T+1:t+T+\tau}) = \frac{1}{\tau} \sum_{t'=t+T+1}^{t+T+\tau} \left| \hat{X}_{t'} - X_{t'} \right|.$$

Additionally, we propose a regularization that improves graph quality, through injecting a priori knowledge of the pairwise interaction into the model. Sometimes an actual graph among the time series is known, such as the case of the traffic network mentioned in Section 1. Generally, even if an explicit structure is unknown, a neighborhood graph (such as a kNN graph) may still serve as reasonable knowledge. The use of kNN encourages sparsity if k is small, which circumvents the drawback of $\ell_1$ constraints that cannot be easily imposed because the graph is not a raw variable to optimize. As such, we use the cross-entropy between θ and the a priori graph $A^a$ as the regularization:

$$\ell_{reg} = \sum_{ij} -A^a_{ij} \log \theta_{ij} - (1 - A^a_{ij}) \log(1 - \theta_{ij}). \qquad (5)$$

The overall training loss is then $\sum_t \ell^t_{base} + \lambda \ell_{reg}$, with λ > 0 being the regularization magnitude.
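A sketch of the overall objective (base MAE plus the Eqn 5 regularizer); writing the cross-entropy term with binary_cross_entropy against the a priori adjacency A^a is one direct way to implement it, and lam stands for the λ above:

```python
import torch
import torch.nn.functional as F


def gts_loss(forecast, target, theta, A_prior, lam=1.0):
    """forecast, target: (..., tau) horizons; theta: (n, n) probs; A_prior: (n, n) float 0/1 tensor."""
    base = torch.mean(torch.abs(forecast - target))                 # per-window MAE, ell_base
    reg = F.binary_cross_entropy(theta, A_prior, reduction='sum')   # Eqn 5, ell_reg
    return base + lam * reg
```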
3.4 COMPARISON WITH NRI
GTS appears similar to NRI (Kipf et al., 2018) on the surface, because both compute a pairwise structure from multiple time series and use the structure to improve forecasting. In these two methods, the architecture to compute the structure, as well as the one to forecast, bear many differences; but these differences are only secondary. The most essential distinction is the number of structures. To avoid confusion, here we say “structure” (θ) rather than “graph” (A) because there are combinatorially many graph samples from the same structure. Our approach produces one single structure given one set of n series. On the contrary, the autoencoder approach adopted by NRI produces different structures given different encoding inputs. Hence, a feasible use of NRI can only occur in the
following two manners. (a) A single set of n series is given and training is done on windowed data, where each window will produce a separate structure. (b) Many sets are given and training is done through iterating each set, which corresponds to a separate structure. Both cases are different from our scenario, where a single set of time series is given and a single structure is produced.
Fundamentally, NRI is a variational autoencoder and thus the inference of the structure is an amortized inference: under setting (b) above, the inferred structure is a posterior given a set of series. The amortization uses an encoder parameterization to free off the tedious posterior inference whenever a new set of series arrives. Moreover, under the evidence lower bound (ELBO) training objective, the prior is a graph, each edge of which takes a value uniformly in [0, 1]. In our case, on the contrary, a single structure is desired. Thus, amortized inference is neither necessary nor relevant. Furthermore, one may interpret the a priori information Aa for regularization as a “structural prior;” however, for each node pair it offers a stronger preference on the existence/absence of an edge than a uniform probability.
4 EXPERIMENTS
In this section, we conduct extensive experiments to show that the proposed method GTS outperforms a comprehensive set of forecasting methods, including one that learns a hidden graph structure (LDS, adapted for time series). We also demonstrate that GTS is computationally efficient and is able to learn a graph close to the a priori knowledge through regularization, with little compromise on the forecasting quality.
4.1 SETUP
Data sets. We experiment with two benchmark data sets METR-LA and PEMS-BAY from Li et al. (2018) and a proprietary data set PMU. The first two are traffic data sets with given graphs serving as ground truths; we perform no processing and follow the same configuration as in the referenced work for experimentation. The last one is a sensor network of the U.S. power grid without a given grid topology. For details, see Appendix Section A. For all data sets, we use a temporal 70/10/20 split for training, validation, and testing, respectively.
Baselines. We compare with a number of forecasting methods:
1. Non-deep learning methods: historical average (HA), ARIMA with Kalman filter (ARIMA), vector auto-regression (VAR), and support vector regression (SVR). The historical average accounts for weekly seasonality and predicts for a day by using the weighted average of the same day in the past few weeks.
2. Deep learning methods that treat each series separately (i.e., no graph): feed-forward neural network (FNN) and LSTM.
3. GNN method applied on the given graph (or kNN graph for PMU): DCRNN (Li et al., 2018).
4. GNN methods that simultaneously learn a graph structure. We use LDS (Franceschi et al., 2019) to learn the graph, wherein the forecast model is DCRNN. We name the method “LDS” for short. Additionally, we compare with NRI (Kipf et al., 2018).
5. Variant of GTS: We maintain the graph structure parameterization but replace the DCRNN forecast model by T-GCN (Zhao et al., 2019). We name the variant “GTSv.”
Except LDS and NRI, all baselines follow the configurations presented in Li et al. (2018). For LDS, we follow Franceschi et al. (2019). For NRI, we follow Kipf et al. (2018).
Evaluation metrics. All methods are evaluated with three metrics: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
For details on hyperparameter setting and training platform, see Appendix Section B.
4.2 RESULTS
Forecasting quality. We first evaluate the performance of GTS through comparing it with all the aforementioned baselines. Because of the significant memory consumption of NRI, this method is executed on only the smaller data set PMU. The tasks are to forecast 15, 30, and 60 minutes.
Table 1 summarizes the results for METR-LA. A few observations follow. (1) Deep learning methods generally outperform non-deep learning methods, except the historical average, which performs on par with deep learning in some metrics. Seasonality is a strong indicator of the repeating traffic patterns and, not surprisingly, HA performs reasonably well despite its simplicity. (2) Among the deep learning methods, graph-based models outperform non-graph models. This result corroborates the premise of this work: graph structure is helpful. (3) Among the graph-based methods, LDS performs slightly better than DCRNN. The difference between these two methods is that the latter employs the given graph, which may or may not imply direct interactions, whereas the former learns a graph in a data-driven manner. Their performances however are quite similar. (4) The most encouraging result is that the proposed method GTS significantly outperforms LDS and hence DCRNN. GTS learns a graph structure through parameterization, rather than treating it as a (hyper)parameter, as is the case in LDS. (5) The performance of the variant GTSv stays between GTS and LDS. This observation corroborates that the proposed structure learning component contributes more crucially to the overall performance than does a careful choice of the GNN forecasting component.
To dive into the behavioral difference between GTS and DCRNN, we plot in Figure 2 two forecasting examples. One sees that both methods produce smooth series. In the top example, overall the GTS curve is closer to the moving average of the ground truth than is the DCRNN curve (see e.g., the left part and the U shape). In the bottom example, the GTS curve better captures the sharp dip toward the end of the series. In both examples, there exist several short but deep downward spikes. Such anomalous data are captured by neither method.
Additionally, we summarize the results for PEMS-BAY and PMU in Tables 3 and 2, respectively (see Appendix Section C for the former). The observations are rather similar to those of METR-LA. Our model produces the best prediction in all scenarios and under all metrics. Moreover, for the PMU data set, NRI performs competitively, second to GTS/GTSv and better than LDS in most of the cases.
Computational efficiency. We compare the training costs of the graph-based methods: DCRNN, LDS, and GTS. See Figure 3. DCRNN is the most efficient to train, since no graph structure learning is involved. To learn the graph, LDS needs orders of magnitude more time than does DCRNN. Recall that LDS employs a bilevel optimization (1), which is computationally highly challenging. In contrast, the proposed method GTS learns the graph structure as a byproduct of the model training (2). Its training time is approximately three times that of DCRNN, a favorable overhead compared with the prohibitive cost of LDS.
Effect of regularization. We propose in Section 3.3 using regularization to incorporate a priori knowledge of the graph. One salient example of knowledge is sparsity, which postulates that many node pairs barely interact. We show the effect of regularization on the data set PMU with the use of a kNN graph as knowledge. The task is 15-minute forecasting and results (expected degree and MAE) are plotted in Figure 4. The decreasing curves in both plots indicate that using a smaller k or increasing the regularization magnitude produces sparser graphs. The bars give the MAEs, all around 2.4e-4, indicating equally good forecasting quality. (Note that the MAE for LDS is 4.9e-4.)
Learned structures. To examine the learned structure θ, we further show its difference from the given graph adjacency matrix Aa (binary) and visualize one particular example in Figure 5. The difference is defined as ℓ_reg/n² (average cross entropy; see (5)). One reads that when λ = 20, the difference is 0.34. It indicates that the learned probabilities in θ are on average 0.3 away from the entries of Aa, because − log(1 − 0.3) ≈ 0.34. When using 0.5 as a cutoff threshold for θ, such a difference possibly results in false-positive edges (existing in θ but not in Aa; orange dashed) and false-negative edges (existing in Aa but not in θ; none in the example).
Note that the regularization strength λ weighs the forecasting error (MAE) and the cross entropy in the loss function. When λ = 0, the training loss is not regularized, yielding optimal forecast results reported in Table 2. When λ = ∞, one effectively enforces θ to be identical to Aa and hence the model reduces to DCRNN, whose forecasting performance is worse than our model. The interesting question is when λ interpolates between the two extremes, whether it is possible to find a sweet spot such that forecasting performance is close to our model but meanwhile θ is close to Aa. Figure 5 suggests positively. We stress that our model does not intend to learn a “ground-truth” graph (e.g., the traffic network or the power grid); but rather, learn a structure that a GNN can exploit to improve forecast.
Other structural priors. In the PMU data set, we use a synthetic kNN structure prior Aa due to the lack of a known graph. For METR-LA and PEMS-BAY, however, such a graph can be constructed based on spatial proximity (Li et al., 2018). We show in Figure 6 the effect of regularization for these data sets. Similar to the findings of PMU, moving λ between 0 and ∞ interpolates two extremes: the best forecast quality and recovery of Aa. With a reasonable choice of λ (e.g., 0.3), the forecast quality degrades only slightly but the learned structure is rather close to the given Aa, judged from the average cross entropy.
5 CONCLUSIONS
We have presented a time series forecasting model that learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. Both the graph and the GNN are learned end-to-end, maximally exploiting the pairwise interactions among data streams. The graph structure is parameterized by neural networks rather than being treated as a (hyper)parameter, hence significantly reducing the training cost compared with a recently proposed bilevel optimization approach LDS. We conduct comprehensive comparisons with a number of baselines, including nondeep learning methods and deep learning methods (which either ignore the pairwise interaction, use a given graph, or learn a graph by using LDS), and show that our approach attains the best forecasting quality. We also demonstrate that regularization helps incorporate a priori knowledge, rendering the learned graph a healthy variation of the given one for more accurate forecast.
ACKNOWLEDGMENT AND DISCLAIMER
This material is based upon work supported by the Department of Energy under Award Number(s) DE-OE0000910. C. Shang was also supported by National Science Foundation grant IIS-1718738 (to J. Bi) during this work. J. Bi was additionally supported by National Institutes of Health grants K02-DA043063 and R01-DA051922. This report was prepared as an account of work sponsored by agencies of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
A ADDITIONAL DETAILS OF DATA SETS
METR-LA is a traffic data set collected from loop detectors in the highways of Los Angeles, CA (Jagadish et al., 2014). It contains 207 sensors, each of which records four months of data at the frequency of five minutes. A graph of sensors is given; it was constructed by imposing a radial basis function on the pairwise distance of sensors at a certain cutoff. For more information see Li et al. (2018). We perform no processing and follow the same configuration as in Li et al. (2018).
PEMS-BAY is also a traffic data set, collected by the California Transportation Agencies Performance Measurement System. It includes 325 sensors in the Bay Area for a period of six months, at the same five-minute frequency. Construction of the graph is the same as that of METR-LA. No processing is performed.
PMU contains time series data recorded by the phasor measurement units (PMUs) deployed across the U.S. power grid. We extract one month of data (February 2017) from one interconnect of the grid, which includes 42 PMU sources. Each PMU records a number of state variables and we use only the voltage magnitude and the current magnitude. The PMUs sample the system states at high rates (either 30 or 60 Hertz). We aggregate every five minutes, yielding a data frequency the same as the above two data sets. Different from them, this data set offers neither the grid topology, the sensor identities, nor the sensor locations. Hence, a “ground truth” graph is unknown.
However, it is highly plausible that the PMUs interact in a nontrivial manner, since some series are highly correlated whereas others are much less so. Figure 7 shows three example series. Visually, the first series appears more correlated to the second one than to the third one. For example, in the first two series, the blue curves (the variable ip m) are visually seasonal and synchronous. Moreover, inside the purple window, the red curves (the variable vp m) in the first two series show three downward spikes, which are missing in the third series. Indeed, the correlation matrix between the first two series is (0.76, −0.04; −0.31, 0.96) and that between the first and the third series is (0.18, −0.10; 0.22, 0.22). Such an observation justifies graph structure learning among the PMUs.
It is important to note a few processing steps of the data set because of its noisy and incomplete nature. The data set contains a fair amount of unreliable readings (e.g., outliers). Hence, we consult domain experts and set lower and upper bounds to filter out extremely small and large values. Accounting for missing data, within every five minutes we take the mean of the available readings if any, or impute with the mean of the entire series.
B ADDITIONAL DETAILS OF EXPERIMENT SETTING
Hyperparameters. Several hyperparameters are tuned through grid search: initial learning rate {0.1, 0.01, 0.001}, dropout rate {0.1, 0.2, 0.3}, embedding size of LSTM {32, 64, 128, 256}, the k value in kNN {5, 10, 20, 30}, and the weight of regularization {0, 1, 2, 5, 10, 20}. For other hyperparameters, the convolution kernel size in the feature extractor is 10 and the decay ratio of learning rate is 0.1. After tuning, the best initial learning rate for METR-LA and PEMS-BAY is 0.01 and for PMU is 0.001. The optimizer is Adam.
Because the loss function is an expectation (see (1) and (2)), the expectation is computed as an average of 10 random samples. Such an averaging is needed only for model evaluation. In training, one random sample suffices because the optimizer is a stochastic optimizer.
Platform. We implement the models in PyTorch. All experiments are run on one compute node of an IBM Power9 server. The compute node contains 80 CPU cores and four NVidia V100 GPUs, but because of scheduling limitation we use only one GPU.
Code is available at https://github.com/chaoshangcs/GTS.
C ADDITIONAL RESULTS FOR FORECASTING QUALITY
See Table 3 for the forecast results of PEMS-BAY. The observations are rather similar to those of Tables 1 and 2 in Section 4.2. In particular, GTS produces the best prediction in all scenarios and under all metrics.
D UPDATES OF TABLES 1, 2, AND 3
Our implementation had been developed based on the PyTorch version of DCRNN (https://github.com/chnsh/DCRNN_PyTorch). It was brought to our attention recently that this version calculated the evaluation metrics MAE/RMSE/MAPE in a manner slightly different from that used to report the results of DCRNN in the official publication (https://github.com/chnsh/DCRNN_PyTorch/issues/3). We updated Tables 1, 2, and 3 by correcting the calculations to be consistent with the official DCRNN results. See Tables 4, 5, and 6. Despite the correction, observations and conclusions regarding the comparison of different methods remain unchanged. | 1. What is the main contribution of the paper in multivariate time series forecasting?
2. How does the proposed approach differ from previous works such as NRI and LDS?
3. Is the empirical evaluation convincing enough to support the claims made in the paper?
4. What is the effectiveness of the regularization method used in the proposed approach?
5. Can the learned graph structure be considered novel and different from previous works?
6. Are there any concerns regarding the comparison with other works in the related field? | Review | Review
The paper proposes an approach for multivariate time series forecasting by trying to estimate dependence across dimensions via a learned graph structure. The dimensions are considered as nodes in a graph, and the problem is mapped to learning a discrete graph structure that can help with downstream forecasting task. The paper shows that a graph neural network (GNN) can be leveraged even though an explicit structure is unknown, to improve forecasting performance. This is achieved while learning the graph structure and forecasting architectures in an end-to-end fashion. The proposed approach is computationally efficient compared to a bilevel optimization approach where a discrete graph structure is learnt in a meta-learning framework. The approach is further claimed to be able to incorporate apriori knowledge of the graph structure by proposing a regularizer that ensures that the learned graph structure stays close to the known graph structure. The proposed approach improves forecasting performance in comparison to several strong baseline methods on three real-world datasets. In general, the paper is well-written and easy to follow.
Attempts have been made in the past for learning such a discrete graph structure from data. The authors mention LDS [2] and NRI [3] as closest to their work. The authors attempt to explicitly compare the proposed approach to NRI. The authors claim that "The most essential distinction is the number of structures": one structure is learned in the proposed approach while many structures are learned in NRI. From what I could follow, this single structure is achieved by using the entire multivariate time series data to obtain a feature vector for each dimension (series) via a neural network instead of using window-wise data. In this sense, this appears to be a simplification of NRI, rather than being something novel and different. The proposed setup and the approach are different and novel compared to NRI as the "Amortized inference is not desired nor relevant": I am not sure how this makes the proposed approach non-trivial given NRI? Furthermore, in contrast to LDS, the key contribution of the proposed approach is to get rid of the bilevel optimization. But then, that also seems to rely mainly on the Gumbel reparameterization trick which has been used in NRI for forecasting albeit for a slightly different setting.
Another very closely related approach to the proposed one is that in [1], which the authors seem to be unaware of. One of the main claims of the proposed approach is an attempt to learn the graph structure and forecasting model in an end-to-end learning fashion. However, this problem has already been attempted in [1]. Therefore, it is difficult to comment on the novelty and contribution of the paper without a comparison with [1], especially since most of the benchmark tasks and datasets used in this paper are present in [1] as well.
I have some concerns regarding the empirical evaluation:
Can the observations in Fig. 2 be attributed to the graph learning part? Despite the fact that the only difference between DCRNN and the proposed method seems to be the graph structure learning part, it is still not obvious qualitatively as to why the observations of Fig. 2 can be attributed to the graph learning part, e.g. why is "GTS curve better captures the sharp dip toward the end of the series" attributable to the graph learning part qualitatively or as per domain knowledge? I think an empirical analysis on a synthetic dataset to support such claims related to ablation could be useful.
In Fig. 4a, the regularization seems to induce sparsity and has been observed by the authors as well: "increasing the regularization magnitude produces sparser graphs". But the efficacy of such regularization on forecasting performance is not clear as k=0 (no kNN regularization) seems to have the best forecasting performance in Fig. 4a. This seems to imply that using kNN graph knowledge is not adding any value. Similar observations can be made in Fig. 4b, where increasing
λ leads to increasing MAE. As such, the effect or usefulness of regularization is not clear.
The analysis on regularization and learned structures is done using a kNN graph as apriori knowledge for the PMU dataset. Rather than relying on another data-driven graph structure (kNN graph) as ground truth, I wonder if it would be useful to do such analysis on the public datasets (METR-LA and PEMS-BAY) for which the ground truth structures are actually known. As such, the evaluations on "effect of regularization" and "learned structures" do not seem conclusive.
In Tables 2 and 3, some of the bolds also apply to GTSv but are missing.
Given the above points, the originality of the work and the contributions are not clear.
Other minor points:
Related Work can also benefit from more precise references to papers that use the mentioned architectures for time series forecasting. The current references are too generic.
typo: hyperparemeter
References:
[1] Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks, Wu et al., KDD 2020. https://dl.acm.org/doi/abs/10.1145/3394486.3403118
[2] LDS: Learning Discrete Structures for Graph Neural Networks, Franceschi et al., ICML 2019.
[3] NRI: Neural Relational Inference for Interacting Systems, Kipf et al., ICML 2018.
ICLR | Title
Discrete Graph Structure Learning for Forecasting Multiple Time Series
Abstract
Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
1 INTRODUCTION
Time series data are widely studied in science and engineering that involve temporal measurements. Time series forecasting is concerned with the prediction of future values based on observed ones in the past. It has played important roles in climate studies, market analysis, traffic control, and energy grid management (Makridakis et al., 1997) and has inspired the development of various predictive models that capture the temporal dynamics of the underlying system. These models range from early autoregressive approaches (Hamilton, 1994; Asteriou & Hall, 2011) to the recent deep learning methods (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019).
Analysis of univariate time series (a single longitudinal variable) has been extended to multivariate time series and multiple (univariate or multivariate) time series. Multivariate forecasting models find strong predictive power in stressing the interdependency (and even causal relationship) among the variables. The vector autoregressive model (Hamilton, 1994) is an example of multivariate analysis, wherein the coefficient magnitudes offer hints into the Granger causality (Granger, 1969) of one variable to another.
For multiple time series, pairwise similarities or connections among them have also been explored to improve the forecasting accuracy (Yu et al., 2018). An example is the traffic network where each node denotes a time series recording captured by a particular sensor. The spatial connections of the roads offer insights into how traffic dynamics propagates along the network. Several graph neural network (GNN) approaches (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019) have been proposed recently to leverage the graph structure for forecasting all time series simultaneously.
∗This work was done while C. Shang was an intern at MIT-IBM Watson AI Lab, IBM Research. †To whom correspondence should be addressed.
The graph structure however is not always available or it may be incomplete. There could be several reasons, including the difficulty in obtaining such information or a deliberate shielding for the protection of sensitive information. For example, a data set comprising sensory readings of the nation-wide energy grid is granted access to specific users without disclosure of the grid structure. Such practical situations incentivize the automatic learning of the hidden graph structure jointly with the forecasting model.
Because GNN approaches show promise in forecasting multiple interrelated time series, in this paper we are concerned with structure learning methods applicable to the downstream use of GNNs. A prominent example is the recent work of Franceschi et al. (2019) (named LDS), which is a meta-learning approach that treats the graph as a hyperparameter in a bilevel optimization framework (Franceschi et al., 2017). Specifically, let Xtrain and Xval denote the training and the validation sets of time series respectively, A ∈ {0, 1}^{n×n} denote the graph adjacency matrix of the n time series, w denote the parameters used in the GNN, and L and F denote the loss functions used during training and validation respectively (which may not be identical). LDS formulates the problem as learning the probability matrix θ ∈ [0, 1]^{n×n}, which parameterizes the element-wise Bernoulli distribution from which the adjacency matrix A is sampled:
min_θ E_{A∼Ber(θ)} [F(A, w(θ), Xval)],   s.t.   w(θ) = argmin_w E_{A∼Ber(θ)} [L(A, w, Xtrain)].   (1)
Formulation (1) gives a bilevel optimization problem. The constraint (which by itself is an optimization problem) defines the GNN weights as a function of the given graph, so that the objective is to optimize over such a graph only. Note that for differentiability, one does not directly operate on the discrete graph adjacency matrix A, but on the continuous probabilities θ instead.
LDS has two drawbacks. First, its computation is expensive. The derivative of w with respect to θ is computed by applying the chain rule on a recursive-dynamics surrogate of the inner optimization argmin. Applying the chain rule on this surrogate is equivalent to differentiating an RNN, which is either memory intensive if done in the reverse mode or time consuming if done in the forward mode, when unrolling a deep dynamics. Second, it is challenging to scale. The matrix θ has Θ(n²) entries to optimize and thus the method is hard to scale to increasingly more time series.
In light of the challenges of LDS, we instead advocate a unilevel optimization:
min_w E_{A∼Ber(θ(w))} [F(A, w, Xtrain)].   (2)
Formulation (2) trains the GNN model as usual, except that the probabilities θ (which parameterize the distribution from which A is sampled) are themselves parameterized. We absorb these parameters, together with the GNN parameters, into the notation w. We still use a validation set Xval for usual hyperparameter tuning, but these hyperparameters are not θ as treated by (1). In fact, formulation (1) may need a second validation set to tune other hyperparameters.
The major distinction of our approach from LDS is the parameterization θ(w), as opposed to an inner optimization w(θ). In our approach, a modeler has the freedom to design the parameterization and better control the number of parameters as n² increases. To this end, time series representation learning and link prediction techniques offer ample inspiration for modeling. In contrast, LDS is more agnostic as no modeling is needed. The effort, instead, lies in the nontrivial treatment of the inner optimization (in particular, its differentiation).
As such, our approach is advantageous in two regards. First, its computation is less expensive, because the gradient computation of a unilevel optimization is straightforward and efficient and implementations are mature. Second, it better scales, because the number of parameters does not grow quadratically with the number of time series.
We coin our approach GTS (short for “graph for time series”), signaling the usefulness of graph structure learning for enhancing time series forecasting. It is important to note that the end purpose of the graph is to improve forecasting quality, rather than identifying causal relationship of the series or recovering the ground-truth graph, if any. While causal discovery of multiple scalar variables is an
established field, identifying causality among multiple multivariate time series requires a nontrivial extension that spans beyond the current study. On the other hand, the graph, either learned or preexisting, serves as additional information that helps the model better capture global signals and apply on each series. There does not exist a golden measure for the quality of the learned graph except forecasting accuracy. For example, the traffic network does not necessarily offer the best pairwise relationship a GNN can exploit for forecasting traffic series. Nevertheless, to robustify GTS we incorporate regularization that penalizes significant departure from one’s prior belief. If a certain “ground-truth” graph is believed, the learned graph will be a healthy variation of it for a more accurate forecast.
2 RELATED WORK
Time series forecasting has been studied for decades by statisticians. It is out of the scope of this paper to comprehensively survey the literature, but we will focus more on late developments under the deep learning context. Early textbook methods include (vector) autoregressive models (Hamilton, 1994), autoregressive integrated moving average (ARIMA) (Asteriou & Hall, 2011), hidden Markov models (HMM) (Baum & Petrie, 1966), and Kalman filters (Zarchan & Musoff, 2000). Generally speaking, these are linear models that use a window of the past information to predict the next time step, although nonlinear versions with parameterization are subsequently developed.
A notable nonlinear extension was the RNN (Williams et al., 1986), which later evolved into LSTM (Hochreiter & Schmidhuber, 1997), BiLSTM (Schuster & Paliwal, 1997), and GRU (Cho et al., 2014), which addressed several limitations of the vanilla RNN, such as the vanishing gradient problem. These architectures are hard to parallelize because of the recurrent nature of the forward and backward computation. More recently, Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) were developed to address parallelization, by introducing attention mechanisms that simultaneously digested past (and future) information. Although these models are more heavily used for sequence data under the context of natural language processing, they are readily applicable for time series as well (Shih et al., 2019; Li et al., 2019).
Graph neural networks (Zhang et al., 2018; Zhou et al., 2018; Wu et al., 2019) emerged quickly in deep learning to handle graph-structured data. Typically, graph nodes are represented by feature vectors, but for the case of time series, a number of specialized architectures were recently developed; see, e.g., GCRN (Seo et al., 2016), DCRNN (Li et al., 2018), STGCN (Yu et al., 2018), and TGCN (Zhao et al., 2019). These architectures essentially combine the temporal recurrent processing with graph convolution to augment the representation learning of the individual time series.
Graph structure learning (not necessarily for time series) appears in various contexts and thus methods span a broad spectrum. One field of study is probabilistic graphical models and casual inference, whereby the directed acyclic structure is enforced. Gradient-based approaches in this context include NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), and GraN-DAG (Lachapelle et al., 2020). On the other hand, a general graph may still be useful without resorting to causality. LDS (Franceschi et al., 2019) is a meta-learning approach that demonstrates to improve the performance on node classification tasks. MTGNN (Wu et al., 2020) parameterizes the graph as a degree-k graph, which is learned end-to-end with a GNN for forecasting time series. We, on the other hand, allow a more general structural prior for the graph. NRI (Kipf et al., 2018) adopts a latent-variable approach and learns a latent graph for forecasting system dynamics. Our approach is closely related to NRI and we will compare with it in the following section after introducing the technical details.
3 METHOD
In this section, we present the proposed GTS method, elaborate the model parameterization, and describe the training technique. We also highlight the distinctions from NRI (Kipf et al., 2018).
Let us first settle the notations. Denote by X the training data, which is a three dimensional tensor, with the three dimensions being feature, time, and the n series. Superscript refers to the series and subscript refers to time; that is, Xi denotes the i-th series for all features and time and Xt denotes the t-th time step for all features and series. There are in total S time steps for training. The model will use a window of T steps to forecast the next τ steps. For each valid t, denote by
Figure 1: GTS architecture. (Top: a feature extractor and a link predictor map the entire data X to the learned structure θ, from which a graph A is sampled. Bottom: recurrent graph convolutions in a seq2seq model map windowed data at steps t+1, …, t+T to the forecast at steps t+T+1, …, t+T+τ.)
X̂_{t+T+1:t+T+τ} = f(A, w, X_{t+1:t+T}) the model, which forecasts X̂_{t+T+1:t+T+τ} from observations X_{t+1:t+T}, through exploiting the graph structure A and being parameterized by w. Using ℓ to denote the loss function between the prediction and the ground truth, a typical training objective reads
Σ_t ℓ(f(A, w, X_{t+1:t+T}), X_{t+T+1:t+T+τ}).   (3)
Three remaining details are the parameterization of A, the model f, and the loss ℓ.
3.1 GRAPH STRUCTURE PARAMETERIZATION
The binary matrix A ∈ {0, 1}^{n×n} by itself is challenging to parameterize, because it requires a differentiable function that outputs discrete values 0/1. A natural idea is to let A be a random variable of the matrix Bernoulli distribution parameterized by θ ∈ [0, 1]^{n×n}, so that A_ij is independent for all the (i, j) pairs with A_ij ∼ Ber(θ_ij). Here, θ_ij is the success probability of a Bernoulli distribution. Then, the training objective (3) needs to be modified to
E_{A∼Ber(θ)} [ Σ_t ℓ(f(A, w, X_{t+1:t+T}), X_{t+T+1:t+T+τ}) ].   (4)
As hinted in Section 1, we further parameterize θ as θ(w), because otherwise the n² degrees of freedom in θ render the optimization hard to scale. Such a parameterization, however, imposes a challenge on differentiability, if the expectation (4) is evaluated through sample average: the gradient of (4) does not flow through A in a usual Bernoulli sampling. Hence, we apply the Gumbel reparameterization trick proposed by Jang et al. (2017) and Maddison et al. (2017): A_ij = sigmoid((log(θ_ij/(1 − θ_ij)) + (g¹_ij − g²_ij))/s), where g¹_ij, g²_ij ∼ Gumbel(0, 1) for all i, j. When the temperature s → 0, A_ij = 1 with probability θ_ij and 0 with the remaining probability. In practice, we anneal s progressively in training such that it tends to zero.
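For concreteness, a minimal PyTorch sketch of this sampling step is given below; the helper name sample_adjacency and the small epsilon constant are illustrative, and annealing of the temperature s is assumed to happen in the training loop.

```python
import torch

def sample_adjacency(theta, s):
    """Differentiable (relaxed) sample of a binary adjacency matrix from edge
    probabilities theta in (0, 1), via the binary Gumbel reparameterization.
    As the temperature s -> 0, entries approach hard 0/1 values."""
    eps = 1e-10
    logits = torch.log(theta + eps) - torch.log(1.0 - theta + eps)  # log(theta / (1 - theta))
    # Two independent Gumbel(0, 1) noises obtained by inverse transform sampling.
    g1 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
    g2 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
    return torch.sigmoid((logits + g1 - g2) / s)
```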
For the parameterization of θ, we use a feature extractor to yield a feature vector for each series and a link predictor that takes in a pair of feature vectors and outputs a link probability. The feature extractor maps a matrix X^i to a vector z^i for each i. Many sequence architectures can be applied; we opt for a simple one. Specifically, we perform convolution along the temporal dimension, vectorize along this dimension, and apply a fully connected layer to reduce the dimension; that is, z^i = FC(vec(Conv(X^i))). Note that the feature extractor is conducted on the entire sequence rather than a window of T time steps. Weights are shared among all series.
The link predictor maps a pair of vectors (z^i, z^j) to a scalar θ_ij ∈ [0, 1]. We concatenate the two vectors and apply two fully connected layers to achieve this; that is, θ_ij = FC(FC(z^i ‖ z^j)). The last activation needs to be a sigmoid. See the top part of Figure 1.
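A minimal PyTorch sketch of this parameterization of θ follows; the module name GraphLearner, the layer sizes, and the default kernel width are illustrative placeholders rather than the tuned values used in the experiments.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Feature extractor (temporal convolution + FC, shared across series) followed by
    a pairwise link predictor that outputs the edge probability theta_ij for every pair."""
    def __init__(self, in_feats, seq_len, kernel=10, hid=64, emb=32):
        super().__init__()
        self.conv = nn.Conv1d(in_feats, hid, kernel_size=kernel)
        self.fc = nn.Linear(hid * (seq_len - kernel + 1), emb)
        self.link = nn.Sequential(nn.Linear(2 * emb, hid), nn.ReLU(),
                                  nn.Linear(hid, 1), nn.Sigmoid())

    def forward(self, X):                      # X: (n_series, in_feats, seq_len), entire sequences
        z = self.fc(self.conv(X).flatten(1))   # per-series embeddings, (n_series, emb)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.link(pairs).squeeze(-1)    # theta: (n_series, n_series) in (0, 1)
```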
3.2 GRAPH NEURAL NETWORK FORECASTING
The bottom part of Figure 1 is the forecasting model f. We use a sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) to map X^i_{t+1:t+T} to X^i_{t+T+1:t+T+τ} for each series i. Seq2seq is
typically a recurrent model, but with a graph structure available among the series, we leverage recurrent graph convolution to handle all series simultaneously, as opposed to the usual recurrent mechanism that treats each series separately.
Specifically, for each time step t′, the seq2seq model takes Xt′ for all series as input and updates the internal hidden state from Ht′−1 to Ht′ . The encoder part of the seq2seq performs recurrent updates from t′ = t + 1 to t′ = t + T , producing Ht+T as a summary of the input. The decoder part uses Ht+T to continue the recurrence and evolves the hidden state for another τ steps. Each hidden state Ht′ , t′ = t + T + 1 : t + T + τ , simultaneously serves as the output X̂t′ and the input to the next time step.
The recurrence that accepts input and updates hidden states collectively for all series uses a graph convolution to replace the usual multiplication with a weight matrix. Several existing architectures serve this purpose (e.g., GCRN (Seo et al., 2016), STGCN (Yu et al., 2018), and T-GCN (Zhao et al., 2019)), but we use the diffusion convolutional GRU defined in DCRNN (Li et al., 2018) because it is designed for directed graphs:
R_t′ = sigmoid(W_R ⋆_A [X_t′ ‖ H_t′−1] + b_R),   C_t′ = tanh(W_C ⋆_A [X_t′ ‖ (R_t′ ⊙ H_t′−1)] + b_C),
U_t′ = sigmoid(W_U ⋆_A [X_t′ ‖ H_t′−1] + b_U),   H_t′ = U_t′ ⊙ H_t′−1 + (1 − U_t′) ⊙ C_t′,
where ⊙ denotes the element-wise product and the graph convolution ⋆_A is defined as
W_Q ⋆_A Y = Σ_{k=0}^{K} ( w^Q_{k,1} (D_O^{−1} A)^k + w^Q_{k,2} (D_I^{−1} A^T)^k ) Y,
with D_O and D_I being the out-degree and in-degree matrices and ‖ being concatenation along the feature dimension. Here, w^Q_{k,1}, w^Q_{k,2}, b_Q for Q = R, U, C are model parameters and the diffusion degree K is a hyperparameter.
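To make the convolution concrete, below is a simplified PyTorch sketch of the diffusion step; as a simplifying assumption it uses one scalar weight per diffusion step and direction (w_fwd, w_bwd), whereas the actual DCRNN parameterization mixes feature channels with weight matrices, so this is an illustration rather than the exact operator.

```python
import torch

def diffusion_conv(A, Y, w_fwd, w_bwd, K):
    """Simplified diffusion convolution over a (possibly relaxed) adjacency A (n x n):
    a K-step random walk in the forward (out-degree normalized) and backward
    (in-degree normalized) directions applied to node features Y (n x d)."""
    eps = 1e-10
    P_fwd = A / (A.sum(dim=1, keepdim=True) + eps)          # D_O^{-1} A
    P_bwd = A.t() / (A.t().sum(dim=1, keepdim=True) + eps)  # D_I^{-1} A^T
    out = torch.zeros_like(Y)
    Yf, Yb = Y, Y
    for k in range(K + 1):
        out = out + w_fwd[k] * Yf + w_bwd[k] * Yb            # k-th diffusion term
        Yf, Yb = P_fwd @ Yf, P_bwd @ Yb                      # advance the random walks
    return out
```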
We remark that, as a subsequent experiment corroborates, this GNN model can be replaced by other similar ones (e.g., T-GCN), such that the forecast performance remains similar while still being superior to all baselines. In comparison, the more crucial part of our proposal is the structure learning component (presented in the preceding subsection), without which it falls back to a model either using no graphs or needing a supplied one, both performing less well.
3.3 TRAINING, OPTIONALLY WITH A PRIORI KNOWLEDGE OF THE GRAPH
The base training loss (per window) is the mean absolute error between the forecast and the ground truth
ℓ^t_base(X̂_{t+T+1:t+T+τ}, X_{t+T+1:t+T+τ}) = (1/τ) Σ_{t′=t+T+1}^{t+T+τ} |X̂_t′ − X_t′|.
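To illustrate how the pieces fit together under the unilevel objective (2), here is a schematic PyTorch training step; it reuses the sample_adjacency sketch from Section 3.1, approximates the expectation with a single graph sample per step (as done in training), and all names (model, graph_learner, X_all, X_window, Y_window) are illustrative.

```python
def training_step(model, graph_learner, X_all, X_window, Y_window, s, optimizer):
    """One stochastic step of objective (2): draw one relaxed graph sample from theta(w)
    and backpropagate through both the forecaster and the structure parameterization."""
    theta = graph_learner(X_all)             # learned structure, (n, n) edge probabilities
    A = sample_adjacency(theta, s)           # differentiable Gumbel sample (Section 3.1 sketch)
    Y_hat = model(A, X_window)               # recurrent graph convolution seq2seq forecast
    loss = (Y_hat - Y_window).abs().mean()   # per-window mean absolute error (base loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```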
Additionally, we propose a regularization that improves graph quality by injecting a priori knowledge of the pairwise interaction into the model. Sometimes an actual graph among the time series is known, such as the case of the traffic network mentioned in Section 1. Generally, even if an explicit structure is unknown, a neighborhood graph (such as a kNN graph) may still serve as reasonable knowledge. The use of kNN encourages sparsity if k is small, which circumvents the drawback of ℓ1 constraints that cannot be easily imposed because the graph is not a raw variable to optimize. As such, we use the cross-entropy between θ and the a priori graph Aa as the regularization:
ℓ_reg = Σ_ij [ −A^a_ij log θ_ij − (1 − A^a_ij) log(1 − θ_ij) ].   (5)
The overall training loss is then Σ_t ℓ^t_base + λ ℓ_reg, with λ > 0 being the regularization magnitude.
3.4 COMPARISON WITH NRI
GTS appears similar to NRI (Kipf et al., 2018) on the surface, because both compute a pairwise structure from multiple time series and use the structure to improve forecasting. In these two methods, the architecture to compute the structure, as well as the one to forecast, bear many differences; but these differences are only secondary. The most essential distinction is the number of structures. To avoid confusion, here we say “structure” (θ) rather than “graph” (A) because there are combinatorially many graph samples from the same structure. Our approach produces one single structure given one set of n series. On the contrary, the autoencoder approach adopted by NRI produces different structures given different encoding inputs. Hence, a feasible use of NRI can only occur in the
following two manners. (a) A single set of n series is given and training is done on windowed data, where each window will produce a separate structure. (b) Many sets are given and training is done through iterating each set, which corresponds to a separate structure. Both cases are different from our scenario, where a single set of time series is given and a single structure is produced.
Fundamentally, NRI is a variational autoencoder and thus the inference of the structure is an amortized inference: under setting (b) above, the inferred structure is a posterior given a set of series. The amortization uses an encoder parameterization to avoid the tedious posterior inference whenever a new set of series arrives. Moreover, under the evidence lower bound (ELBO) training objective, the prior is a graph, each edge of which takes a value uniformly in [0, 1]. In our case, on the contrary, a single structure is desired. Thus, amortized inference is neither necessary nor relevant. Furthermore, one may interpret the a priori information Aa for regularization as a “structural prior;” however, for each node pair it offers a stronger preference on the existence/absence of an edge than a uniform probability.
4 EXPERIMENTS
In this section, we conduct extensive experiments to show that the proposed method GTS outperforms a comprehensive set of forecasting methods, including one that learns a hidden graph structure (LDS, adapted for time series). We also demonstrate that GTS is computationally efficient and is able to learn a graph close to the a priori knowledge through regularization, with little compromise on the forecasting quality.
4.1 SETUP
Data sets. We experiment with two benchmark data sets METR-LA and PEMS-BAY from Li et al. (2018) and a proprietary data set PMU. The first two are traffic data sets with given graphs serving as ground truths; we perform no processing and follow the same configuration as in the referenced work for experimentation. The last one is a sensor network of the U.S. power grid without a given grid topology. For details, see Appendix Section A. For all data sets, we use a temporal 70/10/20 split for training, validation, and testing, respectively.
Baselines. We compare with a number of forecasting methods:
1. Non-deep learning methods: historical average (HA), ARIMA with Kalman filter (ARIMA), vector auto-regression (VAR), and support vector regression (SVR). The historical average accounts for weekly seasonality and predicts for a day by using the weighted average of the same day in the past few weeks.
2. Deep learning methods that treat each series separately (i.e., no graph): feed-forward neural network (FNN) and LSTM.
3. GNN method applied on the given graph (or kNN graph for PMU): DCRNN (Li et al., 2018).
4. GNN methods that simultaneously learn a graph structure. We use LDS (Franceschi et al., 2019) to learn the graph, wherein the forecast model is DCRNN. We name the method “LDS” for short. Additionally, we compare with NRI (Kipf et al., 2018).
5. Variant of GTS: We maintain the graph structure parameterization but replace the DCRNN forecast model by T-GCN (Zhao et al., 2019). We name the variant “GTSv.”
Except LDS and NRI, all baselines follow the configurations presented in Li et al. (2018). For LDS, we follow Franceschi et al. (2019). For NRI, we follow Kipf et al. (2018).
Evaluation metrics. All methods are evaluated with three metrics: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
For details on hyperparameter setting and training platform, see Appendix Section B.
4.2 RESULTS
Forecasting quality. We first evaluate the performance of GTS through comparing it with all the aforementioned baselines. Because of the significant memory consumption of NRI, this method is executed on only the smaller data set PMU. The tasks are to forecast 15, 30, and 60 minutes.
Table 1 summarizes the results for METR-LA. A few observations follow. (1) Deep learning methods generally outperform non-deep learning methods, except the historical average, which performs on par with deep learning in some metrics. Seasonality is a strong indicator of the repeating traffic patterns and, not surprisingly, HA performs reasonably well despite its simplicity. (2) Among the deep learning methods, graph-based models outperform non-graph models. This result corroborates the premise of this work: graph structure is helpful. (3) Among the graph-based methods, LDS performs slightly better than DCRNN. The difference between these two methods is that the latter employs the given graph, which may or may not imply direct interactions, whereas the former learns a graph in a data-driven manner. Their performances however are quite similar. (4) The most encouraging result is that the proposed method GTS significantly outperforms LDS and hence DCRNN. GTS learns a graph structure through parameterization, rather than treating it as a (hyper)parameter, as is the case in LDS. (5) The performance of the variant GTSv stays between GTS and LDS. This observation corroborates that the proposed structure learning component contributes more crucially to the overall performance than does a careful choice of the GNN forecasting component.
To dive into the behavioral difference between GTS and DCRNN, we plot in Figure 2 two forecasting examples. One sees that both methods produce smooth series. In the top example, overall the GTS curve is closer to the moving average of the ground truth than is the DCRNN curve (see e.g., the left part and the U shape). In the bottom example, the GTS curve better captures the sharp dip toward the end of the series. In both examples, there exist several short but deep downward spikes. Such anomalous data are captured by neither method.
Additionally, we summarize the results for PEMS-BAY and PMU in Tables 3 and 2, respectively (see Appendix Section C for the former). The observations are rather similar to those of METR-LA. Our model produces the best prediction in all scenarios and under all metrics. Moreover, for the PMU data set, NRI performs competitively, second to GTS/GTSv and better than LDS in most of the cases.
Computational efficiency. We compare the training costs of the graph-based methods: DCRNN, LDS, and GTS. See Figure 3. DCRNN is the most efficient to train, since no graph structure learning is involved. To learn the graph, LDS needs orders of magnitude more time than does DCRNN. Recall that LDS employs a bilevel optimization (1), which is computationally highly challenging. In contrast, the proposed method GTS learns the graph structure as a byproduct of the model training (2). Its training time is approximately three times that of DCRNN, a favorable overhead compared with the prohibitive cost of LDS.
Effect of regularization. We propose in Section 3.3 using regularization to incorporate a priori knowledge of the graph. One salient example of knowledge is sparsity, which postulates that many node pairs barely interact. We show the effect of regularization on the data set PMU with the use of a kNN graph as knowledge. The task is 15-minute forecasting and results (expected degree and MAE) are plotted in Figure 4. The decreasing curves in both plots indicate that using a smaller k or increasing the regularization magnitude produces sparser graphs. The bars give the MAEs, all around 2.4e-4, indicating equally good forecasting quality. (Note that the MAE for LDS is 4.9e-4.)
Learned structures. To examine the learned structure θ, we further show its difference from the given graph adjacency matrix Aa (binary) and visualize one particular example in Figure 5. The difference is defined as ℓ_reg/n² (average cross entropy; see (5)). One reads that when λ = 20, the difference is 0.34. It indicates that the learned probabilities in θ are on average 0.3 away from the entries of Aa, because − log(1 − 0.3) ≈ 0.34. When using 0.5 as a cutoff threshold for θ, such a difference possibly results in false-positive edges (existing in θ but not in Aa; orange dashed) and false-negative edges (existing in Aa but not in θ; none in the example).
Note that the regularization strength λ weighs the forecasting error (MAE) and the cross entropy in the loss function. When λ = 0, the training loss is not regularized, yielding optimal forecast results reported in Table 2. When λ = ∞, one effectively enforces θ to be identical to Aa and hence the model reduces to DCRNN, whose forecasting performance is worse than our model. The interesting question is when λ interpolates between the two extremes, whether it is possible to find a sweet spot such that forecasting performance is close to our model but meanwhile θ is close to Aa. Figure 5 suggests positively. We stress that our model does not intend to learn a “ground-truth” graph (e.g., the traffic network or the power grid); but rather, learn a structure that a GNN can exploit to improve forecast.
Other structural priors. In the PMU data set, we use a synthetic kNN structure prior Aa due to the lack of a known graph. For METR-LA and PEMS-BAY, however, such a graph can be constructed based on spatial proximity (Li et al., 2018). We show in Figure 6 the effect of regularization for these data sets. Similar to the findings of PMU, moving λ between 0 and ∞ interpolates two extremes: the best forecast quality and recovery of Aa. With a reasonable choice of λ (e.g., 0.3), the forecast quality degrades only slightly but the learned structure is rather close to the given Aa, judged from the average cross entropy.
5 CONCLUSIONS
We have presented a time series forecasting model that learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. Both the graph and the GNN are learned end-to-end, maximally exploiting the pairwise interactions among data streams. The graph structure is parameterized by neural networks rather than being treated as a (hyper)parameter, hence significantly reducing the training cost compared with a recently proposed bilevel optimization approach LDS. We conduct comprehensive comparisons with a number of baselines, including nondeep learning methods and deep learning methods (which either ignore the pairwise interaction, use a given graph, or learn a graph by using LDS), and show that our approach attains the best forecasting quality. We also demonstrate that regularization helps incorporate a priori knowledge, rendering the learned graph a healthy variation of the given one for more accurate forecast.
ACKNOWLEDGMENT AND DISCLAIMER
This material is based upon work supported by the Department of Energy under Award Number(s) DE-OE0000910. C. Shang was also supported by National Science Foundation grant IIS-1718738 (to J. Bi) during this work. J. Bi was additionally supported by National Institutes of Health grants K02-DA043063 and R01-DA051922. This report was prepared as an account of work sponsored by agencies of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
A ADDITIONAL DETAILS OF DATA SETS
METR-LA is a traffic data set collected from loop detectors in the highways of Los Angeles, CA (Jagadish et al., 2014). It contains 207 sensors, each of which records four months of data at the frequency of five minutes. A graph of sensors is given; it was constructed by imposing a radial basis function on the pairwise distance of sensors at a certain cutoff. For more information see Li et al. (2018). We perform no processing and follow the same configuration as in Li et al. (2018).
PEMS-BAY is also a traffic data set, collected by the California Transportation Agencies Performance Measurement System. It includes 325 sensors in the Bay Area for a period of six months, at the same five-minute frequency. Construction of the graph is the same as that of METR-LA. No processing is performed.
PMU contains time series data recorded by the phasor measurement units (PMUs) deployed across the U.S. power grid. We extract one month of data (February 2017) from one interconnect of the grid, which includes 42 PMU sources. Each PMU records a number of state variables and we use only the voltage magnitude and the current magnitude. The PMUs sample the system states at high rates (either 30 or 60 Hertz). We aggregate every five minutes, yielding a data frequency the same as the above two data sets. Different from them, this data set offers neither the grid topology, the sensor identities, nor the sensor locations. Hence, a “ground truth” graph is unknown.
However, it is highly plausible that the PMUs interact in a nontrivial manner, since some series are highly correlated whereas others are much less so. Figure 7 shows three example series. Visually, the first series appears more correlated to the second one than to the third one. For example, in the first two series, the blue curves (the variable ip m) are visually seasonal and synchronous. Moreover, inside the purple window, the red curves (the variable vp m) in the first two series show three downward spikes, which are missing in the third series. Indeed, the correlation matrix between the first two series is (0.76, −0.04; −0.31, 0.96) and that between the first and the third series is (0.18, −0.10; 0.22, 0.22). Such an observation justifies graph structure learning among the PMUs.
It is important to note a few processing steps of the data set because of its noisy and incomplete nature. The data set contains a fair amount of unreliable readings (e.g., outliers). Hence, we consult domain experts and set lower and upper bounds to filter out extremely small and large values. Accounting for missing data, within every five minutes we take the mean of the available readings if any, or impute with the mean of the entire series.
B ADDITIONAL DETAILS OF EXPERIMENT SETTING
Hyperparameters. Several hyperparameters are tuned through grid search: initial learning rate {0.1, 0.01, 0.001}, dropout rate {0.1, 0.2, 0.3}, embedding size of LSTM {32, 64, 128, 256}, the k value in kNN {5, 10, 20, 30}, and the weight of regularization {0, 1, 2, 5, 10, 20}. For other hyperparameters, the convolution kernel size in the feature extractor is 10 and the decay ratio of learning rate is 0.1. After tuning, the best initial learning rate for METR-LA and PEMS-BAY is 0.01 and for PMU is 0.001. The optimizer is Adam.
Because the loss function is an expectation (see (1) and (2)), the expectation is computed as an average of 10 random samples. Such an averaging is needed only for model evaluation. In training, one random sample suffices because the optimizer is a stochastic optimizer.
Platform. We implement the models in PyTorch. All experiments are run on one compute node of an IBM Power9 server. The compute node contains 80 CPU cores and four NVidia V100 GPUs, but because of scheduling limitation we use only one GPU.
Code is available at https://github.com/chaoshangcs/GTS.
C ADDITIONAL RESULTS FOR FORECASTING QUALITY
See Table 3 for the forecast results of PEMS-BAY. The observations are rather similar to those of Tables 1 and 2 in Section 4.2. In particular, GTS produces the best prediction in all scenarios and under all metrics.
D UPDATES OF TABLES 1, 2, AND 3
Our implementation had been developed based on the PyTorch version of DCRNN (https://github.com/chnsh/DCRNN_PyTorch). It was brought to our attention recently that this version calculated the evaluation metrics MAE/RMSE/MAPE in a manner slightly different from that used to report the results of DCRNN in the official publication (https://github.com/chnsh/DCRNN_PyTorch/issues/3). We updated Tables 1, 2, and 3 by correcting the calculations to be consistent with the official DCRNN results. See Tables 4, 5, and 6. Despite the correction, observations and conclusions regarding the comparison of different methods remain unchanged. | 1. What is the main contribution of the paper regarding time series forecasting?
2. What are the strengths of the proposed approach, particularly in its ability to learn graph structures and parameterize GNNs?
3. Are there any concerns or questions regarding the comparison with other approaches, such as NRI and LDS?
4. How does the paper address the issue of computational efficiency in learning graph structures?
5. Can the authors provide more details or explanations regarding the PMU dataset and its construction?
6. Are there any minor errors or typos throughout the paper that need to be addressed? | Review | Review
Paper summary:
This paper proposes an approach for time series forecasting that learns the graph structure among multiple (multivariate) time series simultaneously with the parameters of a Graph Neural Network (GNN). The problem is formulated as learning a probabilistic graphical model by optimizing the expectation over the graph distribution, which is parameterized by a neural network and encapsulated in a single differentiable objective. Empirical evidence suggests that the proposed GTS obtains superior forecasting performance to both deep and non-deep learning based, as well as graph and non-graph based, competitor forecasting models. In addition, GTS appears to be more computationally efficient compared to LDS, a recently proposed meta-learning graph-based approach.
##########################################################################
Strong points:
A time series forecasting model is proposed to automatically learn a graph structure among multiple time series and forecast them simultaneously using a GNN.
The graph structure and the parameters of the GNN are learned simultaneously in a joint end-to-end framework.
The graph structure is parameterized by neural networks rather than being treated as a (hyper)parameter, thus significantly reducing the training cost compared with the recently proposed bilevel optimization approach LDS.
A structural prior-based regularization is incorporated in GTS. In case a “ground-truth” graph is provided upfront, this may serve as a healthy variation of such a graph for the purpose of more accurate forecast.
Extensive experiments are conducted in which the proposed GTS is compared to a number of baselines, including a recently proposed graph structure learning approach, and deep or non-deep learning based (as well as graph or non-graph based) forecasting models.
The experimental results demonstrate that GTS outperforms its competitor approaches in terms of forecasting accuracy and is more efficient than the recently proposed LDS.
Generally, the paper is well written, while the notation is clear and easy to follow.
##########################################################################
Improvement points:
In section 3.4 (Comparison with NRI), the authors state that the “structural prior” A^a offers a stronger preference on the existence/absence of each edge than a uniform distribution over all edges. This seems a bit unclear, thus I would encourage the authors to elaborate a bit more on this difference between GTS and NRI w.r.t. the structural prior.
In the case of the PMU dataset, despite the fact that the grid topology is not provided, the authors still consider a certain structural prior by constructing a kNN graph among the PMUs. I am wondering whether the correlation between the series (mentioned briefly in Appendix A) is used for the graph construction or another distance/similarity metric is considered?
Two variables are recorded by the 42 PMUs, however each node in the constructed graph (shown in Fig. 5) corresponds to one PMU. In case a single node corresponds to a single PMU, then I wonder how the similarity between two PMUs’ recordings is calculated across the two variables (voltage magnitude and current magnitude)?
The authors construct the PMU dataset by extracting only one month of data. However, a single month of PMU data would not allow for capturing certain long-term seasonalities (for instance, the PMU recordings are typically impacted by outages that occur more frequently in certain seasons or periods in the year). Is this perhaps due to data unavailability? If that is not the case, I would ask the authors to clarify the reasoning behind the decision to extract the data for February 2017?
In Tables 2 & 3 (Appendix C), some of the MAPE values obtained by GTS are bolded even though the same percentages are reported for GTSv. In such cases, I would suggest the authors to either bold the MAPEs obtained by both GTS and GTSv, or present the MAPE values using more decimal places.
There are several minor textual errors throughout the paper that can be easily addressed. Some of them are summarized as follows:
The term “LDS” is initially used at the beginning of page 2, but is not defined earlier in the text.
In the third paragraph on page 2, “computation is expensive” should be replaced by “its computation is expensive”.
I am wondering whether the training loss L should be used instead of the validation loss F in Eq. (2)? If so, correct accordingly, otherwise disregard this comment.
In the next-to-last paragraph on page 2, consider replacing “it is better scaled” with “it scales better”.
In the last paragraph of the Related Work section, “of node classification tasks” should be replaced by “on node classification tasks”.
At the beginning of section 3, the term “NRI” is used, but is not defined earlier in the text.
On page 5, consider replacing the abbreviation “ELBO” with “evidence lower bound (ELBO)”.
In the first paragraph on page 7, “treating it a (hyper)parameter as in LDS” could be replaced with “treating it as a (hyper)parameter which is the case in LDS”.
In the second paragraph on page 8, replace “regularization λ” with “regularization strength λ”. In the same sentence, consider adding “the” before both “forecasting error” and “cross entropy”.
##########################################################################
Questions during rebuttal period: Please address the aforementioned remarks/questions. |
ICLR | Title
Discrete Graph Structure Learning for Forecasting Multiple Time Series
Abstract
Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
1 INTRODUCTION
Time series data are widely studied in science and engineering that involve temporal measurements. Time series forecasting is concerned with the prediction of future values based on observed ones in the past. It has played important roles in climate studies, market analysis, traffic control, and energy grid management (Makridakis et al., 1997) and has inspired the development of various predictive models that capture the temporal dynamics of the underlying system. These models range from early autoregressive approaches (Hamilton, 1994; Asteriou & Hall, 2011) to the recent deep learning methods (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019).
Analysis of univariate time series (a single longitudinal variable) has been extended to multivariate time series and multiple (univariate or multivariate) time series. Multivariate forecasting models find strong predictive power in stressing the interdependency (and even causal relationship) among the variables. The vector autoregressive model (Hamilton, 1994) is an example of multivariate analysis, wherein the coefficient magnitudes offer hints into the Granger causality (Granger, 1969) of one variable to another.
For multiple time series, pairwise similarities or connections among them have also been explored to improve the forecasting accuracy (Yu et al., 2018). An example is the traffic network where each node denotes a time series recording captured by a particular sensor. The spatial connections of the roads offer insights into how traffic dynamics propagates along the network. Several graph neural network (GNN) approaches (Seo et al., 2016; Li et al., 2018; Yu et al., 2018; Zhao et al., 2019) have been proposed recently to leverage the graph structure for forecasting all time series simultaneously.
∗This work was done while C. Shang was an intern at MIT-IBM Watson AI Lab, IBM Research. †To whom correspondence should be addressed.
The graph structure however is not always available or it may be incomplete. There could be several reasons, including the difficulty in obtaining such information or a deliberate shielding for the protection of sensitive information. For example, a data set comprising sensory readings of the nation-wide energy grid is granted access to specific users without disclosure of the grid structure. Such practical situations incentivize the automatic learning of the hidden graph structure jointly with the forecasting model.
Because GNN approaches show promise in forecasting multiple interrelated time series, in this paper we are concerned with structure learning methods applicable to the downstream use of GNNs. A prominent example is the recent work of Franceschi et al. (2019) (named LDS), which is a meta-learning approach that treats the graph as a hyperparameter in a bilevel optimization framework (Franceschi et al., 2017). Specifically, let X_train and X_val denote the training and the validation sets of time series respectively, A ∈ {0, 1}^{n×n} denote the graph adjacency matrix of the n time series, w denote the parameters used in the GNN, and L and F denote the loss functions used during training and validation respectively (which may not be identical). LDS formulates the problem as learning the probability matrix θ ∈ [0, 1]^{n×n}, which parameterizes the element-wise Bernoulli distribution from which the adjacency matrix A is sampled:
min_θ E_{A∼Ber(θ)}[F(A, w(θ), X_val)],   s.t.   w(θ) = argmin_w E_{A∼Ber(θ)}[L(A, w, X_train)].   (1)
Formulation (1) gives a bilevel optimization problem. The constraint (which by itself is an optimization problem) defines the GNN weights as a function of the given graph, so that the objective is to optimize over such a graph only. Note that for differentiability, one does not directly operate on the discrete graph adjacency matrix A, but on the continuous probabilities θ instead.
LDS has two drawbacks. First, its computation is expensive. The derivative of w with respect to θ is computed by applying the chain rule on a recursive-dynamics surrogate of the inner optimization argmin. Applying the chain rule on this surrogate is equivalent to differentiating an RNN, which is either memory intensive if done in the reverse mode or time consuming if done in the forward mode, when unrolling a deep dynamics. Second, it is challenging to scale. The matrix θ has Θ(n2) entries to optimize and thus the method is hard to scale to increasingly more time series.
In light of the challenges of LDS, we instead advocate a unilevel optimization:
min_w E_{A∼Ber(θ(w))}[F(A, w, X_train)].   (2)
Formulation (2) trains the GNN model as usual, except that the probabilities θ (which parameterizes the distribution from which A is sampled), is by itself parameterized. We absorb these parameters, together with the GNN parameters, into the notation w. We still use a validation set Xval for usual hyperparameter tuning, but these hyperparameters are not θ as treated by (1). In fact, formulation (1) may need a second validation set to tune other hyperparameters.
The major distinction of our approach from LDS is the parameterization θ(w), as opposed to an inner optimization w(θ). In our approach, a modeler owns the freedom to design the parameterization and better control the number of parameters as n2 increases. To this end, time series representation learning and link prediction techniques offer ample inspiration for modeling. In contrast, LDS is more agnostic as no modeling is needed. The effort, instead, lies in the nontrivial treatment of the inner optimization (in particular, its differentiation).
As such, our approach is advantageous in two regards. First, its computation is less expensive, because the gradient computation of a unilevel optimization is straightforward and efficient and implementations are mature. Second, it better scales, because the number of parameters does not grow quadratically with the number of time series.
We coin our approach GTS (short for “graph for time series”), signaling the usefulness of graph structure learning for enhancing time series forecasting. It is important to note that the end purpose of the graph is to improve forecasting quality, rather than identifying causal relationship of the series or recovering the ground-truth graph, if any. While causal discovery of multiple scalar variables is an
established field, identifying causality among multiple multivariate time series requires a nontrivial extension that spans beyond the current study. On the other hand, the graph, either learned or preexisting, serves as additional information that helps the model better capture global signals and apply on each series. There does not exist a golden measure for the quality of the learned graph except forecasting accuracy. For example, the traffic network does not necessarily offer the best pairwise relationship a GNN can exploit for forecasting traffic series. Nevertheless, to robustify GTS we incorporate regularization that penalizes significant departure from one’s prior belief. If a certain “ground-truth” graph is believed, the learned graph will be a healthy variation of it for a more accurate forecast.
2 RELATED WORK
Time series forecasting has been studied for decades by statisticians. It is out of the scope of this paper to comprehensively survey the literature, but we will focus more on late developments under the deep learning context. Early textbook methods include (vector) autoregressive models (Hamilton, 1994), autoregressive integrated moving average (ARIMA) (Asteriou & Hall, 2011), hidden Markov models (HMM) (Baum & Petrie, 1966), and Kalman filters (Zarchan & Musoff, 2000). Generally speaking, these are linear models that use a window of the past information to predict the next time step, although nonlinear versions with parameterization are subsequently developed.
A notable nonlinear extension was the RNN (Williams et al., 1986), which later evolved into LSTM (Hochreiter & Schmidhuber, 1997), BiLSTM (Schuster & Paliwal, 1997), and GRU (Cho et al., 2014), which addressed several limitations of the vanilla RNN, such as the vanishing gradient problem. These architectures are hard to parallelize because of the recurrent nature of the forward and backward computation. More recently, Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) were developed to address parallelization, by introducing attention mechanisms that simultaneously digested past (and future) information. Although these models are more heavily used for sequence data under the context of natural language processing, they are readily applicable for time series as well (Shih et al., 2019; Li et al., 2019).
Graph neural networks (Zhang et al., 2018; Zhou et al., 2018; Wu et al., 2019) emerged quickly in deep learning to handle graph-structured data. Typically, graph nodes are represented by feature vectors, but for the case of time series, a number of specialized architectures were recently developed; see, e.g., GCRN (Seo et al., 2016), DCRNN (Li et al., 2018), STGCN (Yu et al., 2018), and TGCN (Zhao et al., 2019). These architectures essentially combine the temporal recurrent processing with graph convolution to augment the representation learning of the individual time series.
Graph structure learning (not necessarily for time series) appears in various contexts and thus methods span a broad spectrum. One field of study is probabilistic graphical models and causal inference, whereby the directed acyclic structure is enforced. Gradient-based approaches in this context include NOTEARS (Zheng et al., 2018), DAG-GNN (Yu et al., 2019), and GraN-DAG (Lachapelle et al., 2020). On the other hand, a general graph may still be useful without resorting to causality. LDS (Franceschi et al., 2019) is a meta-learning approach that is demonstrated to improve the performance on node classification tasks. MTGNN (Wu et al., 2020) parameterizes the graph as a degree-k graph, which is learned end-to-end with a GNN for forecasting time series. We, on the other hand, allow a more general structural prior for the graph. NRI (Kipf et al., 2018) adopts a latent-variable approach and learns a latent graph for forecasting system dynamics. Our approach is closely related to NRI and we will compare with it in the following section after introducing the technical details.
3 METHOD
In this section, we present the proposed GTS method, elaborate the model parameterization, and describe the training technique. We also highlight the distinctions from NRI (Kipf et al., 2018).
Let us first settle the notations. Denote by X the training data, which is a three dimensional tensor, with the three dimensions being feature, time, and the n series. Superscript refers to the series and subscript refers to time; that is, Xi denotes the i-th series for all features and time and Xt denotes the t-th time step for all features and series. There are in total S time steps for training. The model will use a window of T steps to forecast the next τ steps. For each valid t, denote by
X̂_{t+T+1:t+T+τ} = f(A, w, X_{t+1:t+T}) the model, which forecasts X̂_{t+T+1:t+T+τ} from observations X_{t+1:t+T}, through exploiting the graph structure A and being parameterized by w. Using ℓ to denote the loss function between the prediction and the ground truth, a typical training objective reads
Σ_t ℓ(f(A, w, X_{t+1:t+T}), X_{t+T+1:t+T+τ}).   (3)
Figure 1: GTS architecture. (Diagram: a feature extractor and link predictor map the entire data X to the learned structure θ; a graph A sampled from θ is used by recurrent graph convolution layers to forecast steps t + T + 1 through t + T + τ from the windowed data.)
Three remaining details are the parameterization of A, the model f , and the loss `.
3.1 GRAPH STRUCTURE PARAMETERIZATION
The binary matrix A ∈ {0, 1}^{n×n} by itself is challenging to parameterize, because it requires a differentiable function that outputs discrete values 0/1. A natural idea is to let A be a random variable of the matrix Bernoulli distribution parameterized by θ ∈ [0, 1]^{n×n}, so that A_ij is independent for all the (i, j) pairs with A_ij ∼ Ber(θ_ij). Here, θ_ij is the success probability of a Bernoulli distribution. Then, the training objective (3) needs to be modified to
E_{A∼Ber(θ)}[ Σ_t ℓ(f(A, w, X_{t+1:t+T}), X_{t+T+1:t+T+τ}) ].   (4)
As hinted in Section 1, we further parameterize θ as θ(w), because otherwise the n² degrees of freedom in θ render the optimization hard to scale. Such a parameterization, however, imposes a challenge on differentiability, if the expectation (4) is evaluated through sample average: the gradient of (4) does not flow through A in a usual Bernoulli sampling. Hence, we apply the Gumbel reparameterization trick proposed by Jang et al. (2017) and Maddison et al. (2017): A_ij = sigmoid((log(θ_ij/(1 − θ_ij)) + (g¹_ij − g²_ij))/s), where g¹_ij, g²_ij ∼ Gumbel(0, 1) for all i, j. When the temperature s → 0, A_ij = 1 with probability θ_ij and 0 with the remaining probability. In practice, we anneal s progressively in training such that it tends to zero.
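A minimal PyTorch sketch of this reparameterized sampling (PyTorch is what the paper's implementation uses) might look as follows; the small constant added inside the logarithms is a numerical-stability assumption.

```python
import torch

def sample_adjacency(theta, s, eps=1e-10):
    """Differentiable sample of A ~ Ber(theta) via the Gumbel trick.
    theta: (n, n) edge probabilities; s: temperature, annealed toward 0."""
    g1 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)  # Gumbel(0, 1)
    g2 = -torch.log(-torch.log(torch.rand_like(theta) + eps) + eps)
    logits = torch.log(theta + eps) - torch.log(1.0 - theta + eps)   # log(theta / (1 - theta))
    return torch.sigmoid((logits + g1 - g2) / s)
```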
For the parameterization of θ, we use a feature extractor to yield a feature vector for each series and a link predictor that takes in a pair of feature vectors and outputs a link probability. The feature extractor maps a matrix X_i to a vector z_i for each i. Many sequence architectures can be applied; we opt for a simple one. Specifically, we perform convolution along the temporal dimension, vectorize along this dimension, and apply a fully connected layer to reduce the dimension; that is, z_i = FC(vec(Conv(X_i))). Note that the feature extractor is conducted on the entire sequence rather than a window of T time steps. Weights are shared among all series.
The link predictor maps a pair of vectors (z_i, z_j) to a scalar θ_ij ∈ [0, 1]. We concatenate the two vectors and apply two fully connected layers to achieve so; that is, θ_ij = FC(FC(z_i ‖ z_j)). The last activation needs to be a sigmoid. See the top part of Figure 1.
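The two components can be sketched as a single module as below; the hidden width, the ReLU between the two fully connected layers of the link predictor, and the tensor layout are assumptions, while the kernel size of 10 follows Appendix B.

```python
import torch
import torch.nn as nn

class StructureLearner(nn.Module):
    """Feature extractor z_i = FC(vec(Conv(X_i))) plus link predictor
    theta_ij = FC(FC(z_i || z_j)), producing theta in [0, 1]^{n x n}."""

    def __init__(self, n_features, total_steps, hidden=64, kernel=10):
        super().__init__()
        self.conv = nn.Conv1d(n_features, hidden, kernel_size=kernel)
        self.fc_feat = nn.Linear(hidden * (total_steps - kernel + 1), hidden)
        self.link = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):
        # x: (n_series, n_features, total_steps) -- the entire training sequence.
        z = self.fc_feat(self.conv(x).flatten(start_dim=1))        # (n, hidden)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)                       # z_i broadcast over rows
        zj = z.unsqueeze(0).expand(n, n, -1)                       # z_j broadcast over columns
        return self.link(torch.cat([zi, zj], dim=-1)).squeeze(-1)  # theta: (n, n)
```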
3.2 GRAPH NEURAL NETWORK FORECASTING
The bottom part of Figure 1 is the forecasting model f. We use a sequence-to-sequence (seq2seq) model (Sutskever et al., 2014) to map X^i_{t+1:t+T} to X^i_{t+T+1:t+T+τ} for each series i. Seq2seq is
typically a recurrent model, but with a graph structure available among the series, we leverage recurrent graph convolution to handle all series simultaneously, as opposed to the usual recurrent mechanism that treats each series separately.
Specifically, for each time step t′, the seq2seq model takes Xt′ for all series as input and updates the internal hidden state from Ht′−1 to Ht′ . The encoder part of the seq2seq performs recurrent updates from t′ = t + 1 to t′ = t + T , producing Ht+T as a summary of the input. The decoder part uses Ht+T to continue the recurrence and evolves the hidden state for another τ steps. Each hidden state Ht′ , t′ = t + T + 1 : t + T + τ , simultaneously serves as the output X̂t′ and the input to the next time step.
The recurrence that accepts input and updates hidden states collectively for all series uses a graph convolution to replace the usual multiplication with a weight matrix. Several existing architectures serve this purpose (e.g., GCRN (Seo et al., 2016), STGCN (Yu et al., 2018), and T-GCN (Zhao et al., 2019)), but we use the diffusion convolutional GRU defined in DCRNN (Li et al., 2018) because it is designed for directed graphs:
R_{t′} = sigmoid(W_R ⋆_A [X_{t′} ‖ H_{t′−1}] + b_R),   C_{t′} = tanh(W_C ⋆_A [X_{t′} ‖ (R_{t′} ⊙ H_{t′−1})] + b_C),   U_{t′} = sigmoid(W_U ⋆_A [X_{t′} ‖ H_{t′−1}] + b_U),   H_{t′} = U_{t′} ⊙ H_{t′−1} + (1 − U_{t′}) ⊙ C_{t′},
where the graph convolution ⋆_A is defined as   W_Q ⋆_A Y = Σ_{k=0}^{K} ( w^Q_{k,1} (D_O^{−1} A)^k + w^Q_{k,2} (D_I^{−1} A^T)^k ) Y,
with D_O and D_I being the out-degree and in-degree matrices and ‖ being concatenation along the feature dimension. Here, w^Q_{k,1}, w^Q_{k,2}, b_Q for Q = R, U, C are model parameters and the diffusion degree K is a hyperparameter.
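A dense-matrix sketch of the diffusion convolution ⋆_A (the operation substituted for the weight multiplications inside the GRU gates) is given below; representing the per-step weights as (d_in, d_out) matrices is a common equivalent implementation and is an assumption here.

```python
import torch

def diffusion_conv(A, Y, w_fwd, w_bwd, eps=1e-10):
    """W *_A Y = sum_k ( (D_O^{-1} A)^k Y @ w_fwd[k] + (D_I^{-1} A^T)^k Y @ w_bwd[k] ).
    A: (n, n) adjacency; Y: (n, d_in); w_fwd, w_bwd: (K+1, d_in, d_out)."""
    P_fwd = A / (A.sum(dim=1, keepdim=True) + eps)          # D_O^{-1} A  (out-degree)
    P_bwd = A.t() / (A.t().sum(dim=1, keepdim=True) + eps)  # D_I^{-1} A^T (in-degree)
    out = torch.zeros(Y.size(0), w_fwd.size(-1), dtype=Y.dtype)
    Yf, Yb = Y, Y
    for k in range(w_fwd.size(0)):          # k = 0 .. K
        out = out + Yf @ w_fwd[k] + Yb @ w_bwd[k]
        Yf, Yb = P_fwd @ Yf, P_bwd @ Yb     # advance one diffusion step
    return out
```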
We remark that as a subsequent experiment corroborates, this GNN model can be replaced by other similar ones (e.g., T-GCN), such that the forecast performance remains similar while still being superior over all baselines. In comparison, the more crucial part of our proposal is the structure learning component (presented in the preceding subsection), without which it falls back to a model either using no graphs or needing a supplied one, both performing less well.
3.3 TRAINING, OPTIONALLY WITH A PRIORI KNOWLEDGE OF THE GRAPH
The base training loss (per window) is the mean absolute error between the forecast and the ground truth
ℓ^t_base(X̂_{t+T+1:t+T+τ}, X_{t+T+1:t+T+τ}) = (1/τ) Σ_{t′=t+T+1}^{t+T+τ} |X̂_{t′} − X_{t′}|.
Additionally, we propose a regularization that improves graph quality, through injecting a priori knowledge of the pairwise interaction into the model. Sometimes an actual graph among the time series is known, such as the case of traffic network mentioned in Section 1. Generally, even if an explicit structure is unknown, a neighborhood graph (such as a kNN graph) may still serve as reasonable knowledge. The use of kNN encourages sparsity if k is small, which circumvents the drawback of ℓ1 constraints that cannot be easily imposed because the graph is not a raw variable to optimize. As such, we use the cross-entropy between θ and the a priori graph A^a as the regularization:
ℓ_reg = Σ_{ij} −A^a_{ij} log θ_{ij} − (1 − A^a_{ij}) log(1 − θ_{ij}).   (5)
The overall training loss is then Σ_t ℓ^t_base + λ ℓ_reg, with λ > 0 being the regularization magnitude.
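Putting the two terms together, a sketch of the overall objective is shown below; `theta` is the learned edge-probability matrix from Section 3.1 and `A_prior` the binary a priori graph, and the variable names are illustrative.

```python
import torch

def training_loss(y_hat, y_true, theta, A_prior, lam, eps=1e-10):
    """Forecast MAE plus lambda times the cross-entropy between the learned
    probabilities theta and the a priori graph A_prior (Eq. 5)."""
    l_base = torch.mean(torch.abs(y_hat - y_true))
    l_reg = torch.sum(-A_prior * torch.log(theta + eps)
                      - (1.0 - A_prior) * torch.log(1.0 - theta + eps))
    return l_base + lam * l_reg
```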
3.4 COMPARISON WITH NRI
GTS appears similar to NRI (Kipf et al., 2018) on the surface, because both compute a pairwise structure from multiple time series and use the structure to improve forecasting. In these two methods, the architecture to compute the structure, as well as the one to forecast, bear many differences; but these differences are only secondary. The most essential distinction is the number of structures. To avoid confusion, here we say “structure” (θ) rather than “graph” (A) because there are combinatorially many graph samples from the same structure. Our approach produces one single structure given one set of n series. On the contrary, the autoencoder approach adopted by NRI produces different structures given different encoding inputs. Hence, a feasible use of NRI can only occur in the
following two manners. (a) A single set of n series is given and training is done on windowed data, where each window will produce a separate structure. (b) Many sets are given and training is done through iterating each set, which corresponds to a separate structure. Both cases are different from our scenario, where a single set of time series is given and a single structure is produced.
Fundamentally, NRI is a variational autoencoder and thus the inference of the structure is an amortized inference: under setting (b) above, the inferred structure is a posterior given a set of series. The amortization uses an encoder parameterization to free off the tedious posterior inference whenever a new set of series arrives. Moreover, under the evidence lower bound (ELBO) training objective, the prior is a graph, each edge of which takes a value uniformly in [0, 1]. In our case, on the contrary, a single structure is desired. Thus, amortized inference is neither necessary nor relevant. Furthermore, one may interpret the a priori information Aa for regularization as a “structural prior;” however, for each node pair it offers a stronger preference on the existence/absence of an edge than a uniform probability.
4 EXPERIMENTS
In this section, we conduct extensive experiments to show that the proposed method GTS outperforms a comprehensive set of forecasting methods, including one that learns a hidden graph structure (LDS, adapted for time series). We also demonstrate that GTS is computationally efficient and is able to learn a graph close to the a priori knowledge through regularization, with little compromise on the forecasting quality.
4.1 SETUP
Data sets. We experiment with two benchmark data sets METR-LA and PEMS-BAY from Li et al. (2018) and a proprietary data set PMU. The first two are traffic data sets with given graphs serving as ground truths; we perform no processing and follow the same configuration as in the referenced work for experimentation. The last one is a sensor network of the U.S. power grid without a given grid topology. For details, see Appendix Section A. For all data sets, we use a temporal 70/10/20 split for training, validation, and testing, respectively.
Baselines. We compare with a number of forecasting methods:
1. Non-deep learning methods: historical average (HA), ARIMA with Kalman filter (ARIMA), vector auto-regression (VAR), and support vector regression (SVR). The historical average accounts for weekly seasonality and predicts for a day by using the weighted average of the same day in the past few weeks.
2. Deep learning methods that treat each series separately (i.e., no graph): feed-forward neural network (FNN) and LSTM.
3. GNN method applied on the given graph (or kNN graph for PMU): DCRNN (Li et al., 2018).
4. GNN methods that simultaneously learn a graph structure. We use LDS (Franceschi et al., 2019) to learn the graph, wherein the forecast model is DCRNN. We name the method “LDS” for short. Additionally, we compare with NRI (Kipf et al., 2018).
5. Variant of GTS: We maintain the graph structure parameterization but replace the DCRNN forecast model by T-GCN (Zhao et al., 2019). We name the variant “GTSv.”
Except LDS and NRI, all baselines follow the configurations presented in Li et al. (2018). For LDS, we follow Franceschi et al. (2019). For NRI, we follow Kipf et al. (2018).
Evaluation metrics. All methods are evaluated with three metrics: mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
For details on hyperparameter setting and training platform, see Appendix Section B.
4.2 RESULTS
Forecasting quality. We first evaluate the performance of GTS through comparing it with all the aforementioned baselines. Because of the significant memory consumption of NRI, this method is executed on only the smaller data set PMU. The tasks are to forecast 15, 30, and 60 minutes.
Table 1 summarizes the results for METR-LA. A few observations follow. (1) Deep learning methods generally outperform non-deep learning methods, except historical average that performs on par with deep learning in some metrics. Seasonality is a strong indicator of the repeating traffic patterns and not surprisingly HA performs reasonably well despite simplicity. (2) Among the deep learning methods, graph-based models outperform non-graph models. This result corroborates the premise of this work: graph structure is helpful. (3) Among the graph-based methods, LDS performs slightly better than DCRNN. The difference between these two methods is that the latter employs the given graph, which may or may not imply direct interactions, whereas the former learns a graph in the data-driven manner. Their performances however are quite similar. (4) The most encouraging result is that the proposed method GTS significantly outperforms LDS and hence DCRNN. GTS learns a graph structure through parameterization, rather than treating it as a (hyper)parameter which is the case in LDS. (5) The performance of the variant GTSv stays between GTS and LDS. This observation corroborates that the proposed structure learning component contributes more crucially to the overall performance than does a careful choice of the GNN forecasting component.
To dive into the behavioral difference between GTS and DCRNN, we plot in Figure 2 two forecasting examples. One sees that both methods produce smooth series. In the top example, overall the GTS curve is closer to the moving average of the ground truth than is the DCRNN curve (see e.g., the left part and the U shape). In the bottom example, the GTS curve better captures the sharp dip toward the end of the series. In both examples, there exist several short but deep downward spikes. Such anomalous data are captured by neither method.
Additionally, we summarize the results for PEMS-BAY and PMU in Tables 3 and 2, respectively (see Appendix Section C for the former). The observations are rather similar to those of METR-LA. Our model produces the best prediction in all scenarios and under all metrics. Additionally, for the PMU data set, NRI performs competitively, second to GTS/GTSv and better than LDS in most of the cases.
Computational efficiency. We compare the training costs of the graph-based methods: DCRNN, LDS, and GTS. See Figure 3. DCRNN is the most efficient to train, since no graph structure learning is involved. To learn the graph, LDS needs orders of magnitude more time than does DCRNN. Recall that LDS employs a bilevel optimization (1), which is computationally highly challenging. In contrast, the proposed method GTS learns the graph structure as a byproduct of the model training (2). Its training time is approximately three times of that of DCRNN, a favorable overhead compared with the forbidding cost of LDS.
Effect of regularization. We propose in Section 3.3 using regularization to incorporate a priori knowledge of the graph. One salient example of knowledge is sparsity, which postulates that many node pairs barely interact. We show the effect of regularization on the data set PMU with the use of a kNN graph as knowledge. The task is 15-minute forecasting and results (expected degree and MAE) are plotted in Figure 4. The decreasing curves in both plots indicate that using a smaller k or increasing the regularization magnitude produces sparser graphs. The bars give the MAEs, all around 2.4e-4, indicating equally good forecasting quality. (Note that the MAE for LDS is 4.9e-4.)
Learned structures. To examine the learned structure θ, we further show its difference from the given graph adjacency matrix A^a (binary) and visualize one particular example in Figure 5. The difference is defined as ℓ_reg/n² (average cross entropy; see (5)). One reads that when λ = 20, the difference is 0.34. It indicates that the learned probabilities in θ are on average 0.3 away from the entries of A^a, because − log(1 − 0.3) ≈ 0.34. When using 0.5 as a cutoff threshold for θ, such a difference possibly results in false-positive edges (existing in θ but not in A^a; orange dashed) and false-negative edges (existing in A^a but not in θ; none in the example).
Note that the regularization strength λ weighs the forecasting error (MAE) and the cross entropy in the loss function. When λ = 0, the training loss is not regularized, yielding optimal forecast results reported in Table 2. When λ = ∞, one effectively enforces θ to be identical to Aa and hence the model reduces to DCRNN, whose forecasting performance is worse than our model. The interesting question is when λ interpolates between the two extremes, whether it is possible to find a sweet spot such that forecasting performance is close to our model but meanwhile θ is close to Aa. Figure 5 suggests positively. We stress that our model does not intend to learn a “ground-truth” graph (e.g., the traffic network or the power grid); but rather, learn a structure that a GNN can exploit to improve forecast.
Other structural priors. In the PMU data set, we use a synthetic kNN structure prior A^a due to the lack of a known graph. For METR-LA and PEMS-BAY, however, such a graph can be constructed based on spatial proximity (Li et al., 2018). We show in Figure 6 the effect of regularization for these data sets. Similar to the findings of PMU, moving λ between 0 and ∞ interpolates two extremes: the best forecast quality and recovery of A^a. With a reasonable choice of λ (e.g., 0.3), the forecast quality degrades only slightly but the learned structure is rather close to the given A^a, judged from the average cross entropy.
5 CONCLUSIONS
We have presented a time series forecasting model that learns a graph structure among multiple time series and forecasts them simultaneously with a GNN. Both the graph and the GNN are learned end-to-end, maximally exploiting the pairwise interactions among data streams. The graph structure is parameterized by neural networks rather than being treated as a (hyper)parameter, hence significantly reducing the training cost compared with a recently proposed bilevel optimization approach LDS. We conduct comprehensive comparisons with a number of baselines, including nondeep learning methods and deep learning methods (which either ignore the pairwise interaction, use a given graph, or learn a graph by using LDS), and show that our approach attains the best forecasting quality. We also demonstrate that regularization helps incorporate a priori knowledge, rendering the learned graph a healthy variation of the given one for more accurate forecast.
ACKNOWLEDGMENT AND DISCLAIMER
This material is based upon work supported by the Department of Energy under Award Number(s) DE-OE0000910. C. Shang was also supported by National Science Foundation grant IIS-1718738 (to J. Bi) during this work. J. Bi was additionally supported by National Institutes of Health grants K02-DA043063 and R01-DA051922. This report was prepared as an account of work sponsored by agencies of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
A ADDITIONAL DETAILS OF DATA SETS
METR-LA is a traffic data set collected from loop detectors on the highways of Los Angeles, CA (Jagadish et al., 2014). It contains 207 sensors, each of which records four months of data at the frequency of five minutes. A graph of sensors is given; it was constructed by imposing a radial basis function on the pairwise distance of sensors at a certain cutoff. For more information see Li et al. (2018). We perform no processing and follow the same configuration as in Li et al. (2018).
PEMS-BAY is also a traffic data set, collected by the California Transportation Agencies Performance Measurement System. It includes 325 sensors in the Bay Area for a period of six months, at the same five-minute frequency. Construction of the graph is the same as that of METR-LA. No processing is performed.
PMU contains time series data recorded by the phasor measurement units (PMUs) deployed across the U.S. power grid. We extract one month of data (February 2017) from one interconnect of the grid, which includes 42 PMU sources. Each PMU records a number of state variables and we use only the voltage magnitude and the current magnitude. The PMUs sample the system states at high rates (either 30 or 60 Hertz). We aggregate every five minutes, yielding a data frequency the same as the above two data sets. Different from them, this data set offers neither the grid topology, the sensor identities, nor the sensor locations. Hence, a “ground truth” graph is unknown.
However, it is highly plausible that the PMUs interact in a nontrivial manner, since some series are highly correlated whereas others not much. Figure 7 shows three example series. Visually, the first series appears more correlated to the second one than to the third one. For example, in the first two series, the blue curves (the variable ip_m) are visually seasonal and synchronous. Moreover, inside the purple window, the red curves (the variable vp_m) in the first two series show three downward spikes, which are missing in the third series. Indeed, the correlation matrix between the first two series is [[0.76, −0.04], [−0.31, 0.96]] and that between the first and the third series is [[0.18, −0.10], [0.22, 0.22]]. Such an observation justifies graph structure learning among the PMUs.
It is important to note a few processing steps of the data set because of its noisy and incomplete nature. The data set contains a fair amount of unreliable readings (e.g., outliers). Hence, we consult domain experts and set lower and upper bounds to filter out extremely small and large values. Accounting for missing data, within every five minutes we take the mean of the available readings if any, or impute with the mean of the entire series.
B ADDITIONAL DETAILS OF EXPERIMENT SETTING
Hyperparameters. Several hyperparameters are tuned through grid search: initial learning rate {0.1, 0.01, 0.001}, dropout rate {0.1, 0.2, 0.3}, embedding size of LSTM {32, 64, 128, 256}, the k value in kNN {5, 10, 20, 30}, and the weight of regularization {0, 1, 2, 5, 10, 20}. For other hyperparameters, the convolution kernel size in the feature extractor is 10 and the decay ratio of learning rate is 0.1. After tuning, the best initial learning rate for METR-LA and PEMS-BAY is 0.01 and for PMU is 0.001. The optimizer is Adam.
Because the loss function is an expectation (see (1) and (2)), the expectation is computed as an average of 10 random samples. Such an averaging is needed only for model evaluation. In training, one random sample suffices because the optimizer is a stochastic optimizer.
Platform. We implement the models in PyTorch. All experiments are run on one compute node of an IBM Power9 server. The compute node contains 80 CPU cores and four NVidia V100 GPUs, but because of a scheduling limitation we use only one GPU.
Code is available at https://github.com/chaoshangcs/GTS.
C ADDITIONAL RESULTS FOR FORECASTING QUALITY
See Table 3 for the forecast results of PEMS-BAY. The observations are rather similar to those of Tables 1 and 2 in Section 4.2. In particular, GTS produces the best prediction in all scenarios and under all metrics.
D UPDATES OF TABLES 1, 2, AND 3
Our implementation had been developed based on the PyTorch version of DCRNN (https: //github.com/chnsh/DCRNN_PyTorch). It was brought to our attention recently that this version calculated the evaluation metrics MAE/RMSE/MAPE in a manner slightly different from that used to report the results of DCRNN in the official publication (https://github.com/ chnsh/DCRNN_PyTorch/issues/3). We updated Tables 1, 2, and 3 by correcting the calculations to be consistent with the official DCRNN results. See Tables 4, 5, and 6. Despite the correction, observations and conclusions regarding the comparison of different methods remain unchanged. | 1. What is the focus of the paper regarding learning graph structures and neural networks?
2. What are the strengths of the proposed approach compared to prior works such as LDS?
3. What are the concerns or limitations of the method, particularly its empirical improvement?
4. How does the reviewer assess the presentation and clarity of the paper's content?
5. Are there any suggestions or questions regarding the applicability of the proposed idea beyond time series data? | Review | Review
The paper considers learning both graph structures and NNs for time series data, similar to the idea of LDS (Franceschi et al., 2019). Observing the computation and scalability issues with LDS, authors propose a unilevel optimization form wrt. the mean performance over the graph distribution. This is done via NNs, with input being the observed sequence, to output a real matrix whose elements are then treated as weights for the Gumbel trick. NN structures, training procedure, etc. mostly follow existing works.
Overall, the paper is well presented, easy to understand, with a simple and somewhat effective modification over LDS. I generally like simple ideas with sufficient insights and explanations (though there is not much in this work), but I'm not sure if the empirical improvement is sufficient. I recommend a weak acceptance for now and may change my score after reading other reviews.
I only have one question: the proposed idea is not restricted to time-series case. So how does it perform for non-time-series data? It would be a big benefit if the proposed idea also helps in a more general case than the present scope.
** after reading response ** I thank authors for replying to my question. I maintain the previous rating. |
ICLR | Title
Agent as Scientist: Learning to Verify Hypotheses
Abstract
In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses – they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a “scientist agent” that develops an understanding of the world by generating and testing hypotheses about its environment.
1 INTRODUCTION
In fields of natural sciences (physics, biology etc.), we follow scientific methods – building and testing hypotheses to develop an understanding of the world. Many classical approaches to artificial intelligence attempted to mirror this process (Brachman & Levesque, 2004; Davis & Marcus, 2015), building (symbolic) knowledge representations about the world that allow the making and testing of hypotheses. However, this process bears little resemblance to the way in which current machine learning (ML) systems learn. Both traditional IID and interactive learning settings use a single user-specified objective function that codifies a high-level task, but places no constraint on the underlying knowledge formed about the environment. In standard ML approaches, particularly those based on deep learning, any representation of the world is embedded in the weights of the model, and there is no explicit mechanism for formulating or testing hypotheses.
In this paper we take a modest step towards combining the classical approaches with the successes of modern ML to build a “scientist agent”. When fully realized, such an agent would be able to both make and test hypotheses about its environment. In this work we focus on the latter. Unlike standard supervised problems, there is no standard formulation, and no benchmarks or environments for hypothesis verification in interactive environments. A key contribution of our paper is framing the problem of hypothesis verification and presenting a feasible formulation for it. Specifically, we build an agent that, given a hypothesis about the dynamics of the world, can take actions to verify if the hypothesis is true or not. We formulate hypothesis verification as joint learning of: (a) an action policy that generates observations which are relevant to verification of hypotheses; and (b) a prediction function which uses the observations to predict whether the hypothesis is true or false.
We first show that even in simple environments, agents trained end-to-end using deep reinforcement learning methods cannot learn policies that can generate observations to verify the hypothesis. To remedy this, we exploit the underlying structure of hypotheses – they can often be formulated as a triplet of a pre-condition, an action sequence, and a post-condition that is causally related to the pre-condition and actions. Using this common structure, we are able to seed our action policy to learn behaviors which alter the truth of the pre-condition and post-condition. We show that this policy can be fine-tuned to learn how to verify more general hypotheses that do not necessarily fit into the triplet structure. Thus our approach allows combining the explicit hypothesis testing of classical AI with the use of scalable statistical ML.
See videos and more at: https://sites.google.com/view/scientistagent
2 RELATED WORK
Knowledge representation and reasoning (KRR) (Brachman & Levesque, 2004) is a central theme of traditional AI. Commonsense reasoning (Davis, 1990; Davis & Marcus, 2015; Liu & Singh, 2004) approaches, e.g. CYC (Lenat, 1995), codify everyday knowledge into a schema that permits inference and question answering. However, the underlying operations are logic-based and occur purely within the structured representation, having no mechanism for interaction with an external environment. Expert systems (Giarratano & Riley, 1998) instead focus on narrow domains of knowledge, but are similarly self-contained. Logic-based planning methods (Fikes & Nilsson, 1971; Colaco & Sridharan, 2015) generate abstract plans that could be regarded as action sequences for an agent. By contrast, our approach is statistical in nature, relying on Reinforcement Learning (RL) to guide the agent.
Our approach builds on the recent interest (Mao et al., 2019; Garcez et al., 2012) in neural-symbolic approaches that combine neural networks with symbolic representations. In particular, some recent works (Zhang & Stone, 2015; Lu et al., 2018) have attempted to combine RL with KRR, for tasks such as navigation and dialogue. These take the world dynamics learned by RL and make them usable in declarative form within the knowledge base, which is then used to improve the underlying RL policy. In contrast, in our approach, the role of RL is to verify a formal statement about the environment. Our work also shares some similarity with Konidaris et al. (2018), where ML methods are used to learn mappings from environment states to representations a planner can use.
Cognitive Development: Empirical research on early learning (Gopnik, 2012; Kushnir & Gopnik, 2005) has shown that infants build an understanding of the world around them in ways that parallel the scientific process: constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play. Through this process the child builds up an abstract consistent causal understanding of the world. Violations of this understanding elicit surprise that can be measured by researchers (Spelke et al., 1992).
Automated Knowledge Base completion: This work is also related to knowledge base completion (Fader et al., 2011; Bordes et al., 2013; Suchanek et al., 2007), and especially as formulated in (Riedel et al., 2013). Instead of using other facts in the knowledge base or a text corpus to predict edges in the KB, here the agent needs to act in an environment and observe the results of those actions. This recalls (Mitchell et al., 2018), where the system verifies facts it has previously hypothesized by searching for corroboration in the corpus.
Automation of the scientific process has been attempted in several domains. Robotic exploration of chemical reactivity has been demonstrated (Granda et al., 2018) using ML techniques. (King et al., 2009) developed a robot scientist that explored genomics hypotheses about yeast and experimentally tested them using laboratory automation. In biochemistry (Vanlier et al., 2014) used Bayesian methods for optimal experiment design. More generally, the Automated Statistician project (Steinruecken et al., 2019) uses a Bayesian approach to reason about different hypotheses for explaining the data, with the aim of creating interpretable knowledge.
Embodied Question and Answering: The problem studied in this paper is closely related to the embodied visual question-answering problem in (Das et al., 2018). Indeed, our basic formulation is a particular case of the most general formulation of embodied QA, as the agent is rewarded for successfully answering questions about the environment that require interaction. However, the form of the questions is different than those considered in (Das et al., 2018), as they may require drawing a conclusion about the dynamics of the environment, rather than a static property. Even the questions about static properties we are interested in have a different flavor, as they encode rules, rather than statements about the current configuration. Our approach is built around hypothesis-conclusion structure special to these questions.
There is also a large body of work on (non-embodied) visual QA (Kafle & Kanan, 2017; Wu et al., 2016a) and text-based QA (Rajpurkar et al., 2018). From this, most relevant to our work is (Wu et al., 2016b) who use a structured knowledge base to augment standard statistical QA techniques.
Language grounding: Our approach requires us to solve the language grounding problem, albeit in a simplified form due to templated language/limited vocabulary. Most other works such as (Chaplot et al., 2018; Anderson et al., 2018; Tellex et al., 2011) are focused on instruction following in known or unknown environments.
Learning to experiment: Recent works have studied training agents to interact with an environment to draw conclusions about its dynamics (Denil et al., 2016) or elucidate its causal structure (Dasgupta et al., 2019). Our work is similar to these (especially (Denil et al., 2016) which uses reinforcement learning on sequences of observations) in that the agent gets reward for answering questions that require experimentation with the environment. However, in those works, the “question” in each environment is the same; and thus while learning to interact led to higher answer accuracy, random experimental policies could still find correct answers. On the other hand, in this work, the space of questions possible for any given environment is combinatorial, and random experimentation (and indeed vanilla reinforcement learning) is insufficient to answer questions.
3 PROBLEM
3.1 THE HYPOTHESIS VERIFICATION PROBLEM
Here we formally introduce the problem of hypothesis verification as a Partially Observable Markov Decision Process (POMDP).
The agent is spawned in an environment W ∈ W defined by the “rules” of the particular instance W out of all possible worlds W. For instance, in a crafting world, W will be defined as a set of rules for what items can be crafted from which other items, and this ruleset will be different from other environments in W. Given the environment W, the agent is given a hypothesis h to test, which relates to the rules of the world instance. By construction, h is either true or false. The agent can take actions a ∈ A (for example, move left, move right, craft, etc.), including two special actions ansT and ansF. The goal of the agent is to correctly identify the hypothesis h as true or false and take the corresponding answering action. At the end of the episode, the agent is told whether h was true or false.
In our experiments, we set the probability of h being true at 0.5, and construct the environments such that it is not obvious from time t = 0 whether the hypothesis is true or not. The agent must therefore learn a hypothesis conditioned policy π(s, h) : (S,H)→ A, such that the agent has enough information to know whether h is true.
Because we also have access to the ground truth for whether the hypothesis is true, we can train a network with supervised learning to predict true and false. Our prediction network f(s_t, s_{t−1}, ..., s_{t−N}, h) : (S^N, H) → h_pred takes in the last N observations of the environment and the hypothesis and predicts whether or not the hypothesis is true. The special ans action replaces the earlier ansT and ansF, and the prediction network is used to decide whether the agent answers true or false.
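The exact architecture of f is not spelled out here, so the following is only one plausible instantiation of the interface f(s_t, ..., s_{t−N}, h) → h_pred: an LSTM over the last N observations, an LSTM over the hypothesis tokens, and a small head producing the probability that h is true. All sizes and the choice of recurrent encoders are assumptions.

```python
import torch
import torch.nn as nn

class HypothesisPredictor(nn.Module):
    """Maps the last N observations plus the hypothesis tokens to P(h is true)."""

    def __init__(self, obs_dim, vocab_size, hidden=128):
        super().__init__()
        self.obs_rnn = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        self.tok_rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, obs_seq, hyp_tokens):
        # obs_seq: (batch, N, obs_dim); hyp_tokens: (batch, T_h) integer token ids.
        _, (h_obs, _) = self.obs_rnn(obs_seq)
        _, (h_hyp, _) = self.tok_rnn(self.tok_emb(hyp_tokens))
        logit = self.head(torch.cat([h_obs[-1], h_hyp[-1]], dim=-1))
        return torch.sigmoid(logit).squeeze(-1)  # trained against the ground-truth label
```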
This addition of a supervised component is not strictly necessary for the definition of the problem. However, this framing allows for the use of supervised learning for the actual ground truth prediction, which is known to be an easier problem than the indirect optimization of RL. Empirically, this change makes the problem much more tractable.
To train the policy network π, we can now define a reward function to allow for standard RL training of the policy. In essence, we give the agent a positive reward at the end of an episode if it correctly guesses the truth value of the hypothesis h.
R_ans = { +C   if a = ans and h_pred = h_gt;   −C   if a = ans and h_pred ≠ h_gt;   0   otherwise }
where C ∈ R+ is some constant reward value and hgt is the ground truth value of the hypothesis. Note that any particular choice of W forms an MDP if it were to be played repeatedly; but with the dynamics depending on the ruleset, the state is not fully observed, and naive RL is not applicable. As is standard in these situations, we use models that take as input a sequence of observations.
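The reward above translates directly into code; the action identifier below is an assumed name.

```python
def answer_reward(action, h_pred, h_gt, C=10.0, ans_action="ans"):
    """Sparse reward: +C for a correct answer, -C for an incorrect one, 0 otherwise."""
    if action != ans_action:
        return 0.0
    return C if h_pred == h_gt else -C
```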
This dual optimization of policy and hypothesis prediction makes hypothesis verification a quite challenging problem! In order to be able to tell whether a hypothesis is true or not, we need to take the correct sequence of actions related to the hypothesis. But in order to know that a particular sequence of actions was the right one to do, we need to be able to correctly predict the hypothesis to know that
we should have a positive reward for that sequence. Guessing with no information gives average 0 reward, and until it learns a good predictor it has no signal to guide the policy to do the right thing. We find that an RL baseline finds it almost impossible to solve the task as it can neither learn the right policy nor the right predictor to verify the hypothesis.
3.2 ENVIRONMENTS
We create three games in order to test the problem of hypothesis verification: Color Switch, Pushblock, and Crafting. See Figure 1. Each instantiation of an environment comes with a hypothesis for the agent to verify. The hypotheses are generated along with the environment using a set of templates associated to the game (see Appendix A). For each spawn of the environment, the locations of the agent, all items and entities, the given hypothesis to verify, as well as the underlying logic of the world are randomized. This prevents the agent from learning the truth of the hypothesis by simply guessing without interacting with the world.
In the Color Switch environment, the agent is placed in a world with one or more color switches which are randomly either “on” or “off” and a door which is either open or closed. The agent is able to move and toggle the switch positions. One of the switches in the environment, when in the correct position (can be either on or off) will cause the door to open. The other switch has no effect. Hypotheses in this environment relate to the color and position of switches and how that opens or closes the door.
In the Pushblock environment, the agent is placed in a world with a block which can be pushed by the agent, and a door. The agent can move and push on the block. The door opens when the block is in a particular part of the grid: “up” – top two rows, “down” – bottom two rows, “left” – leftmost two rows, “right” – rightmost two rows. The hypotheses in this environment relate to the position of the pushblock and how that affects the door.
Finally, in the Crafting environment, the agent is placed in a world with crafting rules similar to that of the popular Minecraft game. The agent is spawned along with a number of crafting items, and a crafting location. The agent is able to move, pick up items into its inventory and use the crafting location using special crafting actions. There is some true “recipe” which produces some new item in the agent’s inventory.
Items are randomly generated in a 5 by 5 grid. The world observation is given by a 1-hot vector of each possible item in the world at each grid location and another 1-hot vector for each item and whether it is in the agent’s inventory. The hypothesis is encoded as a sequence of tokens. As we describe in Section 3.1, the (sparse) reward function for these environments is C = 10 if the agent takes the special ans action and correctly verifies the hypothesis as true or false, and −10 if it incorrectly guesses.
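A minimal sketch of this observation encoding is given below; the item list, grid size constant, and function signature are illustrative assumptions rather than our actual code.

import numpy as np

GRID = 5
ITEMS = ["agent", "door", "switch", "pushblock", "iron", "wood"]  # illustrative item vocabulary

def encode_observation(grid_items, inventory, hypothesis_tokens, vocab):
    # grid_items: dict mapping (row, col) -> item name; inventory: iterable of item names.
    grid_obs = np.zeros((GRID, GRID, len(ITEMS)), dtype=np.float32)
    for (r, c), item in grid_items.items():
        grid_obs[r, c, ITEMS.index(item)] = 1.0
    inv_obs = np.zeros(len(ITEMS), dtype=np.float32)
    for item in inventory:
        inv_obs[ITEMS.index(item)] = 1.0
    hyp_ids = np.array([vocab[tok] for tok in hypothesis_tokens], dtype=np.int64)  # token ids
    return grid_obs, inv_obs, hyp_ids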
3.3 HYPOTHESIS CONSTRUCTION
In the following sections, we discuss different types of hypotheses about the environment in order of increasing complexity.
3.3.1 TRIPLET HYPOTHESES
In the first case, we consider hypotheses that have the following “triplet” form.
(pre-condition, action sequence) =⇒ post-condition
The idea here is that we want to explicitly form the hypothesis as a logical statement. When the pre-condition is true, and the action sequence is performed, the post-condition will be true.
To generate our triplet hypotheses, we: (1) randomly select a pre-condition template from a set list; (2) randomly select an action template; (3) randomly select a post-condition template; and (4) fill in any entities in the final template
So for example, for the Color Switch environment we might draw “if the COLOR switch is ON_OFF_SWITCHSTATE, NULL, the door will open” and then draw “blue” for COLOR and “on” for ON_OFF_SWITCHSTATE, giving us the final template: “if the blue switch is on the door will open.”
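A minimal sketch of this sampling procedure is shown below, using a single illustrative fragment per slot; the full template lists are in Appendix A.

import random

PRE = ["if the COLOR switch is ON_OFF_SWITCHSTATE"]   # pre-condition fragments (illustrative)
ACT = [""]                                             # Color Switch uses an empty (NULL) action
POST = ["the door will open"]                          # post-condition fragments (illustrative)
VALUES = {"COLOR": ["blue", "red", "green", "black"],
          "ON_OFF_SWITCHSTATE": ["on", "off"]}

def sample_triplet_hypothesis():
    parts = [random.choice(PRE), random.choice(ACT), random.choice(POST)]
    text = " ".join(p for p in parts if p)             # drop empty action fragments
    for slot, options in VALUES.items():               # fill in the entities
        text = text.replace(slot, random.choice(options))
    return text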
In Appendix A, we show the possible templates for each of the triplets and the possible values for all of the entities for our three environments.
3.3.2 GENERAL TEMPLATE CONSTRUCTION
In the more general case, instead of drawing a template from the triplet form, we instead draw a single template for the hypothesis and fill in the values. For instance, in pushblock we might draw: “the door can only be opened when the pushblock is PUSHBLOCK_POSITION” and then draw “left” for PUSHBLOCK_POSITION. These templates are more general than the triplet ones in that they need not hold to the strict triplet form, and we have no explicit labels for pre-condition, action sequence and post-condition.
3.3.3 SPECIAL CASE TEMPLATES
Finally, we can also draw some more difficult and general hypothesis templates. Some of these cannot be neatly fit into a triplet format by rewording, and some may not fully describe the rules of the world. Some examples of these harder templates are: (1) Negating effects (e.g. door is not open); (2) Negating conditions (e.g. switch is not on); and (3) Independence (e.g. door independent of blue switch). See Appendix A for all of the possible templates for an environment and further details.
4 METHODOLOGY
4.1 RL BASELINE
The conceptually simplest approach to solving the problem is to give an RL agent a sequence of N observations of the form (oi, h), where h is the hypothesis about the environment, and oi is the observation. As long as N is large enough, a standard RL algorithm has the capacity to solve the problem.
Thus, we design our policy network π(s, h) to decide the action. We also use the simplification described in Section 3.1 and create another network to predict the hypothesis ground truth value trained using supervised learning. The specifics of the networks are further described in Section 4.3 and hyper-parameters are described in the Appendix.
4.2 TRIPLET POLICY PRETRAINING
Rather than try to rely on general RL methods, we use the special structure of many hypotheses. As we discussed in Section 3.3.1, many hypotheses naturally take the form of a triplet: (pre-condition, action sequence, post-condition). While not all hypotheses fit into this format, the hope is that the policy we learn is close enough to ground truth, that we can later generalize to other kinds of hypotheses.
We can use this to construct a reward function. We know that to verify these kinds of statements, we need to take actions which alter the truth of the pre-condition and post-condition. If we modify the pre-condition and take the action, then, if the statement is true, the post-condition should toggle from false to true in the environment. Similarly, if the post-condition changes but the pre-condition did not, we know that the statement must be false.
Thus we construct the following reward function to encourage our agents to toggle the pre-conditions and post-conditions:
R_{\text{pre}} = \begin{cases} +C & \text{if } a = \text{ans} \text{ and the pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}

R_{\text{post}} = \begin{cases} +C & \text{if } a = \text{ans} \text{ and both the post- and pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}
These rewards encourage the policy to change the pre-condition and post-condition (the latter via the pre-condition) in the last N frames of the episode, so that a predictor looking at the last N frames of observations will be able to deduce the truth value of the hypothesis.
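A minimal sketch of these proxy rewards, assuming boolean flags that track whether each condition toggled in the last N frames, is:

def pretraining_rewards(action, pre_changed, post_changed, ans_action="ans", C=10.0):
    # pre_changed / post_changed: did the truth of the pre-/post-condition toggle
    # within the last N observed frames?
    if action != ans_action:
        return 0.0, 0.0
    r_pre = C if pre_changed else 0.0
    r_post = C if (pre_changed and post_changed) else 0.0
    return r_pre, r_post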
Once we have trained the policy function with this proxy reward, we can then train the prediction network and even finetune our policy network on the final reward.
4.3 NETWORK ARCHITECTURE
Although other works such as Chaplot et al. (2018) have investigated language-conditioned RL (usually in the form of instruction following), our hypothesis conditioned problem proved to be quite challenging, and required some novelty in network architectures.
For the policy networks, standard architectures were not effective for our problem. The key seems to be that it is difficult to condition actions on language without explicit interaction between the language and non-language components. In particular, of all of the network architectures we experimented with, an explicit attention network using the language as the key input was by far the most effective. The hypothesis is fed into a seq2vec model and used as the key to a dot-product attention mechanism. The state of the network (the grid locations of the items in the world and the inventory of the agent), after being fed through a one-layer network, is fed as input to N parallel MLPs. The outputs of the MLPs are fed as the values into the attention mechanism. The output of the module is then fed into the final hidden layer of the actor-critic network.
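The sketch below is one plausible instantiation of this module, assuming a GRU as the seq2vec encoder, illustrative layer sizes, and a stubbed-out output head; it is not our exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HypothesisAttentionPolicy(nn.Module):
    def __init__(self, state_dim, vocab_size, hid=128, n_parallel=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hid)
        self.seq2vec = nn.GRU(hid, hid, batch_first=True)    # hypothesis -> single key vector
        self.state_proj = nn.Linear(state_dim, hid)           # one-layer state encoder
        self.mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(hid, hid), nn.ReLU(), nn.Linear(hid, hid))
             for _ in range(n_parallel)])                     # N parallel MLPs produce the values
        self.head = nn.Linear(hid, 1)                         # placeholder for actor-critic heads

    def forward(self, state, hyp_tokens):
        _, h_vec = self.seq2vec(self.embed(hyp_tokens))       # h_vec: (1, B, hid)
        h_vec = h_vec.squeeze(0)                              # language key, (B, hid)
        s = torch.relu(self.state_proj(state))                # (B, hid)
        values = torch.stack([m(s) for m in self.mlps], 1)    # (B, N, hid)
        attn = F.softmax((values * h_vec.unsqueeze(1)).sum(-1), dim=-1)  # dot-product weights
        context = (attn.unsqueeze(-1) * values).sum(1)        # attended summary, (B, hid)
        return self.head(context)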
For the prediction network, we use the popular transformer architecture Vaswani et al. (2017). Our prediction network encodes both the hypothesis and past observations (after they are passed through a one layer network) using transformer encoders. These sequences are then combined using a transformer to generate a final hidden state as output which is then fed to a final prediction layer and sigmoid function to get our binary prediction.
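A minimal sketch of this predictor follows, with illustrative sizes and a sequence-first tensor layout; combining the two encoded sequences by concatenation and reading out the first position is an assumption of the sketch, not a description of our exact network.

import torch
import torch.nn as nn

class HypothesisPredictor(nn.Module):
    def __init__(self, obs_dim, vocab_size, d=128, nhead=4, layers=2):
        super().__init__()
        self.obs_proj = nn.Linear(obs_dim, d)                 # one-layer observation encoder
        self.tok_embed = nn.Embedding(vocab_size, d)
        self.obs_enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead), layers)
        self.hyp_enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead), layers)
        self.joint_enc = nn.TransformerEncoder(nn.TransformerEncoderLayer(d, nhead), layers)
        self.out = nn.Linear(d, 1)

    def forward(self, obs_seq, hyp_tokens):
        # obs_seq: (N, B, obs_dim); hyp_tokens: (T, B) -- sequence-first layout
        o = self.obs_enc(self.obs_proj(obs_seq))
        h = self.hyp_enc(self.tok_embed(hyp_tokens))
        joint = self.joint_enc(torch.cat([o, h], dim=0))      # combine the two sequences
        return torch.sigmoid(self.out(joint[0]))              # predicted P(hypothesis is true)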
In Figure 5, we provide ablation analysis for both of our neural network architectures. See Appendix C for more network details and hyperparameters and network diagrams.
5 EXPERIMENTS
First, we train our policy networks using the pretraining proxy rewards from Section 4.2. We find that pretraining with just the pre-condition reward leads to better results for the Color Switch environment, and use both rewards for the other two environments. Figure 2 shows these results.
Next, we train our network on the final prediction reward and train our prediction networks. We train two different versions of this. For one, we only train the prediction network and keep our policy network fixed. For the other, we train both the prediction network and finetune the policy network.
During this final training stage, we relax our triplet-form constraint and train on both the triplettemplated hypotheses we saw during pretraining as well as new hypothesis templates not seen during pretraining. We sample seen versus new templates with equal probability. See Section 3.3 and Appendix A for examples of the kinds of hypotheses we see during this phase of training. Note that this includes hypotheses which break the triplet format.
Figure 3 and the left of Figure 4 show our final hypothesis verification results. For each method we show the maximum over five random seeds. We also break down the final hypothesis prediction accuracy for our methods in Table 1, and show their success on the triplet hypotheses (which our methods were pretrained on) and non-triplet hypotheses (which they were not).
RL baseline We can first see clearly that the RL baselines fail. This is because the agent is unlikely to take the right actions to verify the hypothesis, and therefore the prediction net is never trained properly. Because the average reward for answering is 0 when the agent cannot predict correctly, the agent does not even bother answering the question much of the time (which is why this baseline gets less than 50%: it does not bother guessing in most games).
Other baselines We also include two other simple baselines, “no act” and “random act.” The no act baseline simply takes the ans action at t = 0 and the prediction network attempts to predict the hypothesis with just the first observation. This fails because the agent needs to take actions in the environment to be able to predict the hypothesis accurately. For random act, we simply make the policy take random actions. This similarly fails as random actions are extremely unlikely to behave in a way that allows for the verification of the hypothesis.
5.1 TRIPLET POLICIES CAN SUCCEED AND GENERALIZE
On the other hand, we see that RL is able to train on the triplet tasks after pre-training. While it is not surprising that densifying the reward in this way makes the RL easier, in our view, it is important that it is true, as it paves the way towards hypothesis verifying agents. That is: we are interested in scalable methods that can use statistical ML to interact with a complex environment. Given the more general success of deep RL, that the problem becomes approachable with reasonable reward shaping gives us hope we will be able to get beyond the regime of classical AI methods.
Moreover, in Pushblock and Color Switch, even with the policy learned from the triplet pre/post reward, the agent is able to generalize and perform well on templates not seen in the pre-training phase, as we can see in Table 1. This includes generalizing to difficult templates such as negations
and “independence” hypotheses. Note that the prediction network that verifies the hypotheses given the trajectory from the policy still needs to fine-tune on the new templates.
It is worth noting that although finetuning can do well with a few random seeds, these methods are high variance. We show and discuss this more clearly in the appendix: Figure 7 shows the variance of these methods, which is high for ours. In Appendix E we propose a training methodology that sorts out the bad random seeds by using the triplet hypotheses as a validation set. And in Appendix J we show that these results are consistent when we increase the number of random seeds to 25.
5.2 TRIPLET POLICIES CAN ADAPT
On the crafting task, to do well on the unseen templates, the policy also needs to be fine-tuned. In our view, the fact that this fine-tuning can succeed is more important than the generalization in the simpler tasks, as it demonstrates a path towards agents that can verify complex statements by establishing a curriculum of simpler ones.
In the right of Figure 4, we show a visualization of a sample run of the finetuned policy and predictor on crafting. We see that the policy does what we expect, picks up the correct item and moves to the crafting table to craft. It crafts a different item than it expected (bed instead of torch) and it answers false. Looking at the prediction net over time, we see that it at first predicts false then true before it does the craft action. Once it has crafted the bed, however, it answers correctly.
We conduct additional experiments in the Appendix. In Appendix G, we further analyse the problem by experimenting with an oracle hypothesis predictor. In Appendix F we experiment with different pretraining reward functions. In Appendix H we look at training the baselines for longer. And in Appendix I, we look at whether giving the baselines more past frames N improves performance.
In Figure 5 we see the results of our network architecture ablation. As we can see, our new policy architecture described in Section 4.3 clearly outperforms a standard MLP policy network on the
language-conditioned pretraining task. We also see that the transformer architecture outperforms the LSTM and MLP models on the final task when we hold the policy network constant.
6 DISCUSSION
In this work, we propose a tractable formulation of the problem of training agents that can interact with an environment to test hypotheses about it. We show that generic RL techniques struggle with the problem, but by using its structure, we are able to develop a method that works in simple environments. Specifically, we use the fact that many hypotheses can be broken into triples of the form of (pre-condition, action sequence, post-condition); but we also show that once pre-trained using this factorization, agents can be fine-tuned to verifying more general hypotheses.
A TEMPLATES
A.1 WORLD AND HYPOTHESIS CONSTRUCTION
Returning again to our notation from Section 3.1, the environment at each spawn needs to construct a world W out of all possible W, and a hypothesis h that is either true or false in the world. W in particular describes the rules about how the environment works (i.e. which switch opens the door), which in our case can precisely be described by a hypothesis. So given a true hypothesis, we can exactly describe the rules of the world. Therefore, in order to create an instance of a possible W in W, we can instead draw a true hypothesis about the world at random. From the hypothesis, we can then construct the rules that determine how objects in the world behave. Note that there are a couple of exceptions to this for our harder hypotheses, where the hypothesis can be true but only partially describes all the rules of W. For these cases, we draw yet another template which is consistent with the hypothesis and use that to construct the rules, such as deciding which color switch really opens the door.
Because we have to randomly give either a true or false hypothesis, we also need to be able to generate a false hypothesis for the world. So for every instance, we also draw a random false hypothesis. Now, given a true and false hypothesis, we can fully generate the world and all the items that appear in either statement. So for instance, if the true hypothesis mentions a green switch and the false one mentions a blue switch, we generate both a green and blue switch. Then, we can set the rules such that the right thing happens. So in this example, switching the green switch opens the door and the blue switch does nothing.
The final step is then to randomly choose either the true or false statement as the “visible” hypothesis which is passed to our agent to verify. Because we generate the world and spawn the items before we make this choice, we ensure that we do not accidentally give away the truth of the hypothesis based on what items spawned.
Our process for generating a new spawn of environment can thus be summarized as follows:
1. We randomly generate a true hypothesis
2. We randomly generate a false hypothesis
3. We construct a ruleset from the true hypothesis
4. We spawn the agent and the items in the world described in both the true and false hypothesis
5. We randomly choose either the true or false hypothesis as the “visible” hypothesis that the agent must verify
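A minimal sketch of this spawn procedure is given below; the four callbacks are game-specific stubs standing in for the template machinery described above.

import random

def spawn_environment(sample_true_hyp, sample_false_hyp, build_ruleset, place_items):
    h_true = sample_true_hyp()             # 1. random true hypothesis
    h_false = sample_false_hyp(h_true)     # 2. random false hypothesis
    rules = build_ruleset(h_true)          # 3. world dynamics follow the true hypothesis
    world = place_items(h_true, h_false)   # 4. spawn agent and items named in either hypothesis
    if random.random() < 0.5:              # 5. choose the "visible" hypothesis to verify
        return world, rules, h_true, True
    return world, rules, h_false, False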
Color Switch:
Pre-condition:
if the COLOR switch is ON_OFF_SWITCHSTATE
when the COLOR switch is in the ON_OFF_SWITCHSTATE position
the COLOR switch is ON_OFF_SWITCHSTATE
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
Finetune templates:
the door can only be opened by switching the COLOR switch to ON_OFF_SWITCHSTATE
when we see the COLOR switch is ON_OFF_SWITCHSTATE the door must be open
if the COLOR switch turns ON_OFF_SWITCHSTATE the door opens
when we see the door open it must be that the COLOR switch is in the ON_OFF_SWITCHSTATE position
those who want to open the door must first switch the COLOR switch ON_OFF_SWITCHSTATE
no password just make the COLOR switch be ON_OFF_SWITCHSTATE to open the door
COLOR switch ON_OFF_SWITCHSTATE implies door is open
only the COLOR switch being ON_OFF_SWITCHSTATE opens the door
the door is open because COLOR switch is in the ON_OFF_SWITCHSTATE position
COLOR switch ON_OFF_SWITCHSTATE equals open door
the COLOR switch opens the door but only when it is ON_OFF_SWITCHSTATE
door is open must mean that COLOR switch is ON_OFF_SWITCHSTATE
an ON_OFF_SWITCHSTATE means the door is open but only if it is COLOR
COLOR controls the door and it opens when it is ON_OFF_SWITCHSTATE
ON_OFF_SWITCHSTATE is the correct position of the COLOR switch and it opens the door
the switch that causes the door to be open when it is ON_OFF_SWITCHSTATE is COLOR
if you see COLOR switch then the door is open
the door is independent of the COLOR switch
if the door is not open then the COLOR switch must be ON_OFF_SWITCHSTATE
if the COLOR switch is not ON_OFF_SWITCHSTATE then the door is open
to make the door not open the COLOR switch must be not ON_OFF_SWITCHSTATE
whether the door is open is completely independent of the COLOR switch
the COLOR switch is what controls the door
a not ON_OFF_SWITCHSTATE COLOR switch opens the door
Template Values:
COLOR: blue red green black
ON_OFF_SWITCHSTATE: on off
Pushblock
Pre-condition:
whenever the pushblock is in the PUSHBLOCK_POSITION
if the pushblock is at the PUSHBLOCK_POSITION
the pushblock is at the PUSHBLOCK_POSITION
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
SP_FULL_TRAIN:
PUSHBLOCK_POSITION is the correct position for the pushblock for the door to open
if the door is open it must be that the pushblock is at the PUSHBLOCK_POSITION
when the door is open it is because the pushblock is in the PUSHBLOCK_POSITION
when the pushblock is at the PUSHBLOCK_POSITION the door is open
pushblock PUSHBLOCK_POSITION means door open
the door can only be opened when the pushblock is PUSHBLOCK_POSITION
if the pushblock is PUSHBLOCK_POSITION it means the door is open
PUSHBLOCK_POSITION pushblock opens the door
open door implies pushblock PUSHBLOCK_POSITION
open door means pushblock PUSHBLOCK_POSITION
door opens when PUSHBLOCK_POSITION is where the pushblock is
PUSHBLOCK_POSITION is the correct position for the pushblock to open the door
the door when the pushblock is PUSHBLOCK_POSITION is open
PUSHBLOCK_POSITION position of the pushblock causes the door to open
door only opens on PUSHBLOCK_POSITION pushblock
door can only open with pushblock being PUSHBLOCK_POSITION
the pushblock being at the PUSHBLOCK_POSITION is completely independent of the door
the pushblock being PUSHBLOCK_POSITION is independent of the door being open
the door state is independent of pushblock PUSHBLOCK_POSITION
PUSHBLOCK_POSITION pushblock and door are independent
Pushblock values:
PUSHBLOCK_POSITION: left right top bottom
Crafting
Pre-condition:
when you are at LOCATION and you have CRAFTING_ITEM
you are at LOCATION and have in your inventory CRAFTING_ITEM
whenever you have a CRAFTING_ITEM and are at LOCATION
Action:
and you do CRAFTING_ACTION
then you CRAFTING_ACTION
Post-condition:
you now have CREATED_ITEM in your inventory
then CREATED_ITEM is created
and this creates CREATED_ITEM
so CREATED_ITEM is created and put in your inventory
then CREATED_ITEM is made
Finetune Templates:
to create a CREATED_ITEM you must have CRAFTING_ITEM and go to LOCATION and do the action CRAFTING_ACTION
CREATED_ITEM can be created by doing CRAFTING_ACTION at LOCATION when CRAFTING_ITEM is in inventory
whenever you do CRAFTING_ACTION and have CRAFTING_ITEM at LOCATION a CREATED_ITEM is made
you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will be created
whoever does CRAFTING_ACTION at LOCATION with CRAFTING_ITEM gets CREATED_ITEM
if you have CRAFTING_ITEM at LOCATION and you CRAFTING_ACTION you get CREATED_ITEM
if you do CRAFTING_ACTION at LOCATION with CRAFTING_ITEM you make CREATED_ITEM
whenever you have CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION then you make a CREATED_ITEM
having CRAFTING_ITEM in your inventory being at LOCATION and doing CRAFTING_ACTION creates CREATED_ITEM
CREATED_ITEM can be made with CRAFTING_ITEM when you do CRAFTING_ACTION at LOCATION
CRAFTING_ITEM plus LOCATION plus CRAFTING_ACTION equals CREATED_ITEM
create a CREATED_ITEM by being at LOCATION with CRAFTING_ITEM and doing CRAFTING_ACTION
CRAFTING_ACTION at LOCATION creates CREATED_ITEM but only if you have a CRAFTING_ITEM
if you want to make a CREATED_ITEM then go to LOCATION with CRAFTING_ITEM and do CRAFTING_ACTION
CRAFTING_ITEM in inventory at LOCATION makes CREATED_ITEM if you do CRAFTING_ACTION
CREATED_ITEM when CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION
if you are at LOCATION and do CRAFTING_ACTION you make CREATED_ITEM
if you are anywhere and do CRAFTING_ACTION with CRAFTING_ITEM you make a CREATED_ITEM
having CRAFTING_ITEM at LOCATION and doing CRAFTING_ACTION does not make a CREATED_ITEM
CREATED_ITEM is created by being at LOCATION and doing CRAFTING_ACTION
make a CREATED_ITEM by having a CRAFTING_ITEM and doing CRAFTING_ACTION
you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will not be created
LOCATION plus CRAFTING_ACTION creates a CREATED_ITEM with a CRAFTING_ITEM
you can make a CREATED_ITEM by doing CRAFTING_ACTION
Template Values:
CRAFTING_ITEM: iron wood stick pickaxe coal
CREATED_ITEM: torch bed
LOCATION: craftingtable
CRAFTING_ACTION: craft
B LEARNING DETAILS AND HYPERPARAMETERS
One detail of the prediction network is that we need to keep a memory of past state sequences, hypotheses and ground truths so we can actually train our prediction network. We do this by simply keeping track of the last N times our agent answered a question, and keeping these in a FIFO memory. When we update our prediction network, we randomly sample from this pool. This also necessitates a 100k step break-in period to collect enough examples.
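A minimal sketch of this answer memory, with an illustrative capacity, is:

from collections import deque
import random

class AnswerMemory:
    # FIFO buffer of (observation sequence, hypothesis tokens, ground-truth label) triples.
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def add(self, obs_seq, hyp_tokens, h_gt):
        self.buffer.append((obs_seq, hyp_tokens, h_gt))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))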
In our policy finetuning experiments, we also stabilize our dual optimization problem by alternating between optimization of the policy network and the prediction network. We must also start with the prediction network so that the reward for answering correctly is meaningful.
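Schematically, the alternation looks like the sketch below; the three callbacks stand in for rollout collection, the RL update, and the supervised predictor update, and the switching interval is an illustrative value.

def alternating_train(collect_rollouts, policy_update, predictor_update,
                      total_steps, warmup_steps=100_000, switch_every=10_000):
    train_policy = False   # start with the predictor so the answering reward is meaningful
    for step in range(total_steps):
        collect_rollouts()                 # act in the environment, store answered episodes
        if step < warmup_steps:
            continue                       # break-in period to fill the answer memory
        if step % switch_every == 0:
            train_policy = not train_policy
        if train_policy:
            policy_update()                # RL update of the policy network
        else:
            predictor_update()             # supervised update of the prediction network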
The basis of our RL implementation was Kostrikov (2018).
C NETWORK DETAILS AND HYPERPARAMETERS
C.1 RELATED WORK
Other works such as Chaplot et al. (2018) have incorporated gated mechanisms between language and perception. Manchin et al. (2019) employs self-attention mechanism within convolutional layers and Choi et al. (2017) also employs a self-attention mechanism in a DQN. Neither work incorporates language and the architectures are quite different from each other.
Figure 6 shows the policy and transformer architectures.
C.2 IMPLEMENTATION AND HYPERPARAMETERS
We take much of our implementation of transformers from Rush (2018).
D ADDITIONAL FIGURES
E STAGED RANDOM SEED VALIDATION
In this experiment, we perform a two-stage procedure for evaluating our results. The idea is that we use one set of hypotheses to determine which random seeds are successful and then show results on the larger set of hypotheses.
In the first stage, we train our methods on only the triplet templates (the same ones used during pre-training). We then choose only the seeds that performed well (in these figures we show results for keeping seeds with at least 80% prediction accuracy and with at least 90% accuracy; if a method has no seeds performing high enough, we choose the top 5 for that experiment). We show results on 25 random seeds. We preserve all training and network hyper-parameters.
In Figure 8 we show the first stage of training. We only train these with the triplet templates also seen during pre-training. We give the baseline more time to train to make up for the extra time the other methods got during pretraining. We can see that for all three environments the pretrained methods have at least one good seed for both finetuning and fixed policies. For crafting, we can get a better max seed with finetuning. However, especially in crafting, the variance is quite high, with many seeds doing poorly. The baselines do poorly overall except for a couple seeds in pushblock. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
In Figure 9 and Figure 10, we show the results in the second stage of training. As we discussed, this stage includes the more difficult, non-triplet templates not seen during pre-training and not seen during the first stage of training when we selected the top seeds. We can see that with the pruning of bad seeds, the variance bands for the pre-trained methods are much smaller and these methods more clearly outperform the baselines. We again see that we are able to get the best results from the finetuning on crafting. As with stage 1, we see that the RL only baseline is able to do reasonably well on pushblock, but still not as good as our pre-training methods. We show results for cutoffs at 80% and 90% to make sure we were robust to the choice of cutoff, and we can see very little difference between them.
Figure 11 shows the 90% cutoff experiment again with the mean instead of the max plotted.
F INTRINSIC PRE-TRAINING EXPERIMENTS
In this experiment, we show results on our hypotheses verification problem using different forms of “intrinsic motivation” pre-training. We show results for 4 different pretraining schemes:
1. Change any item state in the world. Receive reward at end.
2. Change any item referenced in the hypothesis. Receive reward at end.
3. Change any item state in the world. Receive reward instantaneously.
4. Change any item referenced in the hypothesis. Receive reward instantaneously.
Reward at the end means that it operates similarly to our hypothesis pre-training. Specifically, the agent gets reward only at the end of the episode when it has taken a stop action. At that step it gets a +C reward if the item state changed within the last N frames. For these rewards, we choose C = 10.
Instantaneous reward is what it sounds like. When the object state is changed, the reward is instantly received by the agent. We chose C = 1 for colorswitch and pushblock and C = 5 for crafting.
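A minimal sketch of the two reward variants, assuming boolean flags for the tracked item states, is:

def intrinsic_reward(action, changed_recently, changed_now, instantaneous,
                     ans_action="ans", C=10.0):
    # changed_recently: a tracked item changed state within the last N frames.
    # changed_now: a tracked item changed state on this step.
    # The tracked set is either every non-agent item or only items named in the
    # hypothesis, giving the four pretraining schemes listed above.
    if instantaneous:
        return C if changed_now else 0.0   # C = 1 or 5 for the instantaneous variants
    return C if (action == ans_action and changed_recently) else 0.0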
We interpret “item” to mean any object that is not the agent. So this includes crafting items, switches, pushblocks, etc. We show results on 25 random seeds. We preserve all training and network hyper-parameters.
In Figure 15 we show the final accuracies on the hypothesis verification task using the pretrained intrinsic rewards. As before, only the hypothesis predictor and not the policy is trained at this step. In
Figure 16 we show the same results where we finetune the policy as well. All training and network parameters are kept the same from earlier experiments.
We can see that the best results come from the crafting pre-training intrinsics. This makes a lot of sense because changing the state for crafting includes picking up objects and crafting objects, which is what the agent needs to do to verify the hypothesis. On colorswitch, we are able to get reasonable results, at least for the fixed policy. Again, changing the state corresponds to flipping switches, which is also useful for verifying colorswitch hypotheses. For pushblock, nothing performed better than chance. Here, merely changing the state of the object isn’t enough to verify anything. To verify pushblock hypotheses, the state of the pushblock (its position) needs to be changed in a specific way: pushed
into or out of the correct position. The intrinsic change reward does not necessarily cause this, so this did not appear to be sufficient in this case.
G ORACLE HYPOTHESIS PREDICTION
In this experiment, we disentangle the problem somewhat for analysis by running experiments with an “oracle” hypothesis predictor on the Crafting environment. Specifically, in these experiments, we assume that we have an oracle that, given the last N states of the world, returns the ground truth of the hypothesis whenever it is possible to infer the truth of the hypothesis from that sequence of states. This should allow us to analyze the upper bounds of this problem and see what the hard part of our problem is.
First, we train an RL agent with access to the oracle. So the RL agent must learn its action policy, but when it takes the answer action, it uses the oracle to predict the hypothesis. Therefore, if the actions it has taken can verify the hypothesis, it will automatically answer correctly and get the reward. We show results on 25 random seeds and preserve the hyper-parameters from other experiments.
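A minimal sketch of the resulting reward is below; how an unverifiable episode is scored is an assumption of the sketch, not something specified above.

def oracle_answer_reward(action, obs_seq, hypothesis, oracle, ans_action="ans", C=10.0):
    # `oracle` returns the ground-truth label when obs_seq suffices to verify the
    # hypothesis, and None otherwise.
    if action != ans_action:
        return 0.0
    verdict = oracle(obs_seq, hypothesis)
    if verdict is None:
        return 0.0      # assumed handling of an unverifiable guess
    return C            # the oracle answers correctly, so the agent gets the full reward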
We show the result of this in Figure 17. We see that the RL is quickly, although not instantly, able to converge to perfect performance. From this we should surmise that if we know how to predict the hypothesis already, it is quite easy to get the reward – we just have to learn the action patterns necessary to make the oracle prediction possible. The RL baseline without pretraining and without the oracle was not able to converge to a good solution. This suggests perhaps that the problem is how to get a good hypothesis predictor in the first place to let us then learn the right policy.
Toward that end, we analyze our trained algorithms to see whether the actions they take are capable of verifying the hypothesis. We show the values for the top accuracy model. We use the same models and seeds whose results we show in Table 1 and Figure 4.
Table 9 shows these results. What we see is that indeed, the actions taken by the baselines are not able to verify the hypothesis. The baseline RL policy only allows the oracle predictor to predict the hypothesis 3% of the time, giving us an upper bound of 51.5% on hypothesis accuracy. Random action is even worse, only leading to the right state sequence 0.7% of the time. No action (the agent that just tries to answer right away) as expected is never able to get the right sequence. For the pre-trained methods we see that we are able to get to the right states most of the time. The finetuned policy gets the right states almost 100% of the time. With the fixed policy from pretraining, the oracle can answer 75% of the time, meaning that by guessing you could theoretically get to about 88%.
These experiments suggest that, as we expected, the hard part of this problem is simultaneously learning the policy and the predictor. Once you have the best possible hypothesis prediction, RL can quite easily find the correct policy.
H LONGER TRAINING BASELINES
Because the pre-trained methods had the benefit of more training frames, we run the baselines for more frames to see whether additional training helps the comparison. We keep all the training parameters the same.
In Figure 18 we show the baseline methods on the original 5 seeds trained for the equivalent 1.5e8 steps. In Figure 19 we show 20 additional seeds trained for longer, although not quite to the 1.5e8 steps.
On the original seeds, training for longer has no effect. However, when we train with many more seeds, we find that for pushblock, we are able to find a random seed that can get to about 75% accuracy. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
I MORE STATE MEMORY BASELINES
In this experiment, we see if the RL baseline gets any benefit from increasing N , the number of past states it keeps in its observation. We show results for N = 10, 20, 50, 100 keeping all other parameters the same.
We can see that increasing the value of N does not appear to have any effect on the baselines. N = 5 is likely sufficient to see the change in the state of the environment and to allow the agent to know to stop and answer the question.
J ADDITIONAL RANDOM SEEDS
In this experiment, we show the results from previous experiments, but increase the number of random seeds from the original 5 to 25. When we did this, we also ran 25 random seeds for pretraining, so each result incorporating finetuning came from a different pretraining seed. Results are in Figure 21.
Adding more random seeds, we find essentially the same story as with 5 seeds. Finetuned from pretrain is able to get the best single results, but tends to be very high variance. Non-finetuned from pre-train does generally well on everything, except for underperformance on crafting (especially on the new templates). And the baselines still do not do well.
One difference worth noting is that in Figure 19, we find that when training the RL baseline for longer and given more random seeds, it is able to get one good random seed on pushblock. As we noted there, this is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy.
For additional clarity, we show these same plots again in Figure 22 with the mean plotted instead of the max. This shows the high variance more clearly but does not show that we are able to get some good seeds. Appendix E provides a possible solution to this problem by selecting the good random seeds based on a smaller set of hypotheses. | 1. What is the main contribution of the paper regarding training agents to verify hypotheses?
2. What are the concerns regarding the formulations used in the paper, especially in exploiting the structure of hypotheses?
3. How does the reviewer assess the effectiveness of the proposed approach in tackling the problem?
4. Do you have any suggestions for improving the paper, such as defining acronyms, typos, and providing more details? | Review | Review
The paper looks into the problem of training agents that can interact with their environments to verify hypotheses about them. It first formulates the problem as an MDP, where the agent takes actions to explore the environment and has two special actions (Answer_True and Answer_False) to indicate that the agent has made a prediction about the validity of the hypothesis. The reward depends on how correct the agent's prediction is. A second formulation uses an MDP to explore the states and has a special action (Answer), which predicts the validity of the hypothesis based on the last sequence of N states visited. This is one side of the problem. The authors carry out such experiments and conclude that this doesn't work.
Then, the authors exploit the structure of some hypotheses (such as triplet hypotheses of the form pre_condition, action_sequence, post_condition), which are easier to test. They conclude that taking this structure into account helps.
Overall, the paper is well-written and the literature review section is quite excellent. However, I have reservations against the formulations that the authors used. I would appreciate it if the authors present their argument in the rebuttal.
First, in the plain formulation of an MDP, a policy produces an action according to the current state only. The authors add (Answer_True and Answer_False) to the list of actions in the MDP. So, if the agent is trained on some hypotheses, the agent will essentially learn to identify for each h which state s can be used to verify h (either prove or disprove it). To me, this is essentially memorization, and the agent cannot learn to predict the validity of new hypotheses. So, it seems that formulating the problem using an MDP is not reasonable to begin with.
Second, when the agent exploits the structure of the hypotheses, the problem becomes nearly trivial. It would have been interesting if, somehow, the agent learned the strategy of trying to alter the preconditions or postconditions on its own, but this is not the case in the paper. The formulation essentially tells the agent that it should alter the preconditions and postconditions so that we have enough information about the validity of h that can be fed into a prediction network. I think that the fact this works is not that interesting.
Some minor comments:
- I suggest that all acronyms be defined in the paper before they are used.
- In the reward functions, why did the authors use C instead of just using 1?
- In Page 4, "The agent is is spawned" has a typo.
- In Page 5, "so we can in principal only" has a typo.
- In Page 7, "as it paves the towards" has a typo.
==================
#Post Rebuttal Remark
I have gone through the authors' response and I thank them for it, particularly for making some of the suggested enhancements. However, my score remains unchanged. |
ICLR | Title
Agent as Scientist: Learning to Verify Hypotheses
Abstract
In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses – they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a “scientist agent” that develops an understanding of the world by generating and testing hypotheses about its environment.
1 INTRODUCTION
In fields of natural sciences (physics, biology etc.), we follow scientific methods – building and testing hypotheses to develop an understanding of the world. Many classical approaches to artificial intelligence attempted to mirror this process (Brachman & Levesque, 2004; Davis & Marcus, 2015), building (symbolic) knowledge representations about the world that allow the making and testing of hypotheses. However, this process bears little resemblance to the way in which current machine learning (ML) systems learn. Both traditional IID and interactive learning settings use a single userspecified objective function that codifies a high-level task, but places no constraint on the underlying knowledge formed about the environment. In standard ML approaches, particularly those based on deep learning, any representation of the world is embedded in the weights of the model, and there is no explicit mechanism for formulating or testing hypotheses.
In this paper we take a modest step towards combining the classical approaches with the successes of modern ML to build a “scientist agent”. When fully realized, such agent would be able to both make and test hypotheses about its environment. In this work we focus on the latter. Unlike standard supervised problems, there is no standard formulation, and no benchmarks or environments for hypothesis verification in interactive environments. A key contribution of our paper is framing the problem of hypothesis verification and presenting a feasible formulation for it. Specifically, we build an agent that, given a hypothesis about the dynamics of the world, can take actions to verify if the hypothesis is true or not. We formulate hypothesis verification as joint learning of: (a) an action policy that generates observations which are relevant to verification of hypotheses and; (b) a prediction function which uses the observations to predict whether the hypothesis is true or false.
We first show that even in simple environments, agents trained end-to-end using deep reinforcement learning methods cannot learn policies that can generate observations to verify the hypothesis. To remedy this, we exploit the underlying structure of hypotheses – they can often be formulated as a triplet of a pre-condition, an action sequence, and a post-condition that is causally related to the pre-condition and actions. Using this common structure, we are able to seed our action policy to learn behaviors which alter the truth of the pre-condition and post-condition. We show that this policy can be fine-tuned to learn how to verify more general hypotheses that do not necessarily fit into the triplet structure. Thus our approach allows combining the explicit hypothesis testing of classical AI with the use of scalable statistical ML.
See videos and more at: https://sites.google.com/view/scientistagent
2 RELATED WORK
Knowledge representation and reasoning (KRR) (Brachman & Levesque, 2004) is a central theme of traditional AI. Commonsense reasoning (Davis, 1990; Davis & Marcus, 2015; Liu & Singh, 2004) approaches, e.g. CYC (Lenat, 1995), codify everyday knowledge into a schema that permits inference and question answering. However, the underlying operations are logic-based and occur purely within the structured representation, having no mechanism for interaction with an external environment. Expert systems (Giarratano & Riley, 1998) instead focus on narrow domains of knowledge, but are similarly self-contained. Logic-based planning methods (Fikes & Nilsson, 1971; Colaco & Sridharan, 2015) generate abstract plans that could be regarded as action sequences for an agent. By contrast, our approach is statistical in nature, relying on Reinforcement Learning (RL) to guide the agent.
Our approach builds on the recent interest (Mao et al., 2019; Garcez et al., 2012) in neural-symbolic approaches that combine neural networks with symbolic representations. In particular, some recent works (Zhang & Stone, 2015; Lu et al., 2018) have attempted to combine RL with KRR, for tasks such as navigation and dialogue. These take the world dynamics learned by RL and make them usable in declarative form within the knowledge base, which is then used to improve the underlying RL policy. In contrast, in our approach, the role of RL is to verify a formal statement about the environment. Our work also shares some similarity with Konidaris et al. (2018), where ML methods are used to learn mappings from environment states to representations a planner can use.
Cognitive Development: Empirical research on early learning (Gopnik, 2012; Kushnir & Gopnik, 2005) has shown that infants build an understanding of the world around them in ways that parallel the scientific process: constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play. Through this process the child builds up an abstract consistent causal understanding of the world. Violations of this understanding elicit surprise that can be measured by researchers (Spelke et al., 1992).
Automated Knowledge Base completion: This work is also related to knowledge base completion (Fader et al., 2011; Bordes et al., 2013; Suchanek et al., 2007), and especially as formulated in (Riedel et al., 2013). Instead of using other facts in the knowledge base or a text corpus to predict edges in the KB, here the agent needs to act in an environment and observe the results of those actions. This recalls (Mitchell et al., 2018), where the system verifies facts it has previously hypothesized by searching for corroboration in the corpus.
Automation of the scientific process has been attempted in several domains. Robotic exploration of chemical reactivity has been demonstrated (Granda et al., 2018) using ML techniques. (King et al., 2009) developed a robot scientist that explored geonomics hypotheses about yeast and experimentally tested them using laboratory automation. In biochemistry (Vanlier et al., 2014) used Bayesian methods for optimal experiment design. More generally, the Automated Statistician project (Steinruecken et al., 2019) uses a Bayesian approach to reason about different hypotheses for explaining the data, with the aims of creating interpretable knowledge.
Embodied Question and Answering: The problem studied in this paper is closely related to the embodied visual question-answering problem in (Das et al., 2018). Indeed, our basic formulation is a particular case of the most general formulation of embodied QA, as the agent is rewarded for successfully answering questions about the environment that require interaction. However, the form of the questions is different than those considered in (Das et al., 2018), as they may require drawing a conclusion about the dynamics of the environment, rather than a static property. Even the questions about static properties we are interested in have a different flavor, as they encode rules, rather than statements about the current configuration. Our approach is built around hypothesis-conclusion structure special to these questions.
There is also a large body of work on (non-embodied) visual QA (Kafle & Kanan, 2017; Wu et al., 2016a) and text-based QA (Rajpurkar et al., 2018). From this, most relevant to our work is (Wu et al., 2016b) who use a structured knowledge base to augment standard statistical QA techniques.
Language grounding: Our approach requires us to solve the language grounding problem, albeit in a simplified form due to templated language/limited vocabulary. Most other works such as (Chaplot et al., 2018; Anderson et al., 2018; Tellex et al., 2011) are focused on instruction following in known or unknown environments.
Learning to experiment: Recent works have studied training agents to interact with an environment to draw conclusions about its dynamics (Denil et al., 2016) or elucidate its causal structure (Dasgupta et al., 2019). Our work is similar to these (especially (Denil et al., 2016) which uses reinforcement learning on sequences of observations) in that the agent gets reward for answering questions that require experimentation with the environment. However, in those works, the “question” in each environment is the same; and thus while learning to interact led to higher answer accuracy, random experimental policies could still find correct answers. On the other hand, in this work, the space of questions possible for any given environment is combinatorial, and random experimentation (and indeed vanilla reinforcement learning) is insufficient to answer questions.
3 PROBLEM
3.1 THE HYPOTHESIS VERIFICATION PROBLEM
Here we formally introduce the problem of hypothesis verification as a Partially Observable Markov Decision Process (POMDP).
The agent is spawned in an environment W ∈ W defined by the “rules” of the particular instance W out of all possible worlds W. For instance, in a crafting world, W will be defined as a set of rules for what items can be crafted from which other items, and this ruleset will be different from other environments in W. Given the environment W, the agent is given a hypothesis h to test, which relates to the rules of the world instance. By construction, h is either true or false. The agent can take actions a ∈ A (for example, move left, move right, craft, etc.), including two special actions ansT and ansF. The goal of the agent is to correctly identify the hypothesis h as true or false and take the corresponding answering action. At the end of the episode, the agent is told whether h was true or false.
In our experiments, we set the probability of h being true at 0.5, and construct the environments such that it is not obvious from time t = 0 whether the hypothesis is true or not. The agent must therefore learn a hypothesis-conditioned policy π(s, h) : (S,H)→ A that gathers enough information to determine whether h is true.
Because we also have access to the ground truth for whether the hypothesis is true, we can train a network with supervised learning to predict true and false. Our prediction network f(st, st−1, ...st−N , h) : (SN ,H) → hpred takes in the last N observations of the environment and the hypothesis and predicts whether or not the hypothesis is true. The special ans action replaces the earlier ansT and ansF , and the prediction network is used to decide whether the agent answers true or false.
This addition of a supervised component is not strictly necessary for the definition of the problem. However, this framing allows for the use of supervised learning for the actual ground truth prediction, which is known to be an easier problem than the indirect optimization of RL. Empirically, this change makes the problem much more tractable.
To train the policy network π, we define a reward function that allows standard RL training of the policy. In essence, we give the agent a positive reward at the end of an episode if it correctly guesses the truth value of the hypothesis h.
R_{\text{ans}} = \begin{cases} +C & \text{if } a = \text{ans} \text{ and } h_{\text{pred}} = h_{\text{gt}} \\ -C & \text{if } a = \text{ans} \text{ and } h_{\text{pred}} \neq h_{\text{gt}} \\ 0 & \text{otherwise} \end{cases}
where C ∈ R+ is some constant reward value and hgt is the ground truth value of the hypothesis. Note that any particular choice of W forms an MDP if it were to be played repeatedly; but with the dynamics depending on the ruleset, the state is not fully observed, and naive RL is not applicable. As is standard in these situations, we use models that take as input a sequence of observations.
This dual optimization of policy and hypothesis prediction makes hypothesis verification a quite challenging problem! In order to be able to tell whether a hypothesis is true or not, we need to take the correct sequence of actions related to the hypothesis. But in order to know that a particular sequence of actions was the right one to do, we need to be able to correctly predict the hypothesis to know that
we should have a positive reward for that sequence. Guessing with no information gives an average reward of 0, and until the agent learns a good predictor it has no signal to guide the policy to do the right thing. We find that an RL baseline finds it almost impossible to solve the task as it can neither learn the right policy nor the right predictor to verify the hypothesis.
3.2 ENVIRONMENTS
We create three games in order to test the problem of hypothesis verification: Color Switch, Pushblock, and Crafting. See Figure 1. Each instantiation of an environment comes with a hypothesis for the agent to verify. The hypotheses are generated along with the environment using a set of templates associated with the game (see Appendix A). For each spawn of the environment, the locations of the agent, all items and entities, the given hypothesis to verify, and the underlying logic of the world are randomized. This prevents the agent from learning the truth of the hypothesis by simply guessing without interacting with the world.
In the Color Switch environment, the agent is placed in a world with one or more color switches which are randomly either “on” or “off” and a door which is either open or closed. The agent is able to move and toggle the switch positions. One of the switches in the environment, when in the correct position (which can be either on or off), will cause the door to open. The other switch has no effect. Hypotheses in this environment relate to the color and position of switches and how that opens or closes the door.
In the Pushblock environment, the agent is placed in a world with a block which can be pushed by the agent, and a door. The agent can move and push on the block. The door opens when the block is in a particular part of the grid: “up” – top two rows, “down” – bottom two rows, “left” – leftmost two rows, “right” – rightmost two rows. The hypotheses in this environment relate to the position of the pushblock and how that affects the door.
Finally, in the Crafting environment, the agent is placed in a world with crafting rules similar to that of the popular Minecraft game. The agent is spawned along with a number of crafting items, and a crafting location. The agent is able to move, pick up items into its inventory and use the crafting location using special crafting actions. There is some true “recipe” which produces some new item in the agent’s inventory.
Items are randomly generated in a 5 by 5 grid. The world observation is given by a 1-hot vector of each possible item in the world at each grid location and another 1-hot vector for each item and whether it is in the agent’s inventory. The hypothesis is encoded as a sequence of tokens. As we describe in Section 3.1, the (sparse) reward function for these environments is C = 10 if the agent takes the special ans action and correctly verifies the hypothesis as true or false, and −10 if it incorrectly guesses.
3.3 HYPOTHESIS CONSTRUCTION
In the following sections, we discuss different types of hypotheses about the environment in order of increasing complexity.
3.3.1 TRIPLET HYPOTHESES
In the first case, we consider hypotheses that have the following “triplet” form.
(pre-condition, action sequence) =⇒ post-condition
The idea here is that we want to explicitly form the hypothesis as a logical statement. When the pre-condition is true, and the action sequence is performed, the post-condition will be true.
To generate our triplet hypotheses, we: (1) randomly select a pre-condition template from a set list; (2) randomly select an action template; (3) randomly select a post-condition template; and (4) fill in any entities in the final template
So for example, for the Color Switch environment we might draw “if the COLOR switch is ON_OFF_SWITCHSTATE, NULL, the door will open” and then draw “blue” for COLOR and “on” for ON_OFF_SWITCHSTATE, giving us the final template: “if the blue switch is on the door will open.”
In Appendix A, we show the possible templates for each of the triplets and the possible values for all of the entities for our three environments.
3.3.2 GENERAL TEMPLATE CONSTRUCTION
In the more general case, instead of drawing a template from the triplet form, we instead draw a single template for the hypothesis and fill in the values. For instance, in pushblock we might draw: “the door can only be opened when the pushblock is PUSHBLOCK_POSITION” and then draw “left” for PUSHBLOCK_POSITION. These templates are more general than the triplet ones in that they need not hold to the strict triplet form, and we have no explicit labels for pre-condition, action sequence and post-condition.
3.3.3 SPECIAL CASE TEMPLATES
Finally, we can also draw some more difficult and general hypothesis templates. Some of these cannot be neatly fit into a triplet format by rewording, and some may not fully describe the rules of the world. Some examples of these harder templates are: (1) Negating effects (e.g. door is not open); (2) Negating conditions (e.g. switch is not on); and (3) Independence (e.g. door independent of blue switch). See Appendix A for all of the possible templates for an environment and further details.
4 METHODOLOGY
4.1 RL BASELINE
The conceptually simplest approach to solving the problem is to give an RL agent a sequence of N observations of the form (oi, h), where h is the hypothesis about the environment, and oi is the observation. As long as N is large enough, a standard RL algorithm has the capacity to solve the problem.
Thus, we design our policy network π(s, h) to decide the action. We also use the simplification described in Section 3.1 and create another network to predict the hypothesis ground truth value trained using supervised learning. The specifics of the networks are further described in Section 4.3 and hyper-parameters are described in the Appendix.
4.2 TRIPLET POLICY PRETRAINING
Rather than try to rely on general RL methods, we use the special structure of many hypotheses. As we discussed in Section 3.3.1, many hypotheses naturally take the form of a triplet: (pre-condition, action sequence, post-condition). While not all hypotheses fit into this format, the hope is that the policy we learn is close enough to ground truth, that we can later generalize to other kinds of hypotheses.
We can use this to construct a reward function. We know that to verify these kinds of statements, we need to take actions which alter the truth of the pre-condition and post-condition. If we modify the pre-condition and take the action, then, if the statement is true, the post-condition should toggle from false to true in the environment. Similarly, if the post-condition changes but the pre-condition did not, we know that the statement must be false.
Thus we construct the following reward function to encourage our agents to toggle the pre-conditions and post-conditions:
$$R_{pre} = \begin{cases} +C & \text{if } a = \text{ans and the pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}$$

$$R_{ppost} = \begin{cases} +C & \text{if } a = \text{ans and both the post- and pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}$$
These rewards encourage the policy to change the pre-condition and post-condition (via the pre-condition) in the last N frames of the episode, so that a predictor looking at the last N frames of observations will be able to deduce the truth value of the hypothesis.
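A minimal sketch of these proxy rewards is given below, assuming the environment exposes booleans that track whether the pre- and post-conditions toggled within the last N frames; the helper names and tracking logic are illustrative rather than our exact implementation.

```python
C = 10  # reward magnitude, matching the answer reward used in our environments

def proxy_rewards(action, pre_changed_recently, post_changed_recently):
    """Return (R_pre, R_ppost) for the current step.

    pre_changed_recently / post_changed_recently: did the pre-/post-condition
    toggle within the last N frames (tracked by an environment wrapper)?
    """
    if action != "ans":
        return 0, 0
    r_pre = C if pre_changed_recently else 0
    # The post-condition reward also requires the pre-condition change, so the
    # toggle can be attributed to the hypothesised rule rather than to chance.
    r_post = C if (pre_changed_recently and post_changed_recently) else 0
    return r_pre, r_post
```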
Once we have trained the policy function with this proxy reward, we can then train the prediction network and even finetune our policy network on the final reward.
4.3 NETWORK ARCHITECTURE
Although other works such as Chaplot et al. (2018) have investigated language-conditioned RL (usually in the form of instruction following), our hypothesis conditioned problem proved to be quite challenging, and required some novelty in network architectures.
For the policy networks, standard architectures were not effective for our problem. The key seems to be that it is difficult to condition actions on language without explicit interaction between the language and non-language components. In particular, of all the network architectures we experimented with, an explicit attention network using the language as the key input was by far the most effective. The hypothesis is fed into a seq2vec model and used as the key to a dot-product attention mechanism. The state of the environment (the grid locations of the items in the world and the inventory of the agent), after being passed through a one-layer network, is fed as input to N parallel MLPs. The outputs of the MLPs are fed as the values into the attention mechanism. The output of the module is then fed into the final hidden layer of the actor-critic network.
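Below is a minimal PyTorch sketch of this attention module. It assumes a GRU as the seq2vec encoder and illustrative dimensions; it is a simplified stand-in for the implementation detailed in Appendix C, not a faithful reproduction of it.

```python
import torch
import torch.nn as nn

class LanguageKeyedPolicyTrunk(nn.Module):
    """Hypothesis embedding is the attention key; parallel MLPs over the
    encoded state provide the values. Output feeds the actor-critic head."""
    def __init__(self, vocab_size, obs_dim, hidden_dim=128, n_mlps=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.seq2vec = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.state_enc = nn.Linear(obs_dim, hidden_dim)
        self.value_mlps = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(n_mlps)])

    def forward(self, obs, hypothesis_tokens):
        _, key = self.seq2vec(self.embed(hypothesis_tokens))   # key: (1, B, H)
        key = key.squeeze(0)                                    # (B, H)
        state = torch.relu(self.state_enc(obs))                 # (B, H)
        values = torch.stack([mlp(state) for mlp in self.value_mlps], dim=1)  # (B, M, H)
        # Dot-product attention with the language embedding as the key.
        scores = torch.softmax((values @ key.unsqueeze(-1)).squeeze(-1)
                               / key.shape[-1] ** 0.5, dim=1)   # (B, M)
        return (scores.unsqueeze(-1) * values).sum(dim=1)       # (B, H)
```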
For the prediction network, we use the popular transformer architecture (Vaswani et al., 2017). Our prediction network encodes both the hypothesis and the past observations (after they are passed through a one-layer network) using transformer encoders. These sequences are then combined using a transformer to generate a final hidden state, which is fed to a final prediction layer and sigmoid function to get our binary prediction.
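A corresponding sketch of the prediction network is shown below, again with illustrative sizes; positional encodings, padding masks and other details of the real model (Appendix C) are omitted.

```python
import torch
import torch.nn as nn

class HypothesisPredictor(nn.Module):
    """Transformer encoders over the hypothesis tokens and the last N observation
    embeddings, combined by a further encoder into a single binary prediction."""
    def __init__(self, vocab_size, obs_dim, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.obs_proj = nn.Linear(obs_dim, d_model)  # the "one layer network"
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.hyp_enc = nn.TransformerEncoder(layer, n_layers)
        self.obs_enc = nn.TransformerEncoder(layer, n_layers)
        self.joint_enc = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, obs_seq, hypothesis_tokens):
        h = self.hyp_enc(self.token_embed(hypothesis_tokens))  # (B, T, d)
        o = self.obs_enc(self.obs_proj(obs_seq))                # (B, N, d)
        joint = self.joint_enc(torch.cat([h, o], dim=1))        # (B, T+N, d)
        return torch.sigmoid(self.head(joint.mean(dim=1)))      # (B, 1): P(h is true)
```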
In Figure 5, we provide ablation analysis for both of our neural network architectures. See Appendix C for more network details and hyperparameters and network diagrams.
5 EXPERIMENTS
First, we train our policy networks using the pretraining proxy rewards from Section 4.2. We find that pretraining with just the pre-condition reward leads to better results for the Color Switch environment, and use both rewards for the other two environments. Figure 2 shows these results.
Next, we train our network on the final prediction reward and train our prediction networks. We train two different versions of this. For one, we only train the prediction network and keep our policy network fixed. For the other, we train both the prediction network and finetune the policy network.
During this final training stage, we relax our triplet-form constraint and train on both the triplet-templated hypotheses we saw during pretraining and new hypothesis templates not seen during pretraining. We sample seen versus new templates with equal probability. See Section 3.3 and Appendix A for examples of the kinds of hypotheses we see during this phase of training. Note that this includes hypotheses which break the triplet format.
Figure 3 and the left of Figure 4 show our final hypothesis verification results. We show the max over five seeds for each of the methods shown. We also break down the final hypothesis prediction accuracy for our methods in Table 1, showing their success on the triplet hypotheses (which our methods were pretrained on) and on the non-triplet hypotheses (which they were not).
RL baseline We can first see clearly that the RL baselines fail. This is because the agent is unlikely to take the right actions for verification, and therefore cannot train the prediction net properly. Because the average reward for answering is 0 if the agent cannot predict correctly, it does not even bother answering the question in most games, which is why this baseline scores below 50%.
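Concretely, with the symmetric ±C answer reward and a chance-level predictor, the expected return for taking the ans action is 0.5 · (+C) + 0.5 · (−C) = 0, so the agent gains nothing in expectation by answering at all.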
Other baselines We also include two other simple baselines, “no act” and “random act.” The no act baseline simply takes the ans action at t = 0, and the prediction network attempts to predict the hypothesis from just the first observation. This fails because the agent needs to take actions in the environment to be able to predict the hypothesis accurately. For random act, we simply have the policy take random actions. This similarly fails, as random actions are extremely unlikely to behave in a way that allows for the verification of the hypothesis.
5.1 TRIPLET POLICIES CAN SUCCEED AND GENERALIZE
On the other hand, we see that RL is able to train on the triplet tasks after pre-training. While it is not surprising that densifying the reward in this way makes the RL easier, in our view it is important that it works, as it paves the way towards hypothesis-verifying agents. That is, we are interested in scalable methods that can use statistical ML to interact with a complex environment. Given the more general success of deep RL, the fact that the problem becomes approachable with reasonable reward shaping gives us hope that we will be able to get beyond the regime of classical AI methods.
Moreover, in Pushblock and Color Switch, even with the policy learned from the triplet pre/post reward, the agent is able to generalize and perform well on templates not seen in the pre-training phase, as we can see in Table 1. This includes generalizing to difficult templates such as negations
and “independence” hypotheses. Note that the prediction network that verifies the hypotheses given the trajectory from the policy still needs to fine-tune on the new templates.
It is worth noting that although finetuning can do well with a few random seeds, these methods are high variance. We show and discuss this more clearly in the appendix. In Figure 7 we show that the variance of our method is high. In Appendix E we propose a training methodology that filters out the bad random seeds by using the triplet hypotheses as a validation set, and in Appendix J we show that these results are consistent when we increase the number of random seeds to 25.
5.2 TRIPLET POLICIES CAN ADAPT
On the crafting task, to do well on the unseen templates, the policy also needs to be fine-tuned. In our view, the fact that this fine-tuning can succeed is more important than the generalization in the simpler tasks, as it demonstrates a path towards agents that can verify complex statements by establishing a curriculum of simpler ones.
In the right of Figure 4, we show a visualization of a sample run of the finetuned policy and predictor on crafting. We see that the policy does what we expect, picks up the correct item and moves to the crafting table to craft. It crafts a different item than it expected (bed instead of torch) and it answers false. Looking at the prediction net over time, we see that it at first predicts false then true before it does the craft action. Once it has crafted the bed, however, it answers correctly.
We conduct additional experiments in the Appendix. In Appendix G, we further analyse the problem by experimenting with an oracle hypothesis predictor. In Appendix F we experiment with different pretraining reward functions. In Appendix H we look at training the baselines for longer. And in Appendix I, we look at whether giving the baselines more past frames N improves performance.
In Figure 5 we see the results of our network architecture ablation. As we can see, our new policy architecture described in Section 4.3 clearly outperforms a standard MLP policy network on the language-conditioned pretraining task. We also see that the transformer architecture outperforms the LSTM and MLP models on the final task when we hold the policy network constant.
6 DISCUSSION
In this work, we propose a tractable formulation of the problem of training agents that can interact with an environment to test hypotheses about it. We show that generic RL techniques struggle with the problem, but by using its structure, we are able to develop a method that works in simple environments. Specifically, we use the fact that many hypotheses can be broken into triples of the form of (pre-condition, action sequence, post-condition); but we also show that once pre-trained using this factorization, agents can be fine-tuned to verifying more general hypotheses.
A TEMPLATES
A.1 WORLD AND HYPOTHESIS CONSTRUCTION
Returning again to our notation from Section 3.1, the environment at each spawn needs to construct a world W out of all possible worlds $\mathcal{W}$, and a hypothesis h that is either true or false in that world. W in particular describes the rules of how the environment works (i.e. which switch opens the door), which in our case can be described precisely by a hypothesis. So given a true hypothesis, we can exactly describe the rules of the world. Therefore, in order to create an instance of a possible W in $\mathcal{W}$, we can instead draw a true hypothesis about the world at random. From the hypothesis, we can then construct the rules that determine how objects in the world behave. Note that there are a couple of exceptions to this for our harder hypotheses, where the hypothesis can be true but only partially describes all the rules of W. For these cases, we draw yet another template which is consistent with the hypothesis and use that to construct the rules, such as deciding which color switch really opens the door.
Because we have to randomly give either a true or false hypothesis, we also need to be able to generate a false hypothesis for the world. So for every instance, we also draw a random false hypothesis. Now, given a true and false hypothesis, we can fully generate the world and all the items that appear in either statement. So for instance, if the true hypothesis mentions a green switch and the false one mentions a blue switch, we generate both a green and blue switch. Then, we can set the rules such that the right thing happens. So in this example, switching the green switch opens the door and the blue switch does nothing.
The final step is then to randomly choose either the true or false statement as the “visible” hypothesis which is passed to our agent to verify. Because we generate the world and spawn the items before we make this choice, we ensure that we do not accidentally give away the truth of the hypothesis based on what items spawned.
Our process for generating a new spawn of environment can thus be summarized as follows:
1. We randomly generate a true hypothesis.
2. We randomly generate a false hypothesis.
3. We construct a ruleset from the true hypothesis.
4. We spawn the agent and the items in the world described in both the true and false hypotheses.
5. We randomly choose either the true or false hypothesis as the “visible” hypothesis that the agent must verify (see the sketch after this list).
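The sketch below summarizes this spawn procedure. The four callables are hypothetical hooks standing in for the environment-specific logic; only the overall control flow reflects the steps above.

```python
import random

def spawn_instance(sample_true_hypothesis, sample_false_hypothesis,
                   build_ruleset, spawn_items):
    true_h = sample_true_hypothesis()                # step 1
    false_h = sample_false_hypothesis(true_h)        # step 2: must be false under the ruleset
    rules = build_ruleset(true_h)                    # step 3
    world = spawn_items(true_h, false_h)             # step 4: items from *both* hypotheses
    visible_h = random.choice([true_h, false_h])     # step 5: agent never knows which it got
    ground_truth = (visible_h == true_h)
    return world, rules, visible_h, ground_truth
```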
Color Switch:
Pre-condition:
if the COLOR switch is ON_OFF_SWITCHSTATE
when the COLOR switch is in the ON_OFF_SWITCHSTATE position
the COLOR switch is ON_OFF_SWITCHSTATE
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
Finetune templates:
the door can only be opened by switching the COLOR switch to ON_OFF_SWITCHSTATE
when we see the COLOR switch is ON_OFF_SWITCHSTATE the door must be open
if the COLOR switch turns ON_OFF_SWITCHSTATE the door opens
when we see the door open it must be that the COLOR switch is in the ON_OFF_SWITCHSTATE position
those who want to open the door must first switch the COLOR switch ON_OFF_SWITCHSTATE
no password just make the COLOR switch be ON_OFF_SWITCHSTATE to open the door
COLOR switch ON_OFF_SWITCHSTATE implies door is open
only the COLOR switch being ON_OFF_SWITCHSTATE opens the door
the door is open because COLOR switch is in the ON_OFF_SWITCHSTATE position
COLOR switch ON_OFF_SWITCHSTATE equals open door
the COLOR switch opens the door but only when it is ON_OFF_SWITCHSTATE
door is open must mean that COLOR switch is ON_OFF_SWITCHSTATE
an ON_OFF_SWITCHSTATE means the door is open but only if it is COLOR
COLOR controls the door and it opens when it is ON_OFF_SWITCHSTATE
ON_OFF_SWITCHSTATE is the correct position of the COLOR switch and it opens the door
the switch that causes the door to be open when it is ON_OFF_SWITCHSTATE is COLOR
if you see COLOR switch then the door is open
the door is independent of the COLOR switch
if the door is not open then the COLOR switch must be ON_OFF_SWITCHSTATE
if the COLOR switch is not ON_OFF_SWITCHSTATE then the door is open
to make the door not open the COLOR switch must be not ON_OFF_SWITCHSTATE
whether the door is open is completely independent of the COLOR switch
the COLOR switch is what controls the door
a not ON_OFF_SWITCHSTATE COLOR switch opens the door
Template Values:
COLOR: blue red green black
ON_OFF_SWITCHSTATE: on off
Pushblock
Pre-condition:
whenever the pushblock is in the PUSHBLOCK_POSITION
if the pushblock is at the PUSHBLOCK_POSITION
the pushblock is at the PUSHBLOCK_POSITION
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
Finetune templates (SP_FULL_TRAIN):
PUSHBLOCK_POSITION is the correct position for the pushblock for the door to open
if the door is open it must be that the pushblock is at the PUSHBLOCK_POSITION
when the door is open it is because the pushblock is in the PUSHBLOCK_POSITION
when the pushblock is at the PUSHBLOCK_POSITION the door is open
pushblock PUSHBLOCK_POSITION means door open
the door can only be opened when the pushblock is PUSHBLOCK_POSITION
if the pushblock is PUSHBLOCK_POSITION it means the door is open
PUSHBLOCK_POSITION pushblock opens the door
open door implies pushblock PUSHBLOCK_POSITION
open door means pushblock PUSHBLOCK_POSITION
door opens when PUSHBLOCK_POSITION is where the pushblock is
PUSHBLOCK_POSITION is the correct position for the pushblock to open the door
the door when the pushblock is PUSHBLOCK_POSITION is open
PUSHBLOCK_POSITION position of the pushblock causes the door to open
door only opens on PUSHBLOCK_POSITION pushblock
door can only open with pushblock being PUSHBLOCK_POSITION
the pushblock being at the PUSHBLOCK_POSITION is completely independent of the door
the pushblock being PUSHBLOCK_POSITION is independent of the door being open
the door state is independent of pushblock PUSHBLOCK_POSITION
PUSHBLOCK_POSITION pushblock and door are independent
Pushblock values:
PUSHBLOCK_POSITION: left right top bottom
Crafting
Pre-condition:
when you are at LOCATION and you have CRAFTING_ITEM
you are at LOCATION and have in your inventory CRAFTING_ITEM
whenever you have a CRAFTING_ITEM and are at LOCATION
Action:
and you do CRAFTING_ACTION
then you CRAFTING_ACTION
Post-condition:
you now have CREATED_ITEM in your inventory
then CREATED_ITEM is created
and this creates CREATED_ITEM
so CREATED_ITEM is created and put in your inventory
then CREATED_ITEM is made
Finetune Templates: to create a CREATED_ITEM you must have CRAFTING_ITEM and go to LOCATION and do the action CRAFTING_ACTION CREATED_ITEM can be created by doing CRAFTING_ACTION at LOCATION when CRAFTING_ITEM is in inventory whenever you do CRAFTING_ACTION and have CRAFTING_ITEM at LOCATION a CREATED_ITEM is made you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will be created whoever does CRAFTING_ACTION at LOCATION with CRAFTING_ITEM gets CREATED_ITEM if you have CRAFTING_ITEM at LOCATION and you CRAFTING_ACTION you get CREATED_ITEM if you do CRAFTING_ACTION at LOCATION with CRAFTING_ITEM you make CREATED_ITEM whenever you have CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION then you make a CREATED_ITEM having CRAFTING_ITEM in your inventory being at LOCATION and doing CRAFTING_ACTION creates CREATED_ITEM CREATED_ITEM can be made with CRAFTING_ITEM when you do CRAFTING_ACTION at LOCATION CRAFTING_ITEM plus LOCATION plus CRAFTING_ACTION equals CREATED_ITEM create a CREATED_ITEM by being at LOCATION with CRAFTING_ITEM and doing CRAFTING_ACTION CRAFTING_ACTION at LOCATION creates CREATED_ITEM but only if you have a CRAFTING_ITEM if you want to make a CREATED_ITEM then go to LOCATION with CRAFTING_ITEM and do CRAFTING_ACTION CRAFTING_ITEM in inventory at LOCATION makes CREATED_ITEM if you do CRAFTING_ACTION CREATED_ITEM when CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION if you are at LOCATION and do CRAFTING_ACTION you make CREATED_ITEM if you are anywhere and do CRAFTING_ACTION with CRAFTING_ITEM you make a CREATED_ITEM having CRAFTING_ITEM at LOCATION and doing CRAFTING_ACTION does not make a CREATED_ITEM CREATED_ITEM is created by being at LOCATION and doing CRAFTING_ACTION make a CREATED_ITEM by having a CRAFTING_ITEM and doing CRAFTING_ACTION
you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will not be created LOCATION plus CRAFTING_ACTION creates a CREATED_ITEM with a CRAFTING_ITEM you can make a CREATED_ITEM by doing CRAFTING_ACTION
Template Values:
CRAFTING_ITEM: iron wood stick pickaxe coal
CREATED_ITEM: torch bed
LOCATION: craftingtable
CRAFTING_ACTION: craft
B LEARNING DETAILS AND HYPERPARAMETERS
One detail of the prediction network is that we need to keep a memory of past state sequences, hypotheses and ground truths so that we can actually train our prediction network. We do this by simply keeping track of the last N times our agent answered a question and keeping these in a FIFO memory. When we update our prediction network, we randomly sample from this pool. This also necessitates a 100k-step break-in period to collect enough examples.
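A minimal sketch of this answer memory is shown below; the class name and interface are illustrative.

```python
import random
from collections import deque

class AnswerMemory:
    """FIFO store of (last-N observations, hypothesis, ground truth) for the most
    recently answered episodes, sampled randomly when updating the predictor."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries are dropped automatically

    def add(self, obs_window, hypothesis, ground_truth):
        self.buffer.append((obs_window, hypothesis, ground_truth))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```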
In our policy finetuning experiments, we also stabilize our dual optimization problem by trading off optimization of the policy network and the prediction network. We must also start with the prediction network so that the reward for answering correctly is meaningful.
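The alternation can be summarized as follows; the two update routines are passed in as callables and stand in for the actual RL and supervised updates.

```python
def alternating_finetune(update_predictor, update_policy, total_phases):
    """Sketch of the trade-off schedule: start with the prediction network so the
    answer reward is meaningful, then alternate between the two networks."""
    train_predictor = True
    for _ in range(total_phases):
        (update_predictor if train_predictor else update_policy)()
        train_predictor = not train_predictor
```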
Our RL implementation was based on Kostrikov (2018).
C NETWORK DETAILS AND HYPERPARAMETERS
C.1 RELATED WORK
Other works such as Chaplot et al. (2018) have incorporated gated mechanisms between language and perception. Manchin et al. (2019) employs self-attention mechanism within convolutional layers and Choi et al. (2017) also employs a self-attention mechanism in a DQN. Neither work incorporates language and the architectures are quite different from each other.
Figure 6 shows the policy and transformer architectures.
C.2 IMPLEMENTATION AND HYPERPARAMETERS
We take much of our implementation of transformers from Rush (2018).
D ADDITIONAL FIGURES
E STAGED RANDOM SEED VALIDATION
In this experiment, we perform a two-stage procedure for evaluating our results. The idea is that we use one set of hypotheses to determine which random seeds are successful and then show results on the larger set of hypotheses.
In the first stage, we train our methods on only the triplet templates (the same ones used during pre-training). We then keep only the seeds that performed well (in these figures we show results for keeping seeds with at least 80% prediction accuracy and with at least 90% accuracy; if a method has no seeds performing high enough, we choose the top 5 for that experiment). We show results on 25 random seeds. We preserve all training and network hyper-parameters.
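The seed-selection rule can be stated in a few lines of code; the function below is an illustrative restatement of the rule, not the script we used.

```python
def select_seeds(seed_accuracies, cutoff=0.8, fallback_top_k=5):
    """Keep seeds whose triplet-template prediction accuracy clears the cutoff;
    if none do, fall back to the top-k seeds for that experiment."""
    kept = [s for s, acc in seed_accuracies.items() if acc >= cutoff]
    if not kept:
        kept = sorted(seed_accuracies, key=seed_accuracies.get, reverse=True)[:fallback_top_k]
    return kept
```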
In Figure 8 we show the first stage of training. We only train these with the triplet templates also seen during pre-training. We give the baseline more time to train to make up for the extra time the other methods got during pretraining. We can see that for all three environments the pretrained methods have at least one good seed for both finetuning and fixed policies. For crafting, we can get a better max seed with finetuning. However, especially in crafting, the variance is quite high, with many seeds doing poorly. The baselines do poorly overall except for a couple seeds in pushblock. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
In Figure 9 and Figure 10, we show the results of the second stage of training. As we discussed, this stage includes the more difficult, non-triplet templates not seen during pre-training and not seen during the first stage of training when we selected the top seeds. We can see that with the pruning of bad seeds, the variance bands for the pre-trained methods are much smaller and more clearly outperform the baselines. We again see that we are able to get the best results from finetuning on crafting. As with stage 1, we see that the RL-only baseline is able to do reasonably well on pushblock, but still not as well as our pre-training methods. We show results for cutoffs at 80% and 90% to make sure we were robust to the choice of cutoff, and we can see very little difference between them.
Figure 11 shows the 90% cutoff experiment again with the mean instead of the max plotted.
F INTRINSIC PRE-TRAINING EXPERIMENTS
In this experiment, we show results on our hypotheses verification problem using different forms of “intrinsic motivation” pre-training. We show results for 4 different pretraining schemes:
1. Change any item state in the world. Receive reward at end.
2. Change any item referenced in the hypothesis. Receive reward at end.
3. Change any item state in the world. Receive reward instantaneously.
4. Change any item referenced in the hypothesis. Receive reward instantaneously.
Reward at the end means that the reward operates similarly to our hypothesis pre-training. Specifically, the agent gets a reward only at the end of the episode, when it has taken a stop action. At that step it gets a +C reward if the item state changed within the last N frames. For these rewards, we choose C = 10.
Instantaneous reward is what it sounds like. When the object state is changed, the reward is instantly received by the agent. We chose C = 1 for colorswitch and pushblock and C = 5 for crafting.
We interpret “item” to mean any object that is not the agent. So this includes crafting items, switches, pushblocks, etc. We show results on 25 random seeds. We preserve all training and network hyper-parameters.
In Figure 15 we show the final accuracies on the hypothesis verification task using the pretrained intrinsic rewards. As before, only the hypothesis predictor and not the policy is trained at this step. In
Figure 16 we show the same results where we finetune the policy as well. All training and network parameters are kept the same from earlier experiments.
We can see that the best results come from the crafting pre-training intrinsics. This makes a lot of sense because changing the state for crafting includes picking up objects and crafting objects, which is what the agent needs to do to verify the hypothesis. On colorswitch, we are able to get reasonable results, at least for the fixed policy. Again, changing the state corresponds to flipping switches, which is also useful for verifying colorswitch hypotheses. For pushblock, nothing performed better than chance. Here, merely changing the state of the object isn’t enough to verify anything. To verify pushblock hypotheses, the state of the pushblock (its position) needs to be changed in a specific way: pushed
into or out of the correct position. The intrinsic change reward does not necessarily cause this, so this did not appear to be sufficient in this case.
G ORACLE HYPOTHESIS PREDICTION
In this experiment, we disentangle the problem somewhat for analysis by running experiments with an “oracle” hypothesis predictor on the Crafting environment. Specifically, in these experiments, we assume that we have an oracle that, given the last N states of the world, returns the ground truth of the hypothesis whenever it is possible to infer its truth value from that sequence of states. This should allow us to analyze the upper bounds of this problem and see where the difficulty lies.
First, we train a RL agent with access to the oracle. So the RL agent must learn its action policy, but when it takes the answer action, it uses the oracle to predict the hypothesis. Therefore, if the actions it has taken can verify the hypothesis, it will automatically answer correctly and get the reward. We show results on 25 random seeds and preserve the hyper-parameters from other experiments.
We show the result of this in Figure 17. We see that the RL is quickly, although not instantly, able to converge to perfect performance. From this we can surmise that if we already know how to predict the hypothesis, it is quite easy to get the reward: we just have to learn the action patterns necessary to make the oracle prediction possible. The RL baseline without pretraining and without the oracle was not able to converge to a good solution. This suggests that the core difficulty is obtaining a good hypothesis predictor in the first place, which then lets us learn the right policy.
Toward that end, we analyze our trained algorithms to see whether the actions they take are capable of verifying the hypothesis. We show the values for the top-accuracy model. We use the same models and seeds whose results we show in Table 1 and Figure 4.
Table 9 shows these results. What we see is that, indeed, the actions taken by the baselines are not able to verify the hypothesis. The baseline RL policy only allows the oracle predictor to predict the hypothesis 3% of the time, giving us an upper bound of 51.5% on hypothesis accuracy. Random action is even worse, only leading to the right state sequence 0.7% of the time. No action (the agent that just tries to answer right away) as expected is never able to get the right sequence. For the pre-trained methods we see that we are able to get to the right states most of the time. The finetuned policy gets to the right states almost 100% of the time. With the fixed policy from pretraining, the oracle can answer 75% of the time, meaning that by guessing on the rest you could theoretically get to about 88%.
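As a check on that figure: if the oracle can answer on 75% of episodes and the agent guesses at chance on the remaining 25%, the expected accuracy is 0.75 · 1 + 0.25 · 0.5 = 0.875, i.e. roughly 88%.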
These experiments suggest that, as we expected, the hard part of this problem is simultaneously learning the policy and the predictor. Once you have the best possible hypothesis prediction, RL can quite easily find the correct policy.
H LONGER TRAINING BASELINES
Because the pre-trained methods had the benefit of more training frames, we run the baselines for more frames to see whether additional training helps the comparison. We keep all the training parameters the same.
In Figure 18 we show the baseline methods on the original 5 seeds trained for the equivalent 1.5e8 steps. In Figure 19 we show 20 additional seeds trained for longer, although not quite to the 1.5e8 steps.
On the original seeds, training for longer has no effect. However, when we train with many more seeds, we find that for pushblock we are able to find a random seed that can get to about 75% accuracy. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
I MORE STATE MEMORY BASELINES
In this experiment, we see if the RL baseline gets any benefit from increasing N , the number of past states it keeps in its observation. We show results for N = 10, 20, 50, 100 keeping all other parameters the same.
We can see that increasing the value of N does not appear to have any effect on the baselines. N = 5 is likely sufficient to see the change in the state of the environment and to allow the agent to know to stop and answer the question.
J ADDITIONAL RANDOM SEEDS
In this experiment, we show the results from previous experiments, but increase the number of random seeds from the original 5 to 25. When we did this, we also ran 25 random seeds for pretraining, so each result incorporating finetuning came from a different pretraining seed. Results are in Figure 21.
Adding more random seeds, we find essentially the same story as with 5 seeds. Finetuned from pretrain is able to get the best single results, but tends to have very high variance. Non-finetuned from pretrain does generally well on everything, except for underperformance on crafting (especially on the new templates). And the baselines still do not do well.
One difference worth noting that in Figure 19, we find that training the RL baseline for longer and given more random seeds, it is able to get one good random seed on pushblock. As we noted there, this is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy.
For additional clarity, we show these same plots again in Figure 22 with the mean plotted instead of max. This shows the high variance a bit more clearly but does not show that we are able to get some good seeds. Appendix E provides a possible solution to this problem by selecting the good random seeds based on a smaller set of hypotheses.
1. What is the focus and contribution of the paper on training agents to verify hypotheses?
2. What are the strengths of the proposed approach, particularly in terms of its ability to handle complex hypotheses?
3. What are the weaknesses of the paper regarding its lack of discussion on related work in causal reasoning and partial observability?
4. How does the reviewer assess the quality and thoroughness of the evaluations conducted in the paper?
5. What are the limitations of the proposed approach regarding exploration behavior by the policy?
6. Can you provide more detailed analyses to tease apart the behavior of the agent?
7. How important is the fact that the reward is given based on the pre and postconditions?
8. Can you provide comparisons with other forms of intrinsic motivation?
9. What is the significance of the choice of pretraining reward, and how does it impact the behavior of the agent?
10. Can you clarify whether Figure 2 shows the proxy rewards or the true rewards?
11. What values do you use for C and N in R_pre and R_ppost?
Review
This paper trains agents which are able to verify hypotheses, such as “the blue switch causes the door to open”. It does this by first pretraining the agent to perform interventions in the environment which change the states of the objects of interest, and then finetuning the agent to actually make a decision about whether the given hypothesis is correct. The paper shows that agents trained using this procedure are able to not only verify the types of hypotheses seen during pre-training, but also learn to verify more complex hypotheses. In contrast, an agent which is trained directly on the hypothesis verification task is unable to learn to do it.
Overall, I enjoyed reading this paper and thought that it provided an interesting take on the question of how to train agents that can appropriately gather information about their environments. However, (1) the paper lacks any discussion of related work in terms of causal reasoning and partial observability, and (2) the experiments and analysis seem weak. I thus am giving a score of “weak reject”, though it is possible I could increase my score if some of my concerns can be addressed.
First, I was very surprised to see that the paper included no discussion at all about either causal reasoning or partial observability. The whole notion of verifying hypotheses—particularly those in the triplet form as presented in the paper—is equivalent to the idea of performing inference about the structure of a causal graph with three variables. The choice of which interventions to perform in order to make these inferences is a well-studied problem [1] and has been recently explored in the context of RL as well [2]. The novelty here seems to be in embedding the problem of causal reasoning in harder credit assignment problems (i.e. longer time horizon), though see [3]. Similarly, the setup of the MDP in the paper is actually a POMDP, where the state includes the truth value of the hypothesis but where observations do not include this information. Yet, there is no mention of POMDPs or discussion of the literature on partial observability in the paper.
Second, I felt that the setup was overly complex in places making it difficult to draw conclusions, that there was a lack of comparisons, and that the analysis was not as in depth as it could have been. For example, why is it necessary to represent the hypothesis with natural language? Why not use a symbolic representation? It seems like including the pseudo-natural language adds unnecessary complexity and makes it difficult to disentangle what about the problem is hard (Understanding the hypothesis? Choosing the right interventions? Parsing the observations correctly?). The utility of having it be closer to language is that you might see generalization between related hypotheses, but this isn’t really something that is actually tested for since all hypotheses are trained on either during pretraining or finetuning.
I also feel like the choice of pretraining reward feels somewhat arbitrary, and it would have been nice to see comparisons to other alternatives (and even better, to other forms of intrinsic motivation). For example, here are a few alternate ways of rewarding the agent that seem intuitively like they could also work:
Reward the agent for changing the state of any of the objects in the environment
Reward the agent for changing the state of any object referenced in the hypothesis
Reward the agent for observing a state of the world it has not seen before (i.e. count-based exploration)
In other words, how important is the fact that the reward is given based on the pre and postconditions?
I thought the paper would benefit from more detailed analyses to tease apart the behavior of the agent. For example, I am curious how many errors are a result of errors in the predictor versus poor exploration behavior by the policy. Could you report (1) how frequently the policy’s behavior results in the right observations necessary to make a decision, and (2) results with a policy which uses an oracle predictor (i.e. which will always report the correct answer, if there was enough data in the last N frames to detect that answer)?
On the more practical side, I also thought the quality of the evaluations was not very thorough. For instance, it looks like the pretraining proceeds for 1e8 steps and finetuning for 5e7 steps, based on the plots (these values should be stated more explicitly in the paper). However, this is a bit of an unfair comparison for the “RL Baseline”, as it is only trained for 5e7 steps while the other agents are trained for 1.5e8 steps. I would like to see a comparison where the RL Baseline agent is trained for 1.5e8 steps as well. Similarly, on the bottom of page 6 the paper says “we show the max out of five for each of the methods shown”. However, only reporting the max value is considered bad practice and can result in misleading comparisons (see Joelle Pineau’s talk on “Reproducible, Reusable, and Robust Reinforcement Learning” at NeurIPS 2018). I’d like to see the data in all figures and tables reported with means or medians across seeds, rather than best seeds.
A few minor comments:
- Please state in the main text which RL algorithm you use.
- Can you clarify whether Figure 2 shows the proxy rewards or the true rewards?
- For R_pre and R_ppost, what values do you use for C and N?
[1] Pearl, J. (2000). Causality: models, reasoning and inference (Vol. 29). Cambridge: MIT press.
[2] Dasgupta, I., Wang, J., Chiappa, S., Mitrovic, J., Ortega, P., Raposo, D., ... & Kurth-Nelson, Z. (2019). Causal reasoning from meta-reinforcement learning. arXiv preprint arXiv:1901.08162.
[3] Denil, M., Agrawal, P., Kulkarni, T. D., Erez, T., Battaglia, P., & de Freitas, N. (2016). Learning to perform physics experiments via deep reinforcement learning. arXiv preprint arXiv:1611.01843.
--
Update after rebuttal:
Thank you very much for your response. However, I do not feel that all of my concerns have been addressed and thus will keep my score as it is. In particular, I still feel the paper lacks sufficient discussion of the literature on causal reasoning. I also do not think it is sufficient to add an appendix with the results across multiple seeds: these results should be in the main paper. I'm not sure I follow the justification that max seeds make sense because "the reward distribution is quite binary in nature"---the plots shown in Figure 3 and 4, for example, span a range of values from 0 to 1. I find the plots that have both variance and max seed very hard to interpret---in some cases the mean is so much lower than the max seed that the variance region doesn't overlap at all. More broadly, it might be easier to compare using bar plots showing final performance, rather than training curves.
I appreciate the additional results, especially with different pretraining schemes---thanks for adding these! I have a bit of hard time interpreting the results though since there are no direct comparisons with the triplet pretraining scheme; it would be helpful if these results could be included in these figures too. |
ICLR
Title
Agent as Scientist: Learning to Verify Hypotheses
Abstract
In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses – they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a “scientist agent” that develops an understanding of the world by generating and testing hypotheses about its environment.
1 INTRODUCTION
In fields of natural sciences (physics, biology etc.), we follow scientific methods – building and testing hypotheses to develop an understanding of the world. Many classical approaches to artificial intelligence attempted to mirror this process (Brachman & Levesque, 2004; Davis & Marcus, 2015), building (symbolic) knowledge representations about the world that allow the making and testing of hypotheses. However, this process bears little resemblance to the way in which current machine learning (ML) systems learn. Both traditional IID and interactive learning settings use a single userspecified objective function that codifies a high-level task, but places no constraint on the underlying knowledge formed about the environment. In standard ML approaches, particularly those based on deep learning, any representation of the world is embedded in the weights of the model, and there is no explicit mechanism for formulating or testing hypotheses.
In this paper we take a modest step towards combining the classical approaches with the successes of modern ML to build a “scientist agent”. When fully realized, such an agent would be able to both make and test hypotheses about its environment. In this work we focus on the latter. Unlike standard supervised problems, there is no standard formulation, and no benchmarks or environments for hypothesis verification in interactive environments. A key contribution of our paper is framing the problem of hypothesis verification and presenting a feasible formulation for it. Specifically, we build an agent that, given a hypothesis about the dynamics of the world, can take actions to verify if the hypothesis is true or not. We formulate hypothesis verification as joint learning of: (a) an action policy that generates observations which are relevant to verification of hypotheses; and (b) a prediction function which uses the observations to predict whether the hypothesis is true or false.
We first show that even in simple environments, agents trained end-to-end using deep reinforcement learning methods cannot learn policies that can generate observations to verify the hypothesis. To remedy this, we exploit the underlying structure of hypotheses – they can often be formulated as a triplet of a pre-condition, an action sequence, and a post-condition that is causally related to the pre-condition and actions. Using this common structure, we are able to seed our action policy to learn behaviors which alter the truth of the pre-condition and post-condition. We show that this policy can be fine-tuned to learn how to verify more general hypotheses that do not necessarily fit into the triplet structure. Thus our approach allows combining the explicit hypothesis testing of classical AI with the use of scalable statistical ML.
See videos and more at: https://sites.google.com/view/scientistagent
2 RELATED WORK
Knowledge representation and reasoning (KRR) (Brachman & Levesque, 2004) is a central theme of traditional AI. Commonsense reasoning (Davis, 1990; Davis & Marcus, 2015; Liu & Singh, 2004) approaches, e.g. CYC (Lenat, 1995), codify everyday knowledge into a schema that permits inference and question answering. However, the underlying operations are logic-based and occur purely within the structured representation, having no mechanism for interaction with an external environment. Expert systems (Giarratano & Riley, 1998) instead focus on narrow domains of knowledge, but are similarly self-contained. Logic-based planning methods (Fikes & Nilsson, 1971; Colaco & Sridharan, 2015) generate abstract plans that could be regarded as action sequences for an agent. By contrast, our approach is statistical in nature, relying on Reinforcement Learning (RL) to guide the agent.
Our approach builds on the recent interest (Mao et al., 2019; Garcez et al., 2012) in neural-symbolic approaches that combine neural networks with symbolic representations. In particular, some recent works (Zhang & Stone, 2015; Lu et al., 2018) have attempted to combine RL with KRR, for tasks such as navigation and dialogue. These take the world dynamics learned by RL and make them usable in declarative form within the knowledge base, which is then used to improve the underlying RL policy. In contrast, in our approach, the role of RL is to verify a formal statement about the environment. Our work also shares some similarity with Konidaris et al. (2018), where ML methods are used to learn mappings from environment states to representations a planner can use.
Cognitive Development: Empirical research on early learning (Gopnik, 2012; Kushnir & Gopnik, 2005) has shown that infants build an understanding of the world around them in ways that parallel the scientific process: constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play. Through this process the child builds up an abstract consistent causal understanding of the world. Violations of this understanding elicit surprise that can be measured by researchers (Spelke et al., 1992).
Automated Knowledge Base completion: This work is also related to knowledge base completion (Fader et al., 2011; Bordes et al., 2013; Suchanek et al., 2007), and especially as formulated in (Riedel et al., 2013). Instead of using other facts in the knowledge base or a text corpus to predict edges in the KB, here the agent needs to act in an environment and observe the results of those actions. This recalls (Mitchell et al., 2018), where the system verifies facts it has previously hypothesized by searching for corroboration in the corpus.
Automation of the scientific process has been attempted in several domains. Robotic exploration of chemical reactivity has been demonstrated (Granda et al., 2018) using ML techniques. (King et al., 2009) developed a robot scientist that explored geonomics hypotheses about yeast and experimentally tested them using laboratory automation. In biochemistry (Vanlier et al., 2014) used Bayesian methods for optimal experiment design. More generally, the Automated Statistician project (Steinruecken et al., 2019) uses a Bayesian approach to reason about different hypotheses for explaining the data, with the aims of creating interpretable knowledge.
Embodied Question and Answering: The problem studied in this paper is closely related to the embodied visual question-answering problem in (Das et al., 2018). Indeed, our basic formulation is a particular case of the most general formulation of embodied QA, as the agent is rewarded for successfully answering questions about the environment that require interaction. However, the form of the questions is different than those considered in (Das et al., 2018), as they may require drawing a conclusion about the dynamics of the environment, rather than a static property. Even the questions about static properties we are interested in have a different flavor, as they encode rules, rather than statements about the current configuration. Our approach is built around hypothesis-conclusion structure special to these questions.
There is also a large body of work on (non-embodied) visual QA (Kafle & Kanan, 2017; Wu et al., 2016a) and text-based QA (Rajpurkar et al., 2018). From this, most relevant to our work is (Wu et al., 2016b) who use a structured knowledge base to augment standard statistical QA techniques.
Language grounding: Our approach requires us to solve the language grounding problem, albeit in a simplified form due to templated language/limited vocabulary. Most other works such as (Chaplot et al., 2018; Anderson et al., 2018; Tellex et al., 2011) are focused on instruction following in known or unknown environments.
Learning to experiment: Recent works have studied training agents to interact with an environment to draw conclusions about its dynamics (Denil et al., 2016) or elucidate its causal structure (Dasgupta et al., 2019). Our work is similar to these (especially (Denil et al., 2016) which uses reinforcement learning on sequences of observations) in that the agent gets reward for answering questions that require experimentation with the environment. However, in those works, the “question” in each environment is the same; and thus while learning to interact led to higher answer accuracy, random experimental policies could still find correct answers. On the other hand, in this work, the space of questions possible for any given environment is combinatorial, and random experimentation (and indeed vanilla reinforcement learning) is insufficient to answer questions.
3 PROBLEM
3.1 THE HYPOTHESIS VERIFICATION PROBLEM
Here we formally introduce the problem of hypothesis verification as a Partially Observable Markov Decision Process (POMDP).
The agent is spawned in an environment $W \in \mathcal{W}$ defined by the “rules” of the particular instance W, drawn from the set of all possible worlds $\mathcal{W}$. For instance, in a crafting world, W will be defined as a set of rules for which items can be crafted from which other items, and this ruleset will differ from those of other environments in $\mathcal{W}$. Given the environment W, the agent is given a hypothesis h to test, which relates to the rules of the world instance. By construction, h is either true or false. The agent can take actions $a \in \mathcal{A}$ (for example, move left, move right, craft, etc.), including two special actions ansT and ansF. The goal of the agent is to correctly identify the hypothesis h as true or false and take the corresponding answering action. At the end of the episode, the agent is told whether h was true or false.
In our experiments, we set the probability of h being true at 0.5, and construct the environments such that it is not obvious from time t = 0 whether the hypothesis is true or not. The agent must therefore learn a hypothesis-conditioned policy $\pi(s, h) : (\mathcal{S}, \mathcal{H}) \rightarrow \mathcal{A}$ such that, after acting, the agent has enough information to know whether h is true.
Because we also have access to the ground truth for whether the hypothesis is true, we can train a network with supervised learning to predict true and false. Our prediction network $f(s_t, s_{t-1}, \ldots, s_{t-N}, h) : (\mathcal{S}^N, \mathcal{H}) \rightarrow h_{pred}$ takes in the last N observations of the environment and the hypothesis and predicts whether or not the hypothesis is true. The special ans action replaces the earlier ansT and ansF, and the prediction network is used to decide whether the agent answers true or false.
This addition of a supervised component is not strictly necessary for the definition of the problem. However, this framing allows for the use of supervised learning for the actual ground truth prediction, which is known to be an easier problem than the indirect optimization of RL. Empirically, this change makes the problem much more tractable.
Now, to train the policy network π, we can now define a reward function to allow for standard RL training of the policy. In essence, we give the agent a positive reward at the end of an episode if it correctly guesses the correct truth value of the hypothesis h.
$$R_{ans} = \begin{cases} +C & \text{if } a = \text{ans and } h_{pred} = h_{gt} \\ -C & \text{if } a = \text{ans and } h_{pred} \neq h_{gt} \\ 0 & \text{otherwise} \end{cases}$$
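A minimal sketch of this answer reward is shown below; the function signature is illustrative, with the predicted truth value supplied by the prediction network described above.

```python
C = 10  # reward magnitude used in our environments

def answer_reward(action, predicted_truth, ground_truth):
    """The agent is only rewarded or penalised when it takes the special ans action."""
    if action != "ans":
        return 0
    return C if predicted_truth == ground_truth else -C
```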
where C ∈ R+ is some constant reward value and hgt is the ground truth value of the hypothesis. Note that any particular choice of W forms an MDP if it were to be played repeatedly; but with the dynamics depending on the ruleset, the state is not fully observed, and naive RL is not applicable. As is standard in these situations, we use models that take as input a sequence of observations.
This dual optimization of policy and hypothesis prediction makes hypothesis verification a quite challenging problem! In order to be able to tell whether a hypothesis is true or not, we need to take the correct sequence of actions related to the hypothesis. But in order to know that a particular sequence of actions was the right one to do, we need to be able to correctly predict the hypothesis to know that
we should have a positive reward for that sequence. Guessing with no information gives an average reward of 0, and until the agent learns a good predictor it has no signal to guide the policy to do the right thing. We find that an RL baseline finds it almost impossible to solve the task, as it can neither learn the right policy nor the right predictor to verify the hypothesis.
3.2 ENVIRONMENTS
We create three games in order to test the problem of hypothesis verification: Color Switch, Pushblock, and Crafting. See Figure 1. Each instantiation of an environment comes with a hypothesis for the agent to verify. The hypotheses are generated along with the environment using a set of templates associated to the game (see Appendix A). For each spawn of the environment, the locations of the agent, all items and entities, the given hypothesis to verify, as well as the underlying logic of the world is randomized. This prevents the agent from learning the truth of the hypothesis by simply guessing without interacting with the world.
In the Color Switch environment, the agent is placed in a world with one or more color switches which are randomly either “on” or “off” and a door which is either open or closed. The agent is able to move and toggle the switch positions. One of the switches in the environment, when in the correct position (can be either on or off) will cause the door to open. The other switch has no effect. Hypotheses in this environment relate to the color and position of switches and how that opens or closes the door.
In the Pushblock environment, the agent is placed in a world with a block which can be pushed by the agent, and a door. The agent can move and push on the block. The door opens when the block is in a particular part of the grid: “up” – top two rows, “down” – bottom two rows, “left” – leftmost two rows, “right” – rightmost two rows. The hypotheses in this environment relate to the position of the pushblock and how that affects the door.
Finally, in the Crafting environment, the agent is placed in a world with crafting rules similar to that of the popular Minecraft game. The agent is spawned along with a number of crafting items, and a crafting location. The agent is able to move, pick up items into its inventory and use the crafting location using special crafting actions. There is some true “recipe” which produces some new item in the agent’s inventory.
Items are randomly generated in a 5 by 5 grid. The world observation is given by a 1-hot vector of each possible item in the world at each grid location and another 1-hot vector for each item and whether it is in the agent’s inventory. The hypothesis is encoded as a sequence of tokens. As we describe in Section 3.1, the (sparse) reward function for these environments is C = 10 if the agent takes the special ans action and correctly verifies the hypothesis as true or false, and −10 if it incorrectly guesses.
3.3 HYPOTHESIS CONSTRUCTION
In the following sections, we discuss different types of hypotheses about the environment in order of increasing complexity.
3.3.1 TRIPLET HYPOTHESES
In the first case, we consider hypotheses that have the following “triplet” form.
(pre-condition, action sequence) =⇒ post-condition
The idea here is that we want to explicitly form the hypothesis as a logical statement. When the pre-condition is true, and the action sequence is performed, the post-condition will be true.
To generate our triplet hypotheses, we: (1) randomly select a pre-condition template from a set list; (2) randomly select an action template; (3) randomly select a post-condition template; and (4) fill in any entities in the final template.
So for example, for the Color Switch environment we might draw “if the COLOR switch is ON_OFF_SWITCHSTATE, NULL, the door will open” and then draw “blue” for COLOR and “on” for ON_OFF_SWITCHSTATE, giving us the final template: “if the blue switch is on the door will open.”
In Appendix A, we show the possible templates for each of the triplets and the possible values for all of the entities for our three environments.
3.3.2 GENERAL TEMPLATE CONSTRUCTION
In the more general case, instead of drawing a template from the triplet form, we instead draw a single template for the hypothesis and fill in the values. For instance, in pushblock we might draw: “the door can only be opened when the pushblock is PUSHBLOCK_POSITION” and then draw “left” for PUSHBLOCK_POSITION. These templates are more general than the triplet ones in that they need not hold to the strict triplet form, and we have no explicit labels for pre-condition, action sequence and post-condition.
3.3.3 SPECIAL CASE TEMPLATES
Finally, we can also draw some more difficult and general hypothesis templates. Some of these cannot be neatly fit into a triplet format by rewording, and some may not fully describe the rules of the world. Some examples of these harder templates are: (1) Negating effects (e.g. door is not open); (2) Negating conditions (e.g. switch is not on); and (3) Independence (e.g. door independent of blue switch). See Appendix A for all of the possible templates for an environment and further details.
4 METHODOLOGY
4.1 RL BASELINE
The conceptually simplest approach to solving the problem is to give an RL agent a sequence of N observations of the form (o_i, h), where h is the hypothesis about the environment and o_i is an observation. As long as N is large enough, a standard RL algorithm has the capacity to solve the problem.
Thus, we design our policy network π(s, h) to decide the action. We also use the simplification described in Section 3.1 and create another network to predict the hypothesis ground truth value trained using supervised learning. The specifics of the networks are further described in Section 4.3 and hyper-parameters are described in the Appendix.
4.2 TRIPLET POLICY PRETRAINING
Rather than try to rely on general RL methods, we use the special structure of many hypotheses. As we discussed in Section 3.3.1, many hypotheses naturally take the form of a triplet: (pre-condition, action sequence, post-condition). While not all hypotheses fit into this format, the hope is that the policy we learn is close enough to ground truth, that we can later generalize to other kinds of hypotheses.
We can use this to construct a reward function. We know that to verify these kinds of statements, we need to take actions which alter the truth of the pre-condition and post-condition. If we modify the pre-condition and take the action, then, if the statement is true, the post-condition should toggle from false to true in the environment. Similarly, if the post-condition changes but the pre-condition did not change, we know that the statement must be false.
Thus we construct the following reward function to encourage our agents to toggle the pre-conditions and post-conditions:
$$R_{\text{pre}} = \begin{cases} +C & \text{if } a = \texttt{ans} \text{ and the pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}$$

$$R_{\text{post}} = \begin{cases} +C & \text{if } a = \texttt{ans} \text{ and both the post- and pre-condition changed in the last } N \text{ frames} \\ 0 & \text{otherwise} \end{cases}$$
These rewards encourage the policy to change the pre-condition and post-condition (the latter via the pre-condition) in the last N frames of the episode, so that a predictor looking at the last N frames of observations will be able to deduce the truth value of the hypothesis.
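To make these proxy rewards concrete, the sketch below shows one way they could be computed from a short buffer of recent states. The helper predicates pre_condition_holds and post_condition_holds (checks derived from the parsed triplet) and the window length N are our own illustrative assumptions, not the paper's exact implementation.

```python
from collections import deque

def make_proxy_reward(pre_condition_holds, post_condition_holds, N=5, C=10.0):
    """Return a per-step reward function implementing the R_pre / R_post proxy rewards (sketch)."""
    history = deque(maxlen=N)  # truth values of (pre, post) over the last N frames

    def reward(state, action):
        history.append((pre_condition_holds(state), post_condition_holds(state)))
        if action != "ans":
            return 0.0, 0.0
        pre_changed = len({pre for pre, _ in history}) > 1     # pre-condition toggled within window
        post_changed = len({post for _, post in history}) > 1  # post-condition toggled within window
        r_pre = C if pre_changed else 0.0
        r_post = C if (pre_changed and post_changed) else 0.0
        return r_pre, r_post

    return reward
```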
Once we have trained the policy function with this proxy reward, we can then train the prediction network and even finetune our policy network on the final reward.
4.3 NETWORK ARCHITECTURE
Although other works such as Chaplot et al. (2018) have investigated language-conditioned RL (usually in the form of instruction following), our hypothesis conditioned problem proved to be quite challenging, and required some novelty in network architectures.
For the policy networks, standard architectures were not effective for our problem. The key seems to be that it is difficult to condition actions on language without explicit interaction between the language and non-language components. In particular, of all the network architectures we experimented with, an explicit attention network using the language as the key input was by far the most effective. The hypothesis is fed into a seq2vec model and used as the key to a dot-product attention mechanism. The state of the network (the grid locations of the items in the world and the inventory of the agent), after being fed through a one-layer network, is fed as input to N parallel MLPs. The outputs of the MLPs are fed as the values into the attention mechanism. The output of the module is then fed into the final hidden layer of the actor-critic network.
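Below is a minimal PyTorch sketch of this attention trunk, assuming a GRU-based seq2vec encoder and illustrative hidden sizes; the exact dimensions and number of parallel MLPs used in the paper may differ.

```python
import torch
import torch.nn as nn

class AttentionPolicyTrunk(nn.Module):
    """Language-keyed dot-product attention over N parallel state MLPs (sketch)."""

    def __init__(self, vocab_size, state_dim, hidden_dim=128, n_parallel_mlps=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.seq2vec = nn.GRU(hidden_dim, hidden_dim, batch_first=True)  # hypothesis encoder
        self.state_proj = nn.Sequential(nn.Linear(state_dim, hidden_dim), nn.ReLU())
        self.value_mlps = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                           nn.Linear(hidden_dim, hidden_dim))
             for _ in range(n_parallel_mlps)])

    def forward(self, hypothesis_tokens, state):
        # hypothesis_tokens: (B, T) token ids, state: (B, state_dim)
        _, key = self.seq2vec(self.embed(hypothesis_tokens))           # key: (1, B, H)
        key = key.squeeze(0)                                           # (B, H)
        s = self.state_proj(state)                                     # (B, H)
        values = torch.stack([mlp(s) for mlp in self.value_mlps], 1)   # (B, N, H)
        scores = (values @ key.unsqueeze(-1)).squeeze(-1) / key.shape[-1] ** 0.5
        attn = torch.softmax(scores, dim=1)                            # (B, N)
        return (attn.unsqueeze(-1) * values).sum(1)                    # (B, H), fed to actor-critic head
```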
For the prediction network, we use the popular transformer architecture (Vaswani et al., 2017). Our prediction network encodes both the hypothesis and past observations (after they are passed through a one-layer network) using transformer encoders. These sequences are then combined using a transformer to generate a final hidden state, which is fed to a final prediction layer and sigmoid function to get our binary prediction.
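A rough sketch of such a predictor is shown below, assuming standard nn.TransformerEncoder modules and mean pooling before the prediction layer; the pooling choice, layer counts and dimensions are illustrative assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class HypothesisPredictor(nn.Module):
    """Transformer-based truth predictor over (hypothesis, past observations) (sketch)."""

    def __init__(self, vocab_size, obs_dim, d_model=64, n_layers=2):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.obs_proj = nn.Linear(obs_dim, d_model)  # one-layer observation encoder
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), n_layers)
        self.text_enc, self.obs_enc, self.joint_enc = make_enc(), make_enc(), make_enc()
        self.head = nn.Linear(d_model, 1)

    def forward(self, hypothesis_tokens, observations):
        # hypothesis_tokens: (B, T) token ids, observations: (B, N, obs_dim)
        h = self.text_enc(self.tok_embed(hypothesis_tokens))      # (B, T, d)
        o = self.obs_enc(self.obs_proj(observations))             # (B, N, d)
        joint = self.joint_enc(torch.cat([h, o], dim=1))          # (B, T+N, d)
        return torch.sigmoid(self.head(joint.mean(dim=1)))        # binary truth probability
```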
In Figure 5, we provide ablation analysis for both of our neural network architectures. See Appendix C for more network details and hyperparameters and network diagrams.
5 EXPERIMENTS
First, we train our policy networks using the pretraining proxy reward functions from Section 4.2. We find that pretraining with just the pre-condition reward leads to better results for the Color Switch environment, and use both rewards for the other two environments. Figure 2 shows these results.
Next, we train our network on the final prediction reward and train our prediction networks. We train two different versions of this. For one, we only train the prediction network and keep our policy network fixed. For the other, we train both the prediction network and finetune the policy network.
During this final training stage, we relax our triplet-form constraint and train on both the triplet-templated hypotheses we saw during pretraining as well as new hypothesis templates not seen during pretraining. We sample seen versus new templates with equal probability. See Section 3.3 and Appendix A for examples of the kinds of hypotheses we see during this phase of training. Note that this includes hypotheses which break the triplet format.
Figure 3 and left of Figure 4 show our final hypothesis verification results. We show the max out of five for each of the methods shown. We also break down the final hypothesis prediction accuracy for our methods in Table 1, and show its success on the triplet hypotheses (which our methods were pretrained on) and non-triplet hypotheses (which they were not).
RL baseline We can first see clearly that the RL baselines fail. This is because the agent is unlikely to take the right actions to verify the hypothesis correctly, and therefore the prediction net cannot be trained properly. Because the average reward for answering is 0 if the agent cannot predict correctly, it does not even bother answering the question much of the time (which is why this baseline gets less than 50%: it does not guess in most games).
Other baselines We also include two other simple baselines, “no act” and “random act.” The no act baseline simply takes the ans action at t = 0 and the prediction network attempts to predict the hypothesis with just the first observation. This fails because the agent needs to take actions in the environment to be able to predict the hypothesis accurately. For random act, we simply make the policy take random actions. This similarly fails as random actions are extremely unlikely to behave in a way that allows for the verification of the hypothesis.
5.1 TRIPLET POLICIES CAN SUCCEED AND GENERALIZE
On the other hand, we see that RL is able to train on the triplet tasks after pre-training. While it is not surprising that densifying the reward in this way makes the RL easier, in our view, it is important that it is true, as it paves the way towards hypothesis verifying agents. That is: we are interested in scalable methods that can use statistical ML to interact with a complex environment. Given the more general success of deep RL, that the problem becomes approachable with reasonable reward shaping gives us hope we will be able to get beyond the regime of classical AI methods.
Moreover, in Pushblock and Color Switch, even with the policy learned from the triplet pre/post reward, the agent is able to generalize and perform well on templates not seen in the pre-training phase, as we can see in Table 1. This includes generalizing to difficult templates such as negations
and “independence” hypotheses. Note that the prediction network that verifies the hypotheses given the trajectory from the policy still needs to fine-tune on the new templates.
It is worth noting that although finetuning can do well with a few random seeds, these methods are high variance. We show and discuss this more clearly in the appendix. In Figure 7 we show that the variance of our method is high. In Appendix E we propose a training methodology that sorts out the bad random seeds by using the triplet hypotheses as a validation set. In Appendix J we show that these results are consistent when we increase the number of random seeds to 25.
5.2 TRIPLET POLICIES CAN ADAPT
On the crafting task, to do well on the unseen templates, the policy also needs to be fine-tuned. In our view, the fact that this fine-tuning can succeed is more important than the generalization in the simpler tasks, as it demonstrates a path towards agents that can verify complex statements by establishing a curriculum of simpler ones.
In the right of Figure 4, we show a visualization of a sample run of the finetuned policy and predictor on crafting. We see that the policy does what we expect, picks up the correct item and moves to the crafting table to craft. It crafts a different item than it expected (bed instead of torch) and it answers false. Looking at the prediction net over time, we see that it at first predicts false then true before it does the craft action. Once it has crafted the bed, however, it answers correctly.
We conduct additional experiments in the Appendix. In Appendix G, we further analyse the problem by experimenting with an oracle hypothesis predictor. In Appendix F we experiment with different pretraining functions. In Appendix H we look at training the baselines for longer. And in Appendix I, we look at whether giving the baselines more past frames N improves performance.
In Figure 5 we see the results of our network architecture ablation. As we can see, our new policy architecture described in Section 4.3 clearly outperforms a standard MLP policy network on the
language-conditioned pretraining task. We also see that the transformer architecture outperforms the LSTM and MLP models on the final task when we hold the policy network constant.
6 DISCUSSION
In this work, we propose a tractable formulation of the problem of training agents that can interact with an environment to test hypotheses about it. We show that generic RL techniques struggle with the problem, but by using its structure, we are able to develop a method that works in simple environments. Specifically, we use the fact that many hypotheses can be broken into triplets of the form (pre-condition, action sequence, post-condition); but we also show that once pre-trained using this factorization, agents can be fine-tuned to verify more general hypotheses.
A TEMPLATES
A.1 WORLD AND HYPOTHESIS CONSTRUCTION
Returning again to our notation from Section 3.1, the environment at each spawn needs to construct a world W out of the set of all possible worlds $\mathcal{W}$, and a hypothesis h that is either true or false in the world. W in particular describes the rules about how the environment works (i.e. which switch opens the door), which in our case can be precisely described by a hypothesis. So given a true hypothesis, we can exactly describe the rules of the world. Therefore, in order to create an instance of a possible W in $\mathcal{W}$, we can instead draw a true hypothesis about the world at random. From the hypothesis, we can then construct the rules that determine how objects in the world behave. Note that there are a couple of exceptions to this for our harder hypotheses, where the hypothesis can be true but only partially describes all the rules of W. For these cases, we draw yet another template which is consistent with the hypothesis and use that to construct the rules, such as deciding which color switch really opens the door.
Because we have to randomly give either a true or false hypothesis, we also need to be able to generate a false hypothesis for the world. So for every instance, we also draw a random false hypothesis. Now, given a true and false hypothesis, we can fully generate the world and all the items that appear in either statement. So for instance, if the true hypothesis mentions a green switch and the false one mentions a blue switch, we generate both a green and blue switch. Then, we can set the rules such that the right thing happens. So in this example, switching the green switch opens the door and the blue switch does nothing.
The final step is then to randomly choose either the true or false statement as the “visible” hypothesis which is passed to our agent to verify. Because we generate the world and spawn the items before we make this choice, we ensure that we do not accidentally give away the truth of the hypothesis based on what items spawned.
Our process for generating a new spawn of the environment can thus be summarized as follows (a code sketch of this procedure is given after the list):
1. We randomly generate a true hypothesis
2. We randomly generate a false hypothesis
3. We construct a ruleset from the true hypothesis
4. We spawn the agent and the items in the world described in both the true and false hypothesis
5. We randomly choose either the true or false hypothesis as the “visible” hypothesis that the agent must verify
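As referenced above, a minimal sketch of this spawn procedure is given below; the helper functions sample_hypothesis, rules_from and spawn_items are hypothetical stand-ins for the environment-specific generators, not the authors' exact implementation.

```python
import random

def spawn_environment(sample_hypothesis, rules_from, spawn_items):
    """One environment reset following steps 1-5 (sketch with hypothetical helpers)."""
    true_h = sample_hypothesis(truth=True)    # step 1
    false_h = sample_hypothesis(truth=False)  # step 2
    ruleset = rules_from(true_h)              # step 3: world rules follow the true hypothesis
    world = spawn_items(true_h, false_h)      # step 4: spawn items mentioned by either hypothesis
    # step 5: the agent sees one of the two at random, so the spawned items leak no information
    visible_h, label = random.choice([(true_h, True), (false_h, False)])
    return world, ruleset, visible_h, label
```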
Color Switch:
Pre-condition:
if the COLOR switch is ON_OFF_SWITCHSTATE
when the COLOR switch is in the ON_OFF_SWITCHSTATE position
the COLOR switch is ON_OFF_SWITCHSTATE
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
Finetune templates:
the door can only be opened by switching the COLOR switch to ON_OFF_SWITCHSTATE
when we see the COLOR switch is ON_OFF_SWITCHSTATE the door must be open
if the COLOR switch turns ON_OFF_SWITCHSTATE the door opens
when we see the door open it must be that the COLOR switch is in the ON_OFF_SWITCHSTATE position
those who want to open the door must first switch the COLOR switch ON_OFF_SWITCHSTATE
no password just make the COLOR switch be ON_OFF_SWITCHSTATE to open the door
COLOR switch ON_OFF_SWITCHSTATE implies door is open
only the COLOR switch being ON_OFF_SWITCHSTATE opens the door
the door is open because COLOR switch is in the ON_OFF_SWITCHSTATE position
COLOR switch ON_OFF_SWITCHSTATE equals open door
the COLOR switch opens the door but only when it is ON_OFF_SWITCHSTATE
door is open must mean that COLOR switch is ON_OFF_SWITCHSTATE
an ON_OFF_SWITCHSTATE means the door is open but only if it is COLOR
COLOR controls the door and it opens when it is ON_OFF_SWITCHSTATE
ON_OFF_SWITCHSTATE is the correct position of the COLOR switch and it opens the door
the switch that causes the door to be open when it is ON_OFF_SWITCHSTATE is COLOR
if you see COLOR switch then the door is open
the door is independent of the COLOR switch
if the door is not open then the COLOR switch must be ON_OFF_SWITCHSTATE
if the COLOR switch is not ON_OFF_SWITCHSTATE then the door is open
to make the door not open the COLOR switch must be not ON_OFF_SWITCHSTATE
whether the door is open is completely independent of the COLOR switch
the COLOR switch is what controls the door
a not ON_OFF_SWITCHSTATE COLOR switch opens the door
Template Values:
COLOR: blue, red, green, black
ON_OFF_SWITCHSTATE: on off
Pushblock
Pre-condition:
whenever the pushblock is in the PUSHBLOCK_POSITION
if the pushblock is at the PUSHBLOCK_POSITION
the pushblock is at the PUSHBLOCK_POSITION
Action: ""
Post-condition:
then the door is open
the door is passable and we see the door is open
the door will open
SP_FULL_TRAIN:
PUSHBLOCK_POSITION is the correct position for the pushblock for the door to open
if the door is open it must be that the pushblock is at the PUSHBLOCK_POSITION
when the door is open it is because the pushblock is in the PUSHBLOCK_POSITION
when the pushblock is at the PUSHBLOCK_POSITION the door is open
pushblock PUSHBLOCK_POSITION means door open
the door can only be opened when the pushblock is PUSHBLOCK_POSITION
if the pushblock is PUSHBLOCK_POSITION it means the door is open
PUSHBLOCK_POSITION pushblock opens the door
open door implies pushblock PUSHBLOCK_POSITION
open door means pushblock PUSHBLOCK_POSITION
door opens when PUSHBLOCK_POSITION is where the pushblock is
PUSHBLOCK_POSITION is the correct position for the pushblock to open the door
the door when the pushblock is PUSHBLOCK_POSITION is open
PUSHBLOCK_POSITION position of the pushblock causes the door to open
door only opens on PUSHBLOCK_POSITION pushblock
door can only open with pushblock being PUSHBLOCK_POSITION
the pushblock being at the PUSHBLOCK_POSITION is completely independent of the door
the pushblock being PUSHBLOCK_POSITION is independent of the door being open
the door state is independent of pushblock PUSHBLOCK_POSITION
PUSHBLOCK_POSITION pushblock and door are independent
Pushblock values:
PUSHBLOCK_POSITION: left, right, top, bottom
Crafting
Pre-condition:
when you are at LOCATION and you have CRAFTING_ITEM
you are at LOCATION and have in your inventory CRAFTING_ITEM
whenever you have a CRAFTING_ITEM and are at LOCATION
Action:
and you do CRAFTING_ACTION
then you CRAFTING_ACTION
Post-condition:
you now have CREATED_ITEM in your inventory
then CREATED_ITEM is created
and this creates CREATED_ITEM
so CREATED_ITEM is created and put in your inventory
then CREATED_ITEM is made
Finetune Templates:
to create a CREATED_ITEM you must have CRAFTING_ITEM and go to LOCATION and do the action CRAFTING_ACTION
CREATED_ITEM can be created by doing CRAFTING_ACTION at LOCATION when CRAFTING_ITEM is in inventory
whenever you do CRAFTING_ACTION and have CRAFTING_ITEM at LOCATION a CREATED_ITEM is made
you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will be created
whoever does CRAFTING_ACTION at LOCATION with CRAFTING_ITEM gets CREATED_ITEM
if you have CRAFTING_ITEM at LOCATION and you CRAFTING_ACTION you get CREATED_ITEM
if you do CRAFTING_ACTION at LOCATION with CRAFTING_ITEM you make CREATED_ITEM
whenever you have CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION then you make a CREATED_ITEM
having CRAFTING_ITEM in your inventory being at LOCATION and doing CRAFTING_ACTION creates CREATED_ITEM
CREATED_ITEM can be made with CRAFTING_ITEM when you do CRAFTING_ACTION at LOCATION
CRAFTING_ITEM plus LOCATION plus CRAFTING_ACTION equals CREATED_ITEM
create a CREATED_ITEM by being at LOCATION with CRAFTING_ITEM and doing CRAFTING_ACTION
CRAFTING_ACTION at LOCATION creates CREATED_ITEM but only if you have a CRAFTING_ITEM
if you want to make a CREATED_ITEM then go to LOCATION with CRAFTING_ITEM and do CRAFTING_ACTION
CRAFTING_ITEM in inventory at LOCATION makes CREATED_ITEM if you do CRAFTING_ACTION
CREATED_ITEM when CRAFTING_ITEM at LOCATION and do CRAFTING_ACTION
if you are at LOCATION and do CRAFTING_ACTION you make CREATED_ITEM
if you are anywhere and do CRAFTING_ACTION with CRAFTING_ITEM you make a CREATED_ITEM
having CRAFTING_ITEM at LOCATION and doing CRAFTING_ACTION does not make a CREATED_ITEM
CREATED_ITEM is created by being at LOCATION and doing CRAFTING_ACTION
make a CREATED_ITEM by having a CRAFTING_ITEM and doing CRAFTING_ACTION
you have CRAFTING_ITEM and go to LOCATION and CRAFTING_ACTION and CREATED_ITEM will not be created
LOCATION plus CRAFTING_ACTION creates a CREATED_ITEM
with a CRAFTING_ITEM you can make a CREATED_ITEM by doing CRAFTING_ACTION
Template Values:
CRAFTING_ITEM: iron, wood, stick, pickaxe, coal
CREATED_ITEM: torch, bed
LOCATION: craftingtable
CRAFTING_ACTION: craft
B LEARNING DETAILS AND HYPERPARAMETERS
One detail of the prediction network is that we need to keep a memory of past state sequences, hypotheses and ground truths so we can actually train our prediction network. We do this by simply keeping track of the last N times our agent answered a question, and keeping these in a FIFO memory. When we update our prediction network, we randomly sample from this pool. This also necessitates a 100k step break-in period to collect enough examples.
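A minimal sketch of such a FIFO memory is shown below; the capacity and field layout are illustrative assumptions.

```python
import random
from collections import deque

class AnswerMemory:
    """FIFO memory of recently answered episodes for training the predictor (sketch)."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest entries are dropped automatically

    def add(self, state_sequence, hypothesis, truth_label):
        self.buffer.append((state_sequence, hypothesis, truth_label))

    def sample(self, batch_size):
        # random mini-batch of (state sequence, hypothesis, ground truth) tuples
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```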
In our policy finetuning experiments, we also stabilize our dual optimization problem by trading off optimization of the policy network and the prediction network. We must also start with the prediction network so that the reward for answering correctly is meaningful.
The basis of our RL implementation was Kostrikov (2018).
C NETWORK DETAILS AND HYPERPARAMETERS
C.1 RELATED WORK
Other works such as Chaplot et al. (2018) have incorporated gated mechanisms between language and perception. Manchin et al. (2019) employs self-attention mechanism within convolutional layers and Choi et al. (2017) also employs a self-attention mechanism in a DQN. Neither work incorporates language and the architectures are quite different from each other.
Figure 6 shows the policy and transformer architectures.
C.2 IMPLEMENTATION AND HYPERPARAMETERS
We take much of our implementation of transformers from Rush (2018).
D ADDITIONAL FIGURES
E STAGED RANDOM SEED VALIDATION
In this experiment, we perform a two-stage procedure for evaluating our results. The idea is that we use one set of hypotheses to determine which random seeds are successful and then show results on the larger set of hypotheses.
In the first stage, we train our methods on only the triplet templates (the same ones used during pre-training). We then choose only the seeds that performed well (in these figures we show results for keeping seeds with at least 80% prediction accuracy and with at least 90% accuracy; if a method has no seeds performing high enough, we choose the top 5 for that experiment). We show results on 25 random seeds. We preserve all training and network hyper-parameters.
In Figure 8 we show the first stage of training. We only train these with the triplet templates also seen during pre-training. We give the baseline more time to train to make up for the extra time the other methods got during pretraining. We can see that for all three environments the pretrained methods have at least one good seed for both finetuning and fixed policies. For crafting, we can get a better max seed with finetuning. However, especially in crafting, the variance is quite high, with many seeds doing poorly. The baselines do poorly overall except for a couple seeds in pushblock. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
In Figure 9 and Figure 10, we show the results of the second stage of training. As we discussed, this stage includes the more difficult, non-triplet templates not seen during pre-training and not seen during the first stage of training when we selected the top seeds. We can see that with the pruning of bad seeds, the variance bands for the pre-trained methods are much smaller and more clearly outperform the baselines. We again see that we are able to get the best results from the finetuning on crafting. As with stage 1, we see that the RL-only baseline is able to do reasonably well on pushblock, but still not as good as our pre-training methods. We show results for cutoffs at 80% and 90% to make sure we were robust to the choice of cutoff, and we can see very little difference between them.
Figure 11 shows the 90% cutoff experiment again with the mean instead of the max plotted.
F INTRINSIC PRE-TRAINING EXPERIMENTS
In this experiment, we show results on our hypotheses verification problem using different forms of “intrinsic motivation” pre-training. We show results for 4 different pretraining schemes:
1. Change any item state in the world. Receive reward at end.
2. Change any item referenced in the hypothesis. Receive reward at end.
3. Change any item state in the world. Receive reward instantaneously.
4. Change any item referenced in the hypothesis. Receive reward instantaneously.
Reward at the end means that it operates similarly to our hypothesis pre-training. Specifically, the agent gets reward only at the end of the episode when it has taken a stop action. At that step it gets a +C reward if the relevant item state changed within the last N frames. For these rewards, we choose C = 10.
Instantaneous reward is what it sounds like. When the object state is changed, the reward is instantly received by the agent. We chose C = 1 for colorswitch and pushblock and C = 5 for crafting.
We interpret “item” to mean any object that is not the agent. So this includes crafting items, switches, pushblocks, etc. We show results on 25 random seeds. We preserve all training and network hyper-parameters.
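A compact sketch of these two reward timings is given below; the item-state representation and the class interface are illustrative assumptions, not the exact implementation.

```python
from collections import deque

class IntrinsicChangeReward:
    """Reward the agent for changing tracked item states, in two timings (sketch)."""

    def __init__(self, mode="end", C=10.0, N=5):
        self.mode, self.C, self.N = mode, C, N
        self.changes = deque(maxlen=N)  # per-step change flags for the 'end' mode

    def __call__(self, prev_item_states, curr_item_states, action):
        # prev/curr_item_states: dicts mapping a tracked item (switch, pushblock, ...) to its state
        changed = any(prev_item_states[k] != curr_item_states[k] for k in curr_item_states)
        if self.mode == "instant":
            return self.C if changed else 0.0      # pay the moment any item state changes
        self.changes.append(changed)
        if action == "ans" and any(self.changes):  # 'end' mode: pay only on the stop/answer action
            return self.C
        return 0.0
```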
In Figure 15 we show the final accuracies on the hypothesis verification task using the pretrained intrinsic rewards. As before, only the hypothesis predictor and not the policy is trained at this step. In
Figure 16 we show the same results where we finetune the policy as well. All training and network parameters are kept the same from earlier experiments.
We can see that the best results come from the crafting pre-training intrinsics. This makes a lot of sense because changing the state for crafting includes picking up objects and crafting objects, which is what the agent needs to do to verify the hypothesis. On colorswitch, we are able to get reasonable results, at least for the fixed policy. Again, changing the state corresponds to flipping switches, which is also useful for verifying colorswitch hypotheses. For pushblock, nothing performed better than chance. Here, merely changing the state of the object isn’t enough to verify anything. To verify pushblock hypotheses, the state of the pushblock (its position) needs to be changed in a specific way: pushed
into or out of the correct position. The intrinsic change reward does not necessarily cause this, so this did not appear to be sufficient in this case.
G ORACLE HYPOTHESIS PREDICTION
In this experiment, we disentangle the problem somewhat for analysis by running experiments with an “oracle” hypothesis predictor on the Crafting environment. Specifically, in these experiments, we assume that we have an oracle that, given the last N states of the world, returns the ground truth of the hypothesis whenever it is possible to infer its truth value from that sequence of states. This allows us to analyze the upper bounds of this problem and see what the hard part of our problem is.
First, we train a RL agent with access to the oracle. So the RL agent must learn its action policy, but when it takes the answer action, it uses the oracle to predict the hypothesis. Therefore, if the actions it has taken can verify the hypothesis, it will automatically answer correctly and get the reward. We show results on 25 random seeds and preserve the hyper-parameters from other experiments.
We show the result of this in Figure 17. We see that the RL is quickly, although not instantly, able to converge to perfect performance. From this we can surmise that if we already know how to predict the hypothesis, it is quite easy to get the reward - we just have to learn the patterns necessary to make the oracle prediction possible. The RL baseline without pretraining and without the oracle was not able to converge to a good solution. This suggests that the hard part may be getting a good hypothesis predictor in the first place, which then lets us learn the right policy.
Toward that end, we analyze our trained algorithms to see whether the actions they take are capable of verifying the hypothesis. We show the values for the top accuracy model. We use the same models and seeds whose results we show in Table 1 and Figure 4.
Table 9 shows these results. What we see is that indeed, the actions taken by the baselines are not able to verify the hypothesis. The baseline RL policy only allows the oracle predictor to predict the hypothesis 3% of the time, giving us an upper bound of 51.5% on hypothesis accuracy. Random action is even worse, only leading to the right state sequence 0.7% of the time. No action (the agent that just tries to answer right away), as expected, is never able to get the right sequence. For the pre-trained methods we see that we are able to get to the right states most of the time. The finetuned policy gets the right states almost 100% of the time. With the fixed policy from pretraining, the oracle can answer 75% of the time, meaning that by guessing on the rest you could theoretically get to about 88%.
These experiments suggest that, as we expected, simultaneously learning the policy and the prediction is the difficult part of this problem. Once you have the best possible hypothesis prediction, RL can quite easily find the correct policy.
H LONGER TRAINING BASELINES
Because the pre-trained methods had the benefit of more training frames, we run the baselines for more frames to see whether additional training helps the comparison. We keep all the training parameters the same.
In Figure 18 we show the baseline methods on the original 5 seeds trained for the equivalent 1.5e8 steps. In Figure 19 we show 20 additional seeds trained for longer, although not quite to the 1.5e8 steps.
On the original seeds, training for longer has no effect. However, when we train with many more seeds, we find that for pushblock, we are able to find a random seed that can get to about 75% accuracy. This is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy. The max of this still slightly underperforms the pre-trained policies.
I MORE STATE MEMORY BASELINES
In this experiment, we see if the RL baseline gets any benefit from increasing N , the number of past states it keeps in its observation. We show results for N = 10, 20, 50, 100 keeping all other parameters the same.
We can see that increasing the value of N does not appear to have any effect on the baselines. N = 5 is likely sufficient to see the change in the state of the environment and to allow the agent to know to stop and answer the question.
J ADDITIONAL RANDOM SEEDS
In this experiment, we show the results from previous experiments, but increase the number of random seeds from the original 5 to 25. When we did this, we also ran 25 random seeds for pretraining, so each result incorporating finetuning came from a different pretraining seed. Results are in Figure 21.
Adding more random seeds, we find essentially the same story as with 5 seeds. Finetuned from pretrain is able to get the best single results, but tends to be very high variance. Non-finetuned from pre-train does generally well on everything, except underperformance on crafting (especially on the new templates). And the baselines do still not do well.
One difference worth noting is that in Figure 19, we find that when training the RL baseline for longer and with more random seeds, it is able to get one good random seed on pushblock. As we noted there, this is the simplest environment, so it makes sense that this would be the one where the baseline RL might be able to find a policy.
For additional clarity, we show these same plots again in Figure 22 with the mean plotted instead of max. This shows the high variance a bit clearer but does not show that we are able to get some good seeds. Appendix E provides a possible solution to this problem by selecting the good random seeds based on a smaller set of hypotheses. | 1. How does the proposed framework for testing structured hypotheses about environment dynamics differ from traditional reinforcement learning approaches?
2. Can you provide examples of how the proposed method can be applied to real-world problems?
3. How does the policy and predictor decomposition in section 3.1 work, and why is it effective?
4. What are the strengths and weaknesses of the proposed approach compared to other methods in the field?
5. How do the experimental results support the effectiveness of the proposed method, and what are the limitations of the experimental design?
6. How does the paper contribute to the broader understanding of reinforcement learning and its applications? | Review | Review
The authors present a framework for testing a set of structured hypotheses about environment dynamics by learning an exploratory policy using Reinforcement Learning and an evaluator through supervised learning. They propose a formulation that decomposes environment hypotheses into sets of pre-conditions, required actions, and post-conditions. They then exploit this decomposition to (a) decouple the problem into both RL and supervised learning, and (b) provide localised pre-training to make the problem more tractable.
Overall, I really wanted to like this paper. The problem is interesting, and it certainly provides a great venue for interesting and impactful research in RL, language-conditioned decision making, structured / symbolic learning, and so on. However, I've found it relatively difficult to understand good parts of the methods and part of the experimental section, due to missing or misleading details.
In particular:
1. the justification for splitting the problem in a _exploratory_ / verification policy and a predictor is sound in principle, however it's unclear to me whether the problem is after all that intractable. In the experiment section a "RL Baseline" is mentioned in principle, however (1) it is unclear whether it was pre-trained similarly to the proposed methods, and (2) if the policy has learnt enough about the problems that its poking methodology provides enough signal to the predictor, I would expect the same policy to be able to learn the same function given enough memory and training steps.
2. I'm confused by the way the authors decomposed the action space for the policy and the predictor in section 3.1. Does the policy use ans_T and ans_F at any point during training? Does the actor effectively decide (i.e. by choosing "ans") when to query the prediction network?
3. The way the authors split the templates is confusing to me. Up to section 3.3.1 (and - really - until I read the appendix...), the writing sort of led me to assume that (1) the "(pre-condition, action sequence) -> post-condition" split was a fairly standard manner of composing a hypothesis, and that (2) the templates were mostly symbolic. However after reading the appendix, I found the imposed structure to be fairly arbitrary, and the usage of natural language overkill and not necessarily well justified. Ideally, I would like to see some comparisons between this type of hypothesis and other decompositions used in previous literature, since it seems like the method exploits this particular structure quite heavily and I don't quite understand how it generalises to other tasks.
4. The environments seem to be all fairly similar, both in terms of overall complexity, size, and features. It would have been better to also present problems with fairly different settings (e.g. much different - sparser and/or denser - types of reward function), rather than evaluating multiple times on effectively the same grid-world. I was though encouraged to see that one of the environment seemed to require slightly different setting in the pre-training reward setup, however the authors didn't follow up with some analysis on why there was such a difference.
5. I'm confused by how the pre-training is done. I understand that R_{pre} is used by itself in one environment, but I couldn't figure out whether it's both reward functions at the same time that are used in the rest of them, or just R_{ppost}. Looking at the scale of the (average?) reward, the former seems to be the case, but it would be good to be certain about such things.
6. The final accuracy of all the experiments are shown using the max of top-5, however appendix D shows quite a significant variance for the methods. Thus I'm not sure the analysis and final considerations are reasonable. What happens if the methods are trained on more seeds?
6. [nit] the title is somewhat misleading: in the introduction, a scientist is defined as being both a proposer and a verifier of hypotheses, which is a reasonable, however the authors fundamentally propose to solve only arguably the more straightforward of the two problems. A less _flashy_ title would go a long way towards providing reasonable expectations for the reader.
To improve this paper, I would like to see:
- Better clarity on how the hypothesis setup stands to previous literature.
- The difference in performance on each environment with different pre-training reward function (only one in show in the paper right now)
- At least one more environments with significantly different dynamics, or an explanation of how the existing settings differ in qualitative terms.
- A baseline employing some form of memory (such as heavy usage of frame stacking or recurrency), to attempt at figuring out whether it's really not reasonable to learn the whole problem simply using RL, with ablation of pre-training (which I suspect might make a significant difference).
At this point, I cannot recommend the article for acceptance, but I'd be willing to change my rating if the authors were to address some of the above points. |
ICLR | Title
Demystifying black-box DNN training processes through Concept-Monitor
Abstract
Despite the successes of deep neural networks (DNNs) on a broad range of tasks little has been understood of why and how they achieve such victories due to their complex architecture and their opaque black-box training processes. With the goal to unveil the mystery of DNNs, in this work, we propose a general framework called Concept-Monitor to uncover the black-box DNN training processes automatically for the first time. Our proposed Concept-Monitor enables humaninterpretable visualization of the DNN training processes and thus facilitates transparency as well as deeper understanding on how DNNs function and operate along the training iterations. Using Concept-Monitor, we are able to observe and compare different training paradigms at ease, including standard training, finetuning, adversarial training and network pruning for Lottery Ticket Hypothesis, which brings new insights on why and how adversarial training and network pruning work and how they modify the network during training. For example, we find that the lottery ticket hypothesis discovers a mask that makes neurons interpretable at initialization, without any finetuning, and we also found that adversarially robust models have more neurons relying on color as compared to standard models trained on the same dataset.
1 INTRODUCTION
The unprecedented success of deep learning has led to its rapid application to a wide range of tasks; however, deep neural networks (DNNs) are also known to be black-box and non-interpretable. To deploy these deep neural network (DNN) models into real-world applications, especially for safety-critical applications such as healthcare and autonomous driving, it is imperative for us to understand what is going on behind the black box. There has been a proliferation of research efforts towards interpreting DNNs, and they can be mainly divided into two categories: the first approach focuses on attributing a DNN’s prediction to the importance of individual inputs, identifying which pixels or features are important (Zhou et al., 2016; Selvaraju et al., 2019; Sundararajan et al., 2017; Smilkov et al., 2017), while the other approach investigates the functionalities (known as concepts) of individual neurons (Bau et al., 2017a; Mu & Andreas, 2020; Oikarinen & Weng, 2022).
However, most of these methods only focus on examining a DNN model after it has been trained, and therefore miss out on useful information that could be available in the training process. For example, for a deep learning researcher or engineer, it would be very useful to know:
What are the concepts learned by the DNN model and how has the DNN model learnt the concepts along the training process?
The answer to the above question would be useful in two ways: (i) it can shed light on why and how DNNs can achieve great success, which could be helpful to inspire new DNN training algorithms; (ii) it can also help to debug DNNs and prevent catastrophic failure if anything goes wrong.
Motivated by the above question, the main goal of this work is to develop a novel framework, Concept-Monitor, which makes the black-box DNN training process transparent and human-understandable. Our proposed Concept-Monitor is scalable and automated – both crucial to demystify the opaque DNN training process efficiently and help researchers better understand the training dynamics of the model. More formally, in this paper we provide the following contributions:
• We propose a general framework, Concept-Monitor, which is the first automatic and efficient pipeline to make black-box neural network training transparent and interpretable. Our pipeline monitors and tracks the training progress with human-interpretable concepts, which provide useful statistics and insights into the DNN model being trained.
• We develop a novel universal embedding space which allows us to efficiently track how the neurons’ concepts evolve and visualize their semantic evolution throughout the training process, without the need to re-learn an embedding space as proposed in prior work.
• We provide four case studies to analyze various deep learning training paradigms, including training standard deep vision models, the mysterious lottery ticket hypothesis, adversarially robust training and fine-tuning on a medical dataset. With Concept-Monitor, we are able to discover new insights into the obscure training process that help explain some of the empirical observations and hypotheses of black-box deep learning through the lens of interpretability.
2 BACKGROUND AND RELATED WORKS
2.1 NEURON-LEVEL INTERPRETABILITY METHODS
Recently, there has been a great interest towards understanding deep neural network models at the neuron-level, which is different from mainstream methods that focus on interpreting individual decisions through the input features and pixels (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju
et al., 2019; Sundararajan et al., 2017). We call this new direction neuron-level interpretability methods and review the representative techniques below. To begin with, the techniques in this direction can be briefly divided by whether they need to collect a curated annotated concept dataset to dissect DNNs. For the techniques that require a curated probing dataset labelled with pre-defined concepts, classic methods in this category include Network Dissection and its variations (Bau et al., 2017b; Mu & Andreas, 2020) as well as Testing with Concept Activation Vectors and its variations (Kim et al., 2017; Goyal et al., 2019; Ghorbani et al., 2019). The key idea of Network Dissection is to identify concepts of neurons by calculating an Intersection over Union (IoU) score of intermediate activation maps and pre-defined concept masks, while the key idea of Testing with Concept Activation Vectors is to use directional derivatives to quantify the model’s sensitivity to the pre-defined concepts.
However, one limitation of this type of approach is the need for a curated probing dataset annotated with concept labels, which may be expensive and time-consuming to collect. On the other hand, a recent method, CLIP-Dissect (Oikarinen & Weng, 2022), addresses this challenge by leveraging the paradigm of multi-modal models (Radford et al., 2021) and allows automatic identification of neuron concepts without the need to collect concept-labelled data. We note that these techniques are all compatible with our proposed Concept-Monitor to facilitate automatic concept monitoring of the DNN training process. In our experiments, we demonstrate the versatility of our Concept-Monitor by showing results with different concept detectors in section 3.2 when we study the standard DNN training process.
2.2 UNDERSTAND DNN TRAINING DYNAMICS
Most of the existing research has been primarily focused on analyzing models after training instead of investigating how the interpretation/concepts change during the DNN training process, which is the main focus of our work. We note that a recent work, Concept-Evo (Park et al., 2022), has the same goal as ours, but their proposed method is very different from our Concept-Monitor and has some limitations as discussed below. First, their main idea is to learn a universal semantic space for each neuron, using a base model, and then project the target model to this space, while we do not need to perform any training. For example, their embedding space uses a base model (VGG19 trained on ImageNet) to project target neurons, while we use a pre-trained CLIP (Radford et al., 2021) text encoder to define a universal embedding space. Their method would be much more expensive than ours as they have to redo the learning every time they change the base model or the probing dataset. Second, the approach proposed in Concept-Evo does not associate human-interpretable concepts to the neurons, and thus human intervention is required to actually describe each of the neurons, which is another heavy cost (especially when the model size becomes larger and when the training epochs increase) and hard to automate. On the other hand, our method is fully automated and can explicitly provide the top k human-understandable concepts for a neuron, which is another advantage of our Concept-Monitor.
3 CONCEPT-MONITOR: A NOVEL, SCALABLE AND AUTOMATED TOOL TO DEMYSTIFY BLACK-BOX DNN TRAINING PROCESS
In section 3.1 we detail the key components in Concept-Monitor including the concept detector and the universal embedding space. Next in section 3.2, we use Concept-Monitor to demystify the standard training process of a deep vision model and discuss the results and insights.
3.1 CONCEPT DETECTOR AND A UNIFIED EMBEDDING SPACE
Concept Detector: The first part of our method is to use a concept detector (ϕ) to automatically identify the concept of a neuron at any stage in the training. Given a set of concept words S and a probing image dataset $D_{\text{probe}}$, a concept detector ϕ would return a concept word $w_n$ for a neuron n that maximally activates it. To achieve automatic concept monitoring of a DNN training process, we use two automated neuron-level interpretability tools, Network Dissection (Bau et al., 2017a) and CLIP-Dissect (Oikarinen & Weng, 2022), as the concept detectors in our experiments as a proof-of-concept, and we note that Concept-Monitor is compatible with other neuron-level tools as well. Although the technical approach of each concept detector is different, we can unify them as a tool calculating a distance metric $d^n_i$ which quantifies neuron n’s association with the concept $w_i$.
For example, the distance $d^n_i$ in Network Dissection (Bau et al., 2017a) is defined to be the IoU score between activation maps and concept masks, while the distance $d^n_i$ in CLIP-Dissect (Oikarinen & Weng, 2022) is a measure of the similarity between the concept activation matrix and neuron activation maps. Based on this distance metric, we can also define interpretable neurons, which are the neurons whose distance to the closest concept word is less than some threshold, i.e. $\min_i(d^n_i) < \tau$, where the threshold τ is dependent on the concept detector ϕ.
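The detector-agnostic interface described above could be sketched as follows, where distances is the per-concept vector d^n produced by whichever detector is plugged in; the function and argument names are our own illustration, not the paper's code.

```python
import numpy as np

def describe_neuron(distances, concept_set, tau, k=5):
    """Rank concepts for one neuron from a detector-agnostic distance vector (sketch).

    distances: array of shape (|S|,) holding d^n_i for every concept w_i in S, where lower
    means a closer match (scores like raw IoU should be negated first, as the paper does
    when building d^n).
    """
    order = np.argsort(distances)[:k]                      # k closest concepts
    top_concepts = [(concept_set[i], float(distances[i])) for i in order]
    interpretable = distances[order[0]] < tau              # min_i d^n_i < tau
    return top_concepts, interpretable
```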
Unified embedding space: The second part of our method is to define a unified embedding space in order to visually track neurons’ evolution. Here we detail the steps to project a neuron n into our unified embedding space.
Step 1: To start with, we use $w_i$ to denote the $i$-th concept in the concept set S and use $v_i$ to denote the associated text embedding, where $v_i = f(w_i)$ with f being the text encoder of a pretrained large language model. We use $\{v_1, v_2, \ldots, v_{|S|}\}$ as the basis of our semantic space and project neurons on this space using a weighted linear combination of the $v_i$ of the neuron’s top-k concept words.
Step 2: Let $W^n = [w^n_{1'}, w^n_{2'}, \ldots, w^n_{k'}]$ be the list of top-k concept words for neuron n. For each neuron n, we can then calculate the embedding $u_n$ using Equation (1) below,

$$u_n = \sum_{i=1}^{k} \lambda^n_i \, f(w^n_{i'}) \tag{1}$$

where $\lambda^n_i$ is the weight of the concept $w_{i'}$ for describing the neuron n and depends on the concept detector used. For Network-Dissection (Bau et al., 2017a), we use the distance vector $d^n = [-\mathrm{IoU}_{1'}, -\mathrm{IoU}_{2'}, \cdots, -\mathrm{IoU}_{k'}]$, and for CLIP-Dissect (Oikarinen & Weng, 2022) $d^n = [-h_{1'}, -h_{2'}, \cdots, -h_{k'}]$, where h is the point-wise mutual information distance metric proposed in the CLIP-Dissect paper. We can then calculate $\lambda^n_i$ by fitting a softmax distribution on the corresponding (negative) distance vector, i.e. $\lambda^n_i = e^{-d_{i'}} / \sum_{j=1}^{k} e^{-d_{j'}}$. The pseudo code for calculating the unified embedding space is presented in Appendix Algorithm 1.
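Below is a possible NumPy rendering of Equation (1), assuming a text_encoder callable (e.g. a CLIP text encoder) that maps a word to its embedding; it mirrors the role of Appendix Algorithm 1 but is only an illustrative sketch.

```python
import numpy as np

def neuron_embedding(top_concepts, top_distances, text_encoder):
    """Project one neuron into the unified semantic space via Equation (1) (sketch).

    top_concepts: the neuron's top-k concept words w^n_{i'}
    top_distances: the corresponding distances d_{i'} (lower = closer)
    text_encoder: callable mapping a word to its embedding v = f(w)
    """
    d = np.asarray(top_distances, dtype=np.float64)
    e = np.exp(-(d - d.min()))        # stabilized softmax over negative distances
    weights = e / e.sum()             # lambda^n_i
    vecs = np.stack([text_encoder(w) for w in top_concepts])  # shape (k, embed_dim)
    return weights @ vecs             # u_n = sum_i lambda^n_i f(w^n_{i'})
```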
Remarks:
1. Note that since our method is general, when using a new concept detector, we only need to change the distance vector $d^n$ associated with that concept detector, which describes how closely related a neuron is to a specific concept.
2. Another benefit of our unified embedding space is that we can project any general concept word α into the same embedding space by calculating its text embedding f(α). This lets us mark the embedding space with concept ”anchors” (see the red stars in Fig 3), which are concepts that a researcher thinks would be represented in a well trained model. The researcher can then track whether and which neurons are converging or diverging away from those anchors giving useful feedback during training.
3. Unlike prior work Concept-Evo (Park et al., 2022), which requires training an embedding space every time the base model changes, our unified semantic space doesn’t need to train a base model or learn image embeddings. Please refer to Table 1 for a full comparison between our method and Concept-Evo (Park et al., 2022).
3.2 CASE STUDY (I) MONITORING STANDARD TRAINING
Now we use Concept-Monitor to investigate standard training of a ResNet-18 model on the Places365 dataset. We investigate the concept evolution of neurons at different epochs in the training using the proposed unified embedding space described in section 3.1.
Results and observations. Our main goal is to inspect the training process and study how the concepts evolve across training and whether there is a correlation between accuracy and concept generalization. The main results are plotted in Figure 3 and we summarize three observations from the standard training below:
1. Model learns to look at more complex features as training progresses. As shown in Figure 2, initially neuron 479 is maximally activated by images containing a ”striped” pattern. As the training progresses, we can see that it starts to learn to identify windmill structures at Epoch 5 and stays the same for the rest of the training. Another example is neuron 256, which moves from a grid-pattern-like concept of ”anechoic chambers” to learning to detect a ”field road”.
2. Shallower layers are comparatively more likely to learn low-level features like material and
texture while deeper layers learn more nuanced object detectors. We consider the broad categories of [Material, Texture, Object, Part, Scene] to group neurons. These labels were also
used in the original Broden dataset to group the labels. We find the categories Scene, Object and Part to be concerned with higher-level concepts like Fields and Windmills, while Textures is concerned with concepts like Striped, Matted, etc. From Figure 4, it is evident that Layer 2 and Layer 3 are learning a lot more low-level information than Layer 4.
3. Concept diversity happens later in the training. Using the unified embedding space in Figure 3 we can see that the neurons are clumped together in the middle initially (Epoch 0) and as the training progresses they spread out and hence learn more generalized concepts. This suggests that at the initial stage of the training, only a limited number of concepts have been learned and these concepts are similar (close in the embedding space).
Discussion: Using our method on standard training, we have seen a correlation between training stage and interpretability of a model. We notice that for a well-trained model there is a progression from low-level concept understanding to higher-level conceptual understanding. We propose that an inverse relation might help to improve model training as well, i.e., a good progression of concepts learnt might indicate a well-trained model. Using our methodology, specifically tracking the neuron concept evolution in the unified embedding space, deep learning researchers can meticulously monitor and manage the status of DNN training, e.g., they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space.
Using another concept detector: We show that Concept-Monitor is able to work with another concept detector, Network Dissection (Bau et al., 2017a), by analyzing the same ResNet-18 model trained on the Places365 dataset. Our results are in Figure 14 in Appendix C, and we can see that the observations are consistent across different concept detectors: shallower layers are more likely to learn low-level features like texture, and the model learns more complex features as the training progresses. We also see the embedding space starting from a clump in the center at Epoch 0 and then spreading out, indicating the generalization of the concepts learnt.
4 CASE STUDIES OF OTHER TRAINING PARADIGMS
In this section, we show that Concept-Monitor is versatile and can be used to study various training paradigms to gain insights into how and why they work. We also provide useful observations and insights that could help future researchers better understand these training procedures.
4.1 CASE STUDY (II) LOTTERY TICKET HYPOTHESIS
Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) is a popular method to prune deep neural networks without sacrificing their performance. In this case study, we use Concept-Monitor to demystify the success behind LTH in a human-understandable way. The main idea of LTH is to use iterative magnitude pruning (IMP) to prune the model iteratively by repeating the steps of training, pruning and rewinding to an initial epoch. LTH hypothesizes the existence of ”winning tickets” at initialization, which are sub-networks within the network that can be trained to a performance equivalent to the original model. However, it was observed that rewinding to the initial weights leads to a performance drop, and it is better to rewind to an earlier training epoch instead of fully reversing to the initial weights. Frankle et al. (2019a) attribute this phenomenon to SGD noise in initial training, and we will use Concept-Monitor to investigate LTH through the lens of interpretability. We train a ResNet-18 on the CIFAR-10 dataset using IMP in 8 stages. For full details on our experimental setup, please refer to Appendix section A.
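For reference, a schematic of IMP with weight rewinding is sketched below; the per-stage epoch budget, pruning fraction and the train_one_epoch helper are illustrative assumptions rather than the exact recipe used here.

```python
import copy
import torch

def iterative_magnitude_pruning(model, train_one_epoch, rewind_epoch,
                                n_stages=8, epochs_per_stage=30, prune_frac=0.2):
    """LTH-style IMP: train, prune the smallest surviving weights, rewind, repeat (sketch)."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    rewind_state = copy.deepcopy(model.state_dict())  # default: rewind to initialization (epoch 0)
    for stage in range(n_stages):
        for epoch in range(epochs_per_stage):
            if stage == 0 and epoch == rewind_epoch:
                rewind_state = copy.deepcopy(model.state_dict())  # weights after `rewind_epoch` epochs
            train_one_epoch(model, masks)  # training step that keeps masked weights at zero
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    alive = p.abs()[masks[name].bool()]           # magnitudes of surviving weights
                    threshold = torch.quantile(alive, prune_frac)
                    masks[name] *= (p.abs() > threshold).float()  # prune the smallest fraction
        model.load_state_dict(rewind_state)  # rewind the surviving weights
    return masks
```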
We study LTH with three different rewinding stages of IMP: rewinding to the initial weights (epoch 0), to epoch 5 and to epoch 16. We found rewinding to epoch 5 performs better than rewinding to epoch 0 and epoch 16 when the sparsity level is high, which we attribute to the two extremes of initialization, i.e., rewinding to epoch 0, in which the model is too noisy, or rewinding to epoch 16, in which the model has learnt a rigid structure which would need to be rewired by pruning. For instance, when pruned to 2.8% of initial weights, rewinding to epoch 5 has 93.78% accuracy as compared to 91.8% and 93.4% for rewinding to epoch 0 and epoch 16 respectively. Rewinding to 0 is inefficient as noted by Frankle et al. (2019b), and rewinding to epoch 16 doesn’t give the model much freedom to adjust to the sparse weights. We use Concept-Monitor to track the training process of these 3 different rewinding strategies and plot the results in Fig 5 and Fig 9 in the appendix.
Observations and Results:
In our analysis, we make the following observations:
1. The pruned network learns to encode some concepts without any fine-tuning. Figure 5a shows the number of interpretable neurons in layer 4 of the model after rewinding to initialization. We notice the trend that for rewinding to epoch 16 and epoch 5, the number of interpretable neurons decreases as we increase the sparsity, but for rewinding to the initial weights (epoch 0) the number of interpretable neurons increases. Since the weights are randomly initialized, the only way there can be a gain in interpretable neurons is through the changes that happen during pruning, i.e. the zeroing out of certain weights in the network. Hence, we believe that there is a possibility that the training is learning to remove connections that are harming the network, and this causes the resultant network to differ from the original (with the only change being that some weights are zero). This leads to some neurons being activated by certain low-level concepts and hence our observation of increased interpretable neurons. We note that this phenomenon was also observed by another work (Zhou et al., 2019) which says that IMP zeros out weights that would ultimately go towards zero anyway after training. Hence, they hypothesize that a pruned initial network encodes a portion of the training process itself, which they refer to as ”masking is learning”. This also explains why we see interpretable neurons with just pruned initial weights.
2. The percentage of concepts retained through pruning is highest with Epoch 5 rewinding.
Figure 5b plots the percentage of top-x percentile interpretable neurons that retain their concepts throughout the pruning process (y-axis) vs the x percentile (x-axis). In other words, it plots the relation of the interpretability of a neuron to its concept retention. Note that interpretability of neurons is dependent on the threshold defined by the concept detector (see section 3.1). We see that for rewinding to epoch 16, as we decrease the interpretability the percentage of neurons retaining concepts increases; in other words, the more interpretable neurons are likely to lose their concepts during IMP, while the less interpretable neurons keep their concepts. For rewinding to epoch 5 we see that the more interpretable neurons keep their concepts and the retention decreases as the neurons become less interpretable. This leads us to the hypothesis that rewinding to epoch 5 learns concepts that are more general and hence are able to be retained, while rewinding to epoch 16 learns concepts that are rigid and the model has to relearn those concepts to preserve accuracy during pruning. This effect is also shown in the accuracy of the models, in which rewinding to epoch 5 performs better than rewinding to epoch 16 at higher pruning.
Discussion: From observation 1, we find that it is very likely that the lottery ticket sparse pruning mask actually encodes learning, which was also suggested in (Zhou et al., 2019) as ”masking is learning”. It is also noted from observation 2 that certain rewinding points are more suitable to retain concepts, e.g. epoch 5 in our case, and there is a correlation between this and the model performance as noted by the accuracy at higher sparsity.
4.2 CASE STUDY (III) ADVERSARIAL TRAINING
DNNs are known to be vulnerable to small perturbations in their inputs (Szegedy et al., 2013). This is problematic, as networks can fail unexpectedly after small random or adversarial perturbations, which raises concerns over their safety. Fortunately, methods have been developed to defend against adversarial attacks, the most popular being Adversarial Training (Madry et al., 2018). It successfully makes networks more robust against such attacks, but comes at the cost of degraded performance on clean test data. In this study, we apply Concept-Monitor to adversarial training to better understand how it changes a network and why standard accuracy suffers. We analyse a ResNet18 model trained on CIFAR10 with and without adversarial training. For full details on our experimental setup please refer to Appendix section A.
Observations and Results: Using Concept-Monitor we have the following three observations.
1. The adversarially robust network has fewer interpretable neurons in late layers, but more in earlier layers. In Fig 6, we plot the number of interpretable neurons in layers 2-4 at three different training stages. At the end of training, 293 out of 512 layer-4 neurons are interpretable for standard training, while only 215 out of 512 are interpretable for the robustly trained model. For layer 3 it is 91 out of 256 for the standard model and 125 out of 256 for the robust model. We observe a similar trend for layer-2 neurons; please refer to Fig 16 in the Appendix.
2. The adversarially robust network relies more on colors, and less on materials and textures. When combining concepts detected across the 3 layers, we observe that the robust model has far more "color" neurons than the standard model (74 vs 15), see Figure 16. In contrast, the standard model has 154 neurons detecting "textures" while the robust model has only 97, and the standard model has 10 "material" neurons compared to only 2 in the robust model. This finding is sensible: detecting textures and materials often relies on high-frequency patterns that are easily affected by l∞ noise, so adversarial training forces the model to rely less on them and more on resilient features like color. (A counting sketch for these per-category tallies is given after this list.)
3. Standard training learns neurons detecting the target classes in the second-to-last layer, while robust training does not. As seen in Figure 7, the standard network has many neurons detecting its target classes in the second-to-last layer. For example, the standard network has 17 interpretable neurons detecting cars and 13 neurons detecting horses in layer 4, while the robust network has no layer-4 neurons detecting either car or horse.
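The per-category tallies reported above can be reproduced with a simple count once every interpretable neuron has been assigned its best Broden concept; a minimal sketch follows, in which the category mapping, the threshold tau and all variable names are assumptions of ours rather than the exact code used.

    from collections import Counter

    def interpretable_counts_by_category(concepts, dists, concept_to_category, tau):
        # concepts, dists: per-neuron best concept and its distance from the concept detector
        # concept_to_category: maps a Broden concept (e.g. "striped") to a category (e.g. "texture")
        counts = Counter()
        for concept, dist in zip(concepts, dists):
            if dist < tau:                             # neuron counts as interpretable
                counts[concept_to_category.get(concept, "unknown")] += 1
        return counts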
Discussion: We find that adversarial training harms the ability of the network to detect certain concepts that rely on high frequency patterns like texture. Since these patterns are useful for many tasks, losing them may be a significant cause for the degradation in standard performance as observed in the experiments. Another cause for poorer performance of the robust network may be the lack of neurons detecting target class objects in second to last layer, but why this happens is still unclear to us. We believe addressing these two issues may be the key to improving clean accuracy of robust models. On the other hand, the robust network seems to learn more interpretable lower level features perhaps learning a more diverse representation similar to the findings of (Salman et al., 2020) who showed that adversarially robust models have better features for transfer learning.
4.3 CASE STUDY (IV) FINE-TUNING ON A MEDICAL DATASET
In this section, we use Concept-Monitor to observe the fine-tuning of a pretrained DNN on a diabetic retinopathy dataset (APTOS, 2019). This experiment allows us to test our method on a dataset from a different domain, as well as to gather insights into the process of fine-tuning a pretrained model. The setup details are in Appendix A.
Observations and results: We probed the model at a few intermediate training steps. We observe that for the initial weights, as the neurons are pretrained on ImageNet, they show many diverse and high-level concepts (as shown in Figure 15 in the Appendix). However, as training progresses we notice that more neurons are activated by textural concepts like dots and patterns rather than objects. This is what we expect: as the model gets better at classifying the retinopathy images shown in Figure 11, it relies more on textures and the presence of "dots", which is consistent with what we observe in the top interpretable neurons at epochs 20 and 40 in Figure 15. From Figure 8 we see that the number of interpretable neurons in the "object" category decreases as training progresses, while the number of interpretable neurons in the "material" category increases, which further supports our hypothesis that the model learns to focus more on lower-level features like material and texture rather than objects.
5 CONCLUSIONS
We have presented Concept-Monitor, a novel method to automatically track and monitor the neural network training process in a transparent and human-understandable way. With 4 comprehensive case studies on various deep learning training paradigms, we show that Concept-Monitor allows us to better understand the underlying mechanisms of standard DNN training, of two alternative training methods, the Lottery Ticket Hypothesis and adversarial training, as well as of fine-tuning on a medical task. With Concept-Monitor we discover, surprisingly, that the lottery ticket hypothesis prunes the network in a way that leaves neurons interpretable even at initialization, revealing interpretability hidden in the random initialization. Furthermore, we find that adversarial training causes the hidden neurons to detect more simple concepts like colors while losing representations of materials and target-class objects. We also test our method on a medical dataset and find that the model learns to focus more on the low-level features that characterize it.
Reproducibility statement: We acknowledge the importance of replicating our experiments and for that reason we have explicitly mentioned the implementation details of all our experiments in Appendix A.
A. EXPERIMENTAL SETUP
Standard training (section 3.2):
Setup: We train a ResNet-18 model on the Places-365 dataset, which contains many diverse classes, allowing the DNN model to learn diverse concepts. To reduce the training time, we randomly selected 1000 images for each of the 365 classes and trained for 30 epochs, reaching a top-1 accuracy of 48.3%. We use a batch size of 256 and an initial learning rate of 0.1 with a cosine annealing scheduler.
Probing methodology: We use the Broden dataset (Bau et al., 2017a) as Dprobe and the associated concept labels as a decoupled concept set S. Our embedding space, as described in section 3.1, is computed using CLIP's text embeddings of the Broden labels as a basis. For visualizing in a 2-dimensional plot, we follow (Park et al., 2022) and use UMAP dimensionality reduction (McInnes et al., 2018), as it preserves inter-point distances in the lower dimensions. We set k = 5 in Eq. (1), i.e. we use the top-5 concepts to compute the embedding.
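For reference, the 2-D projection can be obtained by fitting UMAP jointly on the neuron embeddings and the concept anchors so that both live in the same plot; the sketch below uses the umap-learn API, and the parameter values are illustrative defaults of ours rather than tuned settings.

    import numpy as np
    import umap

    def project_2d(neuron_embeddings, anchor_embeddings, n_neighbors=15, min_dist=0.1):
        # Fit one reducer on neurons + anchors so their relative positions are comparable.
        all_points = np.vstack([neuron_embeddings, anchor_embeddings])
        reducer = umap.UMAP(n_neighbors=n_neighbors, min_dist=min_dist, random_state=0)
        coords = reducer.fit_transform(all_points)
        return coords[: len(neuron_embeddings)], coords[len(neuron_embeddings):]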
Lottery ticket hypothesis experiments (section 4.1):
Setup: We train a ResNet-18 on the CIFAR-10 dataset using IMP as in the LTH paper (Frankle & Carbin, 2018), rewinding to different initial weights. For each stage of IMP we train the model for 160 epochs, prune 40% of the weights and rewind to initialization. We consider rewinding to three different stages: the initial weights, epoch 5 and epoch 16, using the implementation of (Chen et al., 2022) as reference.
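A minimal sketch of one IMP stage with weight rewinding is given below (PyTorch-style); the train_fn helper, the mask layout and the application of the mask to all parameters are simplifying assumptions of ours, not the referenced implementation.

    import copy
    import torch

    def imp_stage(model, rewind_state, mask, train_fn, prune_frac=0.4):
        # One IMP stage: train with the current mask, prune 40% of the surviving
        # weights by magnitude, then rewind the weights to the stored checkpoint.
        train_fn(model, mask)                                   # assumed training loop
        with torch.no_grad():
            scores = torch.cat([(p.abs() * m).flatten()
                                for p, m in zip(model.parameters(), mask)])
            alive = scores[scores > 0]
            k = max(1, int(prune_frac * alive.numel()))
            threshold = torch.kthvalue(alive, k).values         # magnitude cutoff
            new_mask = [m * (p.abs() > threshold).float()
                        for p, m in zip(model.parameters(), mask)]
            model.load_state_dict(copy.deepcopy(rewind_state))  # rewind to epoch 0 / 5 / 16
        return new_mask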
Probing methodology: For our Dprobe we use the CIFAR-100 training set, and for the concept set S we use the Broden labels.
Adversarial Learning experiments (section 4.2):
Setup: We perform adversarial training with PGD attacks on a ResNet-18 architecture. We follow the repository of (Wong et al., 2020) and train the network with ϵ = 8/255 and l∞ perturbations for 40 epochs. We compare it against a CIFAR-10 network trained with the exact same training setup but without adversarial training. The standard model reaches a final accuracy of 94.29%, while the robust model reaches 83.42% accuracy on clean data and 50.00% robust accuracy against a PGD adversarial attack, as shown in Figure 10. The standard model, expectedly, performs very poorly on adversarial images.
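For reference, one adversarial training step with an l∞ PGD attack at ϵ = 8/255 can be sketched as follows; this is a simplified illustration written by us (the step size alpha and the number of PGD steps are illustrative values), not the exact training code of the referenced repository.

    import torch
    import torch.nn.functional as F

    def pgd_adv_step(model, x, y, opt, eps=8/255, alpha=2/255, steps=7):
        # Build an l-infinity PGD adversarial example, then take one optimizer step on it.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        opt.zero_grad()
        F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
        opt.step()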
Probing methodology: We use Broden images as Dprobe and the Broden labels as the concept set S, since these concepts can be easily categorized.
Fine-tuning on medical dataset (section 4.3)
Setup: We used a ResNet-34 backbone pretrained on the ImageNet dataset as our feature extractor and a simple linear layer as the classification head. We trained this network on the diabetic retinopathy classification dataset (APTOS, 2019) (Figure 11), reaching an accuracy of 72.77%. We followed (Balaji, 2019) for our experiments. We use Broden as Dprobe and the Broden labels as S.
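A sketch of the model used for this fine-tuning could look as follows (torchvision-style); the 5-class output head and the weight-loading call are assumptions on our side rather than the exact implementation.

    import torch.nn as nn
    from torchvision import models

    def build_retinopathy_model(num_classes=5):
        # ImageNet-pretrained ResNet-34 feature extractor with a linear classification head.
        backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # replace the ImageNet head
        return backbone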
B. CONCEPT-MONITOR ALGORITHM
Algorithm 1: Pseudo code for Concept-Monitor for a neuron n
Input: Neuron n, concept detector ϕ, concept set S, probing dataset Dprobe
Output: Embedding plot, concept statistics
Function Concept-Monitor(ϕ, S, Dprobe):
    for t from 1 to t_epoch do
        W^t_n, d^t_n = ϕ(S, Dprobe, n)
        λ^n = softmax(−d^t_n)
        u^t_n = Σ_{i=1}^{k} λ^n_i f(W^t_n[i])
        plot(u^t_n)
        R_n.append(W^t_n)
        D_n.append(d^t_n)
    end
    stats ← getStats(R_n, D_n)
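A compact Python rendering of Algorithm 1 is given below as a sketch; the checkpoint argument to the concept detector phi, the text_encoder call and all variable names are placeholders for the interface rather than the exact implementation used in our experiments.

    import numpy as np

    def concept_monitor(neuron, phi, concepts, d_probe, text_encoder, checkpoints, k=5):
        # Track one neuron: at every checkpoint, query the concept detector for the
        # top-k concepts and their distances, embed the neuron in the unified space,
        # and accumulate the per-epoch records for later concept statistics.
        records, embeddings = [], []
        for ckpt in checkpoints:
            words, dists = phi(concepts, d_probe, neuron, ckpt)   # top-k concepts and distances
            weights = np.exp(-np.asarray(dists[:k]))
            weights = weights / weights.sum()                     # softmax over negative distances
            u = sum(w * text_encoder(word) for w, word in zip(weights, words[:k]))
            embeddings.append(u)                                  # point in the unified space
            records.append((words, dists))
        return embeddings, records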
C. VISUALIZING EVOLUTION IN THE EMBEDDING SPACE
Here we use Concept-Monitor's unified embedding space to observe the evolution of a few neurons in layer 4 of the ResNet-18 trained on the Places-365 dataset, as described in Section 3.1. Our embedding space is designed such that we can add "anchors" to it: positions in the embedding space that represent a particular chosen concept. We show these anchors as red stars in Figure 12. The anchors are fixed through training and mark the region of the embedding space encoding a particular concept, so a user may track neuron movements relative to them during training. For brevity we leave out the specific concept labels represented by the anchors in the figure and enumerate them instead. We see that at the beginning most of the neurons are concentrated around anchors 14, 13 and 16, which represent the concepts "grid", "dotted" and "porous" respectively, i.e. low-level features. This is expected, as the model has just started training and has not yet learnt to encode high-level concepts. Most neurons move away from this region, except neuron 408, which stays in a similar region throughout training, encoding low-level textural concepts.
We would also like to highlight the trajectory of neuron 190, which starts from the bottom left and slowly moves towards anchor 0, representing the concept minibike. By the end of training this neuron comes very close to the anchor, indicating that it has successfully learnt that concept. Distance to the anchors can also be used as a quick visual aid to tell whether the concepts that the neurons represent are strongly encoded or not. If a neuron's concept label is far from the corresponding anchor in the space, we can safely mark that neuron as uninterpretable.
The labels corresponding to the anchors (red stars) in Figure 12 are 0 - minibike, 1 - exhaust hood, 2 - kitchen island, 3 - leaf, 4 - shower curtain, 5 - net, 6 - pantry, 7 - striped, 8 - countertop, 9 - granite, 10 - forecourt, 11 - cat, 12 - bed, 13 - grid, 14 - dotted, 15 - shower stall, 16 - porous, 17 - aqueduct, 18 - fabric.
D. CONCEPT MONITOR WITH DIFFERENT PROBING DATASET
As stated in section 3, our method with CLIP-Dissect is able to work with any probing and concept dataset. We provide most of our analysis using the Broden dataset, as it contains a collection of different concept images and hence provides much better results than a limited dataset. Here we give an example of this by using the CIFAR-100 training images as the probing dataset to analyze the same model as in section 3. As shown in section 3 and Appendix A, neuron 479 represents the concept "windmill" and neuron 256 represents the concept "field-road". We now use CIFAR-100 training images to monitor these neurons. From the embedding space in Figure 13 we can see that neuron 256 converges to the "Field" anchor. We also look at the highly activating images for each neuron in Figure 13 and see that for neuron 479 the most activating images are tree-like structures across the sky, which are the images most similar to windmills in the CIFAR-100 dataset. The point of this exercise is that Concept-Monitor, like all other model dissection methods, depends on the probing dataset; however, when using CLIP-Dissect we can use much larger and more diverse datasets, since we do not require any labelling of images and can simply use the entire set of images directly. | 1. What is the main contribution of the paper regarding visualization services?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of interpretability and novelty?
3. Do you have any concerns regarding the selection of use cases and the coverage of related literature?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any missing discussions or demonstrations in the paper that could enhance its value? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper presents a visualization service called Concept Monitor that can be used to inspect the evolution of concepts associated with single neurons throughout the training of a vision model. The system's interoperability allows for it to be utilized with different concept detection algorithms from the literature, such as Network Dissection (Bau et al., 2017) and CLIP-Dissect (Oikarinen and Weng, 2022), among others. The authors use the embedding space of a pretrained language model as their unified embedding space onto which concept words are projected; this has the advantage of providing stationary anchor points that don't vary over time, as this embedding function is pretrained and fixed -- only neurons will move across this space over time due to training. The solution offered in this contribution intends to shed light onto training dynamics by providing a more user-friendly visualization tool.
Strengths And Weaknesses
Strengths:
Neat packaging of prior research contributions into a useful tool for interpretability
Interesting selection of use-cases (e.g. lottery tickets, adversarial training)
Clarity of exposition
Weaknesses:
Concept Monitor is not, itself, a novel interpretability method and does not provide any novel insight into the inner workings of a network. It consists, instead, of a system built on top of various interpretability methods to better track and visualize the output of these other methods over time during training.
It is only applied, in this paper, to the field of image processing, and, specifically, to a very small set of architectures and datasets, with no replication of results across seeds nor ablations across tasks and setups, so there is no evidence of generality of the observations presented in this work.
It requires a predefined set of concepts, which might be limiting. Since it's only applied to image processing, accepted concept sets do exist and capture high-level clusters of concepts focusing on texture, material, color, etc. However, in other contexts, the weakness of this approach has already been pointed out as undesirable.
The paper limits itself to interpreting models at the neuron level, which has been criticized in the literature as not being the right level of abstraction for interpreting neural networks. It is possible that Concept Monitor would work at other levels of abstraction as well (layer, circuit, branch, etc.) because it ultimately delegates the extraction and assignment of interpretable concepts to interpretable units to other methods in the literature, therefore only building on top of the outputs of these methods. However, no discussion nor demonstration of this is shown in this work.
Poor coverage of the related literature. Among many others, this paper is missing references and discussion of the work on circuits (e.g. Olah et al, 2020), neuron specialization (Goh et al., 2021, Nguyen et al., 2016, Cammarata et al. 2020), polysemanticity and superposition (Elhage et al., 2022), neuroscience, and much more. The work is also lacking a discussion of bias in word embeddings, which it heavily relies on.
The insights are based on one-time observations and demos, with no statistical significance associated with the results. This is because experimental evidence is intended to be used as proof of concept for the utility of the Concept Monitor service, and not to extract generalizable insights of scientific value. Most of the discussion in the paper, however, focuses on the insights extracted from these one-time experiments, attempting to draw conclusions about, say, a particular network rewound to epochs 0, 5, or 16, but utilizing results such as the ones in Figure 5 with no error bars or confidence bands.
Insights about the changes in relative importance of texture vs higher level concepts in adversarially trained models are already well known.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written, but the quality and novelty of the contribution are extremely limited. The scientific insights are not likely to reproduce across settings and tasks, therefore not offering any useful information without re-experimentation in the domain/dataset/architecture of interest. In addition, without open-sourced code or an API for the Concept Monitor product, one would have to completely reimplement the system, without being able to take advantage of the authors' work.
ICLR | Title
Demystifying black-box DNN training processes through Concept-Monitor
Abstract
Despite the successes of deep neural networks (DNNs) on a broad range of tasks, little is understood about why and how they achieve such victories, due to their complex architectures and their opaque black-box training processes. With the goal of unveiling the mystery of DNNs, in this work we propose a general framework called Concept-Monitor to uncover the black-box DNN training processes automatically for the first time. Our proposed Concept-Monitor enables human-interpretable visualization of the DNN training processes and thus facilitates transparency as well as a deeper understanding of how DNNs function and operate along the training iterations. Using Concept-Monitor, we are able to observe and compare different training paradigms at ease, including standard training, fine-tuning, adversarial training and network pruning for the Lottery Ticket Hypothesis, which brings new insights on why and how adversarial training and network pruning work and how they modify the network during training. For example, we find that the lottery ticket hypothesis discovers a mask that makes neurons interpretable at initialization, without any fine-tuning, and we also find that adversarially robust models have more neurons relying on color as compared to standard models trained on the same dataset.
1 INTRODUCTION
The unprecedented success of deep learning has led to its rapid application to a wide range of tasks; however, deep neural networks (DNNs) are also known to be black-box and non-interpretable. To deploy these deep neural network (DNN) models into real-world applications, especially safety-critical ones such as healthcare and autonomous driving, it is imperative for us to understand what is going on behind the black box. There has been a proliferation of research efforts towards interpreting DNNs, and they can be mainly divided into two categories: the first approach focuses on attributing a DNN's prediction to the importance of individual inputs, identifying which pixels or features are important (Zhou et al., 2016; Selvaraju et al., 2019; Sundararajan et al., 2017; Smilkov et al., 2017), while the other approach investigates the functionalities (known as concepts) of individual neurons (Bau et al., 2017a; Mu & Andreas, 2020; Oikarinen & Weng, 2022).
However, most of these methods only focus on examining a DNN model after it has been trained, and therefore missing out useful information that could be available in the training process. For example, for a deep learning researcher and engineer, it would be very useful to know:
What are the concepts learned by the DNN model and how has the DNN model learnt the concepts along the training process?
The answer to the above question would be useful in two ways: (i) it can shed light on why and how DNNs can achieve great success, which could help inspire new DNN training algorithms; (ii) it can also help to debug DNNs and prevent catastrophic failures if anything goes wrong.
Motivated by the above question, it is the main goal of this work to develop a novel framework Concept-Monitor, which makes the black-box DNNs training process become transparent and human-understandable. Our proposed Concept-Monitor is scalable and automated – which are crucial to demystify the opaque DNN training process efficiently and help researchers better understand the training dynamics of the model. More formally, in this paper we provide the following contributions:
• We propose a general framework Concept-Monitor, which is the first automatic and efficient pipeline to make the black-box neural network training transparent and interpretable. Our pipeline monitors and tracks the training progress with human-interpretable concepts which provide useful statistics and insights of the DNN model being trained
• We develop a novel universal embedding space which allows us to efficiently track how the neurons' concepts evolve and to visualize their semantic evolution throughout the training process, without the need to re-learn an embedding space as proposed in prior work.
• We provide four case studies to analyze various deep learning training paradigms, including training standard deep vision models, the mysterious lottery ticket hypothesis, adversarial robust training and fine-tuning on a medical dataset. With Concept-Monitor, we are able to discover new insights into the obscure training process that helps explain some of the empirical observations and hypothesis of the black-box deep learning through the lens of interpretability.
2 BACKGROUND AND RELATED WORKS
2.1 NEURON-LEVEL INTERPRETABILITY METHODS
Recently, there has been a great interest towards understanding deep neural network models at the neuron-level, which is different from mainstream methods that focus on interpreting individual decisions through the input features and pixels (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju
et al., 2019; Sundararajan et al., 2017). We call this new direction neuron-level interpretability methods and review the representative techniques below. To begin with, the techniques in this direction can be divided by whether they need a curated, annotated concept dataset to dissect DNNs. Among the techniques that require probing data labelled with pre-defined concepts, classic methods include Network Dissection and its variations (Bau et al., 2017b; Mu & Andreas, 2020) as well as Testing with Concept Activation Vectors and its variations (Kim et al., 2017; Goyal et al., 2019; Ghorbani et al., 2019). The key idea of Network Dissection is to identify the concepts of neurons by calculating an Intersection over Union (IoU) score between intermediate activation maps and pre-defined concept masks, while the key idea of Testing with Concept Activation Vectors is to use directional derivatives to quantify the model's sensitivity to the pre-defined concepts.
However, one limitation of this type of approach is the need for a curated probing dataset annotated with concept labels, which may be expensive and time-consuming to collect. On the other hand, a recent method, CLIP-Dissect (Oikarinen & Weng, 2022), addresses this challenge by leveraging multi-modal models (Radford et al., 2021) and allows automatic identification of neuron concepts without the need to collect concept-labelled data. We note that these techniques are all compatible with our proposed Concept-Monitor to facilitate automatic concept monitoring of the DNN training process. In our experiments, we demonstrate the versatility of Concept-Monitor by showing results with different concept detectors in section 3.2, where we study the standard DNN training process.
2.2 UNDERSTAND DNN TRAINING DYNAMICS
Most of the existing research has primarily focused on analyzing models after training instead of investigating how the interpretations/concepts change during the DNN training process, which is the main focus of our work. We note that a recent work, Concept-Evo (Park et al., 2022), has the same goal as ours, but their proposed method is very different from our Concept-Monitor and has some limitations, as discussed below. First, their main idea is to learn a universal semantic space for each neuron using a base model and then project the target model onto this space, whereas we do not need to perform any training. For example, their embedding space uses a base model (VGG19 trained on ImageNet) to project target neurons, while we use a pre-trained CLIP (Radford et al., 2021) text encoder to define a universal embedding space. Their method is thus much more expensive than ours, as they have to redo the learning every time they change the base model or the probing dataset. Second, the approach proposed in Concept-Evo does not associate human-interpretable concepts with the neurons, so human intervention is required to actually describe each neuron, which is another heavy cost (especially when the model size becomes larger and the number of training epochs increases) and hard to automate. On the other hand, our method is fully automated and can explicitly provide the top-k human-understandable concepts for a neuron, which is another advantage of our Concept-Monitor.
3 CONCEPT-MONITOR: A NOVEL, SCALABLE AND AUTOMATED TOOL TO DEMYSTIFY BLACK-BOX DNN TRAINING PROCESS
In section 3.1 we detail the key components in Concept-Monitor including the concept detector and the universal embedding space. Next in section 3.2, we use Concept-Monitor to demystify the standard training process of a deep vision model and discuss the results and insights.
3.1 CONCEPT DETECTOR AND A UNIFIED EMBEDDING SPACE
Concept Detector: The first part of our method is to use a concept detector (ϕ) to automatically identify the concept of a neuron at any stage of training. Given a set of concept words S and a probing image dataset Dprobe, a concept detector ϕ returns a concept word w_n for a neuron n that maximally activates it. To achieve automatic concept monitoring of the DNN training process, we use two automated neuron-level interpretability tools, Network Dissection (Bau et al., 2017a) and CLIP-Dissect (Oikarinen & Weng, 2022), as the concept detectors in our experiments as a proof-of-concept, and we note that Concept-Monitor is compatible with other neuron-level tools as well. Although the technical approach of each concept detector is different, we can unify them as tools that calculate a distance metric d^n_i quantifying neuron n's association with the concept w_i.
For example, the distance d^n_i in Network Dissection (Bau et al., 2017a) is defined as the IoU score between activation maps and concept masks, while the distance d^n_i in CLIP-Dissect (Oikarinen & Weng, 2022) measures the similarity between the concept activation matrix and the neuron activation maps. Based on this distance metric, we can also define interpretable neurons as those neurons whose distance to the closest concept word is less than some threshold, i.e. min_i d^n_i < τ, where the threshold τ depends on the concept detector ϕ.
Unified embedding space: The second part of our method is to define a unified embedding space in order to visually track neurons’ evolution. Here we detail the steps to project a neuron n into our unified embedding space.
Step 1: To start with, we use w_i to denote the i-th concept in the concept set S and v_i to denote the associated text embedding, where v_i = f(w_i) with f being the text encoder of a pretrained large language model. We use {v_1, v_2, . . . , v_|S|} as the basis of our semantic space and project neurons onto this space using a weighted linear combination of the v_i of the neuron's top-k concept words.
Step 2: Let $W_n = [w^n_{1'}, w^n_{2'}, \ldots, w^n_{k'}]$ be the list of top-k concept words for neuron n. For each neuron n, we can then calculate the embedding $u_n$ using Equation (1) below,

$$u_n = \sum_{i=1}^{k} \lambda^n_i \, f(w^n_{i'}) \qquad (1)$$

where $\lambda^n_i$ is the weight of the concept $w_{i'}$ for describing the neuron n and depends on the concept detector used. For Network Dissection (Bau et al., 2017a), we use the distance vector $d_n = [-\mathrm{IoU}_{1'}, -\mathrm{IoU}_{2'}, \ldots, -\mathrm{IoU}_{k'}]$, and for CLIP-Dissect (Oikarinen & Weng, 2022) $d_n = [-h_{1'}, -h_{2'}, \ldots, -h_{k'}]$, where h is the point-wise mutual information distance metric proposed in the CLIP-Dissect paper. We can then calculate $\lambda^n_i$ by fitting a softmax distribution on the corresponding (negative) distance vector, giving $\lambda^n_i = e^{-d_{i'}} / \sum_{j=1}^{k} e^{-d_{j'}}$. The pseudo code for calculating the unified embedding space is presented in Appendix Algorithm 1.
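Concretely, the projection in Eq. (1) amounts to a softmax over negative distances followed by a weighted sum of text embeddings; a short NumPy sketch is given below, where the clip_text_encode callable and the variable names are placeholders rather than the exact implementation used in our experiments.

    import numpy as np

    def neuron_embedding(top_concepts, distances, clip_text_encode):
        # Eq. (1): softmax weights over negative concept distances times CLIP text embeddings.
        d = np.asarray(distances, dtype=np.float32)
        lam = np.exp(-d) / np.exp(-d).sum()                    # lambda_i = softmax(-d_i)
        vecs = np.stack([clip_text_encode(w) for w in top_concepts])
        return (lam[:, None] * vecs).sum(axis=0)               # u_n = sum_i lambda_i * f(w_i)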
Remarks:
1. Note that since our method is general, when using a new concept detector, we only need to change the distance vector dn associated with that concept detector, which describes how closely related a neuron is to a specific concept.
2. Another benefit of our unified embedding space is that we can project any general concept word α into the same embedding space by calculating its text embedding f(α). This lets us mark the embedding space with concept ”anchors” (see the red stars in Fig 3), which are concepts that a researcher thinks would be represented in a well trained model. The researcher can then track whether and which neurons are converging or diverging away from those anchors giving useful feedback during training.
3. Unlike prior work Concept-Evo (Park et al., 2022) which requires training an embedding space every time when a base model changes, our unified semantic space doesn’t need to train a base model or learn the image embeddings. Please refer to Table 1 for full comparison between our method and Concept-Evo (Park et al., 2022).
3.2 CASE STUDY (I) MONITORING STANDARD TRAINING
Now we use Concept-Monitor to investigate standard training of ResNet-18 model on Places365 dataset. We investigate the concept evolution of neurons at different epochs in the training using the proposed unified embedding space described in section 3.1.
Results and observations. Our main goal is to inspect the training process and study how the concepts evolve across training and whether there is a correlation between accuracy and concept generalization. The main results are plotted in Figure 3 and we summarize three observations from the standard training below:
1. The model learns to look at more complex features as training progresses. As shown in Figure 2, initially neuron 479 is maximally activated by images containing a "striped" pattern. As training progresses, it starts to identify windmill structures at epoch 5 and stays the same for the rest of the training. Another example is neuron 256, which moves from the grid-pattern-like concept "anechoic chamber" to learning to detect a "field road".
2. Shallower layers are comparatively more likely to learn low-level features like material and
texture while deeper layers learn more nuanced object detectors. We consider the broad categories of [Material, Texture, Object, Part, Scene] to group neurons. These labels were also
used in the original Broden dataset to group the labels. We find the categories Scene, Object and Part to be concerned with higher-level concepts like fields and windmills, while Textures concerns concepts like striped, matted, etc. From Figure 4, it is evident that layers 2 and 3 learn much more low-level information than layer 4.
3. Concept diversity happens later in the training. Using the unified embedding space in Figure 3 we can see that the neurons are clumped together in the middle initially (Epoch 0) and as the training progresses they spread out and hence learn more generalized concepts. This suggests that at the initial stage of the training, only a limited number of concepts have been learned and these concepts are similar (close in the embedding space).
Discussion: Using our method on standard training, we have seen a correlation between the training stage and the interpretability of a model. We notice that for a well-trained model there is a progression from low-level concept understanding to higher-level conceptual understanding. We propose that an inverse relation might help to improve model training as well, i.e., a good progression of learnt concepts might indicate a well-trained model. Using our methodology, specifically tracking the neuron concept evolution in the unified embedding space, deep learning researchers can meticulously monitor and manage the status of DNN training; e.g., they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space.
Using another concept detector: We show that Concept-Monitor is able to work with another concept detector, Network Dissection (Bau et al., 2017a), by analyzing the same ResNet-18 model trained on the Places-365 dataset. Our results are in Figure 14 in Appendix C, and we can see that the observations are consistent across different concept detectors: shallower layers are more likely to learn low-level features like texture, and the model learns more complex features as training progresses. We also see the embedding space starting from a clump in the center at epoch 0 and then spreading out, indicating the generalization of the learnt concepts.
4 CASE STUDIES OF OTHER TRAINING PARADIGMS
In this section, we show the Concept-Monitor is versatile and can be used to study various training paradigms to gain insights into how and why they work. We also provide useful observations and insights that could help future researchers better understand these training procedures.
4.1 CASE STUDY (II) LOTTERY TICKET HYPOTHESIS
Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) is a popular method to prune deep neural networks without sacrificing their performance. In this case study, we use Concept-Monitor to demystify the success behind LTH in a human-understandable way. The main idea of LTH is to use iterative magnitude pruning (IMP) to prune the model iteratively by repeating the steps of training, pruning and rewinding to an initial epoch. LTH hypothesizes the existence of "winning tickets" at initialization, which are sub-networks within the network that can be trained to a performance equivalent to that of the original model. However, it was observed that rewinding to the initial weights leads to a performance drop and that it is better to rewind to an earlier training epoch instead of fully reverting to the initial weights. (Frankle et al., 2019a) attribute this phenomenon to SGD noise in early training, and we will use Concept-Monitor to investigate LTH through the lens of interpretability. We train a ResNet-18 on the CIFAR-10 dataset using IMP in 8 stages. For full details on our experimental setup, please refer to Appendix section A.
We study LTH with three different rewinding stages of IMP: rewinding to the initial weights (epoch 0), to epoch 5 and to epoch 16. We found that rewinding to epoch 5 performs better than rewinding to epoch 0 or epoch 16 when the sparsity level is high, which we attribute to the two extremes of initialization: at epoch 0 the model is still too noisy, while at epoch 16 the model has learnt a rigid structure that would need to be rewired by pruning. For instance, when pruned to 2.8% of the initial weights, rewinding to epoch 5 reaches 93.78% accuracy, compared to 91.8% and 93.4% for rewinding to epoch 0 and epoch 16, respectively. Rewinding to epoch 0 is inefficient, as noted by (Frankle et al., 2019b), and rewinding to epoch 16 does not give the model much freedom to adjust to the sparse weights. We use Concept-Monitor to track the training process of these 3 rewinding strategies and plot the results in Fig 5 and Fig 9 in the appendix.
Observations and Results:
In our analysis, we make the following observations:
1. Pruning alone lets the network encode some concepts without any fine tuning. Figure 5a shows the number of interpretable neurons in layer 4 of the model after rewinding to initialization. We notice that for rewinding to epoch 16 and epoch 5 the number of interpretable neurons decreases as we increase the sparsity, but for rewinding to the initial weights (epoch 0) the number of interpretable neurons increases. Since the weights are randomly initialized, the only way there can be a gain in interpretable neurons is through the changes that happen during pruning, i.e. the zeroing out of certain weights in the network. Hence, we believe it is possible that training is learning to remove connections that harm the network, which makes the resulting network different from the original (with the only change being that some weights are zero). This causes some neurons to be activated by certain low-level concepts, and hence our observation of an increased number of interpretable neurons. We note that this phenomenon was also observed by another work (Zhou et al., 2019), which argues that IMP zeros out weights that would ultimately go towards zero anyway after training. Hence, they hypothesize that a pruned initial network encodes a portion of the training process itself, which they refer to as "masking is learning". This also explains why we see interpretable neurons with just pruned initial weights.
2. The percentage of concepts retained through pruning is highest with Epoch 5 rewinding.
Figure 5b plots the percentage of top-x-percentile interpretable neurons that retain their concepts throughout the pruning process (y-axis) against the percentile x (x-axis). In other words, it relates the interpretability of a neuron to its concept retention. Note that the interpretability of a neuron depends on the threshold defined by the concept detector (see section 3.1). We see that for rewinding to epoch 16, the percentage of neurons retaining their concepts increases as we decrease the interpretability: the more interpretable neurons are likely to lose their concepts during IMP, while the less interpretable neurons keep them. For rewinding to epoch 5 we see the opposite: the more interpretable neurons keep their concepts, and retention decreases as neurons become less interpretable. This leads us to the hypothesis that rewinding to epoch 5 learns concepts that are more general and can therefore be retained, while rewinding to epoch 16 learns concepts that are rigid and must be relearnt to preserve accuracy during pruning. This effect also shows in the accuracy of the models, where rewinding to epoch 5 performs better than rewinding to epoch 16 at higher pruning ratios.
Discussion: From observation 1, we find it very likely that the lottery ticket sparse pruning mask actually encodes learning, as also suggested in (Zhou et al., 2019) with "masking is learning". Observation 2 further shows that certain rewinding points, e.g. epoch 5 in our case, are more suitable for retaining concepts, and that this correlates with model performance, as reflected in the accuracy at higher sparsity.
4.2 CASE STUDY (III) ADVERSARIAL TRAINING
DNNs are known to be vulnerable to small perturbations in their inputs (Szegedy et al., 2013). This is problematic, as networks can fail unexpectedly after small random or adversarial perturbations, which raises concerns over their safety. Fortunately, methods have been developed to defend against adversarial attacks, the most popular being Adversarial Training (Madry et al., 2018). It successfully makes networks more robust against such attacks, but comes at the cost of degraded performance on clean test data. In this study, we apply Concept-Monitor to adversarial training to better understand how it changes a network and why standard accuracy suffers. We analyse a ResNet18 model trained on CIFAR10 with and without adversarial training. For full details on our experimental setup please refer to Appendix section A.
Observations and Results: Using Concept-Monitor we have the following three observations.
1. The adversarially robust network has fewer interpretable neurons in late layers, but more in earlier layers. In Fig 6, we plot the number of interpretable neurons in layers 2-4 at three different training stages. At the end of training, 293 out of 512 layer-4 neurons are interpretable for standard training, while only 215 out of 512 are interpretable for the robustly trained model. For layer 3 it is 91 out of 256 for the standard model and 125 out of 256 for the robust model. We observe a similar trend for layer-2 neurons; please refer to Fig 16 in the Appendix.
2. The adversarially robust network relies more on colors, and less on materials and textures. When combining concepts detected across the 3 layers, we observe that the robust model has far more "color" neurons than the standard model (74 vs 15), see Figure 16. In contrast, the standard model has 154 neurons detecting "textures" while the robust model has only 97, and the standard model has 10 "material" neurons compared to only 2 in the robust model. This finding is sensible: detecting textures and materials often relies on high-frequency patterns that are easily affected by l∞ noise, so adversarial training forces the model to rely less on them and more on resilient features like color.
3. Standard training learns neurons detecting the target classes in the second-to-last layer, while robust training does not. As seen in Figure 7, the standard network has many neurons detecting its target classes in the second-to-last layer. For example, the standard network has 17 interpretable neurons detecting cars and 13 neurons detecting horses in layer 4, while the robust network has no layer-4 neurons detecting either car or horse.
Discussion: We find that adversarial training harms the ability of the network to detect certain concepts that rely on high frequency patterns like texture. Since these patterns are useful for many tasks, losing them may be a significant cause for the degradation in standard performance as observed in the experiments. Another cause for poorer performance of the robust network may be the lack of neurons detecting target class objects in second to last layer, but why this happens is still unclear to us. We believe addressing these two issues may be the key to improving clean accuracy of robust models. On the other hand, the robust network seems to learn more interpretable lower level features perhaps learning a more diverse representation similar to the findings of (Salman et al., 2020) who showed that adversarially robust models have better features for transfer learning.
4.3 CASE STUDY (IV) FINE-TUNING ON A MEDICAL DATASET
In this section, we use Concept-Monitor to observe the fine-tuning of a pretrained DNN on a diabetic retinopathy dataset (APTOS, 2019). This experiment allows us to test our method on a dataset from a different domain, as well as to gather insights into the process of fine-tuning a pretrained model. The setup details are in Appendix A.
Observations and results: We probed the model at a few intermediate training steps. We observe that for the initial weights, as the neurons are pretrained on ImageNet, they show many diverse and high-level concepts (as shown in Figure 15 in the Appendix). However, as training progresses we notice that more neurons are activated by textural concepts like dots and patterns rather than objects. This is what we expect: as the model gets better at classifying the retinopathy images shown in Figure 11, it relies more on textures and the presence of "dots", which is consistent with what we observe in the top interpretable neurons at epochs 20 and 40 in Figure 15. From Figure 8 we see that the number of interpretable neurons in the "object" category decreases as training progresses, while the number of interpretable neurons in the "material" category increases, which further supports our hypothesis that the model learns to focus more on lower-level features like material and texture rather than objects.
5 CONCLUSIONS
We have presented Concept-Monitor, a novel method to automatically track and monitor the neural network training process in a transparent and human-understandable way. With 4 comprehensive case studies on various deep learning training paradigms, we show that Concept-Monitor allows us to better understand the underlying mechanisms of standard DNN training, of two alternative training methods, the Lottery Ticket Hypothesis and adversarial training, as well as of fine-tuning on a medical task. With Concept-Monitor we discover, surprisingly, that the lottery ticket hypothesis prunes the network in a way that leaves neurons interpretable even at initialization, revealing interpretability hidden in the random initialization. Furthermore, we find that adversarial training causes the hidden neurons to detect more simple concepts like colors while losing representations of materials and target-class objects. We also test our method on a medical dataset and find that the model learns to focus more on the low-level features that characterize it.
Reproducibility statement: We acknowledge the importance of replicating our experiments and for that reason we have explicitly mentioned the implementation details of all our experiments in Appendix A.
A. EXPERIMENTAL SETUP
Standard training (section 3.2):
Setup: We train a ResNet-18 model on the Places-365 dataset, which contains many diverse classes, allowing the DNN model to learn diverse concepts. To reduce the training time, we randomly selected 1000 images for each of the 365 classes and trained for 30 epochs, reaching a top-1 accuracy of 48.3%. We use a batch size of 256 and an initial learning rate of 0.1 with a cosine annealing scheduler.
Probing methodology: We use the Broden dataset (Bau et al., 2017a) as Dprobe and the associated concept labels as a decoupled concept set S. Our embedding space, as described in section 3.1, is computed using CLIP's text embeddings of the Broden labels as a basis. For visualizing in a 2-dimensional plot, we follow (Park et al., 2022) and use UMAP dimensionality reduction (McInnes et al., 2018), as it preserves inter-point distances in the lower dimensions. We set k = 5 in Eq. (1), i.e. we use the top-5 concepts to compute the embedding.
Lottery ticket hypothesis experiments (section 4.1):
Setup: We train a ResNet-18 on the CIFAR-10 dataset using IMP as in the LTH paper (Frankle & Carbin, 2018), rewinding to different initial weights. For each stage of IMP we train the model for 160 epochs, prune 40% of the weights and rewind to initialization. We consider rewinding to three different stages: the initial weights, epoch 5 and epoch 16, using the implementation of (Chen et al., 2022) as reference.
Probing methodology: For our Dprobe we use the CIFAR-100 training set, and for the concept set S we use the Broden labels.
Adversarial Learning experiments (section 4.2):
Setup: We perform adversarial training with PGD attacks on a ResNet-18 architecture. We follow the repository of (Wong et al., 2020) and train the network with ϵ = 8/255 and l∞ perturbations for 40 epochs. We compare it against a CIFAR-10 network trained with the exact same training setup but without adversarial training. The standard model reaches a final accuracy of 94.29%, while the robust model reaches 83.42% accuracy on clean data and 50.00% robust accuracy against a PGD adversarial attack, as shown in Figure 10. The standard model, expectedly, performs very poorly on adversarial images.
Probing methodology: We use Broden images as Dprobe and the Broden labels as the concept set S, since these concepts can be easily categorized.
Fine-tuning on medical dataset (section 4.3)
Setup: We used a ResNet-34 backbone pretrained on the ImageNet dataset as our feature extractor and a simple linear layer as the classification head. We trained this network on the diabetic retinopathy classification dataset (APTOS, 2019) (Figure 11), reaching an accuracy of 72.77%. We followed (Balaji, 2019) for our experiments. We use Broden as Dprobe and the Broden labels as S.
B. CONCEPT-MONITOR ALGORITHM
Algorithm 1: Pseudo code for Concept-Monitor for a neuron n
Input: Neuron n, concept detector ϕ, concept set S, probing dataset Dprobe
Output: Embedding plot, concept statistics
Function Concept-Monitor(ϕ, S, Dprobe):
    for t from 1 to t_epoch do
        W^t_n, d^t_n = ϕ(S, Dprobe, n)
        λ^n = softmax(−d^t_n)
        u^t_n = Σ_{i=1}^{k} λ^n_i f(W^t_n[i])
        plot(u^t_n)
        R_n.append(W^t_n)
        D_n.append(d^t_n)
    end
    stats ← getStats(R_n, D_n)
C. VISUALIZING EVOLUTION IN THE EMBEDDING SPACE
Here we use Concept-Monitor's unified embedding space to observe the evolution of a few neurons in layer 4 of the ResNet-18 trained on the Places-365 dataset, as described in Section 3.1. Our embedding space is designed such that we can add "anchors" to it: positions in the embedding space that represent a particular chosen concept. We show these anchors as red stars in Figure 12. The anchors are fixed through training and mark the region of the embedding space encoding a particular concept, so a user may track neuron movements relative to them during training. For brevity we leave out the specific concept labels represented by the anchors in the figure and enumerate them instead. We see that at the beginning most of the neurons are concentrated around anchors 14, 13 and 16, which represent the concepts "grid", "dotted" and "porous" respectively, i.e. low-level features. This is expected, as the model has just started training and has not yet learnt to encode high-level concepts. Most neurons move away from this region, except neuron 408, which stays in a similar region throughout training, encoding low-level textural concepts.
We would also like to highlight the trajectory of neuron 190, which starts from the bottom left and slowly moves towards anchor 0, representing the concept minibike. By the end of training this neuron comes very close to the anchor, indicating that it has successfully learnt that concept. Distance to the anchors can also be used as a quick visual aid to tell whether the concepts that the neurons represent are strongly encoded or not. If a neuron's concept label is far from the corresponding anchor in the space, we can safely mark that neuron as uninterpretable.
The labels corresponding to the anchors (red stars) in Figure 12 are 0 - minibike, 1 - exhaust hood, 2 - kitchen island, 3 - leaf, 4 - shower curtain, 5 - net, 6 - pantry, 7 - striped, 8 - countertop, 9 - granite, 10 - forecourt, 11 - cat, 12 - bed, 13 - grid, 14 - dotted, 15 - shower stall, 16 - porous, 17 - aqueduct, 18 - fabric.
D. CONCEPT MONITOR WITH DIFFERENT PROBING DATASET
As stated in section 3, our method with CLIP-Dissect is able to work with any probing and concept dataset. We provide most of our analysis using the Broden dataset, as it contains a collection of different concept images and hence provides much better results than a limited dataset. Here we give an example of this by using the CIFAR-100 training images as the probing dataset to analyze the same model as in section 3. As shown in section 3 and Appendix A, neuron 479 represents the concept "windmill" and neuron 256 represents the concept "field-road". We now use CIFAR-100 training images to monitor these neurons. From the embedding space in Figure 13 we can see that neuron 256 converges to the "Field" anchor. We also look at the highly activating images for each neuron in Figure 13 and see that for neuron 479 the most activating images are tree-like structures across the sky, which are the images most similar to windmills in the CIFAR-100 dataset. The point of this exercise is that Concept-Monitor, like all other model dissection methods, depends on the probing dataset; however, when using CLIP-Dissect we can use much larger and more diverse datasets, since we do not require any labelling of images and can simply use the entire set of images directly. | 1. What is the focus and contribution of the paper on neuron interpretability?
2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and lightweight nature?
3. Do you have any concerns regarding the consistency and rigidity of the method, especially in comparison with other approaches?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions that can be further discussed in case studies, such as the impact of adversarial training on the results? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper introduces a mechanism to describe and quantify the interpretability of neurons in the training process and the experiments are mainly devoted to demonstrating how the concepts shift as the training progresses. For this purpose, the authors utilize the description of neuron representations in the literature to calculate the embedding weights after encoding the top k concept words for neurons.
Strengths And Weaknesses
Strengths:
The paper is overall clear and the topic is well-motivated.
Weaknesses
Although the approach is simple and lightweight, the paper fails to present a systematic and rigorous analysis to support its findings and observations in the three case studies. In particular, while the authors compared their method to [Park et al., 2022] in terms of efficiency, they did not provide any comparison with this approach to show the consistency of their method. It is also necessary to show whether the results are consistent across different initializations, models, datasets, etc.
Below I listed some of the problems in the paper:
It requires a reference. "a good progression of concepts learnt might indicate a well trained model."
The paper claims: "they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space." It would be great to show if it would practically work for poorly trained models. Does a poorly-trained model show odd concepts in terms of neuron interpretability?
The authors need to provide more details on training the encoder and on the robustness of the findings to the chosen thresholds \tau.
The results of Figure 4 and Figure 14 are not consistent. For example, for Epoch 0 the patterns are entirely different. In addition, Layer 3 of epoch 29 is very different, as "part" is not detected in Figure 4. The number of interpretable neurons in layer 2 in Figure 4 is notably higher than in Figure 14.
There are some questions that could be further discussed in the case studies: for example, in adversarial training, what happens if a model is more robust than the other model? Or how do the results shift for different types of adversarial training, such as PGD with a different number of iterations or randomized smoothing?
Clarity, Quality, Novelty And Reproducibility
This work is marginally novel. However, regarding clarity, there are some details related to the concept sets, concept detectors, and the language model encoder which are missing.
ICLR | Title
Demystifying black-box DNN training processes through Concept-Monitor
Abstract
Despite the successes of deep neural networks (DNNs) on a broad range of tasks, little is understood about why and how they achieve such victories, due to their complex architectures and their opaque black-box training processes. With the goal of unveiling the mystery of DNNs, in this work we propose a general framework called Concept-Monitor to uncover the black-box DNN training processes automatically for the first time. Our proposed Concept-Monitor enables human-interpretable visualization of the DNN training processes and thus facilitates transparency as well as a deeper understanding of how DNNs function and operate along the training iterations. Using Concept-Monitor, we are able to observe and compare different training paradigms at ease, including standard training, fine-tuning, adversarial training and network pruning for the Lottery Ticket Hypothesis, which brings new insights on why and how adversarial training and network pruning work and how they modify the network during training. For example, we find that the lottery ticket hypothesis discovers a mask that makes neurons interpretable at initialization, without any fine-tuning, and we also find that adversarially robust models have more neurons relying on color as compared to standard models trained on the same dataset.
1 INTRODUCTION
The unprecedented success of deep learning has led to its rapid application to a wide range of tasks; however, deep neural networks (DNNs) are also known to be black-box and non-interpretable. To deploy these DNN models in real-world applications, especially safety-critical applications such as healthcare and autonomous driving, it is imperative for us to understand what is going on behind the black box. There has been a proliferation of research efforts towards interpreting DNNs, and they can be mainly divided into two categories: the first approach focuses on attributing a DNN's prediction to the importance of individual inputs and identifying which pixels or features are important (Zhou et al., 2016; Selvaraju et al., 2019; Sundararajan et al., 2017; Smilkov et al., 2017), while the other approach investigates the functionalities (known as concepts) of individual neurons (Bau et al., 2017a; Mu & Andreas, 2020; Oikarinen & Weng, 2022).
However, most of these methods only focus on examining a DNN model after it has been trained, and therefore miss out on useful information that is available during the training process. For example, for a deep learning researcher or engineer, it would be very useful to know:
What are the concepts learned by the DNN model and how has the DNN model learnt the concepts along the training process?
The answer to the above question would be useful in two-fold: (i) it can shed light on why and how DNNs can achieve great success, which could be helpful to inspire new DNN training algorithms; (ii) it can also help to debug DNNs and prevent catastrophic failure if anything goes wrong.
Motivated by the above question, it is the main goal of this work to develop a novel framework Concept-Monitor, which makes the black-box DNNs training process become transparent and human-understandable. Our proposed Concept-Monitor is scalable and automated – which are crucial to demystify the opaque DNN training process efficiently and help researchers better understand the training dynamics of the model. More formally, in this paper we provide the following contributions:
• We propose a general framework Concept-Monitor, which is the first automatic and efficient pipeline to make black-box neural network training transparent and interpretable. Our pipeline monitors and tracks the training progress with human-interpretable concepts, which provide useful statistics and insights into the DNN model being trained.
• We develop a novel universal embedding space which allows us to efficiently track how the neurons' concepts evolve and visualize their semantic evolution throughout the training process, without the need to re-learn an embedding space as proposed in prior work.
• We provide four case studies to analyze various deep learning training paradigms, including training standard deep vision models, the mysterious lottery ticket hypothesis, adversarially robust training and fine-tuning on a medical dataset. With Concept-Monitor, we are able to discover new insights into the obscure training process that help explain some of the empirical observations and hypotheses of black-box deep learning through the lens of interpretability.
2 BACKGROUND AND RELATED WORKS
2.1 NEURON-LEVEL INTERPRETABILITY METHODS
Recently, there has been a great interest towards understanding deep neural network models at the neuron-level, which is different from mainstream methods that focus on interpreting individual decisions through the input features and pixels (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju
et al., 2019; Sundararajan et al., 2017). We call this new direction neuron-level interpretability methods and review the representative techniques below. To begin with, the techniques in this direction can be briefly divided based on whether they need a curated, annotated concept dataset to dissect DNNs. For the techniques that require curated probing data labelled with pre-defined concepts, classic methods in this category include Network Dissection and its variations (Bau et al., 2017b; Mu & Andreas, 2020) as well as Testing with Concept Activation Vectors and its variations (Kim et al., 2017; Goyal et al., 2019; Ghorbani et al., 2019). The key idea of Network Dissection is to identify concepts of neurons by calculating an Intersection over Union (IoU) score between intermediate activation maps and pre-defined concept masks, while the key idea of Testing with Concept Activation Vectors is to use directional derivatives to quantify the model's sensitivity to the pre-defined concepts.
However, one limitation of this type of approach is the need for a curated probing dataset annotated with concept labels, which may be expensive and time-consuming to collect. On the other hand, a recent method, CLIP-Dissect (Oikarinen & Weng, 2022), addresses this challenge by leveraging the paradigm of multi-modal models (Radford et al., 2021) and allows automatic identification of neuron concepts without the need to collect concept-labelled data. We note that these techniques are all compatible with our proposed Concept-Monitor to facilitate automatic concept monitoring of the DNN training process. In our experiments, we demonstrate the versatility of our Concept-Monitor by showing results with different concept detectors in section 3.2, where we study the standard DNN training process.
2.2 UNDERSTAND DNN TRAINING DYNAMICS
Most of the existing research has been primarily focused on analyzing models after training instead of investigating how the interpretations/concepts change during the DNN training process, which is the main focus of our work. We note that there is a recent work, Concept-Evo (Park et al., 2022), having the same goal as ours, but their proposed method is very different from our Concept-Monitor and has some limitations, as discussed below. First, their main idea is to learn a universal semantic space for each neuron using a base model and then project the target model into this space, while we do not need to perform any training. For example, their embedding space uses a base model (VGG19 trained on ImageNet) to project target neurons, while we use a pre-trained CLIP (Radford et al., 2021) text encoder to define a universal embedding space. Their method would be much more expensive than ours, as they have to redo the learning every time they change the base model or the probing dataset. Second, the approach proposed in Concept-Evo does not associate human-interpretable concepts with the neurons, and thus human intervention is required to actually describe each of the neurons, which is another heavy cost (especially when the model size becomes larger and when the training epochs increase) and hard to automate. On the other hand, our method is fully automated and can explicitly provide the top k human-understandable concepts for a neuron, which is another advantage of our Concept-Monitor.
3 CONCEPT-MONITOR: A NOVEL, SCALABLE AND AUTOMATED TOOL TO DEMYSTIFY BLACK-BOX DNN TRAINING PROCESS
In section 3.1 we detail the key components in Concept-Monitor including the concept detector and the universal embedding space. Next in section 3.2, we use Concept-Monitor to demystify the standard training process of a deep vision model and discuss the results and insights.
3.1 CONCEPT DETECTOR AND A UNIFIED EMBEDDING SPACE
Concept Detector: The first part of our method is to use a concept detector (ϕ) to automatically identify the concept of a neuron at any stage in the training. Given a set of concept words S and a probing image dataset Dprobe, a concept detector ϕ returns a concept word w^n for a neuron n that maximally activates it. To achieve automatic concept monitoring of a DNN training process, we use two automated neuron-level interpretability tools, Network Dissection (Bau et al., 2017a) and CLIP-Dissect (Oikarinen & Weng, 2022), as the concept detectors in our experiments as a proof-of-concept, and we note that Concept-Monitor is compatible with other neuron-level tools as well. Although the technical approach of each concept detector is different, we can unify them as a tool calculating a distance metric d_i^n which quantifies neuron n's association with the concept w_i.
For example, the distance d_i^n in Network-Dissection (Bau et al., 2017a) is defined to be the IoU score between activation maps and concept masks, while the distance d_i^n in CLIP-Dissect (Oikarinen & Weng, 2022) is a measure of the similarity between the concept activation matrix and neuron activation maps. Based on this distance metric, we can also define interpretable neurons, which are the neurons whose distance to the closest concept word is less than some threshold, i.e. min_i(d_i^n) < τ, where the threshold τ depends on the concept detector ϕ.
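As a concrete illustration of this thresholding rule, the following minimal Python sketch flags interpretable neurons from a neuron-by-concept distance matrix; the matrix, the threshold value and all names are illustrative placeholders rather than the paper's actual implementation.

```python
import numpy as np

def interpretable_neurons(dist, tau):
    """Return indices of neurons whose closest concept is within threshold tau."""
    closest = dist.min(axis=1)           # min_i d_i^n for every neuron n
    return np.nonzero(closest < tau)[0]  # neurons satisfying min_i d_i^n < tau

# Toy usage: 512 neurons scored against 1200 concepts with random distances.
dist = np.random.rand(512, 1200)
print(len(interpretable_neurons(dist, tau=0.16)), "interpretable neurons")
```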
Unified embedding space: The second part of our method is to define a unified embedding space in order to visually track neurons’ evolution. Here we detail the steps to project a neuron n into our unified embedding space.
Step 1: To start with, we use w_i to denote the i-th concept in the concept set S and v_i to denote the associated text embedding, where v_i = f(w_i) with f being the text encoder of a pretrained large language model. We use {v_1, v_2, ..., v_|S|} as the basis of our semantic space and project neurons onto this space using a weighted linear combination of the v_i of the neuron's top-k concept words.
Step 2: Let W^n = [w^n_{1'}, w^n_{2'}, ..., w^n_{k'}] be the list of top-k concept words for neuron n. For each neuron n, we can then calculate the embedding u^n using Equation (1) below,

u^n = \sum_{i=1}^{k} \lambda^n_i f(w^n_{i'})     (1)

where \lambda^n_i is the weight of the concept w_{i'} for describing the neuron n and depends on the concept detector used. For Network-Dissection (Bau et al., 2017a), we use the distance vector d^n = [-IoU_{1'}, -IoU_{2'}, ..., -IoU_{k'}], and for CLIP-Dissect (Oikarinen & Weng, 2022), d^n = [-h_{1'}, -h_{2'}, ..., -h_{k'}], where h is the point-wise mutual information distance metric proposed in the CLIP-Dissect paper. We can then calculate \lambda^n_i by fitting a softmax distribution on the corresponding (negative) distance vector, i.e. \lambda^n_i = e^{-d_{i'}} / \sum_{j=1}^{k} e^{-d_{j'}}. The pseudo code for calculating the unified embedding space is presented in Appendix Algorithm 1.
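The following Python sketch mirrors Equation (1) and the softmax weighting above; `encode_text` is a stand-in for the pretrained text encoder f (e.g. CLIP's), and the toy encoder and distances in the usage example are made up purely for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def neuron_embedding(top_k_words, top_k_dists, encode_text):
    """Equation (1): u^n = sum_i lambda_i^n * f(w_i'^n).

    top_k_dists holds the detector distances d_i' (smaller = more related);
    the weights are lambda_i^n = exp(-d_i') / sum_j exp(-d_j').
    encode_text stands in for the pretrained text encoder f (e.g. CLIP).
    """
    lam = softmax(-np.asarray(top_k_dists, dtype=float))             # lambda_i^n
    vecs = np.stack([np.asarray(encode_text(w), dtype=float)
                     for w in top_k_words])                          # f(w_i'^n)
    return (lam[:, None] * vecs).sum(axis=0)                         # u^n

# Toy usage with a fake 8-dimensional "text encoder".
fake_encoder = lambda w: [(hash(w) >> i) % 7 for i in range(8)]
u = neuron_embedding(["striped", "windmill", "field road"], [0.1, 0.4, 0.7], fake_encoder)
print(u.shape)  # (8,)
```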
Remarks:
1. Note that since our method is general, when using a new concept detector we only need to change the distance vector d^n associated with that concept detector, which describes how closely related a neuron is to a specific concept.
2. Another benefit of our unified embedding space is that we can project any general concept word α into the same embedding space by calculating its text embedding f(α). This lets us mark the embedding space with concept "anchors" (see the red stars in Fig 3), which are concepts that a researcher thinks would be represented in a well-trained model. The researcher can then track whether and which neurons are converging towards or diverging away from those anchors, giving useful feedback during training (a small sketch of this anchor projection follows these remarks).
3. Unlike the prior work Concept-Evo (Park et al., 2022), which requires training an embedding space every time the base model changes, our unified semantic space doesn't need to train a base model or learn image embeddings. Please refer to Table 1 for a full comparison between our method and Concept-Evo (Park et al., 2022).
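As a small illustration of remark 2, the sketch below (reusing the illustrative `encode_text` and neuron embedding from the previous snippet) computes how far a neuron's embedding sits from a chosen anchor word; all names are placeholders, not the paper's code.

```python
import numpy as np

def distance_to_anchor(neuron_emb, anchor_word, encode_text):
    """Distance between a neuron's embedding u^n and the anchor embedding f(alpha)."""
    anchor = np.asarray(encode_text(anchor_word), dtype=float)
    return float(np.linalg.norm(np.asarray(neuron_emb, dtype=float) - anchor))

# e.g. log distance_to_anchor(u, "windmill", fake_encoder) once per epoch to see
# whether a neuron is converging towards the "windmill" anchor.
```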
3.2 CASE STUDY (I) MONITORING STANDARD TRAINING
Now we use Concept-Monitor to investigate standard training of ResNet-18 model on Places365 dataset. We investigate the concept evolution of neurons at different epochs in the training using the proposed unified embedding space described in section 3.1.
Results and observations. Our main goal is to inspect the training process and study how the concepts evolve across training and whether there is a correlation between accuracy and concept generalization. The main results are plotted in Figure 3 and we summarize three observations from the standard training below:
1. The model learns to look at more complex features as training progresses. As shown in Figure 2, initially neuron 479 is maximally activated by images containing a "striped" pattern. As the training progresses, we can see that it starts to learn to identify windmill structures at Epoch 5 and stays the same for the rest of the training. Another example is neuron 256, which moves from the grid-pattern-like concept of "anechoic chamber" to learning to detect a "field road".
2. Shallower layers are comparatively more likely to learn low-level features like material and texture, while deeper layers learn more nuanced object detectors. We consider the broad categories [Material, Texture, Object, Part, Scene] to group neurons. These labels were also used in the original Broden dataset to group the labels. We find the categories Scene, Object and Part to be concerned with higher-level concepts like fields and windmills, while Texture is concerned with concepts like striped, matted, etc. From Figure 4, it is evident that Layer 2 and Layer 3 are learning a lot more low-level information than Layer 4.
3. Concept diversity happens later in the training. Using the unified embedding space in Figure 3 we can see that the neurons are clumped together in the middle initially (Epoch 0) and as the training progresses they spread out and hence learn more generalized concepts. This suggests that at the initial stage of the training, only a limited number of concepts have been learned and these concepts are similar (close in the embedding space).
Discussion: Using our method on standard training, we have seen a correlation between training stage and the interpretability of a model. We notice that for a well-trained model there is a progression from low-level concept understanding to higher-level conceptual understanding. We propose that the inverse relation might help to improve model training as well, i.e., a good progression of concepts learnt might indicate a well-trained model. Using our methodology, specifically tracking the neuron concept evolution in the unified embedding space, deep learning researchers can meticulously monitor and manage the status of DNN training, e.g., they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space.
Using another concept detector: We show that Concept-Monitor is able to work with another concept detector, Network Dissection (Bau et al., 2017a), by analyzing the same ResNet-18 model trained on the Places365 dataset. Our results are in Figure 14 in Appendix C, and we can see that the observations are consistent across different concept detectors: shallower layers are more likely to learn low-level features like texture, and the model learns more complex features as the training progresses. We also see the embedding space starting from a clump in the center at Epoch 0 and then spreading out, indicating the generalization of the concepts learnt.
4 CASE STUDIES OF OTHER TRAINING PARADIGMS
In this section, we show that Concept-Monitor is versatile and can be used to study various training paradigms to gain insights into how and why they work. We also provide useful observations and insights that could help future researchers better understand these training procedures.
4.1 CASE STUDY (II) LOTTERY TICKET HYPOTHESIS
Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) is a popular method to prune deep neural networks without sacrificing their performance. In this case study, we use Concept-Monitor to demystify the success behind LTH in a human-understandable way. The main idea of LTH is to use iterative magnitude pruning (IMP) to prune the model iteratively by repeating the steps of training, pruning and rewinding to an initial epoch. LTH hypothesizes the existence of "winning tickets" at initialization, which are sub-networks within the network that can be trained to a performance equivalent to the original model. However, it was observed that rewinding to the initial weights leads to a performance drop and it is better to rewind to an earlier training epoch instead of fully reverting to the initial weights. Frankle et al. (2019a) attribute this phenomenon to SGD noise in initial training, and we use Concept-Monitor to investigate LTH through the lens of interpretability. We train a ResNet-18 on the CIFAR-10 dataset using IMP in 8 stages. For full details on our experimental setup, please refer to Appendix section A.
We study LTH with three different rewinding stages of IMP: rewinding to the initial weights (epoch 0), epoch 5, and epoch 16. We find that rewinding to epoch 5 performs better than rewinding to epoch 0 or epoch 16 when the sparsity level is high, which we attribute to the two extremes of initialization, i.e., initialization to epoch 0, in which the model is too noisy, or initialization to epoch 16, in which the model has learnt a rigid structure that would need to be rewired by pruning. For instance, when pruned to 2.8% of the initial weights, rewinding to epoch 5 achieves 93.78% accuracy compared to 91.8% and 93.4% for rewinding to epoch 0 and epoch 16, respectively. Rewinding to 0 is inefficient, as noted by Frankle et al. (2019b), and rewinding to epoch 16 doesn't give the model much freedom to adjust to the sparse weights. We use Concept-Monitor to track the training process of these 3 different rewinding strategies and plot the results in Fig 5 and Fig 9 in the appendix.
Observations and Results:
In our analysis, we make the following observations:
1. Pruning alone leads the network to encode some concepts, without any fine-tuning. Figure 5a shows the number of interpretable neurons in layer 4 of the model after rewinding to initialization. We notice the trend that for rewinding to epoch 16 and epoch 5, the number of interpretable neurons decreases as we increase the sparsity, but for rewinding to the initial weights (epoch 0) the number of interpretable neurons increases. Since the weights are randomly initialized, the only way there can be a gain in interpretable neurons is through the changes that happen during pruning, i.e. the zeroing out of certain weights in the network. Hence, we believe there is a possibility that the training is learning to remove connections that are harming the network, and this leads the resultant network to be different from the original (with the only change being that some weights are zero). This leads to some neurons being activated by certain low-level concepts, and hence our observation of increased interpretable neurons. We note that this phenomenon was also observed in another work (Zhou et al., 2019), which finds that IMP zeros out weights that would ultimately go towards zero anyway after training. Hence, they hypothesize that a pruned initial network encodes a portion of the training process itself, which they refer to as "masking is learning". This also explains why we see interpretable neurons with just pruned initial weights.
2. The percentage of concepts retained through pruning is highest with Epoch 5 rewinding.
Figure 5b plots the percentage of top-x percentile interpretable neurons that retain their concepts throughout the pruning process (y-axis) vs. the percentile x (x-axis). In other words, it plots the relation between the interpretability of a neuron and its concept retention. Note that the interpretability of neurons depends on the threshold defined by the concept detector (see section 3.1). We see that for rewinding to epoch 16, as we decrease the interpretability, the percentage of neurons retaining concepts increases; in other words, the more interpretable neurons are likely to lose their concepts during IMP, while the less interpretable neurons keep their concepts. For rewinding to epoch 5, we see that the more interpretable neurons keep their concepts, and the retention decreases as the neurons become less interpretable. This leads us to the hypothesis that rewinding to epoch 5 learns concepts that are more general and hence able to be retained, while rewinding to epoch 16 learns concepts that are rigid, and the model has to relearn those concepts to preserve accuracy during pruning. This effect is also shown in the accuracy of the models, in which rewinding to epoch 5 performs better than rewinding to epoch 16 at higher pruning.
Discussion: From observation 1, we find that it is very likely that the lottery ticket sparse pruning mask actually encodes learning, which was also suggested in (Zhou et al., 2019) as ”masking is learning”. It is also noted from observation 2 that certain rewinding points are more suitable to retain concepts, e.g. epoch 5 in our case, and there is a correlation between this and the model performance as noted by the accuracy at higher sparsity.
4.2 CASE STUDY (III) ADVERSARIAL TRAINING
DNNs are known to be vulnerable against small perturbations in their inputs (Szegedy et al., 2013). This is problematic as networks can fail unexpectedly after small random or adversarial perturbations which raises concerns over their safety. Fortunately, methods have been developed to defend against
adversarial attacks, the most popular of these being Adversarial Training (Madry et al., 2018). This successfully makes networks more robust against such attacks, but comes at the cost of degraded performance on clean test data. In this study, we apply Concept-Monitor to adversarial training to better understand how adversarial training changes a network and why standard accuracy suffers. We analyse a ResNet-18 model trained on CIFAR-10 with and without adversarial training. For full details on our experimental setup, please refer to Appendix section A.
Observations and Results: Using Concept-Monitor we have the following three observations.
1. The adversarially robust network has fewer interpretable neurons in late layers, but more in earlier layers. In Fig 6, we plot the number of interpretable neurons in layers 2-4 at three different training stages. It can be seen at the end of training that 293 out of 512 of the layer 4 neurons are interpretable for standard training, while only 215 out of 512 are interpretable for the robustly trained model. For layer 3 it is 91 out of 256 for the standard model and 125 out of 256 for the robust model. We observe a similar trend for layer 2 neurons; please refer to Fig 16 in the Appendix.
2. The adversarially robust network relies more on colors, less on materials and textures. When combining concepts detected across the 3 layers, we observe that the robust model has a lot more "color" neurons than the standard model (74 vs 15), see Figure 16. In contrast, the standard model has 154 neurons detecting "textures" while the robust model has only 97, and the standard model has 10 "material" neurons compared to only 2 for the robust model. This finding is sensible, as detecting textures and materials often relies on high-frequency patterns that are easily affected by l∞ noise; therefore adversarial training forces the model to rely less on them and more on resilient features like color.
3. Standard training learns neurons detecting target classes in the second-to-last layer while robust training does not. As seen in Figure 7, the standard network has many neurons detecting its target classes in the second-to-last layer. For example, the standard network has 17 interpretable neurons detecting cars and 13 neurons detecting horses in layer 4, while the robust network has no layer 4 neurons detecting either car or horse.
Discussion: We find that adversarial training harms the ability of the network to detect certain concepts that rely on high-frequency patterns like texture. Since these patterns are useful for many tasks, losing them may be a significant cause of the degradation in standard performance observed in the experiments. Another cause of the poorer performance of the robust network may be the lack of neurons detecting target class objects in the second-to-last layer, but why this happens is still unclear to us. We believe addressing these two issues may be the key to improving the clean accuracy of robust models. On the other hand, the robust network seems to learn more interpretable lower-level features, perhaps learning a more diverse representation, similar to the findings of (Salman et al., 2020), who showed that adversarially robust models have better features for transfer learning.
4.3 CASE STUDY (IV) FINE-TUNING ON A MEDICAL DATASET
In this section, we use Concept-Monitor to observe the fine-tuning of a pretrained DNN on a diabetic retinopathy dataset (APTOS, 2019). This experiment allows us to test our method on a dataset from a different domain, as well as to gather insights into the process of fine-tuning a pretrained model. The setup details are in Appendix A.
Observations and results: We probed the model training at a few intermediate steps. We observe that for the initial weights, as the neurons are pretrained on ImageNet, they show a lot of diverse and high-level concepts (as shown in Figure 15 in the Appendix). However, as the training progresses, we notice that more neurons are activated by textural concepts like dots and patterns rather than objects. This is what we expect, because as the model gets better at classifying the retinopathy images shown in Figure 11, we expect it to rely more on textures and the presence of "dots", which is consistent with what we observe here, as shown by the top interpretable neurons in epochs 20 and 40 in Figure 15. From Figure 8 we see that the number of interpretable neurons in the "object" category decreases as the training progresses, while the number of interpretable neurons in the "material" category increases, which further confirms our theory that the model learns to focus more on lower-level features like material and texture as compared to objects.
5 CONCLUSIONS
We have presented Concept-Monitor, a novel method to automatically track and monitor the neural network training process in a transparent and human-understandable way. With four comprehensive case studies on various deep learning training paradigms, we show that Concept-Monitor allows us to better understand the underlying mechanisms of standard DNN training, the two alternative training methods (the Lottery Ticket Hypothesis and adversarial training), as well as fine-tuning on a medical task. With Concept-Monitor we discover that, surprisingly, the lottery ticket hypothesis prunes the network in a way that makes neurons interpretable even at initialization, uncovering interpretability hidden in the random initialization. Furthermore, we discover that adversarial training causes the hidden neurons to detect more simple concepts like colors while losing representations of materials and target class objects. We also test our method on a medical dataset and find that the model learns to focus more on low-level features, which reflects the nature of the medical dataset.
Reproducibility statement: We acknowledge the importance of replicating our experiments and for that reason we have explicitly mentioned the implementation details of all our experiments in Appendix A.
A. EXPERIMENTAL SETUP
Standard training (section 3.2):
Setup: We train a Resnet-18 model on Places-365 dataset, which contains a lot of diverse classes allowing the DNN model to learn diverse concepts. To reduce the training time, we randomly selected 1000 images for each of the 365 classes and trained for 30 epochs reaching top-1 accuracy of 48.3%. We use batch size of 256 and an initial learning rate of 0.1 with cosine annealing scheduler.
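A hedged sketch of this training setup is given below; momentum, weight decay and the random stand-in tensors are assumptions not stated in the paper, and the real Places365 loader with 1000 images per class would replace the toy data.

```python
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet18(num_classes=365).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)
criterion = torch.nn.CrossEntropyLoss()

# Random tensors stand in for the subsampled Places365 images described above.
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(16, 3, 224, 224),
                                   torch.randint(0, 365, (16,))),
    batch_size=8)

for epoch in range(30):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images.to(device)), labels.to(device))
        loss.backward()
        optimizer.step()
    scheduler.step()
    # one checkpoint per epoch is what Concept-Monitor probes later
    torch.save(model.state_dict(), f"resnet18_places365_epoch{epoch}.pt")
```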
Probing methodology: We use the Broden (Bau et al., 2017a) dataset as Dprobe and use the associated concept labels as a decoupled concept set S. Our embedding space, as described in section 3.1, is computed using CLIP's text embeddings of the Broden labels as a basis. For visualizing in a 2-dimensional plot, we follow (Park et al., 2022) and use UMAP dimensionality reduction (McInnes et al., 2018), as it preserves inter-point distances in the lower dimensions. We set k = 5 in Eq. (1), i.e. we use the top-5 concepts to compute the embedding.
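A minimal sketch of the UMAP projection step follows; the embedding dimensions and the random arrays are stand-ins for the CLIP text embeddings of the Broden labels and the per-neuron embeddings u^n.

```python
import numpy as np
import umap  # pip install umap-learn

concept_basis = np.random.randn(1200, 512)   # stand-in for CLIP text embeddings of Broden labels
neuron_embs = np.random.randn(256, 512)      # stand-in for per-neuron u^n at one epoch

reducer = umap.UMAP(n_components=2, random_state=0)
reducer.fit(concept_basis)                    # fit the 2-D map on the concept basis
neurons_2d = reducer.transform(neuron_embs)   # project neurons into the same 2-D space
print(neurons_2d.shape)                       # (256, 2)
```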
Lottery ticket hypothesis experiments (section 4.1):
Setup: We train a ResNet-18 on the CIFAR-10 dataset using IMP as in the LTH paper (Frankle & Carbin, 2018), rewinding to different initial weights. For each stage of IMP we train the model for 160 epochs, prune 40% of the weights and rewind to initialization. We consider rewinding to three different stages: initial weights, epoch 5 and epoch 16, using the implementation of (Chen et al., 2022) as reference.
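The following sketch outlines the IMP-with-rewinding loop under the stated settings (8 stages, 40% pruning per stage); `train_fn` and `rewind_state` (the epoch-0/5/16 checkpoint) are placeholders, and the pruning and rewinding details here are assumptions rather than the exact reference implementation.

```python
import torch
import torch.nn.utils.prune as prune

def imp_with_rewind(model, train_fn, rewind_state, stages=8, prune_frac=0.4):
    """Train -> prune 40% of the remaining conv weights -> rewind, repeated per stage."""
    for stage in range(stages):
        train_fn(model)                                   # e.g. 160 epochs on CIFAR-10
        for module in model.modules():
            if isinstance(module, torch.nn.Conv2d):
                prune.l1_unstructured(module, name="weight", amount=prune_frac)
        # rewind the surviving weights to the chosen checkpoint (epoch 0, 5 or 16)
        for name, module in model.named_modules():
            if hasattr(module, "weight_orig"):
                module.weight_orig.data.copy_(rewind_state[name + ".weight"])
    return model
```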
Probing methodology: For our Dprobe, we use CIFAR 100 training dataset and for concept set S we use broden labels.
Adversarial Learning experiments (section 4.2):
Setup: We perform adversarial training with PGD attacks on a ResNet-18 architecture. We follow the repository of (Wong et al., 2020) and train the network with ϵ = 8/255 and l∞ perturbations for 40 epochs. We compare it against a CIFAR-10 network trained using the exact same training setup but no adversarial training. The standard model reaches a final accuracy of 94.29%, while the robust model reaches 83.42% accuracy on clean data and 50.00% robust accuracy against a PGD adversarial attack, as shown in Figure 10. The standard model expectedly performs very badly on adversarial images.
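A hedged sketch of one PGD adversarial-training step with ϵ = 8/255 and l∞ perturbations is shown below; the PGD step size and number of steps are illustrative choices, not values taken from the paper or the referenced repository.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft an l_inf-bounded perturbation of the batch x with PGD."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()

def adv_train_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)           # craft the perturbed batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on adversarial examples
    loss.backward()
    optimizer.step()
    return loss.item()
```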
Probing methodology: We use Broden images as Dprobe and for concept set S we use the broden labels as the concepts can be easily categorized.
Fine-tuning on medical dataset (section 4.3)
Setup: We used a ResNet-34 backbone pretrained on the ImageNet dataset as our feature extractor and a simple linear layer as the classification head. We trained this network on the diabetic retinopathy classification dataset (APTOS, 2019) (Figure 11), and it achieved an accuracy of 72.77%. We followed the work of (Balaji, 2019) for our experiments. We use Broden as Dprobe and the Broden labels as S.
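A minimal sketch of this fine-tuning setup, assuming a recent torchvision and treating the optimizer choice as an illustrative assumption:

```python
import torch
import torchvision

# ImageNet-pretrained ResNet-34 backbone with a fresh linear head for the
# 5 diabetic-retinopathy severity grades in APTOS 2019.
model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer
# the training loop over the retinopathy images then proceeds as in the standard setup
```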
B. CONCEPT-MONITOR ALGORITHM
Algorithm 1: Pseudo code for Concept-Monitor for a neuron n
Input: neuron n, concept detector ϕ, concept set S, probing dataset Dprobe
Output: embedding plot, concept statistics
Function Concept-Monitor(ϕ, S, Dprobe):
  for t from 1 to t_epoch do
    W^n_t, d^n_t = ϕ(S, Dprobe, n)
    λ^n = softmax(-d^n_t)
    u^n_t = \sum_{i=1}^{k} λ^n_i f(W^n_t[i])
    plot(u^n_t)
    R_n.append(W^n_t)
    D_n.append(d^n_t)
  end
  stats ← getStats(R_n, D_n)
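For readers who prefer plain Python, a rough rendering of Algorithm 1 follows; `concept_detector` stands in for ϕ, `encode_text` for the text encoder f, and `neuron_embedding` for the Eq. (1) helper sketched in Section 3.1; all names and signatures are illustrative.

```python
def concept_monitor(neuron, concept_detector, concept_set, d_probe, encode_text,
                    num_epochs, k=5):
    """Collect per-epoch concept words, distances and embeddings for one neuron."""
    words_per_epoch, dists_per_epoch, embeddings = [], [], []
    for t in range(num_epochs):
        # W_t^n, d_t^n from the concept detector phi applied to the epoch-t checkpoint
        words, dists = concept_detector(concept_set, d_probe, neuron, epoch=t)
        emb = neuron_embedding(words[:k], dists[:k], encode_text)  # u_t^n via Eq. (1)
        words_per_epoch.append(words)
        dists_per_epoch.append(dists)
        embeddings.append(emb)
    return words_per_epoch, dists_per_epoch, embeddings
```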
C. VISUALIZING EVOLUTION IN THE EMBEDDING SPACE
Here we use Concept-Monitor's unified embedding space to observe the evolution of a few neurons in layer 4 of the ResNet-18 trained on the Places365 dataset, as described in Section 3.1. Our embedding space is designed in such a way that it is possible for us to add "anchors" to it, which are positions in the embedding space that represent a particular chosen concept. We show these anchors as red stars in Figure 12. These anchors are fixed throughout training and mark the region of the embedding space encoding a particular concept, so a user may track neuron movements relative to those anchors through training. For brevity we leave out the specific concept labels represented by the anchors in the figure and enumerate them instead. We see that at the beginning most of the neurons are concentrated around anchors 14, 13 and 16, which represent the concepts "grid", "dotted" and "porous" respectively, all low-level features. This is expected, as the model has just started training and hasn't learnt to encode high-level concepts yet. Most neurons move away from this space, except neuron 408, which stays in a similar space throughout the training, encoding low-level textural concepts.
We also would like to highlight the trajectory of neuron 190, which starts from the bottom left and slowly moves towards anchor 0, representing the concept minibike. By the end of training, this neuron comes very close to the anchor, denoting that it has successfully learnt that concept. This notion of distance to the anchors can also be used as a quick visual aid to tell whether the concepts that the neurons represent are strongly represented or not. If a neuron's concept label is far from the corresponding anchor in the space, we can safely mark that neuron as uninterpretable.
The labels corresponding to the anchors (red stars) in Figure 12 are 0 - minibike, 1 - exhaust hood, 2 - kitchen island, 3 - leaf, 4 - shower curtain, 5 - net, 6 - pantry, 7 - striped, 8 - countertop, 9 - granite, 10 - forecourt, 11 - cat, 12 - bed, 13 - grid, 14 - dotted, 15 - shower stall, 16 - porous, 17 - aqueduct, 18 - fabric.
D. CONCEPT MONITOR WITH DIFFERENT PROBING DATASET
As stated in section 3, our method with CLIP-Dissect is able to work with any probing and concept dataset. We provide most of our analysis using the Broden dataset, as it contains a collection of different concept images and hence is able to provide much better results compared to a limited dataset. Here we provide an example of this by using CIFAR-100 training images as the probing dataset to analyze the same model as in section 3. As shown in section 3 and Appendix A, neuron 479 represents the concept "windmill" and neuron 256 represents the concept "field road". We now use CIFAR-100 training images to monitor these neurons. From the embedding space in Figure 13 we can see that neuron 256 converges to the "Field" anchor. We also look at the highly activating images for each neuron in Figure 13 and see that for neuron 479 the most activating images are tree-like structures across the sky, which are the most similar images to windmills in the CIFAR-100 dataset. The point of this exercise is that Concept-Monitor, like all other model dissection methods, is dependent on the probing dataset; however, if we use CLIP-Dissect we are able to use much larger and more diverse datasets, since we don't require any labelling of images and can simply use an entire set of images directly. | 1. What is the focus and contribution of the paper regarding interpreting neural network training processes?
2. What are the strengths of the proposed approach, particularly its ability to provide human-interpretable visualization during training?
3. What are the weaknesses of the paper, such as the simplicity of the method and potential computational costs?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a novel method to interpret the black-box neural network training process. Extensive experiments demonstrate the proposed Concept-Monitor can help find some intriguing properties of adversarial training and network pruning.
Strengths And Weaknesses
Strengths:
1. Unlike previous works on neural network explanations, which interpret a static neural network, the proposed Concept-Monitor can produce human-interpretable visualizations during training and help us better understand the training process of black-box neural networks.
2. The proposed method is simple and easy to reproduce. It is training-free and easy to adapt to new model architectures.
3. The new findings with Concept-Monitor on adversarial training and network pruning are interesting and provide a new perspective for understanding other techniques in deep network training.
Weaknesses:
1. The proposed method is a simple combination of Network Dissection and CLIP-Dissect to define the interpretable neurons. I would like to see more clarification differentiating the proposed method from those two classic methods. Please explain more about the technical contributions and motivations compared with Network Dissection and CLIP-Dissect.
2. The proposed method likely involves intensive computation costs to obtain the concepts of the interpretable neurons, which may limit its practical usage. I would like to see a computation analysis and potential improvements.
Clarity, Quality, Novelty And Reproducibility
The paper is well-organized and easy to understand. |
ICLR | Title
Demystifying black-box DNN training processes through Concept-Monitor
Abstract
Despite the successes of deep neural networks (DNNs) on a broad range of tasks, little is understood of why and how they achieve such victories, due to their complex architecture and their opaque black-box training processes. With the goal of unveiling the mystery of DNNs, in this work we propose a general framework called Concept-Monitor to uncover black-box DNN training processes automatically for the first time. Our proposed Concept-Monitor enables human-interpretable visualization of the DNN training process and thus facilitates transparency as well as a deeper understanding of how DNNs function and operate along the training iterations. Using Concept-Monitor, we are able to observe and compare different training paradigms at ease, including standard training, fine-tuning, adversarial training and network pruning for the Lottery Ticket Hypothesis, which brings new insights on why and how adversarial training and network pruning work and how they modify the network during training. For example, we find that the lottery ticket hypothesis discovers a mask that makes neurons interpretable at initialization, without any fine-tuning, and we also find that adversarially robust models have more neurons relying on color as compared to standard models trained on the same dataset.
1 INTRODUCTION
The unprecedented success of deep learning has led to its rapid application to a wide range of tasks; however, deep neural networks (DNNs) are also known to be black-box and non-interpretable. To deploy these DNN models in real-world applications, especially safety-critical applications such as healthcare and autonomous driving, it is imperative for us to understand what is going on behind the black box. There has been a proliferation of research efforts towards interpreting DNNs, and they can be mainly divided into two categories: the first approach focuses on attributing a DNN's prediction to the importance of individual inputs and identifying which pixels or features are important (Zhou et al., 2016; Selvaraju et al., 2019; Sundararajan et al., 2017; Smilkov et al., 2017), while the other approach investigates the functionalities (known as concepts) of individual neurons (Bau et al., 2017a; Mu & Andreas, 2020; Oikarinen & Weng, 2022).
However, most of these methods only focus on examining a DNN model after it has been trained, and therefore missing out useful information that could be available in the training process. For example, for a deep learning researcher and engineer, it would be very useful to know:
What are the concepts learned by the DNN model and how has the DNN model learnt the concepts along the training process?
The answer to the above question would be useful in two-fold: (i) it can shed light on why and how DNNs can achieve great success, which could be helpful to inspire new DNN training algorithms; (ii) it can also help to debug DNNs and prevent catastrophic failure if anything goes wrong.
Motivated by the above question, it is the main goal of this work to develop a novel framework Concept-Monitor, which makes the black-box DNNs training process become transparent and human-understandable. Our proposed Concept-Monitor is scalable and automated – which are crucial to demystify the opaque DNN training process efficiently and help researchers better understand the training dynamics of the model. More formally, in this paper we provide the following contributions:
• We propose a general framework Concept-Monitor, which is the first automatic and efficient pipeline to make the black-box neural network training transparent and interpretable. Our pipeline monitors and tracks the training progress with human-interpretable concepts which provide useful statistics and insights of the DNN model being trained
• We develop a novel universal embedding space which allows us to efficiently track how the neurons’ concepts evolve and visualize their semantic evolution through out the training process without the need to re-learn an embedding space proposed in prior work.
• We provide four case studies to analyze various deep learning training paradigms, including training standard deep vision models, the mysterious lottery ticket hypothesis, adversarial robust training and fine-tuning on a medical dataset. With Concept-Monitor, we are able to discover new insights into the obscure training process that helps explain some of the empirical observations and hypothesis of the black-box deep learning through the lens of interpretability.
2 BACKGROUND AND RELATED WORKS
2.1 NEURON-LEVEL INTERPRETABILITY METHODS
Recently, there has been a great interest towards understanding deep neural network models at the neuron-level, which is different from mainstream methods that focus on interpreting individual decisions through the input features and pixels (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju
et al., 2019; Sundararajan et al., 2017). We call this new direction neuron-level interpretability methods and review the representative techniques below. To begin with, the techniques in this direction can be briefly divided based on whether they need a curated, annotated concept dataset to dissect DNNs. For the techniques that require curated probing data labelled with pre-defined concepts, classic methods in this category include Network Dissection and its variations (Bau et al., 2017b; Mu & Andreas, 2020) as well as Testing with Concept Activation Vectors and its variations (Kim et al., 2017; Goyal et al., 2019; Ghorbani et al., 2019). The key idea of Network Dissection is to identify concepts of neurons by calculating an Intersection over Union (IoU) score between intermediate activation maps and pre-defined concept masks, while the key idea of Testing with Concept Activation Vectors is to use directional derivatives to quantify the model's sensitivity to the pre-defined concepts.
However, one limitation of this type of approach is the need for a curated probing dataset annotated with concept labels, which may be expensive and time-consuming to collect. On the other hand, a recent method, CLIP-Dissect (Oikarinen & Weng, 2022), addresses this challenge by leveraging the paradigm of multi-modal models (Radford et al., 2021) and allows automatic identification of neuron concepts without the need to collect concept-labelled data. We note that these techniques are all compatible with our proposed Concept-Monitor to facilitate automatic concept monitoring of the DNN training process. In our experiments, we demonstrate the versatility of our Concept-Monitor by showing results with different concept detectors in section 3.2, where we study the standard DNN training process.
2.2 UNDERSTAND DNN TRAINING DYNAMICS
Most of the existing research has been primarily focused on analyzing models after training instead of investigating how the interpretations/concepts change during the DNN training process, which is the main focus of our work. We note that there is a recent work, Concept-Evo (Park et al., 2022), having the same goal as ours, but their proposed method is very different from our Concept-Monitor and has some limitations, as discussed below. First, their main idea is to learn a universal semantic space for each neuron using a base model and then project the target model into this space, while we do not need to perform any training. For example, their embedding space uses a base model (VGG19 trained on ImageNet) to project target neurons, while we use a pre-trained CLIP (Radford et al., 2021) text encoder to define a universal embedding space. Their method would be much more expensive than ours, as they have to redo the learning every time they change the base model or the probing dataset. Second, the approach proposed in Concept-Evo does not associate human-interpretable concepts with the neurons, and thus human intervention is required to actually describe each of the neurons, which is another heavy cost (especially when the model size becomes larger and when the training epochs increase) and hard to automate. On the other hand, our method is fully automated and can explicitly provide the top k human-understandable concepts for a neuron, which is another advantage of our Concept-Monitor.
3 CONCEPT-MONITOR: A NOVEL, SCALABLE AND AUTOMATED TOOL TO DEMYSTIFY BLACK-BOX DNN TRAINING PROCESS
In section 3.1 we detail the key components in Concept-Monitor including the concept detector and the universal embedding space. Next in section 3.2, we use Concept-Monitor to demystify the standard training process of a deep vision model and discuss the results and insights.
3.1 CONCEPT DETECTOR AND A UNIFIED EMBEDDING SPACE
Concept Detector: The first part of our method is to use a concept detector (ϕ) to automatically identify the concept of a neuron at any stage in the training. Given a set of concept words S and a probing image dataset Dprobe, a concept detector ϕ returns a concept word w^n for a neuron n that maximally activates it. To achieve automatic concept monitoring of a DNN training process, we use two automated neuron-level interpretability tools, Network Dissection (Bau et al., 2017a) and CLIP-Dissect (Oikarinen & Weng, 2022), as the concept detectors in our experiments as a proof-of-concept, and we note that Concept-Monitor is compatible with other neuron-level tools as well. Although the technical approach of each concept detector is different, we can unify them as a tool calculating a distance metric d_i^n which quantifies neuron n's association with the concept w_i.
For example, the distance d_i^n in Network-Dissection (Bau et al., 2017a) is defined to be the IoU score between activation maps and concept masks, while the distance d_i^n in CLIP-Dissect (Oikarinen & Weng, 2022) is a measure of the similarity between the concept activation matrix and neuron activation maps. Based on this distance metric, we can also define interpretable neurons, which are the neurons whose distance to the closest concept word is less than some threshold, i.e. min_i(d_i^n) < τ, where the threshold τ depends on the concept detector ϕ.
Unified embedding space: The second part of our method is to define a unified embedding space in order to visually track neurons’ evolution. Here we detail the steps to project a neuron n into our unified embedding space.
Step 1: To start with, we use w_i to denote the i-th concept in the concept set S and v_i to denote the associated text embedding, where v_i = f(w_i) with f being the text encoder of a pretrained large language model. We use {v_1, v_2, ..., v_|S|} as the basis of our semantic space and project neurons onto this space using a weighted linear combination of the v_i of the neuron's top-k concept words.
Step 2: Let W^n = [w^n_{1'}, w^n_{2'}, ..., w^n_{k'}] be the list of top-k concept words for neuron n. For each neuron n, we can then calculate the embedding u^n using Equation (1) below,

u^n = \sum_{i=1}^{k} \lambda^n_i f(w^n_{i'})     (1)

where \lambda^n_i is the weight of the concept w_{i'} for describing the neuron n and depends on the concept detector used. For Network-Dissection (Bau et al., 2017a), we use the distance vector d^n = [-IoU_{1'}, -IoU_{2'}, ..., -IoU_{k'}], and for CLIP-Dissect (Oikarinen & Weng, 2022), d^n = [-h_{1'}, -h_{2'}, ..., -h_{k'}], where h is the point-wise mutual information distance metric proposed in the CLIP-Dissect paper. We can then calculate \lambda^n_i by fitting a softmax distribution on the corresponding (negative) distance vector, i.e. \lambda^n_i = e^{-d_{i'}} / \sum_{j=1}^{k} e^{-d_{j'}}. The pseudo code for calculating the unified embedding space is presented in Appendix Algorithm 1.
Remarks:
1. Note that since our method is general, when using a new concept detector, we only need to change the distance vector dn associated with that concept detector, which describes how closely related a neuron is to a specific concept.
2. Another benefit of our unified embedding space is that we can project any general concept word α into the same embedding space by calculating its text embedding f(α). This lets us mark the embedding space with concept ”anchors” (see the red stars in Fig 3), which are concepts that a researcher thinks would be represented in a well trained model. The researcher can then track whether and which neurons are converging or diverging away from those anchors giving useful feedback during training.
3. Unlike prior work Concept-Evo (Park et al., 2022) which requires training an embedding space every time when a base model changes, our unified semantic space doesn’t need to train a base model or learn the image embeddings. Please refer to Table 1 for full comparison between our method and Concept-Evo (Park et al., 2022).
3.2 CASE STUDY (I) MONITORING STANDARD TRAINING
Now we use Concept-Monitor to investigate standard training of ResNet-18 model on Places365 dataset. We investigate the concept evolution of neurons at different epochs in the training using the proposed unified embedding space described in section 3.1.
Results and observations. Our main goal is to inspect the training process and study how the concepts evolve across training and whether there is a correlation between accuracy and concept generalization. The main results are plotted in Figure 3 and we summarize three observations from the standard training below:
1. The model learns to look at more complex features as training progresses. As shown in Figure 2, initially neuron 479 is maximally activated by images containing a "striped" pattern. As the training progresses, we can see that it starts to learn to identify windmill structures at Epoch 5 and stays the same for the rest of the training. Another example is neuron 256, which moves from the grid-pattern-like concept of "anechoic chamber" to learning to detect a "field road".
2. Shallower layers are comparatively more likely to learn low-level features like material and texture, while deeper layers learn more nuanced object detectors. We consider the broad categories [Material, Texture, Object, Part, Scene] to group neurons. These labels were also used in the original Broden dataset to group the labels. We find the categories Scene, Object and Part to be concerned with higher-level concepts like fields and windmills, while Texture is concerned with concepts like striped, matted, etc. From Figure 4, it is evident that Layer 2 and Layer 3 are learning a lot more low-level information than Layer 4.
3. Concept diversity happens later in the training. Using the unified embedding space in Figure 3 we can see that the neurons are clumped together in the middle initially (Epoch 0) and as the training progresses they spread out and hence learn more generalized concepts. This suggests that at the initial stage of the training, only a limited number of concepts have been learned and these concepts are similar (close in the embedding space).
Discussion: Using our method on standard training, we have seen a correlation between training stage and the interpretability of a model. We notice that for a well-trained model there is a progression from low-level concept understanding to higher-level conceptual understanding. We propose that the inverse relation might help to improve model training as well, i.e., a good progression of concepts learnt might indicate a well-trained model. Using our methodology, specifically tracking the neuron concept evolution in the unified embedding space, deep learning researchers can meticulously monitor and manage the status of DNN training, e.g., they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space.
Using another concept detector: We show that Concept-Monitor is able to work with another concept detector, Network Dissection (Bau et al., 2017a), by analyzing the same ResNet-18 model trained on the Places365 dataset. Our results are in Figure 14 in Appendix C, and we can see that the observations are consistent across different concept detectors: shallower layers are more likely to learn low-level features like texture, and the model learns more complex features as the training progresses. We also see the embedding space starting from a clump in the center at Epoch 0 and then spreading out, indicating the generalization of the concepts learnt.
4 CASE STUDIES OF OTHER TRAINING PARADIGMS
In this section, we show that Concept-Monitor is versatile and can be used to study various training paradigms to gain insights into how and why they work. We also provide useful observations and insights that could help future researchers better understand these training procedures.
4.1 CASE STUDY (II) LOTTERY TICKET HYPOTHESIS
Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) is a popular method to prune deep neural networks without sacrificing their performance. In this case study, we use Concept-Monitor to demystify the success behind LTH in a human understandable way. The main idea of LTH is to use iterative magnitude pruning (IMP) to prune the model iteratively by repeating the steps of training, pruning and rewinding to an initial epoch. LTH hypothesizes the existence of ”winning tickets” at initialization which are sub-networks within the network that can be trained to performance equivalent to the original model. However, it was observed that rewinding to initial weight leads to a performance drop and it is better to rewind to an earlier training epoch instead of fully reversing to the initial weights. (Frankle et al., 2019a) attribute this phenomenon to SGD noise in initial training and we will use Concept-Monitor to investigate LTH through the lens of interpretability. We train a ResNet18 on CIFAR 10 dataset using IMP in 8 stages. For full details on our experimental setup, please refer to Appendix section A.
We study LTH with three different rewinding stages of IMP: rewinding to the initial weights (epoch 0), epoch 5, and epoch 16. We find that rewinding to epoch 5 performs better than rewinding to epoch 0 or epoch 16 when the sparsity level is high, which we attribute to the two extremes of initialization, i.e., initialization to epoch 0, in which the model is too noisy, or initialization to epoch 16, in which the model has learnt a rigid structure that would need to be rewired by pruning. For instance, when pruned to 2.8% of the initial weights, rewinding to epoch 5 achieves 93.78% accuracy compared to 91.8% and 93.4% for rewinding to epoch 0 and epoch 16, respectively. Rewinding to 0 is inefficient, as noted by Frankle et al. (2019b), and rewinding to epoch 16 doesn't give the model much freedom to adjust to the sparse weights. We use Concept-Monitor to track the training process of these 3 different rewinding strategies and plot the results in Fig 5 and Fig 9 in the appendix.
Observations and Results:
In our analysis, we make the following observations:
1. Pruning alone leads the network to encode some concepts, without any fine-tuning. Figure 5a shows the number of interpretable neurons in layer 4 of the model after rewinding to initialization. We notice the trend that for rewinding to epoch 16 and epoch 5, the number of interpretable neurons decreases as we increase the sparsity, but for rewinding to the initial weights (epoch 0) the number of interpretable neurons increases. Since the weights are randomly initialized, the only way there can be a gain in interpretable neurons is through the changes that happen during pruning, i.e. the zeroing out of certain weights in the network. Hence, we believe there is a possibility that the training is learning to remove connections that are harming the network, and this leads the resultant network to be different from the original (with the only change being that some weights are zero). This leads to some neurons being activated by certain low-level concepts, and hence our observation of increased interpretable neurons. We note that this phenomenon was also observed in another work (Zhou et al., 2019), which finds that IMP zeros out weights that would ultimately go towards zero anyway after training. Hence, they hypothesize that a pruned initial network encodes a portion of the training process itself, which they refer to as "masking is learning". This also explains why we see interpretable neurons with just pruned initial weights.
2. The percentage of concepts retained through pruning is highest with Epoch 5 rewinding.
Figure 5b plots the percentage of top-x percentile interpretable neurons that retain their concepts throughout the pruning process (y-axis) vs. the percentile x (x-axis). In other words, it plots the relation between the interpretability of a neuron and its concept retention. Note that the interpretability of neurons depends on the threshold defined by the concept detector (see section 3.1). We see that for rewinding to epoch 16, as we decrease the interpretability, the percentage of neurons retaining concepts increases; in other words, the more interpretable neurons are likely to lose their concepts during IMP, while the less interpretable neurons keep their concepts. For rewinding to epoch 5, we see that the more interpretable neurons keep their concepts, and the retention decreases as the neurons become less interpretable. This leads us to the hypothesis that rewinding to epoch 5 learns concepts that are more general and hence able to be retained, while rewinding to epoch 16 learns concepts that are rigid, and the model has to relearn those concepts to preserve accuracy during pruning. This effect is also shown in the accuracy of the models, in which rewinding to epoch 5 performs better than rewinding to epoch 16 at higher pruning.
Discussion: From observation 1, we find that it is very likely that the lottery ticket sparse pruning mask actually encodes learning, which was also suggested in (Zhou et al., 2019) as ”masking is learning”. It is also noted from observation 2 that certain rewinding points are more suitable to retain concepts, e.g. epoch 5 in our case, and there is a correlation between this and the model performance as noted by the accuracy at higher sparsity.
4.2 CASE STUDY (III) ADVERSARIAL TRAINING
DNNs are known to be vulnerable against small perturbations in their inputs (Szegedy et al., 2013). This is problematic as networks can fail unexpectedly after small random or adversarial perturbations which raises concerns over their safety. Fortunately, methods have been developed to defend against
adversarial attacks, most popular of these being Adversarial Training (Madry et al., 2018). This successfully makes networks more robust against such attacks, but comes at a cost of degraded performance on clean test data. In this study, we apply Concept-Monitor to adversarial training to better understand how adversarial training changes a network and why standard accuracy suffers. We analyse a ResNet18 model trained on CIFAR10 with and without adversarial training. For full details on our experimental setup please refer to Appendix section A.
Observations and Results: Using Concept-Monitor we have the following three observations.
1. Adversarially robust network has fewer interpretable neurons in late layers, but more in earlier layers. In Fig 6, we plot the number of interpretable neurons in layers 2-4 at three different training stages. At the end of training, 293 out of 512 layer-4 neurons are interpretable for standard training, while only 215 out of 512 are interpretable for the robustly trained model. For layer 3 it is 91 out of 256 for the standard model and 125 out of 256 for the robust model. We observe a similar trend for layer-2 neurons; please refer to Fig 16 in the Appendix.
2. Adversarially robust network relies more on colors, less on materials and textures. When combining concepts detected across the 3 layers, we observe that the robust model has many more "color" neurons than the standard model (74 vs 15), see Figure 16. In contrast, the standard model has 154 neurons detecting "textures" while the robust model has only 97, and the standard model has 10 "material" neurons compared to only 2 in the robust model. This finding is sensible, as detecting textures and materials often relies on high-frequency patterns that are easily affected by l∞ noise; adversarial training therefore forces the model to rely less on them and more on resilient features like color.
3. Standard training learns neurons detecting the target classes in the second-to-last layer while robust training does not. As seen in Figure 7, the standard network has many neurons detecting its target classes in the second-to-last layer. For example, the standard network has 17 interpretable neurons detecting cars and 13 neurons detecting horses in layer 4, while the robust network has no layer-4 neurons detecting either car or horse.
Discussion: We find that adversarial training harms the ability of the network to detect certain concepts that rely on high frequency patterns like texture. Since these patterns are useful for many tasks, losing them may be a significant cause for the degradation in standard performance as observed in the experiments. Another cause for poorer performance of the robust network may be the lack of neurons detecting target class objects in second to last layer, but why this happens is still unclear to us. We believe addressing these two issues may be the key to improving clean accuracy of robust models. On the other hand, the robust network seems to learn more interpretable lower level features perhaps learning a more diverse representation similar to the findings of (Salman et al., 2020) who showed that adversarially robust models have better features for transfer learning.
4.3 CASE STUDY (IV) FINE-TUNING ON A MEDICAL DATASET
In this section, we use Concept-Monitor to observe the fine-tuning of a pretrained DNN on a diabetic retinopathy dataset (APTOS, 2019). This experiment allows us to test our method on a dataset from a different domain, as well as to gather insights on the process of fine-tuning a pretrained model. The setup details are in Appendix A.
Observations and results: We probed the model training at a few intermediate steps. We observe that for the initial weights, as the neurons are pretrained on ImageNet, they show many diverse and high-level concepts (as shown in Figure 15 in the Appendix). However, as the training progresses we notice that more neurons are activated by textural concepts like dots and patterns rather than objects. This is what we expect: as the model gets better at classifying the retinopathy images shown in Figure 11, we expect it to rely more on textures and the presence of "dots", which is consistent with what we observe here, as shown by the top interpretable neurons at epochs 20 and 40 in Figure 15. From Figure 8 we see that the number of interpretable neurons in the "object" category decreases as training progresses while the number of interpretable neurons in the "material" category increases, which further supports our theory that the model learns to focus more on lower-level features like material and texture as compared to objects.
5 CONCLUSIONS
We have presented Concept-Monitor, a novel method to automatically track and monitor the neural network training process in a transparent and human-understandable way. With 4 comprehensive case studies on various deep learning training paradigms, we show that Concept-Monitor allows us to better understand the underlying mechanism of standard DNN training, of two alternative training methods, the Lottery Ticket Hypothesis and adversarial training, as well as of fine-tuning on a medical task. With Concept-Monitor we discover that, surprisingly, the lottery ticket procedure prunes the network in a way that makes neurons interpretable even at initialization, uncovering interpretability hidden in the random initialization. Furthermore, we discover that adversarial training causes the hidden neurons to detect simpler concepts like colors while losing representations of materials and target-class objects. We also test our method on a medical dataset and find that the model learns to focus more on the low-level features that characterize this domain.
Reproducibility statement: We acknowledge the importance of replicating our experiments and for that reason we have explicitly mentioned the implementation details of all our experiments in Appendix A.
A. EXPERIMENTAL SETUP
Standard training (section 3.2):
Setup: We train a Resnet-18 model on Places-365 dataset, which contains a lot of diverse classes allowing the DNN model to learn diverse concepts. To reduce the training time, we randomly selected 1000 images for each of the 365 classes and trained for 30 epochs reaching top-1 accuracy of 48.3%. We use batch size of 256 and an initial learning rate of 0.1 with cosine annealing scheduler.
Probing methodology: We use the Broden (Bau et al., 2017a) dataset as Dprobe and use the associated concept labels as a decoupled concept set S. Our embedding space, as described in section 3.1, is computed using CLIP's text embeddings of the Broden labels as a basis. For visualization in a 2-dimensional plot, we follow (Park et al., 2022) and use UMAP dimensionality reduction (McInnes et al., 2018), as it preserves inter-point distances in the lower dimensions. We set k = 5 in Eq (1), i.e. we use the top-5 concepts to compute the embedding.
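A rough sketch of this probing setup, assuming OpenAI's clip package and umap-learn are available; the concept words and neuron weights below are stand-ins for the Broden labels and the detector's top-5 weights, used only to show how the text-embedding basis and the 2-D projection are obtained.

import clip
import numpy as np
import torch
import umap

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/16", device=device)

concept_words = ["striped", "dotted", "windmill", "field road", "grid"]   # stand-in for Broden labels
with torch.no_grad():
    tokens = clip.tokenize(concept_words).to(device)
    basis = model.encode_text(tokens).float().cpu().numpy()               # |S| x 512 text-embedding basis

# Stand-in neuron embeddings: each row would be the softmax-weighted sum of that neuron's
# top-5 concept embeddings (Eq. 1); random weights are used here purely for illustration.
weights = np.random.dirichlet(np.ones(len(concept_words)), size=100)      # 100 "neurons"
neuron_embeddings = weights @ basis
coords_2d = umap.UMAP(n_components=2).fit_transform(neuron_embeddings)    # 2-D points to plot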
Lottery ticket hypothesis experiments (section 4.1):
Setup: We train ResNet-18 on the CIFAR-10 dataset using IMP as in the LTH paper (Frankle & Carbin, 2018), rewinding to different early-training weights. For each stage of IMP we train the model for 160 epochs, prune 40% of the weights and rewind to the chosen rewind point. We consider rewinding to three different stages: the initial weights, epoch 5 and epoch 16, using the implementation of (Chen et al., 2022) as a reference.
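The loop below is a schematic of this IMP-with-rewinding procedure, not the referenced implementation; train_one_epoch is a placeholder for an ordinary SGD epoch that keeps pruned weights at zero, and only multi-dimensional (conv/linear) weight tensors are pruned.

import copy
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10)
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
rewind_epoch, rewind_state = 5, None

for stage in range(8):                                   # 8 IMP stages, as above
    for epoch in range(160):
        train_one_epoch(model, masks)                    # placeholder: SGD epoch keeping p * mask
        if stage == 0 and epoch == rewind_epoch:
            rewind_state = copy.deepcopy(model.state_dict())
    # prune 40% of the currently surviving weights by global magnitude
    survivors = torch.cat([p.detach().abs()[masks[n] > 0]
                           for n, p in model.named_parameters() if n in masks])
    threshold = survivors.sort().values[int(0.4 * survivors.numel())]
    for n, p in model.named_parameters():
        if n in masks:
            masks[n] = ((masks[n] > 0) & (p.detach().abs() > threshold)).float()
    model.load_state_dict(rewind_state)                  # rewind to the saved early-training weights
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])                         # re-apply the pruning mask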
Probing methodology: For our Dprobe, we use CIFAR 100 training dataset and for concept set S we use broden labels.
Adversarial Learning experiments (section 4.2):
Setup: We perform adversarial training with PGD attacks on a ResNet-18 architecture. We follow the repository of Wong et al. (2020) and train the network with ϵ = 8/255 and l∞ perturbations for 40 epochs. We compare it against a CIFAR-10 network trained using the exact same training setup but without adversarial training. The standard model reaches a final accuracy of 94.29%, while the robust model reaches 83.42% accuracy on clean data and 50.00% robust accuracy against a PGD adversarial attack, as shown in Figure 10. As expected, the standard model performs poorly on adversarial images.
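For concreteness, a generic l∞ PGD adversarial-training step with ε = 8/255 is sketched below; model, loader and opt are assumed to exist, inputs are assumed to lie in [0, 1], and the step count and step size are illustrative (the referenced repository uses its own, faster schedule).

import torch
import torch.nn.functional as F

eps, alpha, steps = 8 / 255, 2 / 255, 10

def pgd_attack(model, x, y):
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()              # assumes inputs scaled to [0, 1]

for x, y in loader:                                      # loader, model, opt assumed to exist
    x_adv = pgd_attack(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()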
Probing methodology: We use Broden images as Dprobe and for concept set S we use the broden labels as the concepts can be easily categorized.
Fine-tuning on medical dataset (section 4.3)
Setup: We used ResNet-34 backbone pretrained on ImageNet dataset as our feature extractor and used a simple linear layer as the classification head. We trained this network on the diabetic retinopathy classification dataset (APTOS, 2019) (Figure 11) and it achieved an accuracy of 72.77%. We followed the work from (Balaji, 2019) for our experiments. We use Broden as Dprobe and broden labels as S.
B. CONCEPT-MONITOR ALGORITHM
Algorithm 1: Pseudo code for Concept-Monitor for a neuron n
Input: Neuron n, concept detector ϕ, concept set S, probing dataset Dprobe
Output: Embedding plot, concept statistics
Function Concept-Monitor(ϕ, S, Dprobe):
    for t from 1 → t_epoch do
        W_n^t, d_n^t ← ϕ(S, Dprobe, n)
        λ_n^i ← softmax(−d_n^t)
        u_n^t ← Σ_{i=1}^{k} λ_n^i f(W_n^t[i])
        plot(u_n^t)
        R_n.append(W_n^t)
        D_n.append(d_n^t)
    end
    stats ← getStats(R_n, D_n)
C. VISUALIZING EVOLUTION IN THE EMBEDDING SPACE
Here we use Concept-Monitor's unified embedding space to observe the evolution of a few neurons in layer 4 of a ResNet-18 trained on the Places-365 dataset, as described in Section 3.1. Our embedding space is designed in such a way that it is possible for us to add "anchors" to it, which are positions in the embedding space that represent a particular chosen concept. We show these anchors as red stars in Figure 12. These anchors are fixed through training and mark the region of the embedding space encoding a particular concept, so a user may track neuron movements relative to those anchors through training. For brevity we leave out the specific concept labels represented by the anchors in the figure and enumerate them instead. We see that at the beginning most of the neurons
are concentrated around anchors 14,13 and 16 which represent the concepts ”grid”,”dotted” and ”porous” respectively, which are low level features. This is expected as the model has just started training and hasn’t learnt to encode high level concepts yet. Most neurons move away from this space, except neuron 408 which stays in similar space throughout the training encoding low level textural concepts.
We also would like to highlight the trajectory of neuron 190, which starts from bottom left and slowly moves towards anchor 0 representing the concept minibike. By the end of training, this neuron comes very close to the anchor denoting that it has successfully learnt that concept. This concept of distance to the anchors can also be used as a quick visual aid to tell whether the concepts that the neurons represent are strongly represented or not. If the neuron’s concept label is far from the corresponding anchor in the space, we can safely mark that neuron as uninterpretable.
The labels corresponding to the anchors (red stars) in Figure 12 are 0 - minibike, 1 - exhaust hood, 2 - kitchen island, 3 - leaf, 4 - shower curtain, 5 - net, 6 - pantry, 7 - striped, 8 - countertop, 9 - granite, 10 - forecourt, 11 - cat, 12 - bed, 13 - grid, 14 - dotted, 15 - shower stall, 16 - porous, 17 - aqueduct, 18 - fabric.
D. CONCEPT MONITOR WITH DIFFERENT PROBING DATASET
As stated in section 3, our method with CLIP-Dissect is able to work with any probing and concept dataset. We provide most of our analysis using the Broden dataset as it contains a collection of different concept images and hence is able to provide much better results compared to a limited dataset. Here we provide an example of that by using CIFAR-100 training images as the probing dataset to analyze the same model as in section 3. As shown in section 3 and appendix A, neuron 479 represents the concept "windmill" and neuron 256 represents the concept "field-road". We now use CIFAR-100 training images to monitor these neurons. From the embedding space in Figure 13 we can see that neuron 256 converges to the "Field" anchor. We also look at the highly activating images for each neuron in Figure 13 and see that for neuron 479 the most activating images are tree-like structures across the sky, which are the most similar images to windmills in the CIFAR-100 dataset. The point of this exercise is that Concept-Monitor, like all other model dissection methods, is dependent on the probing dataset; however, with CLIP-Dissect we are able to use much larger and more diverse datasets since we don't require any labelling of images and can simply use an entire set of images directly. | 1. What is the focus and contribution of the paper on uncovering DNN training processes?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application and usefulness?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any suggestions or recommendations for improving the tool or exploring deeper research directions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a general framework called Concept-Monitor to uncover the black-box DNN training processes. The paper is built on top of existing work capturing concept based representation, but rather on a temporal / training scale, which sounds interesting but of limited use.
Strengths And Weaknesses
Interesting idea of visualizing neurons on a temporal / training scale. But:
no real use for such tool
unclear motivation
already known from the community
I would recommend to use the tool to drive deeper research directions e.g., how this could guide / better shape re-training on the go by having systems that could be recommended particular images - combining the output with GAN architecture would be interesting.
Clarity, Quality, Novelty And Reproducibility
Clear presentation
Novelty is too limited
Results are straightforward |
ICLR | Title
Demystifying black-box DNN training processes through Concept-Monitor
Abstract
Despite the successes of deep neural networks (DNNs) on a broad range of tasks little has been understood of why and how they achieve such victories due to their complex architecture and their opaque black-box training processes. With the goal to unveil the mystery of DNNs, in this work, we propose a general framework called Concept-Monitor to uncover the black-box DNN training processes automatically for the first time. Our proposed Concept-Monitor enables human-interpretable visualization of the DNN training processes and thus facilitates transparency as well as deeper understanding on how DNNs function and operate along the training iterations. Using Concept-Monitor, we are able to observe and compare different training paradigms at ease, including standard training, fine-tuning, adversarial training and network pruning for the Lottery Ticket Hypothesis, which brings new insights on why and how adversarial training and network pruning work and how they modify the network during training. For example, we find that the lottery ticket hypothesis discovers a mask that makes neurons interpretable at initialization, without any fine-tuning, and we also find that adversarially robust models have more neurons relying on color as compared to standard models trained on the same dataset.
1 INTRODUCTION
The unprecedented success of deep learning has led to its rapid application to a wide range of tasks; however, deep neural networks (DNNs) are also known to be black-box and non-interpretable. To deploy these deep neural network (DNN) models in real-world applications, especially safety-critical applications such as healthcare and autonomous driving, it is imperative for us to understand what is going on behind the black box. There has been a proliferation of research efforts towards interpreting DNNs, and they can be mainly divided into two categories: the first approach focuses on attributing a DNN's prediction to the importance of individual inputs and identifying which pixels or features are important (Zhou et al., 2016; Selvaraju et al., 2019; Sundararajan et al., 2017; Smilkov et al., 2017), while the other approach investigates the functionalities (known as concepts) of each individual neuron (Bau et al., 2017a; Mu & Andreas, 2020; Oikarinen & Weng, 2022).
However, most of these methods only focus on examining a DNN model after it has been trained, and therefore missing out useful information that could be available in the training process. For example, for a deep learning researcher and engineer, it would be very useful to know:
What are the concepts learned by the DNN model and how has the DNN model learnt the concepts along the training process?
The answer to the above question would be useful in two ways: (i) it can shed light on why and how DNNs can achieve great success, which could be helpful to inspire new DNN training algorithms; (ii) it can also help to debug DNNs and prevent catastrophic failure if anything goes wrong.
Motivated by the above question, it is the main goal of this work to develop a novel framework, Concept-Monitor, which makes the black-box DNN training process transparent and human-understandable. Our proposed Concept-Monitor is scalable and automated – properties that are crucial for demystifying the opaque DNN training process efficiently and helping researchers better understand the training dynamics of the model. More formally, in this paper we provide the following contributions:
• We propose a general framework Concept-Monitor, which is the first automatic and efficient pipeline to make the black-box neural network training transparent and interpretable. Our pipeline monitors and tracks the training progress with human-interpretable concepts which provide useful statistics and insights of the DNN model being trained
• We develop a novel universal embedding space which allows us to efficiently track how the neurons' concepts evolve and visualize their semantic evolution throughout the training process, without the need to re-learn an embedding space as proposed in prior work.
• We provide four case studies to analyze various deep learning training paradigms, including training standard deep vision models, the mysterious lottery ticket hypothesis, adversarially robust training and fine-tuning on a medical dataset. With Concept-Monitor, we are able to discover new insights into the obscure training process that help explain some of the empirical observations and hypotheses about black-box deep learning through the lens of interpretability.
2 BACKGROUND AND RELATED WORKS
2.1 NEURON-LEVEL INTERPRETABILITY METHODS
Recently, there has been a great interest towards understanding deep neural network models at the neuron-level, which is different from mainstream methods that focus on interpreting individual decisions through the input features and pixels (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju
et al., 2019; Sundararajan et al., 2017). We call this new direction neuron-level interpretability methods and review the representative techniques below. To begin with, the techniques in this direction can be briefly divided according to whether they need to collect a curated, annotated concept dataset to dissect DNNs. For the techniques that require curated probing data labelled with pre-defined concepts, classic methods in this category include Network Dissection and its variations (Bau et al., 2017b; Mu & Andreas, 2020) as well as Testing with Concept Activation Vectors and its variations (Kim et al., 2017; Goyal et al., 2019; Ghorbani et al., 2019). The key idea of Network Dissection is to identify concepts of neurons by calculating an Intersection over Union (IoU) score between intermediate activation maps and pre-defined concept masks, while the key idea of Concept Activation Vectors is to use directional derivatives to quantify the model's sensitivity to the pre-defined concepts.
However, one limitation of this type of approach is the need of a curated probing dataset annotated with concept labels which may be expensive and time-consuming to collect. On the other hand, a recent method Clip-Dissect (Oikarinen & Weng, 2022) addresses this challenge by leveraging the paradigm of multi-modal model (Radford et al., 2021) and allows automatic identification of neuron concepts without the need of collecting concept labelled data. We note that these techniques are all compatible to our proposed Concept-Monitor to facilitate automatic concept monitoring on the DNN training process. In our experiments, we demonstrated the versatility of our Concept-Monitor by showing the results with different concept detectors in section 3.2 when we study standard DNN training process.
2.2 UNDERSTAND DNN TRAINING DYNAMICS
Most of the existing research has primarily focused on analyzing models after training instead of investigating how the interpretation/concepts change during the DNN training process, which is the main focus of our work. We note that there is a recent work, Concept-Evo (Park et al., 2022), with the same goal as ours, but their proposed method is very different from our Concept-Monitor and has some limitations as discussed below. First, their main idea is to learn a universal semantic space for each neuron using a base model and then project the target model to this space, while we do not need to perform any training. For example, their embedding space uses a base model (VGG19 trained on ImageNet) to project target neurons, while we use a pre-trained CLIP (Radford et al., 2021) text encoder to define a universal embedding space. Their method would be much more expensive than ours as they have to redo the learning every time they change the base model or the probing dataset. Second, the approach proposed in Concept-Evo does not associate human-interpretable concepts to the neurons, and thus human intervention is required to actually describe each neuron, which is another heavy cost (especially when the model size becomes larger and the number of training epochs increases) and hard to automate. On the other hand, our method is fully automated and can explicitly provide the top-k human-understandable concepts for a neuron, which is another advantage of our Concept-Monitor.
3 CONCEPT-MONITOR: A NOVEL, SCALABLE AND AUTOMATED TOOL TO DEMYSTIFY BLACK-BOX DNN TRAINING PROCESS
In section 3.1 we detail the key components in Concept-Monitor including the concept detector and the universal embedding space. Next in section 3.2, we use Concept-Monitor to demystify the standard training process of a deep vision model and discuss the results and insights.
3.1 CONCEPT DETECTOR AND A UNIFIED EMBEDDING SPACE
Concept Detector: The first part of our method is to use a concept detector (ϕ) to automatically identify the concept of a neuron at any stage of training. Given a set of concept words S and a probing image dataset Dprobe, a concept detector ϕ returns, for a neuron n, the concept word w_n that maximally activates it. To achieve automatic concept monitoring of a DNN training process, we use two automated neuron-level interpretability tools, Network Dissection (Bau et al., 2017a) and CLIP-Dissect (Oikarinen & Weng, 2022), as the concept detectors in our experiments as a proof-of-concept, and we note that Concept-Monitor is compatible with other neuron-level tools as well. Although the technical approach of each concept detector is different, we can unify them as tools that calculate a distance metric d_i^n quantifying neuron n's association with the concept w_i.
For example, the distance d_i^n in Network Dissection (Bau et al., 2017a) is defined via the IoU score between activation maps and concept masks, while the distance d_i^n in CLIP-Dissect (Oikarinen & Weng, 2022) is a measure of the similarity between the concept activation matrix and the neuron activation maps. Based on this distance metric, we can also define interpretable neurons, which are the neurons whose distance to the closest concept word is less than some threshold, i.e. min_i d_i^n < τ, where the threshold τ depends on the concept detector ϕ.
Unified embedding space: The second part of our method is to define a unified embedding space in order to visually track neurons’ evolution. Here we detail the steps to project a neuron n into our unified embedding space.
Step 1: To start with, we use w_i to denote the i-th concept in the concept set S and v_i to denote the associated text embedding, where v_i = f(w_i) with f being the text encoder of a pretrained large language model. We use {v_1, v_2, . . . , v_|S|} as the basis of our semantic space and project neurons onto this space using a weighted linear combination of the v_i of the neuron's top-k concept words.
Step 2: Let W^n = [w^n_1', w^n_2', . . . , w^n_k'] be the list of top-k concept words for neuron n. For each neuron n, we can then calculate the embedding u^n using Equation (1) below,
u^n = Σ_{i=1}^{k} λ^n_i f(w^n_i'),   (1)
where λ^n_i is the weight of the concept w_i' for describing the neuron n and depends on the concept detector used. For Network Dissection (Bau et al., 2017a), we use the distance vector d^n = [−IoU_1', −IoU_2', . . . , −IoU_k'], and for CLIP-Dissect (Oikarinen & Weng, 2022) d^n = [−h_1', −h_2', . . . , −h_k'], where h is the point-wise mutual information distance metric proposed in the CLIP-Dissect paper. We can then calculate λ^n_i by fitting a softmax distribution on the corresponding (negative) distance vector: λ^n_i = e^{−d_i'} / Σ_{j=1}^{k} e^{−d_j'}. The pseudo code for calculating the unified embedding space is presented in Appendix Algorithm 1.
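A minimal sketch of Eq. (1): the neuron embedding is the softmax-weighted sum of the text embeddings of its top-k concepts, with weights derived from the detector's (negative) distances; text_embed stands in for the text encoder f.

import numpy as np

def neuron_embedding(top_k_words, top_k_distances, text_embed):
    # top_k_words: the k concept words W^n; top_k_distances: the detector distances d^n
    d = np.asarray(top_k_distances, dtype=float)
    lam = np.exp(-d) / np.exp(-d).sum()                  # softmax over the negative distances
    return sum(l * text_embed(w) for l, w in zip(lam, top_k_words))   # u^n of Eq. (1)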
Remarks:
1. Note that since our method is general, when using a new concept detector, we only need to change the distance vector dn associated with that concept detector, which describes how closely related a neuron is to a specific concept.
2. Another benefit of our unified embedding space is that we can project any general concept word α into the same embedding space by calculating its text embedding f(α). This lets us mark the embedding space with concept ”anchors” (see the red stars in Fig 3), which are concepts that a researcher thinks would be represented in a well trained model. The researcher can then track whether and which neurons are converging or diverging away from those anchors giving useful feedback during training.
3. Unlike prior work Concept-Evo (Park et al., 2022) which requires training an embedding space every time when a base model changes, our unified semantic space doesn’t need to train a base model or learn the image embeddings. Please refer to Table 1 for full comparison between our method and Concept-Evo (Park et al., 2022).
3.2 CASE STUDY (I) MONITORING STANDARD TRAINING
Now we use Concept-Monitor to investigate standard training of ResNet-18 model on Places365 dataset. We investigate the concept evolution of neurons at different epochs in the training using the proposed unified embedding space described in section 3.1.
Results and observations. Our main goal is to inspect the training process and study how the concepts evolve across training and whether there is a correlation between accuracy and concept generalization. The main results are plotted in Figure 3 and we summarize three observations from the standard training below:
1. The model learns to look at more complex features as training progresses. As shown in Figure 2, initially neuron 479 is maximally activated by images containing a "striped" pattern. As the training progresses, we can see that it learns to identify windmill structures at Epoch 5 and stays the same for the rest of the training. Another example is neuron 256, which moves from the grid-pattern-like concept of "anechoic chamber" to learning to detect a "field road".
2. Shallower layers are comparatively more likely to learn low-level features like material and
texture while deeper layers learn more nuanced object detectors. We consider the broad categories of [Material, Texture, Object, Part, Scene] to group neurons. These labels were also
used in the original Broden dataset to group the labels. We find the categories Scene, Object and Part to be concerned with higher-level concepts like fields and windmills, while Texture is concerned with concepts like striped, matted, etc. From Figure 4, it is evident that Layer 2 and Layer 3 learn a lot more low-level information than Layer 4.
3. Concept diversity happens later in the training. Using the unified embedding space in Figure 3 we can see that the neurons are clumped together in the middle initially (Epoch 0) and as the training progresses they spread out and hence learn more generalized concepts. This suggests that at the initial stage of the training, only a limited number of concepts have been learned and these concepts are similar (close in the embedding space).
Discussion: Using our method in standard training, we have seen a correlation between training stage and interpretability of a model. We notice that for a well trained model there is a progression from a low level concepts understanding to higher level conceptual understanding. We propose that an inverse relation might help to improve the model training as well, i.e., a good progression of concepts learnt might indicate a well trained model. Using our methodology, specifically tracking the neuron concept evolution in the unified embedding space, deep learning researchers can meticulously monitor and manage the status of DNN training e.g., they can pause training or modify hyper-parameters when they see neurons grouping up or not spreading out in the semantic space.
Using another concept detector: We show that Concept-Monitor is able to work with another concept detector, Network Dissection (Bau et al., 2017a), by analyzing the same ResNet-18 model trained on the Places365 dataset. Our results are in Figure 14 in Appendix C, and we can see that the observations are consistent across different concept detectors: shallower layers are more likely to learn low-level features like texture, and the model learns more complex features as the training progresses. We also see the embedding space starting from a clump in the center at Epoch 0 and then spreading out, indicating the generalization of the concepts learnt.
4 CASE STUDIES OF OTHER TRAINING PARADIGMS
In this section, we show the Concept-Monitor is versatile and can be used to study various training paradigms to gain insights into how and why they work. We also provide useful observations and insights that could help future researchers better understand these training procedures.
4.1 CASE STUDY (II) LOTTERY TICKET HYPOTHESIS
Lottery Ticket Hypothesis (LTH) (Frankle & Carbin, 2018) is a popular method to prune deep neural networks without sacrificing their performance. In this case study, we use Concept-Monitor to demystify the success behind LTH in a human understandable way. The main idea of LTH is to use iterative magnitude pruning (IMP) to prune the model iteratively by repeating the steps of training, pruning and rewinding to an initial epoch. LTH hypothesizes the existence of ”winning tickets” at initialization which are sub-networks within the network that can be trained to performance equivalent to the original model. However, it was observed that rewinding to initial weight leads to a performance drop and it is better to rewind to an earlier training epoch instead of fully reversing to the initial weights. (Frankle et al., 2019a) attribute this phenomenon to SGD noise in initial training and we will use Concept-Monitor to investigate LTH through the lens of interpretability. We train a ResNet18 on CIFAR 10 dataset using IMP in 8 stages. For full details on our experimental setup, please refer to Appendix section A.
We study LTH with three different rewinding stages of IMP: rewinding to the initial weights (epoch 0), to epoch 5, and to epoch 16. We found that rewinding to epoch 5 performs better than rewinding to epoch 0 or epoch 16 when the sparsity level is high, which we attribute to the two extremes of initialization: at epoch 0 the model is too noisy, while by epoch 16 the model has learnt a rigid structure that would need to be rewired by pruning. For instance, when pruned to 2.8% of the initial weights, rewinding to epoch 5 reaches 93.78% accuracy, compared to 91.8% and 93.4% for rewinding to epoch 0 and epoch 16, respectively. Rewinding to epoch 0 is inefficient, as noted by (Frankle et al., 2019b), and rewinding to epoch 16 doesn’t give the model much freedom to adjust to the sparse weights. We use Concept-Monitor to track the training process of these 3 different rewinding strategies and plot the results in Fig 5 and Fig 9 in the appendix.
Observations and Results:
In our analysis, we make the following observations:
1. Pruning alone teaches the network to encode some concepts, without any fine-tuning. Figure 5a shows the number of interpretable neurons in layer 4 of the model after rewinding to initialization. We notice that for rewinding to epoch 16 and epoch 5, the number of interpretable neurons decreases as we increase the sparsity, but for rewinding to the initial weights (epoch 0) the number of interpretable neurons increases. Since the weights are randomly initialized, the only way there can be a gain in interpretable neurons is through the changes that happen during pruning, i.e. the zeroing out of certain weights in the network. Hence, we believe it is possible that training learns to remove connections that harm the network, so that the resulting network differs from the original even though the only change is that some weights are zero. This causes some neurons to activate for certain low-level concepts, and hence our observation of an increased number of interpretable neurons. We note that this phenomenon was also observed by another work (Zhou et al., 2019), which argues that IMP zeros out weights that would ultimately go towards zero anyway after training. Hence, they hypothesize that a pruned initial network encodes a portion of the training process itself, which they refer to as "masking is learning". This also explains why we see interpretable neurons with just pruned initial weights.
2. The percentage of concepts retained through pruning is highest with Epoch 5 rewinding.
Figure 5b plots the percentage of top-x percentile interpretable neurons that retain their concepts throughout the pruning process (y-axis) vs the x percentile (x-axis); in other words, it relates the interpretability of a neuron to its concept retention. Note that the interpretability of neurons depends on the threshold defined by the concept detector (see section 3.1). We see that for rewinding to epoch 16, as interpretability decreases the percentage of neurons retaining their concepts increases; that is, the more interpretable neurons are likely to lose their concepts during IMP, while the less interpretable neurons keep theirs. For rewinding to epoch 5, the more interpretable neurons keep their concepts and retention decreases as the neurons become less interpretable. This leads us to the hypothesis that rewinding to epoch 5 learns concepts that are more general and hence can be retained, while rewinding to epoch 16 learns concepts that are rigid, so the model has to relearn them to preserve accuracy during pruning. This effect is also visible in the accuracy of the models, where rewinding to epoch 5 performs better than rewinding to epoch 16 at higher pruning levels.
Discussion: From observation 1, we find that it is very likely that the lottery ticket sparse pruning mask actually encodes learning, which was also suggested in (Zhou et al., 2019) as ”masking is learning”. It is also noted from observation 2 that certain rewinding points are more suitable to retain concepts, e.g. epoch 5 in our case, and there is a correlation between this and the model performance as noted by the accuracy at higher sparsity.
4.2 CASE STUDY (III) ADVERSARIAL TRAINING
DNNs are known to be vulnerable against small perturbations in their inputs (Szegedy et al., 2013). This is problematic as networks can fail unexpectedly after small random or adversarial perturbations which raises concerns over their safety. Fortunately, methods have been developed to defend against
adversarial attacks, most popular of these being Adversarial Training (Madry et al., 2018). This successfully makes networks more robust against such attacks, but comes at a cost of degraded performance on clean test data. In this study, we apply Concept-Monitor to adversarial training to better understand how adversarial training changes a network and why standard accuracy suffers. We analyse a ResNet18 model trained on CIFAR10 with and without adversarial training. For full details on our experimental setup please refer to Appendix section A.
Observations and Results: Using Concept-Monitor we have the following three observations.
1. Adversarially robust network has fewer interpretable neurons in late layers, but more in earlier layers. In Fig 6, we plot the number of interpretable neurons in layers 2-4 at three different training stages. At the end of training, 293 out of 512 layer-4 neurons are interpretable for standard training, while only 215 out of 512 are interpretable for the robustly trained model. For layer 3 it is 91 out of 256 for the standard model and 125 out of 256 for the robust model. We observe a similar trend for layer-2 neurons; please refer to Fig 16 in the Appendix.
2. Adversarially robust network relies more on colors, less on materials and textures. When combining concepts detected across the 3 layers, we observe that the robust model has many more "color" neurons than the standard model (74 vs 15), see Figure 16. In contrast, the standard model has 154 neurons detecting "textures" while the robust model has only 97, and the standard model has 10 "material" neurons compared to only 2 in the robust model. This finding is sensible, as detecting textures and materials often relies on high-frequency patterns that are easily affected by l∞ noise; adversarial training therefore forces the model to rely less on them and more on resilient features like color.
3. Standard training learns neurons detecting the target classes in the second-to-last layer while robust training does not. As seen in Figure 7, the standard network has many neurons detecting its target classes in the second-to-last layer. For example, the standard network has 17 interpretable neurons detecting cars and 13 neurons detecting horses in layer 4, while the robust network has no layer-4 neurons detecting either car or horse.
Discussion: We find that adversarial training harms the ability of the network to detect certain concepts that rely on high frequency patterns like texture. Since these patterns are useful for many tasks, losing them may be a significant cause for the degradation in standard performance as observed in the experiments. Another cause for poorer performance of the robust network may be the lack of neurons detecting target class objects in second to last layer, but why this happens is still unclear to us. We believe addressing these two issues may be the key to improving clean accuracy of robust models. On the other hand, the robust network seems to learn more interpretable lower level features perhaps learning a more diverse representation similar to the findings of (Salman et al., 2020) who showed that adversarially robust models have better features for transfer learning.
4.3 CASE STUDY (IV) FINE-TUNING ON A MEDICAL DATASET
In this section, we use Concept-Monitor to observe the fine-tuning of a pretrained DNN on a diabetic retinopathy dataset (APTOS, 2019). This experiment allows us to test our method on a dataset from a different domain, as well as to gather insights on the process of fine-tuning a pretrained model. The setup details are in Appendix A.
Observations and results: We probed the model training at a few intermediate steps. We observe that for the initial weights, as the neurons are pretrained on ImageNet, they show many diverse and high-level concepts (as shown in Figure 15 in the Appendix). However, as the training progresses we notice that more neurons are activated by textural concepts like dots and patterns rather than objects. This is what we expect: as the model gets better at classifying the retinopathy images shown in Figure 11, we expect it to rely more on textures and the presence of "dots", which is consistent with what we observe here, as shown by the top interpretable neurons at epochs 20 and 40 in Figure 15. From Figure 8 we see that the number of interpretable neurons in the "object" category decreases as training progresses while the number of interpretable neurons in the "material" category increases, which further supports our theory that the model learns to focus more on lower-level features like material and texture as compared to objects.
5 CONCLUSIONS
We have presented Concept-Monitor, a novel method to automatically track and monitor the neural network training process in a transparent and human-understandable way. With 4 comprehensive case studies on various deep learning training paradigms, we show that Concept-Monitor allows us to better understand the underlying mechanism of standard DNN training, of two alternative training methods, the Lottery Ticket Hypothesis and adversarial training, as well as of fine-tuning on a medical task. With Concept-Monitor we discover that, surprisingly, the lottery ticket procedure prunes the network in a way that makes neurons interpretable even at initialization, uncovering interpretability hidden in the random initialization. Furthermore, we discover that adversarial training causes the hidden neurons to detect simpler concepts like colors while losing representations of materials and target-class objects. We also test our method on a medical dataset and find that the model learns to focus more on the low-level features that characterize this domain.
Reproducibility statement: We acknowledge the importance of replicating our experiments and for that reason we have explicitly mentioned the implementation details of all our experiments in Appendix A.
A. EXPERIMENTAL SETUP
Standard training (section 3.2):
Setup: We train a Resnet-18 model on Places-365 dataset, which contains a lot of diverse classes allowing the DNN model to learn diverse concepts. To reduce the training time, we randomly selected 1000 images for each of the 365 classes and trained for 30 epochs reaching top-1 accuracy of 48.3%. We use batch size of 256 and an initial learning rate of 0.1 with cosine annealing scheduler.
Probing methodology: We use Broden (Bau et al., 2017a) dataset as Dprobe and use associated concept labels as a decoupled concept set S. Our embedding space, as described in section 3.1, is computed using CLIP’s text embeddings of Broden labels as a basis. For visualizing in a 2- dimension plot, we follow (Park et al., 2022) and use UMAP dimensionality reduction (McInnes et al., 2018), as it preserves inter-point distance in the lower dimensions. We set k = 5 in Eq(1), i.e. we use top-5 concepts to compute the embedding.
Lottery ticket hypothesis experiments (section 4.1):
Setup: We train ResNet 18 on CIFAR 10 dataset using IMP as in the LTH paper (Frankle & Carbin, 2018), rewinding to different initial weights. For each stage of IMP we train the model for 160 epochs, prune 40% of the weights and rewind to initialization. We consider rewinding to three different stages: initial weights, epoch 5 and epoch 16, using (Chen et al., 2022) implementation as reference.
Probing methodology: For our Dprobe, we use CIFAR 100 training dataset and for concept set S we use broden labels.
Adversarial Learning experiments (section 4.2):
Setup: We perform adversarial training with PGD attacks on a ResNet-18 architecture. We follow the repository of Wong et al. (2020) and train the network with ϵ = 8/255 and l∞ perturbations for 40 epochs. We compare it against a CIFAR-10 network trained using the exact same training setup but without adversarial training. The standard model reaches a final accuracy of 94.29%, while the robust model reaches 83.42% accuracy on clean data and 50.00% robust accuracy against a PGD adversarial attack, as shown in Figure 10. As expected, the standard model performs poorly on adversarial images.
Probing methodology: We use Broden images as Dprobe and for concept set S we use the broden labels as the concepts can be easily categorized.
Fine-tuning on medical dataset (section 4.3)
Setup: We used ResNet-34 backbone pretrained on ImageNet dataset as our feature extractor and used a simple linear layer as the classification head. We trained this network on the diabetic retinopathy classification dataset (APTOS, 2019) (Figure 11) and it achieved an accuracy of 72.77%. We followed the work from (Balaji, 2019) for our experiments. We use Broden as Dprobe and broden labels as S.
B. CONCEPT-MONITOR ALGORITHM
Algorithm 1: Pseudo code for Concept-Monitor for a neuron n
Input: Neuron n, concept detector ϕ, concept set S, probing dataset Dprobe
Output: Embedding plot, concept statistics
Function Concept-Monitor(ϕ, S, Dprobe):
    for t from 1 → t_epoch do
        W_n^t, d_n^t ← ϕ(S, Dprobe, n)
        λ_n^i ← softmax(−d_n^t)
        u_n^t ← Σ_{i=1}^{k} λ_n^i f(W_n^t[i])
        plot(u_n^t)
        R_n.append(W_n^t)
        D_n.append(d_n^t)
    end
    stats ← getStats(R_n, D_n)
C. VISUALIZING EVOLUTION IN THE EMBEDDING SPACE
Here we use Concept-Monitor's unified embedding space to observe the evolution of a few neurons in layer 4 of a ResNet-18 trained on the Places-365 dataset, as described in Section 3.1. Our embedding space is designed in such a way that it is possible for us to add "anchors" to it, which are positions in the embedding space that represent a particular chosen concept. We show these anchors as red stars in Figure 12. These anchors are fixed through training and mark the region of the embedding space encoding a particular concept, so a user may track neuron movements relative to those anchors through training. For brevity we leave out the specific concept labels represented by the anchors in the figure and enumerate them instead. We see that at the beginning most of the neurons
are concentrated around anchors 14,13 and 16 which represent the concepts ”grid”,”dotted” and ”porous” respectively, which are low level features. This is expected as the model has just started training and hasn’t learnt to encode high level concepts yet. Most neurons move away from this space, except neuron 408 which stays in similar space throughout the training encoding low level textural concepts.
We also would like to highlight the trajectory of neuron 190, which starts from bottom left and slowly moves towards anchor 0 representing the concept minibike. By the end of training, this neuron comes very close to the anchor denoting that it has successfully learnt that concept. This concept of distance to the anchors can also be used as a quick visual aid to tell whether the concepts that the neurons represent are strongly represented or not. If the neuron’s concept label is far from the corresponding anchor in the space, we can safely mark that neuron as uninterpretable.
The labels corresponding to the anchors (red stars) in Figure 12 are 0 - minibike, 1 - exhaust hood, 2 - kitchen island, 3 - leaf, 4 - shower curtain, 5 - net, 6 - pantry, 7 - striped, 8 - countertop, 9 - granite, 10 - forecourt, 11 - cat, 12 - bed, 13 - grid, 14 - dotted, 15 - shower stall, 16 - porous, 17 - aqueduct, 18 - fabric.
D. CONCEPT MONITOR WITH DIFFERENT PROBING DATASET
As stated in section 3, our method with CLIP-Dissect is able to work with any probing and concept dataset. We provide most of our analysis using the Broden dataset as it contains a collection of different concept images and hence is able to provide much better results compared to a limited dataset. Here we provide an example of that by using CIFAR-100 training images as the probing dataset to analyze the same model as in section 3. As shown in section 3 and appendix A, neuron 479 represents the concept "windmill" and neuron 256 represents the concept "field-road". We now use CIFAR-100 training images to monitor these neurons. From the embedding space in Figure 13 we can see that neuron 256 converges to the "Field" anchor. We also look at the highly activating images for each neuron in Figure 13 and see that for neuron 479 the most activating images are tree-like structures across the sky, which are the most similar images to windmills in the CIFAR-100 dataset. The point of this exercise is that Concept-Monitor, like all other model dissection methods, is dependent on the probing dataset; however, with CLIP-Dissect we are able to use much larger and more diverse datasets since we don't require any labelling of images and can simply use an entire set of images directly. | 1. What is the focus and contribution of the paper on neural network training process analysis?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of simplicity, clarity, and computational costs?
3. How does the reviewer assess the novelty and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the applicability and effectiveness of the proposed method in different use cases? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes "Concept-Monitor", a method for analyzing the training process of a given neural network architecture from the perspective of interpretability. More specifically, the proposed method applies concept detectors at every step (i.e. iteration or epoch) of the training process. The detected concepts are then projected into an embedding space determined by a pretrained large language model. Finally, each neuron is represented by the linear combination of all words/text-labels that have sufficient coverage with it.
The proposed method is validated in four use cases including: 1) Monitoring a regular training process, 2) the analysis of the pruning process in the Lottery Ticket Hypothesis, 3) Adversarial Training, and 4) Fine-tuning.
Strengths And Weaknesses
Strengths
Simplicity
Clarity
Good level of detail
Weaknesses
Limited technical novelty
Added value w.r.t. to existing work is not that clear.
Computational Costs seem to be very high.
No ablation study
Limitations are not discussed.
Clarity, Quality, Novelty And Reproducibility
The contents of the paper are presented in a clear manner. The presentation of the content has a good flow.
Regarding novelty, as stated in the summary of the paper, the proposed method is defined by the combination of several existing methods. Beyond the use of an embedding no other novel technical aspect is in place. Novelty is somewhat limited, at this point the paper seems to be more application oriented.
Reproducibility of the paper is acceptable; implementation details are provided in the supplementary material. In this regard, releasing sample code of the implementation of their method with one of the considered concept detectors would have strengthened the reproducibility of the proposed method.
ICLR | Title
Distributed Zeroth-Order Optimization: Convergence Rates That Match Centralized Counterpart
Abstract
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions are impossible to be described in closed analytical forms. The key idea of zeroth-order optimization lies in the ability for a learner to build gradient estimates by queries sent to the cost function, and then traditional gradient descent algorithms can be executed replacing gradients by the estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can continue to be utilized to develop scalable and distributed algorithms. In this paper, we aim at understanding the trend in performance transitioning from centralized to distributed zeroth-order algorithms in terms of convergence rates, and focus on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of algorithms, the established rates in convergence are shown to match their centralized counterpart. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision set.
1 INTRODUCTION
Various machine learning tasks ultimately boil down to solving optimization problems of different forms, where the cost functions are formed jointly by the data accumulated in experience and the model used to represent the learning framework. Gradient descent algorithms have been playing a foundational role in practically solving such optimization problems. However, for learning tasks with high-dimensional data and involved learning representations, access to the gradient of the cost function may not be possible: the cost function supporting the learning may not have a closed analytical form, or it may simply be too computationally costly to differentiate properly. Zeroth-order optimization provides a systematic way of facilitating gradient descent without direct access to gradient information, where oracles query the cost function values and generate gradient estimates. Zeroth-order methods have shown a number of successful applications, e.g., searching for adversarial attacks in deep learning Chen et al. (2019); Liu et al. (2019) and policy search in reinforcement learning Vemula et al. (2019).
The literature has also explored the potential of extending standard (centralized) zeroth-order optimization to distributed settings over multi-agent systems, where the data and cost functions are scattered across a network of decentralized agents. With the help of a communication network, the agents may collaboratively solve the network-level optimization task by iteratively exchanging decisions obtained from local zeroth-order descent. The rates of convergence of centralized zeroth-order optimization algorithms are now well understood for several sub-classes of convex functions. We are interested in systematically investigating how these convergence rates scale for the corresponding distributed algorithms, and we focus on the case of time-varying communication networks.
1.1 PROBLEM DEFINITION
Consider a network of agents (nodes) V = {1, . . . , N}. The agents aim to collectively solve the following distributed optimization problem
minimize f(x) := ∑_{i=1}^{N} fi(x)
subject to x ∈ X. (1)
Here x ∈ Rd is the decision variable, X ⊆ Rd is a convex decision space, and fi : Rd → R is a private convex objective function associated with agent i.
The communication network connecting the nodes is described by a time-varying graph G(t) = (V,E(t)), where E(t) is the set of activated links at time t. Let A(t) be a weight matrix at time t for the graph G(t): for each link (i, j) ∈ E(t), a weight [A(t)]ij > 0 is assigned, and [A(t)]ij = 0 for (i, j) /∈ E(t). We impose the following assumption on the communication network E(t) and the weight matrix A(t).
Assumption 1 (i) There exists a positive integer B such that the union graph (V, E(kB+1) ∪ · · · ∪ E((k+1)B)) is strongly connected for all k ≥ 0; (ii) A(t) is doubly stochastic, i.e., ∑_{i=1}^{N} [A(t)]ij = 1 and ∑_{j=1}^{N} [A(t)]ij = 1; (iii) [A(t)]ii ≥ ξ for all i, and [A(t)]ij ≥ ξ if (j, i) ∈ E(t), where ξ > 0.
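To make the assumption concrete, the following is a minimal Python/NumPy sketch (with hypothetical helper names of our own) of how one might check parts (ii) and (iii) for a given weight matrix, and part (i) for a window of B consecutive graphs; `networkx` is assumed to be available for the strong-connectivity test.

```python
import numpy as np
import networkx as nx  # assumed available; used only to test strong connectivity

def check_weight_matrix(A, xi):
    """Check parts (ii) and (iii) of Assumption 1 for a single weight matrix A(t)."""
    rows_ok = np.allclose(A.sum(axis=1), 1.0)        # row sums equal one
    cols_ok = np.allclose(A.sum(axis=0), 1.0)        # column sums equal one
    diag_ok = np.all(np.diag(A) >= xi)               # [A(t)]_ii >= xi
    offdiag_ok = np.all((A <= 1e-12) | (A >= xi))    # nonzero entries bounded below by xi
    return rows_ok and cols_ok and diag_ok and offdiag_ok

def check_B_connectivity(A_seq):
    """Check part (i): the union graph over a window of B matrices is strongly connected."""
    union = (sum(A_seq) > 0).astype(float)
    G = nx.from_numpy_array(union, create_using=nx.DiGraph)
    return nx.is_strongly_connected(G)
```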
1.2 FUNCTION CLASSES
Let Fcvx denote the set of all convex functions on Rd. We define the following three classes of convex functions in Fcvx.
• The Lipschitz continuous class Flip(Lf ,X) contains the functions in Fcvx that admit a finite Lipschitz constant Lf over X, i.e.,
Flip(Lf ,X) := {g ∈ Fcvx : ∀x,x′ ∈ X, |g(x)− g(x′)| ≤ Lf‖x− x′‖}.
• The smooth class Fsmo(sf ,X) contains the functions that admit a sf -Lipschitz continuous gradient over X, i.e.,
Fsmo(sf ,X) = {g ∈ Fcvx : ∀x,x′ ∈ X, ‖∇g(x)−∇g(x′)‖ ≤ sf‖x− x′‖}.
• The strongly convex class Fsc(µf ,X) contains the functions that are µf -strongly convex, i.e.,
Fsc(µf, X) = { g ∈ Fcvx : ∀x, x′ ∈ X, g(x) ≥ g(x′) + 〈∇g(x′), x − x′〉 + (µf/2) ‖x − x′‖² }.
1.3 CONTRIBUTIONS AND RELATED WORK
Contributions. We first present MAZOPA, a multi-agent zeroth-order projection averaging algorithm. In MAZOPA, the agents iteratively carry out local zeroth-order descents for their private costs to generate intermediate decisions, send these intermediate decisions to their neighbors over the graph G(t), and then update their decisions by projecting the average of the neighboring intermediate decisions onto X. For distributed zeroth-order oracles based on one-point or two-point estimates, a series of convergence rate results are established for the three basic function classes. Remarkably, the convergence rates for the distributed algorithms are found to match their centralized counterparts, and sometimes even tighter rates are obtained, as summarized in Table 1. These results show that, by paying the price of node-to-node communication, distributed zeroth-order optimization provides performance guarantees equal to those of centralized approaches. Next, we generalize MAZOPA to a multi-stage setting, where the local zeroth-order descents take place for multiple steps before the projected averaging, in a sequence of epochs. The multi-stage MAZOPA is shown to reduce the computational complexity while providing improved convergence rates compared to MAZOPA when the decision set is compact.
Related Work. Recently, many types of centralized zeroth-order optimization algorithms have been studied, and their convergence rates (and the way they depend on the dimension) have been established in different settings. For unconstrained convex optimization, Nesterov & Spokoiny (2017) develops several types of two-point gradient estimators and achieves convergence rates that scale with the dimension as O(d²). For constrained stochastic optimization, Duchi et al. (2015) establishes that the convergence rates are sharp up to factors at most logarithmic in the dimension. Zeroth-order optimization has a natural connection to bandit online optimization, where the latter focuses on dynamic environments in which the objective functions vary over time (see, e.g., Flaxman et al. (2005); Agarwal et al. (2010); Shamir (2013; 2017); Bubeck et al. (2017); Lattimore (2020); Hazan & Levy (2014)). In particular, the seminal work Flaxman et al. (2005) constructs a one-point gradient estimator (or one-point bandit feedback model) and achieves an O(d/T^{1/4}) average regret. For the two-point gradient estimator, Shamir (2017) establishes the tightness of the dimension-dependent factor O(√d) in the framework of zeroth-order stochastic mirror descent.
It is worth zooming into the literature on distributed zeroth-order/bandit online optimization. Due to the absence of a central coordinator, the algorithms developed must rely on local computations and communications (e.g., Yuan & Ho (2015); Yi et al. (2020); Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018); Wan et al. (2020)). This makes the convergence analysis of distributed zeroth-order/bandit online optimization algorithms more challenging. In Yuan & Ho (2015), the authors develop a class of distributed zeroth-order optimization algorithms that require two functional evaluations at each iteration, and establish asymptotic convergence of the algorithm. Non-asymptotic convergence is established in Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018), but the dimension-dependence factors are either O(d²) or far from optimal. The work Yi et al. (2020) considers distributed online optimization with long-term constraints and establishes bounds on regret as well as on constraint violations. To avoid Euclidean projection onto the constraint set, Wan et al. (2020) develops a distributed bandit online optimization algorithm based on conditional gradient descent and one-point bandit feedback, and achieves a regret scaling of O(T^{3/4}√(ln T)).
2 THE MAZOPA ALGORITHM AND ITS CONVERGENCE RATES
In this section, we present the MAZOPA algorithm and establish the convergence rates for the three function classes.
2.1 DISTRIBUTED ZEROTH-ORDER ORACLES
Let n be a random vector in Rd drawn from some probability distribution. Then
f̂i(x; δ) := En [fi(x + δn)] (2)
is a smoothed function for fi. Here δ > 0 is a parameter setting the level of the smoothing. We introduce the following definition on distributed zeroth-order oracles (DistZOO).
Definition 1 (DistZOO) A vector g̃i(x; δ) ∈ Rd is called a distributed zeroth-order oracle at node i if the following conditions hold:
(i) E[g̃i(x; δ)] = ∇f̂i(x; δ) for all x ∈ Rd;
(ii) If fi ∈ Flip(Lf), then f̂i ∈ Flip(Lf) as well, and there holds |f̂i(x; δ) − fi(x)| ≤ pdLfδ, with pd being some positive constant;
(iii) If fi ∈ Fsmo(sf), then |f̂i(x; δ) − fi(x)| ≤ (1/2) p̃d sf δ², with p̃d being some positive constant.
A number of DistZOO satisfying Definition 1 can be obtained using existing gradient estimators, see, e.g., Liu et al. (2020). In the paper, we provide two representative gradient estimators that are commonly adopted in the literature. Let ui be a random vector independently generated from a unit sphere B1 in Rd. Then (e.g., Flaxman et al. (2005))
g̃OPi(x; δ) := (d/δ) fi(x + δui) ui (3)
is a one-point DistZOO satisfying Definition 1. Moreover,
g̃TPi(x; δ) := (d/(2δ)) ( fi(x + δui) − fi(x − δui) ) ui (4)
is a two-point DistZOO satisfying Definition 1 (e.g., Shamir (2017)).
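As an illustration, here is a minimal NumPy sketch of the one-point estimator (3) and the two-point estimator (4); the function handle `f_i` and the helper for sampling ui uniformly from the unit sphere are assumptions made for the example, not part of the paper.

```python
import numpy as np

def sample_unit_sphere(d, rng):
    """Draw u uniformly from the unit sphere B1 in R^d."""
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def one_point_oracle(f_i, x, delta, rng):
    """One-point DistZOO (3): (d/delta) * f_i(x + delta*u) * u."""
    d = x.shape[0]
    u = sample_unit_sphere(d, rng)
    return (d / delta) * f_i(x + delta * u) * u

def two_point_oracle(f_i, x, delta, rng):
    """Two-point DistZOO (4): (d/(2*delta)) * (f_i(x + delta*u) - f_i(x - delta*u)) * u."""
    d = x.shape[0]
    u = sample_unit_sphere(d, rng)
    return (d / (2.0 * delta)) * (f_i(x + delta * u) - f_i(x - delta * u)) * u
```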
2.2 THE MAZOPA ALGORITHM
We present the following Multi-Agent Zeroth-Order Projection Averaging (MAZOPA) algorithm, which consists of two steps, a local zeroth-order optimization step and a distributed averaging step. MAZOPA, whose pseudo-code is presented in Algorithm 1, is a variation of the multi-agent subgradient averaging algorithm proposed in Nedic et al. (2008); Nedic & Ozdaglar (2009); Nedic et al. (2010), where the local optimization step is executed by sub-gradient descent.
Algorithm 1 MAZOPA: x̂i(T) = MAZOPA(xi(1), ηt, δt, X)
Require: step size ηt, DistZOO g̃i(x; δt) with exploration parameter δt for all i ∈ V
Ensure: xi(1) ∈ X, ∀i ∈ V
1: for t = 1 to T do
2:   Node i queries the DistZOO at point xi(t) and receives g̃i(xi(t); δt)
3:   Node i computes vi(t) = xi(t) − ηt · g̃i(xi(t); δt)
4:   Node i updates its state using the information received from its instant neighbors: xi(t+1) = projX( ∑_{j=1}^{N} [A(t)]ij vj(t) )
5: end for
Output: x̂i(T) = (1/T) ∑_{t=1}^{T} xi(t)
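The following is a minimal sketch of Algorithm 1 run for all agents simultaneously, assuming a sequence of weight matrices `A_seq`, local oracles built as in the previous sketch, and a projection routine `proj_X` supplied by the user; the schedules `eta` and `delta` are passed in as functions of t.

```python
import numpy as np

def mazopa(x0, A_seq, oracles, proj_X, eta, delta, T):
    """Run MAZOPA for T rounds.

    x0:      (N, d) array of initial decisions, one row per agent, each in X
    A_seq:   callable t -> (N, N) doubly stochastic weight matrix A(t)
    oracles: list of callables g_i(x, delta) returning a zeroth-order estimate
    proj_X:  Euclidean projection onto X
    eta, delta: callables t -> step size eta_t and exploration parameter delta_t
    """
    x = x0.copy()
    N, d = x.shape
    running_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        running_sum += x  # accumulate x_i(t) for the running average
        # local zeroth-order descent step (Step 3)
        v = np.stack([x[i] - eta(t) * oracles[i](x[i], delta(t)) for i in range(N)])
        # distributed averaging followed by projection onto X (Step 4)
        A = A_seq(t)
        x = np.stack([proj_X(A[i] @ v) for i in range(N)])
    return running_sum / T  # running averages x_hat_i(T), one row per agent
```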
2.3 MAIN RESULTS
Let x̂i(T) be the output of Algorithm 1 at agent i. We denote the optimal solution of problem (1) by x⋆ = arg min_{x∈X} f(x). Defining X◦ := {x + u : x ∈ X, u ∈ B1}, we present the following results on the convergence rate of the MAZOPA algorithm.
Theorem 1 Let Assumption 1 hold. Let DistZOO take the form of g̃OPi(·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf, X◦) for all i ∈ V. Setting ηt = 1/(dT^{3/4}) and δt = 1/t^{1/4}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x⋆) = O(d/T^{1/4}).
(ii) Consider fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦). Setting ηt = 1/(dT^{2/3}) and δt = 1/t^{1/6}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x⋆) = O(d/T^{1/3}).
Theorem 2 Let Assumption 1 hold. Let DistZOO take the form of g̃TPi(·). Set ηt = 1/√(dT) and δt = 1/√t, t = 1, . . . , T. Consider fi ∈ Flip(Lf, X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E[f(x̂i(T))] − f(x⋆) = O(√(d/T)).
With strong convexity, the convergence rates established above can be further strengthened.
Theorem 3 Let Assumption 1 hold. Let DistZOO take the form of g̃OPi(·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf, X◦) ∩ Fsc(µf, X◦) for all i ∈ V. Setting ηt = 1/(µf t) and δt = 1/t^{1/3}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x⋆) = O(d²/T^{1/3}).
(ii) Consider fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦) ∩ Fsc(µf, X◦). Setting ηt = 1/(µf t) and δt = 1/t^{1/4}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x⋆) = O(d²/√T).
Theorem 4 Let Assumption 1 hold. Let DistZOO take the form of g̃TPi(·). Set ηt = 1/(µf t) and δt = 1/t, t = 1, . . . , T. Consider fi ∈ Flip(Lf, X◦) ∩ Fsc(µf, X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E[f(x̂i(T))] − f(x⋆) = O(d ln(T)/T).
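For concreteness, the parameter choices prescribed by Theorems 1–4 can be collected as simple schedule functions to pass to the `mazopa` sketch above. The helper name and the case labels below are ours, not the paper's, and the schedules are read directly off the theorem statements.

```python
import math

def mazopa_schedules(case, d, T, mu_f=None):
    """Return (eta, delta) schedules, as functions of t, for the settings of Theorems 1-4.

    case: 'OP-lip' (Thm 1(i)), 'OP-lip-smooth' (Thm 1(ii)), 'TP-lip' (Thm 2),
          'OP-lip-sc' (Thm 3(i)), 'OP-lip-smooth-sc' (Thm 3(ii)), 'TP-lip-sc' (Thm 4)
    """
    if case == 'OP-lip':
        return (lambda t: 1.0 / (d * T ** 0.75), lambda t: t ** -0.25)
    if case == 'OP-lip-smooth':
        return (lambda t: 1.0 / (d * T ** (2.0 / 3.0)), lambda t: t ** (-1.0 / 6.0))
    if case == 'TP-lip':
        return (lambda t: 1.0 / math.sqrt(d * T), lambda t: 1.0 / math.sqrt(t))
    if case == 'OP-lip-sc':
        return (lambda t: 1.0 / (mu_f * t), lambda t: t ** (-1.0 / 3.0))
    if case == 'OP-lip-smooth-sc':
        return (lambda t: 1.0 / (mu_f * t), lambda t: t ** -0.25)
    if case == 'TP-lip-sc':
        return (lambda t: 1.0 / (mu_f * t), lambda t: 1.0 / t)
    raise ValueError(f"unknown case: {case}")
```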
3 MULTISTAGE MAZOPA: ADAPTIVE LOCAL DESCENT
We now propose a multi-stage variant of Algorithm 1. We impose the following compactness assumption on the constraint set X.
Assumption 2 There exists 0 < RX <∞ such that ‖x‖ ≤ RX for all x ∈ X.
3.1 THE ALGORITHM
The basic idea is to divide the optimization process into a sequence of epochs, each of which has an exponentially decreasing step size and an exponentially increasing iteration number. The updates in the inner loop of each stage are just made according to Algorithm 1 with fixed step size. In each stage only the average point is maintained and used as the starting point of the next stage. This idea of setting up multi-stage optimization algorithms was originally explored in Hazan & Kale (2011).
Take positive integers m ≥ 1 and a ≥ 2. Let k♮ = ⌊log_a(T/m + 1)⌋, where ⌊x⌋ represents the largest integer with value no greater than x ∈ R. We divide the T time steps into k♮ epochs by
Epoch 1 : 1, . . . , T^{(1)};
Epoch 2 : T^{(1)} + 1, . . . , T^{(2)};
...
Epoch k♮ : T^{(k♮−1)} + 1, . . . , T^{(k♮)}.
Here T^{(1)} = m, T^{(2)} = am, . . . , T^{(k♮)} = a^{k♮−1}m. For the j-th epoch, all agents run the MAZOPA algorithm, and we denote the output of the j-th epoch at agent i by x̂^{(j)}_i(T^{(j)}). The pseudo-code of the resulting multi-stage MAZOPA is presented in Algorithm 2. Compared to the MAZOPA algorithm, the multi-stage MAZOPA has the following advantages:
(i) Multistage MAZOPA only requires each node to project its estimates onto the ball BRX, rather than onto the constraint set X, in each epoch. In particular, the multistage MAZOPA algorithm significantly reduces the number of Euclidean projections onto the constraint set X from T to k♮. This makes the algorithm more computationally efficient.
(ii) Multistage MAZOPA better utilizes the step size rules, in the sense that at earlier epochs of the algorithm, larger step sizes are adopted to facilitate convergence, while smaller step sizes are adopted to achieve better accuracy at later epochs.
3.2 OPTIMAL CONVERGENCE RATES
We now modify the definitions of Flip, Fsmo and Fsc by replacing X with BRX , with slight abuse of notation. As it turns out, the multistage MAZOPA enjoys refined convergence rates.
Algorithm 2 Multistage MAZOPA
Require: exploration parameter δ^{(1)}, step size η^{(1)}, T^{(1)} = m, total number of iterations T, integer a ≥ 2, and scalar b > 1
Ensure: x^{(1)}_i(1) ∈ X for all i ∈ V, and set j = 1
1: while j = 1, . . . , k♮ do
2:   Call Algorithm 1 to obtain x̂^{(j)}_i(T^{(j)}) = MAZOPA(x^{(j)}_i(1), η^{(j)}, δ^{(j)}, BRX)
3:   Compute x^{(j+1)}_i(1) = projX( x̂^{(j)}_i(T^{(j)}) )
4:   Update η^{(j+1)} = (1/a) η^{(j)} and δ^{(j+1)} = (1/b) δ^{(j)}
5:   Update T^{(j+1)} = a T^{(j)}
6:   Update j = j + 1
7: end while
Output: x̄i(T) = projX( x̂^{(k♮)}_i(T^{(k♮)}) )
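A minimal sketch of the epoch loop in Algorithm 2 is given below, reusing the `mazopa` routine sketched earlier; the projections onto the ball BRX (`proj_ball`) and onto X (`proj_X`) are assumed to be supplied by the caller.

```python
import math
import numpy as np

def multistage_mazopa(x0, A_seq, oracles, proj_X, proj_ball, T, m=1, a=2, b=2,
                      eta1=1.0, delta1=1.0):
    """Run Algorithm 2: epochs of MAZOPA with geometrically scaled parameters."""
    k_sharp = int(math.floor(math.log(T / m + 1.0, a)))  # number of epochs
    x = x0.copy()
    eta, delta, T_j = eta1, delta1, m
    for _ in range(k_sharp):
        # inner MAZOPA stage projects onto the ball B_{R_X} with constant eta, delta
        x_hat = mazopa(x, A_seq, oracles, proj_ball,
                       eta=lambda t, e=eta: e, delta=lambda t, dl=delta: dl, T=T_j)
        # re-enter the constraint set and start the next epoch from the stage averages
        x = np.stack([proj_X(x_hat[i]) for i in range(x_hat.shape[0])])
        eta, delta, T_j = eta / a, delta / b, a * T_j
    return x  # final projected estimates x_bar_i(T)
```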
Theorem 5 Let Assumptions 1 and 2 hold. Let DistZOO take the form of g̃TPi(·). Set a = b, T^{(1)} = m = 1, η^{(1)} = 4a/(3µf) and δ^{(1)} = 1. Consider fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), i ∈ V. We have E[ max_{i∈V} { ‖x̄i(T) − x⋆‖² } ] = O(d/(T+1)).
The idea of the analysis leading to Theorem 5 can also be extended to one-point oracles. If DistZOO takes the form of g̃OPi(·), one needs to impose the following assumption on the objective functions: |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. For fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), setting a = b³, the final estimates enjoy a convergence rate of E[ max_{i∈V} { ‖x̄i(T) − x⋆‖² } ] = O(d²/(T+1)^{1/3}). For fi ∈ Flip(Lf, BRX) ∩ Fsmo(sf, BRX) ∩ Fsc(µf, BRX), setting a = b⁴, there holds E[ max_{i∈V} { ‖x̄i(T) − x⋆‖² } ] = O(d²/√(T+1)).
4 NUMERICAL EXAMPLES
In this section, we evaluate the performance of the proposed algorithms on a distributed ridge regression problem.
System setup. The optimization problem has the following form:
minimize f(x) = ∑_{i=1}^{N} ( (1/2)(aiᵀx − bi)² + ρ‖x‖² )
subject to ‖x‖₁ ≤ k, (5)
where x ∈ Rd is the optimization variable, and the data pair (ai, bi) ∈ Rd × R is known only to node i, with ai and bi generated from the unit normal distribution.
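In this experiment, each local cost is queried as a black box by the zeroth-order oracles. The snippet below is a small sketch, under the data-generation assumptions stated above, of how the local costs fi could be constructed; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, rho = 50, 10, 0.5
a_data = rng.standard_normal((N, d))   # a_i, known only to node i
b_data = rng.standard_normal(N)        # b_i, known only to node i

def make_local_cost(i):
    """Local ridge cost f_i(x) = 0.5*(a_i^T x - b_i)^2 + rho*||x||^2 for node i."""
    def f_i(x):
        return 0.5 * (a_data[i] @ x - b_data[i]) ** 2 + rho * np.dot(x, x)
    return f_i

local_costs = [make_local_cost(i) for i in range(N)]
```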
Network setup. We implement the proposed algorithms over a randomly generated network that consists of N = 50 nodes, which is shown in Fig. 1. In the simulations, we set d = 10, k = 3/4, ρ = 1/2, and RW = 3/4. We evaluate the performance of the algorithms via the average of 10 implementations. The weight matrix associated with the graph is generated according to the maximum-degree weights:
[A(t)]ij = 1/(1 + dmax) if (j, i) ∈ Et;  [A(t)]ii = 1 − di/(1 + dmax);  [A(t)]ij = 0 if (j, i) ∉ Et and i ≠ j,
where dmax = max_{i∈V}{di} is the maximum degree of Gt (di denotes the degree of node i).
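Given the adjacency structure of the communication graph, the maximum-degree weights above can be assembled directly; here is a small sketch assuming a symmetric 0/1 adjacency matrix `adj` with zero diagonal.

```python
import numpy as np

def max_degree_weights(adj):
    """Build the doubly stochastic maximum-degree weight matrix from a 0/1 adjacency matrix."""
    deg = adj.sum(axis=1)                  # d_i, degree of each node
    d_max = deg.max()
    A = adj / (1.0 + d_max)                # [A]_ij = 1/(1+d_max) on edges
    np.fill_diagonal(A, 1.0 - deg / (1.0 + d_max))  # [A]_ii = 1 - d_i/(1+d_max)
    return A
```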
Results. The performance of MAZOPA and multistage MAZOPA is illustrated by plotting the maximum function errors, max_{i∈V} f(x̂i(T)) and max_{i∈V} f(x̄i(T)), as functions of the number of iterations T in Fig. 2. As a benchmark, the convergence performance of the gradient algorithm is displayed in Fig. 2 as well. From the numerical results it is clear that the maximum function errors vanish for all zeroth-order algorithms. In fact, the convergence performance of two-point MAZOPA is even comparable to that of the gradient method. Moreover, the multistage variants in general exhibit better convergence performance, and this is more pronounced for two-point MAZOPA. These numerical results are in compliance with the theoretical findings of the paper.
Reproduction of the results. The code used for producing this numerical example is provided in the supplementary material.
5 CONCLUSIONS
We have established a series of convergence rates for distributed zeroth-order subgradient algorithms that match their centralized counterparts for Lipschitz, smooth, and strongly convex function classes. These results provide theoretical benchmarks for zeroth-order approaches over complex dynamic networks. We have also proposed a multi-stage variant of the algorithm that better utilizes the learning rates and attains improved convergence rates. In future work, it is worth exploring the connection between the convergence rates and the underlying communication complexity for distributed zeroth-order algorithms.
A KEY LEMMAS
We first establish the basic convergence result for Algorithm 1 that is based on DistZOO g̃i(x; δt), which plays a crucial role in subsequent analyses. We will sometimes use i• to denote a node in V just to highlight the focus on a given node (but i• indeed may take any value in V and therefore it is a generic node).
Lemma 1 Let Assumption 1 hold. Let g̃i(xi(t); δt) be a DistZOO that satisfies Definition 1 (i) and (ii). Then, for any i• ∈ V and all T ≥ 1, there holds
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ ∑_{t=1}^{T} ∑_{i=1}^{N} E[ |fi(x⋆) − f̂i(x⋆; δt)| ] + ∑_{t=1}^{T} ∑_{i=1}^{N} E[ |fi(xi•(t)) − f̂i(xi•(t); δt)| ] + ∑_{t=1}^{T} ( E[Λ⋆(t)] − E[Λ⋆(t+1)] )/(2ηt) + p1Lf + (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] + p2Lf ∑_{t=1}^{T−1} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖ ],
where p1 = 2N max_{i∈V}{‖xavg(1) − xi(1)‖} + (2Nαβ/(1−β)) ( ∑_{i=1}^{N} ‖xi(1)‖ ), p2 = 2N( 3α/(1−β) + 4 ), and Λ⋆(t) = ∑_{i=1}^{N} ‖xi(t) − x⋆‖², with xavg(1) = (1/N) ∑_{i=1}^{N} xi(1), α = (1 − ξ/(4N²))^{−2}, and β = (1 − ξ/(4N²))^{1/B}.
Before presenting the proof of Lemma 1, we provide the following two supporting lemmas. The first lemma characterizes the convergence property of the transition matrix induced by weight matrix A(t) (see Nedic et al. (2008)).
Lemma 2 Define the transition matrix as A(t : ℓ) = A(t)A(t−1) · · · A(ℓ+1)A(ℓ) for all t ≥ ℓ ≥ 1, and write A(t : t) = A(t). Then A(t : ℓ) satisfies |[A(t : ℓ)]ij − 1/N| ≤ αβ^{t−ℓ+1}, where α = (1 − ξ/(4N²))^{−2} and β = (1 − ξ/(4N²))^{1/B}.
The second lemma establishes the accumulated disagreement for every node in the network.
Lemma 3 (Disagreement) Let Assumption 1 hold. For every node i ∈ V, we have
∑_{t=1}^{T} ‖xavg(t) − xi(t)‖ ≤ ‖xavg(1) − xi(1)‖ + (αβ/(1−β)) ( ∑_{i=1}^{N} ‖xi(1)‖ ) + ( 3α/(1−β) + 4 ) ∑_{t=1}^{T−1} ηt ∑_{i=1}^{N} ‖g̃i(xi(t); δt)‖,
where xavg(t) = (1/N) ∑_{i=1}^{N} xi(t).
Proof. To simplify the presentation, we denote
ṽi(t) = ∑_{j=1}^{N} [A(t)]ij vj(t),  si(t) = projX(ṽi(t)) − ṽi(t).
Step 4 in Algorithm 1 can then be rewritten as
xi(t+1) = ṽi(t) + si(t).
Our analysis relies on an estimate of ‖si(t)‖, which can be bounded as follows:
‖si(t)‖ ≤ ‖ projX(ṽi(t)) − ∑_{j=1}^{N} [A(t)]ij xj(t) ‖ + ∑_{j=1}^{N} [A(t)]ij ‖ηt g̃j(xj(t); δt)‖,
where the inequality is based on Step 3 in Algorithm 1 and the fact that A(t) is doubly stochastic (cf. Assumption 1). Using the non-expansiveness of the Euclidean projection projX(·) and the fact that ∑_{j=1}^{N} [A(t)]ij xj(t) ∈ X, we have
‖si(t)‖ ≤ 2 ∑_{j=1}^{N} [A(t)]ij ‖ηt g̃j(xj(t); δt)‖. (6)
We now derive general expressions for xavg(t+1) and xi(t+1), respectively. For xavg(t+1), we have
xavg(t+1) = xavg(t) − (1/N) ∑_{i=1}^{N} ηt g̃i(xi(t); δt) + (1/N) ∑_{i=1}^{N} si(t).
Applying the preceding relation recursively, we get
xavg(t+1) = xavg(1) − ∑_{ℓ=1}^{t} (1/N) ∑_{i=1}^{N} ηℓ g̃i(xi(ℓ); δℓ) + ∑_{ℓ=1}^{t} (1/N) ∑_{i=1}^{N} si(ℓ). (7)
Similarly, for xi(t+1), we have
xi(t+1) = ∑_{j=1}^{N} [A(t : 1)]ij xj(1) − ∑_{ℓ=1}^{t} ∑_{j=1}^{N} [A(t : ℓ)]ij ηℓ g̃j(xj(ℓ); δℓ) + ∑_{ℓ=1}^{t−1} ∑_{j=1}^{N} [A(t : ℓ+1)]ij sj(ℓ) + si(t). (8)
Combining (7) and (8) gives
‖xavg(t+1) − xi(t+1)‖ ≤ ∑_{j=1}^{N} |[A(t : 1)]ij − 1/N| ‖xj(1)‖ + ∑_{ℓ=1}^{t} ∑_{j=1}^{N} |[A(t : ℓ)]ij − 1/N| ηℓ ‖g̃j(xj(ℓ); δℓ)‖ + ∑_{ℓ=1}^{t−1} ∑_{j=1}^{N} |[A(t : ℓ+1)]ij − 1/N| ‖sj(ℓ)‖ + ‖si(t)‖ + (1/N) ∑_{i=1}^{N} ‖si(t)‖. (9)
Combining the results in (6), (9) and Lemma 2 leads to
‖xavg(t+1) − xi(t+1)‖ ≤ αβ^{t} ( ∑_{i=1}^{N} ‖xi(1)‖ ) + 3α ∑_{ℓ=1}^{t} β^{t−ℓ} ηℓ ∑_{i=1}^{N} ‖g̃i(xi(ℓ); δℓ)‖ + 4ηt ∑_{i=1}^{N} ‖g̃i(xi(t); δt)‖,
where we used the following relation, based on (6):
‖si(t)‖ + (1/N) ∑_{i=1}^{N} ‖si(t)‖ ≤ 2 ∑_{i=1}^{N} ‖si(t)‖ ≤ 4 ∑_{i=1}^{N} ∑_{j=1}^{N} [A(t)]ij ‖ηt g̃j(xj(t); δt)‖ ≤ 4ηt ∑_{i=1}^{N} ‖g̃i(xi(t); δt)‖.
This implies that
∑_{t=1}^{T} ‖xavg(t) − xi(t)‖ ≤ ‖xavg(1) − xi(1)‖ + α ( ∑_{i=1}^{N} ‖xi(1)‖ ) ∑_{t=1}^{T−1} β^{t} + 3α ∑_{t=1}^{T−1} ∑_{ℓ=1}^{t} β^{t−ℓ} ηℓ ∑_{i=1}^{N} ‖g̃i(xi(ℓ); δℓ)‖ + 4 ∑_{t=1}^{T−1} ηt ∑_{i=1}^{N} ‖g̃i(xi(t); δt)‖. (10)
Bounding the geometric sums in (10), using ∑_{t=1}^{T−1} β^{t} ≤ β/(1−β) and, after exchanging the order of summation in the double sum, ∑_{t=ℓ}^{T−1} β^{t−ℓ} ≤ 1/(1−β), leads to the bound stated in the lemma.
[Proof of Lemma 1]. Denote
Λ(t) = ∑_{i=1}^{N} ‖xi(t) − x‖², ∀x ∈ X, t ≥ 1. (11)
We follow the standard analysis by deriving the general evolution of Λ(t):
Λ(t+1) = ∑_{i=1}^{N} ‖ projX( ∑_{j=1}^{N} [A(t)]ij vj(t) ) − x ‖² ≤ ∑_{i=1}^{N} ‖vi(t) − x‖², (12)
where the inequality follows from the non-expansiveness of the Euclidean projection and the convexity of the squared norm. Expanding the right-hand side further gives
Λ(t+1) ≤ Λ(t) + ∑_{i=1}^{N} ‖ηt g̃i(xi(t); δt)‖² − 2ηt ∑_{i=1}^{N} 〈g̃i(xi(t); δt), xi(t) − x〉. (13)
Taking the expectation on both sides and using the following property of the DistZOO (cf. Definition 1(i)),
E[g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt),
together with the convexity of f̂i, we further obtain
E[Λ(t+1)] ≤ E[Λ(t)] + ηt² ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] − 2ηt ∑_{i=1}^{N} ( E[ f̂i(xi(t); δt) ] − f̂i(x; δt) ), (14)
which implies
∑_{t=1}^{T} ( ∑_{i=1}^{N} E[ f̂i(xi(t); δt) ] − f̂(x; δt) ) ≤ ∑_{t=1}^{T} ( E[Λ(t)] − E[Λ(t+1)] )/(2ηt) + (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ]. (15)
We turn our attention to the left-hand side of (15). By adding and subtracting the term f̂i(xi•(t); δt) and using the Lipschitz continuity of f̂i (cf. Definition 1(ii)), it follows that
∑_{t=1}^{T} ∑_{i=1}^{N} f̂i(xi(t); δt) ≥ ∑_{t=1}^{T} f̂(xi•(t); δt) − Lf ∑_{t=1}^{T} ∑_{i=1}^{N} ‖xi(t) − xi•(t)‖,
which, together with (15), yields
∑_{t=1}^{T} ( E[ f̂(xi•(t); δt) ] − f̂(x; δt) ) ≤ ∑_{t=1}^{T} ( E[Λ(t)] − E[Λ(t+1)] )/(2ηt) + (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] + Lf ∑_{t=1}^{T} ∑_{i=1}^{N} ‖xi(t) − xi•(t)‖. (16)
The desired result follows by relating the left-hand side to the original function f , using the disagreement estimate in Lemma 3, and setting x = x?.
B PROOFS OF THEOREMS 1 AND 2
We first provide the following lemma, which characterizes the properties of the DistZOOs in (3) and (4). Its proof can be derived by resorting to Flaxman et al. (2005); Shamir (2017), and we omit it here to save space.
Lemma 4 Suppose that fi ∈ Flip(Lf ) for all i ∈ V. We have the following.
(i) For g̃OPi(·), there hold pd = 1 and E[ ‖g̃OPi(xi(t); δt)‖² ] ≤ (Cd/δt)², where C = max_{i∈V} |fi(xi(t) + δtui(t))| with xi(t) ∈ X and ui(t) uniformly drawn from B1. We have p̃d = 1 when fi ∈ Fsmo(sf).
(ii) For g̃TPi(·), there hold pd = 1 and E[ ‖g̃TPi(xi(t); δt)‖² ] ≤ cLf²d, where c is some universal constant. In addition, p̃d = 1 when fi ∈ Fsmo(sf).
Now we are ready to prove Theorems 1 and 2.
(i) First, using the property of the DistZOO g̃OPi(·) (cf. Definition 1(ii)), it follows that
∑_{i=1}^{N} |fi(xi•(t)) − f̂i(xi•(t); δt)| ≤ NLfδt,  ∑_{i=1}^{N} |fi(x⋆) − f̂i(x⋆; δt)| ≤ NLfδt. (17)
We now focus on the case of fi ∈ Flip(Lf, X◦). Combining inequality (17) with the results in Lemmas 4(i) and 1 gives
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ p1Lf + 2NLf ∑_{t=1}^{T} δt + (1/(2ηt)) E[Λ⋆(1)] + (1/2) NC²d² ∑_{t=1}^{T} ηt/δt² + p2NLfCd ∑_{t=1}^{T} ηt/δt, (18)
where we used the fact that ηt does not depend on t (it is a function of T only) and Jensen's inequality, i.e., E[ ‖g̃OPi(xi(t); δt)‖ ] ≤ ( E[ ‖g̃OPi(xi(t); δt)‖² ] )^{1/2} ≤ Cd/δt.
Substituting the explicit expressions ηt = 1/(dT^{3/4}) and δt = 1/t^{1/4} into (18) and dividing both sides by T, we find that
(1/T) ∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) = O(d/T^{1/4}), (19)
where we used the inequality ∑_{t=1}^{T} t^{a} = O(T^{1+a}) for all a ≠ −1. The desired result follows by using the convexity of the function f, i.e., (1/T) ∑_{t=1}^{T} f(xi•(t)) ≥ f(x̂i•(T)).
When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it follows from the property of the DistZOO g̃OPi(·) (cf. Definition 1(iii)) that
∑_{i=1}^{N} |fi(xi•(t)) − f̂i(xi•(t); δt)| ≤ (1/2) Nsfδt²,  ∑_{i=1}^{N} |fi(x⋆) − f̂i(x⋆; δt)| ≤ (1/2) Nsfδt². (20)
We then combine the preceding inequality with the results in Lemmas 4(i) and 1 to get
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ p1Lf + Nsf ∑_{t=1}^{T} δt² + (1/(2ηt)) E[Λ⋆(1)] + (1/2) NC²d² ∑_{t=1}^{T} ηt/δt² + p2NLfCd ∑_{t=1}^{T} ηt/δt. (21)
In contrast with the bound in (18), the second term on the right-hand side of (21) now becomes Nsf ∑_{t=1}^{T} δt², which gives us much more room when choosing δt; we can show that the choices ηt = 1/(dT^{2/3}) and δt = 1/t^{1/6} yield the optimal convergence rate O(d/T^{1/3}).
(ii) When fi ∈ Flip(Lf, X◦), we have the following result for Algorithm 1 running with DistZOO g̃TPi(·):
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ p1Lf + 2NLf ∑_{t=1}^{T} δt + (1/(2ηt)) E[Λ⋆(1)] + ( (1/2)cd + p2√c√d ) NLf² ηt T, (22)
where we have used the bounds in (17), Lemma 4(ii) and Lemma 1. We can then deduce from the terms 1/ηt and ηtT that the optimal choice of ηt is of order 1/√T. Hence, substituting the explicit expressions ηt = 1/√(dT) and δt = 1/√t into (22), dividing both sides by T, and using the convexity of the function f, the desired bound can be concluded.
When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it can be shown that the term 2NLf ∑_{t=1}^{T} δt on the right-hand side of (22) is replaced by Nsf ∑_{t=1}^{T} δt², because of (20). As discussed in the case of fi ∈ Flip(Lf, X◦), the convergence rate is determined by the terms involving 1/ηt and ηtT. Hence, the convergence rate is the same as in the case when fi is only Lipschitz continuous. The proof is complete.
C PROOFS OF THEOREMS 3 AND 4
First, we claim that for the DistZOOs g̃OPi(·) and g̃TPi(·), the strong convexity of fi implies the strong convexity of its smoothed variant f̂i; the proof of this claim is straightforward. We now establish the basic convergence results for Algorithm 1 running with g̃OPi(·) and g̃TPi(·). It follows from (13) that
E[Λ(t+1)] ≤ E[Λ(t)] + ηt² ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] − 2ηt ∑_{i=1}^{N} E[ 〈∇f̂i(xi(t); δt), xi(t) − x〉 ] ≤ E[Λ(t)] + ηt² ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] − 2ηt ∑_{i=1}^{N} ( E[ f̂i(xi(t); δt) ] − f̂i(x; δt) ) − µfηt E[Λ(t)], (23)
where in the first step we used the relation E[g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt) (cf. Definition 1(i)) and in the second step the strong convexity of f̂i. Summing the inequalities in (23) over t = 1 to t = T and regrouping the terms, we obtain
∑_{t=1}^{T} ∑_{i=1}^{N} ( E[ f̂i(xi(t); δt) ] − f̂i(x; δt) ) ≤ (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] + (1/2) ∑_{t=1}^{T} ( (1/ηt)( E[Λ(t)] − E[Λ(t+1)] ) − µf E[Λ(t)] )
= (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] + (1/2)( 1/η1 − µf ) E[Λ(1)] + (1/2) ∑_{t=2}^{T} ( 1/ηt − 1/η_{t−1} − µf ) E[Λ(t)] − (1/(2ηT)) E[Λ(T+1)]. (24)
By substituting the expression ηt = 1/(µf t) into (24) and dropping the negative term, it follows that
∑_{t=1}^{T} ∑_{i=1}^{N} ( E[ f̂i(xi(t); δt) ] − f̂i(x; δt) ) ≤ (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ]. (25)
Then, following the same lines as in the proof of Lemma 1, we have that, for DistZOOs g̃OPi(·) and g̃TPi(·),
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ ∑_{t=1}^{T} E[ |f(x⋆) − f̂(x⋆; δt)| ] + ∑_{t=1}^{T} E[ |f(xi•(t)) − f̂(xi•(t); δt)| ] + p1Lf + (1/2) ∑_{t=1}^{T} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖² ] + p2Lf ∑_{t=1}^{T−1} ηt ∑_{i=1}^{N} E[ ‖g̃i(xi(t); δt)‖ ]. (26)
(i) We now derive the convergence rate results for Algorithm 1 running with DistZOO g̃OPi(·). When fi ∈ Flip(Lf, X◦) ∩ Fsc(µf, X◦), we combine the results in (17), (26) and Lemma 4(i) to get
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ p1Lf + 2NLf ∑_{t=1}^{T} δt + (1/2) NC²d² ∑_{t=1}^{T} ηt/δt² + p2NLfCd ∑_{t=1}^{T} ηt/δt.
By substituting ηt = 1/(µf t) and δt = 1/t^{1/3} into the preceding inequality and using the convexity of f, we obtain the following optimal bound:
(1/T) ∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) = O(d²/T^{1/3}).
When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦) ∩ Fsc(µf, X◦), it follows from an argument similar to that of (21) that
∑_{t=1}^{T} ( E[f(xi•(t))] − f(x⋆) ) ≤ p1Lf + Nsf ∑_{t=1}^{T} δt² + (1/2) NC²d² ∑_{t=1}^{T} ηt/δt² + p2NLfCd ∑_{t=1}^{T} ηt/δt.
It can be shown that the choice of δt = 1/t^{1/4} yields the optimal convergence rate, that is, O(d²/√T).
(ii) The proof for Algorithm 1 running with DistZOO g̃TPi (·) can be obtained in a similar way by exploiting the properties of the DistZOO g̃TPi (·).
D PROOF OF THEOREM 5
We provide the basic convergence result for each stage k, and we start by deriving a bound similar to that of Lemma 1, as follows:
∑_{t=1}^{T^{(k)}} ( E[f(x^{(k)}_{i•}(t))] − f(x⋆) ) ≤ ∑_{t=1}^{T^{(k)}} E[ |f(x⋆) − f̂(x⋆; δ^{(k)})| ] + ∑_{t=1}^{T^{(k)}} E[ |f(x^{(k)}_{i•}(t)) − f̂(x^{(k)}_{i•}(t); δ^{(k)})| ] + ∑_{t=1}^{T^{(k)}} ( E[Λ^{(k),⋆}(t)] − E[Λ^{(k),⋆}(t+1)] )/(2η^{(k)}) + p^{(k)}_1 Lf + (1/2) ∑_{t=1}^{T^{(k)}} η^{(k)} ∑_{i=1}^{N} E[ ‖g̃i(x^{(k)}_i(t); δ^{(k)})‖² ] + p2Lf ∑_{t=1}^{T^{(k)}} η^{(k)} ∑_{i=1}^{N} E[ ‖g̃i(x^{(k)}_i(t); δ^{(k)})‖ ], (27)
where Λ^{(k),⋆}(t) = ∑_{i=1}^{N} ‖x^{(k)}_i(t) − x⋆‖² and p^{(k)}_1 satisfies the following bound, by the compactness of the set X:
p^{(k)}_1 = 2N max_{i∈V}{‖x^{(k)}_{avg}(1) − x^{(k)}_i(1)‖} + (2Nαβ/(1−β)) ( ∑_{i=1}^{N} ‖x^{(k)}_i(1)‖ ) ≤ ( 4N + (2αβ/(1−β)) N² ) RX. (28)
The left-hand side of (27) can be further bounded by using the strong convexity of f, that is,
(1/T^{(k)}) ∑_{t=1}^{T^{(k)}} ( f(x^{(k)}_{i•}(t)) − f(x⋆) ) ≥ 〈∇f(x⋆), x̂^{(k)}_{i•}(T^{(k)}) − x⋆〉 + (Nµf/2) ‖x̂^{(k)}_{i•}(T^{(k)}) − x⋆‖².
Applying the first-order optimality condition, i.e., 〈∇f(x⋆), x − x⋆〉 ≥ 0 for any x ∈ X, to the preceding inequality, and then using the non-expansiveness of the Euclidean projection projX(·) together with Step 3 in Algorithm 2 (so that ‖x̂^{(k)}_{i•}(T^{(k)}) − x⋆‖ ≥ ‖x^{(k+1)}_{i•}(1) − x⋆‖), yields
(1/T^{(k)}) ∑_{t=1}^{T^{(k)}} ( f(x^{(k)}_{i•}(t)) − f(x⋆) ) ≥ (Nµf/2) ‖x^{(k+1)}_{i•}(1) − x⋆‖². (29)
Combining the inequalities (27), (28) and (29), we have, for any i• ∈ V,
(Nµf/2) E[ ‖x^{(k+1)}_{i•}(1) − x⋆‖² ] ≤ ( ∑_{i=1}^{N} E[ ‖x^{(k)}_i(1) − x⋆‖² ] )/(2η^{(k)}T^{(k)}) + ( 4N + (2αβ/(1−β)) N² ) LfRX (1/T^{(k)}) + (1/T^{(k)}) ∑_{t=1}^{T^{(k)}} E[ |f(x⋆) − f̂(x⋆; δ^{(k)})| ] + (1/T^{(k)}) ∑_{t=1}^{T^{(k)}} E[ |f(x^{(k)}_{i•}(t)) − f̂(x^{(k)}_{i•}(t); δ^{(k)})| ] + (1/(2T^{(k)})) ∑_{t=1}^{T^{(k)}} η^{(k)} ∑_{i=1}^{N} E[ ‖g̃i(x^{(k)}_i(t); δ^{(k)})‖² ] + p2Lf (1/T^{(k)}) ∑_{t=1}^{T^{(k)}} η^{(k)} ∑_{i=1}^{N} E[ ‖g̃i(x^{(k)}_i(t); δ^{(k)})‖ ]. (30)
From (30) we see that the convergence depends on the properties of the DistZOO. We first derive the dimension-dependent error bounds for the DistZOO g̃OPi(·); the bounds for g̃TPi(·) then follow naturally from the same derivation.
For the DistZOO g̃OPi(·), when fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), it follows from (17), (30) and Lemma 4(i) that
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] · 1/(µfη^{(k)}T^{(k)}) + 4(Lf/µf) δ^{(k)} + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(k)}) + 2p2(Lf/µf) Cd (η^{(k)}/δ^{(k)}) + (1/µf) C²d² (η^{(k)}/(δ^{(k)})²). (31)
On the other hand, we have
T^{(k)} = a^{k−1}T^{(1)},  η^{(k)} = (1/a^{k−1}) η^{(1)},  δ^{(k)} = (1/b^{k−1}) δ^{(1)}. (32)
This, together with inequality (31), gives
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] · 1/(µfη^{(1)}T^{(1)}) + ( 1/( min{a, b, a/b, a/b²} )^{k−1} ) × ( 4(Lf/µf) δ^{(1)} + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(1)}) + 2p2(Lf/µf) Cd (η^{(1)}/δ^{(1)}) + (1/µf) C²d² (η^{(1)}/(δ^{(1)})²) ). (33)
By substituting T^{(1)} = m = 1, η^{(1)} = 4 min{b, a/b²}/(3µf) and δ^{(1)} = 1 into the preceding relation, we arrive at
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] · 3/(4 min{b, a/b²}) + R1/( min{b, a/b²} )^{k−1}, (34)
where we have used the fact that min{a, b, a/b, a/b²} = min{b, a/b²}, due to a > b² (so that a/b² > 1) and b > 1, and
R1 = 4(Lf/µf) δ^{(1)} + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(1)}) + 2p2(Lf/µf) Cd (η^{(1)}/δ^{(1)}) + (1/µf) C²d² (η^{(1)}/(δ^{(1)})²).
We next show by induction that
E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] ≤ 4 max{R1, RX²}/h^{k−2}, (35)
where h = min{b, a/b²}. For k = 1, we can use the bound max_{i∈V}{ ‖x^{(1)}_i(1) − x⋆‖² } ≤ 4RX² ≤ 4h max{R1, RX²} to deduce that inequality (35) holds. We then assume that inequality (35) holds for k and show that it holds for k + 1 as well:
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ (3/(4h)) E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] + R1/h^{k−1} ≤ 3 max{R1, RX²}/h^{k−1} + max{R1, RX²}/h^{k−1} ≤ 4 max{R1, RX²}/h^{k−1},
which leads to the conclusion in (35). It is easy to verify that the total number of stages in Algorithm 2 is k♮ = ⌊log_a(T/m + 1)⌋, and the final estimates returned by Algorithm 2 are x̄i(T), i ∈ V. Hence, applying (35) with k♮ + 1, we have
E[ max_{i∈V}{ ‖x̄i(T) − x⋆‖² } ] ≤ 4 max{R1, RX²}/h^{k♮−1} ≤ 4h² max{R1, RX²}/h^{log_a(T/m+1)} = 4h² max{R1, RX²}/(T/m + 1)^{1/log_h(a)}, (36)
where we used the inequality k♮ ≥ log_a(T/m + 1) − 1. We are left to find the minimum of log_h(a) = log_{min{b, a/b²}}(a), which is achieved when b = a/b². This yields the final convergence rate, that is, E[ max_{i∈V}{ ‖x̄i(T) − x⋆‖² } ] ≤ 4b² max{R1, RX²}/(T+1)^{1/3} = O(d²/(T+1)^{1/3}), where the last equality follows from R1 = O(d²).
When fi ∈ Flip(Lf ,BRX) ∩ Fsmo(sf ,BRX) ∩ Fsc(µf ,BRX), the dimension-dependence bound can be obtained in a similar fashion.
For the DistZOO g̃TPi(·), when fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), it follows from (17), (30) and Lemma 4(ii) that
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] · 1/(µfη^{(k)}T^{(k)}) + 4(Lf/µf) δ^{(k)} + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(k)}) + (Lf²/µf)( cd + 2p2√c√d ) η^{(k)}. (37)
By setting b = a and then substituting T^{(1)} = 1, η^{(1)} = 4a/(3µf) and δ^{(1)} = 1 into the preceding inequality, it follows that
E[ max_{i∈V}{ ‖x^{(k+1)}_i(1) − x⋆‖² } ] ≤ E[ max_{i∈V}{ ‖x^{(k)}_i(1) − x⋆‖² } ] · 3/(4a) + R2/a^{k−1}, (38)
where R2 = 4(Lf/µf) δ^{(1)} + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(1)}) + (Lf²/µf)( cd + 2p2√c√d ) η^{(1)}. Then, following an argument similar to that of part (i), we obtain
E[ max_{i∈V}{ ‖x̄i(T) − x⋆‖² } ] ≤ 4a² max{R2, RX²}/(T + 1) = O(d/(T + 1)).
Similarly, when fi ∈ Flip(Lf, BRX) ∩ Fsmo(sf, BRX) ∩ Fsc(µf, BRX), we have
E[ max_{i∈V}{ ‖x̄i(T) − x⋆‖² } ] ≤ 4a² max{R′2, RX²}/(T + 1) = O(d/(T + 1)),
where R′2 = 2(sf/µf)(δ^{(1)})² + 4( 2 + (αβ/(1−β))N )(Lf/µf) RX (1/T^{(1)}) + (Lf²/µf)( cd + 2p2√c√d ) η^{(1)}. The proof is complete.
1. What is the focus and contribution of the paper on multi-agent systems?
2. What are the strengths of the proposed algorithms, particularly in terms of convergence rates?
3. Do you have any concerns regarding the novelty and assumptions of the paper?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
5. Are there any suggestions for improving the numerical experiments and comparisons with other works?
Summary Of The Paper
This paper proposes a zeroth-order optimization algorithm for distributed, multi-agent systems with time-varying communication networks. The authors show that their presented multi-agent zeroth-order projection averaging algorithm (and its improved multi-stage version) has a convergence rate that matches the centralized counterpart algorithms under different assumptions. A small numerical experiment is also conducted to illustrate their theoretical findings.
Review
This paper is well written and the theoretical results are solid. I have a few concerns as follows,
The novelty of this paper: the ideas of both the MAZOPA and multi-stage MAZOPA algorithms are not novel. They both have first-order counterparts that were presented years ago. Although the authors do give credit to this previous work, the novelty of the two algorithms should be doubted, as the only change in the algorithms is to replace the first-order oracle with a zeroth-order one.
Some assumptions may need some explanation. In Theorems 1 and 3, the authors present their results based on the assumption that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. I understand this assumption is needed for the one-point oracle, but we do need some explanation and comparison between the assumptions for the OP oracle and the TP oracle.
The comparison table on Page 3 presents some sub-optimal results for the centralized counterpart. For example, for a two-point oracle with strong convexity, the optimal rate should be Õ(d/T), if one refers to previous work such as Duchi et al. (2015). The sub-optimal rate should not be presented.
The numerical experiments are trivial and need improvement. The experiment is a toy example; given the theoretical nature of the findings, we do not expect very comprehensive numerical results, but the authors should at least run 100 Monte Carlo simulations instead of 10. We also need a comparison with the centralized zeroth-order algorithm.
ICLR | Title
Distributed Zeroth-Order Optimization: Convergence Rates That Match Centralized Counterpart
Abstract
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions are impossible to be described in closed analytical forms. The key idea of zeroth-order optimization lies in the ability for a learner to build gradient estimates by queries sent to the cost function, and then traditional gradient descent algorithms can be executed replacing gradients by the estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can continue to be utilized to develop scalable and distributed algorithms. In this paper, we aim at understanding the trend in performance transitioning from centralized to distributed zeroth-order algorithms in terms of convergence rates, and focus on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of algorithms, the established rates in convergence are shown to match their centralized counterpart. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision set.
N/A
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions are impossible to be described in closed analytical forms. The key idea of zeroth-order optimization lies in the ability for a learner to build gradient estimates by queries sent to the cost function, and then traditional gradient descent algorithms can be executed replacing gradients by the estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can continue to be utilized to develop scalable and distributed algorithms. In this paper, we aim at understanding the trend in performance transitioning from centralized to distributed zeroth-order algorithms in terms of convergence rates, and focus on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of algorithms, the established rates in convergence are shown to match their centralized counterpart. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision set.
1 INTRODUCTION
Various machine learning tasks ultimately boil down to solving optimization problems of different forms, where the cost functions are formed jointly by the data accumulated in experiences and the model used in representing the learning framework. Gradient descent algorithms have been playing a foundational role in practically solving such optimization problems. However, for learning tasks with high-dimensional data and involved learning representations, access to the gradient of the cost function may turn out not possible: the cost function supporting the learning may not have a closed analytical form; or it is simply too computationally costly to be properly differentiated. Zeroth-order optimization provides a systemic way of facilitating gradient descent without direct access to gradient information, where oracles query the cost function values and generate gradient estimates. Zeroth-order methods have shown a number of successful applications, e.g., searching for adversarial attacks in deep learning Chen et al. (2019); Liu et al. (2019) and policy search in reinforcement learning Vemula et al. (2019).
The literature has also explored the potential in extending the standard (centralized) zeroth-order optimization to distributed settings over multi-agent systems, where the data and cost functions are scattered across a network of decentralized agents. With the help of a communication network, the agents may collaboratively solve the network-level optimization task by iteratively exchanging decisions obtained from local zeroth-order descent. The rates of convergence of centralized zerothorder optimization algorithms are now well understood for several sub-classes of convex functions. We are interested in systematically investigating these convergence rates scale for the corresponding distributed algorithms, and focus on the case of time-varying communication networks.
1.1 PROBLEM DEFINITION
Consider a network of agents (nodes) V = {1, . . . , N}. The agents aim to collectively solve the following distributed optimization problem
minimize f(x) := N∑ i=1 fi(x)
subject to x ∈ X. (1)
Here x ∈ Rd is the decision variable, X ⊆ Rd is a convex decision space, and fi : Rd → R is a private convex objective function associated with agent i.
The communication network connecting the nodes is described by a time-varying graph G(t) = (V,E(t)), where E(t) is the set of activated links at time t. Let A(t) be a weight matrix at time t for the graph G(t): for each link (i, j) ∈ E(t), a weight [A(t)]ij > 0 is assigned, and [A(t)]ij = 0 for (i, j) /∈ E(t). We impose the following assumption on the communication network E(t) and the weight matrix A(t).
Assumption 1 (i) There exists a positive integer B such that the union graph (V,E(kB+ 1)∪ · · ·∪ E((k+1)B)) is strongly connected for all k ≥ 0; (ii) A(t) is doubly stochastic, i.e., ∑N i=1[A(t)]ij =
1 and ∑N j=1[A(t)]ij = 1; (iii) [A(t)]ii ≥ ξ for all i, and [A(t)]ij ≥ ξ if (j, i) ∈ E(t), where ξ > 0.
1.2 FUNCTION CLASSES
Let Fcvx denote the set of all convex functions on Rd. We define the following three classes of convex functions in Fcvx.
• The Lipschitz continuous class Flip(Lf ,X) contains the functions in Fcvx that admit a finite Lipschitz constant Lf over X, i.e.,
Flip(Lf ,X) := {g ∈ Fcvx : ∀x,x′ ∈ X, |g(x)− g(x′)| ≤ Lf‖x− x′‖}.
• The smooth class Fsmo(sf ,X) contains the functions that admit a sf -Lipschitz continuous gradient over X, i.e.,
Fsmo(sf ,X) = {g ∈ Fcvx : ∀x,x′ ∈ X, ‖∇g(x)−∇g(x′)‖ ≤ sf‖x− x′‖}.
• The strongly convex class Fsc(µf ,X) contains the functions that are µf -strongly convex, i.e.,
Fsc(µf ,X) = { g ∈ Fcvx : ∀x,x′ ∈ X, g(x) ≥ g(x′) + 〈∇g(x′),x− x′〉+
µf 2 ‖x− x′‖2
} .
1.3 CONTRIBUTIONS AND RELATED WORK
Contributions. We first present MAZOPA, a multi-agent zeroth-order projection averaging algorithm. In MAZOPA, the agents iteratively carry out local zeroth-order descents for their private costs to generate intermediate decisions, send these intermediate decisions to their neighbors over the graph G(t), and then update their decisions by projecting the average neighboring intermediate decisions onto X. For distributed zeroth-order oracles based on one-point or two-point estimates, a series of convergence rate results are established for the three basic function classes. Remarkably, the convergence rates for distributed algorithms are found to be matching their centralized counterpart, and sometimes even tighter rates are obtained, as summarized in Table 1. These results show that by paying the price of node-to-node communication, distributed zeroth-order optimization provides equal performance guarantees as those of centralized approaches. Next, we generalize the MAZOPA to a multi-stage setting, where the local zeroth-order descents take place for multiple steps before the projected averaging in a sequence of epochs. Such multi-stage MAZOPA is shown to be able to reduce the computational complexity, while providing improved convergences rates compared to MAZOPA when the decision set is compact.
Related Work. Recently, many types of centralized zeroth-order optimization algorithms have been studied, and their convergence rates (and the way they depend on the dimension) have been established in different settings. For unconstrained convex optimization, Nesterov & Spokoiny (2017) develops several types of two-point gradient estimators and achieves convergence rates that scale with dimension as O(d2). For constrained stochastic optimization, Duchi et al. (2015) establishes that the convergence rates are sharp up to factors at most logarithmic in the dimension. Zeroth-order optimization has a natural connection to bandit online optimization, where the latter focuses on dynamic environment where the objective functions are varying over time (see, e.g., Flaxman et al. (2005); Agarwal et al. (2010); Shamir (2013; 2017); Bubeck et al. (2017); Lattimore (2020); Hazan & Levy (2014)). In particular, the seminal work Flaxman et al. (2005) constructs a one-point gradient estimator (or one-point bandit feedback model) and achieves an O(d/T 1/4) average regret. For two-point gradient estimator, Shamir (2017) establishes the tightness of the dimension-dependent factor O( √ d) in the framework of zeroth-order stochastic mirror descent.
It is worth zooming into the literature on distributed zeroth-order/bandit online optimization. Due to the absence of a central coordinator, the algorithms developed should always rely on local computations and communications (e.g., Yuan & Ho (2015); Yi et al. (2020); Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018); Wan et al. (2020)). This makes the convergence analysis of the distributed zeroth-order/bandit online optimization algorithms more challenging. In Yuan & Ho (2015), the authors develop a class of distributed zeroth-order optimization algorithms that require two functional evaluations at each iteration, and establishes asymptotic convergence of the algorithm. Non-asymptotic convergence is established in Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018), but the dimension-dependence factors are either O(d2) or far from optimal. The work Yi et al. (2020) considers distributed online optimization with long-term constraints and establishes bounds on regret as well as constraint violations. To avoid Euclidean projection onto the constraint set, Wan et al. (2020) develops a distributed bandit online optimization algorithm based on conditional gradient descent and one-point bandit feedback, and achieves a regret scaling of O(T 3/4 √ lnT ).
2 THE MAZOPA ALGORITHM AND ITS CONVERGENCE RATES
In this section, we present the MAZOPA algorithm and establish the convergence rates for the three function classes.
2.1 DISTRIBUTED ZEROTH-ORDER ORACLES
Let n be a random vector in Rd drawn from some probability distribution. Then
f̂i(x; δ) := En [fi(x + δn)] (2)
is a smoothed function for fi. Here δ > 0 is a parameter setting the level of the smoothing. We introduce the following definition on distributed zeroth-order oracles (DistZOO).
Definition 1 (DistZOO) A vector g̃i(x; δ) ∈ Rd is called a distributed zeroth-order oracle at node i if the following conditions hold:
(i) E [g̃i(x; δ)] = ∇f̂i(x; δ) for all x ∈ Rd; (ii) If fi ∈ Flip(Lf ), then f̂i ∈ Flip(Lf ) as well, and there holds ∣∣f̂i(x; δ) − fi(x)∣∣ ≤ pdLfδ,
with pd being some positive constant;
(iii) If fi ∈ Fsmo(sf ), then ∣∣f̂i(x; δ)− fi(x)∣∣ ≤ 12 p̃dsfδ2 with p̃d being some positive constant.
A number of DistZOO satisfying Definition 1 can be obtained using existing gradient estimators, see, e.g., Liu et al. (2020). In the paper, we provide two representative gradient estimators that are commonly adopted in the literature. Let ui be a random vector independently generated from a unit sphere B1 in Rd. Then (e.g., Flaxman et al. (2005))
g̃OPi (x; δ) := fi(x + δtui)uid/δ (3)
is a one-point DistZOO satisfying Definition 1. Moreover,
g̃TPi (x; δ) := d
2δ
( fi(x + δu)− fi(x− δu) ) u (4)
is a two-point DistZOO satisfying Definition 1 (e.g., Shamir (2017)).
2.2 THE MAZOPA ALGORITHM
We present the following Multi-Agent Zeroth-Order Projection Averaging (MAZOPA) algorithm, which consists of two steps, a local zeroth-order optimization step and a distributed averaging step. MAZOPA, whose pseudo-code is presented in Algorithm 1, is a variation of the multi-agent subgradient averaging algorithm proposed in Nedic et al. (2008); Nedic & Ozdaglar (2009); Nedic et al. (2010), where the local optimization step is executed by sub-gradient descent.
Algorithm 1 MAZOPA: x̂i(T ) = MAZOPA (xi(1), ηt, δt,X) Require: step size ηt, DistZOO g̃i(x; δt) with exploration parameter δt for all i ∈ V Ensure: xi(1) ∈ X, ∀i ∈ V
1: for t = 1 to T do 2: Node i queries the DistZOO at point xi(t) and receives g̃i(xi(t); δt) 3: Node i computes
vi(t) = xi(t)− ηt · g̃i(xi(t); δt)
4: Node i updates its state by using the information received from its instant neighbors
xi(t+ 1) = projX ( N∑ j=1 [A(t)]ijvj(t) )
5: end for Output: x̂i(T ) = 1T ∑T t=1 xi(t)
2.3 MAIN RESULTS
Let x̂i(T ) be the output of Algorithm 1 at agent i. We denote the optimal solution of problem (1) by x? = arg minx∈X f(x). Defining X◦ := {x + u : x ∈ X,u ∈ B1}, we present the following results on the convergence rate of the MAZOPA algorithm.
Theorem 1 Let Assumption 1 hold. Let DistZOO take the form of g̃OPi (·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf ,X◦) for all i ∈ V. Setting ηt = 1dT 3/4 and δt = 1 t1/4 , t = 1, . . . , T , it holds that E [ f(x̂i(T )) ] − f(x?) = O ( d
T 1/4
) .
(ii) Consider fi ∈ Flip(Lf ,X◦) ∩ Fsmo(sf ,X◦). Setting ηt = 1dT 2/3 and δt = 1 t1/6 , t = 1, . . . , T , it holds that E [ f(x̂i(T )) ] − f(x?) = O ( d
T 1/3
) .
Theorem 2 Let Assumption 1 hold. Let DistZOO take the form of g̃TPi (·). Set ηt = 1√dT and δt =
1√ t , t = 1, . . . , T . Consider fi ∈ Flip(Lf ,X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E [ f(x̂i(T )) ] − f(x?) = O (√ d T ) .
With strong convexity, the convergence rates established above can be further strengthened.
Theorem 3 Let Assumption 1 hold. Let DistZOO take the form of g̃OPi (·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf ,X◦) ∩ Fsc(µf ,X◦) for all i ∈ V. Setting ηt = 1µf t and δt = 1 t1/3 , t = 1, . . . , T , it holds that E [ f(x̂i(T )) ] − f(x?) = O ( d2
T 1/3
) .
(ii) Consider fi ∈ Flip(Lf ,X◦)∩Fsmo(sf ,X◦)∩Fsc(µf ,X◦). Setting ηt = 1µf t and δt = 1 t1/4 , t = 1, . . . , T , it holds that E [ f(x̂i(T )) ] − f(x?) = O ( d2√ T ) .
Theorem 4 Let Assumption 1 hold. Let DistZOO take the form of g̃TPi (·). Set ηt = 1µf t and δt = 1 t , t = 1, . . . , T . Consider fi ∈ Flip(Lf ,X◦)∩Fsc(µf ,X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E [ f(x̂i(T )) ] − f(x?) = O (d ln(T ) T ) .
3 MULTISTAGE MAZOPA: ADAPTIVE LOCAL DESCENT
We now propose a multi-stage variant of Algorithm 1. We impose the following compactness assumption on the constraint set X.
Assumption 2 There exists 0 < RX <∞ such that ‖x‖ ≤ RX for all x ∈ X.
3.1 THE ALGORITHM
The basic idea is to divide the optimization process into a sequence of epochs, each of which has an exponentially decreasing step size and an exponentially increasing iteration number. The updates in the inner loop of each stage are just made according to Algorithm 1 with fixed step size. In each stage only the average point is maintained and used as the starting point of the next stage. This idea of setting up multi-stage optimization algorithms was originally explored in Hazan & Kale (2011).
Take positive integersm ≥ 1 and a ≥ 2. Let k\ = ⌊ loga ( T m + 1 )⌋ , where bxc represents the largest inter with value no greater than x ∈ R. We divide the T time steps into k\ epochs by
Epoch 1 : 1, . . . , T (1);
Epoch 2 : T (1) + 1, . . . , T (2);
...
Epoch k\ : T (k \−1) + 1, . . . , T (k \).
Here T (1) = m, T (2) = am, . . . , T (k \) = ak \
m. For the jth epoch, all agents will run the MAZOPA algorithm, and denote output of the j-th epoch at agent i by x̂(j)i (T
(j)). The pseudo-code of the resulting multi-stage MAZOPA is presented in Algorithm 2. Compared to the MAZOPA algorithm, the multi-stage MAZOPA has the following advantages:
(i) Multistage MAZOPA only requires each node projects its estimates onto the ball BRX , rather than the constraint set X in each epoch. In particular, multistage MAZOPA algorithm significantly reduces the number of Euclidean projections onto the constraint set X from T to k\. This makes the algorithm more computationally efficient.
(ii) Multistage MAZOPA better utilizes the step size rules, in the sense that at earlier epochs of the algorithm, larger step sizes are adopted to facilitate convergence, while smaller step sizes are adopted to achieve better accuracy at later epochs.
3.2 OPTIMAL CONVERGENCE RATES
We now modify the definitions of Flip, Fsmo and Fsc by replacing X with BRX , with slight abuse of notation. As it turns out, the multistage MAZOPA enjoys refined convergence rates.
Algorithm 2 Multistage MAZOPA Require: exploration parameter δ(1), step size η(1), T (1) = m, total number of iterations T , integer
a ≥ 2, and scalar b > 1 Ensure: x(1)i (1) ∈ X for all i ∈ V, and set k = 1
1: while j = 1, . . . , k\ do 2: Call Algorithm 1 to obtain
x̂ (j) i (T (j)) = MAZOPA ( x (j) i (1), η (j), δ(j),BRX )
3: Compute x(j+1)i (1) = projX ( x̂ (j) i (T (j)) ) 4: Update η(k+1) = 1aη (k) and δ(j+1) = 1b δ (j) 5: Update T (j+1) = aT (j) 6: Update j = j + 1 7: end while
Output: x̄i(T ) = projX ( x̂ (k\) i (T (k\)) )
Theorem 5 Let Assumptions 1 and 2 hold. Let DistZOO take the form of g̃TPi (·). Set a = b, T (1) = m = 1, η(1) = 4a3µf and δ (1) = 1. Consider fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), i ∈ V. We
have E [ maxi∈V { ‖x̄i(T )− x?‖2 }] = O ( d
T+1
) .
The idea of the analysis leading to Theorem 5 can also be extended to one-point oracles. If DistZOO takes the form of g̃OPi (·), one needs to impose the following assumption on the objective functions, that is, |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. For fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), setting a = b3, the final estimates enjoy a convergence rate of E [ maxi∈V { ‖x̄i(T ) − x?‖2 }] =
O ( d2
(T+1)1/3
) . For fi ∈ Flip(Lf ,BRX)∩Fsmo(sf ,BRX)∩Fsc(µf ,BRX), setting a = b4, there holds
E [ maxi∈V { ‖x̄i(T )− x?‖2 }] = O ( d2√ T+1 ) .
4 NUMERICAL EXAMPLES
In this section, we evaluate the performance of the proposed algorithms on a distributed ridge regression problem.
System setup. The optimization problem has the following form:
minimize f(x) = N∑ i=1 ( 1 2 (a T i x− bi)2 + ρ‖x‖2 ) subject to ‖x‖1 ≤ k
(5)
where x ∈ Rd is the optimization variable, the data pair (ai, bi) ∈ Rd × R is only known to node i with ai and bi being generated uniformly from the unit normal distribution.
Network setup. We implement the proposed algorithms over a randomly generated network that consists of N = 50 nodes, which is shown in Fig. 1. In the simulations, we set d = 10, k = 3/4, ρ = 1/2, and RW = 3/4. We evaluate the performance of the algorithms via the average of 10 implementations. The weight matrix associated with the graph is generated according to the maximum-degree weights:
[A(t)]ij = 1 1+dmax , (j, i) ∈ Et
1− di1+dmax , i = j 0, (j, i) /∈ Et
where dmax = maxi∈V{di} is the maximum degree of Gt (di denotes the degree of node i). Results. The performance of algorithms MAZOPA and multistage MAZOPA is illustrated via plotting the maximum function errors, maxi∈V f(x̂i(T )) and maxi∈V f(x̄i(T )), as a function of the number of iterations T in Fig. 2. As a benchmark, the convergence performance of the gradient
algorithm is displayed in Fig. 2 as well. From the numerical results it is clear that the maximum function errors are vanishing for all zerroth-order algorithms. In fact, the convergence performance of two-point MAZOPA is even comparable to the gradient method. Moreover, the multistage variants in general exhibit better convergence performance, and this is more obvious for the case of two-point MAZOPA. These numerical results are in compliance with the theoretical findings in the paper.
Reproduction of the results. The code used for producing this numerical example is provided in the suplementary material.
5 CONCLUSIONS
We have established a series of convergence rates for distributed zeroth-order subgradient algorithms that match their centralized counterpart for Lipschitz, smooth, and strongly convex function classes. These results provided the theoretical benchmarks for zeroth-order approaches over complex dynamic networks. We also proposed a multi-stage variant of the algorithm that better utilizes the learning rates and attains even improved convergence rates. In future work, it is worth exploring the connection between the convergence rates and the underlying communication complexity for distributed zeroth-order algorithms.
A KEY LEMMAS
We first establish the basic convergence result for Algorithm 1 that is based on DistZOO g̃i(x; δt), which plays a crucial role in subsequent analyses. We will sometimes use i• to denote a node in V just to highlight the focus on a given node (but i• indeed may take any value in V and therefore it is a generic node).
Lemma 1 Let Assumption 1 hold. Let g̃i(xi(t); δt) be a DistZOO that satisfies Definition 1 (i) and (ii). Then, for any i• ∈ V and all T ≥ 1, there holds
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ T∑ t=1 N∑ i=1 E [∣∣fi(x?)− f̂i(x?; δt)∣∣]
+ T∑ t=1 N∑ i=1 E [∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣]
+ T∑ t=1 E [Λ?(t)]− E [Λ?(t+ 1)] 2ηt + p1Lf
+ 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + p2Lf
T−1∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖ ] where p1 = 2N maxi∈V{‖xavg(1)−xi(1)‖}+ 2Nαβ1−β (∑N i=1 ‖xi(1)‖ ) , p2 = 2N ( 3α 1−β + 4 ) ,
and Λ?(t) = ∑N i=1 ‖xi(t) − x?‖2 with xavg(1) = 1 N ∑N i=1 xi(1), α = ( 1− ξ4N2 )−2 , and
β = ( 1− ξ4N2 )1/B .
Before presenting the proof of Lemma 1, we provide the following two supporting lemmas. The first lemma characterizes the convergence property of the transition matrix induced by weight matrix A(t) (see Nedic et al. (2008)).
Lemma 2 Define the transition matrix as A(t:ℓ) = A(t)A(t−1)···A(ℓ+1)A(ℓ) for all t ≥ ℓ ≥ 1, and write A(t:t) = A(t). Then A(t:ℓ) satisfies |[A(t:ℓ)]_{ij} − 1/N| ≤ αβ^{t−ℓ+1}, where α = (1 − ξ/(4N^2))^{−2} and β = (1 − ξ/(4N^2))^{1/B}.
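Lemma 2 can be sanity-checked numerically. The toy sketch below (our own example with a fixed doubly stochastic matrix, so A(t:1) is simply A^t; it is not part of the paper's analysis) prints max_ij |[A(t:1)]_ij − 1/N| and exhibits the geometric decay the lemma predicts.

```python
import numpy as np

N = 5
# Lazy random walk on a ring: doubly stochastic with positive diagonal.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 0.5
    A[i, (i + 1) % N] = 0.25
    A[i, (i - 1) % N] = 0.25

P = np.eye(N)
for t in range(1, 21):
    P = A @ P                       # transition matrix A(t:1); here simply A^t
    gap = np.abs(P - 1.0 / N).max()
    print(t, gap)                   # decays geometrically, as Lemma 2 predicts
```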
The second lemma establishes the accumulated disagreement for every node in the network.
Lemma 3 (Disagreement) Let Assumption 1 hold. For every node i ∈ V, we have T∑ t=1 ‖xavg(t)− xi(t)‖ ≤ ‖xavg(1)− xi(1)‖+ αβ 1− β ( N∑ i=1 ‖xi(1)‖ )
+
( 3α
1− β + 4 ) T−1∑ t=1 ηt N∑ i=1 ‖g̃i(xi(t); δt)‖
where xavg(t) = 1N ∑N i=1 xi(t).
Proof. To simplify the presentation, we denote
ṽi(t) = N∑ j=1 [A(t)]ijvj(t)
si(t) = projX (ṽi(t))− ṽi(t).
Step 3 in Algorithm 1 can be rewritten as
xi(t+ 1) = ṽi(t) + si(t).
Our analysis relies on the estimate of ‖si(t)‖, which can be bounded as follows:
‖si(t)‖ ≤ ∥∥∥projX (ṽi(t))− N∑
j=1
[A(t)]ijxj(t) ∥∥∥+ N∑
j=1
[A(t)]ij ‖ηtg̃j(xj(t); δt)‖
where the inequality is based on Step 3 in Algorithm 1 and the fact that A(t) is doubly stochastic (cf. Assumption 1). Using the non-expansiveness of the Euclidean projection projX(·) and the fact that \sum_{j=1}^N [A(t)]_{ij} x_j(t) ∈ X, we have
‖si(t)‖ ≤ 2 N∑ j=1 [A(t)]ij ‖ηtg̃j(xj(t); δt)‖ . (6)
We now derive the general expressions for xavg(t+ 1) and xi(t+ 1), respectively. For xavg(t+ 1), we have
xavg(t+ 1) = xavg(t)− 1
N N∑ i=1 ηtg̃i(xi(t); δt) + 1 N N∑ i=1 si(t).
Applying the preceding inequality recursively, we get
xavg(t+ 1) = xavg(1)− t∑ `=1 1 N N∑ i=1 η`g̃i(xi(`); δ`) + t∑ `=1 1 N N∑ i=1 si(`). (7)
Similarly, for xi(t+ 1), we have
xi(t+ 1) = N∑ j=1 [A(t : 1)]ijxj(1)− t∑ `=1 N∑ j=1 [A(t : `)]ijη`g̃j(xj(`); δ`) + t−1∑ `=1 N∑ j=1 [A(t : `+ 1)]ijsj(`) + si(t). (8) Combining (7) and (8), gives ‖xavg(t+ 1)− xi(t+ 1)‖ ≤ N∑ j=1 ∣∣∣∣[A(t : 1)]ij − 1N ∣∣∣∣ ‖xj(1)‖+ t∑ `=1 N∑ j=1 ∣∣∣∣[A(t : `)]ij − 1N ∣∣∣∣ η`‖g̃j(xj(`); δ`)‖
+ t−1∑ `=1 N∑ j=1 ∣∣∣∣[A(t : `+ 1)]ij − 1N ∣∣∣∣ ‖sj(`)‖+ ‖si(t)‖+ 1N N∑ i=1 ‖si(t)‖.
(9) Combining the results in (6), (9) and Lemma 2, leads to
‖xavg(t+ 1)− xi(t+ 1)‖ ≤ αβt (
N∑ i=1 ‖xi(1)‖
) + 3α
t∑ `=1 βt−`η` N∑ i=1 ‖g̃i(xi(`); δ`)‖+ 4ηt N∑ i=1 ‖g̃i(xi(t); δt)‖
where we used the following relation, based on (6):
‖si(t)‖+ 1
N N∑ i=1 ‖si(t)‖ ≤ 2 N∑ i=1 ‖si(t)‖ ≤ 4 N∑ i=1 N∑ j=1 [A(t)]ij ‖ηtg̃j(xj(t); δt)‖ ≤ 4ηt N∑ i=1 ‖g̃i(xi(t); δt)‖.
This implies that
T∑ t=1 ‖xavg(t)− xi(t)‖ ≤ ‖xavg(1)− xi(1)‖+ α ( N∑ i=1 ‖xi(1)‖ ) T−1∑ t=1 βt
+ 3α T−1∑ t=1 t∑ `=1 βt−`η` N∑ i=1 ‖g̃i(xi(`); δ`)‖+ 4 T−1∑ t=1 ηt N∑ i=1 ‖g̃i(xi(t); δt)‖.
(10)
This, in combination with (10), leads to the final bound.
[Proof of Lemma 1]. Denote
Λ(t) = N∑ i=1 ‖xi(t)− x‖2, ∀x ∈ X, t ≥ 1. (11)
We follow the standard analysis by deriving the general evolution of Λ(t),
Λ(t+ 1) = N∑ i=1 ∥∥∥∥∥projX N∑ j=1 [A(t)]ijvj(t) − x∥∥∥∥∥ 2 ≤ N∑ i=1 ‖vi(t)− x‖2 (12)
where the inequality follows from the non-expansiveness of the Euclidean projection and the convexity of norm square function. Expanding the term further gives
Λ(t+ 1) = Λ(t) + N∑ i=1 ‖ηtg̃i(xi(t); δt)‖2 − 2ηt N∑ i=1 〈g̃i(xi(t); δt),xi(t)− x〉 (13)
Taking the expectation on both sides and using the following property of DistZOO (cf. Definition 1(i)):
E [g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt) we further obtain
E [Λ(t+ 1)] = E [Λ(t)] + η2t N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x; δt) ) (14)
which implies T∑ t=1 N∑ i=1 E [ f̂i(xi(t); δt) ] − f̂(x; δt) ≤ T∑ t=1 E [Λ(t)]− E [Λ(t+ 1)] 2ηt + 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] . (15) We turn our attention to the left-hand side of (15). By adding and subtracting the term f̂i(xi•(t); δt) and using the Lipschitz continuity of f̂i (cf. Definition 1(ii)), it follows that
T∑ t=1 N∑ i=1 f̂i(xi(t); δt) ≥ T∑ t=1 f̂(xi•(t); δt)− Lf T∑ t=1 N∑ i=1 ‖xi(t)− xi•(t)‖
which, together with (15), yields
T∑ t=1 E [ f̂(xi•(t); δt) ] − f̂(x; δt) ≤ T∑ t=1 E [Λ(t)]− E [Λ(t+ 1)] 2ηt
+ 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + Lf T∑ t=1 N∑ i=1 ‖xi(t)− xi•(t)‖.
(16)
The desired result follows by relating the left-hand side to the original function f , using the disagreement estimate in Lemma 3, and setting x = x?.
B PROOFS OF THEOREMS 1 AND 2
We first provide the following lemma, which characterizes the properties of the DistZOOs in (3) and (4). Its proof can be derived by resorting to Flaxman et al. (2005); Shamir (2017), and is omitted here to save space.
Lemma 4 Suppose that fi ∈ Flip(Lf ) for all i ∈ V. We have the following.
(i) For g̃OPi (·), there hold pd = 1 and
E [ ‖g̃OPi (xi(t); δt)‖2 ] ≤ ( Cd
δt )2 where C = maxi∈V |fi(xi(t) + δtui(t))| with xi(t) ∈ X and ui(t) uniformly drawn from B1. We have p̃d = 1 when fi ∈ Fsmo(sf ).
(ii) For g̃TPi (·), there hold pd = 1 and E [ ‖g̃TPi (xi(t); δt)‖2 ] ≤ cL2fd
where c is some universal constant. In addition, p̃d = 1 when fi ∈ Fsmo(sf ).
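The contrast between the two second-moment bounds in Lemma 4 can be illustrated empirically. The sketch below (our own toy example with f(x) = ‖x‖, so Lf = 1; not taken from the paper) estimates E‖g̃‖^2 for the one-point oracle (3) and the two-point oracle (4) at the same point, showing the (Cd/δ)^2 versus c·Lf^2·d scaling.

```python
import numpy as np

rng = np.random.default_rng(1)
d, delta, M = 20, 0.1, 20000
f = lambda x: np.linalg.norm(x)          # Lipschitz with L_f = 1
x = rng.standard_normal(d)

U = rng.standard_normal((M, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)      # uniform directions on the unit sphere

g_one = np.array([f(x + delta * u) * u * d / delta for u in U])                              # eq. (3)
g_two = np.array([(f(x + delta * u) - f(x - delta * u)) * u * d / (2 * delta) for u in U])   # eq. (4)

print("one-point  E||g||^2 ~", (g_one ** 2).sum(axis=1).mean())   # ~ (C d / delta)^2
print("two-point  E||g||^2 ~", (g_two ** 2).sum(axis=1).mean())   # ~ c L_f^2 d
```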
Now we are ready to prove Theorems 1 and 2.
(i) First, using the property of the DistZOO g̃OPi (·) (cf. Definition 1(iii)), it follows that N∑ i=1 ∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣ ≤ NLfδt N∑ i=1 ∣∣fi(x?)− f̂i(x?; δt)∣∣ ≤ NLfδt (17)
We now focus on the case of fi ∈ Flip(Lf ,X◦). Combining with inequality (17), the results in Lemmas 4(i) and 1, gives
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2ηt E [Λ?(1)]
+ 1
2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt
(18)
where we used the fact that ηt is a function of T and Jensen’s inequality, i.e., E [ ∥∥g̃OPi (xi(t); δt)∥∥ ] ≤ (E[ ∥∥g̃OPi (xi(t); δt)∥∥2 ])1/2 ≤ Cdδt .
Substituting the explicit expressions of η = 1 dT 3/4 and δt = 1t1/4 into (18) and dividing both sides by T , we find that
1
T T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) = O ( d T 1/4 ) (19)
where we used the inequality that \sum_{t=1}^T t^a = O(T^{1+a}) for all a ≠ −1. The desired result follows by using the convexity of the function f, i.e., (1/T) \sum_{t=1}^T f(x_{i•}(t)) ≥ f(x̂_{i•}(T)). When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it follows from the property of the DistZOO g̃OPi(·) (cf. Definition 1(iii)) that
N∑ i=1 ∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣ ≤ 1 2 Nsfδ 2 t
N∑ i=1 ∣∣fi(x?)− f̂i(x?; δt)∣∣ ≤ 1 2 Nsfδ 2 t .
(20)
We then combine the preceding inequality and the results in Lemmas 4(i) and 1 to get T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf +Nsf T∑ t=1 δ2t + 1 2ηt E [Λ?(1)]
+ 1
2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
(21)
In contrast with the bound in (18), the second term on the right-hand side of (21) now becomes Nsf ∑T t=1 δ 2 t , which gives us much more space when choosing δt; we can show that the choices of η = 1 dT 2/3 and δt = 1t1/6 yield the optimal convergence rate O ( d T 1/3 ) .
(ii) When fi ∈ Flip(Lf ,X◦), we have the following result for Algorithm 1 running with DistZOO g̃TPi (·):
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2ηt E [Λ?(1)]
+
( 1
2 cd+ p2
√ c √ d ) NL2fηtT
(22)
where we have used the bounds in (17), Lemma 4(ii) and Lemma 1. Then we can deduce from the terms 1ηt and ηtT that the optimal choice of ηt is 1√ T
. Hence, substituting the explicit expressions for ηt = 1√dT and δt = 1√ t
into (22), dividing both sides by T, and using the convexity of the function f, the desired bound can be concluded.
When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it can be shown that the term 2NLf \sum_{t=1}^T δt on the right-hand side of (22) is replaced by Nsf \sum_{t=1}^T δt^2, because of (20). As we discussed in the case of fi ∈ Flip(Lf, X◦), the convergence rate is determined by the terms involving 1/ηt and ηt·T. Hence, the convergence rate is the same as in the case when fi is only Lipschitz continuous. The proof is complete.
C PROOFS OF THEOREMS 3 AND 4
First, we claim that for the DistZOOs g̃OPi(·) and g̃TPi(·), the strong convexity of fi implies the strong convexity of its smoothed variant f̂i; the proof of this claim is straightforward. We now establish the basic convergence results for Algorithm 1 running with g̃OPi(·) and g̃TPi(·). It follows from (13) that
E [Λ(t+ 1)] = E [Λ(t)] + η2t N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 E [〈 ∇f̂i(xi(t); δt),xi(t)− x 〉] ≤ E [Λ(t)] + η2t
N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) − µfηtE [Λ(t)]
(23) where in the equality we used the relation E [g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt) (cf. Definition 1(i)) and in the inequality we used the strongly convexity of function f̂i. Summing the inequalities in (23) over t = 1 to t = T and regrouping the terms, we obtain T∑ t=1 N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) ≤ 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2
] + 1
2 T∑ t=1 ( 1 ηt (E [Λ(t)]− E [Λ(t+ 1)])− µfE [Λ(t)] )
= 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + 1 2 ( 1 η1 −µf ) E [Λ(1)]
+ 1
2 T∑ t=2 ( 1 ηt − 1 ηt−1 −µf ) E [Λ(t)]− 1 2ηT E [Λ(T + 1)] .
(24) By substituting the expression for ηt = 1µf t into (24) and dropping the negative term, it follows
T∑ t=1 N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) ≤ 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] . (25)
Then, following the same lines as that of the proof of Lemma 1, we have that, for DistZOOs g̃OPi (·) and g̃TPi (·), T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ T∑ t=1 E [∣∣f(x?)− f̂(x?; δt)∣∣]+ T∑ t=1 E [∣∣f(xi•(t))− f̂(xi•(t); δt)∣∣]
+ p1Lf + 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + p2Lf T−1∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖ ] .
(26)
(i) We now derive the convergence rate results for Algorithm 1 running with DistZOOs g̃OPi (·). When fi ∈ Flip(Lf ,X◦) ∩ Fsc(µf ,X◦), we combine the results in (17), (26) and Lemma 4(i) to get T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
By substituting ηt = 1µf t and δt = 1 t1/3 into the preceding inequality and using the convexity of F , it yields the following optimal bound
1
T T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) = O ( d2 T 1/3 ) .
When fi ∈ Flip(Lf ,X◦) ∩ Fsmo(sf ,X◦) ∩ Fsc(µf ,X◦), it follows from an argument similar to that of (21) that
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf +Nsf T∑ t=1 δ2t + 1 2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
It can be proven that the choice of δt = 1t1/4 yields the optimal convergence rate, that is, O ( d2√ T ) .
(ii) The proof for Algorithm 1 running with DistZOO g̃TPi (·) can be obtained in a similar way by exploiting the properties of the DistZOO g̃TPi (·).
D PROOF OF THEOREM 5
We provide the basic convergence result for each stage k, and we start by deriving a similar bound as that of Lemma 1 as follows: T (k)∑ t=1 ( E [ f(x (k) i• (t)) ] − f(x?) ) ≤ T (k)∑ t=1 E [∣∣f(x?)− f̂(x?; δ(k))∣∣]+ T (k)∑ t=1 E [∣∣f(x(k)i• (t))− f̂(x(k)i• (t); δ(k))∣∣]
+ T (k)∑ t=1
E [ Λ(k),?(t) ] − E [ Λ(k),?(t+ 1) ] 2η(k) + p (k) 1 Lf + 1 2 T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥2]
+ p2Lf T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥]
(27) where Λ(k),?(t) = ∑N i=1 ‖x (k) i (t) − x?‖2 and p (k) 1 satisfies the following bound, according to compactness of the set X,
p (k) 1 = 2N max i∈V {‖x(k)avg(1)− x (k) i (1)‖}+
2Nαβ
1− β ( N∑ i=1 ‖x(k)i (1)‖ ) ≤ ( 4N + 2αβ 1− β N2 ) RX.
(28)
The left-hand side of (27) can be further bounded by using the strong convexity of f, that is,
1
T (k) T (k)∑ t=1 ( f(x (k) i• (t))− f(x ?) ) ≥ 〈 ∇f(x?), x̂(k)i• (T (k))− x? 〉 + Nµf 2 ‖x̂(k)i• (T (k))− x?‖2
Applying the first-order optimality condition to the preceding inequality, i.e., 〈∇f(x?), x − x?〉 ≥ 0 for any x ∈ X, yields
1
T (k) T (k)∑ t=1 ( f(x (k) i• (t))− f(x ?) ) ≥ Nµf 2 ‖x(k+1)i• (1)− x ?‖2 (29)
where the second inequality follows from the non-expansiveness of the Euclidean projection projX(·), and the last equality from Step 3 in Algorithm 2. Combining the inequalities (27), (28) and (29), we have for any i• ∈ V,
Nµf 2
E [ ‖x(k+1)i• (1)− x ?‖2 ] ≤ ∑N i=1 E [ ‖x(k)i (1)− x?‖2 ] 2η(k)T (k) + ( 4N + 2αβ 1− β N2 ) LfRX 1 T (k)
+ 1
T (k) T (k)∑ t=1 E [∣∣f(x?)− f̂(x?; δ(k))∣∣]+ 1 T (k) T∑ t=1 E [∣∣f(x(k)i• (t))− f̂(x(k)i• (t); δ(k))∣∣]
+ 1
2T (k) T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥2]+ p2Lf 1T (k) T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥].
(30) From (30) we find that the convergence depends on the properties of the DistZOOs, and we first derive the dimension-dependence error bounds for DistZOO g̃OPi (·) and the bounds for g̃TPi (·) naturally follows from the derivations.
For DistZOO g̃OPi (·), when fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), it follows from (17), (30) and Lemma 4(i) that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(k)T (k)
+ 4 Lf µf δ(k) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (k) + 2p2 Lf µf Cd η(k) δ(k) + 1 µf C2d2 η(k) (δ(k))2 . (31)
On the other hand, we have
T (k) = ak−1T (1), η(k) = 1
ak−1 η(1), δ(k) =
1
bk−1 δ(1) (32)
This, together with inequality (31), gives E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(1)T (1)
+ 1( min { a, b, ab , a b2
})k−1 × (
4 Lf µf δ(1) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (1)
+ 2p2 Lf µf Cd
η(1) δ(1) + 1 µf C2d2 η(1) (δ(1))2
) .
(33)
By substituting T (1) = m = 1, η(1) = 4min{b, a b2 }
3µf and δ(1) = 1 into the preceding relation, we
arrive at E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 3 4 min { b, ab2 } + R1( min { b, ab2
})k−1 (34)
where we have used the fact that min { a, b, ab , a b2 } = min { b, ab2 } , due to a > b2 (because of a b2 > 1) and b > 1, and
R1 = 4 Lf µf δ(1) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (1) + 2p2 Lf µf Cd η(1) δ(1) + 1 µf C2d2 η(1) (δ(1))2 .
We next show by induction that E [
max i∈V
{ ‖x(k)i (1)− x ?‖2 }] ≤ 4 max{R1, R 2 X}
hk−2 (35) where h = min { b, ab2 } . For k = 1, we can use the following bound to deduce that inequality (35)
holds, maxi∈V { ‖x(1)i (1) − x?‖2 } ≤ 4R2X ≤ 4hmax{R1, R2X}. We then assume that inequality (35) holds for k and show it holds for k + 1 as well,
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ 3 4h E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] + R1 hk−1
≤ 3 max{R1, R 2 X}
hk−1 + max{R1, R2X} hk−1 ≤ 4 max{R1, R 2 X} hk−1
which leads to the conclusion in (35). It is easy to verify that the total number of stages in Algorithm 2 is k\ = ⌊ loga ( T m + 1 )⌋ , and the final estimates returned by Algorithm 2 are x̄i(T ), i ∈ V. Hence, applying k\ + 1 to (35), we have
E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4 max{R1, R 2 X}
hk\+1−2 ≤ 4h 2 max{R1, R2X} hloga( T m+1) = 4h2 max{R1, R2X}(
T m + 1
) 1 logh(a)
(36) where we used the inequality that k\ ≥ loga ( T m + 1 ) − 1. We are left to find the minimum of logh(a) = logmin{b, a b2 }(a), which is achieved when b = a b2 . This yields the following final conver-
gence rate, that is, E [ maxi∈V { ‖x̄i(T ) − x?‖2 }] ≤ 4b
2 max{R1,R2X} (T+1)1/3
= O ( d2
(T+1)1/3
) , where the
equality follows from R1 = O ( d2 ) .
When fi ∈ Flip(Lf ,BRX) ∩ Fsmo(sf ,BRX) ∩ Fsc(µf ,BRX), the dimension-dependence bound can be obtained in a similar fashion.
For DistZOO g̃TPi (·), when fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), it follows from (17), (30) and Lemma 4(ii) that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(k)T (k)
+ 4 Lf µf δ(k) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (k) + L2f µf ( cd+ 2p2 √ c √ d ) η(k). (37)
By setting b = a and then substituting T (1) = 1, η(1) = 4a3µf and δ (1) = 1 into the preceding inequality it follows that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 3 4a + R2 ak−1
(38)
where R2 = 4 Lf µf δ(1) + 4 ( 2 + αβ1−βN ) Lf µf RX 1 T (1) + L2f µf ( cd+ 2p2 √ c √ d ) η(1). Then, following an argument similar to that of part (i), we obtain
E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4a
2 max{R2, R2X} T + 1 = O
( d
T + 1
) .
Similarly, when fi ∈ Flip(Lf ,BRX) ∩ Fsmo(sf ,BRX) ∩ Fsc(µf ,BRX), we have E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4a
2 max{R′2, R2X} T + 1 = O
( d
T + 1 ) where R′2 = 2 sf µf (δ(1))2 + 4 ( 2 + αβ1−βN ) Lf µf RX 1 T (1) + L2f µf ( cd+ 2p2 √ c √ d ) η(1). The proof is complete. | 1. What is the focus of the paper regarding zeroth-order methods for decentralized optimization?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and significance of the provided convergence results?
4. Are there any concerns regarding the comparisons with existing works and the consideration of decentralized network topology? | Summary Of The Paper
Review | Summary Of The Paper
This paper studied zeroth-order methods via one-point and two-point gradient estimators for decentralized optimization.
Review
Strength: It seems that this paper provided new convergence results for the decentralized setting.
Weakness:
However, the related works are either not well compared or missing entirely. So it's hard for me to judge the results of this paper. For instance, in Table 1,
the results for the Lipschitz class clearly also hold for Lipschitz and smooth class;
the non-accelerated result for the smooth class is O(d/T) (see, e.g., [1, 2] via directional derivatives), and an accelerated result of O(d^2/T^2) is obtained in [2] via two-point gradient estimators. Both results are much better than the one in the current submission, but the authors did not list and compare with them.
for the strongly convex class, the convergence rate is linear (e.g., [2] provides the accelerated rate (1 − μf/(sf d))^T), while the authors listed the much weaker sublinear results in Table 1. PS: I do not think the constrained/unconstrained domain will affect the comparison with the results of [2], since the current submission directly uses the non-expansiveness of the Euclidean projection proj(·) in their proofs (see Pages 11-12).
The authors consider the decentralized setting; however, their convergence results (see Table 1 and Theorems 1-4) do not depend on any parameters (e.g., spectral gap) of the decentralized network topology, so it is also hard for me to believe the correctness of their results.
[1] Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on optimization, 19(4):1574–1609, 2009.
[2] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527–566, 2017. |
ICLR | Title
Distributed Zeroth-Order Optimization: Convergence Rates That Match Centralized Counterpart
Abstract
Zeroth-order optimization has become increasingly important in complex optimization and machine learning when cost functions are impossible to be described in closed analytical forms. The key idea of zeroth-order optimization lies in the ability for a learner to build gradient estimates by queries sent to the cost function, and then traditional gradient descent algorithms can be executed replacing gradients by the estimates. For optimization over large-scale multi-agent systems with decentralized data and costs, zeroth-order optimization can continue to be utilized to develop scalable and distributed algorithms. In this paper, we aim at understanding the trend in performance transitioning from centralized to distributed zeroth-order algorithms in terms of convergence rates, and focus on multi-agent systems with time-varying communication networks. We establish a series of convergence rates for distributed zeroth-order subgradient algorithms under both one-point and two-point zeroth-order oracles. Apart from the additional node-to-node communication cost due to the distributed nature of algorithms, the established rates in convergence are shown to match their centralized counterpart. We also propose a multi-stage distributed zeroth-order algorithm that better utilizes the learning rates, reduces the computational complexity, and attains even faster convergence rates for compact decision set.
1 INTRODUCTION
Various machine learning tasks ultimately boil down to solving optimization problems of different forms, where the cost functions are formed jointly by the data accumulated in experiences and the model used in representing the learning framework. Gradient descent algorithms have been playing a foundational role in practically solving such optimization problems. However, for learning tasks with high-dimensional data and involved learning representations, access to the gradient of the cost function may turn out not possible: the cost function supporting the learning may not have a closed analytical form; or it is simply too computationally costly to be properly differentiated. Zeroth-order optimization provides a systemic way of facilitating gradient descent without direct access to gradient information, where oracles query the cost function values and generate gradient estimates. Zeroth-order methods have shown a number of successful applications, e.g., searching for adversarial attacks in deep learning Chen et al. (2019); Liu et al. (2019) and policy search in reinforcement learning Vemula et al. (2019).
The literature has also explored the potential of extending standard (centralized) zeroth-order optimization to distributed settings over multi-agent systems, where the data and cost functions are scattered across a network of decentralized agents. With the help of a communication network, the agents may collaboratively solve the network-level optimization task by iteratively exchanging decisions obtained from local zeroth-order descent. The rates of convergence of centralized zeroth-order optimization algorithms are now well understood for several sub-classes of convex functions. We are interested in systematically investigating how these convergence rates scale for the corresponding distributed algorithms, focusing on the case of time-varying communication networks.
1.1 PROBLEM DEFINITION
Consider a network of agents (nodes) V = {1, . . . , N}. The agents aim to collectively solve the following distributed optimization problem
minimize f(x) := N∑ i=1 fi(x)
subject to x ∈ X. (1)
Here x ∈ Rd is the decision variable, X ⊆ Rd is a convex decision space, and fi : Rd → R is a private convex objective function associated with agent i.
The communication network connecting the nodes is described by a time-varying graph G(t) = (V,E(t)), where E(t) is the set of activated links at time t. Let A(t) be a weight matrix at time t for the graph G(t): for each link (i, j) ∈ E(t), a weight [A(t)]ij > 0 is assigned, and [A(t)]ij = 0 for (i, j) /∈ E(t). We impose the following assumption on the communication network E(t) and the weight matrix A(t).
Assumption 1 (i) There exists a positive integer B such that the union graph (V,E(kB+ 1)∪ · · ·∪ E((k+1)B)) is strongly connected for all k ≥ 0; (ii) A(t) is doubly stochastic, i.e., ∑N i=1[A(t)]ij =
1 and ∑N j=1[A(t)]ij = 1; (iii) [A(t)]ii ≥ ξ for all i, and [A(t)]ij ≥ ξ if (j, i) ∈ E(t), where ξ > 0.
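For concreteness, the following sketch (NumPy only; the helper names and the reachability routine are our own, and links are treated as bidirectional for the connectivity check) verifies conditions (ii)–(iii) of Assumption 1 for a given weight matrix and condition (i) for one window of B consecutive edge sets.

```python
import numpy as np

def check_weights(A, edges, xi):
    """Assumption 1 (ii)-(iii): doubly stochastic, [A]_ii >= xi, [A]_ij >= xi on links."""
    ok = np.allclose(A.sum(axis=0), 1.0) and np.allclose(A.sum(axis=1), 1.0)
    ok = ok and bool(np.all(np.diag(A) >= xi))
    ok = ok and all(A[i, j] >= xi for (j, i) in edges)
    return ok

def union_connected(edge_sets, N):
    """Assumption 1 (i) over one window of B edge sets: union-graph connectivity."""
    M = np.eye(N, dtype=bool)
    for E in edge_sets:
        for (j, i) in E:
            M[i, j] = M[j, i] = True
    R = M.copy()
    for _ in range(N):               # transitive closure by repeated expansion
        R = R | ((R.astype(int) @ M.astype(int)) > 0)
    return bool(R.all())
```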
1.2 FUNCTION CLASSES
Let Fcvx denote the set of all convex functions on Rd. We define the following three classes of convex functions in Fcvx.
• The Lipschitz continuous class Flip(Lf ,X) contains the functions in Fcvx that admit a finite Lipschitz constant Lf over X, i.e.,
Flip(Lf ,X) := {g ∈ Fcvx : ∀x,x′ ∈ X, |g(x)− g(x′)| ≤ Lf‖x− x′‖}.
• The smooth class Fsmo(sf ,X) contains the functions that admit a sf -Lipschitz continuous gradient over X, i.e.,
Fsmo(sf ,X) = {g ∈ Fcvx : ∀x,x′ ∈ X, ‖∇g(x)−∇g(x′)‖ ≤ sf‖x− x′‖}.
• The strongly convex class Fsc(µf ,X) contains the functions that are µf -strongly convex, i.e.,
Fsc(µf ,X) = { g ∈ Fcvx : ∀x,x′ ∈ X, g(x) ≥ g(x′) + 〈∇g(x′),x− x′〉+
µf 2 ‖x− x′‖2
} .
1.3 CONTRIBUTIONS AND RELATED WORK
Contributions. We first present MAZOPA, a multi-agent zeroth-order projection averaging algorithm. In MAZOPA, the agents iteratively carry out local zeroth-order descents for their private costs to generate intermediate decisions, send these intermediate decisions to their neighbors over the graph G(t), and then update their decisions by projecting the average neighboring intermediate decisions onto X. For distributed zeroth-order oracles based on one-point or two-point estimates, a series of convergence rate results are established for the three basic function classes. Remarkably, the convergence rates for distributed algorithms are found to be matching their centralized counterpart, and sometimes even tighter rates are obtained, as summarized in Table 1. These results show that by paying the price of node-to-node communication, distributed zeroth-order optimization provides equal performance guarantees as those of centralized approaches. Next, we generalize the MAZOPA to a multi-stage setting, where the local zeroth-order descents take place for multiple steps before the projected averaging in a sequence of epochs. Such multi-stage MAZOPA is shown to be able to reduce the computational complexity, while providing improved convergences rates compared to MAZOPA when the decision set is compact.
Related Work. Recently, many types of centralized zeroth-order optimization algorithms have been studied, and their convergence rates (and the way they depend on the dimension) have been established in different settings. For unconstrained convex optimization, Nesterov & Spokoiny (2017) develops several types of two-point gradient estimators and achieves convergence rates that scale with dimension as O(d2). For constrained stochastic optimization, Duchi et al. (2015) establishes that the convergence rates are sharp up to factors at most logarithmic in the dimension. Zeroth-order optimization has a natural connection to bandit online optimization, where the latter focuses on dynamic environment where the objective functions are varying over time (see, e.g., Flaxman et al. (2005); Agarwal et al. (2010); Shamir (2013; 2017); Bubeck et al. (2017); Lattimore (2020); Hazan & Levy (2014)). In particular, the seminal work Flaxman et al. (2005) constructs a one-point gradient estimator (or one-point bandit feedback model) and achieves an O(d/T 1/4) average regret. For two-point gradient estimator, Shamir (2017) establishes the tightness of the dimension-dependent factor O( √ d) in the framework of zeroth-order stochastic mirror descent.
It is worth zooming into the literature on distributed zeroth-order/bandit online optimization. Due to the absence of a central coordinator, the algorithms developed should always rely on local computations and communications (e.g., Yuan & Ho (2015); Yi et al. (2020); Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018); Wan et al. (2020)). This makes the convergence analysis of the distributed zeroth-order/bandit online optimization algorithms more challenging. In Yuan & Ho (2015), the authors develop a class of distributed zeroth-order optimization algorithms that require two functional evaluations at each iteration, and establishes asymptotic convergence of the algorithm. Non-asymptotic convergence is established in Jakovetic et al. (2018); Hajinezhad et al. (2019); Wang et al. (2019); Pang & Hu (2019); Hajinezhad & Zavlanos (2018), but the dimension-dependence factors are either O(d2) or far from optimal. The work Yi et al. (2020) considers distributed online optimization with long-term constraints and establishes bounds on regret as well as constraint violations. To avoid Euclidean projection onto the constraint set, Wan et al. (2020) develops a distributed bandit online optimization algorithm based on conditional gradient descent and one-point bandit feedback, and achieves a regret scaling of O(T 3/4 √ lnT ).
2 THE MAZOPA ALGORITHM AND ITS CONVERGENCE RATES
In this section, we present the MAZOPA algorithm and establish the convergence rates for the three function classes.
2.1 DISTRIBUTED ZEROTH-ORDER ORACLES
Let n be a random vector in Rd drawn from some probability distribution. Then
f̂i(x; δ) := En [fi(x + δn)] (2)
is a smoothed function for fi. Here δ > 0 is a parameter setting the level of the smoothing. We introduce the following definition on distributed zeroth-order oracles (DistZOO).
Definition 1 (DistZOO) A vector g̃i(x; δ) ∈ Rd is called a distributed zeroth-order oracle at node i if the following conditions hold:
(i) E [g̃i(x; δ)] = ∇f̂i(x; δ) for all x ∈ Rd; (ii) If fi ∈ Flip(Lf ), then f̂i ∈ Flip(Lf ) as well, and there holds ∣∣f̂i(x; δ) − fi(x)∣∣ ≤ pdLfδ,
with pd being some positive constant;
(iii) If fi ∈ Fsmo(sf ), then ∣∣f̂i(x; δ)− fi(x)∣∣ ≤ 12 p̃dsfδ2 with p̃d being some positive constant.
A number of DistZOO satisfying Definition 1 can be obtained using existing gradient estimators, see, e.g., Liu et al. (2020). In the paper, we provide two representative gradient estimators that are commonly adopted in the literature. Let ui be a random vector independently generated from a unit sphere B1 in Rd. Then (e.g., Flaxman et al. (2005))
g̃OPi (x; δ) := fi(x + δtui)uid/δ (3)
is a one-point DistZOO satisfying Definition 1. Moreover,
g̃TPi (x; δ) := d
2δ
( fi(x + δu)− fi(x− δu) ) u (4)
is a two-point DistZOO satisfying Definition 1 (e.g., Shamir (2017)).
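A minimal sketch of the two oracles (3) and (4) is given below (Python/NumPy; the function names are our own). Each call touches the local cost f_i only through function values, as required of a zeroth-order method.

```python
import numpy as np

def sample_unit_sphere(d, rng):
    u = rng.standard_normal(d)
    return u / np.linalg.norm(u)

def g_one_point(f_i, x, delta, rng):
    """One-point DistZOO, eq. (3): (d/delta) * f_i(x + delta*u) * u."""
    u = sample_unit_sphere(x.size, rng)
    return f_i(x + delta * u) * u * x.size / delta

def g_two_point(f_i, x, delta, rng):
    """Two-point DistZOO, eq. (4): (d/(2*delta)) * (f_i(x+delta*u) - f_i(x-delta*u)) * u."""
    u = sample_unit_sphere(x.size, rng)
    return (f_i(x + delta * u) - f_i(x - delta * u)) * u * x.size / (2.0 * delta)
```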
2.2 THE MAZOPA ALGORITHM
We present the following Multi-Agent Zeroth-Order Projection Averaging (MAZOPA) algorithm, which consists of two steps, a local zeroth-order optimization step and a distributed averaging step. MAZOPA, whose pseudo-code is presented in Algorithm 1, is a variation of the multi-agent subgradient averaging algorithm proposed in Nedic et al. (2008); Nedic & Ozdaglar (2009); Nedic et al. (2010), where the local optimization step is executed by sub-gradient descent.
Algorithm 1 MAZOPA: x̂i(T ) = MAZOPA (xi(1), ηt, δt,X) Require: step size ηt, DistZOO g̃i(x; δt) with exploration parameter δt for all i ∈ V Ensure: xi(1) ∈ X, ∀i ∈ V
1: for t = 1 to T do 2: Node i queries the DistZOO at point xi(t) and receives g̃i(xi(t); δt) 3: Node i computes
vi(t) = xi(t)− ηt · g̃i(xi(t); δt)
4: Node i updates its state by using the information received from its instant neighbors
xi(t+ 1) = projX ( N∑ j=1 [A(t)]ijvj(t) )
5: end for Output: x̂i(T ) = 1T ∑T t=1 xi(t)
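The sketch below is our own rendering of Algorithm 1 (Python/NumPy), not the authors' code; the oracle, the doubly stochastic weights A(t), and the projection onto X are passed in as callables (for example, the DistZOO sketches above and an l1-ball projection for the numerical example).

```python
import numpy as np

def mazopa(x0, T, eta, delta, oracle, weights, project, rng):
    """Algorithm 1 (MAZOPA), sketch.

    x0: (N, d) array of initial iterates x_i(1) in X.
    eta(t), delta(t): step-size and exploration schedules.
    oracle(i, x, dlt, rng): DistZOO g_i(x; dlt) queried by node i.
    weights(t): doubly stochastic N x N matrix A(t).
    project(x): Euclidean projection onto X.
    Returns the running averages x_hat_i(T) for all nodes.
    """
    x = np.array(x0, dtype=float)
    N, _ = x.shape
    x_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        x_sum += x                                   # accumulate x_i(t) for the output average
        g = np.stack([oracle(i, x[i], delta(t), rng) for i in range(N)])
        v = x - eta(t) * g                           # Step 3: local zeroth-order step
        mixed = weights(t) @ v                       # Step 4: neighbour averaging with A(t)
        x = np.stack([project(mixed[i]) for i in range(N)])
    return x_sum / T                                 # x_hat_i(T) = (1/T) sum_t x_i(t)
```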
2.3 MAIN RESULTS
Let x̂i(T ) be the output of Algorithm 1 at agent i. We denote the optimal solution of problem (1) by x? = arg minx∈X f(x). Defining X◦ := {x + u : x ∈ X,u ∈ B1}, we present the following results on the convergence rate of the MAZOPA algorithm.
Theorem 1 Let Assumption 1 hold. Let the DistZOO take the form of g̃OPi(·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf, X◦) for all i ∈ V. Setting ηt = 1/(d T^{3/4}) and δt = 1/t^{1/4}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x?) = O(d/T^{1/4}).
(ii) Consider fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦). Setting ηt = 1/(d T^{2/3}) and δt = 1/t^{1/6}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x?) = O(d/T^{1/3}).
Theorem 2 Let Assumption 1 hold. Let the DistZOO take the form of g̃TPi(·). Set ηt = 1/√(dT) and δt = 1/√t, t = 1, . . . , T. Consider fi ∈ Flip(Lf, X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E[f(x̂i(T))] − f(x?) = O(√(d/T)).
With strong convexity, the convergence rates established above can be further strengthened.
Theorem 3 Let Assumption 1 hold. Let the DistZOO take the form of g̃OPi(·). Further assume that |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. We have the following convergence results for every i ∈ V and all T ≥ 1.
(i) Consider fi ∈ Flip(Lf, X◦) ∩ Fsc(µf, X◦) for all i ∈ V. Setting ηt = 1/(µf t) and δt = 1/t^{1/3}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x?) = O(d^2/T^{1/3}).
(ii) Consider fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦) ∩ Fsc(µf, X◦). Setting ηt = 1/(µf t) and δt = 1/t^{1/4}, t = 1, . . . , T, it holds that E[f(x̂i(T))] − f(x?) = O(d^2/√T).
Theorem 4 Let Assumption 1 hold. Let the DistZOO take the form of g̃TPi(·). Set ηt = 1/(µf t) and δt = 1/t, t = 1, . . . , T. Consider fi ∈ Flip(Lf, X◦) ∩ Fsc(µf, X◦), i ∈ V. Then, for every i ∈ V and all T ≥ 1, we have E[f(x̂i(T))] − f(x?) = O(d ln(T)/T).
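For quick reference, the (ηt, δt) choices appearing in Theorems 1–4 can be collected in a single helper. The wrapper below is our own convenience sketch (the case labels are ours); it simply returns the schedules as callables for use with the MAZOPA sketch above.

```python
import numpy as np

def schedules(case, d, T, mu_f=None):
    """Step-size/exploration choices from Theorems 1-4 (illustrative wrapper)."""
    if case == "one_point_lipschitz":        # Theorem 1(i):  O(d / T^{1/4})
        return (lambda t: 1.0 / (d * T ** 0.75)), (lambda t: t ** -0.25)
    if case == "one_point_smooth":           # Theorem 1(ii): O(d / T^{1/3})
        return (lambda t: 1.0 / (d * T ** (2.0 / 3.0))), (lambda t: t ** (-1.0 / 6.0))
    if case == "two_point_lipschitz":        # Theorem 2:     O(sqrt(d / T))
        return (lambda t: 1.0 / np.sqrt(d * T)), (lambda t: t ** -0.5)
    if case == "one_point_strongly_convex":  # Theorem 3(i):  O(d^2 / T^{1/3})
        return (lambda t: 1.0 / (mu_f * t)), (lambda t: t ** (-1.0 / 3.0))
    if case == "two_point_strongly_convex":  # Theorem 4:     O(d ln T / T)
        return (lambda t: 1.0 / (mu_f * t)), (lambda t: 1.0 / t)
    raise ValueError(case)
```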
3 MULTISTAGE MAZOPA: ADAPTIVE LOCAL DESCENT
We now propose a multi-stage variant of Algorithm 1. We impose the following compactness assumption on the constraint set X.
Assumption 2 There exists 0 < RX <∞ such that ‖x‖ ≤ RX for all x ∈ X.
3.1 THE ALGORITHM
The basic idea is to divide the optimization process into a sequence of epochs, each of which has an exponentially decreasing step size and an exponentially increasing iteration number. The updates in the inner loop of each stage are just made according to Algorithm 1 with fixed step size. In each stage only the average point is maintained and used as the starting point of the next stage. This idea of setting up multi-stage optimization algorithms was originally explored in Hazan & Kale (2011).
Take positive integers m ≥ 1 and a ≥ 2. Let k\ = ⌊log_a(T/m + 1)⌋, where ⌊x⌋ denotes the largest integer with value no greater than x ∈ R. We divide the T time steps into k\ epochs as follows:
Epoch 1 : 1, . . . , T (1);
Epoch 2 : T (1) + 1, . . . , T (2);
...
Epoch k\ : T (k \−1) + 1, . . . , T (k \).
Here T (1) = m, T (2) = am, . . . , T (k \) = ak \
m. For the jth epoch, all agents will run the MAZOPA algorithm, and denote output of the j-th epoch at agent i by x̂(j)i (T
(j)). The pseudo-code of the resulting multi-stage MAZOPA is presented in Algorithm 2. Compared to the MAZOPA algorithm, the multi-stage MAZOPA has the following advantages:
(i) Multistage MAZOPA only requires each node to project its estimates onto the ball BRX, rather than onto the constraint set X, in each epoch. In particular, the multistage MAZOPA algorithm significantly reduces the number of Euclidean projections onto the constraint set X from T to k\. This makes the algorithm more computationally efficient.
(ii) Multistage MAZOPA better utilizes the step size rules, in the sense that at earlier epochs of the algorithm, larger step sizes are adopted to facilitate convergence, while smaller step sizes are adopted to achieve better accuracy at later epochs.
3.2 OPTIMAL CONVERGENCE RATES
We now modify the definitions of Flip, Fsmo and Fsc by replacing X with BRX , with slight abuse of notation. As it turns out, the multistage MAZOPA enjoys refined convergence rates.
Algorithm 2 Multistage MAZOPA Require: exploration parameter δ(1), step size η(1), T (1) = m, total number of iterations T , integer
a ≥ 2, and scalar b > 1 Ensure: x(1)i (1) ∈ X for all i ∈ V, and set k = 1
1: while j = 1, . . . , k\ do 2: Call Algorithm 1 to obtain
x̂ (j) i (T (j)) = MAZOPA ( x (j) i (1), η (j), δ(j),BRX )
3: Compute x(j+1)i (1) = projX ( x̂ (j) i (T (j)) ) 4: Update η(k+1) = 1aη (k) and δ(j+1) = 1b δ (j) 5: Update T (j+1) = aT (j) 6: Update j = j + 1 7: end while
Output: x̄i(T ) = projX ( x̂ (k\) i (T (k\)) )
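A compact rendering of Algorithm 2 is sketched below, reusing the mazopa routine sketched after Algorithm 1; the epoch bookkeeping follows T^(j+1) = a·T^(j), η^(j+1) = η^(j)/a, and δ^(j+1) = δ^(j)/b. The projection callables and variable names are our own illustrative choices, not the authors' code.

```python
import numpy as np

def multistage_mazopa(x0, T, a, b, eta1, delta1, oracle, weights,
                      project_X, project_ball, m=1, rng=None):
    """Algorithm 2 (multistage MAZOPA), sketch.

    Inner epochs run MAZOPA over the ball B_RX (project_ball); only the epoch
    averages are projected back onto the constraint set X (project_X).
    """
    k_max = int(np.floor(np.log(T / m + 1.0) / np.log(a)))    # number of epochs k\
    x, eta, delta, T_j = np.array(x0, dtype=float), eta1, delta1, m
    for _ in range(max(k_max, 1)):
        x_hat = mazopa(x, T_j, lambda t, e=eta: e, lambda t, dl=delta: dl,
                       oracle, weights, project_ball, rng)     # Step 2
        x = np.stack([project_X(xi) for xi in x_hat])          # Step 3
        eta, delta, T_j = eta / a, delta / b, int(a * T_j)     # Steps 4-5
    return x                                                   # x_bar_i(T) for all nodes
```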
Theorem 5 Let Assumptions 1 and 2 hold. Let the DistZOO take the form of g̃TPi(·). Set a = b, T(1) = m = 1, η(1) = 4a/(3µf) and δ(1) = 1. Consider fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), i ∈ V. We have E[maxi∈V ‖x̄i(T) − x?‖^2] = O(d/(T+1)).
The idea of the analysis leading to Theorem 5 can also be extended to one-point oracles. If the DistZOO takes the form of g̃OPi(·), one needs to impose the following assumption on the objective functions: |fi(xi(t) + δtui(t))| ≤ C for all i ∈ V. For fi ∈ Flip(Lf, BRX) ∩ Fsc(µf, BRX), setting a = b^3, the final estimates enjoy a convergence rate of E[maxi∈V ‖x̄i(T) − x?‖^2] = O(d^2/(T+1)^{1/3}). For fi ∈ Flip(Lf, BRX) ∩ Fsmo(sf, BRX) ∩ Fsc(µf, BRX), setting a = b^4, there holds E[maxi∈V ‖x̄i(T) − x?‖^2] = O(d^2/√(T+1)).
4 NUMERICAL EXAMPLES
In this section, we evaluate the performance of the proposed algorithms on a distributed ridge regression problem.
System setup. The optimization problem has the following form:
minimize f(x) = \sum_{i=1}^N ( (1/2)(a_i^T x − b_i)^2 + ρ‖x‖^2 )   subject to   ‖x‖_1 ≤ k        (5)
where x ∈ R^d is the optimization variable, and the data pair (a_i, b_i) ∈ R^d × R is known only to node i, with a_i and b_i generated from the unit normal distribution.
Network setup. We implement the proposed algorithms over a randomly generated network consisting of N = 50 nodes, shown in Fig. 1. In the simulations, we set d = 10, k = 3/4, ρ = 1/2, and RW = 3/4. We evaluate the performance of the algorithms by averaging over 10 independent runs. The weight matrix associated with the graph is generated according to the maximum-degree weights:
[A(t)]_{ij} = 1/(1 + d_max) if (j, i) ∈ E_t;   [A(t)]_{ii} = 1 − d_i/(1 + d_max);   [A(t)]_{ij} = 0 if (j, i) ∉ E_t,
where d_max = max_{i∈V} d_i is the maximum degree of G_t (d_i denotes the degree of node i).
Results. The performance of MAZOPA and multistage MAZOPA is illustrated by plotting the maximum function errors, max_{i∈V} f(x̂_i(T)) and max_{i∈V} f(x̄_i(T)), as a function of the number of iterations T in Fig. 2. As a benchmark, the convergence performance of the gradient algorithm is displayed in Fig. 2 as well. From the numerical results it is clear that the maximum function errors vanish for all zeroth-order algorithms. In fact, the convergence performance of two-point MAZOPA is comparable to that of the gradient method. Moreover, the multistage variants generally exhibit better convergence, most noticeably for two-point MAZOPA. These numerical results are consistent with the theoretical findings of the paper.
Reproduction of the results. The code used for producing this numerical example is provided in the supplementary material.
5 CONCLUSIONS
We have established a series of convergence rates for distributed zeroth-order subgradient algorithms that match their centralized counterparts for the Lipschitz, smooth, and strongly convex function classes. These results provide theoretical benchmarks for zeroth-order approaches over complex dynamic networks. We also proposed a multi-stage variant of the algorithm that better utilizes the learning rates and attains further improved convergence rates. In future work, it is worth exploring the connection between the convergence rates and the underlying communication complexity for distributed zeroth-order algorithms.
A KEY LEMMAS
We first establish the basic convergence result for Algorithm 1 that is based on DistZOO g̃i(x; δt), which plays a crucial role in subsequent analyses. We will sometimes use i• to denote a node in V just to highlight the focus on a given node (but i• indeed may take any value in V and therefore it is a generic node).
Lemma 1 Let Assumption 1 hold. Let g̃i(xi(t); δt) be a DistZOO that satisfies Definition 1 (i) and (ii). Then, for any i• ∈ V and all T ≥ 1, there holds
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ T∑ t=1 N∑ i=1 E [∣∣fi(x?)− f̂i(x?; δt)∣∣]
+ T∑ t=1 N∑ i=1 E [∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣]
+ T∑ t=1 E [Λ?(t)]− E [Λ?(t+ 1)] 2ηt + p1Lf
+ 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + p2Lf
T−1∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖ ] where p1 = 2N maxi∈V{‖xavg(1)−xi(1)‖}+ 2Nαβ1−β (∑N i=1 ‖xi(1)‖ ) , p2 = 2N ( 3α 1−β + 4 ) ,
and Λ?(t) = ∑N i=1 ‖xi(t) − x?‖2 with xavg(1) = 1 N ∑N i=1 xi(1), α = ( 1− ξ4N2 )−2 , and
β = ( 1− ξ4N2 )1/B .
Before presenting the proof of Lemma 1, we provide the following two supporting lemmas. The first lemma characterizes the convergence property of the transition matrix induced by weight matrix A(t) (see Nedic et al. (2008)).
Lemma 2 Define the transition matrix as A(t:ℓ) = A(t)A(t−1)···A(ℓ+1)A(ℓ) for all t ≥ ℓ ≥ 1, and write A(t:t) = A(t). Then A(t:ℓ) satisfies |[A(t:ℓ)]_{ij} − 1/N| ≤ αβ^{t−ℓ+1}, where α = (1 − ξ/(4N^2))^{−2} and β = (1 − ξ/(4N^2))^{1/B}.
The second lemma establishes the accumulated disagreement for every node in the network.
Lemma 3 (Disagreement) Let Assumption 1 hold. For every node i ∈ V, we have T∑ t=1 ‖xavg(t)− xi(t)‖ ≤ ‖xavg(1)− xi(1)‖+ αβ 1− β ( N∑ i=1 ‖xi(1)‖ )
+
( 3α
1− β + 4 ) T−1∑ t=1 ηt N∑ i=1 ‖g̃i(xi(t); δt)‖
where xavg(t) = 1N ∑N i=1 xi(t).
Proof. To simplify the presentation, we denote
ṽi(t) = N∑ j=1 [A(t)]ijvj(t)
si(t) = projX (ṽi(t))− ṽi(t).
Step 3 in Algorithm 1 can be rewritten as
xi(t+ 1) = ṽi(t) + si(t).
Our analysis relies on the estimate of ‖si(t)‖, which can be bounded as follows:
‖si(t)‖ ≤ ∥∥∥projX (ṽi(t))− N∑
j=1
[A(t)]ijxj(t) ∥∥∥+ N∑
j=1
[A(t)]ij ‖ηtg̃j(xj(t); δt)‖
where the inequality is based on Step 3 in Algorithm 1 and the fact that A(t) is doubly stochastic (cf. Assumption 1). Using the non-expansiveness of the Euclidean projection projX(·) and the fact that \sum_{j=1}^N [A(t)]_{ij} x_j(t) ∈ X, we have
‖si(t)‖ ≤ 2 N∑ j=1 [A(t)]ij ‖ηtg̃j(xj(t); δt)‖ . (6)
We now derive the general expressions for xavg(t+ 1) and xi(t+ 1), respectively. For xavg(t+ 1), we have
xavg(t+ 1) = xavg(t)− 1
N N∑ i=1 ηtg̃i(xi(t); δt) + 1 N N∑ i=1 si(t).
Applying the preceding inequality recursively, we get
xavg(t+ 1) = xavg(1)− t∑ `=1 1 N N∑ i=1 η`g̃i(xi(`); δ`) + t∑ `=1 1 N N∑ i=1 si(`). (7)
Similarly, for xi(t+ 1), we have
xi(t+ 1) = N∑ j=1 [A(t : 1)]ijxj(1)− t∑ `=1 N∑ j=1 [A(t : `)]ijη`g̃j(xj(`); δ`) + t−1∑ `=1 N∑ j=1 [A(t : `+ 1)]ijsj(`) + si(t). (8) Combining (7) and (8), gives ‖xavg(t+ 1)− xi(t+ 1)‖ ≤ N∑ j=1 ∣∣∣∣[A(t : 1)]ij − 1N ∣∣∣∣ ‖xj(1)‖+ t∑ `=1 N∑ j=1 ∣∣∣∣[A(t : `)]ij − 1N ∣∣∣∣ η`‖g̃j(xj(`); δ`)‖
+ t−1∑ `=1 N∑ j=1 ∣∣∣∣[A(t : `+ 1)]ij − 1N ∣∣∣∣ ‖sj(`)‖+ ‖si(t)‖+ 1N N∑ i=1 ‖si(t)‖.
(9) Combining the results in (6), (9) and Lemma 2, leads to
‖xavg(t+ 1)− xi(t+ 1)‖ ≤ αβt (
N∑ i=1 ‖xi(1)‖
) + 3α
t∑ `=1 βt−`η` N∑ i=1 ‖g̃i(xi(`); δ`)‖+ 4ηt N∑ i=1 ‖g̃i(xi(t); δt)‖
where we used the following relation, based on (6):
‖si(t)‖+ 1
N N∑ i=1 ‖si(t)‖ ≤ 2 N∑ i=1 ‖si(t)‖ ≤ 4 N∑ i=1 N∑ j=1 [A(t)]ij ‖ηtg̃j(xj(t); δt)‖ ≤ 4ηt N∑ i=1 ‖g̃i(xi(t); δt)‖.
This implies that
T∑ t=1 ‖xavg(t)− xi(t)‖ ≤ ‖xavg(1)− xi(1)‖+ α ( N∑ i=1 ‖xi(1)‖ ) T−1∑ t=1 βt
+ 3α T−1∑ t=1 t∑ `=1 βt−`η` N∑ i=1 ‖g̃i(xi(`); δ`)‖+ 4 T−1∑ t=1 ηt N∑ i=1 ‖g̃i(xi(t); δt)‖.
(10)
This, in combination with (10), leads to the final bound.
[Proof of Lemma 1]. Denote
Λ(t) = N∑ i=1 ‖xi(t)− x‖2, ∀x ∈ X, t ≥ 1. (11)
We follow the standard analysis by deriving the general evolution of Λ(t),
Λ(t+ 1) = N∑ i=1 ∥∥∥∥∥projX N∑ j=1 [A(t)]ijvj(t) − x∥∥∥∥∥ 2 ≤ N∑ i=1 ‖vi(t)− x‖2 (12)
where the inequality follows from the non-expansiveness of the Euclidean projection and the convexity of norm square function. Expanding the term further gives
Λ(t+ 1) = Λ(t) + N∑ i=1 ‖ηtg̃i(xi(t); δt)‖2 − 2ηt N∑ i=1 〈g̃i(xi(t); δt),xi(t)− x〉 (13)
Taking the expectation on both sides and using the following property of DistZOO (cf. Definition 1(i)):
E [g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt) we further obtain
E [Λ(t+ 1)] = E [Λ(t)] + η2t N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x; δt) ) (14)
which implies T∑ t=1 N∑ i=1 E [ f̂i(xi(t); δt) ] − f̂(x; δt) ≤ T∑ t=1 E [Λ(t)]− E [Λ(t+ 1)] 2ηt + 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] . (15) We turn our attention to the left-hand side of (15). By adding and subtracting the term f̂i(xi•(t); δt) and using the Lipschitz continuity of f̂i (cf. Definition 1(ii)), it follows that
T∑ t=1 N∑ i=1 f̂i(xi(t); δt) ≥ T∑ t=1 f̂(xi•(t); δt)− Lf T∑ t=1 N∑ i=1 ‖xi(t)− xi•(t)‖
which, together with (15), yields
T∑ t=1 E [ f̂(xi•(t); δt) ] − f̂(x; δt) ≤ T∑ t=1 E [Λ(t)]− E [Λ(t+ 1)] 2ηt
+ 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + Lf T∑ t=1 N∑ i=1 ‖xi(t)− xi•(t)‖.
(16)
The desired result follows by relating the left-hand side to the original function f , using the disagreement estimate in Lemma 3, and setting x = x?.
B PROOFS OF THEOREMS 1 AND 2
We first provide the following lemma, which characterizes the properties of the DistZOOs in (3) and (4). Its proof can be derived by resorting to Flaxman et al. (2005); Shamir (2017), and is omitted here to save space.
Lemma 4 Suppose that fi ∈ Flip(Lf ) for all i ∈ V. We have the following.
(i) For g̃OPi (·), there hold pd = 1 and
E [ ‖g̃OPi (xi(t); δt)‖2 ] ≤ ( Cd
δt )2 where C = maxi∈V |fi(xi(t) + δtui(t))| with xi(t) ∈ X and ui(t) uniformly drawn from B1. We have p̃d = 1 when fi ∈ Fsmo(sf ).
(ii) For g̃TPi (·), there hold pd = 1 and E [ ‖g̃TPi (xi(t); δt)‖2 ] ≤ cL2fd
where c is some universal constant. In addition, p̃d = 1 when fi ∈ Fsmo(sf ).
Now we are ready to prove Theorems 1 and 2.
(i) First, using the property of the DistZOO g̃OPi (·) (cf. Definition 1(iii)), it follows that N∑ i=1 ∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣ ≤ NLfδt N∑ i=1 ∣∣fi(x?)− f̂i(x?; δt)∣∣ ≤ NLfδt (17)
We now focus on the case of fi ∈ Flip(Lf ,X◦). Combining with inequality (17), the results in Lemmas 4(i) and 1, gives
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2ηt E [Λ?(1)]
+ 1
2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt
(18)
where we used the fact that ηt is a function of T and Jensen’s inequality, i.e., E [ ∥∥g̃OPi (xi(t); δt)∥∥ ] ≤ (E[ ∥∥g̃OPi (xi(t); δt)∥∥2 ])1/2 ≤ Cdδt .
Substituting the explicit expressions of η = 1 dT 3/4 and δt = 1t1/4 into (18) and dividing both sides by T , we find that
1
T T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) = O ( d T 1/4 ) (19)
where we used the inequality that \sum_{t=1}^T t^a = O(T^{1+a}) for all a ≠ −1. The desired result follows by using the convexity of the function f, i.e., (1/T) \sum_{t=1}^T f(x_{i•}(t)) ≥ f(x̂_{i•}(T)). When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it follows from the property of the DistZOO g̃OPi(·) (cf. Definition 1(iii)) that
N∑ i=1 ∣∣fi(xi•(t))− f̂i(xi•(t); δt)∣∣ ≤ 1 2 Nsfδ 2 t
N∑ i=1 ∣∣fi(x?)− f̂i(x?; δt)∣∣ ≤ 1 2 Nsfδ 2 t .
(20)
We then combine the preceding inequality and the results in Lemmas 4(i) and 1 to get T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf +Nsf T∑ t=1 δ2t + 1 2ηt E [Λ?(1)]
+ 1
2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
(21)
In contrast with the bound in (18), the second term on the right-hand side of (21) now becomes Nsf ∑T t=1 δ 2 t , which gives us much more space when choosing δt; we can show that the choices of η = 1 dT 2/3 and δt = 1t1/6 yield the optimal convergence rate O ( d T 1/3 ) .
(ii) When fi ∈ Flip(Lf ,X◦), we have the following result for Algorithm 1 running with DistZOO g̃TPi (·):
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2ηt E [Λ?(1)]
+
( 1
2 cd+ p2
√ c √ d ) NL2fηtT
(22)
where we have used the bounds in (17), Lemma 4(ii) and Lemma 1. Then we can deduce from the terms 1ηt and ηtT that the optimal choice of ηt is 1√ T
. Hence, substituting the explicit expressions for ηt = 1√dT and δt = 1√ t
into (22), dividing both sides by T, and using the convexity of the function f, the desired bound can be concluded.
When fi ∈ Flip(Lf, X◦) ∩ Fsmo(sf, X◦), it can be shown that the term 2NLf \sum_{t=1}^T δt on the right-hand side of (22) is replaced by Nsf \sum_{t=1}^T δt^2, because of (20). As we discussed in the case of fi ∈ Flip(Lf, X◦), the convergence rate is determined by the terms involving 1/ηt and ηt·T. Hence, the convergence rate is the same as in the case when fi is only Lipschitz continuous. The proof is complete.
C PROOFS OF THEOREMS 3 AND 4
First, we claim that for the DistZOOs g̃OPi(·) and g̃TPi(·), the strong convexity of fi implies the strong convexity of its smoothed variant f̂i; the proof of this claim is straightforward. We now establish the basic convergence results for Algorithm 1 running with g̃OPi(·) and g̃TPi(·). It follows from (13) that
E [Λ(t+ 1)] = E [Λ(t)] + η2t N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 E [〈 ∇f̂i(xi(t); δt),xi(t)− x 〉] ≤ E [Λ(t)] + η2t
N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] − 2ηt N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) − µfηtE [Λ(t)]
(23) where in the equality we used the relation E [g̃i(xi(t); δt)] = ∇f̂i(xi(t); δt) (cf. Definition 1(i)) and in the inequality we used the strongly convexity of function f̂i. Summing the inequalities in (23) over t = 1 to t = T and regrouping the terms, we obtain T∑ t=1 N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) ≤ 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2
] + 1
2 T∑ t=1 ( 1 ηt (E [Λ(t)]− E [Λ(t+ 1)])− µfE [Λ(t)] )
= 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + 1 2 ( 1 η1 −µf ) E [Λ(1)]
+ 1
2 T∑ t=2 ( 1 ηt − 1 ηt−1 −µf ) E [Λ(t)]− 1 2ηT E [Λ(T + 1)] .
(24) By substituting the expression for ηt = 1µf t into (24) and dropping the negative term, it follows
T∑ t=1 N∑ i=1 ( E [ f̂i(xi(t); δt) ] − f̂i(x) ) ≤ 1 2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] . (25)
Then, following the same lines as that of the proof of Lemma 1, we have that, for DistZOOs g̃OPi (·) and g̃TPi (·), T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ T∑ t=1 E [∣∣f(x?)− f̂(x?; δt)∣∣]+ T∑ t=1 E [∣∣f(xi•(t))− f̂(xi•(t); δt)∣∣]
+ p1Lf + 1
2 T∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖2 ] + p2Lf T−1∑ t=1 ηt N∑ i=1 E [ ‖g̃i(xi(t); δt)‖ ] .
(26)
(i) We now derive the convergence rate results for Algorithm 1 running with DistZOOs g̃OPi (·). When fi ∈ Flip(Lf ,X◦) ∩ Fsc(µf ,X◦), we combine the results in (17), (26) and Lemma 4(i) to get T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf + 2NLf T∑ t=1 δt + 1 2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
By substituting ηt = 1µf t and δt = 1 t1/3 into the preceding inequality and using the convexity of F , it yields the following optimal bound
1
T T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) = O ( d2 T 1/3 ) .
When fi ∈ Flip(Lf ,X◦) ∩ Fsmo(sf ,X◦) ∩ Fsc(µf ,X◦), it follows from an argument similar to that of (21) that
T∑ t=1 ( E [ f(xi•(t)) ] − f(x?) ) ≤ p1Lf +Nsf T∑ t=1 δ2t + 1 2 NC2d2 T∑ t=1 ηt δ2t + p2NLfCd T∑ t=1 ηt δt .
It can be proven that the choice of δt = 1t1/4 yields the optimal convergence rate, that is, O ( d2√ T ) .
(ii) The proof for Algorithm 1 running with DistZOO g̃TPi (·) can be obtained in a similar way by exploiting the properties of the DistZOO g̃TPi (·).
D PROOF OF THEOREM 5
We provide the basic convergence result for each stage k, and we start by deriving a similar bound as that of Lemma 1 as follows: T (k)∑ t=1 ( E [ f(x (k) i• (t)) ] − f(x?) ) ≤ T (k)∑ t=1 E [∣∣f(x?)− f̂(x?; δ(k))∣∣]+ T (k)∑ t=1 E [∣∣f(x(k)i• (t))− f̂(x(k)i• (t); δ(k))∣∣]
+ T (k)∑ t=1
E [ Λ(k),?(t) ] − E [ Λ(k),?(t+ 1) ] 2η(k) + p (k) 1 Lf + 1 2 T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥2]
+ p2Lf T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥]
(27) where Λ(k),?(t) = ∑N i=1 ‖x (k) i (t) − x?‖2 and p (k) 1 satisfies the following bound, according to compactness of the set X,
p (k) 1 = 2N max i∈V {‖x(k)avg(1)− x (k) i (1)‖}+
2Nαβ
1− β ( N∑ i=1 ‖x(k)i (1)‖ ) ≤ ( 4N + 2αβ 1− β N2 ) RX.
(28)
The left-hand side of (27) can be further bounded by using the strong convexity of f, that is,
1
T (k) T (k)∑ t=1 ( f(x (k) i• (t))− f(x ?) ) ≥ 〈 ∇f(x?), x̂(k)i• (T (k))− x? 〉 + Nµf 2 ‖x̂(k)i• (T (k))− x?‖2
Applying the first-order optimality condition to the preceding inequality, i.e., 〈∇f(x?), x − x?〉 ≥ 0 for any x ∈ X, yields
1
T (k) T (k)∑ t=1 ( f(x (k) i• (t))− f(x ?) ) ≥ Nµf 2 ‖x(k+1)i• (1)− x ?‖2 (29)
where the second inequality follows from the non-expansiveness of the Euclidean projection projX(·), and the last equality from Step 3 in Algorithm 2. Combining the inequalities (27), (28) and (29), we have for any i• ∈ V,
Nµf 2
E [ ‖x(k+1)i• (1)− x ?‖2 ] ≤ ∑N i=1 E [ ‖x(k)i (1)− x?‖2 ] 2η(k)T (k) + ( 4N + 2αβ 1− β N2 ) LfRX 1 T (k)
+ 1
T (k) T (k)∑ t=1 E [∣∣f(x?)− f̂(x?; δ(k))∣∣]+ 1 T (k) T∑ t=1 E [∣∣f(x(k)i• (t))− f̂(x(k)i• (t); δ(k))∣∣]
+ 1
2T (k) T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥2]+ p2Lf 1T (k) T (k)∑ t=1 η(k) N∑ i=1 E [∥∥g̃i(x(k)i (t); δ(k))∥∥].
(30) From (30) we find that the convergence depends on the properties of the DistZOOs, and we first derive the dimension-dependence error bounds for DistZOO g̃OPi (·) and the bounds for g̃TPi (·) naturally follows from the derivations.
For DistZOO g̃OPi (·), when fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), it follows from (17), (30) and Lemma 4(i) that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(k)T (k)
+ 4 Lf µf δ(k) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (k) + 2p2 Lf µf Cd η(k) δ(k) + 1 µf C2d2 η(k) (δ(k))2 . (31)
On the other hand, we have
T (k) = ak−1T (1), η(k) = 1
ak−1 η(1), δ(k) =
1
bk−1 δ(1) (32)
This, together with inequality (31), gives E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(1)T (1)
+ 1( min { a, b, ab , a b2
})k−1 × (
4 Lf µf δ(1) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (1)
+ 2p2 Lf µf Cd
η(1) δ(1) + 1 µf C2d2 η(1) (δ(1))2
) .
(33)
By substituting T (1) = m = 1, η(1) = 4min{b, a b2 }
3µf and δ(1) = 1 into the preceding relation, we
arrive at E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 3 4 min { b, ab2 } + R1( min { b, ab2
})k−1 (34)
where we have used the fact that min { a, b, ab , a b2 } = min { b, ab2 } , due to a > b2 (because of a b2 > 1) and b > 1, and
R1 = 4 Lf µf δ(1) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (1) + 2p2 Lf µf Cd η(1) δ(1) + 1 µf C2d2 η(1) (δ(1))2 .
We next show by induction that E [
max i∈V
{ ‖x(k)i (1)− x ?‖2 }] ≤ 4 max{R1, R 2 X}
hk−2 (35) where h = min { b, ab2 } . For k = 1, we can use the following bound to deduce that inequality (35)
holds, maxi∈V { ‖x(1)i (1) − x?‖2 } ≤ 4R2X ≤ 4hmax{R1, R2X}. We then assume that inequality (35) holds for k and show it holds for k + 1 as well,
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ 3 4h E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] + R1 hk−1
≤ 3 max{R1, R 2 X}
hk−1 + max{R1, R2X} hk−1 ≤ 4 max{R1, R 2 X} hk−1
which leads to the conclusion in (35). It is easy to verify that the total number of stages in Algorithm 2 is k\ = ⌊ loga ( T m + 1 )⌋ , and the final estimates returned by Algorithm 2 are x̄i(T ), i ∈ V. Hence, applying k\ + 1 to (35), we have
E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4 max{R1, R 2 X}
hk\+1−2 ≤ 4h 2 max{R1, R2X} hloga( T m+1) = 4h2 max{R1, R2X}(
T m + 1
) 1 logh(a)
(36) where we used the inequality that k\ ≥ loga ( T m + 1 ) − 1. We are left to find the minimum of logh(a) = logmin{b, a b2 }(a), which is achieved when b = a b2 . This yields the following final conver-
gence rate, that is, E [ maxi∈V { ‖x̄i(T ) − x?‖2 }] ≤ 4b
2 max{R1,R2X} (T+1)1/3
= O ( d2
(T+1)1/3
) , where the
equality follows from R1 = O ( d2 ) .
When fi ∈ Flip(Lf ,BRX) ∩ Fsmo(sf ,BRX) ∩ Fsc(µf ,BRX), the dimension-dependence bound can be obtained in a similar fashion.
For DistZOO g̃TPi (·), when fi ∈ Flip(Lf ,BRX) ∩ Fsc(µf ,BRX), it follows from (17), (30) and Lemma 4(ii) that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 1 µfη(k)T (k)
+ 4 Lf µf δ(k) + 4
( 2 + αβ
1− β N ) Lf µf RX 1 T (k) + L2f µf ( cd+ 2p2 √ c √ d ) η(k). (37)
By setting b = a and then substituting T (1) = 1, η(1) = 4a3µf and δ (1) = 1 into the preceding inequality it follows that
E [
max i∈V
{ ‖x(k+1)i (1)− x ?‖2 }] ≤ E [ max i∈V { ‖x(k)i (1)− x ?‖2 }] 3 4a + R2 ak−1
(38)
where R2 = 4 Lf µf δ(1) + 4 ( 2 + αβ1−βN ) Lf µf RX 1 T (1) + L2f µf ( cd+ 2p2 √ c √ d ) η(1). Then, following an argument similar to that of part (i), we obtain
E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4a
2 max{R2, R2X} T + 1 = O
( d
T + 1
) .
Similarly, when fi ∈ Flip(Lf ,BRX) ∩ Fsmo(sf ,BRX) ∩ Fsc(µf ,BRX), we have E [
max i∈V
{ ‖x̄i(T )− x?‖2 }] ≤ 4a
2 max{R′2, R2X} T + 1 = O
( d
T + 1 ) where R′2 = 2 sf µf (δ(1))2 + 4 ( 2 + αβ1−βN ) Lf µf RX 1 T (1) + L2f µf ( cd+ 2p2 √ c √ d ) η(1). The proof is complete. | 1. What are the key contributions and novel aspects of the paper in developing distributed zeroth-order algorithms?
2. What are the strengths and weaknesses of the proposed algorithms compared to prior works, particularly in terms of their simplicity and extension from first-order algorithms?
3. How does the reviewer assess the clarity and organization of the paper's content, including the presentation of the main theorems and their failure to fully address the role of the communication graph?
4. What are the limitations of the numerical experiments conducted in the paper, and how do they fall short of fully validating the proposed algorithms' performance in various applications? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors develop a distributed zeroth-order algorithm over a time-varying communication graph, as well as its multi-stage variant with time-varying step size. Convergence rates are established and compared with those of the centralized algorithms.
Review
This paper is well organized and clearly written. The results seem to be reasonable.
The proposed algorithms look like simple extensions from the first-order to the zeroth-order. The authors should highlight the novel contributions of this paper. Are there any particular challenges in analyzing the zeroth-order algorithms, compared to analyzing the first-order ones?
The communication graph should play an important role in the established convergence rates, but the main theorems fail to clarify this issue. Detailed discussions would be helpful.
In the first paragraph, the authors motivate with applications like adversarial attacks in deep learning and policy search in reinforcement learning. However, the numerical experiments end up with a simple least squares problem. More extensive numerical experiments are necessary. |
ICLR | Title
Structured Stochastic Gradient MCMC
Abstract
Stochastic gradient Markov Chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior’s functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, where parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. This scheme can be further modified in a “dropout” manner, leading to even more scalability. By implementing the scheme on a ResNet-20 architecture, we obtain better predictive likelihoods and faster mixing time than full SGMCMC.
1 INTRODUCTION
There has been much recent interest in deep Bayesian neural networks (BNN) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). BNNs rely on ensemble averages over model parameters typically obtained from Markov chain Monte Carlo (MCMC) algorithms, which contrasts to regular neural networks that depend on a single set of parameters. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching.
The main downside of using SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI) algorithms that approximate the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018).
One downside of VI approximations is their solid distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These distributional assumptions are frequently over-simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018).
In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI. While our approach remains a sampling algorithm resembling SGMCMC, we speed up the mixing time by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break. It makes no assumptions on the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008).
In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by assuming a functional view on variational inference. We show how to sample from
this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model’s joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo.
In sum, our contributions are as follows:
• We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and any in-between.
• We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a "mean-field" version where all posterior correlations are broken.
• We show in both small and large scale experiments that our method well approximates posterior marginals and gives improved results over SGMCMC on Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy.
Our paper is structured as follows: Section 2 presents the related work to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details experiments and their results, and Section 7 contains our concluding thoughts.
2 RELATED WORK
Our work connects both to (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). For space limitations, we focus on the most related work at the intersection of both topics.
Among the earliest works to hybridize both approaches was (de Freitas et al., 2001), who constructed a variational proposal distribution in the Metropolis-Hastings step of MCMC. An improved approach was introduced in (Habib & Barber, 2018), where, by introducing low-dimensional auxiliary variables, they fit a more accurate approximating distribution. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and Wang et al. (2018); Gong et al. (2018), who leveraged meta-learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embody MCMC steps into the variational inference approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution, and then fitting parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics.
In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from the target distribution that respects an imposed independence structure.
3 PRELIMINARIES
Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully-factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that relies only on assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? In what follows, we show how such a scheme can be realized. We will first derive a modified energy function for Langevin dynamics that we can sample from and then prove that its negative exponential results in the optimal posterior approximation subject to specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution.
Before we explain our new method, we introduce the setup and common notation. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) =∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = −log p(θ, D) = −∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (1)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
A popular approach for approximating the entire posterior distribution is by deploying Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often times through the use of a random walk. While being very accurate and having asymptotic guarantees, these methods are known to not scale well with respect to both data and parameters (Brooks et al., 2011; Geyer, 1992).
Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples of these algorithms include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014).
As alluded to, the basis of SGMCMC algorithms is using a sampled minibatch of data D̃ from D to produce a differentiable, unbiased estimate of the posterior energy function:
U(θ) ≈ Û(θ; D̃) = −(N/|D̃|) ∑_{(x,y)∈D̃} log p(y|x, θ) − log p(θ). (2)
Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, the SGLD update is
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θÛ(θ^(t); D̃_t) + ξ_t where ξ_t ∼ N(0, ε_t I), (3)
with ε_t the step size at time step t.
Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂(t)(θ|D). Should the algorithms converge, then limt→∞ p̂(t)(θ|D) = p(θ|D).
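To make Eqs. (2)–(3) concrete, the sketch below shows a minibatch energy estimate and a single SGLD step in PyTorch; the callables log_lik_fn and log_prior_fn and the data handling are placeholder assumptions, not the implementation used in the paper.

import torch

def energy_hat(log_lik_fn, log_prior_fn, theta, batch, N):
    # Unbiased minibatch estimate of U(theta), Eq. (2); log_lik_fn and log_prior_fn are
    # placeholder callables for sum_{(x,y) in batch} log p(y|x, theta) and log p(theta).
    return -(N / len(batch)) * log_lik_fn(theta, batch) - log_prior_fn(theta)

def sgld_step(params, loss, step_size):
    # One SGLD update, Eq. (3): theta <- theta - (eps_t / 2) grad U_hat + N(0, eps_t I).
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(-0.5 * step_size * g + torch.randn_like(p) * step_size ** 0.5)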
4 STRUCTURED SGMCMC
By design, SGMCMC methods produce a fully joint posterior distribution over parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of
dimensionality. This is typically observed with slow convergence times and potentially unexplored parameter spaces. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI). This would reduce the number of various potential posterior correlations that the model would need to capture while sampling.
To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ1, . . . , θM . This partitioning structure is assumed to be known a priori. We will denote the distribution that respects this partitioning structure as q(θ) = ∏M i=1 qi(θi). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criteria, such as KL-divergence. This leads to a natural objective function to minimize:
J(q(θ)) = D_KL(q(θ) ‖ p(θ|D)) ≡ E_{θ∼q}[log(q(θ)/p(θ|D))] (4)
The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θi, θ̃¬i} for any i where θ̃ ∼ q and define a structured energy function:
U^(S)(θ) = ∑_{i=1}^M U_i^(S)(θ_i), with U_i^(S)(θ_i) := E_{θ̃∼q} U({θ_i, θ̃_¬i}) = −E_{θ̃∼q} log p(θ_i, θ̃_¬i, D). (5)
That is, we first define the marginals U (S)i (θi), where we marginalize U(θ) with respect to all q(θ)-factors except qi(θi), and then sum up these marginals to define U (S)(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U (S) allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm that actually approximates this distribution q(θ), which will be discussed shortly. Theorem 1. The unique solution to the KL minimization problem given in Eq. 4 is given by the Boltzmann distribution q(θ) ∝ exp{− ∑M i=1 U (S) i (θi)}. Please refer to the Supplement for the proof.
In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U^(S) (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally work well only with small amounts of data, and (2) more importantly, the marginals U_i^(S)(θ_i) do not have a closed-form solution but need to be approximated via samples from q. Luckily, since SGMCMC methods only need access to noisy estimates of U^(S), we can run these algorithms on a stochastic estimate of Eq. (5),
U^(S)(θ) ≈ Û^(S)(θ; D̃) = ∑_{i=1}^M E_{θ̃∼q} Û({θ_i, θ̃_¬i}; D̃), (6)
where Û(·) is defined in Eq. (2). In practice, at timestep t for i = 1, . . . ,M we estimate Eθ̃∼qÛ({θi, θ̃¬i}; D̃t) with a Monte Carlo approximation. In place of θ̃, we use a single sample of θ̃(t) taken from the current approximate distribution q̂(t) which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ(1), θ(2), . . . , θ(t)}). This leads to the following update step for structured SGLD (S-SGLD):
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θÛ^(S)(θ; D̃) + ξ_t where ξ_t ∼ N(0, ε_t I). (7)
Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) can be seen in Algorithm 2.
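For illustration, a minimal sketch of one S-SGLD step following Eqs. (6)–(7) and Algorithm 2 is given below; it treats θ as a list of per-group tensors, and the chain history, energy_hat_fn, and helper names are assumptions for the sketch rather than the actual implementation.

import random, torch

def structured_energy_hat(theta, history, energy_hat_fn):
    # Eq. (6): sum over groups i of U_hat([theta_i, theta_tilde_{-i}]); theta is a list of
    # per-group tensors (requires_grad=True), history is the list of past samples q_hat^(t).
    total = 0.0
    for i in range(len(theta)):
        theta_tilde = random.choice(history)              # one Monte Carlo sample from q_hat^(t)
        mixed = [theta[j] if j == i else theta_tilde[j] for j in range(len(theta))]
        total = total + energy_hat_fn(mixed)              # minibatch energy of the composed parameters
    return total

def s_sgld_step(theta, history, energy_hat_fn, step_size):
    # Eq. (7): theta <- theta - (eps_t / 2) grad U_hat^(S) + N(0, eps_t I), then extend the history.
    loss = structured_energy_hat(theta, history, energy_hat_fn)
    grads = torch.autograd.grad(loss, theta)
    with torch.no_grad():
        for p, g in zip(theta, grads):
            p.add_(-0.5 * step_size * g + torch.randn_like(p) * step_size ** 0.5)
    history.append([p.detach().clone() for p in theta])

Each new sample requires M evaluations of the minibatch energy, which is the O(M) scaling discussed in Section 5.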
Remark Since ∇θÛ (S) is an unbiased estimator for U (S), we are guaranteed to converge to q from sampling with S-SGMCMC with sufficiently decreasing learning rates so long as we are in a stationary state. While it is unlikely to have the procedure initialize to a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. A general proof of convergence is outside the scope of this work and is left to follow-up research.
An example of S-SGMCMC can be seen in Fig. 1(a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w1, w2, and w3; (b-left) dependence between w1 and w2 but independence between w3 and the other coefficients; (b-right) fully factorized. Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, it also appears that the variance shrinks as we induce these factorizations which is a commonly seen artifact when using VI.
5 STRUCTURED DROPOUT SGMCMC
While S-SGMCMC can successfully break dependencies between parameter groups, it does suffer computationally due to each parameter update scaling linearly with respect to M . This means that for a single new sample of θ, the model’s forward pass needs to be computed M different times on the same batch of data D̃, which can quickly become prohibitively expensive for deep models when M is large. Ideally, we would prefer a method that both closely resembles the S-SGMCMC procedure and scales independently from the partitioning scheme. This section presents such a method that achieves this, which we call structured dropout SGMCMC (Sd-SGMCMC), as well as an informal motivation and derivation of the method. More formal details and a theorem proving both SGMCMC and S-SGMCMC are limiting cases for Sd-SGMCMC can be found in the Supplement.
The main motivation for this technique can be seen by recognizing that the composition {θ(t)i , θ̃ (t) ¬i } from Eq. (6) can be rewritten as a sum of masked values rθ(t) + (1− r)θ̃(t) where θ̃(t) ∼ q(t) and rj = 1(i = j) for i = 1, . . . ,M . We can decouple the computational scaling from the number of parameter groups M by replacing the M deterministic masks r’s with K stochastically sampled masks r̃.1 Doing so results in a slightly different energy function and minibatch loss to optimize:
Û^(Sd)(θ^(t); D̃) ≈ M/(K · E[∑_{i=1}^M r_i]) · ∑_{k=1}^K Û(r̃^(t,k) θ^(t) + (1 − r̃^(t,k)) θ̃^(t,k); D̃) (8)
where r̃(t,k) is the kth sample of r̃ for timestep t. A formal justification for Eq. (8) can be found in the Supplement. These energy function approximations lead to the following update step for structured
1K is a hyperparameter that is chosen independent of M ; however, both M and the distribution of r̃ largely influence how small K can be due to how they affect the variance of the gradient of the associated posterior energy function.
Algorithm 2: S-SGMCMC
Input: initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; step sizes {ε_t}_{t=0,...,T−1}.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
1: for t = 0 to T − 1 do
2:   Sample minibatch D̃^(t) ⊂ D
3:   for i = 1 to M do
4:     Sample θ̃_¬i^(t) ∼ q̂_¬i^(t)
5:     Û_i^(S,t) = Û([θ_i^(t), θ̃_¬i^(t)]; D̃^(t))
6:   end for
7:   ∇_θÛ^(S,t) = ∑_{i=1}^M ∇_θÛ_i^(S,t)
8:   θ^(t+1) = SGMCMC_step(θ^(t), ∇_θÛ^(S,t), ε_t)
9: end for
10: return q̂^(T)(θ)
Table 1: IAC and ESS metrics for CIFAR-10, SVHN, and FMNIST with various methods. Subscripts after method names refer to the number of equally sized parameter groups, with |θ| meaning every parameter belongs to its own group. Best results are bolded.

Method          CIFAR-10 IAC↓  CIFAR-10 ESS↑  SVHN IAC↓  SVHN ESS↑  FMNIST IAC↓  FMNIST ESS↑
pSGLD           716            8.01           839        6.82       779          7.09
S-pSGLD_2       600            7.44           840        6.80       740          7.55
S-pSGLD_4       599            7.4            834        6.83       751          7.45
S-pSGLD_8       709            6.41           857        6.67       776          7.24
Sd-pSGLD_|θ|    546            8.01           803        7.00       677          8.24
SGHMC           727            7.94           858        6.59       795          6.83
S-SGHMC_2       583            7.49           949        5.74       928          5.67
S-SGHMC_4       624            7.03           961        5.66       915          5.77
S-SGHMC_8       904            4.97           1056       5.30       1142         4.87
Sd-SGHMC_|θ|    584            7.7            828        6.56       782          7.08
dropout variant of SGLD (Sd-SGLD):
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θÛ^(Sd)(θ; D̃) + ξ_t where ξ_t ∼ N(0, ε_t I). (9)
The corresponding update rules for the structured dropout variants for pSGLD (Sd-pSGLD) and SGHMC (Sd-SGHMC) are defined in the Supplement. The exact procedure for generating samples of the approximate posterior q̂(t) using structured dropout SGMCMC (Sd-SGMCMC) can also be found in the Supplement.
An example of this method (specifically Sd-SGLD with r̃i iid∼ Bernoulli(0.5) and K = 4) used on a linear regression model can be seen in Fig. 1(c). Of note, we can see that the dropout variant largely respects the independence structure imposed, but maybe not as strictly as the exact S-SGLD method seen in Fig. 1(b). Additionally, the posterior variance also seems to have shrunk similarly to S-SGLD when compared against SGLD.
Masking Distribution. If r̃_i iid∼ Bernoulli(ρ) and the imposed structure factorizes by activation components, the method starts to resemble dropout with rate ρ (Srivastava et al., 2014), the main difference being that instead of replacing a parameter value with 0, it is replaced with a sample from the approximate posterior distribution at time t, q̂^(t). While a Bernoulli distribution for r̃ is a natural choice, other distributions can be chosen as well. For instance, r̃_i iid∼ N(0, 1) or r̃_i iid∼ Beta(α, β) are both viable and can be seen as analogues of Gaussian and Beta dropout, respectively (Srivastava et al., 2014; Liu et al., 2019). Our experiments will largely focus on sampling r̃ from Bernoulli and uniform-over-[0, 1] (i.e., Beta(1, 1)) distributions.
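A sketch of the masked energy estimate in Eq. (8) with a Bernoulli(ρ) mask follows; the helper names and the plain-Bernoulli approximation of p^(Sd) are assumptions of the sketch, not the exact implementation.

import random, torch

def sd_energy_hat(theta, history, energy_hat_fn, rho=0.5, K=4):
    # Eq. (8): average of K masked energies, rescaled by M / (K * E[sum_i r_i]) = 1 / (K * rho).
    M = len(theta)
    total = 0.0
    for _ in range(K):
        theta_tilde = random.choice(history)                       # sample from the chain history
        r = (torch.rand(M) < rho).float()                          # r_i ~ Bernoulli(rho), i = 1..M
        mixed = [theta[i] if r[i] > 0 else theta_tilde[i] for i in range(M)]
        total = total + energy_hat_fn(mixed)
    return total * M / (K * rho * M)

Gradients then flow only to the parameter groups kept by each mask, so the cost per sample scales with K rather than with M.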
6 EXPERIMENTS
Overview In this section we evaluate our proposed approach on various models and datasets. Section 6.1 investigates the impact of the variational approximation on the algorithms’ mixing and autocorrelation times using a fully-connected network architecture on MNIST (LeCun et al., 2010). Section 6.2 studies our methods with ResNet-20 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Fashion MNIST (Xiao et al., 2017) and compares them for their accuracy and mixing time. Our experiments reveal that the chains in our proposed methods mix faster than SGMCMC and achieve either comparable or even higher accuracies on average.
We have also conducted experiments on uncertainty visualization, where we tested the proposed methodology on predictive uncertainty estimation by deploying a two-layer fully connected network
on a toy dataset. The uncertainty experimental setup and results, along with more technical details for the other experiments, can be found in the Appendix.
Metrics. The primary predictive metric of interest used to evaluate our proposal is classification accuracy. To calculate the accuracy, we take the average of an ensemble of 100 models whose weights are sampled from the past samples of the parameter chains. Additionally, we monitor the mixing time of the chains of our methods with both the integrated autocorrelation time (IAC) (Sokal, 1997; Goodman & Weare, 2010) and the effective sample size (ESS) (Geyer, 1992). IAC measures the correlation between samples in a chain and, in turn, describes the inefficiency of an MCMC algorithm. IAC is computed as τ_f = ∑_{τ=−∞}^{∞} ρ_f(τ), where ρ_f is the normalized autocorrelation function of the stochastic process that generated the chain for f, calculated as ρ̂_f(τ) = ĉ_f(τ)/ĉ_f(0), where ĉ_f(τ) = (1/(N−τ)) ∑_{n=1}^{N−τ} (f_n − µ_f)(f_{n+τ} − µ_f) and µ_f = (1/N) ∑_{n=1}^N f_n. We note that we calculated ĉ_f(τ) with a fast Fourier transform, as it is more computationally efficient than using the direct sum. ESS measures how many independent samples would be equivalent to a chain of correlated samples and is calculated as n_eff = n/(1 + (n − 1)p), where n is the number of samples and p is the autocorrelation.2 We note that a model with higher ESS and lower IAC has a faster mixing time. Please see the Appendix for the detailed implementation details and experimental setup for the metrics and our models.
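For concreteness, a NumPy sketch of these two diagnostics is shown below; it is an illustration of the formulas above, not the exact implementation used for the reported numbers, and the truncation window for the IAC sum is an assumption.

import numpy as np

def autocorr(chain):
    # Normalised autocorrelation rho_f(tau) of a 1-D chain, computed via FFT.
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    n = len(x)
    f = np.fft.rfft(x, n=2 * n)                       # zero-padding avoids circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf = acf / (n - np.arange(n))                    # the 1/(N - tau) normalisation of c_hat_f
    return acf / acf[0]                               # rho_hat_f(tau) = c_hat_f(tau) / c_hat_f(0)

def iac(chain, window=None):
    # Integrated autocorrelation time tau_f, with the infinite sum truncated at `window`.
    rho = autocorr(chain)
    window = window if window is not None else len(rho) // 2
    return 1.0 + 2.0 * np.sum(rho[1:window])

def ess(chain):
    # Effective sample size n_eff = n / (1 + (n - 1) p), with p taken as the lag-1 autocorrelation.
    rho = autocorr(chain)
    n = len(chain)
    return n / (1.0 + (n - 1) * rho[1])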
6.1 DROPOUT RATE & GROUP SIZE INVESTIGATION
The aim of this set of experiments is to study the effects that the number of independent parameter groups (or, alternatively, the amount of allowed posterior correlations) has on accuracy and mixing time when using our proposed methods. We compare pSGLD, S-pSGLD, and Sd-pSGLD with a Bernoulli(ρ) masking distribution with dropout rates ρ ∈ {0.1, 0.3, 0.5} on a fully-connected neural network with 2 hidden layers, with 50 hidden units each, trained and evaluated with MNIST using the standard train and test split. The model has 42,200 parameters in total. For S-pSGLD and Sd-pSGLD, these parameters are evenly distributed into M groups, where M ranges from 4 to 42,200. Accuracy, IAC, and ESS are reported in Fig. 2 using 100,000 posterior samples after a burn-in period of 150,000. More details on the implementation of the model regarding training and evaluation can be found in the Appendix.
As shown in Fig. 2(a), for S-pSGLD we observe that as we increase the number of groups the accuracy drops dramatically whereas Sd-pSGLD’s accuracy improves slightly and then remains fairly stable. In the best case, Sd-pSGLD achieves an accuracy of 96.3% with 32 groups and dropout rate of 0.5 which outperforms pSGLD with accuracy of 94.2%. We speculate that the dropout-like behavior is beneficial for regularizing the model (much like normal dropout), hence the improved accuracy across all dropout rates. Similarly, a single sample used for the Monte Carlo estimate in S-SGMCMC may not be enough as the number of groups M increase; however, increasing the number of samples in this scenario is infeasible due to S-SGMCMC scaling as O(M). Fig. 2(b-c) portrays the comparison between number of groups and mixing time metrics IAC and ESS. As the number of groups gradually increase, we note that S-pSGLD mixes faster, as does Sd-pSGLD to lesser and lesser degrees as ρ increases. This behavior is to be expected due to Theorem 2, with Sd-pSGLD exhibiting mixing times more similar to pSGLD when ρ = 0.5 and more similar to S-pSGLD when ρ = 0.1.
6.2 SYSTEMATIC COMPARISON ON REAL-WORLD DATA
The goal of these experiments is to test the proposed methodology on larger-scale datasets which mimic real-world data: CIFAR-10, SVHN, and FMNIST. We evaluate our methods on predictive accuracy and on the mixing times of the chains. We employ ResNet-20 for SVHN and FMNIST without any data augmentation to assess our methods. For CIFAR10 we employ the same data augmentation process as proposed in Cubuk et al. (2019). We evaluate the methods' accuracy over time and their overall mixing times (via IAC and ESS) with two base algorithms: pSGLD and SGHMC. For efficiency purposes, we limited our scope to models with either fully joint or fully factorized posteriors. As such, for the latter we employed Sd-SGMCMC methods
2We used the TensorFlow implementation for ESS which uses the direct sum for the autocorrelation.
as S-SGMCMC would not be feasible with the amount of parameter groups present. Bernoulli(ρ) and uniform masking distributions were investigated and are denoted as SBernoulli-SGMCMC and SUniform-SGMCMC respectively, with ρ varying between datasets as determined by a hyperparameter search (detailed in the Appendix).
In Fig. 3 we observe how quickly the proposed methods and the baseline SGMCMC methods approach their optimum accuracy over the course of training. As is shown, SBernoulli-SGMCMC and SUniform-SGMCMC appear to achieve optimal accuracy values much faster than SGMCMC on all datasets and with all base sampling schemes. In some cases, the variational methods achieve better accuracy values than the baseline methods, as seen for CIFAR10 in Fig. 3.
Mixing Time Comparisons We further validated our findings from Section 6.1 by evaluating the IAC and ESS on larger datasets using various methods. Both pSGLD and SGHMC were used as base methods in conjunction with both S-SGMCMC and Sd-SGMCMC using a Bernoulli masking distribution. IAC and ESS were calculated for these methods using the latest 5,000 samples after sampling for 300 epochs; the results of which can be found in Table 1. For CIFAR-10, we see that Sd-SGMCMC with every parameter in a different group mixes the fastest against all other methods. Likewise, for SVHN and FMNIST, Sd-pSGLD with every parameter belonging to its own group mixes faster than all other methods. At times it does appear that increasing the number of parameter groups causes slower mixing time for S-SGMCMC. This could potentially be attributed to large variance in the gradients from using only a single sample per Monte Carlo estimate.
6.3 EXPLORING PARTITIONING SCHEMES
This part of the study aims to explore the capabilities of the proposed methodology further. Here we explore different parameter partitioning schemes on regression datasets.
Here we present the results with different partitions on various regression datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). For the evaluation we chose a simple fully connected network with two layers of 50 neurons each and used SGLD as the optimizer. As a performance metric we chose the mean squared error (MSE). We tuned the learning rate, and the final results are the means and standard deviations of 5 runs with different seeds. We do not observe any specific systematic trends across the partitions, apart from the fact that in some cases random partitioning performs better. In that sense, either random partitioning or the fully-factorized partitioning, where every parameter is in its own group, appears to be a valid choice a priori; especially the latter, since we noted earlier the faster mixing times associated with this scheme. More details about the partitioning-scheme experiments can be found in the Appendix.
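As an illustration of how such partitions can be specified (the schemes themselves are detailed in the Appendix: random, by layer, by neuron, and fully factorized), the following is a hypothetical sketch for a small fully connected network; the module sizes, group counts, and granularity of the assignment are assumptions, not the configuration used in the experiments.

import random
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 1))
params = list(net.parameters())

# (a) random: each parameter tensor assigned to one of 3 groups at random
random_groups = {id(p): random.randrange(3) for p in params}

# (b) by layer: weight and bias of the same Linear module share a group
linear_layers = [m for m in net if isinstance(m, nn.Linear)]
layer_groups = {id(p): g for g, layer in enumerate(linear_layers) for p in layer.parameters()}

# (c) by neuron: each output unit (a row of a weight matrix plus its bias entry) forms a group
# (d) fully factorized: every scalar parameter is its own group (the setting used by Sd-SGMCMC runs)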
7 CONCLUSIONS
In an attempt to hybridize MCMC and VI, we proposed S-SGMCMC: an approach that produces samples from an structured posterior by running SGMCMC on a modified energy function. The resulting Markov chain becomes asymptotically decoupled across user-specified groups of parameters, resulting in faster convergence. For better computational efficiency, we proposed Sd-SGMCMC: a further generalization of S-SGMCMC inspired by dropout. This extension allows interpolating between a SGMCMC algorithm and its corresponding S-SGMCMC method.
Our experimental results demonstrate that the proposed methods impose structure over posterior distributions, improve the mixing of the chains, and result in similar or better posterior predictive accuracies compared to SGMCMC on a variety of (deep) models. Our experimental evaluations provide strong empirical evidence for the efficacy of our approach. We also showed that the proposed approach is compatible with various deep learning architectures, including ResNet-20, and various datasets.
Despite its proven capabilities, our proposed methodology does come with some limitations. Namely, for quick access our methods require keeping chains of samples on the GPU whereas the baseline SGMCMC methods can simply save samples to disk. Additionally, S-SGMCMC scales poorly with respect to the number of parameter groups. Sd-SGMCMC manages to break this dependency; however, it still requires slightly more compute than SGMCMC per sample, but it is comparable in wall clock time. Possible future work could focus on more theoretical analyses of S-SGMCMC, such as formal proofs of convergence.
8 ETHICS STATEMENT
The main focus of our work is to train models faster by decreasing the convergence time of their training phase. In this scope we are not aware of any ethical concerns of our research.
9 REPRODUCIBILITY
For this work we have made sure to guarantee reproducibility of the results. We provide all the technical details of our experiments and their implementations, like the hyperparameters, the data, the frameworks and the experimental setups. We have used only open source datasets that are easily accessible to the public. Finally we commit to release the code that we implemented for this work via a public repository.
A THEOREM 1
Proof. We begin with some preliminaries from the main text. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = −∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (10)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
We also write the equation for KL divergence from the main text:
J(q(θ)) = D_KL(q(θ) ‖ p(θ|D)) (11)
≡ E_{θ∼q}[log(q(θ)/p(θ|D))] (12)
We then rewrite Eq. 4 as follows:
J(q(θ)) = E_{θ∼q}[log q(θ)] − E_{θ∼q}[log p(θ, D)] + C (13)
= E_{θ_i∼q_i}[log q_i(θ_i)] + ∑_{j≠i} E_{θ_j∼q_j}[log q_j(θ_j)] − ∫ log p(θ, D) q_i(θ_i) dθ_i ∏_{j≠i} q_j(θ_j) dθ_j + C (14)
for some i ∈ {1, . . . ,M} where ¬i := {1, . . . ,M} \ {i} and C = log p(D). In order to find the optimal distribution that respects the factorization constraints imposed between parameter groups, we need to minimize this functional over q — or rather every qi. This is done by taking the functional derivative of J with respect to qi, setting it equal to zero, and solving for qi:
δJ(q(θ))/δq_i(θ_i) = ∫ log p(θ, D) ∏_{j≠i} q_j(θ_j) dθ_j − 1 − log q_i(θ_i) := 0 (15)
⟹ log q_i(θ_i) = E_{θ̃_¬i∼q_¬i}[log p(θ_i, θ̃_¬i, D)] − 1 (16)
⟹ q_i(θ_i) ∝ exp{E_{θ̃_¬i∼q_¬i}[log p(θ_i, θ̃_¬i, D)]}. (17)
By defining the energy U_i^(S)(θ_i) = −E_{θ̃_¬i∼q_¬i}[log p(θ_i, θ̃_¬i, D)], we realize that by minimizing the KL divergence in Eq. 4, the approximate posterior distribution q = ∏_{i=1}^M q_i takes the form of a Boltzmann distribution as in Eq. 1 with U^(S)(θ) = ∑_{i=1}^M U_i^(S)(θ_i).
It remains to be shown that the solution is unique. To this end, we refer to the convexity of the KL divergence in function space (Cover & Thomas, 2001). This implies that the stationary point of the KL is indeed a global optimum and unique.
B DERIVING U (Sd)
With just a slight shift in perspective, it is actually possible to further generalize U (S) (and consequently S-SGMCMC) to produce a broader class of approximate sampling algorithms. This is done
by first noting that U (S) can be represented with a scaled double-expectation:
U^(S)(θ) = −(M / E_{r∼p^(S)}[∑_{i=1}^M r_i]) · E_{r∼p^(S)} E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] (18)
where p^(S)(r) = Cat(r; M^{−1}, . . . , M^{−1}) and (rθ + (1 − r)θ̃)_i is equal to θ_i if r_i = 1 and θ̃_i otherwise, for i = 1, . . . , M. Note that this is constructed in this manner specifically so that U^(S) remains differentiable with respect to θ. Also note that though the denominator appears superfluous, as E_{r∼p^(S)}[∑_{i=1}^M r_i] = 1, it is necessary for certain theoretic properties, as seen in Theorem 2.
By replacing p^(S) with a more flexible distribution, we can further generalize and encapsulate different energy functions to sample from. One such choice is p^(Sd)(r; ρ) ∝ ∏_{i=1}^M Bern(r_i; ρ) 1(∑_{i=1}^M r_i > 0) with ρ ∈ (0, 1).3 Substituting p^(Sd) for p^(S) in Eq. (18) yields a new energy function that we will refer to as U^(Sd). We note that this choice of distribution leads to a dropout-like behavior (Nalisnick et al., 2019; Srivastava et al., 2014), where the composition of model parameters as rθ + (1 − r)θ̃ leads to each parameter group θ_i having a probability of approximately ρ of being used in a prediction and a probability of (1 − ρ) of being replaced by θ̃_i from the approximate posterior (in traditional dropout, θ_i would instead be replaced with 0). Likewise, we will denote methods that use this energy function for sampling as structured dropout SGMCMC (Sd-SGMCMC), with the different variants all sharing the same Sd prefix (e.g., Sd-SGHMC).
In practice, the double-expectation in U (Sd) is jointly approximated using a Monte Carlo estimate with K samples. This leads to Eq. (8) in the main paper. We note that by approximating U (Sd) in this way, computing a gradient no longer scales on the order of O(M), but rather O(K). This means that the choice of structure imposed on the posterior distribution remains independent of computing resources. As such, configurations with large amounts of parameter groups are typically only feasible when using Sd-SGMCMC as S-SGMCMC would use too much memory and/or compute per sample.
C THEOREM 2
Theorem 2. For a given set of parameters θ partitioned into M groups, under minor assumptions (i) U (Sd) → U as ρ → 1 and (ii) U (Sd) → U (S) as ρ → 0. Thus, distributions approximated by Sd-SGMCMC lie on a continuum with those generated by S-SGMCMC at one extreme and with those from SGMCMC at the other.
Proof. Assume an arbitrary θ, D, n ∈ N, and that Eθ̃∼q [ log p(rθ + (1− r)θ̃,D) ] exists for r ∈ R.
As an aside, this proof assumes that p^(Sd)(r; ρ) ∝ ∏_{i=1}^M Bern(r_i; ρ) 1(∑_{i=1}^M r_i > 0) with ρ ∈ (0, 1); however, the theorem still holds for an arbitrary p^(Sd) so long as the mean approaches 1 and the variance approaches 0 as n → ∞.
(i) Let r^(n) ∼ p^(Sd)(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 1. It follows that r^(n) → {1}^M as n → ∞ in distribution (see Lemma 1 in the Supplement). Due to the bounded and finite support R, we find the following:
U^(Sd)(θ) = −(M / E_{r∼p^(Sd)}[∑_{i=1}^M r_i]) ∑_{r∈R} p^(Sd)(r; ρ_n) E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] (19)
→ −(M/M) ∑_{r∈R} 1(∀i r_i = 1) E_{θ̃∼q}[log p(θ, D)] as n → ∞ (20)
= −log p(θ, D) = U(θ) (21)
(ii) Let r^(n) ∼ p^(Sd)(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 0. It follows that r^(n) → r ∼ Cat(M^{−1}, . . . , M^{−1}) as n → ∞ in distribution (see Lemma 2 in the Supplement). Due to the bounded and finite support R, we find the following:
U^(Sd)(θ) = −(M / E_{r∼p^(Sd)}[∑_{i=1}^M r_i]) ∑_{r∈R} p^(Sd)(r; ρ_n) E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] (22)
→ −(M/1) ∑_{r∈R} (1(∑_{i=1}^M r_i = 1)/M) E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] as n → ∞ (23)
= −∑_{i=1}^M E_{θ̃∼q}[log p([θ_i, θ̃_¬i], D)] = U^(S)(θ) (24)
3 Other well-justified choices of distribution include any with support over [0, 1]^M and with measure 0 over {0}^M. Exploring the effects these distributions have is an interesting line of future inquiry.
For both Lemmas 1 and 2, let
p^(Sd)(r; ρ) = (ρ^{∑_{i=1}^M r_i} (1 − ρ)^{M − ∑_{i=1}^M r_i} / (1 − (1 − ρ)^M)) · 1(∀i r_i ∈ {0, 1}) · 1(∑_{i=1}^M r_i > 0) (25)
Lemma 1. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 1 as n → ∞ then r(n) → r ∼ δ({1}M ) in distribution as n→∞.
Proof.
p^(Sd)(r = {1}^M; ρ_n) = ρ_n^M (1 − ρ_n)^0 / (1 − (1 − ρ_n)^M) (26)
→ 1 as n → ∞ (27)
⟹ r^(n) → δ({1}^M) in distribution. (28)
Lemma 2. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 0 as n → ∞ then r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
Proof. Let i ∈ {1, . . . ,M}.
p^(Sd)(r_i = 1, r_¬i = 0; ρ_n) = ρ_n (1 − ρ_n)^{M−1} / (1 − (1 − ρ_n)^M) (29)
which, by l’Hôpital’s rule, has the same limit as
[(1 − ρ_n)^{M−1} − ρ_n (M − 1)(1 − ρ_n)^{M−2}] / [M (1 − ρ_n)^{M−1}] (30)
→ 1/M as n → ∞ (31)
Since the resulting probabilities sum to 1, this implies that r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
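A quick numerical illustration of Lemma 2 (not part of the proof): the probability in Eq. (29) approaches 1/M as ρ → 0. The snippet below is a hypothetical check with M = 5.

M = 5
for rho in [0.5, 0.1, 0.01, 0.001]:
    p_single = rho * (1 - rho) ** (M - 1) / (1 - (1 - rho) ** M)   # Eq. (29)
    print(rho, p_single)   # tends to 1/M = 0.2 as rho -> 0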
D DERIVING U (Sd)
To derive U (Sd), we must first start with a shift in perspective on how U (S) is represented. We will rewrite the function in the following way:
U^(S)(θ) = −∑_{i=1}^M E_{θ_¬i∼q_¬i}[log p([θ_i, θ_¬i], D)] (32)
= −(M / E_{r∼p^(S)}[∑_{i=1}^M r_i]) · E_{r∼p^(S)} E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] (33)
where p^(S) is an M-dimensional categorical distribution with uniform weights M^{−1} and p(rθ + (1 − r)θ̃, D) is the joint probability of the parameters taking values rθ + (1 − r)θ̃ and the data D.4
We note that changing the distribution of r leads to different energy functions to sample from. One such choice is p^(Sd)(r; ρ) ∝ ρ^{∑_{i=1}^M r_i} (1 − ρ)^{M − ∑_{i=1}^M r_i} 1(∀i r_i ∈ {0, 1}) 1(∑_{i=1}^M r_i > 0) for ρ ∈ (0, 1). Note that this is identical to r_i iid∼ Bernoulli(ρ) conditioned on ∑_{i=1}^M r_i > 0. Let the support of p^(Sd) be denoted as R = {0, 1}^M \ {0}^M. This leads to the following energy function:
U^(Sd)(θ) = −(M / E_{r∼p^(Sd)}[∑_{i=1}^M r_i]) · E_{r∼p^(Sd)} E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)]. (34)
In practice, a few approximations are made to compute the corresponding U (Sd). Firstly, we approximate p(Sd) with an M -dimensional Bernoulli(ρ) distribution as the difference is minute when Mρ is large. Secondly, the outer expectation in Eq. (34) is approximated with a Monte Carlo estimate of K samples. The inner expectation is also approximated with a Monte Carlo estimate using the latest approximate posterior q̂(t). However, just like for S-SGMCMC, only a single sample is used. This further leads to:
U^(Sd)(θ^(t); D̃) = −(1/(Kρ)) ∑_{k=1}^K U(r^(t,k) θ^(t) + (1 − r^(t,k)) θ̃^(t,k); D̃) (35)
E ALGORITHM FOR Sd-SGMCMC
The procedure for Sd-SGMCMC can be seen in Algorithm 3.
Algorithm 3: Sd-SGMCMC
Input: initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; data set D; initial auxiliary statistics ξ^(0); step sizes {ε_t}_{t=1,...,T}; masking distribution p^(Sd); dropout iterations K.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
1: for t = 0 to T − 1 do
2:   Sample minibatch D̃^(t) ⊂ D
3:   for k = 1 to K do
4:     Sample masks r_1^(t,k), . . . , r_M^(t,k) ∼ p^(Sd)
5:     Sample θ̃^(t,k) ∼ q̂^(t)
6:     θ^(t,k) = [r_i^(t,k) θ_i^(t) + (1 − r_i^(t,k)) θ̃_i^(t,k)]_{i=1,...,M}
7:     Û_k^(Sd,t) = Û(θ^(t,k); D̃^(t))
8:   end for
9:   ∇_θÛ^(Sd,t) = (M / (K E_{r∼p^(Sd)}[∑_{i=1}^M r_i])) ∑_{k=1}^K ∇_θÛ_k^(Sd,t)
10:  θ^(t+1), ξ^(t+1) = SGMCMC_step(θ^(t), ∇_θÛ^(Sd,t), ξ^(t), ε_t)
11: end for
12: return q̂^(T)(θ)
4rθ + (1 − r)θ̃ is a slight abuse of notation that is meant to represent masking out θi when ri = 0 and masking out θ̃i when ri = 1.
F SGMCMC UPDATE RULES
The update rules for SGLD, pSGLD, and SGHMC are defined as follows:
SGLD:  θ^(t+1) = θ^(t) − (ε_t/2) ∇_θÛ(θ^(t)) + N(0, ε_t I) (36)
pSGLD: θ^(t+1) = θ^(t) − (ε_t/2) [R(θ^(t)) ∇_θÛ(θ^(t)) + ∑_θ ∇_θ R(θ^(t))] + N(0, ε_t R(θ^(t))) (37)
SGHMC: θ^(t+1) = θ^(t) + ε_t M^{−1} m^(t+1) (38)
       m^(t+1) = (1 − γ ε_t M^{−1}) m^(t) − ε_t ∇_θÛ(θ^(t)) + N(0, 2γ − ε_t V̂(θ^(t))) (39)
where ε_t is the step size at time step t, R(·) and M are preconditioners, γ ≥ 0 is a friction term, and V̂(·) is an estimate of the covariance induced by the stochastic gradient.5
The update rules for the S-SGMCMC variants are similarly defined as Eqs. 36-39 but all instances of Û(θ(t)) are replaced with Û (S)(θ(t)). Likewise, replacing with Û (Sd)(θ(t)) yields the Sd-SGMCMC variants.
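A simplified sketch of the SGHMC update in Eqs. (38)–(39) follows; taking M as the identity and V̂ ≈ 0 is an assumption made for brevity here, not the configuration used in the experiments.

import torch

def sghmc_step(params, momenta, grads, step_size, gamma):
    # Eqs. (38)-(39) with M = I and V_hat ~= 0: update the momentum, then the parameters.
    with torch.no_grad():
        for p, m, g in zip(params, momenta, grads):
            noise = torch.randn_like(p) * (2.0 * gamma) ** 0.5
            m.mul_(1.0 - gamma * step_size).add_(-step_size * g + noise)
            p.add_(step_size * m)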
G ABLATION STUDY
This subsection aims to further explore the capabilities of the proposed methodology. More specifically, we visualize uncertainty for a two-layer Fully Connected Network and experiment with various parameter partitions.
Parameter Partitions. We tested our proposal with four partitioning schemes on a two-layer, 50-neuron fully connected network on a regression task. The partitioning schemes we used are the following: (a) the parameters are split into 3 groups randomly, (b) the parameters are split by layer (3 layers: 1 input and 2 hidden), (c) the parameters are split by the activation neurons inside the layers, and (d) every parameter belongs to its own group. We used the following datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). Every dataset was split into 75% training data, 10% validation data, and 15% test data. We trained the model on the training set and validated it on the validation set with early stopping. For every dataset and every partitioning scheme we tuned the learning rate over 1e-3, 1e-4, 1e-5, 1e-6, and 1e-7. For each combination of partition and dataset, we chose the learning rate that provides the best score on the test set; in this case, we used the mean squared error as the score. The final learning rates that we used are presented in Table 3.
5 Note that we abuse notation in Eqs. 36-39, where the addition of N(µ, Σ) denotes the addition of a normally distributed random variable with mean µ and covariance Σ.
H DETAILS ON EXPERIMENTS
H.1 QUALITATIVE REGRESSION EXPERIMENTS
First, we aim to showcase qualitative differences in the empirical posterior distributions generated by a baseline SGMCMC algorithm and our proposed variants. To do so, we consider a regression task where 100 randomly sampled three-dimensional covariates {~xi = [xi,1, xi,2, xi,3]T }i=1,...,100 are used to sample response values yi ∼ N (~wT~xi+b, σ2) where ~w = [w1, w2, w3]T = [1.5,−0.8, 1.3]T , b = 0.5, and σ2 = 1. More details on the generation process for ~x can be found in the Supplement.
We choose to fit a linear regression model of the same form as the generation process. σ2 is assumed to be known. Thus, θ = [w1, w2, w3, b]. A standard normal distribution is used as the prior for each parameter. Due to conjugacy, the posterior distribution can be calculated analytically. As such, the MAP is roughly θ̂MAP ≈ [0.52, 0.31, 0.47, 0.84]. The approximated posterior distributions for θ are found using SGLD, S-SGLD, and Sd-SGLD. For the latter two sampling schemes, two parameter partitions are tested: (i) two groups of parameters where θ1 = [w1, w2] and θ2 = [w3, b] and (ii) four groups of parameters where θ1 = w1, θ2 = w2, θ3 = w3, and θ4 = b. For Sd-SGLD, ρ = 0.5 and K = 4 was used.
The resulting posterior distributions for (w1, w2) and (w1, w3) from all five scenarios, with SGLD in the leftmost column as our baseline, can be seen in Fig. 1. We observe that, as expected, correlations between (w1, w2) still exist when they are allocated to the same parameter group and become apparently independent when assigned to different groups. We also note that the variance of the distributions shrink as the parameter space is partitioned into smaller groups. The underestimation of posterior variance is a commonly reported finding for VI techniques and is interesting to note that our non-parametric methods appear to exhibit this behavior as well. Finally, it appears that the Sd-SGLD adequately approximates S-SGLD with just slightly higher variances and very minor correlations between parameter groups being exhibited.
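The generation process and the conjugate reference posterior can be reproduced along the following lines; the covariate sampling distribution below is a placeholder assumption, since the exact generation procedure is given in the Supplement.

import numpy as np

rng = np.random.default_rng(0)
N, w_true, b_true, sigma2 = 100, np.array([1.5, -0.8, 1.3]), 0.5, 1.0

X = rng.normal(size=(N, 3))                        # placeholder covariate distribution
y = X @ w_true + b_true + rng.normal(scale=np.sqrt(sigma2), size=N)

# Conjugate posterior for theta = [w1, w2, w3, b] under a standard normal prior
A = np.hstack([X, np.ones((N, 1))])
precision = np.eye(4) + A.T @ A / sigma2           # prior precision I plus likelihood term
posterior_mean = np.linalg.solve(precision, A.T @ y / sigma2)
posterior_cov = np.linalg.inv(precision)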
H.2 REAL-WORLD DATA EXPERIMENTS
Framework details. In this subsection, we provide more detailed results for our experiments and a grid search for FMNIST, CIFAR10, and SVHN. We note that all the code apart from the metrics was written in PyTorch (Paszke et al., 2019). Regarding the metrics, ESS was adopted from the TensorFlow probability library (Dillon et al., 2017; Abadi et al., 2016) and IAC was calculated in python. For all the experiments, we used a seed of 2. Moreover, we note that we grouped the parameters in an ordered way for Sd-pSGLD and S-pSGLD. We denoted previously that Kρ is the number of groups. So every parameter will go to the i mod Kρ group where i is the parameter index. If, for instance, Kρ is 8 then parameter 1 will go to group 1, parameter 2 will go to group 2, parameter 9 will go to group 1, etc. If Kρ is the same as the number of parameters, every parameter will go into its own group.
MNIST. Regarding MNIST, we ran all the experiments for 500 epochs with a batch size of 500 and a learning rate of 1e-2. For Sd-pSGLD, the K is set to 300, which is the forward passes that the model does within 1 epoch. For the grouping of the parameters, for Sd-pSGLD we used group sizes of 2,4,8,32,128,512,2048,4096,8192,16384,32768 and 42200; and for S-pSGLD we used groups sizes of 2,8,32,128,512,2048,4096 and 8192.
FashionMNIST. We ran all experiments for 300 epochs with a batch size of 500. For Sd-SGHMC, K is set to 2, which is the number of forward passes that the model does within 1 epoch. We observed while experimenting with K that we do not need to set K very high; even a small number like the 2 used here is enough to produce the same results as a K of 200 or 300. In this way, we save significant training time. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC, and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3 and for S-SGHMC a learning rate of 1e-2.
CIFAR10. The setup is similar to the one we used for FashionMNIST: we ran all experiments for 300 epochs with a batch size of 128. For Sd-SGHMC, K is set to 2, which is the number of forward passes that the model does within 1 epoch. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC, and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3, and for S-SGHMC a learning rate of 1e-2. We focused our strategy on evaluating the accuracy of the different combinations of hyperparameters with the proposed methods, as can be seen in Figs. 4 and 5. Quantitative results on IAC, ESS, and maximum accuracy are depicted in Tables 6 and 7.
SVHN. We also ran all of the experiments for 300 epochs with a batch size of 128. Here for Sd-SGHMC, the K is set to 2, which is the forward passes that the model does within 1 epoch. We note that K here is less than on CIFAR10 and FashionMNIST, but as we mentioned before, this does not make a difference for our results, as we have tested. Regarding the parameter partitioning, for Sd-SGMCMC, we put every parameter in a different group, and for S-SGMCMC we used groups of 2,4,8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with
learning rates of 1e-1,1e-2,1e-3,1e-4,1e-5,1e-6. For S-pSGLD we used a learning rate of 1e-4, and for S-SGHMC, a learning rate of 1e-2. Same as in CIFAR10, we conducted a grid search for learning rate, dropout rate, and optimizers to find the best performing models and test them for their accuracy. We can observe these results in Figure 3 in the main paper. The strategy that we followed is the same as in CIFAR10 and is presented in Figs. 6 and 7. | 1. What is the main contribution of the paper, and how does it relate to previous works in the field?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its scalability and performance compared to other methods?
3. How does the reviewer assess the appropriateness and effectiveness of the evaluation metrics used in the paper?
4. What concerns does the reviewer have regarding the assumptions made in the paper, such as the partitioning structure of the parameters?
5. How does the reviewer interpret the statement on page 4 regarding the scheme's tendency to converge towards and remain in a stationary state?
6. Does the reviewer agree with the statement on page 3 that MCMC algorithms work by producing an empirical distribution of samples through a random walk in parameter space? If not, why?
7. What explanation does the reviewer have for the change in confidence intervals observed in Figure 4a, 4b, 4c, and why does it happen?
8. Can the reviewer provide any intuition about the surprising average accuracy result for the last model in Figure 6?
9. Are there any formatting issues or errors in the references cited in the paper that need to be addressed? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a new hybrid method between MCMC and VI. The main idea of the paper is construction of a new energy function which allows to speed up the sampling significantly in comparison to the SGMCMC (stochastic gradient MCMC). An additional modification of the proposed algorithm is done by adopting a drop-out inspired approximation which allows for even better scalability.
Review
Overall I enjoyed reading the paper and I think the proposed method has merit, but I also do have some concerns.
The paper does not have proof of convergence for the proposed algorithm, a proof of convergence would have made the paper stronger.
Although the paper deals with multivariate models and a potentially large number of parameters, the evaluation metrics are univariate. For example, the ESS in table 1. The authors should report the results for multivariate ESS, especially considering that on, for example, CIFAR-10 the results are only marginally better than SGHMC.
Please note, that in the recent paper [1] it was shown that ESS and “other standard MCMC metrics which do not account for sample bias are not appropriate diagnostic tools for SGMCMC”. This paper [1] proposes to use the kernel Stein discrepancy metric for the assessment of SGMCMC methods. I would recommend the authors include it in their analysis.
On page 4 the authors assume that “This partitioning structure is assumed to be known a priori.” when talking about factorization of the parameters into mutually independent groups. While it can be naturally assumed in some models, is this a straightforward assumption of BNN, since the proposed algorithm is in particular motivated by application to BNN?
On page 4 the authors mention: “While it is unlikely to have the procedure initialize to a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. “ I would encourage the authors to directly refer to a figure or table which illustrates this claim.
On page 3 it states: “The gold standard for approximating the entire posterior distribution is by deploying Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples through a random walk in parameter space. “
I would argue that this is not a correct statement as not all MCMC methods deploy random walk behavior (for example, not HMC or NUTS).
In the appendix on page 17 you mention “We observe that as we break dependencies we capture similar uncertainty intervals. ”. To me it looks like the confidence intervals actually change quite a bit, especially left/right of figure 4a and left/right of figures 4b and 4c, the confidence intervals become quite a bit more narrow. Can this again be attributed to VI behavior?
In Figure 6 the average accuracy for the last model in the list is quite surprising. Do you have any intuition about that?
Please, check your references, both formatting, and whether some of the papers you cite have been published in the meantime.
On page 3 the sentence " The answer is affirmative and will be answered as follows" probably can be re-written in a better way.
[1] Nemeth, Christopher, and Paul Fearnhead. "Stochastic gradient Markov chain Monte Carlo." Journal of the American Statistical Association 116.533 (2021): 433-450.
UPD: I increased my score for correctness to 3 after the authors reply and overall score to 5. |
ICLR | Title
Structured Stochastic Gradient MCMC
Abstract
Stochastic gradient Markov Chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior’s functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, where parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. This scheme can be further modified in a “dropout” manner, leading to even more scalability. By implementing the scheme on a ResNet-20 architecture, we obtain better predictive likelihoods and faster mixing time than full SGMCMC.
1 INTRODUCTION
There has been much recent interest in deep Bayesian neural networks (BNN) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). BNNs rely on ensemble averages over model parameters typically obtained from Markov chain Monte Carlo (MCMC) algorithms, which contrasts to regular neural networks that depend on a single set of parameters. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching.
The main downside of using SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI) algorithms that approximate the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018).
One downside of VI approximations is their strong distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These distributional assumptions are frequently over-simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018).
In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI. While our approach remains a sampling algorithm resembling SGMCMC, we speed up the mixing time by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break. It makes no assumptions on the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008).
In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by assuming a functional view on variational inference. We show how to sample from
this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model’s joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo.
In sum, our contributions are as follows:
• We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and any in-between.
• We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a "mean-field" version where all posterior correlations are broken.
• We show in both small and large scale experiments that our method well approximates posterior marginals and gives improved results over SGMCMC on Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy.
Our paper is structured as follows: Section 2 presents the related work to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details experiments and their results, and Section 7 contains our concluding thoughts.
2 RELATED WORK
Our work connects both to (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). For space limitations, we focus on the most related work at the intersection of both topics.
Among the earliest works to hybridize both approaches was that of de Freitas et al. (2001), who constructed a variational proposal distribution in the Metropolis-Hastings step of MCMC. An improved approach was introduced by Habib & Barber (2018), who fit a more accurate approximating distribution by introducing low-dimensional auxiliary variables. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and by Wang et al. (2018) and Gong et al. (2018), who leveraged meta-learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embed MCMC steps into the variational inference approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution, then fit its parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics.
In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from the target distribution that respects an imposed independence structure.
3 PRELIMINARIES
Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully-factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that only relies on the assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? As follows, we will show how such a scheme can be realized. We will first derive a modified energy function for Langevin dynamics that we can sample from and then prove that its negative exponential results in the optimal posterior approximation subject to specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution.
Before we explain our new method, we introduce the setup and common notation. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) =∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = − log p(θ, D) = − \sum_{(x,y)∈D} log p(y|x, θ) − log p(θ).   (1)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
A popular approach for approximating the entire posterior distribution is by deploying Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often times through the use of a random walk. While being very accurate and having asymptotic guarantees, these methods are known to not scale well with respect to both data and parameters (Brooks et al., 2011; Geyer, 1992).
Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples of these algorithms include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014).
As alluded to, the basis of SGMCMC algorithms is to use a sampled minibatch of data D̃ from D to produce a differentiable, unbiased estimate of the posterior energy function:
U(θ) ≈ Û(θ; D̃) = − (N/|D̃|) \sum_{(x,y)∈D̃} log p(y|x, θ) − log p(θ).   (2)
Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, the SGLD update is
θ^{(t+1)} = θ^{(t)} − (ε_t/2) ∇_θÛ(θ^{(t)}; D̃_t) + ξ_t, where ξ_t ∼ N(0, ε_t I).   (3)
Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂(t)(θ|D). Should the algorithms converge, then limt→∞ p̂(t)(θ|D) = p(θ|D).
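To make Eqs. (2) and (3) concrete, the following is a minimal sketch of SGLD on a linear-Gaussian toy model; the model, the helper names, and the step size are illustrative assumptions rather than part of the experiments reported here.

```python
import numpy as np

def energy_grad(theta, X_b, y_b, N, prior_var=1.0):
    # Unbiased estimate of grad U(theta) from Eq. (2) for y ~ N(X theta, 1) with a N(0, prior_var I) prior.
    grad_loglik = X_b.T @ (y_b - X_b @ theta)        # sum over the minibatch of grad log p(y|x, theta)
    grad_logprior = -theta / prior_var               # grad log p(theta)
    return -(N / len(y_b)) * grad_loglik - grad_logprior

def sgld_step(theta, X_b, y_b, N, eps, rng):
    # One SGLD update (Eq. 3): half a gradient step on the stochastic energy plus N(0, eps I) noise.
    noise = rng.normal(0.0, np.sqrt(eps), size=theta.shape)
    return theta - 0.5 * eps * energy_grad(theta, X_b, y_b, N) + noise

# Toy usage on synthetic linear-regression data.
rng = np.random.default_rng(0)
N, d = 1000, 3
X = rng.normal(size=(N, d))
y = X @ np.array([1.5, -0.8, 1.3]) + rng.normal(size=N)
theta, samples = np.zeros(d), []
for t in range(5000):
    idx = rng.choice(N, size=64, replace=False)
    theta = sgld_step(theta, X[idx], y[idx], N, eps=1e-4, rng=rng)
    samples.append(theta.copy())
```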
4 STRUCTURED SGMCMC
By design, SGMCMC methods produce a fully joint posterior distribution over parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of
dimensionality, typically manifesting as slow convergence and poorly explored regions of parameter space. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI), reducing the number of posterior correlations that the model needs to capture while sampling.
To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ1, . . . , θM . This partitioning structure is assumed to be known a priori. We will denote the distribution that respects this partitioning structure as q(θ) = ∏M i=1 qi(θi). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criteria, such as KL-divergence. This leads to a natural objective function to minimize:
J(q(θ)) = D_{KL}(q(θ) || p(θ|D)) ≡ E_{θ∼q}[ log( q(θ) / p(θ|D) ) ]   (4)
The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θi, θ̃¬i} for any i where θ̃ ∼ q and define a structured energy function:
U^{(S)}(θ) = \sum_{i=1}^{M} U^{(S)}_i(θ_i), with U^{(S)}_i(θ_i) := E_{θ̃∼q} U({θ_i, θ̃_{¬i}}) = −E_{θ̃∼q} log p(θ_i, θ̃_{¬i}, D).   (5)
That is, we first define the marginals U^{(S)}_i(θ_i), where we marginalize U(θ) with respect to all q(θ)-factors except q_i(θ_i), and then sum up these marginals to define U^{(S)}(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U^{(S)} allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm that actually approximates this distribution q(θ), which will be discussed shortly.
Theorem 1. The unique solution to the KL minimization problem given in Eq. 4 is given by the Boltzmann distribution q(θ) ∝ exp{−\sum_{i=1}^{M} U^{(S)}_i(θ_i)}. Please refer to the Supplement for the proof.
In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U^{(S)} (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally only work well with small amounts of data, and (2) more importantly, the marginals U^{(S)}_i(θ_i) do not have a closed-form solution but need to be approximated via samples from q. Luckily, since SGMCMC methods only need access to noisy estimates of U^{(S)}, we can run these algorithms on a stochastic estimate of Eq. (5),
U^{(S)}(θ) ≈ Û^{(S)}(θ; D̃) = \sum_{i=1}^{M} E_{θ̃∼q} Û({θ_i, θ̃_{¬i}}; D̃),   (6)
where Û(·) is defined in Eq. (2). In practice, at timestep t for i = 1, . . . ,M we estimate Eθ̃∼qÛ({θi, θ̃¬i}; D̃t) with a Monte Carlo approximation. In place of θ̃, we use a single sample of θ̃(t) taken from the current approximate distribution q̂(t) which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ(1), θ(2), . . . , θ(t)}). This leads to the following update step for structured SGLD (S-SGLD):
θ^{(t+1)} = θ^{(t)} − (ε_t/2) ∇_θÛ^{(S)}(θ; D̃) + ξ_t, where ξ_t ∼ N(0, ε_t I).   (7)
Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) can be seen in Algorithm 2.
Remark Since ∇_θÛ^{(S)} is an unbiased estimator of ∇_θU^{(S)}, we are guaranteed to converge to q when sampling with S-SGMCMC under sufficiently decreasing learning rates, so long as we are in a stationary state. While the procedure is unlikely to be initialized in a stationary state, we observe in practice that our scheme both tends to converge towards a stationary state and to remain in it. A general proof of convergence is outside the scope of this work and is left to follow-up research.
An example of S-SGMCMC can be seen in Fig. 1(a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w1, w2, and w3; (b-left) dependence between w1 and w2 but independence between w3 and the other coefficients; (b-right) fully factorized. Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, it also appears that the variance shrinks as we induce these factorizations which is a commonly seen artifact when using VI.
5 STRUCTURED DROPOUT SGMCMC
While S-SGMCMC can successfully break dependencies between parameter groups, it does suffer computationally due to each parameter update scaling linearly with respect to M . This means that for a single new sample of θ, the model’s forward pass needs to be computed M different times on the same batch of data D̃, which can quickly become prohibitively expensive for deep models when M is large. Ideally, we would prefer a method that both closely resembles the S-SGMCMC procedure and scales independently from the partitioning scheme. This section presents such a method that achieves this, which we call structured dropout SGMCMC (Sd-SGMCMC), as well as an informal motivation and derivation of the method. More formal details and a theorem proving both SGMCMC and S-SGMCMC are limiting cases for Sd-SGMCMC can be found in the Supplement.
The main motivation for this technique can be seen by recognizing that the composition {θ(t)i , θ̃ (t) ¬i } from Eq. (6) can be rewritten as a sum of masked values rθ(t) + (1− r)θ̃(t) where θ̃(t) ∼ q(t) and rj = 1(i = j) for i = 1, . . . ,M . We can decouple the computational scaling from the number of parameter groups M by replacing the M deterministic masks r’s with K stochastically sampled masks r̃.1 Doing so results in a slightly different energy function and minibatch loss to optimize:
Û^{(Sd)}(θ^{(t)}; D̃) ≈ (M / (K E[\sum_{i=1}^{M} r_i])) \sum_{k=1}^{K} Û(r̃^{(t,k)} θ^{(t)} + (1 − r̃^{(t,k)}) θ̃^{(t,k)}; D̃)   (8)
where r̃(t,k) is the kth sample of r̃ for timestep t. A formal justification for Eq. (8) can be found in the Supplement. These energy function approximations lead to the following update step for structured
1K is a hyperparameter that is chosen independent of M ; however, both M and the distribution of r̃ largely influence how small K can be due to how they affect the variance of the gradient of the associated posterior energy function.
Algorithm 2: S-SGMCMC
Input: Initial sample θ^{(0)}; parameter partitions θ_1, . . . , θ_M; step sizes {ε_t}_{t=0,...,T−1}.
Output: q̂^{(T)}(θ) := {θ^{(t)}}_{t=1,...,T}
1: for t = 0 to T − 1 do
2:   Sample minibatch D̃^{(t)} ⊂ D
3:   for i = 1 to M do
4:     Sample θ̃^{(t)}_{¬i} ∼ q̂^{(t)}_{¬i}
5:     Û^{(S,t)}_i = Û([θ^{(t)}_i, θ̃^{(t)}_{¬i}]; D̃^{(t)})
6:   end for
7:   ∇_θÛ^{(S,t)} = \sum_{i=1}^{M} ∇_θÛ^{(S,t)}_i
8:   θ^{(t+1)} = SGMCMC_step(θ^{(t)}, ∇_θÛ^{(S,t)}, ε_t)
9: end for
10: return q̂^{(T)}(θ)
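As a rough illustration of Algorithm 2, the sketch below performs one S-SGLD step for a flat parameter vector split into M index groups; the helper `energy_grad` (an unbiased estimate of ∇U) and the sample buffer `chain` are assumptions made for the example, not part of the released implementation.

```python
import numpy as np

def s_sgld_step(theta, groups, chain, energy_grad, batch, eps, rng):
    """One structured SGLD step following Algorithm 2.

    theta       : current parameter vector, shape (d,)
    groups      : list of index arrays partitioning {0, ..., d-1} into M groups
    chain       : list of earlier samples, i.e. the empirical approximation q_hat
    energy_grad : callable (theta, batch) -> unbiased estimate of grad U(theta)
    """
    grad = np.zeros_like(theta)
    for idx in groups:
        # Draw theta_tilde from the current empirical approximation q_hat (line 4).
        theta_tilde = chain[rng.integers(len(chain))] if chain else theta
        # Compose {theta_i, theta_tilde_{-i}}: group i from theta, the rest from the sample (line 5).
        composed = theta_tilde.copy()
        composed[idx] = theta[idx]
        # Only the components of group i contribute to the i-th marginal's gradient (line 7).
        grad[idx] = energy_grad(composed, batch)[idx]
    # Plain SGLD step on the accumulated structured gradient (line 8 with SGLD as the base sampler).
    noise = rng.normal(0.0, np.sqrt(eps), size=theta.shape)
    return theta - 0.5 * eps * grad + noise
```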
Table 1: IAC and ESS metrics for CIFAR-10, SVHN, and FMNIST with various methods. Subscripts after method names refer to the number of equally sized parameter groups, with |θ| meaning every parameter belongs to its own group. Best results are bolded.

                 CIFAR-10          SVHN             FMNIST
Method           IAC↓    ESS↑      IAC↓    ESS↑     IAC↓    ESS↑
pSGLD             716    8.01       839    6.82      779    7.09
S-pSGLD2          600    7.44       840    6.80      740    7.55
S-pSGLD4          599    7.4        834    6.83      751    7.45
S-pSGLD8          709    6.41       857    6.67      776    7.24
Sd-pSGLD|θ|       546    8.01       803    7.00      677    8.24
SGHMC             727    7.94       858    6.59      795    6.83
S-SGHMC2          583    7.49       949    5.74      928    5.67
S-SGHMC4          624    7.03       961    5.66      915    5.77
S-SGHMC8          904    4.97      1056    5.30     1142    4.87
Sd-SGHMC|θ|       584    7.7        828    6.56      782    7.08
dropout variant of SGLD (Sd-SGLD):
θ^{(t+1)} = θ^{(t)} − (ε_t/2) ∇_θÛ^{(Sd)}(θ; D̃) + ξ_t, where ξ_t ∼ N(0, ε_t I).   (9)
The corresponding update rules for the structured dropout variants for pSGLD (Sd-pSGLD) and SGHMC (Sd-SGHMC) are defined in the Supplement. The exact procedure for generating samples of the approximate posterior q̂(t) using structured dropout SGMCMC (Sd-SGMCMC) can also be found in the Supplement.
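A sketch of how Eq. (8) and the Sd-SGLD update in Eq. (9) might be implemented is given below, using binary per-group masks; all names are illustrative assumptions, and the scaling uses the Bernoulli approximation E[∑_i r_i] ≈ Mρ.

```python
import numpy as np

def sd_sgld_step(theta, groups, chain, energy_grad, batch, eps, rho, K, rng):
    """One Sd-SGLD step: average the gradient over K random group masks (Eqs. 8-9)."""
    M = len(groups)
    grad = np.zeros_like(theta)
    for _ in range(K):
        r = rng.random(M) < rho                                   # Bernoulli(rho) mask over groups
        theta_tilde = chain[rng.integers(len(chain))] if chain else theta
        composed = theta_tilde.copy()
        for i, idx in enumerate(groups):
            if r[i]:
                composed[idx] = theta[idx]                        # keep the current value where r_i = 1
        g = energy_grad(composed, batch)
        for i, idx in enumerate(groups):
            if r[i]:
                grad[idx] += g[idx]                               # gradient only flows through unmasked groups
    grad /= K * rho                                               # M / (K * E[sum_i r_i]) with E[sum r] = M * rho
    noise = rng.normal(0.0, np.sqrt(eps), size=theta.shape)
    return theta - 0.5 * eps * grad + noise
```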
An example of this method (specifically Sd-SGLD with r̃i iid∼ Bernoulli(0.5) and K = 4) used on a linear regression model can be seen in Fig. 1(c). Of note, we can see that the dropout variant largely respects the independence structure imposed, but maybe not as strictly as the exact S-SGLD method seen in Fig. 1(b). Additionally, the posterior variance also seems to have shrunk similarly to S-SGLD when compared against SGLD.
Masking Distribution If r̃_i ∼ Bernoulli(ρ) i.i.d., together with a structure that factorizes by activation components, the method starts to resemble dropout with rate ρ (Srivastava et al., 2014). The main difference is that instead of replacing a parameter value with 0, it is replaced with a sample from the approximate posterior distribution at time t: q̂^{(t)}. While a Bernoulli distribution for r̃ is a natural choice, other distributions can be chosen as well. For instance, r̃_i ∼ N(0, 1) or r̃_i ∼ Beta(α, β) i.i.d. are both viable and can be seen as analogues of Gaussian and Beta dropout, respectively (Srivastava et al., 2014; Liu et al., 2019). Our experiments will largely focus on sampling r̃ from Bernoulli and uniform over [0, 1] (equivalent to Beta(0.5, 0.5)) distributions.
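For illustration, these masking distributions differ only in how r is drawn before forming the composition rθ + (1 − r)θ̃; a minimal sketch (with assumed names) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mask(M, kind="bernoulli", rho=0.5):
    # Per-group mask r used to compose r * theta + (1 - r) * theta_tilde.
    if kind == "bernoulli":
        return (rng.random(M) < rho).astype(float)   # hard 0/1 masks, dropout-like behaviour
    if kind == "gaussian":
        return rng.normal(0.0, 1.0, size=M)          # analogue of Gaussian dropout
    if kind == "uniform":
        return rng.uniform(0.0, 1.0, size=M)         # soft masks in [0, 1]
    if kind == "beta":
        return rng.beta(0.5, 0.5, size=M)            # analogue of Beta dropout
    raise ValueError(kind)

def compose(theta_groups, theta_tilde_groups, r):
    # Group-wise convex combination of the current state and a posterior sample.
    return [ri * a + (1.0 - ri) * b for ri, a, b in zip(r, theta_groups, theta_tilde_groups)]
```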
6 EXPERIMENTS
Overview In this section we evaluate our proposed approach on various models and datasets. Section 6.1 investigates the impact of the variational approximation on the algorithms’ mixing and autocorrelation times using a fully-connected network architecture on MNIST (LeCun et al., 2010). Section 6.2 studies our methods with ResNet-20 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Fashion MNIST (Xiao et al., 2017) and compares them for their accuracy and mixing time. Our experiments reveal that the chains in our proposed methods mix faster than SGMCMC and achieve either comparable or even higher accuracies on average.
We have also conducted experiments on uncertainty visualization, where we tested the proposed methodology on predictive uncertainty estimation by deploying a two-layer fully connected network
on a toy dataset. The uncertainty experimental setup and results, along with more technical details for the other experiments, can be found in the Appendix.
Metrics The primary predictive metric used to evaluate our proposal is classification accuracy. We take the average of an ensemble of 100 models whose weights are sampled from the past samples of the parameter chains in order to calculate the accuracy. Additionally, we also monitor the mixing time of the chains of our methods with both integrated autocorrelation time (IAC) (Sokal, 1997; Goodman & Weare, 2010) and effective sample size (ESS) (Geyer, 1992). IAC measures the correlation between samples in a chain and, in turn, describes the inefficiency of an MCMC algorithm. IAC is computed as τ_f = \sum_{τ=−∞}^{∞} ρ_f(τ), where ρ_f is the normalized autocorrelation function of the stochastic process that generated the chain for f, calculated as ρ̂_f(τ) = ĉ_f(τ)/ĉ_f(0), where ĉ_f(τ) = (1/(N−τ)) \sum_{n=1}^{N−τ} (f_n − µ_f)(f_{n+τ} − µ_f) and µ_f = (1/N) \sum_{n=1}^{N} f_n. We note that we calculated ĉ_f(τ) with a fast Fourier transform as it is more computationally efficient than using the direct sum. ESS measures how many independent samples would be equivalent to a chain of correlated samples and is calculated as n_eff = n / (1 + (n−1)p), where n is the number of samples and p is the autocorrelation. 2 We note that a model with higher ESS and lower IAC has a faster mixing time. Please see the Appendix for the implementation details and experimental setup for the metrics and our models.
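The IAC/ESS computations described above can be sketched as follows; the FFT-based autocorrelation matches the description in the text, while the truncation window and the ESS-from-IAC shortcut are our own assumptions, since the exact estimator settings are not spelled out here.

```python
import numpy as np

def autocorr_fft(x):
    # Normalized autocorrelation rho_f(tau) of a 1-d chain, computed via FFT as described above.
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * n)                  # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf = acf / np.arange(n, 0, -1)              # c_f(tau) with the 1/(N - tau) normalization
    return acf / acf[0]                          # rho_f(tau) = c_f(tau) / c_f(0)

def integrated_autocorr(x, c=5.0):
    # tau_f as a sum of rho_f over lags, truncated with a Sokal-style self-consistent window.
    rho = autocorr_fft(x)
    tau = 1.0 + 2.0 * np.cumsum(rho[1:])
    for m in range(1, len(tau)):
        if m >= c * tau[m - 1]:
            return tau[m - 1]
    return tau[-1]

def effective_sample_size(x):
    # One common ESS estimator, n / tau_f; the text quotes the closely related n / (1 + (n - 1) p) form.
    return len(x) / integrated_autocorr(x)
```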
6.1 DROPOUT RATE & GROUP SIZE INVESTIGATION
The aim of this set of experiments is to study the effects that the number of independent parameter groups (or alternatively, the amount of allowed posterior correlations) has on accuracy and mixing time when using our proposed methods. We compare pSGLD, S-pSGLD, and Sd-pSGLD with a Bernoulli(ρ) masking distribution and dropout rates ρ ∈ {0.1, 0.3, 0.5} on a fully-connected neural network with 2 hidden layers of 50 hidden units each, trained and evaluated on MNIST using the standard train and test split. The model has 42,200 parameters in total. For S-pSGLD and Sd-pSGLD, these parameters are evenly distributed into M groups, where M ranges from 4 to 42,200. Accuracy, IAC, and ESS are reported in Fig. 2 using 100,000 posterior samples after a burn-in period of 150,000 steps. More details on the implementation of the model regarding training and evaluation can be found in the Appendix.
As shown in Fig. 2(a), for S-pSGLD we observe that as we increase the number of groups the accuracy drops dramatically whereas Sd-pSGLD’s accuracy improves slightly and then remains fairly stable. In the best case, Sd-pSGLD achieves an accuracy of 96.3% with 32 groups and dropout rate of 0.5 which outperforms pSGLD with accuracy of 94.2%. We speculate that the dropout-like behavior is beneficial for regularizing the model (much like normal dropout), hence the improved accuracy across all dropout rates. Similarly, a single sample used for the Monte Carlo estimate in S-SGMCMC may not be enough as the number of groups M increase; however, increasing the number of samples in this scenario is infeasible due to S-SGMCMC scaling as O(M). Fig. 2(b-c) portrays the comparison between number of groups and mixing time metrics IAC and ESS. As the number of groups gradually increase, we note that S-pSGLD mixes faster, as does Sd-pSGLD to lesser and lesser degrees as ρ increases. This behavior is to be expected due to Theorem 2, with Sd-pSGLD exhibiting mixing times more similar to pSGLD when ρ = 0.5 and more similar to S-pSGLD when ρ = 0.1.
6.2 SYSTEMATIC COMPARISON ON REAL-WORLD DATA
The goal of these experiments is to test the proposed methodology on larger-scale datasets which mimic real-world data: CIFAR-10, SVHN, and FMNIST. We evaluate our methods on predictive accuracy and on the mixing times of the chains. We employ ResNet-20 for SVHN and FMNIST without any data augmentation. For CIFAR10 we employ the same data augmentation process as proposed in Cubuk et al. (2019). We evaluate accuracy over time and overall mixing time (via IAC and ESS) with two base algorithms: pSGLD and SGHMC. For efficiency purposes we limited our scope to models with either fully joint or fully factorized posteriors. As such, for the latter we employed Sd-SGMCMC methods
2We used the TensorFlow implementation for ESS which uses the direct sum for the autocorrelation.
as S-SGMCMC would not be feasible with the amount of parameter groups present. Bernoulli(ρ) and uniform masking distributions were investigated and are denoted as SBernoulli-SGMCMC and SUniform-SGMCMC respectively, with ρ varying between datasets as determined by a hyperparameter search (detailed in the Appendix).
In Fig. 3 we observe how quickly the proposed methods and the baseline SGMCMC methods approach their optimum accuracy over the course of training. As is shown, SBernoulli-SGMCMC and SUniform-SGMCMC appear to achieve optimal accuracy values much faster than SGMCMC on all datasets and with all base sampling schemes. In some cases, the variational methods achieve better accuracy values than the baseline methods, as seen for CIFAR10 in Fig. 3.
Mixing Time Comparisons We further validated our findings from Section 6.1 by evaluating the IAC and ESS on larger datasets using various methods. Both pSGLD and SGHMC were used as base methods in conjunction with both S-SGMCMC and Sd-SGMCMC using a Bernoulli masking distribution. IAC and ESS were calculated for these methods using the latest 5,000 samples after sampling for 300 epochs; the results of which can be found in Table 1. For CIFAR-10, we see that Sd-SGMCMC with every parameter in a different group mixes the fastest against all other methods. Likewise, for SVHN and FMNIST, Sd-pSGLD with every parameter belonging to its own group mixes faster than all other methods. At times it does appear that increasing the number of parameter groups causes slower mixing time for S-SGMCMC. This could potentially be attributed to large variance in the gradients from using only a single sample per Monte Carlo estimate.
6.3 EXPLORING PARTITIONING SCHEMES
This part of the study aims to explore the capabilities of the proposed methodology further. Here we explore different parameter partitioning schemes on regression datasets.
Here we present the results with different partitions on various regression datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). For the evaluation we chose a simple fully connected network with two layers of 50 neurons each, and we use SGLD as an optimizer. As a performance metric we chose mean squared error (MSE). We performed hyperparameter tuning over different learning rates, and the final results are means with standard deviations over 5 runs with different seeds. We do not observe any systematic trends across the partitions, apart from the fact that in some cases random partitioning performs better. Thus either random partitioning or the fully factorized partitioning, where every parameter is in its own group, appears to be a valid choice a priori; especially the latter, given the faster mixing times noted earlier for this partitioning scheme. More details about the partitioning-scheme experiments can be found in the Appendix.
7 CONCLUSIONS
In an attempt to hybridize MCMC and VI, we proposed S-SGMCMC: an approach that produces samples from a structured posterior by running SGMCMC on a modified energy function. The resulting Markov chain becomes asymptotically decoupled across user-specified groups of parameters, resulting in faster convergence. For better computational efficiency, we proposed Sd-SGMCMC: a further generalization of S-SGMCMC inspired by dropout. This extension allows interpolating between an SGMCMC algorithm and its corresponding S-SGMCMC method.
Our experimental results demonstrate that the proposed methods impose structure over posterior distributions, increase mixing times of the chains, and result in similar or better posterior predictive accuracies compared to SGMCMC on a variety of (deep) models. Our experimental evaluations have provided strong empirical evidence for the efficacy of our approach. We also showed that the proposed approach is compatible with various deep learning architectures, including ResNet-20, and various datasets.
Despite its proven capabilities, our proposed methodology does come with some limitations. Namely, for quick access our methods require keeping chains of samples on the GPU whereas the baseline SGMCMC methods can simply save samples to disk. Additionally, S-SGMCMC scales poorly with respect to the number of parameter groups. Sd-SGMCMC manages to break this dependency; however, it still requires slightly more compute than SGMCMC per sample, but it is comparable in wall clock time. Possible future work could focus on more theoretical analyses of S-SGMCMC, such as formal proofs of convergence.
8 ETHICS STATEMENT
The main focus of our work is to train models faster by decreasing the convergence time of their training phase. In this scope we are not aware of any ethical concerns of our research.
9 REPRODUCIBILITY
For this work we have made sure to guarantee reproducibility of the results. We provide all the technical details of our experiments and their implementations, like the hyperparameters, the data, the frameworks and the experimental setups. We have used only open source datasets that are easily accessible to the public. Finally we commit to release the code that we implemented for this work via a public repository.
A THEOREM 1
Proof. We begin with some preliminaries from the main text. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = − \sum_{(x,y)∈D} log p(y|x, θ) − log p(θ).   (10)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
We also write the equation for KL divergence from the main text:
J(q(θ)) = D_{KL}(q(θ) || p(θ|D))   (11)
≡ E_{θ∼q}[ log( q(θ) / p(θ|D) ) ]   (12)
We then rewrite Eq. 4 as follows:
J(q(θ)) = E_{θ∼q}[log q(θ)] − E_{θ∼q}[log p(θ, D)] + C   (13)
= E_{θ_i∼q_i}[log q_i(θ_i)] + \sum_{j≠i} E_{θ_j∼q_j}[log q_j(θ_j)] − ∫ log p(θ, D) q_i(θ_i) dθ_i \prod_{j≠i} q_j(θ_j) dθ_j + C   (14)
for some i ∈ {1, . . . ,M} where ¬i := {1, . . . ,M} \ {i} and C = log p(D). In order to find the optimal distribution that respects the factorization constraints imposed between parameter groups, we need to minimize this functional over q — or rather every qi. This is done by taking the functional derivative of J with respect to qi, setting it equal to zero, and solving for qi:
δJ(q(θ)) / δq_i(θ_i) = ∫ log p(θ, D) \prod_{j≠i} q_j(θ_j) dθ_j − 1 − log q_i(θ_i) := 0   (15)
=⇒ log q_i(θ_i) = E_{θ̃_{¬i}∼q_{¬i}}[ log p(θ_i, θ̃_{¬i}, D) ] − 1   (16)
=⇒ q_i(θ_i) ∝ exp{ E_{θ̃_{¬i}∼q_{¬i}}[ log p(θ_i, θ̃_{¬i}, D) ] }.   (17)
By defining the energy U^{(S)}_i(θ_i) = −E_{θ̃_{¬i}∼q_{¬i}}[ log p(θ_i, θ̃_{¬i}, D) ], we realize that by minimizing the KL divergence in Eq. 4, the approximate posterior distribution q = \prod_{i=1}^{M} q_i takes the form of a Boltzmann distribution as in Eq. 1 with U^{(S)}(θ) = \sum_{i=1}^{M} U^{(S)}_i(θ_i).
It remains to be shown that the solution is unique. To this end, we refer to the convexity of the KL divergence in function space (Cover & Thomas, 2001). This implies that the stationary point of the KL is indeed a global optimum and unique.
B DERIVING U (Sd)
With just a slight shift in perspective, it is actually possible to further generalize U (S) (and consequently S-SGMCMC) to produce a broader class of approximate sampling algorithms. This is done
by first noting that U (S) can be represented with a scaled double-expectation:
U^{(S)}(θ) = − (M / E_{r∼p^{(S)}}[\sum_{i=1}^{M} r_i]) E_{r∼p^{(S)}} E_{θ̃∼q}[log p(rθ + (1−r)θ̃, D)]   (18)
where p^{(S)}(r) = Cat(r; M^{−1}, . . . , M^{−1}) and (rθ + (1−r)θ̃)_i is equal to θ_i if r_i = 1 and θ̃_i otherwise, for i = 1, . . . , M. Note that this is constructed in this manner specifically so that U^{(S)} remains differentiable with respect to θ. Also note that though the denominator appears superfluous as E_{r∼p^{(S)}}[\sum_{i=1}^{M} r_i] = 1, it is necessary for certain theoretic properties, as seen in Theorem 2.
By replacing p^{(S)} with a more flexible distribution, we can further generalize and encapsulate different energy functions to sample from. One such choice is p^{(Sd)}(r; ρ) ∝ \prod_{i=1}^{M} Bern(r_i; ρ) 1(\sum_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1).3 Substituting p^{(Sd)} for p^{(S)} in Eq. (18) yields a new energy function that we will refer to as U^{(Sd)}. We note that this choice of distribution leads to a dropout-like behavior (Nalisnick et al., 2019; Srivastava et al., 2014), where the composition of model parameters as rθ + (1−r)θ̃ gives each parameter group θ_i a probability of approximately ρ of being used in a prediction and a (1−ρ) probability of being replaced by θ̃_i from the approximate posterior (in traditional dropout, θ_i would instead be replaced with 0). Likewise, we will denote methods that use this energy function for sampling as structured dropout SGMCMC (Sd-SGMCMC), with different variants all sharing the same Sd prefix (e.g. Sd-SGHMC).
In practice, the double-expectation in U (Sd) is jointly approximated using a Monte Carlo estimate with K samples. This leads to Eq. (8) in the main paper. We note that by approximating U (Sd) in this way, computing a gradient no longer scales on the order of O(M), but rather O(K). This means that the choice of structure imposed on the posterior distribution remains independent of computing resources. As such, configurations with large amounts of parameter groups are typically only feasible when using Sd-SGMCMC as S-SGMCMC would use too much memory and/or compute per sample.
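For completeness, masks from p^{(Sd)} can be drawn exactly by rejection; a short sketch (names assumed) is given below.

```python
import numpy as np

def sample_p_sd(M, rho, rng):
    # Draw r ~ prod_i Bern(rho), conditioned on sum_i r_i > 0, by rejection.
    while True:
        r = (rng.random(M) < rho).astype(int)
        if r.sum() > 0:
            return r

# The rejection probability is (1 - rho)^M, so for large M*rho the conditioning is
# essentially free and unconditional Bernoulli(rho) masks are an accurate stand-in.
rng = np.random.default_rng(0)
print(sample_p_sd(M=8, rho=0.5, rng=rng))
```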
C THEOREM 2
Theorem 2. For a given set of parameters θ partitioned into M groups, under minor assumptions (i) U (Sd) → U as ρ → 1 and (ii) U (Sd) → U (S) as ρ → 0. Thus, distributions approximated by Sd-SGMCMC lie on a continuum with those generated by S-SGMCMC at one extreme and with those from SGMCMC at the other.
Proof. Assume an arbitrary θ, D, n ∈ N, and that Eθ̃∼q [ log p(rθ + (1− r)θ̃,D) ] exists for r ∈ R.
As an aside, this proof assumes that p^{(Sd)}(r; ρ) ∝ \prod_{i=1}^{M} Bern(r_i; ρ) 1(\sum_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1); however, the theorem still holds for an arbitrary p^{(Sd)} so long as the mean approaches 1 and variance approaches 0 as n → ∞.
(i) Let r^{(n)} ∼ p^{(Sd)}(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 1. It follows that r^{(n)} → {1}^M as n → ∞ in distribution (see Lemma 1 in Supplement). Due to the bounded and finite support R, we find the following:
U^{(Sd)}(θ) = − (M / E_{r∼p^{(Sd)}}[\sum_{i=1}^{M} r_i]) \sum_{r∈R} p^{(Sd)}(r; ρ_n) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ]   (19)
→ − (M/M) \sum_{r∈R} 1(∀i r_i = 1) E_{θ̃∼q}[log p(θ, D)]  as n → ∞   (20)
= − log p(θ, D) = U(θ)   (21)
(ii) Let r^{(n)} ∼ p^{(Sd)}(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 0. It follows that r^{(n)} → r ∼ Cat(M^{−1}, . . . , M^{−1}) as n → ∞ in distribution (see Lemma 2 in Supplement). Due to the bounded and finite support R, we find the following:
U^{(Sd)}(θ) = − (M / E_{r∼p^{(Sd)}}[\sum_{i=1}^{M} r_i]) \sum_{r∈R} p^{(Sd)}(r; ρ_n) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ]   (22)
→ − (M/1) \sum_{r∈R} 1(\sum_{i=1}^{M} r_i = 1) (1/M) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ]  as n → ∞   (23)
= − \sum_{i=1}^{M} E_{θ̃∼q}[log p(θ_i, θ̃_{¬i}, D)] = U^{(S)}(θ)   (24)
3Other choices of distribution that are well justified include any with support over [0, 1]^M and with measure 0 over {0}^M. Exploring the effects these distributions have is an interesting line of future inquiry.
For both Lemmas 1 and 2, let
p^{(Sd)}(r; ρ) = [ ρ^{\sum_{i=1}^{M} r_i} (1−ρ)^{M−\sum_{i=1}^{M} r_i} / (1 − (1−ρ)^M) ] · 1(∀i r_i ∈ {0, 1}) · 1(\sum_{i=1}^{M} r_i > 0)   (25)
Lemma 1. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 1 as n → ∞ then r(n) → r ∼ δ({1}M ) in distribution as n→∞.
Proof.
p^{(Sd)}(r = {1}^M; ρ_n) = ρ_n^M (1−ρ_n)^0 / (1 − (1−ρ_n)^M)   (26)
→ 1  as n → ∞   (27)
=⇒ r^{(n)} → δ({1}^M) in distribution.   (28)
Lemma 2. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 0 as n → ∞ then r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
Proof. Let i ∈ {1, . . . ,M}.
p^{(Sd)}(r_i = 1, r_{¬i} = 0; ρ_n) = ρ_n (1−ρ_n)^{M−1} / (1 − (1−ρ_n)^M)   (29)
(l'Hôpital's rule) = [ (1−ρ_n)^{M−1} − ρ_n (M−1)(1−ρ_n)^{M−2} ] / [ M (1−ρ_n)^{M−1} ]   (30)
→ 1/M  as n → ∞   (31)
Since the resulting probabilities sum to 1, this implies that r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
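As a quick numerical sanity check of the two limits established above, one can enumerate p^{(Sd)} on a small support and watch its mass shift from the all-ones vector (ρ → 1) to the one-hot vectors (ρ → 0); the snippet below is purely illustrative.

```python
import itertools

M = 3

def p_sd(r, rho):
    # Eq. (25): iid Bernoulli(rho) entries conditioned on at least one being active.
    s = sum(r)
    return rho ** s * (1 - rho) ** (M - s) / (1 - (1 - rho) ** M)

support = [r for r in itertools.product([0, 1], repeat=M) if sum(r) > 0]
for rho in (0.999, 0.001):
    probs = {r: p_sd(r, rho) for r in support}
    top = max(probs, key=probs.get)
    print(rho, top, round(probs[top], 3), round(sum(probs.values()), 6))
    # rho -> 1: nearly all mass on (1, 1, 1), so U^(Sd) -> U          (Lemma 1)
    # rho -> 0: mass ~1/M on each one-hot vector, so U^(Sd) -> U^(S)  (Lemma 2)
```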
D DERIVING U (Sd)
To derive U (Sd), we must first start with a shift in perspective on how U (S) is represented. We will rewrite the function in the following way:
U^{(S)}(θ) = − \sum_{i=1}^{M} E_{θ_{¬i}∼q_{¬i}}[log p([θ_i, θ_{¬i}], D)]   (32)
= − (M / E_{r∼p^{(S)}}[\sum_{i=1}^{M} r_i]) E_{r∼p^{(S)}} E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ]   (33)
where p^{(S)} is an M-dimensional categorical distribution with uniform weights M^{−1} and p(rθ + (1−r)θ̃, D) is the joint probability of the parameters taking values rθ + (1−r)θ̃ and the data D.4
We note that changing the distribution of r leads to different energy functions to sample from. One such choice is p^{(Sd)}(r; ρ) ∝ ρ^{\sum_{i=1}^{M} r_i} (1−ρ)^{M−\sum_{i=1}^{M} r_i} 1(∀i r_i ∈ {0, 1}) 1(\sum_{i=1}^{M} r_i > 0) for ρ ∈ (0, 1). Note that this is identical to r_i ∼ Bernoulli(ρ) i.i.d., conditioned on \sum_{i=1}^{M} r_i > 0. Let the support of p^{(Sd)} be denoted as R = {0, 1}^M \ {0}^M. This leads to the following energy function:
U^{(Sd)}(θ) = − (M / E_{r∼p^{(Sd)}}[\sum_{i=1}^{M} r_i]) E_{r∼p^{(Sd)}} E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ].   (34)
In practice, a few approximations are made to compute the corresponding U (Sd). Firstly, we approximate p(Sd) with an M -dimensional Bernoulli(ρ) distribution as the difference is minute when Mρ is large. Secondly, the outer expectation in Eq. (34) is approximated with a Monte Carlo estimate of K samples. The inner expectation is also approximated with a Monte Carlo estimate using the latest approximate posterior q̂(t). However, just like for S-SGMCMC, only a single sample is used. This further leads to:
Û^{(Sd)}(θ^{(t)}; D̃) ≈ (1/(Kρ)) \sum_{k=1}^{K} Û(r^{(t,k)} θ^{(t)} + (1 − r^{(t,k)}) θ̃^{(t,k)}; D̃),   (35)
which matches Eq. (8) with E_{r∼p^{(Sd)}}[\sum_{i=1}^{M} r_i] ≈ Mρ.
E ALGORITHM FOR Sd-SGMCMC
The procedure for Sd-SGMCMC can be seen in Algorithm 3.
Algorithm 3: Sd-SGMCMC
Input: Initial sample θ^{(0)}; parameter partitions θ_1, . . . , θ_M; data set D; initial auxiliary statistics ξ^{(0)}; step sizes {ε_t}_{t=1,...,T}; masking distribution p^{(Sd)}; dropout iterations K.
Output: q̂^{(T)}(θ) := {θ^{(t)}}_{t=1,...,T}
1: for t = 0 to T − 1 do
2:   Sample minibatch D̃^{(t)} ⊂ D
3:   for k = 1 to K do
4:     Sample masks r^{(t,k)}_1, . . . , r^{(t,k)}_M ∼ p^{(Sd)}
5:     Sample θ̃^{(t,k)} ∼ q̂^{(t)}
6:     θ^{(t,k)} = [r^{(t,k)}_i θ^{(t)}_i + (1 − r^{(t,k)}_i) θ̃^{(t,k)}_i]_{i=1,...,M}
7:     Û^{(Sd,t)}_k = Û(θ^{(t,k)}; D̃^{(t)})
8:   end for
9:   ∇_θÛ^{(Sd,t)} = (M / (K E_{r∼p^{(Sd)}}[\sum_{i=1}^{M} r_i])) \sum_{k=1}^{K} ∇_θÛ^{(Sd,t)}_k
10:  θ^{(t+1)}, ξ^{(t+1)} = SGMCMC_step(θ^{(t)}, ∇_θÛ^{(Sd,t)}, ξ^{(t)}, ε_t)
11: end for
12: return q̂^{(T)}(θ)
4rθ + (1 − r)θ̃ is a slight abuse of notation that is meant to represent masking out θi when ri = 0 and masking out θ̃i when ri = 1.
F SGMCMC UPDATE RULES
The update rules for SGLD, pSGLD, and SGHMC are defined as follows:
SGLD:   θ^{(t+1)} = θ^{(t)} − (ε_t/2) ∇_θÛ(θ^{(t)}) + N(0, ε_t I)   (36)
pSGLD:  θ^{(t+1)} = θ^{(t)} − (ε_t/2) [ R(θ^{(t)}) ∇_θÛ(θ^{(t)}) + \sum_θ ∇_θ R(θ^{(t)}) ] + N(0, ε_t R(θ^{(t)}))   (37)
SGHMC:  θ^{(t+1)} = θ^{(t)} + ε_t M^{−1} m^{(t+1)}   (38)
        m^{(t+1)} = (1 − γ ε_t M^{−1}) m^{(t)} − ε_t ∇_θÛ(θ^{(t)}) + N(0, 2γ − ε_t V̂(θ^{(t)}))   (39)
where ε_t is the step size at time step t, R(·) and M are preconditioners, γ ≥ 0 is a friction term, and V̂(·) is an estimate of the covariance induced by the stochastic gradient.5
The update rules for the S-SGMCMC variants are similarly defined as Eqs. 36-39 but all instances of Û(θ(t)) are replaced with Û (S)(θ(t)). Likewise, replacing with Û (Sd)(θ(t)) yields the Sd-SGMCMC variants.
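For reference, a minimal sketch of the SGHMC update in Eqs. (38)-(39) with mass matrix M = I and the V̂ term dropped might look as follows; these simplifications and the variable names are assumptions made for readability, not the released implementation.

```python
import numpy as np

def sghmc_step(theta, momentum, grad_U_hat, eps, gamma, rng):
    # Eq. (39) with M = I and V_hat dropped:
    #   m <- (1 - gamma * eps) * m - eps * grad U_hat(theta) + N(0, 2 * gamma)
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma), size=theta.shape)
    momentum = (1.0 - gamma * eps) * momentum - eps * grad_U_hat(theta) + noise
    # Eq. (38): theta <- theta + eps * M^{-1} m
    theta = theta + eps * momentum
    return theta, momentum
```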
G ABLATION STUDY
This subsection aims to further explore the capabilities of the proposed methodology. More specifically, we visualize uncertainty for a two-layer Fully Connected Network and experiment with various parameter partitions.
Parameter Partitions. We tested our proposal with four partitioning schemes on a fully connected network with two hidden layers of 50 neurons each, on a regression task. The partitioning schemes that we used are the following: (a) the parameters are split into 3 groups randomly, (b) the parameters are split by layer (3 layers: 1 input and 2 hidden), (c) by activation components inside the layers, and (d) every parameter belongs to its own group. We used the following datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). Every dataset was split into 75% training data, 10% validation data, and 15% test data. We trained the model on the training set and validated it on the validation set with early stopping. For every dataset and every partitioning scheme we used learning rates of 1e-3, 1e-4, 1e-5, 1e-6, and 1e-7 for hyperparameter tuning. For each combination of partition and dataset, we chose the learning rate that provides the best score on the test set, where the score used is the mean squared error. The final learning rates that we used are presented in Table 3.
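The four partitioning schemes can be built as index sets over a flattened parameter vector; the sketch below assumes a 13-dimensional input (for concreteness) and the two 50-unit hidden layers described above, with all names being illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameter shapes: input dim 13, two hidden layers of 50 units, scalar output.
shapes = {"w1": (13, 50), "b1": (50,), "w2": (50, 50), "b2": (50,), "w3": (50, 1), "b3": (1,)}
sizes = {k: int(np.prod(s)) for k, s in shapes.items()}
offsets, total = {}, 0
for k, n in sizes.items():
    offsets[k], total = total, total + n
all_idx = np.arange(total)

def random_partition(n_groups=3):
    # (a) parameters split into n_groups uniformly at random
    return np.array_split(rng.permutation(all_idx), n_groups)

def layer_partition():
    # (b) one group per layer (a layer's weights and biases stay together)
    layers = [("w1", "b1"), ("w2", "b2"), ("w3", "b3")]
    return [np.concatenate([offsets[k] + np.arange(sizes[k]) for k in layer]) for layer in layers]

def neuron_partition():
    # (c) one group per unit: its incoming weights plus its bias
    groups = []
    for w, b in [("w1", "b1"), ("w2", "b2"), ("w3", "b3")]:
        W = (offsets[w] + np.arange(sizes[w])).reshape(shapes[w])
        for j in range(shapes[w][1]):
            groups.append(np.append(W[:, j], offsets[b] + j))
    return groups

def fully_factorized_partition():
    # (d) every parameter in its own group
    return [np.array([i]) for i in all_idx]
```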
5Note that we abuse notation in Eqs. 36-39 where the addition ofN (µ,Σ) denotes the addition of a normally distributed random variable with mean µ and covariance Σ.
H DETAILS ON EXPERIMENTS
H.1 QUALITATIVE REGRESSION EXPERIMENTS
First, we aim to showcase qualitative differences in the empirical posterior distributions generated by a baseline SGMCMC algorithm and our proposed variants. To do so, we consider a regression task where 100 randomly sampled three-dimensional covariates {x_i = [x_{i,1}, x_{i,2}, x_{i,3}]^T}_{i=1,...,100} are used to sample response values y_i ∼ N(w^T x_i + b, σ²), where w = [w_1, w_2, w_3]^T = [1.5, −0.8, 1.3]^T, b = 0.5, and σ² = 1. More details on the generation process for x can be found in the Supplement.
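A sketch of this data-generating process (the covariate distribution is deferred to the Supplement, so standard normal covariates are assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, w_true, b_true, sigma = 100, np.array([1.5, -0.8, 1.3]), 0.5, 1.0

X = rng.normal(size=(n, 3))                                   # assumed covariate distribution
y = X @ w_true + b_true + rng.normal(scale=sigma, size=n)     # y_i ~ N(w^T x_i + b, sigma^2)
```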
We choose to fit a linear regression model of the same form as the generation process. σ2 is assumed to be known. Thus, θ = [w1, w2, w3, b]. A standard normal distribution is used as the prior for each parameter. Due to conjugacy, the posterior distribution can be calculated analytically. As such, the MAP is roughly θ̂MAP ≈ [0.52, 0.31, 0.47, 0.84]. The approximated posterior distributions for θ are found using SGLD, S-SGLD, and Sd-SGLD. For the latter two sampling schemes, two parameter partitions are tested: (i) two groups of parameters where θ1 = [w1, w2] and θ2 = [w3, b] and (ii) four groups of parameters where θ1 = w1, θ2 = w2, θ3 = w3, and θ4 = b. For Sd-SGLD, ρ = 0.5 and K = 4 was used.
The resulting posterior distributions for (w1, w2) and (w1, w3) from all five scenarios, with SGLD in the leftmost column as our baseline, can be seen in Fig. 1. We observe that, as expected, correlations between (w1, w2) still exist when they are allocated to the same parameter group and become apparently independent when assigned to different groups. We also note that the variance of the distributions shrink as the parameter space is partitioned into smaller groups. The underestimation of posterior variance is a commonly reported finding for VI techniques and is interesting to note that our non-parametric methods appear to exhibit this behavior as well. Finally, it appears that the Sd-SGLD adequately approximates S-SGLD with just slightly higher variances and very minor correlations between parameter groups being exhibited.
H.2 REAL-WORLD DATA EXPERIMENTS
Framework details. In this subsection, we provide more detailed results for our experiments and a grid search for FMNIST, CIFAR10, and SVHN. We note that all the code apart from the metrics was written in PyTorch (Paszke et al., 2019). Regarding the metrics, ESS was adopted from the TensorFlow probability library (Dillon et al., 2017; Abadi et al., 2016) and IAC was calculated in python. For all the experiments, we used a seed of 2. Moreover, we note that we grouped the parameters in an ordered way for Sd-pSGLD and S-pSGLD. We denoted previously that Kρ is the number of groups. So every parameter will go to the i mod Kρ group where i is the parameter index. If, for instance, Kρ is 8 then parameter 1 will go to group 1, parameter 2 will go to group 2, parameter 9 will go to group 1, etc. If Kρ is the same as the number of parameters, every parameter will go into its own group.
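The ordered grouping rule described above can be written as a small helper (names assumed):

```python
def ordered_groups(num_params, num_groups):
    # Parameter index i is assigned to group (i mod num_groups), as described above.
    groups = [[] for _ in range(num_groups)]
    for i in range(num_params):
        groups[i % num_groups].append(i)
    return groups

# With 8 groups this reproduces the 1-indexed example in the text
# (parameter 1 -> group 1, parameter 2 -> group 2, parameter 9 -> group 1), up to 0-indexing.
```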
MNIST. Regarding MNIST, we ran all the experiments for 500 epochs with a batch size of 500 and a learning rate of 1e-2. For Sd-pSGLD, the K is set to 300, which is the forward passes that the model does within 1 epoch. For the grouping of the parameters, for Sd-pSGLD we used group sizes of 2,4,8,32,128,512,2048,4096,8192,16384,32768 and 42200; and for S-pSGLD we used groups sizes of 2,8,32,128,512,2048,4096 and 8192.
FashionMNIST. We ran all experiments for 300 epochs with a batch size of 500. For Sd-SGHMC, K is set to 2, which is the number of forward passes the model does within one epoch. We observed while experimenting with K that we do not need to set it very high; even a small value like 2, which we used here, is enough to produce the same results as a K of 200 or 300, saving significant training time. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3 and for S-SGHMC a learning rate of 1e-2.
CIFAR10. The setup is similar to the one we used for FashionMNIST, as we ran all experiments for 300 epochs with a batch size of 128. For Sd-SGHMC, K is set to 2, which is the number of forward passes the model does within one epoch. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3, and for S-SGHMC a learning rate of 1e-2. We focused our strategy on evaluating the accuracy of the different combinations of hyperparameters with the proposed methods, as can be seen in Figs. 4 and 5. Quantitative results on IAC, ESS and maximum accuracy are depicted in Tables 6 and 7.
SVHN. We also ran all of the experiments for 300 epochs with a batch size of 128. Here for Sd-SGHMC, the K is set to 2, which is the forward passes that the model does within 1 epoch. We note that K here is less than on CIFAR10 and FashionMNIST, but as we mentioned before, this does not make a difference for our results, as we have tested. Regarding the parameter partitioning, for Sd-SGMCMC, we put every parameter in a different group, and for S-SGMCMC we used groups of 2,4,8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with
learning rates of 1e-1,1e-2,1e-3,1e-4,1e-5,1e-6. For S-pSGLD we used a learning rate of 1e-4, and for S-SGHMC, a learning rate of 1e-2. Same as in CIFAR10, we conducted a grid search for learning rate, dropout rate, and optimizers to find the best performing models and test them for their accuracy. We can observe these results in Figure 3 in the main paper. The strategy that we followed is the same as in CIFAR10 and is presented in Figs. 6 and 7. | 1. What is the focus and contribution of the paper on SGMCMC algorithms?
2. What are the concerns regarding the proposed algorithm's motivation and its relation to the target posterior distribution?
3. How does the reviewer assess the comparison between the proposed method and existing SGMCMC algorithms? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a hybrid SGMCMC algorithm, a Langevin-type algorithm that operates on a modified energy function based on the variational inference (VI) approach.
Review
The proposed algorithm is not well motivated: the VI approach aims to obtain a point estimate, SGMCMC aims to draw a sequence of samples from the target posterior, but the proposed algorithm is to simulate a sequence of samples from a modified distribution. It is unclear how well the modified distribution approximates the target distribution and how useful the pseudo-posterior samples are for statistical inference of the target distribution.
The comparison with the existing SGMCMC algorithm is not convincing. It seems that they are ``comparable in wall clock time'', while the baselines pSGLD and SGHMC might not be the state-of-the-art. |
ICLR | Title
Structured Stochastic Gradient MCMC
Abstract
Stochastic gradient Markov Chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior’s functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, where parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. This scheme can be further modified in a “dropout” manner, leading to even more scalability. By implementing the scheme on a ResNet-20 architecture, we obtain better predictive likelihoods and faster mixing time than full SGMCMC.
1 INTRODUCTION
There has been much recent interest in deep Bayesian neural networks (BNN) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). BNNs rely on ensemble averages over model parameters typically obtained from Markov chain Monte Carlo (MCMC) algorithms, which contrasts to regular neural networks that depend on a single set of parameters. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching.
The main downside of using SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI) algorithms that approximate the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018).
One downside of VI approximations is their strong distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These distributional assumptions are frequently over-simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018).
In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI. While our approach remains a sampling algorithm resembling SGMCMC, we speed up the mixing time by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break. It makes no assumptions on the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008).
In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by assuming a functional view on variational inference. We show how to sample from
this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model’s joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo.
In sum, our contributions are as follows:
• We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and any in-between.
• We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a "mean-field" version where all posterior correlations are broken.
• We show in both small and large scale experiments that our method well approximates posterior marginals and gives improved results over SGMCMC on Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy.
Our paper is structured as follows: Section 2 presents the related work to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details experiments and their results, and Section 7 contains our concluding thoughts.
2 RELATED WORK
Our work connects both to (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). For space limitations, we focus on the most related work at the intersection of both topics.
Among the earliest works to hybridize both approaches was that of de Freitas et al. (2001), who constructed a variational proposal distribution in the Metropolis-Hastings step of MCMC. An improved approach was introduced by Habib & Barber (2018), who fit a more accurate approximating distribution by introducing low-dimensional auxiliary variables. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and by Wang et al. (2018) and Gong et al. (2018), who leveraged meta-learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embed MCMC steps into the variational inference approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution, then fit its parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics.
In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from the target distribution that respects an imposed independence structure.
3 PRELIMINARIES
Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully-factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that only relies on the assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? As follows, we will show how such a scheme can be realized. We will first derive a modified energy function for Langevin dynamics that we can sample from and then prove that its negative exponential results in the optimal posterior approximation subject to specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution.
Before we explain our new method, we introduce the setup and common notation. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) =∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = − log p(θ, D) = − ∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (1)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
A popular approach for approximating the entire posterior distribution is by deploying Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often times through the use of a random walk. While being very accurate and having asymptotic guarantees, these methods are known to not scale well with respect to both data and parameters (Brooks et al., 2011; Geyer, 1992).
Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples of these algorithms include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014).
As alluded to, the basis of SGMCMC algorithms is using a sampled minibatch of data D̃ from D to produce a differentiable, unbiased estimate of the posterior energy function:
U(θ) ≈ Û(θ; D̃) = − (N/|D̃|) ∑_{(x,y)∈D̃} log p(y|x, θ) − log p(θ). (2)
Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, the SGLD update is
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û(θ^(t); D̃_t) + ξ_t, where ξ_t ∼ N(0, ε_t I). (3)
Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂(t)(θ|D). Should the algorithms converge, then limt→∞ p̂(t)(θ|D) = p(θ|D).
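To make Eqs. (2) and (3) concrete, the following is a minimal PyTorch sketch of the minibatch energy estimate and a single SGLD step. The helper functions `log_lik_fn` and `log_prior_fn` are assumed to be supplied by the user; the snippet is only illustrative and not tied to any particular model or implementation.

```python
import torch

def minibatch_energy(log_lik_fn, log_prior_fn, params, batch, dataset_size):
    """Unbiased estimate of U(theta) from Eq. (2): rescale the minibatch
    log-likelihood by N/|batch| and add the negative log-prior."""
    x, y = batch
    log_lik = log_lik_fn(params, x, y).sum()        # sum of log p(y|x, theta) over the minibatch
    return -(dataset_size / x.shape[0]) * log_lik - log_prior_fn(params)

def sgld_step(params, energy, step_size):
    """One SGLD update (Eq. (3)): half a gradient step on the energy plus Gaussian noise."""
    grads = torch.autograd.grad(energy, params)
    new_params = []
    for p, g in zip(params, grads):
        noise = torch.randn_like(p) * step_size ** 0.5
        new_params.append((p - 0.5 * step_size * g + noise).detach().requires_grad_(True))
    return new_params
```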
4 STRUCTURED SGMCMC
By design, SGMCMC methods produce a fully joint posterior distribution over parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of
dimensionality. This is typically observed with slow convergence times and potentially unexplored parameter spaces. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI). This would reduce the number of various potential posterior correlations that the model would need to capture while sampling.
To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ1, . . . , θM . This partitioning structure is assumed to be known a priori. We will denote the distribution that respects this partitioning structure as q(θ) = ∏M i=1 qi(θi). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criteria, such as KL-divergence. This leads to a natural objective function to minimize:
J(q(θ)) = D_KL(q(θ) || p(θ|D)) ≡ E_{θ∼q}[ log ( q(θ) / p(θ|D) ) ] (4)
The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θi, θ̃¬i} for any i where θ̃ ∼ q and define a structured energy function:
U^(S)(θ) = ∑_{i=1}^{M} U_i^(S)(θ_i), with U_i^(S)(θ_i) := E_{θ̃∼q} U({θ_i, θ̃_¬i}) = −E_{θ̃∼q} log p(θ_i, θ̃_¬i, D). (5)
That is, we first define the marginals U_i^(S)(θ_i), where we marginalize U(θ) with respect to all q(θ)-factors except q_i(θ_i), and then sum up these marginals to define U^(S)(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U^(S) allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm that actually approximates this distribution q(θ), which will be discussed shortly.
Theorem 1. The unique solution to the KL minimization problem given in Eq. 4 is given by the Boltzmann distribution q(θ) ∝ exp{− ∑_{i=1}^{M} U_i^(S)(θ_i)}. Please refer to the Supplement for the proof.
In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U^(S) (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally only work well with small amounts of data, and (2) more importantly, the marginals U_i^(S)(θ_i) do not have a closed-form solution but need to be approximated via samples from q. Luckily, since SGMCMC methods only need access to noisy estimates of U^(S), we can run these algorithms on a stochastic estimate of Eq. (5),
U^(S)(θ) ≈ Û^(S)(θ; D̃) = ∑_{i=1}^{M} E_{θ̃∼q} Û({θ_i, θ̃_¬i}; D̃), (6)
where Û(·) is defined in Eq. (2). In practice, at timestep t for i = 1, . . . ,M we estimate Eθ̃∼qÛ({θi, θ̃¬i}; D̃t) with a Monte Carlo approximation. In place of θ̃, we use a single sample of θ̃(t) taken from the current approximate distribution q̂(t) which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ(1), θ(2), . . . , θ(t)}). This leads to the following update step for structured SGLD (S-SGLD):
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û^(S)(θ; D̃) + ξ_t, where ξ_t ∼ N(0, ε_t I). (7)
Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) can be seen in Algorithm 2.
Remark. Since ∇_θ Û^(S) is an unbiased estimator of ∇_θ U^(S), we are guaranteed to converge to q when sampling with S-SGMCMC with sufficiently decreasing learning rates, so long as we are in a stationary state. While the procedure is unlikely to be initialized in a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. A general proof of convergence is outside the scope of this work and is left to follow-up research.
An example of S-SGMCMC can be seen in Fig. 1(a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w1, w2, and w3; (b-left) dependence between w1 and w2 but independence between w3 and the other coefficients; (b-right) fully factorized. Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, it also appears that the variance shrinks as we induce these factorizations which is a commonly seen artifact when using VI.
5 STRUCTURED DROPOUT SGMCMC
While S-SGMCMC can successfully break dependencies between parameter groups, it does suffer computationally due to each parameter update scaling linearly with respect to M . This means that for a single new sample of θ, the model’s forward pass needs to be computed M different times on the same batch of data D̃, which can quickly become prohibitively expensive for deep models when M is large. Ideally, we would prefer a method that both closely resembles the S-SGMCMC procedure and scales independently from the partitioning scheme. This section presents such a method that achieves this, which we call structured dropout SGMCMC (Sd-SGMCMC), as well as an informal motivation and derivation of the method. More formal details and a theorem proving both SGMCMC and S-SGMCMC are limiting cases for Sd-SGMCMC can be found in the Supplement.
The main motivation for this technique can be seen by recognizing that the composition {θ_i^(t), θ̃_¬i^(t)} from Eq. (6) can be rewritten as a sum of masked values rθ^(t) + (1−r)θ̃^(t), where θ̃^(t) ∼ q^(t) and r_j = 1(i = j) for i = 1, . . . , M. We can decouple the computational scaling from the number of parameter groups M by replacing the M deterministic masks r with K stochastically sampled masks r̃.¹ Doing so results in a slightly different energy function and minibatch loss to optimize:
Û^(Sd)(θ^(t); D̃) ≈ (M / (K·E[∑_{i=1}^{M} r_i])) ∑_{k=1}^{K} Û(r̃^(t,k) θ^(t) + (1 − r̃^(t,k)) θ̃^(t,k); D̃) (8)
where r̃(t,k) is the kth sample of r̃ for timestep t. A formal justification for Eq. (8) can be found in the Supplement. These energy function approximations lead to the following update step for structured
¹ K is a hyperparameter that is chosen independently of M; however, both M and the distribution of r̃ largely influence how small K can be, due to how they affect the variance of the gradient of the associated posterior energy function.
Algorithm 2: S-SGMCMC
Input: Initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; step sizes {ε_t}_{t=0,...,T−1}.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
for t = 0 to T − 1 do
    Sample minibatch D̃^(t) ⊂ D
    for i = 1 to M do
        Sample θ̃_¬i^(t) ∼ q̂_¬i^(t)
        Û_i^(S,t) = Û([θ_i^(t), θ̃_¬i^(t)]; D̃^(t))
    end
    ∇_θ Û^(S,t) = ∑_{i=1}^{M} ∇_θ Û_i^(S,t)
    θ^(t+1) = SGMCMC_step(θ^(t), ∇_θ Û^(S,t), ε_t)
end
return q̂^(T)(θ)
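A compact NumPy sketch of one iteration of Algorithm 2 (here with an SGLD inner step, i.e., S-SGLD) is shown below. The names `grad_U` and `sample_buffer` are our own placeholders: the former returns the stochastic energy gradient of Eq. (2) for a flattened parameter vector, and the latter stands in for the empirical distribution q̂^(t) of earlier samples.

```python
import numpy as np

def s_sgld_step(theta, groups, sample_buffer, grad_U, batch, step_size, rng):
    """One S-SGLD update (Eq. (7)) for a flat parameter vector theta of shape (D,).

    groups: list of index arrays partitioning {0, ..., D-1} into M groups.
    """
    grad = np.zeros_like(theta)
    for idx in groups:
        # One-sample Monte Carlo estimate of the expectation over q for the complement.
        theta_tilde = sample_buffer[rng.integers(len(sample_buffer))]
        mixed = theta_tilde.copy()
        mixed[idx] = theta[idx]                 # keep group i from the current state
        grad[idx] += grad_U(mixed, batch)[idx]  # only group i's block depends on theta
    noise = rng.normal(scale=np.sqrt(step_size), size=theta.shape)
    return theta - 0.5 * step_size * grad + noise
```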
Table 1: IAC and ESS metrics for CIFAR-10, SVHN, and FMNIST with various methods. Subscripts after method names refer to the number of equally sized parameter groups, with |θ| meaning every parameter belongs to its own group. Best results are bolded.

Method          CIFAR-10          SVHN              FMNIST
                IAC↓    ESS↑      IAC↓    ESS↑      IAC↓    ESS↑
pSGLD           716     8.01      839     6.82      779     7.09
S-pSGLD_2       600     7.44      840     6.80      740     7.55
S-pSGLD_4       599     7.4       834     6.83      751     7.45
S-pSGLD_8       709     6.41      857     6.67      776     7.24
Sd-pSGLD_|θ|    546     8.01      803     7.00      677     8.24
SGHMC           727     7.94      858     6.59      795     6.83
S-SGHMC_2       583     7.49      949     5.74      928     5.67
S-SGHMC_4       624     7.03      961     5.66      915     5.77
S-SGHMC_8       904     4.97      1056    5.30      1142    4.87
Sd-SGHMC_|θ|    584     7.7       828     6.56      782     7.08
dropout variant of SGLD (Sd-SGLD):
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û^(Sd)(θ; D̃) + ξ_t, where ξ_t ∼ N(0, ε_t I). (9)
The corresponding update rules for the structured dropout variants for pSGLD (Sd-pSGLD) and SGHMC (Sd-SGHMC) are defined in the Supplement. The exact procedure for generating samples of the approximate posterior q̂(t) using structured dropout SGMCMC (Sd-SGMCMC) can also be found in the Supplement.
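A sketch of the corresponding gradient estimator for Eq. (8) with a Bernoulli(ρ) masking distribution is given below; it reuses the placeholder names from the S-SGLD sketch above and approximates E[∑_i r_i] by Mρ, as done later in the Supplement. An Sd-SGLD step then plugs this gradient into the same Langevin update as Eq. (9).

```python
import numpy as np

def sd_energy_grad(theta, groups, sample_buffer, grad_U, batch, rho, K, rng):
    """Stochastic gradient of the Sd-SGMCMC energy (Eq. (8)) with r_i ~ Bernoulli(rho)."""
    M = len(groups)
    grad = np.zeros_like(theta)
    for _ in range(K):
        mask = rng.random(M) < rho                      # one stochastic mask over groups
        theta_tilde = sample_buffer[rng.integers(len(sample_buffer))]
        mixed = theta_tilde.copy()
        for i, idx in enumerate(groups):
            if mask[i]:
                mixed[idx] = theta[idx]                 # masked-in groups come from the current state
        g = grad_U(mixed, batch)
        for i, idx in enumerate(groups):
            if mask[i]:
                grad[idx] += g[idx]                     # only masked-in groups receive a gradient
    return grad * M / (K * M * rho)                     # M / (K * E[sum_i r_i]) with E = M * rho
```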
An example of this method (specifically Sd-SGLD with r̃_i ∼ Bernoulli(0.5) i.i.d. and K = 4) used on a linear regression model can be seen in Fig. 1(c). Of note, we can see that the dropout variant largely respects the imposed independence structure, though perhaps not as strictly as the exact S-SGLD method seen in Fig. 1(b). Additionally, the posterior variance also seems to have shrunk similarly to S-SGLD when compared against SGLD.
Masking Distribution. Should r̃_i ∼ Bernoulli(ρ) i.i.d., alongside a structure that factorizes by activation components, the method starts to resemble dropout with rate ρ (Srivastava et al., 2014). The main difference is that instead of replacing a parameter value with 0, it is replaced with a sample from the approximate posterior distribution at time t: q̂^(t). While a Bernoulli distribution for r̃ is a natural choice, other distributions can be chosen as well. For instance, r̃_i ∼ N(0, 1) i.i.d. or r̃_i ∼ Beta(α, β) i.i.d. are both viable distributions and can be seen as analogs to Gaussian and Beta dropout respectively (Srivastava et al., 2014; Liu et al., 2019). Our experiments will largely focus on sampling r̃ from Bernoulli and uniform over [0, 1] (equivalent to Beta(0.5, 0.5)) distributions.
6 EXPERIMENTS
Overview In this section we evaluate our proposed approach on various models and datasets. Section 6.1 investigates the impact of the variational approximation on the algorithms’ mixing and autocorrelation times using a fully-connected network architecture on MNIST (LeCun et al., 2010). Section 6.2 studies our methods with ResNet-20 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Fashion MNIST (Xiao et al., 2017) and compares them for their accuracy and mixing time. Our experiments reveal that the chains in our proposed methods mix faster than SGMCMC and achieve either comparable or even higher accuracies on average.
We have also conducted experiments on uncertainty visualization, where we tested the proposed methodology on predictive uncertainty estimation by deploying a two-layer fully connected network
on a toy dataset. The uncertainty experimental setup and results, along with more technical details for the other experiments, can be found in the Appendix.
Metrics. The primary predictive metric of interest used to evaluate our proposal is classification accuracy. We take the average of an ensemble of 100 models whose weights are sampled from the past samples of the parameter chains in order to calculate the accuracy. Additionally, we also monitor the mixing time of the chains of our methods with both integrated autocorrelation time (IAC) (Sokal, 1997; Goodman & Weare, 2010) and effective sample size (ESS) (Geyer, 1992). IAC measures the correlation between samples in a chain and, in turn, describes the inefficiency of an MCMC algorithm. IAC is computed as τ_f = ∑_{τ=−∞}^{∞} ρ_f(τ), where ρ_f is the normalized autocorrelation function of the stochastic process that generated the chain for f and is calculated as ρ̂_f(τ) = ĉ_f(τ)/ĉ_f(0), where ĉ_f(τ) = (1/(N−τ)) ∑_{n=1}^{N−τ} (f_n − µ_f)(f_{n+τ} − µ_f) and µ_f = (1/N) ∑_{n=1}^{N} f_n. We note that we calculated ĉ_f(τ) with a fast Fourier transform as it is more computationally efficient than using the direct sum. ESS measures how many independent samples would be equivalent to a chain of correlated samples and is calculated as n_eff = n/(1 + (n−1)p), where n is the number of samples and p is the autocorrelation.² We note that a model with higher ESS and lower IAC has a faster mixing time. Please see the Appendix for implementation details and the experimental setup for the metrics and our models.
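For reference, the snippet below sketches how the quantities above can be computed: the normalized autocorrelation via an FFT, the integrated autocorrelation time with a simple window cutoff, and the ESS formula n/(1 + (n−1)p) using the lag-1 autocorrelation. It is only an illustration of the definitions in the text; the reported numbers used the TensorFlow Probability ESS implementation mentioned in the footnote.

```python
import numpy as np

def autocorrelation(chain):
    """Normalized autocorrelation rho_f(tau) of a 1-D chain, computed via FFT."""
    f = np.asarray(chain, dtype=float)
    f = f - f.mean()
    n = len(f)
    size = 2 ** int(np.ceil(np.log2(2 * n)))        # zero-pad to avoid circular wraparound
    spec = np.fft.rfft(f, n=size)
    acov = np.fft.irfft(spec * np.conj(spec), n=size)[:n]
    acov /= np.arange(n, 0, -1)                     # c_f(tau) with the 1/(N - tau) normalization
    return acov / acov[0]                           # rho_f(tau) = c_f(tau) / c_f(0)

def integrated_autocorrelation_time(chain, window=None):
    rho = autocorrelation(chain)
    window = window or len(rho) // 2
    return 1.0 + 2.0 * rho[1:window].sum()          # tau_f, using the symmetry of rho_f

def effective_sample_size(chain):
    n = len(chain)
    p = autocorrelation(chain)[1]                   # lag-1 autocorrelation
    return n / (1.0 + (n - 1) * p)
```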
6.1 DROPOUT RATE & GROUP SIZE INVESTIGATION
The aim of this set of experiments is to study the effects that the number of independent parameter groups (or alternatively, the amount of allowed posterior correlations) has on accuracy and mixing time when using our proposed methods. We compare pSGLD, S-pSGLD, and Sd-pSGLD with a Bernoulli(ρ) masking distribution with dropout rates ρ ∈ {0.1, 0.3, 0.5} on a fully-connected neural network with 2 hidden layers of 50 hidden units each, trained and evaluated on MNIST using the standard train and test split. The model has 42,200 parameters in total. For S-pSGLD and Sd-pSGLD, these parameters are evenly distributed into M groups, where M ranges from 4 to 42,200. Accuracy, IAC, and ESS are reported in Fig. 2 using 100,000 posterior samples after a 150,000-step burn-in period. More details on the implementation of the model regarding training and evaluation can be found in the Appendix.
As shown in Fig. 2(a), for S-pSGLD we observe that as we increase the number of groups the accuracy drops dramatically whereas Sd-pSGLD’s accuracy improves slightly and then remains fairly stable. In the best case, Sd-pSGLD achieves an accuracy of 96.3% with 32 groups and dropout rate of 0.5 which outperforms pSGLD with accuracy of 94.2%. We speculate that the dropout-like behavior is beneficial for regularizing the model (much like normal dropout), hence the improved accuracy across all dropout rates. Similarly, a single sample used for the Monte Carlo estimate in S-SGMCMC may not be enough as the number of groups M increase; however, increasing the number of samples in this scenario is infeasible due to S-SGMCMC scaling as O(M). Fig. 2(b-c) portrays the comparison between number of groups and mixing time metrics IAC and ESS. As the number of groups gradually increase, we note that S-pSGLD mixes faster, as does Sd-pSGLD to lesser and lesser degrees as ρ increases. This behavior is to be expected due to Theorem 2, with Sd-pSGLD exhibiting mixing times more similar to pSGLD when ρ = 0.5 and more similar to S-pSGLD when ρ = 0.1.
6.2 SYSTEMATIC COMPARISON ON REAL-WORLD DATA
The goal of these experiments is to test the proposed methodology on larger-scale datasets which mimic real-world data: CIFAR-10, SVHN, and FMNIST. We evaluate our methods on predictive accuracy and on the mixing times of the chains. We employ ResNet-20 for SVHN and FMNIST without any data augmentation to assess our methods. For CIFAR10 we employ the same data augmentation process as proposed in Cubuk et al. (2019). We evaluate the methods on accuracy over time and on overall mixing time via IAC and ESS, with two base algorithms: pSGLD and SGHMC. For efficiency purposes we limited our scope to models with either fully joint or fully factorized posteriors. As such, for the latter we employed Sd-SGMCMC methods
² We used the TensorFlow implementation for ESS, which uses the direct sum for the autocorrelation.
as S-SGMCMC would not be feasible with the amount of parameter groups present. Bernoulli(ρ) and uniform masking distributions were investigated and are denoted as SBernoulli-SGMCMC and SUniform-SGMCMC respectively, with ρ varying between datasets as determined by a hyperparameter search (detailed in the Appendix).
In Fig. 3 we observe how quickly the proposed methods and the baseline SGMCMC methods approach their optimum accuracy over the course of training. As is shown, SBernoulli-SGMCMC and SUniform-SGMCMC appear to achieve optimal accuracy values much faster than SGMCMC on all datasets and with all base sampling schemes. In some cases, the variational methods achieve better accuracy values than the baseline methods, as seen for CIFAR10 in Fig. 3.
Mixing Time Comparisons We further validated our findings from Section 6.1 by evaluating the IAC and ESS on larger datasets using various methods. Both pSGLD and SGHMC were used as base methods in conjunction with both S-SGMCMC and Sd-SGMCMC using a Bernoulli masking distribution. IAC and ESS were calculated for these methods using the latest 5,000 samples after sampling for 300 epochs; the results of which can be found in Table 1. For CIFAR-10, we see that Sd-SGMCMC with every parameter in a different group mixes the fastest against all other methods. Likewise, for SVHN and FMNIST, Sd-pSGLD with every parameter belonging to its own group mixes faster than all other methods. At times it does appear that increasing the number of parameter groups causes slower mixing time for S-SGMCMC. This could potentially be attributed to large variance in the gradients from using only a single sample per Monte Carlo estimate.
6.3 EXPLORING PARTITIONING SCHEMES
This part of the study aims to explore the capabilities of the proposed methodology further. Here we explore different parameter partitioning schemes on regression datasets.
Here we present the results with different partitions on various regression datasets. We used 7 different datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). For the evaluation we chose a simple fully connected network with two layers of 50 neurons each, and we use SGLD as the optimizer. As a performance metric we chose mean squared error (MSE). We did hyperparameter tuning with different learning rates, and the final results are the means and standard deviations over 5 runs with different seeds. We do not observe any specific systematic trends across the partitions, apart from the fact that in some cases random performs better. In that way, the use of either random partitioning or the fully-factorized partitioning, where every parameter is in its own group, appears to be a valid choice a priori; especially the latter, since we noted earlier the faster mixing times associated with this partitioning scheme. More details about the partitioning-scheme experiments can be found in the Appendix.
7 CONCLUSIONS
In an attempt to hybridize MCMC and VI, we proposed S-SGMCMC: an approach that produces samples from a structured posterior by running SGMCMC on a modified energy function. The resulting Markov chain becomes asymptotically decoupled across user-specified groups of parameters, resulting in faster convergence. For better computational efficiency, we proposed Sd-SGMCMC: a further generalization of S-SGMCMC inspired by dropout. This extension allows interpolating between an SGMCMC algorithm and its corresponding S-SGMCMC method.
Our experimental results demonstrate that the proposed methods impose structure over posterior distributions, improve the mixing of the chains, and result in similar or better posterior predictive accuracies compared to SGMCMC on a variety of (deep) models. Our experimental evaluations have provided strong empirical evidence for the efficacy of our approach. We also showed that the proposed approach is compatible with various deep learning architectures, including ResNet-20, and various datasets.
Despite its proven capabilities, our proposed methodology does come with some limitations. Namely, for quick access our methods require keeping chains of samples on the GPU whereas the baseline SGMCMC methods can simply save samples to disk. Additionally, S-SGMCMC scales poorly with respect to the number of parameter groups. Sd-SGMCMC manages to break this dependency; however, it still requires slightly more compute than SGMCMC per sample, but it is comparable in wall clock time. Possible future work could focus on more theoretical analyses of S-SGMCMC, such as formal proofs of convergence.
8 ETHICS STATEMENT
The main focus of our work is to train models faster by decreasing the convergence time of their training phase. In this scope we are not aware of any ethical concerns of our research.
9 REPRODUCIBILITY
For this work we have made sure to guarantee reproducibility of the results. We provide all the technical details of our experiments and their implementations, like the hyperparameters, the data, the frameworks and the experimental setups. We have used only open source datasets that are easily accessible to the public. Finally we commit to release the code that we implemented for this work via a public repository.
A THEOREM 1
Proof. We begin with some preliminaries from the main text. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = − ∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (10)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
We also write the equation for KL divergence from the main text:
J(q(θ)) = D_KL(q(θ) || p(θ|D)) (11)
≡ E_{θ∼q}[ log ( q(θ) / p(θ|D) ) ] (12)
We then rewrite Eq. 4 as follows:
J(q(θ)) = E_{θ∼q}[log q(θ)] − E_{θ∼q}[log p(θ, D)] + C (13)
= E_{θ_i∼q_i}[log q_i(θ_i)] + ∑_{j≠i} E_{θ_j∼q_j}[log q_j(θ_j)] − ∫ log p(θ, D) q_i(θ_i) dθ_i ∏_{j≠i} q_j(θ_j) dθ_j + C (14)
for some i ∈ {1, . . . ,M} where ¬i := {1, . . . ,M} \ {i} and C = log p(D). In order to find the optimal distribution that respects the factorization constraints imposed between parameter groups, we need to minimize this functional over q — or rather every qi. This is done by taking the functional derivative of J with respect to qi, setting it equal to zero, and solving for qi:
δJ(q(θ)) / δq_i(θ_i) = ∫ log p(θ, D) ∏_{j≠i} q_j(θ_j) dθ_j − 1 − log q_i(θ_i) := 0 (15)
⟹ log q_i(θ_i) = E_{θ̃_¬i∼q_¬i}[ log p(θ_i, θ̃_¬i, D) ] − 1 (16)
⟹ q_i(θ_i) ∝ exp{ E_{θ̃_¬i∼q_¬i}[ log p(θ_i, θ̃_¬i, D) ] }. (17)
By defining the energy U_i^(S)(θ_i) = −E_{θ̃_¬i∼q_¬i}[ log p(θ_i, θ̃_¬i, D) ], we realize that by minimizing the KL-divergence in Eq. 4, the approximate posterior distribution q = ∏_{i=1}^{M} q_i takes the form of a Boltzmann distribution as in Eq. 1 with U^(S)(θ) = ∑_{i=1}^{M} U_i^(S)(θ_i).
It remains to be shown that the solution is unique. To this end, we refer to the convexity of the KL divergence in function space (Cover & Thomas, 2001). This implies that the stationary point of the KL is indeed a global optimum and unique.
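As a small sanity check of Theorem 1, consider a correlated bivariate Gaussian target, for which the factorized Boltzmann solution can be written in closed form: marginalizing the quadratic energy over q_¬i yields q_i = N(µ_i, 1/Λ_ii) with Λ the precision matrix, i.e., the correct means but shrunken variances. The snippet below only illustrates this special case (the numbers are arbitrary) and mirrors the variance shrinkage visible in Fig. 1.

```python
import numpy as np

# Target: a correlated 2-D Gaussian "posterior" p(theta | D) = N(mu, Sigma).
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
Lambda = np.linalg.inv(Sigma)                       # precision matrix

# For a Gaussian target, marginalizing the energy over q_{-i} yields
# q_i = N(mu_i, 1 / Lambda_ii): same means, no correlation, smaller variances.
q_means = mu
q_vars = 1.0 / np.diag(Lambda)

print("true marginal variances:", np.diag(Sigma))   # [1.0, 1.0]
print("structured q variances: ", q_vars)           # [0.36, 0.36] -> variance shrinkage
```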
B DERIVING U^(Sd)
With just a slight shift in perspective, it is actually possible to further generalize U^(S) (and consequently S-SGMCMC) to produce a broader class of approximate sampling algorithms. This is done by first noting that U^(S) can be represented with a scaled double expectation:
U^(S)(θ) = − (M / E_{r∼p^(S)}[∑_{i=1}^{M} r_i]) E_{r∼p^(S)} E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] (18)
where p^(S)(r) = Cat(r; M^{−1}, . . . , M^{−1}) and (rθ + (1−r)θ̃)_i is equal to θ_i if r_i = 1 and θ̃_i otherwise, for i = 1, . . . , M. Note that this is constructed in this manner specifically so that U^(S) remains differentiable with respect to θ. Also note that though the denominator appears superfluous, as E_{r∼p^(S)}[∑_{i=1}^{M} r_i] = 1, it is necessary for certain theoretic properties, as seen in Theorem 2.
By replacing p^(S) with a more flexible distribution, we can further generalize and encapsulate different energy functions to sample from. One such choice is p^(Sd)(r; ρ) ∝ ∏_{i=1}^{M} Bern(r_i; ρ) 1(∑_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1).³ Substituting p^(Sd) for p^(S) in Eq. (18) yields a new energy function that we will refer to as U^(Sd). We note that this choice of distribution leads to a dropout-like behavior (Nalisnick et al., 2019; Srivastava et al., 2014), where the composition of model parameters as rθ + (1−r)θ̃ leads to each parameter group θ_i having a probability of approximately ρ of being used in a prediction and a (1−ρ) probability of being replaced by θ̃_i from the approximate posterior (in traditional dropout, θ_i would instead be replaced with 0). Likewise, we will denote methods that use this energy function for sampling as structured dropout SGMCMC (Sd-SGMCMC), with different variants all sharing the same Sd prefix (e.g., Sd-SGHMC).
In practice, the double expectation in U^(Sd) is jointly approximated using a Monte Carlo estimate with K samples. This leads to Eq. (8) in the main paper. We note that by approximating U^(Sd) in this way, computing a gradient no longer scales on the order of O(M), but rather O(K). This means that the choice of structure imposed on the posterior distribution remains independent of computing resources. As such, configurations with large numbers of parameter groups are typically only feasible when using Sd-SGMCMC, as S-SGMCMC would use too much memory and/or compute per sample.
C THEOREM 2
Theorem 2. For a given set of parameters θ partitioned into M groups, under minor assumptions (i) U^(Sd) → U as ρ → 1 and (ii) U^(Sd) → U^(S) as ρ → 0. Thus, distributions approximated by Sd-SGMCMC lie on a continuum, with those generated by S-SGMCMC at one extreme and those from SGMCMC at the other.
Proof. Assume an arbitrary θ, D, n ∈ N, and that E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] exists for r ∈ R.
As an aside, this proof assumes that p^(Sd)(r; ρ) ∝ ∏_{i=1}^{M} Bern(r_i; ρ) 1(∑_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1); however, the theorem still holds for an arbitrary p^(Sd) so long as its mean approaches 1 and its variance approaches 0 as n → ∞.
(i) Let r^(n) ∼ p^(Sd)(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 1. It follows that r^(n) → {1}^M as n → ∞ in distribution (see Lemma 1 in Supplement). Due to the bounded and finite support R, we find the following:
U^(Sd)(θ) = − (M / E_{r∼p^(Sd)}[∑_{i=1}^{M} r_i]) ∑_{r∈R} p^(Sd)(r; ρ_n) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] (19)
→ − (M/M) ∑_{r∈R} 1(r_i = 1 ∀i) E_{θ̃∼q}[ log p(θ, D) ] as n → ∞ (20)
= − log p(θ, D) = U(θ) (21)
(ii) Let r^(n) ∼ p^(Sd)(ρ_n) where ρ_n ∈ (0, 1) for all n and ρ_n → 0. It follows that r^(n) → r ∼ Cat(M^{−1}, . . . , M^{−1}) as n → ∞ in distribution (see Lemma 2 in Supplement). Due to the bounded and finite support R, we find the following:
U^(Sd)(θ) = − (M / E_{r∼p^(Sd)}[∑_{i=1}^{M} r_i]) ∑_{r∈R} p^(Sd)(r; ρ_n) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] (22)
→ − (M/1) ∑_{r∈R} (1(∑_{i=1}^{M} r_i = 1)/M) E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] as n → ∞ (23)
= − ∑_{i=1}^{M} E_{θ̃∼q}[ log p([θ_i, θ̃_¬i], D) ] = U^(S)(θ) (24)
³ Other choices of distribution that are well justified include any with support over [0, 1]^M and with measure 0 over {0}^M. Exploring the effects these distributions have is an interesting line of future inquiry.
For both Lemmas 1 and 2, let
p^(Sd)(r; ρ) = [ρ^{∑_{i=1}^{M} r_i} (1−ρ)^{M−∑_{i=1}^{M} r_i} / (1 − (1−ρ)^M)] · 1(r_i ∈ {0, 1} ∀i) · 1(∑_{i=1}^{M} r_i > 0) (25)
Lemma 1. For r^(n) ∼ p^(Sd)(ρ_n), ρ_n ∈ (0, 1) and n ∈ N, if ρ_n → 1 as n → ∞ then r^(n) → r ∼ δ({1}^M) in distribution as n → ∞.
Proof.
p^(Sd)(r = {1}^M; ρ_n) = ρ_n^M (1−ρ_n)^0 / (1 − (1−ρ_n)^M) (26)
→ 1 as n → ∞ (27)
⟹ r^(n) → δ({1}^M) in distribution. (28)
Lemma 2. For r^(n) ∼ p^(Sd)(ρ_n), ρ_n ∈ (0, 1) and n ∈ N, if ρ_n → 0 as n → ∞ then r^(n) → r ∼ Cat(M^{−1}, . . . , M^{−1}) in distribution as n → ∞.
Proof. Let i ∈ {1, . . . , M}.
p^(Sd)(r_i = 1, r_¬i = 0; ρ_n) = ρ_n (1−ρ_n)^{M−1} / (1 − (1−ρ_n)^M) (29)
which, by l'Hôpital's rule, has the same limit as [(1−ρ_n)^{M−1} − ρ_n (M−1)(1−ρ_n)^{M−2}] / [M (1−ρ_n)^{M−1}] (30)
→ 1/M as n → ∞ (31)
Since the resulting probabilities sum to 1, this implies that r^(n) → r ∼ Cat(M^{−1}, . . . , M^{−1}) in distribution as n → ∞.
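The two limits can also be checked by simulation: sampling the conditioned Bernoulli masks shows that as ρ → 1 the mean mask approaches all ones (Lemma 1), while as ρ → 0 almost every accepted mask has exactly one active group, uniformly over groups (Lemma 2). The short script below is only a numerical illustration, not part of the paper's experiments.

```python
import numpy as np

def sample_masks(M, rho, n_samples, rng):
    """Draw masks r ~ p^(Sd)(r; rho): i.i.d. Bernoulli(rho) conditioned on sum(r) > 0."""
    masks = np.empty((0, M), dtype=bool)
    while len(masks) < n_samples:
        batch = rng.random((n_samples, M)) < rho
        batch = batch[batch.any(axis=1)]            # reject the all-zeros masks
        masks = np.vstack([masks, batch])
    return masks[:n_samples]

rng = np.random.default_rng(0)
M = 4
for rho in (0.999, 0.001):
    masks = sample_masks(M, rho, 10_000, rng)
    # mean mask per group, and fraction of masks with exactly one active group
    print(rho, masks.mean(axis=0), (masks.sum(axis=1) == 1).mean())
```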
D DERIVING U^(Sd)
To derive U^(Sd), we must first start with a shift in perspective on how U^(S) is represented. We will rewrite the function in the following way:
U^(S)(θ) = − ∑_{i=1}^{M} E_{θ_¬i∼q_¬i}[ log p([θ_i, θ_¬i], D) ] (32)
= − (M / E_{r∼p^(S)}[∑_{i=1}^{M} r_i]) E_{r∼p^(S)} E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ] (33)
where p^(S) is an M-dimensional categorical distribution with uniform weights M^{−1} and p(rθ + (1−r)θ̃, D) is the joint probability of the parameters taking values rθ + (1−r)θ̃ and the data D.⁴
We note that changing the distribution of r leads to different energy functions to sample from. One such choice is to have p^(Sd)(r; ρ) ∝ ρ^{∑_{i=1}^{M} r_i} (1−ρ)^{M−∑_{i=1}^{M} r_i} 1(r_i ∈ {0, 1} ∀i) 1(∑_{i=1}^{M} r_i > 0) for ρ ∈ (0, 1). Note that this is identical to r_i ∼ Bernoulli(ρ) i.i.d. conditional on ∑_{i=1}^{M} r_i > 0. Let the support of p^(Sd) be denoted as R = {0, 1}^M \ {0}^M. This leads to the following energy function:
U^(Sd)(θ) = − (M / E_{r∼p^(Sd)}[∑_{i=1}^{M} r_i]) E_{r∼p^(Sd)} E_{θ̃∼q}[ log p(rθ + (1−r)θ̃, D) ]. (34)
In practice, a few approximations are made to compute the corresponding U^(Sd). Firstly, we approximate p^(Sd) with an M-dimensional Bernoulli(ρ) distribution, as the difference is minute when Mρ is large. Secondly, the outer expectation in Eq. (34) is approximated with a Monte Carlo estimate of K samples. The inner expectation is also approximated with a Monte Carlo estimate using the latest approximate posterior q̂^(t). However, just like for S-SGMCMC, only a single sample is used. This further leads to:
U^(Sd)(θ^(t); D̃) = − (1/(Kρ)) ∑_{k=1}^{K} U(r^(t,k) θ^(t) + (1 − r^(t,k)) θ̃^(t,k); D̃) (35)
E ALGORITHM FOR Sd-SGMCMC
The procedure for Sd-SGMCMC can be seen in Algorithm 3.
Algorithm 3: Sd-SGMCMC
Input: Initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; data set D; initial auxiliary statistics ξ^(0); step sizes {ε_t}_{t=1,...,T}; masking distribution p^(Sd); dropout iterations K.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
for t = 0 to T − 1 do
    Sample minibatch D̃^(t) ⊂ D
    for k = 1 to K do
        Sample masks r_1^(t,k), . . . , r_M^(t,k) ∼ p^(Sd)
        Sample θ̃^(t,k) ∼ q̂^(t)
        θ^(t,k) = [r_i^(t,k) θ_i^(t) + (1 − r_i^(t,k)) θ̃_i^(t,k)]_{i=1,...,M}
        Û_k^(Sd,t) = Û(θ^(t,k); D̃^(t))
    end
    ∇_θ Û^(Sd,t) = (M / (K·E_{r∼p^(Sd)}[∑_{i=1}^{M} r_i])) ∑_{k=1}^{K} ∇_θ Û_k^(Sd,t)
    θ^(t+1), ξ^(t+1) = SGMCMC_step(θ^(t), ∇_θ Û^(Sd,t), ξ^(t), ε_t)
end
return q̂^(T)(θ)
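Algorithms 2 and 3 both need cheap access to earlier samples in order to draw θ̃ ∼ q̂^(t), which in practice means keeping (a subset of) the chain in memory, as noted in the limitations section. One simple way to cap that memory is a reservoir-style buffer; the class below is our own illustrative choice rather than a component prescribed by the method.

```python
import random

class SampleBuffer:
    """Keeps at most `capacity` past samples as a stand-in for q_hat^(t)."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, theta):
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(theta)
        else:
            # Reservoir sampling: every past sample is retained with equal probability.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = theta

    def draw(self):
        return random.choice(self.samples)
```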
⁴ rθ + (1−r)θ̃ is a slight abuse of notation that is meant to represent masking out θ_i when r_i = 0 and masking out θ̃_i when r_i = 1.
F SGMCMC UPDATE RULES
The update rules for SGLD, pSGLD, and SGHMC are defined as follows:
SGLD: θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û(θ^(t)) + N(0, ε_t I) (36)
pSGLD: θ^(t+1) = θ^(t) − (ε_t/2) [ R(θ^(t)) ∇_θ Û(θ^(t)) + ∑_θ ∇_θ R(θ^(t)) ] + N(0, ε_t R(θ^(t))) (37)
SGHMC: θ^(t+1) = θ^(t) + ε_t M^{−1} m^(t+1) (38)
m^(t+1) = (1 − γ ε_t M^{−1}) m^(t) − ε_t ∇_θ Û(θ^(t)) + N(0, 2γ − ε_t V̂(θ^(t))) (39)
where ε_t is the step size at time step t, R(·) and M are preconditioners, γ ≥ 0 is a friction term, and V̂(·) is an estimate of the covariance induced by the stochastic gradient.⁵
The update rules for the S-SGMCMC variants are similarly defined as Eqs. 36-39 but all instances of Û(θ(t)) are replaced with Û (S)(θ(t)). Likewise, replacing with Û (Sd)(θ(t)) yields the Sd-SGMCMC variants.
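For completeness, the update rules above translate into the following NumPy sketch, where the preconditioner R, its correction term ∑_θ ∇_θ R, the mass M, and the injected noise variance are treated as user-supplied quantities (the exact forms used in the experiments, e.g., RMSProp-style preconditioning for pSGLD, are not reproduced here). The structured and structured-dropout variants are obtained by passing the gradient of Û^(S) or Û^(Sd) instead of that of Û.

```python
import numpy as np

def sgld_update(theta, grad_U, eps, rng):
    # Eq. (36)
    return theta - 0.5 * eps * grad_U + rng.normal(scale=np.sqrt(eps), size=theta.shape)

def psgld_update(theta, grad_U, R, grad_R_term, eps, rng):
    # Eq. (37): preconditioned gradient step plus correction term and preconditioned noise.
    drift = R * grad_U + grad_R_term
    noise = rng.normal(size=theta.shape) * np.sqrt(eps * R)
    return theta - 0.5 * eps * drift + noise

def sghmc_update(theta, m, grad_U, eps, gamma, mass_inv, noise_var, rng):
    # Eqs. (38)-(39): momentum update with friction, followed by a position update.
    m_new = (1.0 - gamma * eps * mass_inv) * m - eps * grad_U \
            + rng.normal(size=theta.shape) * np.sqrt(noise_var)
    theta_new = theta + eps * mass_inv * m_new
    return theta_new, m_new
```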
G ABLATION STUDY
This subsection aims to further explore the capabilities of the proposed methodology. More specifically, we visualize uncertainty for a two-layer Fully Connected Network and experiment with various parameter partitions.
Parameter Partitions. We tested our proposal with four partitioning schemes on a 2 layer with 50 neurons fully connected network on a regression task. The partitioning schemes that we used are the following: (a) the parameters are split into 3 groups randomly, (b) the parameters are split by layer(3 layers, 1 input and 2 hidden), (c) by activating neurons inside the layers and (d) every parameter belongs in each own group. We used 7 different datasets: the wine quality datsetCortez et al. (2009), the Boston housing datasetHarrison Jr & Rubinfeld (1978), the obesity levels datasetPalechor & de la Hoz Manotas (2019), the Seoul bike-sharing datasetE et al. (2020); E & Cho (2020), the concrete compressive strength datasetYeh (1998), and the airfoil self-noise datasetBrooks et al. (1989). Every dataset was split into 75% training data, 10% validation data, and 15% test data. We trained the model on training set and validated it in the validation set with an early stoppage. For every dataset and every partitioning scheme we used the learning rates: 1e-3,1e-4,1e-5,1e-6,1e-7 for hyperparameter tuning. For each combination of partition and dataset, we chose the learning rate that provides the best accuracy score on the test set. In this case, as an accuracy score, we used the Mean Squared Error. The final learning rates that we used are presented in Table 3.
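The partitioning schemes above can be constructed directly from a model's parameter list. The sketch below builds the random, per-layer, and fully factorized splits, schemes (a), (b), and (d), for a small fully connected PyTorch network; the network size and the use of index arrays over a flattened parameter vector are our own simplifications.

```python
import numpy as np
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 50), nn.ReLU(), nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 1))
flat_sizes = [p.numel() for p in model.parameters()]   # weight and bias per linear layer
n_params = sum(flat_sizes)

# (a) random split into 3 groups of roughly equal size over the flattened parameter vector
perm = np.random.permutation(n_params)
random_groups = np.array_split(perm, 3)

# (b) split by layer: each nn.Linear's weight and bias form one group
offsets = np.cumsum([0] + flat_sizes)
by_layer = [np.arange(offsets[i], offsets[i + 2]) for i in range(0, len(flat_sizes), 2)]

# (d) fully factorized: every scalar parameter in its own group
per_parameter = [np.array([i]) for i in range(n_params)]
```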
⁵ Note that we abuse notation in Eqs. 36-39, where the addition of N(µ, Σ) denotes the addition of a normally distributed random variable with mean µ and covariance Σ.
H DETAILS ON EXPERIMENTS
H.1 QUALITATIVE REGRESSION EXPERIMENTS
First, we aim to showcase qualitative differences in the empirical posterior distributions generated by a baseline SGMCMC algorithm and our proposed variants. To do so, we consider a regression task where 100 randomly sampled three-dimensional covariates {x_i = [x_{i,1}, x_{i,2}, x_{i,3}]^T}_{i=1,...,100} are used to sample response values y_i ∼ N(w^T x_i + b, σ²), where w = [w_1, w_2, w_3]^T = [1.5, −0.8, 1.3]^T, b = 0.5, and σ² = 1. More details on the generation process for x can be found in the Supplement.
We choose to fit a linear regression model of the same form as the generation process. σ2 is assumed to be known. Thus, θ = [w1, w2, w3, b]. A standard normal distribution is used as the prior for each parameter. Due to conjugacy, the posterior distribution can be calculated analytically. As such, the MAP is roughly θ̂MAP ≈ [0.52, 0.31, 0.47, 0.84]. The approximated posterior distributions for θ are found using SGLD, S-SGLD, and Sd-SGLD. For the latter two sampling schemes, two parameter partitions are tested: (i) two groups of parameters where θ1 = [w1, w2] and θ2 = [w3, b] and (ii) four groups of parameters where θ1 = w1, θ2 = w2, θ3 = w3, and θ4 = b. For Sd-SGLD, ρ = 0.5 and K = 4 was used.
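The toy setup can be reproduced in a few lines; the covariate distribution below is a stand-in (the exact generation process for x is described in the Supplement), and the posterior follows the standard conjugate formulas for Bayesian linear regression with a standard normal prior and known σ² = 1, so the printed mean need not match the MAP value quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 3))                          # placeholder covariates (see Supplement)
X_aug = np.hstack([X, np.ones((n, 1))])              # append a column of ones for the bias b
w_true = np.array([1.5, -0.8, 1.3, 0.5])
y = X_aug @ w_true + rng.normal(size=n)              # sigma^2 = 1

# Conjugate posterior with prior N(0, I) and known noise variance 1:
# Sigma_post = (I + X^T X)^{-1},  mu_post = Sigma_post X^T y  (mu_post is also the MAP).
Sigma_post = np.linalg.inv(np.eye(4) + X_aug.T @ X_aug)
mu_post = Sigma_post @ X_aug.T @ y
print("posterior mean / MAP:", mu_post)
```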
The resulting posterior distributions for (w1, w2) and (w1, w3) from all five scenarios, with SGLD in the leftmost column as our baseline, can be seen in Fig. 1. We observe that, as expected, correlations between (w1, w2) still exist when they are allocated to the same parameter group and become apparently independent when assigned to different groups. We also note that the variance of the distributions shrink as the parameter space is partitioned into smaller groups. The underestimation of posterior variance is a commonly reported finding for VI techniques and is interesting to note that our non-parametric methods appear to exhibit this behavior as well. Finally, it appears that the Sd-SGLD adequately approximates S-SGLD with just slightly higher variances and very minor correlations between parameter groups being exhibited.
H.2 REAL-WORLD DATA EXPERIMENTS
Framework details. In this subsection, we provide more detailed results for our experiments and a grid search for FMNIST, CIFAR10, and SVHN. We note that all the code apart from the metrics was written in PyTorch (Paszke et al., 2019). Regarding the metrics, ESS was adopted from the TensorFlow probability library (Dillon et al., 2017; Abadi et al., 2016) and IAC was calculated in python. For all the experiments, we used a seed of 2. Moreover, we note that we grouped the parameters in an ordered way for Sd-pSGLD and S-pSGLD. We denoted previously that Kρ is the number of groups. So every parameter will go to the i mod Kρ group where i is the parameter index. If, for instance, Kρ is 8 then parameter 1 will go to group 1, parameter 2 will go to group 2, parameter 9 will go to group 1, etc. If Kρ is the same as the number of parameters, every parameter will go into its own group.
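The ordered grouping described above, where parameter i is assigned to group i mod K_ρ, is straightforward to express; the helper below is only an illustration of that assignment rule.

```python
def assign_groups(n_params, n_groups):
    """Group index lists: parameter i goes to group (i mod n_groups)."""
    groups = [[] for _ in range(n_groups)]
    for i in range(n_params):
        groups[i % n_groups].append(i)
    return groups

# e.g. with 10 parameters and 4 groups: [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
print(assign_groups(10, 4))
```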
MNIST. Regarding MNIST, we ran all the experiments for 500 epochs with a batch size of 500 and a learning rate of 1e-2. For Sd-pSGLD, K is set to 300, which is the number of forward passes that the model does within 1 epoch. For the grouping of the parameters, for Sd-pSGLD we used group sizes of 2, 4, 8, 32, 128, 512, 2048, 4096, 8192, 16384, 32768 and 42200; and for S-pSGLD we used group sizes of 2, 8, 32, 128, 512, 2048, 4096 and 8192.
FashionMNIST. We ran all experiments for 300 epochs with a batch size of 500. For Sd-SGHMC, K is set to 2, which is the number of forward passes that the model does within 1 epoch. While experimenting with K, we observed that we do not need to set K very high, and even a small number like 2, as used here, is enough to produce the same results as a K of 200 or 300. In this way, we save significant time in training. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with learning rates of 1e-2, 1e-3, 1e-4, 1e-5. For S-pSGLD we used a learning rate of 1e-3 and for S-SGHMC a learning rate of 1e-2.
CIFAR10. The setup is similar to the one we used for FashionMNIST, as we ran all experiments for 300 epochs with a batch size of 128. For Sd-SGHMC, K is set to 2, which is the number of forward passes that the model does within 1 epoch. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with learning rates of 1e-2, 1e-3, 1e-4, 1e-5. For S-pSGLD, we used a learning rate of 1e-3, and for S-SGHMC, a learning rate of 1e-2. We focused our strategy on evaluating the accuracy of the different combinations of hyperparameters with the proposed methods, as can be seen in Figs. 4 and 5. Quantitative results on IAC, ESS and maximum accuracy are depicted in Tables 6 and 7.
SVHN. We also ran all of the experiments for 300 epochs with a batch size of 128. Here, for Sd-SGHMC, K is set to 2, which is the number of forward passes that the model does within 1 epoch. We note that K here is less than on CIFAR10 and FashionMNIST, but as we mentioned before, this does not make a difference for our results, as we have tested. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with
learning rates of 1e-1,1e-2,1e-3,1e-4,1e-5,1e-6. For S-pSGLD we used a learning rate of 1e-4, and for S-SGHMC, a learning rate of 1e-2. Same as in CIFAR10, we conducted a grid search for learning rate, dropout rate, and optimizers to find the best performing models and test them for their accuracy. We can observe these results in Figure 3 in the main paper. The strategy that we followed is the same as in CIFAR10 and is presented in Figs. 6 and 7. | 1. What is the focus and contribution of the paper regarding stochastic gradient Markov Chain Monte Carlo?
2. What are the strengths of the proposed approach, particularly in terms of structured variational approximation?
3. Do you have any concerns or questions about the method, such as its connection to variational inference or the role of dropout schemes?
4. How does the reviewer assess the clarity and quality of the paper's content, including its experimental results and theoretical analysis? | Summary Of The Paper
Review | Summary Of The Paper
This work proposes using a structured variational approximation for stochastic gradient Markov Chain Monte Carlo. This allows someone to choose a factorization for the variational distribution (which factorization is best is unclear; several are studied). Analogously to coordinate ascent variational inference, the authors show that the best approximation is the Boltzmann energy function marginalized over the complements of every parameter group.
However, this structured approximation is computationally expensive and requires the same number of evaluations of the approximation as there are parameter groups. This computational burden is alleviated by a dropout scheme, where instead of sampling from every parameter group, parameters are masked using a dropout distribution, and the number of stochastic masks is a hyperparameter that controls regularization and fidelity to the structure imposed in the factorization.
Experiments show that this is a viable way to impose structure on a variational distribution, and that mixing times are improved.
Review
This is a very well-written, clear paper, with nice experiments to guide intuition. Congratulations! The effort shows.
If the paper "attempt[s] to hybridize MCMC and VI", why not compare to structured variational inference? Is the only difference between this approach and variational inference the additional noise in the gradient update?
If so, it could be stated in a sentence, which would help me be less confused. After all, Equation (4) is the same objective function and Theorem 1 is roughly the same proof as coordinate ascent variational inference (where the parameter groups M are simply coordinates i), so these are more of an 'observation' than a theorem. But the key insight of using coordinate ascent variational inference for SGMCMC is valuable and the main point of the paper, and calling it a theorem may be helpful for readers. So it's up to the authors to decide what's best - just wanted to point out that these connections and derivation analogs could be made clearer, which would make the method easier to understand and the exposition more straightforward.
The most confusing sentence for me was below Eq. 6: "in practice, at timestep t...". Maybe describe how \hat q is composed of samples from previous timesteps.
small wording choices: "completely joint", "well-approximates", "solid distributional assumptions"
in equation 8, rho_i is not defined
I also appreciate the authors including clean code as a supplementary zip file for reproducibility. |
ICLR | Title
Structured Stochastic Gradient MCMC
Abstract
Stochastic gradient Markov Chain Monte Carlo (SGMCMC) is considered the gold standard for Bayesian inference in large-scale models, such as Bayesian neural networks. Since practitioners face speed versus accuracy tradeoffs in these models, variational inference (VI) is often the preferable option. Unfortunately, VI makes strong assumptions on both the factorization and functional form of the posterior. In this work, we propose a new non-parametric variational approximation that makes no assumptions about the approximate posterior’s functional form and allows practitioners to specify the exact dependencies the algorithm should respect or break. The approach relies on a new Langevin-type algorithm that operates on a modified energy function, where parts of the latent variables are averaged over samples from earlier iterations of the Markov chain. This way, statistical dependencies can be broken in a controlled way, allowing the chain to mix faster. This scheme can be further modified in a “dropout” manner, leading to even more scalability. By implementing the scheme on a ResNet-20 architecture, we obtain better predictive likelihoods and faster mixing time than full SGMCMC.
1 INTRODUCTION
There has been much recent interest in deep Bayesian neural networks (BNN) due to their reliable confidence estimates and generalization properties (Wilson & Izmailov, 2020; Jospin et al., 2020; Cardelli et al., 2019). BNNs rely on ensemble averages over model parameters typically obtained from Markov chain Monte Carlo (MCMC) algorithms, which contrasts to regular neural networks that depend on a single set of parameters. The sheer size of these models requires scalable MCMC approaches based on inexpensive stochastic gradients, of which stochastic gradient Markov chain Monte Carlo (SGMCMC) algorithms are the gold standard (Li et al., 2016; Welling & Teh, 2011; Patterson & Teh, 2013). These algorithms owe their scalability to approximating gradients via mini-batching.
The main downside of using SGMCMC algorithms is their slow mixing rates in high dimensions. An often faster alternative is variational inference (VI) algorithms that approximate the posterior with a simpler (typically factorized) distribution. This formulation results in an optimization problem that can be solved more efficiently using stochastic optimization (Blei et al., 2017; Zhang et al., 2018).
One downside of VI approximations is their strong distributional assumptions. A typical choice is to approximate the Bayesian posterior by a product of univariate Gaussian distributions. These distributional assumptions are frequently over-simplistic in high-dimensional models, where the posterior can be highly multi-modal and possibly heavy-tailed. Another downside is that the variational approximation typically underestimates the posterior variance, leading to poorly calibrated uncertainties and overfitting (Ormerod & Wand, 2010; Giordano et al., 2015; Zhang et al., 2018).
In this work, we derive a fundamentally new SGMCMC approach that takes inspiration from structured VI. While our approach remains a sampling algorithm resembling SGMCMC, we speed up the mixing time by systematically breaking posterior correlations. The resulting algorithm furthermore allows users to specify which posterior correlations to keep and which ones to break. It makes no assumptions on the functional form of the approximate posterior. We call our approach structured SGMCMC since it relies on a structured (i.e., only partially factorized) variational approximation of the posterior (Wainwright & Jordan, 2008).
In more detail, we derive the optimal variational distribution for a given posterior subject to factorization constraints by assuming a functional view on variational inference. We show how to sample from
this optimal distribution by running SGMCMC on a modified energy function. This energy function is obtained by marginalizing the model’s joint distribution over previously generated samples from the Markov chain, leading to an approximate factorization over user-specified parameter groups. Further, we provide a more robust and computationally efficient approximation to the procedure that allows for interpolation between regular SGMCMC and our structured SGMCMC by taking inspiration from dropout techniques. Both methods are compatible with any Markovian SGMCMC algorithm, including Langevin dynamics and stochastic gradient Hamiltonian Monte Carlo.
In sum, our contributions are as follows:
• We propose a new approximate MCMC scheme running SGMCMC on a modified energy function, trading accuracy for speed. This setup effectively allows sampling from a fully joint posterior, a completely factorized posterior, and any in-between.
• We prove mathematically that the resulting scheme asymptotically generates samples from the best possible posterior approximation subject to user-specified factorization constraints between groups of parameters.
• We extend this scheme further by making it more scalable with a dropout-inspired approximation. This new scheme has a hyperparameter that enables a smooth interpolation between full SGMCMC and a "mean-field" version where all posterior correlations are broken.
• We show in both small and large scale experiments that our method well approximates posterior marginals and gives improved results over SGMCMC on Resnet-20 architectures on CIFAR-10, Fashion MNIST, and SVHN in terms of both runtime and final accuracy.
Our paper is structured as follows: Section 2 presents the related work to our proposal, Section 3 introduces preliminaries regarding the energy function and the stochastic gradient updates, Sections 4 and 5 derive our proposed methods, Section 6 details experiments and their results, and Section 7 contains our concluding thoughts.
2 RELATED WORK
Our work connects both to (stochastic) variational inference (Bishop, 2006; Hoffman et al., 2013; Ranganath et al., 2014; Blei et al., 2017; Zhang et al., 2018) and scalable MCMC (Welling & Teh, 2011; Chen et al., 2014; Ma et al., 2017; Zhang et al., 2020; Leimkuhler et al., 2019; Wenzel et al., 2020; Izmailov et al., 2021). Due to space limitations, we focus on the most closely related work at the intersection of both topics.
Among the earliest works to hybridize both approaches was that of de Freitas et al. (2001), who constructed a variational proposal distribution in the Metropolis-Hastings step of MCMC. An improved approach was introduced by Habib & Barber (2018), who fit a more accurate approximating distribution by introducing low-dimensional auxiliary variables. Other related advances to MCMC methods were proposed by Levy et al. (2017), who developed a method to train MCMC kernels with NNs, and Wang et al. (2018); Gong et al. (2018), who leveraged meta-learning schemes in SGMCMC methods.
Most recent work focuses on connections between VI and stochastic gradient-based MCMC, or between VI and stochastic gradient descent (SGD). For example, Mandt et al. (2016; 2017) and Duvenaud et al. (2016) consider SGD as a type of variational inference, but their approaches did not attempt to close the gap to exact MCMC. Other works aim at explicitly interpolating between both methods. Domke (2017) proposes a divergence bound for hybridizing VI and MCMC, essentially by running Langevin dynamics on a tempered evidence lower bound (ELBO). Salimans et al. (2015) embed MCMC steps into the variational inference approximation. Ahn et al. (2012) improve stochastic gradient Langevin dynamics by leveraging the central limit theorem and using the estimated inverse Fisher information matrix to sample from the approximate posterior distribution. Rezende & Mohamed (2015) interpreted the path of an MCMC algorithm as a variational distribution and then fit its parameters to tighten a variational bound. Recently, Hoffman & Ma (2020) interpreted (parametric) VI as approximate Langevin dynamics and showed that both algorithms have similar transient dynamics.
In contrast to all these approaches, our method is inspired by coordinate ascent variational inference (Bishop, 2006) but uses Langevin updates to generate samples from the target distribution that respects an imposed independence structure.
3 PRELIMINARIES
Variational inference (VI) approaches differ from MCMC in two regards: (1) they impose a structured (e.g., fully-factorized) approximation of the posterior for tractability, and (2) they often make parametric assumptions. Is it possible to construct a modified scheme that only relies on the assumption (1), inheriting the non-parametric nature of MCMC while breaking posterior correlations in a controlled manner? As follows, we will show how such a scheme can be realized. We will first derive a modified energy function for Langevin dynamics that we can sample from and then prove that its negative exponential results in the optimal posterior approximation subject to specified factorization constraints. Running SGMCMC algorithms on this energy function will consequently generate samples from this distribution.
Before we explain our new method, we introduce the setup and common notation. Given data D = {(xi, yi)}i=1,...,N , parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) =∏N i=1 p(yi|xi, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
p(θ|D) ∝ exp{−U(θ)}, where U(θ) = − log p(θ, D) = − ∑_{(x,y)∈D} log p(y|x, θ) − log p(θ). (1)
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
A popular approach for approximating the entire posterior distribution is by deploying Markov chain Monte Carlo (MCMC) algorithms. These methods work by producing an empirical distribution of samples in parameter space, often times through the use of a random walk. While being very accurate and having asymptotic guarantees, these methods are known to not scale well with respect to both data and parameters (Brooks et al., 2011; Geyer, 1992).
Stochastic gradient MCMC (SGMCMC) is a class of scalable MCMC algorithms that can produce posterior samples through gradients on minibatches of data. These algorithms are largely derived from discretized approximations of continuous-time diffusion processes. Examples of these algorithms include stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), preconditioned SGLD (pSGLD) (Li et al., 2016), and stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Chen et al., 2014).
As alluded to, the basis of SGMCMC algorithms is using a sampled minibatch of data D̃ from D to produce a differentiable, unbiased estimate of the posterior energy function:
U(θ) ≈ Û(θ; D̃) = − (N/|D̃|) ∑_{(x,y)∈D̃} log p(y|x, θ) − log p(θ). (2)
Once Û is defined, it is fairly straightforward to generate new samples from the posterior distribution. For instance, the SGLD update is
θ^(t+1) = θ^(t) − (ε_t/2) ∇_θ Û(θ^(t); D̃_t) + ξ_t, where ξ_t ∼ N(0, ε_t I). (3)
Similar rules for pSGLD and SGHMC can be found in the Supplement. All of these update rules produce a chain of samples up to time step t that ultimately form an empirical distribution p̂(t)(θ|D). Should the algorithms converge, then limt→∞ p̂(t)(θ|D) = p(θ|D).
4 STRUCTURED SGMCMC
By design, SGMCMC methods produce a fully joint posterior distribution over parameters θ. For models with a large number of parameters, this can lead to various complications due to the curse of
dimensionality. This is typically observed with slow convergence times and potentially unexplored parameter spaces. A viable solution is to break dependencies in the posterior distribution by leveraging ideas commonly used in variational inference (VI). This would reduce the number of various potential posterior correlations that the model would need to capture while sampling.
To achieve partial factorization, we must first partition θ into M > 1 distinct, mutually independent groups: θ1, . . . , θM . This partitioning structure is assumed to be known a priori. We will denote the distribution that respects this partitioning structure as q(θ) = ∏M i=1 qi(θi). Similar to VI, we would like this distribution q(θ) to best approximate the true posterior distribution p(θ|D) according to some criteria, such as KL-divergence. This leads to a natural objective function to minimize:
J(q(θ)) = D_KL(q(θ) || p(θ|D)) ≡ E_{θ∼q}[ log ( q(θ) / p(θ|D) ) ] (4)
The following Theorem 1 proves that there is a unique solution to the non-parametric KL minimization problem described in Eq. (4). To describe it, we compose θ = {θi, θ̃¬i} for any i where θ̃ ∼ q and define a structured energy function:
U^(S)(θ) = ∑_{i=1}^{M} U_i^(S)(θ_i), with U_i^(S)(θ_i) := E_{θ̃∼q} U({θ_i, θ̃_¬i}) = −E_{θ̃∼q} log p(θ_i, θ̃_¬i, D). (5)
That is, we first define the marginals U_i^(S)(θ_i), where we marginalize U(θ) with respect to all q(θ)-factors except q_i(θ_i), and then sum up these marginals to define U^(S)(θ). A similar partial marginalization procedure is carried out for conjugate exponential family distributions in coordinate ascent VI (Bishop, 2006). Having a well-defined energy function U^(S) allows us to use standard SGMCMC methods to approximate the posterior q(θ) with samples. This serves as the basis for our proposed algorithm that actually approximates this distribution q(θ), which will be discussed shortly.
Theorem 1. The unique solution to the KL minimization problem given in Eq. 4 is given by the Boltzmann distribution q(θ) ∝ exp{− ∑_{i=1}^{M} U_i^(S)(θ_i)}. Please refer to the Supplement for the proof.
In an ideal world, we would be able to use the findings of Theorem 1 directly in conjunction with algorithms like Langevin dynamics and Hamiltonian Monte Carlo to produce empirical distributions for q using U^{(S)} (Liu et al., 2019). However, this is intractable for two reasons: (1) these algorithms generally work well only with small amounts of data, and (2) more importantly, the marginals U^{(S)}_i(θ_i) do not have a closed-form solution but need to be approximated via samples from q. Luckily, since SGMCMC methods only need access to noisy estimates of U^{(S)}, we can run these algorithms on a stochastic estimate of Eq. (5),
$$U^{(S)}(\theta) \approx \hat{U}^{(S)}(\theta; \tilde{\mathcal{D}}) = \sum_{i=1}^{M} \mathbb{E}_{\tilde\theta\sim q}\, \hat{U}(\{\theta_i, \tilde\theta_{\neg i}\}; \tilde{\mathcal{D}}), \tag{6}$$
where Û(·) is defined in Eq. (2). In practice, at timestep t, for i = 1, . . . , M we estimate E_{θ̃∼q} Û({θ_i, θ̃_{¬i}}; D̃_t) with a Monte Carlo approximation. In place of θ̃, we use a single sample θ̃^{(t)} taken from the current approximate distribution q̂^{(t)}, which is composed of samples from previous timesteps (i.e., a uniform distribution over {θ^{(1)}, θ^{(2)}, . . . , θ^{(t)}}). This leads to the following update step for structured SGLD (S-SGLD):
$$\theta^{(t+1)} = \theta^{(t)} - \frac{\epsilon_t}{2}\nabla_\theta \hat{U}^{(S)}(\theta; \tilde{\mathcal{D}}) + \xi_t \quad\text{where}\quad \xi_t \sim \mathcal{N}(0, \epsilon_t I). \tag{7}$$
Similar rules for structured variants of pSGLD (S-pSGLD) and SGHMC (S-SGHMC) can be found in the Supplement. Additionally, the full procedure for structured SGMCMC (S-SGMCMC) can be seen in Algorithm 2.
Remark Since ∇_θ Û^{(S)} is an unbiased estimator of ∇_θ U^{(S)}, we are guaranteed to converge to q when sampling with S-SGMCMC with sufficiently decreasing learning rates, so long as we are in a stationary state. While it is unlikely that the procedure initializes to a stationary state, we observe in practice that our scheme both tends to converge towards and remain in a stationary state. A general proof of convergence is outside the scope of this work and is left to follow-up research.
An example of S-SGMCMC can be seen in Fig. 1(a-b), which features the approximate posterior distributions of a linear regression model with three coefficients and with various independence structures imposed with S-SGLD: (a) joint dependence between w1, w2, and w3; (b-left) dependence between w1 and w2 but independence between w3 and the other coefficients; (b-right) fully factorized. Of note is that the bivariate posterior distributions appear to respect the imposed independence structure. Interestingly, it also appears that the variance shrinks as we induce these factorizations which is a commonly seen artifact when using VI.
5 STRUCTURED DROPOUT SGMCMC
While S-SGMCMC can successfully break dependencies between parameter groups, it does suffer computationally, as each parameter update scales linearly with respect to M. This means that for a single new sample of θ, the model’s forward pass needs to be computed M different times on the same batch of data D̃, which can quickly become prohibitively expensive for deep models when M is large. Ideally, we would prefer a method that both closely resembles the S-SGMCMC procedure and scales independently from the partitioning scheme. This section presents such a method, which we call structured dropout SGMCMC (Sd-SGMCMC), as well as an informal motivation and derivation of the method. More formal details and a theorem proving that both SGMCMC and S-SGMCMC are limiting cases of Sd-SGMCMC can be found in the Supplement.
The main motivation for this technique can be seen by recognizing that the composition {θ^{(t)}_i, θ̃^{(t)}_{¬i}} from Eq. (6) can be rewritten as a sum of masked values rθ^{(t)} + (1 − r)θ̃^{(t)}, where θ̃^{(t)} ∼ q^{(t)} and r_j = 1(i = j) for i = 1, . . . , M. We can decouple the computational scaling from the number of parameter groups M by replacing the M deterministic masks r with K stochastically sampled masks r̃.¹ Doing so results in a slightly different energy function and minibatch loss to optimize:
$$\hat{U}^{(Sd)}(\theta^{(t)}; \tilde{\mathcal{D}}) \approx \frac{M}{K\,\mathbb{E}\left[\sum_{i=1}^{M} r_i\right]} \sum_{k=1}^{K} \hat{U}\!\left(\tilde r^{(t,k)}\theta^{(t)} + (1-\tilde r^{(t,k)})\tilde\theta^{(t,k)}; \tilde{\mathcal{D}}\right) \tag{8}$$
where r̃(t,k) is the kth sample of r̃ for timestep t. A formal justification for Eq. (8) can be found in the Supplement. These energy function approximations lead to the following update step for structured
¹ K is a hyperparameter that is chosen independently of M; however, both M and the distribution of r̃ largely influence how small K can be, due to how they affect the variance of the gradient of the associated posterior energy function.
Algorithm 2: S-SGMCMC
Input: Initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; step sizes {ε_t}_{t=0,...,T−1}.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
1:  for t = 0 to T − 1 do
2:      Sample minibatch D̃^(t) ⊂ D
3:      for i = 1 to M do
4:          Sample θ̃^(t)_{¬i} ∼ q̂^(t)_{¬i}
5:          Û^(S,t)_i = Û([θ^(t)_i, θ̃^(t)_{¬i}]; D̃^(t))
6:      end
7:      ∇_θ Û^(S,t) = Σ_{i=1}^{M} ∇_θ Û^(S,t)_i
8:      θ^(t+1) = SGMCMC_step(θ^(t), ∇_θ Û^(S,t), ε_t)
9:  end
10: return q̂^(T)(θ)
Table 1: IAC and ESS metrics for CIFAR-10, SVHN, and FMNIST with various methods. Subscripts after method names refer to the number of equally sized parameter groups, with |θ| meaning every parameter belongs to its own group. Best results are bolded.
Method           CIFAR-10          SVHN              FMNIST
                 IAC↓    ESS↑      IAC↓    ESS↑      IAC↓    ESS↑
pSGLD            716     8.01      839     6.82      779     7.09
S-pSGLD_2        600     7.44      840     6.80      740     7.55
S-pSGLD_4        599     7.4       834     6.83      751     7.45
S-pSGLD_8        709     6.41      857     6.67      776     7.24
Sd-pSGLD_|θ|     546     8.01      803     7.00      677     8.24
SGHMC            727     7.94      858     6.59      795     6.83
S-SGHMC_2        583     7.49      949     5.74      928     5.67
S-SGHMC_4        624     7.03      961     5.66      915     5.77
S-SGHMC_8        904     4.97      1056    5.30      1142    4.87
Sd-SGHMC_|θ|     584     7.7       828     6.56      782     7.08
dropout variant of SGLD (Sd-SGLD):
$$\theta^{(t+1)} = \theta^{(t)} - \frac{\epsilon_t}{2}\nabla_\theta \hat{U}^{(Sd)}(\theta; \tilde{\mathcal{D}}) + \xi_t \quad\text{where}\quad \xi_t \sim \mathcal{N}(0, \epsilon_t I). \tag{9}$$
The corresponding update rules for the structured dropout variants for pSGLD (Sd-pSGLD) and SGHMC (Sd-SGHMC) are defined in the Supplement. The exact procedure for generating samples of the approximate posterior q̂(t) using structured dropout SGMCMC (Sd-SGMCMC) can also be found in the Supplement.
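A corresponding sketch of the Sd-SGLD update (Eqs. 8 and 9) with Bernoulli(ρ) group masks is given below; again, grad_U_hat and the flat-vector layout are assumptions of this sketch rather than the reference implementation.

```python
import numpy as np

def sd_sgld_step(theta, chain, partition, grad_U_hat, minibatch,
                 step_size, rho, K, rng):
    """One Sd-SGLD step: K stochastic group masks instead of M deterministic ones."""
    M = len(partition)
    grad = np.zeros_like(theta)
    for _ in range(K):
        r = rng.random(M) < rho                        # group-level mask r~
        theta_tilde = chain[rng.integers(len(chain))]
        composed = theta_tilde.copy()
        for i, idx in enumerate(partition):
            if r[i]:
                composed[idx] = theta[idx]             # r*theta + (1 - r)*theta~
        g = grad_U_hat(composed, minibatch)
        for i, idx in enumerate(partition):
            if r[i]:
                grad[idx] += g[idx]                    # gradient flows only through masked-in groups
    grad *= M / (K * M * rho)                          # M / (K * E[sum_i r_i]) scaling from Eq. (8)
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    theta_new = theta - 0.5 * step_size * grad + noise
    chain.append(theta_new)
    return theta_new
```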
An example of this method (specifically Sd-SGLD with r̃_i drawn i.i.d. from Bernoulli(0.5) and K = 4) used on a linear regression model can be seen in Fig. 1(c). Of note, we can see that the dropout variant largely respects the imposed independence structure, though perhaps not as strictly as the exact S-SGLD method seen in Fig. 1(b). Additionally, the posterior variance also seems to have shrunk similarly to S-SGLD when compared against SGLD.
Masking Distribution Should r̃_i be drawn i.i.d. from Bernoulli(ρ), alongside a structure that factorizes by activation components, the method starts to resemble dropout with rate ρ (Srivastava et al., 2014). The main difference is that instead of replacing a parameter value with 0, it is replaced with a sample from the approximate posterior distribution at time t: q̂^{(t)}. While a Bernoulli distribution for r̃ is a natural choice, there are other distributions that can be chosen as well. For instance, r̃_i drawn i.i.d. from N(0, 1) or from Beta(α, β) are both viable choices and can be seen as analogues to Gaussian and Beta-dropout respectively (Srivastava et al., 2014; Liu et al., 2019). Our experiments will largely focus on sampling r̃ from Bernoulli and uniform over [0, 1] (equivalent to Beta(0.5, 0.5)) distributions.
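As a small illustration of these masking choices, the snippet below draws group-level masks from a few candidate distributions and applies the composition rθ + (1 − r)θ̃; the concrete rates and shape parameters are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 8  # number of parameter groups

# Interchangeable choices for the group-level mask r~ discussed above.
r_bernoulli = (rng.random(M) < 0.5).astype(float)  # Bernoulli(0.5) masks
r_gaussian = rng.normal(0.0, 1.0, size=M)          # analogue of Gaussian dropout
r_uniform = rng.random(M)                          # continuous masks on [0, 1]

def blend(theta_groups, theta_tilde_groups, r):
    """Compose r * theta + (1 - r) * theta~ group by group."""
    return [r[i] * theta_groups[i] + (1.0 - r[i]) * theta_tilde_groups[i]
            for i in range(len(theta_groups))]
```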
6 EXPERIMENTS
Overview In this section we evaluate our proposed approach on various models and datasets. Section 6.1 investigates the impact of the variational approximation on the algorithms’ mixing and autocorrelation times using a fully-connected network architecture on MNIST (LeCun et al., 2010). Section 6.2 studies our methods with ResNet-20 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Fashion MNIST (Xiao et al., 2017) and compares them for their accuracy and mixing time. Our experiments reveal that the chains in our proposed methods mix faster than SGMCMC and achieve either comparable or even higher accuracies on average.
We have also conducted experiments on uncertainty visualization, where we tested the proposed methodology on predictive uncertainty estimation by deploying a two-layer fully connected network
on a toy dataset. The uncertainty experimental setup and results, along with more technical details for the other experiments, can be found in the Appendix.
Metrics The primary predictive metric of interest used to evaluate our proposal is classification accuracy. We take the average of an ensemble of 100 models whose weights are sampled from the past samples of the parameter chains in order to calculate the accuracy. Additionally, we also monitor the mixing time of the chains of our methods with both integrated autocorrelation time (IAC) (Sokal, 1997; Goodman & Weare, 2010) and effective sample size (ESS) (Geyer, 1992). IAC measures the correlation between samples in a chain and, in turn, describes the inefficiency of an MCMC algorithm. IAC is computed as $\tau_f = \sum_{\tau=-\infty}^{\infty} \rho_f(\tau)$, where $\rho_f$ is the normalized autocorrelation function of the stochastic process that generated the chain for f, estimated as $\hat\rho_f(\tau) = \hat c_f(\tau)/\hat c_f(0)$, with $\hat c_f(\tau) = \frac{1}{N-\tau}\sum_{n=1}^{N-\tau}(f_n - \mu_f)(f_{n+\tau} - \mu_f)$ and $\mu_f = \frac{1}{N}\sum_{n=1}^{N} f_n$. We note that we calculated $\hat c_f(\tau)$ with a fast Fourier transform as it is more computationally efficient than using the direct sum. ESS measures how many independent samples would be equivalent to a chain of correlated samples and is calculated as $n_{\mathrm{eff}} = \frac{n}{1+(n-1)p}$, where n is the number of samples and p is the autocorrelation.² We note that a model with higher ESS and lower IAC has a faster mixing time. Please see the Appendix for implementation details and the experimental setup for the metrics and our models.
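As a rough illustration of the IAC computation described above, the following numpy sketch estimates the integrated autocorrelation time of a one-dimensional chain using an FFT-based autocovariance; the truncation heuristic at the first negative autocorrelation is an assumption of this sketch, not necessarily the estimator behind the reported numbers.

```python
import numpy as np

def integrated_autocorrelation_time(chain):
    """IAC of a 1-D chain f_1..f_N, with the autocovariance computed via FFT."""
    f = np.asarray(chain, dtype=float)
    f = f - f.mean()
    n = len(f)
    fft = np.fft.rfft(f, n=2 * n)                  # zero-padding avoids circular wrap-around
    acov = np.fft.irfft(fft * np.conj(fft))[:n]
    acov /= np.arange(n, 0, -1)                    # 1 / (N - tau) normalisation
    rho = acov / acov[0]                           # normalised autocorrelation rho_f(tau)
    cutoff = int(np.argmax(rho < 0)) if np.any(rho < 0) else n
    return 1.0 + 2.0 * rho[1:cutoff].sum()
```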
6.1 DROPOUT RATE & GROUP SIZE INVESTIGATION
The aim of this set of experiments is to study the effects that the number of independent parameter groups (or, alternatively, the amount of allowed posterior correlations) has on accuracy and mixing time when using our proposed methods. We compare pSGLD, S-pSGLD, and Sd-pSGLD with a Bernoulli(ρ) masking distribution with dropout rates ρ ∈ {0.1, 0.3, 0.5} on a fully-connected neural network with 2 hidden layers of 50 hidden units each, trained and evaluated on MNIST using the standard train and test split. The model has 42,200 parameters in total. For S-pSGLD and Sd-pSGLD, these parameters are evenly distributed into M groups, where M ranges from 4 to 42,200. Accuracy, IAC, and ESS are reported in Fig. 2 using 100,000 posterior samples after a burn-in period of 150,000. More details on the implementation of the model regarding training and evaluation can be found in the Appendix.
As shown in Fig. 2(a), for S-pSGLD we observe that as we increase the number of groups the accuracy drops dramatically whereas Sd-pSGLD’s accuracy improves slightly and then remains fairly stable. In the best case, Sd-pSGLD achieves an accuracy of 96.3% with 32 groups and dropout rate of 0.5 which outperforms pSGLD with accuracy of 94.2%. We speculate that the dropout-like behavior is beneficial for regularizing the model (much like normal dropout), hence the improved accuracy across all dropout rates. Similarly, a single sample used for the Monte Carlo estimate in S-SGMCMC may not be enough as the number of groups M increase; however, increasing the number of samples in this scenario is infeasible due to S-SGMCMC scaling as O(M). Fig. 2(b-c) portrays the comparison between number of groups and mixing time metrics IAC and ESS. As the number of groups gradually increase, we note that S-pSGLD mixes faster, as does Sd-pSGLD to lesser and lesser degrees as ρ increases. This behavior is to be expected due to Theorem 2, with Sd-pSGLD exhibiting mixing times more similar to pSGLD when ρ = 0.5 and more similar to S-pSGLD when ρ = 0.1.
6.2 SYSTEMATIC COMPARISON ON REAL-WORLD DATA
The goal of these experiments is to test the proposed methodology on larger-scale datasets which mimic real-world data: CIFAR-10, SVHN, and FMNIST. We evaluate our methods on predictive accuracy and on the mixing times of the chains. We employ ResNet-20 for SVHN and FMNIST without any data augmentation to assess our methods. For CIFAR10 we employ the same data augmentation process as proposed in Cubuk et al. (2019). We evaluate the methods in terms of accuracy over time and overall mixing time in terms of IAC and ESS with two base algorithms: pSGLD and SGHMC. For efficiency purposes we limited our scope to models with either fully joint or fully factorized posteriors. As such, for the latter we employed Sd-SGMCMC methods
2We used the TensorFlow implementation for ESS which uses the direct sum for the autocorrelation.
as S-SGMCMC would not be feasible with the amount of parameter groups present. Bernoulli(ρ) and uniform masking distributions were investigated and are denoted as SBernoulli-SGMCMC and SUniform-SGMCMC respectively, with ρ varying between datasets as determined by a hyperparameter search (detailed in the Appendix).
In Fig. 3 we observe how quickly the proposed methods and the baseline SGMCMC methods approach their optimum accuracy over the course of training. As is shown, SBernoulli-SGMCMC and SUniform-SGMCMC appear to achieve optimal accuracy values much faster than SGMCMC on all datasets and with all base sampling schemes. In some cases, the variational methods achieve better accuracy values than the baseline methods, as seen for CIFAR10 in Fig. 3.
Mixing Time Comparisons We further validated our findings from Section 6.1 by evaluating the IAC and ESS on larger datasets using various methods. Both pSGLD and SGHMC were used as base methods in conjunction with both S-SGMCMC and Sd-SGMCMC using a Bernoulli masking distribution. IAC and ESS were calculated for these methods using the latest 5,000 samples after sampling for 300 epochs; the results can be found in Table 1. For CIFAR-10, we see that Sd-SGMCMC with every parameter in a different group mixes the fastest of all methods. Likewise, for SVHN and FMNIST, Sd-pSGLD with every parameter belonging to its own group mixes faster than all other methods. At times it does appear that increasing the number of parameter groups causes slower mixing for S-SGMCMC. This could potentially be attributed to the large variance in the gradients from using only a single sample per Monte Carlo estimate.
6.3 EXPLORING PARTITIONING SCHEMES
This part of the study aims to explore the capabilities of the proposed methodology further. Here we explore different parameter partitioning schemes on regression datasets.
Here we present the results with different partitions on various regression datasets. We used 7 different datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). For the evaluation we chose a simple fully connected network with two layers of 50 neurons each, and we used SGLD as the optimizer. As the performance metric we chose mean squared error (MSE). We performed hyperparameter tuning over different learning rates, and the final results are the means and standard deviations of 5 runs with different seeds. We do not observe any specific systematic trends across the partitions, apart from the fact that in some cases random partitioning performs better. Thus, the use of either random partitioning or the fully-factorized partitioning, where every parameter is in a different group, appears to be a valid choice a priori; especially the latter, since we have noted earlier the faster mixing times associated with this partitioning scheme. More details about the partitioning scheme experiments can be found in the Appendix.
7 CONCLUSIONS
In an attempt to hybridize MCMC and VI, we proposed S-SGMCMC: an approach that produces samples from a structured posterior by running SGMCMC on a modified energy function. The resulting Markov chain becomes asymptotically decoupled across user-specified groups of parameters, resulting in faster convergence. For better computational efficiency, we proposed Sd-SGMCMC: a further generalization of S-SGMCMC inspired by dropout. This extension allows interpolating between a SGMCMC algorithm and its corresponding S-SGMCMC method.
Our experimental results demonstrate that the proposed methods impose structure over posterior distributions, improve the mixing of the chains, and result in similar or better posterior predictive accuracies compared to SGMCMC on a variety of (deep) models. Our experimental evaluations have provided strong empirical evidence for the efficacy of our approach. We also showed that the proposed approach is compatible with various deep learning architectures, including ResNet-20, and various datasets.
Despite its proven capabilities, our proposed methodology does come with some limitations. Namely, for quick access our methods require keeping chains of samples on the GPU whereas the baseline SGMCMC methods can simply save samples to disk. Additionally, S-SGMCMC scales poorly with respect to the number of parameter groups. Sd-SGMCMC manages to break this dependency; however, it still requires slightly more compute than SGMCMC per sample, but it is comparable in wall clock time. Possible future work could focus on more theoretical analyses of S-SGMCMC, such as formal proofs of convergence.
8 ETHICS STATEMENT
The main focus of our work is to train models faster by decreasing the convergence time of their training phase. In this scope we are not aware of any ethical concerns of our research.
9 REPRODUCIBILITY
For this work we have made sure to guarantee reproducibility of the results. We provide all the technical details of our experiments and their implementations, like the hyperparameters, the data, the frameworks and the experimental setups. We have used only open source datasets that are easily accessible to the public. Finally we commit to release the code that we implemented for this work via a public repository.
A THEOREM 1
Proof. We begin with some preliminaries from the main text. Given data D = {(x_i, y_i)}_{i=1,...,N}, parameters θ, a proper prior distribution p(θ), and a likelihood p(D|θ) = ∏_{i=1}^{N} p(y_i|x_i, θ), suppose we are interested in the corresponding posterior distribution p(θ|D) ∝ p(D|θ)p(θ). A convenient representation of the posterior is as a Boltzmann distribution:
$$p(\theta\mid\mathcal{D}) \propto \exp\{-U(\theta)\} \quad\text{where}\quad U(\theta) = -\sum_{(x,y)\in\mathcal{D}} \log p(y\mid x,\theta) - \log p(\theta). \tag{10}$$
U is typically referred to as the posterior energy function. Note that the posterior distribution is typically intractable due to the normalizing constant.
We also write the equation for KL divergence from the main text:
$$J(q(\theta)) = D_{\mathrm{KL}}(q(\theta)\,\|\,p(\theta\mid\mathcal{D})) \tag{11}$$
$$\equiv \mathbb{E}_{\theta\sim q}\left[\log\frac{q(\theta)}{p(\theta\mid\mathcal{D})}\right] \tag{12}$$
We then rewrite Eq. 4 as follows:
$$J(q(\theta)) = \mathbb{E}_{\theta\sim q}[\log q(\theta)] - \mathbb{E}_{\theta\sim q}[\log p(\theta,\mathcal{D})] + C \tag{13}$$
$$= \mathbb{E}_{\theta_i\sim q_i}[\log q_i(\theta_i)] + \sum_{j\neq i}\mathbb{E}_{\theta_j\sim q_j}[\log q_j(\theta_j)] - \int \log p(\theta,\mathcal{D})\, q_i(\theta_i)\, d\theta_i \prod_{j\neq i} q_j(\theta_j)\, d\theta_j + C \tag{14}$$
for some i ∈ {1, . . . ,M} where ¬i := {1, . . . ,M} \ {i} and C = log p(D). In order to find the optimal distribution that respects the factorization constraints imposed between parameter groups, we need to minimize this functional over q — or rather every qi. This is done by taking the functional derivative of J with respect to qi, setting it equal to zero, and solving for qi:
$$\frac{\delta J(q(\theta))}{\delta q_i(\theta_i)} = \int \log p(\theta,\mathcal{D}) \prod_{j\neq i} q_j(\theta_j)\, d\theta_j - 1 - \log q_i(\theta_i) := 0 \tag{15}$$
$$\implies \log q_i(\theta_i) = \mathbb{E}_{\tilde\theta_{\neg i}\sim q_{\neg i}}\left[\log p(\theta_i, \tilde\theta_{\neg i}, \mathcal{D})\right] - 1 \tag{16}$$
$$\implies q_i(\theta_i) \propto \exp\left\{\mathbb{E}_{\tilde\theta_{\neg i}\sim q_{\neg i}}\left[\log p(\theta_i, \tilde\theta_{\neg i}, \mathcal{D})\right]\right\}. \tag{17}$$
By defining the energy U^{(S)}_i(θ_i) = −E_{θ̃_{¬i}∼q_{¬i}}[log p(θ_i, θ̃_{¬i}, D)], we realize that by minimizing the KL-divergence in Eq. 4, the approximate posterior distribution q = ∏_{i=1}^{M} q_i takes the form of a Boltzmann distribution as in Eq. 1 with U^{(S)}(θ) = ∑_{i=1}^{M} U^{(S)}_i(θ_i).
It remains to be shown that the solution is unique. To this end, we refer to the convexity of the KL divergence in function space (Cover & Thomas, 2001). This implies that the stationary point of the KL is indeed a global optimum and unique.
B DERIVING U^{(Sd)}
With just a slight shift in perspective, it is actually possible to further generalize U (S) (and consequently S-SGMCMC) to produce a broader class of approximate sampling algorithms. This is done
by first noting that U (S) can be represented with a scaled double-expectation:
$$U^{(S)}(\theta) = -\frac{M}{\mathbb{E}_{r\sim p^{(S)}}\left[\sum_{i=1}^{M} r_i\right]}\, \mathbb{E}_{r\sim p^{(S)}}\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right] \tag{18}$$
where p^{(S)}(r) = Cat(r; M^{-1}, . . . , M^{-1}) and (rθ + (1 − r)θ̃)_i is equal to θ_i if r_i = 1 and θ̃_i otherwise, for i = 1, . . . , M. Note that this is constructed in this manner specifically so that U^{(S)} remains differentiable with respect to θ. Also note that though the denominator appears superfluous, as E_{r∼p^{(S)}}[∑_{i=1}^{M} r_i] = 1, it is necessary for certain theoretical properties, as seen in Theorem 2.
By replacing p^{(S)} with a more flexible distribution, we can further generalize and encapsulate different energy functions to sample from. One such choice is p^{(Sd)}(r; ρ) ∝ ∏_{i=1}^{M} Bern(r_i; ρ) 1(∑_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1).³ Substituting p^{(S)} for p^{(Sd)} in Eq. (18) yields a new energy function that we will refer to as U^{(Sd)}. We note that this choice of distribution leads to a dropout-like behavior (Nalisnick et al., 2019; Srivastava et al., 2014), where the composition of model parameters as rθ + (1 − r)θ̃ leads to each parameter group θ_i having a probability of approximately ρ of being used in a prediction and a probability of (1 − ρ) of being replaced by θ̃_i from the approximate posterior (in traditional dropout, θ_i would instead be replaced with 0). Likewise, we will denote methods that use this energy function for sampling as structured dropout SGMCMC (Sd-SGMCMC), with different variants all sharing the same Sd prefix (e.g. Sd-SGHMC).
In practice, the double-expectation in U (Sd) is jointly approximated using a Monte Carlo estimate with K samples. This leads to Eq. (8) in the main paper. We note that by approximating U (Sd) in this way, computing a gradient no longer scales on the order of O(M), but rather O(K). This means that the choice of structure imposed on the posterior distribution remains independent of computing resources. As such, configurations with large amounts of parameter groups are typically only feasible when using Sd-SGMCMC as S-SGMCMC would use too much memory and/or compute per sample.
C THEOREM 2
Theorem 2. For a given set of parameters θ partitioned into M groups, under minor assumptions, (i) U^{(Sd)} → U as ρ → 1 and (ii) U^{(Sd)} → U^{(S)} as ρ → 0. Thus, distributions approximated by Sd-SGMCMC lie on a continuum, with those generated by S-SGMCMC at one extreme and those from SGMCMC at the other.
Proof. Assume an arbitrary θ, D, n ∈ N, and that E_{θ̃∼q}[log p(rθ + (1 − r)θ̃, D)] exists for r ∈ R. As an aside, this proof assumes that p^{(Sd)}(r; ρ) ∝ ∏_{i=1}^{M} Bern(r_i; ρ) 1(∑_{i=1}^{M} r_i > 0) with ρ ∈ (0, 1); however, the theorem still holds for an arbitrary p^{(Sd)} so long as the mean approaches 1 and the variance approaches 0 as n → ∞.
(i) Let r^{(n)} ∼ p^{(Sd)}(ρ_n), where ρ_n ∈ (0, 1) for all n and ρ_n → 1. It follows that r^{(n)} → {1}^M as n → ∞ in distribution (see Lemma 1 in the Supplement). Due to the bounded and finite support R, we find the following:
$$U^{(Sd)}(\theta) = -\frac{M}{\mathbb{E}_{r\sim p^{(Sd)}}\left[\sum_{i=1}^{M} r_i\right]} \sum_{r\in\mathcal{R}} p^{(Sd)}(r;\rho_n)\,\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right] \tag{19}$$
$$\to -\frac{M}{M} \sum_{r\in\mathcal{R}} \mathbb{1}(\forall_i\, r_i = 1)\,\mathbb{E}_{\tilde\theta\sim q}\left[\log p(\theta, \mathcal{D})\right] \quad\text{as } n\to\infty \tag{20}$$
$$= -\log p(\theta, \mathcal{D}) = U(\theta) \tag{21}$$
(ii) Let r^{(n)} ∼ p^{(Sd)}(ρ_n), where ρ_n ∈ (0, 1) for all n and ρ_n → 0. It follows that r^{(n)} → r ∼ Cat(M^{-1}, . . . , M^{-1}) as n → ∞ in distribution (see Lemma 2 in the Supplement). Due to the bounded and finite support R, we find the following:
$$U^{(Sd)}(\theta) = -\frac{M}{\mathbb{E}_{r\sim p^{(Sd)}}\left[\sum_{i=1}^{M} r_i\right]} \sum_{r\in\mathcal{R}} p^{(Sd)}(r;\rho_n)\,\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right] \tag{22}$$
$$\to -\frac{M}{1} \sum_{r\in\mathcal{R}} \frac{\mathbb{1}\left(\sum_{i=1}^{M} r_i = 1\right)}{M}\,\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right] \quad\text{as } n\to\infty \tag{23}$$
$$= -\sum_{i=1}^{M} \mathbb{E}_{\tilde\theta\sim q}\left[\log p([\theta_i, \tilde\theta_{\neg i}], \mathcal{D})\right] = U^{(S)}(\theta) \tag{24}$$
³Other choices of distribution that are well justified include any with support over [0, 1]^M and with measure 0 over {0}^M. Exploring the effects these distributions have is an interesting line of future inquiry.
For both Lemmas 1 and 2, let
$$p^{(Sd)}(r;\rho) = \frac{\rho^{\sum_{i=1}^{M} r_i}(1-\rho)^{M-\sum_{i=1}^{M} r_i}}{1-(1-\rho)^{M}}\,\mathbb{1}(\forall_i\, r_i\in\{0,1\})\,\mathbb{1}\!\left(\sum_{i=1}^{M} r_i > 0\right) \tag{25}$$
Lemma 1. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 1 as n → ∞ then r(n) → r ∼ δ({1}M ) in distribution as n→∞.
Proof.
$$p^{(Sd)}(r=\{1\}^M;\rho_n) = \frac{\rho_n^{M}(1-\rho_n)^{0}}{1-(1-\rho_n)^{M}} \tag{26}$$
$$\to 1 \quad\text{as } n\to\infty \tag{27}$$
$$\implies r^{(n)} \to \delta(\{1\}^M) \ \text{in distribution.} \tag{28}$$
Lemma 2. For r(n) ∼ p(Sd)(ρn), ρn ∈ (0, 1) and n ∈ N, if ρn → 0 as n → ∞ then r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
Proof. Let i ∈ {1, . . . ,M}.
$$p^{(Sd)}(r_i=1, r_{\neg i}=0;\rho_n) = \frac{\rho_n(1-\rho_n)^{M-1}}{1-(1-\rho_n)^{M}} \tag{29}$$
$$\overset{\text{l'Hôpital's rule}}{=} \frac{(1-\rho_n)^{M-1} + \rho_n(M-1)(1-\rho_n)^{M-2}}{M(1-\rho_n)^{M-1}} \tag{30}$$
$$\to \frac{1}{M} \quad\text{as } n\to\infty \tag{31}$$
Since the resulting probabilities sum to 1, this implies that r(n) → r ∼ Cat(M−1, . . . ,M−1) in distribution as n→∞.
D DERIVING U^{(Sd)}
To derive U^{(Sd)}, we must first start with a shift in perspective on how U^{(S)} is represented. We will rewrite the function in the following way:
$$U^{(S)}(\theta) = -\sum_{i=1}^{M} \mathbb{E}_{\theta_{\neg i}\sim q_{\neg i}}\left[\log p([\theta_i, \theta_{\neg i}], \mathcal{D})\right] \tag{32}$$
$$= -\frac{M}{\mathbb{E}_{r\sim p^{(S)}}\left[\sum_{i=1}^{M} r_i\right]}\,\mathbb{E}_{r\sim p^{(S)}}\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right] \tag{33}$$
where p^{(S)} is an M-dimensional categorical distribution with uniform weights M^{-1}, and p(rθ + (1 − r)θ̃, D) is the joint probability of the parameters taking the values rθ + (1 − r)θ̃ and the data D.⁴
We note that changing the distribution of r leads to different energy functions to sample from. One such choice is p^{(Sd)}(r; ρ) ∝ ρ^{∑_{i=1}^{M} r_i}(1 − ρ)^{M−∑_{i=1}^{M} r_i} 1(∀_i r_i ∈ {0, 1}) 1(∑_{i=1}^{M} r_i > 0) for ρ ∈ (0, 1). Note that this is identical to r_i drawn i.i.d. from Bernoulli(ρ), conditioned on ∑_{i=1}^{M} r_i > 0. Let the support of p^{(Sd)} be denoted as R = {0, 1}^M \ {0}^M. This leads to the following energy function:
$$U^{(Sd)}(\theta) = -\frac{M}{\mathbb{E}_{r\sim p^{(Sd)}}\left[\sum_{i=1}^{M} r_i\right]}\,\mathbb{E}_{r\sim p^{(Sd)}}\mathbb{E}_{\tilde\theta\sim q}\left[\log p(r\theta + (1-r)\tilde\theta, \mathcal{D})\right]. \tag{34}$$
In practice, a few approximations are made to compute the corresponding U (Sd). Firstly, we approximate p(Sd) with an M -dimensional Bernoulli(ρ) distribution as the difference is minute when Mρ is large. Secondly, the outer expectation in Eq. (34) is approximated with a Monte Carlo estimate of K samples. The inner expectation is also approximated with a Monte Carlo estimate using the latest approximate posterior q̂(t). However, just like for S-SGMCMC, only a single sample is used. This further leads to:
$$\hat{U}^{(Sd)}(\theta^{(t)}; \tilde{\mathcal{D}}) = \frac{1}{K\rho}\sum_{k=1}^{K} \hat{U}(r^{(t,k)}\theta^{(t)} + (1-r^{(t,k)})\tilde\theta^{(t,k)}; \tilde{\mathcal{D}}) \tag{35}$$
E ALGORITHM FOR Sd-SGMCMC
The procedure for Sd-SGMCMC can be seen in Algorithm 3.
Algorithm 3: Sd-SGMCMC
Input: Initial sample θ^(0); parameter partitions θ_1, . . . , θ_M; data set D; initial auxiliary statistics ξ^(0); step sizes {ε_t}_{t=1,...,T}; masking distribution p^(Sd); dropout iterations K.
Output: q̂^(T)(θ) := {θ^(t)}_{t=1,...,T}
1:  for t = 0 to T − 1 do
2:      Sample minibatch D̃^(t) ⊂ D
3:      for k = 1 to K do
4:          Sample masks r^(t,k)_1, . . . , r^(t,k)_M ∼ p^(Sd)
5:          Sample θ̃^(t,k) ∼ q̂^(t)
6:          θ^(t,k) = [r^(t,k)_i θ^(t)_i + (1 − r^(t,k)_i) θ̃^(t,k)_i]_{i=1,...,M}
7:          Û^(Sd,t)_k = Û(θ^(t,k); D̃^(t))
8:      end
9:      ∇_θ Û^(Sd,t) = (M / (K E_{r∼p^(Sd)}[Σ_{i=1}^{M} r_i])) Σ_{k=1}^{K} ∇_θ Û^(Sd,t)_k
10:     θ^(t+1), ξ^(t+1) = SGMCMC_step(θ^(t), ∇_θ Û^(Sd,t), ξ^(t), ε_t)
11: end
12: return q̂^(T)(θ)
4rθ + (1 − r)θ̃ is a slight abuse of notation that is meant to represent masking out θi when ri = 0 and masking out θ̃i when ri = 1.
F SGMCMC UPDATE RULES
The update rules for SGLD, pSGLD, and SGHMC are defined as follows:
$$\text{SGLD:}\quad \theta^{(t+1)} = \theta^{(t)} - \frac{\epsilon_t}{2}\nabla_\theta \hat{U}(\theta^{(t)}) + \mathcal{N}(0, \epsilon_t I) \tag{36}$$
$$\text{pSGLD:}\quad \theta^{(t+1)} = \theta^{(t)} - \frac{\epsilon_t}{2}\left[R(\theta^{(t)})\nabla_\theta \hat{U}(\theta^{(t)}) + \sum_\theta \nabla_\theta R(\theta^{(t)})\right] + \mathcal{N}(0, \epsilon_t R(\theta^{(t)})) \tag{37}$$
$$\text{SGHMC:}\quad \theta^{(t+1)} = \theta^{(t)} + \epsilon_t M^{-1} m^{(t+1)} \tag{38}$$
$$m^{(t+1)} = (1 - \gamma\epsilon_t M^{-1})\, m^{(t)} - \epsilon_t \nabla_\theta \hat{U}(\theta^{(t)}) + \mathcal{N}(0, 2\gamma - \epsilon_t \hat{V}(\theta^{(t)})) \tag{39}$$
where ε_t is the step size at time step t, R(·) and M are preconditioners, γ ≥ 0 is a friction term, and V̂(·) is an estimate of the covariance induced by the stochastic gradient.⁵
The update rules for the S-SGMCMC variants are defined analogously to Eqs. 36-39, but with all instances of Û(θ^{(t)}) replaced by Û^{(S)}(θ^{(t)}). Likewise, replacing them with Û^{(Sd)}(θ^{(t)}) yields the Sd-SGMCMC variants.
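For illustration, here is a minimal numpy sketch of one SGHMC-style update following Eqs. 38 and 39 as written, with the stochastic-gradient covariance estimate V̂ dropped for simplicity; grad_U_hat is a placeholder for the minibatch energy gradient and mass_inv plays the role of M^{-1}.

```python
import numpy as np

def sghmc_step(theta, m, grad_U_hat, minibatch, step_size, gamma, mass_inv, rng):
    """Simplified SGHMC update: momentum then position; noise variance 2*gamma (V_hat omitted)."""
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma), size=theta.shape)
    m_new = (1.0 - gamma * step_size * mass_inv) * m \
            - step_size * grad_U_hat(theta, minibatch) + noise
    theta_new = theta + step_size * mass_inv * m_new
    return theta_new, m_new
```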
G ABLATION STUDY
This subsection aims to further explore the capabilities of the proposed methodology. More specifically, we visualize uncertainty for a two-layer Fully Connected Network and experiment with various parameter partitions.
Parameter Partitions. We tested our proposal with four partitioning schemes on a fully connected network with two layers of 50 neurons each on a regression task. The partitioning schemes that we used are the following: (a) the parameters are split into 3 groups randomly, (b) the parameters are split by layer (3 layers: 1 input and 2 hidden), (c) the parameters are grouped by activation neurons inside the layers, and (d) every parameter belongs to its own group. We used 7 different datasets: the wine quality dataset (Cortez et al., 2009), the Boston housing dataset (Harrison Jr & Rubinfeld, 1978), the obesity levels dataset (Palechor & de la Hoz Manotas, 2019), the Seoul bike-sharing dataset (E et al., 2020; E & Cho, 2020), the concrete compressive strength dataset (Yeh, 1998), and the airfoil self-noise dataset (Brooks et al., 1989). Every dataset was split into 75% training data, 10% validation data, and 15% test data. We trained the model on the training set and validated it on the validation set with early stopping. For every dataset and every partitioning scheme we used the learning rates 1e-3, 1e-4, 1e-5, 1e-6, and 1e-7 for hyperparameter tuning. For each combination of partition and dataset, we chose the learning rate that provides the best score on the test set; as the score, we used the mean squared error. The final learning rates that we used are presented in Table 3.
⁵Note that we abuse notation in Eqs. 36-39, where the addition of N(µ, Σ) denotes the addition of a normally distributed random variable with mean µ and covariance Σ.
H DETAILS ON EXPERIMENTS
H.1 QUALITATIVE REGRESSION EXPERIMENTS
First, we aim to showcase qualitative differences in the empirical posterior distributions generated by a baseline SGMCMC algorithm and our proposed variants. To do so, we consider a regression task where 100 randomly sampled three-dimensional covariates {x⃗_i = [x_{i,1}, x_{i,2}, x_{i,3}]^T}_{i=1,...,100} are used to sample response values y_i ∼ N(w⃗^T x⃗_i + b, σ²), where w⃗ = [w_1, w_2, w_3]^T = [1.5, −0.8, 1.3]^T, b = 0.5, and σ² = 1. More details on the generation process for x⃗ can be found in the Supplement.
We choose to fit a linear regression model of the same form as the generation process. σ2 is assumed to be known. Thus, θ = [w1, w2, w3, b]. A standard normal distribution is used as the prior for each parameter. Due to conjugacy, the posterior distribution can be calculated analytically. As such, the MAP is roughly θ̂MAP ≈ [0.52, 0.31, 0.47, 0.84]. The approximated posterior distributions for θ are found using SGLD, S-SGLD, and Sd-SGLD. For the latter two sampling schemes, two parameter partitions are tested: (i) two groups of parameters where θ1 = [w1, w2] and θ2 = [w3, b] and (ii) four groups of parameters where θ1 = w1, θ2 = w2, θ3 = w3, and θ4 = b. For Sd-SGLD, ρ = 0.5 and K = 4 was used.
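A minimal sketch of this toy setup is given below; the covariate distribution is an assumption here (the exact generation process is described in the Supplement), so the resulting posterior mean will not necessarily match the value quoted above. Under the standard normal prior the Gaussian posterior is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 100, 1.0
w_true, b_true = np.array([1.5, -0.8, 1.3]), 0.5

X = rng.normal(size=(n, 3))                      # placeholder covariate distribution
y = X @ w_true + b_true + rng.normal(0.0, np.sqrt(sigma2), size=n)

# Conjugate posterior for theta = (w, b) under a standard normal prior:
# precision A = I + X_aug^T X_aug / sigma2, mean = A^{-1} X_aug^T y / sigma2.
X_aug = np.hstack([X, np.ones((n, 1))])
A = np.eye(4) + X_aug.T @ X_aug / sigma2
theta_mean = np.linalg.solve(A, X_aug.T @ y / sigma2)
```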
The resulting posterior distributions for (w1, w2) and (w1, w3) from all five scenarios, with SGLD in the leftmost column as our baseline, can be seen in Fig. 1. We observe that, as expected, correlations between (w1, w2) still exist when they are allocated to the same parameter group and become apparently independent when assigned to different groups. We also note that the variance of the distributions shrink as the parameter space is partitioned into smaller groups. The underestimation of posterior variance is a commonly reported finding for VI techniques and is interesting to note that our non-parametric methods appear to exhibit this behavior as well. Finally, it appears that the Sd-SGLD adequately approximates S-SGLD with just slightly higher variances and very minor correlations between parameter groups being exhibited.
H.2 REAL-WORLD DATA EXPERIMENTS
Framework details. In this subsection, we provide more detailed results for our experiments and a grid search for FMNIST, CIFAR10, and SVHN. We note that all the code apart from the metrics was written in PyTorch (Paszke et al., 2019). Regarding the metrics, ESS was adopted from the TensorFlow probability library (Dillon et al., 2017; Abadi et al., 2016) and IAC was calculated in Python. For all the experiments, we used a seed of 2. Moreover, we note that we grouped the parameters in an ordered way for Sd-pSGLD and S-pSGLD. We denoted previously that Kρ is the number of groups, so every parameter with index i is assigned to group i mod Kρ. If, for instance, Kρ is 8, then parameter 1 will go to group 1, parameter 2 will go to group 2, parameter 9 will go to group 1, etc. If Kρ is the same as the number of parameters, every parameter will go into its own group.
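A minimal sketch of this ordered (round-robin) grouping is shown below; indices are zero-based here, whereas the text above counts parameters from 1.

```python
def round_robin_groups(num_params, num_groups):
    """Assign parameter index i to group i mod num_groups."""
    groups = [[] for _ in range(num_groups)]
    for i in range(num_params):
        groups[i % num_groups].append(i)
    return groups

# With 8 groups, parameters 0 and 8 land in the same group, parameter 1 in the next, etc.
partition = round_robin_groups(num_params=42200, num_groups=8)
```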
MNIST. Regarding MNIST, we ran all the experiments for 500 epochs with a batch size of 500 and a learning rate of 1e-2. For Sd-pSGLD, the K is set to 300, which is the forward passes that the model does within 1 epoch. For the grouping of the parameters, for Sd-pSGLD we used group sizes of 2,4,8,32,128,512,2048,4096,8192,16384,32768 and 42200; and for S-pSGLD we used groups sizes of 2,8,32,128,512,2048,4096 and 8192.
FashionMNIST. We ran all experiments for 300 epochs with a batch size of 500. For Sd-SGHMC, K is set to 2, which is the number of forward passes the model does within one epoch. When experimenting with K, we observed that we do not need to set K very high; even a small value like the 2 used here is enough to produce the same results as a K of 200 or 300, which saves significant training time. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3 and for S-SGHMC a learning rate of 1e-2.
CIFAR10. The setup is similar to the one we used for FashionMNIST, as we ran all experiments for 300 epochs with a batch size of 128. For Sd-SGHMC, K is set to 2, which is the number of forward passes the model does within one epoch. Regarding the parameter partitioning, for Sd-SGMCMC we put every parameter in a different group, and for S-SGMCMC we used groups of 2, 4, 8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performance with learning rates of 1e-2, 1e-3, 1e-4, and 1e-5. For S-pSGLD we used a learning rate of 1e-3, and for S-SGHMC a learning rate of 1e-2. We focused our strategy on evaluating the accuracy of the different combinations of hyperparameters with the proposed methods, as can be seen in Figs. 4 and 5. Quantitative results on IAC, ESS and maximum accuracy are given in Tables 6 and 7.
SVHN. We also ran all of the experiments for 300 epochs with a batch size of 128. Here for Sd-SGHMC, the K is set to 2, which is the forward passes that the model does within 1 epoch. We note that K here is less than on CIFAR10 and FashionMNIST, but as we mentioned before, this does not make a difference for our results, as we have tested. Regarding the parameter partitioning, for Sd-SGMCMC, we put every parameter in a different group, and for S-SGMCMC we used groups of 2,4,8, and 16. For Sd-pSGLD, pSGLD, Sd-SGHMC and SGHMC we tested their performances with
learning rates of 1e-1,1e-2,1e-3,1e-4,1e-5,1e-6. For S-pSGLD we used a learning rate of 1e-4, and for S-SGHMC, a learning rate of 1e-2. Same as in CIFAR10, we conducted a grid search for learning rate, dropout rate, and optimizers to find the best performing models and test them for their accuracy. We can observe these results in Figure 3 in the main paper. The strategy that we followed is the same as in CIFAR10 and is presented in Figs. 6 and 7. | 1. What is the main contribution of the paper regarding faster-mixing speed in posterior inference?
2. What are the strengths and weaknesses of the proposed method, particularly in its independence structure and modified energy function?
3. Do you have any concerns regarding the theoretical soundness of the proposed algorithms, such as their reliance on previous samples and potential violation of the Markov assumption?
4. How does the reviewer assess the empirical evaluation of the proposed methods, including their performance in classification tasks and uncertainty quantification abilities?
5. Are there any questions or suggestions for improving the paper, such as providing more details in Algorithm 1 or elaborating on the reasoning behind the better performances of S_d-SGMCMC? | Summary Of The Paper
Review | Summary Of The Paper
The author proposed a framework to incorporate the independence structure into the posterior inference for faster-mixing speed. To achieve that, the author designed two specific algorithms called S-SGMCMC and S_d-SGMCMC, respectively. Specifically, S-SGMCMC consisted of the following steps. First, the target random variables (θ) are gathered into mutually independent groups. Then, a modified energy function is derived by minimizing the KL divergence between the posterior q and target p(θ|D). The last step is to apply a standard SGMCMC method to draw samples from the resulting modified energy function. Further, the author also built a connection between dropout and the modified energy function, which results in a structured dropout SGMCMC (S_d-SGMCMC) with better scalability. The author claimed the resulting algorithms achieved faster mixing speed and better classification accuracy when applied to real-world classification tasks.
Review
Strength
Personally, I find the proposed method interesting. The paper is written clearly and easy to follow, and the overall idea is easy to understand. I have checked the proofs of the theorem and they seem to be correct. To support the claims, the proposed methods were applied to real-world large data set to confirm their advantages compared to the standard full SGMCMC with some ablation studies, although I still think there is room for improvement.
Weakness and concerns
Although the proposed method is easy to follow, I still find some parts unclear and elaborations are needed. First, the author claims that building the independent structure into the posterior can help improve the mixing speed (bottom of page 1, the first paragraph in the conclusion, etc.). However, I cannot see direct reasoning behind those claims, could you elaborate more on this? In section 6.1, I am not sure I understand the reasoning behind the better performances of S_d-SGMCMC. Can you elaborate more on "regularizing the model"? On page 4, why θ̃^(t,i) is composed of samples from previous timesteps? I thought the author mentioned that a single sample θ̃^(t,i) from the current timestep is used for Monte Carlo approximation. I recommend putting more details in algorithm 1. For example, elaborating how θ̃^t_{¬i} is drawn from q̂^t_{¬i}. In addition, why do appendix B and D use the same title? Can you merge them together?
Another concern is the theoretical soundness of proposed algorithms. For S-SGMCMC, the Monte Carlo estimation of the modified energy function requires previous samples. Does this break the Markov assumption of SGMCMC since it should only depend on the current samples? For example, when the sampler does not reach the stationary stage, q̂^t still evolves with time, thus, the samples from the previous timesteps are not from q̂^t. Apart from the stationary distribution of S-SGMCMC, I also wonder what is the stationary distribution (if exists) of the dropout version? How different is the stationary distribution of the dropout version compared to the optimal q? Any theoretical guarantees on the correctness of the dropout version?
In terms of the empirical evaluation, I wonder about the uncertainty quantification ability of the proposed algorithms, which is an important metric for SGMCMC methods, especially since the proposed methods also seem to underestimate the posterior variance. In addition, in the abstract, the author mentioned better predictive likelihoods. However, I can only find the accuracy metric in the experiment, which is different from the predictive likelihood. |
ICLR | Title
Bridging between Pool- and Stream-Based Active Learning with Temporal Data Coherence
Abstract
Active learning (AL) reduces the amount of labeled data needed to train a machine learning model by choosing intelligently which instances to label. Classic pool-based AL needs all data to be present in a datacenter, which can be challenging with the increasing amounts of data needed in deep learning. However, AL on mobile devices and robots like autonomous cars can filter the data from perception sensor streams before it even reaches the datacenter. In our work, we investigate AL for such image streams and propose a new concept exploiting their temporal properties. We define three methods using a pseudo uncertainty based on loss learning (Yoo & Kweon, 2019). The first considers the temporal change of uncertainty and requires 5% less labeled data than the vanilla approach. It is extended by the change in latent space in the second method. The third method, temporal distance loss stream (TDLS), combines both with submodular optimization. In our evaluation on an extension of the public Audi Autonomous Driving Dataset (Geyer et al., 2020) we outperform state-of-the-art approaches by using 1% fewer labels. Additionally, we compare our stream-based approaches with existing approaches for AL in a pool-based scenario. Our experiments show that, although pool-based AL has more data access, our stream-based AL approaches need 0.5% fewer labels.
1 INTRODUCTION
Active learning (AL) is a technique to minimize the labeling effort, in which a machine learning model chooses the data to be labeled by itself. It can be divided into two main scenarios, pool-based and stream-based AL (Settles, 2010). Pool-based AL is a cyclic process of selecting batches of the most promising samples from a pool of data based on a query function. The model is retrained after the selection to start the next iteration of the AL cycle. The data pool is stored such that all samples are always accessible. In contrast, stream-based AL assumes an inflow of samples as a stream and the model decides if a sample should be saved and labeled or disposed. In classic stream-based AL the model is trained with each selected sample (Settles, 2010). However, in deep learning samples are usually selected in batches, due to the long training time of the models. This comes with the risk of selecting samples with an equal information gain. Most approaches ignore this fact or solve it by using a small selection batch size.
Besides the scenarios, the selection method, also called querying strategy, is another important factor of AL methods. There are three main categories of AL algorithms: uncertainty-based, diversitybased and learning-based AL (Ren et al., 2022). The first group are uncertainty-based AL methods, including for example Monte Carlo (MC) dropout methods (Gal & Ghahramani, 2016) or methods approximating the uncertainty by using ensembles (Beluch et al., 2018). The second group are diversity-based methods like Coreset (Sener & Savarese, 2018) or diverse embedded gradients (Ash et al., 2020). These methods select samples based on the dataset coverage. The third group are learning-based approaches. These methods, like loss learning (Yoo & Kweon, 2019), train an additional model, which either predicts a value, determining the usefulness of a sample, or decides if a sample should be selected directly. Recent approaches from this category often include unlabeled data for unsupervised training. Other approaches taking diversity into account usually perform an optimization, which requires constant access to the complete labeled and unlabeled dataset. This decreases the number of needed samples as intended, but the access to unlabeled data makes the transfer to a stream-based scenario impossible.
A large body of research in the perception domain focuses on pool-based AL, which requires the transfer of all data to a datacenter. Especially in autonomous driving, AL is already an important research topic (Feng et al., 2019; Hekimoglu et al., 2022). However, data logistics and data preparation limit the possibilities to apply and scale this approach to open-world perception problems, where a lot of data is required. Such perception tasks include autonomous driving, robotic perception, and environmental sensing. In contrast to pool-based AL, stream-based AL can run directly on the mobile devices used in these applications and enables data collection through a large number of agents without a prior transfer to the data center. By performing AL on a mobile robot, it can be applied to temporally coherent camera streams directly, which reduces preprocessing efforts. Based on these considerations we focus on stream-based AL for temporally coherent data.
Our contribution can be summarized as follows: We suggest a novel concept of incorporating temporal information into AL, especially stream-based AL. Our concept exploits the temporal change of uncertainty and of the distance in latent space. Based on this we propose three methods and compare them with state-of-the-art methods in a classification task, the task most commonly used to benchmark AL. To evaluate our methods against other state-of-the-art methods, we create an operational domain detection dataset by adding scene annotations to the Audi Autonomous Driving Dataset (A2D2) (Geyer et al., 2020). Further, we give an overview of the necessary steps to transform a pool-based scenario into a stream-based scenario and perform, to the best of our knowledge, the first direct comparison between stream-based and pool-based AL methods.
2 RELATED WORK
While pool-based AL has received a lot of research attention, stream-based AL has become unpopular with the rise of deep learning. However, the number of vision sensors receiving constant data streams is increasing, and so is the cost of transferring these data to a datacenter. This makes research on stream-based AL techniques interesting, as not all data can be transferred to the datacenter to perform pool-based AL.
2.1 POOL-BASED ACTIVE LEARNING
Sener & Savarese (2018) defined AL as a core set selection problem. The authors aim to select samples that minimize the maximum distance to the non-selected points. In this way, it can be formulated as a K-center problem. Solving this is quite costly, so the authors suggested using a greedy algorithm to approximate the K-center problem. The method will be further denoted as Coreset. In batch active learning by diverse gradient embeddings (Badge), Ash et al. (2020) extended the diversity idea by taking the prediction uncertainty into account. The authors combined a distance representation of the latent space with pseudo labels based on the highest one-hot encoded value to generate gradients. These are created for every class such that the dimension of the embedding is higher than in the Coreset (Sener & Savarese, 2018) approach. The optimal set is estimated using greedy optimization algorithms.
An uncertainty-based approach is MC dropout as a Bayesian approximation (Gal & Ghahramani, 2016). The method uses several dropout layers which remain active during the prediction phase. By performing multiple forward passes, a distribution over the class predictions is generated, to which the authors applied the mutual information function in order to calculate the uncertainty of the samples. This is often combined with the Bayesian active learning by disagreement metric (Houlsby et al., 2011), considering the mutual information of the multiple forward passes. Their approach has been modified by Kirsch et al. (2019) to take the diversity of the selected batch into account by calculating the joint mutual information. With their BatchBald approach, the authors reduced the number of selected samples with redundant information in a batch. In contrast to sampling-based approaches, loss learning (Yoo & Kweon, 2019) is a learning-based approach that needs only one forward pass. By adding a loss module to specific layers of the prediction network, the authors predicted the network’s loss and used it as pseudo uncertainty for sample selection. However, the loss module can only predict a relative loss. The authors showed the flexibility of the approach for several tasks, which makes it quite popular. Novel learning-based methods like variational adversarial active learning (VAAL) (Sinha et al., 2019) use the unlabeled data as well. An autoencoder is trained to learn a latent space representation of the data based on the labeled and unlabeled set. Based on the latent space encoding, a discriminator model is trained to discriminate between labeled and unlabeled data.
The selection is based on the lowest prediction confidence of the discriminator out of the unlabeled dataset predictions. The authors outperformed algorithms like Coreset (Sener & Savarese, 2018) and MC dropout (Gal & Ghahramani, 2016) methods. Kim et al. (2021) extended VAAL (Sinha et al., 2019) by adding a loss prediction module to the task model and the predicted loss to the latent space such that it is included in the input vector of the discriminator model. A more diversity-oriented learning-based approach using unlabeled data is sequential graph convolutional network for active learning (Caramalau et al., 2021). The authors used the distance between the features of the task model to calculate an adjacency matrix for a graph containing labeled and unlabeled data. Based on this matrix a graph neural network is trained. By using message passing the network should predict the nodes’ value for being labeled. With this approach, further denoted as CoreGCN, the authors achieved a good performance on classification datasets.
2.2 STREAM-BASED ACTIVE LEARNING
Stream-based AL has rarely been used for perception tasks so far, especially with deep learning models. In the field of perception, Narr et al. (2016) selected data with stream-based AL using Mondrian forests and trained them in an online learning fashion. For models other than deep neural networks, online and incremental learning (Chiotellis et al., 2018) is often combined with AL, as classical stream-based AL models are retrained after each selection. Another challenge is dealing with concept drifts (Łukasz Korycki et al., 2019; Pham et al., 2022), in which the underlying distribution changes over time. Especially in stream-based AL, the selection is seen as a submodular optimization problem, where the value of an added labeled sample depends on the labels already present. As solving these problems is computationally expensive, stream-based greedy algorithms are an important field of research (Fujii & Kashima, 2016). A method for solving submodular optimization problems is Sieve-Streaming++ (Kazemi et al., 2019). The concept of submodular optimization opens many possibilities for future work. Sieve-Streaming++ has been used by Senzaki & Hamelain (2021) explicitly for AL on edge devices (ALED). The authors tried different positive semi-definite kernels with the prediction confidence as a value function. Temporal and stream properties are neglected in their work.
In our work we will neglect online learning and concept drifts and focus on connecting pool- and stream-based AL for perception.
2.3 ACTIVE LEARNING ON TEMPORAL DATA
Although temporal coherence is an important property of a stream, this property has only been used for pool-based AL in previous works. Bengar et al. (2019) used the object detection false positive (FP), true positive (TP), false negative (FN) and true negative (TN) metrics to build a temporal graph and select samples with energy minimization. As this approach requires ground truth, it can only be used as a theoretical baseline. Besides, the authors provided the SYNTHIA-AL dataset, based on the SYNTHIA (Ros et al., 2016) dataset created for AL purposes. Due to the short snippets and high sampling rate, the dataset mostly targets semantic segmentation or object detection applications. Schmidt et al. (2020) used the object detection classification uncertainty estimated by the entropy over a time horizon. To do so, the authors used preceding and succeeding images of each Kitti (Geiger et al., 2012) sample. By using this approach a comparable uncertainty can be estimated, avoiding the usage of ensembles (Beluch et al., 2018) or MC dropout (Gal & Ghahramani, 2016) methods. Nevertheless, the authors only described this approach for pool-based AL. Huang et al. (2018) used temporal information to avoid multiple MC dropout forward passes (Gal & Ghahramani, 2016) for semantic segmentation by combining a one-forward-pass uncertainty prediction with a flow network to calculate the uncertainty as a moving average over a time horizon.
Although many topics have been covered, in particular in pool-based AL, temporal properties have only been used to save computation for MC dropout (Gal & Ghahramani, 2016) passes. Temporal properties for stream-based AL still appear to be a relatively unexplored research topic. We want to investigate the change of uncertainty and diversity over time, especially for the seldom covered stream-based AL.
3 FROM POOL-BASED TO STREAM BASED-ACTIVE LEARNING
As pool-based and stream-based AL are quite detached, we want to bridge the gap and enable comparisons by adding two intermediate scenarios, namely the pool stream and the stream batch scenario. A collection of the different scenarios is depicted in Figure 1. All scenarios start with a small labeled dataset (1) which is used to train a model (2). In the classic pool-based scenario, shown in Figure 1a, the samples are selected from a constant unlabeled pool (3) and sent to an oracle (4) for labeling. All samples, including the unlabeled ones, can be seen and used multiple times. The second scenario, depicted in Figure 1b, reflects a continuous data selection, in which the unlabeled pool (3) is extended at every cycle. To reflect limited recording and transferring capabilities, we change the scenario to a stream-based one. We remove the pool to create the stream batch scenario depicted in Figure 1c, where the current stream is visible to the model only once. In contrast to the classical stream scenario, a batch B (3) of a maximum size of b can be selected from each stream. In the classic stream scenario, depicted in Figure 1d, the samples need to be chosen or discarded immediately. This adds the challenge of defining a threshold function classifying useful and useless samples.
Having introduced the four scenarios, we define four categories for the AL querying strategies and evaluate the possibility of using them in stream scenarios. The first category contains methods evaluating samples individually, like loss learning (Yoo & Kweon, 2019), ensembles (Beluch et al., 2018) and MC dropout (Gal & Ghahramani, 2016) methods. These can be used for stream and pool scenarios without any adaptation. The second category describes methods performing an optimization which requires access to all unlabeled data during the optimization. This category contains mostly the diversity-based methods Coreset (Sener & Savarese, 2018), Badge (Ash et al., 2020) and BatchBald (Kirsch et al., 2019), which cannot be used for stream-based AL directly. However, they can be used if the greedy optimization can be transformed to work on streams. The third category contains methods that use unlabeled data for training, such as CoreGCN (Caramalau et al., 2021) or VAAL (Sinha et al., 2019), and cannot be transferred to stream-based scenarios. The fourth category contains methods that are stream-based, such as video summarization (Kazemi et al., 2019) and ALED (Senzaki & Hamelain, 2021). As current AL research in perception is focusing on the third category, the number of methods that can be transferred to a stream scenario is quite limited. Only the first and fourth categories can be used in all AL scenarios.
4 TEMPORAL INFORMATION IN PERCEPTION DATA
Most perception datasets do not contain temporal data. Since these datasets are meant for classification, object detection or semantic segmentation tasks this information is naturally of lower importance for the task at hand. The commonly used datasets Kitti (Geiger et al., 2012) or Cityscapes (Cordts et al., 2016) aim to have a good diversity to be highly generalizable. Classification datasets like Cifar10 (Krizhevsky et al., 2009), which is often used to benchmark AL, are not temporally ordered data streams. Benchmarking AL on these datasets is sub-optimal and only shows potential label savings on datasets that have been manually designed for diversity. Instead, we propose that AL shall be benchmarked on camera or sensor streams directly such that no additional manual work besides labeling is needed.
We create our benchmark dataset based on A2D2¹ (Geyer et al., 2020), which provides temporally coherent frames structured in different drives. We assign the classification labels urban, highway, country road and construction site describing the driving environment to create an operational domain detection task. This task is important in mobile robotics to estimate whether an action can be executed safely. The dataset contains several recorded drives in southern Germany, with around 680 frames on average per recording. The data is temporally clustered in the latent space by the nature of the drives, which can be seen in Figure 2. Further details can be found in Appendix A.
5 VALUE OF TEMPORAL INFORMATION FOR ACTIVE LEARNING
By defining the drives of the dataset as consecutively ordered streams, properties like the predictive uncertainty $\sigma_p$ can be represented as a function of time t. As sampling-based approaches are problematic for stream-based applications due to their increased computational cost, we use a loss module $f_L$ (Yoo & Kweon, 2019) to estimate the predictive uncertainty. The predicted loss $\hat{L}$ (pseudo uncertainty) of a sample x can be defined as in Equation 1.
$\sigma_p \approx \hat{L}_x = f_L[x] = f_L^{*}[t_x] \quad (1)$
The loss module, as well as the latent space representations, depend on the currently selected training set and are updated after each AL cycle. As time is strictly increasing, the time derivative of the predicted loss $\frac{d}{dt}\hat{L}$ exists. By taking the temporal coherence properties of a sensor stream into account and assuming real-world situations (or their simulations), the change between two consecutive samples is naturally limited. This effect can be observed in Figure 2, where the different drives form natural clusters. We use the existence of a time derivative to propose our first method¹, Temporal Predicted Loss (TPL):
$\lambda_i = \left\lvert \frac{d}{dt}\hat{L} \right\rvert \quad (2)$
Based on the selection value $\lambda_i$, we select the sample i with the highest absolute temporal change in the predicted loss for the next AL cycle, instead of selecting the samples with the highest predicted loss. By taking the temporal change into account, the method can easily filter out similar samples, as they have a close temporal relation as well as similar uncertainty values. In addition, the method is very sensitive to samples with sudden changes that cause a change in uncertainty and can be challenging for the model. In Figure 3 we compare the latent space coverage of loss learning and TPL using t-SNE plots. While the vanilla loss learning approach mostly covers only one corner, our approach selects samples all over the latent space.
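A minimal sketch of the TPL selection for one stream is given below; the finite-difference approximation of the derivative and the function names are our own assumptions, not the original implementation.

```python
import torch

def tpl_select(stream, predict_loss, b):
    """Temporal Predicted Loss (Equation 2): keep the b samples with the largest
    absolute temporal change of the predicted loss.

    stream:       list of (timestamp, image_tensor) pairs in temporal order.
    predict_loss: callable returning the predicted loss L_hat of one image,
                  i.e. the loss module of Yoo & Kweon (2019) attached to the model.
    """
    times = torch.tensor([t for t, _ in stream], dtype=torch.float32)
    with torch.no_grad():
        losses = torch.tensor([float(predict_loss(x)) for _, x in stream])
    # Finite-difference approximation of d/dt L_hat between consecutive frames.
    dldt = (losses[1:] - losses[:-1]) / (times[1:] - times[:-1])
    scores = dldt.abs()
    k = min(b, scores.numel())
    return (torch.topk(scores, k=k).indices + 1).tolist()  # indices into `stream`
```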
Our second method, Temporal Distance Loss Change (TDLC), also takes the diversity of the dataset into account by analyzing the change in the latent space representation of the samples. As Figure 2 shows, the samples of one drive are often grouped in clusters. We want to investigate whether the change of distance in latent space is a suitable metric to increase the performance of a selection query. Thus, we formulate Equation 3 to combine the temporal change of the predicted loss $\frac{d}{dt}\hat{L}_i$ with the temporal change in latent space $\frac{d}{dt}f_i$, scaled by the factor $\delta$, which is set to one. In this equation, i denotes the sample and $\lambda_i$ its selection value. As the magnitudes of the learned loss and the distance in latent space can differ, we calculate the mean and standard deviation of both quantities on the fly, denoted mean and std, and combine their zero-mean, unit-variance values.
$\lambda_i = \frac{\frac{d\hat{L}_i}{dt} - \mathrm{mean}\left(\frac{d\hat{L}}{dt}\right)}{\mathrm{std}\left(\frac{d\hat{L}}{dt}\right)} + \delta \cdot \frac{\frac{df_i}{dt} - \mathrm{mean}\left(\frac{df}{dt}\right)}{\mathrm{std}\left(\frac{df}{dt}\right)} \quad (3)$
¹ Will be made available upon acceptance to preserve the authors' anonymity.
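A minimal sketch of the TDLC score from Equation 3; interpreting df/dt as the norm of the frame-to-frame displacement in latent space divided by the time step is our assumption, and the running mean/std are replaced by per-stream statistics for brevity.

```python
import torch

def tdlc_scores(losses, feats, times, delta=1.0):
    """Temporal Distance Loss Change (Equation 3): standardised d/dt of the
    predicted loss plus standardised d/dt of the latent-space displacement.

    losses: (N,) predicted losses, feats: (N, D) latent features, times: (N,) timestamps.
    Returns one score per consecutive frame pair.
    """
    dt = times[1:] - times[:-1]
    dldt = (losses[1:] - losses[:-1]) / dt
    dfdt = (feats[1:] - feats[:-1]).norm(dim=1) / dt
    standardise = lambda v: (v - v.mean()) / (v.std() + 1e-8)  # zero mean, unit variance
    return standardise(dldt) + delta * standardise(dfdt)
```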
The last method follows the idea of submodular optimization, as such methods take the relations within the batch into account. We base our method on the Sieve-Streaming++ algorithm and follow the idea of Kazemi et al. (2019) and Senzaki & Hamelain (2021) to use the determinant of a positive semi-definite kernel. We choose to evaluate the distance of the selected samples with the dot product of the feature vectors, which has also been evaluated by Senzaki & Hamelain (2021). In contrast to the distance matrix used by Kazemi et al. (2019), the dot product of the vectors takes the direction in latent space into account. The resulting matrix has the squared norms of the selected vectors on the main diagonal, while the other values depend on the orientation of the data points relative to the origin of the latent space. These mathematical properties lead to a higher diversity compared to a regular distance matrix. We integrate the temporal change of the predicted loss $\frac{d}{dt}\hat{L}_i$ of the i-th sample with the matrix product of the feature vectors into the submodular optimization. The value $\lambda_J$ of the selected set J with j elements in Equation 4 is to be maximized, where j is constrained by the batch size b to $j \leq b$. $F^{j \times n}$ denotes the matrix of j latent space feature vectors, with n being the feature dimension. We followed Senzaki & Hamelain (2021) and set the scaling factor $\delta$ to 0.5. This method is further denoted as Temporal Distance Loss Stream (TDLS).
$\lambda_J = \sum_{i=0}^{j} \left\lvert \frac{d}{dt}\hat{L}_i \right\rvert + \delta \cdot \log\left(\det\left(F F^{T} + I_j\right)\right) \quad (4)$
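A minimal sketch of the TDLS set value from Equation 4, with a plain greedy loop standing in for Sieve-Streaming++; in the actual method the value is maximised by the streaming algorithm, and all names here are our own.

```python
import torch

def tdls_value(dldt_abs, feats, delta=0.5):
    """Set value of Equation 4: sum of |d/dt L_hat_i| plus delta * log det(F F^T + I_j)."""
    j = feats.shape[0]
    gram = feats @ feats.T + torch.eye(j)      # F F^T + I_j, positive definite
    return dldt_abs.sum() + delta * torch.logdet(gram)

def greedy_tdls_select(dldt_abs, feats, b, delta=0.5):
    """Greedy stand-in: repeatedly add the sample that most increases the set value."""
    selected = []
    for _ in range(min(b, feats.shape[0])):
        best, best_val = None, -float("inf")
        for i in range(feats.shape[0]):
            if i in selected:
                continue
            candidate = selected + [i]
            val = tdls_value(dldt_abs[candidate], feats[candidate], delta)
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected
```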
As an ablation study, we replace $F^{j \times n}$ with the gradient embedding $E^{j \times (n \cdot c)}$ based on Ash et al. (2020), with c being the number of classes; this variant is further referred to as Temporal Embedded Gradient Loss Stream (TEGLS). This adds information on possible loss directions to the optimization.
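A minimal sketch of the gradient embedding used by TEGLS, following the construction of Ash et al. (2020): the penultimate features are scaled by the last-layer cross-entropy gradient at the pseudo label. The variable names are ours; in TEGLS this embedding replaces F in Equation 4.

```python
import torch
import torch.nn.functional as F

def gradient_embedding(feats, logits):
    """Badge-style embedding E of shape (j, n*c): outer product of the last-layer
    loss gradient (softmax - one-hot pseudo label) with the penultimate features.

    feats:  (j, n) penultimate-layer features.
    logits: (j, c) class logits of the same samples.
    """
    probs = F.softmax(logits, dim=1)
    pseudo = probs.argmax(dim=1)
    grad = probs - F.one_hot(pseudo, num_classes=logits.shape[1]).float()
    return (grad.unsqueeze(2) * feats.unsqueeze(1)).reshape(feats.shape[0], -1)
```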
6 EXPERIMENTS AND RESULTS
For our experiments, we use the A2D2 object detection dataset labeled as described in Section 4. We start with an initial training set of two drives with 1674 images. Another nine drives with 4518 images remain unseen to be used as streams in the AL cycles. For the testing and validation set, we use three drives each, with a total of 2776 and respectively 2577 images. All splits and further detail can be found in Appendix A. As the streams differ from each other in length, we use a percentage selection size for each stream instead of a fixed one. At each AL cycle, indicated as marker in the plot, a new stream will be added according to the specified scenario from Figure 1. In the results figures we plot the accuracy over the percentage used from the initial training set and the unlabeled pool. As baselines, we use a neural network trained on the whole training set including all possible selections, as well as a random selection strategy. Besides these two commonly used baselines, we introduce a fixed step selection strategy which is an often used strategy to reduce the number of samples in a recording. We use a ResNet18 (He et al., 2016) model for most experiments as it is the most common model in the related work. Further, we extend the classification head to three
fully connected layers with dropout layers in between, such that it can be compared with sampling methods like BatchBald. For the convolutional layers, the pre-trained ImageNet (Deng et al., 2009) weights provided by PyTorch (Paszke et al., 2019) are used. As we noticed a positive effect of the joint loss from the loss learning module, we add this module to all models for a fair comparison. All hyperparameters and model details are listed in Appendix B. After each selection cycle, the model is trained from scratch. As, to our knowledge, no other comparable datasets are available, we vary the order of the streams ingested in the cycles to demonstrate the robustness of our approach. The three tested orders are shown alternately in the figures such that each order is shown twice. The same five seeds are used for the different methods. First, we compare our methods in the stream-based scenario introduced in Figure 1c, and finally we relate them to the pool stream scenario from Figure 1b.
6.1 TEMPORAL COHERENCE FOR BATCH DIVERSIFICATION
In our first experiment, we want to show the influence of the temporal relation. In Figure 4 we compare loss learning with TPL in the stream batch scenario from Figure 1c . We show both approaches for the three selection sizes 5%, 10% and 20%. Besides the ResNet18 model, we compare both approaches with the ResNet34 model to prove flexibility. It can be seen that our approach outperforms the vanilla loss learning approach for different models and selection sizes. For all parameters it reaches a higher accuracy score with the same amount of labeled data. For ResNet18 shown in Figure 4a only our approach reaches the performance of the network trained on all data. In Figure 4b our approach clearly reaches the standard deviation region of the fully trained model for a selection size of 20%, while the vanilla loss learning approach reaches it in the last step with 5% additional data selected. The overall performance of the loss learning approach is lower for ResNet34 which influences our approach as well. Qualitatively the increased diversity of our method can be seen in Figure 5b in comparison to loss learning shown in Figure 5a.
In general, it can be seen that our approach outperforms vanilla loss learning for different selection sizes and models. The effect is reduced with a larger selection size, which was expected. In addition, loss learning does not seem to be the ideal method to estimate the (pseudo) uncertainty of a sample, as its performance varies between the models in Figure 4. However, other approaches require either multiple models or multiple forward passes, which increases the computational cost and is therefore problematic for streams. This is especially important if the selection is performed on a mobile device directly.
6.2 STREAM-BASED ACTIVE LEARNING
In these experiments, we compare our methods introduced in Section 5 with state-of-the-art methods for batch stream-based AL. All experiments in this section are conducted according to the stream batch scenario from Figure 1c. After an ablation study, we select our TDLS and TEGLS methods for further comparison, as state-of-the-art approaches mostly combine diversity and uncertainty as well; details can be found in Appendix C.
In Figure 6 we compare TDLS and TEGLS with Random, Fixed and ALED for two different orders. In Figure 6b, both methods need one stream selection fewer than ALED to cross the fully trained network's performance, doing so at 33% of the data and using 0.5% fewer labels. In Figure 6a, TEGLS crosses the fully trained network's line at 31.8%, while ALED needs about 1% more labels and reaches the fully trained network's performance at 32.8%. It can be seen that combining the temporal change of uncertainty with the most informative latent space encoding works best and outperforms random as well as state-of-the-art selection methods.
6.3 FROM STREAM-BASED TO POOL-BASED ACTIVE LEARNING
The main body of work in the field is focused on pool-based AL only. We want to compare the two scenarios, stream batch (Figure 1c) and pool stream (Figure 1b), and close the gap. To the best of our knowledge, we are the first to compare the different scenarios explicitly. We use the stream batch and the pool stream scenario introduced in Section 3 for this experiment, as otherwise the pool-based methods could select samples from future streams. Pool-based methods can use information that is not available to stream-based methods, such as using data from the unlabeled pool for unsupervised training or optimization approaches in which the data is visited at each optimization step. Therefore, pool-based methods are expected to outperform the stream-based methods. The goal of these experiments is to investigate the decrease in performance that comes along with changing from a pool-based to a stream-based setup. As pool-based scenarios require more data logistics and are computationally more expensive, a certain decrease in performance might be acceptable. Figure 7 shows the results for two different orders.
In Figure 7a, TEGLS even reaches the fully trained network's performance one iteration before the pool-based methods and can achieve a data saving of around 1%. In Figure 7b, our method does not suffer from any performance loss due to its disadvantage in the scenario and reaches the fully trained network's line together with the other methods at 31.5% of the data. Performance above that of the fully trained network can be explained by the independence of the drives in the different sets, so a subset can have a better distribution fit.
To the best of our knowledge, we are the first to show that a stream-based method can compete with pool-based approaches in terms of performance. Our proposed method offers data savings like state-of-the-art pool-based methods, while offering the reduced data logistics of stream-based approaches. As most perception problems start from sensor (e.g. camera) streams, our method can be used on the mobile device directly for many applications like robotics, autonomous driving or environmental surveillance. This additionally saves computational cost and data preprocessing effort.
Although our ablation study in Appendix C showed that the combination with diversity-based approaches works quite well, TPL and TDLC do not need any complex matrix operations or submodular optimization techniques.
7 CONCLUSION AND FUTURE WORK
In our work, we investigated stream-based AL for temporally coherent data. The proposed theoretical modifications that make it possible to exploit the temporal information resulted in three classes of methods. To evaluate these scenarios, we created a classification dataset with temporally coherent data, including timestamps, based on the A2D2 autonomous driving dataset, which we made publicly available. In our first experiment, we showed that our modifications applied to loss learning outperform the vanilla approach by saving up to 5% more labeled data. Our second experiment showed that our methods combining temporal changes in (pseudo) uncertainty with diversity lead to 1% additional data savings in comparison to state-of-the-art methods in stream batch AL. In the last experiments conducted, we provided, to our knowledge, the first comparison between stream-based and pool-based AL using the pool stream and stream batch scenarios. These experiments showed that our stream-based methods achieve, with 0.5% more data savings, the same performance as pool-based methods for temporal data, so we bridged the gap between them. Given the additional effort pool-based scenarios require in terms of data logistics, this is a major point for enabling large-scale AL.
In future work, we want to focus on alternative uncertainty estimation methods. For one, the uncertainty estimation can be integrated more easily with a diversity measurement, which has already shown good results in pool-based approaches. Having shown the benefit of exploiting temporally coherent vision data, we want to extend our approach to semantic segmentation and object detection. Additionally, we plan to create a dataset for stream-based AL in mobile robot perception.
REPRODUCIBILITY STATEMENT
To ensure reproducibility, we use the five seeds 1, 42, 64, 101 and 999 and set cudnn to "deterministic". We used the PyTorch model zoo² implementation of the ResNet18 model and mention all modifications in Appendix B. The exact data preprocessing as well as the training parameters can also be found in Appendix B. The exact dataset details, including all training, validation and test splits as well as the split between the initially labeled and the unlabeled pool, are given in Appendix A.
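A minimal sketch of the seeding described above; where exactly these calls are placed in the training pipeline is an implementation detail not shown here.

```python
import random
import numpy as np
import torch

def set_seed(seed):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True   # the "deterministic" cudnn setting
    torch.backends.cudnn.benchmark = False

for seed in (1, 42, 64, 101, 999):
    set_seed(seed)
    # ... run one repetition of the experiment ...
```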
ETHICS STATEMENT
In our work, we deal with data selection and recording on mobile devices. Such recordings are necessary and already conducted; especially in autonomous driving, data is collected by different institutions. To respect personal data privacy, these recordings are strictly regulated by governmental authorities, and we strongly encourage respecting these regulations. Nonetheless, such methods could be used to collect data without authorization. However, we think that the benefits our stream-based AL approach and on-device data collection can create for the perception and reliability of mobile devices, robots and autonomous cars outweigh this risk.
A DATASET DESCRIPTION
In our experiments, we used the object detection part of the A2D2 dataset³. The dataset contains 17 different drives in southern Germany. The frames are timestamped at a high frequency of up to 10 Hz so that the temporal change of the samples can be evaluated meaningfully. Due to sensor synchronization the rate is not constant, but the optical flow does not get lost. This high frequency brings the risk of selecting redundant samples in a batch. The recorded drives are split into an initial labeled pool and an unlabeled pool for training, as well as a validation and a test set, as shown in Table 1. In the stream-based setups, the unlabeled drives are fed as streams into the AL algorithm. The images have been resized with preserved aspect ratio to 240 × 151 pixels. For the training, the images have been normalized with the mean and the standard deviation of the currently selected samples. The training dataset has been shuffled.
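A minimal torchvision sketch of this preprocessing; the per-selection mean and standard deviation have to be recomputed whenever the selected training set changes, and the function names are ours.

```python
import torch
from torchvision import transforms

def selection_stats(images):
    """Channel-wise mean/std over the currently selected training images (list of 3xHxW tensors)."""
    stacked = torch.stack(images)
    return stacked.mean(dim=(0, 2, 3)), stacked.std(dim=(0, 2, 3))

def build_preprocess(mean, std):
    # 240 x 151 matches the aspect ratio of the original A2D2 frames.
    return transforms.Compose([
        transforms.Resize((151, 240)),
        transforms.ToTensor(),
        transforms.Normalize(mean.tolist(), std.tolist()),
    ])
```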
B DETAILED EXPERIMENT DESCRIPTION
We used PyTorch for all experiments; for the existing reference methods we used the code published by the authors, Kirsch et al. (2019)⁴ and Caramalau et al. (2021)⁵. The submodular optimization approaches are implemented using Sieve-Streaming++ (Kazemi et al., 2019)⁶. The ResNet18 has been modified with a three-layer fully connected classification head with inner dimensions of 256 and 128. In front of each fully connected layer, we added a dropout layer with a probability of 0.3. These layers remained active for the MC dropout based methods with ten forward passes. A softmax activation is attached to the last layer. For the convolutional layers, the pre-trained weights provided
³ https://www.a2d2.audi/a2d2/en/download.html
⁴ https://github.com/BlackHC/batchbald_redux
⁵ https://github.com/razvancaramalau/Sequential-GCN-for-Active-Learning
⁶ https://github.com/ehsankazemi/hybrid-streaming
by PyTorch have been used and frozen. We trained each model with the SGD optimizer using PyTorch 1.11.0 with a learning rate of 0.0001, a momentum of 0.9 and a weight decay of 0.0005 for a maximum of 200 epochs. To ensure convergence, we use early stopping on the validation accuracy with a patience of 30. The batch size is set to 128. Due to the drive concept of A2D2, the dataset contains samples that are very close to each other and may differ only by small changes in the image. The parameters have therefore been selected such that the model does not overfit on the initial set. As a performance increase has been observed when training with an attached loss learning module, this module has been attached to all models. This effect was only observed for small amounts of data. The loss learning weight in the loss function is set to 1. The learning rate is decreased by a factor of ten after 160 epochs. The parameters for CoreGCN are taken from the authors' implementation (Caramalau et al., 2021). The experiments are conducted on Nvidia V100 graphics cards.
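A minimal sketch of the modified ResNet18 described above; the activations between the linear layers and the omission of the attached loss learning module are our own assumptions and simplifications.

```python
import torch.nn as nn
from torchvision.models import resnet18

def build_model(num_classes=4, p_drop=0.3):
    model = resnet18(pretrained=True)
    for param in model.parameters():           # freeze the pre-trained convolutional layers
        param.requires_grad = False
    model.fc = nn.Sequential(                  # three fully connected layers, dropout in front of each
        nn.Dropout(p_drop), nn.Linear(512, 256), nn.ReLU(),
        nn.Dropout(p_drop), nn.Linear(256, 128), nn.ReLU(),
        nn.Dropout(p_drop), nn.Linear(128, num_classes),
        nn.Softmax(dim=1),
    )
    return model
```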
C ABLATION STUDY
We compare our methods introduced in Section 5 with each other in an ablation study to investigate the effect of our proposed adaptations. In Figure 8 we show the comparison for two different stream orders. It can be seen that the temporal change in latent space distance (TDLC) without submodular optimization does not generate a large improvement; only a minor one can be seen in Figure 8a. As the distance already reflects the change in latent space, the derivative of this change does not seem to add much value. However, the combination of the distances (TDLS) or embedded gradients
(TEGLS) with the temporal loss as a submodular optimization problem seems to improve our base approach. | 1. What is the focus and contribution of the paper regarding active learning in stream-based data?
2. What are the strengths of the proposed approach, particularly in comparison to pool-based methods?
3. What are the weaknesses of the paper, especially in terms of experimentation and literature review?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or suggestions regarding the representation of stream-based data and the application of active learning techniques? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper shows several methods on how to use temporal information in stream-based active learning. It shows why classical pool-based AL methods cannot be used in this domain. It also shows that some methods can even outperform pool-based methods in terms of data savings. This paper closes the gap between pool and stream-based active learning. The authors performed several experiments on the public Audi Autonomous Driving Dataset and showed marginal improvements over state-of-the-art approaches by using 1% fewer labels.
Strengths And Weaknesses
The paper is written clearly and easy to follow.
The paper gives a view on how to transform pool-based methods into stream-based methods.
The custom dataset can be easily reproduced by the dataset description or downloaded as it is publicly available, and all hyperparameters needed to replicate the results are given.
The related work section can be improved a lot, as it does not cover the current state of the art. Good literature overview of existing pool-based query methods and their (dis)advantages. An overview of the properties of stream-based data, as opposed to pool-based data, is missing. For example, the non-stationary nature and possible concept drift of stream-based data are very important properties that can be addressed by (stream-based) active learning.
Paragraph 2.2 only contains one Stream based active learning technique. There are many more that could be discussed to give a better view of previous research. For example, it would be good to add some info about “RAL - Improving stream-based AL by RL”. There is much more research on stream-based active learning than this paper suggests.
The obtained results can be discussed further to show the real value of the proposed approach. In the current form, it is very hard to see why the proposed approach required 1% less data than other ALs (sec. 6.2, and 6.3).
Clarity, Quality, Novelty And Reproducibility
The paper gives a view on how to transform pool-based methods into stream-based methods. It shows that using temporal changes indeed gives a more diverse sampling strategy than standard pool-based methods.
The explanation of the implementation of temporal predicted loss could be improved by supplying an algorithm description. Very clear explanation of the second method, why it is hypothesized to work well, and how it should be implemented. The TDLS method would be easier to reproduce if a step-by-step algorithm description would be supplied. |
ICLR | Title
Bridging between Pool- and Stream-Based Active Learning with Temporal Data Coherence
Abstract
Active learning (AL) reduces the amount of labeled data needed to train a machine learning model by choosing intelligently which instances to label. Classic pool-based AL needs all data to be present in a datacenter, which can be challenging with the increasing amounts of data needed in deep learning. However, AL on mobile devices and robots like autonomous cars can filter the data from perception sensor streams before it even reaches the datacenter. In our work, we investigate AL for such image streams and propose a new concept exploiting their temporal properties. We define three methods using a pseudo uncertainty based on loss learning (Yoo & Kweon, 2019). The first considers the temporal change of uncertainty and requires 5% less labeled data than the vanilla approach. It is extended by the change in latent space in the second method. The third method, temporal distance loss stream (TDLS), combines both with submodular optimization. In our evaluation on an extension of the public Audi Autonomous Driving Dataset (Geyer et al., 2020) we outperform state-of-the-art approaches by using 1% fewer labels. Additionally, we compare our stream-based approaches with existing approaches for AL in a pool-based scenario. Our experiments show that, although pool-based AL has more data access, our stream-based AL approaches need 0.5% fewer labels.
1 INTRODUCTION
Active learning (AL) is a technique to minimize the labeling effort, in which a machine learning model chooses the data to be labeled by itself. It can be divided into two main scenarios, pool-based and stream-based AL (Settles, 2010). Pool-based AL is a cyclic process of selecting batches of the most promising samples from a pool of data based on a query function. The model is retrained after the selection to start the next iteration of the AL cycle. The data pool is stored such that all samples are always accessible. In contrast, stream-based AL assumes an inflow of samples as a stream and the model decides if a sample should be saved and labeled or disposed. In classic stream-based AL the model is trained with each selected sample (Settles, 2010). However, in deep learning samples are usually selected in batches, due to the long training time of the models. This comes with the risk of selecting samples with an equal information gain. Most approaches ignore this fact or solve it by using a small selection batch size.
Besides the scenarios, the selection method, also called querying strategy, is another important factor of AL methods. There are three main categories of AL algorithms: uncertainty-based, diversitybased and learning-based AL (Ren et al., 2022). The first group are uncertainty-based AL methods, including for example Monte Carlo (MC) dropout methods (Gal & Ghahramani, 2016) or methods approximating the uncertainty by using ensembles (Beluch et al., 2018). The second group are diversity-based methods like Coreset (Sener & Savarese, 2018) or diverse embedded gradients (Ash et al., 2020). These methods select samples based on the dataset coverage. The third group are learning-based approaches. These methods, like loss learning (Yoo & Kweon, 2019), train an additional model, which either predicts a value, determining the usefulness of a sample, or decides if a sample should be selected directly. Recent approaches from this category often include unlabeled data for unsupervised training. Other approaches taking diversity into account usually perform an optimization, which requires constant access to the complete labeled and unlabeled dataset. This decreases the number of needed samples as intended, but the access to unlabeled data makes the transfer to a stream-based scenario impossible.
A large body of research in the perception domain focuses on pool-based AL, which requires the transfer of all data to a datacenter. Especially in autonomous driving, AL is already an important research topic (Feng et al., 2019; Hekimoglu et al., 2022). However, data logistics and data preparation limit the possibilities to apply and scale this approach to open-world perception problems, where a lot of data is required. These perception tasks include autonomous driving, robotic perception and environmental sensing. In contrast to pool-based AL, stream-based AL can run directly on the mobile devices used in these applications and enables data collection through a large number of agents without a prior transfer to the datacenter. By performing AL on a mobile robot, it can be applied to temporally coherent camera streams directly, which reduces preprocessing efforts. Based on these considerations, we focus on stream-based AL for temporally coherent data.
Our contribution can be summarized as follows: We suggest a novel concept for incorporating temporal information into AL, especially stream-based AL. Our concept exploits the temporal change of uncertainty and of distance in latent space. Based on this, we propose three methods and compare them with state-of-the-art methods on a classification task, the most commonly used task to benchmark AL. To evaluate our methods against other state-of-the-art methods, we create an operational domain detection dataset by adding scene annotations to the Audi Autonomous Driving Dataset (A2D2) (Geyer et al., 2020). Further, we give an overview of the necessary steps to transform a pool-based scenario into a stream-based scenario and perform, to the best of our knowledge, the first direct comparison between stream-based and pool-based AL methods.
2 RELATED WORK
While many authors have conducted extensive research in the field of pool-based AL, stream-based AL has become unpopular with the rise of deep learning. However, the number of vision sensors receiving constant data streams is increasing, and so is the cost of transferring these data to a datacenter. This makes research on stream-based AL techniques interesting, as not all data can be transferred to the datacenter to perform pool-based AL.
2.1 POOL-BASED ACTIVE LEARNING
Sener & Savarese (2018) defined AL as a core set selection problem. The authors aim to select samples that minimize the maximum distance of the remaining unselected points to the selected ones. In this way, it can be formulated as a K-center problem. Solving this is quite costly, so the authors suggested using a greedy algorithm to approximate the K-center problem. The method will be further denoted as Coreset. In Bayesian active learning with diverse gradient embedding (Badge), Ash et al. (2020) extended the diversity idea by taking the prediction uncertainty into account. The authors combined a distance representation of the latent space with pseudo labels based on the highest one-hot encoded value to generate gradients. These are created for every class such that the dimension of the embedding is higher than in the Coreset (Sener & Savarese, 2018) approach. The optimal set is estimated using greedy optimization algorithms.
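For illustration, a minimal sketch of the greedy K-center selection behind Coreset is given below; it makes explicit why such methods need repeated access to the features of all unlabeled samples, which becomes relevant for the stream scenarios discussed later. Variable names are ours.

```python
import torch

def greedy_k_center(unlabeled_feats, labeled_feats, budget):
    """Greedily pick the unlabeled point farthest from the already labeled/selected set."""
    min_dist = torch.cdist(unlabeled_feats, labeled_feats).min(dim=1).values
    selected = []
    for _ in range(budget):
        idx = int(min_dist.argmax())
        selected.append(idx)
        new_dist = torch.cdist(unlabeled_feats, unlabeled_feats[idx:idx + 1]).squeeze(1)
        min_dist = torch.minimum(min_dist, new_dist)
    return selected
```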
An uncertainty-based approach is MC dropout as a Bayesian approximation (Gal & Ghahramani, 2016). The method uses several dropout layers which are active during the prediction phase. By performing multiple forward passes a distribution over the class predictions is generated where the authors applied the mutual information function in order to calculate the uncertainty of the samples. This is often combined with the Bayesian active learning by disagreement (Houlsby et al., 2011) metrics, considering the mutual information of the multiple forward passes. Their approach has been modified by Kirsch et al. (2019) to take the diversity of the selected batch into account by calculating the joint mutual information. With their BatchBald approach, the authors reduced the selected samples with redundant information in a batch. In contrast to sampling-based approaches, loss learning (Yoo & Kweon, 2019) is a learning-based approach that needs only one forward pass. By adding a loss module to specific layers of the prediction network, the authors predicted the network’s loss and used it as pseudo uncertainty for sample selection. However, the loss module can only predict a relative loss. The authors showed the flexibility of the approach for several tasks, which makes it quite popular. Novel learning-based methods like variational adversarial active learning (VAAL) (Sinha et al., 2019) use the unlabeled data as well. An autoencoder is trained to learn a latent space representation of the data based on the labeled and unlabeled set. Based on the latent space encoding, a discriminator model is trained to discriminate between labeled and unlabeled data.
The selection is based on the lowest prediction confidence of the discriminator out of the unlabeled dataset predictions. The authors outperformed algorithms like Coreset (Sener & Savarese, 2018) and MC dropout (Gal & Ghahramani, 2016) methods. Kim et al. (2021) extended VAAL (Sinha et al., 2019) by adding a loss prediction module to the task model and the predicted loss to the latent space such that it is included in the input vector of the discriminator model. A more diversity-oriented learning-based approach using unlabeled data is sequential graph convolutional network for active learning (Caramalau et al., 2021). The authors used the distance between the features of the task model to calculate an adjacency matrix for a graph containing labeled and unlabeled data. Based on this matrix a graph neural network is trained. By using message passing the network should predict the nodes’ value for being labeled. With this approach, further denoted as CoreGCN, the authors achieved a good performance on classification datasets.
2.2 STREAM-BASED ACTIVE LEARNING
Stream-based AL has rarely been used for perception tasks so far, especially with deep learning models. In the field of perception, Narr et al. (2016) selected data using Mondrian forests for stream-based AL and trained them in an online learning fashion. For non-deep-neural-network models, online and incremental learning (Chiotellis et al., 2018) is often combined with AL, as classical stream-based AL models are retrained after each selection. Another challenge is dealing with concept drift (Łukasz Korycki et al., 2019; Pham et al., 2022), in which the underlying distribution changes over time. Especially in stream-based AL, the selection is seen as a submodular optimization problem, where the value of an added labeled sample depends on the labels already present. As solving these problems is computationally expensive, stream-based greedy algorithms are an important field of research (Fujii & Kashima, 2016). A method for solving submodular optimization problems is Sieve-Streaming++ (Kazemi et al., 2019). The concept of submodular optimization opens many possibilities for future work. Sieve-Streaming++ has been used by Senzaki & Hamelain (2021) explicitly for AL on edge devices (ALED). The authors tried different positive semi-definite kernels with the prediction confidence as a value function. Temporal and stream properties are neglected in their work.
In our work we will neglect online learning and concept drifts and focus on connecting pool- and stream-based AL for perception.
2.3 ACTIVE LEARNING ON TEMPORAL DATA
Although temporal coherence is an important property of a stream, this property has only been used for pool-based AL in previous works. Bengar et al. (2019) used the object detection false positive (FP), true positive (TP), false negative (FN) and true negative (TN) metrics to build a temporal graph and select samples with energy minimization. As this approach requires ground truth, it can only be used as a theoretical baseline. Besides, the authors provided the SYNTHIA-AL dataset, based on the SYNTHIA (Ros et al., 2016) dataset, created for AL purposes. Due to the short snippets and high sampling rate, the dataset mostly targets semantic segmentation or object detection applications. Schmidt et al. (2020) used the object detection classification uncertainty estimated by the entropy over a time horizon. To do so, the authors used preceding and succeeding images of each Kitti (Geiger et al., 2012) sample. Using this approach, a comparable uncertainty can be estimated while avoiding the usage of ensembles (Beluch et al., 2018) or MC dropout (Gal & Ghahramani, 2016) methods. Nevertheless, the authors only described this approach for pool-based AL. Huang et al. (2018) used temporal information to avoid multiple MC dropout forward passes (Gal & Ghahramani, 2016) for semantic segmentation by combining a one-forward-pass uncertainty prediction with a flow network to calculate the uncertainty as a moving average over a time horizon.
Although many topics have been covered, in particular in pool-based AL, temporal properties have only been used to save computation for MC dropout (Gal & Ghahramani, 2016) passes. Temporal properties for stream-based AL still appear to be a relatively unexplored research topic. We want to investigate the change of uncertainty and diversity over time, especially for the seldomly covered stream-based AL.
3 FROM POOL-BASED TO STREAM-BASED ACTIVE LEARNING
As pool-based and stream-based AL are quite detached, we want to bridge the gap and enable comparisons by adding two intermediate scenarios, namely the pool stream and the stream batch scenario. A collection of the different scenarios is depicted in Figure 1. All scenarios start with a small labeled dataset (1) which is used to train a model (2). In the classic pool-based scenario, shown in Figure 1a, samples are selected from a constant unlabeled pool (3) and sent to an oracle (4) for labeling. All samples, including the unlabeled ones, can be seen and used multiple times. The second scenario, depicted in Figure 1b, reflects continuous data selection, in which the unlabeled pool (3) is extended at every cycle. To reflect limited recording and transfer capabilities, we change the scenario to a stream-based one. We remove the pool to create the stream batch scenario depicted in Figure 1c, where the current stream is visible to the model only once. In contrast to the classic stream scenario, a batch B (3) of maximum size b can be selected from each stream. In the classic stream scenario depicted in Figure 1d, samples need to be chosen or disposed of immediately. This adds the challenge of defining a threshold function that separates useful from useless samples.
Having introduced the four scenarios, we define four categories for the AL querying strategies and evaluate the possibility of using them in stream scenarios. The first category contains methods evaluating samples individually, like loss learning (Yoo & Kweon, 2019), ensembles (Beluch et al., 2018) and MC dropout (Gal & Ghahramani, 2016) methods. These can be used for stream and pool scenarios without any adaptation. The second category describes methods performing an optimization which requires access to all unlabeled data during the optimization. This category mostly contains the diversity-based methods Coreset (Sener & Savarese, 2018), Badge (Ash et al., 2020) and BatchBald (Kirsch et al., 2019), which cannot be used for stream-based AL directly. However, they can be used if the greedy optimization can be transformed to work on streams. The third category contains methods that use unlabeled data for training, such as CoreGCN (Caramalau et al., 2021) or VAAL (Sinha et al., 2019), and cannot be transferred to stream-based scenarios. The fourth category contains methods that are stream-based, such as video summarization (Kazemi et al., 2019) and ALED (Senzaki & Hamelain, 2021). As current AL research in perception is focusing on the third category, the number of methods that can be transferred to a stream scenario is quite limited. Only the first and fourth categories can be used in all AL scenarios.
4 TEMPORAL INFORMATION IN PERCEPTION DATA
Most perception datasets do not contain temporal data. Since these datasets are meant for classification, object detection or semantic segmentation tasks, this information is naturally of lower importance for the task at hand. The commonly used datasets Kitti (Geiger et al., 2012) and Cityscapes (Cordts et al., 2016) aim for good diversity to be highly generalizable. Classification datasets like Cifar10 (Krizhevsky et al., 2009), which is often used to benchmark AL, are not temporally ordered data streams. Benchmarking AL on these datasets is sub-optimal and only shows potential label savings on datasets that have been manually designed for diversity. Instead, we propose that AL should be benchmarked on camera or sensor streams directly, such that no additional manual work besides labeling is needed.
We create our benchmark dataset based on A2D2¹ (Geyer et al., 2020), which provides temporally coherent frames structured in different drives. We assign the classification labels urban, highway, country road and construction site describing the driving environment to create an operational domain detection task. This task is important in mobile robotics to estimate whether an action can be executed safely. The dataset contains several recorded drives in southern Germany, with around 680 frames on average per recording. The data is temporally clustered in the latent space by the nature of the drives, which can be seen in Figure 2. Further details can be found in Appendix A.
5 VALUE OF TEMPORAL INFORMATION FOR ACTIVE LEARNING
By defining the drives of the dataset as consecutively ordered streams, properties like the predictive uncertainty $\sigma_p$ can be represented as a function of time t. As sampling-based approaches are problematic for stream-based applications due to their increased computational cost, we use a loss module $f_L$ (Yoo & Kweon, 2019) to estimate the predictive uncertainty. The predicted loss $\hat{L}$ (pseudo uncertainty) of a sample x can be defined as in Equation 1.
$\sigma_p \approx \hat{L}_x = f_L[x] = f_L^{*}[t_x] \quad (1)$
The loss module, as well as the latent space representations, depend on the currently selected training set and are updated after each AL cycle. As time is strictly increasing, the time derivative of the predicted loss $\frac{d}{dt}\hat{L}$ exists. By taking the temporal coherence properties of a sensor stream into account and assuming real-world situations (or their simulations), the change between two consecutive samples is naturally limited. This effect can be observed in Figure 2, where the different drives form natural clusters. We use the existence of a time derivative to propose our first method¹, Temporal Predicted Loss (TPL):
$\lambda_i = \left\lvert \frac{d}{dt}\hat{L} \right\rvert \quad (2)$
Based on the selection value $\lambda_i$, we select the sample i with the highest absolute temporal change in the predicted loss for the next AL cycle, instead of selecting the samples with the highest predicted loss. By taking the temporal change into account, the method can easily filter out similar samples, as they have a close temporal relation as well as similar uncertainty values. In addition, the method is very sensitive to samples with sudden changes that cause a change in uncertainty and can be challenging for the model. In Figure 3 we compare the latent space coverage of loss learning and TPL using t-SNE plots. While the vanilla loss learning approach mostly covers only one corner, our approach selects samples all over the latent space.
Our second method, Temporal Distance Loss Change (TDLC), also takes the diversity of the dataset into account by analyzing the change in the latent space representation of the samples. As Figure 2 shows, the samples of one drive are often grouped in clusters. We want to investigate whether the change of distance in latent space is a suitable metric to increase the performance of a selection query. Thus, we formulate Equation 3 to combine the temporal change of the predicted loss $\frac{d}{dt}\hat{L}_i$ with the temporal change in latent space $\frac{d}{dt}f_i$, scaled by the factor $\delta$, which is set to one. In this equation, i denotes the sample and $\lambda_i$ its selection value. As the magnitudes of the learned loss and the distance in latent space can differ, we calculate the mean and standard deviation of both quantities on the fly, denoted mean and std, and combine their zero-mean, unit-variance values.
$\lambda_i = \frac{\frac{d\hat{L}_i}{dt} - \mathrm{mean}\left(\frac{d\hat{L}}{dt}\right)}{\mathrm{std}\left(\frac{d\hat{L}}{dt}\right)} + \delta \cdot \frac{\frac{df_i}{dt} - \mathrm{mean}\left(\frac{df}{dt}\right)}{\mathrm{std}\left(\frac{df}{dt}\right)} \quad (3)$
¹ Will be made available upon acceptance to preserve the authors' anonymity.
The last method follows the idea of submodular optimization, as such methods take the relations within the batch into account. We base our method on the Sieve-Streaming++ algorithm and follow the idea of Kazemi et al. (2019) and Senzaki & Hamelain (2021) to use the determinant of a positive semi-definite kernel. We choose to evaluate the distance of the selected samples with the dot product of the feature vectors, which has also been evaluated by Senzaki & Hamelain (2021). In contrast to the distance matrix used by Kazemi et al. (2019), the dot product of the vectors takes the direction in latent space into account. The resulting matrix has the squared norms of the selected vectors on the main diagonal, while the other values depend on the orientation of the data points relative to the origin of the latent space. These mathematical properties lead to a higher diversity compared to a regular distance matrix. We integrate the temporal change of the predicted loss $\frac{d}{dt}\hat{L}_i$ of the i-th sample with the matrix product of the feature vectors into the submodular optimization. The value $\lambda_J$ of the selected set J with j elements in Equation 4 is to be maximized, where j is constrained by the batch size b to $j \leq b$. $F^{j \times n}$ denotes the matrix of j latent space feature vectors, with n being the feature dimension. We followed Senzaki & Hamelain (2021) and set the scaling factor $\delta$ to 0.5. This method is further denoted as Temporal Distance Loss Stream (TDLS).
$\lambda_J = \sum_{i=0}^{j} \left\lvert \frac{d}{dt}\hat{L}_i \right\rvert + \delta \cdot \log\left(\det\left(F F^{T} + I_j\right)\right) \quad (4)$
As an ablation study, we replace $F^{j \times n}$ with the gradient embedding $E^{j \times (n \cdot c)}$ based on Ash et al. (2020), with c being the number of classes; this variant is further referred to as Temporal Embedded Gradient Loss Stream (TEGLS). This adds information on possible loss directions to the optimization.
6 EXPERIMENTS AND RESULTS
For our experiments, we use the A2D2 object detection dataset labeled as described in Section 4. We start with an initial training set of two drives with 1674 images. Another nine drives with 4518 images remain unseen to be used as streams in the AL cycles. For the testing and validation set, we use three drives each, with a total of 2776 and respectively 2577 images. All splits and further detail can be found in Appendix A. As the streams differ from each other in length, we use a percentage selection size for each stream instead of a fixed one. At each AL cycle, indicated as marker in the plot, a new stream will be added according to the specified scenario from Figure 1. In the results figures we plot the accuracy over the percentage used from the initial training set and the unlabeled pool. As baselines, we use a neural network trained on the whole training set including all possible selections, as well as a random selection strategy. Besides these two commonly used baselines, we introduce a fixed step selection strategy which is an often used strategy to reduce the number of samples in a recording. We use a ResNet18 (He et al., 2016) model for most experiments as it is the most common model in the related work. Further, we extend the classification head to three
fully connected layers with dropout layers in between, such that it can be compared with sampling methods like BatchBald. For the convolutional layers, the pre-trained ImageNet (Deng et al., 2009) weights provided by PyTorch (Paszke et al., 2019) are used. As we noticed a positive effect of the joint loss from the loss learning module, we add this module to all models for a fair comparison. All hyperparameters and model details are listed in Appendix B. After each selection cycle, the model is trained from scratch. As, to our knowledge, no other comparable datasets are available, we vary the order of the streams ingested in the cycles to demonstrate the robustness of our approach. The three tested orders are shown alternately in the figures such that each order is shown twice. The same five seeds are used for the different methods. First, we compare our methods in the stream-based scenario introduced in Figure 1c, and finally we relate them to the pool stream scenario from Figure 1b.
6.1 TEMPORAL COHERENCE FOR BATCH DIVERSIFICATION
In our first experiment, we want to show the influence of the temporal relation. In Figure 4 we compare loss learning with TPL in the stream batch scenario from Figure 1c . We show both approaches for the three selection sizes 5%, 10% and 20%. Besides the ResNet18 model, we compare both approaches with the ResNet34 model to prove flexibility. It can be seen that our approach outperforms the vanilla loss learning approach for different models and selection sizes. For all parameters it reaches a higher accuracy score with the same amount of labeled data. For ResNet18 shown in Figure 4a only our approach reaches the performance of the network trained on all data. In Figure 4b our approach clearly reaches the standard deviation region of the fully trained model for a selection size of 20%, while the vanilla loss learning approach reaches it in the last step with 5% additional data selected. The overall performance of the loss learning approach is lower for ResNet34 which influences our approach as well. Qualitatively the increased diversity of our method can be seen in Figure 5b in comparison to loss learning shown in Figure 5a.
In general, it can be seen that our approach outperforms vanilla loss learning for different selection sizes and models. The effect is reduced with a larger selection size, which was expected. In addition, loss learning does not seem to be the ideal method to estimate the (pseudo) uncertainty of a sample, as its performance varies between the models in Figure 4. However, other approaches require either multiple models or multiple forward passes, which increases the computational cost and is therefore problematic for streams. This is especially important if the selection is performed on a mobile device directly.
6.2 STREAM-BASED ACTIVE LEARNING
In these experiments, we compare our methods introduced in Section 5 with state-of-the-art methods for batch stream-based AL. All experiments in this section are conducted according to the stream batch scenario from Figure 1c. After an ablation study, we select our TDLS and TEGLS methods for further comparison, as state-of-the-art approaches mostly combine diversity and uncertainty as well; details can be found in Appendix C.
In Figure 6 we compare TDLS and TEGLS with Random, Fixed and ALED for two different orders. In Figure 6b, both methods need one stream selection fewer than ALED to cross the fully trained network's performance, doing so at 33% of the data and using 0.5% fewer labels. In Figure 6a, TEGLS crosses the fully trained network's line at 31.8%, while ALED needs about 1% more labels and reaches the fully trained network's performance at 32.8%. It can be seen that combining the temporal change of uncertainty with the most informative latent space encoding works best and outperforms random as well as state-of-the-art selection methods.
6.3 FROM STREAM-BASED TO POOL-BASED ACTIVE LEARNING
The main body of work in the field is focused on pool-based AL only. We want to compare the two scenarios, stream batch (Figure 1c) and pool stream (Figure 1b), and close the gap. To the best of our knowledge, we are the first to compare the different scenarios explicitly. We use the stream batch and the pool stream scenario introduced in Section 3 for this experiment, as otherwise the pool-based methods could select samples from future streams. Pool-based methods can use information that is not available to stream-based methods, such as using data from the unlabeled pool for unsupervised training or optimization approaches in which the data is visited at each optimization step. Therefore, pool-based methods are expected to outperform the stream-based methods. The goal of these experiments is to investigate the decrease in performance that comes along with changing from a pool-based to a stream-based setup. As pool-based scenarios require more data logistics and are computationally more expensive, a certain decrease in performance might be acceptable. Figure 7 shows the results for two different orders.
In Figure 7a, TEGLS even reaches the fully trained network's performance one iteration before the pool-based methods and can achieve a data saving of around 1%. In Figure 7b, our method does not suffer from any performance loss due to its disadvantage in the scenario and reaches the fully trained network's line together with the other methods at 31.5% of the data. Performance above that of the fully trained network can be explained by the independence of the drives in the different sets, so a subset can have a better distribution fit.
To the best of our knowledge, we are the first to show that a stream-based method can compete with pool-based approaches in terms of performance. Our proposed method offers data savings like state-of-the-art pool-based methods, while offering the reduced data logistics of stream-based approaches. As most perception problems start from sensor (e.g. camera) streams, our method can be used on the mobile device directly for many applications like robotics, autonomous driving or environmental surveillance. This additionally saves computational cost and data preprocessing effort.
Although our ablation study in Appendix C showed that the combination with diversity-based approaches works quite well, TPL and TDLC do not need any complex matrix operations or submodular optimization techniques.
7 CONCLUSION AND FUTURE WORK
In our work, we investigated stream-based AL for temporally coherent data. The proposed theoretical modifications that make it possible to exploit the temporal information resulted in three classes of methods. To evaluate these scenarios, we created a classification dataset with temporally coherent data, including timestamps, based on the A2D2 autonomous driving dataset, which we made publicly available. In our first experiment, we showed that our modifications applied to loss learning outperform the vanilla approach by saving up to 5% more labeled data. Our second experiment showed that our methods combining temporal changes in (pseudo) uncertainty with diversity lead to 1% additional data savings in comparison to state-of-the-art methods in stream batch AL. In the last experiments conducted, we provided, to our knowledge, the first comparison between stream-based and pool-based AL using the pool stream and stream batch scenarios. These experiments showed that our stream-based methods achieve, with 0.5% more data savings, the same performance as pool-based methods for temporal data, so we bridged the gap between them. Given the additional effort pool-based scenarios require in terms of data logistics, this is a major point for enabling large-scale AL.
In future work, we want to focus on alternative uncertainty estimation methods. For one, the uncertainty estimation can be integrated more easily with a diversity measurement, which has already shown good results in pool-based approaches. Having shown the benefit of exploiting temporally coherent vision data, we want to extend our approach to semantic segmentation and object detection. Additionally, we plan to create a dataset for stream-based AL in mobile robot perception.
REPRODUCIBILITY STATEMENT
To ensure reproducibility, we use the five seeds 1, 42, 64, 101 and 999 and set cudnn to "deterministic". We used the PyTorch model zoo² implementation of the ResNet18 model and mention all modifications in Appendix B. The exact data preprocessing as well as the training parameters can also be found in Appendix B. The exact dataset details, including all training, validation and test splits as well as the split between the initially labeled and the unlabeled pool, are given in Appendix A.
ETHICS STATEMENT
In our work, we deal with data selection and recording on mobile devices. Such recordings are necessary and already conducted; especially in autonomous driving, data is collected by different institutions. To respect personal data privacy, these recordings are strictly regulated by governmental authorities, and we strongly encourage respecting these regulations. Nonetheless, such methods could be used to collect data without authorization. However, we think that the benefits our stream-based AL approach and on-device data collection can create for the perception and reliability of mobile devices, robots and autonomous cars outweigh this risk.
A DATASET DESCRIPTION
In our experiments, we used the object detection part of the A2D2 dataset³. The dataset contains 17 different drives in southern Germany. The frames are timestamped at a high frequency of up to 10 Hz so that the temporal change of the samples can be evaluated meaningfully. Due to sensor synchronization the rate is not constant, but the optical flow does not get lost. This high frequency brings the risk of selecting redundant samples in a batch. The recorded drives are split into an initial labeled pool and an unlabeled pool for training, as well as a validation and a test set, as shown in Table 1. In the stream-based setups, the unlabeled drives are fed as streams into the AL algorithm. The images have been resized with preserved aspect ratio to 240 × 151 pixels. For the training, the images have been normalized with the mean and the standard deviation of the currently selected samples. The training dataset has been shuffled.
B DETAILED EXPERIMENT DESCRIPTION
We used PyTorch for all experiments; for the existing reference methods we used the code published by the authors, Kirsch et al. (2019)⁴ and Caramalau et al. (2021)⁵. The submodular optimization approaches are implemented using Sieve-Streaming++ (Kazemi et al., 2019)⁶. The ResNet18 has been modified with a three-layer fully connected classification head with inner dimensions of 256 and 128. In front of each fully connected layer, we added a dropout layer with a probability of 0.3. These layers remained active for the MC dropout based methods with ten forward passes. A softmax activation is attached to the last layer. For the convolutional layers, the pre-trained weights provided
3https://www.a2d2.audi/a2d2/en/download.html 4https://github.com/BlackHC/batchbald_redux 5https://github.com/razvancaramalau/Sequential-GCN-for-Active-Learning 6https://github.com/ehsankazemi/hybrid-streaming
by PyTorch have been used and frozen. We trained each model with the SGD optimizer using PyTorch 1.11.0 with a learning rate of 0.0001, a momentum of 0.9 and a weight decay of 0.0005 for a maximum of 200 epochs. To ensure convergence we use early stopping on the validation accuracy with a patience of 30. The batch size is set to 128. Due to the drive concept of A2D2, the dataset contains samples that are very close to each other and may differ only by small changes in the image. The parameters have therefore been selected such that the model does not overfit on the initial set. As a performance increase has been observed when training with an attached loss learning module, this module has been attached to all models; this effect was only observed for small amounts of data. The loss learning weight in the loss function is set to 1. The learning rate is decreased by a factor of ten after 160 epochs. The parameters for CoreGCN are taken from the authors' implementation Caramalau et al. (2021). The experiments are conducted on Nvidia V100 graphics cards.
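For concreteness, a minimal PyTorch sketch of the modified classification head and the optimizer configuration described above is given below; the variable names are our own illustration and the attached loss-learning module is omitted, so this is a simplified approximation rather than the released training code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Frozen pre-trained backbone plus a three-layer fully connected head
# (512 -> 256 -> 128 -> 4 classes) with dropout 0.3 before each layer.
model = resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False          # convolutional weights stay frozen
model.fc = nn.Sequential(
    nn.Dropout(0.3), nn.Linear(512, 256), nn.ReLU(),
    nn.Dropout(0.3), nn.Linear(256, 128), nn.ReLU(),
    nn.Dropout(0.3), nn.Linear(128, 4),
)

# SGD with lr 1e-4, momentum 0.9, weight decay 5e-4; the learning rate is
# divided by ten after 160 of at most 200 epochs.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-4, momentum=0.9, weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[160], gamma=0.1)
```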
C ABLATION STUDY
We compare our methods introduced in Section 5 with each other in an ablation study to investigate the effect of our proposed adaptations. In Figure 8 we show the comparison for two different stream orders. It can be seen that the temporal change in latent space distance (TDLC) without submodular optimization does not yield a large improvement; only a minor one can be seen in Figure 8a. As the distance already reflects the change in latent space, the derivative of this change does not seem to add much value. However, the combination of the distances (TDLS) or embedded gradients
(TEGLS) with the temporal loss as a submodular optimization problem seems to improve our base approach. | 1. What is the main contribution of the paper regarding active learning for edge devices?
2. What are the strengths and weaknesses of the proposed method, particularly in its motivation and experiments?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are some questions raised by the reviewer regarding the paper's language usage, related work, methodology, and results?
5. What other tasks do the authors want to extend their approach to, according to the conclusion? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose an Active Learning (AL) method for a streaming context, targeting applications on edge (sensor) devices such as a fleet of cars or robots. Additionally, they define two contexts with different configurations of pool-based batching and streaming of data. Then, they introduce and evaluate variants of loss learning (Yoo & Kweon, 2019) on a new 4-class classification task defined on the Audi Autonomous Driving Dataset (Geyer et al., 2020) and ResNet-based architectures.
Strengths And Weaknesses
Strengths:
Addresses a relevant problem: early pre-processing of data close to sensors
Weaknesses:
Language incorrectness and lack of clarity (see below)
Motivation of the method (why is the classification of highways vs. construction sites important?)
Claims are not well supported by experiments nor by theory
Clarity, Quality, Novelty And Reproducibility
Clarity/Open Points
Lack in language clarity:
The distinction of “uncertainty” and “diversity” into “predict one value” and “performing greedy optimization” is not appropriate; why not stick with the known categorizations?
References missing, for example: “While early approaches like loss learning or MC dropout are generating a single value per unseen image, state of the art approaches often include unlabeled data for unsupervised training. Other approaches taking diversity into account usually perform a greedy optimization, which requires constant access to the complete unlabeled dataset.
Correct use of language lacking, e.g., consistency of names (“pool based” vs “stream-based”, pytorch, BatchBALD vs. BatchBald, Active learning/Learning), typos. Please have the text proof-read.
Motivation is bad: “As classification is still the most commonly benchmarked task and shown by all authors, we focus on classification in our paper.”
What is the task in the autonomous driving use-case that you really want to tackle with your work? Detection of highways or construction sites? Events/Incidents?
Related work
If the work cited in Section 2.1 is related work, then please differentiate your method from it. If it is not (this is what I guess) then omit it (many things already have been used to motivate AL and introduced before)
Where is the related work section on stream-based Active Learning? Section 2.2 misses a lot of related work on stream-based AL, e.g., Fuji2016, see below.
If you use AL to filter a stream of data, should you not also present related work on stream processing such as novelty detection?
Method:
How do you use the time? You only describe the difference in gradients and the distance in the latent space, no temporal information
Section 3: Why don’t you compare to “third category methods”? Is it not possible to train VAAL on stored data in a datacenter once and then apply it on a stream on device?
Section 4: Construction of dataset for classification: what are the problems that arise from the inconsistent sample rate? Are the frame timestamps later used in the method?
Section 5: You mention the method is sensitive to sudden occurrences. What do you mean?
Claims that are not supported by experiments:
Noise of results in Figure 7 is high. Results shown are specific picks (here: orders) to support the claim, but do not show summary statistics. Also, results have high variance and difference between methods is small, drowned by noise.
Figure 4a: why are the data points not equidistant on the x-axis, if you select 5% of the samples for labeling?
Figure 6a,b: missing random baseline
Conclusion: what other tasks do you want to extend the approach to?
Related work (non-exhaustive list, as a starting point)
Fuji et al., 2016, Budgeted stream-based active learning via adaptive submodular maximization
Narr et al., 2016, Stream-based Active Learning for efficient and adaptive classification of 3D objects
DavideCacciarelli et al., 2022, Stream-based active learning with linear models
Li et al., 2019, Incremental semi-supervised learning on streaming data
Korycki et al., 2019, Active Learning with Abstaining Classifiers for Imbalanced Drifting Data Streams
Luo et al., 2017, Online decision making for stream-based robotic sampling via submodular optimization
Minor details
The paper talks about “1% fewer labels” or “0.5% fewer labels needed” – this was somehow confusing to me as it did sound marginal. In retrospect, I think it would be better to relate this value to the number of samples actually being annotated
“A collection of different scenarios [is] depicted in”
“Classification datasets like Cifar10 (Krizhevsky et al., 2009)[,] which is often used to benchmark AL[,] are not temporally ordered data streams.“
“The scaling factor δ is chosen to 0.5” → “we set the scaling….”
ICLR | Title
Bridging between Pool- and Stream-Based Active Learning with Temporal Data Coherence
Abstract
Active learning (AL) reduces the amount of labeled data needed to train a machine learning model by choosing intelligently which instances to label. Classic pool-based AL needs all data to be present in a datacenter, which can be challenging with the increasing amounts of data needed in deep learning. However, AL on mobile devices and robots like autonomous cars can filter the data from perception sensor streams before it even reaches the datacenter. In our work, we investigate AL for such image streams and propose a new concept exploiting their temporal properties. We define three methods using a pseudo uncertainty based on loss learning (Yoo & Kweon, 2019). The first considers the temporal change of uncertainty and requires 5% less labeled data than the vanilla approach. It is extended by the change in latent space in the second method. The third method, temporal distance loss stream (TDLS), combines both with submodular optimization. In our evaluation on an extension of the public Audi Autonomous Driving Dataset (Geyer et al., 2020) we outperform state-of-the-art approaches by using 1% fewer labels. Additionally, we compare our stream-based approaches with existing approaches for AL in a pool-based scenario. Our experiments show that, although pool-based AL has more data access, our stream-based AL approaches need 0.5% fewer labels.
1 INTRODUCTION
Active learning (AL) is a technique to minimize the labeling effort, in which a machine learning model chooses the data to be labeled by itself. It can be divided into two main scenarios, pool-based and stream-based AL (Settles, 2010). Pool-based AL is a cyclic process of selecting batches of the most promising samples from a pool of data based on a query function. The model is retrained after the selection to start the next iteration of the AL cycle. The data pool is stored such that all samples are always accessible. In contrast, stream-based AL assumes an inflow of samples as a stream and the model decides if a sample should be saved and labeled or disposed. In classic stream-based AL the model is trained with each selected sample (Settles, 2010). However, in deep learning samples are usually selected in batches, due to the long training time of the models. This comes with the risk of selecting samples with an equal information gain. Most approaches ignore this fact or solve it by using a small selection batch size.
Besides the scenarios, the selection method, also called querying strategy, is another important factor of AL methods. There are three main categories of AL algorithms: uncertainty-based, diversity-based and learning-based AL (Ren et al., 2022). The first group are uncertainty-based AL methods, including for example Monte Carlo (MC) dropout methods (Gal & Ghahramani, 2016) or methods approximating the uncertainty by using ensembles (Beluch et al., 2018). The second group are diversity-based methods like Coreset (Sener & Savarese, 2018) or diverse embedded gradients (Ash et al., 2020). These methods select samples based on the dataset coverage. The third group are learning-based approaches. These methods, like loss learning (Yoo & Kweon, 2019), train an additional model which either predicts a value determining the usefulness of a sample, or decides if a sample should be selected directly. Recent approaches from this category often include unlabeled data for unsupervised training. Other approaches taking diversity into account usually perform an optimization, which requires constant access to the complete labeled and unlabeled dataset. This decreases the number of needed samples as intended, but the access to unlabeled data makes the transfer to a stream-based scenario impossible.
A large body of research in the perception domain focuses on pool-based AL, which requires the transfer of all data to a datacenter. Especially in autonomous driving, AL is already an important research topic (Feng et al., 2019; Hekimoglu et al., 2022). However, the data logistics and data preparation limit the possibilities to apply and scale this approach to open world perception problems, where a lot of data is required; such perception tasks include autonomous driving, robotic perception and environmental sensing. In contrast to pool-based AL, stream-based AL can run directly on the mobile devices used in these applications and enables data collection through a large number of agents without a prior transfer to the datacenter. By performing AL on a mobile robot, it can be applied to temporally coherent camera streams directly, which reduces preprocessing efforts. Based on these considerations we focus on stream-based AL for temporally coherent data.
Our contribution can be summarized as follows: We suggest a novel concept of incorporating temporal information into AL, especially stream-based AL. Our concept exploits the temporal change of uncertainty and distance in latent space. Based on this we propose three methods and compare them with state-of-the-art methods on a classification task, the most commonly used task to benchmark AL. To evaluate our methods against other state-of-the-art methods, we create an operational domain detection dataset by adding scene annotations to the Audi Autonomous Driving Dataset (A2D2) (Geyer et al., 2020). Further, we give an overview of the necessary steps to transform a pool-based scenario into a stream-based scenario and perform, to the best of our knowledge, the first direct comparison between stream-based and pool-based AL methods.
2 RELATED WORK
While a lot of research has been done in the field of pool-based AL, stream-based AL has become unpopular with the rise of deep learning. However, the number of vision sensors receiving constant data streams is increasing, and so will the cost of transferring these data to a datacenter. This makes research on stream-based AL techniques interesting, as not all data can be transferred to the datacenter to perform pool-based AL.
2.1 POOL-BASED ACTIVE LEARNING
Sener & Savarese (2018) defined AL as a core set selection problem. The authors aim to select samples that minimize the maximum distance to the not selected points. In this way, it can be formulated as a K-center problem. Solving this is quite costly, so the authors suggested using a greedy algorithm to approximate the K-center problem. The method will be further denoted as Coreset. In batch active learning by diverse gradient embeddings (Badge) Ash et al. (2020) the diversity idea has been extended by taking the prediction uncertainty into account. The authors combined a distance representation of the latent space with pseudo labels based on the highest one-hot encoded value to generate gradients. These are created for every class such that the dimension of the embedding is higher than in the Coreset (Sener & Savarese, 2018) approach. The optimal set is estimated using greedy optimization algorithms.
An uncertainty-based approach is MC dropout as a Bayesian approximation (Gal & Ghahramani, 2016). The method uses several dropout layers which are active during the prediction phase. By performing multiple forward passes a distribution over the class predictions is generated where the authors applied the mutual information function in order to calculate the uncertainty of the samples. This is often combined with the Bayesian active learning by disagreement (Houlsby et al., 2011) metrics, considering the mutual information of the multiple forward passes. Their approach has been modified by Kirsch et al. (2019) to take the diversity of the selected batch into account by calculating the joint mutual information. With their BatchBald approach, the authors reduced the selected samples with redundant information in a batch. In contrast to sampling-based approaches, loss learning (Yoo & Kweon, 2019) is a learning-based approach that needs only one forward pass. By adding a loss module to specific layers of the prediction network, the authors predicted the network’s loss and used it as pseudo uncertainty for sample selection. However, the loss module can only predict a relative loss. The authors showed the flexibility of the approach for several tasks, which makes it quite popular. Novel learning-based methods like variational adversarial active learning (VAAL) (Sinha et al., 2019) use the unlabeled data as well. An autoencoder is trained to learn a latent space representation of the data based on the labeled and unlabeled set. Based on the latent space encoding, a discriminator model is trained to discriminate between labeled and unlabeled data.
The selection is based on the lowest prediction confidence of the discriminator out of the unlabeled dataset predictions. The authors outperformed algorithms like Coreset (Sener & Savarese, 2018) and MC dropout (Gal & Ghahramani, 2016) methods. Kim et al. (2021) extended VAAL (Sinha et al., 2019) by adding a loss prediction module to the task model and the predicted loss to the latent space such that it is included in the input vector of the discriminator model. A more diversity-oriented learning-based approach using unlabeled data is sequential graph convolutional network for active learning (Caramalau et al., 2021). The authors used the distance between the features of the task model to calculate an adjacency matrix for a graph containing labeled and unlabeled data. Based on this matrix a graph neural network is trained. By using message passing the network should predict the nodes’ value for being labeled. With this approach, further denoted as CoreGCN, the authors achieved a good performance on classification datasets.
2.2 STREAM-BASED ACTIVE LEARNING
Stream-based AL has rarely been used for perception tasks so far, especially with deep learning models. In the field of perception, Narr et al. (2016) selected data with stream-based AL using Mondrian forests and trained them in an online learning fashion. For models other than deep neural networks, online and incremental learning (Chiotellis et al., 2018) is often combined with AL, as classical stream-based AL models are retrained after each selection. Another challenge is dealing with concept drifts (Łukasz Korycki et al., 2019; Pham et al., 2022), in which the underlying distribution is changing over time. Especially in stream-based AL, the selection is seen as a submodular optimization problem where the value of an added labeled sample depends on the labels already present. As solving these problems is computationally expensive, stream-based greedy algorithms are an important field of research (Fujii & Kashima, 2016). A method for solving submodular optimization problems is Sieve-Streaming++ (Kazemi et al., 2019). The concept of submodular optimization opens many possibilities for future work. Sieve-Streaming++ has been used by Senzaki & Hamelain (2021) explicitly for AL on edge devices (ALED). The authors tried different semi-positive definite kernels with the prediction confidence as a value function. Temporal and stream properties are neglected in their work.
In our work we will neglect online learning and concept drifts and focus on connecting pool- and stream-based AL for perception.
2.3 ACTIVE LEARNING ON TEMPORAL DATA
Although temporal coherence is an important property of a stream, this property is only used for pool-based AL in previous works. Bengar et al. (2019) used the object detection false positive (FP), true positive (TP), false negative (FN) and true negative (TN) metrics to build a temporal graph and select samples with energy minimization. As this approach requires ground truth, it can only be used as a theoretical baseline. Besides, the authors provided the SYNTHIA-AL dataset, based on the SYNTHIA (Ros et al., 2016) dataset, created for AL purposes. Due to the short snippets and high sampling rate, the dataset mostly targets semantic segmentation or object detection applications. Schmidt et al. (2020) used the object detection classification uncertainty estimated by the entropy over a time horizon. To do so the authors used preceding and succeeding images of each Kitti (Geiger et al., 2012) sample. By using this approach a comparable uncertainty can be estimated, avoiding the usage of ensembles (Beluch et al., 2018) or MC dropout (Gal & Ghahramani, 2016) methods. Nevertheless, the authors only described this approach for pool-based AL. Huang et al. (2018) used temporal information to avoid multiple MC dropout forward passes (Gal & Ghahramani, 2016) for semantic segmentation by combining one-forward-pass uncertainty prediction with a flow network to calculate the uncertainty as a moving average over a time horizon.
Although many topics have been covered, in particular in pool-based AL, temporal properties have only been used to save computation for MC dropout (Gal & Ghahramani, 2016) passes. Temporal properties for stream-based AL still appear to be a relatively unexplored research topic. We want to investigate the change of uncertainty and diversity over time, especially for the seldom covered stream-based AL.
3 FROM POOL-BASED TO STREAM-BASED ACTIVE LEARNING
As pool-based and stream-based AL are quite detached, we want to bridge the gap and enable comparisons by adding two intermediate scenarios. Namely the pool stream and stream batch scenario. A collection of the different scenarios is depicted in Figure 1. All scenarios start with a small labeled dataset (1) which is used to train a model (2). In the classic pool-based scenario, shown in Figure 1a, the samples are selected from a constant unlabeled pool (3) and sent to an oracle (4) for labeling. All samples including the unlabeled ones can be seen and used multiple times. The second scenario depicted in Figure 1b, reflects a continuous data selection, in which the unlabeled pool (3) is extended at every cycle. To reflect limited recording and transferring capabilities we change the scenario to a stream-based one. We remove the pool to create the stream batch scenario depicted in Figure 1c, where the current stream is visible for the model only once. In contrast to the classical stream scenario, a batch B (3) of a maximum size of b can be selected from each stream. In the classic stream scenario depicted in Figure 1d, the samples need to be chosen or disposed immediately. This adds the challenge of defining a threshold function classifying useful and useless samples.
Having introduced the four scenarios, we define four categories for the AL querying strategies and evaluate the possibility of using them in stream scenarios. The first category contains methods evaluating samples individually like loss learning (Yoo & Kweon, 2019), ensembles (Beluch et al., 2018) and MC dropout (Gal & Ghahramani, 2016) methods. These can be used for stream and pool scenarios without any adaptation. The second category describes methods performing an optimization which requires access to all unlabeled data during the optimization. This category contains mostly the diversity-based methods Coreset (Sener & Savarese, 2018), Badge (Ash et al., 2020) and BatchBald (Kirsch et al., 2019), which cannot be used for stream-based AL directly. However, they can be used if the greedy optimization can be transformed to work on streams. The third category contains methods that use unlabeled data for training, such as CoreGCN (Caramalau et al., 2021) or VAAL (Sinha et al., 2019), and cannot be transferred to stream-based scenarios. The fourth category contains methods that are stream-based, such as the video summarization (Kazemi et al., 2019) and ALED (Senzaki & Hamelain, 2021). As current AL research in perception is focusing on the third category, the number of methods that can be transferred to a stream scenario is quite limited. Only the first and fourth category can be used in all AL scenarios.
4 TEMPORAL INFORMATION IN PERCEPTION DATA
Most perception datasets do not contain temporal data. Since these datasets are meant for classification, object detection or semantic segmentation tasks this information is naturally of lower importance for the task at hand. The commonly used datasets Kitti (Geiger et al., 2012) or Cityscapes (Cordts et al., 2016) aim to have a good diversity to be highly generalizable. Classification datasets like Cifar10 (Krizhevsky et al., 2009), which is often used to benchmark AL, are not temporally ordered data streams. Benchmarking AL on these datasets is sub-optimal and only shows potential label savings on datasets that have been manually designed for diversity. Instead, we propose that AL shall be benchmarked on camera or sensor streams directly such that no additional manual work besides labeling is needed.
We create our benchmark dataset based on A2D2 1 (Geyer et al., 2020), which provides temporally coherent frames structured in different drives. We assign the classification labels urban, highway, country road and construction site describing the driving environment to create an operational domain detection task. This task is important in mobile robotics to estimate if actions can be executed safely. The dataset contains several recorded drives in southern Germany, with around 680 frames on average per recording. The data is temporally clustered in the latent space by the nature of the drives, as can be seen in Figure 2. Further details can be found in Appendix A.
5 VALUE OF TEMPORAL INFORMATION FOR ACTIVE LEARNING
By defining the drives of the dataset as consecutively ordered streams, properties like the predictive uncertainty σp can be represented as a function of time t. As sampling-based approaches are problematic for stream-based applications due to their increased computational cost, we use a loss module fL (Yoo & Kweon, 2019) to estimate the predictive uncertainty. The predicted loss L̂ (pseudo uncertainty) of a sample x can be defined as in Equation 1.
$\sigma_p \approx \hat{L}_x = f_L[x] = f_L^*[t_x]$ (1)
The loss module, as well as the latent space representations, depends on the currently selected training set and is updated after each AL cycle. As time is strictly increasing, the time derivative of the predicted loss $\frac{d}{dt}\hat{L}$ exists. By taking the
temporal coherence properties of a sensor stream into account and assuming real world (or their simulations) situations, the change between two samples is naturally limited. This effect can be observed in Figure 2, where the different drives form natural clusters. We use the existence of a time derivative to propose our first method1, Temporal Predicted Loss (TPL):
$\lambda_i = \left\lVert \frac{d}{dt}\hat{L} \right\rVert$ (2)
Based on the selection value λi, we select the sample i with the highest absolute temporal change in the predicted loss, instead of selecting samples with the highest predicted loss for the next AL cycle. By taking the temporal change into account the method can easily filter similar samples as they have a close temporal relation as well as similar uncertainty values. In addition, the method is very sensitive to samples having sudden changes which cause a change in uncertainty, which can be challenging for the model. In Figure 3 we compare the latent space coverage of loss learning and TPL using t-SNE plots. While the vanilla loss learning approach mostly covers only one corner, our approach selects samples all over the latent space.
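The following is a minimal sketch of how the TPL selection value in Equation 2 could be computed over one stream chunk, assuming per-frame timestamps and predicted losses are available; np.gradient handles the non-constant sampling rate. The function and variable names are illustrative and not taken from the released implementation.

```python
import numpy as np

def tpl_select(timestamps, predicted_losses, budget):
    """Select the frames with the largest absolute temporal change of the
    predicted loss, lambda_i = || d/dt L_hat || (Eq. 2)."""
    t = np.asarray(timestamps, dtype=np.float64)
    loss_hat = np.asarray(predicted_losses, dtype=np.float64)
    # finite-difference derivative with respect to the (non-uniform) timestamps
    dloss_dt = np.gradient(loss_hat, t)
    scores = np.abs(dloss_dt)
    # indices of the `budget` frames with the highest temporal loss change
    return np.argsort(-scores)[:budget]
```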
Our second method, Temporal Distance Loss Change (TDLC), also takes the diversity of the dataset into account by analyzing the change in the latent space representation of the samples. As Figure 2 shows, the samples of one drive are often grouped in clusters. We want to investigate if the change of distance in latent space is a suitable metric to increase the performance of a selection query. Thus, we formulate Equation 3 to combine the temporal change of the predicted loss $\frac{d\hat{L}_i}{dt}$ with the temporal change in latent space $\frac{df_i}{dt}$, scaled by the factor δ which is set to one. In this equation, i denotes the sample and λi its selection value. As the magnitudes of the learned loss and of the distance in latent space can differ, we calculate the mean and standard deviation of both values on the fly, denoted as mean and std, and combine the zero-mean unit-variance values of both signals.
$\lambda_i = \frac{\frac{d\hat{L}_i}{dt} - \mathrm{mean}\left(\frac{d\hat{L}}{dt}\right)}{\mathrm{std}\left(\frac{d\hat{L}}{dt}\right)} + \delta \cdot \frac{\frac{df_i}{dt} - \mathrm{mean}\left(\frac{df}{dt}\right)}{\mathrm{std}\left(\frac{df}{dt}\right)}$ (3)
1Will be made available upon acceptance to preserve authors’ privacy.
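A sketch of the TDLC score from Equation 3 is given below; running means and standard deviations are replaced by batch statistics over the current stream chunk for brevity, which is an assumption rather than the exact on-the-fly update described above.

```python
import numpy as np

def tdlc_scores(timestamps, predicted_losses, features, delta=1.0):
    """Eq. 3: standardized temporal change of the predicted loss plus
    delta times the standardized temporal change of the latent features."""
    t = np.asarray(timestamps, dtype=np.float64)
    dloss_dt = np.gradient(np.asarray(predicted_losses, dtype=np.float64), t)
    # temporal change of the embedding, reduced to a per-frame magnitude
    dfeat_dt = np.gradient(np.asarray(features, dtype=np.float64), t, axis=0)
    dfeat_dt = np.linalg.norm(dfeat_dt, axis=1)

    def standardize(v):
        return (v - v.mean()) / (v.std() + 1e-8)

    return standardize(dloss_dt) + delta * standardize(dfeat_dt)
```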
The last method follows the idea of submodular optimization, as these methods take the relations inside the batch into account. We base our method on the Sieve-Streaming++ algorithm and follow the idea of Kazemi et al. (2019) and Senzaki & Hamelain (2021) to use the determinant of a positive semi-definite kernel. We choose to evaluate the distance of the selected samples with the dot product of the feature vectors, which has also been evaluated by Senzaki & Hamelain (2021). In contrast to the distance matrix used in Kazemi et al. (2019), the dot product of the vectors takes the direction in latent space into account. The resulting matrix has the squared norms of the selected vectors on the main diagonal, while the other values depend on the orientation of the data points with respect to the origin of the latent space. These mathematical properties lead to a higher diversity compared to a regular distance matrix. We integrate the temporal change of the predicted loss $\frac{d}{dt}\hat{L}_i$ of the i-th sample with the matrix product of the feature vectors into the submodular optimization. The value λJ of the selected set J with j elements in Equation 4 is to be maximized, where j is constrained by the batch size b to j ≤ b and $F \in \mathbb{R}^{j \times n}$ denotes the matrix of j latent space feature vectors with n being the feature dimension. We followed Senzaki & Hamelain (2021) and set the scaling factor δ to 0.5. This method is further denoted as Temporal Distance Loss Stream (TDLS).
$\lambda_J = \sum_{i=0}^{j} \left\lVert \frac{d}{dt}\hat{L}_i \right\rVert + \delta \cdot \log\left(\det\left(FF^T + I_j\right)\right)$ (4)
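Below is a minimal sketch of the set value function in Equation 4 that the streaming selection maximizes under the budget constraint; the actual selection uses Sieve-Streaming++ (Kazemi et al., 2019), which is not reproduced here, and the names are illustrative.

```python
import numpy as np

def tdls_value(dloss_dt_selected, features_selected, delta=0.5):
    """Eq. 4: sum of |d/dt L_hat_i| over the selected set plus
    delta * log det(F F^T + I_j), with F stacking the feature vectors."""
    F = np.asarray(features_selected, dtype=np.float64)      # shape (j, n)
    j = F.shape[0]
    uncertainty_term = np.sum(np.abs(dloss_dt_selected))
    # log-determinant of the dot-product kernel regularized by the identity
    sign, logdet = np.linalg.slogdet(F @ F.T + np.eye(j))
    return uncertainty_term + delta * logdet
```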
As an ablation, we replace $F \in \mathbb{R}^{j \times n}$ with the gradient embedding $E \in \mathbb{R}^{j \times (n \cdot c)}$ based on Ash et al. (2020), with c being the number of classes; this variant is further referred to as Temporal Embedded Gradient Loss Stream (TEGLS). This adds information on possible loss directions to the optimization.
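For TEGLS, the gradient embedding can be constructed roughly as sketched below, assuming softmax probabilities and penultimate-layer features are available; this is a simplified illustration in the spirit of Ash et al. (2020), not the authors' exact code.

```python
import numpy as np

def gradient_embedding(features, probs):
    """Build E of shape (j, n*c): per sample, the outer product of
    (one_hot(pseudo_label) - probs) with the penultimate-layer features,
    approximating the last-layer loss gradient under the pseudo label."""
    features = np.asarray(features, dtype=np.float64)   # (j, n)
    probs = np.asarray(probs, dtype=np.float64)         # (j, c)
    pseudo = probs.argmax(axis=1)
    residual = np.eye(probs.shape[1])[pseudo] - probs   # (j, c)
    return (residual[:, :, None] * features[:, None, :]).reshape(len(features), -1)
```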
6 EXPERIMENTS AND RESULTS
For our experiments, we use the A2D2 object detection dataset labeled as described in Section 4. We start with an initial training set of two drives with 1674 images. Another nine drives with 4518 images remain unseen to be used as streams in the AL cycles. For the testing and validation set, we use three drives each, with a total of 2776 and respectively 2577 images. All splits and further detail can be found in Appendix A. As the streams differ from each other in length, we use a percentage selection size for each stream instead of a fixed one. At each AL cycle, indicated as marker in the plot, a new stream will be added according to the specified scenario from Figure 1. In the results figures we plot the accuracy over the percentage used from the initial training set and the unlabeled pool. As baselines, we use a neural network trained on the whole training set including all possible selections, as well as a random selection strategy. Besides these two commonly used baselines, we introduce a fixed step selection strategy which is an often used strategy to reduce the number of samples in a recording. We use a ResNet18 (He et al., 2016) model for most experiments as it is the most common model in the related work. Further, we extend the classification head to three
fully connected layers with dropout layers in between, such that it can be compared with sampling methods like BatchBald. For the convolutional layers the pre-trained ImageNet (Deng et al., 2009) weights provided by PyTorch (Paszke et al., 2019) are used. As we noticed a positive effect of the joint loss from the loss learning module, we add this module to all models for a fair comparison. All hyperparameters and model details are listed in Appendix B. After each selection cycle, the model is trained from scratch. As to our knowledge no other comparable datasets are available, we vary the order of streams ingested in the cycles to demonstrate the robustness of our approach. The three tested orders are shown alternately in the figures such that each order is shown twice. The same five seeds are used for the different methods. At first, we compare our methods in the stream-based scenario introduced in Figure 1c and finally we relate them to the pool stream scenario from Figure 1b.
6.1 TEMPORAL COHERENCE FOR BATCH DIVERSIFICATION
In our first experiment, we want to show the influence of the temporal relation. In Figure 4 we compare loss learning with TPL in the stream batch scenario from Figure 1c . We show both approaches for the three selection sizes 5%, 10% and 20%. Besides the ResNet18 model, we compare both approaches with the ResNet34 model to prove flexibility. It can be seen that our approach outperforms the vanilla loss learning approach for different models and selection sizes. For all parameters it reaches a higher accuracy score with the same amount of labeled data. For ResNet18 shown in Figure 4a only our approach reaches the performance of the network trained on all data. In Figure 4b our approach clearly reaches the standard deviation region of the fully trained model for a selection size of 20%, while the vanilla loss learning approach reaches it in the last step with 5% additional data selected. The overall performance of the loss learning approach is lower for ResNet34 which influences our approach as well. Qualitatively the increased diversity of our method can be seen in Figure 5b in comparison to loss learning shown in Figure 5a.
In general, it can be seen that our approach outperforms vanilla loss learning for different selection sizes and models. The effect is reduced with a larger selection size, which was expected. In addition, loss learning does not seem to be the perfect method to estimate the (pseudo) uncertainty of a sample, as its performance varies between the models in Figure 4. However, other approaches require either multiple models or multiple forward passes, which increases the computation cost and is therefore problematic for streams. This is important if the selection is performed on a mobile device directly.
6.2 STREAM-BASED ACTIVE LEARNING
In these experiments, we compare our methods introduced in Section 5 with state-of-the-art methods for batch stream-based AL. All experiments in this section are conducted according to the stream batch scenario from Figure 1c. After an ablation study we select our TDLS and TEGLS methods for further comparison, as state-of-the-art approaches mostly combine diversity and uncertainty as well; details can be found in Appendix C.
In Figure 6 we compare TDLS and TEGLS with Random, Fixed and ALED for two different orders. In Figure 6b both methods need one stream selection less to cross the fully trained network's performance at 33% and use 0.5% fewer labels than ALED. In Figure 6a TEGLS crosses the fully trained network's line at 31.8%, while ALED needs about 1% more labels to achieve the fully trained network's performance at 32.8%. It can be seen that combining the temporal change of uncertainty with the most informative latent space encoding works best and outperforms random as well as state-of-the-art selection methods.
6.3 FROM STREAM-BASED TO POOL-BASED ACTIVE LEARNING
The main body of work in the field is focused on pool-based AL only. We want to compare the two scenarios stream batch (Figure 1c) and pool stream (Figure 1b) and close the gap. To the best of our knowledge, we are the first to compare the different scenarios explicitly. We use the stream batch and the pool stream scenario introduced in Section 3 for this experiment, as otherwise the pool-based methods could select samples from future streams. Pool-based methods can use information that is not available to stream-based methods, like using data from the unlabeled pool for unsupervised training or optimization approaches where the data is visited at each optimization step. Therefore, pool-based methods are expected to outperform the stream-based methods. The goal of these experiments is to investigate the decrease in performance that comes along with changing from a pool-based to a stream-based setup. As pool-based scenarios require more data logistics and are computationally more expensive, a certain decrease in performance might be acceptable. Figure 7 shows the results for two different orders.
In Figure 7a TEGLS even reaches the fully trained network's performance one iteration before the pool-based methods and can achieve a data saving of around 1%. In Figure 7b, our method does not suffer from any performance loss due to the disadvantage in the scenarios and reaches the fully trained network's line together with the other methods at 31.5% used data. The performance above that of the fully trained network can be explained by the independence of drives across the different sets, so a subset can have a better distribution fit.
To the best of our knowledge we are the first to show that a stream-based method can compete with pool-based approaches in terms of performance. Our proposed method offers data savings like state-of-the-art pool-based methods, while offering the reduced data logistics of stream-based approaches. As most perception problems start from sensor (e.g. camera) streams, our method can be used on the mobile device directly for many applications like robotics, autonomous driving or environmental surveillance. This additionally saves computational costs and data preprocessing efforts.
Although our ablation study in Appendix C showed that the combination with diversity-based approaches works quite well, TPL and TDLC do not need any complex matrix operations or submodular optimization techniques.
7 CONCLUSION AND FUTURE WORK
In our work, we investigated stream-based AL for temporally coherent data. The proposed modifications that make it possible to exploit the temporal information resulted in three classes of methods. To evaluate these scenarios, we created a classification dataset with temporally coherent data including timestamps based on the A2D2 autonomous driving dataset, which we made publicly available. In our first experiment, we showed that our modifications applied to loss learning outperform the vanilla approach by saving up to 5% more labeled data. Our second experiment showed that our methods combining temporal changes in (pseudo) uncertainty with diversity lead to 1% additional data savings in comparison to state-of-the-art methods in stream batch AL. In the last experiments, we provided, to our knowledge, the first comparison between stream-based and pool-based AL using the pool stream and stream batch scenarios. These experiments showed that our stream-based methods achieve the same performance as pool-based methods for temporal data while saving 0.5% more data, so we bridged the gap between them. Given the additional effort pool-based scenarios require in terms of data logistics, this is a major point for enabling large-scale AL.
In future work, we want to focus on alternative uncertainty estimation methods, for example ones that can be integrated more easily with a diversity measurement, which already showed good results in pool-based approaches. Having shown the benefit of exploiting temporally coherent vision data, we want to extend our approach to semantic segmentation and object detection. Additionally, we plan to create a dataset for stream-based AL in mobile robot perception.
REPRODUCIBILITY STATEMENT
To ensure reproducibility we use the five seeds 1, 42, 64, 101 and 999 and set cudnn to "deterministic". We used the PyTorch modelzoo2 implementation of the ResNet18 model and mention all modifications in Appendix B. The exact data preprocessing as well as the training parameters can also be found in Appendix B. The exact dataset details, including the training, validation and test splits as well as the split between the initially labeled and unlabeled pool, are given in Appendix A.
ETHICS STATEMENT
In our work, we deal with data selection and recording on mobile devices. Such recordings are necessary and already conducted; especially in autonomous driving, data is collected by different institutions. To respect personal data privacy these recordings are strictly regulated by governmental authorities, and we strongly encourage respecting these regulations. Nonetheless, such methods could be used to collect data without authorization. However, we think that the benefits our stream-based AL approach and on-device data collection can create for the perception and reliability of mobile devices, robots and autonomous cars outweigh this risk.
A DATASET DESCRIPTION
In our experiments, we used the object detection part of the A2D2 dataset3. The dataset contains 17 different drives in southern Germany. The frames are timestamped with a high frequency of up to 10 Hz so that the temporal change of the samples can be evaluated meaningfully. Due to sensor synchronization the rate is not constant, but the optical flow is not lost. This high frequency brings the risk of selecting redundant samples in a batch. The recorded drives are split into an initial labeled pool and an unlabeled pool for training as well as validation and test sets, as shown in Table 1. In the stream-based setups, the unlabeled drives are fed as streams into the AL algorithm. The images have been resized with preserved aspect ratio to 240 × 151 pixels. For training, the images have been normalized with the mean and the standard deviation of the currently selected samples. The training dataset has been shuffled.
B DETAILED EXPERIMENT DESCRIPTION
We used PyTorch for all experiments; for the existing reference methods we used the code published by the authors Kirsch et al. (2019)4 and Caramalau et al. (2021)5. The submodular optimization approaches are implemented using Sieve-Streaming++ (Kazemi et al., 2019)6. The ResNet18 has been modified with a classification head of three fully connected layers with inner dimensions of 256 and 128. In front of each fully connected layer, we added dropout with a probability of 0.3. These layers remained active for the MC dropout based methods with ten forward passes. A softmax activation is attached to the last layer. For the convolutional layers the pre-trained weights provided
3https://www.a2d2.audi/a2d2/en/download.html 4https://github.com/BlackHC/batchbald_redux 5https://github.com/razvancaramalau/Sequential-GCN-for-Active-Learning 6https://github.com/ehsankazemi/hybrid-streaming
by PyTorch have been used and frozen. We trained each model with the SGD optimizer using PyTorch 1.11.0 with a learning rate of 0.0001, a momentum of 0.9 and a weight decay of 0.0005 for a maximum of 200 epochs. To ensure convergence we use early stopping on the validation accuracy with a patience of 30. The batch size is set to 128. Due to the drive concept of A2D2, the dataset contains samples that are very close to each other and may differ only by small changes in the image. The parameters have therefore been selected such that the model does not overfit on the initial set. As a performance increase has been observed when training with an attached loss learning module, this module has been attached to all models; this effect was only observed for small amounts of data. The loss learning weight in the loss function is set to 1. The learning rate is decreased by a factor of ten after 160 epochs. The parameters for CoreGCN are taken from the authors' implementation Caramalau et al. (2021). The experiments are conducted on Nvidia V100 graphics cards.
C ABLATION STUDY
We compare our methods introduced in Section 5 with each other in an ablation study to investigate the effect of our proposed adaptations. In Figure 8 we show the comparison for two different stream orders. It can be seen that the temporal change in latent space distance (TDLC) without submodular optimization does not yield a large improvement; only a minor one can be seen in Figure 8a. As the distance already reflects the change in latent space, the derivative of this change does not seem to add much value. However, the combination of the distances (TDLS) or embedded gradients
(TEGLS) with the temporal loss as a submodular optimization problem seems to improve our base approach. | 1. What is the focus and contribution of the paper on stream-based active learning?
2. What are the strengths of the proposed approach, particularly in its practical applications?
3. What are the weaknesses of the paper regarding its theoretical support and empirical evidence?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Do you have any questions or suggestions regarding the proposed methods and their applications? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper applies stream-based active learning to datasets with time stamps. The main idea is to select data points with a large loss change $\lVert \frac{d}{dt}\hat{L} \rVert$, where $\hat{L}$ is an estimated loss. The authors propose three methods by following this policy, and validate their empirical quality in object detection experiments.
Strengths And Weaknesses
Strengths: As mentioned in the paper, stream-based active learning is not much studied compared with pool-based active learning. However, since data are often collected in a streaming fashion (e.g. autonomous driving), the importance of stream-based active learning is increasing. This paper proposes practical ideas for stream-based active learning, and then the results might be used in future applications.
Weaknesses: The support for the superiority of the proposed algorithms is not so strong. There is no theoretical guarantee, and why the authors choose the proposed criteria (objectives (1) and (2)) is not explained clearly. The authors empirically show that the proposed algorithms perform well in some object detection tasks, but only for the specific task with specific datasets. I think more experiments are needed if the authors claim that these algorithms perform well for general stream-based active learning with time stamps.
Clarity, Quality, Novelty And Reproducibility
Clarity: The description of the algorithm can be improved. Section 5 describes the algorithmic details of the proposed methods, but they are not sufficient. For example, how to apply SieveStreaming++ is not clear. Since SieveStreaming++ is a method for a static set function, the objective function should be updated at an appropriate timing (minor comment: det(FF^T)+I_j should be replaced by det(FF^T+I_j)), but the authors do not address this point.
Novelty: The novelty of this paper lies in the idea of using $\lVert \frac{d}{dt}\hat{L} \rVert$ instead of the loss itself. It is a nice contribution, but not so ground-breaking.
Reproducibility: The datasets and random seeds are addressed. However, the descriptions of the algorithms are not sufficient to reproduce the experimental results. It is helpful if the authors provide the details of the algorithms (how to estimate the loss function, when the loss estimate is updated, etc). |
ICLR | Title
Wasserstein Generalization Bound for Few-Shot Learning
Abstract
In the absence of large quantities of annotated data, few shot learning is used to train neural networks that make predictions based on similarities between datapoints. To better understand how models would behave when presented with unfamiliar data, research on generalization bounds has revealed some important properties of deep neural networks. However, when extended to the domain of few shot learning it often yields loose bounds since it does not take into account the nature and methodology of few shot learning. We propose a novel stochastic generalization bound for prototypical neural networks by constructing a Wasserstein sphere centered around the distribution of weight matrices. We show that by applying concentration inequalities on the distribution of weight matrices in the Wasserstein sphere, stricter generalization bounds can be obtained. Comparison with previous generalization bounds shows the efficacy of our approach, and to our knowledge this is the first bound that makes use of the Wasserstein distance to give a measure of the generalizability of deep neural networks.
1 INTRODUCTION
The problem of finding sharp generalization bounds for deep neural networks is of prominent importance as it allows us to bound the overall uncertainty involved in their application. In recent times the theoretical properties of these bounds have received increased attention and have been an active subject of investigation. Various classical results exploring the expressivity of neural networks have acknowledged their universality Leshno et al. (1993) and their unexpected advantage over hand-crafted features Barron (1993), even though training of neural networks itself is a hard problem Blum & Rivest (1992). Other studies have also revealed that deep neural networks may have structural properties that enable them to perform non-convex optimization Choromanska et al. (2015); Kawaguchi (2016), further alluding to the fact that given enough data these models can learn any function Cybenko (1989). However, simply possessing such desirable properties does not guarantee that the models will perform accurately on future unknown inputs; without proper restrictions on the optimization the models become prone to over-fitting, and effectively addressing this challenge leads us to study the generalization of these models. However, though there exists a great body of research pertaining to the generalization of classification models, relatively little is studied about the generalization properties of meta learning models Vanschoren (2019), specifically Few-shot learning (FSL) Wang et al. (2020).
In this paper we study the generalization of FSL, specifically that of Prototypical Networks Snell et al. (2017). By leveraging stochastic bounds from classic PAC learning theory Vapnik et al. (1994) we derive a Wasserstein bound on the probability of the absolute difference between the true and the empirical error deviating from an established threshold. Some of the sharpest generalization bounds are obtained using the PAC-Bayesian framework McAllester (1998; 1999), and in this work we make use of it to derive a stochastic bound for FSL involving Prototypical networks. However, whereas the standard PAC-Bayesian framework relies
on the KL divergence between some prior distribution over the set of classifiers and the data distribution, our work leverages the Wasserstein metric Vallender (1974) to obtain a better bound. The unique nature of FSL is in stark contrast to the traditional classification task, and when combining it with the classification methodology of prototypical networks we are able to obtain a tight stochastic bound.
Also, prior works assume a homogeneous nature of the data samples while obtaining the bound; we, however, do not impose any such restriction, and our final bound involves the deviation of the final distribution from the initial distribution in the Wasserstein metric.
2 RELATED WORK
Various classical theory works attribute generalization ability to understanding the class capacity Vapnik (1999); Mohri et al. (2018). Recent work on deep hypothesis spaces Pascanu et al. (2013); Montufar et al. (2014); Livni et al. (2014); Telgarsky (2016) also revealed that deep neural networks can perform convex optimization, thereby being able to generalize over a vast set of datapoints. The generalization error bound of Harvey et al. (2017) showed that the VC dimension of a neural network depends on the product of its depth and number of parameters, considerably improving the previous bounds given by Bartlett et al. (1998). Feed-forward neural networks were revealed to have a unit-wise ℓ1-norm generalization bound with exponential dependence on depth. A sharpness-based measure was suggested by Keskar et al. (2016) to predict the difference in generalization behaviours of networks trained with different batch-size SGD. More recent PAC-Bayesian approaches Neyshabur et al. (2017); Nagarajan & Kolter (2019) also gave very sharp bounds utilizing the spectral and Frobenius norms of the weight matrices. In the domain of few shot learning, Cao et al. (2019) provided a framework to obtain the optimal k for k-shot classification with prototypical networks.
3 BACKGROUND
3.1 PROBLEM SETUP
Consider N distinct classes being sampled i.i.d. from the set of all possible classes C for an N -way classification problem. For each class ci ∈ {c1, c2, . . . , cN} k datapoints are sampled i.i.d. from the class conditional distribution p(x|Y (x) = ci), where x ∈ RD, Y (x) is the class assignment of x and D is the dimension of the data.
The k datapoints constitute the support set of the class $c_i$: $S_i = \{x_1, \ldots, x_k\}$ with $Y(x_j) = c_i$ for $x_j \in S_i$, for all $c_i \in C$. Given a datapoint $(x_j, y_j)$, where $y_j \in \{c_1, c_2, \ldots, c_N\}$ and $x_j \notin \bigcup_{i=1}^{N} S_i$, the few shot classification task is to predict the correct assignment label $y_j$ using $S = \bigcup_{i=1}^{N} S_i$.
3.2 PROTOTYPICAL NETWORKS
Prototypical Networks Snell et al. (2017) are trained to learn the low dimensional representation of data i.e., they learn a function ϕ : RD −→ RM , where M is the dimension of the representation space. The prototype representation of each class ϕ(Si) is generating by taking the average of the representations of its support set:
$\phi(S_i) = \frac{1}{k} \sum_{x \in S_i} \phi(x)$ (1)
Classification of input x is obtained by taking the softmax of the distance between the input embedding and the prototype representation of each class:
$p_\phi(y = j \mid x, S) = \frac{\exp\left(-d\left(\phi(x), \phi(S_j)\right)\right)}{\sum_{i=1}^{N} \exp\left(-d\left(\phi(x), \phi(S_i)\right)\right)}$ (2)
where d is a distance function : RM × RM −→ [0,+∞).
Most applications, including our approach, use the Euclidean distance as the distance function. Learning involves minimizing the negative log probability $J(\phi) = -\log p_\phi(\hat{y} = j \mid x)$ using SGD, and the function $\phi$ is generally a deep neural network.
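A minimal sketch of the prototype computation in Equation (1) and the distance-softmax rule in Equation (2), assuming an embedding network `phi` is given; this is illustrative PyTorch, not a specific published implementation.

```python
import torch
import torch.nn.functional as F

def prototypes(phi, support_sets):
    """Eq. (1): each class prototype is the mean embedding of its k support samples."""
    return torch.stack([phi(s).mean(dim=0) for s in support_sets])   # (N, M)

def class_probabilities(phi, x, protos):
    """Eq. (2): softmax over negative squared Euclidean distances to the prototypes."""
    dists = torch.cdist(phi(x), protos) ** 2                         # (batch, N)
    return F.softmax(-dists, dim=1)

def loss(phi, x, y, protos):
    """Negative log probability J(phi), minimized with SGD."""
    return F.cross_entropy(-torch.cdist(phi(x), protos) ** 2, y)
```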
3.3 WASSERSTEIN BALL AND TOTAL VARIATION
3.4 WASSERSTEIN BALL
Given the space of all probability distributions P with compact support set, the pth Wasserstein metric on the space P is defined as:
$W_p(\nu, \mu) = \left(\inf \mathbb{E}\left[d(X, Y)^p\right]\right)^{1/p}$ (3)
where X and Y are random variables with marginals µ and ν, and the infimum is taken over all possible joint distributions of X and Y. For our analysis we focus on the first order Wasserstein distance by taking the distance measure d as the Manhattan distance:
$W(\nu, \mu) = \inf \mathbb{E}\left[\lVert X - Y \rVert_1\right]$ (4)
The rationale for using the Wasserstein distance is that it gives a metric measuring the minimum difference between two distributions, which we use to obtain a sharper generalization bound. Consequently, a Wasserstein ball of radius R centered around µ is defined as:
$W_\mu(R) = \left\{\nu \in P \mid W(\nu, \mu) \le R\right\}$ (5)
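As a concrete illustration of these definitions, the first-order Wasserstein distance between two empirical one-dimensional distributions can be computed and checked against the ball radius in Equation (5); scipy.stats.wasserstein_distance covers the 1-D case, and the numbers below are purely illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples from mu
nu_samples = rng.normal(loc=0.3, scale=1.0, size=5000)   # samples from nu

w1 = wasserstein_distance(nu_samples, mu_samples)        # first Wasserstein distance
radius = 0.5
print(f"W1(nu, mu) ~= {w1:.3f}; inside the ball of radius {radius}: {w1 <= radius}")
```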
3.5 TOTAL VARIATION
The total variation distance between two distributions ν and µ is given by
$\delta(\nu, \mu) = \sup_{A \in \mathcal{P}} \left|\nu(A) - \mu(A)\right|$ (6)
Intuitively, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. In some cases we can also have the below relation
$\delta(\nu, \mu) = \frac{1}{2} \lVert \nu - \mu \rVert_1$ (7)
This is similar to the Wasserstein distance in many aspects. The total variation distance (or half the norm) arises as the optimal transportation cost when the cost function is $c(x, y) = \mathbb{1}_{x \neq y}$, that is,
$\frac{1}{2} \lVert \nu - \mu \rVert_1 = \delta(\nu, \mu) = \inf_{\pi} \mathbb{P}_\pi(X \neq Y) = \inf_{\pi} \mathbb{E}_\pi\left[\mathbb{1}_{X \neq Y}\right]$ (8), where the infimum is taken over couplings $(X, Y)$ with marginals $\nu$ and $\mu$.
It however differs in taking the distributions directly rather than their supports, which makes it less desirable than the Wasserstein distance for our case. We use it mainly to compare the Wasserstein distance with the KL divergence, which otherwise cannot be compared mathematically with the Wasserstein metric.
4 WASSERSTEIN BOUND
Prototypical networks make predictions based on the nearest neighbour from the support set S and the embedding function ϕ. Thus, to formulate the relationship between the complexity of the classifier and the support set we make use of classical PAC learning theory Vapnik et al. (1994). Consider the simple binary classification problem where the probability of the difference between true and empirical error is bounded by:
P ( ||errtrue(h)− errtrain(h)||1 ≤ ξ ) ≥ 1− δ (9)
where h is the classifier, errtrue is true error, errtrain is the empirical training error obtained on the support set S, 0 ≤ δ ≤ 1 and ξ is defined as:
$\xi \triangleq \sqrt{\dfrac{D\left(\ln\frac{4k}{D} + 1\right) + \ln\frac{4}{\delta}}{2k}}$ (10)
where D is the VC dimension. The ℓ1 metric is conventionally used to measure the deviation, but metrics which do not overestimate are crucial for the accurate prediction of the error difference. A sharper bound which is symmetric about the distributions is extremely necessary for studying generalization in few shot learning.
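As a quick numeric illustration of Equation (10) (with values chosen purely for illustration, assuming D = 10, k = 5 and δ = 0.05):

```python
import math

D, k, delta = 10, 5, 0.05   # illustrative values only
xi = math.sqrt((D * (math.log(4 * k / D) + 1) + math.log(4 / delta)) / (2 * k))
print(round(xi, 3))  # ~1.46: the bound is vacuous for such a small support set
```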
Li showed that effective prediction by neural networks is generally the result of the final layers of the network, where the embeddings are split apart to facilitate effective linear classification. However, the embeddings learnt by the network before the final layer can be very compact in some high dimensional vector space. Consider two distributions ν and µ from these compact embeddings; the KL divergence of these two distributions is given by:
$KL(\nu \| \mu) = \int_x \nu(x) \ln\left(\frac{\nu(x)}{\mu(x)}\right) dx$ (11)
In the context of few shot learning the embeddings ϕ(Si) may be close enough such that their class conditional distributions are very similar, i.e.
KL(ν||µ) = lim ν−→µKL(ν||µ)
= lim ν−→µ ∫ x µ(x) ln ( ν(x) µ(x) ) (12) By monotone convergence theorem, for finite measures equation (12) can be written as:
KL(ν||µ) = ∫ x lim ν−→µµ(x) ln ( ν(x) µ(x) ) = 0 (13)
As we can see from Equation (13), the KL divergence can be very inaccurate in capturing the distance between class-conditional distributions when that distance tends to 0, which is further exacerbated by the log factor present in its formulation. The Wasserstein metric is preferable in this regard to the Kullback–Leibler (KL) divergence, as it overcomes this problem of magnitude reduction by lifting the comparison to the product measure space and effectively capturing the remaining separation Otto & Villani (2000):

W(ν, µ) = √( inf_{π ∈ Π(ν,µ)} ∫_{M×M} d(x, y)² dπ(x, y) )    (14)

where Π(ν, µ) denotes the set of probability measures on M × M with marginals ν and µ, and M is some finite-dimensional vector space. More specifically, for any two distributions ν and µ, by Equations (11) and (14) we have the following inequalities demonstrating the sharpness of the Wasserstein metric in comparison to the KL divergence in the limiting cases:

(1/2) d_TV(ν, µ) < √( KL(ν, µ) )    (15)

W(ν, µ) ≤ O( d_TV(ν, µ) )    (16)

From Equations (15) and (16) we can conclude that W(ν, µ) ≤ O( √( KL(ν, µ) ) ). Therefore, the use of the Wasserstein distance is more appropriate in the present context of few-shot generalization.
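To make the limiting behaviour tangible, the sketch below (our own example with one-dimensional Gaussians of equal variance, for which both quantities have simple closed forms) shows that as the mean gap ε shrinks, the KL divergence decays quadratically while the Wasserstein distance decays only linearly, so KL collapses much faster for nearby class-conditional distributions.

```python
import numpy as np

sigma = 1.0
for eps in [1.0, 0.1, 0.01]:
    kl = eps ** 2 / (2 * sigma ** 2)   # KL( N(mu, s^2) || N(mu + eps, s^2) )
    w1 = eps                           # W1 between the same two Gaussians: a pure shift of size eps
    print(f"eps={eps:<5} KL={kl:.6f}  sqrt(KL)={np.sqrt(kl):.4f}  W1={w1}")
```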
Lemma 1. Given a prototypical network ϕ_θ with an N-way k-shot classification task, a query sample x_q ∈ R^D with support set S, where R is the radius of this support set and R_i is the radius of the Wasserstein ball for class c_i centered around ϕ(S_i), for Z_i = ϕ(x_q) − ϕ(S_i) we define v(Z_i) = ‖E(Z_i Z_i^*)‖; then v(Z_i) can be simplified as

v(Z_i) = 2(1 + 1/k)(R + R_i)    (17)
Proof. First, from the definition of v(Z_i), taking conditional expectations into account, we write

v(Z_i) = E_{x,S_i}[ (ϕ(x) − ϕ(S_i)) · (ϕ(x) − ϕ(S_i)) ]    (18)

and split it into two parts which we examine separately. In general, for a random vector X the expectation of the quadratic form is E[‖X‖²] = Tr(Var(X)) + E[X]ᵀE[X]. Hence,

v(Z_i) = E[ ‖ϕ(x) − ϕ(S_i)‖² ] = Tr( Σ_{ϕ(x)−ϕ(S_i)} ) + E[ϕ(x) − ϕ(S_i)]ᵀ E[ϕ(x) − ϕ(S_i)],    (19)
where the first term inside the trace can be expanded as:

Σ_{ϕ(x)−ϕ(S_i)} = Var[ϕ(x) − ϕ(S_i)]
= E[ (ϕ(x) − ϕ(S_i))(ϕ(x) − ϕ(S_i))ᵀ ] − (µ_a − µ_b)(µ_a − µ_b)ᵀ
= Σ_c + µ_a µ_aᵀ + (1/k) Σ_c + µ_b µ_bᵀ − µ_a µ_bᵀ − µ_b µ_aᵀ − (µ_a − µ_b)(µ_a − µ_b)ᵀ
= (1 + 1/k) Σ_c    (the last terms cancel out).    (20)

By linearity of the trace we obtain from Equation (20):

Tr( Σ_{ϕ(x)−ϕ(S_i)} ) = (1 + 1/k) Tr(Σ_c)    (21)

We note that Var(X) = E[XXᵀ] − E[X]E[X]ᵀ and that Σ_c ≜ Var(ϕ(x)) is the class-conditional covariance of the embeddings, so that the prototype ϕ(S_i), being an average of k samples, has covariance Σ_c/k. Hence, Equation (20) is obtained by expanding the first term and taking the expectation of each resulting item. The second term of Equation (19) is rewritten, for notational convenience, using:

E_{x,S}[ ϕ(x) − ϕ(S_i) ] = µ_a − µ_b.    (22)

Putting them together:
v(Z_i) = (1 + 1/k) Tr(Σ_c) + (µ_a − µ_b)ᵀ(µ_a − µ_b)    (23)

Similarly, for E[ϕ(x) − ϕ(S_i)]ᵀ E[ϕ(x) − ϕ(S_i)] we have:

E[ϕ(x) − ϕ(S_i)]ᵀ E[ϕ(x) − ϕ(S_i)] = E_{x,S}[ ‖ϕ(x) − ϕ(S_i)‖² ]
= Tr( Σ_{ϕ(x)−ϕ(S_i)} ) + E[ϕ(x) − ϕ(S_i)]ᵀ E[ϕ(x) − ϕ(S_i)]
= (1 + 1/k) Tr(Σ_c).    (24)

Putting together Equation (21) and Equation (24):

v(Z_i) = (1 + 1/k) Tr(Σ_c) + (µ_a − µ_b)ᵀ(µ_a − µ_b) + (1 + 1/k) Tr(Σ_c)
= (µ_a − µ_b)ᵀ(µ_a − µ_b) + 2(1 + 1/k) Tr(Σ_c).    (25)

We note that µ_aᵀµ_a and µ_bᵀµ_b are quadratic forms, while µ_aᵀµ_b describes the dot product between two independent, randomly drawn samples, which has expectation 0, as we assume all the random variables involved are centered around 0. By the i.i.d. assumption on the random variables, the off-diagonal terms of Σ are zero, hence the trace is just the variance of the random vector.

The variance is, however, the largest possible deviation in the embedding space; hence, by the assumptions made in the lemma, we can write the final expression as 2(1 + 1/k)(R + R_i).
In the proof of Lemma 1 we also made use of the result on quadratic forms of normally distributed random vectors re-stated from Rencher & Schaalje (2008).
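As a sanity check on the second-moment computation above, the following Monte-Carlo sketch (our own check; it assumes Gaussian class-conditional embeddings with a shared covariance Σ_c and a query independent of the support set, which are illustrative assumptions rather than the paper's exact setting) compares the empirical value of E‖ϕ(x) − ϕ(S_i)‖² with the closed form (1 + 1/k) Tr(Σ_c) + ‖µ_a − µ_b‖² from Equations (19)–(25).

```python
import numpy as np

rng = np.random.default_rng(0)
M, k, trials = 8, 5, 200_000
mu_a, mu_b = rng.normal(size=M), rng.normal(size=M)   # mean embedding of the query and of the class
A = rng.normal(size=(M, M))
Sigma_c = A @ A.T / M                                  # shared class-conditional covariance
L_chol = np.linalg.cholesky(Sigma_c)

x = mu_a + rng.normal(size=(trials, M)) @ L_chol.T             # query embeddings phi(x)
S = mu_b + rng.normal(size=(trials, k, M)) @ L_chol.T          # k support embeddings per trial
proto = S.mean(axis=1)                                         # phi(S_i): the class prototype

empirical = np.mean(np.sum((x - proto) ** 2, axis=1))
closed_form = (1 + 1 / k) * np.trace(Sigma_c) + np.sum((mu_a - mu_b) ** 2)
print(round(float(empirical), 3), round(float(closed_form), 3))   # the two values should agree
```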
Theorem 2. Given a prototypical network ϕ_θ with an N-way k-shot classification task and a query sample x_q ∈ R^D whose support set S comes from a sphere of radius R, the probability that the model correctly predicts the class assignment is bounded by:

p_ϕ(y = j | x_q, S) ≤ ∏_{i=1}^{N} ( 1 − (1 + D) exp( −(3/2) · (‖ϕ_θ(x_q)‖² − R_i)² / ( 6(1 + 1/k)(R + R_i) − L(‖ϕ_θ(x_q)‖² − R_i)² ) ) )    (26)

where R_i is the radius of the Wasserstein ball centered around ϕ(S_i), measured in the Wasserstein metric, that includes class c_i, i.e. c_i ∈ W_{ϕ(S_i)}(R_i), L = max(R_1, . . . , R_N), and Y(x_q) = j.
Proof. The Wasserstein ball for each class c_i is given by Equation (5):

W_{ϕ(S_i)}(R_i) = { ν ∈ P_{c_i} : W(µ, ν) ≤ R_i }    (27)

where P_{c_i} is the class-conditional distribution. For the prototypical network ϕ_θ to correctly predict Y(x_q), the representational embedding of x_q must be closest to ϕ(S_j), i.e. ϕ(x_q) should be closer to the center of the Wasserstein ball W_{ϕ(S_j)} than to any other W_{ϕ(S_m)} for all m ∈ {1, . . . , N} with m ≠ j:

p_ϕ(y = j | x_q, S) ≥ p_ϕ(y = m | x_q, S)    (28)

For the network to generalize to previously unseen query samples, Equation (28) should hold. Therefore:

exp( −d(ϕ(x), ϕ(S_j)) ) ≥ exp( −d(ϕ(x), ϕ(S_m)) )    (29)

Since the classification depends only on the distance between the representational embeddings, Equation (29) can be rewritten as:

P( ‖ϕ(x_q) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_p)‖ )    for p = 1, 2, . . . , N    (30)

As ϕ(x), S_1, . . . , S_N are random, we consider expected values rather than the exact random values, which gives

P( ‖ϕ(x) − E ϕ(S_j)‖ ) ≥ P( ‖ϕ(x) − E ϕ(S_p)‖ )    (31)
Now, the probability that x_q is correctly classified by the model is given by:

P(y = j | x_q, S) = P( ‖ϕ(x) − ϕ(S_j)‖ > ‖ϕ(x) − ϕ(S_1)‖, ‖ϕ(x) − ϕ(S_j)‖ > ‖ϕ(x) − ϕ(S_2)‖, . . . , ‖ϕ(x) − ϕ(S_j)‖ > ‖ϕ(x) − ϕ(S_N)‖ )    (32)

Next we note that the sampling was i.i.d., so we can split the right-hand side of Equation (32) into a product of probabilities:

P(y = j | x_q, S) = P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_1)‖ ) · P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_2)‖ ) · · · P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_N)‖ )    (33)

Applying Bernstein's inequality to the i-th probability in the product in Equation (33), i.e. to P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_i)‖ ), we get the following bound:

P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_i)‖ ) ≤ 1 − (1 + D) exp( −(3/2) · (‖ϕ_θ(x_q)‖² − R_i)² / ( 3v(Z) − L(‖ϕ_θ(x_q)‖² − R_i)² ) )    (34)
where v(Z) = E[ (ϕ(x) − ϕ(S_j))(ϕ(x) − ϕ(S_j))^* ] and L is a quantity which bounds all the random embeddings of the classes in the embedding space, i.e. L ≥ ‖ϕ(S_i)‖; we hence choose L = max(R_1, . . . , R_N), as all the embeddings lie in spheres of radius R_i and this quantity bounds all of them. Also, by the triangle inequality we have

L ≥ ‖ϕ(S_i) − ϕ(S_j)‖ ≥ ‖ϕ(S_i)‖ − ‖ϕ(S_j)‖    (35)

After applying the basic assumption that the query sample is uncorrelated with the support sets S_i, we can use Lemma 1 to simplify this further to

v(Z) = 2(1 + 1/k)(R + R_j)    (36)
Now, by using Equation (35) and Equation (36), we can rewrite Equation (34) as:

P( ‖ϕ(x) − ϕ(S_j)‖ ≥ ‖ϕ(x) − ϕ(S_i)‖ ) ≤ 1 − (1 + D) exp( −(3/2) · (‖ϕ_θ(x_q)‖² − R_i)² / ( 6(1 + 1/k)(R + R_i) − L(‖ϕ_θ(x_q)‖² − R_i)² ) )    (37)

Applying this in the same way to every factor in Equation (33), we obtain the final expression in Equation (38):

p_ϕ(y = j | x_q, S) ≤ ∏_{i=1}^{N} ( 1 − (1 + D) exp( −(3/2) · (‖ϕ_θ(x_q)‖² − R_i)² / ( 6(1 + 1/k)(R + R_i) − L(‖ϕ_θ(x_q)‖² − R_i)² ) ) )    (38)
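Equation (38) upper-bounds the probability of a correct prediction. As a complementary, purely illustrative sketch (our own toy model with isotropic Gaussian class-conditional embeddings; the means, noise level and episode count are placeholders, not the paper's experimental setup), the same probability can also be estimated directly by Monte-Carlo under the prototype decision rule underlying Equation (32).

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, k, episodes = 8, 5, 5, 20_000
class_means = rng.normal(size=(N, M))      # hypothetical class-conditional embedding means
sigma = 0.7                                # shared isotropic embedding noise

correct = 0
for _ in range(episodes):
    j = rng.integers(N)                                        # true class of the query
    support = class_means[:, None, :] + sigma * rng.normal(size=(N, k, M))
    prototypes = support.mean(axis=1)                          # phi(S_i)
    query = class_means[j] + sigma * rng.normal(size=M)        # phi(x_q)
    pred = np.argmin(np.sum((prototypes - query) ** 2, axis=1))
    correct += int(pred == j)

print(correct / episodes)   # Monte-Carlo estimate of p(y = j | x_q, S) in this toy model
```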
5 EXPERIMENTS
In this section, we present our results illustrating the advantage of our bound on the following datasets: Omniglot Lake et al. (2015), miniImageNet Vinyals et al. (2016) and tieredImageNet Ren et al. (2018). In Table 1, all experiments are performed on a 4-layer neural network, similar to that used by Snell et al. (2017), and on a 7-layer residual neural network He et al. (2016). For clarity, the specific architecture of the neural networks and the preprocessing of the data are the same as those used by Cao et al. (2019). Relatively simple models are used to highlight the behaviour of our stochastic bound across network architectures and across testing shots k ∈ {1, · · · , 5}. PCA ProtoNet Cao et al. (2019) uses principal component analysis to keep only the leading d = 60 dimensions as inputs while zeroing out the rest, and the Mixed ProtoNet is a standard prototypical network trained with a randomized number of shots in the range [1, 5].
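For completeness, the sketch below (our own illustration with placeholder pre-computed features and synthetic labels, not the actual dataset loaders) shows how an N-way k-shot episode with a separate query set is assembled, which is the evaluation protocol used on all three datasets above.

```python
import numpy as np

def sample_episode(features, labels, n_way, k_shot, n_query, rng):
    # Build one N-way k-shot episode: k support and n_query query examples per sampled class.
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query, query_y = [], [], []
    for c_idx, c in enumerate(classes):
        idx = rng.permutation(np.flatnonzero(labels == c))[: k_shot + n_query]
        support.append(features[idx[:k_shot]])
        query.append(features[idx[k_shot:]])
        query_y.append(np.full(n_query, c_idx))
    return np.stack(support), np.concatenate(query), np.concatenate(query_y)

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))        # placeholder features (e.g. network embeddings)
labels = np.repeat(np.arange(20), 50)         # 20 hypothetical classes, 50 examples each

S, Q, y = sample_episode(features, labels, n_way=5, k_shot=5, n_query=15, rng=rng)
print(S.shape, Q.shape, y.shape)              # (5, 5, 64) (75, 64) (75,)
```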
We report the misclassification percentage, i.e. 100 × p where p is the probability of misclassification. We can see that the misclassification percentage only increases as the number of shots increases, both in the training and in the testing phase. The term 6(1 + 1/k)(R + R_i) − L(‖ϕ_θ(x_q)‖² − R_i)² is inversely proportional to 1/k for a fixed embedding and query sample, hence it increases with the number of shots; the Mixed-shot misclassification percentage is also higher due to the heterogeneous nature of the data, which carries more information regarding the distribution of the data.
MODEL CONFIGURATION: Vanilla ProtoNet is used as our baseline. We present the performance of multiple ProtoNets trained with different shots to illustrate the performance-degradation issue. ProtoNet-PCA uses the principal components of the training-split embeddings, with components other than the d leading ones zeroed out. We carry out a parameter sweep on miniImageNet and set d = 60; the same value is used on the other two datasets. For selecting the training shot of the embedding network, we find overall performance to be optimal using k = 5. We set R = 0.001, N = 85, randomly choose R_i from a sphere of radius 1, and use D = 60, based on performance on miniImageNet.
We observe that matching the training shot to the test shot generally provides the best performance for vanilla ProtoNets. Importantly, training with a mixture of different values of k does not provide optimal performance when evaluated on the same mixture of k values; instead, the resulting performance is mediocre across all test shots. We obtain the PCA of the embedded data by eigendecomposing the covariance matrix of the embeddings, which yields the principal components, expressed as the significant eigenvalues, and the principal directions, expressed as the corresponding eigenvectors. The number of significant eigenvalues approximates the intrinsic dimension of the embedding space. When the subspace is linear, this approximation is exact; otherwise, it serves as an upper bound on the true intrinsic dimension Fukunaga & Olsen (1971).
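The PCA step described above can be sketched as follows (our own illustration with placeholder random embeddings; d = 60 mirrors the configuration reported for ProtoNet-PCA): the embedding covariance is eigendecomposed, the embeddings are projected onto the d leading principal directions, and the remaining components are zeroed out.

```python
import numpy as np

def pca_truncate(embeddings, d):
    # Keep only the d leading principal components of the embeddings, zeroing out the rest.
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    cov = centered.T @ centered / (len(embeddings) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :d]                     # d leading principal directions
    truncated = mean + (centered @ top) @ top.T
    return truncated, eigvals[::-1]                   # truncated embeddings, descending spectrum

rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 128))                     # placeholder embeddings
truncated, spectrum = pca_truncate(emb, d=60)

# The number of significant eigenvalues approximates the intrinsic dimension of the embedding space.
print(truncated.shape, int((spectrum > 1e-6 * spectrum[0]).sum()))
```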
6 CONCLUSION AND FUTURE WORK
In this paper we provide a novel bound on the generalization error of the N-way k-shot classification task using prototypical networks. This is crucial because existing bounds hold for large numbers of samples and hence cannot be applied to k-shot learning, where data samples are limited. We also integrate the prototypical architecture of the network into the error probability of the classification task, making the bound much more accurate for few-shot learning, and we do not assume a homogeneous distribution of samples during classification, making it more relevant to practical applications. The sharpness and accuracy of our bound are also demonstrated on various datasets in the experimental section.
Future work includes obtaining a framework to analyze the best possible architecture for k-shot learning specific to a given dataset. Presently, we study classification for a fixed architecture; deriving, mathematically, the architecture with the best generalization behaviour would be a much more relevant and efficient way of understanding the inner workings of k-shot learning, and we would like to work in this direction. | 1. What is the main contribution of the paper in few-shot learning?
2. What are the strengths and weaknesses of the paper regarding its theoretical analysis and experimental results?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's organization, typos, and grammatical errors? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work leverages the Wasserstein metric to obtain a tight stochastic generalization bound for few-shot learning. Compared to traditional tasks, this paper supposes that the Wasserstein distance is more appropriate in the present context of few-shot learning.
Strengths And Weaknesses
Strength:
A generalization bound for few-shot learning is an important and interesting topic.
The paper tries to give a detailed theoretical proof of the bound of FSL.
Weaknesses
The arrangement of article and the typos make the paper hard to read: section 3.3 is empty; there are typos in equation (26,36,37) of Section 3.2.
2. The experimental results are extremely weak. The paper does not verify its theoretical result in the experiments. Compared with the paper [1], the reviewer cannot see any difference in the experimental results. Besides, this paper only reports the accuracy on different datasets without further analysis.
[1] A Theoretical Analysis of the Number of Shots in Few-Shot Learning. Cao et al., ICLR 2019
This paper claims that the Wasserstein distance gives tighter results compared to the KL distance. It is better to provide some experimental results to prove this claim. Besides, the reviewer wonders how the proposed bounds can guide the design of models with better generalization ability. How to use the bounds to improve the performance of FSL methods?
There are many grammatical and typographical problems in the article. For example.
p1:“r expressivity” —> “expressivity”
P3: “, The total” —> “, the total”
P4: “may be close enough enough” —> “may be close enough” ; “Li showed” —> “Li (2018) showed”
P7: “for p = 1, 2, ...N” —> “for p = 1, 2, . . . , N.” ; “||ϕ(x)−ϕ(Sj||” —> “||ϕ(x)−ϕ(Sj)||”
P7: In equation (36), " v(Z) = 3.2.(..."
P8: “100*p” —> “100 × p”
Please use \citep for the reference.
Clarity, Quality, Novelty And Reproducibility
Although the paper claims to have proposed the first bound that makes use of Wasserstein distance to give a measure of generalizability of deep neural networks, the paper needs to more clearly explain the motivation and interpretation of the bound. In addition, I do not quite understand how the proposed bounds guide the design of models with better generalization ability and how this is demonstrated through experiments. The quality of this paper is rather poor, which really hinders the reading experience. |
1. What is the focus of the paper regarding few-shot learning?
2. What are the strengths and weaknesses of the proposed error bound?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the theory and experiment correspondence?
5. Can the bounds in the paper be improved or compared to prior works? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes an error bound for prototypical few shot learning problem. Wasserstein distance is used in the output matrix instead of Euclidean. The authors design experiments for Omniglot, miniImagenet and tieredImagenet for protonet, mixed shot and PCA protonet algorithms by taking the number of shots as 1 and 5.
Strengths And Weaknesses
Strengths:
Analyzing the number of shots is an important topic in few shot learning.
The paper is written in clear language (although some typos need to be corrected).
Weaknesses:
The risk bound is a lower bound and the denominator could even be negative. In that sense, the theoretical result is not sufficiently informative. It could be an upper bound and using better/more interpretable terms might help.
The theory and the experiments do not correspond to each other. For example, in the paper [1], pairwise Euclidean distance is used while calculating the bounds because of the loss function. Namely, for the proposed framework to hold, one should use Wasserstein distance as the loss between the class distributions and the experiments needs to be designed accordingly.
From my understanding, weak convergence argument is shown as the motivation of the Wasserstein distance choice. The base case is samplewise and Wasserstein is a distributional metric, so a better/different motivation would be more helpful for the paper.
From my understanding, the experiments only show that taking k=5 gives better results than taking k=1, which is already known and does not contribute much to the theory. If we assume a suitable experiment is designed, sweeping k over more values would be more helpful at least.
How do the bounds in paper [1] and in this paper compare? (assuming you change your bound to a lower bound)
I did not check the details in their calculations, so I cannot comment much about their correctness. I assumed there is no major error while doing my evaluation.
[1] Tianshi Cao, Marc Law, and Sanja Fidler. A theoretical analysis of the number of shots in few-shot learning. arXiv preprint arXiv:1909.11722, 2019.
Clarity, Quality, Novelty And Reproducibility
It would be good to proofread the paper one more time and fix the grammar errors and typos. The writing of the paper is not ready to be published. Some sentences could be made shorter.
The novelty is very limited. Results with a very similar theoretical approach are already published, and the authors changed Euclidean with Wasserstein, which is direct and not well motivated in terms of the resulting bound and designed experiments.
The paper is easy to reproduce since existing methods are used for k = 1 and 5. |
ICLR | Title
Wasserstein Generalization Bound for Few-Shot Learning
Abstract
In the absence of large quantities of annotated data, few shot learning is used to train neural networks that make predictions based on similarities between datapoints. To better understand how models would behave when presented with unfamiliar data, research on generalization bounds have revealed some important properties about deep neural networks. However, when extended to the domain of few shot learning it often yields loose bounds since it does not take into the account the nature and methodology of few shot learning. We propose a novel stochastic generalization bound for prototypical neural networks by constructing a Wasserstein sphere centered around the distribution of weight matrices. We show that by applying concentration inequalities on the distribution of weight matrices in the Wasserstein sphere stricter generalization bounds can be obtained. Comparison with previous generalization bounds shows the efficacy of our approach and to our knowledge this is the first bound that makes use of Wasserstein distance to give a measure of generalizability of deep neural networks.
1 INTRODUCTION
The problem of finding sharp generalization bounds for deep neural networks is of prominent importance as it allows us to bound the overall uncertainty involved in their application. In recent times the theoretical properties of these bounds have received increased attention and has been an active subject of investigation. Various classical results exploring the r expressivity of neural networks have acknowledged their universality Leshno et al. (1993) and their unexpected advantage over-hand crafted features Barron (1993) even though training of neural networks itself is a hard problem Blum & Rivest (1992). Other studies have also revealed that deep neural networks may have structural properties that enable them to perform non-convex optimization Choromanska et al. (2015); Kawaguchi (2016) further alluding to the fact that given enough data these models can learn any function Cybenko (1989). However, simply possessing such desirable properties does not guarantee that the models will perform accurately on future unknown inputs, this is because without proper restrictions on the optimization the models become prone to over-fitting and to effectively address this challenge leads us to study the generalization of these models. However, though there exits a great body of research pertaining the generalization of classification models relatively little is studied about generalization properties of meta learning models Vanschoren (2019), specifically Few-shot learning (FSL) Wang et al. (2020).
In this paper we study the generalization of FSL specifically that of Prototypical Networks Snell et al. (2017). By leveraging stochastic bounds from classic PAC learning theory Vapnik et al. (1994) we derive a Wasserstein bound on the probability of the absolute difference between the true and the empirical error deviating from a established threshold. Some of the most sharp generalization bounds are obtained using the PAC-Bayesian Framework McAllester (1998; 1999) and in this work we make use of it to derive a stochastic bound for FSL involving Prototypical networks. However, the standard PAC-Bayesian framework relies
on the KL divergence between some prior distribution of set of classifiers and data distribution, our work leverages the Wasserstein metric Vallender (1974) to obtain a better bound. The unique nature of FSL is in stark contrast to traditional task of classification and when combining them with the methodology used in classification of prototypical networks we are able to obtain a tight stochastic bound.
Also prior works assume homogeneous nature of data samples while obtaining the bound , we however do not impose any such restriction while studying it and also our final bound involves the deviation of final distribution from the initial distribution in wasserstein metric.
2 RELATED WORK
Various classical theory work attributes generalization ability to understanding the class-capacity Vapnik (1999); Mohri et al. (2018). Recent work in deep hypothesis spaces Pascanu et al. (2013); Montufar et al. (2014); Livni et al. (2014); Telgarsky (2016) also revealed deep neural networks can perform convex optimization thereby being to generalize over a vast set of datapoints. Harvey et al. (2017) generalization error bound showed that the VC dimension of neural network depends on the product of it depth and parameters considerably improving the previous bounds given by Bartlett et al. (1998). Feed forward neural networks were revealed to have unit-wise ℓ1 norm generalization bound with exponential dependence on depth. A sharpness based measure was suggested by Keskar et al. (2016) to predict the difference in generalization behaviours of networks trained with different batch size SGD. More recent PAC-Bayesian approaches Neyshabur et al. (2017); Nagarajan & Kolter (2019) also gave very sharp bounds utilizing spectral and Frobenius norm of weight matrices. In the domain of few shot learning Cao et al. (2019) provided a framework to obtain the optimal k shot for prototypical networks.
3 BACKGROUND
3.1 PROBLEM SETUP
Consider N distinct classes being sampled i.i.d. from the set of all possible classes C for an N -way classification problem. For each class ci ∈ {c1, c2, . . . , cN} k datapoints are sampled i.i.d. from the class conditional distribution p(x|Y (x) = ci), where x ∈ RD, Y (x) is the class assignment of x and D is the dimension of the data.
The k datapoints constitute the support set of the class ci : Si = {x1, . . . xk} where Y (xj) = ci and xj ∈ Si for all ci ∈ C. Given a datapoint (xj , yj), where yj ∈ {c1, c2, . . . , cN} and xj /∈ {S1, . . . , Sk}, the few shot classification task is to predict the correct assignment label yj using S = ⋃N i=1 Si.
3.2 PROTOTYPICAL NETWORKS
Prototypical Networks Snell et al. (2017) are trained to learn the low dimensional representation of data i.e., they learn a function ϕ : RD −→ RM , where M is the dimension of the representation space. The prototype representation of each class ϕ(Si) is generating by taking the average of the representations of its support set:
ϕ(Si) = 1
k ∑ x∈Si ϕ(x) (1)
Classification of input x is obtained by taking the softmax of the distance between the input embedding and the prototype representation of each class:
pϕ(y = j|x,S) = exp (−d (ϕ (x) , ϕ (Sj)))∑N i=1 exp (−d (ϕ (x) , ϕ (Si)))
(2)
where d is a distance function : RM × RM −→ [0,+∞).
Most applications including our approach use the euclidean distance as the distance function. Learning involves minimizing the negative log probability J(ϕ) = − log pϕ(ŷ = j|x) using SGD and the function ϕ is generally a deep neural network.
3.3 WASSERSTEIN BALL AND TOTAL VARIATION
3.4 WASSERSTEIN BALL
Given the space of all probability distributions P with compact support set, the pth Wasserstein metric on the space P is defined as:
Wp(ν, µ) = (inf E[d(X,Y )p])1/p (3)
where X and Y are random variables with marginals µ and v and infimum is taken over all possible joint distributions of X and Y . For our analysis we focus on the first order Wasserstein distance by taking the distance measure d as the Manhattan distance:
W (ν, µ) = ( inf E [ ∥(X,Y )∥1 ]) (4)
The rationale for using Wasserstein distance is that it gives a metric to measure the minimum difference between two distributions which we use to obtain a sharper generalization bound. Consequently, a wasserstein ball of radius R centered around µ is defined as:
Wµ(R) = { v ∈ P ∣∣W (ν, µ) ≤ R} (5)
3.5 TOTAL VARIATION
The total variation distance between two distributions ν and µ is given by
δ(ν, µ) = sup A∈P
|ν(A)− µ(A)| (6)
Intuitively, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. In some cases we can also have the below relation
δ(ν, µ) = 1
2 ||ν − µ||1 (7)
This is similar to Wasserstein distance in many aspects , The total variation distance (or half the norm) arises as the optimal transportation cost, when the cost function is c(x, y) = 1x ̸=yc(x, y) = 1x ̸=y , that is,
1 2 ||ν − µ||1 = δ(ν, µ) = inf P(ν ̸= µ) = inf π Eπ[1ν ̸=µ] (8)
It however differs in taking distributions directly rather than their supports , which makes it less desirable than wasserstein distance for our case. We use it mainly to compare the wasseserstein diatnce with the K−L divergence , which otherwise cannot be compared mathematically with wasserstein metric.
4 WASSERSTEIN BOUND
Prototypical networks make predictions are based on the nearest neighbour from the support set S and the embedding function ϕ. Thus, to formulate the relationship between the complexity of the classifier and the support set we make us of the classical PAC learning theory Vapnik et al. (1994). Consider the simple binary classification problem where the probability of the difference between true and empirical error is bounded by:
P ( ||errtrue(h)− errtrain(h)||1 ≤ ξ ) ≥ 1− δ (9)
where h is the classifier, errtrue is true error, errtrain is the empirical training error obtained on the support set S, 0 ≤ δ ≤ 1 and ξ is defined as:
ξ ≜
√ D ( ln 4kD + 1 ) + ln 4δ
2k (10)
where D is the VC Dimension. l1 metric is conventionally used to measure the deviation but metrics which do not over estimate are crucial for the accurate prediction of the error difference. A sharper bound which is symmetric about the distributions is extremely necessary for studying generalization in Few shot learning.
Li showed that effective prediction by neural networks is generally the result of the final layers of the network where the embeddings are split apart to facilitate effective linear classification. However, the embeddings learnt by the networks before the final layer can be very compact in some high dimensional vector space. Consider two distributions ν and µ from this compact embeddings, the KL divergence of these two distributions is given by:
KL(ν||µ) = ∫ x µ(x) ln ( ν(x) µ(x) ) (11)
If the context of few shot learning the embeddings ϕ(Si) may be close enough enough such that their class conditional distributions are very similar, i.e.
KL(ν||µ) = lim ν−→µKL(ν||µ)
= lim ν−→µ ∫ x µ(x) ln ( ν(x) µ(x) ) (12) By monotone convergence theorem, for finite measures equation (12) can be written as:
KL(ν||µ) = ∫ x lim ν−→µµ(x) ln ( ν(x) µ(x) ) = 0 (13)
As we could see from Equation (13) the KL divergence could be pretty inaccurate in capturing the distance between the class conditional distributions tends to 0 which would be further exacerbated by the log factor present in its formulation. The Wasserstein metric is preferable in this regard to Kullback–Leibler (KL) divergence as it over comes this problem of magnitude reduction by projecting it into higher dimensional product measure space and effectively capturing it Otto & Villani (2000):
W (ν, µ) = √ inf
π∈ ∏ (ν,µ) ∫ M×M d(x, y)2dπ(x, y) (14)
where ∏ (ν, µ) denotes the set of probability measures on M×M where M is some finite dimensional vector space. More specifically, for any two distributions ν and µ by Equation (11) and (14) we have the following
inequalities demonstrating the sharpness of the Wasserstein metric in comparison to KL-divergence in the limiting cases:
1 2 dTV (ν, µ) <
√ KL(ν, µ) (15)
W (ν, µ) ≤ O(dTV (ν, µ)) (16) From Equations (15) and (16) we can conclude that W (ν, µ) ≤ √ KL(ν, µ). Therefore, the usage of a was a Wasserstein distance is more appropriate in the present context of few shot generalization.
Lemma 1. Given a prototypical network ϕθ with a N -way k-shot classification task, a query sample xq ∈ RD with support set S ,R is the radius of this support set and if Ri is the radius of the Wasserstein ball for class ci centered around ϕ(Si) , for Zi = ϕ(xq)− ϕ(Si) , we define v(Zi) = ||E(Zi.Z∗i || , then v(Zi) can be simplified as
v(Z) = 2(1 + 1
k )(R+Ri) (17)
Proof. First, from the definition of v(Z), by taking conditional probabilities into account we get v(Z) = Ex,Si , since xq ∈ RD hence ϕ(x) = ϕ(x) into two parts and examine them separately:
v(Zi) = Ex,Si = E[(ϕ(x)− ϕ(Si)).(ϕ(x)− ϕ(Si))] (18) In general, from probability theory we have for random vector X , the expectation of the quadratic is E[||X||2] = Tr(Var(X)) + E[X]TE[X]. Hence,
v(Zi) = E[||ϕ(x)− ϕ(Si)||2] = Tr(Σ
ϕ(x)−ϕ(Si)) + E[ϕ(x)− ϕ(Si)] ∗E[ϕ(x)− ϕ(Si)],
(19)
where the first term inside the trace can be expanded as:
Σ ϕ(x)−ϕ(Si) = Var[ϕ(x)− ϕ(Si)]
= E[(ϕ(x)− ϕ(Si))(ϕ(x)− ϕ(Si))T ]− (µa − µb)(µa − µb)T
= Σc + µaµ T a +
1 k Σc + µbµ T b − µaµTb − µbµTa − (µa − µb)(µa − µb)T
= (1 + 1
k )Σc (Last terms cancel out).
(20)
by linearity of trace we can obtain the following equation from equation(20)
Trace(Σ ϕ(x)−ϕ(Si)) = (1 +
1 k )Trace(Σc) (21)
We note that Var(X) = E[XXT ] − E[X]E[X]T and Σc ≜ Var(ϕ(x) + ϕ(Si)). Hence, equation (20) is obtained by expanding out the first term and taking the expectation of each resulting item. The second term of Equation (19) is rewritten for notational convenience as :
Ex,S [||ϕ(x)− ϕ(Si)||2] = µa − µb. (22) Putting them together:
i = (1 + 1
k )Tr(Σc) + (µa − µb)T (µa − µb) (23)
Similarly for E[ϕ(x)− ϕ(Si)]∗E[ϕ(x)− ϕ(Si)] we have :
E[ϕ(x)− ϕ(Si)]∗E[ϕ(x)− ϕ(Si)] = Ex,S [||ϕ(x)− ϕ(Si)||2] = Tr(Σ
ϕ(x)−ϕ(Si)) + E[ϕ(x)− ϕ(Si)] ∗E[ϕ(x)− ϕ(Si)]
= (1 + 1
k )Tr(Σc).
(24)
Putting together equation21 and equation24:
Ex,S = (1 + 1
k )Tr(Σc) + (µa − µb)(µa − µb)T + (1 +
1 k )Tr(Σc)
= (µa − µb)T (µa − µb) + 2.(1 + 1
k )Tr(Σc).
(25)
we note that µTaµa and µ T b µb are quadratic forms while µ T aµb describe the dot product between two independent randomly drawn samples which has expectation 0 , as we assume all the random variables involved are centered around 0. By the iid assumption on the random variables , off-diagonal terms of the Σ are zero , hence trace is just the variance of the random vector .
Variance is however the largest possible deviation in the distributed space hence by the assumptions made in the lemma we can write the final expression as 2(1 + 1k )(R+Ri)
For proof of Lemma 1, we first re-state the result on quadratic forms of normally distributed random vectors by Rencher & Schaalje (2008).
Theorem 2. Given a prototypical network ϕθ with a N -way k-shot classification task, a query sample xq ∈ RD with support set S, comes from a sphere of radius R, then the probability the model correctly predicts the class assignment bounded by:
pϕ(y = j|xq,S) ≤ N∏ i=1 1−(1+D)
[ exp ( −3 2 ( (||ϕθ(xq)||2 −Ri)2
3.2.(1 + 1k )(R+Ri)− L (||ϕθ(xq)||2 −Ri) 2
))] (26)
where Ri is the radius of the Wasserstein ball centered around ϕ(Si) measured in wasserstein metric and includes class ci ,i.e ci ∈ Wϕ(Si)(Ri)and L = max(R1, . . . , RN ), Y (xq) = j
Proof. The Wasserstein ball for each class ci is given by equation (5): Wϕ(Si)(Ri) = { v ∈ Pci ∣∣W (µ, v) ≤ Ri} (27) where Pci is the class conditional distribution. For the prototypical network ϕθ to correctly predict Y (xq) the representational embedding of xq must be closer to ϕ(Sj), i.e. ϕ(xq) should be closer to the center of the Wasserstein ball Wϕ(Sj) than any other Wϕ(Si) for all m ∈ {1, . . . , N} and j ̸= m:
pϕ(y = j|xq,S) ≥ pϕ(y = m|xq,S) (28)
For the network to generalize to previously unseen query samples equation (28) should hold true. Therefore:
exp (−d (ϕ (x) , ϕ (Sj))) ≥ exp (−d (ϕ (x) , ϕ (Sm))) (29)
Since the classification depends only on the distance between the representational embeddings, Equation (29) can we rewritten as:
P ( ||ϕ(xq)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(Sp)|| ) (30)
for p = 1, 2, ...N
As ϕ(x), S1, . . . , SN are random we will consider expected value rather than the exact random value , now we get
P (||ϕ(x)− Eϕ(Sj)||) ≥ P (||ϕ(x)− Eϕ(Sp)||) (31)
Now, the probability that xq is correctly classified by the model is given by:
P (y = j|xq,S) = P (||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(S1)||, ||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(S2)||, ... ||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(SN )||)
(32)
Next we note that that the sampling was i.i.d so we can split the RHS of Equation (32) into product of several probabilities:
P (y = j|xq,S) =P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(S1)||) P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(S2)||) ... P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(SN )||)
(33)
Applying Bernstein’s inequality to the ith probability in the product of probabilities in Equation 33 i.e., on P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(Si)||) we get the following bound:
P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(Si)||) ≤
1− (1 +D) [ exp ( −3 2 ( (||ϕθ(xq)||2 −Ri)2
3v(Z)− L (||ϕθ(xq)||2 −Ri)2
))] (34)
where v(Z) = E[ϕ(x)−ϕ(Sj)(ϕ(x)−ϕ(Sj)∗] and Lis a quantity which bounds all the random embeddings of the classes in embedding space , (i.e) L ≥ ||ϕ(Si)|| , we hence choose L = max(R1, . . . , RN ) , as all the embeddings lie in the sphere of radius Ri this quantity bounds all of them. Also , by triangle inequality we have
L ≥ ||ϕ(Si)− ϕ(Si)|| ≥ ||ϕ(Si)|| − ||ϕ(Si)|| (35)
After applying the basic assumptions that query sample is uncorrelated with the support sets Si , we can now use lemma(1) to further simplify this to
v(Z) = 3.2.(1 + 1
k )(R+Rj) (36)
Now by using equation(35) and equation(36)we can rewrite Equation (34) as:
$P(\|\phi(x)-\phi(S_j)\| \ge \|\phi(x)-\phi(S_1)\|) \;\le\; 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)$  (37)
Applying the same argument to every factor in the product of Equation (33), we obtain the final expression of Equation (38):
$p_\phi(y = j \mid x_q, \mathcal{S}) \;\le\; \prod_{i=1}^{N}\Big\{ 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)\Big\}$  (38)
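To make the bound concrete, the short sketch below simply evaluates the right-hand side of Equation (38) for given radii, shot count, dimension, and query-embedding norm. All numeric values are illustrative assumptions, not values taken from the paper's experiments.

```python
import numpy as np

def theorem2_rhs(phi_norm, radii, R, k, D):
    """Evaluate the right-hand side of Equation (38).

    phi_norm : ||phi_theta(x_q)||_2, the norm of the query embedding
    radii    : Wasserstein-ball radii R_1, ..., R_N (one per class)
    R        : radius of the sphere the query sample comes from
    k        : number of shots
    D        : input dimension
    """
    L = max(radii)
    value = 1.0
    for R_i in radii:
        num = (phi_norm - R_i) ** 2
        den = 3 * 2 * (1 + 1 / k) * (R + R_i) - L * (phi_norm - R_i) ** 2
        value *= 1 - (1 + D) * np.exp(-1.5 * num / den)
    return value

# Illustrative call; the radii and embedding norm are made-up values.
print(theorem2_rhs(phi_norm=0.5, radii=[1.0, 1.2, 0.9], R=0.001, k=5, D=60))
```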
5 EXPERIMENTS
In this section, we present our results illustrating the advantage of our bound on the following datasets: Omniglot Lake et al. (2015), miniImageNet Vinyals et al. (2016) and tieredImageNet Ren et al. (2018). In Table 1, all experiments are performed on a 4-layer neural network, similar to that used by Snell et al., and on a 7-layer residual neural network He et al. (2016). For clarity, the specific architecture of the neural networks and the preprocessing of the data are the same as in Cao et al. (2019). Relatively simple models are used to highlight the behaviour of our stochastic bound under different network architectures and different numbers of testing shots k ∈ {1, · · · , 5}. PCA ProtoNet Cao et al. (2019) uses principal component analysis to keep only the leading d = 60 dimensions as inputs while zeroing out the rest, and the Mixed ProtoNet is a standard prototypical network trained with a randomized number of shots in the range [1, 5].
We report the misclassification percentage, i.e. $100\,p$ where $p$ is the probability of misclassification. The misclassification percentage increases as the number of shots increases, both in the training and in the testing phase: the quantity $3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2$ varies inversely with $\frac{1}{k}$ for a fixed embedding and query sample, hence it increases with the number of shots. The mixed-shot classification percentage is also higher due to the heterogeneous nature of the data, which carries more information about the underlying distribution.
MODEL CONFIGURATION: Vanilla ProtoNet is used as our baseline. We present the performance of multiple ProtoNets trained with different shots to illustrate the performance-degradation issue. ProtoNet-PCA uses principal components of the training-split embeddings, with components other than the d leading ones zeroed out. We carry out a parameter sweep on miniImageNet and set d = 60; the same value is used on the other two data sets. For the training shot of the embedding network, we find overall performance to be optimal at k = 5. We set R = 0.001 and N = 85, randomly choose R_i from a sphere of radius 1, and set D = 60 based on performance on miniImageNet.
We observe that matching the training shot to the test shot generally provides the best performance for vanilla ProtoNets. Importantly, training with a mixture of different values of k does not provide optimal performance when evaluated on the same mixture of k values; instead, the resulting performance is mediocre at all test shots. We obtain the PCA of the embedded data by eigendecomposing the covariance matrix of the embeddings, which yields the principal components, expressed as the significant eigenvalues, and the principal directions, expressed as the corresponding eigenvectors. The number of significant eigenvalues approximates the intrinsic dimension of the embedding space. When the subspace is linear, this approximation is exact; otherwise, it serves as an upper bound on the true intrinsic dimension Fukunaga & Olsen (1971).
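A minimal sketch of the intrinsic-dimension estimate described above: eigendecompose the covariance of the embeddings and count the leading eigenvalues. The variance threshold and the toy data are our own illustrative assumptions.

```python
import numpy as np

def estimate_intrinsic_dimension(embeddings, variance_threshold=0.95):
    """Number of leading eigenvalues of the embedding covariance explaining
    `variance_threshold` of the total variance; an upper bound on the true
    intrinsic dimension when the subspace is non-linear."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]            # descending order
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, variance_threshold) + 1)

# Toy embeddings lying (up to noise) in a 10-dimensional subspace of R^64.
toy = np.random.randn(1000, 10) @ np.random.randn(10, 64)
print(estimate_intrinsic_dimension(toy))   # close to 10
```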
6 CONCLUSION AND FUTURE WORK
In this paper we provide a novel bound on the generalization error of the N-way k-shot classification task using prototypical networks. This is crucial because existing bounds hold only for large samples of data and hence cannot be applied to k-shot learning, where data samples are limited. We also incorporate the prototypical architecture of the network when deriving the error probability of the classification task, making the bound considerably more accurate for few-shot learning. We do not assume a homogeneous distribution of samples during classification, which makes the result more relevant to practical applications. The sharpness and accuracy of our bound are also demonstrated on several data sets in the experimental section.
Future work includes obtaining a framework to analyze the best possible architecture for k-shot learning specific to a given data set. At present we study classification for a fixed architecture; deriving, mathematically, the architecture that is best from a generalization perspective would be more relevant and would also be an efficient way of learning the inner workings of k-shot learning. We would like to work in this direction. | 1. What is the main contribution of the paper regarding the stochastic generalization bound for prototypical neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in its notation, conceptualization, and relevance to previous works?
3. Do you have any questions or concerns about the experiment results and their relation to the Wasserstein bound discussed in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This submission studied the stochastic generalization bound for prototypical neural networks and proposed a generalization bound based on Wasserstein distance. Some experiment results are provided (I am not sure these results supported the claims made).
Strengths And Weaknesses
Weakness.
Notations and concepts like “R is the radius of the support set”, “$E(Z_i, Z_i^*)$”, “$\overline{\phi(S_i)}$” in Lemma 1 are not defined in the paper (and they are not standard well-defined math notation). Consequently I am unable to understand the results. Also, what is the meaning of $v(Z)$?
It is not clear how the experiment results are relevant to the Wasserstein bound discussed in the paper
Another limitation is that the analyses seem to focus on a specific family of few-shot learners, namely the prototypical net. This should be more clear in the title.
I don’t see how the discussion taking up the entire page 4 and the detailed proof serve to help reader better understand this paper. I suggest defer these contents to the Appendix and use the space to provide more intuitive explanation of what this paper trying to achieve, and how the proposed bounds help.
Wasserstein bounds for generalization error has been very-well studied (e.g., see refs in Shalit, et al. (2016) Counterfactual regression (CFR) paper), yet these works are not discussed or cited in this work. Although they are not specific for few-shot learning, they are highly relevant.
My understanding is that $\phi(S)$ is a distribution; then in Thm 2, what does it mean by saying “a query sample comes from a sphere of radius R”? How do you compute the Wasserstein distance between a sample and a distribution?
In Thm 2, why does the query sample need to come from the sphere of radius $R$, not from inside a ball of radius $R$?
What is that “3.2.” in the expression $3.2\,(1+\frac{1}{k})(R+R_i)$ (Eq. (26), (37), (38))?
What does it mean by error classification percentage? Based on the results in Table 1-2, they look like accuracy to me.
Table 2 seems to have some formatting issues.
In the abstract the author(s) noted “Comparison with previous generalization bounds shows the efficacy of our approach”, I could not find such comparison in this submission
This submission exceeds the page limit.
Johansson, Shalit & Sontag (2016) Learning representations for counterfactual inference. ICML 2016
Clarity, Quality, Novelty And Reproducibility
Clarity. Poor
There are lots typos (e.g., “enough enough”, “(i.e.)”, “K—L divergence”, “diatnce”, etc.) and undefined notations in this submission, making it difficult to follow the content. Also, while the author(s) have claimed that their experiment results “illustrates the advantage of their bound”, I simply did not get it. Also, to verify the “sharpness” of the bound, the author(s) should at least compare the predicted risk with the empirical risk, which is not the case in the work.
Quality. Poor
With the current presentation, I find it difficult to follow the author(s)‘ logic and development. The experimental results are not clearly explained and analyzed.
Novelty. NA
I am unable to evaluate this dimension because I can not say I have a fair understanding of this paper.
Reproducibility. Poor
Definition of several key concepts missing, so it is hard to verify the theoretical claims made in the paper. I cross-compared the reported results with the results from Cao et al. (2019), as the author(s) have claimed to follow the experimental setup of that work. I noticed for the same experiments, some numbers reported here are exactly the same as those reported in Cao et al. (2019), but some are very different. This is weird, because if the author(s) had run all experiments from scratch, then I would expect the results to be similar but not exactly the same. Please clarify.
ICLR | Title
Wasserstein Generalization Bound for Few-Shot Learning
Abstract
In the absence of large quantities of annotated data, few-shot learning is used to train neural networks that make predictions based on similarities between datapoints. To better understand how models behave when presented with unfamiliar data, research on generalization bounds has revealed some important properties about deep neural networks. However, when extended to the domain of few-shot learning such analyses often yield loose bounds since they do not take into account the nature and methodology of few-shot learning. We propose a novel stochastic generalization bound for prototypical neural networks by constructing a Wasserstein sphere centered around the distribution of weight matrices. We show that by applying concentration inequalities on the distribution of weight matrices in the Wasserstein sphere, stricter generalization bounds can be obtained. Comparison with previous generalization bounds shows the efficacy of our approach, and to our knowledge this is the first bound that makes use of the Wasserstein distance to give a measure of the generalizability of deep neural networks.
1 INTRODUCTION
The problem of finding sharp generalization bounds for deep neural networks is of prominent importance as it allows us to bound the overall uncertainty involved in their application. In recent times the theoretical properties of these bounds have received increased attention and have been an active subject of investigation. Various classical results exploring the expressivity of neural networks have acknowledged their universality Leshno et al. (1993) and their unexpected advantage over hand-crafted features Barron (1993), even though training of neural networks itself is a hard problem Blum & Rivest (1992). Other studies have also revealed that deep neural networks may have structural properties that enable them to perform non-convex optimization Choromanska et al. (2015); Kawaguchi (2016), further alluding to the fact that given enough data these models can learn any function Cybenko (1989). However, simply possessing such desirable properties does not guarantee that the models will perform accurately on future unknown inputs, because without proper restrictions on the optimization the models become prone to over-fitting; effectively addressing this challenge leads us to study the generalization of these models. However, though there exists a great body of research pertaining to the generalization of classification models, relatively little is known about the generalization properties of meta-learning models Vanschoren (2019), specifically few-shot learning (FSL) Wang et al. (2020).
In this paper we study the generalization of FSL specifically that of Prototypical Networks Snell et al. (2017). By leveraging stochastic bounds from classic PAC learning theory Vapnik et al. (1994) we derive a Wasserstein bound on the probability of the absolute difference between the true and the empirical error deviating from a established threshold. Some of the most sharp generalization bounds are obtained using the PAC-Bayesian Framework McAllester (1998; 1999) and in this work we make use of it to derive a stochastic bound for FSL involving Prototypical networks. However, the standard PAC-Bayesian framework relies
on the KL divergence between some prior distribution of set of classifiers and data distribution, our work leverages the Wasserstein metric Vallender (1974) to obtain a better bound. The unique nature of FSL is in stark contrast to traditional task of classification and when combining them with the methodology used in classification of prototypical networks we are able to obtain a tight stochastic bound.
Moreover, prior works assume a homogeneous nature of the data samples when deriving the bound; we do not impose any such restriction, and our final bound involves the deviation of the final distribution from the initial distribution in the Wasserstein metric.
2 RELATED WORK
Much classical theory attributes generalization ability to understanding class capacity Vapnik (1999); Mohri et al. (2018). Recent work on deep hypothesis spaces Pascanu et al. (2013); Montufar et al. (2014); Livni et al. (2014); Telgarsky (2016) also revealed that deep neural networks can perform convex optimization, thereby being able to generalize over a vast set of datapoints. The generalization error bound of Harvey et al. (2017) showed that the VC dimension of a neural network depends on the product of its depth and number of parameters, considerably improving the previous bounds given by Bartlett et al. (1998). Feed-forward neural networks were shown to have a unit-wise ℓ1-norm generalization bound with exponential dependence on depth. A sharpness-based measure was suggested by Keskar et al. (2016) to predict the difference in generalization behaviours of networks trained with different batch-size SGD. More recent PAC-Bayesian approaches Neyshabur et al. (2017); Nagarajan & Kolter (2019) also gave very sharp bounds utilizing the spectral and Frobenius norms of weight matrices. In the domain of few-shot learning, Cao et al. (2019) provided a framework to obtain the optimal k-shot setting for prototypical networks.
3 BACKGROUND
3.1 PROBLEM SETUP
Consider N distinct classes being sampled i.i.d. from the set of all possible classes C for an N -way classification problem. For each class ci ∈ {c1, c2, . . . , cN} k datapoints are sampled i.i.d. from the class conditional distribution p(x|Y (x) = ci), where x ∈ RD, Y (x) is the class assignment of x and D is the dimension of the data.
The k datapoints constitute the support set of the class ci : Si = {x1, . . . xk} where Y (xj) = ci and xj ∈ Si for all ci ∈ C. Given a datapoint (xj , yj), where yj ∈ {c1, c2, . . . , cN} and xj /∈ {S1, . . . , Sk}, the few shot classification task is to predict the correct assignment label yj using S = ⋃N i=1 Si.
3.2 PROTOTYPICAL NETWORKS
Prototypical Networks Snell et al. (2017) are trained to learn a low-dimensional representation of the data, i.e., they learn a function $\phi : \mathbb{R}^D \to \mathbb{R}^M$, where $M$ is the dimension of the representation space. The prototype representation of each class, $\phi(S_i)$, is generated by taking the average of the representations of its support set:
$\phi(S_i) = \frac{1}{k}\sum_{x \in S_i}\phi(x)$  (1)
Classification of input x is obtained by taking the softmax of the distance between the input embedding and the prototype representation of each class:
$p_\phi(y = j \mid x, \mathcal{S}) = \frac{\exp(-d(\phi(x), \phi(S_j)))}{\sum_{i=1}^{N}\exp(-d(\phi(x), \phi(S_i)))}$  (2)
where d is a distance function : RM × RM −→ [0,+∞).
Most applications, including our approach, use the Euclidean distance as the distance function. Learning proceeds by minimizing the negative log-probability $J(\phi) = -\log p_\phi(\hat{y} = j \mid x)$ via SGD, and the function $\phi$ is generally a deep neural network.
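A minimal sketch of the classification rule in Equations (1)–(2), using squared Euclidean distance; the tensor shapes and names below are illustrative, not taken from any reference implementation.

```python
import torch
import torch.nn.functional as F

def prototypical_predict(query_emb, support_embs):
    """query_emb: (M,) embedding of the query sample.
    support_embs: (N, k, M) embeddings of the k-shot support set per class."""
    prototypes = support_embs.mean(dim=1)                              # Eq. (1): (N, M)
    dists = ((query_emb.unsqueeze(0) - prototypes) ** 2).sum(dim=1)    # (N,)
    return F.softmax(-dists, dim=0)                                    # Eq. (2)

probs = prototypical_predict(torch.randn(16), torch.randn(5, 3, 16))   # 5-way, 3-shot
print(probs)
```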
3.3 WASSERSTEIN BALL AND TOTAL VARIATION
3.4 WASSERSTEIN BALL
Given the space of all probability distributions P with compact support set, the pth Wasserstein metric on the space P is defined as:
Wp(ν, µ) = (inf E[d(X,Y )p])1/p (3)
where X and Y are random variables with marginals µ and v and infimum is taken over all possible joint distributions of X and Y . For our analysis we focus on the first order Wasserstein distance by taking the distance measure d as the Manhattan distance:
W (ν, µ) = ( inf E [ ∥(X,Y )∥1 ]) (4)
The rationale for using Wasserstein distance is that it gives a metric to measure the minimum difference between two distributions which we use to obtain a sharper generalization bound. Consequently, a wasserstein ball of radius R centered around µ is defined as:
Wµ(R) = { v ∈ P ∣∣W (ν, µ) ≤ R} (5)
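For one-dimensional empirical distributions the first-order Wasserstein distance of Equation (4) can be computed directly with standard tooling; the sketch below (SciPy, with illustrative samples) also checks membership in the ball of Equation (5). The sample sizes and radius are assumptions for the example only.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples from the centre mu
nu_samples = rng.normal(loc=0.1, scale=1.0, size=5000)   # samples from a candidate nu

R = 0.2
w1 = wasserstein_distance(mu_samples, nu_samples)        # empirical W(nu, mu)
print(f"W(nu, mu) = {w1:.3f}, inside ball of radius {R}: {w1 <= R}")
```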
3.5 TOTAL VARIATION
The total variation distance between two distributions ν and µ is given by
$\delta(\nu, \mu) = \sup_{A \in \mathcal{P}} |\nu(A) - \mu(A)|$  (6)
Intuitively, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. In some cases we can also have the below relation
$\delta(\nu, \mu) = \frac{1}{2}\|\nu - \mu\|_1$  (7)
This is similar to the Wasserstein distance in many respects: the total variation distance (or half the $\ell_1$ norm) arises as the optimal transportation cost when the cost function is $c(x, y) = \mathbb{1}_{x \ne y}$, that is,
$\frac{1}{2}\|\nu - \mu\|_1 = \delta(\nu, \mu) = \inf_{\pi} P(\nu \ne \mu) = \inf_{\pi} E_\pi[\mathbb{1}_{\nu \ne \mu}]$  (8)
It differs, however, in acting on the distributions directly rather than on their supports, which makes it less suitable than the Wasserstein distance for our case. We use it mainly to compare the Wasserstein distance with the KL divergence, which otherwise cannot be compared mathematically with the Wasserstein metric.
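For discrete distributions on a common finite support, the supremum in Equation (6) reduces to half the ℓ1 distance of Equation (7). A minimal sketch (the example distributions are illustrative):

```python
import numpy as np

def total_variation(p, q):
    """Total variation distance between two discrete distributions p and q
    defined on the same finite support (Eq. (7))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

print(total_variation([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))   # 0.1
```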
4 WASSERSTEIN BOUND
Prototypical networks make predictions based on the nearest neighbour from the support set S and the embedding function ϕ. Thus, to formulate the relationship between the complexity of the classifier and the support set, we make use of classical PAC learning theory Vapnik et al. (1994). Consider the simple binary classification problem, where the probability of the difference between the true and empirical error is bounded by:
P ( ||errtrue(h)− errtrain(h)||1 ≤ ξ ) ≥ 1− δ (9)
where h is the classifier, errtrue is true error, errtrain is the empirical training error obtained on the support set S, 0 ≤ δ ≤ 1 and ξ is defined as:
$\xi \triangleq \sqrt{\frac{D\left(\ln\frac{4k}{D} + 1\right) + \ln\frac{4}{\delta}}{2k}}$  (10)
where D is the VC dimension. The $\ell_1$ metric is conventionally used to measure the deviation, but metrics which do not over-estimate are crucial for accurate prediction of the error difference. A sharper bound which is symmetric in the distributions is therefore necessary for studying generalization in few-shot learning.
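A small sketch evaluating ξ from Equation (10), reading the flattened expression as $\ln\frac{4k}{D}$ and $\ln\frac{4}{\delta}$ over a denominator of $2k$ (this reading, and the numeric values, are our assumptions):

```python
import math

def xi_bound(D, k, delta):
    """Threshold xi of Eq. (10): with probability at least 1 - delta the gap
    between true and empirical error is at most xi."""
    return math.sqrt((D * (math.log(4 * k / D) + 1) + math.log(4 / delta)) / (2 * k))

for k in (1, 5, 20):
    print(k, round(xi_bound(D=10, k=k, delta=0.05), 3))   # bound shrinks as k grows
```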
Li showed that effective prediction by neural networks is generally the result of the final layers of the network where the embeddings are split apart to facilitate effective linear classification. However, the embeddings learnt by the networks before the final layer can be very compact in some high dimensional vector space. Consider two distributions ν and µ from this compact embeddings, the KL divergence of these two distributions is given by:
KL(ν||µ) = ∫ x µ(x) ln ( ν(x) µ(x) ) (11)
In the context of few-shot learning the embeddings $\phi(S_i)$ may be close enough that their class-conditional distributions are very similar, i.e.
$KL(\nu\|\mu) = \lim_{\nu \to \mu} KL(\nu\|\mu) = \lim_{\nu \to \mu} \int_x \mu(x)\ln\left(\frac{\nu(x)}{\mu(x)}\right)$  (12)
By the monotone convergence theorem, for finite measures equation (12) can be written as:
$KL(\nu\|\mu) = \int_x \lim_{\nu \to \mu} \mu(x)\ln\left(\frac{\nu(x)}{\mu(x)}\right) = 0$  (13)
As we can see from Equation (13), the KL divergence can be quite inaccurate in capturing the distance between class-conditional distributions as it tends to 0, which is further exacerbated by the log factor in its formulation. The Wasserstein metric is preferable in this regard to the Kullback–Leibler (KL) divergence, as it overcomes this problem of magnitude reduction by projecting into a higher-dimensional product measure space and capturing the distance effectively Otto & Villani (2000):
$W(\nu, \mu) = \sqrt{\inf_{\pi \in \Pi(\nu,\mu)} \int_{M \times M} d(x, y)^2 \, d\pi(x, y)}$  (14)
where ∏ (ν, µ) denotes the set of probability measures on M×M where M is some finite dimensional vector space. More specifically, for any two distributions ν and µ by Equation (11) and (14) we have the following
inequalities demonstrating the sharpness of the Wasserstein metric in comparison to KL-divergence in the limiting cases:
$\frac{1}{2}d_{TV}(\nu, \mu) < \sqrt{KL(\nu, \mu)}$  (15)
$W(\nu, \mu) \le O(d_{TV}(\nu, \mu))$  (16)
From Equations (15) and (16) we can conclude that $W(\nu, \mu) \le \sqrt{KL(\nu, \mu)}$. Therefore, the use of the Wasserstein distance is more appropriate in the present context of few-shot generalization.
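The limiting behaviour discussed above can be illustrated numerically: for two nearby one-dimensional Gaussians, the closed-form KL divergence shrinks quadratically in the mean shift while an empirical W1 estimate shrinks linearly (up to sampling noise at very small shifts). All values below are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def kl_gaussians(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)) in closed form."""
    return np.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, 20000)
for shift in (1.0, 0.1, 0.01):
    nu = rng.normal(shift, 1.0, 20000)
    print(shift, kl_gaussians(shift, 1.0, 0.0, 1.0), wasserstein_distance(nu, mu))
```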
Lemma 1. Given a prototypical network $\phi_\theta$ with an $N$-way $k$-shot classification task and a query sample $x_q \in \mathbb{R}^D$ with support set $\mathcal{S}$, let $R$ be the radius of this support set and $R_i$ the radius of the Wasserstein ball for class $c_i$ centered around $\phi(S_i)$. For $Z_i = \phi(x_q) - \phi(S_i)$, defining $v(Z_i) = \|E(Z_i Z_i^*)\|$, $v(Z_i)$ can be simplified as
$v(Z) = 2\left(1 + \frac{1}{k}\right)(R + R_i)$  (17)
Proof. First, from the definition of $v(Z_i)$, taking the expectation over $x$ and $S_i$ (with $x_q \in \mathbb{R}^D$), we get
$v(Z_i) = E_{x,S_i}[(\phi(x) - \phi(S_i))\cdot(\phi(x) - \phi(S_i))]$  (18)
and we split this into two parts and examine them separately. In general, from probability theory, for a random vector $X$ the expectation of the quadratic is $E[\|X\|^2] = \mathrm{Tr}(\mathrm{Var}(X)) + E[X]^T E[X]$. Hence,
$v(Z_i) = E[\|\phi(x) - \phi(S_i)\|^2] = \mathrm{Tr}(\Sigma_{\phi(x)-\phi(S_i)}) + E[\phi(x) - \phi(S_i)]^* E[\phi(x) - \phi(S_i)],$  (19)
where the first term inside the trace can be expanded as:
$\Sigma_{\phi(x)-\phi(S_i)} = \mathrm{Var}[\phi(x) - \phi(S_i)]$
$= E[(\phi(x) - \phi(S_i))(\phi(x) - \phi(S_i))^T] - (\mu_a - \mu_b)(\mu_a - \mu_b)^T$
$= \Sigma_c + \mu_a\mu_a^T + \frac{1}{k}\Sigma_c + \mu_b\mu_b^T - \mu_a\mu_b^T - \mu_b\mu_a^T - (\mu_a - \mu_b)(\mu_a - \mu_b)^T$
$= \left(1 + \frac{1}{k}\right)\Sigma_c$ (the remaining terms cancel out).  (20)
By linearity of the trace we obtain the following from equation (20):
$\mathrm{Tr}(\Sigma_{\phi(x)-\phi(S_i)}) = \left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c)$  (21)
We note that Var(X) = E[XXT ] − E[X]E[X]T and Σc ≜ Var(ϕ(x) + ϕ(Si)). Hence, equation (20) is obtained by expanding out the first term and taking the expectation of each resulting item. The second term of Equation (19) is rewritten for notational convenience as :
$E_{x,S}[\phi(x) - \phi(S_i)] = \mu_a - \mu_b.$  (22)
Putting them together:
$v(Z_i) = \left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c) + (\mu_a - \mu_b)^T(\mu_a - \mu_b)$  (23)
Similarly, for $E[\phi(x) - \phi(S_i)]^* E[\phi(x) - \phi(S_i)]$ we have:
$E[\phi(x) - \phi(S_i)]^* E[\phi(x) - \phi(S_i)] = E_{x,S}[\|\phi(x) - \phi(S_i)\|^2] = \mathrm{Tr}(\Sigma_{\phi(x)-\phi(S_i)}) + E[\phi(x) - \phi(S_i)]^* E[\phi(x) - \phi(S_i)] = \left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c).$  (24)
Putting together equation (21) and equation (24):
$E_{x,S} = \left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c) + (\mu_a - \mu_b)(\mu_a - \mu_b)^T + \left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c) = (\mu_a - \mu_b)^T(\mu_a - \mu_b) + 2\left(1 + \frac{1}{k}\right)\mathrm{Tr}(\Sigma_c).$  (25)
We note that $\mu_a^T\mu_a$ and $\mu_b^T\mu_b$ are quadratic forms, while $\mu_a^T\mu_b$ is the dot product between two independently drawn samples and therefore has expectation 0, since we assume all the random variables involved are centered at 0. By the i.i.d. assumption on the random variables, the off-diagonal terms of $\Sigma$ are zero, hence the trace is simply the variance of the random vector.
The variance is in turn bounded by the largest possible deviation in the embedding space, so under the assumptions made in the lemma the final expression can be written as $2(1 + \frac{1}{k})(R+R_i)$.
For proof of Lemma 1, we first re-state the result on quadratic forms of normally distributed random vectors by Rencher & Schaalje (2008).
Theorem 2. Given a prototypical network ϕθ with a N -way k-shot classification task, a query sample xq ∈ RD with support set S, comes from a sphere of radius R, then the probability the model correctly predicts the class assignment bounded by:
$p_\phi(y = j \mid x_q, \mathcal{S}) \;\le\; \prod_{i=1}^{N}\Big\{ 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)\Big\}$  (26)
where Ri is the radius of the Wasserstein ball centered around ϕ(Si) measured in wasserstein metric and includes class ci ,i.e ci ∈ Wϕ(Si)(Ri)and L = max(R1, . . . , RN ), Y (xq) = j
Proof. The Wasserstein ball for each class ci is given by equation (5): Wϕ(Si)(Ri) = { v ∈ Pci ∣∣W (µ, v) ≤ Ri} (27) where Pci is the class conditional distribution. For the prototypical network ϕθ to correctly predict Y (xq) the representational embedding of xq must be closer to ϕ(Sj), i.e. ϕ(xq) should be closer to the center of the Wasserstein ball Wϕ(Sj) than any other Wϕ(Si) for all m ∈ {1, . . . , N} and j ̸= m:
pϕ(y = j|xq,S) ≥ pϕ(y = m|xq,S) (28)
For the network to generalize to previously unseen query samples equation (28) should hold true. Therefore:
exp (−d (ϕ (x) , ϕ (Sj))) ≥ exp (−d (ϕ (x) , ϕ (Sm))) (29)
Since the classification depends only on the distance between the representational embeddings, Equation (29) can we rewritten as:
P ( ||ϕ(xq)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(Sp)|| ) (30)
for p = 1, 2, ...N
As ϕ(x), S1, . . . , SN are random we will consider expected value rather than the exact random value , now we get
P (||ϕ(x)− Eϕ(Sj)||) ≥ P (||ϕ(x)− Eϕ(Sp)||) (31)
Now, the probability that xq is correctly classified by the model is given by:
P (y = j|xq,S) = P (||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(S1)||, ||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(S2)||, ... ||ϕ(x)− ϕ(Sj)|| > ||ϕ(x)− ϕ(SN )||)
(32)
Next we note that that the sampling was i.i.d so we can split the RHS of Equation (32) into product of several probabilities:
P (y = j|xq,S) =P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(S1)||) P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(S2)||) ... P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(SN )||)
(33)
Applying Bernstein’s inequality to the ith probability in the product of probabilities in Equation 33 i.e., on P (||ϕ(x)− ϕ(Sj)|| ≥ ||ϕ(x)− ϕ(Si)||) we get the following bound:
$P(\|\phi(x)-\phi(S_j)\| \ge \|\phi(x)-\phi(S_i)\|) \;\le\; 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3v(Z) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)$  (34)
where v(Z) = E[ϕ(x)−ϕ(Sj)(ϕ(x)−ϕ(Sj)∗] and Lis a quantity which bounds all the random embeddings of the classes in embedding space , (i.e) L ≥ ||ϕ(Si)|| , we hence choose L = max(R1, . . . , RN ) , as all the embeddings lie in the sphere of radius Ri this quantity bounds all of them. Also , by triangle inequality we have
L ≥ ||ϕ(Si)− ϕ(Si)|| ≥ ||ϕ(Si)|| − ||ϕ(Si)|| (35)
After applying the basic assumptions that query sample is uncorrelated with the support sets Si , we can now use lemma(1) to further simplify this to
v(Z) = 3.2.(1 + 1
k )(R+Rj) (36)
Now by using equation(35) and equation(36)we can rewrite Equation (34) as:
$P(\|\phi(x)-\phi(S_j)\| \ge \|\phi(x)-\phi(S_1)\|) \;\le\; 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)$  (37)
now in similar way applying this for all probabilities in Equation 32 factors we get the final expression of Equation (38).
$p_\phi(y = j \mid x_q, \mathcal{S}) \;\le\; \prod_{i=1}^{N}\Big\{ 1-(1+D)\exp\Big(-\frac{3}{2}\cdot\frac{(\|\phi_\theta(x_q)\|_2 - R_i)^2}{3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2}\Big)\Big\}$  (38)
5 EXPERIMENTS
In this section, we present our result illustrating the advantage of our bound on following datasets: Omniglot Lake et al. (2015), miniImageNet Vinyals et al. (2016) and tieredImageNet Ren et al. (2018). In table (1) all experiments are performed on a 4 layer neural network, similar to that used by Snell et al. and 7 layer residual neural network He et al. (2016). For the purpose of clarity the specific architecture of the neural networks and the preprocessing of the data is the same as that used by Cao et al. (2019). Relatively simple models are used to highlight the behaviour of our stochastic bound given different network architecture and difference in testing shots k ∈ {1, · · · , 5}. PCA Protonet Cao et al. (2019) uses principal component analysis to consider only the resulting leading d = 60 dimensions as inputs while zeroing out the rest and the Mixed Protonet is a standard prototypical network trained with a randomized number of shots in the range [1, 5].
We report the misclassification percentage, i.e. $100\,p$ where $p$ is the probability of misclassification. The misclassification percentage increases as the number of shots increases, both in the training and in the testing phase: the quantity $3\cdot 2\,(1+\frac{1}{k})(R+R_i) - L\,(\|\phi_\theta(x_q)\|_2 - R_i)^2$ varies inversely with $\frac{1}{k}$ for a fixed embedding and query sample, hence it increases with the number of shots. The mixed-shot classification percentage is also higher due to the heterogeneous nature of the data, which carries more information about the underlying distribution.
MODEL CONFIGURATION: Vanilla ProtoNet is used as our baseline . We present the performance of multiple ProtoNets trained with different shots to illustrate the performance degradation issue. ProtoNetPCA uses principal components of the training split embeddings , with components other than the D leading ones zeroed out. We carry out a parameter sweep on miniImageNet and set d = 60; the same value is used on the other two data sets. For selecting the training shot of the embedding network, we find that overall performance to be optimal using k = 5. we set R = 0.001 N = 85 and randomly choose Ri from a sphere of radius 1 and D = 60 based on performance on miniImageNet
We observe that matching the training shot to the test shot generally provides the best performance for vanilla ProtoNets. Also importantly, training with a mixture of different values of k does not provide optimal performance when evaluated on the same mixture of k values. Instead, the resulting performance is mediocre in all test shots.We obtain the PCA of the embedded data by eigendecomposing the covariance matrix of embeddings, we obtain the principal components expressed as the significant eigenvalues, and the principal directions expressed as the eigenvectors corresponding to those eigenvalues. The number of significant eigenvalues approximates the intrinsic dimension of the embedding space. When the subspace is linear, this approximation is exact; otherwise, it serves as an upper bound to the true intrinsic dimension Fukunaga & Olsen (1971)
6 CONCLUSION AND FUTURE WORK
In this paper we provide a novel bound on the generalization error on the N way k shot classification task using prototypical networks, which is crucial in the sense that existing works hold for large samples of
data and hence cannot be applied to k shot learning , where data samples are limited . We also integrate the prototypical architecture of the network in obtaining the error probability of the task classification hence making it much more accurate for few shot learning. We do not assume homogeneous distribution of samples while classification , making it more relevant to practical applications.Sharpness and accuracy of our bound is also demonstrated on various data sets in the experimental section.
Future work includes obtaining a framework to analyze best possible architecture for k -shot learning specific to the data sets wherein presently , we study classification pertaining to a given architecture , however trying to obtain the best possible architecture mathematically which is better in generalization perspective would
be much more relevant and also efficient way of learning inner working of K shot learning. We would like to work in this direction. | 1. What is the focus of the paper regarding few-shot classification tasks?
2. What are the strengths of the proposed approach, particularly in terms of its ability to provide bounds on generalization errors?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and experimental demonstrations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper states a bound on the generalization error of an N way few-shot classification task using prototypical networks. It also investigates the prototypical architecture of the network in obtaining the error probability of the task classification.
Strengths And Weaknesses
Strength: The paper tackles a seemingly challenging problem. The Wasserstein bounds seem nontrivial. In the experiments, the authors illustrate the advantage of their bounds on several datasets including Omniglot, miniImageNet, and tierredImageNet. They show experiments on a four layer neural net and a 7 layer resnet. They demonstrate the error classification percentage for various prototypical variants. These experiments demonstrate the sharpness and accuracy of their bounds.
Weakness: (i) The paper could benefit from a more clear comparison with prior results/generalization bounds in related settings; (ii) The experiments could be improved to better demonstrate the utility of the provided bounds.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow for the reviewer. There is no code associated with the submission so it's unclear whether the experiments can be reproduced. The theoretical statements are accompanied with full proofs. |
ICLR | Title
Imitate Your Own Refinement: Knowledge Distillation Sheds Light on Efficient Image-to-Image Translation
Abstract
The excellent performance of the state-of-the-art Generative Adversarial Networks (GANs) is always accompanied by enormous parameters and computations, making them unaffordable on resource-limited mobile devices. As an effective model compression technique, knowledge distillation (KD) has been proposed to transfer the knowledge from a cumbersome teacher to a lightweight student. Following its success on classification, some recent works have applied KD to GAN-based image-to-image translation but lead to unsatisfactory performance. In this paper, to tackle this challenge, we propose a novel knowledge distillation framework named IYOR (Imitate Your Own Refinement), which consists of the following two techniques. Firstly, since image-to-image translation is an ill-posed problem, knowledge distillation on image-to-image translation may force the student to learn the average results between multiple correct answers and thus harm student performance. To address this problem, we propose to replace the teacher network in knowledge distillation with a refining network, which is trained to refine the images generated by the student to make them more realistic. During the training period, the refining network and the student are trained simultaneously, and the student is trained to imitate the refined results in a knowledge distillation manner. Secondly, instead of only distilling the knowledge in the generated images, we propose SIFT KD, which firstly extracts the distinctive and scaleinvariant features of the generated images with Scale-invariant feature transform (SIFT), and then distills them from the refining network to the student. Extensive experimental results demonstrate the effectiveness of our method on five datasets with nine previous knowledge distillation methods. Our codes are available in the supplementary material and will be released on Github.
1 INTRODUCTION
In the last decade, Generative Adversarial Networks (GANs) have evolved to one of the most dominated methods for content generation of images (Isola et al., 2017; Zhu et al., 2017a), videos (Vondrick et al., 2016), text (Zhang et al., 2016), audios (Kong et al., 2020), graphs (Wang et al., 2018a), point clouds (Li et al., 2019) and multi-modal systems (Zhu et al., 2017b). Their remarkable ability of representation and generation has significantly boosted the performance of image-to-image translation and further promoted their usage in real-world applications. Despite their impressive performance, GANs models usually suffer from massive parameters and computation, which have limited them to deploy on resource-restricted platforms such as mobile phones. This problem further raises the research trend in model compression such as network pruning (Buciluǎ et al., 2006; He et al., 2018a; 2017), weights quantization (Lee et al., 2019; Nagel et al., 2019), lightweight model design (Ma et al., 2018; Sandler et al., 2018; Howard et al., 2017), neural network architecture search (Howard et al., 2019; He et al., 2018b), and knowledge distillation (Hinton et al., 2014).
Knowledge distillation (KD), which aims to improve the performance of lightweight students by transferring knowledge from an over-parameterized teacher model, has become a popular technique for model compression. By imitating the prediction results and the intermediate features of teachers, students can achieve significant performance improvements. Following its success in image classification (Hinton et al., 2014; Zhang et al., 2020), object detection (Zhang & Ma, 2021) and semantic
segmentation (Yang et al., 2022), Recently, some researchers have tried to apply knowledge distillation to image-to-image translation by training students to mimic the images generated by the teachers. Unfortunately, these trials usually lead to limited and even sometimes negative performance (Li et al., 2020c; Zhang et al., 2022). Some works have been proposed to distill teacher knowledge in their features and lead to positive effectiveness (Ren et al., 2021; Li et al., 2020c). However, there is still no analysis on the reason that why traditional image-based knowledge distillation fails.
In this paper, we mainly impute the unsatisfactory performance of naive knowledge distillation to the ill-posed property of image-to-image translation. Unlike image classification, where each image always has a unique categorical label, an image can have multiple different but correct posttranslation answers in image-to-image translation. For example, in Edge→Shoe translation (i.e., translating edges of shoes to photos), given an input image of edges, there are multiple corresponding images of shoes with different colors, styles, and contents. All of these images can be correct answers while the average of them may have low quality. Unfortunately, in traditional KD, the student and teacher are likely to give two different but correct predictions for the same input image. In this case, the knowledge distillation loss forces the students to learn the average between the student outputs and the teacher outputs, which can harm student performance acutely. In contrast, the ideal case to avoid this problem is to guarantee that the student and teacher output the consistent answers for the input image. However, this assumption does not always hold since the student and the teacher in traditional KD are two independent image-to-image translation models.
To address this problem, we propose IYOR (Imitate Your Own Refinement), a generalized knowledge framework which introduces a different manner to build the “teacher network” in knowledge distillation. Taking Edge→Shoe translation as an example, as shown in Figure 1, instead of building a teacher network which translates edges into shoes, IYOR introduces a refining network, which takes the shoe images generated by the student as inputs, refines them, and outputs the images of the shoe which have much better quality. Note that the refining network is trained with the student simultaneously and can be discarded during inference to avoid additional parameters and computations. Since the refining network has much more parameters than the student, this refining process can significantly improve the quality of images generated by students. Hence, the refined results can be considered as the “teacher outputs” in traditional knowledge distillation, and utilized as the learning targets of the students. The major advantage of IYOR is that the refining network is conditioned on the outputs of students, instead of the original inputted images. Hence, the refined results are more likely to be consistent with the student outputs than the teacher outputs in traditional knowledge distillation. As a result, it can alleviate the problem of ineffective knowledge distillation caussed by
the ill-posed property. Extensive experiments show that dramatic performance gain of five datasets can be observed by simply replacing the traditional teacher network with the refining network.
Moreover, instead of directly training the student to imitate the images generated by the refining network pixel by pixel, we further propose SIFT distillation which adopts Scale Invariant Feature Transform (SIFT) (Lowe, 1999), a typical image feature extraction method in traditional image processing to extract the scale-invariant and highly distinctive features of the generated images and then distills them from the refining network to the students. As pointed out by abundant previous research (Lowe, 1999; 2004; Yuan et al., 2008), the features extracted by SIFT are invariant to image scaling, rotation and illumination, and highly distinctive for downstream tasks such as detection and tracking. Hence, these features carry more semantic information of the images, and they are more beneficial in knowledge distillation than traditional pixel-wise imitating. Another advantage of SIFT KD is that SIFT does not contain any trainable parameters, which makes SIFT KD generalize well on different image-to-image translation tasks as a plug-and-play knowledge distillation technique.
Experimental results on five image-to-image translation tasks have demonstrated the performance of IYOR for both paired and unpaired image-to-image translation in terms of both quantitative and qualitative analysis. Despite its simplicity, IYOR outperforms the previous nine knowledge distillation methods by a clear margin. Besides, experimental results also demonstrate that IYOR can be combined with the previous feature-based knowledge distillation methods to achieve better performance. To sum up, our main contributions can be summarized as follows.
• We propose IYOR, a knowledge distillation method for efficient image-to-image translation. To the best of our knowledge, IYOR firstly shows that the most naive image-based knowledge distillation can be effective by replacing the teacher with a refining network.
• We propose SIFT distillation, which adopts SIFT to extract the distinctive and scaleinvariant features of images and distill them from the refining network to the student.
• Extensive experiments on both paired and unpaired translation tasks have demonstrated the performance of IYOR over nine previous methods and five datasets in terms of both quantitative and qualitative results. Our codes have been released for future research.
2 RELATED WORK
2.1 IMAGE-TO-IMAGE TRANSLATION WITH GANS
Remarkable progress has been achieved in image-to-image translation with the rapid development of generative adversarial networks (GANs) (Goodfellow et al., 2014; Brock et al., 2018). Pix2Pix is first proposed to perform paired image-to-image translation with conditional GANs (Isola et al., 2017). Then, Pix2PixHD is proposed to improve the generation quality with multi-scale generators and discriminators (Wang et al., 2018b). The similar idea has also been extended in text-to-image translation (Zhang et al., 2017), multi-modal image-to-image translation (Huang et al., 2018; Zhu et al., 2017c) and applications such as super-resolution and image dehazing (Wang et al., 2018d; Ledig et al., 2017; Zhang et al., 2017). In the real-world applications, the paired image-to-image translation dataset is usually not available. To address this problem, abundant methods have been proposed to perform image-to-image translation on unpaired datasets with cycle-consistency regularization (Zhu et al., 2017a; Yi et al., 2017; Kim et al., 2017). StarGAN is proposed to perform image-to-image translation for multiple domains with a single model (Choi et al., 2018), and StarGAN v2 is proposed to increase the scalability and the diversity of image-to-image translation models at the same time (Choi et al., 2020). Attention based GANs have been widely utilized to improve the performance of image-to-image translation by localizing the to-be-translated regions with attention modules (Tang et al., 2021; Chen et al., 2018; Emami et al., 2020; Alami Mejjati et al., 2018). Recently, some researchers have proposed to replace the convolutional layers in GAN with MLPmixers and vision transformers, which leads to better high-fidelity translation (Wan et al., 2021; Cazenavette & De Guevara, 2021).
2.2 KNOWLEDGE DISTILLATION
The idea that employing a large model to improve the performance of a small model is firstly proposed by Buciluǎ (Buciluǎ et al., 2006) for the compression of neural network ensemble. Then, Hinton et al. propose the concept of knowledge distillation, which introduces a temperature hyperparameter in the softmax layer to flatter teacher prediction (Hinton et al., 2014). Following their
success, many researchers have proposed to not only distill the teacher knowledge in its predicted categorical probability distribution, but also the dark knowledge in features (Romero et al., 2015; Tian et al., 2019), spatial attention (Zagoruyko & Komodakis, 2017), channel-wise attention (Liu et al., 2021a; Shu et al., 2021; Li et al., 2021a), pixel-wise relation (Zhang & Ma, 2021; Li et al., 2020c; Yoon et al., 2020), instance-wise relation (Park et al., 2019b; Tung & Mori, 2019; Peng et al., 2019), task-oriented information (Zhang et al., 2020), decision boundary samples (Heo et al., 2019b), positive feature (Heo et al., 2019a) and frequency-biased information (Zhang et al., 2022) with optimization methods such as L2-norm distance (Romero et al., 2015; Yim et al., 2017), adversarial learning (Shen et al., 2019; Liu et al., 2019a; Xu et al., 2017), and contrastive learning (Tian et al., 2019; Chen et al., 2020b). Besides image classification, knowledge distillation has already been used in model compression for object detection (Chen et al., 2017; Li et al., 2017; Wang et al., 2019; Bajestani & Yang, 2020; Li et al., 2020b), semantic segmentation (Liu et al., 2019b; Park & Heo, 2020), pre-trained language models (Sanh et al., 2019; Xu et al., 2020)s and so on.
Knowledge Distillation on Image-to-Image Translation A few research has been proposed to perform knowledge distillation on image-to-image translation. Li et al. propose the framework of GAN compression, which has applied the classic L2-norm feature distillation on the intermediate neural layers (Li et al., 2020a). However, their results demonstrate that this application leads to unsatisfying performance improvements. Then, Li et al. propose the semantic relation preserving knowledge distillation, which aims to distill the relation between different patches in the generated images instead of the encoded features (Li et al., 2020c). Then, Chen et al. propose to distill image-to-image translation models with knowledge distillation not only generators but also the discriminators (Chen et al., 2020a). Similarly, Li et al. propose to revisit the discriminator in GAN compression, which transfers the knowledge in the teacher discriminator with L2-norm and texture loss (Li et al., 2021b). Jin et al. introduce the centered kernel alignment as the distance metric in knowledge distillation, which does not require additional layers for feature reshaping. Ren et al. propose to train the teacher and student GANs simultaneously, which shows the possibility of online knowledge distillation on image-to-image translation (Ren et al., 2021). Recently, motivated by the fact that tiny GANs work badly in generating high-quality high-frequency information, Zhang et al. propose to distill only the high-frequency information decomposed by discrete wavelet transformation in the images generated by teachers (Zhang et al., 2022). Besides image-to-image translation, there are also some knowledge distillation methods designed for GAN compression on the other tasks (Liu et al., 2021b; Wang et al., 2018c; Aguinaldo et al., 2019). Unfortunately, most of these knowledge distillation methods focus on distilling teacher knowledge in their features, and sufficient evidences show that directly training students to mimic the generated images from teachers leads to insufficient and even negative performance (Li et al., 2020c; Zhang et al., 2022). In contrast, this paper firstly shows that naive image-based distillation can also achieve valuable performance boosts.
3 METHODOLOGY
3.1 KNOWLEDGE DISTILLATION
In this section, we firstly revisit the formulation of knowledge distillation on image classification and then simply extend them to image-to-image translation. Given a set of training samples X = {x1, x2, ..., xn} and the corresponding ground truth Y = {y1, y2, ..., yn}, by denoting the student function and the pre-trained teacher function as fs and ft, then the training loss of classical knowledge distillation method (Hinton et al., 2014) can be formulated as
$\operatorname*{argmin}_{f_s} \; \mathbb{E}_{x,y}\big[(1-\alpha)\cdot CE(f_s(x), y) + \alpha\cdot KL(f_s(x)/\tau, f_t(x)/\tau)\big],$  (1)
where CE and KL indicate cross-entropy loss and the Kullback-Leibler divergence, respectively. τ is the temperature hyper-parameter to soften the probability distribution and α is a hyper-parameter to balance the origin training loss and the knowledge distillation loss. When knowledge distillation is applied to image-to-image translation, since the predictions of students and teachers are the value of pixels instead of probability distributions, KL divergence can be replaced with the L1-norm loss, which is widely utilized in low-level vision. And the cross-entropy loss for classification should be replaced with the GAN training loss. Taking Pix2Pix (Isola et al., 2017) as an example, the knowledge distillation loss (Hinton et al., 2014) for training the generator can be formulated as
$\operatorname*{argmin}_{f_s} \; \mathbb{E}_{x,y}\big[(1-\alpha)\cdot L_1(f_s(x), y) + \alpha\cdot L_1(f_s(x), f_t(x)) + \mathcal{L}_{cGAN}(f_s(x))\big],$  (2)
where L1 indicates the L1-norm loss. LcGAN indicates the conditional GAN loss, which measures how the generated images fool the discriminator. Note that we do not introduce LcGAN and the discriminator of GANs in detail here since they have no direct influence with our method.
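A sketch of the generator-side objective in Equation (2), omitting the adversarial term, which is computed by the usual conditional-GAN loss against the discriminator; `student`, `teacher`, and `alpha` are placeholders rather than names from any released code.

```python
import torch
import torch.nn.functional as F

def generator_kd_loss(student, teacher, x, y, alpha=0.5):
    """L1 reconstruction to ground truth plus L1 distillation towards the
    (frozen) teacher's translated image, as in the first two terms of Eq. (2)."""
    student_out = student(x)
    with torch.no_grad():                      # pre-trained teacher is not updated
        teacher_out = teacher(x)
    recon = F.l1_loss(student_out, y)
    distill = F.l1_loss(student_out, teacher_out)
    return (1 - alpha) * recon + alpha * distill
```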
3.2 IYOR: IMITATE YOUR OWN REFINEMENT
Instead of using two independent neural networks as the student and the teacher we append a refining network fr after the student network, which is trained to translate the images generated by the student network fr(fs(x)) to the corresponding ground-truth y. Thus, the “teacher model” in IYOR can be written as ft = fs ◦ fr. In our implementation, fr has the same architecture as the teacher in traditional KD and hence it has enough learning ability to refine student outputs. Note that the fr can be discarded after the training period to avoid the additional parameters and computation. Besides, unlike traditional KD where the teacher is first pre-trained and then utilized to teach the student, in IYOR, fr and fs are trained simultaneously. For simplicity, by denoting zs = fs(x) and zr = fr ◦ fs(x), the training objective of the refining network fr can be formulated as
$\operatorname*{argmin}_{f_r} \; \mathbb{E}_{x,y}\big[L_1(z_r, y) + \mathcal{L}_{cGAN}(z_r)\big].$  (3)
And the training objective of the student $f_s$ can be formulated as
$\operatorname*{argmin}_{f_s} \; \mathbb{E}_{x,y}\big[(1-\alpha)\cdot L_1(z_s, y) + \alpha\cdot L_1(z_s, z_r) + \mathcal{L}_{cGAN}(z_s)\big].$  (4)
Note that since IYOR only distills the generators of GANs, we omit the description of discriminators here. Besides, IYOR can be easily extended to unpaired image-to-image translation models such as CycleGAN by introducing two refining networks to both the two translation directions, respectively.
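A minimal sketch of one joint update implementing Equations (3) and (4); the network objects, optimizers, and adversarial terms are placeholders for whatever generator/discriminator pair is being compressed, and the detach placement reflects one reasonable reading of the simultaneous-training setup rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def iyor_step(student, refiner, opt_s, opt_r, x, y,
              alpha=0.5, gan_loss_s=0.0, gan_loss_r=0.0):
    """One IYOR update: the refiner maps student outputs to ground truth (Eq. 3);
    the student imitates its own refined output (Eq. 4)."""
    z_s = student(x)
    z_r = refiner(z_s.detach())                       # refine the student's output

    loss_r = F.l1_loss(z_r, y) + gan_loss_r           # Eq. (3)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    loss_s = (1 - alpha) * F.l1_loss(z_s, y) \
             + alpha * F.l1_loss(z_s, z_r.detach()) \
             + gan_loss_s                             # Eq. (4)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item(), loss_r.item()
```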
3.3 WHY IYOR WORKS
Consider a general knowledge distillation with the L1-norm and define a function $G$ as
$G(f_s(x), f_t(x)) := \mathbb{E}_{x,y}\big[(1-\alpha)\cdot L_1(f_s(x), y) + \alpha\cdot L_1(f_s(x), f_t(x)) + H(f_s(x))\big],$  (5)
where $H$ is a function of the student network. For simplicity, we abbreviate $\mathbb{E}_{x,y}$ as $\mathbb{E}$. The objective functions of traditional knowledge distillation (TKD) (2) and IYOR (4) are specific cases of equation (5). Let $f_s^1$ and $f_s^2$ be the optimal student networks of problem (2) and IYOR (4); we provide an assumption and a theorem to interpret the effectiveness of IYOR.
$\text{TKD}: \; f_s^1 = \operatorname*{argmin}_{f_s} G(f_s(x), f_t(x)), \qquad \text{IYOR}: \; f_s^2 = \operatorname*{argmin}_{f_s, f_r} G(f_s(x), f_r \circ f_s(x)).$  (6)
Assumption 3.1 Since our teacher $f_r \circ f_s(x)$ has more parameters than the traditional teacher $f_t(x)$, we assume that at their optimal values the loss of IYOR is no greater than that of TKD. In other words, denoting $f_t^1$ and $f_t^2$ as the optimal teacher networks of TKD and IYOR, we have
$G(f_s^2, f_t^2) \le G(f_s^1, f_t^1).$  (7)
Theorem 3.1 Under Assumption 3.1, the L1 distance between the optimal student network and teacher network in IYOR is less than that in TKD, which means
$\mathbb{E}\big[L_1(f_s^2, f_t^2)\big] \le \mathbb{E}\big[L_1(f_s^1, f_t^1)\big].$  (8)
Please refer to Appendix A for the proof. Besides, we have also explained why traditional KD methods fail on image-to-image translation with VC theory in Appendix B.
3.4 SIFT DISTILLATION
Scale-Invariant Feature Transform (SIFT) is one of the most effective and popular image descriptors in classical image processing. SIFT mainly has four steps: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint description. SIFT features carry rich semantic information about the images while having much lower dimension than the original image; thus, distilling the SIFT features is more efficient than directly distilling the pixels of the generated images. Denoting SIFT as $\phi(\cdot)$ and a loss hyper-parameter as $\beta$, the loss function of SIFT distillation in our method can be formulated as
$\operatorname*{argmin}_{f_s} \; \mathbb{E}_{x,y}\big[(1-\alpha)\cdot L_1(z_s, y) + \alpha\cdot L_1(z_s, z_r) + \mathcal{L}_{cGAN}(z_s) + \beta\cdot L_1(\phi(z_s), \phi(z_r))\big].$  (9)
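Classical SIFT (as in OpenCV) is not differentiable, so the sketch below only illustrates the feature comparison in Equation (9): it computes SIFT descriptors on a fixed keypoint grid for two same-sized images and takes an L1 distance between the aligned descriptors. For back-propagating through ϕ during training one would need a differentiable descriptor; the grid step and all names here are our own illustrative choices, not the authors' implementation.

```python
import cv2
import numpy as np

def sift_l1_distance(img_a, img_b, grid_step=16):
    """L1 distance between SIFT descriptors of two same-sized uint8 grayscale
    images, computed on a fixed keypoint grid so descriptors are aligned."""
    sift = cv2.SIFT_create()
    h, w = img_a.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), float(grid_step))
                 for y in range(grid_step, h, grid_step)
                 for x in range(grid_step, w, grid_step)]
    _, desc_a = sift.compute(img_a, keypoints)
    _, desc_b = sift.compute(img_b, keypoints)
    return float(np.abs(desc_a - desc_b).mean())

a = np.random.randint(0, 255, (128, 128), dtype=np.uint8)   # e.g. student output
b = np.random.randint(0, 255, (128, 128), dtype=np.uint8)   # e.g. refined output
print(sift_l1_distance(a, b))
```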
4 EXPERIMENT
4.1 EXPERIMENT SETTINGS
Models and Datasets In this paper, we mainly evaluate the performance of our method with CycleGAN (Zhu et al., 2017a) for unpaired image-to-image translation, and Pix2Pix (Isola et al., 2017) and Pix2PixHD (Wang et al., 2018b) for paired image-to-image translation. The refining network in our method has an identical architecture to the original model before compression. The students in our experiments have the same network depth as the original model before compression except for fewer channels. Five datasets are utilized for quantitative evaluation, including Horse→Zebra, Maps, Edge→Shoe, Summer→Winter, and Apple→Orange. Comparison Methods We have compared our methods with nine knowledge distillation methods, including three of them which are firstly proposed for image classification and then adopted by us to image-to-image translation (Hinton et al., 2014; Ahn et al., 2019; Zagoruyko & Komodakis, 2017), and six of them which are designed for image-to-image translation (Li et al., 2020a; Jin et al., 2021; Zhang et al., 2022; Li et al., 2021b; Ren et al., 2021; Li et al., 2020c). Note that some comparison methods have both knowledge distillation and neural network pruning. Following the setting of the previous work (Zhang et al., 2022), we only compare our method with their knowledge distillation algorithms for a fair comparison.
Training and Evaluation Settings We adopt the same training setting from the origin implementation of CycleGAN and Pix2Pix. Models for Edge→Shoe and the other datasets are trained by 50 and 200 epochs, respectively. Following previous works, we adopt Frechet Inception Distance (FID) as the performance metric for all datasets. A lower FID indicates that the distribution of the generated
images and the real images have a lower distance, and thus the generated images have better quality. On paired image-to-image translation, we report model performance at the last epoch. On unpaired image-to-image translation, since the performance for different epochs is unstable, we compute the FID for every five epochs and report the lowest one. For both paired and unpaired image-to-image translation, FID are computed over only the images in the test set.
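FID can be computed with standard tooling; the sketch below uses torchmetrics (an assumption about tooling, since the paper does not specify its FID implementation) on uint8 image tensors. Real evaluation would feed all test-set images rather than the random tensors used here.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)   # test-set photos
fake = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)   # generated images
fid.update(real, real=True)
fid.update(fake, real=False)
print(fid.compute())   # lower is better
```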
4.2 EXPERIMENT RESULTS
Quantitative Results. Quantitative comparisons with previous knowledge distillation methods on unpaired and paired image-to-image translation datasets are shown in Table 1 and Table 2, respectively. It is observed that: (i) Directly applying naive image-based knowledge distillation (Hinton et al., 2014) yields very limited and sometimes even negative gains. For instance, it leads to 1.91 and 0.67 FID increments (performance drops) on Edge→Shoe with Pix2Pix and Pix2PixHD, respectively. (ii) In contrast, by replacing the teacher in naive image-based KD with the refining network of our method, knowledge distillation yields consistent performance improvements. On average, 5.12 and 10.43 FID decrements (performance improvements) are obtained on paired and unpaired image-to-image translation, respectively. (iii) Combining our method with previous feature-based knowledge distillation leads to further improvements. For instance, on the 14.82× and 6.80× compressed Horse→Zebra students, combining our method with the method of Ren et al. yields a further 2.35 and 1.13 FID decrement, respectively. (iv) Table 3 further demonstrates the effectiveness of our method under more compression ratios and on more datasets. These observations demonstrate that our method can significantly improve the performance of lightweight image-to-image translation models in a wide range of settings.
Qualitative Results. Qualitative comparisons between our method and previous methods on unpaired and paired image-to-image translation datasets are shown in Figure 2 and Figure 3, respectively. Figure 4 further shows the performance of our method on the other two datasets. It is observed that: (i) Compared with the model before compression (the teacher model), a significant performance drop can be observed for the student model trained without knowledge distillation. For instance, on Horse→Zebra, most student models cannot transform the whole body of the horses into stripes. Some previous knowledge distillation methods (e.g., Zhang et al. and Ren et al.) alleviate this problem, while our method leads to much better results. (ii) On the Maps translation task, the buildings and roads generated by students trained with previous knowledge distillation methods are fuzzy. In contrast, our method generates clearer shapes and edges for buildings, roads, and rivers. (iii) On Edge→Shoe, the images generated by students trained without knowledge distillation usually show severe corruption, such as holes in high-heeled shoes. In contrast, the images generated by our method have better quality in terms of highlights, shapes, and colors. (iv) On Winter→Summer, the model trained by our method successfully removes the snow on the plants, and on Apple→Orange, the images generated by our method show much less corruption than the baseline model. These results demonstrate that students trained with IYOR achieve better performance not only in statistical scores but also under human visual inspection.
Figure 5: Comparison between our method and Hinton KD in terms of the FID between students and teachers on Horse→Zebra with CycleGAN, plotted against the training epoch (0-200); lower FID indicates higher student-teacher similarity.
Table 4: Ablation study on SIFT distillation and the usage of the refining network on Horse→Zebra with CycleGAN students.
#Params (M)  FLOPs (G)  Refining  SIFT  FID↓          ∆↑
1.61         7.29       ×         ×     70.54±9.63    –
1.61         7.29       ✓         ×     59.31±2.89    11.23
1.61         7.29       ×         ✓     63.17±3.66    7.37
1.61         7.29       ✓         ✓     56.45±2.59    14.09
0.72         3.35       ×         ×     85.04±6.88    –
0.72         3.35       ✓         ×     72.53±3.15    12.51
0.72         3.35       ×         ✓     78.11±1.71    6.93
0.72         3.35       ✓         ✓     69.67±5.32    15.37
5 DISCUSSION
5.1 ABLATION STUDY
In this paper, we mainly propose two knowledge distillation techniques: (a) learning from a refining network instead of a teacher network and (b) SIFT KD. Table 4 shows the ablation study of the two techniques on Horse→Zebra with CycleGAN. It is observed that on the 7.08× and 15.81× compressed students: (i) 11.23 and 12.51 FID decrements are obtained by replacing the teacher network in traditional knowledge distillation with a refining network, respectively; (ii) 7.37 and 6.93 FID decrements are obtained by applying SIFT KD, respectively; and (iii) 14.09 and 15.37 FID decrements are obtained by combining the two techniques, respectively. These observations indicate that both techniques have their own merits and that their benefits are orthogonal.
5.2 STUDENT-TEACHER SIMILARITY
In this subsection, we show that the refining network in IYOR produces outputs that are more consistent with the student than those of the teacher in traditional KD. The FID between the images generated by the student and by the refining network (our method) or the teacher (traditional KD) is shown in Figure 5. Note that a lower FID here indicates a larger student-teacher similarity. It is observed that our method leads to a lower FID during the whole training period, indicating that, compared with the teacher in traditional KD, the images generated by the refining network in our method are more consistent with the images generated by the student. Since FID measures the distance between the distributions of images generated by the student and the teacher, this observation also implies that the student in our method can learn the teacher knowledge more effectively.
6 CONCLUSION
Due to the ill-posed property of image-to-image translation, directly applying traditional knowledge distillation usually leads to unsatisfactory and even negative results. To address this problem, we propose a new knowledge distillation method, named IYOR (Imitate Your Own Refinement), in which a refining network replaces the teacher network of traditional KD. During the training phase, the refining network strives to improve the quality of the images generated by the student instead of generating images from the inputs. Hence, the refined results are better learning targets than the teacher outputs used in traditional KD. Extensive quantitative and qualitative results demonstrate that IYOR outperforms nine existing approaches on both paired and unpaired translation. Besides, SIFT knowledge distillation is introduced to improve the effectiveness of knowledge distillation by extracting the distinctive and scale-invariant features of images and distilling them from the refining network to the student. Furthermore, we have theoretically analyzed why traditional KD fails and why IYOR works well on image-to-image translation.
A THE PROOF FOR THEOREM 3.1
Proof. According to Assumption 3.1, we have
\mathbb{E}\left[(1-\alpha) L_1(f_s^2, y) + \alpha L_1(f_s^2, f_t^2) + H(f_s^2)\right] \le \mathbb{E}\left[(1-\alpha) L_1(f_s^1, y) + \alpha L_1(f_s^1, f_t^1) + H(f_s^1)\right]. \quad (10)
Since f_s^1 is the optimal solution of TKD (6), we have G(f_s^1, f_t^1) \le G(f_s^2, f_t^1), which implies
\mathbb{E}\left[(1-\alpha) L_1(f_s^1, y) + \alpha L_1(f_s^1, f_t^1) + H(f_s^1)\right] \le \mathbb{E}\left[(1-\alpha) L_1(f_s^2, y) + \alpha L_1(f_s^2, f_t^2) + H(f_s^2)\right]. \quad (11)
Combining equation (10) and equation (11), we have
\mathbb{E}\left[L_1(f_s^2, f_t^2)\right] \le \mathbb{E}\left[L_1(f_s^1, f_t^1)\right]. \quad (12)
□
B ANALYSING KNOWLEDGE DISTILLATION WITH VC THEORY
Recent evidence shows that directly applying the naive Hinton et al. knowledge distillation (Hinton et al., 2014; Zhang et al., 2022; Li et al., 2020c) to image-to-image translation usually leads to limited and even negative performance. In this subsection, we try to explain this observation from the perspective of VC theory, based on generalized knowledge distillation (Lopez-Paz et al., 2016). Denoting a function class as \mathcal{F}, the student function, the teacher function, and the oracle real target function can be written as f_s \in \mathcal{F}_s, f_t \in \mathcal{F}_t, and f \in \mathcal{F}, respectively. Given n training samples, we assume that the student function f_s and the teacher function f_t learn the true function f at rates \alpha_s and \alpha_t, respectively, which can be formulated as
R(f_s) - R(f) \le O\!\left(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\right) + \varepsilon_s \quad\text{and}\quad R(f_t) - R(f) \le O\!\left(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\right) + \varepsilon_t, \quad\text{respectively}, \quad (13)
where the O(\cdot) terms are the estimation errors, and \varepsilon_s and \varepsilon_t are the approximation errors of the student function class \mathcal{F}_s and the teacher function class \mathcal{F}_t with respect to f \in \mathcal{F}. A higher \alpha indicates that the learning problem is easier to solve. Then, we assume that the student learns from the teacher at rate \alpha_{kd} with approximation error \varepsilon_{kd}, which can be formulated as
R(f_s) - R(f_t) \le O\!\left(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\right) + \varepsilon_{kd}. \quad (14)
As pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the teacher model has more parameters than the student, we can assume that the teacher function learns the true function at a higher rate, i.e., \alpha_t > \alpha_s and \alpha_t > \alpha_{kd}. By combining (13) and (14), we obtain the following inequality.
R(f_s) - R(f) = R(f_s) - R(f_t) + R(f_t) - R(f)
\le O\!\left(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\right) + \varepsilon_{kd} + O\!\left(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\right) + \varepsilon_t
\le O\!\left(\frac{|\mathcal{F}_s|_C + |\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\right) + \varepsilon_{kd} + \varepsilon_t. \quad (15)
Thus, given a learning task, we can study whether knowledge distillation works well on this task by analyzing whether the following inequality holds:
O\!\left(\frac{|\mathcal{F}_s|_C + |\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\right) + \varepsilon_{kd} + \varepsilon_t \le O\!\left(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\right) + \varepsilon_s. \quad (16)
Since the teacher model usually has more parameters than the student model, |\mathcal{F}_s|_C + |\mathcal{F}_t|_C \le |\mathcal{F}_s|_C usually does not hold in knowledge distillation. Thus, the inequality highlights that the benefits of knowledge distillation arise from \varepsilon_{kd} + \varepsilon_t \le \varepsilon_s and \alpha_{kd} > \alpha_s.
In image classification, as pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the soft labels f_t(x) (the probability distributions) of teachers contain more information than the one-hot label y, students can learn from teachers at a higher rate than from the true function, indicating that \alpha_{kd} > \alpha_s (Lopez-Paz et al., 2016). Besides, since the label of an input image is unique, learning the true function does not conflict with learning the teacher function, and thus it is safe to assume that \varepsilon_s \ge \varepsilon_t + \varepsilon_{kd}. In contrast, in image-to-image translation, the predictions of students and teachers are pixel values instead of probability distributions, so there is no additional information in f_t(x) compared with the ground truth; thus \alpha_{kd} > \alpha_s does not hold. Moreover, since image-to-image translation is an ill-posed problem, the predictions of students and teachers may be different but equally correct answers for the same input image, indicating that \varepsilon_s \ge \varepsilon_t + \varepsilon_{kd} also does not hold. These observations demonstrate that inequality (16) does not hold in image-to-image translation, which can explain the limited performance of directly applying Hinton et al. knowledge distillation to image-to-image translation.
Instead of distilling the generated images, some recent knowledge distillation methods have been proposed to distill teacher knowledge from the teacher features. Since teacher features contain more information than ground-truth images, these methods can be regarded as a way to guarantee \alpha_{kd} > \alpha_s. In contrast, IYOR aims to improve knowledge distillation by addressing the ill-posed property, which supports \varepsilon_s \ge \varepsilon_t + \varepsilon_{kd}. Since IYOR and previous feature-based methods support inequality (16) from different perspectives, their benefits are orthogonal and can be combined.
C DETAILED EXPERIMENT SETTINGS
We follow the official code of CycleGAN and Pix2Pix¹ to conduct our experiments. Models on Edge→Shoe are trained for 50 epochs, and models on the other datasets are trained for 200 epochs. The momentum (β₁) of the Adam optimizer is 0.5. In all experiments, we set α = 1 and β = 1. The initial learning rate is 0.0002. LSGAN is used as the GAN objective, and the discriminator is a 70×70 PatchGAN. In the CycleGAN experiments, the backbones of the generators of both students and teachers (refining networks) are ResNets with six blocks; their main difference is that the student backbone has far fewer channels than the teacher. The batch size is set to 1 for both training and inference. We compute the FID scores with Pytorch-FID², a well-known Python package. We find that some previous works compute the FID for unpaired image-to-image translation using the images of both the training set and the test set to obtain more stable results. However, we believe that accessing the test images during training is not reasonable, so we compute the FID on the test set only. As claimed by previous research (Jin et al., 2021)³, this makes the FID scores in our experiments around 5-6 points lower than those of previous works that report FID on both the training and test sets.
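For readers who want to reproduce the optimiser settings above, a sketch of the standard CycleGAN/Pix2Pix recipe is given below; the student, refiner and discriminator modules are assumed to be already constructed, and the linear learning-rate decay is the default schedule of the official codebase rather than something specific to our method.

import itertools
import torch

# Adam with beta1 = 0.5 (the momentum mentioned above) and an initial lr of 2e-4.
g_optimizer = torch.optim.Adam(
    itertools.chain(student.parameters(), refiner.parameters()),
    lr=2e-4, betas=(0.5, 0.999))
d_optimizer = torch.optim.Adam(discriminator.parameters(),
                               lr=2e-4, betas=(0.5, 0.999))

# Keep the learning rate constant for the first 100 epochs, then decay it
# linearly to zero over the remaining 100 epochs (200 epochs in total).
def linear_decay(epoch, n_epochs_constant=100, n_epochs_decay=100):
    return 1.0 - max(0, epoch - n_epochs_constant) / float(n_epochs_decay + 1)

g_scheduler = torch.optim.lr_scheduler.LambdaLR(g_optimizer, lr_lambda=linear_decay)
d_scheduler = torch.optim.lr_scheduler.LambdaLR(d_optimizer, lr_lambda=linear_decay)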
D INFLUENCE FROM HYPER-PARAMETERS
In this paper, we mainly have two hyper-parameters, α and β, which balance the magnitudes of the knowledge distillation loss and the original GAN training loss. A hyper-parameter sensitivity study on Horse→Zebra with 15.81× compressed students is shown in Figure 6. Note that the reported value is FID (lower is better). It is observed that: (i) With the worst α, our method achieves an FID of 70.21, which is still 14.83 lower than the student trained without KD and 6.83 lower than the second-best KD method. (ii) With the worst β, our method achieves an FID of 70.54, which is still 14.50 lower than the student trained without KD and 6.50 lower than the second-best KD method. These observations indicate that our method is not sensitive to the values of the hyper-parameters.
E INFLUENCE FROM THE SIZE OF THE REFINING NETWORK
In our experiments, the refining network has the same architecture as the teacher network in traditional KD, which is also the same as the image-to-image translation model before compression. In this section, we study the influence of the size of the refining network. As shown in Table 5: (i) With more parameters, the refining network achieves a very low FID, which indicates that the refinement has good quality; at the same time, the student is also trained better and achieves a relatively lower FID. (ii) When the refining network does not have enough parameters, the refinement has a relatively higher FID and the effectiveness of knowledge distillation is not significant. These observations indicate that a refining network with enough parameters has a positive influence on the performance of knowledge distillation; in contrast, when the refining network does not have enough parameters, it cannot successfully refine the images generated by the student, which leads to limited knowledge distillation performance.

¹ https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/
² https://github.com/mseitzer/pytorch-fid
³ https://github.com/snap-research/CAT
F EXPERIMENTS ON CITYSCAPES
Following previous research (Zhu et al., 2017a; Park et al., 2019a), we also evaluate our method on Cityscapes (Cordts et al., 2016). Cityscapes was originally proposed as a dataset for autonomous driving, covering tasks such as detection and segmentation. In our experiments, we take the semantic segmentation mask as the input and the natural street images as the label to train the image-to-image translation models. Then, we adopt the mIoU of a pre-trained FCN model on the generated images as the performance metric; a higher mIoU indicates that the image-to-image translation model has better performance. Our experimental results are shown in Table 6. It is observed that the student trained with our method obtains a 2.17 mIoU improvement, which is 0.71 higher than the second-best method.
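For completeness, the mIoU used above can be computed from a confusion matrix accumulated between the segmentation predictions of the pre-trained FCN on the generated images and the ground-truth Cityscapes labels. The sketch below illustrates this computation; the fcn model, the data iterables and the 19-class setting are assumptions, and in practice the evaluation scripts of previous work are used.

import torch

def mean_iou(fcn, generated_images, gt_labels, num_classes=19, ignore_index=255):
    # Accumulate a (num_classes x num_classes) confusion matrix over the split.
    confusion = torch.zeros(num_classes, num_classes, dtype=torch.long)
    with torch.no_grad():
        for image, label in zip(generated_images, gt_labels):
            # label is assumed to be a CPU LongTensor of shape (H, W).
            pred = fcn(image.unsqueeze(0)).argmax(dim=1).squeeze(0).cpu()
            valid = label != ignore_index
            idx = label[valid].long() * num_classes + pred[valid].long()
            confusion += torch.bincount(idx, minlength=num_classes ** 2).reshape(
                num_classes, num_classes)
    intersection = confusion.diag().float()
    union = confusion.sum(0).float() + confusion.sum(1).float() - intersection
    iou = intersection / union.clamp(min=1)
    return iou.mean().item()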
G PYTHON-STYLE PSEUDO CODE
The following code block presents a brief implementation of IYOR.
# Pseudo code of the IYOR knowledge distillation loss (PyTorch-style).
import torch.nn.functional as F

def IYOR(x, student, refiner, sift):
    # x: the input image
    # student: the student network
    # refiner: the refining network
    # sift: a function that extracts SIFT features of an image batch
    student_output = student(x)
    # Detach the student output so that gradients through the refiner
    # do not reach the student via the refiner's input.
    refinement = refiner(student_output.detach())
    # Pixel-wise imitation: the student matches its own refined output.
    kd_loss = F.l1_loss(student_output, refinement)
    # SIFT distillation: match the SIFT features of the two images.
    kd_loss = kd_loss + F.l1_loss(sift(student_output), sift(refinement))
    return kd_loss

| 1. What is the focus and contribution of the paper on image-to-image translation?
2. What are the strengths of the proposed approach, particularly in addressing ill-posedness and using SIFT knowledge distillation?
3. What are the weaknesses of the paper, especially regarding assumptions and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the method by applying it to modern i2i methods or conducting a fairer comparison with the teacher network? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a novel knowledge distillation method for image-to-image translation. This framework is called IYOR and consists of two improvements. Firstly, IYOR introduces a refinement network to address the ill-posedness present in the i2i problem. Secondly, IYOR proposes to use SIFT knowledge distillation instead of distilling directly in the image space. Experiments show the superiority of this method on 5 datasets.
Strengths And Weaknesses
Strengths:
1. The idea of using a refinement network to address the ill-posedness is cool.
2. A solid ablation study is conducted, proving the effectiveness of their methods.
3. The evaluation is extensive compared to the existing works.
Weaknesses:
1. The guidance over the SIFT feature space is good. However, perceptual losses (such as the VGG feature loss) are also considered effective. The authors should clarify their choice, otherwise this contribution is weakened.
2. Assumption 3.1 says the loss of TKD is assumed to be less than that of IYOR. However, eqn. 7 tells a different story.
3. Assumption 3.1 may not hold in real cases. One cannot increase the parameter number of the teacher network when applying a KD algorithm.
4. This paper could become more solid if IYOR were applied to some modern i2i methods, e.g., StyleFlow, EGSDE, etc.
5. The student and refinement networks are trained simultaneously, which may improve the performance of the teacher network. Is the comparison fair? Please provide KID/FID metrics of your teacher network.
Clarity, Quality, Novelty And Reproducibility
Please add links to the referenced literature in the tables.
This paper is clearly presented.
Good reproducibility: The authors attached their codes with the submission. |
ICLR | Title
Imitate Your Own Refinement: Knowledge Distillation Sheds Light on Efficient Image-to-Image Translation
Abstract
The excellent performance of the state-of-the-art Generative Adversarial Networks (GANs) is always accompanied by enormous parameters and computations, making them unaffordable on resource-limited mobile devices. As an effective model compression technique, knowledge distillation (KD) has been proposed to transfer the knowledge from a cumbersome teacher to a lightweight student. Following its success on classification, some recent works have applied KD to GAN-based image-to-image translation but lead to unsatisfactory performance. In this paper, to tackle this challenge, we propose a novel knowledge distillation framework named IYOR (Imitate Your Own Refinement), which consists of the following two techniques. Firstly, since image-to-image translation is an ill-posed problem, knowledge distillation on image-to-image translation may force the student to learn the average results between multiple correct answers and thus harm student performance. To address this problem, we propose to replace the teacher network in knowledge distillation with a refining network, which is trained to refine the images generated by the student to make them more realistic. During the training period, the refining network and the student are trained simultaneously, and the student is trained to imitate the refined results in a knowledge distillation manner. Secondly, instead of only distilling the knowledge in the generated images, we propose SIFT KD, which firstly extracts the distinctive and scaleinvariant features of the generated images with Scale-invariant feature transform (SIFT), and then distills them from the refining network to the student. Extensive experimental results demonstrate the effectiveness of our method on five datasets with nine previous knowledge distillation methods. Our codes are available in the supplementary material and will be released on Github.
1 INTRODUCTION
In the last decade, Generative Adversarial Networks (GANs) have evolved to one of the most dominated methods for content generation of images (Isola et al., 2017; Zhu et al., 2017a), videos (Vondrick et al., 2016), text (Zhang et al., 2016), audios (Kong et al., 2020), graphs (Wang et al., 2018a), point clouds (Li et al., 2019) and multi-modal systems (Zhu et al., 2017b). Their remarkable ability of representation and generation has significantly boosted the performance of image-to-image translation and further promoted their usage in real-world applications. Despite their impressive performance, GANs models usually suffer from massive parameters and computation, which have limited them to deploy on resource-restricted platforms such as mobile phones. This problem further raises the research trend in model compression such as network pruning (Buciluǎ et al., 2006; He et al., 2018a; 2017), weights quantization (Lee et al., 2019; Nagel et al., 2019), lightweight model design (Ma et al., 2018; Sandler et al., 2018; Howard et al., 2017), neural network architecture search (Howard et al., 2019; He et al., 2018b), and knowledge distillation (Hinton et al., 2014).
Knowledge distillation (KD), which aims to improve the performance of lightweight students by transferring knowledge from an over-parameterized teacher model, has become a popular technique for model compression. By imitating the prediction results and the intermediate features of teachers, students can achieve significant performance improvements. Following its success in image classification (Hinton et al., 2014; Zhang et al., 2020), object detection (Zhang & Ma, 2021) and semantic
segmentation (Yang et al., 2022), some researchers have recently tried to apply knowledge distillation to image-to-image translation by training students to mimic the images generated by the teachers. Unfortunately, these attempts usually lead to limited and sometimes even negative performance (Li et al., 2020c; Zhang et al., 2022). Some works have been proposed to distill the teacher knowledge contained in features and achieve positive results (Ren et al., 2021; Li et al., 2020c). However, there is still no analysis of why traditional image-based knowledge distillation fails.
In this paper, we mainly attribute the unsatisfactory performance of naive knowledge distillation to the ill-posed property of image-to-image translation. Unlike image classification, where each image always has a unique categorical label, in image-to-image translation an image can have multiple different but correct post-translation answers. For example, in Edge→Shoe translation (i.e., translating edges of shoes into photos), a given input image of edges corresponds to multiple images of shoes with different colors, styles, and contents. All of these images can be correct answers, while the average of them may have low quality. Unfortunately, in traditional KD, the student and the teacher are likely to give two different but correct predictions for the same input image. In this case, the knowledge distillation loss forces the student toward the average of the student outputs and the teacher outputs, which can severely harm student performance. In contrast, the ideal way to avoid this problem is to guarantee that the student and teacher output consistent answers for the same input image. However, this assumption does not always hold, since the student and the teacher in traditional KD are two independent image-to-image translation models.
To address this problem, we propose IYOR (Imitate Your Own Refinement), a generalized knowledge distillation framework which introduces a different way to build the “teacher network” in knowledge distillation. Taking Edge→Shoe translation as an example, as shown in Figure 1, instead of building a teacher network which translates edges into shoes, IYOR introduces a refining network, which takes the shoe images generated by the student as inputs, refines them, and outputs shoe images of much better quality. Note that the refining network is trained with the student simultaneously and can be discarded during inference to avoid additional parameters and computations. Since the refining network has many more parameters than the student, this refining process can significantly improve the quality of the images generated by the student. Hence, the refined results can be considered as the “teacher outputs” of traditional knowledge distillation and utilized as the learning targets of the student. The major advantage of IYOR is that the refining network is conditioned on the outputs of the student instead of the original input images. Hence, the refined results are more likely to be consistent with the student outputs than the teacher outputs in traditional knowledge distillation. As a result, IYOR alleviates the ineffectiveness of knowledge distillation caused by the ill-posed property. Extensive experiments show that dramatic performance gains on five datasets can be observed by simply replacing the traditional teacher network with the refining network.
Moreover, instead of directly training the student to imitate the images generated by the refining network pixel by pixel, we further propose SIFT distillation which adopts Scale Invariant Feature Transform (SIFT) (Lowe, 1999), a typical image feature extraction method in traditional image processing to extract the scale-invariant and highly distinctive features of the generated images and then distills them from the refining network to the students. As pointed out by abundant previous research (Lowe, 1999; 2004; Yuan et al., 2008), the features extracted by SIFT are invariant to image scaling, rotation and illumination, and highly distinctive for downstream tasks such as detection and tracking. Hence, these features carry more semantic information of the images, and they are more beneficial in knowledge distillation than traditional pixel-wise imitating. Another advantage of SIFT KD is that SIFT does not contain any trainable parameters, which makes SIFT KD generalize well on different image-to-image translation tasks as a plug-and-play knowledge distillation technique.
Experimental results on five image-to-image translation tasks have demonstrated the performance of IYOR for both paired and unpaired image-to-image translation in terms of both quantitative and qualitative analysis. Despite its simplicity, IYOR outperforms the previous nine knowledge distillation methods by a clear margin. Besides, experimental results also demonstrate that IYOR can be combined with the previous feature-based knowledge distillation methods to achieve better performance. To sum up, our main contributions can be summarized as follows.
• We propose IYOR, a knowledge distillation method for efficient image-to-image translation. To the best of our knowledge, IYOR is the first to show that the most naive image-based knowledge distillation can be effective once the teacher is replaced with a refining network.
• We propose SIFT distillation, which adopts SIFT to extract the distinctive and scale-invariant features of images and distills them from the refining network to the student.
• Extensive experiments on both paired and unpaired translation tasks demonstrate the performance of IYOR over nine previous methods on five datasets in terms of both quantitative and qualitative results. Our codes have been released for future research.
2 RELATED WORK
2.1 IMAGE-TO-IMAGE TRANSLATION WITH GANS
Remarkable progress has been achieved in image-to-image translation with the rapid development of generative adversarial networks (GANs) (Goodfellow et al., 2014; Brock et al., 2018). Pix2Pix is first proposed to perform paired image-to-image translation with conditional GANs (Isola et al., 2017). Then, Pix2PixHD is proposed to improve the generation quality with multi-scale generators and discriminators (Wang et al., 2018b). The similar idea has also been extended in text-to-image translation (Zhang et al., 2017), multi-modal image-to-image translation (Huang et al., 2018; Zhu et al., 2017c) and applications such as super-resolution and image dehazing (Wang et al., 2018d; Ledig et al., 2017; Zhang et al., 2017). In the real-world applications, the paired image-to-image translation dataset is usually not available. To address this problem, abundant methods have been proposed to perform image-to-image translation on unpaired datasets with cycle-consistency regularization (Zhu et al., 2017a; Yi et al., 2017; Kim et al., 2017). StarGAN is proposed to perform image-to-image translation for multiple domains with a single model (Choi et al., 2018), and StarGAN v2 is proposed to increase the scalability and the diversity of image-to-image translation models at the same time (Choi et al., 2020). Attention based GANs have been widely utilized to improve the performance of image-to-image translation by localizing the to-be-translated regions with attention modules (Tang et al., 2021; Chen et al., 2018; Emami et al., 2020; Alami Mejjati et al., 2018). Recently, some researchers have proposed to replace the convolutional layers in GAN with MLPmixers and vision transformers, which leads to better high-fidelity translation (Wan et al., 2021; Cazenavette & De Guevara, 2021).
2.2 KNOWLEDGE DISTILLATION
The idea of employing a large model to improve the performance of a small model was first proposed by Buciluǎ (Buciluǎ et al., 2006) for the compression of neural network ensembles. Then, Hinton et al. proposed the concept of knowledge distillation, which introduces a temperature hyper-parameter in the softmax layer to soften the teacher prediction (Hinton et al., 2014). Following their
success, many researchers have proposed to not only distill the teacher knowledge in its predicted categorical probability distribution, but also the dark knowledge in features (Romero et al., 2015; Tian et al., 2019), spatial attention (Zagoruyko & Komodakis, 2017), channel-wise attention (Liu et al., 2021a; Shu et al., 2021; Li et al., 2021a), pixel-wise relation (Zhang & Ma, 2021; Li et al., 2020c; Yoon et al., 2020), instance-wise relation (Park et al., 2019b; Tung & Mori, 2019; Peng et al., 2019), task-oriented information (Zhang et al., 2020), decision boundary samples (Heo et al., 2019b), positive feature (Heo et al., 2019a) and frequency-biased information (Zhang et al., 2022) with optimization methods such as L2-norm distance (Romero et al., 2015; Yim et al., 2017), adversarial learning (Shen et al., 2019; Liu et al., 2019a; Xu et al., 2017), and contrastive learning (Tian et al., 2019; Chen et al., 2020b). Besides image classification, knowledge distillation has already been used in model compression for object detection (Chen et al., 2017; Li et al., 2017; Wang et al., 2019; Bajestani & Yang, 2020; Li et al., 2020b), semantic segmentation (Liu et al., 2019b; Park & Heo, 2020), pre-trained language models (Sanh et al., 2019; Xu et al., 2020)s and so on.
Knowledge Distillation on Image-to-Image Translation A few research has been proposed to perform knowledge distillation on image-to-image translation. Li et al. propose the framework of GAN compression, which has applied the classic L2-norm feature distillation on the intermediate neural layers (Li et al., 2020a). However, their results demonstrate that this application leads to unsatisfying performance improvements. Then, Li et al. propose the semantic relation preserving knowledge distillation, which aims to distill the relation between different patches in the generated images instead of the encoded features (Li et al., 2020c). Then, Chen et al. propose to distill image-to-image translation models with knowledge distillation not only generators but also the discriminators (Chen et al., 2020a). Similarly, Li et al. propose to revisit the discriminator in GAN compression, which transfers the knowledge in the teacher discriminator with L2-norm and texture loss (Li et al., 2021b). Jin et al. introduce the centered kernel alignment as the distance metric in knowledge distillation, which does not require additional layers for feature reshaping. Ren et al. propose to train the teacher and student GANs simultaneously, which shows the possibility of online knowledge distillation on image-to-image translation (Ren et al., 2021). Recently, motivated by the fact that tiny GANs work badly in generating high-quality high-frequency information, Zhang et al. propose to distill only the high-frequency information decomposed by discrete wavelet transformation in the images generated by teachers (Zhang et al., 2022). Besides image-to-image translation, there are also some knowledge distillation methods designed for GAN compression on the other tasks (Liu et al., 2021b; Wang et al., 2018c; Aguinaldo et al., 2019). Unfortunately, most of these knowledge distillation methods focus on distilling teacher knowledge in their features, and sufficient evidences show that directly training students to mimic the generated images from teachers leads to insufficient and even negative performance (Li et al., 2020c; Zhang et al., 2022). In contrast, this paper firstly shows that naive image-based distillation can also achieve valuable performance boosts.
3 METHODOLOGY
3.1 KNOWLEDGE DISTILLATION
In this section, we first revisit the formulation of knowledge distillation for image classification and then extend it to image-to-image translation. Given a set of training samples X = {x1, x2, ..., xn} and the corresponding ground truths Y = {y1, y2, ..., yn}, and denoting the student function and the pre-trained teacher function as fs and ft, the training loss of the classical knowledge distillation method (Hinton et al., 2014) can be formulated as
\arg\min_{f_s} \; \mathbb{E}_{x,y}\left[(1-\alpha)\,\mathrm{CE}(f_s(x), y) + \alpha\,\mathrm{KL}(f_s(x)/\tau,\, f_t(x)/\tau)\right], \quad (1)
where CE and KL indicate the cross-entropy loss and the Kullback-Leibler divergence, respectively, τ is the temperature hyper-parameter that softens the probability distribution, and α is a hyper-parameter that balances the original training loss and the knowledge distillation loss. When knowledge distillation is applied to image-to-image translation, since the predictions of students and teachers are pixel values instead of probability distributions, the KL divergence can be replaced with the L1-norm loss, which is widely utilized in low-level vision, and the cross-entropy loss for classification should be replaced with the GAN training loss. Taking Pix2Pix (Isola et al., 2017) as an example, the knowledge distillation loss (Hinton et al., 2014) for training the generator can be formulated as
\arg\min_{f_s} \; \mathbb{E}_{x,y}\left[(1-\alpha)\, L_1(f_s(x), y) + \alpha\, L_1(f_s(x), f_t(x)) + \mathcal{L}_{cGAN}(f_s(x))\right], \quad (2)
where L_1 indicates the L1-norm loss and \mathcal{L}_{cGAN} indicates the conditional GAN loss, which measures how well the generated images fool the discriminator. We do not introduce \mathcal{L}_{cGAN} and the discriminators of GANs in detail here, since they have no direct influence on our method.
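For clarity, the two formulations above can be written out as follows; this is an illustrative PyTorch sketch of the traditional, teacher-based losses in Eq. (1) and Eq. (2) (the baseline that IYOR later modifies), where teacher and discriminator are assumed to be pre-built modules and the LSGAN form of the adversarial term is used.

import torch
import torch.nn.functional as F

def classification_kd_loss(logits_s, logits_t, y, alpha=0.5, tau=4.0):
    # Eq. (1): cross-entropy to the label plus temperature-softened KL to the teacher.
    ce = F.cross_entropy(logits_s, y)
    kl = F.kl_div(F.log_softmax(logits_s / tau, dim=1),
                  F.softmax(logits_t / tau, dim=1),
                  reduction="batchmean") * tau * tau  # tau^2 is the usual gradient scaling
    return (1.0 - alpha) * ce + alpha * kl

def pix2pix_kd_loss(x, y, student, teacher, discriminator, alpha=0.5):
    # Eq. (2): L1 to the ground truth, L1 to the frozen teacher image, and the cGAN term.
    fake_s = student(x)
    with torch.no_grad():
        fake_t = teacher(x)
    pred_fake = discriminator(torch.cat([x, fake_s], dim=1))
    gan = F.mse_loss(pred_fake, torch.ones_like(pred_fake))  # LSGAN form of L_cGAN
    return (1.0 - alpha) * F.l1_loss(fake_s, y) + alpha * F.l1_loss(fake_s, fake_t) + gan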
3.2 IYOR: IMITATE YOUR OWN REFINEMENT
Instead of using two independent neural networks as the student and the teacher, we append a refining network fr after the student network, which is trained to translate the images generated by the student network, fr(fs(x)), to the corresponding ground truth y. Thus, the “teacher model” in IYOR can be written as ft = fr ◦ fs. In our implementation, fr has the same architecture as the teacher in traditional KD and hence has enough learning capacity to refine the student outputs. Note that fr can be discarded after the training period to avoid additional parameters and computation. Besides, unlike traditional KD, where the teacher is first pre-trained and then utilized to teach the student, in IYOR fr and fs are trained simultaneously. For simplicity, denoting zs = fs(x) and zr = fr ◦ fs(x), the training objective of the refining network fr can be formulated as
\arg\min_{f_r} \; \mathbb{E}_{x,y}\left[L_1(z_r, y) + \mathcal{L}_{cGAN}(z_r)\right]. \quad (3)
And the training objective of the student fs can be formulated as
\arg\min_{f_s} \; \mathbb{E}_{x,y}\left[(1-\alpha)\, L_1(z_s, y) + \alpha\, L_1(z_s, z_r) + \mathcal{L}_{cGAN}(z_s)\right]. \quad (4)
Note that since IYOR only distills the generators of GANs, we omit the description of the discriminators here. Besides, IYOR can easily be extended to unpaired image-to-image translation models such as CycleGAN by introducing two refining networks, one for each translation direction.
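To make the interplay between Eq. (3) and Eq. (4) explicit, a minimal sketch of one joint optimisation step in the paired setting is given below. Only the L1 terms are shown; the adversarial terms and the discriminator updates are omitted for brevity, and the optimiser objects and module names are assumptions rather than the exact training loop of the released code.

import torch
import torch.nn.functional as F

def iyor_training_step(x, y, student, refiner, student_opt, refiner_opt, alpha=1.0):
    # Eq. (3): the refining network learns to map the student output to the ground truth.
    z_s = student(x)
    z_r = refiner(z_s.detach())  # detach: the refiner loss must not update the student
    refiner_loss = F.l1_loss(z_r, y)
    refiner_opt.zero_grad()
    refiner_loss.backward()
    refiner_opt.step()

    # Eq. (4): the student fits the ground truth and imitates its own refinement.
    z_s = student(x)
    with torch.no_grad():
        z_r = refiner(z_s)       # refined image used as a fixed distillation target
    student_loss = (1.0 - alpha) * F.l1_loss(z_s, y) + alpha * F.l1_loss(z_s, z_r)
    student_opt.zero_grad()
    student_loss.backward()
    student_opt.step()
    return refiner_loss.item(), student_loss.item()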
3.3 WHY IYOR WORKS
Consider a general knowledge distillation objective with the L1-norm and define a function G as
G(f_s(x), f_t(x)) := \mathbb{E}_{x,y}\left[(1-\alpha)\, L_1(f_s(x), y) + \alpha\, L_1(f_s(x), f_t(x)) + H(f_s(x))\right], \quad (5)
where H is a function of the student network. For simplicity, we abbreviate \mathbb{E}_{x,y} as \mathbb{E}. The objective functions of traditional knowledge distillation (TKD) (2) and of IYOR (4) are specific cases of equation (5). Let f_s^1 and f_s^2 be the optimal student networks of problem (2) and of IYOR (4), respectively; we provide an assumption and a theorem to interpret the effectiveness of IYOR.
\text{TKD}: \; f_s^1 = \arg\min_{f_s} G(f_s(x), f_t(x)), \qquad \text{IYOR}: \; f_s^2 = \arg\min_{f_s, f_r} G(f_s(x), f_r \circ f_s(x)). \quad (6)
Assumption 3.1 Since our teacher f_r \circ f_s(x) has more parameters than the traditional teacher f_t(x), we assume that, at their optimal values, the loss of IYOR is no greater than that of TKD. In other words, denoting f_t^1 and f_t^2 as the optimal teacher networks of TKD and IYOR, we have
G(f_s^2, f_t^2) \le G(f_s^1, f_t^1). \quad (7)
Theorem 3.1 Under Assumption 3.1, the L1 distance between the optimal student network and teacher network in IYOR is no greater than that in TKD, i.e.,
\mathbb{E}\left[L_1(f_s^2, f_t^2)\right] \le \mathbb{E}\left[L_1(f_s^1, f_t^1)\right]. \quad (8)
Please refer to Appendix A for the proof. Besides, we have also explained why traditional KD methods fail on image-to-image translation with VC theory in Appendix B.
3.4 SIFT DISTILLATION
Scale-Invariant Feature Transform (SIFT) is one of the most effective and popular image descriptors in classical image processing. SIFT mainly has four steps: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint description. SIFT features carry rich semantic information about the images while having a much lower dimensionality than the original image. Thus, distilling the SIFT features is more efficient than directly distilling the pixels of the generated images. Denoting SIFT as ϕ(·) and a loss hyper-parameter as β, the loss function of SIFT distillation in our method can be formulated as
\arg\min_{f_s} \; \mathbb{E}_{x,y}\left[(1-\alpha)\, L_1(z_s, y) + \alpha\, L_1(z_s, z_r) + \mathcal{L}_{cGAN}(z_s) + \beta\, L_1(\phi(z_s), \phi(z_r))\right]. \quad (9)
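The section above does not fix a particular SIFT implementation for ϕ(·). One possible realisation, sketched below, uses the differentiable SIFT descriptor from the Kornia library so that gradients of the distillation term can flow back into the student; descriptors are computed on a regular grid of patches. The patch size, stride and the choice of Kornia are illustrative assumptions rather than the exact configuration of the released code.

import torch
import torch.nn.functional as F
import kornia

_sift = kornia.feature.SIFTDescriptor(patch_size=32, rootsift=True)

def sift_features(images, patch_size=32, stride=16):
    # images: (B, 3, H, W) in [-1, 1]; SIFT operates on grayscale patches.
    sift = _sift.to(images.device)
    gray = kornia.color.rgb_to_grayscale((images + 1.0) / 2.0)
    patches = F.unfold(gray, kernel_size=patch_size, stride=stride)   # (B, P*P, L)
    b, _, n = patches.shape
    patches = patches.transpose(1, 2).reshape(b * n, 1, patch_size, patch_size)
    return sift(patches).reshape(b, n, -1)                            # (B, L, 128)

def sift_kd_loss(z_s, z_r, beta=1.0):
    # beta * L1(phi(z_s), phi(z_r)) term of Eq. (9), with the refinement as the target.
    return beta * F.l1_loss(sift_features(z_s), sift_features(z_r.detach()))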
| 1. What is the focus of the paper in terms of image-to-image translation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its name and experimental results?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the comparison with other works and the performance improvement of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper tries to deal with efficient image-to-image translation via a knowledge distillation approach, proposing a so-called IYOR (imitate your own refinement) method with two techniques: replacing the teacher network with a refining network, and adopting a SIFT distillation between the student network and the refining network. Some image-to-image translation methods and knowledge distillation methods are selected for comparison.
Strengths And Weaknesses
Strengths:
The proposed SIFT loss is interesting and might be helpful in some situations.
The writing is ok.
Weaknesses:
It's a little strange to name the proposed method a knowledge distillation method, since it seems that IYOR (Figure 1b) is a two-stage coarse-to-fine generation method with two losses (L1 and SIFT) constraining the outputs of the first and second stages. Actually, I am not very familiar with knowledge distillation, but I feel there are some differences between them.
It's somewhat insufficient about the experiments, and the performance improvement of proposed method is limited. Specifically, the comparison doesn't consider the state-of-the-art image-to-image methods, also as we know that the FID is sensitive while the the visual evaluation is more confident, and it is better to show user study results for better evaluation.
As to the objective of efficiency, it's hard to understand the details about Table 3. Besides, the performance improvement is also not obvious from this table.
Clarity, Quality, Novelty And Reproducibility
The clarity of the method is good, but the analysis of the experiments is not well clarified. The reproducibility is fine. The novelty of the paper is somewhat limited with respect to knowledge distillation. Overall, the quality of this paper is borderline.
ICLR | Title
Imitate Your Own Refinement: Knowledge Distillation Sheds Light on Efficient Image-to-Image Translation
Abstract
The excellent performance of the state-of-the-art Generative Adversarial Networks (GANs) is always accompanied by enormous parameters and computations, making them unaffordable on resource-limited mobile devices. As an effective model compression technique, knowledge distillation (KD) has been proposed to transfer the knowledge from a cumbersome teacher to a lightweight student. Following its success on classification, some recent works have applied KD to GAN-based image-to-image translation but lead to unsatisfactory performance. In this paper, to tackle this challenge, we propose a novel knowledge distillation framework named IYOR (Imitate Your Own Refinement), which consists of the following two techniques. Firstly, since image-to-image translation is an ill-posed problem, knowledge distillation on image-to-image translation may force the student to learn the average results between multiple correct answers and thus harm student performance. To address this problem, we propose to replace the teacher network in knowledge distillation with a refining network, which is trained to refine the images generated by the student to make them more realistic. During the training period, the refining network and the student are trained simultaneously, and the student is trained to imitate the refined results in a knowledge distillation manner. Secondly, instead of only distilling the knowledge in the generated images, we propose SIFT KD, which first extracts the distinctive and scale-invariant features of the generated images with Scale-invariant feature transform (SIFT), and then distills them from the refining network to the student. Extensive experimental results demonstrate the effectiveness of our method on five datasets with nine previous knowledge distillation methods. Our codes are available in the supplementary material and will be released on Github.
1 INTRODUCTION
In the last decade, Generative Adversarial Networks (GANs) have evolved into one of the most dominant methods for content generation of images (Isola et al., 2017; Zhu et al., 2017a), videos (Vondrick et al., 2016), text (Zhang et al., 2016), audio (Kong et al., 2020), graphs (Wang et al., 2018a), point clouds (Li et al., 2019) and multi-modal systems (Zhu et al., 2017b). Their remarkable ability of representation and generation has significantly boosted the performance of image-to-image translation and further promoted their usage in real-world applications. Despite their impressive performance, GAN models usually suffer from massive parameters and computation, which has limited their deployment on resource-restricted platforms such as mobile phones. This problem has further driven research on model compression techniques such as network pruning (Buciluǎ et al., 2006; He et al., 2018a; 2017), weight quantization (Lee et al., 2019; Nagel et al., 2019), lightweight model design (Ma et al., 2018; Sandler et al., 2018; Howard et al., 2017), neural network architecture search (Howard et al., 2019; He et al., 2018b), and knowledge distillation (Hinton et al., 2014).
Knowledge distillation (KD), which aims to improve the performance of lightweight students by transferring knowledge from an over-parameterized teacher model, has become a popular technique for model compression. By imitating the prediction results and the intermediate features of teachers, students can achieve significant performance improvements. Following its success in image classification (Hinton et al., 2014; Zhang et al., 2020), object detection (Zhang & Ma, 2021) and semantic
segmentation (Yang et al., 2022), some researchers have recently tried to apply knowledge distillation to image-to-image translation by training students to mimic the images generated by the teachers. Unfortunately, these trials usually lead to limited and sometimes even negative performance (Li et al., 2020c; Zhang et al., 2022). Some works have been proposed to distill teacher knowledge in their features and lead to positive effectiveness (Ren et al., 2021; Li et al., 2020c). However, there is still no analysis of why traditional image-based knowledge distillation fails.
In this paper, we mainly attribute the unsatisfactory performance of naive knowledge distillation to the ill-posed property of image-to-image translation. Unlike image classification, where each image always has a unique categorical label, an image can have multiple different but correct post-translation answers in image-to-image translation. For example, in Edge→Shoe translation (i.e., translating edges of shoes to photos), given an input image of edges, there are multiple corresponding images of shoes with different colors, styles, and contents. All of these images can be correct answers while the average of them may have low quality. Unfortunately, in traditional KD, the student and teacher are likely to give two different but correct predictions for the same input image. In this case, the knowledge distillation loss forces the students to learn the average between the student outputs and the teacher outputs, which can harm student performance acutely. In contrast, the ideal case to avoid this problem is to guarantee that the student and teacher output consistent answers for the same input image. However, this assumption does not always hold since the student and the teacher in traditional KD are two independent image-to-image translation models.
To address this problem, we propose IYOR (Imitate Your Own Refinement), a generalized knowledge distillation framework which introduces a different way to build the “teacher network” in knowledge distillation. Taking Edge→Shoe translation as an example, as shown in Figure 1, instead of building a teacher network which translates edges into shoes, IYOR introduces a refining network, which takes the shoe images generated by the student as inputs, refines them, and outputs shoe images of much better quality. Note that the refining network is trained with the student simultaneously and can be discarded during inference to avoid additional parameters and computations. Since the refining network has many more parameters than the student, this refining process can significantly improve the quality of images generated by students. Hence, the refined results can be considered as the “teacher outputs” in traditional knowledge distillation, and utilized as the learning targets of the students. The major advantage of IYOR is that the refining network is conditioned on the outputs of students, instead of the original input images. Hence, the refined results are more likely to be consistent with the student outputs than the teacher outputs in traditional knowledge distillation. As a result, it can alleviate the problem of ineffective knowledge distillation caused by
the ill-posed property. Extensive experiments show that dramatic performance gains on five datasets can be observed by simply replacing the traditional teacher network with the refining network.
Moreover, instead of directly training the student to imitate the images generated by the refining network pixel by pixel, we further propose SIFT distillation which adopts Scale Invariant Feature Transform (SIFT) (Lowe, 1999), a typical image feature extraction method in traditional image processing to extract the scale-invariant and highly distinctive features of the generated images and then distills them from the refining network to the students. As pointed out by abundant previous research (Lowe, 1999; 2004; Yuan et al., 2008), the features extracted by SIFT are invariant to image scaling, rotation and illumination, and highly distinctive for downstream tasks such as detection and tracking. Hence, these features carry more semantic information of the images, and they are more beneficial in knowledge distillation than traditional pixel-wise imitating. Another advantage of SIFT KD is that SIFT does not contain any trainable parameters, which makes SIFT KD generalize well on different image-to-image translation tasks as a plug-and-play knowledge distillation technique.
Experimental results on five image-to-image translation tasks have demonstrated the performance of IYOR for both paired and unpaired image-to-image translation in terms of both quantitative and qualitative analysis. Despite its simplicity, IYOR outperforms the previous nine knowledge distillation methods by a clear margin. Besides, experimental results also demonstrate that IYOR can be combined with the previous feature-based knowledge distillation methods to achieve better performance. To sum up, our main contributions can be summarized as follows.
• We propose IYOR, a knowledge distillation method for efficient image-to-image translation. To the best of our knowledge, IYOR firstly shows that the most naive image-based knowledge distillation can be effective by replacing the teacher with a refining network.
• We propose SIFT distillation, which adopts SIFT to extract the distinctive and scale-invariant features of images and distill them from the refining network to the student.
• Extensive experiments on both paired and unpaired translation tasks have demonstrated the performance of IYOR over nine previous methods and five datasets in terms of both quantitative and qualitative results. Our codes have been released for future research.
2 RELATED WORK
2.1 IMAGE-TO-IMAGE TRANSLATION WITH GANS
Remarkable progress has been achieved in image-to-image translation with the rapid development of generative adversarial networks (GANs) (Goodfellow et al., 2014; Brock et al., 2018). Pix2Pix is first proposed to perform paired image-to-image translation with conditional GANs (Isola et al., 2017). Then, Pix2PixHD is proposed to improve the generation quality with multi-scale generators and discriminators (Wang et al., 2018b). The similar idea has also been extended in text-to-image translation (Zhang et al., 2017), multi-modal image-to-image translation (Huang et al., 2018; Zhu et al., 2017c) and applications such as super-resolution and image dehazing (Wang et al., 2018d; Ledig et al., 2017; Zhang et al., 2017). In the real-world applications, the paired image-to-image translation dataset is usually not available. To address this problem, abundant methods have been proposed to perform image-to-image translation on unpaired datasets with cycle-consistency regularization (Zhu et al., 2017a; Yi et al., 2017; Kim et al., 2017). StarGAN is proposed to perform image-to-image translation for multiple domains with a single model (Choi et al., 2018), and StarGAN v2 is proposed to increase the scalability and the diversity of image-to-image translation models at the same time (Choi et al., 2020). Attention based GANs have been widely utilized to improve the performance of image-to-image translation by localizing the to-be-translated regions with attention modules (Tang et al., 2021; Chen et al., 2018; Emami et al., 2020; Alami Mejjati et al., 2018). Recently, some researchers have proposed to replace the convolutional layers in GAN with MLPmixers and vision transformers, which leads to better high-fidelity translation (Wan et al., 2021; Cazenavette & De Guevara, 2021).
2.2 KNOWLEDGE DISTILLATION
The idea of employing a large model to improve the performance of a small model was first proposed by Buciluǎ (Buciluǎ et al., 2006) for the compression of neural network ensembles. Then, Hinton et al. propose the concept of knowledge distillation, which introduces a temperature hyperparameter in the softmax layer to flatten the teacher prediction (Hinton et al., 2014). Following their
success, many researchers have proposed to not only distill the teacher knowledge in its predicted categorical probability distribution, but also the dark knowledge in features (Romero et al., 2015; Tian et al., 2019), spatial attention (Zagoruyko & Komodakis, 2017), channel-wise attention (Liu et al., 2021a; Shu et al., 2021; Li et al., 2021a), pixel-wise relation (Zhang & Ma, 2021; Li et al., 2020c; Yoon et al., 2020), instance-wise relation (Park et al., 2019b; Tung & Mori, 2019; Peng et al., 2019), task-oriented information (Zhang et al., 2020), decision boundary samples (Heo et al., 2019b), positive feature (Heo et al., 2019a) and frequency-biased information (Zhang et al., 2022) with optimization methods such as L2-norm distance (Romero et al., 2015; Yim et al., 2017), adversarial learning (Shen et al., 2019; Liu et al., 2019a; Xu et al., 2017), and contrastive learning (Tian et al., 2019; Chen et al., 2020b). Besides image classification, knowledge distillation has already been used in model compression for object detection (Chen et al., 2017; Li et al., 2017; Wang et al., 2019; Bajestani & Yang, 2020; Li et al., 2020b), semantic segmentation (Liu et al., 2019b; Park & Heo, 2020), pre-trained language models (Sanh et al., 2019; Xu et al., 2020)s and so on.
Knowledge Distillation on Image-to-Image Translation A few research has been proposed to perform knowledge distillation on image-to-image translation. Li et al. propose the framework of GAN compression, which has applied the classic L2-norm feature distillation on the intermediate neural layers (Li et al., 2020a). However, their results demonstrate that this application leads to unsatisfying performance improvements. Then, Li et al. propose the semantic relation preserving knowledge distillation, which aims to distill the relation between different patches in the generated images instead of the encoded features (Li et al., 2020c). Then, Chen et al. propose to distill image-to-image translation models with knowledge distillation not only generators but also the discriminators (Chen et al., 2020a). Similarly, Li et al. propose to revisit the discriminator in GAN compression, which transfers the knowledge in the teacher discriminator with L2-norm and texture loss (Li et al., 2021b). Jin et al. introduce the centered kernel alignment as the distance metric in knowledge distillation, which does not require additional layers for feature reshaping. Ren et al. propose to train the teacher and student GANs simultaneously, which shows the possibility of online knowledge distillation on image-to-image translation (Ren et al., 2021). Recently, motivated by the fact that tiny GANs work badly in generating high-quality high-frequency information, Zhang et al. propose to distill only the high-frequency information decomposed by discrete wavelet transformation in the images generated by teachers (Zhang et al., 2022). Besides image-to-image translation, there are also some knowledge distillation methods designed for GAN compression on the other tasks (Liu et al., 2021b; Wang et al., 2018c; Aguinaldo et al., 2019). Unfortunately, most of these knowledge distillation methods focus on distilling teacher knowledge in their features, and sufficient evidences show that directly training students to mimic the generated images from teachers leads to insufficient and even negative performance (Li et al., 2020c; Zhang et al., 2022). In contrast, this paper firstly shows that naive image-based distillation can also achieve valuable performance boosts.
3 METHODOLOGY
3.1 KNOWLEDGE DISTILLATION
In this section, we first revisit the formulation of knowledge distillation on image classification and then extend it to image-to-image translation. Given a set of training samples X = {x1, x2, ..., xn} and the corresponding ground truth Y = {y1, y2, ..., yn}, and denoting the student function and the pre-trained teacher function as fs and ft, the training loss of the classical knowledge distillation method (Hinton et al., 2014) can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot \mathrm{CE}\big(f_s(x),\,y\big)+\alpha\cdot \mathrm{KL}\big(f_s(x)/\tau,\,f_t(x)/\tau\big)\Big], \quad (1)$$
where CE and KL indicate cross-entropy loss and the Kullback-Leibler divergence, respectively. τ is the temperature hyper-parameter to soften the probability distribution and α is a hyper-parameter to balance the origin training loss and the knowledge distillation loss. When knowledge distillation is applied to image-to-image translation, since the predictions of students and teachers are the value of pixels instead of probability distributions, KL divergence can be replaced with the L1-norm loss, which is widely utilized in low-level vision. And the cross-entropy loss for classification should be replaced with the GAN training loss. Taking Pix2Pix (Isola et al., 2017) as an example, the knowledge distillation loss (Hinton et al., 2014) for training the generator can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(f_s(x),\,y\big)+\alpha\cdot L_1\big(f_s(x),\,f_t(x)\big)+\mathcal{L}_{cGAN}\big(f_s(x)\big)\Big], \quad (2)$$
where L1 indicates the L1-norm loss. LcGAN indicates the conditional GAN loss, which measures how well the generated images fool the discriminator. Note that we do not introduce LcGAN and the discriminator of GANs in detail here since they have no direct influence on our method.
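To make Eq. (2) concrete, the following PyTorch-style sketch assembles the naive image-based distillation loss for an image-to-image generator. It is only an illustrative sketch, not the authors' implementation: the `l1_loss` helper from `torch.nn.functional`, the frozen pre-trained `teacher`, the `alpha` weighting, and the omission of the conditional GAN term are all our own assumptions.

import torch
import torch.nn.functional as F

def naive_image_kd_loss(x, y, student, teacher, alpha=0.5):
    # x: input images, y: ground-truth images, shape (N, C, H, W)
    # student: lightweight generator being trained
    # teacher: large pre-trained generator, kept frozen
    student_out = student(x)
    with torch.no_grad():                                # the teacher is never updated
        teacher_out = teacher(x)
    recon_term = F.l1_loss(student_out, y)               # (1 - alpha) * L1(f_s(x), y)
    distill_term = F.l1_loss(student_out, teacher_out)   # alpha * L1(f_s(x), f_t(x))
    return (1.0 - alpha) * recon_term + alpha * distill_term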
3.2 IYOR: IMITATE YOUR OWN REFINEMENT
Instead of using two independent neural networks as the student and the teacher, we append a refining network fr after the student network; fr is trained to map the images generated by the student network to the corresponding ground truth y, i.e., to produce fr(fs(x)) ≈ y. Thus, the “teacher model” in IYOR can be written as ft = fr ◦ fs. In our implementation, fr has the same architecture as the teacher in traditional KD and hence has enough learning capacity to refine the student outputs. Note that fr can be discarded after the training period to avoid additional parameters and computation. Besides, unlike traditional KD, where the teacher is first pre-trained and then utilized to teach the student, in IYOR fr and fs are trained simultaneously. For simplicity, by denoting zs = fs(x) and zr = fr ◦ fs(x), the training objective of the refining network fr can be formulated as
$$\operatorname*{argmin}_{f_r}\;\mathbb{E}_{x,y}\Big[L_1\big(z_r,\,y\big)+\mathcal{L}_{cGAN}\big(z_r\big)\Big]. \quad (3)$$
And the training objective of the student fs can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(z_s,\,y\big)+\alpha\cdot L_1\big(z_s,\,z_r\big)+\mathcal{L}_{cGAN}\big(z_s\big)\Big]. \quad (4)$$
Note that since IYOR only distills the generators of GANs, we omit the description of the discriminators here. Besides, IYOR can be easily extended to unpaired image-to-image translation models such as CycleGAN by introducing two refining networks, one for each of the two translation directions.
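As a minimal sketch of how Eqs. (3) and (4) could be optimised jointly, the PyTorch-style step below trains the student and the refining network simultaneously. The optimiser handling, the use of `torch.nn.functional.l1_loss`, and the omission of the adversarial terms are simplifying assumptions of ours rather than the authors' code; note how `detach()` keeps the refiner conditioned on the student output without back-propagating through the student.

import torch
import torch.nn.functional as F

def iyor_training_step(x, y, student, refiner, opt_s, opt_r, alpha=0.5):
    # x: input images, y: ground-truth images; adversarial losses omitted for brevity.
    z_s = student(x)                       # student output
    z_r = refiner(z_s.detach())            # refinement; detach blocks gradients to the student

    # Eq. (3): the refiner learns to map student outputs to the ground truth.
    loss_r = F.l1_loss(z_r, y)
    opt_r.zero_grad()
    loss_r.backward()
    opt_r.step()

    # Eq. (4): the student imitates the ground truth and its own refinement.
    loss_s = ((1.0 - alpha) * F.l1_loss(z_s, y)
              + alpha * F.l1_loss(z_s, z_r.detach()))
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
    return loss_s.item(), loss_r.item()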
3.3 WHY IYOR WORKS
Consider a general knowledge distillation objective with the L1-norm and define a function G as
$$G\big(f_s(x),\,f_t(x)\big):=\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(f_s(x),\,y\big)+\alpha\cdot L_1\big(f_s(x),\,f_t(x)\big)+H\big(f_s(x)\big)\Big], \quad (5)$$
where H is a function of the student network. For simplicity, we abbreviate $\mathbb{E}_{x,y}$ as $\mathbb{E}$. The objective functions of traditional knowledge distillation (TKD) in (2) and of IYOR in (4) are specific cases of equation (5). Let $f_s^1$ and $f_s^2$ be the optimal student networks of problem (2) and of IYOR (4), respectively; we provide an assumption and a theorem to interpret the effectiveness of IYOR.
$$\text{TKD}:\; f_s^1=\operatorname*{argmin}_{f_s} G\big(f_s(x),\,f_t(x)\big), \qquad \text{IYOR}:\; f_s^2=\operatorname*{argmin}_{f_s,\,f_r} G\big(f_s(x),\,f_r\circ f_s(x)\big). \quad (6)$$
Assumption 3.1 Since our teacher $f_r \circ f_s(x)$ has more parameters than the traditional teacher $f_t(x)$, we assume that when they achieve their optimal values, the loss of IYOR is no greater than that of TKD. In other words, denoting $f_t^1$ and $f_t^2$ as the optimal teacher networks of TKD and IYOR, we have
$$G(f_s^2,\,f_t^2)\le G(f_s^1,\,f_t^1). \quad (7)$$
Theorem 3.1 Under Assumption 3.1, the L1 distance between the optimal student network and the optimal teacher network in IYOR is no greater than that in TKD, which means
$$\mathbb{E}\big[L_1(f_s^2,\,f_t^2)\big]\le\mathbb{E}\big[L_1(f_s^1,\,f_t^1)\big]. \quad (8)$$
Please refer to Appendix A for the proof. Besides, we have also explained why traditional KD methods fail on image-to-image translation with VC theory in Appendix B.
3.4 SIFT DISTILLATION
Scale-Invariant Feature Transform (SIFT) is one of the most effective and popular image descriptors in classical image processing. SIFT mainly has four steps: scale-space extrema detection, keypoint localization, orientation assignment, and keypoint description. SIFT features carry rich semantic information of the images while having a much lower dimension than the original image. Thus, distilling the SIFT features is more efficient than directly distilling the pixels of the generated images. By denoting SIFT as ϕ(·) and a loss hyper-parameter as β, the loss function of SIFT distillation in our method can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(z_s,\,y\big)+\alpha\cdot L_1\big(z_s,\,z_r\big)+\mathcal{L}_{cGAN}\big(z_s\big)+\beta\cdot L_1\big(\phi(z_s),\,\phi(z_r)\big)\Big]. \quad (9)$$
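For illustration, the sketch below shows one possible way to realise the SIFT extractor ϕ(·) with OpenCV, computing descriptors on a fixed grid of keypoints so that two images yield tensors of the same shape. The grid layout, keypoint size, and distance function are our own assumptions rather than the authors' implementation, and this OpenCV-based extraction is not differentiable, so a practical distillation loss would need a differentiable SIFT variant or gradient stopping at this stage.

import cv2
import numpy as np

def sift_features(image, grid_step=16, kp_size=16):
    # image: uint8 array of shape (H, W, 3) in RGB
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    h, w = gray.shape
    # Fixed grid of keypoints so that two images give descriptor arrays of the same shape.
    keypoints = [cv2.KeyPoint(float(x), float(y), float(kp_size))
                 for y in range(grid_step, h - grid_step, grid_step)
                 for x in range(grid_step, w - grid_step, grid_step)]
    sift = cv2.SIFT_create()
    _, descriptors = sift.compute(gray, keypoints)   # shape (num_keypoints, 128)
    return descriptors.astype(np.float32)

def sift_l1_distance(img_a, img_b):
    # A simple L1 distance between the grid-SIFT descriptors of two images.
    return float(np.abs(sift_features(img_a) - sift_features(img_b)).mean())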
4 EXPERIMENT
4.1 EXPERIMENT SETTINGS
Models and Datasets In this paper, we mainly evaluate the performance of our method with CycleGAN (Zhu et al., 2017a) for unpaired image-to-image translation, and Pix2Pix (Isola et al., 2017) and Pix2PixHD (Wang et al., 2018b) for paired image-to-image translation. The refining network in our method has an identical architecture to the original model before compression. The students in our experiments have the same network depth as the original model before compression except for fewer channels. Five datasets are utilized for quantitative evaluation, including Horse→Zebra, Maps, Edge→Shoe, Summer→Winter, and Apple→Orange. Comparison Methods We have compared our methods with nine knowledge distillation methods, including three of them which are firstly proposed for image classification and then adopted by us to image-to-image translation (Hinton et al., 2014; Ahn et al., 2019; Zagoruyko & Komodakis, 2017), and six of them which are designed for image-to-image translation (Li et al., 2020a; Jin et al., 2021; Zhang et al., 2022; Li et al., 2021b; Ren et al., 2021; Li et al., 2020c). Note that some comparison methods have both knowledge distillation and neural network pruning. Following the setting of the previous work (Zhang et al., 2022), we only compare our method with their knowledge distillation algorithms for a fair comparison.
Training and Evaluation Settings We adopt the same training settings as the original implementations of CycleGAN and Pix2Pix. Models for Edge→Shoe and the other datasets are trained for 50 and 200 epochs, respectively. Following previous works, we adopt Frechet Inception Distance (FID) as the performance metric for all datasets. A lower FID indicates that the distributions of the generated images and the real images are closer, and thus the generated images have better quality. On paired image-to-image translation, we report model performance at the last epoch. On unpaired image-to-image translation, since performance is unstable across epochs, we compute the FID every five epochs and report the lowest one. For both paired and unpaired image-to-image translation, FID is computed over only the images in the test set.
4.2 EXPERIMENT RESULTS
Quantitative Results Quantitative comparison with previous knowledge distillation methods on unpaired image-to-image translation and paired image-to-image translation datasets are shown in Table 1 and Table 2, respectively. It is observed that: (i) Directly applying the naive image-based knowledge distillation (Hinton et al., 2014) leads to very limited and even negative performance. For instance, it leads to 1.91 and 0.67 FID increments (performance drop) on Edge→Shoe with Pix2Pix and Pix2PixHD, respectively. (ii) In contrast, by replacing the teacher in naive image-based with the refining network in our method, knowledge distillation leads to consistent performance improvements. On average, 5.12 and 10.43 FID decrements (performance improvements) can be gained in paired and unpaired image-to-image translation, respectively. (iii) Combining our method with previous feature-based knowledge distillation leads to further performance improvements. For instance, on the 14.82× compressed and 6.80× compressed Horse→Zebra students, combining our method with the method of Ren et al. leads to 2.35 and 1.13 further FID decrements. (iv) Table 3 further demonstrates the effectiveness of our method in more compression ratios and more datasets. These
observations demonstrate that our method can significantly improve the performance of lightweight image-to-image translation models in a wide range of settings.
Qualitative Results Qualitative comparison between our methods and previous methods on unpaired and paired image-to-image translation datasets are shown in Figure 2 and Figure 3 , respectively. Besides, Figure 4 further shows the performance of our method on the other two datasets. It is observed that: (i) Compared with the model before compression (the teacher model), a significant performance drop can be observed on the student model trained without knowledge distillation. For instance, on Horse→Zebra, most student models can not transform the whole body of horses into stripes. Some previous knowledge distillation methods (e.g. Zhang et al., Ren et al.) can alleviate this problem while our method leads to much better performance. (ii) On the Maps translation task, the buildings and the roads generated by students trained with previous knowledge distillation methods are fuzzy. In contrast, our method can generate clearer shapes and edges for buildings, roads, and rivers. (iii) On Edge→Shoe, the images generated by the students trained without knowledge distillation usually have severe corruption such as the holes in high-heeled shoes. In contrast, the images generated by our methods have better quality in terms of highlights, shapes, and colors. (iv) On Winter→Summer, the model trained by our method can successfully remove the snow on the plants. On Apple→Orange, the images generated by our method have much less corruption than the baseline model. These results demonstrate that students trained by IYOR achieve better performance in terms of not only statistical scores but also human vision.
[Figure 5 plot: FID (y-axis, roughly 50–200) versus training epoch (x-axis, 0–200), with one curve for Hinton KD and one for our method.]
Figure 5: Comparison between our method and Hinton KD on the FID between students and teachers on Horse→Zebra with CycleGAN.
Table 4: Ablation study on SIFT distillation and the usage of the refining network on Horse→Zebra with CycleGAN students.
#Params  FLOPs  Refining  SIFT  FID↓         Δ↑
1.61     7.29   ×         ×     70.54±9.63   –
1.61     7.29   ✓         ×     59.31±2.89   11.23
1.61     7.29   ×         ✓     63.17±3.66   7.37
1.61     7.29   ✓         ✓     56.45±2.59   14.09
0.72     3.35   ×         ×     85.04±6.88   –
0.72     3.35   ✓         ×     72.53±3.15   12.51
0.72     3.35   ×         ✓     78.11±1.71   6.93
0.72     3.35   ✓         ✓     69.67±5.32   15.37
5 DISCUSSION
5.1 ABLATION STUDY
In this paper, we mainly propose two knowledge distillation techniques, including (a) learning from a refining network instead of a teacher network and (b) SIFT KD. Table 4 shows the ablation study of the two techniques on Horse→Zebra with CycleGAN. It is observed that on the 7.08× and 15.81× compressed students: (i) 11.23 and 12.51 FID decrements can be observed by replacing the teacher network in traditional knowledge distillation with a refining network, respectively. (ii) 7.37 and 6.93 FID decrements can be observed by applying SIFT KD, respectively. (iii) 14.09 and 15.37 FID decrements can be obtained by combining the two techniques, respectively. These observations indicate that both techniques have their own merits and their benefits are orthogonal.
5.2 STUDENT-TEACHER SIMILARITY
In this subsection, we show that the refining network in IYOR produces outputs that are more consistent with the student than the teacher in traditional KD does. Figure 5 shows the FID between the images generated by the student and those generated by the refining network (our method) or the teacher (traditional KD). Note that a lower FID here indicates a larger student-teacher similarity. It is observed that our method leads to lower FID during the whole training period, indicating that compared with the teachers in traditional KD, the images generated by the refining network in our method are more likely to be consistent with the images generated by the students. Besides, since FID measures the distance between the distributions of images generated by the student and the teacher, this observation also implies that the student in our method can learn teacher knowledge more effectively.
6 CONCLUSION
Due to the ill-posed property of image-to-image translation, directly applying traditional knowledge distillation usually leads to unsatisfactory and even negative impacts. To address this problem, we propose a new knowledge distillation method, named IYOR (imitate your own refinement), in which a refining network replaces the teacher network in traditional KD. During the training phase, the refining network strives to improve the quality of images generated by the students instead of generating images from the inputs. Hence, the refined results can be better learning targets than the teacher outputs that are used in traditional KD. Extensive quantitative and qualitative results have demonstrated that IYOR outperforms existing nine approaches in both paired and unpaired translation. Besides, SIFT knowledge distillation is also introduced to improve the effectiveness of knowledge distillation by extracting the distinctive and scale-invariant features of images and then distilling them from teachers to students. Furthermore, we have analyzed why traditional KD fails and IYOR works well on image-to-image translation theoretically.
A THE PROOF FOR THEOREM 3.1
Proof According to Assumption 3.1, we have
$$\mathbb{E}\Big[(1-\alpha)\cdot L_1(f_s^2, y)+\alpha\cdot L_1(f_s^2, f_t^2)+H(f_s^2)\Big]\le\mathbb{E}\Big[(1-\alpha)\cdot L_1(f_s^1, y)+\alpha\cdot L_1(f_s^1, f_t^1)+H(f_s^1)\Big]. \quad (10)$$
Since $f_s^1$ is the optimal solution of the TKD problem in (6), we have $G(f_s^1, f_t^1)\le G(f_s^2, f_t^1)$, which implies
$$\mathbb{E}\Big[(1-\alpha)\cdot L_1(f_s^1, y)+\alpha\cdot L_1(f_s^1, f_t^1)+H(f_s^1)\Big]\le\mathbb{E}\Big[(1-\alpha)\cdot L_1(f_s^2, y)+\alpha\cdot L_1(f_s^2, f_t^2)+H(f_s^2)\Big]. \quad (11)$$
Combining equation (10) and equation (11), we have
$$\mathbb{E}\big[L_1(f_s^2, f_t^2)\big]\le\mathbb{E}\big[L_1(f_s^1, f_t^1)\big]. \quad (12)$$
□
B ANALYSING KNOWLEDGE DISTILLATION WITH VC THEORY
Recent evidence shows that directly applying the naive Hinton et al. knowledge distillation (Hinton et al., 2014; Zhang et al., 2022; Li et al., 2020c) to image-to-image translation usually leads to limited and even negative performance. In this subsection, we try to explain this observation from the perspective of VC theory based on generalized knowledge distillation (Lopez-Paz et al., 2016). Denoting a function class as F, the student function, the teacher function and the oracle real target function can be written as fs ∈ Fs, ft ∈ Ft, and f ∈ F, respectively. Given n training samples, we can assume that the student function fs and the teacher function ft may learn the true function f at rates αs and αt, which can be formulated as
$$R(f_s)-R(f)\le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\Big)+\varepsilon_s \quad \text{and} \quad R(f_t)-R(f)\le O\Big(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\Big)+\varepsilon_t, \ \text{respectively}, \quad (13)$$
where the O(·) term is the estimation error, and εs and εt are the approximation errors of the student function class Fs and the teacher function class Ft with respect to f ∈ F. A higher α indicates that the learning problem is easier to solve. Then, we can assume that the student learns from the teacher at the rate αkd with the approximation error εkd, which can be formulated as
$$R(f_s)-R(f_t)\le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\Big)+\varepsilon_{kd}. \quad (14)$$
As pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the teacher model has more parameters than the student, we can assume the teacher function can learn the true function with a higher rate, indicating αt > αs and αt > αkd. By combining (13) and (14), we have the following inequality.
$$\begin{aligned} R(f_s)-R(f) &= \big(R(f_s)-R(f_t)\big)+\big(R(f_t)-R(f)\big) \\ &\le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\Big)+\varepsilon_{kd}+O\Big(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\Big)+\varepsilon_t \;\le\; O\Big(\frac{|\mathcal{F}_s|_C+|\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\Big)+\varepsilon_{kd}+\varepsilon_t. \end{aligned} \quad (15)$$
Thus, given a learning task, we can study whether knowledge distillation works well on this task by analyzing whether the following inequality
$$O\Big(\frac{|\mathcal{F}_s|_C+|\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\Big)+\varepsilon_{kd}+\varepsilon_t \;\le\; O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\Big)+\varepsilon_s \quad (16)$$
holds. Since the teacher model usually has more parameters than the student model, $|\mathcal{F}_s|_C+|\mathcal{F}_t|_C\le|\mathcal{F}_s|_C$ usually does not hold in knowledge distillation. Thus, the inequality highlights that the benefits of knowledge distillation arise because of $\varepsilon_{kd}+\varepsilon_t\le\varepsilon_s$ and $\alpha_{kd}>\alpha_s$.
In image classification, as pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the soft labels ft(x) (the probability distribution) of teachers contain more information than the one-hot label y, students can learn from teachers at a higher rate than from the true function, indicating that αkd > αs (Lopez-Paz et al., 2016). Besides, since the label for an input image is unique, learning the true function does not conflict with learning the teacher function, and thus it is safe to assume that εs ≥ εt + εkd. In contrast, in image-to-image translation, since the predictions of students and teachers are pixel values instead of probability distributions, there is no additional information in ft(x) compared with the ground truth. Thus αkd > αs does not hold. Moreover, since image-to-image translation is an ill-posed problem, the predictions of students and teachers may be different but correct answers for the same input image, indicating that εs ≥ εt + εkd also does not hold. These observations demonstrate that inequality (16) does not hold in image-to-image translation, which can explain the limited performance of directly applying Hinton et al. knowledge distillation to image-to-image translation.
Instead of distilling the generated images, some recent knowledge distillation methods have been proposed to distill teacher knowledge in their features. Since there is more information contained in teacher features than ground-truth images, these methods can be considered as a guarantee for αkd >αs. In contrast, IYOR aims to improve knowledge distillation by addressing the ill-posed property, implying εs ≥ εt + εkd. Since IYOR and previous feature-based methods have different perspectives to support inequality (16), their benefits are orthogonal and can be combined.
C DETAILED EXPERIMENT SETTINGS
We follow the official codes of CycleGAN and Pix2Pix1 to conduct our experiments. Models on Edge→Shoe are trained for 50 epochs. Models on the other datasets are trained for 200 epochs. The momentum of the Adam optimizer is 0.5. In all the experiments, we set α = 1 and β = 1. The initial learning rate is 0.0002. The LSGAN objective is used for adversarial training. The discriminator is a 70x70 PatchGAN. In the experiments of CycleGAN, the backbone of the generators of both students and teachers (refining networks) is a ResNet with six blocks. Their main difference is that the student backbone has many fewer channels than the teacher. Batch size is set to 1 for both training and inference. We compute the FID scores based on Pytorch-FID 2, a well-known Python package. We find that some previous works compute the FID for unpaired image-to-image translation by using the images in both the training set and the test set to achieve more stable performance. However, we believe this practice of accessing the test images during training is not reasonable. Hence, we choose to compute the FID on only the test set. As claimed by previous research Jin et al. (2021)3, this makes the FID scores in our experiments around 5-6 lower than those of previous works that report FID on both the training and test sets.
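For completeness, here is a small sketch of how FID on the test set could be computed with the Pytorch-FID package mentioned above; the folder paths are hypothetical and the exact call signature should be checked against the installed package version.

import torch
from pytorch_fid import fid_score

def compute_test_fid(fake_dir, real_dir, batch_size=50, dims=2048):
    # fake_dir: folder of images generated on the test set
    # real_dir: folder of the corresponding real test images
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return fid_score.calculate_fid_given_paths(
        [fake_dir, real_dir], batch_size, device, dims
    )

# Example usage (hypothetical paths):
# fid = compute_test_fid("results/horse2zebra/fake_B", "datasets/horse2zebra/testB")
# print(f"FID on the test set: {fid:.2f}")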
D INFLUENCE FROM HYPER-PARAMETERS
In this paper, we mainly have two hyper-parameters, α and β, to balance the magnitudes of the knowledge distillation loss and the original GAN training loss. A hyper-parameter sensitivity study on Horse→Zebra with 15.81× compressed students is presented in Figure 6. Note that the reported value is FID (lower is better). It is observed that: (i) With the worst α, our method achieves 70.21 FID, which is still 14.83 lower than the student trained without KD, and 6.83 lower than the second-best KD method. (ii) With the worst β, our method achieves 70.54 FID, which is still 14.50 lower than the student trained without KD, and 6.50 lower than the second-best KD method. These observations indicate that our method is not sensitive to the values of the hyper-parameters.
E INFLUENCE FROM THE SIZE OF THE REFINING NETWORK
In our experiments, the refining network has the same architecture as the teacher network in traditional KD, which is also the same as the image-to-image translation model before compression. In
1https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/ 2https://github.com/mseitzer/pytorch-fid 3https://github.com/snap-research/CAT
this section, we study the influence of the size of the refining network. As shown in Table 5: (i) With more parameters, the refining network can achieve a very low FID, which indicates that its refinements have good quality; at the same time, the student can also be trained better and achieves a relatively lower FID. (ii) When the refining network does not have enough parameters, the refinement has a relatively higher FID and the effect of knowledge distillation is not significant. These observations indicate that a refining network with enough parameters has a positive influence on the performance of knowledge distillation. In contrast, when the refining network does not have enough parameters, it cannot successfully refine the images generated by the student, which leads to limited knowledge distillation performance.
F EXPERIMENTS ON CITYSCAPES
Following previous research Zhu et al. (2017a); Park et al. (2019a), we have also evaluated our method on Cityscapes (Cordts et al., 2016). Cityscapes was originally proposed as a dataset for autonomous driving, covering tasks such as detection and segmentation. In our experiments, we take the semantic segmentation mask as the input and the natural street images as the label to train the image-to-image translation models. Then, we adopt the mIoU of a pre-trained FCN model on the generated images as the performance metric. A higher mIoU indicates that the image-to-image translation model has better performance. Our experimental results are shown in Table 6. It is observed that the student trained with our method achieves a 2.17 mIoU improvement, which is 0.71 higher than the second-best method.
G PYTHON-STYLE PSEUDO CODE
The following code block presents a brief implementation of IYOR.
# The pseudo code of IYOR
def IYOR(x, student, refiner, sift):
    # x: the input image; student: the student network
    # refiner: the refining network
    # sift: a function to extract SIFT features
    student_output = student(x)
    # detach so that no gradient flows from the refining network back into the student output
    refinement = refiner(student_output.detach())
    # pixel-wise imitating
    kd_loss = l1_loss(student_output, refinement)
    # SIFT distillation
    kd_loss += l1_loss(sift(student_output), sift(refinement))
    return kd_loss
| 1. What is the main contribution of the paper, and how does it address the problem of knowledge distillation for image-to-image translation?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to improve FID scores?
3. How does the inclusion of SIFT KD affect the focus and effectiveness of the manuscript, and could it be presented as a separate contribution?
4. Are there any concerns regarding the mathematical support provided, especially regarding Assumption 3.1 and the generalizability of the framework?
5. How could the clarity and quality of the writing be improved, specifically in the introduction and related work sections? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present a new solution for knowledge distillation under the task of Image-to-Image translation. They propose that instead of using the naïve approach and distilling knowledge from a larger Image-to-Image translation network, one could distill knowledge from an image refinement network that is applied to the output of the student. They show this change to result in state-of-the-art FID scores. Furthermore the authors also introduce another signal to the knowledge distillation (KD) process based on the classic SIFT image descriptor.
Strengths And Weaknesses
The strength of the paper is in how the authors have identified a problem, hypothesised about the cause and proposed a simple and effective solution. The authors present a good motivation for the refinement distillation, arguing that since the task of Image-to-Image Translation (I2I Translation) can present multiple correct ‘answers’ for the same input, performing distillation between disparate networks does not guarantee they would both generate the same ‘correct answer’. This motivation is followed by a strong set of experiments that show the benefit of the proposed model against other methods that perform KD for I2I translation.
I believe the main weakness of the proposed model to be the inclusion of SIFT KD as one of the contributions. The motivation for SIFT KD does not carry it as well as the motivation for the refinement distillation. The studies cited to corroborate the effectiveness of SIFT as an image representation framework are all pre-deep learning, which wouldn’t be a problem in isolation, but it becomes one since it is common for other KD models to use mid-network representations to improve their results. This calls for a thorough comparison between these two choices of representation to drive KD. Furthermore, the paper is written around the refinement distillation concept, from the title to the mathematical support, leaving the SIFT KD contribution looking like an afterthought, even if the empirical results show the benefit of using such representation. I believe the inclusion blurs the focus of the manuscript and since the performance improvement is significant, perhaps SIFT KD could be presented on its own and properly motivated and explored in future work.
Besides SIFT KD, I believe the mathematical support given doesn’t fully work. In the introduction the authors propose that this is a generalised knowledge distillation framework, which was corroborated in section 3.2. However since other knowledge distillation tasks are not fit to the proposed framework it is hard to see it as general. Even the “general” function G in equation (5) cannot be applied to the most common task of classification, and it’s hard to see how it would apply to a task dissimilar to image generation. My biggest concern however regards Assumption 3.1. While it follows the intuitive concept that more parameters lead to a better solution, the tasks here are not comparable in that sense. The traditional TKD method is using a pre-trained teacher that is frozen and was trained using the GAN framework while the IYOR method is using an online learning method and the teacher is trained only on L1 loss for image refinement. Without equal ground it is hard to say if more parameters guarantee lower losses and it is therefore hard to make that assumption for the proof that IYOR leads to closer teacher-student performances.
Clarity, Quality, Novelty And Reproducibility
Writing is clear on most of the paper, with the Introduction and Related Work popping up as weaker sections. Writing could be improved in general for them. At the end of this section I include some notes on typos. Section 2.2 is heavy with references, so much so that it makes the text hard to parse; the authors could keep the references there to papers more closely related to the ideas of the manuscript since most of them are listed after concepts that are not used by the manuscript nor are they explained within the context of the citation (e.g.: spatial attention (…), pixel-wise attention (…), instance-wise attention (…)).
The tables look nice and are easy to read but they also use a lot of unnecessary space. By including network scales, FLOPs and number of parameters within the captions or in a separate table (using names to reference such as ‘CycleGAN-S1’ for example) the authors could merge many of the sub-tables into one, such as the ones in Table 1 and 2.
It is not clear from the writing how training of the refinement network (the ‘teacher’) is done. One can assume that gradient does not flow from the student to the teacher, which requires gradient to be stopped; is that assumption correct?
In Section 4.1 the authors mention that when training unpaired I2I translation the final model was selected from a collection of epoch-based checkpoints by using FID. Was this procedure done for the competing models reported? Was FID computed over training/validation set (and not test set) for this selection?
In Section 4.2 the authors mention that in Figures 2 and 3 there is a comparison against the student model trained without KD. Which one of the images in Figures 2 and 3 represent this model? Still regarding these figures, I believe it would be beneficial to the reader if the images were ordered by their FID; comparing the worst model (baseline from Hinton et al) against the best (the presented one) makes it harder to distinguish the finer changes between the presented models and others closer to the state-of-the-art.
Extra notes:
There is some writing on top of Figure 4.
Sec. 1: dominated -> dominating
Sec. 1: them to deploy -> them from deploying
Sec. 1: further raises -> incentivises or drives
Sec. 1: there is a comma before the start of sentence (“, Recently, “)
Sec. 1: negative performance -> negative transfer
Sec. 1: caussed -> caused
Assumption 3.1: TKD is less than IYOR -> order is inverted
Sec. 4.1: origin -> original
Sec. 5.2: FID between -> FID difference/comparison between |
ICLR | Title
Imitate Your Own Refinement: Knowledge Distillation Sheds Light on Efficient Image-to-Image Translation
Abstract
The excellent performance of the state-of-the-art Generative Adversarial Networks (GANs) is always accompanied by enormous parameters and computations, making them unaffordable on resource-limited mobile devices. As an effective model compression technique, knowledge distillation (KD) has been proposed to transfer the knowledge from a cumbersome teacher to a lightweight student. Following its success on classification, some recent works have applied KD to GAN-based image-to-image translation but lead to unsatisfactory performance. In this paper, to tackle this challenge, we propose a novel knowledge distillation framework named IYOR (Imitate Your Own Refinement), which consists of the following two techniques. Firstly, since image-to-image translation is an ill-posed problem, knowledge distillation on image-to-image translation may force the student to learn the average results between multiple correct answers and thus harm student performance. To address this problem, we propose to replace the teacher network in knowledge distillation with a refining network, which is trained to refine the images generated by the student to make them more realistic. During the training period, the refining network and the student are trained simultaneously, and the student is trained to imitate the refined results in a knowledge distillation manner. Secondly, instead of only distilling the knowledge in the generated images, we propose SIFT KD, which first extracts the distinctive and scale-invariant features of the generated images with Scale-invariant feature transform (SIFT), and then distills them from the refining network to the student. Extensive experimental results demonstrate the effectiveness of our method on five datasets with nine previous knowledge distillation methods. Our codes are available in the supplementary material and will be released on Github.
1 INTRODUCTION
In the last decade, Generative Adversarial Networks (GANs) have evolved into one of the most dominant methods for content generation of images (Isola et al., 2017; Zhu et al., 2017a), videos (Vondrick et al., 2016), text (Zhang et al., 2016), audio (Kong et al., 2020), graphs (Wang et al., 2018a), point clouds (Li et al., 2019) and multi-modal systems (Zhu et al., 2017b). Their remarkable ability of representation and generation has significantly boosted the performance of image-to-image translation and further promoted their usage in real-world applications. Despite their impressive performance, GAN models usually suffer from massive parameters and computation, which has limited their deployment on resource-restricted platforms such as mobile phones. This problem has further driven research on model compression techniques such as network pruning (Buciluǎ et al., 2006; He et al., 2018a; 2017), weight quantization (Lee et al., 2019; Nagel et al., 2019), lightweight model design (Ma et al., 2018; Sandler et al., 2018; Howard et al., 2017), neural network architecture search (Howard et al., 2019; He et al., 2018b), and knowledge distillation (Hinton et al., 2014).
Knowledge distillation (KD), which aims to improve the performance of lightweight students by transferring knowledge from an over-parameterized teacher model, has become a popular technique for model compression. By imitating the prediction results and the intermediate features of teachers, students can achieve significant performance improvements. Following its success in image classification (Hinton et al., 2014; Zhang et al., 2020), object detection (Zhang & Ma, 2021) and semantic
segmentation (Yang et al., 2022), some researchers have recently tried to apply knowledge distillation to image-to-image translation by training students to mimic the images generated by the teachers. Unfortunately, these trials usually lead to limited and sometimes even negative performance (Li et al., 2020c; Zhang et al., 2022). Some works have been proposed to distill teacher knowledge in their features and lead to positive effectiveness (Ren et al., 2021; Li et al., 2020c). However, there is still no analysis of why traditional image-based knowledge distillation fails.
In this paper, we mainly attribute the unsatisfactory performance of naive knowledge distillation to the ill-posed property of image-to-image translation. Unlike image classification, where each image always has a unique categorical label, an image can have multiple different but correct post-translation answers in image-to-image translation. For example, in Edge→Shoe translation (i.e., translating edges of shoes to photos), given an input image of edges, there are multiple corresponding images of shoes with different colors, styles, and contents. All of these images can be correct answers while the average of them may have low quality. Unfortunately, in traditional KD, the student and teacher are likely to give two different but correct predictions for the same input image. In this case, the knowledge distillation loss forces the students to learn the average between the student outputs and the teacher outputs, which can harm student performance acutely. In contrast, the ideal case to avoid this problem is to guarantee that the student and teacher output consistent answers for the same input image. However, this assumption does not always hold since the student and the teacher in traditional KD are two independent image-to-image translation models.
To address this problem, we propose IYOR (Imitate Your Own Refinement), a generalized knowledge distillation framework which introduces a different way to build the “teacher network” in knowledge distillation. Taking Edge→Shoe translation as an example, as shown in Figure 1, instead of building a teacher network which translates edges into shoes, IYOR introduces a refining network, which takes the shoe images generated by the student as inputs, refines them, and outputs shoe images of much better quality. Note that the refining network is trained with the student simultaneously and can be discarded during inference to avoid additional parameters and computations. Since the refining network has many more parameters than the student, this refining process can significantly improve the quality of images generated by students. Hence, the refined results can be considered as the “teacher outputs” in traditional knowledge distillation, and utilized as the learning targets of the students. The major advantage of IYOR is that the refining network is conditioned on the outputs of students, instead of the original input images. Hence, the refined results are more likely to be consistent with the student outputs than the teacher outputs in traditional knowledge distillation. As a result, it can alleviate the problem of ineffective knowledge distillation caused by
the ill-posed property. Extensive experiments show that dramatic performance gains on five datasets can be observed by simply replacing the traditional teacher network with the refining network.
Moreover, instead of directly training the student to imitate the images generated by the refining network pixel by pixel, we further propose SIFT distillation which adopts Scale Invariant Feature Transform (SIFT) (Lowe, 1999), a typical image feature extraction method in traditional image processing to extract the scale-invariant and highly distinctive features of the generated images and then distills them from the refining network to the students. As pointed out by abundant previous research (Lowe, 1999; 2004; Yuan et al., 2008), the features extracted by SIFT are invariant to image scaling, rotation and illumination, and highly distinctive for downstream tasks such as detection and tracking. Hence, these features carry more semantic information of the images, and they are more beneficial in knowledge distillation than traditional pixel-wise imitating. Another advantage of SIFT KD is that SIFT does not contain any trainable parameters, which makes SIFT KD generalize well on different image-to-image translation tasks as a plug-and-play knowledge distillation technique.
Experimental results on five image-to-image translation tasks have demonstrated the performance of IYOR for both paired and unpaired image-to-image translation in terms of both quantitative and qualitative analysis. Despite its simplicity, IYOR outperforms the previous nine knowledge distillation methods by a clear margin. Besides, experimental results also demonstrate that IYOR can be combined with the previous feature-based knowledge distillation methods to achieve better performance. To sum up, our main contributions can be summarized as follows.
• We propose IYOR, a knowledge distillation method for efficient image-to-image translation. To the best of our knowledge, IYOR firstly shows that the most naive image-based knowledge distillation can be effective by replacing the teacher with a refining network.
• We propose SIFT distillation, which adopts SIFT to extract the distinctive and scale-invariant features of images and distill them from the refining network to the student.
• Extensive experiments on both paired and unpaired translation tasks have demonstrated the performance of IYOR over nine previous methods and five datasets in terms of both quantitative and qualitative results. Our codes have been released for future research.
2 RELATED WORK
2.1 IMAGE-TO-IMAGE TRANSLATION WITH GANS
Remarkable progress has been achieved in image-to-image translation with the rapid development of generative adversarial networks (GANs) (Goodfellow et al., 2014; Brock et al., 2018). Pix2Pix is first proposed to perform paired image-to-image translation with conditional GANs (Isola et al., 2017). Then, Pix2PixHD is proposed to improve the generation quality with multi-scale generators and discriminators (Wang et al., 2018b). The similar idea has also been extended in text-to-image translation (Zhang et al., 2017), multi-modal image-to-image translation (Huang et al., 2018; Zhu et al., 2017c) and applications such as super-resolution and image dehazing (Wang et al., 2018d; Ledig et al., 2017; Zhang et al., 2017). In the real-world applications, the paired image-to-image translation dataset is usually not available. To address this problem, abundant methods have been proposed to perform image-to-image translation on unpaired datasets with cycle-consistency regularization (Zhu et al., 2017a; Yi et al., 2017; Kim et al., 2017). StarGAN is proposed to perform image-to-image translation for multiple domains with a single model (Choi et al., 2018), and StarGAN v2 is proposed to increase the scalability and the diversity of image-to-image translation models at the same time (Choi et al., 2020). Attention based GANs have been widely utilized to improve the performance of image-to-image translation by localizing the to-be-translated regions with attention modules (Tang et al., 2021; Chen et al., 2018; Emami et al., 2020; Alami Mejjati et al., 2018). Recently, some researchers have proposed to replace the convolutional layers in GAN with MLPmixers and vision transformers, which leads to better high-fidelity translation (Wan et al., 2021; Cazenavette & De Guevara, 2021).
2.2 KNOWLEDGE DISTILLATION
The idea of employing a large model to improve the performance of a small model was first proposed by Buciluǎ et al. (Buciluǎ et al., 2006) for compressing neural network ensembles. Then, Hinton et al. proposed the concept of knowledge distillation, which introduces a temperature hyperparameter in the softmax layer to soften the teacher prediction (Hinton et al., 2014). Following their
success, many researchers have proposed to not only distill the teacher knowledge in its predicted categorical probability distribution, but also the dark knowledge in features (Romero et al., 2015; Tian et al., 2019), spatial attention (Zagoruyko & Komodakis, 2017), channel-wise attention (Liu et al., 2021a; Shu et al., 2021; Li et al., 2021a), pixel-wise relation (Zhang & Ma, 2021; Li et al., 2020c; Yoon et al., 2020), instance-wise relation (Park et al., 2019b; Tung & Mori, 2019; Peng et al., 2019), task-oriented information (Zhang et al., 2020), decision boundary samples (Heo et al., 2019b), positive feature (Heo et al., 2019a) and frequency-biased information (Zhang et al., 2022) with optimization methods such as L2-norm distance (Romero et al., 2015; Yim et al., 2017), adversarial learning (Shen et al., 2019; Liu et al., 2019a; Xu et al., 2017), and contrastive learning (Tian et al., 2019; Chen et al., 2020b). Besides image classification, knowledge distillation has already been used in model compression for object detection (Chen et al., 2017; Li et al., 2017; Wang et al., 2019; Bajestani & Yang, 2020; Li et al., 2020b), semantic segmentation (Liu et al., 2019b; Park & Heo, 2020), pre-trained language models (Sanh et al., 2019; Xu et al., 2020)s and so on.
Knowledge Distillation on Image-to-Image Translation A few works have been proposed to perform knowledge distillation on image-to-image translation. Li et al. propose the framework of GAN compression, which applies the classic L2-norm feature distillation to the intermediate neural layers (Li et al., 2020a). However, their results demonstrate that this application leads to unsatisfying performance improvements. Then, Li et al. propose semantic relation preserving knowledge distillation, which aims to distill the relation between different patches in the generated images instead of the encoded features (Li et al., 2020c). Then, Chen et al. propose to distill image-to-image translation models by distilling not only the generators but also the discriminators (Chen et al., 2020a). Similarly, Li et al. propose to revisit the discriminator in GAN compression, which transfers the knowledge in the teacher discriminator with L2-norm and texture losses (Li et al., 2021b). Jin et al. introduce centered kernel alignment as the distance metric in knowledge distillation, which does not require additional layers for feature reshaping. Ren et al. propose to train the teacher and student GANs simultaneously, which shows the possibility of online knowledge distillation on image-to-image translation (Ren et al., 2021). Recently, motivated by the fact that tiny GANs work badly in generating high-quality high-frequency information, Zhang et al. propose to distill only the high-frequency information, decomposed by discrete wavelet transformation, in the images generated by teachers (Zhang et al., 2022). Besides image-to-image translation, there are also some knowledge distillation methods designed for GAN compression on other tasks (Liu et al., 2021b; Wang et al., 2018c; Aguinaldo et al., 2019). Unfortunately, most of these knowledge distillation methods focus on distilling teacher knowledge in their features, and sufficient evidence shows that directly training students to mimic the generated images from teachers leads to limited and even negative performance (Li et al., 2020c; Zhang et al., 2022). In contrast, this paper is the first to show that naive image-based distillation can also achieve valuable performance boosts.
3 METHODOLOGY
3.1 KNOWLEDGE DISTILLATION
In this section, we first revisit the formulation of knowledge distillation on image classification and then extend it to image-to-image translation. Given a set of training samples X = {x1, x2, ..., xn} and the corresponding ground truth Y = {y1, y2, ..., yn}, and denoting the student function and the pre-trained teacher function as fs and ft, the training loss of the classical knowledge distillation method (Hinton et al., 2014) can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot \mathrm{CE}\big(f_s(x), y\big) + \alpha\cdot \mathrm{KL}\big(f_s(x)/\tau,\; f_t(x)/\tau\big)\Big], \qquad (1)$$
where CE and KL indicate the cross-entropy loss and the Kullback-Leibler divergence, respectively. τ is the temperature hyper-parameter to soften the probability distribution and α is a hyper-parameter to balance the original training loss and the knowledge distillation loss. When knowledge distillation is applied to image-to-image translation, since the predictions of students and teachers are pixel values instead of probability distributions, the KL divergence can be replaced with the L1-norm loss, which is widely utilized in low-level vision. And the cross-entropy loss for classification should be replaced with the GAN training loss. Taking Pix2Pix (Isola et al., 2017) as an example, the knowledge distillation loss (Hinton et al., 2014) for training the generator can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(f_s(x), y\big) + \alpha\cdot L_1\big(f_s(x), f_t(x)\big) + \mathcal{L}_{cGAN}\big(f_s(x)\big)\Big], \qquad (2)$$
where L1 indicates the L1-norm loss and LcGAN indicates the conditional GAN loss, which measures how well the generated images fool the discriminator. Note that we do not introduce LcGAN and the discriminator of GANs in detail here since they have no direct influence on our method.
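To make Eq. (2) concrete, the following is a minimal PyTorch-style sketch of the naive image-based distillation loss for a Pix2Pix-like generator. The helper gan_loss_fn and the default value of alpha are illustrative assumptions rather than the exact implementation used in this paper.

import torch
import torch.nn.functional as F

def naive_image_kd_loss(x, y, student, teacher, gan_loss_fn, alpha=0.5):
    # Eq. (2): (1 - alpha) * L1(f_s(x), y) + alpha * L1(f_s(x), f_t(x)) + L_cGAN(f_s(x))
    fake_s = student(x)                        # student translation f_s(x)
    with torch.no_grad():
        fake_t = teacher(x)                    # frozen, pre-trained teacher translation f_t(x)
    loss = (1 - alpha) * F.l1_loss(fake_s, y)  # supervision from the paired ground truth
    loss = loss + alpha * F.l1_loss(fake_s, fake_t)  # pixel-wise imitation of the teacher
    loss = loss + gan_loss_fn(fake_s, x)       # conditional GAN loss (discriminator update omitted)
    return loss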
3.2 IYOR: IMITATE YOUR OWN REFINEMENT
Instead of using two independent neural networks as the student and the teacher, we append a refining network fr after the student network, which is trained to translate the images generated by the student network, fr(fs(x)), to the corresponding ground truth y. Thus, the “teacher model” in IYOR can be written as ft = fr ◦ fs. In our implementation, fr has the same architecture as the teacher in traditional KD and hence it has sufficient capacity to refine student outputs. Note that fr can be discarded after training to avoid additional parameters and computation. Besides, unlike traditional KD where the teacher is first pre-trained and then utilized to teach the student, in IYOR, fr and fs are trained simultaneously. For simplicity, by denoting zs = fs(x) and zr = fr ◦ fs(x), the training objective of the refining network fr can be formulated as
$$\operatorname*{argmin}_{f_r}\;\mathbb{E}_{x,y}\Big[L_1\big(z_r, y\big) + \mathcal{L}_{cGAN}\big(z_r\big)\Big]. \qquad (3)$$
And the training objective of the student fs can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(z_s, y\big) + \alpha\cdot L_1\big(z_s, z_r\big) + \mathcal{L}_{cGAN}\big(z_s\big)\Big]. \qquad (4)$$
Note that since IYOR only distills the generators of GANs, we omit the description of discriminators here. Besides, IYOR can be easily extended to unpaired image-to-image translation models such as CycleGAN by introducing one refining network for each of the two translation directions.
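As a concrete illustration of Eqs. (3) and (4), below is a minimal PyTorch-style sketch of one joint training step of the student and the refining network. The optimizer setup and the gan_loss_fn helper are our assumptions for the sketch; detaching the student output before refinement follows the pseudo-code in Appendix G.

import torch
import torch.nn.functional as F

def iyor_training_step(x, y, student, refiner, gan_loss_fn,
                       opt_student, opt_refiner, alpha=0.5):
    # --- update the refining network f_r with Eq. (3) ---
    z_s = student(x)
    z_r = refiner(z_s.detach())          # refine the student output; no gradient into the student
    loss_r = F.l1_loss(z_r, y) + gan_loss_fn(z_r, x)
    opt_refiner.zero_grad()
    loss_r.backward()
    opt_refiner.step()

    # --- update the student f_s with Eq. (4) ---
    z_s = student(x)
    with torch.no_grad():
        z_r = refiner(z_s)               # refined output used as the distillation target
    loss_s = (1 - alpha) * F.l1_loss(z_s, y) \
             + alpha * F.l1_loss(z_s, z_r) \
             + gan_loss_fn(z_s, x)
    opt_student.zero_grad()
    loss_s.backward()
    opt_student.step()
    return loss_s.item(), loss_r.item()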
3.3 WHY IYOR WORKS
Consider a general knowledge distillation objective with the L1-norm and define a function G as
$$G\big(f_s(x), f_t(x)\big) := \mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(f_s(x), y\big) + \alpha\cdot L_1\big(f_s(x), f_t(x)\big) + H\big(f_s(x)\big)\Big], \qquad (5)$$
where H is a function of the student network. For simplicity, we abbreviate $\mathbb{E}_{x,y}$ as $\mathbb{E}$. The objective functions of traditional knowledge distillation (TKD) (2) and IYOR (4) are specific cases of equation (5). Let $f_s^1$ and $f_s^2$ be the optimal student networks of problem (2) and IYOR (4), respectively; we provide an assumption and a theorem to interpret the effectiveness of IYOR.
$$\text{TKD}: \; f_s^1 = \operatorname*{argmin}_{f_s} G\big(f_s(x), f_t(x)\big), \qquad \text{IYOR}: \; f_s^2 = \operatorname*{argmin}_{f_s, f_r} G\big(f_s(x), f_r \circ f_s(x)\big). \qquad (6)$$
Assumption 3.1 Since our teacher $f_r \circ f_s(x)$ has more parameters than the traditional teacher $f_t(x)$, we assume that when both are trained to their optimal values, the optimal loss of IYOR is no greater than that of TKD. In other words, denoting $f_t^1$ and $f_t^2$ as the optimal teacher networks of TKD and IYOR, respectively, we have
$$G(f_s^2, f_t^2) \le G(f_s^1, f_t^1). \qquad (7)$$
Theorem 3.1 Under Assumption 3.1, the L1 distance between the optimal student network and teacher network in IYOR is less than that in TKD, which means
$$\mathbb{E}\big[L_1(f_s^2, f_t^2)\big] \le \mathbb{E}\big[L_1(f_s^1, f_t^1)\big]. \qquad (8)$$
Please refer to Appendix A for the proof. Besides, we have also explained why traditional KD methods fail on image-to-image translation with VC theory in Appendix B.
3.4 SIFT DISTILLATION
Scale-Invariant Feature Transform (SIFT) is one of the most effective and popular image descriptors in classical image processing. SIFT mainly has four steps, including scale-space extrema detection, keypoint localization, orientation assignment and keypoint description. SIFT features carry rich semantic information of the images while having much lower dimensions than the original image. Thus, distilling the SIFT features is more efficient than directly distilling the pixels of the generated images. By denoting SIFT as ϕ(·) and a loss hyper-parameter as β, the loss function of SIFT distillation in our method can be formulated as
$$\operatorname*{argmin}_{f_s}\;\mathbb{E}_{x,y}\Big[(1-\alpha)\cdot L_1\big(z_s, y\big) + \alpha\cdot L_1\big(z_s, z_r\big) + \mathcal{L}_{cGAN}\big(z_s\big) + \beta\cdot L_1\big(\phi(z_s), \phi(z_r)\big)\Big]. \qquad (9)$$
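For illustration, the sketch below uses OpenCV's SIFT implementation to compute descriptors at shared keypoints of the student output and the refined output and then measures their L1 distance. This is only a conceptual, non-differentiable stand-in for ϕ(·) in Eq. (9); the keypoint-sharing strategy and any differentiable approximation needed for backpropagation are our assumptions, not details given in the paper.

import cv2
import numpy as np

def sift_feature_distance(student_img, refined_img):
    # student_img, refined_img: HxWx3 uint8 images (e.g., de-normalized network outputs)
    sift = cv2.SIFT_create()
    gray_r = cv2.cvtColor(refined_img, cv2.COLOR_BGR2GRAY)
    gray_s = cv2.cvtColor(student_img, cv2.COLOR_BGR2GRAY)
    # detect keypoints on the refined image and describe both images at the same locations
    keypoints = sift.detect(gray_r, None)
    if len(keypoints) == 0:
        return 0.0
    _, desc_r = sift.compute(gray_r, keypoints)
    _, desc_s = sift.compute(gray_s, keypoints)
    if desc_r is None or desc_s is None:
        return 0.0
    k = min(len(desc_r), len(desc_s))  # guard against keypoints dropped during description
    # average L1 distance between the descriptors of the two images
    return float(np.mean(np.abs(desc_r[:k].astype(np.float32) - desc_s[:k].astype(np.float32))))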
4 EXPERIMENT
4.1 EXPERIMENT SETTINGS
Models and Datasets In this paper, we mainly evaluate the performance of our method with CycleGAN (Zhu et al., 2017a) for unpaired image-to-image translation, and Pix2Pix (Isola et al., 2017) and Pix2PixHD (Wang et al., 2018b) for paired image-to-image translation. The refining network in our method has an identical architecture to the original model before compression. The students in our experiments have the same network depth as the original model before compression except for fewer channels. Five datasets are utilized for quantitative evaluation, including Horse→Zebra, Maps, Edge→Shoe, Summer→Winter, and Apple→Orange. Comparison Methods We compare our method with nine knowledge distillation methods: three that were originally proposed for image classification and adapted by us to image-to-image translation (Hinton et al., 2014; Ahn et al., 2019; Zagoruyko & Komodakis, 2017), and six that were designed for image-to-image translation (Li et al., 2020a; Jin et al., 2021; Zhang et al., 2022; Li et al., 2021b; Ren et al., 2021; Li et al., 2020c). Note that some comparison methods include both knowledge distillation and neural network pruning. Following the setting of previous work (Zhang et al., 2022), we only compare our method with their knowledge distillation algorithms for a fair comparison.
Training and Evaluation Settings We adopt the same training settings as the original implementations of CycleGAN and Pix2Pix. Models for Edge→Shoe and the other datasets are trained for 50 and 200 epochs, respectively. Following previous works, we adopt the Frechet Inception Distance (FID) as the performance metric for all datasets. A lower FID indicates that the distributions of the generated images and the real images have a lower distance, and thus the generated images have better quality. On paired image-to-image translation, we report model performance at the last epoch. On unpaired image-to-image translation, since the performance at different epochs is unstable, we compute the FID every five epochs and report the lowest one. For both paired and unpaired image-to-image translation, FID is computed over only the images in the test set.
4.2 EXPERIMENT RESULTS
Quantitative Results Quantitative comparisons with previous knowledge distillation methods on unpaired and paired image-to-image translation datasets are shown in Table 1 and Table 2, respectively. It is observed that: (i) Directly applying the naive image-based knowledge distillation (Hinton et al., 2014) leads to very limited and even negative performance improvements. For instance, it leads to 1.91 and 0.67 FID increments (performance drops) on Edge→Shoe with Pix2Pix and Pix2PixHD, respectively. (ii) In contrast, by replacing the teacher in naive image-based distillation with the refining network in our method, knowledge distillation leads to consistent performance improvements. On average, 5.12 and 10.43 FID decrements (performance improvements) are gained in paired and unpaired image-to-image translation, respectively. (iii) Combining our method with previous feature-based knowledge distillation leads to further performance improvements. For instance, on the 14.82× compressed and 6.80× compressed Horse→Zebra students, combining our method with the method of Ren et al. leads to 2.35 and 1.13 further FID decrements. (iv) Table 3 further demonstrates the effectiveness of our method under more compression ratios and more datasets. These
observations demonstrate that our method can significantly improve the performance of lightweight image-to-image translation models in a wide range of settings.
Qualitative Results Qualitative comparison between our methods and previous methods on unpaired and paired image-to-image translation datasets are shown in Figure 2 and Figure 3 , respectively. Besides, Figure 4 further shows the performance of our method on the other two datasets. It is observed that: (i) Compared with the model before compression (the teacher model), a significant performance drop can be observed on the student model trained without knowledge distillation. For instance, on Horse→Zebra, most student models can not transform the whole body of horses into stripes. Some previous knowledge distillation methods (e.g. Zhang et al., Ren et al.) can alleviate this problem while our method leads to much better performance. (ii) On the Maps translation task, the buildings and the roads generated by students trained with previous knowledge distillation methods are fuzzy. In contrast, our method can generate clearer shapes and edges for buildings, roads, and rivers. (iii) On Edge→Shoe, the images generated by the students trained without knowledge distillation usually have severe corruption such as the holes in high-heeled shoes. In contrast, the images generated by our methods have better quality in terms of highlights, shapes, and colors. (iv) On Winter→Summer, the model trained by our method can successfully remove the snow on the plants. On Apple→Orange, the images generated by our method have much less corruption than the baseline model. These results demonstrate that students trained by IYOR achieve better performance in terms of not only statistical scores but also human vision.
[Figure 5 plot: FID (y-axis, 50–200) versus training epoch (x-axis, 0–200); curves: Hinton KD and Our Method.]
Figure 5: Comparison between our method and Hinton KD on the FID between students and teachers on Horse→Zebra with CycleGAN.
Table 4: Ablation study on SIFT distillation and the usage of the refining network on Horse→Zebra with CycleGAN students.
#Params  FLOPs  Refining  SIFT  FID↓          ∆↑
1.61     7.29   ×         ×     70.54±9.63    –
1.61     7.29   ✓         ×     59.31±2.89    11.23
1.61     7.29   ×         ✓     63.17±3.66    7.37
1.61     7.29   ✓         ✓     56.45±2.59    14.09
0.72     3.35   ×         ×     85.04±6.88    –
0.72     3.35   ✓         ×     72.53±3.15    12.51
0.72     3.35   ×         ✓     78.11±1.71    6.93
0.72     3.35   ✓         ✓     69.67±5.32    15.37
5 DISCUSSION
5.1 ABLATION STUDY
In this paper, we mainly propose two knowledge distillation techniques, including (a) learning from a refining network instead of a teacher network and (b) SIFT KD. Table 4 shows the ablation study of the two techniques on Horse→Zebra with CycleGAN. It is observed that on the 7.08× and 15.81× compressed students: (i) 11.23 and 12.51 FID decrements can be observed by replacing the teacher network in traditional knowledge distillation with a refining network, respectively. (ii) 7.37 and 6.93 FID decrements can be observed by applying SIFT KD, respectively. (iii) 14.09 and 15.37 FID decrements can be obtained by combining the two techniques, respectively. These observations indicate that both techniques have their own merits and their benefits are orthogonal.
5.2 STUDENT-TEACHER SIMILARITY
In this subsection, we show that the refining network in IYOR has more consistent outputs with the student than the teacher in traditional KD. The FID between the images generated by the students and those generated by the refining network in our method (or the teacher in traditional KD) is shown in Figure 5. Note that a lower FID here indicates a larger student-teacher similarity. It is observed that our method leads to lower FID during the whole training period, indicating that compared with the teachers in traditional KD, the images generated by the refining network in our method are more likely to be consistent with the images generated by the students. Besides, since FID measures the distance between the distributions of images generated by the student and the teacher, this observation also implies that the student in our method can learn teacher knowledge more effectively.
6 CONCLUSION
Due to the ill-posed property of image-to-image translation, directly applying traditional knowledge distillation usually leads to unsatisfactory and even negative impacts. To address this problem, we propose a new knowledge distillation method, named IYOR (imitate your own refinement), in which a refining network replaces the teacher network in traditional KD. During the training phase, the refining network strives to improve the quality of images generated by the students instead of generating images from the inputs. Hence, the refined results can be better learning targets than the teacher outputs used in traditional KD. Extensive quantitative and qualitative results have demonstrated that IYOR outperforms nine existing approaches in both paired and unpaired translation. Besides, SIFT knowledge distillation is also introduced to improve the effectiveness of knowledge distillation by extracting the distinctive and scale-invariant features of images and then distilling them from the refining network to the students. Furthermore, we have theoretically analyzed why traditional KD fails and why IYOR works well on image-to-image translation.
A THE PROOF FOR THEOREM 3.1
Proof. According to Assumption 3.1, we have
$$\mathbb{E}\Big[(1-\alpha)\cdot L_1\big(f_s^2, y\big) + \alpha\cdot L_1\big(f_s^2, f_t^2\big) + H(f_s^2)\Big] \le \mathbb{E}\Big[(1-\alpha)\cdot L_1\big(f_s^1, y\big) + \alpha\cdot L_1\big(f_s^1, f_t^1\big) + H(f_s^1)\Big]. \qquad (10)$$
Since $f_s^1$ is the optimal solution for TKD (6), $G(f_s^1, f_t^1) \le G(f_s^2, f_t^1)$ implies
$$\mathbb{E}\Big[(1-\alpha)\cdot L_1\big(f_s^1, y\big) + \alpha\cdot L_1\big(f_s^1, f_t^1\big) + H(f_s^1)\Big] \le \mathbb{E}\Big[(1-\alpha)\cdot L_1\big(f_s^2, y\big) + \alpha\cdot L_1\big(f_s^2, f_t^2\big) + H(f_s^2)\Big]. \qquad (11)$$
Combining equation (10) and equation (11), we have
$$\mathbb{E}\big[L_1(f_s^2, f_t^2)\big] \le \mathbb{E}\big[L_1(f_s^1, f_t^1)\big]. \qquad (12)$$
□
B ANALYSING KNOWLEDGE DISTILLATION WITH VC THEORY
Recent evidence shows that directly applying the naive Hinton et al. knowledge distillation (Hinton et al., 2014; Zhang et al., 2022; Li et al., 2020c) to image-to-image translation usually leads to limited and even negative performance. In this section, we try to explain this observation from the perspective of VC theory, based on generalized knowledge distillation (Lopez-Paz et al., 2016). Denoting a function class as $\mathcal{F}$, the student function, the teacher function and the oracle real target function can be written as $f_s \in \mathcal{F}_s$, $f_t \in \mathcal{F}_t$, and $f \in \mathcal{F}$, respectively. Given $n$ training samples, we can assume that the student function $f_s$ and the teacher function $f_t$ learn the true function $f$ at rates $\alpha_s$ and $\alpha_t$, respectively, which can be formulated as
$$R(f_s) - R(f) \le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\Big) + \varepsilon_s, \quad \text{and} \quad R(f_t) - R(f) \le O\Big(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\Big) + \varepsilon_t, \quad \text{respectively}, \qquad (13)$$
where the $O(\cdot)$ term is the estimation error, and $\varepsilon_s$ and $\varepsilon_t$ are the approximation errors of the student function class $\mathcal{F}_s$ and the teacher function class $\mathcal{F}_t$ with respect to $f \in \mathcal{F}$. A higher $\alpha$ indicates that the learning problem is easier to solve. Then, we can assume that the student learns from the teacher at the rate $\alpha_{kd}$ with the approximation error $\varepsilon_{kd}$, which can be formulated as
$$R(f_s) - R(f_t) \le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\Big) + \varepsilon_{kd}. \qquad (14)$$
As pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the teacher model has more parameters than the student, we can assume the teacher function can learn the true function at a higher rate, indicating $\alpha_t > \alpha_s$ and $\alpha_t > \alpha_{kd}$. By combining (13) and (14), we have the following inequality.
$$\begin{aligned} R(f_s) - R(f) &= R(f_s) - R(f_t) + R(f_t) - R(f) \\ &\le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_{kd}}}\Big) + \varepsilon_{kd} + O\Big(\frac{|\mathcal{F}_t|_C}{n^{\alpha_t}}\Big) + \varepsilon_t \le O\Big(\frac{|\mathcal{F}_s|_C + |\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\Big) + \varepsilon_{kd} + \varepsilon_t. \end{aligned} \qquad (15)$$
Thus, given a learning task, we can study whether knowledge distillation works well on this task by analyzing whether the following inequality
$$O\Big(\frac{|\mathcal{F}_s|_C + |\mathcal{F}_t|_C}{n^{\alpha_{kd}}}\Big) + \varepsilon_{kd} + \varepsilon_t \le O\Big(\frac{|\mathcal{F}_s|_C}{n^{\alpha_s}}\Big) + \varepsilon_s \qquad (16)$$
holds. Since the teacher model usually has more parameters than the student model, $|\mathcal{F}_s|_C + |\mathcal{F}_t|_C \le |\mathcal{F}_s|_C$ usually does not hold in knowledge distillation. Thus, the inequality highlights that the benefits of knowledge distillation arise because of $\varepsilon_{kd} + \varepsilon_t \le \varepsilon_s$ and $\alpha_{kd} > \alpha_s$.
In image classification, as pointed out by Lopez-Paz et al. (Lopez-Paz et al., 2016), since the soft labels $f_t(x)$ (the probability distribution) of teachers contain more information than the one-hot label $y$, they allow students to learn from teachers at a higher rate than learning the true function, indicating that $\alpha_{kd} > \alpha_s$ (Lopez-Paz et al., 2016). Besides, since the label for an input image is unique, learning the true function does not conflict with learning the teacher function, and thus it is safe to assume that $\varepsilon_s \ge \varepsilon_t + \varepsilon_{kd}$. In contrast, on image-to-image translation, since the predictions of students and teachers are pixel values instead of probability distributions, there is no additional information in $f_t(x)$ compared with the ground truth. Thus $\alpha_{kd} > \alpha_s$ does not hold. Moreover, since image-to-image translation is an ill-posed problem, the predictions of students and teachers may be different but correct answers for the same input image, indicating that $\varepsilon_s \ge \varepsilon_t + \varepsilon_{kd}$ also does not hold. These observations demonstrate that the inequality (16) does not hold in image-to-image translation, which can explain the limited performance of directly applying Hinton et al. knowledge distillation to image-to-image translation.
Instead of distilling the generated images, some recent knowledge distillation methods have been proposed to distill teacher knowledge in their features. Since there is more information contained in teacher features than ground-truth images, these methods can be considered as a guarantee for αkd >αs. In contrast, IYOR aims to improve knowledge distillation by addressing the ill-posed property, implying εs ≥ εt + εkd. Since IYOR and previous feature-based methods have different perspectives to support inequality (16), their benefits are orthogonal and can be combined.
C DETAILED EXPERIMENT SETTINGS
We follow the official codes of CycleGAN and Pix2Pix1 to conduct our experiments. Models on Edge→Shoe are trained for 50 epochs. Models on the other datasets are trained for 200 epochs. The momentum of the Adam optimizer is 0.5. In all the experiments, we set α = 1 and β = 1. The initial learning rate is 0.0002. The LSGAN objective is used as the GAN loss. The discriminator is a 70x70 PatchGAN. In the experiments of CycleGAN, the backbones of the generators of both students and teachers (refining networks) are ResNets with six blocks. Their main difference is that the student backbone has far fewer channels than the teacher. Batch size is set to 1 for both training and inference. We compute the FID scores based on Pytorch-FID 2, a well-known Python package. We find that some previous works compute the FID for unpaired image-to-image translation by using the images in both the training set and the test set to achieve more stable performance. However, we believe this behavior of accessing the test images during training is not reasonable. Hence, we choose to compute the FID on only the test set. As claimed by previous research Jin et al. (2021)3, this makes the FID scores in our experiments around 5-6 lower than those of previous works which report FID on both the training and test sets.
D INFLUENCE FROM HYPER-PARAMETERS
In this paper, we mainly have two hyper-parameters, α and β, to balance the magnitudes of the knowledge distillation loss and the original GAN training loss. A hyper-parameter sensitivity study on Horse→Zebra with 15.81× compressed students is presented in Figure 6. Note that the reported value is FID (lower is better). It is observed that: (i) With the worst α, our method achieves 70.21 FID, which is still 14.83 lower than the student trained without KD, and 6.83 lower than the second-best KD method. (ii) With the worst β, our method achieves 70.54 FID, which is still 14.50 lower than the student trained without KD, and 6.50 lower than the second-best KD method. These observations indicate that our method is not sensitive to the value of hyper-parameters.
E INFLUENCE FROM THE SIZE OF THE REFINING NETWORK
In our experiments, the refining network has the same architecture as the teacher network in traditional KD, which is also the same as the image-to-image translation model before compression. In
1https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/ 2https://github.com/mseitzer/pytorch-fid 3https://github.com/snap-research/CAT
this section, we study the influence of the size of the refining network. As shown in Table 5: (i) With more parameters, the refining network can achieve a very low FID, which indicates that the refinement has good quality. At the same time, the student can also be trained better and achieves a relatively lower FID. (ii) When the refining network does not have enough parameters, the refinement has a relatively higher FID and the effectiveness of knowledge distillation is not very significant. These observations indicate that a refining network with enough parameters has a positive influence on the performance of knowledge distillation. In contrast, when the refining network does not have enough parameters, it cannot successfully refine the images generated by the student, which leads to limited knowledge distillation performance.
F EXPERIMENTS ON CITYSCAPES
Following previous research Zhu et al. (2017a); Park et al. (2019a), we have also evaluated our method on Cityscapes (Cordts et al., 2016). Cityscapes is originally proposed as a dataset for autonomous driving, including tasks such as detection and segmentation. In our experiments, we take the semantic segmentation mask as the input and the natural images of the street as the label to train the image-to-image translation models. Then, we adopt the mIoU of a pre-trained FCN model on the generated images as the performance metric. A higher mIoU indicates that the image-to-image translation model has better performance. Our experimental results are shown in Table 6. It is observed that there is a 2.17 mIoU improvement on the student trained with our method, which is 0.71 higher than the second-best method.
G PYTHON-STYLE PSEUDO CODE
The following code block presents a brief implementation of IYOR.
# The pseudo-code of IYOR
def IYOR(x, student, refiner, sift):
    # x: the input image, student: the student network
    # refiner: the refining network
    # sift: a function to extract SIFT features
    student_output = student(x)
    # refine the student output without back-propagating into the student
    refinement = refiner(student_output.detach())
    # pixel-wise imitating
    kd_loss = l1_loss(student_output, refinement)
    # SIFT distillation
    kd_loss += l1_loss(sift(student_output), sift(refinement))
    return kd_loss | 1. What is the focus of the paper regarding compressing GAN models using knowledge distillation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to scale to other well-known models in I2I tasks?
3. Do you have any concerns about the idea of extracting scale-invariant features for KD of GANs?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work focuses on compressing GAN models with knowledge distillation (KD). The most distinctive part is that this work uses a refining network to modify the output of the student network before optimizing the KD loss. The refining network is optimized jointly with the student network. In addition, the authors propose to extract scale-invariant features for KD of GANs. The experiments show that the proposed method works well when distilling the CycleGAN model.
Strengths And Weaknesses
Strength:
Distilling the GANs is helpful in practical applications, and this work shows promising results in distilling CycleGAN.
This work explains why existing KD methods may not work well on image-to-image translation tasks. Theoretical analyses are provided and seem convincing.
This work proposes to refine the student network's output, which I think is a good idea. The strategy can be regarded as remapping the output to an (approximately) embedded space, and it has been proven effective in KD methods.
Learning invariant features for computing the KD loss is also reasonable.
Weakness:
One major concern is that in the current version, it is unclear whether the proposed method can scale to other well-known models in I2I, for instance, StarGAN v2 [1], CUT [2], and MUNIT [3]. Notably, it is non-trivial to adapt a GAN model to another related task or even another dataset (c.f. [4]). So, it is necessary to show the proposed KD method for GAN has desirable scalability.
The resolution of samples is too small. The authors should show samples with a resolution of at least 256×256. Seeing how well high-resolution details are preserved is very important for evaluating the effectiveness of GAN compressing. After all, the burden in computational cost mostly comes from the demand for high resolution. In my opinion, producing thumbnails is insufficient to show that the distilled model maintains good performance.
The idea of extracting invariant features is reasonable. However, why specifically use scale-invariant features? It is confusing why scale-invariant features (significantly) help distill the CycleGAN because it is good at producing outputs that are the same size as the inputs. The CycleGAN model is trained with the L1 norm loss in the image space, and the inputs like a sketch, apple, or horse provide strong prior. Based on this, the teacher and student networks produce samples with similar shapes but different texture details.
[1] Choi, Yunjey, et al. "Stargan v2: Diverse image synthesis for multiple domains." In CVPR, 2020.
[2] Park, Taesung, et al. "Contrastive learning for unpaired image-to-image translation." In ECCV, 2020.
[3] Huang, Xun, et al. "Multimodal unsupervised image-to-image translation." In ECCV, 2018.
[4] Sauer, Axel, et al. "Stylegan-xl: Scaling stylegan to large diverse datasets." In SIGGRAPH, 2022.
Clarity, Quality, Novelty And Reproducibility
The presentation is OK for me. This work proposes an interesting method, which sheds light on improving the GAN compression methods in the knowledge distillation manner. Please see my comments above. |
ICLR | Title
Towards Robustness Certification Against Universal Perturbations
Abstract
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbations given a neural network. However, those sample-wise bounds will be loose when considering the UP threat model as they overlook the important constraint that the perturbation should be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming to establish the first robust certification method for UP. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models or efficacy among different universal adversarial attack defenses and enables accurate detection of backdoor target classes.
1 INTRODUCTION
As deep neural networks become prevalent in modern performance-critical systems such as selfdriving cars and healthcare, it is critical to understand their failure modes and performance guarantees. Universal perturbations (UPs) are an important class of vulnerabilities faced by deep neural networks. Such perturbations can fool a classifier into misclassifying any input from a given distribution with high probability at test time. Past literature has studied two lines of techniques to create UPs: universal adversarial attacks (Moosavi-Dezfooli et al., 2017) and backdoor attacks (Gu et al., 2019; Chen et al., 2017). The former crafts a UP based on a trained model and does not rely on access to training data. The latter, by contrast, prespecifies a pattern as a UP and further alters the training data so that adding the pattern (often known as the trigger in backdoor attack literature) will change the output of the trained classifier into an attacker-desired target class.
Many defenses have been proposed for both universal adversarial attacks (Akhtar & Mian, 2018; Moosavi-Dezfooli et al., 2017; Shafahi et al., 2020; Benz et al., 2021; Liu et al., 2021) and backdoor attacks (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Borgnia et al., 2020; Qiu et al., 2021). But empirical evaluation with attacks does not provide a formal guarantee on the robustness, as it is infeasible for an attack algorithm to provably cover all concerned perturbations. In contrast, robustness certification aims to verify the output bounds of the model given a certain class of input perturbations and provably certify the robustness against all the concerned perturbations. Several recent works (Weber et al., 2020; Xie et al., 2021) have developed techniques to achieve certified robustness of a classifier against backdoor-attack-induced UPs with a certain norm bound. However, these techniques apply to specific learning algorithms and require knowledge of the training data. It remains an open question: How to certify the robustness of a trained model against a class of UPs in a way that is agnostic to the underlying training algorithm and data, and is general for different UPs (including both universal adversarial attacks and norm-bounded backdoor attacks)?
∗Zhouxing Shi and Yi Zeng contributed equally. Corresponding Yi Zeng, Lingjuan Lyu or Ruoxi Jia. Work partially done during Yi Zeng’s internship at Sony AI.
In this paper, we propose a framework to certify the worst-case classification accuracy on a batch of test samples against l∞-norm-bounded UPs. Our approach builds off of past works for certifying robustness against sample-wise perturbations that are independently added to each sample. For efficient verification, many recent works linearly relax nonlinear activation functions in neural networks into linear bounds and then conduct linear bound propagation to obtain the output bounds for the whole model (Wong & Kolter, 2018; Wang et al., 2018b; Dvijotham et al., 2018; Zhang et al., 2018; Singh et al., 2019b). This process is also referred to as linear perturbation analysis (Xu et al., 2020a). Since the worst-case model accuracy against sample-wise perturbations is a lower bound of the worst-case accuracy against UPs, these certification techniques could be applied to obtain a certificate against UPs. However, a direct application would overlook the important constraint that a UP is shared across different inputs, thereby producing overly conservative certification results.
Unlike sample-wise perturbations, UPs require theoretical reasoning to generalize certification results. This is because UPs are applied to any input from the data distribution, and our main interest lies in the expected model accuracy over the entire data distribution against UPs. However, certification procedures can only accept a batch of samples from the distribution and certify the accuracy over the samples. Therefore, it’s crucial to understand the discrepancy between certified robustness computed from samples and the actual population robustness.
We summarize our contributions as follows:
• We formulate the problem of robustness certification against UPs. We then generalize linear relaxation based perturbation analysis (LiRPA) to UPs, and we further propose a Mixed Integer Linear Programming (MILP) formulation over linear bounds from LiRPA, to obtain tighter certification on the worst-case accuracy of a given model against UPs within a ℓ∞-norm ball1.
• We establish a theoretical framework for analyzing the generalizability of the certification results based on randomly sampled subsets to the entire population.
• We conduct extensive experiments to show that our certification method provides certified lower bounds on the worst-case robust accuracy against both universal adversarial attacks and l∞-bounded backdoor attacks, which are substantially tighter than results obtained by directly applying existing sample-wise certification.
• We also investigate the implications of robustness certification on UPs to facilitate easy comparisons of robustness among different models or the efficacy of empirical defenses, and to achieve reliable identification of backdoor target classes.
2 BACKGROUND AND RELATED WORK
Universal Adversarial Perturbation Neural networks are vulnerable to adversarial examples (Szegedy et al., 2014), which has led to the development of universal adversarial perturbations (UAPs), where the same noise can consistently deceive a target network on most images (Liu et al., 2019; 2020). Existing defenses against UAPs include fine-tuning on pre-computed UAPs (Moosavi-Dezfooli et al., 2017), post-hoc detection (Akhtar et al., 2018), and universal adversarial training with online UAP generation (Mummadi et al., 2019; Shafahi et al., 2020; Benz et al., 2021). However, all existing defenses against UAPs are empirical works without efficacy guarantees against new attacks.
Backdoor Attacks In backdoor attacks, attackers plant a predefined UP (a.k.a. the trigger) in the victim model by manipulating the training procedure (Li et al., 2020c). Attacked models can give adversarially-desired outputs for any input patched with the trigger while still showing good performance on clean inputs. Existing defenses include: poison detection via outlier detection (Gao et al., 2019; Chen et al., 2018; Tran et al., 2018; Zeng et al., 2021), which relies on modeling clean samples' distribution; poisoned model identification (Xu et al., 2019; Wang et al., 2020b); trojan removal via trigger synthesizing (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Zeng et al., 2022a) or preprocessing and fine-tuning (Li et al., 2020b; Borgnia et al., 2020); and robust training via differential privacy (Du et al., 2019) or redesigning the training pipeline (Levine & Feizi, 2020; Jia et al., 2020; Huang et al., 2022; Li et al., 2021). As all these defenses are empirical, existing literature has revealed their limitations against zero-day or adaptive attacks (Zeng et al., 2022b).
Robustness Certification of Neural Networks Early robustness certifications (Katz et al., 2017; Ehlers, 2017; Tjeng et al., 2017) largely relied on satisfiability modulo theory (SMT) or integer
1https://github.com/ruoxi-jia-group/Universal_Pert_Cert
linear programming (ILP) solvers and were limited to very small networks. For more efficient verification, bound propagation with convex relaxations has been proposed (Wong & Kolter, 2018; Wang et al., 2018b; Zhang et al., 2018; Weng et al., 2018; Singh et al., 2019b; Salman et al., 2019), which over-approximates nonlinear activations with convex relaxation and propagates the bounds layer by layer to finally bound the entire model. Xu et al. (2020a) proposed a bound propagation framework for general computational graphs and referred to the related methods as linear relaxation based perturbation analysis (LiRPA), as activations are relaxed by linear bounds. Bound propagation methods have also been further enhanced with techniques such as branch-and-bound (Bunel et al., 2018; 2020; Wang et al., 2018a;b; Xu et al., 2020b; Wang et al., 2021), multi-neuron relaxation and cutting planes (Singh et al., 2019a; Ferrari et al., 2021; Zhang et al., 2022a) for tighter results at a cost of efficiency. However, these works are developed for sample-wise perturbations, and they cannot directly produce tight certification against universal perturbations. Besides, there are several randomized smoothing (Cohen et al., 2019) based methods for certified robustness against backdoor attacks (Weber et al., 2020; Wang et al., 2020a; Xie et al., 2021; Zhang et al., 2022b). These are stochastic methods and are usually considered orthogonal to deterministic certification. Moreover, they require access to training data, are only applicable to some specific learning algorithms (e.g., binary models or federated learning), and are not general for other UPs, such as UAPs.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
On a set of n independent samples {z(1), . . . , z(n)} from the data distribution Ω, where z(i) = (xi, yi) is the i-th example, xi (xi ∈ Rd) is the input and yi is the ground-truth label, we aim to certify the robustness of a K-way neural network classifier f : Rd → RK against a potential universal perturbation δ with ℓ∞ norm constrained as ∥δ∥∞ ≤ ϵ. In particular, we aim to certify and lower bound the worst-case accuracy of the neural network on {z(1), . . . , z(n)} for any universal perturbation δ (∥δ∥∞ ≤ ϵ) applied to all the examples:
$$\min_{\|\delta\|_\infty \le \epsilon} \; \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \ne y_i}\big\{m_{y_i,j}(x_i + \delta)\big\} > 0\Big), \qquad (1)$$
where $m_{y_i,j}(x_i + \delta) = f_{y_i}(x_i + \delta) - f_j(x_i + \delta)$ is the margin between the ground-truth class $y_i$ and an incorrect class $j \ne y_i$, and the indicator checks whether the margin is positive for any $j \ne y_i$ when a perturbation $\delta$ is added. It is NP-hard to exactly verify Eq. (1) even for $n = 1$ and a small ReLU network (Katz et al., 2017). Thus recent neural network verifiers usually compute a lower bound for the margin as $\underline{m}_{y_i,j}(x_i + \delta) \le m_{y_i,j}(x_i + \delta)$, and then we can replace $m$ in Eq. (1) with $\underline{m}$ to lower bound Eq. (1), and this bound also serves as a lower bound for the robustness.
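Before moving to the certification, it may help to see the quantity inside Eq. (1) in code: for a fixed candidate universal perturbation δ, the fraction of samples whose margins all stay positive is simply the accuracy of the model on the uniformly shifted batch. The sketch below assumes a PyTorch classifier and only illustrates the objective; it is not part of the certification procedure.

import torch

def accuracy_under_universal_perturbation(model, x, y, delta):
    # x: (n, ...) batch of inputs, y: (n,) labels, delta: one perturbation broadcast to all samples
    with torch.no_grad():
        logits = model(x + delta)
    # the i-th sample counts as correct iff min_{j != y_i} m_{y_i, j}(x_i + delta) > 0,
    # which is equivalent to argmax_j logits[i, j] == y_i (ignoring ties)
    return (logits.argmax(dim=1) == y).float().mean().item()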
3.2 LINEAR PERTURBATION ANALYSIS W.R.T. A UNIVERSAL PERTURBATION
We adopt linear relaxation based perturbation analysis (LiRPA) from previous works which focused on sample-wise perturbations, “auto LiRPA” (Xu et al., 2020a) specifically, to obtain lower bounds on $m_{y_i,j}(x_i + \delta)$ represented as linear functions w.r.t. the universal perturbation $\delta$, but it is also feasible to use other verification frameworks such as Singh et al. (2019b); Wang et al. (2018b). auto LiRPA can bound the output of a computational graph when its input nodes are perturbed, and it can produce linear functions w.r.t. the perturbed input nodes as linear bounds. Note that margin functions can be appended to the original neural classifier as the output of the computational graph, and thereby the margins can be bounded. When sample-wise perturbations are considered in previous works, the linear bounds can usually be written as
$$\forall i \in [n], \; \forall j \ne y_i, \; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i + \delta) \ge \tilde{\mathbf{a}}_j^{(i)}(x_i + \delta) + \tilde{b}_j^{(i)}, \qquad (2)$$
where $\tilde{\mathbf{a}}_j^{(i)}$ and $\tilde{b}_j^{(i)}$ are coefficients and biases in the linear bounds. This is achieved by relaxing nonlinear functions such as activation functions in the network with linear bounds and propagating linear coefficients through the computational graph. The right-hand-side (RHS) of Eq. (2) is a linear function w.r.t. $(x_i + \delta)$. To obtain a final bound represented as a concrete number without relying on the $\delta$ variable, a concretization step can be applied on the RHS given the constraint on $\|\delta\|_\infty$, which eliminates the $\delta$ variable and lower bounds the RHS as $\tilde{\mathbf{a}}_j^{(i)}(x_i + \delta) + \tilde{b}_j^{(i)} \ge -\epsilon\|\tilde{\mathbf{a}}_j^{(i)}\|_1 + \tilde{\mathbf{a}}_j^{(i)} x_i + \tilde{b}_j^{(i)}$.
However, the aforementioned concretization step considers the worst-case δ for each sample independently, but a universal perturbation δ should be shared across all the examples. Thereby, it will produce relatively loose and over-conservative results under the universal perturbation setting, as the perturbations are much stronger when each example can take an independent perturbation than when a single universal perturbation is shared by all the examples.
In contrast, we propose to obtain a tighter certification for universal perturbation. Unlike Eq. (2), we use auto LiRPA to compute the linear lower bound with respect to δ instead of (xi + δ) by treating δ as a perturbed input node and xi as a fixed input node in the computational graph:
$$\forall i \in [n], \; \forall j \ne y_i, \; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i + \delta) \ge \mathbf{a}_j^{(i)}\delta + b_j^{(i)}, \qquad (3)$$
where $\mathbf{a}_j^{(i)}$ and $b_j^{(i)}$ are new coefficients and biases in the linear bound, and $x_i$ does not appear on the RHS as it is fixed. In the next section, we will lower bound the worst-case accuracy Eq. (1) by solving an MILP problem based on Eq. (3).
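To make the shape of Eq. (3) concrete, consider the simplest case of a purely linear classifier f(x) = Wx + c, where no relaxation is needed: the margin is exactly linear in δ, and the coefficients and biases can be read off directly. The sketch below (a NumPy toy example of our own, not the auto LiRPA interface) computes them; for a real network with nonlinear activations, a tool such as auto LiRPA would return analogous sound, but not exact, linear lower bounds.

import numpy as np

def linear_margin_coefficients(W, c, x_i, y_i):
    # f(x) = W x + c, so m_{y_i, j}(x_i + delta) = (W[y_i] - W[j]) @ (x_i + delta) + (c[y_i] - c[j]).
    # Hence a_j^{(i)} = W[y_i] - W[j] and b_j^{(i)} = (W[y_i] - W[j]) @ x_i + c[y_i] - c[j],
    # i.e., Eq. (3) holds with equality in this toy linear case.
    K = W.shape[0]
    a, b = {}, {}
    for j in range(K):
        if j == y_i:
            continue
        a[j] = W[y_i] - W[j]
        b[j] = float(a[j] @ x_i + c[y_i] - c[j])
    return a, b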
3.3 AN MILP FORMULATION TO LOWER BOUND THE WORST-CASE ACCURACY
In this section, we use linear bounds in Eq. (3) to compute a lower bound for the worst-case accuracy in Eq. (1). Specifically, by replacing each myi,j in Eq. (1) with its lower bound from Eq. (3), we lower bound Eq. (1) by solving the following problem:
$$\text{minimize} \quad \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \ne y_i}\big\{\mathbf{a}_j^{(i)}\delta + b_j^{(i)}\big\} > 0\Big) \quad \text{s.t.} \quad \|\delta\|_\infty \le \epsilon. \qquad (4)$$
Now, we show that Eq. (4) can be rewritten into an MILP formulation:
Theorem 1. Problem Eq. (4) is equivalent to the following MILP problem:
$$\text{minimize} \quad \vartheta$$
$$\text{s.t.} \quad \vartheta = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, \qquad (5)$$
$$\forall i \in [n], \quad q^{(i)} \in \{0, 1\}, \quad -\tau(1 - q^{(i)}) \le \sum_{j \ne y_i} (\mathbf{a}_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \le \tau q^{(i)}, \qquad (6)$$
$$\forall i \in [n], \; \forall j \ne y_i, \quad s_j^{(i)} \in \{0, 1\}, \quad \sum_{j \ne y_i} s_j^{(i)} = 1, \qquad (7)$$
$$\forall i \in [n], \; \forall j_1 \ne y_i, \; \forall j_2 \ne y_i, \quad (\mathbf{a}_{j_1}^{(i)}\delta + b_{j_1}^{(i)})s_{j_1}^{(i)} - \tau(1 - s_{j_1}^{(i)}) \le (\mathbf{a}_{j_2}^{(i)}\delta + b_{j_2}^{(i)}), \qquad (8)$$
$$\|\delta\|_\infty \le \epsilon,$$
where $\tau \ge \max_{i \in [n]} \sum_{j \ne y_i} |\mathbf{a}_j^{(i)}\delta + b_j^{(i)}|$ is a sufficiently large constant.
In Theorem 1, given a universal perturbation $\delta$, for the $i$-th example, the integer variable $q^{(i)} \in \{0, 1\}$ denotes whether the model is certifiably correct on this example based on linear bounds from Eq. (3), and the certified accuracy on the whole batch can be computed as Eq. (5). The model is certifiably correct on the $i$-th example when $m_{y_i,j}(x_i + \delta) \ge \mathbf{a}_j^{(i)}\delta + b_j^{(i)} > 0$ holds for all $j \ne y_i$. We use an integer variable $s_j^{(i)} \in \{0, 1\}$ to denote whether class $j$ is the hardest among all $j \ne y_i$ under the certification, i.e., $\forall j' \ne y_i, \; \mathbf{a}_j^{(i)}\delta + b_j^{(i)} \le \mathbf{a}_{j'}^{(i)}\delta + b_{j'}^{(i)}$ holds, which is enforced by Eq. (8). We require each example to have exactly one hardest class $j$ with $s_j^{(i)} = 1$ (see Eq. (7)); in case there are multiple classes with an equal lower bound on the margin function, it is valid to treat any of them as the hardest. Then we only need to check whether $\mathbf{a}_j^{(i)}\delta + b_j^{(i)} > 0$ holds for the hardest class $j$ with $s_j^{(i)} = 1$, equivalently $\sum_{j \ne y_i}(\mathbf{a}_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} > 0$. In Eq. (6), as $\tau$ is sufficiently large, only $\sum_{j \ne y_i}(\mathbf{a}_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \ge 0$ is effectively required when $q^{(i)} = 1$, and $\sum_{j \ne y_i}(\mathbf{a}_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \le 0$ is required when $q^{(i)} = 0$. Note that if exactly $\sum_{j \ne y_i}(\mathbf{a}_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} = 0$ happens, $q^{(i)} = 0$ will be taken by the MILP due to the minimization objective, and thus it is still compatible with our goal of checking $\mathbf{a}_j^{(i)}\delta + b_j^{(i)} > 0$. Overall, the MILP formulation minimizes the certified accuracy over all possible universal perturbations $\delta$ ($\|\delta\|_\infty \le \epsilon$), to finally produce a lower bound for Eq. (1).
Although it is possible to solve the whole certification algorithm through MILP (Tjeng et al., 2017), it will be computationally prohibitive. Even for very small networks with thousands of neurons, the number of integer variables in their MILP formulation will be proportional to the number of neurons. In contrast, by computing linear bounds first before solving MILP, the number of integer variables in our formulation is only proportional to the number of samples in a batch and the number of classes, and it does not depend on the size of the network, which makes it feasible in practice.
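The following is a simplified sketch of how the certified accuracy could be minimized over δ with Gurobi, given the per-sample linear bounds (a_j^{(i)}, b_j^{(i)}). For readability, it uses a plain big-M encoding of the condition "q^{(i)} = 1 unless some bounded margin is non-positive", which is equivalent in effect to problem (4) but is not a line-by-line transcription of the constraints in Theorem 1; the variable names and the choice of the big-M constant are our assumptions.

import gurobipy as gp
from gurobipy import GRB

def certify_batch(a, b, eps, big_m=1e4):
    # a[i][j]: numpy array of coefficients a_j^{(i)}; b[i][j]: scalar bias b_j^{(i)}
    n = len(a)
    d = len(next(iter(a[0].values())))
    m = gp.Model("up_certification")
    m.Params.OutputFlag = 0

    delta = m.addVars(d, lb=-eps, ub=eps, name="delta")   # universal perturbation, ||delta||_inf <= eps
    q = m.addVars(n, vtype=GRB.BINARY, name="q")          # q[i] = 1 iff sample i is certified robust
    s = {(i, j): m.addVar(vtype=GRB.BINARY, name=f"s_{i}_{j}")
         for i in range(n) for j in a[i]}

    for i in range(n):
        # exactly one "witness" class is selected per sample
        m.addConstr(gp.quicksum(s[i, j] for j in a[i]) == 1)
        for j in a[i]:
            margin = gp.quicksum(a[i][j][k] * delta[k] for k in range(d)) + b[i][j]
            # if q[i] = 0 and s[i, j] = 1, the selected bounded margin must be <= 0;
            # minimizing sum(q) then forces q[i] = 1 exactly when all bounded margins are positive
            m.addConstr(margin <= big_m * q[i] + big_m * (1 - s[i, j]))

    m.setObjective(gp.quicksum(q[i] for i in range(n)) * (1.0 / n), GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal  # certified lower bound on the worst-case accuracy over the batch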
4 GENERALIZATION OF UNIVERSAL PERTURBATION
In the previous section, we proposed our robustness certification method against UPs. Note that the certification results are only guaranteed for the given batch of samples till now. In this section, we study how the certified accuracy computed on a batch approximates the certified accuracy computed on the entire data distribution.
Let $z^{(i)}$ be a random sample drawn from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, which is endowed with a σ-algebra $\mathcal{F}$ and a probability measure $\mathbb{P}$. A dataset $D_n \triangleq \{z^{(1)}, \ldots, z^{(n)}\}$ consists of $n$ observations drawn independently from $\Omega$ according to $\mathbb{P}$; equivalently it can be considered as a random point in $(\Omega^n, \mathcal{F}^n, \mathbb{P}^n)$, which is the $n$-fold Cartesian product of $\Omega$ equipped with the product σ-algebra $\mathcal{F}^n$ and the product measure $\mathbb{P}^n = \underbrace{\mathbb{P} \times \cdots \times \mathbb{P}}_{n \text{ times}}$. Let $\Delta$ denote the $\ell_\infty$ ball that contains all allowable perturbations, $\Delta = \{\delta : \|\delta\|_\infty \le \epsilon\}$, with radius $\epsilon$. And let $\mathcal{B} : \Omega \to \mathbb{R}^{(d+1)K}$ be a linear bound generation procedure that, for each $z = (x, y)$, returns parameters $\{\mathbf{a}_j, b_j\}_{j \ne y}$ of the linear lower bounds on the margins, i.e., $m_{y,j}(x + \delta) \ge \mathbf{a}_j(x + \delta) + b_j$. In the proposed framework, $\mathcal{B}$ is instantiated to be auto LiRPA (Xu et al., 2020a). Let $\mathcal{A}_n : \mathbb{R}^{(d+1)Kn} \to \Delta$ denote the MILP in Eq. (4), which returns a perturbation $\delta$ given the linear bounds on the margins. The overall certification procedure is the composition of $\mathcal{A}_n$ and $\mathcal{B}$, denoted by $G = \mathcal{A}_n \circ \underbrace{\mathcal{B} \circ \cdots \circ \mathcal{B}}_{n \text{ times}} \triangleq \mathcal{A}_n \circ \mathcal{B}^{\circ n}$.
For every data sample $z = (x, y) \in \Omega$, we define the set
$$\Delta_z^{\mathcal{B}} := \Big\{\delta \in \Delta : \mathbb{1}\Big(\min_{j \ne y}\{\mathbf{a}_j\delta + b_j\} > 0\Big)\Big\}$$
as the set of perturbations such that the margin between the ground-truth class and any other class is certifiably positive according to the linear bounds provided by $\mathcal{B}$, i.e., the model is certifiably robust to any perturbation in this set, but it is still possible for the model to be robust to a perturbation $\delta \notin \Delta_z^{\mathcal{B}}$. Note that the dependence of the set on $\mathcal{B}$ has been made explicit because $\mathbf{a}_j, b_j$ depend on $\mathcal{B}$. Similarly, we define the set
$$\tilde{\Delta}_z := \Big\{\delta \in \Delta : \mathbb{1}\Big(\min_{j \ne y}\{m_{y,j}(x + \delta)\} > 0\Big)\Big\} \qquad (9)$$
as the set of all perturbations that are incapable of fooling the given model $f$, i.e., the data $z$ is actually robust to any perturbation in this set. Note that $\tilde{\Delta}_z$ is a superset of $\Delta_z^{\mathcal{B}}$, and unlike $\Delta_z^{\mathcal{B}}$, it does not depend on the linear bound generation procedure. We make the following definitions:
Definition 1. The certified robust probability (CRP) of a given perturbation $\delta \in \Delta$ based on a linear bound generation procedure $\mathcal{B}$ is defined as
$$V^{\mathcal{B}}(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \Delta_z^{\mathcal{B}}). \qquad (10)$$
The actual robust probability (ARP) of a given perturbation $\delta \in \Delta$ is defined as
$$U(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \tilde{\Delta}_z). \qquad (11)$$
The certified robust rate (CRR) of a perturbation $\delta \in \Delta$ on an evaluation dataset $D_n$ based on a linear bound generation procedure $\mathcal{B}$ is
$$\hat{V}^{\mathcal{B}}(\delta; D_n) \triangleq \frac{1}{n}\sum_{z \in D_n} \mathbb{1}(\delta \in \Delta_z^{\mathcal{B}}). \qquad (12)$$
Equivalently, we can write the objective of Eq. (4) as $\min_{\delta \in \Delta} \hat{V}^{\mathcal{B}}(\delta; D_n)$. $\Delta_z^{\mathcal{B}}$ can be equivalently defined by the existence of a binary variable as in the MILP formulation in Theorem 1 and is thus nonconvex in general. In the following, we use $\hat{V}^{\mathcal{B}}(\delta)$ for $\hat{V}^{\mathcal{B}}(\delta; D_n)$ if the evaluation dataset is $D_n$, for notational simplicity. Note that $V^{\mathcal{B}}(\delta) \le U(\delta)$ for any $\delta \in \Delta$, and the equality is attained when the equality in (3) is attained, i.e., the lower bound generated by $\mathcal{B}$ exactly matches the actual margin at any $\delta$. Now we present the following theorem that estimates the value of ARP based on the CRR computed from a batch of random samples.
Theorem 2 ((1 − ξ)-probable certification for ARP). Given $G = \mathcal{A}_n \circ \mathcal{B}^{\circ n}$ and $0 < \xi < 1$, for any $\delta$, it holds that
$$\mathbb{P}^n\Big(U(\delta) \ge \min_{\delta \in \Delta}\hat{V}^{\mathcal{B}}(\delta; D_n) + U(\delta^*) - V^{\mathcal{B}}(\delta^*) - t^*(\xi, n)\Big) \ge 1 - \xi, \qquad (13)$$
where $t^*(\xi, n)$ is the root of the equation $(1 + 4t)\ln(1 + 4t) - 4t = \frac{4}{n}\ln(1/\xi)$, and $t^*(\xi, n)$ is a monotonically decreasing function in $n$ and $\xi$. $\delta^* = \operatorname*{argmin}_{\delta} U(\delta)$. Moreover, we have that
$$\mathbb{P}^n\Big(U(\delta) \ge \min_{\delta \in \Delta}\hat{V}^{\mathcal{B}}(\delta; D_n) - t^*(\xi, n)\Big) \ge 1 - \xi. \qquad (14)$$
The proof can be found in Appendix A.2. Both bounds are interesting to interpret. The bound Eq. (13) shows that the discrepancy between the ARP of any perturbation and the CRR (i.e., the certified accuracy on a random batch) depends on U(δ∗) − V B(δ∗) and t∗(ξ, n). Given the trained model and the underlying data distribution, δ∗ is fixed; hence, the term U(δ∗) − V B(δ∗) depends on the tightness of linear bounds produced by B. The tighter bounds B can provide, the smaller difference there will be between U(δ∗)
and $V^{\mathcal{B}}(\delta^*)$. This bound suggests that plugging tighter linear bound generation techniques into our certification framework can potentially give rise to a smaller approximation error. It is also interesting to note that the approximation error of the proposed certification framework $G = \mathcal{A}_n \circ \mathcal{B}^{\circ n}$ exclusively depends on $\mathcal{B}$, not $\mathcal{A}_n$. This is because $\mathcal{A}_n$ always returns the optimal solution to the MILP, thereby not introducing any additional error. The second term $t^*(\xi, n)$ depends on the number of samples used for certification and it vanishes as $n$ grows (illustrated in Figure 1). The second bound (Eq. (14)) utilizes the fact that $U(\delta^*) - V^{\mathcal{B}}(\delta^*) \ge 0$, and is more relaxed but more convenient than the first bound (Eq. (13)) because the lower bound of the ARP can be calculated given the certification results on a batch, the number of samples in the batch, and the confidence level $1 - \xi$. In Section 5.2, we will showcase the estimation of the ARP using this bound.
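For instance, the bound in Eq. (14) can be evaluated numerically as follows. This sketch assumes the reconstructed form (1 + 4t)ln(1 + 4t) − 4t = (4/n)ln(1/ξ) of the defining equation and uses SciPy's root finder, so it is illustrative rather than the authors' code.

import numpy as np
from scipy.optimize import brentq

def t_star(xi, n):
    # root of (1 + 4t) ln(1 + 4t) - 4t = (4 / n) ln(1 / xi) for t > 0
    rhs = 4.0 / n * np.log(1.0 / xi)
    f = lambda t: (1.0 + 4.0 * t) * np.log(1.0 + 4.0 * t) - 4.0 * t - rhs
    return brentq(f, 1e-12, 10.0)

def arp_lower_bound(crr, xi, n):
    # Eq. (14): with probability at least 1 - xi, the ARP is at least CRR - t*(xi, n)
    return crr - t_star(xi, n)

# e.g., arp_lower_bound(crr, xi=0.1, n=1000) yields a statement of the kind reported in Section 5.2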
5 EXPERIMENT
5.1 EXPERIMENTAL SETUP
For evaluating the certification, we consider two benchmark datasets, MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009), widely adopted in existing works. We adopt 5 model structures from existing works (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022; Zhang et al., 2022a): Conv-small, Conv-4-layer, Conv-big on MNIST, and ResNet-2B, ResNet-4B on CIFAR-10, with details in Appendix B. We use the CRR and attack-ACC (accuracy under an attack) as the metrics. All the results are averaged over three runs with different random seeds on 100 random samples for each dataset. Further experimental details can be found in Appendix B.
5.2 EVALUATION
Comparing to existing robustness certification We first focus on evaluating our certification results compared to existing robustness certification for sample-wise perturbation. There are several competitive frameworks such as Singh et al. (2019b); Bunel et al. (2020); Henriksen & Lomuscio (2021), and we compare with auto LiRPA (Xu et al., 2020a) specifically for its state-of-the-art
performance (Bak et al., 2021). We consider models from both natural training and adversarial training. For MNIST, we evaluate the two certification methods on naturally trained models and PGD-32 (PGD with ℓ∞-norm 32/255) adversarially trained models (Madry et al., 2018). Table 1 details the results on naturally trained MNIST models. Our method provides much tighter bounds than sample-wise certification results across all the settings. Table 2 illustrates results on PGD-32 trained MNIST models. With adversarial training, we observe that the certified robustness of the models against UPs also largely increases compared with naturally trained models in Table 1. Our certification results are still much tighter than sample-wise results, especially under settings with larger perturbations. On CIFAR-10, we only evaluate the results on adversarially trained models (PGD-8 for ResNet-2B and ResNet-4B), as the naturally trained model is severely susceptible to perturbations on CIFAR-10 (see Figure 5) even with ϵ = 1/255. To sum up, our method can provide tighter robustness certification than existing sample-wise methods.
Estimation of the lower bound of ARP Figure 2 illustrates the application of Theorem 2 with the CRR. We use the naturally trained Conv-4-layer on MNIST as an example and set ϵ = 6/255. We demonstrate the estimation of the 0.9-probable certification for the lower bound of the ARP by setting ξ = 0.1. From Figure 2, we can see that the empirical result, the CRR, becomes tighter when more samples are considered in certification. Incorporating more samples also makes the estimated lower bound of the ARP much closer to the CRR (as t∗(ξ, n) is smaller). Such an observation shows that when incorporating more samples in certification, the empirical results better reflect the actual robustness of the whole population.
In particular, when using 1000 samples, the result can be interpreted as the ARP is larger than 84.73% with at least a 90% probability.
Validating with UAP attacks We then validate the robustness certification results with UAP attacks, as the CRR should lower bound the attack-ACCs. We consider three SOTA UAP attacks: Adv-UAP (Li et al., 2022), CosUAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020), detailed in Appendix B. Figure 3 compares the CRR and attack-ACCs on PGD-32 trained MNIST models. As shown in Figure 3, CRRs are indeed lower than all the attack-ACCs, as expected.
Validating with backdoor attacks We also validate whether CRR still lower bounds the attack-ACCs in backdoor attacks. We consider two backdoor triggers, namely a blended trigger (Chen et al., 2017) with a small ℓ∞-norm (∥δ∥∞ = 5/255, referred to as the stealthy trigger), and the BadNets trigger (Gu et al., 2019) (∥δ∥∞ = 255/255). All the attacks utilize the same poison ratio, 20%, following existing works (Zeng et al., 2021). The visual examples of the poisoned sample, the triggers, and the certification results are listed in Figure 4. Under the setting of the stealthy blended backdoor, we find that the CRR drops dramatically before reaching the trigger's norm (∥δ∥∞ = 5/255) compared to the same model trained on clean MNIST. This observation verifies the correctness of CRR and its potential to reveal stealthy ℓ∞-bounded backdoor attacks given the current trend of backdoor development toward smaller ℓ∞-norm constraints, e.g., Zeng et al. (2022b); Zhao et al. (2020). However, assuming an ℓp norm bound on the backdoor triggers is not widely accepted in traditional backdoor settings. Thus, we also present the results of BadNets (with ∥δ∥∞ = 255/255) in Figure 4. We consider the backdoor model trained from scratch or fine-tuned from the clean model. The CRR is still lower-bounding the attack's deployed ℓ∞ bound of the trigger. However, as the trigger has a large ℓ∞ norm, the CRRs of poisoned models are no different from that of the clean model and thus not that useful. Nevertheless, in Section 5.3, we show a simple twist of the certification framework to help reveal backdoors' existence.
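For reference, the two trigger-insertion schemes can be emulated as in the sketch below; the trigger patterns, blending ratio, and patch location are our own illustrative choices rather than the exact configurations of the cited attacks.

```python
import torch

def blend_poison(x, trigger, alpha):
    # Blended trigger (Chen et al., 2017): convex combination with a trigger pattern.
    # When inputs lie in [0, 1], the perturbation (x_poisoned - x) has l_inf norm
    # at most alpha, so a small alpha yields a "stealthy" trigger.
    return torch.clamp((1 - alpha) * x + alpha * trigger, 0.0, 1.0)

def badnets_poison(x, patch, top=0, left=0):
    # BadNets (Gu et al., 2019): overwrite a small patch, i.e., a trigger with an
    # effectively unbounded l_inf norm.
    x = x.clone()
    x[..., top:top + patch.shape[-2], left:left + patch.shape[-1]] = patch
    return x
```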
5.3 IMPLICATIONS OF ROBUSTNESS CERTIFICATION AGAINST UPS
Now we explore the potential implications of our robustness certification against UPs. We focus on 3 case studies on model structure comparison, UAP defenses comparison, and backdoor detection.
Comparing model structures One implication of robustness certification regarding UPs is to compare different model structures and training strategies regarding the certified robustness against UPs. Figure 5 depicts the certification results of all the considered model structures with different training settings on MNIST and CIFAR-10. We consider both naturally trained and PGD trained models with different ℓ∞ perturbation norms. In Figure 5 (a) on MNIST, we find that the largest model, Conv-big, shows the worst certified robustness against UPs. But the smallest Conv-small's CRR is higher than that of Conv-4-layer under the naturally trained setting, PGD-8, and PGD-16, but not PGD-32. The only difference between Conv-small and Conv-4-layer is that Conv-4-layer uses a larger padding step, which results in a slightly larger hidden layer (see Appendix B). Based on this observation, there is an interesting trade-off between model size and certified robustness against UPs: a slightly larger structure can help the model obtain better certified robustness when adopting adversarial training, potentially due to increased model capacity. Such an observation can be further illustrated in Figure 5 (b). Specifically, ResNet-2B's CRR would drop to random guessing when using PGD-16, while ResNet-4B can still maintain a certain scale of CRR. But even larger models in Figure 5 (a) have worse certified robustness, potentially due to looser certified bounds.
Implication to UAP defenses Another implication of the CRR is to compare existing UAP defenses regarding their efficacy. We consider three types of defenses and five different defenses in total: FGSM and PGD sample-wise adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Wong et al., 2019); universal adversarial training (UAT) with FGSM or PGD synthesizing UPs (Shafahi et al., 2020); and sample-wise certified defense with Interval Bound Propagation (IBP) training (Gowal et al., 2018; Mirman et al., 2018). The defended models are further evaluated with UAP attacks and certification. The results with a small perturbation radius ∥δ∥∞ = 16/255 are shown in Table 4. Additional results with a larger perturbation radius (∥δ∥∞ = 80/255) are in Table 7, Appendix C. We use the row titled “Worst” to record the maximum accuracy drop under UAP attacks compared to clean accuracy. Surprisingly, in Table 4, we find the CRR of models trained with UAT is worse than that of their sample-wise adversarial training counterparts (i.e., UAT-PGD results are worse than PGD). However, in the case of a larger perturbation radius (Table 7, Appendix C), the UAT-trained models can achieve higher CRR than the sample-wise counterparts. Such an observation indicates an underexplored trade-off between the perturbation radius and the UAP defense method regarding CRR. The CRR result from the IBP-trained model is much tighter than the others, as IBP directly optimizes an objective for certified robustness and tightens the certified bounds for all the neurons. Moreover, CRR is also aligned with the worst attacked accuracy drop and can be an indicator for comparing different settings of UAP defenses.
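As a reference for the UAT baseline, a minimal sketch of universal adversarial training in the spirit of Shafahi et al. (2020) is given below; the hyperparameters and function names are ours, and the actual defense implementations may differ.

```python
import torch
import torch.nn.functional as F

def uat_epoch(model, loader, optimizer, eps, delta_step, device="cpu"):
    # Universal adversarial training: a single persistent perturbation `delta`
    # is updated alongside the model weights, one ascent step per batch.
    delta = torch.zeros(next(iter(loader))[0].shape[1:], device=device)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta, retain_graph=True)
        # Train the model on the currently perturbed batch.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Ascent step on the shared perturbation, projected to the l_inf ball.
        with torch.no_grad():
            delta = (delta + delta_step * grad.sign()).clamp_(-eps, eps)
    return delta
```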
Implication to backdoor defenses We evaluated the effectiveness of the CRR in revealing potential backdoors in the above section, but so far the effectiveness is limited to triggers with small perturbations. This section presents a simple twist on the certification framework by teaming it up with adversarial training (PGD-16). We depict the average class-wise certification results on 10 ResNet-4B models trained with different random seeds over different BadNets poison ratios in Figure 4. Based on the results, we find the certification can reliably reveal the targeted label and indicate how strong the backdoor attack is (i.e., the CRR is aligned with the poison ratio used). Additional results on the Smooth attack (Zeng et al., 2021) and the ℓ2 invisible attack (Li et al., 2020a) are listed in Appendix C, which share similar observations. The reason for the successful identification is that, naturally, adversarial training would force the model to learn more from the reliable features and thus make standard backdoors stand out from benign features of the data (i.e., the trigger is easier for the model to learn), as also discussed in Weng et al. (2020). Thus, after training a model with adversarial training with a large perturbation radius, the model would likely engrave the trigger and thus have a high CRR only on the target label. The CRR computed by our proposed method provides an intriguing point of view to determine the attack's strength (i.e., poison ratio).
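The detection recipe described above can be condensed into a short sketch: average the class-wise CRR over several adversarially trained models and flag classes whose CRR is anomalously high. The function name and the simple z-score flagging rule are our own simplifications.

```python
import numpy as np

def flag_backdoor_target(classwise_crr, z_thresh=2.0):
    # classwise_crr: array of shape (num_models, num_classes), the CRR computed
    # per class for each adversarially trained (e.g., PGD-16) model.
    mean_crr = classwise_crr.mean(axis=0)                  # average over models
    z = (mean_crr - mean_crr.mean()) / (mean_crr.std() + 1e-8)
    suspects = np.where(z > z_thresh)[0]                   # unusually robust classes
    return suspects, mean_crr
```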
6 CONCLUSION
In this work, we present the first focused study on certifying neural networks’ robustness against UPs. In contrast to previous robustness certification works that focused on sample-wise perturbations, we formulate the certification problem against UPs by emphasizing sharing a universal perturbation between different samples. We propose a combination of linear relaxation-based bounds and MILP to solve the problem. We also present a theoretical analysis framework to estimate the certification result for the entire population based on results from a batch of random samples. Extensive experiments reveal that our certification imposes tighter results than directly applying existing sample-wise robustness certifications. In addition, we discuss and demonstrate how robustness certification against UPs could facilitate comparing certified robustness between different model structures and defense methods and provide reliable backdoor detection.
ACKNOWLEDGEMENT
This work is partially funded by Sony AI. This work is also supported in part by NSF under IIS2008173, IIS-2048280, and by Army Research Laboratory under W911NF-20-2-0158. RJ and the ReDS lab appreciate the support of The Amazon - Virginia Tech Initiative for Efficient and Robust Machine Learning and the Cisco Award. YZ and ZS are both supported by the Amazon Fellowship.
A PROOFS
A.1 PROOF OF THEOREM 1
Let ϑ̂ be the solution of the MILP problem in the theorem, and let ϑ̃ be the solution to Eq. (4). Theorem 1 states that ϑ̂ = ϑ̃. We formally prove the equivalence below.
Proof. We first show that $\hat{\vartheta} \le \tilde{\vartheta}$. In Eq. (4), there exists some $\tilde{\delta}$ such that
$$\tilde{\delta} = \arg\min_{\|\delta\|_\infty\le\epsilon} \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\delta + b_j^{(i)}\} > 0\Big).$$
Then, for every $i \in [n]$, take the following values for the variables in the MILP formulation:
$$q^{(i)} = \mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\tilde{\delta} + b_j^{(i)}\} > 0\Big), \qquad \vartheta = \tilde{\vartheta} = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, \qquad \forall j \neq y_i,\; s_j^{(i)} = \mathbb{1}(j = j'), \;\text{where}\; j' = \arg\min_{j\neq y_i} a_j^{(i)}\tilde{\delta} + b_j^{(i)},$$
and it is easy to see that these values satisfy all the constraints in the MILP problem. Thus the result of the minimization in the MILP is no larger than $\tilde{\vartheta}$, i.e., $\hat{\vartheta} \le \tilde{\vartheta}$.
We now show that $\tilde{\vartheta} \le \hat{\vartheta}$. We use $\hat{\delta}, \hat{q}, \hat{s}$ to denote the values of the $\delta, q, s$ variables in the solution of the MILP. For every $i \in [n]$, Eq. (7) ensures that there exists exactly one $\hat{j}$ ($\hat{j} \neq y_i$) with $\hat{s}_{\hat{j}}^{(i)} = 1$, and Eq. (8) ensures that for all $j \neq y_i$, $a_{\hat{j}}^{(i)}\hat{\delta} + b_{\hat{j}}^{(i)} \le a_j^{(i)}\hat{\delta} + b_j^{(i)}$ holds. Thus
$$\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} = \min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\}.$$
According to Eq. (6), if $\hat{q}^{(i)} = 1$, $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \ge 0$ holds. In case that $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} = 0$, Eq. (6) also holds with $\hat{q}^{(i)} = 0$, and due to the minimization objective of the MILP, $\hat{q}^{(i)} = 0$ instead of $\hat{q}^{(i)} = 1$ will be taken. Thus $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} > 0$ strictly holds when $\hat{q}^{(i)} = 1$. And if $\hat{q}^{(i)} = 0$, $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \le 0$ holds. Thus
$$\hat{q}^{(i)} = \mathbb{1}\Big(\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} > 0\Big) = \mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\} > 0\Big).$$
Thereby
$$\hat{\vartheta} = \frac{1}{n}\sum_{i=1}^{n}\hat{q}^{(i)} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\} > 0\Big),$$
and since $\hat{\delta}$ is feasible for Eq. (4), the minimum of Eq. (4) is no larger than $\hat{\vartheta}$, i.e., $\tilde{\vartheta} \le \hat{\vartheta}$.
Hence $\hat{\vartheta} = \tilde{\vartheta}$ is proved.
A.2 PROOF OF THEOREM 2
Let $\delta^* = \arg\min_{\delta\in\Delta} U(\delta)$ be the optimal universal perturbation that minimizes the ARP, let $\delta_n$ be the value returned by $\mathcal{G}(D_n)$, and let $\tilde{\delta} = \arg\min_{\delta\in\Delta} V^{\mathcal{B}}(\delta)$ be the perturbation that minimizes the CRP. We introduce the following lemma:
Lemma 1. Given $\mathcal{A}_n$, it holds that
$$\mathbb{P}^n\big(\hat{V}^{\mathcal{B}}(\delta^*) - V^{\mathcal{B}}(\delta^*) > t^*(\xi, n)\big) \le \xi, \qquad (15)$$
where $t^*(\xi, n)$ is the root of the equation $(1+4t)\ln(1+4t) - 4t = \frac{4}{n}\ln(1/\xi)$.
Proof. Let $q^{(i)} = \mathbb{1}(\delta^* \in \Delta_{z^{(i)}}^{\mathcal{B}})$, which can also be interpreted as $\mathbb{1}\big(\min_{j\neq y_i}\{a_j^{(i)}\delta^* + b_j^{(i)}\} > 0\big)$. Then, $\hat{V}^{\mathcal{B}}(\delta^*) = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}$ and $V^{\mathcal{B}}(\delta^*) = \mathbb{E}[\frac{1}{n}\sum_{i=1}^{n} q^{(i)}] = \mathbb{E}[q^{(i)}]$. Let $\sigma^2$ denote the variance of $q^{(i)}$. Since $q^{(i)}$ is a binary random variable, we have that $\sigma^2 \le 1/4$. Let $h(u) = (1+u)\ln(1+u) - u$. For any $t > 0$, we have that
$$\mathbb{P}^n\Big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t\Big) \le \exp\Big(-n\sigma^2 h\big(\tfrac{t}{\sigma^2}\big)\Big) \qquad (16)$$
$$\le \exp\Big(-\frac{n}{4} h(4t)\Big), \qquad (17)$$
where the first inequality is a direct application of Bennett's inequality (Bennett, 1962), and the second inequality is due to the fact that $n\sigma^2 h(\tfrac{t}{\sigma^2})$ is a monotonically decreasing function of $\sigma^2$. Let $t^*(\xi, n)$ denote the root of $\exp\big(-\frac{n}{4}h(4t)\big) = \xi$. Then, it follows that $\mathbb{P}^n\big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t^*(\xi, n)\big) \le \xi$.
Then we prove Theorem 2 to certify the robustness of a classifier against the worst-case attack δ∗.
Proof. We use the following relations: for any $\delta \in \Delta$,
$$U(\delta) \ge \min_{\delta\in\Delta} U(\delta) = U(\delta^*) = \underbrace{U(\delta^*) - V^{\mathcal{B}}(\delta^*)}_{(i)} + \underbrace{V^{\mathcal{B}}(\delta^*) - \hat{V}^{\mathcal{B}}(\delta^*)}_{(ii)} + \underbrace{\hat{V}^{\mathcal{B}}(\delta^*) - \hat{V}^{\mathcal{B}}(\delta_n)}_{(iii)} + \hat{V}^{\mathcal{B}}(\delta_n) \ge (i) + (ii) + \hat{V}^{\mathcal{B}}(\delta_n), \qquad (18)$$
where (ii) can be bounded by applying the concentration inequality in Lemma 1, and (iii) $\ge 0$ due to the optimality of $\delta_n = \arg\min_{\delta\in\Delta}\hat{V}^{\mathcal{B}}(\delta)$. Combining these bounds yields Theorem 2.
B FURTHER DETAILS ON EXPERIMENTAL SETTINGS
We use one server equipped with a total of 8 RTX A6000 GPUs as the hardware platform. PyTorch (Paszke et al., 2019) is adopted as the implementation framework. We detail the model structures used in our experiment in Table 6. All of the model structures used in this work were also considered in other existing robustness certification works as standard set-ups: Conv-small, Conv-4-layer, Conv-big on MNIST (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022), and ResNet-2B, ResNet-4B on CIFAR-10 (Zhang et al., 2022a). We use Adadelta (Zeiler, 2012) as the optimizer with a learning rate set to 0.1 for all the model training processes (including the adversarial training for the model updating step as well). For MNIST models, we train each model for 60 epochs. For CIFAR-10 models, we train each model for 500 epochs to ensure full convergence. For adversarial training adopted in the main text, the number of steps in PGD attacks is 7; the step size for PGD is set as ϵ/4. For IBP training, we use the implementation in Shi et al. (2021).
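For completeness, the PGD attack used inside adversarial training (7 steps, step size ϵ/4, as stated above) can be sketched as follows; this is a standard ℓ∞ PGD implementation rather than the exact training script.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=7, step_size=None):
    # l_inf PGD with 7 steps and step size eps / 4, matching the setting in the text.
    step_size = step_size or eps / 4
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((x + delta).clamp(0.0, 1.0) - x)  # keep images in [0, 1]
    return (x + delta).detach()
```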
Now, we detail the UAP attacks considered in the experiment for validating the certification results, namely Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020). The design of each UAP attack's synthesis procedure distinguishes these attacks. Specifically, Adv-UAP synthesizes and generates adversarial examples for each input before synthesizing the UAP, which has been shown to be more effective in finding stronger UAPs. Cos-UAP produces a UAP by reducing the cosine similarity between the original output logits and the disturbed logits; DF-UAP employs a loss similar to that of the C&W attack (Carlini & Wagner, 2017), which aims to reduce the distance between the ground-truth label's logits and the maximum logits of the rest.
Now we provide the detailed settings of the backdoor target-class identification in Section 5.3. For the threat model, we consider the scenario where the defender aims to determine whether a backdoor attack resides in a given dataset, identify the target class, and assess how potent the attack is if an attack is identified. We assume the defender has access to the training set to be inspected, with no additional clean validation data required. To conduct the instantiated case shown in Section 5.3, the defender adversarially trains (with PGD-16) 10 different models on the inspected training dataset and obtains the averaged CRR results in a class-wise manner. In particular, as we assume no additional clean validation data is available, we pass 100 random noise samples through the models under certification to obtain the results in Figures 6, 7, and 8.
C ADDITIONAL RESULTS
C.1 ADDITIONAL RESULTS ON UAP DEFENSES COMPARISON
Table 7 details the results of the UAP defenses comparison under the large-norm setting (ϵ = 80/255). Note that all the defenses adopted here incur the same cost as before. For the large-norm setting, we find that only certified-robustness training ends up with a CRR larger than 0. Apart from its actual effectiveness, as mentioned in the main text, the IBP-trained model also ends up with much tighter intermediate linear bounds (i.e., a and b are tighter). Even though our work can only return a positive CRR on the IBP-trained model, the certification results are still aligned with the actual attack results, as the IBP-trained model has stronger robustness than the other models in terms of the smallest drop in ACC.
C.2 ADDITIONAL RESULTS ON BACKDOOR TARGET-CLASS IDENTIFICATION
We now provide additional results on implementing the certification framework to identify the existence of backdoor attacks. In this section, the results provided are evaluated against the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a). Figures 7 and 8 illustrate the results on the Smooth attack and the ℓ2-invisible attack, respectively. Based on the results, we find the certification can also reliably reveal the targeted label and indicate how powerful the backdoor attack is for both the Smooth attack and the ℓ2-invisible attack.
D BROADER IMPACT AND LIMITATIONS
D.1 UAP AND BACKDOOR ATTACKS
UAP attacks aim to synthesize a UP by accessing and analyzing the output of a trained neural network. Backdoor attacks aim to insert a predefined trigger into the neural network and ensure an effective attack without accessing and analyzing the output after the model is trained over the poisoned samples. Many existing works have found these two paralleled lines of work have interesting intersections. In particular, the formulation of UAP synthesizing has also inspired or has its interesting counterparts in backdoor attacks or defense designs. For example, Li et al. (2020a); Zhang et al. (2021b) designed their backdoor trigger via a similar process of synthesizing UAP using a trained model. Kolouri et al. (2020); Zeng et al. (2022a) adopted this interesting intersection between UAP and backdoor attacks to provide identification of backdoors or conduct online removal of backdoors. Suppose we view these two attack paradigms at the inference time (with a trained model). In that case, mitigation defenses and robustness synthesizing tools for both attacks can be developed for general robustness to UP.
D.2 LIMITATIONS
Unconstrained or Large ℓ∞-norm Attacks: Some of the UAP attacks are generated without specifying a constraint (Brown et al., 2017), and in most backdoor attacks, the inserted trigger does not have a constrained ℓ∞ norm. If the attack can have an unconstrained or very large ℓ∞ norm, only trivial certification results can be obtained from our certification. This limitation also commonly exists in state-of-the-art sample-wise certification methods (Wang et al., 2021; Ferrari et al., 2021). In fact, any certification procedure requires some constraints on potential perturbations and does not apply to unbounded perturbations. This open problem calls for the attention of future research.
Computational Cost: Supporting large models and large datasets can be computationally costly for our certification. Existing works for certifying pre-trained models (Wang et al., 2021; Ferrari et al., 2021) are also commonly limited to moderate-sized networks, and the cost of our method is lower bounded by existing linear bound propagation frameworks that we use to obtain the linear bounds before solving the MILP problem. It remains a challenging open-problem for scaling to larger-scale networks, such as models for ImageNet (Deng et al., 2009). | 1. What is the focus of the paper regarding neural network verification?
2. What are the strengths of the proposed approach, particularly in its adaptation from existing methods?
3. What are the weaknesses of the paper, especially in comparison with other verification methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions to enhance the scalability of the verification process? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies the verification of neural networks against universal adversarial perturbations, i.e., perturbations that are universally applied on a set of images. It solves the verification problem by adapting bound propagation- and MILP-based verification methods to universal perturbations.
Strengths And Weaknesses
The paper is the first to the best of my knowledge to discuss the formal verification of universal perturbations.
The paper includes an analysis of the generalisation of robustness results from samples to the entire data distribution.
The verification method is a straightforward adaptation of existing methods.
Unsound comparison with bound propagation-based methods: whereas the certified robust rate for the bound propagation methods is taken as the rate of images for which a model is robust, said rate for the current method is being minimised in an MILP formulation.
Clarity, Quality, Novelty And Reproducibility
The paper is well written and presented. It is highly incremental to previous work as existing methods on bound propagation and MILP formulations are almost entirely used as appeared in previous works. I think that the novelty can be improved by exploiting the restricted nature of the problem studied (which couples the perturbation radius to all inputs) to improve the scalability of verification. |
ICLR | Title
Towards Robustness Certification Against Universal Perturbations
Abstract
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbations given a neural network. However, those sample-wise bounds will be loose when considering the UP threat model as they overlook the important constraint that the perturbation should be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming to establish the first robust certification method for UP. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models or efficacy among different universal adversarial attack defenses and enables accurate detection of backdoor target classes.
1 INTRODUCTION
As deep neural networks become prevalent in modern performance-critical systems such as selfdriving cars and healthcare, it is critical to understand their failure modes and performance guarantees. Universal perturbations (UPs) are an important class of vulnerabilities faced by deep neural networks. Such perturbations can fool a classifier into misclassifying any input from a given distribution with high probability at test time. Past literature has studied two lines of techniques to create UPs: universal adversarial attacks (Moosavi-Dezfooli et al., 2017) and backdoor attacks (Gu et al., 2019; Chen et al., 2017). The former crafts a UP based on a trained model and does not rely on access to training data. The latter, by contrast, prespecifies a pattern as a UP and further alters the training data so that adding the pattern (often known as the trigger in backdoor attack literature) will change the output of the trained classifier into an attacker-desired target class.
Many defenses have been proposed for both universal adversarial attacks (Akhtar & Mian, 2018; Moosavi-Dezfooli et al., 2017; Shafahi et al., 2020; Benz et al., 2021; Liu et al., 2021) and backdoor attacks (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Borgnia et al., 2020; Qiu et al., 2021). But empirical evaluation with attacks does not provide a formal guarantee on the robustness, as it is infeasible for an attack algorithm to provably cover all concerned perturbations. In contrast, robustness certification aims to verify the output bounds of the model given a certain class of input perturbations and provably certify the robustness against all the concerned perturbations. Several recent works (Weber et al., 2020; Xie et al., 2021) have developed techniques to achieve certified robustness of a classifier against backdoor-attack-induced UPs with a certain norm bound. However, these techniques apply only to specific learning algorithms and require knowledge of the training data. It remains an open question: how to certify the robustness of a trained model against a class of UPs in a way that is agnostic to the underlying training algorithm and data, and is general for different UPs (including both universal adversarial attacks and norm-bounded backdoor attacks)?
∗Zhouxing Shi and Yi Zeng contributed equally. Corresponding authors: Yi Zeng, Lingjuan Lyu, or Ruoxi Jia. Work partially done during Yi Zeng's internship at Sony AI.
In this paper, we propose a framework to certify the worst-case classification accuracy on a batch of test samples against l∞-norm-bounded UPs. Our approach builds off of past works for certifying robustness against sample-wise perturbations that are independently added to each sample. For efficient verification, many recent works linearly relax nonlinear activation functions in neural networks into linear bounds and then conduct linear bound propagation to obtain the output bounds for the whole model (Wong & Kolter, 2018; Wang et al., 2018b; Dvijotham et al., 2018; Zhang et al., 2018; Singh et al., 2019b). This process is also referred to as linear perturbation analysis (Xu et al., 2020a). Since the worst-case model accuracy against sample-wise perturbations is a lower bound of the worst-case accuracy against UPs, these certification techniques could be applied to obtain a certificate against UPs. However, a direct application would overlook the important constraint that a UP is shared across different inputs, thereby producing overly conservative certification results.
Unlike sample-wise perturbations, UPs require theoretical reasoning to generalize certification results. This is because UPs are applied to any input from the data distribution, and our main interest lies in the expected model accuracy over the entire data distribution against UPs. However, certification procedures can only accept a batch of samples from the distribution and certify the accuracy over the samples. Therefore, it’s crucial to understand the discrepancy between certified robustness computed from samples and the actual population robustness.
We summarize our contributions as follows:
• We formulate the problem of robustness certification against UPs. We then generalize linear relaxation based perturbation analysis (LiRPA) to UPs, and we further propose a Mixed Integer Linear Programming (MILP) formulation over linear bounds from LiRPA, to obtain tighter certification on the worst-case accuracy of a given model against UPs within an ℓ∞-norm ball¹.
• We establish a theoretical framework for analyzing the generalizability of the certification results based on randomly sampled subsets to the entire population.
• We conduct extensive experiments to show that our certification method provides certified lower bounds on the worst-case robust accuracy against both universal adversarial attacks and ℓ∞-bounded backdoor attacks, which are substantially tighter than results by directly applying existing sample-wise certification.
• We also investigate the implications of robustness certification on UPs to facilitate easy comparisons of robustness among different models or the efficacy of empirical defenses, and to achieve reliable identification of backdoor target classes.
2 BACKGROUND AND RELATED WORK
Universal Adversarial Perturbation Neural networks are vulnerable to adversarial examples (Szegedy et al., 2014), which has led to the development of universal adversarial perturbations (UAPs), where the same noise can consistently deceive a target network on most images (Liu et al., 2019; 2020). Existing defenses against UAPs include fine-tuning on pre-computed UAPs (Moosavi-Dezfooli et al., 2017), post-hoc detection (Akhtar et al., 2018), and universal adversarial training with online UAP generation (Mummadi et al., 2019; Shafahi et al., 2020; Benz et al., 2021). However, all existing defenses against UAPs are empirical works without an efficacy guarantee against new attacks.
Backdoor Attacks In backdoor attacks, attackers plant a predefined UP (a.k.a. the trigger) in the victim model by manipulating the training procedure (Li et al., 2020c). Attacked models can give adversarially-desired outputs for any input patched with the trigger while still show good performance on clean inputs. Existing defenses include: poison detection via outlier detection (Gao et al., 2019; Chen et al., 2018; Tran et al., 2018; Zeng et al., 2021) which rely on the modeling of clean samples’ distribution; poisoned model identification (Xu et al., 2019; Wang et al., 2020b); trojan removal via trigger synthesising (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Zeng et al., 2022a), or preprocessing and fine-tuning; (Li et al., 2020b; Borgnia et al., 2020); robust training via differential privacy (Du et al., 2019) or redesigning the training pipeline (Levine & Feizi, 2020; Jia et al., 2020; Huang et al., 2022; Li et al., 2021). As all these defenses were empirical, existing literature has revealed those empirical defenses’ limitations to zero-day attacks or adaptive attacks (Zeng et al., 2022b).
Robustness Certification of Neural Networks Early robustness certifications (Katz et al., 2017; Ehlers, 2017; Tjeng et al., 2017) largely relied on satisfiability modulo theory (SMT) or integer linear programming (ILP) solvers and were limited to very small networks. For more efficient verification, bound propagation with convex relaxations has been proposed (Wong & Kolter, 2018; Wang et al., 2018b; Zhang et al., 2018; Weng et al., 2018; Singh et al., 2019b; Salman et al., 2019), which over-approximates nonlinear activations with convex relaxation and propagates the bounds layer by layer to finally bound the entire model. Xu et al. (2020a) proposed a bound propagation framework for general computational graphs and referred to the related methods as linear relaxation based perturbation analysis (LiRPA), as activations are relaxed by linear bounds. Bound propagation methods have also been further enhanced with techniques such as branch-and-bound (Bunel et al., 2018; 2020; Wang et al., 2018a;b; Xu et al., 2020b; Wang et al., 2021) and multi-neuron relaxation and cutting planes (Singh et al., 2019a; Ferrari et al., 2021; Zhang et al., 2022a) for tighter results at the cost of efficiency. However, these works are developed for sample-wise perturbations, and they cannot directly produce tight certification against universal perturbations. Besides, there are several randomized smoothing (Cohen et al., 2019) based methods for certified robustness against backdoor attacks (Weber et al., 2020; Wang et al., 2020a; Xie et al., 2021; Zhang et al., 2022b). These are stochastic methods and are usually considered orthogonal to deterministic certification. Moreover, they require access to training data, are only applicable to some specific learning algorithms (e.g., binary models or federated learning), and are not general for other UPs, such as UAPs.
¹https://github.com/ruoxi-jia-group/Universal_Pert_Cert
3 METHODOLOGY
3.1 PROBLEM FORMULATION
On a set of n independent samples {z(1), . . . , z(n)} from the data distribution Ω, where z(i) = (xi, yi) is the i-th example, xi (xi ∈ Rd) is the input and yi is the ground-truth label, we aim to certify the robustness of a K-way neural network classifier f : Rd → RK against a potential universal perturbation δ with ℓ∞ norm constrained as ∥δ∥∞ ≤ ϵ. In particular, we aim to certify and lower bound the worst-case accuracy of the neural network on {z(1), . . . , z(n)} for any universal perturbation δ (∥δ∥∞ ≤ ϵ) applied to all the examples:
$$\min_{\|\delta\|_\infty \le \epsilon} \;\frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \neq y_i}\{m_{y_i,j}(x_i+\delta)\} > 0\Big), \qquad (1)$$
where $m_{y_i,j}(x_i + \delta) = f_{y_i}(x_i + \delta) - f_j(x_i + \delta)$ is the margin between the ground-truth class $y_i$ and an incorrect class $j \neq y_i$, and the indicator checks whether the margin is positive for all $j \neq y_i$ when a perturbation $\delta$ is added. It is NP-hard to exactly verify Eq. (1) even for $n = 1$ and a small ReLU network (Katz et al., 2017). Thus recent neural network verifiers usually compute a lower bound for the margin as $\underline{m}_{y_i,j}(x_i+\delta) \le m_{y_i,j}(x_i+\delta)$, and then we can replace $m$ in Eq. (1) with $\underline{m}$ to lower bound Eq. (1); this bound also serves as a lower bound for the robustness.
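For intuition, the criterion inside Eq. (1), namely that the margin to every incorrect class stays positive, amounts to a one-line check on the model's logits; the helper below is purely illustrative.

```python
import numpy as np

def certifiably_correct(logits, y):
    # True iff min_{j != y} (logits[y] - logits[j]) > 0, i.e., the (possibly
    # perturbed) input is still classified as the ground-truth class y.
    margins = logits[y] - np.delete(logits, y)
    return bool(margins.min() > 0)

# Example with made-up logits for a 4-class problem and ground-truth class 2.
print(certifiably_correct(np.array([0.1, -0.3, 1.2, 0.4]), 2))  # True
```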
3.2 LINEAR PERTURBATION ANALYSIS W.R.T. A UNIVERSAL PERTURBATION
We adopt linear relaxation based perturbation analysis (LiRPA) from previous works that focused on sample-wise perturbations, specifically “auto LiRPA” (Xu et al., 2020a), to obtain lower bounds on $m_{y_i,j}(x_i + \delta)$ represented as linear functions w.r.t. the universal perturbation $\delta$, but it is also feasible to use other verification frameworks such as Singh et al. (2019b) or Wang et al. (2018b). auto LiRPA can bound the output of a computational graph when its input nodes are perturbed, and it can produce linear functions w.r.t. the perturbed input nodes as linear bounds. Note that margin functions can be appended to the original neural classifier as the output of the computational graph, and thereby the margins can be bounded. When sample-wise perturbations are considered in previous works, the linear bounds can usually be written as
$$\forall i \in [n],\; \forall j \neq y_i,\; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i+\delta) \ge \tilde{a}_j^{(i)}(x_i+\delta) + \tilde{b}_j^{(i)}, \qquad (2)$$
where $\tilde{a}_j^{(i)}$ and $\tilde{b}_j^{(i)}$ are coefficients and biases in the linear bounds. This is achieved by relaxing nonlinear functions such as activation functions in the network with linear bounds and propagating linear coefficients through the computational graph. The right-hand side (RHS) of Eq. (2) is a linear function w.r.t. $(x_i+\delta)$. To obtain a final bound represented as a concrete number without relying on the $\delta$ variable, a concretization step can be applied to the RHS given the constraint on $\|\delta\|_\infty$, which eliminates the $\delta$ variable and lower bounds the RHS as $\tilde{a}_j^{(i)}(x_i+\delta) + \tilde{b}_j^{(i)} \ge -\epsilon\|\tilde{a}_j^{(i)}\|_1 + \tilde{a}_j^{(i)}x_i + \tilde{b}_j^{(i)}$.
However, the aforementioned concretization step considers the worst-case δ for each sample independently but a universal perturbation δ should be shared across all the examples. Thereby it will produce relatively loose and over-conservative results under the universal perturbation setting, as the perturbations are much stronger when each example can take an independent perturbation respectively compared to a single and universal perturbation for all the examples.
In contrast, we propose to obtain a tighter certification for universal perturbation. Unlike Eq. (2), we use auto LiRPA to compute the linear lower bound with respect to δ instead of (xi + δ) by treating δ as a perturbed input node and xi as a fixed input node in the computational graph:
$$\forall i \in [n],\; \forall j \neq y_i,\; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i+\delta) \ge a_j^{(i)}\delta + b_j^{(i)}, \qquad (3)$$
where $a_j^{(i)}$ and $b_j^{(i)}$ are new coefficients and biases in the linear bound, and $x_i$ does not appear on the RHS as it is fixed. In the next section, we will lower bound the worst-case accuracy in Eq. (1) by solving an MILP problem based on Eq. (3).
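To see why treating δ as shared matters, the toy numpy sketch below contrasts the per-sample concretization of Eq. (3) with evaluating one shared δ across a batch; all numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 5, 8, 0.1
a = rng.normal(size=(n, d))   # hypothetical coefficients a^(i) of the bound a^(i) delta + b^(i)
b = rng.normal(size=n) + 1.0  # hypothetical biases b^(i)

# Sample-wise certification: every sample gets its own worst-case delta,
# so the bound concretizes to -eps * ||a^(i)||_1 + b^(i).
samplewise_lb = -eps * np.abs(a).sum(axis=1) + b

# Universal setting: one shared delta has to attack all samples at once.
# Here we only evaluate a single candidate delta (tailored to sample 0);
# the paper instead optimizes delta with the MILP of Theorem 1.
delta = -eps * np.sign(a[0])
universal_margins = a @ delta + b

print(samplewise_lb.round(3))      # independent worst cases: systematically lower
print(universal_margins.round(3))  # one shared delta cannot hurt every sample equally
```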
3.3 AN MILP FORMULATION TO LOWER BOUND THE WORST-CASE ACCURACY
In this section, we use linear bounds in Eq. (3) to compute a lower bound for the worst-case accuracy in Eq. (1). Specifically, by replacing each myi,j in Eq. (1) with its lower bound from Eq. (3), we lower bound Eq. (1) by solving the following problem:
$$\text{minimize}\;\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\delta + b_j^{(i)}\} > 0\Big) \quad \text{s.t.}\;\; \|\delta\|_\infty \le \epsilon. \qquad (4)$$
Now, we show that Eq. (4) can be rewritten into an MILP formulation:
Theorem 1. Problem Eq. (4) is equivalent to the following MILP problem:
$$\text{minimize}\;\; \vartheta$$
$$\text{s.t.}\quad \vartheta = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, \qquad (5)$$
$$\forall i \in [n],\; q^{(i)} \in \{0,1\},\quad -\tau(1-q^{(i)}) \le \sum_{j\neq y_i}(a_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \le \tau q^{(i)}, \qquad (6)$$
$$\forall i \in [n],\;\forall j \neq y_i,\; s_j^{(i)} \in \{0,1\},\quad \sum_{j\neq y_i} s_j^{(i)} = 1, \qquad (7)$$
$$\forall i \in [n],\;\forall j_1 \neq y_i,\;\forall j_2 \neq y_i,\quad (a_{j_1}^{(i)}\delta + b_{j_1}^{(i)})s_{j_1}^{(i)} - \tau(1-s_{j_1}^{(i)}) \le (a_{j_2}^{(i)}\delta + b_{j_2}^{(i)}), \qquad (8)$$
$$\|\delta\|_\infty \le \epsilon,$$
where $\tau \ge \max_{i\in[n]}\sum_{j\neq y_i}|a_j^{(i)}\delta + b_j^{(i)}|$ is a sufficiently large constant.
In Theorem 1, given a universal perturbation $\delta$, for the $i$-th example, the integer variable $q^{(i)} \in \{0,1\}$ denotes whether the model is certifiably correct on this example based on the linear bounds from Eq. (3), and the certified accuracy on the whole batch can be computed as Eq. (5). The model is certifiably correct on the $i$-th example when $m_{y_i,j}(x_i+\delta) \ge a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for all $j \neq y_i$. We use an integer variable $s_j^{(i)} \in \{0,1\}$ to denote whether class $j$ is the hardest among all $j \neq y_i$ under the certification, i.e., $\forall j' \neq y_i,\; a_j^{(i)}\delta + b_j^{(i)} \le a_{j'}^{(i)}\delta + b_{j'}^{(i)}$ holds, which is enforced by Eq. (8). We require each example to have exactly one hardest class $j$ with $s_j^{(i)} = 1$ (see Eq. (7)); in case there are multiple classes with an equal lower bound on the margin function, it is valid to treat any of them as the hardest. Then we only need to check whether $a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for the hardest class $j$ with $s_j^{(i)} = 1$, equivalently $\sum_{j\neq y_i}(a_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} > 0$. In Eq. (6), as $\tau$ is sufficiently large, only $\sum_{j\neq y_i}(a_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \ge 0$ is effectively required when $q^{(i)} = 1$, and $\sum_{j\neq y_i}(a_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} \le 0$ is required when $q^{(i)} = 0$. Note that if exactly $\sum_{j\neq y_i}(a_j^{(i)}\delta + b_j^{(i)})s_j^{(i)} = 0$ happens, $q^{(i)} = 0$ will be taken by the MILP due to the minimization objective, and thus it is still compatible with our goal of checking $a_j^{(i)}\delta + b_j^{(i)} > 0$. Overall, the MILP formulation minimizes the certified accuracy over all possible universal perturbations $\delta$ ($\|\delta\|_\infty \le \epsilon$), to finally produce a lower bound for Eq. (1). We formally prove this theorem in Appendix A.1, and we use Gurobi (Bixby, 2007) to solve the MILP.
Although it is possible to solve the whole certification algorithm through MILP (Tjeng et al., 2017), it will be computationally prohibitive. Even for very small networks with thousands of neurons, the number of integer variables in their MILP formulation will be proportional to the number of neurons. In contrast, by computing linear bounds first before solving MILP, the number of integer variables in our formulation is only proportional to the number of samples in a batch and the number of classes, and it does not depend on the size of the network, which makes it feasible in practice.
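A minimal sketch of the batch certification step in gurobipy is given below, assuming the linear coefficients a_j^(i) and biases b_j^(i) from Eq. (3) have already been computed. It uses an indicator-constraint encoding that is equivalent in optimum to the big-M constraints of Eqs. (6)-(8); it is a sketch under these assumptions, not the released implementation.

```python
import gurobipy as gp
from gurobipy import GRB

def certify_batch(A, B, labels, eps):
    # A[i][j]: length-d array a_j^(i); B[i][j]: scalar b_j^(i); labels[i] = y_i.
    # Returns the certified lower bound on worst-case accuracy and one
    # minimizing universal perturbation delta.
    n, K, d = len(A), len(A[0]), len(A[0][0])
    m = gp.Model("universal_cert")
    m.Params.OutputFlag = 0
    delta = m.addVars(d, lb=-eps, ub=eps, name="delta")
    q = m.addVars(n, vtype=GRB.BINARY, name="q")      # 1 iff sample i stays certified
    r = m.addVars(n, K, vtype=GRB.BINARY, name="r")   # 1 iff class j's bound is <= 0

    for i in range(n):
        others = [j for j in range(K) if j != labels[i]]
        # If q[i] = 0, at least one incorrect class must have a non-positive bound.
        m.addConstr(gp.quicksum(r[i, j] for j in others) + q[i] >= 1)
        for j in others:
            margin = gp.quicksum(float(A[i][j][k]) * delta[k] for k in range(d)) + float(B[i][j])
            m.addConstr((r[i, j] == 1) >> (margin <= 0))  # indicator constraint

    m.setObjective(gp.quicksum(q[i] for i in range(n)) * (1.0 / n), GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal, [delta[k].X for k in range(d)]
```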
4 GENERALIZATION OF UNIVERSAL PERTURBATION
In the previous section, we proposed our robustness certification method against UPs. Note that the certification results are only guaranteed for the given batch of samples till now. In this section, we study how the certified accuracy computed on a batch approximates the certified accuracy computed on the entire data distribution.
Let $z^{(i)}$ be a random sample drawn from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, which is endowed with a $\sigma$-algebra $\mathcal{F}$ and a probability measure $\mathbb{P}$. A dataset $D_n \triangleq \{z^{(1)}, \ldots, z^{(n)}\}$ consists of $n$ observations drawn independently from $\Omega$ according to $\mathbb{P}$; equivalently, it can be considered as a random point in $(\Omega^n, \mathcal{F}^n, \mathbb{P}^n)$, which is the $n$-fold Cartesian product of $\Omega$ equipped with the product $\sigma$-algebra $\mathcal{F}^n$ and the product measure $\mathbb{P}^n = \mathbb{P} \times \cdots \times \mathbb{P}$ ($n$ times). Let $\Delta$ denote the $\ell_\infty$ ball that contains all allowable perturbations, $\Delta = \{\delta : \|\delta\|_\infty \le \epsilon\}$, with radius $\epsilon$. Let $\mathcal{B}: \Omega \to \mathbb{R}^{(d+1)K}$ be a linear bound generation procedure; for each $z = (x, y)$, it returns the parameters $\{a_j, b_j\}_{j\neq y}$ of the linear lower bounds on the margins, i.e., $m_{y,j}(x+\delta) \ge a_j\delta + b_j$. In the proposed framework, $\mathcal{B}$ is instantiated as auto LiRPA (Xu et al., 2020a). Let $\mathcal{A}_n: \mathbb{R}^{(d+1)Kn} \to \Delta$ denote the MILP in Eq. (4), which returns a perturbation $\delta$ given the linear bounds on the margins. The overall certification procedure is the composition of $\mathcal{A}_n$ and $\mathcal{B}$, denoted by $\mathcal{G} = \mathcal{A}_n \circ \mathcal{B} \circ \cdots \circ \mathcal{B}$ ($n$ times) $\triangleq \mathcal{A}_n \circ \mathcal{B}^{\circ n}$.
For every data sample $z = (x, y) \in \Omega$, we define the set
$$\Delta_z^{\mathcal{B}} := \Big\{ \delta \in \Delta : \min_{j \neq y}\{a_j\delta + b_j\} > 0 \Big\}$$
as the set of perturbations such that the margin between the ground-truth class and any other class is certifiably positive according to the linear bounds provided by $\mathcal{B}$, i.e., the model is certifiably robust to any perturbation in this set, but it is still possible for the model to be robust to a perturbation $\delta \notin \Delta_z^{\mathcal{B}}$. Note that the dependence of the set on $\mathcal{B}$ has been made explicit because $a_j, b_j$ depend on $\mathcal{B}$. Similarly, we define the set
$$\tilde{\Delta}_z := \Big\{ \delta \in \Delta : \min_{j \neq y}\{m_{y,j}(x+\delta)\} > 0 \Big\} \qquad (9)$$
as the set of all perturbations that are incapable of fooling the given model $f$, i.e., the data point $z$ is actually robust to any perturbation in this set. Note that $\tilde{\Delta}_z$ is a superset of $\Delta_z^{\mathcal{B}}$, and unlike $\Delta_z^{\mathcal{B}}$, it does not depend on the linear bound generation procedure. We make the following definitions:
Definition 1. The certified robust probability (CRP) of a given perturbation $\delta \in \Delta$ based on a linear bound generation procedure $\mathcal{B}$ is defined as
$$V^{\mathcal{B}}(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \Delta_z^{\mathcal{B}}). \qquad (10)$$
The actual robust probability (ARP) of a given perturbation $\delta \in \Delta$ is defined as
$$U(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \tilde{\Delta}_z). \qquad (11)$$
The certified robust rate (CRR) of a perturbation $\delta \in \Delta$ on an evaluation dataset $D_n$ based on a linear bound generation procedure $\mathcal{B}$ is
$$\hat{V}^{\mathcal{B}}(\delta; D_n) \triangleq \frac{1}{n}\sum_{z \in D_n} \mathbb{1}(\delta \in \Delta_z^{\mathcal{B}}). \qquad (12)$$
Equivalently, we can write the objective of Eq. (4) as $\min_{\delta\in\Delta}\hat{V}^{\mathcal{B}}(\delta; D_n)$. $\Delta_z^{\mathcal{B}}$ can be equivalently defined by the existence of a binary variable as in the MILP formulation in Theorem 1 and is thus nonconvex in general. In the following, we write $\hat{V}^{\mathcal{B}}(\delta)$ for $\hat{V}^{\mathcal{B}}(\delta; D_n)$ when the evaluation dataset is $D_n$, for notational simplicity. Note that $V^{\mathcal{B}}(\delta) \le U(\delta)$ for any $\delta \in \Delta$, and the equality is attained when the equality in Eq. (3) is attained, i.e., the lower bound generated by $\mathcal{B}$ exactly matches the actual margin at any $\delta$. Now we present the following theorem that estimates the value of the ARP based on the CRR computed from a batch of random samples.
Theorem 2 ((1−ξ)-probable certification for ARP). Given $\mathcal{G} = \mathcal{A}_n \circ \mathcal{B}^{\circ n}$ and $0 < \xi < 1$, for any $\delta$, it holds that
$$\mathbb{P}^n\Big( U(\delta) \ge \min_{\delta\in\Delta}\hat{V}^{\mathcal{B}}(\delta; D_n) + U(\delta^*) - V^{\mathcal{B}}(\delta^*) - t^*(\xi, n) \Big) \ge 1 - \xi, \qquad (13)$$
where $t^*(\xi, n)$ is the root of the equation $(1+4t)\ln(1+4t) - 4t = \frac{4}{n}\ln(1/\xi)$, $t^*(\xi, n)$ is monotonically decreasing in $n$ and $\xi$, and $\delta^* = \arg\min_\delta U(\delta)$. Moreover, we have that
$$\mathbb{P}^n\Big( U(\delta) \ge \min_{\delta\in\Delta}\hat{V}^{\mathcal{B}}(\delta; D_n) - t^*(\xi, n) \Big) \ge 1 - \xi. \qquad (14)$$
The proof can be found in Appendix A.2. Both bounds are interesting to interpret. The bound Eq. (13) shows that the discrepancy between the ARP of any perturbation and the CRR (i.e., the certified accuracy on a random batch) depends on U(δ∗) − V B(δ∗) and t∗(ξ, n). Given the trained model and the underlying data distribution, δ∗ is fixed; hence, the term U(δ∗) − V B(δ∗) depends on the tightness of linear bounds produced by B. The tighter bounds B can provide, the smaller difference there will be between U(δ∗)
and V B(δ∗). This bound suggests that plugging tighter linear bound generation techniques into our certification framework can potentially give rise to a smaller approximation error. It is also interesting to note that the approximation error of the proposed certification framework G = An ◦ B◦n exclusively depends on B, not An. This is because An always returns the optimal solution to the MILP, thereby not introducing any additional error. The second term t∗(ξ, n) depends on the number of samples used for certification and vanishes as n grows (illustrated in Figure 1). The second bound (Eq. (14)) utilizes the fact that U(δ∗) − V B(δ∗) ≥ 0; it is more relaxed but more convenient than the first bound (Eq. (13)) because the lower bound of the ARP can be calculated given the certification results on a batch, the number of samples in the batch, and the confidence level 1 − ξ. In Section 5.2, we will showcase the estimation of the ARP using this bound.
5 EXPERIMENT
5.1 EXPERIMENTAL SETUP
For evaluating the certification, we consider two benchmark datasets, MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009), widely adopted in existing works. We adopt 5 model structures from existing works (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022; Zhang et al., 2022a): Conv-small, Conv-4-layer, Conv-big on MNIST, and ResNet-2B, ResNet-4B on CIFAR-10, with details in Appendix B. We use the CRR and attackACC (accuracy under an attack) as the metrics. All the results are averaged over three runs with different random seeds on 100 random samples for each dataset. Further experimental details can be found in Appendix B.
5.2 EVALUATION
Comparing to existing robustness certification We first focus on evaluating our certification results compared to existing robustness certification for sample-wise perturbation. There are several competitive frameworks such as Singh et al. (2019b); Bunel et al. (2020); Henriksen & Lomuscio (2021), and we compare with auto LiRPA (Xu et al., 2020a) specifically for its state-of-the-art
performance (Bak et al., 2021). We consider models from both natural training and adversarial training. For MNIST, we evaluate the two certification methods on naturally trained models and on PGD-32 (PGD with ℓ∞-norm 32/255) adversarially trained models (Madry et al., 2018). Table 1 details the results on naturally trained MNIST models. Our method provides much tighter bounds than sample-wise certification results across all the settings. Table 2 illustrates results on PGD-32 trained MNIST models. With adversarial training, we observe that the certified robustness of the models against UPs is also largely increased compared to the naturally trained models in Table 1. Our certification results are still much tighter than sample-wise results, especially under settings with larger perturbations. On CIFAR-10, we only evaluate the results on adversarially trained models (PGD-8 for ResNet-2B and ResNet-4B), as the naturally trained model is severely susceptible to perturbations on CIFAR-10 (see Figure 5) even with ϵ = 1/255. To sum up, our method can provide tighter robustness certification than existing sample-wise methods.
Estimation of the lower bound of ARP Figure 2 illustrates the application of Theorem 2 with CRR. We use the naturally trained Conv-4-layer on MNIST as an example and we set ϵ = 6/255. We demonstrate the 0.9-probable certification for the lower bound of the ARP by setting ξ = 0.1. From Figure 2, we can see that the empirical result, CRR, becomes tighter when more samples are considered in certification. Incorporating more samples also makes the estimated ARP lower bound much closer to the CRR (as t∗(ξ, n) is smaller). Such an observation shows that when incorporating more samples in certification, the empirical results better reflect the actual robustness of the whole population.
In particular, when using 1000 samples, the result can be interpreted as the ARP is larger than 84.73% with at least a 90% probability.
Validating with UAP attacks We then validate the robustness certification results with UAP attacks as CRR should lower bound the attack-ACCs. We consider three SOTA UAP attacks: Adv-UAP (Li et al., 2022), CosUAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020), detailed in in Appendix B. Figure 3 compares CRR and attack-ACCs on PGD-32 trained MNIST models. As shown in Figure 3, CRRs are indeed lower than all the attack-ACCs as expected.
Validating with backdoor attacks We also validate whether CRR still lower bounds the attack-ACCs in backdoor attacks. We consider two backdoor triggers, namely a blended trigger (Chen et al., 2017) with a small ℓ∞-norm (∥δ∥∞ = 5/255, referred to as the stealthy trigger), and the BadNets trigger (Gu et al., 2019) (∥δ∥∞ = 255/255). All the attacks utilize the same poison ratio, 20%, following existing works (Zeng et al., 2021). The visual examples of the poisoned sample, the triggers, and the certification results are listed in Figure 4. Under the setting of the stealthy blended backdoor, we find that the CRR drops dramatically before reaching the trigger's norm (∥δ∥∞ = 5/255) compared to the same model trained on clean MNIST. This observation verifies the correctness of CRR and its potential to reveal stealthy ℓ∞-bounded backdoor attacks given the current trend of backdoor development toward smaller ℓ∞-norm constraints, e.g., Zeng et al. (2022b); Zhao et al. (2020). However, assuming an ℓp norm bound on the backdoor triggers is not widely accepted in traditional backdoor settings. Thus, we also present the results of BadNets (with ∥δ∥∞ = 255/255) in Figure 4. We consider the backdoor model trained from scratch or fine-tuned from the clean model. The CRR is still lower-bounding the attack's deployed ℓ∞ bound of the trigger. However, as the trigger has a large ℓ∞ norm, the CRRs of poisoned models are no different from that of the clean model and thus not that useful. Nevertheless, in Section 5.3, we show a simple twist of the certification framework to help reveal backdoors' existence.
5.3 IMPLICATIONS OF ROBUSTNESS CERTIFICATION AGAINST UPS
Now we explore the potential implications of our robustness certification against UPs. We focus on 3 case studies on model structure comparison, UAP defenses comparison, and backdoor detection.
Comparing model structures One implication of robustness certification regarding UPs is to compare different model structures and training strategies regarding the certified robustness against UPs. Figure 5 depicts the certification results of all the considered model structures with different training settings on MNIST and CIFAR-10. We consider both naturally trained and PGD trained models with different l∞ perturbation norm. In Figure 5 (a) on MNIST, we find that the largest model, Conv-big, shows the worst certified robustness against UPs. But the smallest Conv-small’s CRR is higher than that of Conv-4-layer under naturally trained setting, PGD-8, and PGD-16, but not PGD32. The only difference between Convsmall and Conv-4-layer is that Conv-4-layer
uses a larger padding step which resulting a slightly larger hidden layer (see Appendix B). Based on the observation, there is an interesting trade-off between model size and certified robustness against UPs: A slightly larger structure can help the model obtain better certified robustness when adopting adversarial training, potentially due to increased model capacity. Such an observation can be further illustrated in Figure 5 (b). Specifically, ResNet-2B’s CRR would drop to random guessing when using PGD-16, while ResNet-4B can still maintain a certain scale of CRR. But even larger models Figure 5 (a) have worse certified robustness, potentially due to looser certified bounds.
Implication to UAP defenses Another implication of the CRR is to compare existing UAP defenses regarding their efficacy. We consider three types of defenses and five different defenses in total: FGSM and PGD sample-wise adversarial training (Goodfellow et al., 2014; Madry et al., 2017; Wong et al., 2019); universal adversarial training (UAT) with FGSM or PGD synthesizing UPs (Shafahi et al., 2020); and sample-wise certified defense with Interval Bound Propagation (IBP) training (Gowal et al., 2018; Mirman et al., 2018). The defended models are further evaluated with UAP attacks and certification. The results with a small perturbation radius ∥δ∥∞ = 16/255 are shown in Table 4. Additional results with a larger perturbation radius (∥δ∥∞ = 80/255) are in Table 7, Appendix C. We use the row titled “Worst” to record the maximum accuracy drop under UAP attacks compared to clean accuracy. Surprisingly, in Table 4, we find the CRR of models trained with UAT is worse than that of their sample-wise adversarial training counterparts (i.e., UAT-PGD results are worse than PGD). However, in the case of a larger perturbation radius (Table 7, Appendix C), the UAT-trained models can achieve higher CRR than the sample-wise counterparts. Such an observation indicates an underexplored trade-off between the perturbation radius and the UAP defense method regarding CRR. The CRR result from the IBP-trained model is much tighter than the others, as IBP directly optimizes an objective for certified robustness and tightens the certified bounds for all the neurons. Moreover, CRR is also aligned with the worst attacked accuracy drop and can be an indicator for comparing different settings of UAP defenses.
Implication to backdoor defenses We evaluated the effectiveness of the CRR in revealing potential backdoors in the above section, but the effectiveness is yet only limited to triggers with small perturbations. This section presents a simple twist on the certification framework by teaming up with adversarial training (PGD-16). We depict the average class-wise certification results on 10 ResNet-4B models trained with different random seeds over different BadNets poison ratios in Figure 4. Based on the re-
sults, we find the certification can reliably reveal the targeted label and justify how mighty the backdoor attack is (i.e., the CRR is aligned with the poison ratio used). Additional results on the Smooth attack (Zeng et al., 2021) and ℓ2 invisible attack (Li et al., 2020a) are listed in Appendix C, which share similar observations. The reason of the successful identification is that, naturally, the adversarial training would force the model to learn more from the reliable features and thus make standard backdoors stand out from benign features of the data (i.e., easier to be learned by the model), as also discussed in Weng et al. (2020). Thus after training a model with adversarial training with large perturbation radius, the model would likely engrave the trigger and thus have a high CRR only on the target label. CRR by our proposed method provides an intriguing point of view to determine the attack’s strength (i.e., poison ratio).
6 CONCLUSION
In this work, we present the first focused study on certifying neural networks’ robustness against UPs. In contrast to previous robustness certification works that focused on sample-wise perturbations, we formulate the certification problem against UPs by emphasizing sharing a universal perturbation between different samples. We propose a combination of linear relaxation-based bounds and MILP to solve the problem. We also present a theoretical analysis framework to estimate the certification result for the entire population based on results from a batch of random samples. Extensive experiments reveal that our certification imposes tighter results than directly applying existing sample-wise robustness certifications. In addition, we discuss and demonstrate how robustness certification against UPs could facilitate comparing certified robustness between different model structures and defense methods and provide reliable backdoor detection.
ACKNOWLEDGEMENT
This work is partially funded by Sony AI. This work is also supported in part by NSF under IIS2008173, IIS-2048280, and by Army Research Laboratory under W911NF-20-2-0158. RJ and the ReDS lab appreciate the support of The Amazon - Virginia Tech Initiative for Efficient and Robust Machine Learning and the Cisco Award. YZ and ZS are both supported by the Amazon Fellowship.
A PROOFS
A.1 PROOF OF THEOREM 1
Let ϑ̂ be the solution of the MILP problem in the theorem, and let ϑ̃ be the solution to Eq. (4). Theorem 1 states that ϑ̂ = ϑ̃. We formally prove the equivalence below.
Proof. We first show that $\hat{\vartheta} \le \tilde{\vartheta}$. In Eq. (4), there exists some $\tilde{\delta}$ such that
$$\tilde{\delta} = \arg\min_{\|\delta\|_\infty\le\epsilon} \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\delta + b_j^{(i)}\} > 0\Big).$$
Then, for every $i \in [n]$, take the following values for the variables in the MILP formulation:
$$q^{(i)} = \mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\tilde{\delta} + b_j^{(i)}\} > 0\Big), \qquad \vartheta = \tilde{\vartheta} = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, \qquad \forall j \neq y_i,\; s_j^{(i)} = \mathbb{1}(j = j'), \;\text{where}\; j' = \arg\min_{j\neq y_i} a_j^{(i)}\tilde{\delta} + b_j^{(i)},$$
and it is easy to see that these values satisfy all the constraints in the MILP problem. Thus the result of the minimization in the MILP is no larger than $\tilde{\vartheta}$, i.e., $\hat{\vartheta} \le \tilde{\vartheta}$.
We now show that $\tilde{\vartheta} \le \hat{\vartheta}$. We use $\hat{\delta}, \hat{q}, \hat{s}$ to denote the values of the $\delta, q, s$ variables in the solution of the MILP. For every $i \in [n]$, Eq. (7) ensures that there exists exactly one $\hat{j}$ ($\hat{j} \neq y_i$) with $\hat{s}_{\hat{j}}^{(i)} = 1$, and Eq. (8) ensures that for all $j \neq y_i$, $a_{\hat{j}}^{(i)}\hat{\delta} + b_{\hat{j}}^{(i)} \le a_j^{(i)}\hat{\delta} + b_j^{(i)}$ holds. Thus
$$\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} = \min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\}.$$
According to Eq. (6), if $\hat{q}^{(i)} = 1$, $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \ge 0$ holds. In case that $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} = 0$, Eq. (6) also holds with $\hat{q}^{(i)} = 0$, and due to the minimization objective of the MILP, $\hat{q}^{(i)} = 0$ instead of $\hat{q}^{(i)} = 1$ will be taken. Thus $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} > 0$ strictly holds when $\hat{q}^{(i)} = 1$. And if $\hat{q}^{(i)} = 0$, $\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \le 0$ holds. Thus
$$\hat{q}^{(i)} = \mathbb{1}\Big(\sum_{j\neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} > 0\Big) = \mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\} > 0\Big).$$
Thereby
$$\hat{\vartheta} = \frac{1}{n}\sum_{i=1}^{n}\hat{q}^{(i)} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\neq y_i}\{a_j^{(i)}\hat{\delta} + b_j^{(i)}\} > 0\Big),$$
and since $\hat{\delta}$ is feasible for Eq. (4), the minimum of Eq. (4) is no larger than $\hat{\vartheta}$, i.e., $\tilde{\vartheta} \le \hat{\vartheta}$.
Hence $\hat{\vartheta} = \tilde{\vartheta}$ is proved.
A.2 PROOF OF THEOREM 2
Let $\delta^* = \arg\min_{\delta\in\Delta} U(\delta)$ be the optimal universal perturbation that minimizes the ARP, let $\delta_n$ be the value returned by $\mathcal{G}(D_n)$, and let $\tilde{\delta} = \arg\min_{\delta\in\Delta} V^{\mathcal{B}}(\delta)$ be the perturbation that minimizes the CRP. We introduce the following lemma:
Lemma 1. Given $\mathcal{A}_n$, it holds that
$$\mathbb{P}^n\big(\hat{V}^{\mathcal{B}}(\delta^*) - V^{\mathcal{B}}(\delta^*) > t^*(\xi, n)\big) \le \xi, \qquad (15)$$
where $t^*(\xi, n)$ is the root of the equation $(1+4t)\ln(1+4t) - 4t = \frac{4}{n}\ln(1/\xi)$.
Proof. Let $q^{(i)} = \mathbb{1}(\delta^* \in \Delta_{z^{(i)}}^{\mathcal{B}})$, which can also be interpreted as $\mathbb{1}\big(\min_{j\neq y_i}\{a_j^{(i)}\delta^* + b_j^{(i)}\} > 0\big)$. Then, $\hat{V}^{\mathcal{B}}(\delta^*) = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}$ and $V^{\mathcal{B}}(\delta^*) = \mathbb{E}[\frac{1}{n}\sum_{i=1}^{n} q^{(i)}] = \mathbb{E}[q^{(i)}]$. Let $\sigma^2$ denote the variance of $q^{(i)}$. Since $q^{(i)}$ is a binary random variable, we have that $\sigma^2 \le 1/4$. Let $h(u) = (1+u)\ln(1+u) - u$. For any $t > 0$, we have that
$$\mathbb{P}^n\Big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t\Big) \le \exp\Big(-n\sigma^2 h\big(\tfrac{t}{\sigma^2}\big)\Big) \qquad (16)$$
$$\le \exp\Big(-\frac{n}{4} h(4t)\Big), \qquad (17)$$
where the first inequality is a direct application of Bennett's inequality (Bennett, 1962), and the second inequality is due to the fact that $n\sigma^2 h(\tfrac{t}{\sigma^2})$ is a monotonically decreasing function of $\sigma^2$. Let $t^*(\xi, n)$ denote the root of $\exp\big(-\frac{n}{4}h(4t)\big) = \xi$. Then, it follows that $\mathbb{P}^n\big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t^*(\xi, n)\big) \le \xi$.
Then we prove Theorem 2 to certify the robustness of a classifier against the worst-case attack δ∗.
Proof. We use the following relations: for any δ ∈ ∆,
$$U(\delta) \ge \min_{\delta \in \Delta} U(\delta) = U(\delta^*) = \underbrace{U(\delta^*) - V^B(\delta^*)}_{(i)} + \underbrace{V^B(\delta^*) - \hat{V}^B(\delta^*)}_{(ii)} + \underbrace{\hat{V}^B(\delta^*) - \hat{V}^B(\delta_n)}_{(iii)} + \hat{V}^B(\delta_n) \ge (i) + (ii) + \hat{V}^B(\delta_n), \quad (18)$$
where (ii) can be bounded by applying the concentration inequality in Lemma 1, and (iii) ≥ 0 due to the optimality of $\delta_n = \operatorname*{argmin}_{\delta \in \Delta} \hat{V}^B(\delta)$. Combining these bounds yields Theorem 2.
B FURTHER DETAILS ON EXPERIMENTAL SETTINGS
We use one server equipped with a total of 8 RTX A6000 GPUs as the hardware platform. PyTorch (Paszke et al., 2019) is adopted as the implementation framework. We detail the model structures used in our experiments in Table 6. All of the model structures used in this work are standard set-ups also considered in existing robustness certification works: Conv-small, Conv-4-layer, and Conv-big on MNIST (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022), and ResNet-2B and ResNet-4B on CIFAR-10 (Zhang et al., 2022a). We use Adadelta (Zeiler, 2012) as the optimizer with a learning rate of 0.1 for all model training (including the adversarial training used in the model updating step). For MNIST models, we train each model for 60 epochs. For CIFAR-10 models, we train each model for 500 epochs to ensure full convergence. For the adversarial training adopted in the main text, the number of PGD steps is 7 and the PGD step size is set to ϵ/4. For IBP training, we use the implementation in Shi et al. (2021).
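For reference, a minimal sketch of the PGD step used in the adversarial training described above (7 steps, step size ϵ/4). This is illustrative code rather than the authors' implementation; the model and data names are placeholders, and clamping to the valid input range is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=7):
    """l_inf PGD with step size eps/4, matching the adversarial-training setting above."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + (eps / 4) * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def adv_train_step(model, optimizer, x, y, eps):
    delta = pgd_attack(model, x, y, eps)        # craft the perturbation with the current model
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)  # train on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()
```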
Now we detail the UAP attacks considered in the experiments for validating the certification results, namely Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020). These attacks are distinguished by the design of their synthesis procedures. Specifically, Adv-UAP first synthesizes adversarial examples for each input before synthesizing the UAP, which has been shown to be more effective in finding stronger UAPs. Cos-UAP produces the UAP by reducing the cosine similarity between the original output logits and the disturbed logits; DF-UAP employs a loss similar to that of the C&W attack (Carlini & Wagner, 2017), which aims to reduce the gap between the ground-truth label's logit and the maximum logit of the remaining classes.
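For intuition, the following is a heavily simplified, generic sketch of ℓ∞-bounded UAP synthesis by signed gradient steps on a single shared perturbation. It is not any of the three attacks above; their specific objectives (per-sample adversarial examples for Adv-UAP, a cosine-similarity loss for Cos-UAP, or a C&W-style margin loss for DF-UAP) would replace the loss used here.

```python
import torch
import torch.nn.functional as F

def synthesize_uap(model, loader, eps, epochs=5, lr=0.01):
    """Gradient-based synthesis of one shared perturbation delta with ||delta||_inf <= eps.
    Ascends the average classification loss over the data; the attack-specific objective
    can be substituted for the cross-entropy term below."""
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1], requires_grad=True)    # one delta shared by all inputs
    for _ in range(epochs):
        for x, y in loader:
            loss = -F.cross_entropy(model(x + delta), y)     # negate so the step ascends the loss
            grad, = torch.autograd.grad(loss, delta)
            with torch.no_grad():
                delta -= lr * grad.sign()                    # signed gradient step
                delta.clamp_(-eps, eps)                      # project back into the l_inf ball
    return delta.detach()
```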
Now we provide the detailed settings of the backdoor target-class identification in Section 5.3. For the threat model, we consider the scenario where the defender aims to determine whether a backdoor attack resides in a given dataset, identify the target class, and judge how potent the attack is if one is identified. We assume the defender has access to the training set to be inspected, with no additional clean validation data required. To conduct the instantiated case shown in Section 5.3, the defender adversarially trains (with PGD-16) 10 different models on the inspected training dataset and obtains the average CRR results in a class-wise manner. In particular, since we assume no additional clean validation data is available, we pass 100 random noise inputs through the certified models to obtain the results in Figures 6, 7, and 8.
C ADDITIONAL RESULTS
C.1 ADDITIONAL RESULTS ON UAP DEFENSES COMPARISON
Table 7 details the results of the UAP defenses comparison under the large-norm setting (ϵ = 80/255). Note that all the adopted defenses are incorporated at the same cost. For the large-norm setting, we find that only certified-robustness training ends up with a CRR larger than 0. Apart from its actual effectiveness, as mentioned in the main text, the IBP-trained model also ends up with much tighter intermediate linear bounds (i.e., tighter a and b). Even though our method only returns a positive CRR on the IBP-trained model, the certification results are still aligned with the actual attack results, as the IBP-trained model has stronger robustness than the other models in terms of the smallest drop in ACC under attack.
C.2 ADDITIONAL RESULTS ON BACKDOOR TARGET-CLASS IDENTIFICATION
We now provide additional results on using the certification framework to identify the existence of backdoor attacks. In this section, the results are evaluated against the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a). Figures 7 and 8 illustrate the results for the Smooth attack and the ℓ2-invisible attack, respectively. Based on the results, we find that the certification can also reliably reveal the targeted label and indicate how powerful the backdoor attack is for both the Smooth attack and the ℓ2-invisible attack.
D BROADER IMPACT AND LIMITATIONS
D.1 UAP AND BACKDOOR ATTACKS
UAP attacks aim to synthesize a UP by accessing and analyzing the output of a trained neural network. Backdoor attacks aim to insert a predefined trigger into the neural network so that the attack is effective without accessing or analyzing the output after the model is trained on the poisoned samples. Many existing works have found that these two parallel lines of work have interesting intersections. In particular, the formulation of UAP synthesis has also inspired, or has interesting counterparts in, backdoor attack and defense designs. For example, Li et al. (2020a); Zhang et al. (2021b) designed their backdoor triggers via a process similar to synthesizing a UAP with a trained model. Kolouri et al. (2020); Zeng et al. (2022a) leveraged this intersection between UAPs and backdoor attacks to identify backdoors or to conduct online removal of backdoors. If we view these two attack paradigms at inference time (i.e., with a trained model), mitigation defenses and robustness analysis tools for both attacks can be developed under the general notion of robustness to UPs.
D.2 LIMITATIONS
Unconstrained or Large ℓ∞-norm Attacks: Some of the UAP attacks are generated without specifying a constraint (Brown et al., 2017), and in most backdoor attacks, the trigger inserted does not have a constrained ℓ∞ norm. If the attack can have an unconstrained ℓ∞ or a very large ℓ∞ norm, only trivial certification results can be obtained from our certification. This limitation also commonly exists in state-of-the-art sample-wise certification methods (Wang et al., 2021; Ferrari et al., 2021). In fact, any certification procedure requires some constraints on potential perturbations and does not apply to unbounded perturbations. This open problem calls for answers and the attention for future research.
Computational Cost: Supporting large models and large datasets can be computationally costly for our certification. Existing works for certifying pre-trained models (Wang et al., 2021; Ferrari et al., 2021) are also commonly limited to moderate-sized networks, and the cost of our method is lower bounded by existing linear bound propagation frameworks that we use to obtain the linear bounds before solving the MILP problem. It remains a challenging open-problem for scaling to larger-scale networks, such as models for ImageNet (Deng et al., 2009). | 1. What is the focus of the paper regarding certification defense?
2. What are the strengths and weaknesses of the proposed method?
3. How does the reviewer assess the novelty, clarity, quality, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's ability to certify robustness against backdoor attacks?
5. What are the limitations of the proposed method in terms of its generalizability to different datasets and models? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a certification defense against universal adversarial examples and backdoor attacks. The proposed method is based on the combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming. It also provides a theoretical framework for analyzing the generalizability of the certification results. Experiments demonstrate that it is better than existing sample-wise certification methods when defending against UAPs.
Strengths And Weaknesses
Strengths
· Interesting topic and good motivation.
· The proposed method has better results than sample-wise certification.
· The writing is good.
Weaknesses
· The novelty might be limited.
· It's unclear if the proposed method can be applied to certify the robustness against backdoor attacks.
· Comparisons with some related works are missing.
· Generalization of the proposed method is unclear.
· Experiments on adversarial patch attacks are missing.
Clarity, Quality, Novelty And Reproducibility
· The contributions and the novelty are limited. This paper proposes a certification defense against universal adversarial examples by modifying existing certification methods for adversarial examples, i.e., "auto LiRPA". The only difference is that the proposed method adds a constraint to ensure the perturbation is global-wise. The technical challenges of the modification (adding the global-wise constraint) are unclear, and it does not seem very challenging. Thus, the novelty of the proposed method might be limited.
· This paper claims it can be applied to backdoor attacks. The certification method uses the ℓ∞ norm to bound the perturbations. However, most backdoor attacks are not constrained by an ℓ∞ bound. For example, the patch trigger in BadNets can have a large ℓ∞ norm. Thus, it is unclear if the proposed method is able to certify the robustness against backdoor attacks. I think only a limited number of backdoors can be certified.
· Only empirical defenses for backdoor attacks are discussed. The discussion about existing certification methods against backdoor attacks (Wang et al. [1]) is missing. This paper also lacks empirical comparisons between the proposed method and the existing certification method against backdoor attacks.
· Generalization to different datasets and models is unclear. Existing certification work Xu et al. [2] demonstrated it can generalize to different datasets, including ImageNet and various models (e.g., DenseNet, Transformer and LSTM). However, in this paper, all experiments are conducted on two small-scaled datasets (MNIST and CIFAR-10), and it only uses self-defined small CNNs and ResNet.
· While this paper claims the proposed method is general for different UPs, it lacks the discussion and the experiments on an important type of attack, i.e., patch-based adversarial examples [3]. Thus, it is unclear if the proposed method is general.
[1] Wang et al., On Certifying Robustness against Backdoor Attacks via Randomized Smoothing. AML@CVPR 2020.
[2] Xu et al., Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond. NeurIPS 2020.
[3] Brown et al., Adversarial Patch. arXiv 2017. |
ICLR | Title
Towards Robustness Certification Against Universal Perturbations
Abstract
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbations given a neural network. However, those sample-wise bounds will be loose when considering the UP threat model as they overlook the important constraint that the perturbation should be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming to establish the first robust certification method for UP. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models or efficacy among different universal adversarial attack defenses and enables accurate detection of backdoor target classes.
1 INTRODUCTION
As deep neural networks become prevalent in modern performance-critical systems such as selfdriving cars and healthcare, it is critical to understand their failure modes and performance guarantees. Universal perturbations (UPs) are an important class of vulnerabilities faced by deep neural networks. Such perturbations can fool a classifier into misclassifying any input from a given distribution with high probability at test time. Past literature has studied two lines of techniques to create UPs: universal adversarial attacks (Moosavi-Dezfooli et al., 2017) and backdoor attacks (Gu et al., 2019; Chen et al., 2017). The former crafts a UP based on a trained model and does not rely on access to training data. The latter, by contrast, prespecifies a pattern as a UP and further alters the training data so that adding the pattern (often known as the trigger in backdoor attack literature) will change the output of the trained classifier into an attacker-desired target class.
Many defenses have been proposed for both universal adversarial attacks (Akhtar & Mian, 2018; Moosavi-Dezfooli et al., 2017; Shafahi et al., 2020; Benz et al., 2021; Liu et al., 2021) and backdoor attacks (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Borgnia et al., 2020; Qiu et al., 2021). But empirical evaluation with attacks does not provide a formal guarantee on robustness, as it is infeasible for an attack algorithm to provably cover all concerned perturbations. In contrast, robustness certification aims to verify the output bounds of the model given a certain class of input perturbations and provably certify the robustness against all the concerned perturbations. Several recent works (Weber et al., 2020; Xie et al., 2021) developed techniques to achieve certified robustness of a classifier against backdoor-attack-induced UPs with a certain norm bound. However, these techniques apply only to specific learning algorithms and require knowledge of the training data. It remains an open question: How to certify the robustness of a trained model against a class of UPs in a way that is agnostic to the underlying training algorithm and data, and is general for different UPs (including both universal adversarial attacks and norm-bounded backdoor attacks)?
∗Zhouxing Shi and Yi Zeng contributed equally. Corresponding Yi Zeng, Lingjuan Lyu or Ruoxi Jia. Work partially done during Yi Zeng’s internship at Sony AI.
In this paper, we propose a framework to certify the worst-case classification accuracy on a batch of test samples against l∞-norm-bounded UPs. Our approach builds off of past works for certifying robustness against sample-wise perturbations that are independently added to each sample. For efficient verification, many recent works linearly relax nonlinear activation functions in neural networks into linear bounds and then conduct linear bound propagation to obtain the output bounds for the whole model (Wong & Kolter, 2018; Wang et al., 2018b; Dvijotham et al., 2018; Zhang et al., 2018; Singh et al., 2019b). This process is also referred to as linear perturbation analysis (Xu et al., 2020a). Since the worst-case model accuracy against sample-wise perturbations is a lower bound of the worst-case accuracy against UPs, these certification techniques could be applied to obtain a certificate against UPs. However, a direct application would overlook the important constraint that a UP is shared across different inputs, thereby producing overly conservative certification results.
Unlike sample-wise perturbations, UPs require theoretical reasoning to generalize certification results. This is because UPs are applied to any input from the data distribution, and our main interest lies in the expected model accuracy over the entire data distribution against UPs. However, certification procedures can only accept a batch of samples from the distribution and certify the accuracy over the samples. Therefore, it’s crucial to understand the discrepancy between certified robustness computed from samples and the actual population robustness.
We summarize our contributions as follows:
• We formulate the problem of robustness certification against UPs. We then generalize linear relaxation based perturbation analysis (LiRPA) to UPs, and we further propose a Mixed Integer Linear Programming (MILP) formulation over linear bounds from LiRPA, to obtain tighter certification on the worst-case accuracy of a given model against UPs within an ℓ∞-norm ball1.
• We establish a theoretical framework for analyzing the generalizability of the certification results based on randomly sampled subsets to the entire population.
• We conduct extensive experiments to show that our certification method provides certified lower bounds on the worst-case robust accuracy against both universal adversarial attacks and ℓ∞-bounded backdoor attacks, which are substantially tighter than results from directly applying existing sample-wise certification.
• We also investigate the implications of robustness certification against UPs to facilitate easy comparisons of robustness among different models or of the efficacy of empirical defenses, and to achieve reliable identification of backdoor target classes.
2 BACKGROUND AND RELATED WORK
Universal Adversarial Perturbation Neural networks are vulnerable to adversarial examples (Szegedy et al., 2014), which has led to the development of universal adversarial perturbations (UAPs), where the same noise can consistently deceive a target network on most images (Liu et al., 2019; 2020). Existing defenses against UAPs include fine-tuning on pre-computed UAPs (Moosavi-Dezfooli et al., 2017), post-hoc detection (Akhtar et al., 2018), and universal adversarial training with online UAP generation (Mummadi et al., 2019; Shafahi et al., 2020; Benz et al., 2021). However, all existing defenses against UAPs are empirical and provide no efficacy guarantee against new attacks.
Backdoor Attacks In backdoor attacks, attackers plant a predefined UP (a.k.a. the trigger) in the victim model by manipulating the training procedure (Li et al., 2020c). Attacked models can give adversarially desired outputs for any input patched with the trigger while still showing good performance on clean inputs. Existing defenses include: poison detection via outlier detection (Gao et al., 2019; Chen et al., 2018; Tran et al., 2018; Zeng et al., 2021), which relies on modeling the distribution of clean samples; poisoned model identification (Xu et al., 2019; Wang et al., 2020b); trojan removal via trigger synthesis (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Zeng et al., 2022a) or preprocessing and fine-tuning (Li et al., 2020b; Borgnia et al., 2020); and robust training via differential privacy (Du et al., 2019) or redesigning the training pipeline (Levine & Feizi, 2020; Jia et al., 2020; Huang et al., 2022; Li et al., 2021). As all these defenses are empirical, existing literature has revealed their limitations against zero-day or adaptive attacks (Zeng et al., 2022b).
Robustness Certification of Neural Networks Early robustness certifications (Katz et al., 2017; Ehlers, 2017; Tjeng et al., 2017) largely relied on satisfiability modulo theory (SMT) or integer linear programming (ILP) solvers and were limited to very small networks. For more efficient verification, bound propagation with convex relaxations has been proposed (Wong & Kolter, 2018; Wang et al., 2018b; Zhang et al., 2018; Weng et al., 2018; Singh et al., 2019b; Salman et al., 2019), which over-approximates nonlinear activations with convex relaxation and propagates the bounds layer by layer to finally bound the entire model. Xu et al. (2020a) proposed a bound propagation framework for general computational graphs and referred to the related methods as linear relaxation based perturbation analysis (LiRPA), as activations are relaxed by linear bounds. Bound propagation methods have also been further enhanced with techniques such as branch-and-bound (Bunel et al., 2018; 2020; Wang et al., 2018a;b; Xu et al., 2020b; Wang et al., 2021) and multi-neuron relaxation and cutting planes (Singh et al., 2019a; Ferrari et al., 2021; Zhang et al., 2022a) for tighter results at a cost of efficiency. However, these works are developed for sample-wise perturbations, and they cannot directly produce tight certification against universal perturbations. Besides, there are several randomized smoothing (Cohen et al., 2019) based methods for certified robustness against backdoor attacks (Weber et al., 2020; Wang et al., 2020a; Xie et al., 2021; Zhang et al., 2022b). These are stochastic methods and are usually considered orthogonal to deterministic certification. Moreover, they require access to training data, are only applicable to some specific learning algorithms (e.g., binary models or federated learning), and are not general for other UPs, such as UAPs.
1https://github.com/ruoxi-jia-group/Universal_Pert_Cert
3 METHODOLOGY
3.1 PROBLEM FORMULATION
On a set of n independent samples {z(1), . . . , z(n)} from the data distribution Ω, where z(i) = (xi, yi) is the i-th example, xi (xi ∈ Rd) is the input and yi is the ground-truth label, we aim to certify the robustness of a K-way neural network classifier f : Rd → RK against a potential universal perturbation δ with ℓ∞ norm constrained as ∥δ∥∞ ≤ ϵ. In particular, we aim to certify and lower bound the worst-case accuracy of the neural network on {z(1), . . . , z(n)} for any universal perturbation δ (∥δ∥∞ ≤ ϵ) applied to all the examples:
$$\min_{\|\delta\|_\infty \le \epsilon} \; \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \neq y_i}\{ m_{y_i,j}(x_i + \delta) \} > 0\Big), \quad (1)$$
where $m_{y_i,j}(x_i + \delta) = f_{y_i}(x_i + \delta) - f_j(x_i + \delta)$ is the margin between the ground-truth class $y_i$ and an incorrect class $j \neq y_i$, and the indicator checks whether the margin is positive for every $j \neq y_i$ when a perturbation δ is added. It is NP-hard to exactly verify Eq. (1) even for n = 1 and a small ReLU network (Katz et al., 2017). Thus recent neural network verifiers usually compute a lower bound on the margin, $\underline{m}_{y_i,j}(x_i + \delta) \le m_{y_i,j}(x_i + \delta)$, and then we can replace $m$ in Eq. (1) with $\underline{m}$ to lower bound Eq. (1); this bound also serves as a lower bound on the robustness.
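As a concrete reading of Eq. (1), the sketch below (illustrative, not from the paper) evaluates the accuracy of a classifier on a batch under one fixed candidate universal perturbation; the certification problem then asks for the minimum of this quantity over all ℓ∞-bounded δ.

```python
import torch

@torch.no_grad()
def accuracy_under_universal_perturbation(f, x, y, delta):
    """Fraction of samples still classified correctly when the same delta is added
    to every input, i.e., the quantity inside the outer minimization of Eq. (1)."""
    logits = f(x + delta)                                   # delta is shared by the whole batch
    margins = logits.gather(1, y.unsqueeze(1)) - logits     # m_{y_i, j} for all classes j
    margins.scatter_(1, y.unsqueeze(1), float("inf"))       # ignore the j = y_i column
    return (margins.min(dim=1).values > 0).float().mean().item()
```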
3.2 LINEAR PERTURBATION ANALYSIS W.R.T. A UNIVERSAL PERTURBATION
We adopt linear relaxation based perturbation analysis (LiRPA) from previous works that focused on sample-wise perturbations, specifically "auto LiRPA" (Xu et al., 2020a), to obtain lower bounds on $m_{y_i,j}(x_i + \delta)$ represented as linear functions w.r.t. the universal perturbation δ, but it is also feasible to use other verification frameworks such as Singh et al. (2019b); Wang et al. (2018b). auto LiRPA can bound the output of a computational graph when its input nodes are perturbed, and it can produce linear functions w.r.t. the perturbed input nodes as linear bounds. Note that margin functions can be appended to the original neural classifier as the output of the computational graph, and thereby the margins can be bounded. When sample-wise perturbations are considered in previous works, the linear bounds can usually be written as
$$\forall i \in [n], \; \forall j \neq y_i, \; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i + \delta) \ge \tilde{a}_j^{(i)}(x_i + \delta) + \tilde{b}_j^{(i)}, \quad (2)$$
where $\tilde{a}_j^{(i)}$ and $\tilde{b}_j^{(i)}$ are coefficients and biases in the linear bounds. This is achieved by relaxing nonlinear functions such as activation functions in the network with linear bounds and propagating linear coefficients through the computational graph. The right-hand-side (RHS) of Eq. (2) is a linear function w.r.t. $(x_i + \delta)$. To obtain a final bound represented as a concrete number without relying on the δ variable, a concretization step can be applied on the RHS given the constraint on $\|\delta\|_\infty$, which eliminates the δ variable and lower bounds the RHS as $\tilde{a}_j^{(i)}(x_i + \delta) + \tilde{b}_j^{(i)} \ge -\epsilon\|\tilde{a}_j^{(i)}\|_1 + \tilde{a}_j^{(i)} x_i + \tilde{b}_j^{(i)}$.
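A minimal sketch of this sample-wise concretization step, under assumed array shapes of our own choosing: for each sample and each incorrect class, the linear bound is collapsed into a single number by taking the worst case of δ independently.

```python
import numpy as np

def samplewise_concretize(a_tilde, b_tilde, x, eps):
    """Worst case of a_tilde @ (x + delta) + b_tilde over ||delta||_inf <= eps, taken
    independently per sample; a_tilde: (n, K-1, d), b_tilde: (n, K-1), x: (n, d)."""
    nominal = np.einsum("ijd,id->ij", a_tilde, x) + b_tilde   # a_tilde @ x + b_tilde
    return nominal - eps * np.abs(a_tilde).sum(axis=-1)       # subtract eps * ||a_tilde||_1
```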
However, the aforementioned concretization step considers the worst-case δ for each sample independently, whereas a universal perturbation δ should be shared across all the examples. It therefore produces relatively loose and over-conservative results under the universal perturbation setting, as the perturbations are much stronger when each example can take an independent perturbation than when a single universal perturbation is applied to all the examples.
In contrast, we propose to obtain a tighter certification for universal perturbation. Unlike Eq. (2), we use auto LiRPA to compute the linear lower bound with respect to δ instead of (xi + δ) by treating δ as a perturbed input node and xi as a fixed input node in the computational graph:
$$\forall i \in [n], \; \forall j \neq y_i, \; \forall \|\delta\|_\infty \le \epsilon, \quad m_{y_i,j}(x_i + \delta) \ge a_j^{(i)}\delta + b_j^{(i)}, \quad (3)$$
where $a_j^{(i)}$ and $b_j^{(i)}$ are new coefficients and biases in the linear bound, and $x_i$ does not appear on the RHS as it is fixed. In the next section, we will lower bound the worst-case accuracy in Eq. (1) by solving an MILP problem based on Eq. (3).
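For context, the standard sample-wise workflow of the auto LiRPA library is sketched below (following its documented basic usage; variable names are placeholders), which yields bounds of the form of Eq. (2). Obtaining Eq. (3) instead amounts to constructing the bounded computational graph so that the shared δ, rather than x_i, is the perturbed input node, and requesting the linear coefficients rather than only the concretized bounds.

```python
import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# net is an ordinary nn.Module classifier; x_batch is a batch of inputs; eps is the radius.
bounded_net = BoundedModule(net, torch.empty_like(x_batch))
ptb = PerturbationLpNorm(norm=float("inf"), eps=eps)      # ||perturbation||_inf <= eps
x_perturbed = BoundedTensor(x_batch, ptb)                  # marks x as the perturbed input node
lb, ub = bounded_net.compute_bounds(x=(x_perturbed,), method="CROWN")
# lb/ub are concretized element-wise bounds on the network outputs for this sample-wise setting.
```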
3.3 AN MILP FORMULATION TO LOWER BOUND THE WORST-CASE ACCURACY
In this section, we use linear bounds in Eq. (3) to compute a lower bound for the worst-case accuracy in Eq. (1). Specifically, by replacing each myi,j in Eq. (1) with its lower bound from Eq. (3), we lower bound Eq. (1) by solving the following problem:
$$\text{minimize} \quad \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \neq y_i}\big\{ a_j^{(i)}\delta + b_j^{(i)} \big\} > 0\Big) \quad \text{s.t.} \quad \|\delta\|_\infty \le \epsilon. \quad (4)$$
Now, we show that Eq. (4) can be rewritten into an MILP formulation: Theorem 1. Problem Eq. (4) is equivalent to the following MILP problem:
$$\text{minimize} \quad \vartheta$$
$$\text{s.t.} \quad \vartheta = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, \quad (5)$$
$$\forall i \in [n], \; q^{(i)} \in \{0, 1\}, \quad -\tau(1 - q^{(i)}) \le \sum_{j \neq y_i} \big(a_j^{(i)}\delta + b_j^{(i)}\big) s_j^{(i)} \le \tau q^{(i)}, \quad (6)$$
$$\forall i \in [n], \forall j \neq y_i, \; s_j^{(i)} \in \{0, 1\}, \quad \sum_{j \neq y_i} s_j^{(i)} = 1, \quad (7)$$
$$\forall i \in [n], \forall j_1 \neq y_i, \forall j_2 \neq y_i, \quad \big(a_{j_1}^{(i)}\delta + b_{j_1}^{(i)}\big) s_{j_1}^{(i)} - \tau\big(1 - s_{j_1}^{(i)}\big) \le \big(a_{j_2}^{(i)}\delta + b_{j_2}^{(i)}\big), \quad (8)$$
$$\|\delta\|_\infty \le \epsilon,$$
where $\tau \ge \max_{i \in [n]} \sum_{j \neq y_i} |a_j^{(i)}\delta + b_j^{(i)}|$ is a sufficiently large constant.
In Theorem 1, given a universal perturbation δ, for the i-th example the integer variable $q^{(i)} \in \{0, 1\}$ denotes whether the model is certifiably correct on this example based on the linear bounds from Eq. (3), and the certified accuracy on the whole batch can be computed as in Eq. (5). The model is certifiably correct on the i-th example when $m_{y_i,j}(x_i + \delta) \ge a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for all $j \neq y_i$. We use an integer variable $s_j^{(i)} \in \{0, 1\}$ to denote whether class j is the hardest among all $j \neq y_i$ under the certification, i.e., $\forall j' \neq y_i, \; a_j^{(i)}\delta + b_j^{(i)} \le a_{j'}^{(i)}\delta + b_{j'}^{(i)}$ holds, which is enforced by Eq. (8). We require each example to have exactly one hardest class j with $s_j^{(i)} = 1$ (see Eq. (7)); in case there are multiple classes with an equal lower bound on the margin function, it is valid to treat any of them as the hardest. Then we only need to check whether $a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for the hardest class j with $s_j^{(i)} = 1$, or equivalently $\sum_{j \neq y_i}(a_j^{(i)}\delta + b_j^{(i)}) s_j^{(i)} > 0$. In Eq. (6), as τ is sufficiently large, only $\sum_{j \neq y_i}(a_j^{(i)}\delta + b_j^{(i)}) s_j^{(i)} \ge 0$ is effectively required when $q^{(i)} = 1$, and $\sum_{j \neq y_i}(a_j^{(i)}\delta + b_j^{(i)}) s_j^{(i)} \le 0$ is required when $q^{(i)} = 0$. Note that if exactly $\sum_{j \neq y_i}(a_j^{(i)}\delta + b_j^{(i)}) s_j^{(i)} = 0$ happens, $q^{(i)} = 0$ will be taken by the MILP due to the minimization objective, and thus it is still compatible with our goal of checking $a_j^{(i)}\delta + b_j^{(i)} > 0$. Overall, the MILP formulation minimizes the certified accuracy over all possible universal perturbations δ ($\|\delta\|_\infty \le \epsilon$) to finally produce a lower bound for Eq. (1). We formally prove this theorem in Appendix A.1, and we use Gurobi (Bixby, 2007) to solve the MILP.
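A compact sketch of how such a certification problem can be posed with the gurobipy interface is given below. It is not the authors' released implementation of Eqs. (5)–(8): it encodes the same minimization of certified accuracy over a shared ℓ∞-bounded δ, but with an explicit big-M linearization and our own variable layout.

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def certify_universal(a, b, eps, big_m=1e4):
    """Lower-bound the worst-case accuracy over a shared ||delta||_inf <= eps, given margin
    lower bounds m >= a[i, j] @ delta + b[i, j] for the J incorrect classes of each sample.
    a: (n, J, d) array, b: (n, J) array. Returns (certified accuracy, worst-case delta)."""
    n, J, d = a.shape
    model = gp.Model("universal_certification")
    model.Params.OutputFlag = 0
    delta = model.addVars(d, lb=-eps, ub=eps, name="delta")
    q = model.addVars(n, vtype=GRB.BINARY, name="q")      # q[i] = 1 iff sample i stays certified
    r = model.addVars(n, J, vtype=GRB.BINARY, name="r")   # r[i, j] = 1 marks a non-positive margin bound
    for i in range(n):
        for j in range(J):
            margin = gp.quicksum(float(a[i, j, k]) * delta[k] for k in range(d)) + float(b[i, j])
            model.addConstr(margin <= big_m * (1 - r[i, j]))   # r[i, j] = 1 forces this bound <= 0
        # q[i] may only be 0 if at least one margin lower bound is non-positive.
        model.addConstr(gp.quicksum(r[i, j] for j in range(J)) >= 1 - q[i])
    model.setObjective(gp.quicksum(q[i] for i in range(n)), GRB.MINIMIZE)
    model.optimize()
    worst_delta = np.array([delta[k].X for k in range(d)])
    return model.objVal / n, worst_delta
```

The returned objective divided by n is the certified lower bound on accuracy, and the recovered δ is a candidate worst-case universal perturbation; big_m must dominate the largest attainable margin bound for the linearization to be valid.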
Although it is possible to solve the whole certification algorithm through MILP (Tjeng et al., 2017), it will be computationally prohibitive. Even for very small networks with thousands of neurons, the number of integer variables in their MILP formulation will be proportional to the number of neurons. In contrast, by computing linear bounds first before solving MILP, the number of integer variables in our formulation is only proportional to the number of samples in a batch and the number of classes, and it does not depend on the size of the network, which makes it feasible in practice.
4 GENERALIZATION OF UNIVERSAL PERTURBATION
In the previous section, we proposed our robustness certification method against UPs. Note that the certification results are only guaranteed for the given batch of samples till now. In this section, we study how the certified accuracy computed on a batch approximates the certified accuracy computed on the entire data distribution.
Let $z^{(i)}$ be a random sample drawn from the probability space (Ω, F, P), which is endowed with a σ-algebra F and a probability measure P. A dataset $D_n \triangleq \{z^{(1)}, \ldots, z^{(n)}\}$ consists of n observations drawn independently from Ω according to P; equivalently, it can be considered as a random point in $(\Omega^n, F^n, \mathbb{P}^n)$, which is the n-fold Cartesian product of Ω equipped with the product σ-algebra $F^n$ and the product measure $\mathbb{P}^n = \underbrace{\mathbb{P} \times \cdots \times \mathbb{P}}_{n \text{ times}}$. Let ∆ denote the $\ell_\infty$ ball that contains all allowable perturbations, $\Delta = \{\delta : \|\delta\|_\infty \le \epsilon\}$, with radius ϵ. Let $B : \Omega \to \mathbb{R}^{(d+1)K}$ be a linear bound generation procedure: for each z = (x, y), it returns the parameters $\{a_j, b_j\}_{j \neq y}$ of the linear lower bounds on the margins, i.e., $m_{y,j}(x + \delta) \ge a_j(x + \delta) + b_j$. In the proposed framework, B is instantiated to be auto LiRPA (Xu et al., 2020a). Let $A_n : \mathbb{R}^{(d+1)Kn} \to \Delta$ denote the MILP in Eq. (4), which returns a perturbation δ given the linear bounds on the margins. The overall certification procedure is the composition of $A_n$ and B, denoted by $G = A_n \circ \underbrace{B \circ \cdots \circ B}_{n \text{ times}} \triangleq A_n \circ B^{\circ n}$.
For every data sample z = (x, y) ∈ Ω, we define the set
$$\Delta_z^B := \Big\{ \delta \in \Delta : \mathbb{1}\Big(\min_{j \neq y}\{ a_j\delta + b_j \} > 0\Big) \Big\}$$
as the set of perturbations such that the margin between the ground-truth class and any other class is certifiably positive according to the linear bounds provided by B, i.e., the model is certifiably robust to any perturbation in this set, but it is still possible for the model to be robust to a perturbation $\delta \notin \Delta_z^B$. Note that the dependence of the set on B has been made explicit because $a_j, b_j$ depend on B. Similarly, we define the set
$$\tilde{\Delta}_z := \Big\{ \delta \in \Delta : \mathbb{1}\Big(\min_{j \neq y}\{ m_{y,j}(x + \delta) \} > 0\Big) \Big\} \quad (9)$$
as the set of all perturbations that are incapable of fooling the given model f, i.e., the data z is actually robust to any perturbation in this set. Note that $\tilde{\Delta}_z$ is a superset of $\Delta_z^B$, and unlike $\Delta_z^B$, it does not depend on the linear bound generation procedure. We make the following definitions:
Definition 1. The certified robust probability (CRP) of a given perturbation δ ∈ ∆ based on a linear bound generation procedure B is defined as
$$V^B(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \Delta_z^B). \quad (10)$$
The actual robust probability (ARP) of a given perturbation δ ∈ ∆ is defined as
$$U(\delta) \triangleq \mathbb{P}(z \in \Omega : \delta \in \tilde{\Delta}_z). \quad (11)$$
The certified robust rate (CRR) of a perturbation δ ∈ ∆ on an evaluation dataset $D_n$ based on a linear bound generation procedure B is
$$\hat{V}^B(\delta; D_n) \triangleq \frac{1}{n} \sum_{z \in D_n} \mathbb{1}(\delta \in \Delta_z^B). \quad (12)$$
Equivalently, we can write the objective of Eq. (4) as minδ∈∆ V̂ B(δ;Dn). ∆Bz can be equivalently defined by the existence of a binary variable as in the MILP formulation in Theorem 1 and is thus nonconvex in general. In the following, we use V̂ B(δ) for V̂ B(δ;Dn) if the evaluation dataset is Dn for notational simplicity. Note that V B(δ) ≤ U(δ) for any δ ∈ ∆ and the equality is attained when the equality in (3) is attained, i.e., the lower bound generated by B exactly matches the actual margin at any δ. Now we present the following theorem that estimates the value of ARP based on the CRR computed from a batch of random samples.
Theorem 2 ((1− ξ)-probable certification for ARP). Given G = An ◦ B◦n and 0 < ξ < 1, for any δ, it holds that
$$\mathbb{P}^n\Big( U(\delta) \ge \min_{\delta \in \Delta} \hat{V}^B(\delta; D_n) + U(\delta^*) - V^B(\delta^*) - t^*(\xi, n) \Big) \ge 1 - \xi, \quad (13)$$
where $t^*(\xi, n)$ is the root of the equation $(1 + 4t)\ln(1 + 4t) - 4t = \frac{4}{n}\ln(1/\xi)$, $t^*(\xi, n)$ is a monotonically decreasing function in n and ξ, and $\delta^* = \operatorname*{argmin}_{\delta} U(\delta)$. Moreover, we have that
$$\mathbb{P}^n\Big( U(\delta) \ge \min_{\delta \in \Delta} \hat{V}^B(\delta; D_n) - t^*(\xi, n) \Big) \ge 1 - \xi. \quad (14)$$
The proof can be found in Appendix A.2. Both bounds are interesting to interpret. The bound Eq. (13) shows that the discrepancy between the ARP of any perturbation and the CRR (i.e., the certified accuracy on a random batch) depends on U(δ∗) − V B(δ∗) and t∗(ξ, n). Given the trained model and the underlying data distribution, δ∗ is fixed; hence, the term U(δ∗) − V B(δ∗) depends on the tightness of linear bounds produced by B. The tighter bounds B can provide, the smaller difference there will be between U(δ∗)
and $V^B(\delta^*)$. This bound suggests that plugging tighter linear bound generation techniques into our certification framework can potentially yield a smaller approximation error. It is also interesting to note that the approximation error of the proposed certification framework $G = A_n \circ B^{\circ n}$ exclusively depends on B, not $A_n$. This is because $A_n$ always returns the optimal solution to the MILP, thereby not introducing any additional error. The second term $t^*(\xi, n)$ depends on the number of samples used for certification and vanishes as n grows (illustrated in Figure 1). The second bound (Eq. (14)) utilizes the fact that $U(\delta^*) - V^B(\delta^*) \ge 0$, and is more relaxed but more convenient than the first bound (Eq. (13)) because the lower bound on the ARP can be calculated given the certification results on a batch, the number of samples in the batch, and the confidence level 1 − ξ. In Section 5.2, we will showcase the estimation of the ARP using this bound.
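To make the use of Eq. (14) concrete, the following self-contained sketch (illustrative, not the authors' code) computes t*(ξ, n) by bisection from the equation above and converts a CRR measured on n samples into the (1 − ξ)-probable lower bound on the ARP.

```python
import math

def t_star(xi, n, tol=1e-12):
    """Root of (1 + 4t) * ln(1 + 4t) - 4t = (4 / n) * ln(1 / xi), solved by bisection."""
    target = (4.0 / n) * math.log(1.0 / xi)
    f = lambda t: (1 + 4 * t) * math.log(1 + 4 * t) - 4 * t - target
    lo, hi = 0.0, 1.0
    while f(hi) < 0:              # the left-hand side is increasing in t, so expand the bracket
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def arp_lower_bound(crr, n, xi=0.1):
    """(1 - xi)-probable lower bound on the ARP from Eq. (14): CRR minus t*(xi, n)."""
    return crr - t_star(xi, n)
```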
5 EXPERIMENT
5.1 EXPERIMENTAL SETUP
For evaluating the certification, we consider two benchmark datasets, MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009), widely adopted in existing works. We adopt 5 model structures from existing works (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022; Zhang et al., 2022a): Conv-small, Conv-4-layer, Conv-big on MNIST, and ResNet-2B, ResNet-4B on CIFAR-10, with details in Appendix B. We use the CRR and attackACC (accuracy under an attack) as the metrics. All the results are averaged over three runs with different random seeds on 100 random samples for each dataset. Further experimental details can be found in Appendix B.
5.2 EVALUATION
Comparing to existing robustness certification We first focus on evaluating our certification results compared to existing robustness certification for sample-wise perturbation. There are several competitive frameworks such as Singh et al. (2019b); Bunel et al. (2020); Henriksen & Lomuscio (2021), and we compare with auto LiRPA (Xu et al., 2020a) specifically for its state-of-the-art
performance (Bak et al., 2021). We consider models from both natural training and adversarial training. For MNIST, we evaluate the two certification methods on naturally trained models and on PGD-32 (PGD with ℓ∞-norm 32/255) adversarially trained models (Madry et al., 2018). Table 1 details the results on naturally trained MNIST models. Our method provides much tighter bounds than sample-wise certification results across all the settings. Table 2 illustrates results on PGD-32 trained MNIST models. With adversarial training, we observe that the certified robustness of the models against UPs also largely increases compared to the naturally trained models in Table 1. Our certification results are still much tighter than sample-wise results, especially under settings with larger perturbations. On CIFAR-10, we only evaluate the results on adversarially trained models (PGD-8 for ResNet-2B and ResNet-4B) as the naturally trained model is severely susceptible to perturbations on CIFAR-10 (see Figure 5) even with ϵ = 1/255. To sum up, our method can provide tighter robustness certification than existing sample-wise methods.
Estimation of the lower bound of ARP Figure 2 illustrates the application of Theorem 2 with the CRR. We use the naturally trained Conv-4-layer on MNIST as an example and set ϵ = 6/255. We demonstrate the estimation of the 0.9-probable certification for the lower bound of the ARP by setting ξ = 0.1. From Figure 2, we can see that the empirical result, the CRR, becomes tighter when more samples are considered in certification. Incorporating more samples also makes the estimated lower bound of the ARP much closer to the CRR (as t∗(ξ, n) is smaller). Such an observation shows that when incorporating more samples in certification, the empirical results better reflect the actual robustness of the whole population.
In particular, when using 1000 samples, the result can be interpreted as the ARP is larger than 84.73% with at least a 90% probability.
Validating with UAP attacks We then validate the robustness certification results with UAP attacks, as the CRR should lower bound the attack-ACCs. We consider three SOTA UAP attacks: Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020), detailed in Appendix B. Figure 3 compares the CRR and attack-ACCs on PGD-32 trained MNIST models. As shown in Figure 3, the CRRs are indeed lower than all the attack-ACCs, as expected.
Validating with backdoor attacks We also validate whether the CRR still lower bounds the attack-ACCs in backdoor attacks. We consider two backdoor triggers, namely a blended trigger (Chen et al., 2017) with a small ℓ∞-norm (∥δ∥∞ = 5/255, referred to as the stealthy trigger), and the BadNets trigger (Gu et al., 2019) (∥δ∥∞ = 255/255). All the attacks use the same poison ratio of 20%, following existing works (Zeng et al., 2021). Visual examples of the poisoned sample and the triggers, together with the certification results, are shown in Figure 4. Under the setting of the stealthy blended backdoor, we find that the CRR drops dramatically before reaching the trigger's norm (∥δ∥∞ = 5/255) compared to the same model trained on clean MNIST. This observation verifies the correctness of the CRR and its potential to reveal stealthy ℓ∞-bounded backdoor attacks given the current trend of backdoor development with smaller ℓ∞-norm constraints, e.g., Zeng et al. (2022b); Zhao et al. (2020). However, assuming an ℓp norm bound on backdoor triggers is not widely accepted in traditional backdoor settings. Thus, we also present the results of BadNets (with ∥δ∥∞ = 255/255) in Figure 4. We consider backdoor models trained from scratch or fine-tuned from the clean model. The CRR still lower-bounds the attack accuracy within the attack's deployed ℓ∞ bound of the trigger. However, as the trigger has a large ℓ∞ norm, the CRRs of the poisoned models show no difference from the clean model and are thus not that useful. Nevertheless, in Section 5.3, we show a simple twist of the certification framework that helps reveal the existence of backdoors.
5.3 IMPLICATIONS OF ROBUSTNESS CERTIFICATION AGAINST UPS
Now we explore the potential implications of our robustness certification against UPs. We focus on 3 case studies on model structure comparison, UAP defenses comparison, and backdoor detection.
Comparing model structures One implication of robustness certification against UPs is to compare different model structures and training strategies in terms of their certified robustness against UPs. Figure 5 depicts the certification results of all the considered model structures with different training settings on MNIST and CIFAR-10. We consider both naturally trained and PGD trained models with different ℓ∞ perturbation norms. In Figure 5 (a) on MNIST, we find that the largest model, Conv-big, shows the worst certified robustness against UPs. But the smallest model, Conv-small, has a higher CRR than Conv-4-layer under the naturally trained setting, PGD-8, and PGD-16, but not PGD-32. The only difference between Conv-small and Conv-4-layer is that Conv-4-layer uses a larger padding step, which results in a slightly larger hidden layer (see Appendix B). Based on this observation, there is an interesting trade-off between model size and certified robustness against UPs: a slightly larger structure can help the model obtain better certified robustness when adopting adversarial training, potentially due to increased model capacity. Such an observation can be further illustrated in Figure 5 (b). Specifically, ResNet-2B's CRR drops to random guessing when using PGD-16, while ResNet-4B can still maintain a certain level of CRR. But even larger models (Figure 5 (a)) have worse certified robustness, potentially due to looser certified bounds.
Implication to UAP defenses Another implication of the CRR is to compare existing UAP defenses regarding their efficacy. We consider three types of defenses and five different defenses in total: FGSM and PGD sample-wise adversarial training (Goodfellow et al., 2014; Madry et al., 2017;
Wong et al., 2019); universal adversarial training (UAT) with FGSM or PGD synthesizing the UPs (Shafahi et al., 2020); and sample-wise certified defense with Interval Bound Propagation (IBP) training (Gowal et al., 2018; Mirman et al., 2018). The defended models are further evaluated with UAP attacks and certification. The results with a small perturbation radius ∥δ∥∞ = 16/255 are shown in Table 4. Additional results with a larger perturbation radius (∥δ∥∞ = 80/255) are in Table 7, Appendix C. We use the row titled "Worst" to record the maximum accuracy drop under UAP attacks compared to clean accuracy. Surprisingly, in Table 4, we find the CRR of models trained with UAT is worse than that of their sample-wise adversarial training counterparts (i.e., UAT-PGD results are worse than PGD). However, in the case of a larger perturbation radius (Table 7, Appendix C), the UAT-trained models can achieve higher CRR than the sample-wise counterparts. Such an observation indicates an underexplored trade-off between the perturbation radius and the UAP defense method with respect to the CRR. The CRR result from the IBP-trained model is much tighter than the others, as IBP directly optimizes an objective for certified robustness and tightens the certified bounds for all the neurons. Moreover, the CRR is also aligned with the worst attacked accuracy drop and can be an indicator for comparing different settings of UAP defenses.
Implication to backdoor defenses We evaluated the effectiveness of the CRR in revealing potential backdoors in the above section, but that effectiveness is so far limited to triggers with small perturbations. This section presents a simple twist on the certification framework by teaming it up with adversarial training (PGD-16). We depict the average class-wise certification results on 10 ResNet-4B models trained with different random seeds over different BadNets poison ratios in Figure 6. Based on the results, we find the certification can reliably reveal the targeted label and indicate how strong the backdoor attack is (i.e., the CRR is aligned with the poison ratio used). Additional results on the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a) are listed in Appendix C and share similar observations. The reason for the successful identification is that adversarial training naturally forces the model to learn more from reliable features and thus makes standard backdoors stand out from the benign features of the data (i.e., they are easier for the model to learn), as also discussed in Weng et al. (2020). Thus, after training a model with adversarial training with a large perturbation radius, the model would likely engrave the trigger and thus have a high CRR only on the target label. The CRR from our proposed method provides an intriguing point of view to determine the attack's strength (i.e., the poison ratio).
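A rough sketch of the class-wise identification procedure described above is given below. It is our own illustrative rendering of the described steps: certify_fn is an assumed wrapper around the LiRPA+MILP pipeline, and the outlier-flagging rule is a design choice of this sketch rather than the paper's.

```python
import numpy as np

def classwise_crr(certify_fn, models, num_classes, noise_inputs, eps):
    """certify_fn(model, inputs, labels, eps) -> CRR is assumed to wrap the certification
    pipeline. Returns the CRR per candidate class, averaged over adversarially trained models."""
    crr = np.zeros(num_classes)
    for model in models:
        for c in range(num_classes):
            labels = np.full(len(noise_inputs), c)            # treat class c as the label
            crr[c] += certify_fn(model, noise_inputs, labels, eps)
    return crr / len(models)

def flag_backdoor_target(crr, z_threshold=2.0):
    """Flag a class whose average CRR is an outlier relative to the other classes."""
    mean, std = crr.mean(), crr.std() + 1e-12
    scores = (crr - mean) / std
    target = int(scores.argmax())
    return target if scores[target] > z_threshold else None
```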
6 CONCLUSION
In this work, we present the first focused study on certifying neural networks’ robustness against UPs. In contrast to previous robustness certification works that focused on sample-wise perturbations, we formulate the certification problem against UPs by emphasizing sharing a universal perturbation between different samples. We propose a combination of linear relaxation-based bounds and MILP to solve the problem. We also present a theoretical analysis framework to estimate the certification result for the entire population based on results from a batch of random samples. Extensive experiments reveal that our certification imposes tighter results than directly applying existing sample-wise robustness certifications. In addition, we discuss and demonstrate how robustness certification against UPs could facilitate comparing certified robustness between different model structures and defense methods and provide reliable backdoor detection.
ACKNOWLEDGEMENT
This work is partially funded by Sony AI. This work is also supported in part by NSF under IIS2008173, IIS-2048280, and by Army Research Laboratory under W911NF-20-2-0158. RJ and the ReDS lab appreciate the support of The Amazon - Virginia Tech Initiative for Efficient and Robust Machine Learning and the Cisco Award. YZ and ZS are both supported by the Amazon Fellowship.
A PROOFS
A.1 PROOF OF THEOREM 1
Let ϑ̂ be the solution of the MILP problem in the theorem, and let ϑ̃ be the solution to Eq. (4). Theorem 1 states that ϑ̂ = ϑ̃. We formally prove the equivalence below.
Proof. We first show that ϑ̂ ≤ ϑ̃. In Eq. (4), there exists some δ̃ such that
$$\tilde{\delta} = \operatorname*{argmin}_{\|\delta\|_\infty \le \epsilon} \; \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \neq y_i}\big\{ a_j^{(i)}\delta + b_j^{(i)} \big\} > 0\Big).$$
Then, for every i ∈ [n], take the following values for variables in the MILP formulation:
$$q^{(i)} = \mathbb{1}\Big(\min_{j \neq y_i}\big\{ a_j^{(i)}\tilde{\delta} + b_j^{(i)} \big\} > 0\Big), \qquad \vartheta = \tilde{\vartheta} = \frac{1}{n}\sum_{i=1}^{n} q^{(i)},$$
$$\forall j \neq y_i, \; s_j^{(i)} = \mathbb{1}(j = j'), \text{ where } j' = \operatorname*{argmin}_{j \neq y_i} \; a_j^{(i)}\tilde{\delta} + b_j^{(i)},$$
and it is easy to see that the values for these variables satisfy all the constraints in the MILP problem. Thus the result of the minimization in the MILP is no larger than ϑ̃, i.e., ϑ̂ ≤ ϑ̃.
We now show that ϑ̃ ≤ ϑ̂. We use δ̂, q̂, ŝ to denote the values of the δ, q, s variables in the solution of the MILP. For every i ∈ [n], Eq. (7) ensures that there exists exactly one ĵ (ĵ ≠ yi) with $\hat{s}_{\hat{j}}^{(i)} = 1$, and Eq. (8) ensures that $a_{\hat{j}}^{(i)}\hat{\delta} + b_{\hat{j}}^{(i)} \le a_j^{(i)}\hat{\delta} + b_j^{(i)}$ holds for all j ≠ yi. Thus
$$\sum_{j \neq y_i} \big(a_j^{(i)}\hat{\delta} + b_j^{(i)}\big)\hat{s}_j^{(i)} = \min_{j \neq y_i}\big\{ a_j^{(i)}\hat{\delta} + b_j^{(i)} \big\}.$$
According to Eq. (6), if $\hat{q}^{(i)} = 1$, then $\sum_{j \neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \ge 0$ holds. In case that $\sum_{j \neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} = 0$, Eq. (6) also holds with $\hat{q}^{(i)} = 0$, and due to the minimization objective of the MILP, $\hat{q}^{(i)} = 0$ instead of $\hat{q}^{(i)} = 1$ will be taken. Thus $\sum_{j \neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} > 0$ strictly holds when $\hat{q}^{(i)} = 1$; and if $\hat{q}^{(i)} = 0$, then $\sum_{j \neq y_i}(a_j^{(i)}\hat{\delta} + b_j^{(i)})\hat{s}_j^{(i)} \le 0$ holds. Thus
$$\hat{q}^{(i)} = \mathbb{1}\Big(\sum_{j \neq y_i}\big(a_j^{(i)}\hat{\delta} + b_j^{(i)}\big)\hat{s}_j^{(i)} > 0\Big) = \mathbb{1}\Big(\min_{j \neq y_i}\big\{ a_j^{(i)}\hat{\delta} + b_j^{(i)} \big\} > 0\Big).$$
Thereby
$$\hat{\vartheta} = \frac{1}{n}\sum_{i=1}^{n} \hat{q}^{(i)} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\Big(\min_{j \neq y_i}\big\{ a_j^{(i)}\hat{\delta} + b_j^{(i)} \big\} > 0\Big),$$
so δ̂ is feasible for Eq. (4) and attains the objective value ϑ̂; hence the minimum of Eq. (4) is no larger than ϑ̂, i.e., ϑ̃ ≤ ϑ̂.
Hence ϑ̂ = ϑ̃ is proved.
A.2 PROOF OF THEOREM 2
Let δ∗ = argminδ∈∆ U(δ) be the optimal universal perturbation that minimizes the ARP, let δn be the value returned by G(Dn), and let δ̃ = argminδ∈∆ $V^B(\delta)$ be the perturbation that minimizes the CRP. We introduce the following lemma:
Lemma 1. Given An, it holds that
$$\mathbb{P}^n\big(\hat{V}^B(\delta^*) - V^B(\delta^*) > t^*(\xi, n)\big) \le \xi \quad (15)$$
where $t^*(\xi, n)$ is the root of the equation $(1 + 4t)\ln(1 + 4t) - 4t = \frac{4}{n}\ln(1/\xi)$.
Proof. Let $q^{(i)} = \mathbb{1}(\delta^* \in \Delta^B_{z^{(i)}})$, which can also be interpreted as $\mathbb{1}\big(\min_{j \neq y_i}\{ a_j^{(i)}\delta^* + b_j^{(i)} \} > 0\big)$. Then $\hat{V}^B(\delta^*) = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}$ and $V^B(\delta^*) = \mathbb{E}\big[\frac{1}{n}\sum_{i=1}^{n} q^{(i)}\big] = \mathbb{E}[q^{(i)}]$. Let σ² denote the variance of $q^{(i)}$. Since $q^{(i)}$ is a binary random variable, we have that σ² ≤ 1/4. Let $h(u) = (1 + u)\ln(1 + u) - u$. For any t > 0, we have that
$$\mathbb{P}^n\Big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t\Big) \le \exp\Big(-n\sigma^2 h\big(\tfrac{t}{\sigma^2}\big)\Big) \quad (16)$$
$$\le \exp\Big(-\frac{n}{4} h(4t)\Big), \quad (17)$$
where the first inequality is a direct application of Bennett's inequality (Bennett, 1962), and the second inequality is due to the fact that $n\sigma^2 h(\tfrac{t}{\sigma^2})$ is a monotonically decreasing function of σ². Let $t^*(\xi, n)$ denote the root of $\exp\big(-\frac{n}{4}h(4t)\big) = \xi$. Then, it follows that $\mathbb{P}^n\big(\frac{1}{n}\sum_{i=1}^{n} q^{(i)} - \mathbb{E}[q^{(i)}] > t^*(\xi, n)\big) \le \xi$.
Then we prove Theorem 2 to certify the robustness of a classifier against the worst-case attack δ∗.
Proof. We use the following relations: for any δ ∈ ∆,
$$U(\delta) \ge \min_{\delta \in \Delta} U(\delta) = U(\delta^*) = \underbrace{U(\delta^*) - V^B(\delta^*)}_{(i)} + \underbrace{V^B(\delta^*) - \hat{V}^B(\delta^*)}_{(ii)} + \underbrace{\hat{V}^B(\delta^*) - \hat{V}^B(\delta_n)}_{(iii)} + \hat{V}^B(\delta_n) \ge (i) + (ii) + \hat{V}^B(\delta_n), \quad (18)$$
where (ii) can be bounded by applying the concentration inequality in Lemma 1, and (iii) ≥ 0 due to the optimality of $\delta_n = \operatorname*{argmin}_{\delta \in \Delta} \hat{V}^B(\delta)$. Combining these bounds yields Theorem 2.
B FURTHER DETAILS ON EXPERIMENTAL SETTINGS
We use one server equipped with a total of 8 RTX A6000 GPUs as the hardware platform. PyTorch (Paszke et al., 2019) is adopted as the implementation framework. We detail the model structures used in our experiments in Table 6. All of the model structures used in this work are standard set-ups also considered in existing robustness certification works: Conv-small, Conv-4-layer, and Conv-big on MNIST (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022), and ResNet-2B and ResNet-4B on CIFAR-10 (Zhang et al., 2022a). We use Adadelta (Zeiler, 2012) as the optimizer with a learning rate of 0.1 for all model training (including the adversarial training used in the model updating step). For MNIST models, we train each model for 60 epochs. For CIFAR-10 models, we train each model for 500 epochs to ensure full convergence. For the adversarial training adopted in the main text, the number of PGD steps is 7 and the PGD step size is set to ϵ/4. For IBP training, we use the implementation in Shi et al. (2021).
Now we detail the UAP attacks considered in the experiments for validating the certification results, namely Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020). These attacks are distinguished by the design of their synthesis procedures. Specifically, Adv-UAP first synthesizes adversarial examples for each input before synthesizing the UAP, which has been shown to be more effective in finding stronger UAPs. Cos-UAP produces the UAP by reducing the cosine similarity between the original output logits and the disturbed logits; DF-UAP employs a loss similar to that of the C&W attack (Carlini & Wagner, 2017), which aims to reduce the gap between the ground-truth label's logit and the maximum logit of the remaining classes.
Now we provide the detailed settings of the backdoor target-class identification in Section 5.3. For the threat model, we consider the scenario where the defender aims to determine whether a backdoor attack resides in a given dataset, identify the target class, and judge how potent the attack is if one is identified. We assume the defender has access to the training set to be inspected, with no additional clean validation data required. To conduct the instantiated case shown in Section 5.3, the defender adversarially trains (with PGD-16) 10 different models on the inspected training dataset and obtains the average CRR results in a class-wise manner. In particular, since we assume no additional clean validation data is available, we pass 100 random noise inputs through the certified models to obtain the results in Figures 6, 7, and 8.
C ADDITIONAL RESULTS
C.1 ADDITIONAL RESULTS ON UAP DEFENSES COMPARISON
Table 7 details the results of the UAP defenses comparison under the large-norm setting (ϵ = 80/255). Note that all the adopted defenses are incorporated at the same cost. For the large-norm setting, we find that only certified-robustness training ends up with a CRR larger than 0. Apart from its actual effectiveness, as mentioned in the main text, the IBP-trained model also ends up with much tighter intermediate linear bounds (i.e., tighter a and b). Even though our method only returns a positive CRR on the IBP-trained model, the certification results are still aligned with the actual attack results, as the IBP-trained model has stronger robustness than the other models in terms of the smallest drop in ACC under attack.
C.2 ADDITIONAL RESULTS ON BACKDOOR TARGET-CLASS IDENTIFICATION
We now provide additional results on using the certification framework to identify the existence of backdoor attacks. In this section, the results are evaluated against the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a). Figures 7 and 8 illustrate the results for the Smooth attack and the ℓ2-invisible attack, respectively. Based on the results, we find that the certification can also reliably reveal the targeted label and indicate how powerful the backdoor attack is for both the Smooth attack and the ℓ2-invisible attack.
D BROADER IMPACT AND LIMITATIONS
D.1 UAP AND BACKDOOR ATTACKS
UAP attacks aim to synthesize a UP by accessing and analyzing the output of a trained neural network. Backdoor attacks aim to insert a predefined trigger into the neural network so that the attack is effective without accessing or analyzing the output after the model is trained on the poisoned samples. Many existing works have found that these two parallel lines of work have interesting intersections. In particular, the formulation of UAP synthesis has also inspired, or has interesting counterparts in, backdoor attack and defense designs. For example, Li et al. (2020a); Zhang et al. (2021b) designed their backdoor triggers via a process similar to synthesizing a UAP with a trained model. Kolouri et al. (2020); Zeng et al. (2022a) leveraged this intersection between UAPs and backdoor attacks to identify backdoors or to conduct online removal of backdoors. If we view these two attack paradigms at inference time (i.e., with a trained model), mitigation defenses and robustness analysis tools for both attacks can be developed under the general notion of robustness to UPs.
D.2 LIMITATIONS
Unconstrained or Large ℓ∞-norm Attacks: Some of the UAP attacks are generated without specifying a constraint (Brown et al., 2017), and in most backdoor attacks, the trigger inserted does not have a constrained ℓ∞ norm. If the attack can have an unconstrained ℓ∞ or a very large ℓ∞ norm, only trivial certification results can be obtained from our certification. This limitation also commonly exists in state-of-the-art sample-wise certification methods (Wang et al., 2021; Ferrari et al., 2021). In fact, any certification procedure requires some constraints on potential perturbations and does not apply to unbounded perturbations. This open problem calls for answers and the attention for future research.
Computational Cost: Supporting large models and large datasets can be computationally costly for our certification. Existing works for certifying pre-trained models (Wang et al., 2021; Ferrari et al., 2021) are also commonly limited to moderate-sized networks, and the cost of our method is lower bounded by existing linear bound propagation frameworks that we use to obtain the linear bounds before solving the MILP problem. It remains a challenging open-problem for scaling to larger-scale networks, such as models for ImageNet (Deng et al., 2009). | 1. What is the focus of the paper regarding sample-wise robustness certification?
2. What are the strengths and weaknesses of the paper, particularly in its theoretical contribution and experimental design?
3. Do you have any questions or suggestions regarding the intersection between universal adversarial perturbations and backdoor triggers?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focused on exploring and extending linear-relaxation-based sample-wise robustness certification to evaluate the robustness of trained neural networks against Universal Perturbations (UPs). The authors pointed out an interesting intersection between universal adversarial perturbations (UAPs) and backdoor triggers. They showcased how the robustness certification to UPs of neural networks can impact these two lines of effort. The theoretical contribution evaluates the generalizability of the effects of the UPs computed using observed samples to unseen data populations. Detailed empirical evaluation regarding multiple perspectives of the proposed method is presented on two standard computer vision benchmarks (MNIST and CIFAR-10) regarding five different model structures (Conv-small, Conv-4-layer, Conv-big, ResNet-2B, and ResNet-4B)
Strengths And Weaknesses
Strengths:
This paper focused on a critical problem, certifying the robustness of neural networks to UPs;
The paper is well-written. I enjoyed reading this work;
A clear structure for recognizing the road map of existing work in UAPs, backdoors, and robustness certification, which facilitates readers to position this work in an interesting intersection between UAP attacks and backdoor attacks;
Interesting and solid technical efforts. Given the lack of theoretical analysis in existing UAP and backdoor attacks, the proposed theoretical analysis framework in this paper makes an interesting first attempt toward a better understanding of neural networks' robustness against these two types of threats;
Interesting experimental design. In particular, the authors thoroughly evaluated the proposed UP certification, compared it with existing sample-specific certifications, and validated the results with three actual UAP attacks and two types of backdoor attacks. They also instantiated the theoretical results in practice and studied three implications of UP-robustness certification;
Weaknesses:
I find the intersection/similarity/difference between UAP attacks and backdoor triggers developed in this work quite interesting. Apart from the current related work section, I highly recommend the authors add a discussion of the similarities/differences between these two lines of threats. In particular, some recent works in the backdoor literature utilize this interesting intersection for stronger attack efficacy and better stealthiness [1,2] or for effective defenses [3].
I find most of the experimental settings well-detailed, except for the backdoor target-class identification experiment (Section 5.3). Please add more details to the threat model and the availability of the defender's knowledge, e.g., data accessibility.
There is a limitation when using the proposed method to compare different UAP defenses regarding large norms (Table 7, Appendix C.1), i.e., all the results except the result on the IBP-trained model are trivial. I encourage the authors to add discussions on this limitation.
References
[1] Zhang, Quan, et al. "AdvDoor: adversarial backdoor attack of deep learning system." Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis. 2021.
[2] Zeng, Yi, et al. "NARCISSUS: A Practical Clean-Label Backdoor Attack with Limited Information." arXiv preprint arXiv:2204.05255 (2022).
[3] Kolouri, Soheil, et al. "Universal litmus patterns: Revealing backdoor attacks in cnns." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
Clarity, Quality, Novelty And Reproducibility
The structure of the paper is clear and easy to read. This work looks into a vital problem that has not been addressed before in the literature. This study makes some novel and exciting technical contributions to the understanding of UPs' generalizability, leading to a solid evaluation of the error bound of the proposed UP-robustness certification. Section 5.3's additional empirical discussion on instantiating UP-robustness certifications to compare model structures and UAP defenses, and to use them as a practical backdoor defense, is inspiring and engaging.
ICLR | Title
Towards Robustness Certification Against Universal Perturbations
Abstract
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to the worst-case perturbations given a neural network. However, those sample-wise bounds will be loose when considering the UP threat model as they overlook the important constraint that the perturbation should be shared across all samples. We propose a method based on a combination of linear relaxation-based perturbation analysis and Mixed Integer Linear Programming to establish the first robust certification method for UP. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Aside from an extensive evaluation of the proposed certification, we further show how the certification facilitates efficient comparison of robustness among different models or efficacy among different universal adversarial attack defenses and enables accurate detection of backdoor target classes.
1 INTRODUCTION
As deep neural networks become prevalent in modern performance-critical systems such as selfdriving cars and healthcare, it is critical to understand their failure modes and performance guarantees. Universal perturbations (UPs) are an important class of vulnerabilities faced by deep neural networks. Such perturbations can fool a classifier into misclassifying any input from a given distribution with high probability at test time. Past literature has studied two lines of techniques to create UPs: universal adversarial attacks (Moosavi-Dezfooli et al., 2017) and backdoor attacks (Gu et al., 2019; Chen et al., 2017). The former crafts a UP based on a trained model and does not rely on access to training data. The latter, by contrast, prespecifies a pattern as a UP and further alters the training data so that adding the pattern (often known as the trigger in backdoor attack literature) will change the output of the trained classifier into an attacker-desired target class.
Many defenses have been proposed for both universal adversarial attacks (Akhtar & Mian, 2018; Moosavi-Dezfooli et al., 2017; Shafahi et al., 2020; Benz et al., 2021; Liu et al., 2021) and backdoor attacks (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Borgnia et al., 2020; Qiu et al., 2021). But empirical evaluation with attacks does not provide a formal guarantee on the robustness, as it is infeasible for an attack algorithm to provably cover all concerned perturbations. In contrast, robustness certification aims to verify the output bounds of the model given a certain class of input perturbations and provably certify the robustness against all the concerned perturbations. Several recent works (Weber et al., 2020; Xie et al., 2021) developed techniques to achieve certified robustness of a classifier against backdoor-attack-induced UPs with a certain norm bound. However, these techniques apply only to specific learning algorithms and require knowledge of the training data. It remains an open question: How to certify the robustness of a trained model against a class of UPs in a way that is agnostic to the underlying training algorithm and data, and is general for different UPs (including both universal adversarial attacks and norm-bounded backdoor attacks)?
∗Zhouxing Shi and Yi Zeng contributed equally. Corresponding Yi Zeng, Lingjuan Lyu or Ruoxi Jia. Work partially done during Yi Zeng’s internship at Sony AI.
In this paper, we propose a framework to certify the worst-case classification accuracy on a batch of test samples against l∞-norm-bounded UPs. Our approach builds off of past works for certifying robustness against sample-wise perturbations that are independently added to each sample. For efficient verification, many recent works linearly relax nonlinear activation functions in neural networks into linear bounds and then conduct linear bound propagation to obtain the output bounds for the whole model (Wong & Kolter, 2018; Wang et al., 2018b; Dvijotham et al., 2018; Zhang et al., 2018; Singh et al., 2019b). This process is also referred to as linear perturbation analysis (Xu et al., 2020a). Since the worst-case model accuracy against sample-wise perturbations is a lower bound of the worst-case accuracy against UPs, these certification techniques could be applied to obtain a certificate against UPs. However, a direct application would overlook the important constraint that a UP is shared across different inputs, thereby producing overly conservative certification results.
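As a brief illustration of the relaxation these methods rely on (the standard relaxation used in the CROWN/DeepPoly family of LiRPA methods, not a contribution of this paper), an unstable ReLU neuron whose pre-activation $\hat{x}$ has bounds $l < 0 < u$ is bounded by the linear functions
$$\alpha\,\hat{x} \;\le\; \mathrm{ReLU}(\hat{x}) \;\le\; \frac{u}{u-l}\,(\hat{x}-l), \qquad \alpha \in [0,1],$$
and propagating such per-neuron bounds through the network yields linear lower bounds on the output margins of the form used in Section 3.2.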
Unlike sample-wise perturbations, UPs require theoretical reasoning to generalize certification results. This is because UPs are applied to any input from the data distribution, and our main interest lies in the expected model accuracy over the entire data distribution against UPs. However, certification procedures can only accept a batch of samples from the distribution and certify the accuracy over the samples. Therefore, it’s crucial to understand the discrepancy between certified robustness computed from samples and the actual population robustness.
We summarize our contributions as follows:
• We formulate the problem of robustness certification against UPs. We then generalize linear relaxation based perturbation analysis (LiRPA) to UPs, and we further propose a Mixed Integer Linear Programming (MILP) formulation over linear bounds from LiRPA, to obtain tighter certification on the worst-case accuracy of a given model against UPs within an ℓ∞-norm ball¹.
• We establish a theoretical framework for analyzing the generalizability of the certification results based on randomly sampled subsets to the entire population.
• We conduct extensive experiments to show that our certification method provides certified lower bounds on the worst-case robust accuracy against both universal adversarial attacks and ℓ∞-bounded backdoor attacks, which are substantially tighter than results from directly applying existing sample-wise certification.
• We also investigate the implications of robustness certification against UPs to facilitate easy comparisons of robustness among different models or the efficacy of empirical defenses, and to enable accurate detection of backdoor target classes.
2 BACKGROUND AND RELATED WORK
Universal Adversarial Perturbation Neural networks are vulnerable to adversarial examples (Szegedy et al., 2014), which has led to the development of universal adversarial perturbations (UAPs): a single noise that can consistently deceive a target network on most images (Liu et al., 2019; 2020). Existing defenses against UAPs include fine-tuning on pre-computed UAPs (Moosavi-Dezfooli et al., 2017), post-hoc detection (Akhtar et al., 2018), and universal adversarial training with online UAP generation (Mummadi et al., 2019; Shafahi et al., 2020; Benz et al., 2021). However, all existing defenses against UAPs are empirical works without efficacy guarantees against new attacks.
Backdoor Attacks In backdoor attacks, attackers plant a predefined UP (a.k.a. the trigger) in the victim model by manipulating the training procedure (Li et al., 2020c). Attacked models can give adversarially-desired outputs for any input patched with the trigger while still showing good performance on clean inputs. Existing defenses include: poison detection via outlier detection (Gao et al., 2019; Chen et al., 2018; Tran et al., 2018; Zeng et al., 2021), which relies on modeling the distribution of clean samples; poisoned model identification (Xu et al., 2019; Wang et al., 2020b); trojan removal via trigger synthesis (Wang et al., 2019; Chen et al., 2019; Guo et al., 2019; Zeng et al., 2022a) or via preprocessing and fine-tuning (Li et al., 2020b; Borgnia et al., 2020); and robust training via differential privacy (Du et al., 2019) or redesigning the training pipeline (Levine & Feizi, 2020; Jia et al., 2020; Huang et al., 2022; Li et al., 2021). As all these defenses are empirical, existing literature has revealed their limitations against zero-day or adaptive attacks (Zeng et al., 2022b).
Robustness Certification of Neural Networks Early robustness certifications (Katz et al., 2017; Ehlers, 2017; Tjeng et al., 2017) largely relied on satisfiability modulo theory (SMT) or integer
1https://github.com/ruoxi-jia-group/Universal_Pert_Cert
linear programming (ILP) solvers and were limited to very small networks. For more efficient verification, bound propagation with convex relaxations has been proposed (Wong & Kolter, 2018; Wang et al., 2018b; Zhang et al., 2018; Weng et al., 2018; Singh et al., 2019b; Salman et al., 2019), which over-approximates nonlinear activations with convex relaxation and propagates the bounds layer by layer to finally bound the entire model. Xu et al. (2020a) proposed a bound propagation framework for general computational graphs and referred to the related methods as linear relaxation based perturbation analysis (LiRPA), as activations are relaxed by linear bounds. Bound propagation methods have also been further enhanced with techniques such as branch-and-bound (Bunel et al., 2018; 2020; Wang et al., 2018a;b; Xu et al., 2020b; Wang et al., 2021) and multi-neuron relaxation and cutting planes (Singh et al., 2019a; Ferrari et al., 2021; Zhang et al., 2022a) for tighter results at the cost of efficiency. However, these works are developed for sample-wise perturbations, and they cannot directly produce tight certification against universal perturbations. Besides, there are several randomized smoothing (Cohen et al., 2019) based methods for certified robustness against backdoor attacks (Weber et al., 2020; Wang et al., 2020a; Xie et al., 2021; Zhang et al., 2022b). These are stochastic methods and are usually considered orthogonal to deterministic certification. Moreover, they require access to training data, are only applicable to some specific learning algorithms (e.g., binary models or federated learning), and are not general for other UPs, such as UAPs.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
On a set of $n$ independent samples $\{z^{(1)}, \dots, z^{(n)}\}$ from the data distribution $\Omega$, where $z^{(i)} = (x_i, y_i)$ is the $i$-th example, $x_i \in \mathbb{R}^d$ is the input and $y_i$ is the ground-truth label, we aim to certify the robustness of a $K$-way neural network classifier $f : \mathbb{R}^d \to \mathbb{R}^K$ against a potential universal perturbation $\delta$ with $\ell_\infty$ norm constrained as $\|\delta\|_\infty \le \epsilon$. In particular, we aim to certify and lower bound the worst-case accuracy of the neural network on $\{z^{(1)}, \dots, z^{(n)}\}$ for any universal perturbation $\delta$ ($\|\delta\|_\infty \le \epsilon$) applied to all the examples:
$$\min_{\|\delta\|_\infty \le \epsilon}\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\ne y_i}\{m_{y_i,j}(x_i+\delta)\} > 0\Big), \qquad (1)$$
where $m_{y_i,j}(x_i+\delta) = f_{y_i}(x_i+\delta) - f_j(x_i+\delta)$ is the margin between the ground-truth class $y_i$ and an incorrect class $j \ne y_i$, and the indicator checks whether the margin is positive for every $j \ne y_i$ when a perturbation $\delta$ is added. It is NP-hard to exactly verify Eq. (1) even for $n = 1$ and a small ReLU network (Katz et al., 2017). Thus recent neural network verifiers usually compute a lower bound for the margin, $\underline{m}_{y_i,j}(x_i+\delta) \le m_{y_i,j}(x_i+\delta)$, and then we can replace $m$ in Eq. (1) with $\underline{m}$ to lower bound Eq. (1); this bound also serves as a lower bound for the robustness.
3.2 LINEAR PERTURBATION ANALYSIS W.R.T. A UNIVERSAL PERTURBATION
We adopt linear relaxation based perturbation analysis (LiRPA) from previous works that focused on sample-wise perturbations, specifically "auto LiRPA" (Xu et al., 2020a), to obtain lower bounds on $m_{y_i,j}(x_i+\delta)$ represented as linear functions w.r.t. the universal perturbation $\delta$, but it is also feasible to use other verification frameworks such as Singh et al. (2019b); Wang et al. (2018b). auto LiRPA can bound the output of a computational graph when its input nodes are perturbed, and it can produce linear functions w.r.t. the perturbed input nodes as linear bounds. Note that margin functions can be appended to the original neural classifier as the output of the computational graph, and thereby the margins can be bounded. When sample-wise perturbations are considered in previous works, the linear bounds can usually be written as
$$\forall i \in [n],\; \forall j \ne y_i,\; \forall \|\delta\|_\infty \le \epsilon, \qquad m_{y_i,j}(x_i+\delta) \;\ge\; \tilde a_j^{(i)}(x_i+\delta) + \tilde b_j^{(i)}, \qquad (2)$$
where $\tilde a_j^{(i)}$ and $\tilde b_j^{(i)}$ are coefficients and biases in the linear bounds. This is achieved by relaxing nonlinear functions such as activation functions in the network with linear bounds and propagating linear coefficients through the computational graph. The right-hand side (RHS) of Eq. (2) is a linear function w.r.t. $(x_i+\delta)$. To obtain a final bound represented as a concrete number without relying on the $\delta$ variable, a concretization step can be applied to the RHS given the constraint on $\|\delta\|_\infty$, which eliminates the $\delta$ variable and lower bounds the RHS as $\tilde a_j^{(i)}(x_i+\delta) + \tilde b_j^{(i)} \ge -\epsilon\|\tilde a_j^{(i)}\|_1 + \tilde a_j^{(i)} x_i + \tilde b_j^{(i)}$.
However, the aforementioned concretization step considers the worst-case δ for each sample independently but a universal perturbation δ should be shared across all the examples. Thereby it will produce relatively loose and over-conservative results under the universal perturbation setting, as the perturbations are much stronger when each example can take an independent perturbation respectively compared to a single and universal perturbation for all the examples.
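To make the concretization step concrete, the following small NumPy sketch (ours, not from the paper's released code) evaluates one linear margin bound: the sample-wise certified value uses the worst case $-\epsilon\|\tilde a\|_1 + \tilde a x + \tilde b$ over an independent perturbation, whereas any specific shared $\delta$ can only give a value at least as large, which is why per-sample concretization is conservative under the UP threat model.

```python
# Minimal NumPy sketch (not the authors' code): sample-wise concretization of a
# linear margin bound m(x + delta) >= a @ (x + delta) + b versus evaluating the
# same bound at one particular shared perturbation with ||delta||_inf <= eps.
import numpy as np

rng = np.random.default_rng(0)
d, eps = 8, 0.1
a = rng.normal(size=d)   # linear coefficient of the margin bound (assumed given by LiRPA)
b = 0.2                  # bias of the margin bound (assumed given by LiRPA)
x = rng.normal(size=d)   # the fixed clean input

# Sample-wise concretization: worst case over an independent delta for this sample alone.
sample_wise_lb = -eps * np.abs(a).sum() + a @ x + b

# Evaluating the bound at one shared delta; in general this delta is not the worst
# case for this particular sample, so the value is at least the sample-wise bound.
delta_shared = eps * np.sign(rng.normal(size=d))
value_at_shared = a @ (x + delta_shared) + b

assert value_at_shared >= sample_wise_lb - 1e-12
print(sample_wise_lb, value_at_shared)
```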
In contrast, we propose to obtain a tighter certification for universal perturbation. Unlike Eq. (2), we use auto LiRPA to compute the linear lower bound with respect to $\delta$ instead of $(x_i+\delta)$ by treating $\delta$ as a perturbed input node and $x_i$ as a fixed input node in the computational graph:
$$\forall i \in [n],\; \forall j \ne y_i,\; \forall \|\delta\|_\infty \le \epsilon, \qquad m_{y_i,j}(x_i+\delta) \;\ge\; a_j^{(i)}\delta + b_j^{(i)}, \qquad (3)$$
where $a_j^{(i)}$ and $b_j^{(i)}$ are new coefficients and biases in the linear bound, and $x_i$ does not appear on the RHS as it is fixed. In the next section, we will lower bound the worst-case accuracy Eq. (1) by solving an MILP problem based on Eq. (3).
3.3 AN MILP FORMULATION TO LOWER BOUND THE WORST-CASE ACCURACY
In this section, we use the linear bounds in Eq. (3) to compute a lower bound for the worst-case accuracy in Eq. (1). Specifically, by replacing each $m_{y_i,j}$ in Eq. (1) with its lower bound from Eq. (3), we lower bound Eq. (1) by solving the following problem:
$$\text{minimize}\quad \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\ne y_i}\big\{a_j^{(i)}\delta + b_j^{(i)}\big\} > 0\Big) \quad \text{s.t.}\ \|\delta\|_\infty \le \epsilon. \qquad (4)$$
Now, we show that Eq. (4) can be rewritten into an MILP formulation: Theorem 1. Problem Eq. (4) is equivalent to the following MILP problem:
$$
\begin{aligned}
\text{minimize}\quad & \vartheta \\
\text{s.t.}\quad & \vartheta = \frac{1}{n}\sum_{i=1}^{n} q^{(i)}, & (5)\\
& \forall i \in [n],\; q^{(i)} \in \{0,1\},\quad -\tau\,(1-q^{(i)}) \;\le\; \sum_{j\ne y_i}\big(a_j^{(i)}\delta + b_j^{(i)}\big)\,s_j^{(i)} \;\le\; \tau\, q^{(i)}, & (6)\\
& \forall i \in [n],\;\forall j\ne y_i,\; s_j^{(i)} \in \{0,1\},\quad \sum_{j\ne y_i} s_j^{(i)} = 1, & (7)\\
& \forall i \in [n],\;\forall j_1\ne y_i,\;\forall j_2\ne y_i,\quad \big(a_{j_1}^{(i)}\delta + b_{j_1}^{(i)}\big)\,s_{j_1}^{(i)} - \tau\,(1-s_{j_1}^{(i)}) \;\le\; a_{j_2}^{(i)}\delta + b_{j_2}^{(i)}, & (8)\\
& \|\delta\|_\infty \le \epsilon,
\end{aligned}
$$
where $\tau \ge \max_{i\in[n]} \sum_{j\ne y_i} \big|a_j^{(i)}\delta + b_j^{(i)}\big|$ is a sufficiently large constant.
In Theorem 1, given a universal perturbation $\delta$, for the $i$-th example, the integer variable $q^{(i)} \in \{0,1\}$ denotes whether the model is certifiably correct on this example based on linear bounds from Eq. (3), and the certified accuracy on the whole batch can be computed as Eq. (5). The model is certifiably correct on the $i$-th example when $m_{y_i,j}(x_i+\delta) \ge a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for all $j \ne y_i$. We use an integer variable $s_j^{(i)} \in \{0,1\}$ to denote whether class $j$ is the hardest among all $j \ne y_i$ under the certification, i.e., $\forall j' \ne y_i,\; a_j^{(i)}\delta + b_j^{(i)} \le a_{j'}^{(i)}\delta + b_{j'}^{(i)}$ holds, which is enforced by Eq. (8). We require each example to have exactly one hardest class $j$ with $s_j^{(i)} = 1$ (see Eq. (7)); in case there are multiple classes with an equal lower bound on the margin function, it is valid to treat any of them as the hardest. Then we only need to check whether $a_j^{(i)}\delta + b_j^{(i)} > 0$ holds for the hardest class $j$ with $s_j^{(i)} = 1$, or equivalently $\sum_{j\ne y_i}(a_j^{(i)}\delta + b_j^{(i)})\,s_j^{(i)} > 0$. In Eq. (6), as $\tau$ is sufficiently large, only $\sum_{j\ne y_i}(a_j^{(i)}\delta + b_j^{(i)})\,s_j^{(i)} \ge 0$ is effectively required when $q^{(i)} = 1$, and $\sum_{j\ne y_i}(a_j^{(i)}\delta + b_j^{(i)})\,s_j^{(i)} \le 0$ is required when $q^{(i)} = 0$. Note that if exactly $\sum_{j\ne y_i}(a_j^{(i)}\delta + b_j^{(i)})\,s_j^{(i)} = 0$ happens, $q^{(i)} = 0$ will be taken by the MILP due to the minimization objective, and thus it is still compatible with our goal of checking $a_j^{(i)}\delta + b_j^{(i)} > 0$. Overall, the MILP formulation minimizes the certified accuracy over all possible universal perturbations $\delta$ ($\|\delta\|_\infty \le \epsilon$), to finally produce a lower bound for Eq. (1). We formally prove this theorem in Appendix A.1, and we use Gurobi (Bixby, 2007) to solve the MILP.
Although it is possible to solve the whole certification algorithm through MILP (Tjeng et al., 2017), it will be computationally prohibitive. Even for very small networks with thousands of neurons, the number of integer variables in their MILP formulation will be proportional to the number of neurons. In contrast, by computing linear bounds first before solving MILP, the number of integer variables in our formulation is only proportional to the number of samples in a batch and the number of classes, and it does not depend on the size of the network, which makes it feasible in practice.
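For concreteness, the following is a minimal sketch (ours, not the authors' released code) of solving this optimization with Gurobi's Python interface, which the paper reports using. It assumes the linear bounds of Eq. (3) are already available as hypothetical arrays A[i] (one row per incorrect class) and B[i] for each sample, and it uses a big-M encoding that is equivalent in effect to the formulation of Theorem 1 rather than reproducing its exact constraint set. The returned objective is a certified lower bound on the worst-case batch accuracy under any shared ℓ∞-bounded perturbation.

```python
# Hedged sketch of the MILP certification; assumes gurobipy is available.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def certify_worst_case_accuracy(A, B, eps):
    """A[i]: (K-1, d) array, B[i]: (K-1,) array with margin lower bounds
    A[i] @ delta + B[i] valid for all ||delta||_inf <= eps (Eq. (3))."""
    n, d = len(A), A[0].shape[1]
    m = gp.Model("up_cert")
    m.Params.OutputFlag = 0
    delta = m.addVars(d, lb=-eps, ub=eps, name="delta")   # the shared universal perturbation
    q = m.addVars(n, vtype=GRB.BINARY, name="q")          # q[i] = 1: sample i certifiably correct
    for i in range(n):
        r = m.addVars(A[i].shape[0], vtype=GRB.BINARY)    # r[j] = 1 marks a non-positive margin bound
        for j in range(A[i].shape[0]):
            margin = gp.quicksum(float(A[i][j, k]) * delta[k] for k in range(d)) + float(B[i][j])
            big_m = eps * float(np.abs(A[i][j]).sum()) + float(B[i][j])  # max of the bound over the ball
            m.addConstr(margin <= big_m * (1 - r[j]))     # r[j] = 1 forces this margin bound <= 0
        # q[i] can be 0 only if some margin lower bound can be made non-positive by delta.
        m.addConstr(q[i] + gp.quicksum(r[j] for j in range(A[i].shape[0])) >= 1)
    m.setObjective(gp.quicksum(q[i] for i in range(n)), GRB.MINIMIZE)
    m.optimize()
    return m.ObjVal / n   # certified lower bound on worst-case batch accuracy
```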
4 GENERALIZATION OF UNIVERSAL PERTURBATION
In the previous section, we proposed our robustness certification method against UPs. Note that the certification results are only guaranteed for the given batch of samples till now. In this section, we study how the certified accuracy computed on a batch approximates the certified accuracy computed on the entire data distribution.
Let $z^{(i)}$ be a random sample drawn from a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, which is endowed with a $\sigma$-algebra $\mathcal{F}$ and a probability measure $\mathbb{P}$. A dataset $D_n \triangleq \{z^{(1)}, \dots, z^{(n)}\}$ consists of $n$ observations drawn independently from $\Omega$ according to $\mathbb{P}$; equivalently, it can be considered as a random point in $(\Omega^n, \mathcal{F}^n, \mathbb{P}^n)$, which is the $n$-fold Cartesian product of $\Omega$ equipped with the product $\sigma$-algebra $\mathcal{F}^n$ and the product measure $\mathbb{P}^n = \mathbb{P} \times \cdots \times \mathbb{P}$ ($n$ times). Let $\Delta$ denote the $\ell_\infty$ ball that contains all allowable perturbations, $\Delta = \{\delta : \|\delta\|_\infty \le \epsilon\}$, with radius $\epsilon$. And let $\mathcal{B} : \Omega \to \mathbb{R}^{(d+1)K}$ be a linear bound generation procedure; for each $z = (x, y)$, it returns the parameters $\{a_j, b_j\}_{j\ne y}$ of the linear lower bounds on the margins, i.e., $m_{y,j}(x+\delta) \ge a_j(x+\delta) + b_j$. In the proposed framework, $\mathcal{B}$ is instantiated to be auto LiRPA (Xu et al., 2020a). Let $\mathcal{A}_n : \mathbb{R}^{(d+1)Kn} \to \Delta$ denote the MILP in Eq. (4), which returns a perturbation $\delta$ given the linear bounds on the margins. The overall certification procedure is the composition of $\mathcal{A}_n$ and $\mathcal{B}$, denoted by $G = \mathcal{A}_n \circ \mathcal{B} \circ \cdots \circ \mathcal{B}$ ($n$ times) $\triangleq \mathcal{A}_n \circ \mathcal{B}^{\circ n}$.
For every data sample $z = (x, y) \in \Omega$, we define the set
$$\Delta^{\mathcal{B}}_z := \Big\{\delta \in \Delta : \min_{j\ne y}\{a_j\delta + b_j\} > 0\Big\}$$
as the set of perturbations such that the margin between the ground-truth class and any other class is certifiably positive according to the linear bounds provided by $\mathcal{B}$, i.e., the model is certifiably robust to any perturbation in this set, but it is still possible for the model to be robust to a perturbation $\delta \notin \Delta^{\mathcal{B}}_z$. Note that the dependence of the set on $\mathcal{B}$ has been made explicit because $a_j, b_j$ depend on $\mathcal{B}$. Similarly, we define the set
$$\tilde\Delta_z := \Big\{\delta \in \Delta : \min_{j\ne y}\{m_{y,j}(x+\delta)\} > 0\Big\} \qquad (9)$$
as the set of all perturbations that are incapable of fooling the given model $f$, i.e., the data $z$ is actually robust to any perturbation in this set. Note that $\tilde\Delta_z$ is a superset of $\Delta^{\mathcal{B}}_z$, and unlike $\Delta^{\mathcal{B}}_z$, it does not depend on the linear bound generation procedure. We make the following definitions: Definition 1. The certified robust probability (CRP) of a given perturbation $\delta \in \Delta$ based on a linear bound generation procedure $\mathcal{B}$ is defined as
$$V^{\mathcal{B}}(\delta) \triangleq \mathbb{P}\big(z \in \Omega : \delta \in \Delta^{\mathcal{B}}_z\big). \qquad (10)$$
The actual robust probability (ARP) of a given perturbation $\delta \in \Delta$ is defined as
$$U(\delta) \triangleq \mathbb{P}\big(z \in \Omega : \delta \in \tilde\Delta_z\big). \qquad (11)$$
The certified robust rate (CRR) of a perturbation $\delta \in \Delta$ on an evaluation dataset $D_n$ based on a linear bound generation procedure $\mathcal{B}$ is
$$\hat V^{\mathcal{B}}(\delta; D_n) \triangleq \frac{1}{n}\sum_{z\in D_n}\mathbb{1}\big(\delta \in \Delta^{\mathcal{B}}_z\big). \qquad (12)$$
Equivalently, we can write the objective of Eq. (4) as $\min_{\delta\in\Delta}\hat V^{\mathcal{B}}(\delta; D_n)$. $\Delta^{\mathcal{B}}_z$ can be equivalently defined by the existence of a binary variable as in the MILP formulation in Theorem 1 and is thus nonconvex in general. In the following, we write $\hat V^{\mathcal{B}}(\delta)$ for $\hat V^{\mathcal{B}}(\delta; D_n)$ when the evaluation dataset is $D_n$, for notational simplicity. Note that $V^{\mathcal{B}}(\delta) \le U(\delta)$ for any $\delta \in \Delta$, and the equality is attained when the equality in (3) is attained, i.e., the lower bound generated by $\mathcal{B}$ exactly matches the actual margin at any $\delta$. Now we present the following theorem, which estimates the value of the ARP based on the CRR computed from a batch of random samples.
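As a small illustration (ours, not from the paper's code), the empirical CRR of Eq. (12) for one fixed perturbation — e.g., a perturbation produced by a UAP attack, as used for validation in Section 5.2 — can be evaluated directly from the linear bounds; the certification itself additionally minimizes this quantity over δ via the MILP of Section 3.3.

```python
# Minimal sketch of Eq. (12) for a fixed candidate universal perturbation delta,
# given margin bounds A[i] (shape (K-1, d)) and B[i] (shape (K-1,)) per sample.
import numpy as np

def certified_robust_rate(A, B, delta):
    hits = 0
    for a_i, b_i in zip(A, B):
        margins_lb = a_i @ delta + b_i        # lower bounds on all margins at this delta
        hits += int(margins_lb.min() > 0)     # certified correct iff every margin bound is positive
    return hits / len(A)
```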
Theorem 2 ($(1-\xi)$-probable certification for the ARP). Given $G = \mathcal{A}_n \circ \mathcal{B}^{\circ n}$ and $0 < \xi < 1$, for any $\delta$, it holds that
$$\mathbb{P}^n\Big(U(\delta) \ge \min_{\delta\in\Delta}\hat V^{\mathcal{B}}(\delta; D_n) + U(\delta^*) - V^{\mathcal{B}}(\delta^*) - t^*(\xi, n)\Big) \ge 1 - \xi, \qquad (13)$$
where $t^*(\xi, n)$ is the root of the equation $(1+4t)\ln(1+4t) - 4t = \frac{4}{n}\ln(1/\xi)$, $t^*(\xi, n)$ is a monotonically decreasing function in $n$ and $\xi$, and $\delta^* = \arg\min_{\delta} U(\delta)$. Moreover, we have that
$$\mathbb{P}^n\Big(U(\delta) \ge \min_{\delta\in\Delta}\hat V^{\mathcal{B}}(\delta; D_n) - t^*(\xi, n)\Big) \ge 1 - \xi. \qquad (14)$$
The proof can be found in Appendix A.2. Both bounds are interesting to interpret. The bound in Eq. (13) shows that the discrepancy between the ARP of any perturbation and the CRR (i.e., the certified accuracy on a random batch) depends on $U(\delta^*) - V^{\mathcal{B}}(\delta^*)$ and $t^*(\xi, n)$. Given the trained model and the underlying data distribution, $\delta^*$ is fixed; hence, the term $U(\delta^*) - V^{\mathcal{B}}(\delta^*)$ depends on the tightness of the linear bounds produced by $\mathcal{B}$. The tighter the bounds $\mathcal{B}$ can provide, the smaller the difference between $U(\delta^*)$ and $V^{\mathcal{B}}(\delta^*)$. This bound suggests that plugging tighter linear bound generation techniques into our certification framework can potentially give rise to a better approximation error. It is also interesting to note that the approximation error of the proposed certification framework $G = \mathcal{A}_n \circ \mathcal{B}^{\circ n}$ exclusively depends on $\mathcal{B}$, not $\mathcal{A}_n$. This is because $\mathcal{A}_n$ always returns the optimal solution to the MILP, thereby not introducing any additional error. The second term $t^*(\xi, n)$ depends on the number of samples used for certification, and it vanishes as $n$ grows (illustrated in Figure 1). The second bound (Eq. (14)) utilizes the fact that $U(\delta^*) - V^{\mathcal{B}}(\delta^*) \ge 0$, and is more relaxed but more convenient than the first bound (Eq. (13)) because the lower bound of the ARP can be calculated given the certification results on a batch, the number of samples in the batch, and the confidence level $1-\xi$. In Section 5.2, we will showcase the estimation of the ARP using this bound.
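As a sketch of how Eq. (14) can be used in practice (our illustration; the CRR value below is a hypothetical placeholder), $t^*(\xi, n)$ can be obtained with a standard root finder and subtracted from the certified accuracy of a batch:

```python
# Minimal sketch (not the authors' code) of evaluating the bound in Eq. (14):
# solve (1 + 4t) ln(1 + 4t) - 4t = (4/n) ln(1/xi) for t, then lower bound the
# ARP by CRR - t*(xi, n), which holds with probability at least 1 - xi.
import numpy as np
from scipy.optimize import brentq

def t_star(xi, n):
    rhs = 4.0 / n * np.log(1.0 / xi)
    f = lambda t: (1 + 4 * t) * np.log(1 + 4 * t) - 4 * t - rhs
    return brentq(f, 1e-12, 10.0)   # the root is bracketed here for the sample sizes considered

crr = 0.88        # hypothetical certified accuracy from the MILP on a batch of n samples
xi, n = 0.1, 1000
print(crr - t_star(xi, n))   # high-probability lower bound on the ARP
```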
5 EXPERIMENT
5.1 EXPERIMENTAL SETUP
For evaluating the certification, we consider two benchmark datasets, MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2009), widely adopted in existing works. We adopt 5 model structures from existing works (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022; Zhang et al., 2022a): Conv-small, Conv-4-layer, Conv-big on MNIST, and ResNet-2B, ResNet-4B on CIFAR-10, with details in Appendix B. We use the CRR and attack-ACC (accuracy under an attack) as the metrics. All the results are averaged over three runs with different random seeds on 100 random samples for each dataset. Further experimental details can be found in Appendix B.
5.2 EVALUATION
Comparing to existing robustness certification We first focus on evaluating our certification results compared to existing robustness certification for sample-wise perturbation. There are several competitive frameworks such as Singh et al. (2019b); Bunel et al. (2020); Henriksen & Lomuscio (2021), and we compare with auto LiRPA (Xu et al., 2020a) specifically for its state-of-the-art
performance (Bak et al., 2021). We consider models from both natural training and adversarial training. For MNIST, we evaluate the two certification methods on naturally trained models and on PGD-32 (PGD with ℓ∞-norm 32/255) adversarially trained models (Madry et al., 2018). Table 1 details the results on naturally trained MNIST models. Our method provides much tighter bounds than sample-wise certification results across all the settings. Table 2 illustrates results on PGD-32 trained MNIST models. With adversarial training, we observe that the certified robustness of the models against UPs also largely increases compared to the naturally trained models in Table 1. Our certification results are still much tighter than sample-wise results, especially under settings with larger perturbations. On CIFAR-10, we only evaluate the results on adversarially trained models (PGD-8 for ResNet-2B and ResNet-4B), as the naturally trained model is severely susceptible to perturbations on CIFAR-10 (see Figure 5) even with ϵ = 1/255. To sum up, our method can provide tighter robustness certification than existing sample-wise methods.
Estimation of the lower bound of ARP Figure 2 illustrates the application of Theorem 2 with CRR. We use the naturally trained Conv-4-layer on MNIST as an example and set ϵ = 6/255. We demonstrate the estimation of the 0.9-probable certification for the lower bound of the ARP by setting ξ = 0.1. From Figure 2, we can see that the empirical results, CRR, can be tighter when more samples are considered in certification. Incorporating more samples also makes the estimated ARP much closer to the CRR (as t*(ξ, n) is smaller). Such an observation shows that when incorporating more samples in certification, the empirical results better reflect the actual robustness of the whole population.
In particular, when using 1000 samples, the result can be interpreted as the ARP is larger than 84.73% with at least a 90% probability.
Validating with UAP attacks We then validate the robustness certification results with UAP attacks, as CRR should lower bound the attack-ACCs. We consider three SOTA UAP attacks: Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020), detailed in Appendix B. Figure 3 compares CRR and attack-ACCs on PGD-32 trained MNIST models. As shown in Figure 3, CRRs are indeed lower than all the attack-ACCs, as expected.
Validating with backdoor attacks We also validate whether CRR still lower bounds the attack-ACCs in backdoor attacks. We consider two backdoor triggers, namely a blended trigger (Chen et al., 2017) with a small ℓ∞-norm (∥δ∥∞ = 5/255, referred to as the stealthy trigger), and BadNets (Gu et al., 2019) (∥δ∥∞ = 255/255). All the attacks utilize the same poison ratio, 20%, following existing works (Zeng et al., 2021). Visual examples of the poisoned sample, the triggers, and the certification results are listed in Figure 4. Under the setting of the stealthy blended backdoor, we find that the CRR drops dramatically before reaching the trigger's norm (∥δ∥∞ = 5/255) compared to the same model trained on clean MNIST. This observation verifies the correctness of CRR and its potential to reveal stealthy ℓ∞-bounded backdoor attacks in the current trend of backdoor development with smaller ℓ∞-norm constraints, e.g., Zeng et al. (2022b); Zhao et al. (2020). However, assuming an ℓp norm bound on the backdoor triggers is not widely accepted in traditional backdoor settings. Thus, we also present the results of BadNets (with ∥δ∥∞ = 255/255) in Figure 4. We consider the backdoor model trained from scratch or fine-tuned from the clean model. The CRR still lower-bounds the attack-ACC up to the attack's deployed ℓ∞ bound of the trigger. However, as the trigger has a large ℓ∞ norm, the CRRs of poisoned models are no different from those of the clean model and thus not that useful. Nevertheless, in Section 5.3, we show a simple twist of the certification framework to help reveal backdoors' existence.
5.3 IMPLICATIONS OF ROBUSTNESS CERTIFICATION AGAINST UPS
Now we explore the potential implications of our robustness certification against UPs. We focus on 3 case studies on model structure comparison, UAP defenses comparison, and backdoor detection.
Comparing model structures One implication of robustness certification regarding UPs is to compare different model structures and training strategies regarding the certified robustness against UPs. Figure 5 depicts the certification results of all the considered model structures with different training settings on MNIST and CIFAR-10. We consider both naturally trained and PGD trained models with different ℓ∞ perturbation norms. In Figure 5 (a) on MNIST, we find that the largest model, Conv-big, shows the worst certified robustness against UPs. But the smallest model Conv-small's CRR is higher than that of Conv-4-layer under the naturally trained setting, PGD-8, and PGD-16, but not PGD-32. The only difference between Conv-small and Conv-4-layer is that Conv-4-layer uses a larger padding step, which results in a slightly larger hidden layer (see Appendix B). Based on this observation, there is an interesting trade-off between model size and certified robustness against UPs: a slightly larger structure can help the model obtain better certified robustness when adopting adversarial training, potentially due to increased model capacity. Such an observation can be further illustrated in Figure 5 (b). Specifically, ResNet-2B's CRR drops to random guessing when using PGD-16, while ResNet-4B can still maintain a certain level of CRR. But even larger models (Figure 5 (a)) have worse certified robustness, potentially due to looser certified bounds.
Implication to UAP defenses Another implication of the CRR is to compare existing UAP defenses regarding their efficacy. We consider three types of defenses and five different defenses in total: FGSM and PGD sample-wise adversarial training (Goodfellow et al., 2014; Madry et al., 2017;
Wong et al., 2019); universal adversarial training (UAT) with FGSM or PGD synthesizing UPs (Shafahi et al., 2020); sample-wise certified defense with Interval Bound Propagation (IBP) training (Gowal et al., 2018; Mirman et al., 2018). The defended models are further evaluated with UAP attacks and certification. The results with a small perturbation radius ∥δ∥∞ = 16/255 are shown in Table 4. Additional results with a larger perturbation radius (∥δ∥∞ = 80/255) are in Table 7, Appendix C. We use the row titled "Worst" to record the maximum accuracy drop using UAP attacks compared to clean accuracy. Surprisingly, in Table 4, we find the CRR of models trained with UAT is worse than their sample-wise adversarial training counterparts (i.e., UAT-PGD results are worse than PGD). However, in the case of a larger perturbation radius (Table 7, Appendix C), the UAT-trained models can achieve higher CRR than the sample-wise counterparts. Such an observation indicates an underexplored trade-off between perturbation radius and UAP defense method on CRR. The CRR result from the IBP-trained model is much tighter than others, as IBP directly optimizes over an objective for certified robustness and tightens the certified bounds for all the neurons. Moreover, CRR is also aligned with the worst attacked accuracy drop and can be an indicator for comparing different settings of UAP defenses.
Implication to backdoor defenses We evaluated the effectiveness of the CRR in revealing potential backdoors in the above section, but so far the effectiveness is limited to triggers with small perturbations. This section presents a simple twist on the certification framework by teaming it up with adversarial training (PGD-16). We depict the average class-wise certification results on 10 ResNet-4B models trained with different random seeds over different BadNets poison ratios in Figure 4. Based on the results, we find the certification can reliably reveal the targeted label and justify how powerful the backdoor attack is (i.e., the CRR is aligned with the poison ratio used). Additional results on the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a) are listed in Appendix C, which share similar observations. The reason for the successful identification is that, naturally, adversarial training forces the model to learn more from the reliable features and thus makes standard backdoors stand out from benign features of the data (i.e., easier to be learned by the model), as also discussed in Weng et al. (2020). Thus, after adversarially training a model with a large perturbation radius, the model would likely engrave the trigger and thus have a high CRR only on the target label. The CRR computed by our proposed method provides an intriguing point of view to determine the attack's strength (i.e., poison ratio).
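A hedged sketch of this identification procedure (our pseudocode; adv_train and certify_crr stand for the PGD-16 adversarial training and the class-wise MILP certification described in Appendix B, and are not defined here) could look as follows:

```python
# Hedged sketch of class-wise backdoor target identification; adv_train and
# certify_crr are assumed helpers, and the probe inputs are random noise,
# so no clean validation data is required.
import numpy as np

def identify_target_class(train_set, num_classes, input_shape,
                          num_seeds=10, eps=16 / 255, n_probe=100):
    crr = np.zeros((num_seeds, num_classes))
    for s in range(num_seeds):
        model = adv_train(train_set, seed=s)              # PGD-16 adversarial training (assumed helper)
        probes = np.random.rand(n_probe, *input_shape)    # random-noise probe inputs
        for c in range(num_classes):
            # class-wise CRR: fraction of probes certifiably kept in class c under any UP of radius eps
            crr[s, c] = certify_crr(model, probes, label=c, eps=eps)   # assumed helper (the MILP above)
    mean_crr = crr.mean(axis=0)
    return int(mean_crr.argmax()), mean_crr   # the class with an outlying CRR is the suspected target
```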
6 CONCLUSION
In this work, we present the first focused study on certifying neural networks' robustness against UPs. In contrast to previous robustness certification works that focused on sample-wise perturbations, we formulate the certification problem against UPs by emphasizing that a universal perturbation is shared across different samples. We propose a combination of linear relaxation-based bounds and MILP to solve the problem. We also present a theoretical analysis framework to estimate the certification result for the entire population based on results from a batch of random samples. Extensive experiments reveal that our certification provides tighter results than directly applying existing sample-wise robustness certifications. In addition, we discuss and demonstrate how robustness certification against UPs can facilitate comparing certified robustness between different model structures and defense methods and provide reliable backdoor detection.
ACKNOWLEDGEMENT
This work is partially funded by Sony AI. This work is also supported in part by NSF under IIS2008173, IIS-2048280, and by Army Research Laboratory under W911NF-20-2-0158. RJ and the ReDS lab appreciate the support of The Amazon - Virginia Tech Initiative for Efficient and Robust Machine Learning and the Cisco Award. YZ and ZS are both supported by the Amazon Fellowship.
A PROOFS
A.1 PROOF OF THEOREM 1
Let ϑ̂ be the solution of the MILP problem in the theorem, and let ϑ̃ be the solution to Eq. (4). Theorem 1 states that ϑ̂ = ϑ̃. We formally prove the equivalence below.
Proof. We first show that $\hat\vartheta \le \tilde\vartheta$. In Eq. (4), there exists some $\tilde\delta$ such that
$$\tilde\delta = \operatorname*{arg\,min}_{\|\delta\|_\infty \le \epsilon}\; \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\ne y_i}\big\{a_j^{(i)}\delta + b_j^{(i)}\big\} > 0\Big).$$
Then, for every $i \in [n]$, take the following values for the variables in the MILP formulation:
$$q^{(i)} = \mathbb{1}\Big(\min_{j\ne y_i}\big\{a_j^{(i)}\tilde\delta + b_j^{(i)}\big\} > 0\Big), \qquad \vartheta = \tilde\vartheta = \frac{1}{n}\sum_{i=1}^{n} q^{(i)},$$
$$\forall j \ne y_i,\quad s_j^{(i)} = \mathbb{1}(j = j'), \quad\text{where } j' = \operatorname*{arg\,min}_{j\ne y_i}\; a_j^{(i)}\tilde\delta + b_j^{(i)},$$
and it is easy to see that the values of these variables satisfy all the constraints in the MILP problem. Thus the result of the minimization in the MILP is no larger than $\tilde\vartheta$, i.e., $\hat\vartheta \le \tilde\vartheta$.
We now show that $\tilde\vartheta \le \hat\vartheta$. We use $\hat\delta$, $\hat q$, $\hat s$ to denote the values of the $\delta$, $q$, $s$ variables in the solution of the MILP. For every $i \in [n]$, Eq. (7) ensures that there exists exactly one $\hat\jmath$ ($\hat\jmath \ne y_i$) with $\hat s^{(i)}_{\hat\jmath} = 1$, and Eq. (8) ensures that for all $j \ne y_i$, $a^{(i)}_{\hat\jmath}\hat\delta + b^{(i)}_{\hat\jmath} \le a^{(i)}_{j}\hat\delta + b^{(i)}_{j}$ holds. Thus
$$\sum_{j\ne y_i}\big(a_j^{(i)}\hat\delta + b_j^{(i)}\big)\hat s_j^{(i)} = \min_{j\ne y_i}\big\{a_j^{(i)}\hat\delta + b_j^{(i)}\big\}.$$
According to Eq. (6), if $\hat q^{(i)} = 1$, then $\sum_{j\ne y_i}(a_j^{(i)}\hat\delta + b_j^{(i)})\hat s_j^{(i)} \ge 0$ holds. In case that $\sum_{j\ne y_i}(a_j^{(i)}\hat\delta + b_j^{(i)})\hat s_j^{(i)} = 0$, Eq. (6) also holds with $\hat q^{(i)} = 0$, and due to the minimization objective of the MILP, $\hat q^{(i)} = 0$ instead of $\hat q^{(i)} = 1$ will be taken. Thus $\sum_{j\ne y_i}(a_j^{(i)}\hat\delta + b_j^{(i)})\hat s_j^{(i)} > 0$ strictly holds when $\hat q^{(i)} = 1$. And if $\hat q^{(i)} = 0$, then $\sum_{j\ne y_i}(a_j^{(i)}\hat\delta + b_j^{(i)})\hat s_j^{(i)} \le 0$ holds. Thus
$$\hat q^{(i)} = \mathbb{1}\Big(\sum_{j\ne y_i}\big(a_j^{(i)}\hat\delta + b_j^{(i)}\big)\hat s_j^{(i)} > 0\Big) = \mathbb{1}\Big(\min_{j\ne y_i}\big\{a_j^{(i)}\hat\delta + b_j^{(i)}\big\} > 0\Big).$$
Thereby
$$\hat\vartheta = \frac{1}{n}\sum_{i=1}^{n}\hat q^{(i)} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\Big(\min_{j\ne y_i}\big\{a_j^{(i)}\hat\delta + b_j^{(i)}\big\} > 0\Big),$$
and then the result of Eq. (4) is no larger than $\hat\vartheta$, i.e., $\tilde\vartheta \le \hat\vartheta$.
Hence $\hat\vartheta = \tilde\vartheta$ is proved.
A.2 PROOF OF THEOREM 2
Let $\delta^* = \arg\min_{\delta\in\Delta} U(\delta)$ be the optimal universal perturbation that minimizes the ARP, let $\delta_n$ be the value returned by $G(D_n)$, and let $\tilde\delta = \arg\min_{\delta\in\Delta} V^{\mathcal{B}}(\delta)$ be the perturbation that minimizes the CRP. We introduce the following lemma:
Lemma 1. Given $\mathcal{A}_n$, it holds that
$$\mathbb{P}^n\big(\hat V^{\mathcal{B}}(\delta^*) - V^{\mathcal{B}}(\delta^*) > t^*(\xi, n)\big) \le \xi, \qquad (15)$$
where $t^*(\xi, n)$ is the root of the equation $(1+4t)\ln(1+4t) - 4t = \frac{4}{n}\ln(1/\xi)$.
Proof. Let $q^{(i)} = \mathbb{1}(\delta^* \in \Delta^{\mathcal{B}}_{z^{(i)}})$, which can also be interpreted as $\mathbb{1}\big(\min_{j\ne y_i}\{a_j^{(i)}\delta^* + b_j^{(i)}\} > 0\big)$. Then $\hat V^{\mathcal{B}}(\delta^*) = \frac{1}{n}\sum_{i=1}^n q^{(i)}$ and $V^{\mathcal{B}}(\delta^*) = \mathbb{E}\big[\frac{1}{n}\sum_{i=1}^n q^{(i)}\big] = \mathbb{E}[q^{(i)}]$. Let $\sigma^2$ denote the variance of $q^{(i)}$. Since $q^{(i)}$ is a binary random variable, we have that $\sigma^2 \le 1/4$. Let $h(u) = (1+u)\ln(1+u) - u$. For any $t > 0$, we have that
$$\mathbb{P}^n\Big(\frac{1}{n}\sum_{i=1}^n q^{(i)} - \mathbb{E}[q^{(i)}] > t\Big) \;\le\; \exp\Big(-n\sigma^2 h\big(\tfrac{t}{\sigma^2}\big)\Big) \;\le\; \exp\Big(-\tfrac{n}{4}\, h(4t)\Big), \qquad (16,\,17)$$
where the first inequality is a direct application of Bennett's inequality (Bennett, 1962), and the second inequality is due to the fact that $n\sigma^2 h(\tfrac{t}{\sigma^2})$ is a monotonically decreasing function of $\sigma^2$. Let $t(\xi, n)$ denote the root of $\exp(-\tfrac{n}{4}h(4t)) = \xi$. Then it follows that $\mathbb{P}^n\big(\frac{1}{n}\sum_{i=1}^n q^{(i)} - \mathbb{E}[q^{(i)}] > t(\xi, n)\big) \le \xi$.
Then we prove Theorem 2 to certify the robustness of a classifier against the worst-case attack $\delta^*$.
Proof. We use the following relations: for any $\delta \in \Delta$,
$$U(\delta) \;\ge\; \min_{\delta\in\Delta} U(\delta) = U(\delta^*) = \underbrace{U(\delta^*) - V^{\mathcal{B}}(\delta^*)}_{(i)} + \underbrace{V^{\mathcal{B}}(\delta^*) - \hat V^{\mathcal{B}}(\delta^*)}_{(ii)} + \underbrace{\hat V^{\mathcal{B}}(\delta^*) - \hat V^{\mathcal{B}}(\delta_n)}_{(iii)} + \hat V^{\mathcal{B}}(\delta_n) \;\ge\; (i) + (ii) + \hat V^{\mathcal{B}}(\delta_n), \qquad (18)$$
where (ii) can be bounded by applying the concentration inequality in Lemma 1, and $(iii) \ge 0$ due to the optimality of $\delta_n = \arg\min_{\delta\in\Delta}\hat V^{\mathcal{B}}(\delta)$. Combining these bounds yields Theorem 2.
B FURTHER DETAILS ON EXPERIMENTAL SETTINGS
We use one server equipped with a total of 8 RTX A6000 GPUs as the hardware platform. PyTorch (Paszke et al., 2019) is adopted as the implementation framework. We detail the model structures used in our experiment in Table 6. All of the model structures used in this work were also considered in other existing robustness certification works as standard set-ups: Conv-small, Conv-4-layer, Conv-big on MNIST (Singh et al., 2019a; Tjandraatmadja et al., 2020; Wang et al., 2021; Müller et al., 2022), and ResNet-2B, ResNet-4B on CIFAR-10 (Zhang et al., 2022a). We use Adadelta (Zeiler, 2012) as the optimizer with a learning rate set to 0.1 for all the model training processes (including the adversarial training for the model updating step as well). For MNIST models, we train each model for 60 epochs. For CIFAR-10 models, we train each model for 500 epochs to ensure full convergence. For the adversarial training adopted in the main text, the number of steps in PGD attacks is 7, and the step size for PGD is set as ϵ/4. For IBP training, we use the implementation in Shi et al. (2021).
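For reference, a minimal PyTorch sketch (not the authors' code; the random start for the perturbation is our assumption) of the 7-step PGD attack with step size ϵ/4 described above is:

```python
# Hedged sketch of the PGD inner loop used for adversarial training.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=7):
    alpha = eps / 4
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
        grad, = torch.autograd.grad(loss, delta)
        # ascend on the loss and project back into the l_inf ball of radius eps
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta.detach(), 0, 1)
```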
Now, we detail the UAP attacks considered in the experiments for validating the certification results, namely Adv-UAP (Li et al., 2022), Cos-UAP (Zhang et al., 2021a), and DF-UAP (Zhang et al., 2020). These attacks are distinguished by the design of their synthesis procedures. Specifically, Adv-UAP synthesizes and generates adversarial examples for each input before synthesizing the UAP, which has been shown to be more effective in finding stronger UAPs. Cos-UAP produces the UAP by reducing the cosine similarity between the original output logits and the disturbed logits; DF-UAP employs a loss similar to that of the C&W attack (Carlini & Wagner, 2017), which aims to reduce the distance between the ground-truth label's logit and the maximum logit of the rest.
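To make the synthesis procedure concrete, below is a hedged PyTorch sketch of a generic UAP synthesis loop; the three attacks above differ mainly in the per-sample loss, and a C&W-style margin loss (in the spirit of DF-UAP) is used here as a stand-in, so the details do not exactly match any of the cited attacks.

```python
# Hedged sketch of a generic UAP synthesis loop with a C&W-style per-sample loss.
import torch

def synthesize_uap(model, loader, eps, epochs=5, lr=0.01):
    sample_x, _ = next(iter(loader))
    delta = torch.zeros(sample_x.shape[1:], requires_grad=True)   # one shared perturbation
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits = model(torch.clamp(x + delta, 0, 1))
            true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
            other = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(dim=1).values
            loss = (true - other).clamp(min=0).mean()   # push the true logit below the best other logit
            opt.zero_grad(); loss.backward(); opt.step()
            delta.data.clamp_(-eps, eps)                # keep the UP inside the l_inf ball
    return delta.detach()
```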
Now we provide the detailed settings of the backdoor target-class identification in Section 5.3. For the threat model, we consider the scenario where the defender aims to determine if a backdoor attack resides in a given dataset, identify the target class, and justify how potent the attack is if there is an identified attack. We assume the defender has access to the training set to be inspected, with no additional clean validation data required. To conduct the instantiated case shown in Section 5.3, the defender adversarially trains (with PGD-16) 10 different models on the inspected training dataset and obtains the average CRR results in a class-wise manner. In particular, as we assume no additional clean validation data is required, we pass 100 random noise inputs through the certified models to obtain the results in Figures 6, 7, and 8.
C ADDITIONAL RESULTS
C.1 ADDITIONAL RESULTS ON UAP DEFENSES COMPARISON
Table 7 details the results of the UAP defense comparison under the large-norm setting (ϵ = 80/255). Note that all the adopted defenses incur the same expense. For large-norm settings, we find that only the certified-robustness training ends up with a CRR larger than 0. Apart from its actual effectiveness, as mentioned in the main text, the IBP-trained model also ends up with much tighter intermediate linear bounds (i.e., a and b are tighter). Even though our work can only return a positive CRR on the IBP-trained model, the certification results are still aligned with the actual attack results, as the IBP-trained model has stronger robustness than the other models in terms of the smallest drop in ACC.
C.2 ADDITIONAL RESULTS ON BACKDOOR TARGET-CLASS IDENTIFICATION
We now provide additional results on implementing the certification framework to identify the existence of backdoor attacks. In this section, the results provided are evaluated against the Smooth attack (Zeng et al., 2021) and the ℓ2-invisible attack (Li et al., 2020a). Figures 7 and 8 illustrate the results on the Smooth attack and the ℓ2-invisible attack, respectively. Based on the results, we find the certification can also reliably reveal the targeted label and justify how powerful the backdoor attack is for the Smooth attack and the ℓ2-invisible attack.
D BROADER IMPACT AND LIMITATIONS
D.1 UAP AND BACKDOOR ATTACKS
UAP attacks aim to synthesize a UP by accessing and analyzing the output of a trained neural network. Backdoor attacks aim to insert a predefined trigger into the neural network and ensure an effective attack without accessing and analyzing the output after the model is trained over the poisoned samples. Many existing works have found that these two parallel lines of work have interesting intersections. In particular, the formulation of UAP synthesis has also inspired, or has interesting counterparts in, backdoor attack or defense designs. For example, Li et al. (2020a); Zhang et al. (2021b) designed their backdoor trigger via a process similar to synthesizing a UAP using a trained model. Kolouri et al. (2020); Zeng et al. (2022a) adopted this intersection between UAP and backdoor attacks to identify backdoors or to conduct online removal of backdoors. If we view these two attack paradigms at inference time (i.e., with a trained model), then mitigation defenses and robustness evaluation tools for both attacks can be developed from the common perspective of general robustness to UPs.
D.2 LIMITATIONS
Unconstrained or Large ℓ∞-norm Attacks: Some UAP attacks are generated without specifying a constraint (Brown et al., 2017), and in most backdoor attacks, the inserted trigger does not have a constrained ℓ∞ norm. If the attack can have an unconstrained or very large ℓ∞ norm, only trivial certification results can be obtained from our certification. This limitation also commonly exists in state-of-the-art sample-wise certification methods (Wang et al., 2021; Ferrari et al., 2021). In fact, any certification procedure requires some constraints on potential perturbations and does not apply to unbounded perturbations. This open problem calls for the attention of future research.
Computational Cost: Supporting large models and large datasets can be computationally costly for our certification. Existing works for certifying pre-trained models (Wang et al., 2021; Ferrari et al., 2021) are also commonly limited to moderate-sized networks, and the cost of our method is lower bounded by the existing linear bound propagation frameworks that we use to obtain the linear bounds before solving the MILP problem. Scaling to larger networks, such as models for ImageNet (Deng et al., 2009), remains a challenging open problem. | 1. What is the focus of the paper regarding the certification of robustness to Universal Perturbations?
2. What are the strengths of the proposed method, particularly in its ability to tackle the issue of robustness certification to UPs?
3. What are the weaknesses of the paper, specifically regarding experimental settings and limitations in addressing large neural networks or datasets?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper looks at a critical but rarely studied problem: the certification of robustness to Universal Perturbations (UPs). In particular, UPs (e.g., universal adversarial noise, backdoor, or neural trojan attacks) have become alarming lines of threats that make it hard to use Machine Learning as a service safely and reliably.
The authors formulated and solved the robustness certification for UPs utilizing a method based on linear relaxation and Mixed Integer Linear Programming. An intriguing discussion on the difference between existing sample-wise certification and the certification against UPs was provided. The authors also theoretically analyzed the relationship between the certification result based on observed samples and the actual robustness of neural networks to UPs w.r.t. the underlying distribution. Multi-perspective evaluations of the tightness (compared with existing certification), soundness of the bound (real UAP and backdoor attacks), and potential implications (to existing attacks and defenses) are offered.
Strengths And Weaknesses
Strength:
The paper is generally well-written and easy to follow, with a well-designed structure;
The problem of robustness certification to UPs is of great importance, and detailed implications to model/defense comparison and backdoor detection are discussed with empirical results;
The proposed method, which combines MILP and linear relaxation to account for the sharing of the perturbation across samples, is intuitive and compelling; the tightness of the certification results is shown through a comprehensive comparison with existing certification techniques and actual attacks;
To the best of my knowledge, UP’s generalization analysis in this paper is the first of its kind effort, which also leads to an intriguing analysis of the error of certification results using the proposed robustness certification method.
Weakness:
Some settings of the experiment are unclear. For example, I think the settings for the implication to backdoor defenses (i.e., the backdoored-class identification) can be further improved with more details on the threat model and on how to use the proposed tool to detect existing backdoor attacks.
I can understand the difficulty of robustness certification for large neural networks or large datasets, especially since certifying against UPs requires computation over multiple samples (it seems a much larger computational graph must be handled as the network size and input size grow). I highly recommend the authors add a section discussing these limitations.
Clarity, Quality, Novelty And Reproducibility
The theoretical contribution in this paper is solid and original, and it should be of interest to researchers in the universal adversarial perturbation, backdoor, and robustness certification domains. The empirical evaluations take different perspectives into account and lead to some intriguing findings and implications of robustness certification against UPs.
ICLR | Title
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Abstract
In recent years, the adversarial vulnerability of deep neural networks (DNNs) has raised increasing attention. Among all the threat models, no-box attacks are the most practical but extremely challenging since they neither rely on any knowledge of the target model or a similar substitute model, nor access the dataset for training a new substitute model. Although a recent method has attempted such an attack in a loose sense, its performance is not good enough and the computational overhead of training is expensive. In this paper, we move a step forward and show the existence of a training-free adversarial perturbation under the no-box threat model, which can be successfully used to attack different DNNs in real time. Motivated by our observation that the high-frequency component (HFC) dominates in low-level features and plays a crucial role in classification, we attack an image mainly by manipulating its frequency components. Specifically, the perturbation is combined by the suppression of the original HFC and the adding of noisy HFC. We empirically and experimentally analyze the requirements of effective noisy HFC and show that it should be regionally homogeneous, repeating and dense. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our proposed no-box method. It attacks ten well-known models with a success rate of 98.13% on average, which outperforms state-of-the-art no-box attacks by 29.39%. Furthermore, our method is even competitive to mainstream transfer-based black-box attacks. Our code is available in our appendix.
1 INTRODUCTION
Deep neural networks (DNNs) are widely known to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015), i.e., a human-imperceptible perturbation can lead to misclassification. In adversarial machine learning, the term threat model defines the rules of the attack, such as the resources the attacker can access. Based on the threat model, the attacks are often divided into white-box attacks and black-box attacks. In the white-box threat model (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2018a), the attacker has full knowledge of a target model, such as the model weights and the whole training dataset. Recognizing the threat of these adversarial attacks, a model owner is unlikely to leak a model’s information to the public. Thus, the white-box attack is often used to evaluate the model robustness for revealing its weakest point (Madry et al., 2018a), but often not considered as a practical attack method (Chen et al., 2017). To this end, numerous works have investigated a more realistic threat model, where the attacker does not require full knowledge of the target model, i.e., the backpropagation on the target model is prohibited. This threat model is called black-box attack (Papernot et al., 2016; Tramèr et al., 2016; Papernot et al., 2017; Narodytska & Kasiviswanathan, 2017; Chen et al., 2017; Brendel et al., 2017; Dong et al., 2019b; Yan et al., 2019; Chen et al., 2020; Zhou et al., 2020). However, such a black-box threat model usually involves a major concern of being resource-intensive in terms of query cost and time. In real-world attack scenarios, even if we ignore such concerns, query-based black-box attack can still be infeasible, e.g., the model API is inaccessible to the attacker. Moreover, it might cause suspicion due to repeated queries to the model with almost the same adversarial image. To alleviate this issue, another line of black-box threat model (Dong et al., 2018; Xie et al., 2019b; Dong et al., 2019a; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a; 2021) called transfer-based attack is proposed. In this threat model, adversarial examples are crafted via the local available pre-trained substitute model, which usually trains on the same training dataset as the target model. The resultant adversarial examples
are expected to attack the target model. However, without feedback from the target model, the transferability heavily depends on how large the gap between the substitute model and the target model is. In practice, this gap is large because the structure and the training technique of the target model are usually not publicly available due to security and privacy concerns.
From the analysis above, we argue that both white-box and black-box attacks can hardly be considered as practical attacks. A practical attack should satisfy two criteria: (a) model-free, i.e., no dependence on the pre-trained substitute model or the target model for either backward propagation or only forward query; (b) data-free, i.e., no dependence on the dataset for training a substitute model. We term it no-box attack. A recent work (Li et al., 2020a) is the first (to our knowledge) as well as the only work to have attempted such an attack in a loose sense. Their threat model still requires a small number of auxiliary samples, such as 20 images. Admittedly, collecting a small number of samples might not be difficult in most cases, but might be still infeasible in some security-sensitive applications. Specifically, their approach (Li et al., 2020a) attempts to train a substitute model by adopting the classical auto-encoder model instead of the supervised classification model due to the constraint of a small-scale dataset. Overall, to attack a certain sample, their approach consists of three steps: (1) collecting a small number of images; (2) training a substitute model; (3) white-box attack on the substitute model. If a new sample, especially from a different class, needs to be attacked, the above process needs to be repeated. Thus, their approach is very resource-intensive. Besides, their attack success rate is still significantly lower than existing black-box attacks.
By contrast, our approach does not require any of the above three steps and is even training-free. With the help of the visualization technique proposed by Zeiler & Fergus (2014), we observe that the high-frequency component (HFC), e.g., the edge and texture features, is dominant in shallow layers, while the low-frequency component (LFC), e.g., the plain areas in the image, receives less attention during feature extraction. Combined with the insight
into the classification logic of DNNs in Sec. 3.1, we observe that HFC plays a crucial role in recognition. As shown in Fig. 1, without the LFC, the confidence on the HFC alone is even higher than that on the raw image. Although this does not hold for all samples, it does demonstrate the importance of HFC.
Motivated by this, we take the idea of the hybrid image (Oliva, 2013) and propose a novel Hybrid Image Transformation (HIT) attack method to craft adversarial examples. Formally, it needs only three steps yet can effectively fool various DNNs without any training: First, owing to the training-free setting and inspired by the analysis in Sec. 3.2, we simply utilize the matplotlib1 tool to draw several geometric patterns which serve as the proto-patterns, so that the resultant synthesized adversarial patches are rich in regionally homogeneous, repeating and dense HFC. Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Finally, we combine these two components and clip the result to the ε-ball of the raw image to obtain the adversarial hybrid example. Extensive experiments on ImageNet demonstrate the effectiveness of our method. By attacking ten state-of-the-art models in the no-box manner, our HIT significantly increases the average success rate from 68.74% to 98.13%. Notably, our HIT is even competitive with mainstream transfer-based black-box attacks.
2 RELATED WORK
Adversarial Attack. Let x denote a raw image without any perturbation, and let xadv and y denote the corresponding adversarial example and true label, respectively. In general, we use the l∞-norm to measure the perceptibility of adversarial perturbations, i.e., ||xadv − x||∞ ≤ ε. In this paper, we focus on non-targeted attacks (Dong et al., 2018; Xie et al., 2019b; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a) which aim to cause misclassification of a DNN f(·), i.e., f(xadv) ≠ y.
1https://matplotlib.org/
Competitors. Transferability is an important property of adversarial examples. With it, an adversarial example crafted on one model may fool others. For the black-box threat model, Goodfellow et al. (2015) argue that the vulnerability of DNNs stems from their linear nature, and generate adversarial examples efficiently with the single-step attack FGSM. Papernot et al. (2017) train a local model with many queries to substitute for the target model. Dong et al. (2018) integrate a momentum term into I-FGSM (Kurakin et al., 2017) to stabilize the update direction during the attack iterations. Xie et al. (2019b) apply diverse input patterns to improve the transferability of adversarial examples. Dong et al. (2019a) propose a translation-invariant attack to mitigate the effect of different discriminative regions between models. Gao et al. (2020a) introduce patch-wise perturbation by amplifying the step size and reusing the cut noise to perturb more information in discriminative regions. For the no-box threat model, Li et al. (2020a) attempt to attack the target model without any model query or an accessible pre-trained substitute model. In their work, with a limited amount of data, they try different mechanisms (with or without supervision) to train the substitute model, and then utilize this substitute model to craft transferable adversarial examples. Different from these approaches, our method does not depend on transferability since we do not need any substitute model. In this paper, we craft the adversarial examples from the perspective of the classification logic of DNNs.
Frequency Perspective on DNNs. Our approach is highly inspired by existing works which explain the generalization and adversarial vulnerability of DNNs from the frequency perspective. The fact that DNNs have good generalization while being vulnerable to small adversarial perturbations has motivated (Jo & Bengio, 2017; Wang et al., 2020) to investigate the underlying mechanism, suggesting that surface-statistical content with high-frequency property is essential for the classification task. From the perspective of texture vs. shape, Geirhos et al. (2019); Wang et al. (2020) reveal that DNNs are biased towards texture instead of shape. Since the texture content is considered to have high-frequency property, their finding can be interpreted as the DNN being biased towards HFC. On the other hand, adversarial perturbations are also known to have the high-frequency property and various defense methods have also been motivated from this insight (Aydemir et al., 2018; Das et al., 2018; Liu & JaJa, 2019; Xie et al., 2019a). Nonetheless, it remains unknown whether manually designed high-frequency patterns are sufficient for attacking the network.
3 METHODOLOGY
Although many adversarial attack methods (Papernot et al., 2016; Dong et al., 2018; Gao et al., 2020a; Li et al., 2020a) have achieved fairly high success rates in both black-box and no-box settings, they all need training, especially the query-based (Papernot et al., 2016; Zhou et al., 2020) and no-box adversarial perturbations (Li et al., 2020a), whose training is usually time-consuming. A natural question then arises: Is it possible to generate robust adversarial perturbations without any training? In the following subsections, we give our answer and introduce our design.
3.1 MOTIVATION
To better understand the roles of HFC and LFC in the classification results of DNNs, we split the information of raw images into these two components via a Gaussian low-pass filter (defined in Eq. 1).
As illustrated in Fig. 3, when the kernel size is small, i.e., the cutoff frequency is high, the average accuracy of the LFC on ten state-of-the-art models is close to 100%. However, as we continue to increase the kernel size, the average accuracy of the HFC begins to exceed that of the LFC. To our surprise, for several specific raw images, e.g., the left image of Fig. 1, the true label's confidence on the HFC, which is mostly black, is even higher than that on the raw image.
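A minimal sketch of this decomposition experiment is given below. The Gaussian kernel follows Eq. 1 with size (4k+1)×(4k+1) and σ = k; the choice of ResNet-152, the image path and the decision to feed the raw residual (HFC) directly to the classifier are illustrative assumptions rather than the paper's exact protocol.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def gaussian_kernel(k: int) -> torch.Tensor:
    """(4k+1)x(4k+1) Gaussian low-pass filter with sigma = k (Eq. 1), one copy per RGB channel."""
    ax = torch.arange(-2 * k, 2 * k + 1, dtype=torch.float32)
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * float(k) ** 2))
    return (g / g.sum()).view(1, 1, 4 * k + 1, 4 * k + 1).repeat(3, 1, 1, 1)

def split_lfc_hfc(x: torch.Tensor, k: int = 4):
    """x: (1,3,H,W) in [0,1]. Returns (LFC, HFC) such that x = LFC + HFC."""
    weight = gaussian_kernel(k)
    lfc = F.conv2d(x, weight, padding=2 * k, groups=3)
    return lfc, x - lfc

normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
model = models.resnet152(pretrained=True).eval()                  # illustrative target model
x = transforms.Compose([transforms.Resize((299, 299)), transforms.ToTensor()])(
    Image.open("raw.png").convert("RGB")).unsqueeze(0)            # hypothetical image path
lfc, hfc = split_lfc_hfc(x, k=4)
with torch.no_grad():
    for name, inp in [("raw", x), ("LFC", lfc), ("HFC", hfc)]:
        prob = model(normalize(inp.squeeze(0)).unsqueeze(0)).softmax(dim=1)
        print(f"{name}: top-1 class {prob.argmax().item()}, confidence {prob.max().item():.3f}")
```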
To explain the above phenomenon, we turn to the perspective of feature space. Inspired by recent intermediate feature-based attacks (Zhou et al., 2018; Ganeshan & Babu, 2019; Inkawhich et al., 2019), we argue that low-level features are critical to classification. Interestingly, as shown in Fig. 2, most2 feature maps in the shallow layers generally extract the edge and texture features (typical ones are highlighted by red boxes), i.e., HFC, and pay less attention to the plain areas of images, i.e., LFC. Therefore, if a perturbation can effectively manipulate the HFC of an image, entirely different low-level features will be extracted, which may lead to misclassification.
3.2 EFFECTIVE ADVERSARIAL HFC
However, what kind of training-free noisy HFC can effectively fool DNNs is still unknown, since simply borrowing the HFC of another raw image performs poorly (see Appendix Sec. A.8). Zhang et al. (2020) have demonstrated that the effectiveness of adversarial perturbation lies in the fact that it contains irrelevant features: the features of the perturbation dominate over the features of the raw image, thus leading to misclassification. Inspired by their finding, we intend to design adversarial HFC with strong irrelevant features, and we conjecture that the following properties are essential.
Regionally Homogeneous. Several recent works (Li et al., 2020b; Gao et al., 2020a; Dong et al., 2019a; Gao et al., 2020b) have demonstrated that adversarial perturbations with the regionally homogeneous (or patch-wise (Gao et al., 2020a)) property can enhance the transferability of adversarial examples. Given that a raw image is itself a composite of homogeneous patterns, the reason might be that such perturbations tend to form irrelevant features that are recognizable by the DNNs.
Repeating. Nguyen et al. (2015) observe that extra copies of the repeating element do improve the confidence of DNNs. From the perspective of strengthening the irrelevant features, it is expected that repeating the content is beneficial.
Dense. Analogous to the above repeating property that performs global repeating, i.e., increases the amount of irrelevant features globally, we can also perform local repeating to strengthen its adversarial effect further. For term distinction, we term this property dense.
To verify the effect of the above properties, we conduct an ablation study in Sec. 4.1, and the results support our conjecture. Besides, the analysis in Appendix Sec. A.9 also shows that our HIT has the potential to become a targeted attack.
3.3 HYBRID IMAGE TRANSFORMATION
Motivated by the above discussion, we take the idea of the hybrid image (Oliva, 2013) to apply our no-box attack. Specifically, Oliva (2013) replaces the HFC of one image with the HFC of another carefully picked image, crafting hybrid images with two different interpretations: one that appears when the image is viewed up close, and another that appears from afar (see Fig. 5 of Oliva (2013)). However, confusing the human visual system (without an ε constraint) cannot guarantee the misclassification of DNNs, since adversarial examples are constrained by the maximum perturbation. Therefore, we propose a novel Hybrid Image Transformation (HIT) attack method which reduces3 the original
2 See the quantitative analysis in Appendix Sec. A.2.
3 Due to the ε constraint, we cannot completely replace the HFC with another.
HFC, and meanwhile, adds well-designed noisy ones to attack DNNs. Our method only needs three steps but can generate robust training-free adversarial perturbations in real time:
First, we provide an adversarial patch xp to generate noisy HFC. Unlike the traditional way that needs training, here we use the matplotlib tool to draw it. Inspired by the observation in Sec. 3.2, we consider three simple regionally homogeneous proto-patterns (to avoid cherry-picking) as our basic adversarial patches: concentric circles, concentric squares, and concentric rhombus in Fig. 4. The effect of concentric pattern is to make the resultant HFC dense. Then we repeat these adversarial patches.
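A minimal sketch of how such a proto-pattern could be drawn is shown below; the figure size, line width and colour palette are our own illustrative choices, not the paper's exact drawing script.

```python
import numpy as np
import matplotlib.pyplot as plt

def draw_concentric_circles(density: int = 12, size_px: int = 600,
                            out: str = "circle_patch.png") -> None:
    """Draw `density` concentric circles as a size_px x size_px proto-pattern image."""
    colors = ["red", "green", "blue", "orange", "purple", "cyan"]   # illustrative palette
    fig = plt.figure(figsize=(size_px / 100, size_px / 100), dpi=100)
    ax = fig.add_axes([0, 0, 1, 1])                                 # fill the whole canvas
    ax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.axis("off")
    for n, r in enumerate(np.linspace(1.0 / density, 1.4, density)):
        ax.add_patch(plt.Circle((0, 0), r, fill=False, linewidth=4,
                                color=colors[n % len(colors)]))
    fig.savefig(out, dpi=100)
    plt.close(fig)

draw_concentric_circles(density=12)   # 600x600 proto-pattern, later cropped and tiled (Sec. A.11)
```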
Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Note that several methods can be utilized to extract the HFC and LFC of an image, e.g., the Fourier transform. In this paper, we use an approximate yet simple Gaussian low-pass filter G whose size is (4k+1)×(4k+1) to get the LFC, which can be written as:
G_{i,j} = \frac{1}{2\pi\sigma^{2}} e^{-\frac{i^{2}+j^{2}}{2\sigma^{2}}},   (1)
where σ = k determines the width of G. In general, the larger σ is, the more HFC is filtered out. For simplicity, we do not introduce a separate high-pass filter and also obtain the HFC with G: specifically, the HFC of the adversarial patch is obtained by subtracting its LFC from the patch itself.
Finally, we synthesize these two components to generate our adversarial hybrid image xadv:
x_{adv} = \mathrm{clip}_{x,\varepsilon}\big(x * G + \lambda \cdot (x_p - x_p * G)\big),   (2)
where “*” denotes the convolution operation, λ is a weight factor that balances the LFC and HFC, and clip_{x,ε}(·) restricts the resultant adversarial example to the ε-ball of the raw image in l∞ space. Therefore, our method is different from adversarial patch attacks (Brown et al., 2017; Liu et al., 2020), which replace a subregion of the image with a well-designed patch.
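The following sketch implements Eq. 2 directly; pixel values are assumed to lie in [0, 1], so ε = 16 corresponds to 16/255, and the Gaussian kernel follows Eq. 1.

```python
import torch
import torch.nn.functional as F

def gaussian_blur(x: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Low-pass filter x of shape (1,3,H,W) with the (4k+1)x(4k+1) Gaussian of Eq. 1."""
    ax = torch.arange(-2 * k, 2 * k + 1, dtype=torch.float32)
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    g = torch.exp(-(xx ** 2 + yy ** 2) / (2 * float(k) ** 2))
    g = (g / g.sum()).view(1, 1, 4 * k + 1, 4 * k + 1).repeat(3, 1, 1, 1)
    return F.conv2d(x, g, padding=2 * k, groups=3)

def hit_attack(x: torch.Tensor, x_p: torch.Tensor, eps: float = 16 / 255,
               lam: float = 1.0, k: int = 4) -> torch.Tensor:
    """Eq. 2: x_adv = clip_{x,eps}( x*G + lam * (x_p - x_p*G) ), with pixels in [0, 1]."""
    hybrid = gaussian_blur(x, k) + lam * (x_p - gaussian_blur(x_p, k))
    x_adv = torch.min(torch.max(hybrid, x - eps), x + eps)   # project into the eps-ball of x
    return x_adv.clamp(0, 1)
```

Here x_p is the tiled adversarial patch resized to the input resolution (see Sec. 4.1.2 and Appendix Sec. A.11); the patch is generated once and reused for every image, which is why the attack runs in real time.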
As illustrated in Fig. 2(c), our HIT can effectively reduce the relevant HFC and add many irrelevant noisy ones; e.g., in the highlighted yellow boxes in (c), no obvious HFC associated with “cat” can be found at all. As a result, the target model cannot extract the correct features to make a reasonable prediction, thus leading to misclassification. Besides, our adversarial examples are less perceptible than those of our competitors (see Appendix Sec. A.4).
4 EXPERIMENTS
Networks. Here we consider ten well-known classification models: VGG19 (Simonyan & Zisserman, 2015), Inception-v3 (Inc-v3) (Szegedy et al., 2016), ResNet-152 (ResNet) (He et al., 2016), DenseNet-121 (Dense) (Huang et al., 2017), WideResNet (WRN) (Zagoruyko & Komodakis, 2016), SENet (Hu et al., 2018), PNASNet (PNA) (Liu et al., 2018), ShuffleNet-v2 (Shuffle) (Ma et al., 2018), SqueezeNet (Squeeze) (Iandola et al., 2017) and MobileNet-v2 (Mobile) (Sandler et al., 2018) as our target models. All the models are available in the Torchvision4, except for PNA and SENet which are obtained from Github5. We also perform our attack on a real-world recognition system in Appendix Sec. A.6.
4https://github.com/pytorch/vision/tree/master/torchvision/models 5https://github.com/Cadene/pretrained-models.pytorch
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images which are resized to 299 × 299 × 3 beforehand) from the ImageNet validation set (Russakovsky et al., 2015) which are classified correctly by all ten networks we consider. We also discuss our methods on other classification tasks in Appendix Sec. A.5.
Parameters. In our experiments, we use the l∞-norm to measure the perceptibility of adversarial noise; unless specified otherwise, the maximum perturbation ε is set to 16 (results with a smaller ε can be found in Appendix Sec. A.7). For our HIT, the size of the Gaussian kernel G is 17 × 17 (i.e., k = 4), the weight factor λ is set to 1.0 (the discussion of λ is given in Appendix Sec. A.3), and the density of each proto-pattern is set to 12. For the tile-size, unless specified otherwise, we set it to 50 × 50, i.e., the tile-scheme is 6 × 6. For no-box methods, we follow the same setting as (Li et al., 2020a). For black-box methods, the number of iterations T is set to 10 and the step size α is 1.6. For MI-FGSM (Dong et al., 2018), we adopt the default decay factor µ = 1.0. For DI2-FGSM (Xie et al., 2019b), we set the transformation probability to 0.7. For TI-FGSM (Dong et al., 2019a), the length of the Gaussian kernel is 15. For PI-FGSM (Gao et al., 2020a), the length of the project kernel is 3, and the amplification factor β and project factor γ are 10.0 and 16.0, respectively. Different from PI-FGSM, β and γ for PI-MI-DI2-FGSM and PI-TI-DI2-FGSM (Gao et al., 2020a) are 2.5 and 2.0.
4.1 ABLATION STUDY
In this section, we conduct a series of ablation studies for our HIT. Specifically, we investigate the effectiveness of the regionally homogeneous pattern, the repeating pattern and the dense pattern in Sec. 4.1.1, Sec. 4.1.2 and Sec. 4.1.3, respectively. Besides, we analyze the effect of the perturbation size on performance in Sec. 4.1.4. The result of HIT without reducing the original HFC beforehand is shown in Appendix Tab. A.12.
4.1.1 THE EFFECT OF REGIONALLY HOMOGENEOUS PATTERN
To the best of our knowledge, regionally homogeneous perturbations (Dong et al., 2019a; Gao et al., 2020a;b; Li et al., 2020b) are mostly based on the gradient to craft, thereby training is necessary. However, whether arbitrary noise can benefit from the homogeneous property remains unclear. Therefore, we compare random noises with semi-random ones to check it:
Random noise: For a given random location pair set L, we call N_r ∈ R^{H×W×C} random noise if it meets the following formula:
N_r[i, j, c] = \begin{cases} \varepsilon \cdot \mathrm{random}(-1, 1), & (i, j, c) \in L \\ 0, & \text{else} \end{cases}   (3)
Semi-random noise: Different from random noise, semi-random noise has some regularity. Let S denote a semi-random location pair set; here we take H-dimension random noise as an example. N_{sr} can be written as:
N_{sr}[i, :, :] = \begin{cases} \varepsilon \cdot \mathrm{random}(-1, 1), & i \in S \\ 0, & \text{else} \end{cases}   (4)
where random(−1, 1) returns 1 or −1 at random. As depicted in Fig. 5, the success rates of N_{sr} are consistently higher than those of N_r. As the number of perturbed pixels increases, the margin between them also increases. This demonstrates that training-free noise can also benefit from the regionally homogeneous property. To exploit this conclusion further, in Fig. 4 we extend semi-random noise to other, more complex “continuous” patterns, e.g., circles.
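A sketch of the two noise types of Eqs. 3 and 4 is given below; since Eq. 4 does not specify whether the sign is drawn per row or per element, we draw one sign per perturbed row, and the number of perturbed locations is a free parameter here.

```python
import numpy as np

def random_noise(shape=(299, 299, 3), n: int = 10000, eps: float = 16.0) -> np.ndarray:
    """Eq. 3: place +/-eps at n randomly chosen (i, j, c) locations."""
    noise = np.zeros(shape, dtype=np.float32)
    flat_idx = np.random.choice(noise.size, size=n, replace=False)
    signs = np.random.choice([-1.0, 1.0], size=n).astype(np.float32)
    noise[np.unravel_index(flat_idx, shape)] = eps * signs
    return noise

def semi_random_noise(shape=(299, 299, 3), n_rows: int = 30, eps: float = 16.0) -> np.ndarray:
    """Eq. 4: perturb n_rows entire rows (the H dimension); one +/- sign per row (assumed)."""
    noise = np.zeros(shape, dtype=np.float32)
    rows = np.random.choice(shape[0], size=n_rows, replace=False)
    signs = np.random.choice([-1.0, 1.0], size=n_rows).astype(np.float32)
    noise[rows, :, :] = (eps * signs)[:, None, None]
    return noise
```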
4.1.2 THE EFFECT OF REPEATING PATTERN
In this section, we show the experimental results of our proposed HIT w.r.t. different tile-sizes. Here we consider seven tile-schemes, 1 × 1, 2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6 and 7 × 7, whose tile-sizes are 300 × 300, 150 × 150, 100 × 100, 75 × 75, 60 × 60, 50 × 50 and 42 × 42, respectively. The tiled patches are then resized back to 299 × 299 × 3 to match the size of the raw images. Visualizations of these patches can be found in Appendix Sec. A.11.
In Fig. 5, we report the average attack success rates over the ten models. The success rates increase very quickly at first and then stay stable beyond the 4 × 4 tile-scheme. If we continue to increase the tile-scheme, the attack success rates may go down. The main reason might be the distortion caused by the resizing operation, which indirectly blurs the resultant tiled adversarial patches and thus reduces the available HFC. Compared to the other two geometric patterns, we find that circle patches always perform best. For example, the success rate is up to 88.67% when the tile-scheme is 6 × 6. This result demonstrates that the attack ability of training-free perturbations can benefit from the repeating property.
4.1.3 THE EFFECT OF DENSE PATTERN
To validate the effect of dense pattern, we analyze the average attack success rates w.r.t densities. Since the trends of different patterns are similar, we only discuss the results of circle patch whose tile-scheme is 6 × 6. Here we control the density from 1 to 12. For example, “2” denotes only two circles in the proto-pattern, and more visualizations can be found in Appendix Sec. A.11.
As shown in Fig. 5, the success rates increase rapidly at the beginning, then remain stable after the density exceeds 8, and reach the peak at 12. This experiment demonstrates the effectiveness of dense pattern. Therefore, we set the default density of each proto-pattern to 12 in our paper.
4.1.4 THE SIZE OF PERTURBATION
In this section, we study the influence of the maximum perturbation ε on the performance of our HIT. Fig. 6 depicts the growth trend for each model under different adversarial patches. No matter which adversarial patch is used, the performance rises rapidly at first and then remains stable once ε exceeds 16 for most models. Besides, the circle patch (curve-like) always performs best, while the performance of the other two adversarial patches (straight-like) is similar. For example, when ε = 16 and the target model is VGG19, the attack success rate of the circle patch is 94.75%, while those of the square patch and rhombus patch are 81.52% and 83.63%, respectively. This demonstrates that DNNs are more vulnerable to curve-like perturbations than straight-like ones (we analyze the reasons for this in Appendix Sec. A.10).
Another observation from this result is that our HIT can serve as a universal attack, although not in a strict sense. As demonstrated in Fig. 6, when ε = 10 which is the common constraint for universal adversarial perturbations (Mopuri et al., 2017; Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018; Reddy Mopuri et al., 2018; Liu et al., 2019; Hashemi et al., 2020), our HIT with circle patch can achieve a success rate of 63.23% on average. Notably, it can be up to 89.74% on Squeeze.
4.2 COMPARISON OF HIT WITH NO-BOX ATTACKS
In this section, we compare the performance of our no-box HIT with the state-of-the-art no-box attack (Li et al., 2020a). Note that Li et al. (2020a) need up to 15,000 iterations to train a substitute model, and then run an extra 200 iterations of baseline attacks and 100 iterations of ILA (Huang et al., 2019), which is extremely time-consuming. In sharp contrast, our HIT is training-free: it does not require any auxiliary images to train a substitute model, and thus achieves real-time attacks.
The experimental results are reported in Tab. 1. A first glance shows that our HIT outperforms Li et al. (2020a) by a large margin. No matter what the adversarial patches are, our HIT can consistently achieve a success rate of over 92% on average. By contrast, the best performance of Li et al. (2020a), i.e., Prototypical∗ w/ Sup, is only 68.74% on average. Notably, our HIT with circle patch remarkably outperforms Li et al. (2020a) by 29.39% on average and 42.04% at most when attacking PNA.
4.3 COMPARISON OF HIT WITH BLACK-BOX ATTACKS
In this section, we compare our no-box HIT with mainstream transfer-based attacks. For MI-FGSM, DI2-FGSM, PI-FGSM and their extensions PI-MI-DI2-FGSM, we utilize VGG19, Inc-v3, ResNet
and Dense to iteratively (ten forward & backward propagation) craft adversarial examples and use them to attack the rest of black-box models. As for our proposed HIT, we do not need any substitute model or training process. The results are summarized in Tab. 2, where the models in the leftmost column are substitute models, and the bottom block shows the results of our HIT.
As demonstrated in Tab. 2, our HIT is even on par with the state-of-the-art PI-MI-DI2-FGSM. Specifically, on average, the best performance of PI-MI-DI2-FGSM is 88.84%, and our HIT based on the circle patch reaches 88.67%. However, the transferability of adversarial examples largely depends on the substitute model. For example, when adversarial examples are crafted via Inc-v3, the performance of PI-MI-DI2-FGSM is limited and our HIT remarkably outperforms it by 23.34% on average. Besides, when the target model is a lightweight model, e.g., Shuffle, our method consistently outperforms these mainstream transfer-based attacks by a large margin.
Since adversarial training techniques (Madry et al., 2018b; Tramèr et al., 2018; Awasthi et al., 2021) can effectively defend against adversarial examples, we conduct an extra experiment on several defense models to demonstrate the effectiveness of our method. The additional target models include three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens, and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA). As demonstrated in previous works (Guo et al., 2019; Sharma et al., 2019), low-frequency perturbations are more effective for attacking defense models. Motivated by this, we change the tile-schemes to smaller ones (i.e., 2 × 2 for EAT and 1 × 1 for FD) while keeping the other parameters the same (see more details in Appendix Sec. A.12). As observed in Tab. 3, our HIT is effective even against defense models. Notably, HIT based on the circle patch can successfully attack Inc-v3ens4 with a rate of 61.86%. Besides, for the more robust FD models, even when adversarial examples are crafted via an ensemble of VGG19, Inc-v3, ResNet and Dense, transfer-based PI-TI-DI2-FGSM is still inferior to our HIT. This experimental result reveals that current defenses have not achieved real security, as they are vulnerable even to training-free adversarial examples.
5 CONCLUSION
In this paper, we rethink the classification logic of deep neural networks with respect to adversarial examples. We observe that HFC dominates in low-level features and plays a crucial role in classification. Besides, we demonstrate through empirical and experimental analysis that DNNs are vulnerable to training-free perturbations with regionally homogeneous, repeating and dense properties. Motivated by these observations, we propose a novel Hybrid Image Transformation (HIT) attack method that combines the LFC of raw images with the HFC of our well-designed adversarial patches, destroying the useful features and adding strong irrelevant noisy ones. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the proposed method. Surprisingly, our simple method outperforms existing no-box attacks by a significant margin and is even on par with transfer-based black-box attacks that require a substitute model to craft adversarial examples.
From another perspective, since most models are vulnerable to our method, our adversarial examples may capture common “blind spots” of these models. Therefore, a defense could improve robustness and stability by covering these “blind spots”, i.e., by applying data augmentation with our adversarial examples.
A APPENDIX
A.1 SETUP
Networks. Here we consider ten well-known classification models: VGG19 Simonyan & Zisserman (2015), Inception-v3 (Inc-v3) Szegedy et al. (2016), ResNet-152 (ResNet) He et al. (2016), DenseNet-121 (Dense) Huang et al. (2017), WideResNet (WRN) Zagoruyko & Komodakis (2016), SENet Hu et al. (2018), PNASNet (PNA) Liu et al. (2018), ShuffleNet-v2 (Shuffle) Ma et al. (2018), SqueezeNet (Squeeze) Iandola et al. (2017) and MobileNet-v2 (Mobile) Sandler et al. (2018) as our target models.
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images) from the ImageNet validation set Russakovsky et al. (2015) which are classified correctly by all ten networks we consider. Besides, all images are resized to 299× 299× 3 beforehand. Parameters. In our experiments, we use l∞-norm to measure the perceptibility of adversarial noises, the maximum perturbation ε is set to 16. For our HIT, the size of Gaussian kernel G is 17× 17 (i.e. k = 4), weight factor λ is set to 1.0.
A.2 QUANTITATIVE ANALYSIS ABOUT HFC AND LFC
To quantitatively analyze whether HFC or LFC is dominant in the feature maps of the shallow layer, we conduct the following experiment. Considering that the size of each shallow-layer feature map in Fig. 2(b) is 147, we first resize Fig. 2(a) (299 × 299) to 147 × 147 and denote the resultant image by x_r. Then we obtain x_r^H (the HFC of x_r) by:
x_r^H = x_r − x_r * G.   (5)
To quantitatively compare the response to HFC and LFC, we calculate the average response of each feature map φ(x) in the low-frequency regions versus that in the high-frequency regions. To that end, we generate two masks to distinguish the two regions. More specifically, the mask of the high-frequency regions M^H can be written as:
M^H_{i,j} = \begin{cases} 1, & |x_r^H(i,j)| > \tau \\ 0, & \text{else} \end{cases}   (6)
where τ = 20 is a pre-set threshold applied to filter out low responses. Given M^H, the mask of the LFC, M^L, is easily derived:
M^L = 1 − M^H.   (7)
Therefore, the average response of the HFC, a^H, and the average response of the LFC, a^L, can be expressed as:
a^H = \frac{\sum_{i,j} M^H \varphi(x)}{\sum_{i,j} M^H},   (8)
a^L = \frac{\sum_{i,j} M^L \varphi(x)}{\sum_{i,j} M^L}.   (9)
In this paper, if a feature map satisfies a^H > a^L, we call it “HFC dominant”; otherwise we call it “LFC dominant”. As demonstrated in Fig. 7, most feature maps focus on HFC, and the ratio of “HFC dominant” to “LFC dominant” maps is 3:1.
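A sketch of this measurement for a single feature map is given below; how the three colour channels of x_r^H are reduced to a single mask is not specified in Eqs. 6–9, so taking the maximum absolute value over channels is our assumption.

```python
import numpy as np

def hfc_lfc_response(feature_map: np.ndarray, x_r_hfc: np.ndarray, tau: float = 20.0):
    """Average response of one (147x147) feature map over high- vs. low-frequency regions (Eqs. 6-9).

    feature_map: (147, 147) activation map from a shallow layer.
    x_r_hfc:     (147, 147, 3) high-frequency residual of the resized input (Eq. 5).
    """
    m_h = (np.abs(x_r_hfc).max(axis=-1) > tau).astype(np.float32)  # Eq. 6, channels collapsed by max
    m_l = 1.0 - m_h                                                # Eq. 7
    a_h = (m_h * feature_map).sum() / m_h.sum()                    # Eq. 8
    a_l = (m_l * feature_map).sum() / m_l.sum()                    # Eq. 9
    return a_h, a_l                                                # "HFC dominant" if a_h > a_l
```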
A.3 THE EFFECT OF WEIGHT FACTOR λ
In this section, we discuss the effect of different weight factors λ on the experimental results. We tune λ from 0.1 to 10, and the results are shown in Fig. 8. When λ ≤ 1, the attack success rate increases rapidly at the beginning and then remains stable. However, further increasing λ from 1 to 10 does not improve the performance; in fact, the success rates stay stable with a slight drop.
Apparently, a larger λ leads to larger perturbations (i.e., it increases the average perturbation over all pixels), so our reported results are somewhat inconsistent with the linearity assumption (Goodfellow et al., 2015). This is probably because our HIT is completely independent of any prior information (e.g., the gradient of any model or the data distribution), so larger noise does not necessarily push the prediction farther from the true label. Besides, we notice that the activation functions of these victim models are all ReLU, which may be another reason for this phenomenon. More specifically, ReLU is defined as
\mathrm{ReLU}(z) = \begin{cases} 0, & z < 0 \\ z, & \text{else} \end{cases}   (10)
where z is the intermediate output before the activation layer. If the intermediate adversarial perturbation is large enough, i.e., δ′ ≤ −z, then ReLU(z + δ′) returns 0. However, for a misclassified label yadv ≠ y, a positive activation that differs from the original z may be more helpful than 0.
A.4 QUALITATIVE COMPARISON FOR ADVERSARIAL EXAMPLES
To better reflect the advantages of our approach, in this section we compare the visual quality of the generated adversarial examples. Specifically, we consider state-of-the-art black-box PI-FGSM (Gao et al., 2020a) and no-box attack (Li et al., 2020a) as our competitor. As depicted in Fig. 9, both PI-FGSM (Gao et al., 2020a) and no-box attack (Li et al., 2020a) will cause more perceptible distortions. In contrast, the adversarial perturbation crafted by our HIT is much more imperceptible.
A.5 ATTACK OTHER CLASSIFICATION TASKS
To highlight the practical property of our HIT, in this section we apply our HIT to other classification tasks. Specifically, we consider three well-known fine-grained classification datasets, namely
CUB-200-2011 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC Aircraft (Maji et al., 2013), and the victim model is trained via DCL (backbone: Res-50) (Chen et al., 2019). The resolution of the inputs is 448 × 448; therefore, we set the tile-size to 448 / tile-scheme. For example, if the tile-scheme is 4 × 4, then the tile-size is 112. To ensure our default setting (i.e., λ and tile-scheme) for HIT is applicable, we conduct the two experiments below.
Discussion on tile scheme. We first report the average attack success rates (%) of our HIT w/ Circle w.r.t tile-scheme in Tab. 4. From the result, we can observe that our HIT is also effective for attacking other datasets. Notably, our HIT can fool DCL with about 90% success rate on Stanford Cars dataset. Besides, a relatively smaller tile-size is also helpful in improving the success rate of the attack, which is consistent with the conjecture given in Sec. 3.2.
Discussion on λ. We then report the average attack success rate (%) of our HIT w/ Circle w.r.t. λ in Tab. 5. Although setting λ = 1.0 is not optimal, the gap between the best results and those obtained with λ = 1.0 is very small. Therefore, our default setting for HIT remains applicable.
A.6 ATTACK REAL-WORLD RECOGNITION SYSTEM
To further demonstrate the practical property of our HIT, in this section we apply our HIT (w/ Circle) to attack a real-world recognition system, i.e., the Google Cloud Vision API6. Different from existing works (Chen et al., 2017; Brendel et al., 2017), which need a large number of queries for optimization, we directly apply our HIT with the default setting (i.e., the tile-scheme is 6 × 6 and λ = 1.0). As illustrated in Fig. 10, our no-box HIT with ε = 16 can effectively change the top-k labels. For example, the top-5 labels of “fish” are “Fish”, “Fin”, “Seafood”, “Ray-finned fish” and “Marine biology”, while those of our adversarial example are
“Reptile”,“Turtle”, “Terrestrial Animal”, “Pattern” and “Art”. Notably, there is no overlap on top-k labels between clean image and our adversarial example, which also demonstrate the effectiveness of our no-box HIT.
A.7 RESULTS FOR SMALLER PERTURBATION
In this experiment, we report the average success rates (%) between state-of-the-art black-box attacks (further add Ghost Network algorithm (Li et al., 2020c) as our competitor) and our proposed no-box attack with a smaller perturbation ε = 8.
As demonstrated in Tab. 6, our proposed method is still competitive with mainstream transfer-based black-box attacks, even though they combine many effective techniques. Remarkably, our no-box attack significantly outperforms Ghost Networks (+MI-FGSM). Although Ghost Networks (+PI-MI-DI-FGSM) is much more powerful, our no-box attack can surpass it in some cases. For example, when fooling Shuffle, our HIT (w/ Circle) outperforms Ghost Networks (+PI-MI-DI-FGSM) by about 8%.
A.8 RAW IMAGE FOR ATTACK
To highlight the effectiveness of our designed adversarial patches, here we conduct an experiment in which raw images (shown in Fig. 11) serve as the “adversarial patch”. More specifically, we utilize the HFC of these raw images (like Din et al.) to manipulate adversarial examples. However, even the HFC of texture-rich raw images (e.g., “Grifola frondosa” and “Capitulum”) does not achieve a good result. As demonstrated in Tab. 8, the average attack success rates are all less than 40%. By contrast, our well-designed adversarial patches achieve a success rate of nearly 90%, which demonstrates the effectiveness of our design.
6https://cloud.google.com/vision/docs/drag-and-drop
Table 6: The comparison of attack success rates (%) on normally trained models between black-box attacks and our no-box attacks with maximum perturbation ε = 8. For black-box attacks, adversarial examples are crafted via Inc-v3.
Attacks | VGG19 | ResNet | DenseNet | WRN | SENet | PNA | Shuffle | Squeeze | Mobile | AVG.
MI-FGSM | 21.26 | 15.63 | 20.47 | 14.45 | 11.58 | 23.36 | 24.13 | 32.61 | 25.64 | 21.01
Ghost Networks (+MI-FGSM) | 27.12 | 17.45 | 22.31 | 14.92 | 13.43 | 28.63 | 30.08 | 40.80 | 33.67 | 25.38
DI-FGSM | 18.41 | 11.24 | 16.44 | 10.19 | 8.28 | 17.59 | 15.03 | 18.40 | 18.20 | 14.86
PI-FGSM | 24.12 | 14.98 | 22.91 | 15.38 | 12.21 | 27.32 | 25.93 | 39.20 | 27.29 | 23.26
PI-MI-DI2-FGSM | 40.88 | 31.02 | 41.34 | 29.70 | 25.56 | 38.46 | 36.79 | 46.70 | 42.38 | 36.98
Ghost Networks (+PI-MI-DI2-FGSM) | 63.56 | 43.59 | 55.21 | 40.91 | 40.33 | 54.73 | 63.48 | 81.98 | 73.46 | 57.47
HIT w/ 6×6 Circle | 37.64 | 36.21 | 40.80 | 30.82 | 21.36 | 37.63 | 71.45 | 79.34 | 69.90 | 47.24
HIT w/ 6×6 Square | 13.40 | 17.50 | 25.85 | 21.99 | 13.79 | 18.31 | 53.03 | 61.61 | 50.88 | 30.71
HIT w/ 6×6 Rhombus | 17.24 | 19.95 | 27.65 | 22.00 | 12.85 | 18.91 | 58.84 | 63.49 | 57.90 | 33.20
A.9 DISCUSSION ON TARGETED ATTACK
Figure 12: (a) Our adversarial patch and (b) some images from the ImageNet dataset that are classified as “shower curtain” (label ID 794). The bottom row shows their HFC extracted by Eq. 1.
Although we do not explicitly force the resultant adversarial examples to be misclassified as a specific targeted label, we observe that our HIT tends to implement a targeted attack due to the frequency domain operation and classification logic of DNNs. In Tab. 7, we report the top-5 prediction labels of our adversarial examples, which are crafted by 6× 6 concentric circle pattern. A first glance shows that almost all models tend to misclassify adversarial examples generated by our HIT as several specific labels, e.g., 794 (“shower curtain”). Furthermore, this phenomenon is more obvious for Mobile and ResNet whose ratio is up to 47.69% and 75.75% respectively.
To better understand this phenomenon, we show our adversarial patch and several clean images labelled “shower curtain” from the ImageNet dataset in Fig. 12. We observe that the HFC of “shower curtain” is somewhat aligned with our adversarial patch, i.e., both show similar repetitive circles. We suspect this phenomenon arises because our proposed perturbation dominates the overall features of the image, so that the original features of the image instead become noise. Since existing algorithms are not yet effective for this purpose, and simply replacing our adversarial patch with a clean targeted image does not achieve an effective targeted attack (as demonstrated in Sec. A.8), we will further study the selection and generation of adversarial patches, e.g., fusing the shallow texture information of the targeted distribution to guide the resultant adversarial examples towards the targeted category.
A.10 WHY CIRCLE PATTERN IS USUALLY BETTER?
Here we attempt to provide an insight into the performance gap between the circle pattern and the other two patterns by analyzing the intermediate feature response. Without loss of generality, we set the layer index to “depth of each DNN” / 2 and report the average cosine similarity between the features of 10,000 raw images and those of their adversarial examples. The results in Tab. 9 show that the circle pattern consistently leads to lower cosine similarity than the other patterns. Consequently, the features fed to the deeper layers are less informative, which leads to misclassification.
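A sketch of this measurement is given below; the use of ResNet-152 and of `layer3` as the mid-depth layer is an illustrative choice, and the inputs are assumed to be already preprocessed image tensors.

```python
import torch
import torch.nn.functional as F
from torchvision import models

feats = {}
def make_hook(name):
    def _hook(module, inp, out):
        feats[name] = out.flatten(start_dim=1)   # store the mid-layer feature as a vector
    return _hook

model = models.resnet152(pretrained=True).eval()          # illustrative target model
model.layer3.register_forward_hook(make_hook("mid"))      # roughly depth/2 for ResNet-152

def mid_layer_cosine(x_raw: torch.Tensor, x_adv: torch.Tensor) -> float:
    """Cosine similarity between mid-layer features of a raw image and its adversarial example."""
    with torch.no_grad():
        model(x_raw); f_raw = feats["mid"]
        model(x_adv); f_adv = feats["mid"]
    return F.cosine_similarity(f_raw, f_adv, dim=1).mean().item()
```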
A.11 VISUALIZATION OF OUR ADVERSARIAL PATCHES
In this section, we first visualize the concentric circle with respect to densities in Fig. 13. Here we control the density from 1 to 12, e.g., “2” denotes only two circles in the proto-pattern. With the increase of density, the distance between any two circles will also be reduced.
Then we list our adversarial patches with respect to tile-schemes in Fig. 14. More specifically, we first crop the 600 × 600 × 3 proto-patterns to 300 × 300 × 3 adversarial patches, then resize them into different tile-sizes (e.g., 150 × 150 × 3) and tile them to 300 × 300 × 3, finally resize back to 299×299×3 to match the size of raw images. As we can see, if we decrease the tile-size, distortion is inevitable.
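A sketch of this crop–resize–tile pipeline is given below; the central crop position and PIL's default resampling are assumptions, and tile-schemes that do not divide 300 evenly (e.g., 7 × 7) leave a small border in this simple implementation.

```python
from PIL import Image

def tile_patch(proto: Image.Image, scheme: int) -> Image.Image:
    """Crop the 600x600 proto-pattern to 300x300, tile it scheme x scheme, then resize to 299x299."""
    patch = proto.crop((150, 150, 450, 450))               # central 300x300 crop (assumed)
    cell = patch.resize((300 // scheme, 300 // scheme))    # e.g., 50x50 for the 6x6 scheme
    canvas = Image.new("RGB", (300, 300))
    for i in range(scheme):
        for j in range(scheme):
            canvas.paste(cell, (i * cell.width, j * cell.height))
    return canvas.resize((299, 299))                       # match the raw-image resolution

patch_6x6 = tile_patch(Image.open("circle_patch.png").convert("RGB"), scheme=6)
```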
A.12 THE EFFECT OF REPEATING PATTERN FOR DEFENSES
In this section, we further consider six additional well-known defense models, including three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens,7 and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA),8 to discuss the effect of the repeating pattern.
7https://github.com/tensorflow/models/tree/archive/research/adv_ imagenet_models
8https://github.com/facebookresearch/ImageNet-Adversarial-Training
Table 9: The cosine similarity comparison for different patterns.
Attack | VGG19 | Inc-v3 | ResNet | Dense | WRN | SENet | PNA | Squeeze | Shuffle | Mobile | Avg.
HIT w/ Square (Ours) | 0.6215 | 0.7419 | 0.7638 | 0.8090 | 0.7599 | 0.5838 | 0.7437 | 0.6940 | 0.6704 | 0.4838 | 0.6872
HIT w/ Rhombus (Ours) | 0.6218 | 0.7458 | 0.7448 | 0.7853 | 0.7280 | 0.6258 | 0.7672 | 0.6738 | 0.6461 | 0.4005 | 0.6746
HIT w/ Circle (Ours) | 0.5472 | 0.6685 | 0.7306 | 0.7779 | 0.7223 | 0.5613 | 0.6747 | 0.6643 | 0.6062 | 0.3617 | 0.6314
Figure 13: Visualization of our proto-patterns w.r.t. densities from 1 to 12, taking concentric circles as an example.
Generally, a smaller tile-scheme generates a more perceptible perturbation. As shown in Fig. 15, the area of each regionally homogeneous (i.e., continuous) line in adversarial examples crafted with 1 × 1 patches is bigger than with 6 × 6 ones. Different from the trends on normally trained models, smaller tile-schemes are more effective for attacking defense models. As demonstrated in Tab. 16, when attacking EAT, 2 × 2 adversarial patches perform best, and further increasing the tile-scheme significantly degrades performance, e.g., 7 × 7 rhombuses only attack EAT successfully at 6.61% on average. The trend for FD is similar to that for EAT, except that 1 × 1 adversarial patches work best. The reason might be that thin regionally homogeneous lines are easier to filter out by the denoising block of (Xie et al., 2019a). Therefore, in our paper, we use 2 × 2 and 1 × 1 adversarial patches to attack EAT and FD, respectively.
Figure 14: Our adversarial patches under tile-schemes from 1×1 to 7×7. | 1. What is the focus of the paper regarding adversarial attacks?
2. What are the strengths of the proposed approach, particularly its novelty and efficiency?
3. What are the weaknesses of the paper, especially regarding the discussion of the circle's performance?
4. Do you have any suggestions for improving the proposed method, such as optimizing the perturbation? | Summary Of The Paper
Review | Summary Of The Paper
Following the no-box attack scenario, they propose a simple but efficient method that crafts adversarial perturbations from three types of handmade images, i.e., concentric circles, concentric squares, and concentric rhombuses, and adds them to the high-frequency part of benign samples. Experimental results show that their handmade perturbations significantly outperform previous no-box attack methods and are even competitive with some black-box methods.
Review
The authors separate a benign image into two components: the LFC (low-frequency component) and the HFC (high-frequency component). They demonstrate that the high-frequency component is essential to the DNN model's performance, so they add handmade patterns to the high-frequency part of benign images. The method seems novel, and the paper is well written and easy to follow. I think it is an interesting paper. Here I have some concerns:
The discussion of why the circle always performs better is a little weak. Do the circle/square/rhombus perturbations always lead the victim models to predict certain classes?
Can we optimize the perturbation to find an optimal proto-pattern? |
ICLR | Title
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Abstract
In recent years, the adversarial vulnerability of deep neural networks (DNNs) has attracted increasing attention. Among all the threat models, no-box attacks are the most practical but extremely challenging, since they neither rely on any knowledge of the target model or a similar substitute model, nor access the dataset for training a new substitute model. Although a recent method has attempted such an attack in a loose sense, its performance is not good enough and the computational overhead of training is expensive. In this paper, we move a step forward and show the existence of a training-free adversarial perturbation under the no-box threat model, which can be successfully used to attack different DNNs in real time. Motivated by our observation that the high-frequency component (HFC) dominates in low-level features and plays a crucial role in classification, we attack an image mainly by manipulating its frequency components. Specifically, the perturbation combines the suppression of the original HFC with the addition of noisy HFC. We empirically and experimentally analyze the requirements of effective noisy HFC and show that it should be regionally homogeneous, repeating and dense. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our proposed no-box method. It attacks ten well-known models with a success rate of 98.13% on average, which outperforms state-of-the-art no-box attacks by 29.39%. Furthermore, our method is even competitive with mainstream transfer-based black-box attacks. Our code is available in our appendix.
1 INTRODUCTION
Deep neural networks (DNNs) are widely known to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015), i.e., a human-imperceptible perturbation can lead to misclassification. In adversarial machine learning, the term threat model defines the rules of the attack, such as the resources the attacker can access. Based on the threat model, attacks are often divided into white-box attacks and black-box attacks. In the white-box threat model (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2018a), the attacker has full knowledge of the target model, such as the model weights and the whole training dataset. Recognizing the threat of these adversarial attacks, a model owner is unlikely to leak a model’s information to the public. Thus, the white-box attack is often used to evaluate model robustness and reveal its weakest point (Madry et al., 2018a), but it is rarely considered a practical attack method (Chen et al., 2017). For this reason, numerous works have investigated a more realistic threat model, in which the attacker does not have full knowledge of the target model, i.e., backpropagation through the target model is prohibited. This threat model is called the black-box attack (Papernot et al., 2016; Tramèr et al., 2016; Papernot et al., 2017; Narodytska & Kasiviswanathan, 2017; Chen et al., 2017; Brendel et al., 2017; Dong et al., 2019b; Yan et al., 2019; Chen et al., 2020; Zhou et al., 2020). However, such a black-box threat model is usually resource-intensive in terms of query cost and time. In real-world attack scenarios, even if we ignore these concerns, a query-based black-box attack can still be infeasible, e.g., when the model API is inaccessible to the attacker. Moreover, repeatedly querying the model with almost the same adversarial image might arouse suspicion. To alleviate these issues, another line of black-box threat models (Dong et al., 2018; Xie et al., 2019b; Dong et al., 2019a; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a; 2021), called transfer-based attacks, has been proposed. In this threat model, adversarial examples are crafted via a locally available pre-trained substitute model, which is usually trained on the same training dataset as the target model. The resultant adversarial examples
are expected to attack the target model. However, without feedback from the target model, the transferability heavily depends on how large the gap between the substitute model and the target model is. In practice, this gap is large because the structure and the training technique of the target model are usually not publicly available due to security and privacy concerns.
From the analysis above, we argue that both white-box and black-box attacks can hardly be considered practical attacks. A practical attack should satisfy two criteria: (a) model-free, i.e., no dependence on a pre-trained substitute model or on the target model for either backward propagation or forward queries; (b) data-free, i.e., no dependence on the dataset used to train a substitute model. We term such an attack a no-box attack. A recent work (Li et al., 2020a) is, to our knowledge, the first and only work to have attempted such an attack in a loose sense. Their threat model still requires a small number of auxiliary samples, such as 20 images. Admittedly, collecting a small number of samples might not be difficult in most cases, but it might still be infeasible in some security-sensitive applications. Specifically, their approach (Li et al., 2020a) attempts to train a substitute model by adopting a classical auto-encoder instead of a supervised classification model, due to the constraint of the small-scale dataset. Overall, to attack a certain sample, their approach consists of three steps: (1) collecting a small number of images; (2) training a substitute model; (3) a white-box attack on the substitute model. If a new sample, especially one from a different class, needs to be attacked, the above process needs to be repeated. Thus, their approach is very resource-intensive. Besides, their attack success rate is still significantly lower than that of existing black-box attacks.
By contrast, our approach does not require any of the above three steps and is entirely training-free. With the help of the visualization technique proposed by Zeiler & Fergus (2014), we observe that the high-frequency component (HFC), e.g., edge and texture features, is dominant in shallow layers, whereas the low-frequency component (LFC), e.g., the plain areas of the image, receives less attention during extraction. Combined with the insight
into the classification logic of DNNs in Sec. 3.1, we observe that HFC plays a crucial role in recognition. As shown in Fig. 1, without LFC, the confidence of HFC is even higher than the raw image. Although it does not hold true for all samples, it does demonstrate the importance of HFC.
Motivated by this, we take the idea of the hybrid image (Oliva, 2013) and propose a novel Hybrid Image Transformation (HIT) attack method to craft adversarial examples. Formally, it needs only three steps yet can effectively fool various DNNs without any training: First, owing to the training-free setting and inspired by the analysis in Sec. 3.2, we simply utilize the matplotlib1 tool to draw several geometric patterns which serve as the proto-patterns, so that the resultant synthesized adversarial patches are rich in regionally homogeneous, repeating and dense HFC. Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Finally, we combine these two components and clip the result to the ε-ball of the raw image to obtain the adversarial hybrid example. Extensive experiments on ImageNet demonstrate the effectiveness of our method. By attacking ten state-of-the-art models in the no-box manner, our HIT significantly increases the average success rate from 68.74% to 98.13%. Notably, our HIT is even competitive with mainstream transfer-based black-box attacks.
2 RELATED WORK
Adversarial Attack. Let x denote a raw image without any perturbation, and let xadv and y denote the corresponding adversarial example and true label, respectively. In general, we use the l∞-norm to measure the perceptibility of adversarial perturbations, i.e., ||xadv − x||∞ ≤ ε. In this paper, we focus on non-targeted attacks (Dong et al., 2018; Xie et al., 2019b; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a) which aim to cause misclassification of a DNN f(·), i.e., f(xadv) ≠ y.
1https://matplotlib.org/
Competitors. Transferability is an important property for adversarial examples. With it, the resultant adversarial example crafted via one model may fool others. For the black-box threat model, Goodfellow et al. (2015) argue that the vulnerability of DNNs is their linear nature, and generate adversarial examples efficiently by performing FGSM which is a single-step attack. Papernot et al. (2017) train a local model with many queries to substitute for the target model. Dong et al. (2018) integrate a momentum term into I-FGSM Kurakin et al. (2017) to stabilize the update direction during the attack iterations. Xie et al. (2019b) apply diverse input patterns to improve the transferability of adversarial examples. Dong et al. (2019a) propose a translation-invariant attack to mitigate the effect of different discriminative regions between models. Gao et al. (2020a) introduce patch-wise perturbation by amplifying the step size and reuse the cut noise to perturb more information in discriminative regions. For the no-box threat model, Li et al. (2020a) attempt to attack the target model without any model query or the accessible pre-trained substitute model. In their work, with a limited amount of data, they try different mechanisms (with or without supervised technique) to train the substitute model, and then utilize this substitute model to craft transferable adversarial examples. Different from these approaches, our method does not depend on transferability since we do not need any substitute model. In this paper, we craft the adversarial examples from the perspective of the classification logic of DNNs.
Frequency Perspective on DNNs. Our approach is highly inspired by existing works which explain the generalization and adversarial vulnerability of DNNs from the frequency perspective. The fact that DNNs have good generalization while being vulnerable to small adversarial perturbations has motivated (Jo & Bengio, 2017; Wang et al., 2020) to investigate the underlying mechanism, suggesting that surface-statistical content with high-frequency property is essential for the classification task. From the perspective of texture vs. shape, Geirhos et al. (2019); Wang et al. (2020) reveal that DNNs are biased towards texture instead of shape. Since the texture content is considered to have high-frequency property, their finding can be interpreted as the DNN being biased towards HFC. On the other hand, adversarial perturbations are also known to have the high-frequency property and various defense methods have also been motivated from this insight (Aydemir et al., 2018; Das et al., 2018; Liu & JaJa, 2019; Xie et al., 2019a). Nonetheless, it remains unknown whether manually designed high-frequency patterns are sufficient for attacking the network.
3 METHODOLOGY
Although many adversarial attack methods (Papernot et al., 2016; Dong et al., 2018; Gao et al., 2020a; Li et al., 2020a) have achieved fairly high success rates in both black-box and no-box settings, they all need training, especially the query-based (Papernot et al., 2016; Zhou et al., 2020) and no-box adversarial perturbations (Li et al., 2020a), whose training is usually time-consuming. A natural question then arises: Is it possible to generate robust adversarial perturbations without any training? In the following subsections, we give our answer and introduce our design.
3.1 MOTIVATION
To better understand the role of HFC and LFC for the classification results of DNNs, we split the information of raw images into these two pieces via Gaussian low-pass filter (defined in Eq. 1).
As illustrated in Fig. 3, when the kernel size is small, i.e., the cutoff frequency is high, the average accuracy of LFC on ten state-of-the-art models is close to 100%. However, if we continue to increase the kernel size, the average accuracy of HFC begins to exceed LFC one. To our surprise, for several specific raw images, e.g., left image of Fig. 1, the true label’s confidence of HFC which is mostly black is even higher than the raw image.
To explain the above phenomenon, we turn to the perspective of feature space. Inspired by recent intermediate feature-based attacks (Zhou et al., 2018; Ganeshan & Babu, 2019; Inkawhich et al., 2019), we argue low-level features are critical to the classification. Interestingly, as shown in Fig. 2, most2 feature maps in
the shallow layers generally extract the edge and texture features (typical ones are highlighted by red boxes), i.e., HFC, and pay less attention to plain areas in images, i.e., LFC. Therefore, if a perturbation can effectively manipulate the HFC of an image, totally different low-level features will be extracted and may lead to misclassification.
3.2 EFFECTIVE ADVERSARIAL HFC
However, what kind of training-free noisy HFC can effectively fool DNNs is still unknown, since simply borrowing the HFC of another raw image performs poorly (see Appendix Sec. A.8). Zhang et al. (2020) have demonstrated that the effectiveness of adversarial perturbation lies in the fact that it contains irrelevant features: the features of the perturbation dominate over the features of the raw image, thus leading to misclassification. Inspired by their finding, we intend to design adversarial HFC with strong irrelevant features, and we conjecture that the following properties are essential.
Regionally Homogeneous. Several recent works (Li et al., 2020b; Gao et al., 2020a; Dong et al., 2019a; Gao et al., 2020b) have demonstrated that adversarial perturbations with the regionally homogeneous (or patch-wise (Gao et al., 2020a)) property can enhance the transferability of adversarial examples. Given that a raw image is itself a composite of homogeneous patterns, the reason might be that such perturbations tend to form irrelevant features that are recognizable by the DNNs.
Repeating. Nguyen et al. (2015) observe that extra copies of the repeating element do improve the confidence of DNNs. From the perspective of strengthening the irrelevant features, it is expected that repeating the content is beneficial.
Dense. Analogous to the above repeating property that performs global repeating, i.e., increases the amount of irrelevant features globally, we can also perform local repeating to strengthen its adversarial effect further. For term distinction, we term this property dense.
To verify the effect of the above properties, we conduct the ablation study in Sec. 4.1, and results support our conjecture. Besides, the analysis in Appendix Sec. A.9 also show that our HIT has potential to become a targeted attack.
3.3 HYBRID IMAGE TRANSFORMATION
Motivated by the above discussion, we take the idea of the hybrid image (Oliva, 2013) to apply our no-box attack. Specifically, Oliva (2013) replaces the HFC of one image with the HFC of another carefully picked image, crafting hybrid images with two different interpretations: one that appears when the image is viewed up close, and another that appears from afar (see Fig. 5 of Oliva (2013)). However, confusing the human visual system (without an ε constraint) cannot guarantee the misclassification of DNNs, since adversarial examples are constrained by the maximum perturbation. Therefore, we propose a novel Hybrid Image Transformation (HIT) attack method which reduces3 the original
2 See the quantitative analysis in Appendix Sec. A.2.
3 Due to the ε constraint, we cannot completely replace the HFC with another.
HFC, and meanwhile, adds well-designed noisy ones to attack DNNs. Our method only needs three steps but can generate robust training-free adversarial perturbations in real time:
First, we provide an adversarial patch xp to generate noisy HFC. Unlike the traditional way that needs training, here we use the matplotlib tool to draw it. Inspired by the observation in Sec. 3.2, we consider three simple regionally homogeneous proto-patterns (to avoid cherry-picking) as our basic adversarial patches: concentric circles, concentric squares, and concentric rhombus in Fig. 4. The effect of concentric pattern is to make the resultant HFC dense. Then we repeat these adversarial patches.
Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Note that several methods can be utilized to extract the HFC and LFC of an image, e.g., the Fourier transform. In this paper, we use an approximate yet simple Gaussian low-pass filter G whose size is (4k+1)×(4k+1) to get the LFC, which can be written as:
G_{i,j} = \frac{1}{2\pi\sigma^{2}} e^{-\frac{i^{2}+j^{2}}{2\sigma^{2}}},   (1)
where σ = k determines the width of G. In general, the larger σ is, the more HFC is filtered out. For simplicity, we do not introduce a separate high-pass filter and also obtain the HFC with G: specifically, the HFC of the adversarial patch is obtained by subtracting its LFC from the patch itself.
Finally, we synthesize these two components to generate our adversarial hybrid image x^{adv}:
x^{adv} = \mathrm{clip}_{x,\epsilon}\big(x * G + \lambda \cdot (x^p - x^p * G)\big), (2)
where “*” denotes the convolution operation, λ is a weight factor that balances the LFC and HFC, and clip_{x,ε}(·) restricts the resultant adversarial example within the ε-ball of the raw image in l∞ space. Therefore, our method is different from adversarial patch attacks (Brown et al., 2017; Liu et al., 2020), which replace a subregion of the image with a well-designed patch.
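Putting the pieces together, a minimal sketch of Eq. 2 could look as follows, reusing gaussian_kernel and low_pass from the sketch above; the assumption that pixel values lie in [0, 255] and the final clamp to a valid image range are ours.

```python
import torch

def hit_attack(x, x_p, G, eps=16.0, lam=1.0):
    """Hybrid Image Transformation (Eq. 2): keep the LFC of the raw image x, add lambda times the
    HFC of the adversarial patch x_p, then clip to the l_inf eps-ball around x."""
    lfc_x = low_pass(x, G)                               # x * G
    hfc_p = x_p - low_pass(x_p, G)                       # x_p - x_p * G
    x_adv = lfc_x + lam * hfc_p
    x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # clip_{x, eps}
    return x_adv.clamp(0.0, 255.0)                       # keep a valid image
```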
As illustrated in Fig. 2(c), our HIT can effectively reduce the relevant HFC and add many irrelevant noisy ones; e.g., in the highlighted yellow boxes in (c), no obvious HFC associated with “cat” can be found at all. As a result, the target model cannot extract correct features to make a reasonable prediction, thus leading to misclassification. Besides, our adversarial examples are less perceptible than those of our competitors (see Appendix Sec. A.4).
4 EXPERIMENTS
Networks. Here we consider ten well-known classification models: VGG19 (Simonyan & Zisserman, 2015), Inception-v3 (Inc-v3) (Szegedy et al., 2016), ResNet-152 (ResNet) (He et al., 2016), DenseNet-121 (Dense) (Huang et al., 2017), WideResNet (WRN) (Zagoruyko & Komodakis, 2016), SENet (Hu et al., 2018), PNASNet (PNA) (Liu et al., 2018), ShuffleNet-v2 (Shuffle) (Ma et al., 2018), SqueezeNet (Squeeze) (Iandola et al., 2017) and MobileNet-v2 (Mobile) (Sandler et al., 2018) as our target models. All the models are available in the Torchvision4, except for PNA and SENet which are obtained from Github5. We also perform our attack on a real-world recognition system in Appendix Sec. A.6.
4https://github.com/pytorch/vision/tree/master/torchvision/models 5https://github.com/Cadene/pretrained-models.pytorch
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images which are resized to 299 × 299 × 3 beforehand) from the ImageNet validation set (Russakovsky et al., 2015) which are classified correctly by all ten networks we consider. We also discuss our methods on other classification tasks in Appendix Sec. A.5.
Parameters. In our experiments, we use the l∞-norm to measure the perceptibility of adversarial noises. Unless otherwise specified, the maximum perturbation ε is set to 16 (results with a smaller ε can be found in Appendix Sec. A.7). For our HIT, the size of the Gaussian kernel G is 17 × 17 (i.e., k = 4), the weight factor λ is set to 1.0 (the discussion about λ is shown in Appendix Sec. A.3), and the density of the proto-pattern is set to 12. Unless otherwise specified, the tile-size is set to 50 × 50, i.e., the tile-scheme is 6 × 6. For no-box methods, we follow the same setting as Li et al. (2020a). For black-box methods, the number of iterations T is set to 10 and the step size α is 1.6. For MI-FGSM (Dong et al., 2018), we adopt the default decay factor µ = 1.0. For DI2-FGSM (Xie et al., 2019b), we set the transformation probability to 0.7. For TI-FGSM (Dong et al., 2019a), the length of the Gaussian kernel is 15. For PI-FGSM (Gao et al., 2020a), the length of the project kernel is 3, and the amplification factor β and project factor γ are 10.0 and 16.0, respectively. Different from PI-FGSM, β and γ for PI-MI-DI2-FGSM and PI-TI-DI2-FGSM (Gao et al., 2020a) are 2.5 and 2.0.
4.1 ABLATION STUDY
In this section, we conduct a series of ablation studies for our HIT. Specifically, we investigate the effectiveness of the regionally homogeneous pattern, the repeating pattern and the dense pattern in Sec. 4.1.1, Sec. 4.1.2 and Sec. 4.1.3, respectively. Besides, we also analyze the effect of the perturbation size on performance in Sec. 4.1.4. The result of HIT without reducing the original HFC beforehand is shown in Appendix Tab. A.12.
4.1.1 THE EFFECT OF REGIONALLY HOMOGENEOUS PATTERN
To the best of our knowledge, regionally homogeneous perturbations (Dong et al., 2019a; Gao et al., 2020a;b; Li et al., 2020b) are mostly crafted based on gradients, so training is necessary. However, whether arbitrary noise can benefit from the homogeneous property remains unclear. Therefore, we compare random noises with semi-random ones to check this:
Random noise: For a given random location pair set L, we call N_r ∈ R^{H×W×C} random noise if it meets the following formula:
N_r[i, j, c] = \begin{cases} \epsilon \cdot \mathrm{random}(-1, 1), & (i, j, c) \in L \\ 0, & \text{else} \end{cases} (3)
Semi-random noise: Different from random noise, semi-random noise has some regularity. Let S denote a semi-random location pair set; here we take H-dimension random noise as an example. N_{sr} can be written as:
N_{sr}[i, :, :] = \begin{cases} \epsilon \cdot \mathrm{random}(-1, 1), & i \in S \\ 0, & \text{else} \end{cases} (4)
where random(−1, 1) returns 1 or −1 randomly. As depicted in Fig. 5, the success rates of N_{sr} are consistently higher than those of N_r. As the number of perturbed pixels increases, the margin between them also increases. This demonstrates that training-free noise can also benefit from the regionally homogeneous property. To exploit this conclusion further, in Fig. 4 we extend semi-random noise to other, more complex “continuous” patterns, e.g., circles.
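For reference, a simple way to generate the two kinds of noise in Eqs. 3 and 4 is sketched below; the function names are ours, and giving each selected row a single random sign is one possible reading of Eq. 4 that makes the noise regionally homogeneous.

```python
import numpy as np

def random_noise(shape, n_pixels, eps=16.0):
    """N_r (Eq. 3): +-eps at n_pixels randomly chosen (i, j, c) positions, 0 elsewhere."""
    noise = np.zeros(shape, dtype=np.float32)
    flat_idx = np.random.choice(noise.size, size=n_pixels, replace=False)
    noise.flat[flat_idx] = eps * np.random.choice([-1.0, 1.0], size=n_pixels)
    return noise

def semi_random_noise(shape, n_rows, eps=16.0):
    """N_sr (Eq. 4): whole rows along the H dimension are set to +-eps; each selected row gets
    a single random sign, producing regionally homogeneous stripes."""
    H, W, C = shape
    noise = np.zeros(shape, dtype=np.float32)
    rows = np.random.choice(H, size=n_rows, replace=False)
    noise[rows] = eps * np.random.choice([-1.0, 1.0], size=(n_rows, 1, 1))
    return noise
```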
4.1.2 THE EFFECT OF REPEATING PATTERN
In this section, we show the experimental results of our proposed HIT w.r.t different tile-sizes. Here we consider seven different tile-schemes including 1 × 1, 2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6 and 7× 7, and the tile-sizes thereby are 300× 300, 150× 150, 100× 100, 75× 75, 60× 60, 50× 50, 42 × 42, respectively. We will resize back to 299 × 299 × 3 to match the size of raw images. The visualizations of these patches can be found in Appendix Sec. A.11.
In Fig. 5, we report the average attack success rates over the ten models. The success rates increase very quickly at first and then stay stable after the 4 × 4 tile-scheme. If we continue to increase the tile-scheme (i.e., decrease the tile-size), the attack success rates may go down. The main reason might be the distortion caused by the resizing operation, which indirectly blurs the resultant tiled adversarial patches and thus reduces the available HFC. Compared to the other two geometric patterns, we find that circle patches always perform the best. For example, the success rate reaches 88.67% when the tile-scheme is 6 × 6. This result demonstrates that the attack ability of training-free perturbations can benefit from the repeating property.
4.1.3 THE EFFECT OF DENSE PATTERN
To validate the effect of dense pattern, we analyze the average attack success rates w.r.t densities. Since the trends of different patterns are similar, we only discuss the results of circle patch whose tile-scheme is 6 × 6. Here we control the density from 1 to 12. For example, “2” denotes only two circles in the proto-pattern, and more visualizations can be found in Appendix Sec. A.11.
As shown in Fig. 5, the success rates increase rapidly at the beginning, then remain stable after the density exceeds 8, and reach the peak at 12. This experiment demonstrates the effectiveness of dense pattern. Therefore, we set the default density of each proto-pattern to 12 in our paper.
4.1.4 THE SIZE OF PERTURBATION
In this section, we study the influence of the maximum perturbation ε on the performance of our HIT. Fig. 6 depicts the growth trend of each model under different adversarial patches. No matter what the adversarial patch is, the performance rises rapidly at first and then remains stable after ε exceeds 16 for most models. Besides, the circle patch (curve-like) always performs best, while the performance of the other two adversarial patches (straight-like) is similar. For example, when ε = 16 and the target model is VGG19, the attack success rate of the circle patch is 94.75%, while those of the square and rhombus patches are 81.52% and 83.63%, respectively. This demonstrates that DNNs are more vulnerable to curve-like perturbations than straight-like ones (we also analyze the reasons for this in Appendix Sec. A.10).
Another observation from this result is that our HIT can serve as a universal attack, although not in a strict sense. As demonstrated in Fig. 6, when ε = 10 which is the common constraint for universal adversarial perturbations (Mopuri et al., 2017; Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018; Reddy Mopuri et al., 2018; Liu et al., 2019; Hashemi et al., 2020), our HIT with circle patch can achieve a success rate of 63.23% on average. Notably, it can be up to 89.74% on Squeeze.
4.2 COMPARISON OF HIT WITH NO-BOX ATTACKS
In this section, we compare the performance of our no-box HIT with state-of-the-art no-box attacks (Li et al., 2020a). Note that Li et al. (2020a) need up to 15,000 iterations to train a substitute model, and then run an extra 200 iterations of baseline attacks and 100 iterations of ILA (Huang et al., 2019), which is extremely time-consuming. Significantly different from Li et al. (2020a), our HIT is training-free and does not require any auxiliary images to train a substitute model, thus achieving real-time attack.
The experimental results are reported in Tab. 1. A first glance shows that our HIT outperforms Li et al. (2020a) by a large margin. No matter what the adversarial patches are, our HIT can consistently achieve a success rate of over 92% on average. By contrast, the best performance of Li et al. (2020a), i.e., Prototypical∗ w/ Sup, is only 68.74% on average. Notably, our HIT with circle patch remarkably outperforms Li et al. (2020a) by 29.39% on average and 42.04% at most when attacking PNA.
4.3 COMPARISON OF HIT WITH BLACK-BOX ATTACKS
In this section, we compare our no-box HIT with mainstream transfer-based attacks. For MI-FGSM, DI2-FGSM, PI-FGSM and their extensions PI-MI-DI2-FGSM, we utilize VGG19, Inc-v3, ResNet
and Dense to iteratively (ten forward & backward propagation) craft adversarial examples and use them to attack the rest of black-box models. As for our proposed HIT, we do not need any substitute model or training process. The results are summarized in Tab. 2, where the models in the leftmost column are substitute models, and the bottom block shows the results of our HIT.
As demonstrated in Tab. 2, our HIT is even on par with state-of-the-art PI-MI-DI2-FGSM. Specifically, on average, the best performance of PI-MI-DI2-FGSM is 88.84%, and our HIT based on circle patch can get up to 88.67%. However, the transferability of adversarial examples largely depends on the substitute model. For example, when adversarial examples are crafted via Inc-v3, the performance of PI-MI-DI2-FGSM is limited and our HIT can remarkably outperform it by 23.34% on average. Besides, when the target model is in lightweight models, e.g., Shuffle, our method consistently outperforms these mainstream transfer-based attacks by a large margin.
Since adversarial training techniques (Madry et al., 2018b; Tramèr et al., 2018; Awasthi et al., 2021) can effectively defend against adversarial examples, we conduct an extra experiment on several defense models to demonstrate the effectiveness of our method. The additional target models include three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens, and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA). As demonstrated in previous works (Guo et al., 2019; Sharma et al., 2019), low-frequency perturbations are more effective for attacking defense models. Motivated by this, we change the tile-schemes to smaller ones (i.e., 2 × 2 for EAT and 1 × 1 for FD) and keep the other parameters the same (see more details in Appendix Sec. A.12). As observed in Tab. 3, our HIT is effective even against defense models. Notably, HIT based on the circle patch can successfully attack Inc-v3ens4 with a success rate of 61.86%. Besides, for the more robust FD models, even when adversarial examples are crafted via an ensemble of VGG19, Inc-v3, ResNet and Dense, transfer-based PI-TI-DI2-FGSM is still inferior to our HIT. This experimental result reveals that current defenses have not achieved real security and are even vulnerable to training-free adversarial examples.
5 CONCLUSION
In this paper, we rethink the classification logic of deep neural networks with respect to adversarial examples. We observe that HFC dominates in low-level features and plays a crucial role in classification. Besides, we demonstrate through empirical and experimental analysis that DNNs are vulnerable to training-free perturbations with the regionally homogeneous, repeating and dense properties. Motivated by these observations, we propose a novel Hybrid Image Transformation (HIT) attack method that combines the LFC of raw images with the HFC of our well-designed adversarial patches to destroy the useful features and add strong irrelevant noisy ones. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the proposed method. Surprisingly, our simple method outperforms existing no-box attacks by a significant margin and is even on par with transfer-based black-box attacks that require a substitute model to craft adversarial examples.
From another perspective, since most models are vulnerable to our method, our adversarial examples may capture common “blind spots” of these models. Therefore, a defense can improve robustness and stability by covering these “blind spots”, i.e., by applying data augmentation with our adversarial examples.
A APPENDIX
A.1 SETUP
Networks. Here we consider ten well-known classification models: VGG19 Simonyan & Zisserman (2015), Inception-v3 (Inc-v3) Szegedy et al. (2016), ResNet-152 (ResNet) He et al. (2016), DenseNet-121 (Dense) Huang et al. (2017), WideResNet (WRN) Zagoruyko & Komodakis (2016), SENet Hu et al. (2018), PNASNet (PNA) Liu et al. (2018), ShuffleNet-v2 (Shuffle) Ma et al. (2018), SqueezeNet (Squeeze) Iandola et al. (2017) and MobileNet-v2 (Mobile) Sandler et al. (2018) as our target models.
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images) from the ImageNet validation set Russakovsky et al. (2015) which are classified correctly by all ten networks we consider. Besides, all images are resized to 299× 299× 3 beforehand. Parameters. In our experiments, we use l∞-norm to measure the perceptibility of adversarial noises, the maximum perturbation ε is set to 16. For our HIT, the size of Gaussian kernel G is 17× 17 (i.e. k = 4), weight factor λ is set to 1.0.
A.2 QUANTITATIVE ANALYSIS ABOUT HFC AND LFC
To quantitatively analyze whether HFC or LFC is dominant in the feature maps of the shallow layer, we conduct this experiment. Considering that the size of each shallow-layer feature map in Fig. 2(b) is 147, we first resize Fig. 2(a) (299 × 299) to 147 × 147 and denote the resultant image by x_r. Then we get x_r^H (the HFC of x_r) by:
x_r^H = x_r - x_r * G. (5)
To quantitatively compare the responses to HFC and LFC, we calculate the average response of each feature map φ(x) in the low-frequency regions versus that in the high-frequency regions. To that end, we generate two masks to distinguish the two regions. More specifically, the mask of the high-frequency regions M^H can be written as:
M^H_{i,j} = \begin{cases} 1, & |x_r^H(i,j)| > \tau \\ 0, & \text{else} \end{cases} (6)
where τ = 20 is a pre-set threshold applied to filter out low responses. After obtaining M^H, the mask of the LFC, M^L, can be easily derived:
M^L = 1 - M^H. (7)
Therefore, the average response of HFC aH and the average response of LFC aL can be expressed as:
a^H = \frac{\sum_{i,j} M^H \, \phi(x)}{\sum_{i,j} M^H}, (8)

a^L = \frac{\sum_{i,j} M^L \, \phi(x)}{\sum_{i,j} M^L}. (9)
In this paper, if a feature map satisfies a^H > a^L, we call it “HFC dominant”; otherwise we call it “LFC dominant”. As demonstrated in Fig. 7, most feature maps focus on HFC, and the ratio of “HFC dominant” to “LFC dominant” maps is 3:1.
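A compact sketch of this procedure (Eqs. 5-9) is given below; treating the resized image as a single grayscale channel and using a SciPy-based convolution are our simplifications, not choices stated in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def is_hfc_dominant(feature_map, x_r, G, tau=20.0):
    """Decide whether one shallow feature map phi(x) is 'HFC dominant'.
    feature_map and x_r are 2-D arrays of the same spatial size (147 x 147 in the paper);
    x_r is a single-channel view of the resized raw image, G a numpy Gaussian kernel."""
    x_h = x_r - convolve2d(x_r, G, mode="same", boundary="symm")   # Eq. 5: HFC of x_r
    m_h = (np.abs(x_h) > tau).astype(np.float32)                   # Eq. 6: high-frequency mask
    m_l = 1.0 - m_h                                                # Eq. 7: low-frequency mask
    a_h = (m_h * feature_map).sum() / max(m_h.sum(), 1.0)          # Eq. 8: mean response on HFC
    a_l = (m_l * feature_map).sum() / max(m_l.sum(), 1.0)          # Eq. 9: mean response on LFC
    return a_h > a_l
```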
A.3 THE EFFECT OF WEIGHT FACTOR λ
In this section, we discuss the effect of different weight factors λ on the experimental results. We tune λ from 0.1 to 10, and the results are shown in Fig. 8. When λ ≤ 1, the attack success rate increases rapidly at the beginning and then remains stable. However, further increasing λ from 1 to 10 does not improve the performance. Actually, the success rates keep stable with a slight drop.
Apparently, a larger λ leads to larger perturbations (i.e., it increases the average perturbation over all pixels), and our reported results are somewhat inconsistent with the linear assumption (Goodfellow et al., 2015). This is probably because our HIT is completely independent of any prior information (e.g., the gradient of any model or the data distribution), so a larger noise does not necessarily push the prediction farther from the true label. Besides, we notice that the activation functions of these victim models are all ReLU, which may be another reason for this phenomenon. More specifically, ReLU is defined as
\mathrm{ReLU}(z) = \begin{cases} 0, & z < 0 \\ z, & \text{else} \end{cases} (10)
where z is the intermediate output before the activation layer. If the intermediate adversarial perturbation is large enough, i.e., δ′ ≤ −z, then ReLU(z + δ′) will return 0. But for a misclassification label y^{adv} ≠ y, a positive activation that differs from the original z may be more helpful than 0.
A.4 QUALITATIVE COMPARISON FOR ADVERSARIAL EXAMPLES
To better reflect the advantages of our approach, in this section we compare the visual quality of the generated adversarial examples. Specifically, we consider state-of-the-art black-box PI-FGSM (Gao et al., 2020a) and no-box attack (Li et al., 2020a) as our competitor. As depicted in Fig. 9, both PI-FGSM (Gao et al., 2020a) and no-box attack (Li et al., 2020a) will cause more perceptible distortions. In contrast, the adversarial perturbation crafted by our HIT is much more imperceptible.
A.5 ATTACK OTHER CLASSIFICATION TASKS
To highlight the practical property of our HIT, in this section we apply our HIT to other classification tasks. Specifically, we consider three well-known fine-grained classification datasets, including CUB-200-2011 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC Aircraft (Maji et al., 2013), and the victim model is trained via DCL (backbone: Res-50) (Chen et al., 2019). The resolution of the inputs is 448 × 448. Therefore, we set the tile-size to 448 / tile-scheme. For example, if the tile-scheme is 4 × 4, then the tile-size is 112. To ensure that our default setting (i.e., λ and tile-scheme) for HIT is applicable, we conduct the two experiments below.
Discussion on tile-scheme. We first report the average attack success rates (%) of our HIT w/ Circle w.r.t. the tile-scheme in Tab. 4. From the results, we can observe that our HIT is also effective on other datasets. Notably, our HIT can fool DCL with about a 90% success rate on the Stanford Cars dataset. Besides, a relatively smaller tile-size is also helpful in improving the attack success rate, which is consistent with the conjecture given in Sec. 3.2.
Discussion on λ. We then report the average attack success rate (%) of our HIT w/ Circle w.r.t. λ in Tab. 5. Although setting λ = 1.0 is not optimal, the gap between the best results and those of λ = 1.0 is very small. Therefore, our default setting for HIT is still applicable.
A.6 ATTACK REAL-WORLD RECOGNITION SYSTEM
To further demonstrate the practical property of our HIT, in this section we apply our HIT (w/ Circle) to attack a real-world recognition system, i.e., the Google Cloud Vision API6. Different from existing works (Chen et al., 2017; Brendel et al., 2017), which need a large number of queries for optimization, we directly apply our HIT with the default setting (i.e., the tile-scheme is 6×6 and λ = 1.0). As illustrated in Fig. 10, our no-box HIT with ε = 16 can effectively change the top-k labels. For example, the top-5 labels of “fish” are “Fish”, “Fin”, “Seafood”, “Ray-finned fish” and “Marine biology”, while those of our adversarial example are
“Reptile”, “Turtle”, “Terrestrial Animal”, “Pattern” and “Art”. Notably, there is no overlap in the top-k labels between the clean image and our adversarial example, which also demonstrates the effectiveness of our no-box HIT.
A.7 RESULTS FOR SMALLER PERTURBATION
In this experiment, we report the average success rates (%) of state-of-the-art black-box attacks (further adding the Ghost Network algorithm (Li et al., 2020c) as a competitor) and our proposed no-box attack with a smaller perturbation ε = 8.
As demonstrated in Tab. 6, our proposed method is still competitive with mainstream transfer-based black-box attacks, even though they combine many effective techniques. Remarkably, our no-box attack can significantly outperform Ghost Networks (+MI-FGSM). Although Ghost Networks (+PI-MI-DI-FGSM) is much more powerful, our no-box attack can surpass it in some cases. For example, when fooling Shuffle, our HIT (w/ Circle) outperforms Ghost Networks (+PI-MI-DI-FGSM) by about 8%.
A.8 RAW IMAGE FOR ATTACK
To highlight the effectiveness of our designed adversarial patches, here we conduct an experiment in which raw images (shown in Fig. 11) serve as the “adversarial patch”. More specifically, we utilize the HFC of these raw images (like Din et al.) to manipulate adversarial examples. However, even the HFC of texture-rich raw images (e.g., “Grifola frondosa” and “Capitulum”) does not achieve a good result. As demonstrated in Tab. 8, the average attack success rates are all less than 40%. By contrast, our well-designed adversarial patches can achieve a success rate of nearly 90%, which demonstrates the effectiveness of our design.
6https://cloud.google.com/vision/docs/drag-and-drop
Table 6: The comparison of attack success rates (%) on normally trained models between black-box attacks and our no-box attacks with maximum perturbation ε = 8. For black-box attacks, adversarial examples are crafted via Inc-v3.
Attacks VGG19 ResNet DenseNet WRN SENet PNA Shuffle Squeeze Mobile AVG.
MI-FGSM 21.26 15.63 20.47 14.45 11.58 23.36 24.13 32.61 25.64 21.01
Ghost Networks (+MI-FGSM) 27.12 17.45 22.31 14.92 13.43 28.63 30.08 40.80 33.67 25.38
DI-FGSM 18.41 11.24 16.44 10.19 8.28 17.59 15.03 18.40 18.20 14.86
PI-FGSM 24.12 14.98 22.91 15.38 12.21 27.32 25.93 39.20 27.29 23.26
PI-MI-DI2-FGSM 40.88 31.02 41.34 29.70 25.56 38.46 36.79 46.70 42.38 36.98
Ghost Networks (+PI-MI-DI2-FGSM) 63.56 43.59 55.21 40.91 40.33 54.73 63.48 81.98 73.46 57.47
HIT w/ 6 × 6 Circle 37.64 36.21 40.80 30.82 21.36 37.63 71.45 79.34 69.90 47.24
HIT w/ 6 × 6 Square 13.40 17.50 25.85 21.99 13.79 18.31 53.03 61.61 50.88 30.71
HIT w/ 6 × 6 Rhombus 17.24 19.95 27.65 22.00 12.85 18.91 58.84 63.49 57.90 33.20
A.9 DISCUSSION ON TARGETED ATTACK
Figure 12: We show (a) our adversarial patch and (b) some images from the ImageNet dataset which are classified as “shower curtain” (Label ID: 794). The bottom row shows their HFC extracted by Eq. 1.
Although we do not explicitly force the resultant adversarial examples to be misclassified as a specific targeted label, we observe that our HIT tends to perform a targeted attack due to the frequency-domain operation and the classification logic of DNNs. In Tab. 7, we report the top-5 prediction labels of our adversarial examples, which are crafted with the 6 × 6 concentric circle pattern. A first glance shows that almost all models tend to misclassify adversarial examples generated by our HIT as several specific labels, e.g., 794 (“shower curtain”). Furthermore, this phenomenon is more obvious for Mobile and ResNet, whose ratios are up to 47.69% and 75.75%, respectively.
To better understand this phenomenon, we show our adversarial patch and several clean images labeled “shower curtain” from the ImageNet dataset in Fig. 12. We observe that the HFC of “shower curtain” is somewhat aligned with our adversarial patch, i.e., they all show similar repetitive circles. We suspect this phenomenon might be because our proposed perturbation dominates the overall features of the image, and instead the original features of the image become noise. Since existing algorithms are not effective yet, and simply replacing our adversarial patch with a clean targeted image does not achieve an effective targeted attack (as demonstrated in Sec. A.8), we will further study the selection and generation of adversarial patches, e.g., fusing the shallow texture information of a targeted distribution to guide the resultant adversarial examples towards the targeted category.
A.10 WHY CIRCLE PATTERN IS USUALLY BETTER?
Here we attempt to provide an insight into the performance gap between Circle and the other two patterns by analyzing the intermediate feature response. Without loss of generality, we set the layer index to “depth of each DNN” / 2 and report the average cosine similarity of the features between 10,000 raw images and their adversarial examples. The results in Tab. 9 show that Circle consistently leads to lower cosine similarity than the other patterns. Consequently, the features fed to the deep layers are more featureless, thus leading to misclassification.
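A sketch of how such a similarity could be computed is shown below; reading the intermediate features out with a forward hook on a module at roughly half the network depth is our assumption about how the numbers in Tab. 9 were obtained.

```python
import torch
import torch.nn.functional as F

def avg_mid_feature_cosine(model, layer, x_clean, x_adv):
    """Average cosine similarity between intermediate features of raw and adversarial batches."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, inp, out: feats.update(out=out))
    with torch.no_grad():
        model(x_clean)
        f_clean = feats["out"].flatten(1)     # flatten spatial dims per sample
        model(x_adv)
        f_adv = feats["out"].flatten(1)
    handle.remove()
    return F.cosine_similarity(f_clean, f_adv, dim=1).mean().item()
```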
A.11 VISUALIZATION OF OUR ADVERSARIAL PATCHES
In this section, we first visualize the concentric circle with respect to densities in Fig. 13. Here we control the density from 1 to 12, e.g., “2” denotes only two circles in the proto-pattern. With the increase of density, the distance between any two circles will also be reduced.
Then we list our adversarial patches with respect to tile-schemes in Fig. 14. More specifically, we first crop the 600 × 600 × 3 proto-patterns to 300 × 300 × 3 adversarial patches, then resize them into different tile-sizes (e.g., 150 × 150 × 3) and tile them to 300 × 300 × 3, finally resize back to 299×299×3 to match the size of raw images. As we can see, if we decrease the tile-size, distortion is inevitable.
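The tiling procedure described above could be implemented roughly as follows; the centre-crop position and the PIL-based resizing are our assumptions, not details fixed by the paper.

```python
import numpy as np
from PIL import Image

def tile_patch(proto, tile_scheme=6, out_size=299):
    """Build the tiled adversarial patch from a 600x600x3 proto-pattern (Sec. A.11):
    crop to 300x300, resize to the tile size, tile it tile_scheme x tile_scheme times,
    and resize back to the raw-image resolution."""
    cropped = Image.fromarray(proto).crop((150, 150, 450, 450))       # 600 -> 300 centre crop
    tile_size = 300 // tile_scheme                                     # e.g. 50 for the 6 x 6 scheme
    tile = np.asarray(cropped.resize((tile_size, tile_size)))
    tiled = np.tile(tile, (tile_scheme, tile_scheme, 1))
    return np.asarray(Image.fromarray(tiled).resize((out_size, out_size)))
```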
A.12 THE EFFECT OF REPEATING PATTERN FOR DEFENSES
In this section, we further consider six additional well-known defense models, which including three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens,7 and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D), ResNeXt101 DenoiseAll (ResNeXtDA),8 to discuss the effect of repeating pattern.
7https://github.com/tensorflow/models/tree/archive/research/adv_ imagenet_models
8https://github.com/facebookresearch/ImageNet-Adversarial-Training
Table 9: The cosine similarity comparison for different patterns.
Attack VGG19 Inc-v3 ResNet Dense WRN SENet PNA Squeeze Shuffle Mobile Avg.
HIT w/ Square (Ours) 0.6215 0.7419 0.7638 0.8090 0.7599 0.5838 0.7437 0.6940 0.6704 0.4838 0.6872
HIT w/ Rhombus (Ours) 0.6218 0.7458 0.7448 0.7853 0.7280 0.6258 0.7672 0.6738 0.6461 0.4005 0.6746
HIT w/ Circle (Ours) 0.5472 0.6685 0.7306 0.7779 0.7223 0.5613 0.6747 0.6643 0.6062 0.3617 0.6314
Figure 13: We visualize our proto-patterns w.r.t. densities (from 1 to 12). Here we take concentric circles as an example.
Generally, a smaller tile-scheme generates a more perceptible perturbation. As shown in Fig. 15, the area of each regionally homogeneous (i.e., continuous) line in adversarial examples crafted by 1 × 1 patches is bigger than that of 6 × 6 ones. Different from the trends on NTs, smaller tile-schemes are more effective for attacking defense models. As demonstrated in Tab. 16, when attacking EAT, 2 × 2 adversarial patches perform best, and further increasing the tile-scheme significantly degrades performance, e.g., 7 × 7 rhombuses only successfully attack EAT by 6.61% on average. The trend of FD is similar to that of EAT, except that 1 × 1 adversarial patches work best. The reason might be that thin regionally homogeneous lines are more easily filtered out by the denoising block of Xie et al. (2019a). Therefore, in our paper, we use 2 × 2 and 1 × 1 adversarial patches to attack EAT and FD, respectively.
Summary Of The Paper
The proposed approach involves creating adversarial examples in a training-free manner by manipulating the frequency components of the image. High-frequency information from simple geometric patterns is combined with the low-frequency component of the input to create a hybrid image. The resultant image is shown to fool classifiers under multiple settings, which presents an easy way to construct adversarial examples.
Review
Strengths:
— The creation of the adversarial example is simple and well analyzed, showing improved performance compared to previous approaches across multiple settings. The construction of an adversarial example in a training-free manner without using gradients is an interesting phenomenon and can be used to improve the robustness of neural networks.
— The paper is well written and easy to follow.
Concerns:
— Looking at the qualitative examples in Figure 13, the resultant adversarial example contains artifacts based on the type of pattern used. This can make it easier to identify through manual visual inspection. More qualitative examples of the proposed approach compared with the examples from Li et al. would be helpful.
— The quantitative experiments are mainly presented for ImageNet dataset. It would be interesting to see the effectiveness of the proposed approach for other datasets such as Places365 or even other vision tasks. This would also provide an opportunity to discuss choosing different hyperparameters associated with the approach for custom datasets.
— Attacking a real world recognition system as done in Li et al., would show the practicality of the algorithm.
— In Table 3, the proposed attack is compared against gradient-based attacks on different adversarially trained models. One question that arises is why the proposed approach is better than even gradient-based attacks for the defense models. Specifically, ResNet152_D involves a denoising operation specifically designed to suppress the effect of high-frequency components, so it would be expected that the proposed approach, which relies on the effect of HFC, would not be better than gradient-based attacks. More analysis or insight into this experiment would be useful. The authors are also encouraged to discuss the work by Yin et al. (arXiv:1906.08988) which is also along similar lines.
— Metzen et al. ( arXiv:1702.04267 ) showed that auxiliary CNNs can be used to detect adversarial perturbations. It was shown later that gradient based adversarial examples can be constructed to fool both the classifier and adversarial detector (arXiv:1705.07263 ). However, since the proposed approach does not rely on substitute models or gradient based information, it would be interesting to see if such an auxiliary classifier can be an effective defense for the proposed approach which relies on feature statistics from HFC.
Minor comments:
— Figure 1 suggests that High-frequency component has more confidence than the original image itself for all data. Although the authors mention in the text that this is not the case for all input images, Figure 1 can mislead readers into thinking this is true for all examples. The authors are suggested to either include the text in the figure caption or alter the position of the image.
— The related work section discusses some important works in the adversarial attack literature. Authors are encouraged to also discuss works related to adversarial patches which have been used to fool multiple vision tasks and how the proposed work (which does not rely on learning the perturbation ) is different from such works. Since both approaches rely on using high frequency information from the perturbation, it would improve understanding among readers. |
However, what kind of training-free noisy HFC can effectively fool DNNs is still unknown because the performance of any other raw image’s HFC is unsatisfactory (see Appendix Sec. A.8). Zhang et al. (2020) have demonstrated that the effectiveness of adversarial perturbation lies in the fact that it contains irrelevant features. The features of perturbation dominate over the features in the raw image, thus leading to misclassification. Inspired by their finding, we intend to design adversarial HFC with strong irrelevant features, and we conjecture that the following properties are essential.
Regionally Homogeneous. Several recent works (Li et al., 2020b; Gao et al., 2020a; Dong et al., 2019a; Gao et al., 2020b) have demonstrated that adversarial perturbations with regionally homogeneous (or patch-wise (Gao et al., 2020a)) property can enhance the transferability of adversarial examples. Inspired by that the raw image is a composite of homogeneous patterns, the reason might be attributed to that this perturbation tend to form irrelevant features recognizable by the DNNs.
Repeating. Nguyen et al. (2015) observe that extra copies of the repeating element do improve the confidence of DNNs. From the perspective of strengthening the irrelevant features, it is expected that repeating the content is beneficial.
Dense. Analogous to the above repeating property that performs global repeating, i.e., increases the amount of irrelevant features globally, we can also perform local repeating to strengthen its adversarial effect further. For term distinction, we term this property dense.
To verify the effect of the above properties, we conduct the ablation study in Sec. 4.1, and results support our conjecture. Besides, the analysis in Appendix Sec. A.9 also show that our HIT has potential to become a targeted attack.
3.3 HYBRID IMAGE TRANSFORMATION
Motivated by the above discussion, we take the idea of hybrid image (Oliva, 2013) to apply our no-box attacks. Specifically, Oliva (2013) replaces the HFC of one image with the HFC of another carefully picked image and craft hybrid images with two different interpretations: one that appears when the image is viewed up-close, and the other that appears from afar (see Fig.5 of Oliva (2013)). However, confusing human’s vision system (without ε constrain) cannot guarantee the misclassification of DNNs since adversarial examples are constrained by the maximum perturbation. Therefore, we propose a novel Hybrid Image Transformation (HIT) attack method which reduces3 original
2see quantitative analysis in Appendix Sec. A.2. 3Due to the ε constraint, we can not completely replace HFC with others
HFC, and meanwhile, adds well-designed noisy ones to attack DNNs. Our method only needs three steps but can generate robust training-free adversarial perturbations in real time:
First, we provide an adversarial patch xp to generate noisy HFC. Unlike the traditional way that needs training, here we use the matplotlib tool to draw it. Inspired by the observation in Sec. 3.2, we consider three simple regionally homogeneous proto-patterns (to avoid cherry-picking) as our basic adversarial patches: concentric circles, concentric squares, and concentric rhombus in Fig. 4. The effect of concentric pattern is to make the resultant HFC dense. Then we repeat these adversarial patches.
Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Note that several methods can be utilized to extract the HFC and LFC of an image, e.g., Fourier transformation. In this paper, we use an approximated yet simple Gaussian low-pass filterGwhose size is (4k+1)×(4k+1) to get LFC, which can be written as:
Gi,j = 1 2πσ2 e(− i2+j2 2σ2 ), (1)
where σ = k determines the width of our G. In general, the larger σ is, the more HFC is filtered out. We are not going to introduce a new high-pass filter here for simplicity and just get HFC byG. More specifically, we obtain HFC by subtracting the LFC of the adversarial patch.
Finally, we can synthesize these two part components to generate our adversarial hybrid image xadv: xadv = clipx ,ε(x ∗G+ λ · (xp − xp ∗G)), (2) where “*” denotes convolution operation, λ is a weight factor to balance the LFC and HFC, and clipx,ε(·) restricts the resultant adversarial examples within the ε-ball of the raw image in l∞ space. Therefore, our method is different from adversarial patch attacks (Brown et al., 2017; Liu et al., 2020) which replace a subregion of the image with a well-design patch.
As illustrated in Fig. 2(c), our HIT can effectively reduce relevant HFC and add many other irrelevant noisy ones, e.g., highlighted yellow boxes in (c) cannot find any obvious HFC associated with “cat” at all. As a result, the target model can not extract correct features to make a reasonable prediction, thus leading to misclassification. Besides, our adversarial examples are less perceptible than those of our competitors (See Appendix Sec. A.4).
4 EXPERIMENTS
Networks. Here we consider ten well-known classification models: VGG19 (Simonyan & Zisserman, 2015), Inception-v3 (Inc-v3) (Szegedy et al., 2016), ResNet-152 (ResNet) (He et al., 2016), DenseNet-121 (Dense) (Huang et al., 2017), WideResNet (WRN) (Zagoruyko & Komodakis, 2016), SENet (Hu et al., 2018), PNASNet (PNA) (Liu et al., 2018), ShuffleNet-v2 (Shuffle) (Ma et al., 2018), SqueezeNet (Squeeze) (Iandola et al., 2017) and MobileNet-v2 (Mobile) (Sandler et al., 2018) as our target models. All the models are available in the Torchvision4, except for PNA and SENet which are obtained from Github5. We also perform our attack on a real-world recognition system in Appendix Sec. A.6.
4https://github.com/pytorch/vision/tree/master/torchvision/models 5https://github.com/Cadene/pretrained-models.pytorch
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images which are resized to 299 × 299 × 3 beforehand) from the ImageNet validation set (Russakovsky et al., 2015) which are classified correctly by all ten networks we consider. We also discuss our methods on other classification tasks in Appendix Sec. A.5.
Parameters. In our experiments, we use l∞-norm to measure the perceptibility of adversarial noises, unless specified, the maximum perturbation ε is set to 16 (results with a smaller ε can be found in Appendix Sec. A.7). For our HIT, the size of Gaussian kernel G is 17 × 17 (i.e. k = 4), weight factor λ is set to 1.0 (the discussion about λ is shown in Appendix Sec. A.3), and density of proto-pattern is set to 12. For tile-size, unless specified, we set to 50 × 50, i.e., tile-scheme is 6× 6. For no-box methods, we follow the same setting as (Li et al., 2020a). For black-box methods, the iteration T is set to 10 and the step size α is 1.6. For MI-FGSM (Dong et al., 2018), we adopt the default decay factor µ = 1.0. For DI2-FGSM (Xie et al., 2019b), we set the transformation probability to 0.7. For TI-FGSM (Dong et al., 2019a), the length of Gaussian kernel is 15. For PI-FGSM (Gao et al., 2020a), the length of project kernel is 3, the amplification factor β and project factor γ are 10.0 and 16.0, respectively. Different from PI-FGSM, β and γ for PI-MI-DI2-FGSM and PI-TI-DI2-FGSM (Gao et al., 2020a) is 2.5 and 2.0.
4.1 ABLATION STUDY
In this section, we conduct a series of ablation study for our HIT. Specifically, we investigate the effectiveness of regionally homogeneous pattern, repeating pattern and dense pattern in Sec. 4.1.1, Sec. 4.1.2 and Sec. 4.1.3, respectively. Besides, we also analyze the effect of perturbation size on the performance in Sec. 4.1.4. For the result of HIT without reducing HFC beforehand is shown in Appendix Tab. A.12.
4.1.1 THE EFFECT OF REGIONALLY HOMOGENEOUS PATTERN
To the best of our knowledge, regionally homogeneous perturbations (Dong et al., 2019a; Gao et al., 2020a;b; Li et al., 2020b) are mostly based on the gradient to craft, thereby training is necessary. However, whether arbitrary noise can benefit from the homogeneous property remains unclear. Therefore, we compare random noises with semi-random ones to check it:
Random noise: For a given random location pair set L, we callNr ∈ RH×W×C random noise if it meets the following formula:
Nr[i, j, c] = { ε · random(−1, 1), (i, j, c) ∈ L 0, else
(3)
Semi-random noise: Different from the random noise, semi-random noise has some regularity. Let S denotes a semi-random location pair set, and here we take H-dimension random noise as an example. Nsr can be written as:
Nsr[i, :, :] = { ε · random(−1, 1), i ∈ S 0, else
(4)
where random(-1, 1) returns 1 or -1 randomly. As depicted in Fig. 5, the success rates ofNsr are consistently higher than those of Nr. As the number of perturbed pixels increases, the margin between them also increases. This demonstrates that training-free noise can also benefit from regionally homogeneous property. To exploit this conclusion further, in Fig. 4, we extend semi-random noise to other more complex “continuous” patterns, e.g., circle.
4.1.2 THE EFFECT OF REPEATING PATTERN
In this section, we show the experimental results of our proposed HIT w.r.t different tile-sizes. Here we consider seven different tile-schemes including 1 × 1, 2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6 and 7× 7, and the tile-sizes thereby are 300× 300, 150× 150, 100× 100, 75× 75, 60× 60, 50× 50, 42 × 42, respectively. We will resize back to 299 × 299 × 3 to match the size of raw images. The visualizations of these patches can be found in Appendix Sec. A.11.
In Fig. 5, we report the average attack success rates of ten models. The success rates increase very quickly at first and then keep stable after 4× 4 tile-scheme. If we continue to increase the tile-size, the attack success rates may go down. The main reason might be that the distortion caused by the resizing operation. It indirectly blurs resultant tiled adversarial patches, thus reducing the available HFC. Compared to the other two geometric patterns, we find that circle patches always perform the best. For example, the success rate is up to 88.67% when tile-size is 6× 6. This result demonstrates that the attack ability of training-free perturbations can benefit from repeating property.
4.1.3 THE EFFECT OF DENSE PATTERN
To validate the effect of dense pattern, we analyze the average attack success rates w.r.t densities. Since the trends of different patterns are similar, we only discuss the results of circle patch whose tile-scheme is 6 × 6. Here we control the density from 1 to 12. For example, “2” denotes only two circles in the proto-pattern, and more visualizations can be found in Appendix Sec. A.11.
As shown in Fig. 5, the success rates increase rapidly at the beginning, then remain stable after the density exceeds 8, and reach the peak at 12. This experiment demonstrates the effectiveness of dense pattern. Therefore, we set the default density of each proto-pattern to 12 in our paper.
4.1.4 THE SIZE OF PERTURBATION
In this section, we study the influence of the maximum perturbation ε on the performance of our HIT. The result of Fig. 6 depicts the growth trends of each model under different adversarial patches. No matter what the adversarial patch is, the performance proliferates at first, then remains stable after ε exceeds 16 for most models. Besides, the circle patch (curve-like) always performs best while the performance of the other two adversarial patches (straight-like) is similar. For example, when ε = 16 and the target model is VGG19, the attack success rate of circles patch is 94.75% while the square patch and rhombuses patch ones are 81.52% and 83.63%, respectively. This demonstrates that DNNs are more vulnerable to curve-like perturbations than straight-like ones (we also analyze the reasons for this in Appendix Sec. A.10).
Another observation from this result is that our HIT can serve as a universal attack, although not in a strict sense. As demonstrated in Fig. 6, when ε = 10 which is the common constraint for universal adversarial perturbations (Mopuri et al., 2017; Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018; Reddy Mopuri et al., 2018; Liu et al., 2019; Hashemi et al., 2020), our HIT with circle patch can achieve a success rate of 63.23% on average. Notably, it can be up to 89.74% on Squeeze.
4.2 COMPARISON OF HIT WITH NO-BOX ATTACKS
In this section, we compare the performance of our no-box HIT with the state-of-the-art no-box attack (Li et al., 2020a). Note that Li et al. (2020a) require up to 15,000 iterations to train a substitute model, followed by an extra 200 iterations of baseline attacks and 100 iterations of ILA (Huang et al., 2019), which is extremely time-consuming. Significantly different from Li et al. (2020a), our HIT is training-free and does not require any auxiliary images to train a substitute model, thus achieving real-time attack.
The experimental results are reported in Tab. 1. A first glance shows that our HIT outperforms Li et al. (2020a) by a large margin. No matter which adversarial patch is used, our HIT consistently achieves a success rate of over 92% on average. By contrast, the best performance of Li et al. (2020a), i.e., Prototypical∗ w/ Sup, is only 68.74% on average. Notably, our HIT with the circle patch remarkably outperforms Li et al. (2020a) by 29.39% on average and by 42.04% at most when attacking PNA.
4.3 COMPARISON OF HIT WITH BLACK-BOX ATTACKS
In this section, we compare our no-box HIT with mainstream transfer-based attacks. For MI-FGSM, DI2-FGSM, PI-FGSM and their extension PI-MI-DI2-FGSM, we utilize VGG19, Inc-v3, ResNet and Dense to iteratively (ten forward & backward propagations) craft adversarial examples and use them to attack the remaining black-box models. As for our proposed HIT, we do not need any substitute model or training process. The results are summarized in Tab. 2, where the models in the leftmost column are the substitute models and the bottom block shows the results of our HIT.
As demonstrated in Tab. 2, our HIT is even on par with the state-of-the-art PI-MI-DI2-FGSM. Specifically, on average, the best performance of PI-MI-DI2-FGSM is 88.84%, and our HIT based on the circle patch reaches 88.67%. However, the transferability of adversarial examples largely depends on the substitute model. For example, when adversarial examples are crafted via Inc-v3, the performance of PI-MI-DI2-FGSM is limited and our HIT remarkably outperforms it by 23.34% on average. Besides, when the target model is a lightweight model, e.g., Shuffle, our method consistently outperforms these mainstream transfer-based attacks by a large margin.
Since adversarial training (Madry et al., 2018b; Tramèr et al., 2018; Awasthi et al., 2021) can effectively defend against adversarial examples, we conduct an extra experiment on several defense models to demonstrate the effectiveness of our method. The additional target models include three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens, and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA). As demonstrated in previous works (Guo et al., 2019; Sharma et al., 2019), low-frequency perturbations are more effective for attacking defense models. Motivated by this, we change the tile-schemes to smaller ones (i.e., 2 × 2 for EAT and 1 × 1 for FD) while keeping the other parameters the same (see more details in Appendix Sec. A.12). As observed in Tab. 3, our HIT is effective even against defense models. Notably, HIT based on the circle patch successfully attacks Inc-v3ens4 with a 61.86% success rate. Besides, for the more robust FD models, even when crafting adversarial examples via an ensemble of VGG19, Inc-v3, ResNet and Dense, transfer-based PI-TI-DI2-FGSM is still inferior to our HIT. This experimental result reveals that current defenses have not achieved real security, as they are even vulnerable to training-free adversarial examples.
5 CONCLUSION
In this paper, we rethink the classification logic of deep neural networks with respect to adversarial examples. We observe that HFC dominates the low-level features and plays a crucial role in classification. Besides, we demonstrate through empirical and experimental analysis that DNNs are vulnerable to training-free perturbations with the regionally homogeneous, repeating and dense properties. Motivated by these observations, we propose a novel Hybrid Image Transformation (HIT) attack method that combines the LFC of raw images with the HFC of our well-designed adversarial patches to destroy the useful features and add strong irrelevant noisy ones. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the proposed method. Surprisingly, our simple method outperforms existing no-box attacks by a significant margin and is even on par with transfer-based black-box attacks that require a substitute model to craft adversarial examples.
From another perspective, since most models are vulnerable to our method, our adversarial examples may capture their common “blind spots”. Therefore, a defense could improve robustness and stability by covering these “blind spots”, i.e., by applying data augmentation with our adversarial examples.
A APPENDIX
A.1 SETUP
Networks. Here we consider ten well-known classification models: VGG19 Simonyan & Zisserman (2015), Inception-v3 (Inc-v3) Szegedy et al. (2016), ResNet-152 (ResNet) He et al. (2016), DenseNet-121 (Dense) Huang et al. (2017), WideResNet (WRN) Zagoruyko & Komodakis (2016), SENet Hu et al. (2018), PNASNet (PNA) Liu et al. (2018), ShuffleNet-v2 (Shuffle) Ma et al. (2018), SqueezeNet (Squeeze) Iandola et al. (2017) and MobileNet-v2 (Mobile) Sandler et al. (2018) as our target models.
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images) from the ImageNet validation set Russakovsky et al. (2015) which are classified correctly by all ten networks we consider. Besides, all images are resized to 299 × 299 × 3 beforehand.
Parameters. In our experiments, we use the l∞-norm to measure the perceptibility of adversarial noises, and the maximum perturbation ε is set to 16. For our HIT, the size of the Gaussian kernel G is 17 × 17 (i.e., k = 4) and the weight factor λ is set to 1.0.
A.2 QUANTITATIVE ANALYSIS ABOUT HFC AND LFC
To quantitatively analyze whether HFC or LFC is dominant in the feature maps of the shallow layer, we conduct the following experiment. Considering that the spatial size of each shallow-layer feature map in Fig. 2(b) is 147 × 147, we first resize Fig. 2(a) (299 × 299) to 147 × 147 and denote the resultant image by xr. Then we obtain xHr (the HFC of xr) by:
xHr = xr − xr ∗G. (5)
To quantitatively compare the response of HFC and LFC, we calculate the average response of each feature map φ(x) in the low-frequency regions versus that in the high-frequency regions. To that end, we generate two masks to distinguish the two regions. More specifically, the mask of the high-frequency regions MH can be written as:
M^H_{i,j} =
\begin{cases}
1, & |x^H_r(i,j)| > \tau \\
0, & \text{else}
\end{cases}
\qquad (6)
where τ = 20 is a pre-set threshold applied to filter out low responses. After obtaining MH, the mask of the LFC, ML, is easily derived:
ML = 1−MH . (7)
Therefore, the average response of HFC aH and the average response of LFC aL can be expressed as:
a^H = \frac{\sum_{i,j} M^H_{i,j}\,\phi(x)_{i,j}}{\sum_{i,j} M^H_{i,j}}, \qquad (8)

a^L = \frac{\sum_{i,j} M^L_{i,j}\,\phi(x)_{i,j}}{\sum_{i,j} M^L_{i,j}}. \qquad (9)
In this paper, if a feature map satisfies aH > aL, we call it “HFC dominant”; otherwise we call it “LFC dominant”. As demonstrated in Fig. 7, most feature maps focus on HFC, and the ratio of “HFC dominant” to “LFC dominant” feature maps is 3:1.
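A minimal sketch of this per-feature-map check, following Eqs. 6–9; it assumes the feature map and the resized image's HFC are given as 2-D arrays of the same spatial size (the names and the channel reduction are our own choices).

```python
import numpy as np

def is_hfc_dominant(feature_map, x_hfc, tau=20.0):
    """Eqs. 6-9: compare the average response in HFC vs. LFC regions.

    feature_map: 2-D activation map phi(x).
    x_hfc:       2-D HFC of the resized input from Eq. 5 (reduce channels first,
                 e.g. by taking the mean, if it still has a channel axis).
    """
    m_h = (np.abs(x_hfc) > tau).astype(np.float32)           # Eq. 6: HFC-region mask
    m_l = 1.0 - m_h                                          # Eq. 7: LFC-region mask
    a_h = (m_h * feature_map).sum() / max(m_h.sum(), 1.0)    # Eq. 8
    a_l = (m_l * feature_map).sum() / max(m_l.sum(), 1.0)    # Eq. 9
    return a_h > a_l
```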
A.3 THE EFFECT OF WEIGHT FACTOR λ
In this section, we discuss the effect of different weight factors λ on the experimental results. We tune λ from 0.1 to 10, and the results are shown in Fig. 8. When λ ≤ 1, the attack success rate increases rapidly at the beginning and then remains stable. However, further increasing λ from 1 to 10 does not improve the performance. Actually, the success rates keep stable with a slight drop.
Apparently, a larger λ leads to a larger perturbation (i.e., it increases the average perturbation over all pixels), so our reported results are slightly inconsistent with the linearity assumption (Goodfellow et al., 2015). This is probably because our HIT is completely independent of any prior information (e.g., the gradient of any model or the data distribution), so larger noise does not necessarily push the prediction farther away from the true label. Besides, we notice that the activation functions of the victim models are all ReLU, which may be another reason for this phenomenon. More specifically, ReLU is defined as
\mathrm{ReLU}(z) =
\begin{cases}
0, & z < 0 \\
z, & \text{else}
\end{cases}
\qquad (10)
where z is the intermediate output before the activation layer. If the intermediate adversarial perturbation is sufficiently large in the negative direction, i.e., δ′ ≤ −z, then ReLU(z + δ′) returns 0. However, for inducing a misclassified label yadv ≠ y, a positive activation that differs from the original z may be more helpful than 0.
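A toy numeric illustration of this saturation argument (the values are arbitrary):

```python
import numpy as np

z = np.array([1.5, 0.3, 2.0])          # clean intermediate outputs
delta = np.array([-5.0, -8.0, -3.0])    # very large negative perturbations
relu = lambda v: np.maximum(v, 0.0)

print(relu(z))           # [1.5 0.3 2. ]
print(relu(z + delta))   # [0. 0. 0.]  -- once delta <= -z, making it larger changes nothing
```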
A.4 QUALITATIVE COMPARISON FOR ADVERSARIAL EXAMPLES
To better reflect the advantages of our approach, in this section we compare the visual quality of the generated adversarial examples. Specifically, we consider the state-of-the-art black-box PI-FGSM (Gao et al., 2020a) and the no-box attack (Li et al., 2020a) as our competitors. As depicted in Fig. 9, both PI-FGSM (Gao et al., 2020a) and the no-box attack (Li et al., 2020a) cause more perceptible distortions. In contrast, the adversarial perturbation crafted by our HIT is much more imperceptible.
A.5 ATTACK OTHER CLASSIFICATION TASKS
To highlight the practical property of our HIT, in this section we apply our HIT to other classification tasks. Specifically, we consider three well-known fine-grained classification datasets, including
CUB-200-2011 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC Aircraft (Maji et al., 2013), and the victim model is trained via DCL (backbone: Res-50) (Chen et al., 2019). The resolution of the inputs is 448 × 448; therefore, we set the tile-size to 448 / tile-scheme. For example, if the tile-scheme is 4 × 4, then the tile-size is 112. To verify that our default settings (i.e., λ and the tile-scheme) for HIT remain applicable, we conduct the following two experiments.
Discussion on tile scheme. We first report the average attack success rates (%) of our HIT w/ Circle w.r.t tile-scheme in Tab. 4. From the result, we can observe that our HIT is also effective for attacking other datasets. Notably, our HIT can fool DCL with about 90% success rate on Stanford Cars dataset. Besides, a relatively smaller tile-size is also helpful in improving the success rate of the attack, which is consistent with the conjecture given in Sec. 3.2.
Discussion on λ. We then report the average attack success rate (%) of our HIT w/ Circle w.r.t. λ in Tab. 5. Although λ = 1.0 is not optimal, the gap between the best results and those obtained with λ = 1.0 is very small. Therefore, our default setting for HIT is still applicable.
A.6 ATTACK REAL-WORLD RECOGNITION SYSTEM
To further demonstrate the practical property of our HIT, in this section we apply our HIT (w/ Circle) to attack a real-world recognition system, i.e., the Google Cloud Vision API6. Different from existing works (Chen et al., 2017; Brendel et al., 2017) which need a large number of queries for optimization, we directly apply our HIT with the default setting (i.e., the tile-scheme is 6 × 6 and λ = 1.0). As illustrated in Fig. 10, our no-box HIT with ε = 16 can effectively change the top-k labels. For example, the top-5 labels of the “fish” image are “Fish”, “Fin”, “Seafood”, “Ray-finned fish” and “Marine biology”, while those of our adversarial example are “Reptile”, “Turtle”, “Terrestrial Animal”, “Pattern” and “Art”. Notably, there is no overlap in the top-k labels between the clean image and our adversarial example, which also demonstrates the effectiveness of our no-box HIT.
A.7 RESULTS FOR SMALLER PERTURBATION
In this experiment, we report the average success rates (%) of state-of-the-art black-box attacks (further adding the Ghost Networks algorithm (Li et al., 2020c) as a competitor) and of our proposed no-box attack under a smaller perturbation ε = 8.
As demonstrated in Tab. 6, our proposed methods are still competitive with mainstream transfer-based black-box attacks, even though the latter combine many effective techniques. Remarkably, our no-box attack can significantly outperform Ghost Networks (+MI-FGSM). Although Ghost Networks (+PI-MI-DI2-FGSM) is much more powerful, our no-box attack can surpass it in some cases. For example, when fooling Shuffle, our HIT (w/ Circle) outperforms Ghost Networks (+PI-MI-DI2-FGSM) by about 8%.
A.8 RAW IMAGE FOR ATTACK
To highlight the effectiveness of our design for adversarial patches, here we conduct an experiment where raw images (shown in Fig. 11) serve as the “adversarial patch”. More specifically, we utilize the HFC of these raw images (like Din et al.) to manipulate adversarial examples. However, even the HFC of texture-rich raw images (e.g., “Grifola frondosa” and “Capitulum”) does not achieve a good result. As demonstrated in Tab. 8, the average attack success rates are all less than 40%. By contrast, our well-designed adversarial patches achieve a success rate of nearly 90%, which demonstrates the effectiveness of our design.
6https://cloud.google.com/vision/docs/drag-and-drop
Table 6: The comparison of attack success rates (%) on normally trained models between black-box attacks and our no-box attacks with maximum perturbation ε = 8. For black-box attacks, adversarial examples are crafted via Inc-v3.
Attacks                            VGG19  ResNet DenseNet WRN    SENet  PNA    Shuffle Squeeze Mobile AVG.
MI-FGSM                            21.26  15.63  20.47    14.45  11.58  23.36  24.13   32.61   25.64  21.01
Ghost Networks (+MI-FGSM)          27.12  17.45  22.31    14.92  13.43  28.63  30.08   40.80   33.67  25.38
DI-FGSM                            18.41  11.24  16.44    10.19   8.28  17.59  15.03   18.40   18.20  14.86
PI-FGSM                            24.12  14.98  22.91    15.38  12.21  27.32  25.93   39.20   27.29  23.26
PI-MI-DI2-FGSM                     40.88  31.02  41.34    29.70  25.56  38.46  36.79   46.70   42.38  36.98
Ghost Networks (+PI-MI-DI2-FGSM)   63.56  43.59  55.21    40.91  40.33  54.73  63.48   81.98   73.46  57.47
HIT w/ 6×6 Circle                  37.64  36.21  40.80    30.82  21.36  37.63  71.45   79.34   69.90  47.24
HIT w/ 6×6 Square                  13.40  17.50  25.85    21.99  13.79  18.31  53.03   61.61   50.88  30.71
HIT w/ 6×6 Rhombus                 17.24  19.95  27.65    22.00  12.85  18.91  58.84   63.49   57.90  33.20
A.9 DISCUSSION ON TARGETED ATTACK
Figure 12: We show (a) our adversarial patch and (b) some ImageNet images classified as “shower curtain” (label ID: 794). The bottom row shows their HFC extracted by Eq. 1.
Although we do not explicitly force the resultant adversarial examples to be misclassified as a specific targeted label, we observe that our HIT tends to implement a targeted attack due to the frequency domain operation and classification logic of DNNs. In Tab. 7, we report the top-5 prediction labels of our adversarial examples, which are crafted by 6× 6 concentric circle pattern. A first glance shows that almost all models tend to misclassify adversarial examples generated by our HIT as several specific labels, e.g., 794 (“shower curtain”). Furthermore, this phenomenon is more obvious for Mobile and ResNet whose ratio is up to 47.69% and 75.75% respectively.
To better understand this phenomenon, we show several clean images whose label is “shower curtain” from the ImageNet dataset together with our adversarial patch in Fig. 12. We observe that the HFC of “shower curtain” is somewhat aligned with our adversarial patch, i.e., both show similar repetitive circles. We suspect this phenomenon arises because our proposed perturbation dominates the overall features of the image, and the original features of the image instead become noise. Since existing algorithms are not effective yet and simply replacing our adversarial patch with a clean targeted image does not achieve an effective targeted attack (as demonstrated in Sec. A.8), we will further study the selection and generation of adversarial patches, e.g., fusing the shallow texture information of the targeted distribution to guide the resultant adversarial examples towards the targeted category.
A.10 WHY CIRCLE PATTERN IS USUALLY BETTER?
Here we attempt to provide an insight into the performance gap between Circle and the other two patterns by analyzing the intermediate feature response. Without loss of generality, we set the layer index to half the depth of each DNN and report the average cosine similarity between the features of the 10,000 raw images and those of their adversarial examples. The results in Tab. 9 show that Circle consistently leads to a lower cosine similarity than the other patterns. Consequently, the features fed to the deeper layers are less informative, thus leading to misclassification.
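A minimal PyTorch sketch of this measurement using a forward hook at a roughly mid-depth layer; the model, the layer choice, and the preprocessing of the input batches are illustrative assumptions rather than the exact setup used for Tab. 9.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet152(pretrained=True).eval()

feats = {}
def save_features(_module, _inputs, output):
    feats["mid"] = output.detach().flatten(1)   # (batch, features)

# layer3 is an illustrative "depth / 2" choice for ResNet-152.
model.layer3.register_forward_hook(save_features)

@torch.no_grad()
def feature_cosine_similarity(x_raw, x_adv):
    """Average cosine similarity between mid-layer features of raw and adversarial batches."""
    model(x_raw)
    f_raw = feats["mid"]
    model(x_adv)
    f_adv = feats["mid"]
    return F.cosine_similarity(f_raw, f_adv, dim=1).mean().item()
```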
A.11 VISUALIZATION OF OUR ADVERSARIAL PATCHES
In this section, we first visualize the concentric circle with respect to densities in Fig. 13. Here we control the density from 1 to 12, e.g., “2” denotes only two circles in the proto-pattern. With the increase of density, the distance between any two circles will also be reduced.
Then we list our adversarial patches with respect to tile-schemes in Fig. 14. More specifically, we first crop the 600 × 600 × 3 proto-patterns to 300 × 300 × 3 adversarial patches, then resize them into different tile-sizes (e.g., 150 × 150 × 3) and tile them to 300 × 300 × 3, finally resize back to 299×299×3 to match the size of raw images. As we can see, if we decrease the tile-size, distortion is inevitable.
A.12 THE EFFECT OF REPEATING PATTERN FOR DEFENSES
In this section, we further consider six additional well-known defense models, which include three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens,7 and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA),8 to discuss the effect of the repeating pattern.
7https://github.com/tensorflow/models/tree/archive/research/adv_imagenet_models
8https://github.com/facebookresearch/ImageNet-Adversarial-Training
Table 9: The cosine similarity comparison for different patterns.
Attack                 VGG19  Inc-v3 ResNet Dense  WRN    SENet  PNA    Squeeze Shuffle Mobile Avg.
HIT w/ Square (Ours)   0.6215 0.7419 0.7638 0.8090 0.7599 0.5838 0.7437 0.6940  0.6704  0.4838 0.6872
HIT w/ Rhombus (Ours)  0.6218 0.7458 0.7448 0.7853 0.7280 0.6258 0.7672 0.6738  0.6461  0.4005 0.6746
HIT w/ Circle (Ours)   0.5472 0.6685 0.7306 0.7779 0.7223 0.5613 0.6747 0.6643  0.6062  0.3617 0.6314
Figure 13: We visualize our proto-patterns w.r.t. densities (from 1 to 12). Here we take concentric circles as an example.
Generally, a smaller tile-scheme generates a more perceptible perturbation. As shown in Fig. 15, the area of each regionally homogeneous (i.e., continuous) line in adversarial examples crafted with 1 × 1 patches is larger than that of 6 × 6 ones. Different from the trends on normally trained models, smaller tile-schemes are more effective for attacking defense models. As demonstrated in Tab. 16, when attacking EAT, 2 × 2 adversarial patches perform best, and further increasing the tile-scheme significantly degrades performance, e.g., 7 × 7 rhombuses only successfully attack EAT by 6.61% on average. The trend for FD is similar to that for EAT, except that 1 × 1 adversarial patches work best. The reason might be that thin regionally homogeneous lines are more easily filtered out by the denoising block of Xie et al. (2019a). Therefore, in our paper, we use 2 × 2 and 1 × 1 adversarial patches to attack EAT and FD, respectively.
1. What is the focus and contribution of the paper regarding image modification attacks?
2. What are the weaknesses of the paper's experiments and comparisons with other works in the field?
3. Do you have any concerns regarding the technical novelty and dependence on proto-patterns?
4. How does the reviewer assess the effectiveness and limitations of the proposed attack, particularly regarding norms, data augmentation, tile size, and transferability?
5. What are the suggestions for improving the paper, including additional comparisons and analyses?
Summary Of The Paper
The authors have proposed an attack based on the modification of high-frequency components of an image through different types of masks such as circular, square, and rhombus. The experiments are performed using multiple networks on a subset of the ImageNet database.
Review
It is well known that the high-frequency components of an image are critical for both classification and robustness. Therefore, the authors need to revisit the claim accordingly, highlight the relevant text in both the abstract and the paper, and refer to (but not be limited to) the following papers.
[1] High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks, Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing, CVPR 2020
[2] D. Anshumaan, A. Agarwal, M. Vatsa, and R. Singh, WaveTransform: Crafting Adversarial Examples via Input Decomposition, ECCV Workshop on Adversarial Robustness in the Real World, 2020, pp. 152-168
[3] Steganographic universal adversarial perturbations, PRL 2020
The comparison shown in the paper is extremely weak and does not use the existing works that utilize the same concept [2-6].
[4] Guo, C., Frank, J.S. and Weinberger, K.Q., 2018. Low-frequency adversarial perturbation. arXiv preprint arXiv:1809.08758.
[5] Sharma, Y., Ding, G.W. and Brubaker, M., 2019. On the effectiveness of low-frequency perturbations. arXiv preprint arXiv:1903.00073.
[6] Hashemi et al., Transferable Universal Adversarial Perturbations Using Generative Models, 2020
The technical novelty of the paper is not clear as the authors use the concept of Hybrid image which is well known.
The attack is highly dependent on the proto-patterns and why only the circle pattern is more effective is not well studied.
The attack norm is also a concern: the authors have used a slightly higher norm. An ablation study over different epsilon values such as 2, 4, 8, and 10 needs to be added, along with a comparison with existing algorithms at the same norm strength.
Are the models trained with data augmentation, which usually includes corruption-based augmentation as well? Are those models also vulnerable? What about ensemble-based models?
Another limitation of the attack is the dependency on the tile size. The paper lacks several such analyses to explain why this dependency exists and why one hyperparameter setting is more successful than others.
Please add the comparison with the following transferable perturbations and defense as well.
Learning Transferable Adversarial Examples via Ghost Networks, AAAI 2020.
Adversarial Robustness Across Representation Spaces, CVPR.
What is the computation cost of PI-MI-DI2-FGSM? |
ICLR | Title
Practical No-box Adversarial Attacks with Training-free Hybrid Image Transformation
Abstract
In recent years, the adversarial vulnerability of deep neural networks (DNNs) has raised increasing attention. Among all the threat models, no-box attacks are the most practical but extremely challenging since they neither rely on any knowledge of the target model or similar substitute model, nor access the dataset for training a new substitute model. Although a recent method has attempted such an attack in a loose sense, its performance is not good enough and the computational overhead of training is expensive. In this paper, we move a step forward and show the existence of a training-free adversarial perturbation under the nobox threat model, which can be successfully used to attack different DNNs in real-time. Motivated by our observation that high-frequency component (HFC) domains in low-level features and plays a crucial role in classification, we attack an image mainly by manipulating its frequency components. Specifically, the perturbation is combined by the suppression of the original HFC and the adding of noisy HFC. We empirically and experimentally analyze the requirements of effective noisy HFC and show that it should be regionally homogeneous, repeating and dense. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of our proposed no-box method. It attacks ten well-known models with a success rate of 98.13% on average, which outperforms state-of-the-art no-box attacks by 29.39%. Furthermore, our method is even competitive to mainstream transfer-based black-box attacks. Our code is available in our appendix.
1 INTRODUCTION
Deep neural networks (DNNs) are widely known to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015), i.e., a human-imperceptible perturbation can lead to misclassification. In adversarial machine learning, the term threat model defines the rules of the attack, such as the resources the attacker can access. Based on the threat model, the attacks are often divided into white-box attacks and black-box attacks. In the white-box threat model (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2018a), the attacker has full knowledge of a target model, such as the model weights and the whole training dataset. Recognizing the threat of these adversarial attacks, a model owner is unlikely to leak a model’s information to the public. Thus, the white-box attack is often used to evaluate the model robustness for revealing its weakest point (Madry et al., 2018a), but often not considered as a practical attack method (Chen et al., 2017). To this end, numerous works have investigated a more realistic threat model, where the attacker does not require full knowledge of the target model, i.e., the backpropagation on the target model is prohibited. This threat model is called black-box attack (Papernot et al., 2016; Tramèr et al., 2016; Papernot et al., 2017; Narodytska & Kasiviswanathan, 2017; Chen et al., 2017; Brendel et al., 2017; Dong et al., 2019b; Yan et al., 2019; Chen et al., 2020; Zhou et al., 2020). However, such a black-box threat model usually involves a major concern of being resource-intensive in terms of query cost and time. In real-world attack scenarios, even if we ignore such concerns, query-based black-box attack can still be infeasible, e.g., the model API is inaccessible to the attacker. Moreover, it might cause suspicion due to repeated queries to the model with almost the same adversarial image. To alleviate this issue, another line of black-box threat model (Dong et al., 2018; Xie et al., 2019b; Dong et al., 2019a; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a; 2021) called transfer-based attack is proposed. In this threat model, adversarial examples are crafted via the local available pre-trained substitute model, which usually trains on the same training dataset as the target model. The resultant adversarial examples
are expected to attack the target model. However, without the feedback from the target model, the transferability heavily depends on how large the gap between the substitute model and target model. In practice, this gap is large because the structure and the training technique of the target model are usually not publicly available due to security and privacy concerns.
From the analysis above, we argue that both white-box and black-box attacks can hardly be considered as practical attacks. A practical attack should satisfy two criteria: (a) model-free, i.e., no dependence on the pre-trained substitute model or the target model for either backward propagation or only forward query; (b) data-free, i.e., no dependence on the dataset for training a substitute model. We term it no-box attack. A recent work (Li et al., 2020a) is the first (to our knowledge) as well as the only work to have attempted such an attack in a loose sense. Their threat model still requires a small number of auxiliary samples, such as 20 images. Admittedly, collecting a small number of samples might not be difficult in most cases, but might be still infeasible in some security-sensitive applications. Specifically, their approach (Li et al., 2020a) attempts to train a substitute model by adopting the classical auto-encoder model instead of the supervised classification model due to the constraint of a small-scale dataset. Overall, to attack a certain sample, their approach consists of three steps: (1) collecting a small number of images; (2) training a substitute model; (3) white-box attack on the substitute model. If a new sample, especially from a different class, needs to be attacked, the above process needs to be repeated. Thus, their approach is very resource-intensive. Besides, their attack success rate is still significantly lower than existing black-box attacks.
By contrast, our approach does not require any of the above three steps and is even training-free. With the help of visualization technique proposed by (Zeiler & Fergus, 2014), we observe that the high-frequency component (HFC), e.g., the edge and texture features, is dominant in shallow layers and the low-frequency component (LFC), e.g., the plain areas in the image, is paid less attention to be extracted. Combined with the insight
into the classification logic of DNNs in Sec. 3.1, we observe that HFC plays a crucial role in recognition. As shown in Fig. 1, without LFC, the confidence of HFC is even higher than the raw image. Although it does not hold true for all samples, it does demonstrate the importance of HFC.
Motivated by this, we take the idea of the hybrid image (Oliva, 2013) and propose a novel Hybrid Image Transformation (HIT) attack method to craft adversarial examples. Formally, it only needs three steps but can effectively fool various DNNs without any training: First, due to the training-free setting and inspired by the analysis from Sec. 3.2, we simply utilize the matplotlib1 tool to draw several geometric patterns which serve as the proto-patterns, and the resultant synthesized adversarial patches are thus richer in regionally homogeneous, repeating and dense HFC. Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Finally, we combine these two components and clip them to the ε-ball of the raw image to get the resultant adversarial hybrid example. Extensive experiments on ImageNet demonstrate the effectiveness of our method. By attacking ten state-of-the-art models in the no-box manner, our HIT significantly increases the average success rate from 68.74% to 98.13%. Notably, our HIT is even competitive with mainstream transfer-based black-box attacks.
2 RELATED WORK
Adversarial Attack. Let x denote the raw image without any perturbation, and let xadv and y denote the corresponding adversarial example and true label, respectively. In general, we use the l∞-norm to measure the perceptibility of adversarial perturbations, i.e., ||xadv − x||∞ ≤ ε. In this paper, we focus on non-targeted attacks (Dong et al., 2018; Xie et al., 2019b; Wu et al., 2020; Lin et al., 2020; Gao et al., 2020a) which aim to cause misclassification of the DNN f(·), i.e., f(xadv) ≠ y.
1https://matplotlib.org/
Competitors. Transferability is an important property for adversarial examples. With it, the resultant adversarial example crafted via one model may fool others. For the black-box threat model, Goodfellow et al. (2015) argue that the vulnerability of DNNs is their linear nature, and generate adversarial examples efficiently by performing FGSM which is a single-step attack. Papernot et al. (2017) train a local model with many queries to substitute for the target model. Dong et al. (2018) integrate a momentum term into I-FGSM Kurakin et al. (2017) to stabilize the update direction during the attack iterations. Xie et al. (2019b) apply diverse input patterns to improve the transferability of adversarial examples. Dong et al. (2019a) propose a translation-invariant attack to mitigate the effect of different discriminative regions between models. Gao et al. (2020a) introduce patch-wise perturbation by amplifying the step size and reuse the cut noise to perturb more information in discriminative regions. For the no-box threat model, Li et al. (2020a) attempt to attack the target model without any model query or the accessible pre-trained substitute model. In their work, with a limited amount of data, they try different mechanisms (with or without supervised technique) to train the substitute model, and then utilize this substitute model to craft transferable adversarial examples. Different from these approaches, our method does not depend on transferability since we do not need any substitute model. In this paper, we craft the adversarial examples from the perspective of the classification logic of DNNs.
Frequency Perspective on DNNs. Our approach is highly inspired by existing works which explain the generalization and adversarial vulnerability of DNNs from the frequency perspective. The fact that DNNs have good generalization while being vulnerable to small adversarial perturbations has motivated (Jo & Bengio, 2017; Wang et al., 2020) to investigate the underlying mechanism, suggesting that surface-statistical content with high-frequency property is essential for the classification task. From the perspective of texture vs. shape, Geirhos et al. (2019); Wang et al. (2020) reveal that DNNs are biased towards texture instead of shape. Since the texture content is considered to have high-frequency property, their finding can be interpreted as the DNN being biased towards HFC. On the other hand, adversarial perturbations are also known to have the high-frequency property and various defense methods have also been motivated from this insight (Aydemir et al., 2018; Das et al., 2018; Liu & JaJa, 2019; Xie et al., 2019a). Nonetheless, it remains unknown whether manually designed high-frequency patterns are sufficient for attacking the network.
3 METHODOLOGY
Although many adversarial attack methods (Papernot et al., 2016; Dong et al., 2018; Gao et al., 2020a; Li et al., 2020a) have achieved pretty high success rates in both black-box and no-box cases, they all need training, especially the query-based (Papernot et al., 2016; Zhou et al., 2020) and no-box adversarial perturbations (Li et al., 2020a) whose training is usually time-consuming. Then a natural question arises: Is it possible to generate robust adversarial perturbations without any training? In the following subsections, we will give our answer and introduce our design.
3.1 MOTIVATION
To better understand the role of HFC and LFC for the classification results of DNNs, we split the information of raw images into these two pieces via Gaussian low-pass filter (defined in Eq. 1).
As illustrated in Fig. 3, when the kernel size is small, i.e., the cutoff frequency is high, the average accuracy of LFC on ten state-of-the-art models is close to 100%. However, if we continue to increase the kernel size, the average accuracy of HFC begins to exceed that of LFC. To our surprise, for several specific raw images, e.g., the left image of Fig. 1, the true-label confidence of the HFC, which is mostly black, is even higher than that of the raw image.
To explain the above phenomenon, we turn to the perspective of feature space. Inspired by recent intermediate feature-based attacks (Zhou et al., 2018; Ganeshan & Babu, 2019; Inkawhich et al., 2019), we argue low-level features are critical to the classification. Interestingly, as shown in Fig. 2, most2 feature maps in
the shallow layers generally extract the edge and texture features (typical ones are highlighted by red boxes), i.e., HFC, and pay less attention to plain areas in images, i.e., LFC. Therefore, if a perturbation can effectively manipulate the HFC of an image, totally different low-level features will be extracted and may lead to misclassification.
3.2 EFFECTIVE ADVERSARIAL HFC
However, what kind of training-free noisy HFC can effectively fool DNNs is still unknown because the performance of any other raw image’s HFC is unsatisfactory (see Appendix Sec. A.8). Zhang et al. (2020) have demonstrated that the effectiveness of adversarial perturbation lies in the fact that it contains irrelevant features. The features of perturbation dominate over the features in the raw image, thus leading to misclassification. Inspired by their finding, we intend to design adversarial HFC with strong irrelevant features, and we conjecture that the following properties are essential.
Regionally Homogeneous. Several recent works (Li et al., 2020b; Gao et al., 2020a; Dong et al., 2019a; Gao et al., 2020b) have demonstrated that adversarial perturbations with regionally homogeneous (or patch-wise (Gao et al., 2020a)) property can enhance the transferability of adversarial examples. Inspired by that the raw image is a composite of homogeneous patterns, the reason might be attributed to that this perturbation tend to form irrelevant features recognizable by the DNNs.
Repeating. Nguyen et al. (2015) observe that extra copies of the repeating element do improve the confidence of DNNs. From the perspective of strengthening the irrelevant features, it is expected that repeating the content is beneficial.
Dense. Analogous to the above repeating property that performs global repeating, i.e., increases the amount of irrelevant features globally, we can also perform local repeating to strengthen its adversarial effect further. For term distinction, we term this property dense.
To verify the effect of the above properties, we conduct an ablation study in Sec. 4.1, and the results support our conjecture. Besides, the analysis in Appendix Sec. A.9 also shows that our HIT has the potential to become a targeted attack.
3.3 HYBRID IMAGE TRANSFORMATION
Motivated by the above discussion, we take the idea of hybrid image (Oliva, 2013) to apply our no-box attacks. Specifically, Oliva (2013) replaces the HFC of one image with the HFC of another carefully picked image and craft hybrid images with two different interpretations: one that appears when the image is viewed up-close, and the other that appears from afar (see Fig.5 of Oliva (2013)). However, confusing human’s vision system (without ε constrain) cannot guarantee the misclassification of DNNs since adversarial examples are constrained by the maximum perturbation. Therefore, we propose a novel Hybrid Image Transformation (HIT) attack method which reduces3 original
2 See quantitative analysis in Appendix Sec. A.2.
3 Due to the ε constraint, we cannot completely replace the HFC with others.
HFC, and meanwhile, adds well-designed noisy ones to attack DNNs. Our method only needs three steps but can generate robust training-free adversarial perturbations in real time:
First, we provide an adversarial patch xp to generate noisy HFC. Unlike the traditional way that needs training, here we use the matplotlib tool to draw it. Inspired by the observation in Sec. 3.2, we consider three simple regionally homogeneous proto-patterns (to avoid cherry-picking) as our basic adversarial patches: concentric circles, concentric squares, and concentric rhombus in Fig. 4. The effect of concentric pattern is to make the resultant HFC dense. Then we repeat these adversarial patches.
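As a concrete illustration of this first step, the following is a minimal matplotlib sketch that draws a concentric-circle proto-pattern; the canvas size, line width, and colour are illustrative assumptions, since the paper does not specify its exact drawing parameters.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def draw_concentric_circles(density=12, size_px=600, line_width=4):
    """Draw a concentric-circle proto-pattern and return it as a uint8 RGB array."""
    fig, ax = plt.subplots(figsize=(size_px / 100, size_px / 100), dpi=100)
    fig.subplots_adjust(left=0, right=1, bottom=0, top=1)   # fill the whole canvas
    for r in np.linspace(1.0 / density, 1.0, density):
        ax.add_patch(plt.Circle((0.5, 0.5), 0.5 * r, fill=False, linewidth=line_width))
    ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_axis_off()
    fig.canvas.draw()
    proto = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()
    plt.close(fig)
    return proto
```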
Second, we extract the LFC of the raw image and the HFC of the adversarial patch. Note that several methods can be utilized to extract the HFC and LFC of an image, e.g., the Fourier transform. In this paper, we use an approximate yet simple Gaussian low-pass filter G whose size is (4k+1) × (4k+1) to get the LFC, which can be written as:
G_{i,j} = \frac{1}{2\pi\sigma^2} \, e^{-\frac{i^2+j^2}{2\sigma^2}}, \qquad (1)
where σ = k determines the width of our G. In general, the larger σ is, the more HFC is filtered out. For simplicity, we do not introduce a separate high-pass filter and simply obtain the HFC with G as well, i.e., by subtracting the LFC of the adversarial patch from the patch itself.
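A minimal sketch of this Gaussian low-pass filtering step (Eq. 1); normalising the kernel to sum to one and the SciPy-based convolution are our own implementation choices.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(k=4):
    """Eq. 1: (4k+1) x (4k+1) Gaussian kernel with sigma = k (normalised to sum to 1)."""
    sigma = float(k)
    ax = np.arange(-2 * k, 2 * k + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
    return g / g.sum()

def split_frequencies(img, k=4):
    """Return (LFC, HFC) of an H x W x C float image via the Gaussian low-pass filter."""
    g = gaussian_kernel(k)
    lfc = np.stack([convolve(img[..., c], g, mode="nearest") for c in range(img.shape[-1])], axis=-1)
    return lfc, img - lfc
```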
Finally, we synthesize these two components to generate our adversarial hybrid image xadv:
x_{adv} = \mathrm{clip}_{x,\varepsilon}\big(x \ast G + \lambda \cdot (x_p - x_p \ast G)\big), \qquad (2)
where “∗” denotes the convolution operation, λ is a weight factor to balance the LFC and HFC, and clip_{x,ε}(·) restricts the resultant adversarial example to the ε-ball of the raw image in l∞ space. Therefore, our method is different from adversarial patch attacks (Brown et al., 2017; Liu et al., 2020), which replace a subregion of the image with a well-designed patch.
As illustrated in Fig. 2(c), our HIT can effectively reduce relevant HFC and add many other irrelevant noisy ones, e.g., highlighted yellow boxes in (c) cannot find any obvious HFC associated with “cat” at all. As a result, the target model can not extract correct features to make a reasonable prediction, thus leading to misclassification. Besides, our adversarial examples are less perceptible than those of our competitors (See Appendix Sec. A.4).
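Putting the pieces together, a minimal sketch of Eq. 2, reusing split_frequencies from the sketch above; the pixel range and dtype handling are our own assumptions.

```python
import numpy as np

def hit_attack(x, x_p, k=4, lam=1.0, eps=16.0):
    """Eq. 2: combine the LFC of the raw image with the HFC of the adversarial patch.

    x, x_p: float arrays of shape (H, W, 3) in [0, 255]; x_p is the tiled adversarial patch.
    """
    x_lfc, _ = split_frequencies(x, k)         # LFC of the raw image (x * G)
    _, p_hfc = split_frequencies(x_p, k)       # HFC of the patch (x_p - x_p * G)
    x_adv = x_lfc + lam * p_hfc
    x_adv = np.clip(x_adv, x - eps, x + eps)   # l_inf ball of radius eps around x
    return np.clip(x_adv, 0.0, 255.0)          # keep valid pixel range
```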
4 EXPERIMENTS
Networks. Here we consider ten well-known classification models: VGG19 (Simonyan & Zisserman, 2015), Inception-v3 (Inc-v3) (Szegedy et al., 2016), ResNet-152 (ResNet) (He et al., 2016), DenseNet-121 (Dense) (Huang et al., 2017), WideResNet (WRN) (Zagoruyko & Komodakis, 2016), SENet (Hu et al., 2018), PNASNet (PNA) (Liu et al., 2018), ShuffleNet-v2 (Shuffle) (Ma et al., 2018), SqueezeNet (Squeeze) (Iandola et al., 2017) and MobileNet-v2 (Mobile) (Sandler et al., 2018) as our target models. All the models are available in the Torchvision4, except for PNA and SENet which are obtained from Github5. We also perform our attack on a real-world recognition system in Appendix Sec. A.6.
4https://github.com/pytorch/vision/tree/master/torchvision/models
5https://github.com/Cadene/pretrained-models.pytorch
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images which are resized to 299 × 299 × 3 beforehand) from the ImageNet validation set (Russakovsky et al., 2015) which are classified correctly by all ten networks we consider. We also discuss our methods on other classification tasks in Appendix Sec. A.5.
Parameters. In our experiments, we use l∞-norm to measure the perceptibility of adversarial noises, unless specified, the maximum perturbation ε is set to 16 (results with a smaller ε can be found in Appendix Sec. A.7). For our HIT, the size of Gaussian kernel G is 17 × 17 (i.e. k = 4), weight factor λ is set to 1.0 (the discussion about λ is shown in Appendix Sec. A.3), and density of proto-pattern is set to 12. For tile-size, unless specified, we set to 50 × 50, i.e., tile-scheme is 6× 6. For no-box methods, we follow the same setting as (Li et al., 2020a). For black-box methods, the iteration T is set to 10 and the step size α is 1.6. For MI-FGSM (Dong et al., 2018), we adopt the default decay factor µ = 1.0. For DI2-FGSM (Xie et al., 2019b), we set the transformation probability to 0.7. For TI-FGSM (Dong et al., 2019a), the length of Gaussian kernel is 15. For PI-FGSM (Gao et al., 2020a), the length of project kernel is 3, the amplification factor β and project factor γ are 10.0 and 16.0, respectively. Different from PI-FGSM, β and γ for PI-MI-DI2-FGSM and PI-TI-DI2-FGSM (Gao et al., 2020a) is 2.5 and 2.0.
4.1 ABLATION STUDY
In this section, we conduct a series of ablation studies for our HIT. Specifically, we investigate the effectiveness of the regionally homogeneous pattern, repeating pattern and dense pattern in Sec. 4.1.1, Sec. 4.1.2 and Sec. 4.1.3, respectively. Besides, we also analyze the effect of the perturbation size on the performance in Sec. 4.1.4. The result of HIT without reducing the original HFC beforehand is shown in Appendix Tab. A.12.
4.1.1 THE EFFECT OF REGIONALLY HOMOGENEOUS PATTERN
To the best of our knowledge, regionally homogeneous perturbations (Dong et al., 2019a; Gao et al., 2020a;b; Li et al., 2020b) are mostly crafted based on gradients, so training is necessary. However, whether arbitrary noise can benefit from the homogeneous property remains unclear. Therefore, we compare random noises with semi-random ones to check it:
Random noise: For a given random location pair set L, we call Nr ∈ R^{H×W×C} random noise if it meets the following formula:
N_r[i, j, c] =
\begin{cases}
\varepsilon \cdot \mathrm{random}(-1, 1), & (i, j, c) \in L \\
0, & \text{else}
\end{cases}
\qquad (3)
Semi-random noise: Different from the random noise, semi-random noise has some regularity. Let S denote a semi-random location pair set; here we take H-dimension random noise as an example. Nsr can be written as:
N_{sr}[i, :, :] =
\begin{cases}
\varepsilon \cdot \mathrm{random}(-1, 1), & i \in S \\
0, & \text{else}
\end{cases}
\qquad (4)
where random(−1, 1) returns 1 or −1 randomly. As depicted in Fig. 5, the success rates of Nsr are consistently higher than those of Nr, and the margin between them grows as the number of perturbed pixels increases. This demonstrates that training-free noise can also benefit from the regionally homogeneous property. To exploit this conclusion further, in Fig. 4 we extend semi-random noise to other, more complex “continuous” patterns, e.g., circles.
4.1.2 THE EFFECT OF REPEATING PATTERN
In this section, we show the experimental results of our proposed HIT w.r.t different tile-sizes. Here we consider seven different tile-schemes including 1 × 1, 2 × 2, 3 × 3, 4 × 4, 5 × 5, 6 × 6 and 7× 7, and the tile-sizes thereby are 300× 300, 150× 150, 100× 100, 75× 75, 60× 60, 50× 50, 42 × 42, respectively. We will resize back to 299 × 299 × 3 to match the size of raw images. The visualizations of these patches can be found in Appendix Sec. A.11.
In Fig. 5, we report the average attack success rates over the ten models. The success rates increase very quickly at first and then remain stable beyond the 4 × 4 tile-scheme. If we continue to increase the tile-scheme (i.e., further shrink the tile-size), the attack success rates may drop. The main reason might be the distortion caused by the resizing operation, which blurs the resultant tiled adversarial patches and thus reduces the available HFC. Compared with the other two geometric patterns, we find that circle patches always perform the best. For example, the success rate reaches 88.67% with the 6 × 6 tile-scheme. This result demonstrates that the attack ability of training-free perturbations can benefit from the repeating property.
4.1.3 THE EFFECT OF DENSE PATTERN
To validate the effect of dense pattern, we analyze the average attack success rates w.r.t densities. Since the trends of different patterns are similar, we only discuss the results of circle patch whose tile-scheme is 6 × 6. Here we control the density from 1 to 12. For example, “2” denotes only two circles in the proto-pattern, and more visualizations can be found in Appendix Sec. A.11.
As shown in Fig. 5, the success rates increase rapidly at the beginning, then remain stable after the density exceeds 8, and reach the peak at 12. This experiment demonstrates the effectiveness of dense pattern. Therefore, we set the default density of each proto-pattern to 12 in our paper.
4.1.4 THE SIZE OF PERTURBATION
In this section, we study the influence of the maximum perturbation ε on the performance of our HIT. Fig. 6 depicts the growth trend of each model under the different adversarial patches. No matter which adversarial patch is used, the performance rises rapidly at first and then remains stable once ε exceeds 16 for most models. Besides, the circle patch (curve-like) always performs best, while the performance of the other two adversarial patches (straight-like) is similar. For example, when ε = 16 and the target model is VGG19, the attack success rate of the circle patch is 94.75%, while those of the square and rhombus patches are 81.52% and 83.63%, respectively. This demonstrates that DNNs are more vulnerable to curve-like perturbations than to straight-like ones (we also analyze the reasons for this in Appendix Sec. A.10).
Another observation from this result is that our HIT can serve as a universal attack, although not in a strict sense. As demonstrated in Fig. 6, when ε = 10 which is the common constraint for universal adversarial perturbations (Mopuri et al., 2017; Moosavi-Dezfooli et al., 2017; Mopuri et al., 2018; Reddy Mopuri et al., 2018; Liu et al., 2019; Hashemi et al., 2020), our HIT with circle patch can achieve a success rate of 63.23% on average. Notably, it can be up to 89.74% on Squeeze.
4.2 COMPARISON OF HIT WITH NO-BOX ATTACKS
In this section, we compare the performance of our no-box HIT with the state-of-the-art no-box attack (Li et al., 2020a). Note that Li et al. (2020a) require up to 15,000 iterations to train a substitute model, followed by an extra 200 iterations of baseline attacks and 100 iterations of ILA (Huang et al., 2019), which is extremely time-consuming. Significantly different from Li et al. (2020a), our HIT is training-free and does not require any auxiliary images to train a substitute model, thus achieving real-time attack.
The experimental results are reported in Tab. 1. A first glance shows that our HIT outperforms Li et al. (2020a) by a large margin. No matter what the adversarial patches are, our HIT can consistently achieve a success rate of over 92% on average. By contrast, the best performance of Li et al. (2020a), i.e., Prototypical∗ w/ Sup, is only 68.74% on average. Notably, our HIT with circle patch remarkably outperforms Li et al. (2020a) by 29.39% on average and 42.04% at most when attacking PNA.
4.3 COMPARISON OF HIT WITH BLACK-BOX ATTACKS
In this section, we compare our no-box HIT with mainstream transfer-based attacks. For MI-FGSM, DI2-FGSM, PI-FGSM and their extensions PI-MI-DI2-FGSM, we utilize VGG19, Inc-v3, ResNet
and Dense to iteratively (ten forward & backward propagation) craft adversarial examples and use them to attack the rest of black-box models. As for our proposed HIT, we do not need any substitute model or training process. The results are summarized in Tab. 2, where the models in the leftmost column are substitute models, and the bottom block shows the results of our HIT.
As demonstrated in Tab. 2, our HIT is even on par with state-of-the-art PI-MI-DI2-FGSM. Specifically, on average, the best performance of PI-MI-DI2-FGSM is 88.84%, and our HIT based on circle patch can get up to 88.67%. However, the transferability of adversarial examples largely depends on the substitute model. For example, when adversarial examples are crafted via Inc-v3, the performance of PI-MI-DI2-FGSM is limited and our HIT can remarkably outperform it by 23.34% on average. Besides, when the target model is in lightweight models, e.g., Shuffle, our method consistently outperforms these mainstream transfer-based attacks by a large margin.
Since adversarial training technique (Madry et al., 2018b; Tramèr et al., 2018; Awasthi et al., 2021) can effectively defend against adversarial examples, we conduct an extra experiment on several defense models to demonstrate the effectiveness of our method. The additional target models including three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens, and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA). As demonstrated in previous works(Guo et al., 2019; Sharma et al., 2019), low-frequency perturbations are more effective for attacking defense models. Motivated by it, we change the tile-schemes to smaller ones (i.e., 2 × 2 for EAT and 1 × 1 for FD) and other parameters stay the same (see more details in Appendix Sec. A.12). As observed in Tab. 3, our HIT is effective even for defense models. Notably, HIT based on circle patch can successfully attack Inc-v3ens4 by 61.86%. Besides, for more robust FD, even crafting adversarial examples via an ensemble of VGG19, Inc-v3, ResNet and Dense, transfer-based PI-TI-DI2-FGSM is still inferior to our HIT. This experimental result reveals that current defenses have not achieved real security, which is even vulnerable to training-free adversarial examples.
5 CONCLUSION
In this paper, we rethink the classification logic of deep neural networks with respect to adversarial examples. We observe that HFC dominates the low-level features and plays a crucial role in classification. Besides, we demonstrate through empirical and experimental analysis that DNNs are vulnerable to training-free perturbations with the regionally homogeneous, repeating and dense properties. Motivated by these observations, we propose a novel Hybrid Image Transformation (HIT) attack method that combines the LFC of raw images with the HFC of our well-designed adversarial patches to destroy the useful features and add strong irrelevant noisy ones. Extensive experiments on the ImageNet dataset demonstrate the effectiveness of the proposed method. Surprisingly, our simple method outperforms existing no-box attacks by a significant margin and is even on par with transfer-based black-box attacks that require a substitute model to craft adversarial examples.
In another aspect, since most models are vulnerable to our method, it implies that our adversarial examples may capture the common “blind spots” of them. Therefore, a defense can improve the robustness and stability by covering these “blind spots”, i.e., applying data augmentation technique using our adversarial examples.
A APPENDIX
A.1 SETUP
Networks. Here we consider ten well-known classification models: VGG19 Simonyan & Zisserman (2015), Inception-v3 (Inc-v3) Szegedy et al. (2016), ResNet-152 (ResNet) He et al. (2016), DenseNet-121 (Dense) Huang et al. (2017), WideResNet (WRN) Zagoruyko & Komodakis (2016), SENet Hu et al. (2018), PNASNet (PNA) Liu et al. (2018), ShuffleNet-v2 (Shuffle) Ma et al. (2018), SqueezeNet (Squeeze) Iandola et al. (2017) and MobileNet-v2 (Mobile) Sandler et al. (2018) as our target models.
Dataset. To make our method more convincing and avoid cherry-picking, we choose 10,000 images (each category contains about 10 images) from the ImageNet validation set Russakovsky et al. (2015) which are classified correctly by all ten networks we consider. Besides, all images are resized to 299 × 299 × 3 beforehand.
Parameters. In our experiments, we use the l∞-norm to measure the perceptibility of adversarial noises, and the maximum perturbation ε is set to 16. For our HIT, the size of the Gaussian kernel G is 17 × 17 (i.e., k = 4) and the weight factor λ is set to 1.0.
A.2 QUANTITATIVE ANALYSIS ABOUT HFC AND LFC
To quantitatively analyze whether HFC or LFC is dominant in the feature map of shallow layer, we conducted this experiment. Considering that the size of each shallow-layer feature map in Fig. 2(b) is 147, we first resize the Fig. 2(a) (299× 299) to 147× 147, and denote the resultant image by xr. Then we get the xHr (HFC of xr) by:
xHr = xr − xr ∗G. (5)
To quantitatively compare the response of HFC and LFC, we calculate the average response of each feature map φ(x) in the low-frequency regions versus that in the high-frequency regions. To that end, we generate two masks to distinguish the two regions. More specifically, the mask of the high-frequency regions MH can be written as:
M^H_{i,j} =
\begin{cases}
1, & |x^H_r(i,j)| > \tau \\
0, & \text{else}
\end{cases}
\qquad (6)
where τ = 20 is a pre-set threshold applied to filter out low responses. After obtaining MH, the mask of the LFC, ML, is easily derived:
ML = 1−MH . (7)
Therefore, the average response of HFC aH and the average response of LFC aL can be expressed as:
a^H = \frac{\sum_{i,j} M^H_{i,j}\,\phi(x)_{i,j}}{\sum_{i,j} M^H_{i,j}}, \qquad (8)

a^L = \frac{\sum_{i,j} M^L_{i,j}\,\phi(x)_{i,j}}{\sum_{i,j} M^L_{i,j}}. \qquad (9)
In this paper, if a feature map satisfies aH > aL, we call it “HFC dominant”; otherwise we call it “LFC dominant”. As demonstrated in Fig. 7, most feature maps focus on HFC, and the ratio of “HFC dominant” to “LFC dominant” feature maps is 3:1.
A.3 THE EFFECT OF WEIGHT FACTOR λ
In this section, we discuss the effect of different weight factors λ on the experimental results. We tune λ from 0.1 to 10, and the results are shown in Fig. 8. When λ ≤ 1, the attack success rate increases rapidly at the beginning and then remains stable. However, further increasing λ from 1 to 10 does not improve the performance. Actually, the success rates keep stable with a slight drop.
Apparently, a larger λ leads to a larger perturbation (i.e., it increases the average perturbation over all pixels), so our reported results are slightly inconsistent with the linearity assumption (Goodfellow et al., 2015). This is probably because our HIT is completely independent of any prior information (e.g., the gradient of any model or the data distribution), so larger noise does not necessarily push the prediction farther away from the true label. Besides, we notice that the activation functions of the victim models are all ReLU, which may be another reason for this phenomenon. More specifically, ReLU is defined as
ReLU(z) = { 0, if z < 0; z, otherwise, (10)
where z is the intermediate output before the activation layer. If the intermediate adversarial perturbation is large enough, i.e., δ′ ≤ −z, then ReLU(z + δ′) returns 0. But for inducing a misclassified label yadv ≠ y, a positive activation that differs from the original z may be more helpful than 0.
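A tiny numeric illustration of this point: a sufficiently large negative intermediate perturbation simply zeroes the activation instead of producing a different positive response.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

z = np.array([2.0, 0.5, -1.0])        # intermediate outputs before the ReLU
delta = np.array([-3.0, -0.5, -2.0])  # large negative intermediate perturbations
print(relu(z))            # [2.  0.5 0. ]
print(relu(z + delta))    # [0.  0.  0. ]  -- activations are zeroed, not flipped
```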
A.4 QUALITATIVE COMPARISON FOR ADVERSARIAL EXAMPLES
To better reflect the advantages of our approach, in this section we compare the visual quality of the generated adversarial examples. Specifically, we consider the state-of-the-art black-box PI-FGSM (Gao et al., 2020a) and the no-box attack (Li et al., 2020a) as our competitors. As depicted in Fig. 9, both PI-FGSM (Gao et al., 2020a) and the no-box attack (Li et al., 2020a) cause more perceptible distortions. In contrast, the adversarial perturbation crafted by our HIT is much less perceptible.
A.5 ATTACK OTHER CLASSIFICATION TASKS
To highlight the practical property of our HIT, in this section we apply it to other classification tasks. Specifically, we consider three well-known fine-grained classification datasets, including
CUB-200-2011 (Wah et al., 2011), Stanford Cars (Krause et al., 2013) and FGVC Aircraft (Maji et al., 2013), and the victim model is trained via DCL (backbone: Res-50) (Chen et al., 2019). The resolution of inputs is 448 × 448; therefore, we set the tile size to 448 / tile scheme. For example, if the tile scheme is 4 × 4, then the tile size is 112. To verify that our default setting (i.e., λ and tile scheme) for HIT remains applicable, we conduct the following two experiments.
Discussion on tile scheme. We first report the average attack success rates (%) of our HIT w/ Circle w.r.t. the tile scheme in Tab. 4. From the results, we observe that our HIT is also effective on these datasets. Notably, our HIT fools DCL with about a 90% success rate on the Stanford Cars dataset. Besides, a relatively smaller tile size is also helpful in improving the attack success rate, which is consistent with the conjecture given in Sec. 3.2.
Discussion on λ. We then report the average attack success rate (%) of our HIT w/ Circle w.r.t. λ in Tab. 5. Although λ = 1.0 is not optimal, the gap between the best results and those obtained with λ = 1.0 is very small. Therefore, our default setting for HIT remains applicable.
A.6 ATTACK REAL-WORLD RECOGNITION SYSTEM
To further demonstrate the practical property of our HIT, in this section we apply our HIT (w/ Circle) to attack a real-world recognition system, i.e., the Google Cloud Vision API6. Different from existing works (Chen et al., 2017; Brendel et al., 2017), which need a large number of queries for optimization, we directly apply our HIT with the default setting (i.e., the tile scheme is 6 × 6 and λ = 1.0). As illustrated in Fig. 10, our no-box HIT with ε = 16 can effectively change the top-k labels. For example, the top-5 labels of "fish" are "Fish", "Fin", "Seafood", "Ray-finned fish" and "Marine biology", while those of our adversarial example are
"Reptile", "Turtle", "Terrestrial Animal", "Pattern" and "Art". Notably, there is no overlap in the top-k labels between the clean image and our adversarial example, which further demonstrates the effectiveness of our no-box HIT.
A.7 RESULTS FOR SMALLER PERTURBATION
In this experiment, we report the average success rates (%) of state-of-the-art black-box attacks (further adding the Ghost Networks algorithm (Li et al., 2020c) as a competitor) and our proposed no-box attack with a smaller perturbation ε = 8.
As demonstrated in Tab. 6, our proposed methods are still competitive with mainstream transfer-based black-box attacks, even though the latter combine many effective techniques. Remarkably, our no-box attack significantly outperforms Ghost Networks (+MI-FGSM). Although Ghost Networks (+PI-MI-DI-FGSM) is much more powerful, our no-box attack can surpass it in some cases. For example, when fooling Shuffle, our HIT (w/ Circle) outperforms Ghost Networks (+PI-MI-DI-FGSM) by about 8%.
A.8 RAW IMAGE FOR ATTACK
To highlight the effectiveness of our design for adversarial patches, here we conduct an experiment in which raw images (shown in Fig. 11) serve as the "adversarial patch". More specifically, we utilize the HFC of these raw images (like Din et al.) to manipulate adversarial examples. However, even the HFC of texture-rich raw images (e.g., "Grifola frondosa" and "Capitulum") does not achieve a good result. As demonstrated in Tab. 8, the average attack success rates are all below 40%. By contrast, our well-designed adversarial patches achieve a success rate of nearly 90%, which demonstrates the effectiveness of our design.
6https://cloud.google.com/vision/docs/drag-and-drop
Table 6: The comparison of attack success rates (%) on normally trained models between black-box attacks and our no-box attacks with maximum perturbation ε = 8. For black-box attacks, adversarial examples are crafted via Inc-v3.
Attacks VGG19 ResNet DenseNet WRN SENet PNA Shuffle Squeeze Mobile AVG.
MI-FGSM 21.26 15.63 20.47 14.45 11.58 23.36 24.13 32.61 25.64 21.01
Ghost Networks (+MI-FGSM) 27.12 17.45 22.31 14.92 13.43 28.63 30.08 40.80 33.67 25.38
DI-FGSM 18.41 11.24 16.44 10.19 8.28 17.59 15.03 18.40 18.20 14.86
PI-FGSM 24.12 14.98 22.91 15.38 12.21 27.32 25.93 39.20 27.29 23.26
PI-MI-DI2-FGSM 40.88 31.02 41.34 29.70 25.56 38.46 36.79 46.70 42.38 36.98
Ghost Networks (+PI-MI-DI2-FGSM) 63.56 43.59 55.21 40.91 40.33 54.73 63.48 81.98 73.46 57.47
HIT w/ 6× 6 Circle 37.64 36.21 40.80 30.82 21.36 37.63 71.45 79.34 69.90 47.24
HIT w/ 6× 6 Square 13.40 17.50 25.85 21.99 13.79 18.31 53.03 61.61 50.88 30.71
HIT w/ 6× 6 Rhombus 17.24 19.95 27.65 22.00 12.85 18.91 58.84 63.49 57.90 33.20
A.9 DISCUSSION ON TARGETED ATTACK
Figure 12: (a) Our adversarial patch and (b) some ImageNet images classified as "shower curtain" (label ID: 794). The bottom row shows their HFC extracted by Eq. 1.
Although we do not explicitly force the resultant adversarial examples to be misclassified as a specific target label, we observe that our HIT tends to behave like a targeted attack due to the frequency-domain operation and the classification logic of DNNs. In Tab. 7, we report the top-5 predicted labels of our adversarial examples, which are crafted with the 6 × 6 concentric-circle pattern. A first glance shows that almost all models tend to misclassify adversarial examples generated by our HIT as a few specific labels, e.g., 794 ("shower curtain"). This phenomenon is most pronounced for Mobile and ResNet, whose ratios reach 47.69% and 75.75%, respectively.
To better understand this phenomenon, we show several clean images labeled "shower curtain" from the ImageNet dataset together with our adversarial patch in Fig. 12. We observe that the HFC of "shower curtain" is somewhat aligned with our adversarial patch, i.e., they all show similar repetitive circles. We suspect this might be because our proposed perturbation dominates the overall features of the image, while the original features of the image effectively become noise. Since existing algorithms are not yet effective and simply replacing our adversarial patch with a clean targeted image does not achieve an effective targeted attack (as demonstrated in Sec. A.8), we will further study the selection and generation of adversarial patches, e.g., fusing the shallow texture information of the targeted distribution to guide the resultant adversarial examples towards the targeted category.
A.10 WHY CIRCLE PATTERN IS USUALLY BETTER?
Here we attempt to provide insight into the performance gap between Circle and the other two patterns by analyzing the intermediate feature response. Without loss of generality, we set the layer index to "depth of each DNN" / 2 and report the average cosine similarity between the features of 10,000 raw images and those of their adversarial examples. The results in Tab. 9 show that Circle consistently leads to lower cosine similarity than the other patterns. Consequently, the features fed to the deeper layers retain less of the original information, which leads to misclassification.
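A sketch of how such a mid-layer cosine-similarity comparison can be computed for one model is shown below; the choice of ResNet-50 and of `layer2` as the "middle" layer, as well as the random stand-in inputs, are illustrative assumptions rather than the exact setup behind Tab. 9.

```python
import torch
import torch.nn.functional as F
import torchvision

model = torchvision.models.resnet50().eval()

feats = {}
def hook(_module, _inp, out):
    feats["mid"] = out.detach()

handle = model.layer2.register_forward_hook(hook)   # stand-in for the "depth/2" layer

def mid_features(x):
    with torch.no_grad():
        model(x)
    return feats["mid"].flatten(1)                   # (batch, features)

x_raw = torch.rand(4, 3, 224, 224)                   # placeholder raw images
x_adv = torch.clamp(x_raw + 16 / 255 * torch.sign(torch.rand_like(x_raw) - 0.5), 0, 1)

cos = F.cosine_similarity(mid_features(x_raw), mid_features(x_adv), dim=1)
print(cos.mean().item())   # lower values = more disrupted intermediate features
handle.remove()
```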
A.11 VISUALIZATION OF OUR ADVERSARIAL PATCHES
In this section, we first visualize the concentric circles with respect to density in Fig. 13. Here we vary the density from 1 to 12, e.g., "2" denotes only two circles in the proto-pattern. As the density increases, the distance between adjacent circles decreases.
Then we show our adversarial patches with respect to tile schemes in Fig. 14. More specifically, we first crop the 600 × 600 × 3 proto-patterns to 300 × 300 × 3 adversarial patches, then resize them to different tile sizes (e.g., 150 × 150 × 3), tile them to 300 × 300 × 3, and finally resize the result back to 299 × 299 × 3 to match the size of the raw images. As we can see, decreasing the tile size inevitably introduces distortion.
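A rough sketch of this crop/resize/tile procedure (with a toy concentric-circle proto-pattern) is given below; the drawing details of the rings and the choice of a central crop are illustrative assumptions, not the exact construction.

```python
import numpy as np
from PIL import Image

def circle_proto(size=600, density=6):
    """Toy concentric-circle proto-pattern: `density` rings around the centre."""
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.sqrt((xx - size / 2) ** 2 + (yy - size / 2) ** 2)
    spacing = (size / 2) / density
    rings = (np.abs(r % spacing - spacing / 2) < 2).astype(np.uint8) * 255
    return np.stack([rings] * 3, axis=-1)

def make_patch(proto, tile_scheme=6):
    """Crop 600x600 -> 300x300 (central crop assumed), resize to the tile size,
    tile back to ~300x300, then resize to 299x299 to match the input size."""
    crop = proto[150:450, 150:450]
    tile_size = 300 // tile_scheme
    tile = np.array(Image.fromarray(crop).resize((tile_size, tile_size)))
    tiled = np.tile(tile, (tile_scheme, tile_scheme, 1))[:300, :300]
    return np.array(Image.fromarray(tiled).resize((299, 299)))

patch = make_patch(circle_proto(density=6), tile_scheme=6)
```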
A.12 THE EFFECT OF REPEATING PATTERN FOR DEFENSES
In this section, we further consider six additional well-known defense models, including three ensemble adversarial training models (EAT) (Tramèr et al., 2018): Inc-v3ens3, Inc-v3ens4 and IncRes-v2ens,7 and three feature denoising models (FD) (Xie et al., 2019a): ResNet152 Baseline (Res152B), ResNet152 Denoise (Res152D) and ResNeXt101 DenoiseAll (ResNeXtDA),8 to discuss the effect of the repeating pattern.
7https://github.com/tensorflow/models/tree/archive/research/adv_imagenet_models
8https://github.com/facebookresearch/ImageNet-Adversarial-Training
Table 9: The cosine similarity comparison for different patterns.
Model Attack VGG19 Inc-v3 ResNet Dense WRN SENet PNA Squeeze Shuffle Mobile Avg.
- HIT w/ Square (Ours) 0.6215 0.7419 0.7638 0.8090 0.7599 0.5838 0.7437 0.6940 0.6704 0.4838 0.6872
- HIT w/ Rhombus (Ours) 0.6218 0.7458 0.7448 0.7853 0.7280 0.6258 0.7672 0.6738 0.6461 0.4005 0.6746
- HIT w/ Circle (Ours) 0.5472 0.6685 0.7306 0.7779 0.7223 0.5613 0.6747 0.6643 0.6062 0.3617 0.6314
Figure 13: We visualize our proto-patterns w.r.t. density (from 1 to 12), taking concentric circles as an example.
Generally, a smaller tile scheme generates a more perceptible perturbation. As shown in Fig. 15, the area of each regionally homogeneous (i.e., continuous) line in adversarial examples crafted from 1 × 1 patches is larger than in those crafted from 6 × 6 patches. Different from the trends on NTs, smaller tile schemes are more effective for attacking defense models. As demonstrated in Tab. 16, when attacking EAT, 2 × 2 adversarial patches perform best, and further increasing the tile scheme significantly degrades performance, e.g., 7 × 7 rhombuses only successfully attack EAT 6.61% of the time on average. The trend for FD is similar to that for EAT, except that 1 × 1 adversarial patches work best. The reason might be that thin regionally homogeneous lines are more easily filtered out by the denoising block of (Xie et al., 2019a). Therefore, in our paper, we use 2 × 2 and 1 × 1 adversarial patches to attack EAT and FD, respectively.
| 1. What is the focus of the paper regarding generating adversarial images?
2. What are the strengths of the proposed Hybrid Image Transformation (HIT) attack?
3. How does the reviewer assess the novelty and efficacy of the proposed attack?
4. What are the weaknesses of the paper, particularly in its experiment section?
5. Do you have any concerns or questions about the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a new method for generating adversarial images for image classifiers. This is a no-box attack named Hybrid Image Transformation (HIT), which is both model-free and data-free (no training required). In the experiment section the authors show the efficacy of the proposed attack on the ImageNet dataset for different DNNs.
Review
The authors start the discussion by revisiting the properties of both white-box and black-box adversarial attacks on image classification. They highlight some important characteristics of such attacks and argue why most proposed attacks may not be useful in practice. Based on this discussion they argue in favor of the previously proposed no-box attack. A no-box attack must be both model-free and data-free so that it does not depend on the training process of one particular neural network.
The authors then propose an attack named HIT which is training-free and uses the LFC and HFC components of an image to craft the adversarial image. As can be inferred from the previous statement, the main contributors to this attack are the LFC and HFC components of an image. To better understand the role of these components, the authors first split the raw images into these two parts using a Gaussian low-pass filter. This analysis shows that DNNs rely on HFC components to classify images, whereas the LFC parts are more visible to humans. This finding helped the authors arrive at the conjecture that effective adversarial HFCs must be regionally homogeneous, repeating and dense. They then craft the HIT attacks by adding such adversarial HFCs to the LFC of an image and then clipping the image to fall within a predefined l∞-norm ball.
The authors then perform experiments on the ImageNet dataset with different DNNs. Their attack outperforms the previous state of the art by a large margin. Furthermore, their attack also performs well when compared to black-box attacks.
ICLR | Title
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
Abstract
In this paper, we use dynamical systems to analyze the nonlinear weight dynamics of two-layered bias-free networks of the form g(x;w) = ∑Kj=1 σ(wjᵀx), where σ(·) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters w∗ under l2 loss. We first show that when K = 1, the nonlinear dynamics can be written in close form and converges to w∗ with at least (1 − ε)/2 probability, if random weight initialization with standard deviation of the proper order (∼ 1/√d) is used, verifying empirical practice [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. For networks with many ReLU nodes (K ≥ 2), we apply our close-form dynamics and prove that when the teacher parameters {w∗j}Kj=1 form an orthonormal basis, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w∗ without local minima. To our knowledge, this is the first proof that shows global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size under l2 loss. Simulations verify our theoretical analysis.
1 INTRODUCTION
Deep learning has made substantial progress in many applications, including Computer Vision [He et al. (2016); Simonyan & Zisserman (2015); Szegedy et al. (2015); Krizhevsky et al. (2012)], Natural Language Processing [Sutskever et al. (2014)] and Speech Recognition [Hinton et al. (2012)]. However, how and why it works remains elusive due to a lack of theoretical understanding. First, it is unclear how simple approaches like gradient descent can solve a very complicated non-convex optimization effectively. Second, it is unclear how deep models, especially deep convolutional models, achieve generalization power despite their massive number of parameters.
In this paper, we focus on the first problem and use dynamical systems to analyze the nonlinear gradient descent dynamics of a certain two-layered nonlinear network of the following form:
g(x;w) = K∑ j=1 σ(wᵀj x) (1)
where σ(x) = max(x, 0) is the ReLU nonlinearity. We consider the following setting: a student network learns the parameters that minimize the l2 distance between its prediction and the supervision provided by a teacher network of the same size with a fixed set of parameters w∗. We assume all inputs x follow a Gaussian distribution, and thus the network is bias-free. Eqn. 1 is highly nonconvex and can contain an exponential number of symmetrically equivalent solutions.
To analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks (see Lemma 2.1) in the teacher-student setting under l2 loss. Then for K = 1, we prove that the nonlinear gradient dynamics of Eqn. 1 has a close form and converges to w∗ with at least (1 − ε)/2 probability, if initialized randomly with standard deviation on the order of 1/√d, verifying commonly used initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. When K ≥ 2, we prove that when the teacher parameters {w∗j}Kj=1 form an orthonormal basis, (1) a symmetric initialization of the student network gets stuck at a saddle point and (2) under a certain symmetry-breaking weight initialization, the dynamics converges to w∗ without getting stuck in any local minima. Note that in both cases, the initialization can be arbitrarily close to the origin for a fixed ‖w∗‖, showing that such convergence behavior is beyond the local convex structure at w∗. To our knowledge, this is the first proof of its kind.
Previous works also use dynamical systems to analyze deep neural networks. [Saxe et al. (2013)] analyzes the dynamics of multilayer linear networks, and [Kawaguchi (2016)] shows every local minimum is global for multilinear networks. Very little theoretical work has been done to analyze the dynamics of nonlinear networks, especially deep ones. [Mei et al. (2016)] shows global convergence when K = 1 with an activation function σ(x) whose derivatives σ′, σ′′, σ′′′ are bounded and σ′ > 0. Similar to our approach, [Saad & Solla (1996)] also uses the student-teacher setting and analyzes the dynamics of the student network when the teacher's parameters w∗ form an orthonormal basis; however, it uses σ(x) = erf(x) as the nonlinearity and only analyzes the local behaviors of the two critical points (the saddle point under symmetric initialization, and w∗). In contrast, we prove the global convergence behavior in certain symmetry-breaking cases.
Many previous works analyze nonlinear network based on the assumption of independent activations: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutually independent. For example, [Choromanska et al. (2015a;b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent activations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. In this paper, no assumption of independent activation is made. For sigmoid activation, [Fukumizu & Amari (2000)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. (2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with tensor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.
The paper is organized as follows. Sec. 2 introduces the basic formulation and some interesting novel properties of ReLU in multilayered ReLU networks. Sec. 3 and Sec. 4 then analyze the two-layered model of Eqn. 1 for K = 1 and K ≥ 2, respectively. Sec. 5 shows that simulation results are consistent with the theoretical analysis. Finally, Sec. 7 gives detailed proofs for all theorems.
2 PRELIMINARY
2.1 NOTATION
Denote X as an N-by-d input data matrix and w∗ as the parameter of the teacher network with desired N-by-1 output u = g(X;w∗). Now suppose we have an estimator w and the estimated output v = g(X;w). We want to know, under the l2 loss E(w) = (1/2)‖u − v‖² = (1/2)‖u − g(X;w)‖², whether gradient descent will converge to the desired solution w∗.
The gradient descent update is w(t+1) = w(t) + η∆w(t), where ∆w(t) ≡ −∇E(w(t)). If we let η → 0, then the update rule becomes a first-order differential equation dw/dt = −∇E(w), or more concisely, ẇ = −∇E(w). In this case, Ė = ∇E(w)ᵀẇ = −‖∇E(w)‖² ≤ 0, i.e., the function value E is nonincreasing over time. The key is to check whether there exist other critical points w ≠ w∗ such that ∇E(w) = 0. In our analysis, we assume the entries of the input X follow a Gaussian distribution. In this situation, the gradient is a random variable and ∆w = −E[∇E(w)]. The expected E[E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because
E [ Ė ] = −E [∇E(w)ᵀ∇E(w)] ≤ −E [∇E(w)]ᵀ E [∇E(w)] ≤ 0 (2)
Therefore, we analyze the behavior of expected gradient E [∇E(w)] rather than∇E(w).
2.2 PROPERTIES OF RELU
In this paper, we discover a few useful properties of ReLU that make our analysis much simpler. Denote D = D(w) = diag(Xw > 0) as an N-by-N diagonal matrix. The l-th diagonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we can write σ(Xw) = DXw. Note that D only depends on the direction of w but not its magnitude.
Note that for ReLU, D is also "transparent" with respect to derivatives. For example, the Jacobian Jw[σ(Xw)] = σ′(Xw)X = DX at differentiable regions. This gives a very concise rule for gradient descent in ReLU networks: suppose we have a negative gradient inflow vector g (of dimension N-by-1) on the current ReLU node with weights w; then we can simply write the update ∆w as:
∆w = Jw[σ(Xw)] ᵀg = XᵀDg (3)
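In code, this per-node update is a one-liner once the gating D is read off from the sign of Xw; a minimal numpy illustration (dimensions and the inflow g are placeholders):

```python
import numpy as np

N, d = 1000, 10
X = np.random.randn(N, d)           # i.i.d. Gaussian inputs
w = np.random.randn(d)              # current weight of one ReLU node
g = np.random.randn(N)              # negative gradient inflow at this node

D = (X @ w > 0).astype(np.float64)  # diagonal of D(w), stored as a vector
delta_w = X.T @ (D * g)             # Eq. 3: X^T D g
```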
This can easily be applied to multilayer ReLU networks. Denote j ∈ [c] if node j is in layer c, dc as the width of layer c, and uj and vj as the outputs of the teacher network and the student network, respectively. A simple deduction yields the following lemma:
Lemma 2.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:
gj = Lj ∑ j′ (L∗j′uj′ − Lj′vj′) (4)
where Lj and L∗j are N -by-N diagonal matrices. For any k ∈ [c+ 1], Lk = ∑ j∈[c] wjkDjLj and similarly for L∗k. For the first layer, L = L ∗ = I .
The intuition here is to start from g = u − v (true for l2 loss) at the top layer and use induction. With this formulation, we could write the finite dynamics for wc (all parameters in layer c). Denote the N -by-dc+1dc matrix Rc = [LjDj ]j∈[c]Xc and R∗c = [L ∗ jD ∗ j ]j∈[c]X ∗ c . Using gradient descent rules:
∆wj = Xcᵀ Dj gj = Xcᵀ Dj Lj (∑j′ L∗j′ D∗j′ X∗c w∗j′ − ∑j′ Lj′ Dj′ Xc wj′) (5)
= Xcᵀ Dj Lj (R∗c w∗c − Rc wc). (6)
Therefore we have:
∆wc = Rcᵀ (R∗c w∗c − Rc wc). (7)
3 SINGLE RELU CASE
Let’s start with the simplest case where there is only one ReLU node, K = 1. At iteration t, following Eqn. 3, the gradient update rule is:
∆w(t) = XᵀD(t)g(t) = XᵀD(t)(D∗Xw∗ −D(t)Xw(t)) (8)
Note here how the notation of D(t) comes into play (and D(t)D(t) = D(t)). Indeed, when the neuron is cut off at sample l, then (D(t))ll is zero and will block the corresponding gradient component.
Linear case. In this situationD(t) = D∗ = I (no gating in either forward or backward propagation) and:
w(t+1) = w(t) + (η/N) XᵀX (w∗ − w(t)) (9)
where η/N is the learning rate. When it is sufficiently small so that the spectral radius ρ(I − (η/N) XᵀX) < 1, w(t+1) will converge to w∗ when t → +∞. Note that this convergence is guaranteed for any initial condition w(1), if XᵀX is full rank with suitable η. This is consistent with its convex nature. If the entries of X follow an i.i.d. Gaussian distribution, then E[(1/N) XᵀX] = I and the condition is satisfied.
Nonlinear (ReLU) case. In this case, ∆w = XᵀD(D∗Xw∗ − DXw), in which D is a function of w. Intuitively, this term goes to zero when w → w∗, and one might approximate it as (N/2)(w∗ − w) in the i.i.d. Gaussian case, since roughly half of the samples are blocked. However, once we make such an approximation, we lose the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.
Then how should we analyze it? Notice that in ∆w, both of the two terms have the form F (e,w) = XᵀD(e)D(w)Xw. Using this form, E [∆w] = E [F (w/‖w‖,w∗)] − E [F (w/‖w‖,w)]. Here e is a unit vector called the “projected” weight. In the following, we will show that E [F (e,w)] has the following close form under i.i.d Gaussian assumption on X:
Lemma 3.1 Denote F (e,w) = XᵀD(e)D(w)Xw where e is a unit vector, X = [x1,x2, · · · ,xN ]ᵀ is N -by-d sample matrix and D(w) = diag(Xw > 0) is a binary diagonal matrix. If xi ∼ N(0, I) and are i.i.d (and thus bias-free), then:
E[F(e,w)] = (N/2π) [(π − θ) w + ‖w‖ sin θ e] (10)
where θ = ∠(e,w) ∈ [0, π] is the angle between e and w.
Note that the expectation analysis smooths out the non-differentiability of ReLU, leaving only one singularity at w = 0. The intuition is that the expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E[∆w] takes the following closed form:
E[∆w] = (N/2)(w∗ − w) + (N/2π)(α sin θ w − θ w∗) (11)
where α = ‖w∗‖/‖w‖ and θ ∈ [0, π] is the angle between w and w∗. The first term is expected while the last two terms show the nonlinear behavior. Using Lyapunov’s method, we show that the dynamics (if treated continuously) converges to w∗ when w(1) ∈ Ω = {w : ‖w −w∗‖ < ‖w∗‖}:
Lemma 3.2 When w(1) ∈ Ω = {w : ‖w − w∗‖ < ‖w∗‖}, following the dynamics of Eqn. 11, the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable; thus w(t) → w∗ when t → +∞.
See Appendix for the proof. The intuition is to represent V as a 2-by-2 bilinear form of vector [‖w‖, ‖w∗‖], and the bilinear coefficient matrix is positive definite. One question arises: will the same approach show the dynamics converges when the initial conditions lie outside the region Ω, in particular for any region that includes the origin? The answer is probably no. Note that w = 0 is a singularity in which ∆w is not continuous (if approaching from different directions towards w = 0, ∆w is different). It is due to the fact that ReLU function is not differentiable at the origin. We could remove this singularity by “smoothing out” ReLU around the origin. This will yield ∆w→ 0 when w → 0. In this case, V̇ (0) = 0 so Lyapunov method could only tell that the dynamics is stable but not convergent. Note that for ReLU activation, σ′(x) = 0 for certain negative x even after a local smoothing, so the global convergence claim in [Mei et al. (2016)] for l2 loss does not apply.
Random Initialization. Then we study how to sample w(1) so that w(1) ∈ Ω. We would like to sample within Ω, but we do not know where w∗ is. Sampling around the origin with a big radius r ≥ 2‖w∗‖ is inefficient, in particular in high-dimensional space. This is because when the sample is uniform, the probability of hitting Ω is proportional to (‖w∗‖/r)^d ≤ 2^{−d}, which is exponentially small.
A better idea is to sample around the origin with a very small radius (but not at w = 0), so that the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples are useful (Fig. 2(a)), as shown in the following theorem:
Theorem 3.3 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with r ≤ ε √(2π/(d + 1)) ‖w∗‖.
The intuition here is to lower-bound the probability of the shaded area (Fig. 2(b)). From the proof, the conclusion can be made stronger to show r ∼ 1/√d, consistent with common initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. Fig. 2(c) shows an example in the 2D case, in which there is a singularity at the origin, and sampling towards w∗ yields convergence. This is consistent with the analysis above.
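The expected dynamics of Eqn. 11 is also easy to iterate directly; the sketch below draws a small random initialization (kept inside Ω so that Lemma 3.2 applies) and follows the expected update until it reaches w∗. The step size and dimension are arbitrary choices.

```python
import numpy as np

d, N, eta = 20, 1.0, 0.05            # N only rescales the step size, so set it to 1
w_star = np.random.randn(d)
r = np.sqrt(2 * np.pi / (d + 1)) * np.linalg.norm(w_star)   # radius scale from Thm. 3.3

while True:                          # draw uniformly from B_r, keep a sample inside Omega
    v = np.random.randn(d)
    w = r * np.random.rand() ** (1.0 / d) * v / np.linalg.norm(v)
    if np.linalg.norm(w - w_star) < np.linalg.norm(w_star):
        break

def expected_update(w):
    """Eq. 11: E[dw] = N/2 (w* - w) + N/(2 pi) (alpha sin(theta) w - theta w*)."""
    alpha = np.linalg.norm(w_star) / np.linalg.norm(w)
    cos_t = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
    return N / 2 * (w_star - w) + N / (2 * np.pi) * (alpha * np.sin(theta) * w - theta * w_star)

for _ in range(2000):
    w = w + eta * expected_update(w)

print(np.linalg.norm(w - w_star))    # -> close to 0 (Lemma 3.2 applies inside Omega)
```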
4 MULTIPLE RELUS CASE
Now we are ready to analyze the network g(x) = ∑K j=1 σ(w ᵀ j x) for K ≥ 2 (Fig. 1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996); Soudry & Carmon (2016); Fukumizu & Amari (2000)]. In this case, Lj = L∗j = I for 1 ≤ j ≤ K. Then we have the following nonlinear dynamics from Eqn. 7:
∆wj = K∑ j′=1 f(wj ,wj′ ,w ∗ j′) (12)
where f = F (wj/‖wj‖,w∗j′)− F (wj/‖wj‖,wj′). Therefore, using Eqn. 10, its expectation is:
(2π/N) E[f(wj, wj′, w∗j′)] = (π − θ∗j′j) w∗j′ − (π − θj′j) wj′ + ((‖w∗j′‖/‖wj‖) sin θ∗j′j − (‖wj′‖/‖wj‖) sin θj′j) wj, (13)
where θ∗j′j ≡ ∠(wj, w∗j′) and θj′j ≡ ∠(wj, wj′). Eqn. 12 (and its expected version) gives a very complicated nonlinear dynamics and is hard to solve in general. Unlike K = 1, a similar approach with a Lyapunov function does not yield a decisive conclusion. However, if we consider the symmetric case wj = Pjw and w∗j = Pjw∗, where Pj is a cyclic permutation matrix that maps index j′ + 1 to (j′ + j mod K) + 1 (and P1 is the identity matrix), then the RHS of the expected version of Eqn. 12 can be simplified as follows:
E [∆wj ] = ∑ j′ E [ f(wj ,wj′ ,w ∗ j′) ] = ∑ j′ E [f(Pjw, Pj′w, Pj′w∗)]
= ∑ j′′ E [f(Pjw, PjPj′′w, PjPj′′w∗)] ({Pj}Kj=1 is a group)
= Pj ∑ j′′ E [f(w, Pj′′w, Pj′′w∗)] (‖Pw1‖ = ‖w1‖, ∠(Pw1, Pw2) = ∠(w1,w2))
= PjE [∆w1] (14)
which means that if all wj and w∗j are symmetric under the action of the cyclic group, so are their expected gradients. Therefore, the trajectory {w(t)} keeps this cyclic structure. Instead of solving a system of K equations, we only need to solve one:
E [∆w] = K∑ j=1 E [f(w, Pjw, Pjw∗)] (15)
Surprisingly, there is another layer of symmetry in Eqn. 15 when {w∗j} forms an orthonormal basis (w∗j′ᵀ w∗j = δjj′). In this case, if we start with w(1) = x w∗ + y ∑j≠1 Pj w∗, then we can show that the trajectory keeps this structure and Eqn. 15 can be further reduced to the following 2D nonlinear dynamics:
(2π/N) E[∆x, ∆y]ᵀ = −{ (π − φ)(x − 1 + (K − 1)y) [1, 1]ᵀ + [θ, φ∗ − φ]ᵀ + φ [x − 1, y]ᵀ } + [(K − 1)(α sin φ∗ − sin φ) + α sin θ] [x, y]ᵀ (16)
Here the symmetric factors (α ≡ ‖w∗j′‖/‖wj‖, θ ≡ θ∗jj, φ ≡ θj′j, φ∗ ≡ θ∗j′j) are defined as follows:
α = (x² + (K − 1)y²)^{−1/2}, cos θ = αx, cos φ∗ = αy, cos φ = α²(2xy + (K − 2)y²). (17)
For this 2D dynamics, we thus have the following theorem:
Theorem 4.1 For any K ≥ 2, the 2D dynamics (Eqn. 16) shows the following behaviors:
(1) Symmetric case. If the initial condition satisfies x(1) = y(1) ∈ (0, 1], then the dynamics reduces to 1D and converges to the saddle point x = y = (1/(πK)) (√(K − 1) − arccos(1/√K) + π).
(2) Symmetry-Breaking. If (x(1), y(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, then the dynamics always converges to (x, y) = (1, 0).
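Both cases are easy to check by integrating Eqn. 16 numerically; a small sketch (step size, K and the initial point are arbitrary choices):

```python
import numpy as np

def delta_xy(x, y, K):
    """Right-hand side of Eq. 16 (up to the positive factor N / (2*pi))."""
    alpha = 1.0 / np.sqrt(x**2 + (K - 1) * y**2)
    theta = np.arccos(np.clip(alpha * x, -1, 1))
    phi_s = np.arccos(np.clip(alpha * y, -1, 1))
    phi = np.arccos(np.clip(alpha**2 * (2 * x * y + (K - 2) * y**2), -1, 1))
    common = (np.pi - phi) * (x - 1 + (K - 1) * y)
    grow = (K - 1) * (alpha * np.sin(phi_s) - np.sin(phi)) + alpha * np.sin(theta)
    dx = -(common + theta + phi * (x - 1)) + grow * x
    dy = -(common + (phi_s - phi) + phi * y) + grow * y
    return dx, dy

K, eta = 5, 1e-3
x, y = 1e-3, 0.0                      # symmetry-breaking initialization (x > y)
for _ in range(200000):               # with x == y instead, the iteration stalls at the saddle
    dx, dy = delta_xy(x, y, K)
    x, y = x + eta * dx, y + eta * dy

print(x, y)   # -> approximately (1, 0), i.e. the teacher weights are recovered
```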
From (x(t), y(t)) we can recover w(t)j = x(t) w∗j + y(t) ∑j′≠j w∗j′. Obviously, convergence of Eqn. 16 to (1, 0) means that Eqn. 12 converges to {w∗j}, i.e., the teacher parameters are recovered:
Corollary 4.2 For a bias-free two-layered ReLU network g(x;w) = ∑j σ(wjᵀx) that takes Gaussian i.i.d. inputs (Fig. 1), if the teacher's parameters {w∗j} form an orthonormal basis, then when the student parameters are initialized in the form w(1)j = x(1) w∗j + y(1) ∑j′≠j w∗j′ with (x(1), y(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, the dynamics (Eqn. 12) converges to {w∗j} without being trapped in local minima.
When symmetry is broken, since the closure of Ω includes the origin, there exists a path starting at arbitrarily small neighborhood of origin to w∗, regardless of how large ‖w∗‖ is. In contrast to traditional convex analysis that only gives the local parameter-dependent convergence basin around w∗j , here we obtain a convergence basin that is parameter-independent. In comparison, [Saad & Solla (1996)] uses a different activation function (σ(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher’s weights w∗), leaving symmetry breaking an empirical procedure. Here we show that it is possible to give global convergence analysis on certain symmetry breaking cases for two-layered ReLU network.
By symmetry, Corollary 4.2 immediately suggests that when w(1) = y(1) ∑Kj=1 w∗j + (x(1) − y(1)) w∗j′, the dynamics converges to Pj′ w∗. Since x > y but the two can be arbitrarily close, the slightest perturbation of the symmetric solution x = y leads to a different fixed point, which is a permutation of w∗. This is very similar to the Spontaneous Symmetry-Breaking (SSB) procedure in physics, in which a high-energy state with full symmetry moves to a low-energy state that only retains part of the symmetry. In this case, the energy is the objective function E, the high-energy state is the initialization that is almost symmetric but with small fluctuations, and the low-energy state is the fixed point the dynamics converges to.
From the simulation shown in Fig. 4, we could see that gradient descent takes a detour to reach the desired solution w∗, even when the initialization is aligned with w∗. This is because in the first stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x and y increases); when the “obvious” component has been explained away, then the residue changes its direction and pushes some ReLU nodes to explain other components as well (x increases but y decreases).
Empirically this path also converges to w∗ under noise. We leave it as a conjecture that the system converges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w∗. The reason is that a random initialization almost never gives ties; without a tie, there exists one leading component which will dominate the convergence.
Conjecture 4.3 When the initialization is w(1) = x(1) w∗j + y(1) ∑j′≠j w∗j′ + ε, where ε is Gaussian noise and (x(1), y(1)) ∈ Ω, the dynamics in Eqn. 12 also converges to w∗ without being trapped in local minima.
5 SIMULATION
5.1 CLOSE FORM SOLUTION FOR ONE RELU NODE
We verify our close-form expression E[F(e,w)] = E[XᵀD(e)D(w)Xw] (Eqn. 10) with simulation. We randomly pick e and w so that their angle ∠(e,w) is uniformly distributed in [0, π]. We prepare the input data X with a standard Gaussian distribution and compare the close-form solution E[F(e,w)] with F(e,w), the actual data term in gradient descent without expectation. We use the relative RMS error err = ‖E[F(e,w)] − F(e,w)‖/‖F(e,w)‖. As shown in Fig. 3(a), the error distribution over angles reveals the properties of the close-form solution. For small θ, D(w) and D(e) overlap sufficiently, giving a reliable estimate of the gradient. When θ → π, D(w) and D(e) tend not to overlap, leaving very few data points involved in the gradient computation. As a result, the variance grows. Note that all our analysis operates on θ ∈ [0, π/2] and is not affected by this behavior. In the following, angles are sampled from [0, π/2].
(Figure 5, caption: Top row: convergence when very large noise is present; both teacher and student networks use g(x) = ∑Kj=1 σ(wjᵀx), and each experiment has 8 runs. Bottom row: convergence when we use g2(x) = ∑Kj=1 aj σ(wjᵀx), with the top weights aj fixed at different values (rather than 1); large positive aj corresponds to fast convergence, and when aj has positive/negative components the network does not converge to w∗.)
Fig. 3(a) shows that the close form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X , e.g., uniform distribution in [−1/2, 1/2]. As shown in Fig. 3(d), the close form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to the future work to prove its usability for broader distributions.
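This check is straightforward to reproduce; the sketch below draws Gaussian inputs for a fixed angle θ, evaluates F(e, w) empirically, and compares it against the closed form of Eqn. 10 (the choices of d, N and θ are arbitrary).

```python
import numpy as np

def closed_form(e, w, N):
    """Eq. 10: E[F(e,w)] = N/(2 pi) ((pi - theta) w + ||w|| sin(theta) e)."""
    theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1, 1))
    return N / (2 * np.pi) * ((np.pi - theta) * w + np.linalg.norm(w) * np.sin(theta) * e)

d, N, theta = 50, 100000, np.pi / 3
e = np.zeros(d); e[0] = 1.0                                     # unit vector e
w = 2.0 * (np.cos(theta) * e + np.sin(theta) * np.eye(d)[1])    # ||w|| = 2, angle theta to e

X = np.random.randn(N, d)
D_e = (X @ e > 0).astype(float)
D_w = (X @ w > 0).astype(float)
F_emp = X.T @ (D_e * D_w * (X @ w))                             # F(e,w) = X^T D(e) D(w) X w

err = np.linalg.norm(F_emp - closed_form(e, w, N)) / np.linalg.norm(F_emp)
print(err)   # relative RMS error shrinks as N grows
```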
5.2 CONVERGENCE FOR MULTIPLE RELU NODES
Fig. 4(a) and (b) show the 2D vector field given by the 2D dynamics (Eqn. 16), and Fig. 4(c) shows the 2D trajectory converging to the teacher's parameters w∗. Interestingly, even when we initialize the weights as (10−3, 0), aligned with w∗, gradient descent takes detours to reach the destination. One explanation is that at the beginning all nodes move in a similar direction trying to explain the data; once the data have been partly explained, specialization follows (y decreases).
Fig. 5 shows empirical convergence for K ≥ 2 when the initialization deviates from the symmetric initialization in Thm. 4.1. Unless the deviation is large, gradient descent converges to w∗. We also check the convergence of a more general network g2(x) = ∑Kj=1 aj σ(wjᵀx). When aj > 0, convergence follows; however, when some aj is negative, the network does not converge to w∗, even though the student network already knows the ground-truth values of {aj}Kj=1.
6 CONCLUSION AND FUTURE WORK
In this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLU networks of the form g(x;w) = ∑Kj=1 σ(wjᵀx), where σ(x) = max(x, 0) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution and the output is generated by a teacher network with parameters w∗. For K = 1 we show that a close-form nonlinear dynamics can be obtained and its convergence to w∗ can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al. (2015)] and is independent of the value of w∗. For K ≥ 2, when the teacher parameters {w∗j} form an orthonormal basis, we prove that the trajectory from symmetric initialization is trapped at a saddle point, while certain symmetry-breaking initializations converge to w∗ without being trapped in any local minima. Future work includes the analysis of general cases (or the symmetric case plus noise) for K ≥ 2, and a generalization to multilayer ReLU (or other nonlinear) networks.
7 APPENDIX
Here we give detailed proofs for all the theorems.
7.1 PROPERTIES OF RELU NETWORKS
Lemma 7.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:
gj = Lj ∑ j′ (L∗j′uj′ − Lj′vj′) (18)
where Lj and L∗j are N -by-N diagonal matrices. For any k ∈ [c+ 1], Lk = ∑ j∈[c] wjkDjLj and similarly for L∗k.
Proof We prove by induction on layer. For the first layer, there is only one node with g = u − v, therefore Lj = Lj′ = I . Suppose the condition holds for all node j ∈ [c]. Then for node k ∈ [c+1], we have:
gk = ∑ j wjkDjgj = ∑ j wjkDjLj ∑ j′ L∗j′uj′ − ∑ j′ Lj′vj′ =
∑ j wjkDjLj ∑ j′ L∗j′ ∑ k′ D∗j′w ∗ jk′uk′ − ∑ j′ Lj′ ∑ k′ Dj′wjk′vk′ =
∑ j wjkDjLj ∑ j′ L∗j′D ∗ j′ ∑ k′ w∗jk′uk′ − ∑ j wjkDjLj ∑ j′ Lj′Dj′ ∑ k′ wjk′vk′
= ∑ k′ ∑ j wjkDjLj ∑ j′ L∗j′D ∗ j′w ∗ jk′ uk′ −∑ k′ ∑ j wjkDjLj ∑ j′ Lj′Dj′wjk′ vk′ Setting Lk = ∑ j wjkDjLj and L ∗ k = ∑ j w ∗ jkD ∗ jL ∗ j (both are diagonal matrices), we thus have:
gk = ∑ k′ LkL ∗ k′uk′ − LkLk′vk′ = Lk ∑ k′ L∗k′uk′ − Lk′vk′ (19)
7.2 ONE RELU CASE
Lemma 7.2 Suppose F (e,w) = XᵀD(e)D(w)Xw where e is a unit vector and X = [x1,x2, · · · ,xN ]ᵀ is N -by-d sample matrix. If xi ∼ N(0, I) and are i.i.d, then:
E[F(e,w)] = (N/2π) ((π − θ) w + ‖w‖ sin θ e) (20)
where θ ∈ [0, π] is the angle between e and w.
Proof Note that F can be written in the following form: F (e,w) = ∑
i:xᵀi e≥0,x ᵀ i w≥0
xix ᵀ iw (21)
where xi are samples so that X = [x1,x2, · · · ,xn]ᵀ. We set up the axes related to e and w as in Fig. 6, while the rest of the axis are prependicular to the plane. In this coordinate system, any vector x = [r sinφ, r cosφ, x3, . . . , xd]. We have an orthonomal set of bases: e, e⊥ = −e−w/‖w‖ cos θsin θ (and any set of bases that span the rest of the space). Under the basis, the representation for e and w is [1,0d−1] and [‖w‖ cos θ,−‖w‖ sin θ,0d−2]. Note that here θ ∈ (−π, π]. The angle θ is positive when e “chases after” w, and is otherwise negative.
Now we consider the quality R(φ0) = E [ 1 N ∑ i:φi∈[0,φ0] xix ᵀ i ] . If we take the expectation and use polar coordinate only in the first two dimensions, we have:
R(φ0) = E 1 N ∑ i:φi∈[0,φ0] xix ᵀ i = E [xixᵀi |φi ∈ [0, φ0]]P [φi ∈ [0, φ0]] =
∫ +∞ 0 ∫∫ +∞ −∞ ∫ φ0 0 r sinφr cosφ. . . xd [r sinφ r cosφ . . . xd] p(r)p(θ) d∏ k=3 p(xk)rdrdφdx3 . . . dxd
where p(r) = e−r 2/2 and p(θ) = 1/2π. Note that R(φ0) is a d-by-d matrix. The first 2-by-2 block can be computed in close form (note that ∫ +∞ 0
r2p(r)rdr = 2). Any off-diagonal element except for the first 2-by-2 block is zero due to symmetric property of i.i.d Gaussian variables. Any diagonal element outside the first 2-by-2 block will be P [φi ∈ [0, φ0]] = φ0/2π. Finally, we have:
R(φ0) = E 1 N ∑ i:φi∈[0,φ0] xix ᵀ i = 1 4π [ 2φ0 − sin 2φ0 1− cos 2φ0 0 1− cos 2φ0 2φ0 + sin 2φ0 0 0 0 2φ0Id−2 ] (22)
= φ0 2π Id + 1 4π
[ − sin 2φ0 1− cos 2φ0 0 1− cos 2φ0 sin 2φ0 0
0 0 0
] (23)
With this equation, we could then compute E [F (e,w)]. When θ ≥ 0, the condition {i : xᵀi e ≥ 0,xᵀiw ≥ 0} is equivalent to {i : φi ∈ [θ, π]} (Fig. 6(a)). Using w = [‖w‖ cos θ,−‖w‖ sin θ,0d−2] and we have:
E [F (e,w)] = N (R(π)−R(θ))w (24)
= N
4π
( 2(π − θ)w − ‖w‖ [ − sin 2θ 1− cos 2θ 0 1− cos 2θ sin 2θ 0
0 0 0
][ cos θ − sin θ
0
]) (25)
= N
2π
( (π − θ)w + ‖w‖ [ sin θ 0 ]) (26)
= N
2π ((π − θ)w + ‖w‖ sin θe) (27)
For θ < 0, the condition {i : xᵀi e ≥ 0,x ᵀ iw ≥ 0} is equivalent to {i : φi ∈ [0, π + θ]} (Fig. 6(b)), and similarly we get
E [F (e,w)] = N (R(π + θ)−R(0))w = N 2π ((π + θ)w − ‖w‖ sin θe) (28)
Notice that by abuse of notation, the θ appears in Eqn. 20 is the absolute value and Eqn. 20 follows.
Lemma 7.3 In the region ‖w(1) − w∗‖ < ‖w∗‖, following the dynamics (Eqn. 11), the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable; thus w(t) → w∗ when t → +∞.
Proof Denote that Ω = {w : ‖w(1) −w∗‖ < ‖w∗‖}. Note that
V̇ = (w −w∗)ᵀ∆w = −yᵀMy (29) where y = [‖w∗‖, ‖w‖]ᵀ and M is the following 2-by-2 matrix:
M = (1/2) [ sin 2θ + 2π − 2θ, −(2π − θ) cos θ − sin θ ; −(2π − θ) cos θ − sin θ, 2π ] (30)
In the following we will show that M is positive definite when θ ∈ (0, π/2]. It suffices to show that M11 > 0, M22 > 0 and det(M) > 0. The first two are trivial, while the last one is:
4det(M) = 2π(sin 2θ + 2π − 2θ)− [(2π − θ) cos θ + sin θ]2 (31) = 2π(sin 2θ + 2π − 2θ)− [ (2π − θ)2 cos2 θ + (2π − θ) sin 2θ + sin2 θ ] (32)
= (4π2 − 1) sin2 θ − 4πθ + 4πθ cos2 θ − θ2 cos2 θ + θ sin 2θ (33) = (4π2 − 4πθ − 1) sin2 θ + θ cos θ(2 sin θ − θ cos θ) (34)
Note that 4π2 − 4πθ − 1 = 4π(π − θ) − 1 > 0 for θ ∈ [0, π/2], and g(θ) = sin θ − θ cos θ ≥ 0 for θ ∈ [0, π/2] since g(0) = 0 and g′(θ) ≥ 0 in this region. Therefore, when θ ∈ (0, π/2], M is positive definite.
When θ = 0, M(θ) = π[1, −1; −1, 1] is positive semi-definite, with the null eigenvector (√2/2)[1, 1], i.e., ‖w‖ = ‖w∗‖. However, along θ = 0, the only w that satisfies ‖w‖ = ‖w∗‖ is w = w∗. Therefore, V̇ = −yᵀMy < 0 in Ω. Note that although this region could be expanded to the entire open half-space H = {w : wᵀw∗ > 0}, it is not straightforward to prove convergence in H, since the trajectory might go outside H. On the other hand, Ω is the level set V < (1/2)‖w∗‖², so a trajectory starting within Ω remains inside.
Theorem 7.4 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with:
r ≤ ε √(2π/(d + 1)) ‖w∗‖ (35)
Proof Given a ball of radius r, we first compute the "gap" δ of the spherical cap (Fig. 2(b)). First cos θ = r/(2‖w∗‖), so δ = r cos θ = r²/(2‖w∗‖). Then a sufficient condition for the probability argument to hold is to ensure that the volume Vshaded of the shaded area is greater than ((1 − ε)/2) Vd(r), where Vd(r) is the volume of the d-dimensional ball of radius r. Since Vshaded ≥ (1/2)Vd(r) − δ Vd−1(r), it suffices to have:
(1/2) Vd(r) − δ Vd−1(r) ≥ ((1 − ε)/2) Vd(r) (36)
which gives:
δ ≤ (ε/2) Vd(r)/Vd−1(r). (37)
Using δ = r²/(2‖w∗‖) and Vd(r) = Vd(1) r^d, we thus have:
r ≤ ε (Vd(1)/Vd−1(1)) ‖w∗‖ (38)
where Vd(1) is the volume of the unit ball. Since the volume of the d-dimensional unit ball is
Vd(1) = π^{d/2} / Γ(d/2 + 1), (39)
where Γ(x) = ∫₀^∞ t^{x−1} e^{−t} dt, we have
Vd(1)/Vd−1(1) = √π · Γ(d/2 + 1/2) / Γ(d/2 + 1). (40)
From Gautschi's inequality,
x^{1−s} < Γ(x + 1)/Γ(x + s) < (x + s)^{1−s}, for x > 0, 0 < s < 1, (41)
with s = 1/2 and x = d/2 we have:
((d + 1)/2)^{−1/2} < Γ(d/2 + 1/2)/Γ(d/2 + 1) < (d/2)^{−1/2}. (42)
Therefore, it suffices to have:
r ≤ ε √(2π/(d + 1)) ‖w∗‖. (43)
Note that this upper bound is tight when δ → 0 and d→ +∞, since all inequality involved asymptotically becomes equal.
7.3 TWO LAYER CASE
Lemma 7.5 For φ∗, θ and φ defined in Eqn. 17:
α ≡ (x2 + (K − 1)y2)−1/2 (44) cos θ ≡ αx (45) cosφ∗ ≡ αy (46) cosφ ≡ α2(2xy + (K − 2)y2) (47)
we have the following relations in the triangular region Ωε0 = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)):
(1) φ, φ∗ ∈ [0, π/2] and θ ∈ [0, θ0), where θ0 = arccos(1/√K).
(2) cosφ = 1− α2(x− y)2 and sinφ = α(x− y) √ 2− α2(x− y)2.
(3) φ∗ ≥ φ (equality holds only when y = 0) and φ∗ > θ.
Proof Propositions (1) and (2) are computed by direct calculations. In particular, note that since cos θ = αx = 1/ √ 1 + (K − 1)(y/x)2 and x > y ≥ 0, we have cos θ ∈ (1/ √ K, 1] and θ ∈ [0, θ0). For Preposition (3), φ∗ = arccosαy > θ = arccosαx because x > y. Finally, for x > y > 0, we have
cosφ cosφ∗ = α2(2xy + (K − 2)y2) αy = α(2x+ (K − 2)y) > α(x+ (K − 1)y) > 1 (48)
The final inequality is because K ≥ 2, x, y > 0 and thus (x + (K − 1)y)2 > x2 + (K − 1)2y2 > x2 + (K − 1)y2 = α−2. Therefore φ∗ > φ. If y = 0 then φ∗ = φ.
Theorem 7.6 For the dynamics defined in Eqn. 16, there exists ε0 > 0 so that the triangular region Ωε0 = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)) is a convergent region. That is, the flow goes inwards on all three edges and any trajectory starting in Ωε0 stays inside.
Proof We discuss the three boundaries as follows:
Case 1: y = 0, 0 ≤ x ≤ 1, horizontal line. In this case, θ = 0, φ = π/2 and φ∗ = π/2. The y component of the dynamics in this line is:
f1 ≡ 2π N ∆y = −π 2 (x− 1) ≥ 0 (49)
So ∆y points to the interior of Ω.
Case 2: x = 1, 0 ≤ y ≤ 1, vertical line. In this case, α ≤ 1 and the x component of the dynamics is:
f2 ≡ 2π
N ∆x = −(π − φ)(K − 1)y − θ + (K − 1)(α sinφ∗ − sinφ) + α sin θ (50)
= −(K − 1) [(π − φ)y − (α sinφ∗ − sinφ)] + (α sin θ − θ) (51) Note that since α ≤ 1, α sin θ ≤ sin θ ≤ θ, so the second term is non-positive. For the first term, we only need to check whether (π − φ)y − (α sinφ∗ − sinφ) is nonnegative. Note that
(π − φ)y − (α sinφ∗ − sinφ) (52) = (π − φ)y + α(x− y) √ 2− α2(x− y)2 − α √ 1− α2y2 (53)
= y [ π − φ− α √ 2− α2(x− y)2 ] + α [ x √ 2− α2(x− y)2 − √ 1− α2y2 ]
(54)
In Ω we have (x − y)2 ≤ 1, combined with α ≤ 1, we have 1 ≤ √ 2− α2(x− y)2 ≤ √
2 and√ 1− α2y2 ≤ 1. Since x = 1, the second term is nonnegative. For the first term, since α ≤ 1,
π − φ− α √
2− α2(x− y)2 ≥ π − π 2 − √ 2 > 0 (55)
So (π − φ)y − (α sinφ∗ − sinφ) ≥ 0 and ∆x ≤ 0, pointing inwards.
Case 3: x = y + , 0 ≤ y ≤ 1, diagonal line. We compute the inner product between (∆x,∆y) and (1,−1), the inward normal of Ω at the line. Using φ ≤ π2 sinφ for φ ∈ [0, π/2] and φ
∗ − θ = arccosαy − arccosαx ≥ 0 when x ≥ y, we have:
f3(y, ) ≡ 2π
N
[ ∆x ∆y ]ᵀ [ 1 −1 ] = φ∗ − θ − φ+ [(K − 1)(α sinφ∗ − sinφ) + α sin θ] (56)
≥ (K − 1) [ α sinφ∗ − ( 1 +
π
2(K − 1)
) sinφ ] = α(K − 1) [√ 1− α2y2 − ( 1 + π
2(K − 1)
)√ 2− α2 2 ] Note that for y > 0:
αy = 1√
(x/y)2 + (K − 1) = 1√ (1 + /y)2 + (K − 1) ≤ 1√ K
(57)
For y = 0, αy = 0 < √ 1/K. So we have √ 1− α2y2 ≥ √ 1− 1/K. And √ 2− α2 2 ≤ √
2. Therefore f3 ≥ α(K−1)(C1− C2) withC1 ≡ √ 1− 1/K > 0 andC2 ≡ √ 2(1+π/2(K−1)) > 0. With = 0 > 0 sufficiently small, f3 > 0.
Lemma 7.7 (Reparametrization) Denote = x − y > 0. The terms αx, αy and α involved in the trigometric functions in Eqn. 16 has the following parameterization:
α [ y x ] = 1
K
[ β − β2
β + (K − 1)β2 Kβ2
] (58)
where β2 = √
(K − β2)/(K − 1). The reverse transformation is given by β =√ K − (K − 1)α2 2. Here β ∈ [1, √ K) and β2 ∈ (0, 1]. In particular, the critical point (x, y) = (1, 0) corresponds to (β, ) = (1, 1). As a result, all trigometric functions in Eqn. 16 only depend on the single variable β. In particular, the following relationship is useful:
β = cos θ + √ K − 1 sin θ (59)
Proof This transformation can be checked by simple algebraic manipulation. For example:
1
αK (β − β2) =
1
K
(√ K
α2 − (K − 1) 2 −
) = 1
K
(√ (Ky + )2 − ) = y (60)
To prove Eqn. 59, first we notice that K cos θ = Kαx = β + (K − 1)β2. Therefore, we have (K cos θ − β)2 − (K − 1)2β22 = 0, which gives β2 − 2β cos θ + 1 −K sin2 θ = 0. Solving this quadratic equation and notice that β ≥ 1, θ ∈ [0, π/2] and we get:
β = cos θ + √ cos2 θ +K sin2 θ − 1 = cos θ + √ K − 1 sin θ (61)
Lemma 7.8 After reparametrization (Eqn. 58), f3(β, ) ≥ 0 for ∈ (0, β2/β]. Furthermore, the equality is true only if (β, ) = (1, 1) or (y, ) = (0, 1).
Proof Applying the parametrization (Eqn. 58) to Eqn. 56 and notice that α = β2 = β2(β), we could write f3 = h1(β)− (φ+ (K − 1) sinφ) (62) When β is fixed, f3 now is a monotonously decreasing function with respect to > 0. Therefore, f3(β, ) ≥ f3(β, ′) for 0 < ≤ ′ ≡ β2/β. If we could prove f3(β, ′) ≥ 0 and only attain zero at known critical point (β, ) = (1, 1), the proof is complete.
Denote f3(β, ′) = f31 + f32 where
f31(β, ′) = φ∗ − θ − ′φ+ ′α sin θ (63) f32(β, ′) = (K − 1)(α sinφ∗ − sinφ) ′ (64)
For f32 it suffices to prove that ′(α sinφ∗ − sinφ) = β2 sinφ∗ − β2β sinφ ≥ 0, which is equivalent to sinφ∗ − sinφ/β ≥ 0. But this is trivially true since φ∗ ≥ φ and β ≥ 1. Therefore, f32 ≥ 0. Note that the equality only holds when φ∗ = φ and β = 1, which corresponds to the horizontal line x ∈ (0, 1], y = 0. For f31, since φ∗ ≥ φ, φ∗ > θ and ′ ∈ (0, 1], we have the following:
f31 = ′(φ∗ − φ) + (1− ′)(φ∗ − θ)− ′θ + β2 sin θ ≥ − ′θ + β2 sin θ ≥ β2 ( sin θ − θ
β
) (65)
And it reduces to showing whether β sin θ − θ is nonnegative. Using Eqn. 59, we have:
f33(θ) = β sin θ − θ = 1
2 sin 2θ +
√ K − 1 sin2 θ − θ (66)
Note that f ′33 = cos 2θ + √ K − 1 sin 2θ − 1 = √ K cos(2θ − θ0) − 1, where θ0 = arccos 1√K . By Prepositions 1 in Lemma 7.5, θ ∈ [0, θ0). Therefore, f ′33 ≥ 0 and since f33(0) = 0, f33 ≥ 0. Again the equity holds when θ = 0, φ∗ = φ and ′ = 1, which is the critical point (β, ) = (1, 1) or (y, ) = (0, 1).
Theorem 7.9 For the dynamics defined in Eqn. 16, the only critical point (∆x = 0 and ∆y = 0) within Ω is (y, ) = (0, 1).
Proof We prove by contradiction. Suppose (β, ) is a critical point other than w∗. A necessary condition for this to hold is f3 = 0 (Eqn. 56). By Lemma 7.8, > ′ = β2/β > 0 and
− 1 +Ky = 1 α (β2 − α+ β − β2) = β − α α = β − β2/ α > β − β2/ ′ α = 0 (67)
So − 1 +Ky is strictly greater than zero. On the other hand, the condition f3 = 0 implies that
((K − 1)(α sinφ∗ − sinφ) + α sin θ) = −1 (φ∗ − θ) + φ (68)
Using φ ∈ [0, π/2], φ∗ ≥ φ and φ∗ > θ, we have: 2π
N ∆y = −(π − φ)( − 1 +Ky)− (φ∗ − φ)− φy + ((K − 1)(α sinφ∗ − sinφ) + α sin θ) y
= −(π − φ)( − 1 +Ky)− (φ∗ − φ)− 1 (φ∗ − θ)y < 0 (69)
So the current point (β, ) cannot be a critical point.
Theorem 7.10 Any trajectory in Ω 0 converges to (y, ) = (1, 0), following the dynamics defined in Eqn. 16.
Proof We have Lyaponov function V = E [E] so that V̇ = −E [∆wᵀ∆w] ≤ −E [∆w]ᵀ E [∆w] ≤ 0. By Thm. 7.9, other than the optimal solution w∗, there is no other symmetric critical point, ∆w 6= 0 and thus V̇ < 0. On the other hand, by Thm. 7.6, the triangular region Ω 0 is convergent, in which the 2D dynamics isC∞ differentiable. Therefore, any 2D solution curve ξ(t) will stay within. By PoincareBendixson theorem, when there is a unique critical point, the curve either converges to a limit circle or the critical point. However, limit cycle is not possible since V is strictly monotonous decreasing along the curve. Therefore, ξ(t) will converge to the unique critical point, which is (y, ) = (1, 0) and so does the symmetric system (Eqn. 12).
Theorem 7.11 When x = y ∈ (0, 1], the 2D dynamics (Eqn. 16) reduces to the following 1D case:
(2π/N) ∆x = −πK (x − x∗), (70)
where x∗ = (1/(πK)) (√(K − 1) − arccos(1/√K) + π). Furthermore, x∗ is a convergent critical point.
Proof The 1D system can be obtained with simple algebraic manipulations (note that when x = y, we have φ = 0 and θ = φ∗ = arccos(1/√K)). Note that the 1D system is linear and its close-form solution is x(t) = x∗ + C e^{−(KN/2) t}, which is thus convergent.
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, and presentation of the paper's content?
4. What are the novel aspects or insights provided by the paper regarding ReLU nonlinearity and realistic assumptions?
5. Are there any concerns or limitations regarding the paper's focus, methodology, or conclusions? | Review | Review
The paper proposes a convergence analysis of some two-layer NNs with ReLUs. It is not the first such analysis, but maybe it is novel on the assumptions used in the analysis, and the focus on ReLU nonlinearity that is pretty popular in practice.
The paper is quite hard to read, with many English mistakes and typos. Nevertheless, the analysis seems to be generally correct. The novelty and the key insights are however not always well motivated or presented. And the argument that the work uses realistic assumptions (Gaussian inputs for example) as opposed to other works, is quite debatable actually.
Overall, the paper looks like a correct analysis work, but its form is really suboptimal in terms of writing/presentation, and the novelty and relevance of the results are not always very clear, unfortunately. The main results and intuition should be more clearly presented, and details could be moved to appendices for example - that could only help to improve the visibility and impact of these interesting results. |
ICLR | Title
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
Abstract
In this paper, we use dynamical system to analyze the nonlinear weight dynamics of two-layered bias-free networks in the form of g(x;w) = ∑K j=1 σ(w T j x), where σ(·) is ReLU nonlinearity. We assume that the input x follow Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters w∗ using l2 loss. We first show that when K = 1, the nonlinear dynamics can be written in close form, and converges to w∗ with at least (1 − )/2 probability, if random weight initializations of proper standard derivation (∼ 1/ √ d) is used, verifying empirical practice [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. For networks with many ReLU nodes (K ≥ 2), we apply our close form dynamics and prove that when the teacher parameters {w∗ j}j=1 forms orthonormal bases, (1) a symmetric weight initialization yields a convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w∗ without local minima. To our knowledge, this is the first proof that shows global convergence in nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with l2 loss. Simulations verify our theoretical analysis.
1 INTRODUCTION
Deep learning has made substantial progress in many applications, including Computer Vision [He et al. (2016); Simonyan & Zisserman (2015); Szegedy et al. (2015); Krizhevsky et al. (2012)], Natural Language Processing [Sutskever et al. (2014)] and Speech Recognition [Hinton et al. (2012)]. However, till now, how and why it works remains elusive due to a lack of theoretical understanding. First, how simple approaches like gradient descent can solve a very complicated non-convex optimization effectively. Second, how the deep models, especially deep convolutional models, achieve generalization power despite massive parameters.
In this paper, we focus on the first problem and use dynamical systems to analyze the nonlinear gradient descent dynamics of a certain two-layered nonlinear network of the following form:
g(x; w) = ∑_{j=1}^K σ(w_jᵀ x)   (1)
where σ(x) = max(x, 0) is the ReLU nonlinearity. We consider the following setting: a student network learns the parameters that minimize the l2 distance between its prediction and the supervision provided by the teacher network of the same size with a fixed set of parameters w∗. We assume all inputs x to follow Gaussian distribution and thus the network is bias-free. Eqn. 1 is highly nonconvex and could contain exponential number of symmetrically equivalent solutions.
To analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks (see Lemma 2.1) in the teacher-student setting under the l2 loss. Then for K = 1, we prove that the nonlinear gradient dynamics of Eqn. 1 has a closed form and converges to w∗ with at least (1 − ε)/2 probability, if initialized randomly with standard deviation on the order of 1/√d, verifying commonly used initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. When K ≥ 2, we prove that when the teacher parameters {w∗_j}_{j=1}^K form orthonormal bases, (1) a symmetric initialization of the student network gets stuck at a saddle point and (2) under a certain symmetry-breaking weight initialization, the dynamics converges to w∗ without getting stuck in any local minima. Note that in both cases, the initialization can be arbitrarily close to the origin for a fixed ‖w∗‖, showing that such convergence behavior is beyond the local convex structure at w∗. To our knowledge, this is the first proof of its kind.
Previous works also use dynamical system to analyze deep neural networks. [Saxe et al. (2013)] analyzes the dynamics of multilayer linear network, and [Kawaguchi (2016)] shows every local minima is global for multilinear network. Very little theoretical work has been done to analyze the dynamics of nonlinear networks, especially deep ones. [Mei et al. (2016)] shows the global convergence whenK = 1 with activation function σ(x) when its derivatives σ′, σ′′, σ′′′ are bounded and σ′ > 0. Similar to our approach, [Saad & Solla (1996)] also uses the student-teacher setting and analyzes the dynamics of student network when the teacher’s parameters w∗ forms a orthonomal bases; however, it uses σ(x) = erf(x) as the nonlinearity and only analyzes the local behaviors of the two critical points (the saddle point in symmetric initializations, and w∗). In contrast, we prove the global convergence behavior in certain symmetry-breaking cases.
Many previous works analyze nonlinear network based on the assumption of independent activations: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutually independent. For example, [Choromanska et al. (2015a;b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent activations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. In this paper, no assumption of independent activation is made. For sigmoid activation, [Fukumizu & Amari (2000)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. (2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with tensor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.
The paper is organized as follows. Sec. 2 introduces the basic formulation and some interesting novel properties of ReLU in multilayered ReLU networks. Sec. 3 and Sec. 4 then analyze the twolayered model Eqn. 1 for K = 1 and K ≥ 2, respectively. Sec. 5 shows that simulation results are consistent with theoretical analysis. Finally Sec. 7 gives detailed proofs for all theorems.
2 PRELIMINARY
2.1 NOTATION
Denote X as an N-by-d input data matrix and w∗ as the parameter of the teacher network with desired N-by-1 output u = g(X; w∗). Now suppose we have an estimator w and the estimated output v = g(X; w). We want to know, with the l2 loss E(w) = (1/2)‖u − v‖² = (1/2)‖u − g(X; w)‖², whether gradient descent will converge to the desired solution w∗.
The gradient descent update is w^(t+1) = w^(t) + η∆w^(t), where ∆w^(t) ≡ −∇E(w^(t)). If we let η → 0, then the update rule becomes a first-order differential equation dw/dt = −∇E(w), or more concisely, ẇ = −∇E(w). In this case, Ė = ∇E(w)ᵀẇ = −‖∇E(w)‖² ≤ 0, i.e., the function value E is nonincreasing over time. The key is to check whether there exist other critical points w ≠ w∗ such that ∇E(w) = 0. In our analysis, we assume the entries of the input X follow a Gaussian distribution. In this situation, the gradient is a random variable and we set ∆w = −E[∇E(w)]. The expected value E[E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because
E [ Ė ] = −E [∇E(w)ᵀ∇E(w)] ≤ −E [∇E(w)]ᵀ E [∇E(w)] ≤ 0 (2)
Therefore, we analyze the behavior of expected gradient E [∇E(w)] rather than∇E(w).
2.2 PROPERTIES OF RELU
In this paper, we discover a few useful properties of ReLU that make our analysis much simpler. Denote D = D(w) = diag(Xw > 0) as an N-by-N diagonal matrix. The l-th diagonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we can write σ(Xw) = DXw. Note that D only depends on the direction of w but not its magnitude.
Note that for ReLU, D is also “transparent” to derivatives. For example, the Jacobian Jw[σ(Xw)] = σ′(Xw)X = DX in differentiable regions. This gives a very concise rule for gradient descent in a ReLU network: suppose we have a negative gradient inflow vector g (of dimension N-by-1) on the current ReLU node with weights w; then we can simply write the update ∆w as:
∆w = Jw[σ(Xw)]ᵀ g = XᵀDg   (3)
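As a concrete illustration of Eqn. 3, the following sketch (our own illustrative NumPy snippet, not part of the original paper; all variable names are ours) forms the gating matrix D for a single ReLU node and checks that XᵀDg agrees with the negative gradient of the l2 loss obtained by finite differences.

```python
import numpy as np

# Minimal sketch of Eqn. 3 for a single ReLU node (illustrative, assumed setup).
rng = np.random.default_rng(0)
N, d = 1000, 5
X = rng.standard_normal((N, d))          # i.i.d. Gaussian inputs
w_star = rng.standard_normal(d)          # teacher weights
w = rng.standard_normal(d)               # student weights

u = np.maximum(X @ w_star, 0.0)          # teacher output
v = np.maximum(X @ w, 0.0)               # student output
g = u - v                                # negative gradient inflow for the l2 loss

D = (X @ w > 0).astype(float)            # diagonal of D(w), stored as a vector
delta_w = X.T @ (D * g)                  # Eqn. 3: X^T D g

def loss(w_):
    return 0.5 * np.sum((u - np.maximum(X @ w_, 0.0)) ** 2)

eps = 1e-6
grad_fd = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                    for e in np.eye(d)])
print(np.abs(delta_w + grad_fd).max())   # small: delta_w matches -dE/dw up to numerical error
```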
This can be easily applied to multilayer ReLU network. Denote j ∈ [c] if node j is in layer c, dc as the width of layer c, and uj and vj as the output of teacher network and student network, respectively. A simple deduction yields the following lemma:
Lemma 2.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:
gj = Lj ∑ j′ (L∗j′uj′ − Lj′vj′) (4)
where Lj and L∗j are N -by-N diagonal matrices. For any k ∈ [c+ 1], Lk = ∑ j∈[c] wjkDjLj and similarly for L∗k. For the first layer, L = L ∗ = I .
The intuition here is to start from g = u − v (true for l2 loss) at the top layer and use induction. With this formulation, we could write the finite dynamics for wc (all parameters in layer c). Denote the N -by-dc+1dc matrix Rc = [LjDj ]j∈[c]Xc and R∗c = [L ∗ jD ∗ j ]j∈[c]X ∗ c . Using gradient descent rules:
∆w_j = XcᵀDjLj ( ∑_{j′} L∗_{j′} D∗_{j′} X∗c w∗_{j′} − ∑_{j′} L_{j′} D_{j′} Xc w_{j′} )   (5)
      = XcᵀDjLj (R∗c w∗c − Rc wc)   (6)
Therefore we have:
∆wc = Rcᵀ (R∗c w∗c − Rc wc)   (7)
3 SINGLE RELU CASE
Let’s start with the simplest case where there is only one ReLU node, K = 1. At iteration t, following Eqn. 3, the gradient update rule is:
∆w(t) = XᵀD(t)g(t) = XᵀD(t)(D∗Xw∗ −D(t)Xw(t)) (8)
Note here how the notation ofD(t) comes into play (andD(t)D(t) = D(t)). Indeed, when the neuron is cut off at sample l, then (D(t))ll is zero and will block the corresponding gradient component.
Linear case. In this situationD(t) = D∗ = I (no gating in either forward or backward propagation) and:
w^(t+1) = w^(t) + (η/N) XᵀX (w∗ − w^(t))   (9)
where η/N is the learning rate. When it is sufficiently small so that the spectral radius ρ(I − (η/N) XᵀX) < 1, w^(t+1) will converge to w∗ as t → +∞. Note that this convergence is guaranteed for any initial condition w^(1), if XᵀX is full rank with suitable η. This is consistent with its convex nature. If the entries of X follow an i.i.d. Gaussian distribution, then E[(1/N) XᵀX] = I and the condition is satisfied.
Nonlinear (ReLU) case. In this case, ∆w = XᵀD(D∗Xw∗ − DXw), in which D is a function of w. Intuitively, this term goes to zero when w → w∗, and could be approximated by (N/2)(w∗ − w) in the i.i.d. Gaussian case, since roughly half of the samples are blocked. However, once we make such an approximation, we lose the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.
Then how should we analyze it? Notice that in ∆w, both terms have the form F(e, w) = XᵀD(e)D(w)Xw. Using this form, E[∆w] = E[F(w/‖w‖, w∗)] − E[F(w/‖w‖, w)]. Here e is a unit vector called the “projected” weight. In the following, we will show that E[F(e, w)] has the following closed form under the i.i.d. Gaussian assumption on X:
Lemma 3.1 Denote F (e,w) = XᵀD(e)D(w)Xw where e is a unit vector, X = [x1,x2, · · · ,xN ]ᵀ is N -by-d sample matrix and D(w) = diag(Xw > 0) is a binary diagonal matrix. If xi ∼ N(0, I) and are i.i.d (and thus bias-free), then:
E[F(e, w)] = (N/2π) [(π − θ) w + ‖w‖ sin θ · e]   (10)
where θ = ∠(e,w) ∈ [0, π] is the angle between e and w.
Note that the expectation analysis smooths out the non-differentiable property of ReLU, leaving only one singularity at e = 0. The intuition is that expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E [∆w] takes the following closed form:
E[∆w] = (N/2)(w∗ − w) + (N/2π)(α sin θ · w − θ w∗)   (11)
where α = ‖w∗‖/‖w‖ and θ ∈ [0, π] is the angle between w and w∗. The first term is expected while the last two terms show the nonlinear behavior. Using Lyapunov’s method, we show that the dynamics (if treated continuously) converges to w∗ when w(1) ∈ Ω = {w : ‖w −w∗‖ < ‖w∗‖}:
Lemma 3.2 When w^(1) ∈ Ω = {w : ‖w − w∗‖ < ‖w∗‖}, following the dynamics of Eqn. 11, the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable; thus w^(t) → w∗ as t → +∞.
See Appendix for the proof. The intuition is to represent V as a 2-by-2 bilinear form of vector [‖w‖, ‖w∗‖], and the bilinear coefficient matrix is positive definite. One question arises: will the same approach show the dynamics converges when the initial conditions lie outside the region Ω, in particular for any region that includes the origin? The answer is probably no. Note that w = 0 is a singularity in which ∆w is not continuous (if approaching from different directions towards w = 0, ∆w is different). It is due to the fact that ReLU function is not differentiable at the origin. We could remove this singularity by “smoothing out” ReLU around the origin. This will yield ∆w→ 0 when w → 0. In this case, V̇ (0) = 0 so Lyapunov method could only tell that the dynamics is stable but not convergent. Note that for ReLU activation, σ′(x) = 0 for certain negative x even after a local smoothing, so the global convergence claim in [Mei et al. (2016)] for l2 loss does not apply.
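To make the expected dynamics concrete, the following sketch (our own illustrative code; the step size, dimension, and initialization scale are arbitrary choices, and we simply take N = 1) iterates w ← w + η E[∆w] using Eqn. 11 and shows the distance ‖w − w∗‖ shrinking for an initialization inside Ω.

```python
import numpy as np

# Illustrative iteration of the expected dynamics of Eqn. 11 (K = 1, N taken as 1).
rng = np.random.default_rng(1)
d = 10
w_star = rng.standard_normal(d)
# Perturb the teacher by less than ||w*|| so that w^(1) lies (roughly) inside Omega.
w = w_star + 0.9 * np.linalg.norm(w_star) * rng.standard_normal(d) / np.sqrt(d)

def expected_delta_w(w, w_star, N=1.0):
    alpha = np.linalg.norm(w_star) / np.linalg.norm(w)
    cos_t = np.clip(w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star)), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return N / 2 * (w_star - w) + N / (2 * np.pi) * (alpha * np.sin(theta) * w - theta * w_star)

eta = 0.1
for _ in range(200):
    w = w + eta * expected_delta_w(w, w_star)
print(np.linalg.norm(w - w_star))   # close to 0: the expected dynamics converges to w*
```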
Random Initialization. Next we study how to sample w^(1) so that w^(1) ∈ Ω. We would like to sample within Ω, but we do not know where w∗ is. Sampling around the origin with a big radius r ≥ 2‖w∗‖ is inefficient, in particular in high-dimensional space. This is because when the sample is uniform, the probability of hitting the ball Ω is proportional to (‖w∗‖/r)^d ≤ 2^{−d}, which is exponentially small.
A better idea is to sample around the origin with a very small radius (but not at w = 0), so that the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples are useful (Fig. 2(a)), as shown in the following theorem:
Theorem 3.3 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w^(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with r ≤ ε √(2π/(d+1)) ‖w∗‖.
The intuition here is to lower-bound the probability of the shaded area (Fig. 2(b)). From the proof, the conclusion can be made stronger, showing r ∼ 1/√d, consistent with common initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. Fig. 2(c) shows an example in the 2D case, in which there is a singularity at the origin, and sampling towards w∗ yields the convergence. This is consistent with the analysis above.
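A quick Monte Carlo check of this picture (our own illustrative sketch; the choices of d, ε and the number of trials are arbitrary) samples w^(1) uniformly from the small ball Br and estimates the fraction of samples landing in Ω = {w : ‖w − w∗‖ < ‖w∗‖}, which should be at least roughly (1 − ε)/2.

```python
import numpy as np

# Monte Carlo check of the Theorem 3.3 picture (illustrative only).
rng = np.random.default_rng(2)
d, trials, eps = 20, 200000, 0.2
w_star = rng.standard_normal(d)
r = eps * np.sqrt(2 * np.pi / (d + 1)) * np.linalg.norm(w_star)

# Uniform sampling in the ball B_r: uniform direction, radius ~ r * U^(1/d).
dirs = rng.standard_normal((trials, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radii = r * rng.uniform(size=trials) ** (1.0 / d)
w1 = dirs * radii[:, None]

in_omega = np.linalg.norm(w1 - w_star, axis=1) < np.linalg.norm(w_star)
print(in_omega.mean(), (1 - eps) / 2)   # empirical fraction vs. the (1 - eps)/2 lower bound
```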
4 MULTIPLE RELUS CASE
Now we are ready to analyze the network g(x) = ∑K j=1 σ(w ᵀ j x) for K ≥ 2 (Fig. 1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996); Soudry & Carmon (2016); Fukumizu & Amari (2000)]. In this case, Lj = L∗j = I for 1 ≤ j ≤ K. Then we have the following nonlinear dynamics from Eqn. 7:
∆w_j = ∑_{j′=1}^K f(w_j, w_{j′}, w∗_{j′})   (12)
where f = F(w_j/‖w_j‖, w∗_{j′}) − F(w_j/‖w_j‖, w_{j′}). Therefore, using Eqn. 10, its expectation is:
(2π/N) E[f(w_j, w_{j′}, w∗_{j′})] = (π − θ_j^{∗j′}) w∗_{j′} − (π − θ_j^{j′}) w_{j′} + ( (‖w∗_{j′}‖/‖w_j‖) sin θ_j^{∗j′} − (‖w_{j′}‖/‖w_j‖) sin θ_j^{j′} ) w_j   (13)
where θ_j^{∗j′} ≡ ∠(w_j, w∗_{j′}) and θ_j^{j′} ≡ ∠(w_j, w_{j′}). Eqn. 12 (and its expected version) gives very complicated nonlinear dynamics and could be hard to solve in general. Unlike K = 1, a similar approach with a Lyapunov function does not yield a decisive conclusion. However, if we consider the symmetric case: w_j = P_j w and w∗_j = P_j w∗, where P_j is a cyclic permutation matrix that maps index j′ + 1 to (j′ + j mod K) + 1 (and P_1 is the identity matrix), then the RHS of the expected version of Eqn. 12 can be simplified as follows:
E [∆wj ] = ∑ j′ E [ f(wj ,wj′ ,w ∗ j′) ] = ∑ j′ E [f(Pjw, Pj′w, Pj′w∗)]
= ∑ j′′ E [f(Pjw, PjPj′′w, PjPj′′w∗)] ({Pj}Kj=1 is a group)
= Pj ∑ j′′ E [f(w, Pj′′w, Pj′′w∗)] (‖Pw1‖ = ‖w1‖, ∠(Pw1, Pw2) = ∠(w1,w2))
= PjE [∆w1] (14)
which means that if all wj and w∗j are symmetric under the action of cyclic group, so does their expected gradient. Therefore, the trajectory {w(t)} keeps such cyclic structure. Instead of solving a system of K equations, we only need to solve one:
E [∆w] = K∑ j=1 E [f(w, Pjw, Pjw∗)] (15)
Surprisingly, there is another layer of symmetry in Eqn. 15 when {w∗_j} forms an orthonormal basis (w∗_{j′}ᵀ w∗_j = δ_{jj′}). In this case, if we start with w^(1) = x w∗ + y ∑_{j≠1} P_j w∗, then we can show that the trajectory keeps this structure and Eqn. 15 can be further reduced to the following 2D nonlinear dynamics:
(2π/N) E[ (∆x, ∆y)ᵀ ] = −{ (π − φ)(x − 1 + (K − 1)y) [1, 1]ᵀ + [θ, φ∗ − φ]ᵀ + φ [x − 1, y]ᵀ } + [(K − 1)(α sin φ∗ − sin φ) + α sin θ] [x, y]ᵀ   (16)
Here the symmetric factors (α ≡ ‖w∗_{j′}‖/‖w_j‖, θ ≡ θ_j^{∗j}, φ ≡ θ_j^{j′}, φ∗ ≡ θ_j^{∗j′}) are defined as follows:
α = (x² + (K − 1)y²)^{−1/2},  cos θ = αx,  cos φ∗ = αy,  cos φ = α²(2xy + (K − 2)y²)   (17)
For this 2D dynamics, we thus have the following theorem:
Theorem 4.1 For any K ≥ 2, the 2D dynamics (Eqn. 16) shows the following behaviors:
(1) Symmetric case. If the initial condition x^(1) = y^(1) ∈ (0, 1], then the dynamics reduces to 1D and converges to the saddle point x = y = (1/(πK)) (√(K − 1) − arccos(1/√K) + π).
(2) Symmetry-Breaking. If (x(1), y(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, then dynamics always converges to (x, y) = (1, 0).
From (x^(t), y^(t)) we can recover w^(t)_j = x^(t) w∗_j + y^(t) ∑_{j′≠j} w∗_{j′}. Obviously, convergence of Eqn. 16 to (1, 0) means that Eqn. 12 converges to {w∗_j}, i.e., the teacher parameters are recovered:
Corollary 4.2 For a bias-free two-layered ReLU network g(x; w) = ∑_j σ(w_jᵀ x) that takes Gaussian i.i.d. inputs (Fig. 1), if the teacher’s parameters {w∗_j} form orthonormal bases, and the student parameters are initialized in the form w^(1)_j = x^(1) w∗_j + y^(1) ∑_{j′≠j} w∗_{j′} with (x^(1), y^(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, then the dynamics (Eqn. 12) converges to {w∗_j} without being trapped in local minima.
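The reduced 2D dynamics is easy to integrate numerically. The sketch below (illustrative code written by us; the step size, K, and initial points are arbitrary choices) implements Eqn. 16 with the definitions of Eqn. 17 and contrasts a symmetric initialization x^(1) = y^(1), which stalls at the saddle of Theorem 4.1(1), with a slightly symmetry-broken one, which converges to (x, y) = (1, 0).

```python
import numpy as np

# Illustrative forward-Euler integration of the reduced 2D dynamics (Eqn. 16 / Eqn. 17).
def step(x, y, K, lr=0.01):
    alpha = 1.0 / np.sqrt(x**2 + (K - 1) * y**2)
    theta = np.arccos(np.clip(alpha * x, -1, 1))
    phi_s = np.arccos(np.clip(alpha * y, -1, 1))
    phi = np.arccos(np.clip(alpha**2 * (2 * x * y + (K - 2) * y**2), -1, 1))
    common = (np.pi - phi) * (x - 1 + (K - 1) * y)
    scale = (K - 1) * (alpha * np.sin(phi_s) - np.sin(phi)) + alpha * np.sin(theta)
    dx = -(common + (theta - phi) + phi * x) + scale * x     # x-component of Eqn. 16
    dy = -(common + (phi_s - phi) + phi * y) + scale * y     # y-component of Eqn. 16
    return x + lr * dx, y + lr * dy

K = 5
for x0, y0 in [(0.01, 0.01), (0.011, 0.01)]:     # symmetric vs. slightly broken init
    x, y = x0, y0
    for _ in range(50000):
        x, y = step(x, y, K)
    print((x0, y0), '->', (round(x, 3), round(y, 3)))
# The symmetric run stays on x = y and settles at the saddle value of Thm. 4.1(1);
# the symmetry-broken run converges to (1, 0), i.e., the teacher is recovered.
```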
When symmetry is broken, since the closure of Ω includes the origin, there exists a path starting at arbitrarily small neighborhood of origin to w∗, regardless of how large ‖w∗‖ is. In contrast to traditional convex analysis that only gives the local parameter-dependent convergence basin around w∗j , here we obtain a convergence basin that is parameter-independent. In comparison, [Saad & Solla (1996)] uses a different activation function (σ(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher’s weights w∗), leaving symmetry breaking an empirical procedure. Here we show that it is possible to give global convergence analysis on certain symmetry breaking cases for two-layered ReLU network.
By symmetry, Corollary 4.2 immediately suggests that when w^(1) = y^(1) ∑_{j=1}^K w∗_j + (x^(1) − y^(1)) w∗_{j′}, the dynamics will converge to P_{j′} w∗. Since x > y but they can be arbitrarily close, the slightest perturbation of the symmetric solution x = y leads to a different fixed point, which is a permutation of w∗. This is very similar to the Spontaneous Symmetry-Breaking (SSB) procedure in physics, in which a high-energy state with full symmetry goes to a low-energy state and only retains part of the symmetry. In this case, the energy is the objective function E, the high-energy state is the initialization that is almost symmetric but with a small fluctuation, and the low-energy state is the fixed point the dynamics converges to.
From the simulation shown in Fig. 4, we could see that gradient descent takes a detour to reach the desired solution w∗, even when the initialization is aligned with w∗. This is because in the first stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x and y increases); when the “obvious” component has been explained away, then the residue changes its direction and pushes some ReLU nodes to explain other components as well (x increases but y decreases).
Empirically this path also converges to w∗ under noise. We leave it as a conjecture that the system converges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w∗. The reason is that a random initialization almost never gives ties. Without a tie, there exists one leading component which will dominate the convergence.
Conjecture 4.3 When the initialization is w^(1) = x^(1) w∗_j + y^(1) ∑_{j′≠j} w∗_{j′} + ε, where ε is Gaussian noise and (x^(1), y^(1)) ∈ Ω, the dynamics in Eqn. 12 also converges to w∗ without being trapped in local minima.
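The conjecture is easy to probe empirically. The sketch below (ours; the sizes, learning rate and noise level are arbitrary assumptions, and we take N = 1) iterates the full expected dynamics of Eqn. 13 for K nodes with an orthonormal teacher, starting from a noisy near-symmetric initialization, and reports the distance to the teacher weights.

```python
import numpy as np

# Empirical probe of Conjecture 4.3: expected dynamics (Eqn. 12/13) from a noisy init.
rng = np.random.default_rng(5)
K = 4
W_star = np.eye(K)                       # orthonormal teacher, d = K for simplicity

def angle(a, b):
    c = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def expected_update(W, W_star):
    dW = np.zeros_like(W)
    for j in range(K):
        for jp in range(K):
            t_s, t = angle(W[j], W_star[jp]), angle(W[j], W[jp])
            coef = (np.linalg.norm(W_star[jp]) * np.sin(t_s)
                    - np.linalg.norm(W[jp]) * np.sin(t)) / np.linalg.norm(W[j])
            dW[j] += ((np.pi - t_s) * W_star[jp] - (np.pi - t) * W[jp] + coef * W[j]) / (2 * np.pi)
    return dW

x0, y0, noise = 0.02, 0.01, 1e-3
W = x0 * W_star + y0 * (W_star.sum(0) - W_star) + noise * rng.standard_normal((K, K))
for _ in range(5000):
    W = W + 0.1 * expected_update(W, W_star)
print(np.abs(W - W_star).max())          # small: this noisy near-symmetric init still recovers the teacher
```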
5 SIMULATION
5.1 CLOSE FORM SOLUTION FOR ONE RELU NODE
We verify our closed-form expression for E[F(e, w)] = E[XᵀD(e)D(w)Xw] (Eqn. 10) with simulation. We randomly pick e and w so that their angle ∠(e, w) is uniformly distributed in [0, π]. We prepare the input data X with a standard Gaussian distribution and compare the closed-form solution E[F(e, w)] with F(e, w), the actual data term in gradient descent without expectation. We use the relative RMS error: err = ‖E[F(e, w)] − F(e, w)‖/‖F(e, w)‖. As shown in Fig. 3(a), the error distribution over angles shows the properties of the closed-form solution. For small θ, D(w) and D(e) overlap sufficiently, giving a reliable estimate of the gradient. When θ → π, D(w) and D(e) tend not to overlap, leaving very few data involved in the gradient computation. As a result, the variance grows. Note that all our analysis operates on θ ∈ [0, π/2] and is not affected by this behavior. In the following, angles are sampled from [0, π/2].
[Figure 5 caption: Top row: convergence when very large noise is present; both teacher and student networks use g(x) = ∑_{j=1}^K σ(w_jᵀ x), with 8 runs per experiment. Bottom row: convergence when we use g2(x) = ∑_{j=1}^K a_j σ(w_jᵀ x), with the top weights a_j fixed at different values (rather than 1); large positive a_j corresponds to fast convergence, and when a_j has positive/negative components the network does not converge to w∗.]
Fig. 3(a) shows that the closed-form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X, e.g., the uniform distribution on [−1/2, 1/2]. As shown in Fig. 3(d), the closed-form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to future work to prove its usability for broader distributions.
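The same check is easy to reproduce. The snippet below (our own illustrative code; sample size and dimension are arbitrary) draws Gaussian samples, evaluates F(e, w) = XᵀD(e)D(w)Xw directly, and compares it with the closed-form prediction of Eqn. 10.

```python
import numpy as np

# Illustrative Monte Carlo check of the closed form in Eqn. 10 / Eqn. 20.
rng = np.random.default_rng(3)
N, d = 200000, 8
X = rng.standard_normal((N, d))

w = rng.standard_normal(d)
e = rng.standard_normal(d); e /= np.linalg.norm(e)

mask = ((X @ e > 0) & (X @ w > 0)).astype(float)
F_emp = X.T @ (mask * (X @ w))                      # X^T D(e) D(w) X w

theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1, 1))
F_closed = N / (2 * np.pi) * ((np.pi - theta) * w + np.linalg.norm(w) * np.sin(theta) * e)

print(np.linalg.norm(F_emp - F_closed) / np.linalg.norm(F_closed))   # small relative error
```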
5.2 CONVERGENCE FOR MULTIPLE RELU NODES
Fig. 4(a) and (b) shows the 2D vector field given by the 2D dynamics (Eqn. 16) and Fig. 4(c) shows the 2D trajectory towards convergence to the teacher’s parameters w∗. Interestingly, even when we initialize the weights as (10−3, 0), aligning with w∗, the gradient descent takes detours to reach the destination. One explanation is, at the beginning all nodes move similar direction trying to explain the data, once the data have been explained partly, specialization follows (y decreases).
Fig. 5 shows empirical convergence for K ≥ 2 when the initialization deviates from the symmetric initialization in Thm. 4.1. Unless the deviation is large, gradient descent converges to w∗. We also check the convergence of a more general network g2(x) = ∑_{j=1}^K a_j σ(w_jᵀ x). When a_j > 0, convergence follows; however, when some a_j is negative, the network does not converge to w∗, even though the student network already knows the ground-truth values of {a_j}_{j=1}^K.
6 CONCLUSION AND FUTURE WORK
In this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLU networks of the form g(x; w) = ∑_{j=1}^K σ(w_jᵀ x), where σ(x) = max(x, 0) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution and the output is generated by a teacher network with parameters w∗. For K = 1, we show that a closed-form nonlinear dynamics can be obtained and its convergence to w∗ can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al. (2015)] and is independent of the value of w∗. For K ≥ 2, when the teacher parameters {w∗_j} form an orthonormal basis, we prove that the trajectory from a symmetric initialization is trapped at a saddle point, while a certain symmetry-breaking initialization converges to w∗ without being trapped in any local minima. Future work includes analysis of the general case (or the symmetric case plus noise) for K ≥ 2, and a generalization to multilayer ReLU (or other nonlinear) networks.
7 APPENDIX
Here we list all detailed proof for all the theorems.
7.1 PROPERTIES OF RELU NETWORKS
Lemma 7.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:
gj = Lj ∑ j′ (L∗j′uj′ − Lj′vj′) (18)
where Lj and L∗j are N -by-N diagonal matrices. For any k ∈ [c+ 1], Lk = ∑ j∈[c] wjkDjLj and similarly for L∗k.
Proof We prove by induction on layer. For the first layer, there is only one node with g = u − v, therefore Lj = Lj′ = I . Suppose the condition holds for all node j ∈ [c]. Then for node k ∈ [c+1], we have:
gk = ∑ j wjkDjgj = ∑ j wjkDjLj ∑ j′ L∗j′uj′ − ∑ j′ Lj′vj′ =
∑ j wjkDjLj ∑ j′ L∗j′ ∑ k′ D∗j′w ∗ jk′uk′ − ∑ j′ Lj′ ∑ k′ Dj′wjk′vk′ =
∑ j wjkDjLj ∑ j′ L∗j′D ∗ j′ ∑ k′ w∗jk′uk′ − ∑ j wjkDjLj ∑ j′ Lj′Dj′ ∑ k′ wjk′vk′
= ∑ k′ ∑ j wjkDjLj ∑ j′ L∗j′D ∗ j′w ∗ jk′ uk′ −∑ k′ ∑ j wjkDjLj ∑ j′ Lj′Dj′wjk′ vk′ Setting Lk = ∑ j wjkDjLj and L ∗ k = ∑ j w ∗ jkD ∗ jL ∗ j (both are diagonal matrices), we thus have:
gk = ∑ k′ LkL ∗ k′uk′ − LkLk′vk′ = Lk ∑ k′ L∗k′uk′ − Lk′vk′ (19)
7.2 ONE RELU CASE
Lemma 7.2 Suppose F (e,w) = XᵀD(e)D(w)Xw where e is a unit vector and X = [x1,x2, · · · ,xN ]ᵀ is N -by-d sample matrix. If xi ∼ N(0, I) and are i.i.d, then:
E[F(e, w)] = (N/2π) ((π − θ) w + ‖w‖ sin θ · e)   (20)
where θ ∈ [0, π] is the angle between e and w.
Proof Note that F can be written in the following form:
F(e, w) = ∑_{i: x_iᵀe ≥ 0, x_iᵀw ≥ 0} x_i x_iᵀ w   (21)
where xi are samples so that X = [x1,x2, · · · ,xn]ᵀ. We set up the axes related to e and w as in Fig. 6, while the rest of the axis are prependicular to the plane. In this coordinate system, any vector x = [r sinφ, r cosφ, x3, . . . , xd]. We have an orthonomal set of bases: e, e⊥ = −e−w/‖w‖ cos θsin θ (and any set of bases that span the rest of the space). Under the basis, the representation for e and w is [1,0d−1] and [‖w‖ cos θ,−‖w‖ sin θ,0d−2]. Note that here θ ∈ (−π, π]. The angle θ is positive when e “chases after” w, and is otherwise negative.
Now we consider the quantity R(φ0) = E[(1/N) ∑_{i: φ_i ∈ [0, φ0]} x_i x_iᵀ]. If we take the expectation and use polar coordinates only in the first two dimensions, we have:
R(φ0) = E[x_i x_iᵀ | φ_i ∈ [0, φ0]] P[φ_i ∈ [0, φ0]]
      = ∫_0^{+∞} ∫∫_{−∞}^{+∞} ∫_0^{φ0} [r sin φ, r cos φ, x_3, …, x_d]ᵀ [r sin φ, r cos φ, x_3, …, x_d] p(r) p(φ) ∏_{k=3}^d p(x_k) · r dr dφ dx_3 … dx_d
where p(r) = e^{−r²/2} and p(φ) = 1/2π. Note that R(φ0) is a d-by-d matrix. The first 2-by-2 block can be computed in closed form (note that ∫_0^{+∞} r² p(r) r dr = 2). Any off-diagonal element outside the first 2-by-2 block is zero due to the symmetry of the i.i.d. Gaussian variables. Any diagonal element outside the first 2-by-2 block equals P[φ_i ∈ [0, φ0]] = φ0/2π. Finally, we have:
R(φ0) = (1/4π) [ 2φ0 − sin 2φ0,  1 − cos 2φ0,  0 ;  1 − cos 2φ0,  2φ0 + sin 2φ0,  0 ;  0,  0,  2φ0 I_{d−2} ]   (22)
       = (φ0/2π) I_d + (1/4π) [ −sin 2φ0,  1 − cos 2φ0,  0 ;  1 − cos 2φ0,  sin 2φ0,  0 ;  0,  0,  0 ]   (23)
With this equation, we can then compute E[F(e, w)]. When θ ≥ 0, the condition {i : x_iᵀe ≥ 0, x_iᵀw ≥ 0} is equivalent to {i : φ_i ∈ [θ, π]} (Fig. 6(a)). Using w = [‖w‖ cos θ, −‖w‖ sin θ, 0_{d−2}] we have:
E[F(e, w)] = N (R(π) − R(θ)) w   (24)
           = (N/4π) ( 2(π − θ) w − ‖w‖ [ −sin 2θ, 1 − cos 2θ, 0 ; 1 − cos 2θ, sin 2θ, 0 ; 0, 0, 0 ] [cos θ, −sin θ, 0]ᵀ )   (25)
           = (N/2π) ( (π − θ) w + ‖w‖ [sin θ, 0, …, 0]ᵀ )   (26)
           = (N/2π) ( (π − θ) w + ‖w‖ sin θ · e )   (27)
For θ < 0, the condition {i : x_iᵀe ≥ 0, x_iᵀw ≥ 0} is equivalent to {i : φ_i ∈ [0, π + θ]} (Fig. 6(b)), and similarly we get
E[F(e, w)] = N (R(π + θ) − R(0)) w = (N/2π) ( (π + θ) w − ‖w‖ sin θ · e )   (28)
Notice that, by abuse of notation, the θ appearing in Eqn. 20 is the absolute value, and Eqn. 20 follows.
Lemma 7.3 In the region ‖w^(1) − w∗‖ < ‖w∗‖, following the dynamics (Eqn. 11), the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable; thus w^(t) → w∗ as t → +∞.
Proof Denote Ω = {w : ‖w − w∗‖ < ‖w∗‖}. Note that
V̇ = (w −w∗)ᵀ∆w = −yᵀMy (29) where y = [‖w∗‖, ‖w‖]ᵀ and M is the following 2-by-2 matrix:
M = (1/2) [ sin 2θ + 2π − 2θ,  −(2π − θ) cos θ − sin θ ;  −(2π − θ) cos θ − sin θ,  2π ]   (30)
In the following we will show that M is positive definite when θ ∈ (0, π/2]. It suffices to show that M11 > 0, M22 > 0 and det(M) > 0. The first two are trivial, while the last one is:
4det(M) = 2π(sin 2θ + 2π − 2θ)− [(2π − θ) cos θ + sin θ]2 (31) = 2π(sin 2θ + 2π − 2θ)− [ (2π − θ)2 cos2 θ + (2π − θ) sin 2θ + sin2 θ ] (32)
= (4π2 − 1) sin2 θ − 4πθ + 4πθ cos2 θ − θ2 cos2 θ + θ sin 2θ (33) = (4π2 − 4πθ − 1) sin2 θ + θ cos θ(2 sin θ − θ cos θ) (34)
Note that 4π2 − 4πθ − 1 = 4π(π − θ) − 1 > 0 for θ ∈ [0, π/2], and g(θ) = sin θ − θ cos θ ≥ 0 for θ ∈ [0, π/2] since g(0) = 0 and g′(θ) ≥ 0 in this region. Therefore, when θ ∈ (0, π/2], M is positive definite.
When θ = 0, M(θ) = π [1, −1; −1, 1] is positive semi-definite, with null eigenvector (√2/2)[1, 1], i.e., ‖w‖ = ‖w∗‖. However, along θ = 0, the only w that satisfies ‖w‖ = ‖w∗‖ is w = w∗. Therefore, V̇ = −yᵀMy < 0 in Ω. Note that although this region could be expanded to the entire open half-space H = {w : wᵀw∗ > 0}, it is not straightforward to prove convergence in H, since the trajectory might go outside H. On the other hand, Ω is the level set V < (1/2)‖w∗‖², so a trajectory starting within Ω remains inside.
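As a quick numerical sanity check of this argument (our own sketch, not part of the proof), one can tabulate M11 and det M(θ) of Eqn. 30 on a grid of angles:

```python
import numpy as np

# Numerically check that M(theta) in Eqn. 30 is positive definite on (0, pi/2].
thetas = np.linspace(1e-4, np.pi / 2, 1000)
m11, dets = [], []
for t in thetas:
    off = -((2 * np.pi - t) * np.cos(t) + np.sin(t))
    M = 0.5 * np.array([[np.sin(2 * t) + 2 * np.pi - 2 * t, off],
                        [off, 2 * np.pi]])
    m11.append(M[0, 0])
    dets.append(np.linalg.det(M))
print(min(m11) > 0, min(dets) > 0)   # both True: M11 > 0 and det(M) > 0 on the grid
```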
Theorem 7.4 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w^(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with:
r ≤ ε √(2π/(d + 1)) ‖w∗‖   (35)
Proof Given a ball of radius r, we first compute the “gap” δ of the spherical cap (Fig. 2(b)). First, cos θ = r/(2‖w∗‖), so δ = r cos θ = r²/(2‖w∗‖). Then a sufficient condition for the probability argument to hold is that the volume V_shaded of the shaded area is greater than (1 − ε)/2 · V_d(r), where V_d(r) is the volume of the d-dimensional ball of radius r. Since V_shaded ≥ (1/2)V_d(r) − δ V_{d−1}(r), it suffices to have:
(1/2) V_d(r) − δ V_{d−1}(r) ≥ (1 − ε)/2 · V_d(r)   (36)
which gives
δ ≤ (ε/2) V_d(r)/V_{d−1}(r)   (37)
Using δ = r²/(2‖w∗‖) and V_d(r) = V_d(1) r^d, we thus have:
r ≤ ε (V_d(1)/V_{d−1}(1)) ‖w∗‖   (38)
where V_d(1) is the volume of the unit ball. Since the volume of the d-dimensional unit ball is
V_d(1) = π^{d/2} / Γ(d/2 + 1)   (39)
where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt, we have
V_d(1)/V_{d−1}(1) = √π · Γ(d/2 + 1/2)/Γ(d/2 + 1)   (40)
From Gautschi’s inequality,
x^{1−s} < Γ(x + 1)/Γ(x + s) < (x + s)^{1−s},   x > 0, 0 < s < 1   (41)
with s = 1/2 and x = d/2 we have:
((d + 1)/2)^{−1/2} < Γ(d/2 + 1/2)/Γ(d/2 + 1) < (d/2)^{−1/2}   (42)
Therefore, it suffices to have
r ≤ ε √(2π/(d + 1)) ‖w∗‖   (43)
Note that this upper bound is tight when δ → 0 and d → +∞, since all the inequalities involved asymptotically become equalities.
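A short numerical check of this bound (ours, purely illustrative; it assumes SciPy is available for the log-gamma function) compares the exact ratio V_d(1)/V_{d−1}(1) with the lower bound √(2π/(d + 1)) used in Eqn. 43:

```python
import numpy as np
from scipy.special import gammaln

# Compare V_d(1)/V_{d-1}(1) = sqrt(pi) * Gamma(d/2 + 1/2) / Gamma(d/2 + 1)
# with the lower bound sqrt(2*pi/(d+1)) from Gautschi's inequality.
for d in [2, 10, 100, 1000]:
    ratio = np.sqrt(np.pi) * np.exp(gammaln(d / 2 + 0.5) - gammaln(d / 2 + 1))
    bound = np.sqrt(2 * np.pi / (d + 1))
    print(d, ratio, bound, ratio > bound)   # the bound always holds and tightens as d grows
```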
7.3 TWO LAYER CASE
Lemma 7.5 For φ∗, θ and φ defined in Eqn. 17:
α ≡ (x² + (K − 1)y²)^{−1/2}   (44)
cos θ ≡ αx   (45)
cos φ∗ ≡ αy   (46)
cos φ ≡ α²(2xy + (K − 2)y²)   (47)
we have the following relations in the triangular region Ω_{ε0} = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)):
(1) φ, φ∗ ∈ [0, π/2] and θ ∈ [0, θ0), where θ0 = arccos(1/√K).
(2) cosφ = 1− α2(x− y)2 and sinφ = α(x− y) √ 2− α2(x− y)2.
(3) φ∗ ≥ φ (equality holds only when y = 0) and φ∗ > θ.
Proof Propositions (1) and (2) are computed by direct calculations. In particular, note that since cos θ = αx = 1/ √ 1 + (K − 1)(y/x)2 and x > y ≥ 0, we have cos θ ∈ (1/ √ K, 1] and θ ∈ [0, θ0). For Preposition (3), φ∗ = arccosαy > θ = arccosαx because x > y. Finally, for x > y > 0, we have
cosφ cosφ∗ = α2(2xy + (K − 2)y2) αy = α(2x+ (K − 2)y) > α(x+ (K − 1)y) > 1 (48)
The final inequality is because K ≥ 2, x, y > 0 and thus (x + (K − 1)y)2 > x2 + (K − 1)2y2 > x2 + (K − 1)y2 = α−2. Therefore φ∗ > φ. If y = 0 then φ∗ = φ.
Theorem 7.6 For the dynamics defined in Eqn. 16, there exists ε0 > 0 so that the triangular region Ω_{ε0} = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)) is a convergent region. That is, the flow goes inwards on all three edges, and any trajectory starting in Ω_{ε0} stays inside.
Proof We discuss the three boundaries as follows:
Case 1: y = 0, 0 ≤ x ≤ 1, horizontal line. In this case, θ = 0, φ = π/2 and φ∗ = π/2. The y component of the dynamics in this line is:
f1 ≡ (2π/N) ∆y = −(π/2)(x − 1) ≥ 0   (49)
So ∆y points to the interior of Ω.
Case 2: x = 1, 0 ≤ y ≤ 1, vertical line. In this case, α ≤ 1 and the x component of the dynamics is:
f2 ≡ (2π/N) ∆x = −(π − φ)(K − 1)y − θ + (K − 1)(α sin φ∗ − sin φ) + α sin θ   (50)
   = −(K − 1) [(π − φ)y − (α sin φ∗ − sin φ)] + (α sin θ − θ)   (51)
Note that since α ≤ 1, α sin θ ≤ sin θ ≤ θ, so the second term is non-positive. For the first term, we only need to check whether (π − φ)y − (α sin φ∗ − sin φ) is nonnegative. Note that
(π − φ)y − (α sin φ∗ − sin φ)   (52)
   = (π − φ)y + α(x − y)√(2 − α²(x − y)²) − α√(1 − α²y²)   (53)
   = y [π − φ − α√(2 − α²(x − y)²)] + α [x√(2 − α²(x − y)²) − √(1 − α²y²)]   (54)
In Ω we have (x − y)² ≤ 1; combined with α ≤ 1, we have 1 ≤ √(2 − α²(x − y)²) ≤ √2 and √(1 − α²y²) ≤ 1. Since x = 1, the second term is nonnegative. For the first term, since α ≤ 1,
π − φ − α√(2 − α²(x − y)²) ≥ π − π/2 − √2 > 0   (55)
So (π − φ)y − (α sinφ∗ − sinφ) ≥ 0 and ∆x ≤ 0, pointing inwards.
Case 3: x = y + ε, 0 ≤ y ≤ 1, diagonal line. We compute the inner product between (∆x, ∆y) and (1, −1), the inward normal of Ω at this line. Using φ ≤ (π/2) sin φ for φ ∈ [0, π/2] and φ∗ − θ = arccos αy − arccos αx ≥ 0 when x ≥ y, we have:
f3(y, ε) ≡ (2π/N) [∆x, ∆y] [1, −1]ᵀ = φ∗ − θ − εφ + ε [(K − 1)(α sin φ∗ − sin φ) + α sin θ]   (56)
   ≥ ε(K − 1) [ α sin φ∗ − (1 + π/(2(K − 1))) sin φ ]
   = εα(K − 1) [ √(1 − α²y²) − (1 + π/(2(K − 1))) ε √(2 − α²ε²) ]
Note that for y > 0:
αy = 1/√((x/y)² + (K − 1)) = 1/√((1 + ε/y)² + (K − 1)) ≤ 1/√K   (57)
For y = 0, αy = 0 < √(1/K). So we have √(1 − α²y²) ≥ √(1 − 1/K), and √(2 − α²ε²) ≤ √2. Therefore f3 ≥ εα(K − 1)(C1 − εC2) with C1 ≡ √(1 − 1/K) > 0 and C2 ≡ √2 (1 + π/(2(K − 1))) > 0. With ε = ε0 > 0 sufficiently small, f3 > 0.
Lemma 7.7 (Reparametrization) Denote ε = x − y > 0. The terms αx, αy and αε involved in the trigonometric functions in Eqn. 16 have the following parametrization:
α [y, x, ε]ᵀ = (1/K) [β − β2, β + (K − 1)β2, Kβ2]ᵀ   (58)
where β2 = √((K − β²)/(K − 1)). The reverse transformation is given by β = √(K − (K − 1)α²ε²). Here β ∈ [1, √K) and β2 ∈ (0, 1]. In particular, the critical point (x, y) = (1, 0) corresponds to (β, ε) = (1, 1). As a result, all trigonometric functions in Eqn. 16 depend only on the single variable β. In particular, the following relationship is useful:
β = cos θ + √(K − 1) sin θ   (59)
Proof This transformation can be checked by simple algebraic manipulation. For example:
(1/(αK)) (β − β2) = (1/K) ( √(K/α² − (K − 1)ε²) − ε ) = (1/K) ( √((Ky + ε)²) − ε ) = y   (60)
To prove Eqn. 59, first notice that K cos θ = Kαx = β + (K − 1)β2. Therefore, we have (K cos θ − β)² − (K − 1)²β2² = 0, which gives β² − 2β cos θ + 1 − K sin²θ = 0. Solving this quadratic equation and noticing that β ≥ 1 and θ ∈ [0, π/2], we get:
β = cos θ + √(cos²θ + K sin²θ − 1) = cos θ + √(K − 1) sin θ   (61)
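A small numerical check of this reparametrization (ours, illustrative; the choice of K and the sampled points are arbitrary) verifies Eqn. 58 and the identity β = cos θ + √(K − 1) sin θ for random points with x > y > 0:

```python
import numpy as np

# Numerically verify the reparametrization (Eqn. 58) and Eqn. 59.
rng = np.random.default_rng(4)
K = 7
for _ in range(5):
    y = rng.uniform(0.05, 0.5)
    x = y + rng.uniform(0.05, 0.5)            # enforce eps = x - y > 0
    eps = x - y
    alpha = 1.0 / np.sqrt(x**2 + (K - 1) * y**2)
    beta = np.sqrt(K - (K - 1) * (alpha * eps) ** 2)
    beta2 = np.sqrt((K - beta**2) / (K - 1))
    theta = np.arccos(alpha * x)

    ok_y = np.isclose(alpha * y, (beta - beta2) / K)
    ok_x = np.isclose(alpha * x, (beta + (K - 1) * beta2) / K)
    ok_eps = np.isclose(alpha * eps, beta2)
    ok_59 = np.isclose(beta, np.cos(theta) + np.sqrt(K - 1) * np.sin(theta))
    print(ok_y, ok_x, ok_eps, ok_59)          # all True
```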
Lemma 7.8 After reparametrization (Eqn. 58), f3(β, ε) ≥ 0 for ε ∈ (0, β2/β]. Furthermore, the equality holds only if (β, ε) = (1, 1), i.e., (y, ε) = (0, 1).
Proof Applying the parametrization (Eqn. 58) to Eqn. 56 and notice that α = β2 = β2(β), we could write f3 = h1(β)− (φ+ (K − 1) sinφ) (62) When β is fixed, f3 now is a monotonously decreasing function with respect to > 0. Therefore, f3(β, ) ≥ f3(β, ′) for 0 < ≤ ′ ≡ β2/β. If we could prove f3(β, ′) ≥ 0 and only attain zero at known critical point (β, ) = (1, 1), the proof is complete.
Denote f3(β, ′) = f31 + f32 where
f31(β, ′) = φ∗ − θ − ′φ+ ′α sin θ (63) f32(β, ′) = (K − 1)(α sinφ∗ − sinφ) ′ (64)
For f32 it suffices to prove that ′(α sinφ∗ − sinφ) = β2 sinφ∗ − β2β sinφ ≥ 0, which is equivalent to sinφ∗ − sinφ/β ≥ 0. But this is trivially true since φ∗ ≥ φ and β ≥ 1. Therefore, f32 ≥ 0. Note that the equality only holds when φ∗ = φ and β = 1, which corresponds to the horizontal line x ∈ (0, 1], y = 0. For f31, since φ∗ ≥ φ, φ∗ > θ and ′ ∈ (0, 1], we have the following:
f31 = ′(φ∗ − φ) + (1− ′)(φ∗ − θ)− ′θ + β2 sin θ ≥ − ′θ + β2 sin θ ≥ β2 ( sin θ − θ
β
) (65)
And it reduces to showing whether β sin θ − θ is nonnegative. Using Eqn. 59, we have:
f33(θ) = β sin θ − θ = 1
2 sin 2θ +
√ K − 1 sin2 θ − θ (66)
Note that f33′ = cos 2θ + √(K − 1) sin 2θ − 1 = √K cos(2θ − θ0) − 1, where θ0 = arccos(1/√K). By Proposition (1) of Lemma 7.5, θ ∈ [0, θ0). Therefore, f33′ ≥ 0 and, since f33(0) = 0, f33 ≥ 0. Again, the equality holds only when θ = 0, φ∗ = φ and ε′ = 1, which is the critical point (β, ε) = (1, 1), i.e., (y, ε) = (0, 1).
Theorem 7.9 For the dynamics defined in Eqn. 16, the only critical point (∆x = 0 and ∆y = 0) within Ω is (y, ε) = (0, 1).
Proof We prove by contradiction. Suppose (β, ε) is a critical point other than w∗. A necessary condition for this to hold is f3 = 0 (Eqn. 56). By Lemma 7.8, ε > ε′ = β2/β > 0 and
ε − 1 + Ky = (1/α)(β2 − α + β − β2) = (β − α)/α = (β − β2/ε)/α > (β − β2/ε′)/α = 0   (67)
So ε − 1 + Ky is strictly greater than zero. On the other hand, the condition f3 = 0 implies that
(K − 1)(α sin φ∗ − sin φ) + α sin θ = −ε^{−1}(φ∗ − θ) + φ   (68)
Using φ ∈ [0, π/2], φ∗ ≥ φ and φ∗ > θ, we have:
(2π/N) ∆y = −(π − φ)(ε − 1 + Ky) − (φ∗ − φ) − φy + [(K − 1)(α sin φ∗ − sin φ) + α sin θ] y
          = −(π − φ)(ε − 1 + Ky) − (φ∗ − φ) − ε^{−1}(φ∗ − θ) y < 0   (69)
So the current point (β, ε) cannot be a critical point.
Theorem 7.10 Any trajectory in Ω_{ε0} converges to (x, y) = (1, 0), i.e., (y, ε) = (0, 1), following the dynamics defined in Eqn. 16.
Proof We have the Lyapunov function V = E[E], so that V̇ = −E[∆wᵀ∆w] ≤ −E[∆w]ᵀ E[∆w] ≤ 0. By Thm. 7.9, other than the optimal solution w∗, there is no other symmetric critical point, so ∆w ≠ 0 and thus V̇ < 0. On the other hand, by Thm. 7.6, the triangular region Ω_{ε0} is convergent, and within it the 2D dynamics is C∞ differentiable. Therefore, any 2D solution curve ξ(t) stays within Ω_{ε0}. By the Poincaré–Bendixson theorem, when there is a unique critical point, the curve either converges to a limit cycle or to the critical point. However, a limit cycle is not possible since V is strictly decreasing along the curve. Therefore, ξ(t) converges to the unique critical point, which is (y, ε) = (0, 1), i.e., (x, y) = (1, 0), and so does the symmetric system (Eqn. 12).
Theorem 7.11 When x = y ∈ (0, 1], the 2D dynamics (Eqn. 16) reduces to the following 1D case:
(2π/N) ∆x = −πK (x − x∗)   (70)
where x∗ = (1/(πK)) (√(K − 1) − arccos(1/√K) + π). Furthermore, x∗ is a convergent critical point.
Proof The 1D system can be computed with simple algebraic manipulations (note that when x = y, φ = 0 and θ = φ∗ = arccos(1/√K)). Note that the 1D system is linear, its closed-form solution is x(t) = x∗ + C e^{−KNt/2}, and it is thus convergent. | 1. What is the focus of the paper regarding the continuous-time dynamics of gradient descent?
2. What are the strengths of the proposed analysis, particularly in its assumptions and settings?
3. Are there any inconsistencies or typos in the paper that need clarification? | Review | Review
This work analyzes the continuous-time dynamics of gradient descent when training two-layer ReLU networks (one input, one output, thus only one layer of ReLU units). The work is interesting in the sense that it does not involve some unrealistic assumptions used by previous works with similar goal. Most importantly, this work does not assume independence between input and activations, and it does not rely on noise injection (which can simplify the analysis). Nonetheless, removing these simplifying assumptions comes at the expense of limiting the analysis to:
1. Only one layer of nonlinear units
2. Discarding the bias term in ReLU while keeping the input Gaussian (thus constant input trick cannot be used to simulate the bias term).
3. Imposing strong assumption on the representation on the input/output via (bias-less) ReLU networks: existence of orthonormal bases to represent this relationships.
Having that said, as far as I can tell, the paper presents original analysis in this new setting, which is interesting and valuable. For example, by exploiting the symmetry in the problem under the assumption 3 I listed above, the authors are able to reduce the high-dimensional dynamics of the gradient descent to a bivariate dynamics (instead of dealing with original size of the parameters). Such reduction to 2D allows the author to rigorously analyze the behavior of the dynamics (e.g. convergence to a saddle point in symmetric case, or to the optimum in non-symmetric case).
Clarification Needed: first paragraph of page 2. Near the end of the paragraph you say "Initialization can be arbitrarily close to origin", but at the beginning of the same paragraph you state "initialized randomly with standard deviation of order 1/sqrt(d)". Aren't these inconsistent?
Some minor comments about the draft:
1. In section 1, 2nd paragraph: "We assume x is Gaussian and thus the network is bias free". Do you mean "zero-mean" Gaussian then?
2. "standard deviation" is spelled "standard derivation" multiple times in the paper.
3. Page 6, last paragraph, first line: Corollary 4.1 should be Corollary 4.2 |
ICLR | Title
Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity
Abstract
In this paper, we use dynamical systems to analyze the nonlinear weight dynamics of two-layered bias-free networks of the form g(x; w) = ∑_{j=1}^K σ(w_jᵀ x), where σ(·) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution. The network is trained using gradient descent to mimic the output of a teacher network of the same size with fixed parameters w∗ using the l2 loss. We first show that when K = 1, the nonlinear dynamics can be written in closed form, and converges to w∗ with at least (1 − ε)/2 probability, if a random weight initialization of proper standard deviation (∼ 1/√d) is used, verifying empirical practice [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. For networks with many ReLU nodes (K ≥ 2), we apply our closed-form dynamics and prove that when the teacher parameters {w∗_j}_{j=1}^K form orthonormal bases, (1) a symmetric weight initialization yields convergence to a saddle point and (2) a certain symmetry-breaking weight initialization yields global convergence to w∗ without local minima. To our knowledge, this is the first proof that shows global convergence in a nonlinear neural network without unrealistic assumptions on the independence of ReLU activations. In addition, we also give a concise gradient update formulation for a multilayer ReLU network when it follows a teacher of the same size with the l2 loss. Simulations verify our theoretical analysis.
1 INTRODUCTION
Deep learning has made substantial progress in many applications, including Computer Vision [He et al. (2016); Simonyan & Zisserman (2015); Szegedy et al. (2015); Krizhevsky et al. (2012)], Natural Language Processing [Sutskever et al. (2014)] and Speech Recognition [Hinton et al. (2012)]. However, till now, how and why it works remains elusive due to a lack of theoretical understanding. First, how simple approaches like gradient descent can solve a very complicated non-convex optimization effectively. Second, how the deep models, especially deep convolutional models, achieve generalization power despite massive parameters.
In this paper, we focus on the first problem and use dynamical systems to analyze the nonlinear gradient descent dynamics of a certain two-layered nonlinear network of the following form:
g(x; w) = ∑_{j=1}^K σ(w_jᵀ x)   (1)
where σ(x) = max(x, 0) is the ReLU nonlinearity. We consider the following setting: a student network learns the parameters that minimize the l2 distance between its prediction and the supervision provided by the teacher network of the same size with a fixed set of parameters w∗. We assume all inputs x to follow Gaussian distribution and thus the network is bias-free. Eqn. 1 is highly nonconvex and could contain exponential number of symmetrically equivalent solutions.
To analyze this, we first derive novel and concise gradient update rules for multilayer ReLU networks (See Lemma 2.1) in the teacher-student setting under l2 loss. Then for K = 1, we prove that the nonlinear gradient dynamics of Eqn. 1 has a close form and converges to w∗ with at least (1 −
)/2 probability, if initialized randomly with standard derivation on the order of 1/ √ d, verifying commonly used initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)],. When K ≥ 2, we prove that when the teacher parameters {wj}Kj=1 form orthonormal bases, (1) a symmetric initialization of a student network gets stuck at a saddle point and (2) under a certain symmetric breaking weight initialization, the dynamics converges to w∗, without getting stuck into any local minima. Note that in both cases, the initialization can be arbitrarily close to the origin for a fixed ‖w∗‖, showing that such a convergence behavior is beyond the local convex structure at w∗. To our knowledge, this is the first proof of its kind.
Previous works also use dynamical system to analyze deep neural networks. [Saxe et al. (2013)] analyzes the dynamics of multilayer linear network, and [Kawaguchi (2016)] shows every local minima is global for multilinear network. Very little theoretical work has been done to analyze the dynamics of nonlinear networks, especially deep ones. [Mei et al. (2016)] shows the global convergence whenK = 1 with activation function σ(x) when its derivatives σ′, σ′′, σ′′′ are bounded and σ′ > 0. Similar to our approach, [Saad & Solla (1996)] also uses the student-teacher setting and analyzes the dynamics of student network when the teacher’s parameters w∗ forms a orthonomal bases; however, it uses σ(x) = erf(x) as the nonlinearity and only analyzes the local behaviors of the two critical points (the saddle point in symmetric initializations, and w∗). In contrast, we prove the global convergence behavior in certain symmetry-breaking cases.
Many previous works analyze nonlinear network based on the assumption of independent activations: the activations of ReLU (or other nonlinear) nodes are independent of the input and/or mutually independent. For example, [Choromanska et al. (2015a;b)] relate the nonlinear ReLU network with spin-glass models when several assumptions hold, including the assumption of independent activations (A1p and A5u). [Kawaguchi (2016)] proves that every local minimum in nonlinear network is global based on similar assumptions. [Soudry & Carmon (2016)] shows the global optimality of the local minimum in a two-layered ReLU network, by assuming small sample size and applying independent multiplicative Bernoulli noise on the activations. In practice, the activations are highly dependent due to their common input. Ignoring such dependency also misses important behaviors, and may lead to misleading conclusions. In this paper, no assumption of independent activation is made. For sigmoid activation, [Fukumizu & Amari (2000)] gives quite complicated conditions for a local minimum to be global when adding a new node to a two-layered network. [Janzamin et al. (2015)] gives guarantees on recovering the parameters of a 2-layered neural network learnt with tensor decomposition. In comparison, we analyze ReLU networks trained with gradient descent, which is a more popular setting in practice.
The paper is organized as follows. Sec. 2 introduces the basic formulation and some interesting novel properties of ReLU in multilayered ReLU networks. Sec. 3 and Sec. 4 then analyze the twolayered model Eqn. 1 for K = 1 and K ≥ 2, respectively. Sec. 5 shows that simulation results are consistent with theoretical analysis. Finally Sec. 7 gives detailed proofs for all theorems.
2 PRELIMINARY
2.1 NOTATION
Denote X as a N -by-d input data matrix and w∗ is the parameter of the teacher network with desired N -by-1 output u = g(X;w∗). Now suppose we have an estimator w and the estimated output v = g(X;w). We want to know with l2 loss E(w) = 12‖u − v‖ 2 = 12‖u − g(X;w)‖ 2, whether gradient descent will converge to the desired solution w∗.
The gradient descent update is w(t+1) = w(t) + η∆w(t), where ∆w(t) ≡ −∇E(w(t)). If we let η → 0, then the update rule becomes a first-order differential equation dw/dt = −∇E(w), or more concisely, ẇ = −∇E(w). In this case, Ė = ∇E(w)ᵀẇ = −‖∇E(w)‖2 ≤ 0, i.e., the function value E is nonincreasing over time. The key is to check whether there exist other critical points w 6= w∗ so that ∇E(w) = 0. In our analysis, we assume entries of inputX follow Gaussian distribution. In this situation, the gradient is a random variable and ∆w = −E [∇E(w)]. The expected E [E(w)] is also nonincreasing no matter whether we follow the expected gradient or the gradient itself, because
E [ Ė ] = −E [∇E(w)ᵀ∇E(w)] ≤ −E [∇E(w)]ᵀ E [∇E(w)] ≤ 0 (2)
Therefore, we analyze the behavior of expected gradient E [∇E(w)] rather than∇E(w).
2.2 PROPERTIES OF RELU
In this paper, we discover a few useful properties of ReLU that make our analysis much simpler. Denote D = D(w) = diag(Xw > 0) as a N -by-N diagonal matrix. The l-th diagnonal element of D is a binary variable showing whether the neuron is on for sample l. Using this notation, we could write σ(Xw) = DXw. Note that D only depends on the direction of w but not its magnitude.
Note that for ReLU,D is also “tranparent” on derivatives. For example, the Jacobian Jw[σ(Xw)] = σ′(Xw)X = DX at differentiable regions. This gives a very concise rule for gradient descent in ReLU network: suppose we have negative gradient inflow vector g (of dimension N -by-1) on the current ReLU node with weights w, then we can simply write the update ∆w as:
∆w = Jw[σ(Xw)] ᵀg = XᵀDg (3)
This can be easily applied to multilayer ReLU network. Denote j ∈ [c] if node j is in layer c, dc as the width of layer c, and uj and vj as the output of teacher network and student network, respectively. A simple deduction yields the following lemma:
Lemma 2.1 For neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow gj for node j at layer c has the following form:
gj = Lj ∑ j′ (L∗j′uj′ − Lj′vj′) (4)
where Lj and L∗j are N -by-N diagonal matrices. For any k ∈ [c+ 1], Lk = ∑ j∈[c] wjkDjLj and similarly for L∗k. For the first layer, L = L ∗ = I .
The intuition here is to start from g = u − v (true for l2 loss) at the top layer and use induction. With this formulation, we could write the finite dynamics for wc (all parameters in layer c). Denote the N -by-dc+1dc matrix Rc = [LjDj ]j∈[c]Xc and R∗c = [L ∗ jD ∗ j ]j∈[c]X ∗ c . Using gradient descent rules:
∆wj = X ᵀ cDjgj = X ᵀ cDjLj ∑ j′ L∗j′D ∗ j′X ∗ cw ∗ j′ − ∑ j′ Lj′Dj′Xcwj′ (5) = XᵀcDjLj (R ∗ cw ∗ c −Rcwc) (6)
Therefore we have: ∆wc = R ᵀ c (R ∗ cw ∗ c −Rcwc) (7)
3 SINGLE RELU CASE
Let’s start with the simplest case where there is only one ReLU node, K = 1. At iteration t, following Eqn. 3, the gradient update rule is:
∆w(t) = XᵀD(t)g(t) = XᵀD(t)(D∗Xw∗ −D(t)Xw(t)) (8)
Note here how the notation ofD(t) comes into play (andD(t)D(t) = D(t)). Indeed, when the neuron is cut off at sample l, then (D(t))ll is zero and will block the corresponding gradient component.
Linear case. In this situationD(t) = D∗ = I (no gating in either forward or backward propagation) and:
w^(t+1) = w^(t) + (η/N) XᵀX (w∗ − w^(t))   (9)
where η/N is the learning rate. When it is sufficiently small so that the spectral radius ρ(I − (η/N) XᵀX) < 1, w^(t+1) will converge to w∗ as t → +∞. Note that this convergence is guaranteed for any initial condition w^(1), if XᵀX is full rank with suitable η. This is consistent with its convex nature. If the entries of X follow an i.i.d. Gaussian distribution, then E[(1/N) XᵀX] = I and the condition is satisfied.
Nonlinear (ReLU) case. In this case, ∆w = XᵀD(D∗Xw∗ −DXw) in which D is a function of w. Intuitively, this term goes to zero when w → w∗, and should be approximated to be N2 (w
∗ − w) in the i.i.d Gaussian case, since roughly half of the samples are blocked. However, once we make such approximation, we lost the nonlinear behavior of the network and would draw the wrong conclusion of global convergence.
Then how should we analyze it? Notice that in ∆w, both of the two terms have the form F (e,w) = XᵀD(e)D(w)Xw. Using this form, E [∆w] = E [F (w/‖w‖,w∗)] − E [F (w/‖w‖,w)]. Here e is a unit vector called the “projected” weight. In the following, we will show that E [F (e,w)] has the following close form under i.i.d Gaussian assumption on X:
Lemma 3.1 Denote F (e,w) = XᵀD(e)D(w)Xw where e is a unit vector, X = [x1,x2, · · · ,xN ]ᵀ is N -by-d sample matrix and D(w) = diag(Xw > 0) is a binary diagonal matrix. If xi ∼ N(0, I) and are i.i.d (and thus bias-free), then:
E[F(e, w)] = (N/2π) [(π − θ) w + ‖w‖ sin θ · e]   (10)
where θ = ∠(e,w) ∈ [0, π] is the angle between e and w.
Note that the expectation analysis smooths out the non-differentiable property of ReLU, leaving only one singularity at e = 0. The intuition is that expectation analysis involves an integration over the data distribution. With simple algebraic manipulation, E [∆w] takes the following closed form:
E[∆w] = (N/2)(w∗ − w) + (N/2π)(α sin θ · w − θ w∗)   (11)
where α = ‖w∗‖/‖w‖ and θ ∈ [0, π] is the angle between w and w∗. The first term is expected while the last two terms show the nonlinear behavior. Using Lyapunov’s method, we show that the dynamics (if treated continuously) converges to w∗ when w(1) ∈ Ω = {w : ‖w −w∗‖ < ‖w∗‖}:
Lemma 3.2 When w^(1) ∈ Ω = {w : ‖w − w∗‖ < ‖w∗‖}, following the dynamics of Eqn. 11, the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable; thus w^(t) → w∗ as t → +∞.
See Appendix for the proof. The intuition is to represent V as a 2-by-2 bilinear form of vector [‖w‖, ‖w∗‖], and the bilinear coefficient matrix is positive definite. One question arises: will the same approach show the dynamics converges when the initial conditions lie outside the region Ω, in particular for any region that includes the origin? The answer is probably no. Note that w = 0 is a singularity in which ∆w is not continuous (if approaching from different directions towards w = 0, ∆w is different). It is due to the fact that ReLU function is not differentiable at the origin. We could remove this singularity by “smoothing out” ReLU around the origin. This will yield ∆w→ 0 when w → 0. In this case, V̇ (0) = 0 so Lyapunov method could only tell that the dynamics is stable but not convergent. Note that for ReLU activation, σ′(x) = 0 for certain negative x even after a local smoothing, so the global convergence claim in [Mei et al. (2016)] for l2 loss does not apply.
Random Initialization. Then we study how to sample w(1) so that w(1) ∈ Ω. We would like to sample within Ω, but we don’t know where is w∗. Sampling around origin with big radius r ≥ 2‖w∗‖ is inefficient in particular in high-dimensional space. This is because when the sample is uniform, the probability of hitting the ball is proportional to (r/‖w∗‖)d ≤ 2−d, which is exponentially small.
A better idea is to sample around the origin with very small radius (but not at w = 0), so that the convergent hypersphere behaves like a hyperplane near the origin, and thus almost half of the samples is useful (Fig. 2(a)), as shown in the following theorem:
Theorem 3.3 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w^(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with r ≤ ε √(2π/(d+1)) ‖w∗‖.
The intuition here is to lower-bound the probability of the shaded area (Fig. 2(b)). From the proof, the conclusion could be made stronger to show r ∼ 1/√d, consistent with common initialization techniques [Glorot & Bengio (2010); He et al. (2015); LeCun et al. (2012)]. Fig. 2(c) shows an example in the 2D case, in which there is a singularity at the origin, and sampling towards w∗ yields convergence. This is consistent with the analysis above.
4 MULTIPLE RELUS CASE
Now we are ready to analyze the network g(x) = Σ_{j=1}^K σ(w_jᵀ x) for K ≥ 2 (Fig. 1(c)). Theoretical analysis of such networks is also the main topic in many previous works [Saad & Solla (1996); Soudry & Carmon (2016); Fukumizu & Amari (2000)]. In this case, Lj = L∗j = I for 1 ≤ j ≤ K. Then we have the following nonlinear dynamics from Eqn. 7:
∆w_j = Σ_{j'=1}^K f(w_j, w_{j'}, w∗_{j'})  (12)
where f = F(w_j/‖w_j‖, w∗_{j'}) − F(w_j/‖w_j‖, w_{j'}). Therefore, using Eqn. 10, its expectation is:
(2π/N) E[f(w_j, w_{j'}, w∗_{j'})] = (π − θ_j^{∗j'}) w∗_{j'} − (π − θ_j^{j'}) w_{j'} + ( (‖w∗_{j'}‖/‖w_j‖) sin θ_j^{∗j'} − (‖w_{j'}‖/‖w_j‖) sin θ_j^{j'} ) w_j  (13)
where θ_j^{∗j'} ≡ ∠(w_j, w∗_{j'}) and θ_j^{j'} ≡ ∠(w_j, w_{j'}). Eqn. 12 (and its expected version) gives very complicated nonlinear dynamics and could be hard to solve in general. Unlike K = 1, a similar approach with a Lyapunov function does not yield a decisive conclusion. However, if we consider the symmetric case: w_j = P_j w and w∗_j = P_j w∗, where P_j is a cyclic permutation matrix that maps index j' + 1 to (j' + j mod K) + 1 (and P_1 is the identity matrix), then the RHS of the expected version of Eqn. 12 can be simplified as follows:
E[∆w_j] = Σ_{j'} E[f(w_j, w_{j'}, w∗_{j'})] = Σ_{j'} E[f(P_j w, P_{j'} w, P_{j'} w∗)]
= Σ_{j''} E[f(P_j w, P_j P_{j''} w, P_j P_{j''} w∗)]   ({P_j}_{j=1}^K is a group)
= P_j Σ_{j''} E[f(w, P_{j''} w, P_{j''} w∗)]   (‖P w1‖ = ‖w1‖, ∠(P w1, P w2) = ∠(w1, w2))
= P_j E[∆w_1]  (14)
which means that if all w_j and w∗_j are symmetric under the action of the cyclic group, so are their expected gradients. Therefore, the trajectory {w(t)} keeps this cyclic structure. Instead of solving a system of K equations, we only need to solve one:
E[∆w] = Σ_{j=1}^K E[f(w, P_j w, P_j w∗)]  (15)
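For reference, the expected per-node term of Eqn. 13 can be evaluated directly from the closed form of Eqn. 10; the sketch below (our illustration assuming NumPy, not code from the paper) does exactly that, and summing expected_f over j' gives the expected version of Eqn. 12 (and, under the cyclic symmetry, Eqn. 15).

import numpy as np

def expected_F(e, w, N=1.0):
    # Closed form of E[F(e, w)] from Eqn. 10; e must be a unit vector.
    theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1.0, 1.0))
    return N / (2 * np.pi) * ((np.pi - theta) * w + np.linalg.norm(w) * np.sin(theta) * e)

def expected_f(w_j, w_jp, w_star_jp, N=1.0):
    # E[f(w_j, w_j', w*_j')] appearing in Eqn. 12/13.
    e = w_j / np.linalg.norm(w_j)
    return expected_F(e, w_star_jp, N) - expected_F(e, w_jp, N)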
Surprisingly, there is another layer of symmetry in Eqn. 15 when {w∗_j} forms an orthonormal basis (w∗_{j'}ᵀ w∗_j = δ_{jj'}). In this case, if we start with w(1) = x w∗ + y Σ_{j≠1} P_j w∗, then we could show that the trajectory keeps this structure and Eqn. 15 can be further reduced into the following 2D nonlinear dynamics:
(2π/N) E[∆x, ∆y]ᵀ = −{ (π − φ)(x − 1 + (K − 1)y) [1, 1]ᵀ + [θ, φ∗ − φ]ᵀ + φ [x − 1, y]ᵀ } + [(K − 1)(α sin φ∗ − sin φ) + α sin θ] [x, y]ᵀ  (16)
Here the symmetric quantities (α ≡ ‖w∗_{j'}‖/‖w_j‖, θ ≡ θ_j^{∗j}, φ ≡ θ_j^{j'}, φ∗ ≡ θ_j^{∗j'}) are defined as follows:
α = (x² + (K − 1)y²)^{−1/2},  cos θ = αx,  cos φ∗ = αy,  cos φ = α²(2xy + (K − 2)y²)  (17)
For this 2D dynamics, we thus have the following theorem:
Theorem 4.1 For any K ≥ 2, the 2D dynamics (Eqn. 16) shows the following behaviors:
(1) Symmetric case. If the initial condition x(1) = y(1) ∈ (0, 1], then the dynamics reduces to 1D and converges to a saddle point x = y = (1/(πK)) (√(K − 1) − arccos(1/√K) + π).
(2) Symmetry-Breaking. If (x(1), y(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, then the dynamics always converges to (x, y) = (1, 0).
From (x(t), y(t)) we could recover w(t)_j = x(t) w∗_j + y(t) Σ_{j'≠j} w∗_{j'}. Obviously, convergence of Eqn. 16 to (1, 0) means that Eqn. 12 converges to {w∗_j}, i.e., the teacher parameters are recovered:
Corollary 4.2 For a bias-free two-layered ReLU network g(x; w) = Σ_j σ(w_jᵀ x) that takes Gaussian i.i.d inputs (Fig. 1), if the teacher’s parameters {w∗_j} form an orthogonal basis, and the student parameters are initialized in the form w(1)_j = x(1) w∗_j + y(1) Σ_{j'≠j} w∗_{j'} with (x(1), y(1)) ∈ Ω = {x ∈ (0, 1], y ∈ [0, 1], x > y}, then the dynamics (Eqn. 12) converges to {w∗_j} without being trapped in local minima.
When symmetry is broken, since the closure of Ω includes the origin, there exists a path starting at arbitrarily small neighborhood of origin to w∗, regardless of how large ‖w∗‖ is. In contrast to traditional convex analysis that only gives the local parameter-dependent convergence basin around w∗j , here we obtain a convergence basin that is parameter-independent. In comparison, [Saad & Solla (1996)] uses a different activation function (σ(x) = erf(x)) and only analyzes local behaviors near the two fixed points (the symmetric saddle point and the teacher’s weights w∗), leaving symmetry breaking an empirical procedure. Here we show that it is possible to give global convergence analysis on certain symmetry breaking cases for two-layered ReLU network.
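The following sketch (our illustration, assuming NumPy; the step size, horizon and K = 5 are arbitrary demo choices) evaluates the 2D field of Eqn. 16 with the quantities of Eqn. 17: it checks that the saddle value of Thm. 4.1(1) is indeed a fixed point on the line x = y, and integrates a symmetry-breaking initialization with x > y, which approaches (1, 0).

import numpy as np

def field(x, y, K):
    # Returns (2*pi/N) * E[(dx, dy)] of Eqn. 16, using the quantities of Eqn. 17.
    alpha = 1.0 / np.sqrt(x ** 2 + (K - 1) * y ** 2)
    theta = np.arccos(np.clip(alpha * x, -1, 1))
    phi_star = np.arccos(np.clip(alpha * y, -1, 1))
    phi = np.arccos(np.clip(alpha ** 2 * (2 * x * y + (K - 2) * y ** 2), -1, 1))
    common = (np.pi - phi) * (x - 1 + (K - 1) * y)
    scale = (K - 1) * (alpha * np.sin(phi_star) - np.sin(phi)) + alpha * np.sin(theta)
    dx = -(common + theta + phi * (x - 1)) + scale * x
    dy = -(common + (phi_star - phi) + phi * y) + scale * y
    return dx, dy

K, eta = 5, 1e-3
xs = (np.sqrt(K - 1) - np.arccos(1 / np.sqrt(K)) + np.pi) / (np.pi * K)
print("saddle x = y =", round(xs, 4), "field there:", field(xs, xs, K))   # field is ~ (0, 0)

x, y = 2e-3, 1e-3                     # symmetry-breaking start with x > y
for _ in range(200000):
    dx, dy = field(x, y, K)
    x, y = x + eta * dx, y + eta * dy
print("(x, y) ->", (round(x, 3), round(y, 3)))   # approaches (1, 0)

Consistent with Fig. 4, one can also log the trajectory and observe that both coordinates grow at first before y decays, i.e., the detour described above.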
By symmetry, Corollary 4.2 immediately suggests that when w(1) = y(1) Σ_{j=1}^K w∗_j + (x(1) − y(1)) w∗_{j'}, the dynamics will converge to P_{j'} w∗. Since x > y but they can be arbitrarily close, the slightest perturbation of the symmetric solution x = y leads to a different fixed point, which is a permutation of w∗. This is very similar to the Spontaneous Symmetry-Breaking (SSB) procedure in physics, in which a high-energy state with full symmetry goes to a low-energy state and only retains part of the symmetry. In this case, the energy is the objective function E, the high-energy state is the initialization that is almost symmetric but with small fluctuation, and the low-energy state is the fixed point the dynamics converges into.
From the simulation shown in Fig. 4, we could see that gradient descent takes a detour to reach the desired solution w∗, even when the initialization is aligned with w∗. This is because in the first stage, all ReLU nodes receive the residue and try to explain the data in the same way (both x and y increases); when the “obvious” component has been explained away, then the residue changes its direction and pushes some ReLU nodes to explain other components as well (x increases but y decreases).
Empirically this path also converges to w∗ under noise. We leave it as a conjecture that the system converges in the presence of reasonably large noise. If this conjecture is true, then with high probability a random initialization stays in the convergence basin and converges to a permutation of w∗. The reason is that a random initialization almost never gives ties. Without a tie, there exists one leading component which will dominate the convergence.
Conjecture 4.3 When the initialization w(1) = x(1) w∗_j + y(1) Σ_{j'≠j} w∗_{j'} + ε, where ε is Gaussian noise and (x(1), y(1)) ∈ Ω, then the dynamics Eqn. 12 also converges to w∗ without being trapped in local minima.
5 SIMULATION
5.1 CLOSED-FORM SOLUTION FOR ONE RELU NODE
We verify our closed-form expression of E[F(e, w)] = E[XᵀD(e)D(w)Xw] (Eqn. 10) with simulation. We randomly pick e and w so that their angle ∠(e, w) is uniformly distributed in [0, π]. We prepare the input data X with standard Gaussian distribution and compare the closed-form solution E[F(e, w)] with F(e, w), the actual data term in gradient descent without expectation. We use the relative RMS error: err = ‖E[F(e, w)] − F(e, w)‖/‖F(e, w)‖. As shown in Fig. 3(a), the error distribution over angles reveals the properties of the closed-form solution. For small θ, D(w) and D(e) overlap sufficiently, giving a reliable estimation for the gradient. When θ → π, D(w) and D(e) tend not to overlap, leaving very few data involved in the gradient computation. As a result, the variance grows. Note that all our analysis operates on θ ∈ [0, π/2] and is not affected by this behavior. In the following, angles are sampled from [0, π/2].
[Figure caption: Convergence when very large noise is present. Both teacher and student networks use g(x) = Σ_{j=1}^K σ(w_jᵀ x); each experiment has 8 runs. Bottom row: convergence when we use g2(x) = Σ_{j=1}^K a_j σ(w_jᵀ x), with the top weights a_j fixed at different numbers (rather than 1). Large positive a_j corresponds to fast convergence; when a_j has positive/negative components, the network does not converge to w∗.]
Fig. 3(a) shows that the closed-form expression becomes more accurate with more samples. We also examine other zero-mean distributions of X, e.g., the uniform distribution in [−1/2, 1/2]. As shown in Fig. 3(d), the closed-form expression still works for large d, showing that it could be quite general. Note that the error is computed up to a scaling constant, due to the difference in normalization constants among different distributions. We leave it to future work to prove its usability for broader distributions.
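A minimal version of this check (our sketch, assuming NumPy; the sample size N and dimension d are arbitrary demo values) is:

import numpy as np

rng = np.random.default_rng(0)
N, d = 100000, 20
X = rng.normal(size=(N, d))
w = rng.normal(size=d)
e = rng.normal(size=d); e /= np.linalg.norm(e)

mask = ((X @ e) > 0) & ((X @ w) > 0)                  # rows kept by D(e) D(w)
F_emp = X[mask].T @ (X[mask] @ w)                     # empirical F(e, w) = X^T D(e) D(w) X w
theta = np.arccos(np.clip(e @ w / np.linalg.norm(w), -1, 1))
F_close = N / (2 * np.pi) * ((np.pi - theta) * w + np.linalg.norm(w) * np.sin(theta) * e)
print("relative RMS err:", np.linalg.norm(F_close - F_emp) / np.linalg.norm(F_emp))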
5.2 CONVERGENCE FOR MULTIPLE RELU NODES
Fig. 4(a) and (b) shows the 2D vector field given by the 2D dynamics (Eqn. 16) and Fig. 4(c) shows the 2D trajectory towards convergence to the teacher’s parameters w∗. Interestingly, even when we initialize the weights as (10−3, 0), aligning with w∗, the gradient descent takes detours to reach the destination. One explanation is, at the beginning all nodes move similar direction trying to explain the data, once the data have been explained partly, specialization follows (y decreases).
Fig. 5 shows empirical convergence for K ≥ 2, when the initialization deviates from the symmetric initialization in Thm. 4.1. Unless the deviation is large, gradient descent converges to w∗. We also check the convergence of a more general network g2(x) = Σ_{j=1}^K a_j σ(w_jᵀ x). When a_j > 0 convergence follows; however, when some a_j is negative, the network does not converge to w∗, even though the student network already knows the ground-truth values of {a_j}_{j=1}^K.
6 CONCLUSION AND FUTURE WORK
In this paper, we analyze the nonlinear dynamical behavior of certain two-layered bias-free ReLU networks of the form g(x; w) = Σ_{j=1}^K σ(w_jᵀ x), where σ(x) = max(x, 0) is the ReLU nonlinearity. We assume that the input x follows a Gaussian distribution and the output is generated by a teacher network with parameters w∗. For K = 1 we show that a closed-form nonlinear dynamics can be obtained and its convergence to w∗ can be proven, if we sample the initialization properly. Such initialization is consistent with common practice [Glorot & Bengio (2010); He et al. (2015)] and is independent of the value of w∗. For K ≥ 2, when the teacher parameters {w∗_j} form an orthonormal basis, we prove that the trajectory from symmetric initialization is trapped at a saddle point, while certain symmetry-breaking initializations converge to w∗ without being trapped in any local minima. Future work includes analysis of general cases (or the symmetric case plus noise) for K ≥ 2, and a generalization to multilayer ReLU (or other nonlinear) networks.
7 APPENDIX
Here we give detailed proofs of all the theorems.
7.1 PROPERTIES OF RELU NETWORKS
Lemma 7.1 For a neural network with ReLU nonlinearity and using l2 loss to match with a teacher network of the same size, the negative gradient inflow g_j for node j at layer c has the following form:
g_j = L_j Σ_{j'} (L∗_{j'} u_{j'} − L_{j'} v_{j'})  (18)
where L_j and L∗_j are N-by-N diagonal matrices. For any k ∈ [c+1], L_k = Σ_{j∈[c]} w_jk D_j L_j and similarly for L∗_k.
Proof We prove by induction on layers. For the first layer, there is only one node with g = u − v, therefore L_j = L∗_j = I. Suppose the condition holds for all nodes j ∈ [c]. Then for node k ∈ [c+1], we have:
g_k = Σ_j w_jk D_j g_j = Σ_j w_jk D_j L_j ( Σ_{j'} L∗_{j'} u_{j'} − Σ_{j'} L_{j'} v_{j'} )
= Σ_j w_jk D_j L_j ( Σ_{j'} L∗_{j'} Σ_{k'} D∗_{j'} w∗_{j'k'} u_{k'} − Σ_{j'} L_{j'} Σ_{k'} D_{j'} w_{j'k'} v_{k'} )
= Σ_j w_jk D_j L_j Σ_{j'} L∗_{j'} D∗_{j'} Σ_{k'} w∗_{j'k'} u_{k'} − Σ_j w_jk D_j L_j Σ_{j'} L_{j'} D_{j'} Σ_{k'} w_{j'k'} v_{k'}
= Σ_{k'} ( Σ_j w_jk D_j L_j Σ_{j'} L∗_{j'} D∗_{j'} w∗_{j'k'} ) u_{k'} − Σ_{k'} ( Σ_j w_jk D_j L_j Σ_{j'} L_{j'} D_{j'} w_{j'k'} ) v_{k'}
Setting L_k = Σ_j w_jk D_j L_j and L∗_k = Σ_j w∗_jk D∗_j L∗_j (both are diagonal matrices), we thus have:
g_k = Σ_{k'} ( L_k L∗_{k'} u_{k'} − L_k L_{k'} v_{k'} ) = L_k Σ_{k'} ( L∗_{k'} u_{k'} − L_{k'} v_{k'} )  (19)
7.2 ONE RELU CASE
Lemma 7.2 Suppose F(e, w) = XᵀD(e)D(w)Xw, where e is a unit vector and X = [x1, x2, · · · , xN]ᵀ is the N-by-d sample matrix. If xi ∼ N(0, I) and are i.i.d, then:
E[F(e, w)] = N/(2π) ((π − θ)w + ‖w‖ sin θ e)  (20)
where θ ∈ [0, π] is the angle between e and w.
Proof Note that F can be written in the following form:
F(e, w) = Σ_{i: xiᵀe ≥ 0, xiᵀw ≥ 0} xi xiᵀ w  (21)
where xi are the samples so that X = [x1, x2, · · · , xN]ᵀ. We set up the axes related to e and w as in Fig. 6, while the remaining axes are perpendicular to this plane. In this coordinate system, any vector x = [r sin φ, r cos φ, x3, . . . , xd]. We have an orthonormal set of bases: e, e⊥ = (e cos θ − w/‖w‖)/sin θ (and any set of bases that span the rest of the space). Under this basis, the representations of e and w are [1, 0_{d−1}] and [‖w‖ cos θ, −‖w‖ sin θ, 0_{d−2}]. Note that here θ ∈ (−π, π]. The angle θ is positive when e “chases after” w, and is otherwise negative.
Now we consider the quantity R(φ0) = E[(1/N) Σ_{i: φi ∈ [0, φ0]} xi xiᵀ]. If we take the expectation and use polar coordinates only in the first two dimensions, we have:
R(φ0) = E[(1/N) Σ_{i: φi ∈ [0, φ0]} xi xiᵀ] = E[xi xiᵀ | φi ∈ [0, φ0]] P[φi ∈ [0, φ0]]
= ∫_0^{+∞} ∫∫_{−∞}^{+∞} ∫_0^{φ0} [r sin φ, r cos φ, x3, . . . , xd]ᵀ [r sin φ, r cos φ, . . . , xd] p(r) p(φ) Π_{k=3}^d p(xk) r dr dφ dx3 . . . dxd
where p(r) = e^{−r²/2} and p(φ) = 1/(2π). Note that R(φ0) is a d-by-d matrix. The first 2-by-2 block can be computed in closed form (note that ∫_0^{+∞} r² p(r) r dr = 2). Any off-diagonal element except for the first 2-by-2 block is zero due to the symmetry of i.i.d Gaussian variables. Any diagonal element outside the first 2-by-2 block equals P[φi ∈ [0, φ0]] = φ0/(2π). Finally, we have:
R(φ0) = (1/(4π)) [[2φ0 − sin 2φ0, 1 − cos 2φ0, 0], [1 − cos 2φ0, 2φ0 + sin 2φ0, 0], [0, 0, 2φ0 I_{d−2}]]  (22)
= (φ0/(2π)) I_d + (1/(4π)) [[− sin 2φ0, 1 − cos 2φ0, 0], [1 − cos 2φ0, sin 2φ0, 0], [0, 0, 0]]  (23)
With this equation, we can then compute E[F(e, w)]. When θ ≥ 0, the condition {i : xiᵀe ≥ 0, xiᵀw ≥ 0} is equivalent to {i : φi ∈ [θ, π]} (Fig. 6(a)). Using w = [‖w‖ cos θ, −‖w‖ sin θ, 0_{d−2}], we have:
E[F(e, w)] = N (R(π) − R(θ)) w  (24)
= (N/(4π)) ( 2(π − θ)w − ‖w‖ [[− sin 2θ, 1 − cos 2θ, 0], [1 − cos 2θ, sin 2θ, 0], [0, 0, 0]] [cos θ, − sin θ, 0]ᵀ )  (25)
= (N/(2π)) ( (π − θ)w + ‖w‖ [sin θ, 0, . . . , 0]ᵀ )  (26)
= (N/(2π)) ((π − θ)w + ‖w‖ sin θ e)  (27)
For θ < 0, the condition {i : xiᵀe ≥ 0, xiᵀw ≥ 0} is equivalent to {i : φi ∈ [0, π + θ]} (Fig. 6(b)), and similarly we get
E[F(e, w)] = N (R(π + θ) − R(0)) w = (N/(2π)) ((π + θ)w − ‖w‖ sin θ e)  (28)
Notice that, by abuse of notation, the θ appearing in Eqn. 20 is the absolute value, and Eqn. 20 follows.
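As a sanity check, the intermediate matrix R(φ0) of Eqn. 22/23 is easy to verify numerically; the sketch below (our illustration, assuming NumPy; the sample size, dimension and φ0 are arbitrary demo values) compares a Monte Carlo estimate against the closed form.

import numpy as np

rng = np.random.default_rng(0)
N, d, phi0 = 500000, 4, 1.0
X = rng.normal(size=(N, d))
phi = np.arctan2(X[:, 0], X[:, 1]) % (2 * np.pi)      # angle with x = [r sin(phi), r cos(phi), ...]
sel = phi <= phi0
R_emp = (X[sel].T @ X[sel]) / N                       # Monte Carlo estimate of R(phi0)
R_close = phi0 / (2 * np.pi) * np.eye(d)              # closed form of Eqn. 23
R_close[:2, :2] += 1 / (4 * np.pi) * np.array([[-np.sin(2 * phi0), 1 - np.cos(2 * phi0)],
                                               [1 - np.cos(2 * phi0), np.sin(2 * phi0)]])
print(np.abs(R_emp - R_close).max())                  # small (roughly 1e-3 at this sample size)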
Lemma 7.3 In the region ‖w(1) − w∗‖ < ‖w∗‖, following the dynamics (Eqn. 11), the Lyapunov function V(w) = (1/2)‖w − w∗‖² has V̇ < 0 and the system is asymptotically stable, and thus w(t) → w∗ when t → +∞.
Proof Denote Ω = {w : ‖w − w∗‖ < ‖w∗‖}. Note that
V̇ = (w − w∗)ᵀ ∆w = −yᵀ M y  (29)
where y = [‖w∗‖, ‖w‖]ᵀ and M is the following 2-by-2 matrix:
M = (1/2) [[sin 2θ + 2π − 2θ, −(2π − θ) cos θ − sin θ], [−(2π − θ) cos θ − sin θ, 2π]]  (30)
In the following we will show that M is positive definite when θ ∈ (0, π/2]. It suffices to show that M11 > 0, M22 > 0 and det(M) > 0. The first two are trivial, while the last one is:
4 det(M) = 2π(sin 2θ + 2π − 2θ) − [(2π − θ) cos θ + sin θ]²  (31)
= 2π(sin 2θ + 2π − 2θ) − [(2π − θ)² cos² θ + (2π − θ) sin 2θ + sin² θ]  (32)
= (4π² − 1) sin² θ − 4πθ + 4πθ cos² θ − θ² cos² θ + θ sin 2θ  (33)
= (4π² − 4πθ − 1) sin² θ + θ cos θ (2 sin θ − θ cos θ)  (34)
Note that 4π² − 4πθ − 1 = 4π(π − θ) − 1 > 0 for θ ∈ [0, π/2], and g(θ) = sin θ − θ cos θ ≥ 0 for θ ∈ [0, π/2] since g(0) = 0 and g'(θ) ≥ 0 in this region. Therefore, when θ ∈ (0, π/2], M is positive definite.
When θ = 0, M(θ) = π[[1, −1], [−1, 1]] is positive semi-definite, with the null eigenvector being (√2/2)[1, 1], i.e., ‖w‖ = ‖w∗‖. However, along θ = 0, the only w that satisfies ‖w‖ = ‖w∗‖ is w = w∗. Therefore, V̇ = −yᵀ M y < 0 in Ω. Note that although this region could be expanded to the entire open half-space H = {w : wᵀw∗ > 0}, it is not straightforward to prove convergence in H, since the trajectory might go outside H. On the other hand, Ω is the level set V < (1/2)‖w∗‖², so a trajectory starting within Ω remains inside.
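The positive-definiteness argument above can also be spot-checked numerically; the following sketch (our illustration, assuming NumPy, and not a substitute for the proof) evaluates the smallest eigenvalue of M(θ) of Eqn. 30 over a grid of θ ∈ (0, π/2].

import numpy as np

def M(theta):
    # The 2-by-2 matrix of Eqn. 30.
    off = -((2 * np.pi - theta) * np.cos(theta) + np.sin(theta))
    return 0.5 * np.array([[np.sin(2 * theta) + 2 * np.pi - 2 * theta, off],
                           [off, 2 * np.pi]])

thetas = np.linspace(1e-4, np.pi / 2, 10000)
print(min(np.linalg.eigvalsh(M(t)).min() for t in thetas) > 0)   # expected: True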
Theorem 7.4 The dynamics in Eqn. 11 converges to w∗ with probability at least (1 − ε)/2, if the initial value w(1) is sampled uniformly from Br = {w : ‖w‖ ≤ r} with:
r ≤ ε √(2π/(d + 1)) ‖w∗‖  (35)
Proof Given a ball of radius r, we first compute the “gap” δ of the spherical cap (Fig. 2(b)). First cos θ = r/(2‖w∗‖), so δ = r cos θ = r²/(2‖w∗‖). Then a sufficient condition for the probability argument to hold is to ensure that the volume Vshaded of the shaded area is greater than ((1 − ε)/2) Vd(r), where Vd(r) is the volume of the d-dimensional ball of radius r. Since Vshaded ≥ (1/2)Vd(r) − δ Vd−1(r), it suffices to have:
(1/2) Vd(r) − δ Vd−1(r) ≥ ((1 − ε)/2) Vd(r)  (36)
which gives
δ ≤ (ε/2) Vd(r)/Vd−1(r)  (37)
Using δ = r²/(2‖w∗‖) and Vd(r) = Vd(1) r^d, we thus have:
r ≤ ε (Vd(1)/Vd−1(1)) ‖w∗‖  (38)
where Vd(1) is the volume of the unit ball. Since the volume of the d-dimensional unit ball is
Vd(1) = π^{d/2} / Γ(d/2 + 1)  (39)
where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt, we have
Vd(1)/Vd−1(1) = √π Γ(d/2 + 1/2) / Γ(d/2 + 1)  (40)
From Gautschi’s inequality
x^{1−s} < Γ(x + 1)/Γ(x + s) < (x + s)^{1−s},  x > 0, 0 < s < 1  (41)
with s = 1/2 and x = d/2 we have:
((d + 1)/2)^{−1/2} < Γ(d/2 + 1/2)/Γ(d/2 + 1) < (d/2)^{−1/2}  (42)
Therefore, it suffices to have
r ≤ ε √(2π/(d + 1)) ‖w∗‖  (43)
Note that this upper bound is tight when δ → 0 and d → +∞, since all inequalities involved asymptotically become equalities.
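The slack introduced by Gautschi’s inequality can be inspected directly; the sketch below (our illustration, using only Python’s standard math module) compares the sufficient radius factor √(2π/(d + 1)) of Eqn. 43 with the exact ratio Vd(1)/Vd−1(1) of Eqn. 40 for a few dimensions.

import math

for d in [2, 5, 10, 100, 1000]:
    exact = math.sqrt(math.pi) * math.exp(math.lgamma(d / 2 + 0.5) - math.lgamma(d / 2 + 1))
    bound = math.sqrt(2 * math.pi / (d + 1))
    print(d, round(bound, 4), round(exact, 4), bound <= exact)   # the bound never exceeds the exact ratio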
7.3 TWO LAYER CASE
Lemma 7.5 For φ∗, θ and φ defined in Eqn. 17:
α ≡ (x² + (K − 1)y²)^{−1/2}  (44)
cos θ ≡ αx  (45)
cos φ∗ ≡ αy  (46)
cos φ ≡ α²(2xy + (K − 2)y²)  (47)
we have the following relations in the triangular region Ωε0 = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)):
(1) φ, φ∗ ∈ [0, π/2] and θ ∈ [0, θ0), where θ0 = arccos(1/√K).
(2) cos φ = 1 − α²(x − y)² and sin φ = α(x − y)√(2 − α²(x − y)²).
(3) φ∗ ≥ φ (equality holds only when y = 0) and φ∗ > θ.
Proof Propositions (1) and (2) follow from direct calculation. In particular, note that since cos θ = αx = 1/√(1 + (K − 1)(y/x)²) and x > y ≥ 0, we have cos θ ∈ (1/√K, 1] and θ ∈ [0, θ0). For Proposition (3), φ∗ = arccos αy > θ = arccos αx because x > y. Finally, for x > y > 0, we have
cos φ / cos φ∗ = α²(2xy + (K − 2)y²) / (αy) = α(2x + (K − 2)y) > α(x + (K − 1)y) > 1  (48)
The final inequality is because K ≥ 2, x, y > 0 and thus (x + (K − 1)y)² > x² + (K − 1)²y² ≥ x² + (K − 1)y² = α^{−2}. Therefore φ∗ > φ. If y = 0 then φ∗ = φ.
Theorem 7.6 For the dynamics defined in Eqn. 16, there exists ε0 > 0 so that the triangular region Ωε0 = {(x, y) : x ≥ 0, y ≥ 0, x ≥ y + ε0} (Fig. 6(c)) is a convergent region. That is, the flow goes inwards on all three edges, and any trajectory starting in Ωε0 stays inside.
Proof We discuss the three boundaries as follows:
Case 1: y = 0, 0 ≤ x ≤ 1, horizontal line. In this case, θ = 0, φ = π/2 and φ∗ = π/2. The y component of the dynamics on this line is:
f1 ≡ (2π/N) ∆y = −(π/2)(x − 1) ≥ 0  (49)
So ∆y points to the interior of Ω.
Case 2: x = 1, 0 ≤ y ≤ 1, vertical line. In this case, α ≤ 1 and the x component of the dynamics is:
f2 ≡ (2π/N) ∆x = −(π − φ)(K − 1)y − θ + (K − 1)(α sin φ∗ − sin φ) + α sin θ  (50)
= −(K − 1)[(π − φ)y − (α sin φ∗ − sin φ)] + (α sin θ − θ)  (51)
Note that since α ≤ 1, α sin θ ≤ sin θ ≤ θ, so the second term is non-positive. For the first term, we only need to check whether (π − φ)y − (α sin φ∗ − sin φ) is nonnegative. Note that
(π − φ)y − (α sin φ∗ − sin φ)  (52)
= (π − φ)y + α(x − y)√(2 − α²(x − y)²) − α√(1 − α²y²)  (53)
= y [π − φ − α√(2 − α²(x − y)²)] + α [x√(2 − α²(x − y)²) − √(1 − α²y²)]  (54)
In Ω we have (x − y)² ≤ 1; combined with α ≤ 1, we have 1 ≤ √(2 − α²(x − y)²) ≤ √2 and √(1 − α²y²) ≤ 1. Since x = 1, the second term is nonnegative. For the first term, since α ≤ 1,
π − φ − α√(2 − α²(x − y)²) ≥ π − π/2 − √2 > 0  (55)
So (π − φ)y − (α sin φ∗ − sin φ) ≥ 0 and ∆x ≤ 0, pointing inwards.
Case 3: x = y + ε, 0 ≤ y ≤ 1, diagonal line. We compute the inner product between (∆x, ∆y) and (1, −1), the inward normal of Ω at this line. Using φ ≤ (π/2) sin φ for φ ∈ [0, π/2] and φ∗ − θ = arccos αy − arccos αx ≥ 0 when x ≥ y, we have:
f3(y, ε) ≡ (2π/N) [∆x, ∆y] [1, −1]ᵀ = φ∗ − θ − εφ + ε[(K − 1)(α sin φ∗ − sin φ) + α sin θ]  (56)
≥ ε(K − 1)[α sin φ∗ − (1 + π/(2(K − 1))) sin φ]
= εα(K − 1)[√(1 − α²y²) − (1 + π/(2(K − 1))) ε√(2 − α²ε²)]
Note that for y > 0:
αy = 1/√((x/y)² + (K − 1)) = 1/√((1 + ε/y)² + (K − 1)) ≤ 1/√K  (57)
For y = 0, αy = 0 < √(1/K). So we have √(1 − α²y²) ≥ √(1 − 1/K), and √(2 − α²ε²) ≤ √2. Therefore f3 ≥ εα(K − 1)(C1 − C2 ε) with C1 ≡ √(1 − 1/K) > 0 and C2 ≡ √2 (1 + π/(2(K − 1))) > 0. With ε = ε0 > 0 sufficiently small, f3 > 0.
Lemma 7.7 (Reparametrization) Denote ε = x − y > 0. The terms αx, αy and αε involved in the trigonometric functions in Eqn. 16 have the following parametrization:
α [y, x, ε]ᵀ = (1/K) [β − β2, β + (K − 1)β2, Kβ2]ᵀ  (58)
where β2 = √((K − β²)/(K − 1)). The reverse transformation is given by β = √(K − (K − 1)α²ε²). Here β ∈ [1, √K) and β2 ∈ (0, 1]. In particular, the critical point (x, y) = (1, 0) corresponds to (β, ε) = (1, 1). As a result, all trigonometric functions in Eqn. 16 only depend on the single variable β. In particular, the following relationship is useful:
β = cos θ + √(K − 1) sin θ  (59)
Proof This transformation can be checked by simple algebraic manipulation. For example:
(1/(αK))(β − β2) = (1/K)(√(K/α² − (K − 1)ε²) − ε) = (1/K)(√((Ky + ε)²) − ε) = y  (60)
To prove Eqn. 59, first we notice that K cos θ = Kαx = β + (K − 1)β2. Therefore, we have (K cos θ − β)² − (K − 1)²β2² = 0, which gives β² − 2β cos θ + 1 − K sin² θ = 0. Solving this quadratic equation and noticing that β ≥ 1 and θ ∈ [0, π/2], we get:
β = cos θ + √(cos² θ + K sin² θ − 1) = cos θ + √(K − 1) sin θ  (61)
Lemma 7.8 After reparametrization (Eqn. 58), f3(β, ε) ≥ 0 for ε ∈ (0, β2/β]. Furthermore, equality holds only if (β, ε) = (1, 1), i.e., (y, ε) = (0, 1).
Proof Applying the parametrization (Eqn. 58) to Eqn. 56 and noticing that αε = β2 = β2(β), we can write
f3 = h1(β) − ε(φ + (K − 1) sin φ)  (62)
When β is fixed, f3 is a monotonically decreasing function of ε > 0. Therefore, f3(β, ε) ≥ f3(β, ε') for 0 < ε ≤ ε' ≡ β2/β. If we can prove f3(β, ε') ≥ 0 and that it attains zero only at the known critical point (β, ε) = (1, 1), the proof is complete.
Denote f3(β, ε') = f31 + f32 where
f31(β, ε') = φ∗ − θ − ε'φ + ε'α sin θ  (63)
f32(β, ε') = (K − 1)(α sin φ∗ − sin φ) ε'  (64)
For f32 it suffices to prove that ε'(α sin φ∗ − sin φ) = β2 sin φ∗ − (β2/β) sin φ ≥ 0, which is equivalent to sin φ∗ − sin φ/β ≥ 0. But this is trivially true since φ∗ ≥ φ and β ≥ 1. Therefore, f32 ≥ 0. Note that the equality only holds when φ∗ = φ and β = 1, which corresponds to the horizontal line x ∈ (0, 1], y = 0. For f31, since φ∗ ≥ φ, φ∗ > θ and ε' ∈ (0, 1], we have the following:
f31 = ε'(φ∗ − φ) + (1 − ε')(φ∗ − θ) − ε'θ + β2 sin θ ≥ −ε'θ + β2 sin θ ≥ β2 (sin θ − θ/β)  (65)
And it reduces to showing whether β sin θ − θ is nonnegative. Using Eqn. 59, we have:
f33(θ) = β sin θ − θ = (1/2) sin 2θ + √(K − 1) sin² θ − θ  (66)
Note that f33' = cos 2θ + √(K − 1) sin 2θ − 1 = √K cos(2θ − θ0) − 1, where θ0 = arccos(1/√K). By Proposition (1) in Lemma 7.5, θ ∈ [0, θ0). Therefore, f33' ≥ 0 and since f33(0) = 0, f33 ≥ 0. Again, equality holds when θ = 0, φ∗ = φ and ε' = 1, which is the critical point (β, ε) = (1, 1), i.e., (y, ε) = (0, 1).
Theorem 7.9 For the dynamics defined in Eqn. 16, the only critical point (∆x = 0 and ∆y = 0) within Ω is (y, ε) = (0, 1).
Proof We prove by contradiction. Suppose (β, ε) is a critical point other than w∗. A necessary condition for this to hold is f3 = 0 (Eqn. 56). By Lemma 7.8, ε > ε' = β2/β > 0 and
ε − 1 + Ky = (1/α)(β2 − α + β − β2) = (β − α)/α = (β − β2/ε)/α > (β − β2/ε')/α = 0  (67)
So ε − 1 + Ky is strictly greater than zero. On the other hand, the condition f3 = 0 implies that
(K − 1)(α sin φ∗ − sin φ) + α sin θ = −(1/ε)(φ∗ − θ) + φ  (68)
Using φ ∈ [0, π/2], φ∗ ≥ φ and φ∗ > θ, we have:
(2π/N) ∆y = −(π − φ)(ε − 1 + Ky) − (φ∗ − φ) − φy + [(K − 1)(α sin φ∗ − sin φ) + α sin θ] y
= −(π − φ)(ε − 1 + Ky) − (φ∗ − φ) − (1/ε)(φ∗ − θ) y < 0  (69)
So the current point (β, ε) cannot be a critical point.
Theorem 7.10 Any trajectory in Ωε0 converges to (y, ε) = (0, 1), i.e., (x, y) = (1, 0), following the dynamics defined in Eqn. 16.
Proof We have the Lyapunov function V = E[E] so that V̇ = −E[∆wᵀ∆w] ≤ −E[∆w]ᵀ E[∆w] ≤ 0. By Thm. 7.9, other than the optimal solution w∗, there is no other symmetric critical point, so ∆w ≠ 0 and thus V̇ < 0. On the other hand, by Thm. 7.6, the triangular region Ωε0 is convergent, in which the 2D dynamics is C∞ differentiable. Therefore, any 2D solution curve ξ(t) stays within the region. By the Poincaré-Bendixson theorem, when there is a unique critical point, the curve either converges to a limit cycle or to the critical point. However, a limit cycle is not possible since V is strictly monotonically decreasing along the curve. Therefore, ξ(t) converges to the unique critical point, which is (y, ε) = (0, 1), and so does the symmetric system (Eqn. 12).
Theorem 7.11 When x = y ∈ (0, 1], the 2D dynamics (Eqn. 16) reduces to the following 1D case:
(2π/N) ∆x = −πK(x − x∗)  (70)
where x∗ = (1/(πK))(√(K − 1) − arccos(1/√K) + π). Furthermore, x∗ is a convergent critical point.
Proof The 1D system can be computed with simple algebraic manipulations (note that when x = y, φ = 0 and θ = φ∗ = arccos(1/√K)). Note that the 1D system is linear and its closed-form solution is x(t) = x∗ + C e^{−(NK/2)t}, and it is thus convergent. | 1. What are the assumptions made in the paper regarding Gaussian iid input data?
2. How does the author analyze the convergence dynamics of a single layer non-linear network?
3. What are some specific questions the reviewer has regarding the paper's notation and derivations?
4. What is the significance of the results in terms of generalizability to real-world feature inputs?
5. What is the relationship between l and u in the statement "when the neuron is cut off at sample l, then (D^(t))_u"?
6. What is F(e, w) and why is D(e) introduced?
7. What does Theorem 3.3 suggest about the maximal probability of convergence?
8. What is the intuition behind the symmetry group and its importance?
9. What is a_j in Figure 5?
10. How could the author improve the clarity of the paper? | Review | Review
In this paper, the author analyzes the convergence dynamics of a single layer non-linear network under Gaussian iid input assumptions. The first half of the paper, dealing with a single hidden node, was somewhat clear, although I have some specific questions below. The second half, dealing with multiple hidden nodes, was very difficult for me to understand, and the final "punchline" is quite unclear. I think the author should focus on intuition and hide detailed derivations and symbols in an appendix.
In terms of significance, it is very hard for me to be sure how generalizable these results are: the Gaussian assumption is a very strong one, and so is the assumption of iid inputs. Real-world feature inputs are highly correlated and are probably not Gaussian. Such assumptions are not made (as far as I can tell) in recent papers analyzing the convergence of deep networks e.g. Kawaguchi, NIPS 2016. Although the author says the no assumption is made on the independence of activations, this assumption is shifted to the input instead. I think this means that the activations are combinations of iid random variables, and are probably Gaussian like, right? So I'm not sure where this leaves us.
Specific comments:
1. Please use D_w instead of D to show that D is a function of w, and not a constant. This gets particularly confusing when switching to D(w) and D(e) in Section 3. In general, notation in the paper is hard to follow and should be clearly introduced.
2. Section 3, statement that says "when the neuron is cut off at sample l, then (D^(t))_u" what is the relationship between l and u? Also, this is another example of notational inconsistency that causes problems to the reader.
3. Section 3.1, what is F(e, w) and why is D(e) introduced? This was unclear to me.
4. Theorem 3.3 suggests that (if \epsilon is > 0), then to have the maximal probability of convergence, \epsilon should be very close to 0, which means that the ball B_r has radius r -> 0? This seems contradictory from Figure 2.
5. Section 4 was really unclear and I still do not understand what the symmetry group really represents. Is there an intuitive explanation why this is important?
6. Figure 5: what is a_j ?
I encourage the author to rewrite this paper for clarity. In its present form, it would be very difficult to understand the takeaways from the paper.
ICLR | Title
Learning Causal Semantic Representation for Out-of-Distribution Prediction
Abstract
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domainspecific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causality to model the two factors separately, and learn it on a single training domain for prediction without (OOD generalization) or with unsupervised data (domain adaptation) in a test domain. We prove that CSG identifies the semantic factor on the training domain, and the invariance principle of causality subsequently guarantees the boundedness of OOD generalization error and the success of adaptation. We also design novel and delicate learning methods for both effective learning and easy prediction, following the first principle of variational Bayes and the graphical structure of CSG. Empirical study demonstrates the effect of our methods to improve test accuracy for OOD generalization and domain adaptation.
1 INTRODUCTION
Deep learning has initiated a new era of artificial intelligence where the potential of machine learning models is greatly unleashed. Despite the great success, these methods heavily rely on the independently-and-identically-distributed (IID) assumption. This does not always perfectly hold in practice, and the prediction of output (label, response, outcome) y may be saliently affected in out-of-distribution (OOD) cases, even from an essentially irrelevant change to the input (covariate) x, like a position shift or rotation of the object in an image, or a change of background, illumination or style (Shen et al., 2018; He et al., 2019; Arjovsky et al., 2019). These phenomena pose serious concerns on the robustness and trustworthiness of machine learning methods and severely impede them from risk-sensitive scenarios.
Looking into the problem, although deep learning models allow extracting abstract representation for prediction with their powerful approximation capacity, the representation may be overconfident in the correlation between semantic factors s (e.g., shape of an object) and variation factors v (e.g., background, illumination, object position). The correlation may be domain-specific and spurious, and may change drastically in a new environment. So it has become a desire to learn representation that separates semantics s from variations v (Cai et al., 2019; Ilse et al., 2019). Formally, the importance of this goal is that s represents the cause of y. Causal relations better reflect the fundamental mechanisms of nature, bringing the merit to machine learning that they tend to be universal and invariant across domains (Schölkopf et al., 2012; Peters et al., 2017; Schölkopf, 2019), thus providing the most transferable and confident information to unseen domains. Causality has also been shown to lead to proper domain adaptation (Schölkopf et al., 2012; Zhang et al., 2013), lower adaptation cost and lighter catastrophic forgetting (Peters et al., 2016; Bengio et al., 2019; Ke et al., 2019).
In this work, we propose a Causal Semantic Generative model (CSG) for proper and robust OOD prediction, including OOD generalization and domain adaptation. Both tasks have supervised data from a single training domain, but domain adaptation has unsupervised test-domain data during learning, while OOD generalization has no test-domain data, including cases where queries come sequentially or adaptation is unaffordable. (1) We build the model by cautiously following the principle of causality, where we explicitly separate the latent variables into a (group of) semantic factor s and a (group of) variation factor v. We prove that under appropriate conditions CSG identifies the
semantic factor by fitting training data, even in presence of an s-v correlation. (2) By leveraging the causal invariance, we prove that a well-learned CSG is guaranteed to have a bounded OOD generalization error. The bound shows how causal mechanisms affect the error. (3) We develop a domain adaptation method using CSG and causal invariance, which suggests to fix the causal generative mechanisms and adapt the prior to the new domain. We prove the identification of the new prior and the benefit of adaptation. (4) To learn and adapt the model from data, we design novel and delicate reformulations of the Evidence Lower BOund (ELBO) objective following the graphical structure of CSG, so that the inference models required therein can also serve for prediction, and modeling and optimizing inference models in both domains can be avoided. To our best knowledge, our work is the first to identify semantic factor and leverage latent causal invariance for OOD prediction with guarantees. Empirical improvement in OOD performance and adaptation is demonstrated by experiments on multiple tasks including shifted MNIST and ImageCLEF-DA task.
2 RELATED WORK
There have been works that aim to leverage the merit of causality for OOD prediction. For OOD generalization, some works ameliorate discriminative models towards a causal behavior. Bahadori et al. (2017) introduce a regularizer that reweights input dimensions based on their approximated causal effects to the output, and Shen et al. (2018) reweight training samples by amortizing causal effects among input dimensions. They are extended to nonlinear cases (Bahadori et al., 2017; He et al., 2019) via linear-separable representations. Heinze-Deml & Meinshausen (2019) enforce inference invariance by minimizing prediction variance within each label-identity group. These methods introduce no additional modeling effort, but may also be limited to capture invariant causal mechanisms (they are non-generative) and may only behave quantitatively causal in the training domain.
For domain adaptation/generalization, methods are developed under various causal assumptions (Schölkopf et al., 2012; Zhang et al., 2013) or using learned causal relations (Rojas-Carulla et al., 2018; Magliacane et al., 2018). Zhang et al. (2013); Gong et al. (2016; 2018) also consider certain ways of mechanism shift. The considered causality is among directly observed variables, which may not be suitable for general data like image pixels where causality rather lies between data and conceptual latent factors (Lopez-Paz et al., 2017; Besserve et al., 2018; Kilbertus et al., 2018). To consider latent factors, there are domain adaptation (Pan et al., 2010; Baktashmotlagh et al., 2013; Ganin et al., 2016; Long et al., 2015; 2018) and generalization methods (Muandet et al., 2013; Shankar et al., 2018) that learn a representation with domain-invariant marginal distribution, and have achieved remarkable results. Nevertheless, Johansson et al. (2019); Zhao et al. (2019) point out that this invariance is neither sufficient nor necessary to identify the true semantics and lower the adaptation error (Supplement D). Moreover, these methods and invariance risk minimization (Arjovsky et al., 2019) also assume the invariance in the inference direction (i.e., data→ representation), which may not be as general as causal invariance in the generative direction (Section 3.2).
There are also generative methods for domain adaptation/generalization that model latent factors. Cai et al. (2019); Ilse et al. (2019) introduce a semantic factor and a domain-feature factor. They assume the two factors are independent in both the generative and inference models, which may not meet reality closely. They also do not adapt the prior for domain shift thus resort to inference invariance. Zhang et al. (2020) consider a partially observed manipulation variable, while assume its independence from the output in both the joint and posterior, and the adaptation is inconsistent with causal invariance. Atzmon et al. (2020) consider similar latent factors, but use the same (uniform) prior in all domains. These methods also do not show guarantees to identify their latent factors. Teshima et al. (2020) leverage causal invariance and adapt the prior, while also assume latent independence and do not separate the semantic factor. They require some supervised test-domain data, and their deterministic and invertible mechanism also indicates inference invariance. In addition, most domain generalization methods require multiple training domains, with exceptions (e.g., Qiao et al., 2020) that still seek to augment domains. In contrast, CSG leverages causal invariance, and has guarantee to identify the semantic factor from a single training domain, even with a correlation to the variation factor.
Generative supervised learning is not new (Mcauliffe & Blei, 2008; Kingma et al., 2014), but most works do not consider the encoded causality. Other works consider solving causality tasks, notably causal/treatment effect estimation (Louizos et al., 2017; Yao et al., 2018; Wang & Blei, 2019). The task does not focus on OOD prediction, and requires labels for both treated and controlled groups.
Disentangling latent representations is also of interest in unsupervised learning. Despite some empirical success (Chen et al., 2016; Higgins et al., 2017; Chen et al., 2018), Locatello et al. (2019) conclude that it is impossible to guarantee the disentanglement in unsupervised settings. Khemakhem et al. (2019; 2020) show an encouraging result that disentangled representation can be identified up to a permutation with a cause of the latent variable observed. But the methods cannot separate the semantic factor from variation for supervised learning, and require observing sufficiently many different values of the cause variable, making it hard to leverage labels.
Causality with latent variable has been considered in a rich literature (Verma & Pearl, 1991; Spirtes et al., 2000; Richardson et al., 2002; Hoyer et al., 2008; Shpitser et al., 2014), while most works focus on the consequence on observation-level causality. Others consider identifying the latent variable. Janzing et al. (2009); Lee et al. (2019) show the identifiability under additive noise or similar assumptions. For discrete data, a “simple” latent variable can be identified under various specifications (Janzing et al., 2011; Sgouritsa et al., 2013; Kocaoglu et al., 2018). Romeijn & Williamson (2018) leverage interventional datasets. Over these works, we step further to separate and identify the latent variable as semantic and variation factors, and show the benefit for OOD prediction.
3 THE CAUSAL SEMANTIC GENERATIVE MODEL
To develop the model seriously and soberly based on causality, we require the formal definition of causality: two variables have a causal relation, denoted as “cause→effect”, if externally intervening the cause (by changing variables out of the considered system) may change the effect, but not vice versa (Pearl, 2009; Peters et al., 2017). We then follow the logic below to build our model. 1
(1) It may be a general case that neither y → x (e.g., adding noise to the labels in a dataset does not change the images) nor x → y holds (e.g., intervening an image by e.g. breaking a camera sensor unit when taking the image, does not change how the photographer labels it), as also argued by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018). So we employ a generative model (i.e., not only modeling p(y|x)), and introduce a latent variable z to capture factors with causal relations.
(2) The latent variable z as underlying generating factors (e.g., object features like shape and texture, background and illumination in imaging) is plausible to cause both x (e.g., the change of object shape or background makes a different image, but breaking a camera sensor unit does not change the object shape or background) and y (e.g., the photographer would give a different label if the object shape, texture, etc. had been replaced by those of a different object, but
noise-corrupting the label does not change the object features). So we orient the edges in the generative direction z → (x, y), as also adopted by Mcauliffe & Blei (2008); Peters et al. (2017); Teshima et al. (2020). This is in contrast to Cai et al. (2019); Ilse et al. (2019; 2020); Castro et al. (2020) who treat y as the cause of a semantic factor, which, when y is also a noisy observation, makes unreasonable implications (e.g., adding noise to the labels in a dataset automatically changes object features and consequently the images, and changing the object features does not change the label). This difference is also discussed by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018).
(3) We attribute all x-y relation to the existence of some latent factors (“purely common cause”, Lee et al., 2019; Janzing et al., 2009), and exclude x-y edges. This can be achieved as long as z holds sufficient information of data (e.g., with shape, background etc. fixed, breaking a sensor unit does not change the label, and noise-corrupting the label does not change the image). Promoting this restriction reduces arbitrariness in explaining x-y relation and benefits the identification of z. This is in contrast to Kingma et al. (2014); Zhang et al. (2020); Castro et al. (2020) who treat y as a cause of x since no latent variable is introduced between.
1Supplement C provides more explanations on the model.
(4) Not all latent factors are the causes of y (e.g., changing the shape may alter the label, while changing the background does not). We thus split the latent variable as z = (s, v) and remove the edge v → y, where s represents the semantic factor of x that causes y, and v describes the variation or diversity in generating x. This formalizes the intuition on the concepts in Introduction.
(5) The variation v often has a relation to the semantics s, which is often a spurious correlation (e.g., desks prefer a workspace background, but they can also appear in bedrooms and beds can also appear in workspace). So we keep the undirected s-v edge. Although v is not a cause of y, modeling it explicitly is worth the effort since otherwise it would still be implicitly incorporated in s anyway through the s-v correlation. We summarize these conclusions in the following definition.
Definition 3.1 (CSG). A Causal Semantic Generative Model (CSG) p = (p(s, v), p(x|s, v), p(y|s)) is a generative model on data variables x ∈ X ⊂ RdX and y ∈ Y with semantic s ∈ S ⊂ RdS and variation v ∈ V ⊂ RdV latent variables, following the graphical structure shown in Fig. 1.
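To make the graphical structure of Fig. 1 concrete, the following toy sketch (our illustration, not the paper’s implementation; all functional forms, dimensions and parameter values are arbitrary assumptions) samples from a CSG with a correlated Gaussian prior p(s, v), a nonlinear mechanism p(x|s, v) with additive noise, and a label mechanism p(y|s) that depends on s only.

import numpy as np

rng = np.random.default_rng(0)
d_s, d_v, d_x = 2, 2, 8
A = rng.normal(size=(d_x, d_s + d_v))      # toy generative weights for p(x|s, v)
b = rng.normal(size=d_s)                   # toy weights for p(y|s)

def sample_csg(n, rho=0.7):
    # prior p(s, v): correlated Gaussians (the training-domain s-v correlation)
    cov = np.eye(d_s + d_v)
    cov[:d_s, d_s:] = rho * np.eye(d_s, d_v)
    cov[d_s:, :d_s] = rho * np.eye(d_v, d_s)
    sv = rng.multivariate_normal(np.zeros(d_s + d_v), cov, size=n)
    s, v = sv[:, :d_s], sv[:, d_s:]
    x = np.tanh(sv @ A.T) + 0.1 * rng.normal(size=(n, d_x))   # p(x|s, v): f(s, v) + noise
    p = 1.0 / (1.0 + np.exp(-(s @ b)))                        # p(y|s): Bernoulli(g(s))
    y = rng.binomial(1, p)
    return x, y, s, v

x, y, s, v = sample_csg(1000)
print(x.shape, y.mean())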
3.1 THE CAUSAL INVARIANCE PRINCIPLE
The domain-invariance of causal relations translates to the following principle for CSG:
Principle 3.2 (causal invariance). The causal generative mechanisms p(x|s, v) and p(y|s) in CSG are invariant across domains, and the change of prior p(s, v) is the only source of domain shift.
It is supported by the invariance of basic laws of nature (Schölkopf et al., 2012; Peters et al., 2017; Besserve et al., 2018; Bühlmann, 2018; Schölkopf, 2019). Other works instead introduce domain index (Cai et al., 2019; Ilse et al., 2019; 2020; Castro et al., 2020) or manipulation variables (Zhang et al., 2020; Khemakhem et al., 2019; 2020) to model distribution change explicitly. They require multiple training domains or additional observations, and such changes can also be explained under causal invariance as long as the latent variable includes all shifted factors (e.g., domain change of images can be attributed to a different preference of shape, style, texture, background, etc. and their correlations, while the processes generating image and label from them remain the same).
3.2 COMPARISON WITH INFERENCE INVARIANCE
Domain-invariant-representation-based adaptation and generalization methods, and invariant risk minimization (Arjovsky et al., 2019) for domain generalization, use a shared feature extractor across domains. This effectively assumes the invariance of the process in the other direction, i.e., inferring the latent representation from data. We note that in its supportive examples (e.g., inferring the object position from an image, or extracting the fundamental frequency from a vocal audio), generating mechanisms are nearly deterministic and invertible, so that the posterior is almost determined by the inverse function, and causal invariance implies inference invariance. For noisy or degenerate mechanisms (Fig. 2), am-
biguity occurs during inference since there may be multiple values of a latent feature that generate the same observation. The inferred feature would notably rely on the prior through the Bayes rule. Since the prior changes across domains, the inference rule then changes by nature, which challenges the existence of a domain-shared feature extractor. In this case, causal invariance is more reliable than inference invariance.
To leverage causal invariance, we adjust the prior conservatively for OOD generalization (CSG-ind) and data-driven for domain adaptation (CSG-DA), so together with the invariant generative mechanisms, it gives a different and more reliable inference rule than that following inference invariance.
4 METHOD
We develop learning, adaptation and prediction methods for OOD generalization and domain adaptation using CSG following the causal invariance Principle 3.2, and devise practical objectives using variational Bayes. Supplement E.1 details all the derivations.
4.1 METHOD FOR OOD GENERALIZATION
For OOD generalization, a CSG p = (p(s, v), p(x|s, v), p(y|s)) first needs to learn from supervised data drawn from an underlying data distribution p∗(x, y) on the training domain. Maximizing the likelihood Ep∗(x,y)[log p(x, y)] is intractable since p(x, y) given by the CSG p is hard to estimate effectively. We thus adopt the Evidence Lower BOund (ELBO) Lq,p(x, y) := Eq(s,v|x,y)[log (p(s, v, x, y)/q(s, v|x, y))] (Jordan et al., 1999; Wainwright et al., 2008) as a tractable surrogate, which requires an auxiliary inference model q(s, v|x, y) to estimate the expectation effectively. Maximizing Lq,p w.r.t q drives q towards the posterior p(s, v|x, y) and meanwhile makes Lq,p a tighter lower bound of log p(x, y). The expected ELBO Ep∗(x,y)[Lq,p(x, y)] then drives p(x, y) towards p∗(x, y).
However, the subtlety with supervised learning is that after fitting data, evaluating p(y|x) for prediction is still hard. We thus propose to employ a model for q(s, v, y|x) instead. The required inference model can be then expressed as q(s, v|x, y) = q(s, v, y|x)/q(y|x) where q(y|x) =∫ q(s, v, y|x) dsdv. It reformulates the expected ELBO as:
Ep∗(x,y)[Lq,p(x, y)] = Ep∗(x)Ep∗(y|x)[log q(y|x)] + Ep∗(x)Eq(s,v,y|x)[ (p∗(y|x)/q(y|x)) log (p(s, v, x, y)/q(s, v, y|x)) ].  (1)
The first term is the common cross entropy loss (negative) driving q(y|x) towards p∗(y|x). Once this is achieved, the second term becomes the expected ELBO Ep∗(x)[Lq(s,v,y|x),p(x)] that drives q(s, v, y|x) towards p(s, v, y|x) (and p(x) towards p∗(x)). Since the target p(s, v, y|x) admits the factorization p(s, v|x)p(y|s) (since (v, x) ⊥ y|s ) where p(y|s) is already given by the CSG, we can further ease the modeling of q(s, v, y|x) as q(s, v|x)p(y|s). The ELBO is then reformulated as:
Lq,p(x, y) = log q(y|x) + (1/q(y|x)) Eq(s,v|x)[ p(y|s) log ( p(s, v)p(x|s, v) / q(s, v|x) ) ],  (2)
where q(y|x) = Eq(s,v|x)[p(y|s)]. The CSG p and q(s, v|x) are to be optimized. The expectations can be estimated by Monte Carlo, and their gradients can be estimated using the reparameterization trick (Kingma & Welling, 2014). When well optimized, q(s, v|x) well approximates p(s, v|x), so q(y|x) then well approximates p(y|x) = Ep(s,v|x)[p(y|s)] for prediction.
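As a concrete illustration, the sketch below gives a Monte Carlo estimator of the objective in Eqn. (2) for a fully Gaussian toy CSG; all distributions, parameter values and the form of q(s, v|x) are our assumptions for demonstration (in practice the prior, mechanisms and inference model would be neural networks trained by gradient ascent with the reparameterization trick, e.g., in PyTorch).

import numpy as np

rng = np.random.default_rng(0)
rho, sigma_x, sigma_y = 0.7, 0.3, 0.2                  # assumed toy CSG parameters
Sigma = np.array([[1.0, rho], [rho, 1.0]])             # prior p(s, v)
Sigma_inv, Sigma_logdet = np.linalg.inv(Sigma), np.linalg.slogdet(Sigma)[1]

def gauss_logpdf(z, mean, std):
    return -0.5 * np.log(2 * np.pi * std ** 2) - 0.5 * ((z - mean) / std) ** 2

def elbo(x, y, n_mc=512):
    # q(s, v | x): toy amortized diagonal Gaussian (its parameters would be learned)
    mu, std = np.array([0.5 * x, 0.5 * x]), np.array([0.6, 0.6])
    sv = mu + std * rng.normal(size=(n_mc, 2))          # reparameterized samples
    s, v = sv[:, 0], sv[:, 1]
    log_q = gauss_logpdf(sv, mu, std).sum(axis=1)
    log_prior = -np.log(2 * np.pi) - 0.5 * Sigma_logdet \
                - 0.5 * np.einsum('ni,ij,nj->n', sv, Sigma_inv, sv)
    log_px = gauss_logpdf(x, s + v, sigma_x)             # p(x | s, v) = N(x; s + v, sigma_x^2)
    p_y = np.exp(gauss_logpdf(y, s, sigma_y))            # p(y | s)   = N(y; s, sigma_y^2)
    q_y_x = p_y.mean()                                    # q(y | x) = E_q[p(y | s)]
    inner = (p_y * (log_prior + log_px - log_q)).mean() / q_y_x
    return np.log(q_y_x) + inner                          # Monte Carlo estimate of Eqn. (2)

print(elbo(x=0.8, y=0.4))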
CSG-ind To actively mitigate the spurious s-v correlation from the training domain, we also consider a CSG with an independent prior p⊥(s, v) := p(s)p(v) for prediction in the unknown test domain, where p(s) and p(v) are the marginals of p(s, v). The independent prior p⊥(s, v) encourages the model to stay neutral on the s-v correlation. It has a larger entropy than p(s, v) (Cover & Thomas, 2006, Theorem 2.6.6), so it reduces the information of the training-domain-specific prior. The model then relies more on the invariant generative mechanisms, thus better leverages causal invariance and gives more reliable prediction than that following inference invariance.
For the method, note that the prediction is given by p⊥(y|x) = Ep⊥(s,v|x)[p(y|s)], so we use an inference model q⊥(s, v|x) that approximates p⊥(s, v|x). However, learning on the training domain still requires the original inference model q(s, v|x). To save the cost of building and learning two inference models, we propose to use q⊥(s, v|x) to represent q(s, v|x). Noting that their targets are related by p(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) p⊥(s, v|x), we formulate q(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) q⊥(s, v|x) accordingly, so that this q(s, v|x) achieves its target once q⊥(s, v|x) does. The ELBO then becomes:
Lq,p(x, y) = log π(y|x) + (1/π(y|x)) Eq⊥(s,v|x)[ (p(s, v)/p⊥(s, v)) p(y|s) log ( p⊥(s, v)p(x|s, v) / q⊥(s, v|x) ) ],  (3)
where π(y|x) := Eq⊥(s,v|x)[ (p(s, v)/p⊥(s, v)) p(y|s) ]. The CSG p and q⊥(s, v|x) are to be optimized (note that p⊥(s, v) is determined by p(s, v) in the CSG p). Prediction is given by p⊥(y|x) ≈ Eq⊥(s,v|x)[p(y|s)].
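For CSG-ind, the only change relative to the estimator sketched above is the importance ratio p(s, v)/p⊥(s, v) inside the expectations of Eqn. (3). The following continuation (again a toy Gaussian illustration with assumed parameters; it reuses gauss_logpdf, Sigma_inv, Sigma_logdet, sigma_x and sigma_y from the previous sketch) shows the reweighting.

def log_ratio(sv):
    # log p(s, v) - log p_ind(s, v) for the toy Gaussian prior above,
    # where p_ind(s, v) = p(s)p(v) keeps the marginals but drops the correlation.
    log_joint = -np.log(2 * np.pi) - 0.5 * Sigma_logdet \
                - 0.5 * np.einsum('ni,ij,nj->n', sv, Sigma_inv, sv)
    log_ind = gauss_logpdf(sv, 0.0, 1.0).sum(axis=1)    # unit marginals of Sigma
    return log_joint - log_ind

def elbo_ind(x, y, n_mc=512):
    mu, std = np.array([0.5 * x, 0.5 * x]), np.array([0.6, 0.6])   # now models q_ind(s, v | x)
    sv = mu + std * rng.normal(size=(n_mc, 2))
    s, v = sv[:, 0], sv[:, 1]
    w = np.exp(log_ratio(sv)) * np.exp(gauss_logpdf(y, s, sigma_y))   # ratio * p(y|s)
    log_q = gauss_logpdf(sv, mu, std).sum(axis=1)
    log_ind = gauss_logpdf(sv, 0.0, 1.0).sum(axis=1)
    log_px = gauss_logpdf(x, s + v, sigma_x)
    pi_y_x = w.mean()                                   # pi(y | x) of Eqn. (3)
    return np.log(pi_y_x) + (w * (log_ind + log_px - log_q)).mean() / pi_y_x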
4.2 METHOD FOR DOMAIN ADAPTATION
When unsupervised data is available from an underlying data distribution p̃∗(x) on the test domain, we can leverage it for adaptation. According to the causal invariance Principle 3.2, we only need to adapt for the test-domain prior p̃(s, v) and the corresponding inference model q̃(s, v|x), while the causal mechanisms p(x|s, v) and p(y|s) are not optimized. Adaptation is done by fitting the test
data via maximizing Ep̃∗(x)[Lq̃,p̃(x)], where the ELBO is in the standard form: Lq̃,p̃(x) = Eq̃(s,v|x) [ log ( p̃(s, v)p(x|s, v)/q̃(s, v|x) )] . (4)
Prediction is given by p̃(y|x) ≈ Eq̃(s,v|x)[p(y|s)]. Similar to the case of CSG-ind, we need q̃(s, v|x) for prediction, but q(s, v|x) is still required for learning on the training domain. When data from both domains are available during learning, we can save the effort of modeling and learning q(s, v|x) using a similar technique. We formulate it using q̃(s, v|x) as q(s, v|x) = (p̃(x)/p(x)) (p(s, v)/p̃(s, v)) q̃(s, v|x), following the same relation between their targets, and the ELBO on the training domain becomes:
Lq,p(x, y) = log π(y|x) + (1/π(y|x)) Eq̃(s,v|x)[ (p(s, v)/p̃(s, v)) p(y|s) log ( p̃(s, v)p(x|s, v) / q̃(s, v|x) ) ],  (5)
where π(y|x) := Eq̃(s,v|x)[ (p(s, v)/p̃(s, v)) p(y|s) ]. The CSG p and q̃(s, v|x) are to be optimized (but not p̃(s, v), which is only optimized by the test-domain objective Eqn. 4). The resulting method, termed CSG-DA, solves both optimizations (4, 5) simultaneously.
For implementing the three methods, note that only one inference model is required in each case. Supplement E.2 shows its implementation from a general discriminative model (e.g., how to select its hidden nodes as s and v). In practice x often has a much larger dimension than y, making the supervised part of the training-domain ELBO (i.e., the first term in its formulation Eq. (1)) scale smaller than the unsupervised part. So we include an additional cross entropy loss in the objectives.
5 THEORY
We now establish guarantee for the methods on identifying the semantic factor and the subsequent merits for OOD generalization and domain adaptation. We only consider the infinite-data regime to isolate another source of error from finite data. Supplement A shows all the proofs. Identifiability is hard to achieve for latent variable models (Koopmans & Reiersol, 1950; Murphy, 2012; Yacoby et al., 2019; Locatello et al., 2019), since it is a task beyond modeling observational relations (Janzing et al., 2009; Peters et al., 2017). Assumptions are required to draw definite conclusions.
Assumption 5.1 (additive noise). There exist nonlinear functions f and g with bounded derivatives up to third-order, and independent random variables µ and ν, such that p(x|s, v) = pµ(x− f(s, v)), and p(y|s) = pν(y − g(s)) for continuous y or p(y|s) = Cat(y|g(s)) for categorical y.
This structure disables describing a bivariate joint distribution in both generating directions (Zhang & Hyvärinen (2009, Theorem 8), Peters et al. (2014, Proposition 23)), and is widely adopted in directed causal discovery (Janzing et al., 2009; Bühlmann et al., 2014). CSG needs this since it should make the causal direction exclusive. It is also easy to implement with deep models (Kingma & Welling, 2014), so does not essentially restrict model capacity.
Assumption 5.2 (bijectivity). Function f is bijective and g is injective.
It is a common assumption for identifiability (Janzing et al., 2009; Shalit et al., 2017; Khemakhem et al., 2019; Lee et al., 2019). Under Assumption 5.1, it is a sufficient condition (Peters et al., 2014, Proposition 17; Peters et al., 2017, Proposition 7.4) of causal minimality (Peters et al., 2014, p.2012; Peters et al., 2017, Definition 6.33), a fundamental requirement for identifiability (Peters et al., 2014, Proposition 7; Peters et al., 2017, p.109). Particularly, s and v are otherwise allowed to have dummy dimensions that f and g simply ignore, raising another ambiguity against identifiability. On the other hand, according to the commonly acknowledged manifold hypothesis (Weinberger & Saul, 2006; Fefferman et al., 2016) that data tends to lie on a lower-dimensional manifold embedded in the data space, we can take X as the manifold and such a bijection exists as a coordinate map, which is an injection to the original data space (thus allowing dS + dV < dX ).
5.1 IDENTIFIABILITY THEORY
We first formalize the goal of identifying the semantic factor.
Definition 5.3 (semantic-equivalence). We say two CSGs p and p′ are semantic-equivalent, if there exists a homeomorphism2 Φ on S × V , such that (i) its output dimensions in S is constant of v:
2A transformation is a homeomorphism if it is a continuous bijection with continuous inverse.
ΦS(s, v) = ΦS(s) for any v ∈ V , and (ii) it acts as a reparameterization from p to p′: Φ#[ps,v] = p′s,v , p(x|s, v) = p′(x|Φ(s, v)) and p(y|s) = p′(y|ΦS(s)).
It is an equivalent relation if V is connected and is either open or closed in RdV (Supplement A.1). Here, Φ#[ps,v] denotes the pushed-forward distribution3 by Φ, i.e. the distribution of the transformed random variable Φ(s, v) when (s, v) ∼ ps,v . As a reparameterization, Φ allows the two models to have different latent-variable parameterizations while inducing the same distribution on the observed data variables (x, y) (Supplement Lemma A.2). At the heart of the definition, the vconstancy of ΦS implies that Φ is semantic-preserving: one model does not mix the other’s v into its s, so that the s variables of both models hold equivalent information.
We say that a learned CSG p identifies the semantic factor if it is semantic-equivalent to the groundtruth CSG p∗. This identification cannot be characterized by the statistical independence between s and v (as in Cai et al. (2019); Ilse et al. (2019); Zhang et al. (2020)), which is not sufficient (Locatello et al., 2019) nor necessary (due to the existence of spurious correlation). Another related concept is disentanglement. It requires that a semantic transformation on x changes the learned s only (Higgins et al., 2018; Besserve et al., 2020), while the identification here does not require the learned v to be constant of the ground-truth s.
To identify the semantic factor, the ground-truth model could at most provide its information via the data distribution p∗(x, y). Although semantic-equivalent CSGs induce the same distribution on (x, y), the inverse is nontrivial. The following theorem shows that the semantic-identifiability can be achieved under appropriate conditions. Theorem 5.4 (semantic-identifiability). With Assumptions 5.1 and 5.2, a well-learned CSG p with p(x, y) = p∗(x, y) is semantic-equivalent to the ground-truth CSG p∗, if log p(s, v) and log p∗(s, v) have bounded derivatives up to the second-order, and that4 (i) 1σ2µ → ∞ where σ 2 µ := E[µ>µ], or (ii) pµ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution).
Remarks. (1) The requirement on p(s, v) and p∗(s, v) excludes extreme training data that show a deterministic s-v relation, which makes the (s, v) density functions unbounded and discontinuous. In that case (e.g., all desks appear in workspace and all beds in bedrooms), one cannot tell whether the label y is caused by s (e.g., the shape) or by v (e.g., the background).
(2) In condition (i), 1σ2µ measures the intensity of the causal mechanism p(x|s, v). A strong p(x|s, v) helps disambiguating values of (s, v) in generating a given x. The condition makes p(x|s, v) so strong that it is almost deterministic and invertible, so inference invariance also holds (Section 3.2). Supplement A.2 provides a quantitative reference of large intensity for a practical consideration, and Supplement B gives a non-asymptotic extension showing how the intensity trades-off the tolerance of equalities in Definition 5.3. Condition (ii) covers more than inference invariance. It roughly implies that different values of (s, v) a.s. produce different distributions p(x|s, v) on X , so their roles in generating x become clear which helps identification.
(3) The theorem does not contradict the impossibility result by Locatello et al. (2019), which considers disentangling each latent dimension with an unconstrained (s, v)→ (x, y), while we identify s as a whole with the edge v → y removed which breaks the s-v symmetry.
5.2 OOD GENERALIZATION THEORY
The causal invariance Principle 3.2 forms the ground-truth CSG on the test domain as p̃∗ = (p̃∗(s, v), p∗(x|s, v), p∗(y|s)) with the new ground-truth prior p̃∗(s, v), which gives the optimal predictor Ẽ∗[y|x]⁵ on the test domain. The principle also leads to the invariance of identified causal mechanisms, which shows that the OOD generalization error of a CSG is bounded:
Theorem 5.5 (OOD generalization error). With Assumptions 5.1 and 5.2, for a semantically-identified CSG p on the training domain with reparameterization Φ, we have, up to O(σ_µ²), that
3The definition of Φ#[ps,v] requires Φ to be measurable. This is satisfied by the continuity of Φ as a homeomorphism (as long as the considered σ-field is the Borel σ-field) (Billingsley, 2012, Theorem 13.2).
4To be precise, the semantic-equivalence conclusions are that the equalities in Definition 5.3 hold asymptotically in the limit 1/σ_µ² → ∞ for condition (i), and hold a.e. for condition (ii).
5For categorical y, the expectation of y is taken under the one-hot representation.
for any x ∈ supp(p_x) ∩ supp(p̃∗_x),
|E[y|x] − Ẽ∗[y|x]| ≤ σ_µ² ‖∇g(s)‖₂ ‖J_{f⁻¹}(x)‖₂² ‖∇ log(p(s, v)/p̃(s, v))‖₂ |_{(s,v)=f⁻¹(x)},    (6)
where supp denotes the support of a distribution, J_{f⁻¹} is the Jacobian matrix of f⁻¹, and p̃_{s,v} := Φ#[p̃∗_{s,v}] is the test-domain prior under the parameterization of the identified CSG p.⁶
The result shows that when the causal mechanism p(x|s, v) is strong, especially in the extreme case σ_µ = 0 where inference invariance also holds, it dominates prediction over the prior and the generalization error diminishes. In more general cases where only causal invariance holds, the change of prior shifts the prediction rule. The prior-change term ‖∇ log(p(s, v)/p̃(s, v))‖₂ measures the hardness or severity of the OOD shift. It diminishes in IID cases, and makes the bound lose its effect when the two priors do not share their support. Using a CSG to fit training data enforces causal invariance and the other assumptions, so its E[y|x] behaves more faithfully in low-p∗(x) areas and the boundedness becomes more plausible in practice. CSG-ind further actively uses an independent prior, whose larger support covers more candidates for p̃_{s,v}.
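To make the interplay of the three factors in the bound concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): a 1-D CSG with d_S = 1, d_V = 0, a hand-picked bijection f(s) = sinh(s), g(s) = 2s, and Gaussian priors, for which every quantity in Eq. (6) has a closed form.

import numpy as np

# Toy evaluation of the right-hand side of Eq. (6); all modelling choices are assumptions.
sigma_mu2 = 0.1                              # noise intensity E[mu^2] of p(x|s) = N(f(s), sigma_mu2)

f_inv = np.arcsinh                           # f(s) = sinh(s) is bijective; f^{-1}(x) = arcsinh(x)
J_f_inv = lambda x: 1.0 / np.sqrt(1.0 + x**2)    # derivative of f^{-1}
grad_g = lambda s: 2.0                       # g(s) = 2 s, so |grad g| = 2

# training prior p(s) = N(0, 1), test prior p~(s) = N(1, 1):
# grad log(p(s)/p~(s)) = -s + (s - 1) = -1 for every s.
grad_log_prior_ratio = lambda s: -1.0

def bound(x):
    s = f_inv(x)
    return sigma_mu2 * abs(grad_g(s)) * J_f_inv(x)**2 * abs(grad_log_prior_ratio(s))

for x in [0.0, 1.0, 3.0]:
    print(f"x = {x:4.1f}   |E[y|x] - E~*[y|x]|  <=  {bound(x):.4f}")
# The bound shrinks as sigma_mu2 -> 0 (strong mechanism) and vanishes when the two priors coincide.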
5.3 DOMAIN ADAPTATION THEORY
In cases of a weak causal mechanism or a drastic prior change, the new ground-truth prior p̃∗_{s,v} is important for prediction. The domain adaptation method learns a new prior p̃_{s,v} by fitting unsupervised test-domain data, with the causal mechanisms shared. Once the mechanisms are identified, p̃∗_{s,v} can also be identified under the learned parameterization, and prediction can be made precise.
Theorem 5.6 (domain adaptation error). Under the conditions of Theorem 5.4, for a semantically-identified CSG p on the training domain with reparameterization Φ, if its new prior p̃_{s,v} for the test domain is well-learned with p̃(x) = p̃∗(x), then p̃_{s,v} = Φ#[p̃∗_{s,v}], and Ẽ[y|x] = Ẽ∗[y|x] for any x ∈ supp(p̃∗_x).
Different from existing domain adaptation bounds (Supplement D), Theorems 5.5 and 5.6 allow different inference models in the two domains, thus go beyond inference invariance.
6 EXPERIMENTS
For baselines of OOD generalization, apart from the conventional supervised learning optimizing cross entropy (CE), we also consider a causal discriminative method CNBB (He et al., 2019), and a generative method supervised VAE (sVAE) which is a counterpart of CSG that does not separate its latent variable into s and v. For domain adaptation, we consider well-acknowledged DANN (Ganin et al., 2016), DAN (Long et al., 2015) and CDAN (Long et al., 2018) methods implemented in the dalib package (Jiang et al., 2020), and also sVAE using a similar method as CSG-DA. All methods share the same optimization setup. We align the scale of the CE term in the objectives of all methods, and tune their hyperparameters to lie on the margin that makes the final accuracy near 1 on a validation set from the training domain. See Supplement F for details.
6.1 SHIFTED MNIST
We consider an OOD prediction task on MNIST to classify digits “0” and “1”. In the training data, “0”s are horizontally shifted at random by δ pixels with δ ∼ N(−5, 1²), and “1”s by δ ∼ N(5, 1²) pixels. We consider two test domains where the digits are either not moved, or are shifted by δ ∼ N(0, 2²) pixels. Both domains have balanced classes. We implement all methods using multilayer perceptrons, which are not naturally shift-invariant. We use a larger architecture for the discriminative and domain adaptation methods to compensate for the additional generative components of the generative methods.
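For concreteness, a minimal sketch of this data construction is given below (our own illustration; the use of numpy.roll for the horizontal shift and the rounding of δ to whole pixels are assumptions, not details taken from the paper's released code).

import numpy as np

def shift_digit(img, delta):
    """Horizontally shift a 28x28 image by round(delta) pixels (positive = right)."""
    return np.roll(img, int(round(delta)), axis=1)

def make_training_domain(imgs0, imgs1, rng):
    """imgs0 / imgs1: arrays of '0' / '1' digit images taken from MNIST."""
    xs, ys = [], []
    for img in imgs0:                           # "0"s shifted by delta ~ N(-5, 1^2)
        xs.append(shift_digit(img, rng.normal(-5.0, 1.0))); ys.append(0)
    for img in imgs1:                           # "1"s shifted by delta ~ N(+5, 1^2)
        xs.append(shift_digit(img, rng.normal(+5.0, 1.0))); ys.append(1)
    return np.stack(xs), np.array(ys)

def make_test_domain(imgs, labels, rng, scale=2.0):
    """Test domain: both classes shifted by delta ~ N(0, scale^2); scale=0 keeps digits unmoved."""
    xs = [shift_digit(img, rng.normal(0.0, scale)) if scale > 0 else img for img in imgs]
    return np.stack(xs), np.asarray(labels)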
The OOD performance is shown in Table 1. For OOD generalization, CSG gives more genuine predictions in unseen domains, thanks to the identification of the semantic factor. CSG-ind performs even better, demonstrating the merit of approaching a CSG with an independent prior. Other methods are more significantly misled by the position factor from the spurious correlation. CNBB ameliorates the position bias, but not as thoroughly without explicit structures for causal mechanisms. CSG
6The 2-norm ‖·‖2 for matrices refers to the induced operator norm (not the Frobenius norm).
also outperforms sVAE, showing the benefit of separating semantics from variation and modeling the variation explicitly, so that the model can consciously drive the semantic representation into s. For domain adaptation, existing methods differ a lot, and find it hard to perform well on both test domains. When identification fails, adaptation sometimes even worsens the result, as the misleading representation based on position gets strengthened on the unsupervised test data. CSG benefits from adaptation by leveraging test data in a proper way that identifies the semantics.
6.2 IMAGECLEF-DA
ImageCLEF-DA (ima, 2014) is a standard benchmark dataset for the ImageCLEF 2014 domain adaptation challenge. We select a pair of adaptation tasks between two of its domains: Caltech-256 and Pascal VOC 2012. Each domain has 12 classes and 600 images following a different distribution from each other. We adopt the same setup as in Long et al. (2018), including the ResNet50 structure (He et al., 2016) pretrained on ImageNet as the backbone of the discriminative/inference model. For generative methods, we leverage the DCGAN generator (Radford et al., 2015) pretrained on Cifar10.
Table 2 shows the results. We see that CSG(-ind) achieves the best OOD generalization result, and performs comparably with modern domain adaptation methods. On this task, the underlying causal mechanism may be very noisy (e.g., photos taken both indoors and outdoors count toward the aircraft class), making identification hard, so CSG-DA does not make a salient improvement.
7 CONCLUSION AND DISCUSSION
We tackle OOD generalization and domain adaptation tasks by proposing a Causal Semantic Generative model (CSG), which builds upon causal reasoning and models the semantic and variation factors separately while allowing their correlation. Using the invariance principle of causality, we develop effective and delicate methods for learning, adaptation and prediction, and prove the identification of the semantic factor, the boundedness of the OOD generalization error, and the success of adaptation under appropriate conditions. Experiments show improved performance in both tasks.
The consideration of separating semantics from variation extends to broader examples regarding robustness. Convolutional neural networks are found to change their predictions under a different texture but the same shape (Geirhos et al., 2019; Brendel & Bethge, 2019). Adversarial vulnerability (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016) extends variation factors to human-imperceptible features, i.e. adversarial noise, which is shown to have a strong spurious correlation with semantics (Ilyas et al., 2019). The separation also matters for fairness, where a sensitive variation factor may change the prediction due to a spurious correlation. Our methods are potentially beneficial in these examples.
A PROOFS
We first introduce some handy concepts and results to make the proof succinct. We begin with extended discussions on CSG.
Definition A.1. A homeomorphism Φ on S × V is called a reparameterization from CSG p to CSG p′, if Φ#[p_{s,v}] = p′_{s,v}, p(x|s, v) = p′(x|Φ(s, v)) and p(y|s) = p′(y|Φ_S(s, v)) for any (s, v) ∈ S × V. A reparameterization Φ is called semantic-preserving if its output dimensions in S are constant of v: Φ_S(s, v) = Φ_S(s) for any v ∈ V.
Note that a reparameterization does not necessarily have its output dimensions in S, i.e. Φ_S(s, v), constant of v. The condition that p(y|s) = p′(y|Φ_S(s, v)) for any v ∈ V does not imply that Φ_S(s, v) is constant of v, since p′(y|s′) may ignore the change of s′ = Φ_S(s, v) caused by the change of v. The following lemma shows the meaning of a reparameterization: it allows a CSG to vary while inducing the same distribution on the observed data variables (x, y) (i.e., it holds the same effect on describing data).
Lemma A.2. If there exists a reparameterization Φ from CSG p to CSG p′, then p(x, y) = p′(x, y).
Proof. By the definition of a reparameterization, we have:
p(x, y) = ∫ p(s, v) p(x|s, v) p(y|s) ds dv = ∫ Φ⁻¹#[p′_{s,v}](s, v) p′(x|Φ(s, v)) p′(y|Φ_S(s, v)) ds dv
= ∫ p′_{s,v}(s′, v′) p′(x|s′, v′) p′(y|s′) ds′ dv′ = p′(x, y),
where we used the variable substitution (s′, v′) := Φ(s, v) in the second-last equality. Note that by the definition of the pushed-forward distribution and the bijectivity of Φ, Φ#[p_{s,v}] = p′_{s,v} implies p_{s,v} = Φ⁻¹#[p′_{s,v}], and ∫ f(s′, v′) p′_{s,v}(s′, v′) ds′ dv′ = ∫ f(Φ(s, v)) Φ⁻¹#[p′_{s,v}](s, v) ds dv (this can also be verified deductively using the rule of change of variables, i.e. Lemma A.4 below).
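As a quick numerical sanity check of the lemma (our own toy construction, not from the paper), the sketch below builds two linear-Gaussian CSGs related by the explicit semantic-preserving reparameterization Φ(s, v) = (2s, s + v) and verifies that they induce the same distribution on (x, y).

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])       # training prior p(s, v) with s-v correlation

# CSG p:  x = s + v + 0.1*eps,  y = 2*s + 0.1*nu
sv = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
s, v = sv[:, 0], sv[:, 1]
x = s + v + 0.1 * rng.standard_normal(n)
y = 2 * s + 0.1 * rng.standard_normal(n)

# Reparameterization Phi(s, v) = (2s, s + v): its S-output depends on s only.
A = np.array([[2.0, 0.0], [1.0, 1.0]])
# CSG p': prior = Phi_#[p_{s,v}] = N(0, A Sigma A^T),  x = v',  y = s' (same noise levels)
sv2 = rng.multivariate_normal([0.0, 0.0], A @ Sigma @ A.T, size=n)
s2, v2 = sv2[:, 0], sv2[:, 1]
x2 = v2 + 0.1 * rng.standard_normal(n)
y2 = s2 + 0.1 * rng.standard_normal(n)

# Both models should induce the same distribution on (x, y).
print(np.cov(np.stack([x, y])))     # ~ [[3.01, 3.0], [3.0, 4.01]]
print(np.cov(np.stack([x2, y2])))   # ~ the same matrix, up to Monte Carlo error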
The definition of semantic-equivalence (Definition 5.3) can be rephrased as the existence of a semantic-preserving reparameterization. With appropriate model assumptions, we can show that any reparameterization between two CSGs is semantic-preserving, so that two CSGs cannot be converted to each other by a reparameterization that mixes s with v.
Lemma A.3. For two CSGs p and p′, if p′(y|s) has a statistic M′(s) that is an injective function of s, then any reparameterization Φ from p to p′, if it exists, has its Φ_S constant of v.
Proof. Let Φ = (Φ_S, Φ_V) be any reparameterization from p to p′. Then the condition that p(y|s) = p′(y|Φ_S(s, v)) for any v ∈ V indicates that M(s) = M′(Φ_S(s, v)). If there exist s ∈ S and v^(1) ≠ v^(2) ∈ V such that Φ_S(s, v^(1)) ≠ Φ_S(s, v^(2)), then M′(Φ_S(s, v^(1))) ≠ M′(Φ_S(s, v^(2))) since M′ is injective. This violates M(s) = M′(Φ_S(s, v)), which requires both M′(Φ_S(s, v^(1))) and M′(Φ_S(s, v^(2))) to be equal to M(s). So Φ_S(s, v) must be constant of v.
We then introduce two mathematical facts.
Lemma A.4 (rule of change of variables). Let z be a random variable on a Euclidean space R^{d_Z} with density function p_z(z), and let Φ be a homeomorphism on R^{d_Z} whose inverse Φ⁻¹ is differentiable. Then the distribution of the transformed random variable z′ = Φ(z) has a density function Φ#[p_z](z′) = p_z(Φ⁻¹(z′)) |J_{Φ⁻¹}(z′)|, where |J_{Φ⁻¹}(z′)| denotes the absolute value of the determinant of the Jacobian matrix (J_{Φ⁻¹}(z′))_{ia} := ∂(Φ⁻¹)_a(z′)/∂z′_i of Φ⁻¹ at z′.
Proof. See e.g., Billingsley (2012, Theorem 17.2). Note that a homeomorphism is (Borel) measurable since it is continuous (Billingsley, 2012, Theorem 13.2), so the definition of Φ#[pz] is valid.
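The rule can be checked numerically; the following sketch (ours, with the arbitrary choice Φ = tanh) compares a histogram of transformed samples against the density given by the rule.

import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(500_000)
zp = np.tanh(z)                                   # z' = Phi(z), a homeomorphism from R to (-1, 1)

def pushed_density(zp):
    # Phi^{-1}(z') = arctanh(z'),  |J_{Phi^{-1}}(z')| = 1 / (1 - z'^2)
    pz = np.exp(-np.arctanh(zp) ** 2 / 2) / np.sqrt(2 * np.pi)   # standard normal density of Phi^{-1}(z')
    return pz / (1.0 - zp ** 2)

hist, edges = np.histogram(zp, bins=50, range=(-0.99, 0.99), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - pushed_density(centers))))   # small, up to binning / Monte Carlo error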
Lemma A.5. Let µ be a random variable whose characteristic function is a.e. non-zero. For two functions f and f′ on the same space, we have: f ∗ p_µ = f′ ∗ p_µ ⟺ f = f′ a.e., where (f ∗ p_µ)(x) := ∫ f(x − µ) p_µ(µ) dµ denotes convolution.
Proof. The function equality f ∗ p_µ = f′ ∗ p_µ leads to the equality under Fourier transformation, F[f ∗ p_µ] = F[f′ ∗ p_µ], which gives F[f]F[p_µ] = F[f′]F[p_µ]. Since F[p_µ] is the characteristic function of p_µ, the condition that it is a.e. non-zero indicates that F[f] = F[f′] a.e., thus f = f′ a.e. See also Khemakhem et al. (2019, Theorem 1).
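The lemma says that convolution with a noise density whose characteristic function never vanishes is injective. The sketch below (our own illustration on a discretized grid; it uses Laplace noise, whose characteristic function 1/(1 + b²ω²) is nowhere zero and is numerically convenient, rather than the Gaussian example) recovers f from f ∗ p_µ by division in the Fourier domain.

import numpy as np

xs = np.linspace(-10, 10, 2048)
dx = xs[1] - xs[0]
b = 0.5
p_mu = np.exp(-np.abs(xs) / b) / (2 * b)          # Laplace noise density

f1 = np.exp(-(xs - 2) ** 2 / 2) + 0.5 * np.exp(-(xs + 3) ** 2 / 0.5)
f2 = np.exp(-xs ** 2 / 4)

kernel_hat = np.fft.fft(np.fft.ifftshift(p_mu)) * dx     # discrete approximation of F[p_mu]

def conv(f):                                              # f * p_mu on the grid (circular approximation)
    return np.real(np.fft.ifft(np.fft.fft(f) * kernel_hat))

def deconv(h):                                            # valid because kernel_hat never vanishes
    return np.real(np.fft.ifft(np.fft.fft(h) / kernel_hat))

print(np.max(np.abs(deconv(conv(f1)) - f1)))              # ~ 0: f is recovered from f * p_mu
print(np.max(np.abs(conv(f1) - conv(f2))))                # clearly > 0: different f give different f * p_mu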
A.1 PROOF OF THE EQUIVALENCE RELATION
Proposition A.6. The semantic-equivalence defined in Definition 5.3 is an equivalence relation if V is connected and is either open or closed in RdV .
Proof. Let Φ be a semantic-preserving reparameterization from one CSG p = (p(s, v), p(x|s, v), p(y|s)) to another p′ = (p′(s, v), p′(x|s, v), p′(y|s)). It has its ΦS constant of v, so we can write Φ(s, v) = (ΦS(s),ΦV(s, v)) =: (φ(s), ψs(v)).
(1) We first show that φ, and ψ_s for any s ∈ S, are homeomorphisms on S and V, respectively, and that Φ⁻¹(s′, v′) = (φ⁻¹(s′), ψ⁻¹_{φ⁻¹(s′)}(v′)).
• Since Φ(S × V) = S × V, we have φ(S) = Φ_S(S) = S, so φ is surjective.
• Suppose that there exists s′ ∈ S such that φ⁻¹(s′) = {s^(i)}_{i∈I} contains multiple distinct elements.
1. Since Φ is surjective, for any v′ ∈ V, there exist i ∈ I and v ∈ V such that (s′, v′) = Φ(s^(i), v) = (φ(s^(i)), ψ_{s^(i)}(v)), which means that ⋃_{i∈I} ψ_{s^(i)}(V) = V.
2. Since Φ is injective, the sets {ψ_{s^(i)}(V)}_{i∈I} must be mutually disjoint. Otherwise, there would exist i ≠ j ∈ I and v^(1), v^(2) ∈ V such that ψ_{s^(i)}(v^(1)) = ψ_{s^(j)}(v^(2)), thus Φ(s^(i), v^(1)) = (s′, ψ_{s^(i)}(v^(1))) = (s′, ψ_{s^(j)}(v^(2))) = Φ(s^(j), v^(2)), which violates the injectivity of Φ since s^(i) ≠ s^(j).
3. In the case where V is open, so is any ψ_{s^(i)}(V) = Φ(s^(i), V) since Φ is continuous. But the union of disjoint open sets ⋃_{i∈I} ψ_{s^(i)}(V) = V cannot be connected. This violates the condition that V is connected.
4. A similar argument holds in the case where V is closed.
So φ⁻¹(s′) contains only one unique element for any s′ ∈ S. So φ is injective.
• The above argument also shows that for any s′ ∈ S, we have ⋃_{i∈I} ψ_{s^(i)}(V) = ψ_{φ⁻¹(s′)}(V) = V. For any s ∈ S, there exists s′ ∈ S such that s = φ⁻¹(s′), so we have ψ_s(V) = V. So ψ_s is surjective for any s ∈ S.
• Suppose that there exist v^(1) ≠ v^(2) ∈ V such that ψ_s(v^(1)) = ψ_s(v^(2)). Then Φ(s, v^(1)) = (φ(s), ψ_s(v^(1))) = (φ(s), ψ_s(v^(2))) = Φ(s, v^(2)), which contradicts the injectivity of Φ since v^(1) ≠ v^(2). So ψ_s is injective for any s ∈ S.
• That Φ is continuous and Φ(s, v) = (φ(s), ψ_s(v)) indicates that φ and ψ_s are continuous. For any (s′, v′) ∈ S × V, we have Φ(φ⁻¹(s′), ψ⁻¹_{φ⁻¹(s′)}(v′)) = (φ(φ⁻¹(s′)), ψ_{φ⁻¹(s′)}(ψ⁻¹_{φ⁻¹(s′)}(v′))) = (s′, v′). Applying Φ⁻¹ to both sides gives Φ⁻¹(s′, v′) = (φ⁻¹(s′), ψ⁻¹_{φ⁻¹(s′)}(v′)).
• Since Φ⁻¹ is continuous, φ⁻¹ and ψ⁻¹_s are also continuous.
(2) We now show that the relation is an equivalence relation. It amounts to showing the following three properties.
• Reflexivity. For two identical CSGs, we have p(s, v) = p′(s, v), p(x|s, v) = p′(x|s, v) and p(y|s) = p′(y|s). So the identity map as Φ obviously satisfies all the requirements.
• Symmetry. Let Φ be a semantic-preserving reparameterization from p = (p(s, v), p(x|s, v), p(y|s)) to p′ = (p′(s, v), p′(x|s, v), p′(y|s)). From the above conclusion in (1), we know that (Φ⁻¹)_S(s′, v′) = φ⁻¹(s′) is semantic-preserving. Also, Φ⁻¹ is a homeomorphism on S × V since Φ is. So we only need to show that Φ⁻¹ is a reparameterization from p′ to p for symmetry.
1. From the definition of the pushed-forward distribution, we have Φ⁻¹#[p′_{s,v}] = p_{s,v} if Φ#[p_{s,v}] = p′_{s,v}. It can also be verified through the rule of change of variables (Lemma A.4) when Φ and Φ⁻¹ are differentiable. From Φ#[p_{s,v}] = p′_{s,v}, we have for any (s′, v′), p_{s,v}(Φ⁻¹(s′, v′)) |J_{Φ⁻¹}(s′, v′)| = p′_{s,v}(s′, v′). Since for any (s, v) there exists (s′, v′) such that (s, v) = Φ⁻¹(s′, v′), this implies that for any (s, v), p_{s,v}(s, v) |J_{Φ⁻¹}(Φ(s, v))| = p′_{s,v}(Φ(s, v)), or p_{s,v}(s, v) = p′_{s,v}(Φ(s, v))/|J_{Φ⁻¹}(Φ(s, v))| = p′_{s,v}(Φ(s, v)) |J_Φ(s, v)| (inverse function theorem), which means that p_{s,v} = Φ⁻¹#[p′_{s,v}] by the rule of change of variables.
2. For any (s′, v′), there exists (s, v) such that (s′, v′) = Φ(s, v), so p′(x|s′, v′) = p′(x|Φ(s, v)) = p(x|s, v) = p(x|Φ⁻¹(s′, v′)), and p′(y|s′) = p′(y|Φ_S(s)) = p(y|s) = p(y|(Φ⁻¹)_S(s′)). So Φ⁻¹ is a reparameterization from p′ to p.
• Transitivity. Given a third CSG p″ = (p″(s, v), p″(x|s, v), p″(y|s)) that is semantic-equivalent to p′, there exists a semantic-preserving reparameterization Φ′ from p′ to p″. It is easy to see that (Φ′ ∘ Φ)_S(s, v) = Φ′_S(Φ_S(s, v)) = Φ′_S(Φ_S(s)) is constant of v, thus semantic-preserving. As the composition of two homeomorphisms Φ and Φ′ on S × V, Φ′ ∘ Φ is also a homeomorphism. So we only need to show that Φ′ ∘ Φ is a reparameterization from p to p″ for transitivity.
1. From the definition of the pushed-forward distribution, we have (Φ′ ∘ Φ)#[p_{s,v}] = Φ′#[Φ#[p_{s,v}]] = Φ′#[p′_{s,v}] = p″_{s,v} if Φ#[p_{s,v}] = p′_{s,v} and Φ′#[p′_{s,v}] = p″_{s,v}. It can also be verified through the rule of change of variables (Lemma A.4) when Φ⁻¹ and Φ′⁻¹ are differentiable. For any (s″, v″), we have
(Φ′ ∘ Φ)#[p_{s,v}](s″, v″) = p_{s,v}((Φ′ ∘ Φ)⁻¹(s″, v″)) |J_{(Φ′∘Φ)⁻¹}(s″, v″)|
= p_{s,v}(Φ⁻¹(Φ′⁻¹(s″, v″))) |J_{Φ⁻¹}(Φ′⁻¹(s″, v″))| |J_{Φ′⁻¹}(s″, v″)| = Φ#[p_{s,v}](Φ′⁻¹(s″, v″)) |J_{Φ′⁻¹}(s″, v″)|
= p′_{s,v}(Φ′⁻¹(s″, v″)) |J_{Φ′⁻¹}(s″, v″)| = Φ′#[p′_{s,v}](s″, v″) = p″_{s,v}(s″, v″).
2. For any (s, v), we have:
p(x|s, v) = p′(x|Φ(s, v)) = p″(x|Φ′(Φ(s, v))) = p″(x|(Φ′ ∘ Φ)(s, v)),
p(y|s) = p′(y|Φ_S(s)) = p″(y|Φ′_S(Φ_S(s))) = p″(y|(Φ′ ∘ Φ)_S(s)).
So Φ′ ∘ Φ is a reparameterization from p to p″.
This completes the proof for an equivalence relation.
A.2 PROOF OF THE SEMANTIC-IDENTIFIABILITY THEOREM 5.4
We present a more general and detailed version of Theorem 5.4 and prove it. The theorem in the main context corresponds to conclusions (ii) and (i) below, by taking the two CSGs p′ and p as the well-learned CSG p and the ground-truth CSG p∗, respectively.
Theorem 5.4’ (semantic-identifiability). Consider CSGs p and p′ for which Assumptions 5.1 and 5.2 hold, with the bounded-derivative conditions specified to be that for both CSGs, f⁻¹ and g are twice and f thrice differentiable with the mentioned derivatives bounded. Further assume that their priors have bounded densities and that their log p(s, v) have bounded derivatives up to the second order. If the two CSGs have p(x, y) = p′(x, y), then they are semantic-equivalent, under any of the following conditions:⁷
(i) p_µ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution);
(ii) 1/σ_µ² → ∞, where σ_µ² := E[µ⊤µ];
(iii) 1/σ_µ² ≫ B′²_{f⁻¹} max{ B′_{log p}B′_g + (1/2)B″_g + (3/2)d B′_{f⁻¹}B″_f B′_g,  B_p B′^d_{f⁻¹}( B′²_{log p} + B″_{log p} + 3d B′_{f⁻¹}B″_f B′_{log p} + 3d^{3/2} B′²_{f⁻¹}B″²_f + d³ B‴_f B′_{f⁻¹} ) }, where d := d_S + d_V, and for both CSGs, the constant B_p bounds p(s, v), the constants B′_{f⁻¹}, B′_g, B′_{log p} and B″_f, B″_g, B″_{log p} bound the 2-norms⁸ of the gradients/Jacobians and the Hessians of the respective functions, and B‴_f bounds all the 3rd-order derivatives of f.
7To be precise, the conclusions are that the equalities in Definition 5.3 hold a.e. for condition (i), hold asymptotically in the limit 1/σ_µ² → ∞ for condition (ii), and hold up to a negligible quantity for condition (iii).
8As an induced operator norm for matrices (not the Frobenius norm).
Proof. Without loss of generality, we assume that µ and ν (for continuous y) have zero mean. If it is not, we can redefine f(s, v) := f(s, v) + E[µ] and µ := µ − E[µ] (similarly for ν for continuous y) which does not alter the joint distribution p(s, v, x, y) nor violates any assumptions. Also without loss of generality, we consider one scalar component (dimension) l of y, and abuse the use of symbols y and g for yl and gl to avoid unnecessary complication. Note that for continuous y, due to the additive noise structure y = g(s)+ν and that ν has zero mean, we also have E[y|s] = g(s) as the same as the categorical y case (under the one-hot representation). We sometimes denote z := (s, v) for convenience.
First note that for both CSGs and for both continuous and categorical y, by construction g(s) is a sufficient statistic of p(y|s) (not only the expectation E[y|s]), and it is injective. So by Lemma A.3, we only need to show that there exists a reparameterization from p to p′. We will show that Φ := f′⁻¹ ∘ f is such a reparameterization. Since f and f′ are bijective and continuous, we have Φ⁻¹ = f⁻¹ ∘ f′, so Φ is bijective and both Φ and Φ⁻¹ are continuous. So Φ is a homeomorphism. Also, by construction, we have:
p(x|z) = p_µ(x − f(z)) = p_µ(x − f′(f′⁻¹(f(z)))) = p_µ(x − f′(Φ(z))) = p′(x|Φ(z)).    (7)
So we only need to show that p(x, y) = p′(x, y) indicates Φ#[p_z] = p′_z and p(y|s) = p′(y|Φ_S(s, v)), ∀v ∈ V, under the conditions.
Proof under condition (i). We begin with a useful reformulation of the integral ∫ t(z) p(x|z) dz for a general function t of z; we will encounter integrals in this form. By Assumption 5.1, we have p(x|z) = p_µ(x − f(z)), so we consider the transformation Ψ_x(z) := x − f(z) and let µ = Ψ_x(z). It is invertible, Ψ_x⁻¹(µ) = f⁻¹(x − µ), and J_{Ψ_x⁻¹}(µ) = −J_{f⁻¹}(x − µ). By these definitions and the rule of change of variables, we have:
∫ t(z) p(x|z) dz = ∫ t(z) p_µ(Ψ_x(z)) dz = ∫ t(Ψ_x⁻¹(µ)) p(µ) |J_{Ψ_x⁻¹}(µ)| dµ
= ∫ t(f⁻¹(x − µ)) p(µ) |J_{f⁻¹}(x − µ)| dµ = E_{p(µ)}[(t̄V)(x − µ)]    (8)
= (f#[t] ∗ p_µ)(x),    (9)
where we have denoted the functions t̄ := t ∘ f⁻¹ and V := |J_{f⁻¹}|, and abused the push-forward notation f#[t] for a general function t to formally denote (t ∘ f⁻¹)|J_{f⁻¹}| = t̄V.
According to the graphical structure of CSG, we have:
p(x) = ∫ p(z) p(x|z) dz,    (10)
E[y|x] = (1/p(x)) ∫ y p(x, y) dy = (1/p(x)) ∫∫ y p(z) p(x|z) p(y|s) dz dy
= (1/p(x)) ∫ p(z) p(x|z) E[y|s] dz = (1/p(x)) ∫ g(s) p(z) p(x|z) dz.    (11)
So from Eq. (9), we have:
p(x) = (f#[p_z] ∗ p_µ)(x),    E[y|x] = (1/p(x)) (f#[g p_z] ∗ p_µ)(x).    (12)
Matching the data distribution p(x, y) = p′(x, y) indicates both p(x) = p′(x) and E[y|x] = E′[y|x]. Using Lemma A.5 under condition (i), this further indicates f#[p_z] = f′#[p′_z] a.e. and f#[g p_z] = f′#[g′ p′_z] a.e. The former gives Φ#[p_z] = p′_z. The latter can be rewritten as ḡ f#[p_z] = ḡ′ f′#[p′_z] a.e., so ḡ = ḡ′ a.e., where we have denoted ḡ := g ∘ (f⁻¹)_S and ḡ′ := g′ ∘ (f′⁻¹)_S similarly. From ḡ = ḡ′, we have for any v ∈ V,
g(s) = g((f⁻¹ ∘ f)_S(s, v)) = g((f⁻¹)_S(f(s, v))) = ḡ(f(s, v)) = ḡ′(f(s, v)) = g′((f′⁻¹)_S(f(s, v))) = g′(Φ_S(s, v)).    (13)
For both continuous and categorical y, g(s) uniquely determines p(y|s). So the above equality means that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V .
Proof under condition (ii). Applying Eq. (8) to Eqs. (10, 11), we have:
p(x) = E_{p(µ)}[(p̄_z V)(x − µ)],    E[y|x] = (1/p(x)) E_{p(µ)}[(ḡ p̄_z V)(x − µ)],
where we have similarly denoted p̄_z := p_z ∘ f⁻¹. Under condition (ii), E[µ⊤µ] is infinitesimal, so we can expand the expressions w.r.t. µ. For p(x), we have:
p(x) = E_{p(µ)}[ p̄_z V − ∇(p̄_z V)⊤µ + (1/2) µ⊤∇∇⊤(p̄_z V) µ + O(E[‖µ‖₂³]) ]
= p̄_z V + (1/2) E_{p(µ)}[ µ⊤∇∇⊤(p̄_z V) µ ] + O(σ_µ³),
where all functions are evaluated at x. For E[y|x], we first expand 1/p(x) using 1/(x + ε) = 1/x − ε/x² + O(ε²) to get: 1/p(x) = 1/(p̄_z V) − (1/(2 p̄_z² V²)) E_{p(µ)}[ µ⊤∇∇⊤(p̄_z V) µ ] + O(σ_µ³). The second term is expanded as: ḡ p̄_z V + (1/2) E_{p(µ)}[ µ⊤∇∇⊤(ḡ p̄_z V) µ ] + O(σ_µ³). Combining the two parts, we have:
E[y|x] = ḡ + (1/2) E_{p(µ)}[ µ⊤( (∇ log p̄_z V) ∇ḡ⊤ + ∇ḡ (∇ log p̄_z V)⊤ + ∇∇⊤ḡ ) µ ] + O(σ_µ³).    (14)
|p(x)− (p̄zV )(x)| = 1
2 ∣∣Ep(µ)[µ>∇∇>(p̄zV )µ]∣∣ 6 12Ep(µ)[∣∣µ>∇∇>(p̄zV )µ∣∣] 6 1
2 Ep(µ)
[ ‖µ‖2 ∥∥∇∇>(p̄zV )∥∥2‖µ‖2] = 12E[µ>µ]∥∥∇∇>(p̄zV )∥∥2 = 1
2 E[µ>µ]|p̄zV | ∥∥∇∇> log p̄zV + (∇ log p̄zV )(∇ log p̄zV )>∥∥2 6 1
2 E[µ>µ]|p̄zV | (∥∥∇∇> log p̄zV ∥∥2 + ‖∇ log p̄zV ‖22), (15) |E[y|x]− ḡ(x)| = 1
2 ∣∣∣Ep(µ)[µ>((∇ log p̄zV )∇ḡ> +∇ḡ(∇ log p̄zV )> +∇∇>ḡ)µ]∣∣∣ 6 1
2 Ep(µ) [∣∣µ>((∇ log p̄zV )∇ḡ> +∇ḡ(∇ log p̄zV )> +∇∇>ḡ)µ∣∣] 6 1
2 Ep(µ)
[ ‖µ‖2 ∥∥(∇ log p̄zV )∇ḡ> +∇ḡ(∇ log p̄zV )> +∇∇>ḡ∥∥2‖µ‖2] 6 1
2 E[µ>µ] (∥∥(∇ log p̄zV )∇ḡ>∥∥2 + ∥∥∇ḡ(∇ log p̄zV )>∥∥2 + ∥∥∇∇>ḡ∥∥2) = E[µ>µ]
(∣∣(∇ log p̄zV )>∇ḡ∣∣+ 12∥∥∇∇>ḡ∥∥2). (16) Given the bounding conditions in the theorem, the multiplicative factors to E[µ>µ] in the last expressions are bounded by a constant. So when 1σ2µ → ∞, i.e. E[µ
>µ] → 0, we have p(x) and E[y|x] converge uniformly to (p̄zV )(x) = f#[pz](x) and ḡ(x), respectively. So p(x, y) = p′(x, y) indicates f#[pz] = f ′#[p ′ z] and ḡ = ḡ
′, which means Φ#[pz] = p′z and p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V , due to Eq. (13) and the explanation that follows.
Proof under condition (iii). We only need to show that when 1/σ_µ² is much larger than the given quantity, we still have p(x, y) = p′(x, y) ⇒ p̄_z V = p̄′_z V′, ḡ = ḡ′ up to a negligible effect. This amounts to showing that the residuals |p(x) − (p̄_z V)(x)| and |E[y|x] − ḡ(x)| controlled by Eqs. (15, 16) are negligible. To achieve this, we need to further expand the controlling functions using the derivatives of f, g and p_z explicitly, and bound them by the bounding constants. In the following, we use indices a, b, c for the components of x and i, j, k for those of z. Functions of z appearing in the following (e.g., f, g, p_z and their derivatives) are evaluated at z = f⁻¹(x), since we are bounding functions of x.
(1) Bounding |E[y|x] − ḡ(x)| ≤ E[µ⊤µ] (|(∇ log p̄_z V)⊤∇ḡ| + (1/2)‖∇∇⊤ḡ‖₂) from Eq. (16).
From the chain rule of differentiation, it is easy to show that:
∇ log p̄_z = J_{f⁻¹} ∇ log p_z,    ∇ḡ = J_{(f⁻¹)_S} ∇g = J_{f⁻¹} ∇_z g,    (17)
where ∇_z g = (∇g⊤, 0_{d_V}⊤)⊤ (recall that g is a function only of s). For the term ∇ log V, we apply Jacobi’s formula for the derivative of the log-determinant:
∂_a log V(x) = ∂_a log |J_{f⁻¹}(x)| = tr( J_{f⁻¹}⁻¹(x) (∂_a J_{f⁻¹}(x)) ) = Σ_{b,i} J_{f⁻¹}⁻¹(x)_{ib} (∂_a J_{f⁻¹}(x)_{bi}) = Σ_{b,i} J_f(f⁻¹(x))_{ib} ∂_b ∂_a f⁻¹_i(x) = Σ_i ( J_f (∇∇⊤f⁻¹_i) )_{ia}.    (18)
However, as bounding Eq. (17) already requires bounding ‖J_{f⁻¹}‖₂, directly using this expression to bound ‖∇ log V‖₂ would also require bounding ‖J_f‖₂. Requiring bounds on the first-order derivatives of both f and f⁻¹ is relatively restrictive. To ease the requirement, we would like to express ∇ log V in terms of J_{f⁻¹}. This can be achieved by expressing the ∇∇⊤f⁻¹_i’s in terms of the ∇∇⊤f_c’s. To do this, first consider a general invertible-matrix-valued function A(α) of a scalar α. We have 0 = ∂_α(A(α)⁻¹A(α)) = (∂_α A⁻¹)A + A⁻¹ ∂_α A, so A⁻¹ ∂_α A = −(∂_α A⁻¹)A, and consequently ∂_α A = −A(∂_α A⁻¹)A. Using this relation (in the fourth equality below), we have:
(∇∇⊤f⁻¹_i)_{ab} = ∂_a ∂_b f⁻¹_i = ∂_a (J_{f⁻¹})_{bi} = (∂_a J_{f⁻¹})_{bi} = −( J_{f⁻¹} (∂_a J_{f⁻¹}⁻¹) J_{f⁻¹} )_{bi} = −( J_{f⁻¹} (∂_a J_f) J_{f⁻¹} )_{bi}
= −Σ_{j,c} (J_{f⁻¹})_{bj} (∂_a(∂_j f_c)) (J_{f⁻¹})_{ci} = −Σ_{j,c,k} (J_{f⁻¹})_{bj} (∂_k ∂_j f_c) (∂_a f⁻¹_k) (J_{f⁻¹})_{ci}
= −Σ_c (J_{f⁻¹})_{ci} Σ_{j,k} (J_{f⁻¹})_{bj} (∂_k ∂_j f_c) (J_{f⁻¹})_{ak} = −Σ_c (J_{f⁻¹})_{ci} ( J_{f⁻¹} (∇∇⊤f_c) J_{f⁻¹}⊤ )_{ab},
or in matrix form,
∇∇⊤f⁻¹_i = −Σ_c (J_{f⁻¹})_{ci} J_{f⁻¹} (∇∇⊤f_c) J_{f⁻¹}⊤ =: −Σ_c (J_{f⁻¹})_{ci} K^c,    (19)
where we have defined the matrix K^c := J_{f⁻¹} (∇∇⊤f_c) J_{f⁻¹}⊤, which is symmetric. Substituting this result, we can transform Eq. (18) into the desired form:
∇ log V(x) = Σ_i ( J_f (∇∇⊤f⁻¹_i) )⊤_{i:} = −Σ_i ( J_f Σ_c (J_{f⁻¹})_{ci} J_{f⁻¹} (∇∇⊤f_c) J_{f⁻¹}⊤ )⊤_{i:}
= −Σ_i ( Σ_c (J_{f⁻¹})_{ci} J_f J_f⁻¹ (∇∇⊤f_c) J_{f⁻¹}⊤ )⊤_{i:} = −Σ_{c,i} (J_{f⁻¹})_{ci} ( (∇∇⊤f_c) J_{f⁻¹}⊤ )⊤_{i:}
= −Σ_c ( J_{f⁻¹} (∇∇⊤f_c) J_{f⁻¹}⊤ )⊤_{c:} = −Σ_c (K^c_{c:})⊤ = −Σ_c K^c_{:c},    (20)
so its norm can be bounded by:
‖∇ log V(x)‖₂ = ‖Σ_c K^c_{c:}‖₂ = ‖Σ_c (J_{f⁻¹})_{c:} (∇∇⊤f_c) J_{f⁻¹}⊤‖₂ ≤ Σ_c ‖(J_{f⁻¹})_{c:}‖₂ ‖∇∇⊤f_c‖₂ ‖J_{f⁻¹}‖₂ ≤ B″_f B′_{f⁻¹} Σ_c ‖(J_{f⁻¹})_{c:}‖₂ ≤ d B′²_{f⁻¹} B″_f,    (21)
where we have used the following result in the last inequality:
Σ_c ‖(J_{f⁻¹})_{c:}‖₂ ≤ d^{1/2} √(Σ_c ‖(J_{f⁻¹})_{c:}‖₂²) = d^{1/2} ‖J_{f⁻¹}‖_F ≤ d ‖J_{f⁻¹}‖₂ ≤ d B′_{f⁻¹}.    (22)
Integrating Eq. (17) and Eq. (21), we have:
|(∇ log p̄_z V)⊤∇ḡ| = |(J_{f⁻¹} ∇ log p_z + ∇ log V)⊤ J_{f⁻¹} ∇_z g|
≤ (‖J_{f⁻¹}‖₂ ‖∇ log p_z‖₂ + ‖∇ log V‖₂) ‖J_{f⁻¹}‖₂ ‖∇g‖₂
≤ (B′_{f⁻¹} B′_{log p} + d B′²_{f⁻¹} B″_f) B′_{f⁻¹} B′_g
= (B′_{log p} + d B′_{f⁻¹} B″_f) B′²_{f⁻¹} B′_g.    (23)
For the Hessian of ḡ, direct calculus gives:
∇∇⊤ḡ = J_{(f⁻¹)_S} (∇∇⊤g) J_{(f⁻¹)_S}⊤ + Σ_{i=1}^{d_S} (∇g)_{s_i} (∇∇⊤f⁻¹_{s_i})
| 1. What is the main contribution of the paper regarding domain-generalization and domain adaptation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty, correctness, experiments, reproducibility, clarity, and correctness?
3. How does the reviewer assess the relevance of the paper's content, including its claims, comparisons with other works, and discussions of related literature?
4. Are there any concerns or suggestions regarding the paper's methodology, such as its experimental design, hyperparameter search, and ablation study?
5. How does the reviewer evaluate the paper's impact and potential applications in the field of computer vision and machine learning? | Review | Review
Summary: The paper focuses on the causal perspective of the domain-generalization and domain-adaptation setup for images, i.e. classifying an image under some distribution shift at test time. Similar to previous work [1-4], it assumes that some latent semantic-object representation (s) and semantic-domain representation (v) cause the image, and that these causal (generative) mechanisms (s,v -> x) are stable, while their prior p(s,v) is prone to change at test time. It develops a new variational approach to estimate the generative distributions, and tests the approach on two datasets for domain-generalization and domain-adaptation.
Overall, the paper suggests a novel approach and theory to an important problem. Its major weaknesses are in its clarity and the experimental part.
Strong points
Novelty: The paper provides a novel approach for estimating the likelihood p(class|image), by developing a new variational approach for modelling the causal direction (s,v -> x).
Correctness: Although I didn't verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s -> y (see weaknesses).
Weak points
Experiments and Reproducibility: The experiments show some signal, but are not thorough enough:
• shifted-MNIST: it is not clear why shift=0 is much better than shift ~ N(0, σ²), since both cases incorporate a domain shift.
• It would be useful to show the performance of the model and baselines on test samples from the observational (in) distribution.
• Missing details about the evaluation split for shifted-MNIST: Did the experiments use a validation set for hyper-parameter search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data?
• It would be useful to provide an ablation study, since the approach has a lot of "moving parts".
• It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST.
• What were the ranges used for hyper-parameter search? What was the search protocol?
Clarity:
• The parts describing the method are hard to follow; it would be useful to improve their clarity.
• It would be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them.
• What makes the VAE inference mappings (x -> s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts.
• What makes v distinctive of s? Is it because y only depends on s?
• Does the approach use any information on the labels of the domain?
Correctness: I was not convinced about the causal relation s -> y, i.e. that the semantic concept causes the label independently of the image. I do agree that there is a semantic concept (e.g. s) that causes the image. But then, as explained by [Arjovsky 2019], the labelling process is caused by the image, i.e. s -> image -> y, and not as argued by the paper. The way I see it is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate on how the model would change if s -> y were replaced by y_tx -> s?
Other comments:
• I suggest discussing [2,3,4], which learned similar stable mechanisms in images.
• I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction", e.g. see [3,4].
• The title may be confusing. OOD usually refers to anomaly detection, while this paper relates to domain-generalization and domain-adaptation.
• It would be useful to clarify that the approach doesn't use any external semantic knowledge.
• Section 3.2 - I suggest adding a first sentence to introduce what this section is about.
• About the remark on page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces.
[1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
[2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models
[3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness
[4] Atzmon et al. 2020, A causal view of compositional zero-shot recognition
EDIT: Post rebuttal
I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking.
Additionally, I am concerned with one of the author's replies saying All methods achieve accuracy 1 ... on the training distribution, because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): Achieving perfect accuracy on the observational distribution, usually means relying on the spurious correlations. And under domain-shift scenarios, this would hinder the performance on the shifted-distribution. |
ICLR | Title
Learning Causal Semantic Representation for Out-of-Distribution Prediction
Abstract
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causality to model the two factors separately, and learn it on a single training domain for prediction without (OOD generalization) or with unsupervised data (domain adaptation) in a test domain. We prove that CSG identifies the semantic factor on the training domain, and the invariance principle of causality subsequently guarantees the boundedness of OOD generalization error and the success of adaptation. We also design novel and delicate learning methods for both effective learning and easy prediction, following the first principle of variational Bayes and the graphical structure of CSG. Empirical studies demonstrate the effectiveness of our methods in improving test accuracy for OOD generalization and domain adaptation.
1 INTRODUCTION
Deep learning has initiated a new era of artificial intelligence where the potential of machine learning models is greatly unleashed. Despite the great success, these methods heavily rely on the independently-and-identically-distributed (IID) assumption. This does not always perfectly hold in practice, and the prediction of output (label, response, outcome) y may be saliently affected in out-of-distribution (OOD) cases, even by an essentially irrelevant change to the input (covariate) x, like a position shift or rotation of the object in an image, or a change of background, illumination or style (Shen et al., 2018; He et al., 2019; Arjovsky et al., 2019). These phenomena pose serious concerns about the robustness and trustworthiness of machine learning methods and severely impede their use in risk-sensitive scenarios.
Looking into the problem, although deep learning models allow extracting abstract representation for prediction with their powerful approximation capacity, the representation may be overconfident in the correlation between semantic factors s (e.g., shape of an object) and variation factors v (e.g., background, illumination, object position). The correlation may be domain-specific and spurious, and may change drastically in a new environment. So it has become desirable to learn a representation that separates semantics s from variation v (Cai et al., 2019; Ilse et al., 2019). Formally, the importance of this goal is that s represents the cause of y. Causal relations better reflect the fundamental mechanisms of nature, bringing to machine learning the merit that they tend to be universal and invariant across domains (Schölkopf et al., 2012; Peters et al., 2017; Schölkopf, 2019), thus providing the most transferable and confident information to unseen domains. Causality has also been shown to lead to proper domain adaptation (Schölkopf et al., 2012; Zhang et al., 2013), lower adaptation cost and lighter catastrophic forgetting (Peters et al., 2016; Bengio et al., 2019; Ke et al., 2019).
In this work, we propose a Causal Semantic Generative model (CSG) for proper and robust OOD prediction, including OOD generalization and domain adaptation. Both tasks have supervised data from a single training domain, but domain adaptation has unsupervised test-domain data during learning, while OOD generalization has no test-domain data, including cases where queries come sequentially or adaptation is unaffordable. (1) We build the model by cautiously following the principle of causality, where we explicitly separate the latent variables into a (group of) semantic factor s and a (group of) variation factor v. We prove that under appropriate conditions CSG identifies the
semantic factor by fitting training data, even in the presence of an s-v correlation. (2) By leveraging the causal invariance, we prove that a well-learned CSG is guaranteed to have a bounded OOD generalization error. The bound shows how causal mechanisms affect the error. (3) We develop a domain adaptation method using CSG and causal invariance, which suggests fixing the causal generative mechanisms and adapting the prior to the new domain. We prove the identification of the new prior and the benefit of adaptation. (4) To learn and adapt the model from data, we design novel and delicate reformulations of the Evidence Lower BOund (ELBO) objective following the graphical structure of CSG, so that the inference models required therein can also serve for prediction, and modeling and optimizing inference models in both domains can be avoided. To our best knowledge, our work is the first to identify the semantic factor and leverage latent causal invariance for OOD prediction with guarantees. Empirical improvements in OOD performance and adaptation are demonstrated by experiments on multiple tasks, including shifted MNIST and the ImageCLEF-DA task.
2 RELATED WORK
There have been works that aim to leverage the merit of causality for OOD prediction. For OOD generalization, some works ameliorate discriminative models towards a causal behavior. Bahadori et al. (2017) introduce a regularizer that reweights input dimensions based on their approximated causal effects to the output, and Shen et al. (2018) reweight training samples by amortizing causal effects among input dimensions. They are extended to nonlinear cases (Bahadori et al., 2017; He et al., 2019) via linear-separable representations. Heinze-Deml & Meinshausen (2019) enforce inference invariance by minimizing prediction variance within each label-identity group. These methods introduce no additional modeling effort, but may also be limited to capture invariant causal mechanisms (they are non-generative) and may only behave quantitatively causal in the training domain.
For domain adaptation/generalization, methods are developed under various causal assumptions (Schölkopf et al., 2012; Zhang et al., 2013) or using learned causal relations (Rojas-Carulla et al., 2018; Magliacane et al., 2018). Zhang et al. (2013); Gong et al. (2016; 2018) also consider certain ways of mechanism shift. The considered causality is among directly observed variables, which may not be suitable for general data like image pixels where causality rather lies between data and conceptual latent factors (Lopez-Paz et al., 2017; Besserve et al., 2018; Kilbertus et al., 2018). To consider latent factors, there are domain adaptation (Pan et al., 2010; Baktashmotlagh et al., 2013; Ganin et al., 2016; Long et al., 2015; 2018) and generalization methods (Muandet et al., 2013; Shankar et al., 2018) that learn a representation with domain-invariant marginal distribution, and have achieved remarkable results. Nevertheless, Johansson et al. (2019); Zhao et al. (2019) point out that this invariance is neither sufficient nor necessary to identify the true semantics and lower the adaptation error (Supplement D). Moreover, these methods and invariance risk minimization (Arjovsky et al., 2019) also assume the invariance in the inference direction (i.e., data→ representation), which may not be as general as causal invariance in the generative direction (Section 3.2).
There are also generative methods for domain adaptation/generalization that model latent factors. Cai et al. (2019); Ilse et al. (2019) introduce a semantic factor and a domain-feature factor. They assume the two factors are independent in both the generative and inference models, which may not meet reality closely. They also do not adapt the prior for domain shift thus resort to inference invariance. Zhang et al. (2020) consider a partially observed manipulation variable, while assume its independence from the output in both the joint and posterior, and the adaptation is inconsistent with causal invariance. Atzmon et al. (2020) consider similar latent factors, but use the same (uniform) prior in all domains. These methods also do not show guarantees to identify their latent factors. Teshima et al. (2020) leverage causal invariance and adapt the prior, while also assume latent independence and do not separate the semantic factor. They require some supervised test-domain data, and their deterministic and invertible mechanism also indicates inference invariance. In addition, most domain generalization methods require multiple training domains, with exceptions (e.g., Qiao et al., 2020) that still seek to augment domains. In contrast, CSG leverages causal invariance, and has guarantee to identify the semantic factor from a single training domain, even with a correlation to the variation factor.
Generative supervised learning is not new (Mcauliffe & Blei, 2008; Kingma et al., 2014), but most works do not consider the encoded causality. Other works consider solving causality tasks, notably causal/treatment effect estimation (Louizos et al., 2017; Yao et al., 2018; Wang & Blei, 2019). The task does not focus on OOD prediction, and requires labels for both treated and controlled groups.
Disentangling latent representations is also of interest in unsupervised learning. Despite some empirical success (Chen et al., 2016; Higgins et al., 2017; Chen et al., 2018), Locatello et al. (2019) conclude that it is impossible to guarantee the disentanglement in unsupervised settings. Khemakhem et al. (2019; 2020) show an encouraging result that disentangled representation can be identified up to a permutation with a cause of the latent variable observed. But the methods cannot separate the semantic factor from variation for supervised learning, and require observing sufficiently many different values of the cause variable, making it hard to leverage labels.
Causality with latent variable has been considered in a rich literature (Verma & Pearl, 1991; Spirtes et al., 2000; Richardson et al., 2002; Hoyer et al., 2008; Shpitser et al., 2014), while most works focus on the consequence on observation-level causality. Others consider identifying the latent variable. Janzing et al. (2009); Lee et al. (2019) show the identifiability under additive noise or similar assumptions. For discrete data, a “simple” latent variable can be identified under various specifications (Janzing et al., 2011; Sgouritsa et al., 2013; Kocaoglu et al., 2018). Romeijn & Williamson (2018) leverage interventional datasets. Over these works, we step further to separate and identify the latent variable as semantic and variation factors, and show the benefit for OOD prediction.
3 THE CAUSAL SEMANTIC GENERATIVE MODEL
To develop the model seriously and soberly based on causality, we require the formal definition of causality: two variables have a causal relation, denoted as “cause→effect”, if externally intervening on the cause (by changing variables outside the considered system) may change the effect, but not vice versa (Pearl, 2009; Peters et al., 2017). We then follow the logic below to build our model.¹
(1) It may be a general case that neither y → x (e.g., adding noise to the labels in a dataset does not change the images) nor x → y holds (e.g., intervening an image by e.g. breaking a camera sensor unit when taking the image, does not change how the photographer labels it), as also argued by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018). So we employ a generative model (i.e., not only modeling p(y|x)), and introduce a latent variable z to capture factors with causal relations.
(2) The latent variable z as underlying generating factors (e.g., object features like shape and texture, background and illumination in imaging) is plausible to cause both x (e.g., the change of object shape or background makes a different image, but breaking a camera sensor unit does not change the object shape or background) and y (e.g., the photographer would give a different label if the object shape, texture, etc. had been replaced by those of a different object, but
noise-corrupting the label does not change the object features). So we orient the edges in the generative direction z → (x, y), as also adopted by Mcauliffe & Blei (2008); Peters et al. (2017); Teshima et al. (2020). This is in contrast to Cai et al. (2019); Ilse et al. (2019; 2020); Castro et al. (2020) who treat y as the cause of a semantic factor, which, when y is also a noisy observation, makes unreasonable implications (e.g., adding noise to the labels in a dataset automatically changes object features and consequently the images, and changing the object features does not change the label). This difference is also discussed by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018).
(3) We attribute all x-y relation to the existence of some latent factors (“purely common cause”, Lee et al., 2019; Janzing et al., 2009), and exclude x-y edges. This can be achieved as long as z holds sufficient information of data (e.g., with shape, background etc. fixed, breaking a sensor unit does not change the label, and noise-corrupting the label does not change the image). Promoting this restriction reduces arbitrariness in explaining x-y relation and benefits the identification of z. This is in contrast to Kingma et al. (2014); Zhang et al. (2020); Castro et al. (2020) who treat y as a cause of x since no latent variable is introduced between.
1Supplement C provides more explanations on the model.
(4) Not all latent factors are the causes of y (e.g., changing the shape may alter the label, while changing the background does not). We thus split the latent variable as z = (s, v) and remove the edge v → y, where s represents the semantic factor of x that causes y, and v describes the variation or diversity in generating x. This formalizes the intuition on the concepts in Introduction.
(5) The variation v often has a relation to the semantics s, which is often a spurious correlation (e.g., desks prefer a workspace background, but they can also appear in bedrooms and beds can also appear in workspace). So we keep the undirected s-v edge. Although v is not a cause of y, modeling it explicitly is worth the effort since otherwise it would still be implicitly incorporated in s anyway through the s-v correlation. We summarize these conclusions in the following definition.
Definition 3.1 (CSG). A Causal Semantic Generative Model (CSG) p = (p(s, v), p(x|s, v), p(y|s)) is a generative model on data variables x ∈ X ⊂ RdX and y ∈ Y with semantic s ∈ S ⊂ RdS and variation v ∈ V ⊂ RdV latent variables, following the graphical structure shown in Fig. 1.
3.1 THE CAUSAL INVARIANCE PRINCIPLE
The domain-invariance of causal relations translates to the following principle for CSG:
Principle 3.2 (causal invariance). The causal generative mechanisms p(x|s, v) and p(y|s) in CSG are invariant across domains, and the change of prior p(s, v) is the only source of domain shift.
It is supported by the invariance of basic laws of nature (Schölkopf et al., 2012; Peters et al., 2017; Besserve et al., 2018; Bühlmann, 2018; Schölkopf, 2019). Other works instead introduce domain index (Cai et al., 2019; Ilse et al., 2019; 2020; Castro et al., 2020) or manipulation variables (Zhang et al., 2020; Khemakhem et al., 2019; 2020) to model distribution change explicitly. They require multiple training domains or additional observations, and such changes can also be explained under causal invariance as long as the latent variable includes all shifted factors (e.g., domain change of images can be attributed to a different preference of shape, style, texture, background, etc. and their correlations, while the processes generating image and label from them remain the same).
3.2 COMPARISON WITH INFERENCE INVARIANCE
Domain-invariant-representation-based adaptation and generalization methods, and invariant risk minimization (Arjovsky et al., 2019) for domain generalization, use a shared feature extractor across domains. This effectively assumes the invariance of the process in the other direction, i.e., inferring the latent representation from data. We note that in its supportive examples (e.g., inferring the object position from an image, or extracting the fundamental frequency from a vocal audio), generating mechanisms are nearly deterministic and invertible, so that the posterior is almost determined by the inverse function, and causal invariance implies inference invariance. For noisy or degenerate mechanisms (Fig. 2), ambiguity occurs during inference since there may be multiple values of a latent feature that generate the same observation. The inferred feature would notably rely on the prior through the Bayes rule. Since the prior changes across domains, the inference rule then changes by nature, which challenges the existence of a domain-shared feature extractor. In this case, causal invariance is more reliable than inference invariance.
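A one-dimensional numerical example (ours, not from the paper) makes the contrast concrete: with the noisy mechanism x = z + µ and Gaussian prior/noise, the posterior mean of z has a closed form, and the same observation is inferred differently under two priors unless the noise is nearly zero.

def posterior_mean(x, prior_mean, prior_var, noise_var):
    """Posterior mean of z given x = z + mu, with z ~ N(prior_mean, prior_var), mu ~ N(0, noise_var)."""
    w = prior_var / (prior_var + noise_var)
    return w * x + (1 - w) * prior_mean

x_obs = 1.0
for noise_var in [1.0, 1e-4]:                 # noisy vs. nearly deterministic mechanism
    z_train = posterior_mean(x_obs, prior_mean=0.0, prior_var=1.0, noise_var=noise_var)
    z_test = posterior_mean(x_obs, prior_mean=2.0, prior_var=1.0, noise_var=noise_var)
    print(noise_var, z_train, z_test)
# noise_var = 1.0: the two priors infer z as 0.5 vs 1.5 -- the inference rule is domain-dependent.
# noise_var ~ 0: both give ~1.0 -- the regime where inference invariance follows from causal invariance.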
To leverage causal invariance, we adjust the prior conservatively for OOD generalization (CSG-ind) and data-driven for domain adaptation (CSG-DA), so together with the invariant generative mechanisms, it gives a different and more reliable inference rule than that following inference invariance.
4 METHOD
We develop learning, adaptation and prediction methods for OOD generalization and domain adaptation using CSG following the causal invariance Principle 3.2, and devise practical objectives using variational Bayes. Supplement E.1 details all the derivations.
4.1 METHOD FOR OOD GENERALIZATION
For OOD generalization, a CSG p = (p(s, v), p(x|s, v), p(y|s)) needs to first learn from the supervised data from an underlying data distribution p∗(x, y) on the training domain. Maximizing the likelihood E_{p∗(x,y)}[log p(x, y)] is intractable since p(x, y) given by the CSG p is hard to estimate effectively. We thus adopt the Evidence Lower BOund (ELBO) L_{q,p}(x, y) := E_{q(s,v|x,y)}[log (p(s, v, x, y)/q(s, v|x, y))] (Jordan et al., 1999; Wainwright et al., 2008) as a tractable surrogate, which requires an auxiliary inference model q(s, v|x, y) to estimate the expectation effectively. Maximizing L_{q,p} w.r.t. q drives q towards the posterior p(s, v|x, y) and meanwhile makes L_{q,p} a tighter lower bound of log p(x, y). The expected ELBO E_{p∗(x,y)}[L_{q,p}(x, y)] then drives p(x, y) towards p∗(x, y).
However, the subtlety with supervised learning is that after fitting the data, evaluating p(y|x) for prediction is still hard. We thus propose to employ a model for q(s, v, y|x) instead. The required inference model can then be expressed as q(s, v|x, y) = q(s, v, y|x)/q(y|x), where q(y|x) = ∫ q(s, v, y|x) ds dv. This reformulates the expected ELBO as:
E_{p∗(x,y)}[L_{q,p}(x, y)] = E_{p∗(x)} E_{p∗(y|x)}[log q(y|x)] + E_{p∗(x)} E_{q(s,v,y|x)}[ (p∗(y|x)/q(y|x)) log (p(s, v, x, y)/q(s, v, y|x)) ].    (1)
The first term is the common cross entropy loss (negative) driving q(y|x) towards p∗(y|x). Once this is achieved, the second term becomes the expected ELBO Ep∗(x)[Lq(s,v,y|x),p(x)] that drives q(s, v, y|x) towards p(s, v, y|x) (and p(x) towards p∗(x)). Since the target p(s, v, y|x) admits the factorization p(s, v|x)p(y|s) (since (v, x) ⊥ y|s ) where p(y|s) is already given by the CSG, we can further ease the modeling of q(s, v, y|x) as q(s, v|x)p(y|s). The ELBO is then reformulated as:
L_{q,p}(x, y) = log q(y|x) + (1/q(y|x)) E_{q(s,v|x)}[ p(y|s) log (p(s, v) p(x|s, v) / q(s, v|x)) ],    (2)
where q(y|x) = Eq(s,v|x)[p(y|s)]. The CSG p and q(s, v|x) are to be optimized. The expectations can be estimated by Monte Carlo, and their gradients can be estimated using the reparameterization trick (Kingma & Welling, 2014). When well optimized, q(s, v|x) well approximates p(s, v|x), so q(y|x) then well approximates p(y|x) = Ep(s,v|x)[p(y|s)] for prediction.
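A minimal PyTorch sketch of the objective in Eq. (2) is given below (our own illustration: the network sizes, the Gaussian parameterizations of p(x|s, v) and q(s, v|x), the learnable full-covariance Gaussian prior, and the K-sample Monte Carlo estimate are all assumptions, and the extra cross-entropy term mentioned at the end of this section is omitted).

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, MultivariateNormal

class CSG(nn.Module):
    """Sketch of a CSG and the objective of Eq. (2); all sizes are illustrative."""
    def __init__(self, dx=784, dy=10, ds=16, dv=16, dh=256):
        super().__init__()
        d = ds + dv
        self.ds = ds
        # prior p(s, v): learnable Gaussian with full covariance, so an s-v correlation is allowed
        self.prior_mean = nn.Parameter(torch.zeros(d))
        self.prior_ltri = nn.Parameter(torch.eye(d))
        # causal mechanisms p(x | s, v) and p(y | s) -- invariant across domains
        self.dec = nn.Sequential(nn.Linear(d, dh), nn.ReLU(), nn.Linear(dh, dx))
        self.cls = nn.Sequential(nn.Linear(ds, dh), nn.ReLU(), nn.Linear(dh, dy))
        # inference model q(s, v | x)
        self.enc = nn.Sequential(nn.Linear(dx, dh), nn.ReLU(), nn.Linear(dh, 2 * d))

    def prior(self):
        L = torch.tril(self.prior_ltri, -1) + torch.diag_embed(F.softplus(self.prior_ltri.diagonal()))
        return MultivariateNormal(self.prior_mean, scale_tril=L)

    def elbo(self, x, y, K=8, sigma_x=0.3):
        """Monte Carlo estimate of Eq. (2), averaged over a batch (x: [B, dx], y: [B] long)."""
        mean, log_std = self.enc(x).chunk(2, dim=-1)
        q = Normal(mean, log_std.exp())
        z = q.rsample((K,))                                   # [K, B, ds+dv], reparameterized samples
        s = z[..., :self.ds]
        log_q = q.log_prob(z).sum(-1)                         # log q(s, v | x)
        log_prior = self.prior().log_prob(z)                  # log p(s, v)
        log_px = Normal(self.dec(z), sigma_x).log_prob(x).sum(-1)     # log p(x | s, v)
        p_y_s = F.softmax(self.cls(s), -1).gather(-1, y.expand(K, -1).unsqueeze(-1)).squeeze(-1)
        q_y_x = p_y_s.mean(0)                                 # q(y|x) = E_{q(s,v|x)}[p(y|s)]
        inner = (p_y_s * (log_prior + log_px - log_q)).mean(0)
        return (q_y_x.log() + inner / q_y_x).mean()

# usage: loss = -model.elbo(x_batch, y_batch); loss.backward()
# prediction: p(y|x) ~= E_{q(s,v|x)}[p(y|s)], i.e. the quantity q_y_x above.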
CSG-ind To actively mitigate the spurious s-v correlation from the training domain, we also consider a CSG with an independent prior p⊥(s, v) := p(s)p(v) for prediction in the unknown test domain, where p(s) and p(v) are the marginals of p(s, v). The independent prior p⊥(s, v) encourages the model to stay neutral on the s-v correlation. It has a larger entropy than p(s, v) (Cover & Thomas, 2006, Theorem 2.6.6), so it reduces the information of the training-domain-specific prior. The model then relies more on the invariant generative mechanisms, thus better leverages causal invariance and gives more reliable prediction than that following inference invariance.
For the method, note that the prediction is given by p⊥(y|x) = E_{p⊥(s,v|x)}[p(y|s)], so we use an inference model for q⊥(s, v|x) that approximates p⊥(s, v|x). However, learning on the training domain still requires the original inference model q(s, v|x). To save the cost of building and learning two inference models, we propose to use q⊥(s, v|x) to represent q(s, v|x). Noting that their targets are related by p(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) p⊥(s, v|x), we formulate q(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) q⊥(s, v|x) accordingly, so that this q(s, v|x) achieves its target once q⊥(s, v|x) does. The ELBO then becomes:
L_{q,p}(x, y) = log π(y|x) + (1/π(y|x)) E_{q⊥(s,v|x)}[ (p(s, v)/p⊥(s, v)) p(y|s) log (p⊥(s, v) p(x|s, v) / q⊥(s, v|x)) ],    (3)
where π(y|x) := E_{q⊥(s,v|x)}[ (p(s, v)/p⊥(s, v)) p(y|s) ]. The CSG p and q⊥(s, v|x) are to be optimized (note that p⊥(s, v) is determined by p(s, v) in the CSG p). Prediction is given by p⊥(y|x) ≈ E_{q⊥(s,v|x)}[p(y|s)].
4.2 METHOD FOR DOMAIN ADAPTATION
When unsupervised data is available from an underlying data distribution p̃∗(x) on the test domain, we can leverage it for adaptation. According to the causal invariance Principle 3.2, we only need to adapt for the test-domain prior p̃(s, v) and the corresponding inference model q̃(s, v|x), while the causal mechanisms p(x|s, v) and p(y|s) are not optimized. Adaptation is done by fitting the test
data via maximizing E_{p̃∗(x)}[L_{q̃,p̃}(x)], where the ELBO is in the standard form:
L_{q̃,p̃}(x) = E_{q̃(s,v|x)}[ log ( p̃(s, v) p(x|s, v) / q̃(s, v|x) ) ].    (4)
Prediction is given by p̃(y|x) ≈ E_{q̃(s,v|x)}[p(y|s)]. Similar to the case of CSG-ind, we need q̃(s, v|x) for prediction, but q(s, v|x) is still required for learning on the training domain. When data from both domains are available during learning, we can save the effort of modeling and learning q(s, v|x) using a similar technique. We formulate it using q̃(s, v|x) as q(s, v|x) = (p̃(x)/p(x)) (p(s, v)/p̃(s, v)) q̃(s, v|x), following the same relation between their targets, and the ELBO on the training domain becomes:
L_{q,p}(x, y) = log π(y|x) + (1/π(y|x)) E_{q̃(s,v|x)}[ (p(s, v)/p̃(s, v)) p(y|s) log ( p̃(s, v) p(x|s, v) / q̃(s, v|x) ) ],    (5)
where π(y|x) := E_{q̃(s,v|x)}[ (p(s, v)/p̃(s, v)) p(y|s) ]. The CSG p and q̃(s, v|x) are to be optimized (but not p̃(s, v)). The resulting method, termed CSG-DA, solves both optimizations (4, 5) simultaneously.
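For the adaptation step, a sketch of the test-domain objective in Eq. (4) is below (ours; it reuses the hypothetical CSG class from the earlier sketch, and assumes separate learnable modules new_prior for p̃(s, v) and new_enc for q̃(s, v|x) while the mechanisms stay frozen).

import torch
from torch.distributions import Normal

def adaptation_elbo(model, new_prior, new_enc, x, K=8, sigma_x=0.3):
    """Test-domain ELBO of Eq. (4) for the CSG sketch above (our illustration).

    new_prior() should return a distribution for the test-domain prior p~(s, v), and
    new_enc is the test-domain inference model q~(s, v | x). Only their parameters are
    passed to the optimizer; the causal mechanisms model.dec / model.cls stay frozen.
    """
    mean, log_std = new_enc(x).chunk(2, dim=-1)
    q = Normal(mean, log_std.exp())
    z = q.rsample((K,))                                       # [K, B, ds+dv]
    log_q = q.log_prob(z).sum(-1)
    log_prior = new_prior().log_prob(z)                       # log p~(s, v)
    log_px = Normal(model.dec(z), sigma_x).log_prob(x).sum(-1)    # shared mechanism p(x | s, v)
    return (log_prior + log_px - log_q).mean()

# Test-domain prediction: p~(y|x) ~= E_{q~(s,v|x)}[p(y|s)], evaluated with model.cls on the
# s-part of samples from new_enc.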
For implementing the three methods, note that only one inference model is required in each case. Supplement E.2 shows its implementation from a general discriminative model (e.g., how to select its hidden nodes as s and v). In practice x often has a much larger dimension than y, making the supervised part of the training-domain ELBO (i.e., the first term in its formulation Eq. (1)) scale smaller than the unsupervised part. So we include an additional cross entropy loss in the objectives.
5 THEORY
We now establish guarantees for the methods on identifying the semantic factor and the subsequent merits for OOD generalization and domain adaptation. We only consider the infinite-data regime, so as to isolate the errors studied here from the additional source of error due to finite data. Supplement A shows all the proofs. Identifiability is hard to achieve for latent variable models (Koopmans & Reiersol, 1950; Murphy, 2012; Yacoby et al., 2019; Locatello et al., 2019), since it is a task beyond modeling observational relations (Janzing et al., 2009; Peters et al., 2017). Assumptions are required to draw definite conclusions.
Assumption 5.1 (additive noise). There exist nonlinear functions f and g with bounded derivatives up to third-order, and independent random variables µ and ν, such that p(x|s, v) = pµ(x− f(s, v)), and p(y|s) = pν(y − g(s)) for continuous y or p(y|s) = Cat(y|g(s)) for categorical y.
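Assumption 5.1 translates directly into a simple parameterization of the CSG mechanisms. The sketch below is a minimal illustration with an assumed Gaussian pµ and assumed layer sizes; it is not a prescription from the paper.

```python
import torch
import torch.nn as nn

class AdditiveNoiseCSG(nn.Module):
    """p(x|s,v) = N(x | f(s,v), sigma_mu^2 I) and p(y|s) = Cat(y | g(s)), as in Assumption 5.1."""
    def __init__(self, d_s=32, d_v=32, d_x=784, n_classes=10, sigma_mu=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(d_s + d_v, 256), nn.ReLU(), nn.Linear(256, d_x))
        self.g = nn.Sequential(nn.Linear(d_s, 64), nn.ReLU(), nn.Linear(64, n_classes))
        self.sigma_mu = sigma_mu

    def log_px(self, x, s, v):
        # log p(x|s,v): Gaussian additive noise around the nonlinear mean f(s, v)
        mean = self.f(torch.cat([s, v], dim=-1))
        return torch.distributions.Normal(mean, self.sigma_mu).log_prob(x).sum(-1)

    def log_py(self, y, s):
        # log p(y|s) for categorical y, parameterized by g(s)
        return torch.distributions.Categorical(logits=self.g(s)).log_prob(y)
```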
This structure rules out describing a bivariate joint distribution in both generating directions (Zhang & Hyvärinen (2009, Theorem 8), Peters et al. (2014, Proposition 23)), and is widely adopted in directed causal discovery (Janzing et al., 2009; Bühlmann et al., 2014). CSG needs this since the causal direction should be exclusive. It is also easy to implement with deep models (Kingma & Welling, 2014), so it does not essentially restrict model capacity.
Assumption 5.2 (bijectivity). Function f is bijective and g is injective.
It is a common assumption for identifiability (Janzing et al., 2009; Shalit et al., 2017; Khemakhem et al., 2019; Lee et al., 2019). Under Assumption 5.1, it is a sufficient condition (Peters et al., 2014, Proposition 17; Peters et al., 2017, Proposition 7.4) of causal minimality (Peters et al., 2014, p.2012; Peters et al., 2017, Definition 6.33), a fundamental requirement for identifiability (Peters et al., 2014, Proposition 7; Peters et al., 2017, p.109). Particularly, s and v are otherwise allowed to have dummy dimensions that f and g simply ignore, raising another ambiguity against identifiability. On the other hand, according to the commonly acknowledged manifold hypothesis (Weinberger & Saul, 2006; Fefferman et al., 2016) that data tends to lie on a lower-dimensional manifold embedded in the data space, we can take X as the manifold and such a bijection exists as a coordinate map, which is an injection to the original data space (thus allowing dS + dV < dX ).
5.1 IDENTIFIABILITY THEORY
We first formalize the goal of identifying the semantic factor.
Definition 5.3 (semantic-equivalence). We say two CSGs p and p′ are semantic-equivalent, if there exists a homeomorphism² Φ on S × V , such that (i) its output dimensions in S are constant of v: ΦS(s, v) = ΦS(s) for any v ∈ V , and (ii) it acts as a reparameterization from p to p′: Φ#[ps,v] = p′s,v, p(x|s, v) = p′(x|Φ(s, v)), and p(y|s) = p′(y|ΦS(s)).
²A transformation is a homeomorphism if it is a continuous bijection with continuous inverse.
It is an equivalence relation if V is connected and is either open or closed in RdV (Supplement A.1). Here, Φ#[ps,v] denotes the pushed-forward distribution³ by Φ, i.e. the distribution of the transformed random variable Φ(s, v) when (s, v) ∼ ps,v. As a reparameterization, Φ allows the two models to have different latent-variable parameterizations while inducing the same distribution on the observed data variables (x, y) (Supplement Lemma A.2). At the heart of the definition, the v-constancy of ΦS implies that Φ is semantic-preserving: one model does not mix the other's v into its s, so that the s variables of both models hold equivalent information.
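As a toy illustration of Definition 5.3 (our own example, not one from the paper), consider S = V = ℝ and the reparameterization below.

```latex
% Illustrative semantic-preserving reparameterization on S x V = R x R (assumed example).
\Phi(s, v) = \big(\Phi_{\mathcal{S}}(s),\, \Phi_{\mathcal{V}}(s, v)\big) = (2s,\; v + s),
\qquad
\Phi^{-1}(s', v') = \big(s'/2,\; v' - s'/2\big).
% Define p' by pushing the prior forward and composing the mechanisms with \Phi^{-1}:
p'_{s,v} = \Phi_{\#}[p_{s,v}], \qquad
p'(x \mid s', v') = p\big(x \mid \Phi^{-1}(s', v')\big), \qquad
p'(y \mid s') = p(y \mid s'/2).
% Then p(x|s,v) = p'(x|\Phi(s,v)) and p(y|s) = p'(y|\Phi_{\mathcal{S}}(s)), and since
% \Phi_{\mathcal{S}}(s,v) = 2s does not depend on v, the two CSGs are semantic-equivalent:
% they parameterize the same semantics differently without mixing v into s.
```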
We say that a learned CSG p identifies the semantic factor if it is semantic-equivalent to the groundtruth CSG p∗. This identification cannot be characterized by the statistical independence between s and v (as in Cai et al. (2019); Ilse et al. (2019); Zhang et al. (2020)), which is not sufficient (Locatello et al., 2019) nor necessary (due to the existence of spurious correlation). Another related concept is disentanglement. It requires that a semantic transformation on x changes the learned s only (Higgins et al., 2018; Besserve et al., 2020), while the identification here does not require the learned v to be constant of the ground-truth s.
To identify the semantic factor, the ground-truth model could at most provide its information via the data distribution p∗(x, y). Although semantic-equivalent CSGs induce the same distribution on (x, y), the inverse is nontrivial. The following theorem shows that the semantic-identifiability can be achieved under appropriate conditions.
Theorem 5.4 (semantic-identifiability). With Assumptions 5.1 and 5.2, a well-learned CSG p with p(x, y) = p∗(x, y) is semantic-equivalent to the ground-truth CSG p∗, if log p(s, v) and log p∗(s, v) have bounded derivatives up to the second order, and that⁴ (i) 1/σ²µ → ∞ where σ²µ := E[µ⊤µ], or (ii) pµ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution).
Remarks. (1) The requirement on p(s, v) and p∗(s, v) excludes extreme training data that show a deterministic s-v relation, which makes the (s, v) density functions unbounded and discontinuous. In that case (e.g., all desks appear in workspace and all beds in bedrooms), one cannot tell whether the label y is caused by s (e.g., the shape) or by v (e.g., the background).
(2) In condition (i), 1/σ²µ measures the intensity of the causal mechanism p(x|s, v). A strong p(x|s, v) helps disambiguating values of (s, v) in generating a given x. The condition makes p(x|s, v) so strong that it is almost deterministic and invertible, so inference invariance also holds (Section 3.2). Supplement A.2 provides a quantitative reference of large intensity for a practical consideration, and Supplement B gives a non-asymptotic extension showing how the intensity trades off the tolerance of equalities in Definition 5.3. Condition (ii) covers more than inference invariance. It roughly implies that different values of (s, v) a.s. produce different distributions p(x|s, v) on X , so their roles in generating x become clear, which helps identification.
(3) The theorem does not contradict the impossibility result by Locatello et al. (2019), which considers disentangling each latent dimension with an unconstrained (s, v)→ (x, y), while we identify s as a whole with the edge v → y removed which breaks the s-v symmetry.
5.2 OOD GENERALIZATION THEORY
The causal invariance Principle 3.2 forms the ground-truth CSG on the test domain as p̃∗ = (p̃∗(s, v), p∗(x|s, v), p∗(y|s)) with the new ground-truth prior p̃∗(s, v), which gives the optimal predictor Ẽ∗[y|x]⁵ on the test domain. The principle also leads to the invariance of identified causal mechanisms, which shows that the OOD generalization error of a CSG is bounded:
Theorem 5.5 (OOD generalization error). With Assumptions 5.1 and 5.2, for a semantically-identified CSG p on the training domain with reparameterization Φ, we have, up to O(σ²µ), for any x ∈ supp(px) ∩ supp(p̃∗x),
|E[y|x] − Ẽ∗[y|x]| ≤ σ²µ ‖∇g(s)‖₂ ‖Jf⁻¹(x)‖₂² ‖∇ log(p(s, v)/p̃(s, v))‖₂ |(s,v)=f⁻¹(x),   (6)
where supp denotes the support of a distribution, Jf⁻¹ is the Jacobian matrix of f⁻¹, and p̃s,v := Φ#[p̃∗s,v] is the test-domain prior under the parameterization of the identified CSG p.⁶
³The definition of Φ#[ps,v] requires Φ to be measurable. This is satisfied by the continuity of Φ as a homeomorphism (as long as the considered σ-field is the Borel σ-field) (Billingsley, 2012, Theorem 13.2).
⁴To be precise, the semantic-equivalent conclusions are that the equalities in Definition 5.3 hold asymptotically in the limit 1/σ²µ → ∞ for condition (i), and hold a.e. for condition (ii).
⁵For categorical y, the expectation of y is taken under the one-hot representation.
⁶The 2-norm ‖·‖₂ for matrices refers to the induced operator norm (not the Frobenius norm).
The result shows that when the causal mechanism p(x|s, v) is strong, especially in the extreme case σµ = 0 where inference invariance also holds, it dominates prediction over the prior and the generalization error diminishes. In more general cases where only causal invariance holds, the prior change deviates the prediction rule. The prior-change term ‖∇ log(p(s, v)/p̃(s, v))‖2 measures the hardness or severity of OOD. It diminishes in IID cases, and makes the bound lose its effect when the two priors do not share their support. Using a CSG to fit training data enforces causal invariance and other assumptions, so its E[y|x] behaves more faithfully in low p∗(x) area and the boundedness becomes more plausible in practice. CSG-ind further actively uses an independent prior whose larger support covers more p̃s,v candidates.
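To get a feel for the scale of the bound in Eq. (6), here is a purely illustrative plug-in with assumed constants (not values from the paper):

```latex
% Illustrative plug-in of Eq. (6) with assumed constants.
\sigma_\mu^2 = 10^{-2},\quad
\|\nabla g(s)\|_2 \le 2,\quad
\|J_{f^{-1}}(x)\|_2 \le 3,\quad
\|\nabla \log \big(p(s,v)/\tilde p(s,v)\big)\|_2 \le 5
\;\Longrightarrow\;
\big|\mathbb{E}[y|x] - \tilde{\mathbb{E}}^*[y|x]\big|
\;\lesssim\; 10^{-2} \cdot 2 \cdot 3^2 \cdot 5 = 0.9 .
% A stronger mechanism (smaller sigma_mu^2) or a milder prior change shrinks the error proportionally.
```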
5.3 DOMAIN ADAPTATION THEORY
In cases of a weak causal mechanism or a violent prior change, the new ground-truth prior p̃∗s,v is important for prediction. The domain adaptation method learns a new prior p̃s,v by fitting unsupervised test-domain data, with the causal mechanisms shared. Once the mechanisms are identified, p̃∗s,v can also be identified under the learned parameterization, and prediction can be made precise.
Theorem 5.6 (domain adaptation error). Under the conditions of Theorem 5.4, for a semantically-identified CSG p on the training domain with reparameterization Φ, if its new prior p̃s,v for the test domain is well-learned with p̃(x) = p̃∗(x), then p̃s,v = Φ#[p̃∗s,v], and Ẽ[y|x] = Ẽ∗[y|x] for any x ∈ supp(p̃∗x).
Different from existing domain adaptation bounds (Supplement D), Theorems 5.5 and 5.6 allow different inference models in the two domains, thus go beyond inference invariance.
6 EXPERIMENTS
For baselines of OOD generalization, apart from the conventional supervised learning optimizing cross entropy (CE), we also consider a causal discriminative method CNBB (He et al., 2019), and a generative method supervised VAE (sVAE) which is a counterpart of CSG that does not separate its latent variable into s and v. For domain adaptation, we consider well-acknowledged DANN (Ganin et al., 2016), DAN (Long et al., 2015) and CDAN (Long et al., 2018) methods implemented in the dalib package (Jiang et al., 2020), and also sVAE using a similar method as CSG-DA. All methods share the same optimization setup. We align the scale of the CE term in the objectives of all methods, and tune their hyperparameters to lie on the margin that makes the final accuracy near 1 on a validation set from the training domain. See Supplement F for details.
6.1 SHIFTED MNIST
We consider an OOD prediction task on MNIST to classify digits “0” and “1”. In the training data, “0”s are horizontally shifted at random by δ pixels with δ ∼ N(−5, 1²), and “1”s by δ ∼ N(5, 1²) pixels. We consider two test domains where the digits are either not shifted, or shifted by δ ∼ N(0, 2²) pixels. Both domains have balanced classes. We implement all methods using multilayer perceptrons, which are not naturally shift-invariant. We use a larger architecture for discriminative and domain adaptation methods to compensate for the additional generative components of generative methods.
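A hedged sketch of the training-domain construction described above (our own illustrative re-implementation with numpy/scipy; the authors' data pipeline may differ):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def make_shifted_mnist(images, labels, rng=np.random.default_rng(0)):
    """Horizontally shift digit '0' by d ~ N(-5, 1^2) pixels and digit '1' by d ~ N(5, 1^2),
    creating the spurious position-label correlation of the training domain."""
    shifted = []
    for img, lab in zip(images, labels):
        mean = -5.0 if lab == 0 else 5.0
        d = rng.normal(mean, 1.0)
        # shift columns by d pixels (bilinear interpolation, zero padding)
        shifted.append(nd_shift(img, shift=(0, d), order=1, mode='constant'))
    return np.stack(shifted), labels
```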
The OOD performance is shown in Table 1. For OOD generalization, CSG gives more genuine predictions in unseen domains, thanks to the identification of the semantic factor. CSG-ind performs even better, demonstrating the merit of approaching a CSG with an independent prior. Other methods are more significantly misled by the position factor from the spurious correlation. CNBB ameliorates the position bias, but not as thoroughly without explicit structures for causal mechanisms. CSG also outperforms sVAE, showing the benefit of separating semantics from variation and modeling the variation explicitly, so the model could consciously drive semantic representation into s. For domain adaptation, existing methods differ a lot, and it is hard for them to perform well on both test domains. When identification fails, adaptation sometimes even worsens the result, as the misleading representation based on position gets strengthened on the unsupervised test data. CSG benefits from adaptation by leveraging test data in a proper way that identifies the semantics.
6.2 IMAGECLEF-DA
ImageCLEF-DA (ima, 2014) is a standard benchmark dataset for the ImageCLEF 2014 domain adaptation challenge. We select a pair of adaptation tasks between two of its domains: Caltech-256 and Pascal VOC 2012. Each domain has 12 classes and 600 images, and the two domains follow different distributions. We adopt the same setup as in Long et al. (2018), including the ResNet50 structure (He et al., 2016) pretrained on ImageNet as the backbone of the discriminative/inference model. For generative methods, we leverage the DCGAN generator (Radford et al., 2015) pretrained on Cifar10.
Table 2 shows the results. We see that CSG(-ind) achieves the best OOD generalization result, and performs comparably with modern domain adaptation methods. On this task, the underlying causal mechanism may be very noisy (e.g., photos taken indoors and outdoors both count for the aircraft class), making identification hard, so CSG-DA does not make a salient improvement.
7 CONCLUSION AND DISCUSSION
We tackle OOD generalization and domain adaptation tasks by proposing a Causal Semantic Generative model (CSG), which builds upon causal reasoning, and models semantic and variation factors separately while allowing their correlation. Using the invariance principle of causality, we develop effective and delicate methods for learning, adaptation and prediction, and prove the identification of the semantic factor, the boundedness of OOD generalization error, and the success of adaptation under appropriate conditions. Experiments show the improved performance in both tasks.
The consideration of separating semantics from variation extends to broader examples regarding robustness. Convolutional neural networks are found to change their predictions under a different texture but the same shape (Geirhos et al., 2019; Brendel & Bethge, 2019). Adversarial vulnerability (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016) extends variation factors to human-imperceptible features, i.e. the adversarial noise, which is shown to have a strong spurious correlation with semantics (Ilyas et al., 2019). The separation also matters for fairness, where a sensitive variation factor may change the prediction due to a spurious correlation. Our methods are potentially beneficial in these examples.
A PROOFS
We first introduce some handy concepts and results to make the proof succinct. We begin with extended discussions on CSG.
Definition A.1. A homeomorphism Φ on S × V is called a reparameterization from CSG p to CSG p′, if Φ#[ps,v] = p′s,v , and p(x|s, v) = p′(x|Φ(s, v)) and p(y|s) = p′(y|ΦS(s, v)) for any (s, v) ∈ S ×V . A reparameterization Φ is called to be semantic-preserving, if its output dimensions in S is constant of v: ΦS(s, v) = ΦS(s) for any v ∈ V .
Note that a reparameterization unnecessarily has its output dimensions in S, i.e. ΦS(s, v), constant of v. The condition that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V does not indicate that ΦS(s, v) is constant of v, since p′(y|s′) may ignore the change of s′ = ΦS(s, v) from the change of v. The following lemma shows the meaning of a reparameterization: it allows a CSG to vary while inducing the same distribution on the observed data variables (x, y) (i.e., holding the same effect on describing data).
Lemma A.2. If there exists a reparameterization Φ from CSG p to CSG p′, then p(x, y) = p′(x, y).
Proof. By the definition of a reparameterization, we have:
p(x, y) = ∫ p(s, v)p(x|s, v)p(y|s) dsdv = ∫ Φ⁻¹#[p′s,v](s, v) p′(x|Φ(s, v)) p′(y|ΦS(s, v)) dsdv = ∫ p′s,v(s′, v′) p′(x|s′, v′) p′(y|s′) ds′dv′ = p′(x, y),
where we used variable substitution (s′, v′) := Φ(s, v) in the second-last equality. Note that by the definition of pushed-forward distribution and the bijectivity of Φ, Φ#[ps,v] = p′s,v implies ps,v = Φ⁻¹#[p′s,v], and ∫ f(s′, v′) p′s,v(s′, v′) ds′dv′ = ∫ f(Φ(s, v)) Φ⁻¹#[p′s,v](s, v) dsdv (can also be verified deductively using the rule of change of variables, i.e. Lemma A.4 in the following).
The definition of semantic-equivalence (Definition 5.3) can be rephrased by the existence of a semantic-preserving reparameterization. With appropriate model assumptions, we can show that any reparameterization between two CSGs is semantic-preserving, so that semantic-preserving CSGs cannot be converted to each other by a reparameterization that mixes s with v.
Lemma A.3. For two CSGs p and p′, if p′(y|s) has a statistics M ′(s) that is an injective function of s, then any reparameterization Φ from p to p′, if exists, has its ΦS constant of v.
Proof. Let Φ = (ΦS, ΦV) be any reparameterization from p to p′. Then the condition that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V indicates that M(s) = M′(ΦS(s, v)). If there exist s ∈ S and v(1) ≠ v(2) ∈ V such that ΦS(s, v(1)) ≠ ΦS(s, v(2)), then M′(ΦS(s, v(1))) ≠ M′(ΦS(s, v(2))) since M′ is injective. This violates M(s) = M′(ΦS(s, v)), which requires both M′(ΦS(s, v(1))) and M′(ΦS(s, v(2))) to be equal to M(s). So ΦS(s, v) must be constant of v.
We then introduce two mathematical facts.
Lemma A.4 (rule of change of variables). Let z be a random variable on a Euclidean space RdZ with density function pz(z), and let Φ be a homeomorphism on RdZ whose inverse Φ⁻¹ is differentiable. Then the distribution of the transformed random variable z′ = Φ(z) has a density function Φ#[pz](z′) = pz(Φ⁻¹(z′)) |JΦ⁻¹(z′)|, where |JΦ⁻¹(z′)| denotes the absolute value of the determinant of the Jacobian matrix (JΦ⁻¹(z′))ia := ∂(Φ⁻¹)a(z′)/∂z′i of Φ⁻¹ at z′.
Proof. See e.g., Billingsley (2012, Theorem 17.2). Note that a homeomorphism is (Borel) measurable since it is continuous (Billingsley, 2012, Theorem 13.2), so the definition of Φ#[pz] is valid.
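As a quick numerical sanity check of Lemma A.4 (purely illustrative; the transformation Φ(z) = z³ + z and the probe points are our own choices):

```python
import numpy as np

# Check: if z ~ N(0,1) and z' = Phi(z) with Phi(z) = z^3 + z (strictly increasing, hence a
# homeomorphism on R), the density of z' should equal p_z(Phi^{-1}(z')) * |d Phi^{-1}/d z'|.
rng = np.random.default_rng(0)
z = rng.normal(size=500_000)
z_prime = z**3 + z

# Empirical density of z' near a few probe points, from a histogram.
probes = np.array([-2.0, 0.0, 1.0, 3.0])
hist, edges = np.histogram(z_prime, bins=400, range=(-10, 10), density=True)
empirical = hist[np.searchsorted(edges, probes) - 1]

# Formula density: invert Phi by solving z^3 + z = t (unique real root), and use
# |d Phi^{-1}/d t| = 1 / (3 z^2 + 1) evaluated at that root.
def inv_phi(t):
    r = np.roots([1.0, 0.0, 1.0, -t])
    return r[np.abs(r.imag) < 1e-9][0].real

formula = []
for t in probes:
    zi = inv_phi(t)
    formula.append(np.exp(-zi**2 / 2) / np.sqrt(2 * np.pi) / (3 * zi**2 + 1))

print(np.round(empirical, 3), np.round(np.array(formula), 3))  # the two should roughly agree
```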
Lemma A.5. Let µ be a random variable whose characteristic function is a.e. non-zero. For two functions f and f′ on the same space, we have: f ∗ pµ = f′ ∗ pµ ⇐⇒ f = f′ a.e., where (f ∗ pµ)(x) := ∫ f(µ) pµ(x − µ) dµ denotes convolution.
Proof. The function equality f ∗ pµ = f ′ ∗ pµ leads to the equality under Fourier transformation F [f ∗pµ] = F [f ′ ∗pµ], which gives F [f ]F [pµ] = F [f ′]F [pµ]. Since F [pµ] is the characteristic function of pµ, the condition that it is a.e. non-zero indicates that F [f ] = F [f ′] a.e. thus f = f ′ a.e. See also Khemakhem et al. (2019, Theorem 1).
A.1 PROOF OF THE EQUIVALENCE RELATION
Proposition A.6. The semantic-equivalence defined in Definition 5.3 is an equivalence relation if V is connected and is either open or closed in RdV .
Proof. Let Φ be a semantic-preserving reparameterization from one CSG p = (p(s, v), p(x|s, v), p(y|s)) to another p′ = (p′(s, v), p′(x|s, v), p′(y|s)). It has its ΦS constant of v, so we can write Φ(s, v) = (ΦS(s),ΦV(s, v)) =: (φ(s), ψs(v)).
(1) We first show that φ, and ψs for any s ∈ S, are homeomorphisms on S and V, respectively, and that Φ⁻¹(s′, v′) = (φ⁻¹(s′), ψ_{φ⁻¹(s′)}⁻¹(v′)).
• Since Φ(S × V) = S × V, so φ(S) = ΦS(S) = S, so φ is surjective.
• Suppose that there exists s′ ∈ S such that φ⁻¹(s′) = {s(i)}_{i∈I} contains multiple distinct elements.
1. Since Φ is surjective, for any v′ ∈ V, there exist i ∈ I and v ∈ V such that (s′, v′) = Φ(s(i), v) = (φ(s(i)), ψ_{s(i)}(v)), which means that ⋃_{i∈I} ψ_{s(i)}(V) = V.
2. Since Φ is injective, the sets {ψ_{s(i)}(V)}_{i∈I} must be mutually disjoint. Otherwise, there would exist i ≠ j ∈ I and v(1), v(2) ∈ V such that ψ_{s(i)}(v(1)) = ψ_{s(j)}(v(2)), thus Φ(s(i), v(1)) = (s′, ψ_{s(i)}(v(1))) = (s′, ψ_{s(j)}(v(2))) = Φ(s(j), v(2)), which violates the injectivity of Φ since s(i) ≠ s(j).
3. In the case where V is open, then so is any ψ_{s(i)}(V) = Φ(s(i), V) since Φ is continuous. But the union of disjoint open sets ⋃_{i∈I} ψ_{s(i)}(V) = V cannot be connected. This violates the condition that V is connected.
4. A similar argument holds in the case where V is closed.
So φ⁻¹(s′) contains only one unique element for any s′ ∈ S. So φ is injective.
• The above argument also shows that for any s′ ∈ S, we have ⋃_{i∈I} ψ_{s(i)}(V) = ψ_{φ⁻¹(s′)}(V) = V. For any s ∈ S, there exists s′ ∈ S such that s = φ⁻¹(s′), so we have ψs(V) = V. So ψs is surjective for any s ∈ S.
• Suppose that there exist v(1) ≠ v(2) ∈ V such that ψs(v(1)) = ψs(v(2)). Then Φ(s, v(1)) = (φ(s), ψs(v(1))) = (φ(s), ψs(v(2))) = Φ(s, v(2)), which contradicts the injectivity of Φ since v(1) ≠ v(2). So ψs is injective for any s ∈ S.
• That Φ is continuous and Φ(s, v) = (φ(s), ψs(v)) indicates that φ and ψs are continuous. For any (s′, v′) ∈ S × V, we have Φ(φ⁻¹(s′), ψ_{φ⁻¹(s′)}⁻¹(v′)) = (φ(φ⁻¹(s′)), ψ_{φ⁻¹(s′)}(ψ_{φ⁻¹(s′)}⁻¹(v′))) = (s′, v′). Applying Φ⁻¹ to both sides gives Φ⁻¹(s′, v′) = (φ⁻¹(s′), ψ_{φ⁻¹(s′)}⁻¹(v′)).
• Since Φ⁻¹ is continuous, φ⁻¹ and ψs⁻¹ are also continuous.
(2) We now show that the relation is an equivalence relation. It amounts to showing the following three properties.
• Reflexivity. For two identical CSGs, we have p(s, v) = p′(s, v), p(x|s, v) = p′(x|s, v) and p(y|s) = p′(y|s). So the identity map as Φ obviously satisfies all the requirements. • Symmetry. Let Φ be a semantic-preserving reparameterization from p = (p(s, v), p(x|s, v), p(y|s)) to p′ = (p′(s, v), p′(x|s, v), p′(y|s)). From the above conclusion in (1), we know that (Φ−1)S(s′, v′) = φ−1(s′) is semantic-preserving. Also, Φ−1 is a homeomorphism on S × V since Φ is. So we only need to show that Φ−1 is a reparameterization from p′ to p for symmetry.
1. From the definition of pushed-forward distribution, we have Φ⁻¹#[p′s,v] = ps,v if Φ#[ps,v] = p′s,v. It can also be verified through the rule of change of variables (Lemma A.4) when Φ and Φ⁻¹ are differentiable. From Φ#[ps,v] = p′s,v, we have for any (s′, v′), ps,v(Φ⁻¹(s′, v′)) |JΦ⁻¹(s′, v′)| = p′s,v(s′, v′). Since for any (s, v) there exists (s′, v′) such that (s, v) = Φ⁻¹(s′, v′), this implies that for any (s, v), ps,v(s, v) |JΦ⁻¹(Φ(s, v))| = p′s,v(Φ(s, v)), or ps,v(s, v) = p′s,v(Φ(s, v)) / |JΦ⁻¹(Φ(s, v))| = p′s,v(Φ(s, v)) |JΦ(s, v)| (inverse function theorem), which means that ps,v = Φ⁻¹#[p′s,v] by the rule of change of variables.
2. For any (s′, v′), there exists (s, v) such that (s′, v′) = Φ(s, v), so p′(x|s′, v′) = p′(x|Φ(s, v)) = p(x|s, v) = p(x|Φ⁻¹(s′, v′)), and p′(y|s′) = p′(y|ΦS(s)) = p(y|s) = p(y|(Φ⁻¹)S(s′)). So Φ⁻¹ is a reparameterization from p′ to p.
• Transitivity. Given a third CSG p′′ = (p′′(s, v), p′′(x|s, v), p′′(y|s)) that is semantic-equivalent to p′, there exists a semantic-preserving reparameterization Φ′ from p′ to p′′. It is easy to see that (Φ′ ◦ Φ)S(s, v) = Φ′S(ΦS(s, v)) = Φ′S(ΦS(s)) is constant of v, thus semantic-preserving. As the composition of two homeomorphisms Φ and Φ′ on S × V, Φ′ ◦ Φ is also a homeomorphism. So we only need to show that Φ′ ◦ Φ is a reparameterization from p to p′′ for transitivity.
1. From the definition of pushed-forward distribution, we have (Φ′ ◦ Φ)#[ps,v] = Φ′#[Φ#[ps,v]] = Φ′#[p′s,v] = p′′s,v if Φ#[ps,v] = p′s,v and Φ′#[p′s,v] = p′′s,v. It can also be verified through the rule of change of variables (Lemma A.4) when Φ⁻¹ and Φ′⁻¹ are differentiable. For any (s′′, v′′), we have
(Φ′ ◦ Φ)#[ps,v](s′′, v′′) = ps,v((Φ′ ◦ Φ)⁻¹(s′′, v′′)) |J(Φ′◦Φ)⁻¹(s′′, v′′)|
= ps,v(Φ⁻¹(Φ′⁻¹(s′′, v′′))) |JΦ⁻¹(Φ′⁻¹(s′′, v′′))| |JΦ′⁻¹(s′′, v′′)|
= Φ#[ps,v](Φ′⁻¹(s′′, v′′)) |JΦ′⁻¹(s′′, v′′)| = p′s,v(Φ′⁻¹(s′′, v′′)) |JΦ′⁻¹(s′′, v′′)| = Φ′#[p′s,v](s′′, v′′) = p′′s,v(s′′, v′′).
2. For any (s, v), we have: p(x|s, v) = p′(x|Φ(s, v)) = p′′(x|Φ′(Φ(s, v))) = p′′(x|(Φ′ ◦ Φ)(s, v)), p(y|s) = p′(y|ΦS(s)) = p′′(y|Φ′S(ΦS(s))) = p′′(y|(Φ′ ◦ Φ)S(s)).
So Φ′ ◦ Φ is a reparameterization from p to p′′. This completes the proof for an equivalence relation.
A.2 PROOF OF THE SEMANTIC-IDENTIFIABILITY THEOREM 5.4
We present a more general and detailed version of Theorem 5.4 and prove it. The theorem in the main context corresponds to conclusions (ii) and (i) below by taking the two CSGs p′ and p as the well-learned p and the ground-truth CSGs p∗, respectively.
Theorem 5.4’ (semantic-identifiability). Consider CSGs p and p′ that have Assumptions 5.1 and 5.2 hold, with the bounded derivative conditions specified to be that for both CSGs, f⁻¹ and g are twice and f thrice differentiable with the mentioned derivatives bounded. Further assume that their priors have bounded densities and their log p(s, v) have bounded derivatives up to the second order. If the two CSGs have p(x, y) = p′(x, y), then they are semantic-equivalent, under the conditions that:⁷
(i) pµ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution);
(ii) 1/σ²µ → ∞, where σ²µ := E[µ⊤µ];
(iii) 1/σ²µ ≫ B′²_{f⁻¹} max{ B′_{log p}B′_g + (1/2)B″_g + (3/2)d B′_{f⁻¹}B″_f B′_g,  B_p B′^d_{f⁻¹}(B′²_{log p} + B″_{log p} + 3d B′_{f⁻¹}B″_f B′_{log p} + 3d^(3/2) B′²_{f⁻¹}B″²_f + d³ B‴_f B′_{f⁻¹}) },
where d := dS + dV, and for both CSGs, the constant B_p bounds p(s, v), B′_{f⁻¹}, B′_g, B′_{log p} and B″_f, B″_g, B″_{log p} bound the 2-norms⁸ of the gradient/Jacobian and the Hessians of the respective functions, and B‴_f bounds all the 3rd-order derivatives of f.
⁷To be precise, the conclusions are that the equalities in Definition 5.3 hold a.e. for condition (i), hold asymptotically in the limit 1/σ²µ → ∞ for condition (ii), and hold up to a negligible quantity for condition (iii).
8As an induced operator norm for matrices (not the Frobenius norm).
Proof. Without loss of generality, we assume that µ and ν (for continuous y) have zero mean. If it is not, we can redefine f(s, v) := f(s, v) + E[µ] and µ := µ − E[µ] (similarly for ν for continuous y) which does not alter the joint distribution p(s, v, x, y) nor violates any assumptions. Also without loss of generality, we consider one scalar component (dimension) l of y, and abuse the use of symbols y and g for yl and gl to avoid unnecessary complication. Note that for continuous y, due to the additive noise structure y = g(s)+ν and that ν has zero mean, we also have E[y|s] = g(s) as the same as the categorical y case (under the one-hot representation). We sometimes denote z := (s, v) for convenience.
First note that for both CSGs and both continuous and categorical y, by construction g(s) is a sufficient statistic of p(y|s) (not only the expectation E[y|s]), and it is injective. So by Lemma A.3, we only need to show that there exists a reparameterization from p to p′. We will show that Φ := f′⁻¹ ◦ f is such a reparameterization. Since f and f′ are bijective and continuous, we have Φ⁻¹ = f⁻¹ ◦ f′, so Φ is bijective and Φ and Φ⁻¹ are continuous. So Φ is a homeomorphism. Also, by construction, we have:
p(x|z) = pµ(x − f(z)) = pµ(x − f′(f′⁻¹(f(z)))) = pµ(x − f′(Φ(z))) = p′(x|Φ(z)).   (7)
So we only need to show that p(x, y) = p′(x, y) indicates Φ#[pz] = p′z and p(y|s) = p′(y|ΦS(s, v)), ∀v ∈ V, under the conditions.
Proof under condition (i). We begin with a useful reformulation of the integral ∫ t(z)p(x|z) dz for a general function t of z; we will encounter integrals in this form. By Assumption 5.1, we have p(x|z) = pµ(x − f(z)), so we consider a transformation Ψx(z) := x − f(z) and let µ = Ψx(z). It is invertible, Ψx⁻¹(µ) = f⁻¹(x − µ), and JΨx⁻¹(µ) = −Jf⁻¹(x − µ). By these definitions and the rule of change of variables, we have:
∫ t(z)p(x|z) dz = ∫ t(z)pµ(Ψx(z)) dz = ∫ t(Ψx⁻¹(µ)) p(µ) |JΨx⁻¹(µ)| dµ = ∫ t(f⁻¹(x − µ)) p(µ) |Jf⁻¹(x − µ)| dµ
= Ep(µ)[(t̄V)(x − µ)]   (8)
= (f#[t] ∗ pµ)(x),   (9)
where we have denoted functions t̄ := t ◦ f⁻¹, V := |Jf⁻¹|, and abused the push-forward notation f#[t] for a general function t to formally denote (t ◦ f⁻¹)|Jf⁻¹| = t̄V.
According to the graphical structure of CSG, we have:
p(x) = ∫ p(z)p(x|z) dz,   (10)
E[y|x] = (1/p(x)) ∫ y p(x, y) dy = (1/p(x)) ∫∫ y p(z)p(x|z)p(y|s) dz dy = (1/p(x)) ∫ p(z)p(x|z)E[y|s] dz = (1/p(x)) ∫ g(s)p(z)p(x|z) dz.   (11)
So from Eq. (9), we have:
p(x) = (f#[pz] ∗ pµ)(x),   E[y|x] = (1/p(x)) (f#[g pz] ∗ pµ)(x).   (12)
Matching the data distribution p(x, y) = p′(x, y) indicates both p(x) = p′(x) and E[y|x] = E′[y|x]. Using Lemma A.5 under condition (i), this further indicates f#[pz] = f′#[p′z] a.e. and f#[g pz] = f′#[g′ p′z] a.e. The former gives Φ#[pz] = p′z. The latter can be reformed as ḡ f#[pz] = ḡ′ f′#[p′z] a.e., so ḡ = ḡ′ a.e., where we have denoted ḡ := g ◦ (f⁻¹)S and ḡ′ := g′ ◦ (f′⁻¹)S similarly. From ḡ = ḡ′, we have for any v ∈ V,
g(s) = g((f⁻¹ ◦ f)S(s, v)) = g((f⁻¹)S(f(s, v))) = ḡ(f(s, v)) = ḡ′(f(s, v)) = g′((f′⁻¹)S(f(s, v))) = g′(ΦS(s, v)).   (13)
For both continuous and categorical y, g(s) uniquely determines p(y|s). So the above equality means that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V .
Proof under condition (ii). Applying Eq. (8) to Eqs. (10, 11), we have:
p(x) = Ep(µ)[(p̄zV)(x − µ)],   E[y|x] = (1/p(x)) Ep(µ)[(ḡ p̄zV)(x − µ)],
where we have similarly denoted p̄z := pz ◦ f⁻¹. Under condition (ii), E[µ⊤µ] is infinitesimal, so we can expand the expressions w.r.t. µ. For p(x), we have:
p(x) = Ep(µ)[ p̄zV − ∇(p̄zV)⊤µ + (1/2) µ⊤∇∇⊤(p̄zV)µ + O(E[‖µ‖₂³]) ] = p̄zV + (1/2) Ep(µ)[µ⊤∇∇⊤(p̄zV)µ] + O(σµ³),
where all functions are evaluated at x. For E[y|x], we first expand 1/p(x) using 1/(x+ε) = 1/x − ε/x² + O(ε²) to get: 1/p(x) = 1/(p̄zV) − (1/(2p̄z²V²)) Ep(µ)[µ⊤∇∇⊤(p̄zV)µ] + O(σµ³). The second term is expanded as: ḡp̄zV + (1/2) Ep(µ)[µ⊤∇∇⊤(ḡp̄zV)µ] + O(σµ³). Combining the two parts, we have:
E[y|x] = ḡ + (1/2) Ep(µ)[ µ⊤( (∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ ) µ ] + O(σµ³).   (14)
This equation holds for any x ∈ supp(px) since the expectation is taken w.r.t. the distribution p(x, y); in other words, the considered x here is any value generated by the model. So up to O(σ²µ),
|p(x) − (p̄zV)(x)| = (1/2) |Ep(µ)[µ⊤∇∇⊤(p̄zV)µ]| ≤ (1/2) Ep(µ)[|µ⊤∇∇⊤(p̄zV)µ|] ≤ (1/2) Ep(µ)[‖µ‖₂ ‖∇∇⊤(p̄zV)‖₂ ‖µ‖₂] = (1/2) E[µ⊤µ] ‖∇∇⊤(p̄zV)‖₂
= (1/2) E[µ⊤µ] |p̄zV| ‖∇∇⊤ log p̄zV + (∇ log p̄zV)(∇ log p̄zV)⊤‖₂ ≤ (1/2) E[µ⊤µ] |p̄zV| (‖∇∇⊤ log p̄zV‖₂ + ‖∇ log p̄zV‖₂²),   (15)
|E[y|x] − ḡ(x)| = (1/2) |Ep(µ)[µ⊤((∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ)µ]| ≤ (1/2) Ep(µ)[|µ⊤((∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ)µ|]
≤ (1/2) Ep(µ)[‖µ‖₂ ‖(∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ‖₂ ‖µ‖₂] ≤ (1/2) E[µ⊤µ] (‖(∇ log p̄zV)∇ḡ⊤‖₂ + ‖∇ḡ(∇ log p̄zV)⊤‖₂ + ‖∇∇⊤ḡ‖₂)
= E[µ⊤µ] (|(∇ log p̄zV)⊤∇ḡ| + (1/2)‖∇∇⊤ḡ‖₂).   (16)
Given the bounding conditions in the theorem, the multiplicative factors to E[µ⊤µ] in the last expressions are bounded by a constant. So when 1/σ²µ → ∞, i.e. E[µ⊤µ] → 0, we have p(x) and E[y|x] converge uniformly to (p̄zV)(x) = f#[pz](x) and ḡ(x), respectively. So p(x, y) = p′(x, y) indicates f#[pz] = f′#[p′z] and ḡ = ḡ′, which means Φ#[pz] = p′z and p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V, due to Eq. (13) and the explanation that follows.
Proof under condition (iii). We only need to show that when 1σ2µ is much larger than the given quantity, we still have p(x, y) = p′(x, y) =⇒ p̄zV = p̄′zV ′, ḡ = ḡ′ up to a negligible effect. This task amounts to showing that the residuals |p(x)− (p̄zV )(x)| and |E[y|x]− ḡ(x)| controlled by Eqs. (15, 16) are negligible. To achieve this, we need to further expand the controlling functions using derivatives of f , g and pz explicitly, and bound them by the bounding constants. In the following, we use indices a, b, c for the components of x and i, j, k for those of z. For functions of z appearing in the following (e.g., f , g, pz and their derivatives), they are evaluated at z = f−1(x) since we are bounding functions of x.
(1) Bounding |E[y|x] − ḡ(x)| ≤ E[µ⊤µ] (|(∇ log p̄zV)⊤∇ḡ| + (1/2)‖∇∇⊤ḡ‖₂) from Eq. (16).
From the chain rule of differentiation, it is easy to show that:
∇ log p̄z = Jf⁻¹ ∇ log pz,   ∇ḡ = J(f⁻¹)S ∇g = Jf⁻¹ ∇z g,   (17)
where ∇z g = (∇g⊤, 0⊤_{dV})⊤ (recall that g is a function only of s). For the term ∇ log V, we apply Jacobi's formula for the derivative of the log-determinant:
∂a log V(x) = ∂a log|Jf⁻¹(x)| = tr(Jf⁻¹(x)⁻¹ (∂a Jf⁻¹(x))) = Σ_{b,i} (Jf⁻¹(x)⁻¹)_{ib} (∂a Jf⁻¹(x))_{bi} = Σ_{b,i} (Jf(f⁻¹(x)))_{ib} ∂b∂a f⁻¹i(x) = Σ_i (Jf (∇∇⊤f⁻¹i))_{ia}.   (18)
However, as bounding Eq. (17) already requires bounding ‖Jf⁻¹‖₂, directly using this expression to bound ‖∇ log V‖₂ would require to also bound ‖Jf‖₂. This requirement to bound the first-order derivatives of both f and f⁻¹ is a relatively restrictive one. To ease the requirement, we would like to express ∇ log V in terms of Jf⁻¹. This can be achieved by expressing the ∇∇⊤f⁻¹i's in terms of the ∇∇⊤fc's. To do this, first consider a general invertible-matrix-valued function A(α) on a scalar α. We have 0 = ∂α(A(α)⁻¹A(α)) = (∂αA⁻¹)A + A⁻¹∂αA, so we have A⁻¹∂αA = −(∂αA⁻¹)A, consequently ∂αA = −A(∂αA⁻¹)A. Using this relation (in the fourth equality below), we have:
(∇∇⊤f⁻¹i)_{ab} = ∂a∂b f⁻¹i = ∂a(Jf⁻¹)_{bi} = (∂a Jf⁻¹)_{bi} = −(Jf⁻¹ (∂a J⁻¹f⁻¹) Jf⁻¹)_{bi} = −(Jf⁻¹ (∂a Jf) Jf⁻¹)_{bi}
= −Σ_{jc} (Jf⁻¹)_{bj} (∂a(∂j fc)) (Jf⁻¹)_{ci} = −Σ_{jck} (Jf⁻¹)_{bj} (∂k∂j fc) (∂a f⁻¹k) (Jf⁻¹)_{ci}
= −Σ_c (Jf⁻¹)_{ci} Σ_{jk} (Jf⁻¹)_{bj} (∂k∂j fc) (Jf⁻¹)_{ak} = −Σ_c (Jf⁻¹)_{ci} (Jf⁻¹ (∇∇⊤fc) J⊤f⁻¹)_{ab},
or in matrix form, ∇∇⊤f⁻¹i = −Σ_c (Jf⁻¹)_{ci} Jf⁻¹ (∇∇⊤fc) J⊤f⁻¹ =: −Σ_c (Jf⁻¹)_{ci} K^c,   (19)
where we have defined the matrix K^c := Jf⁻¹ (∇∇⊤fc) J⊤f⁻¹, which is symmetric. Substituting with this result, we can transform Eq. (18) into a desired form:
∇ log V(x) = Σ_i (Jf (∇∇⊤f⁻¹i))⊤_{i:} = −Σ_i (Jf Σ_c (Jf⁻¹)_{ci} Jf⁻¹ (∇∇⊤fc) J⊤f⁻¹)⊤_{i:}
= −Σ_i (Σ_c (Jf⁻¹)_{ci} Jf J⁻¹f (∇∇⊤fc) J⊤f⁻¹)⊤_{i:} = −Σ_{ci} (Jf⁻¹)_{ci} ((∇∇⊤fc) J⊤f⁻¹)⊤_{i:}
= −Σ_c (Jf⁻¹ (∇∇⊤fc) J⊤f⁻¹)⊤_{c:} = −Σ_c (K^c_{c:})⊤ = −Σ_c K^c_{:c},   (20)
so its norm can be bounded by:
‖∇ log V(x)‖₂ = ‖Σ_c K^c_{c:}‖₂ = ‖Σ_c (Jf⁻¹)_{c:} (∇∇⊤fc) J⊤f⁻¹‖₂ ≤ Σ_c ‖(Jf⁻¹)_{c:}‖₂ ‖∇∇⊤fc‖₂ ‖Jf⁻¹‖₂ ≤ B″_f B′_{f⁻¹} Σ_c ‖(Jf⁻¹)_{c:}‖₂ ≤ d B′²_{f⁻¹} B″_f,   (21)
where we have used the following result in the last inequality:
Σ_c ‖(Jf⁻¹)_{c:}‖₂ ≤ d^(1/2) √(Σ_c ‖(Jf⁻¹)_{c:}‖₂²) = d^(1/2) ‖Jf⁻¹‖_F ≤ d ‖Jf⁻¹‖₂ ≤ d B′_{f⁻¹}.   (22)
Integrating Eq. (17) and Eq. (21), we have:
|(∇ log p̄zV)⊤∇ḡ| = |(Jf⁻¹ ∇ log pz + ∇ log V)⊤ Jf⁻¹ ∇z g| ≤ (‖Jf⁻¹‖₂ ‖∇ log pz‖₂ + ‖∇ log V‖₂) ‖Jf⁻¹‖₂ ‖∇g‖₂ ≤ (B′_{f⁻¹} B′_{log p} + d B′²_{f⁻¹} B″_f) B′_{f⁻¹} B′_g = (B′_{log p} + d B′_{f⁻¹} B″_f) B′²_{f⁻¹} B′_g.   (23)
For the Hessian of ḡ, direct calculus gives:
∇∇⊤ḡ = J(f⁻¹)S (∇∇⊤g) J⊤(f⁻¹)S + Σ_{i=1}^{dS} (∇g)_{si} (∇∇⊤f⁻¹_{si | 1. What is the main contribution of the paper, particularly in its approach to learning latent causal variables?
2. How does the proposed method differ from other similar approaches in assuming a hidden causal variable?
3. Can you explain the variations presented by the authors for learning the Causal Semantic Generative Model, such as CSG and CSG-ind?
4. How do the data generating assumptions impact the model's performance in terms of Out-of-Distribution Generalization error and Domain Adaptation Error?
5. What are the strengths and weaknesses of the empirical results presented in the paper, especially regarding the choice of tasks and datasets? | Review | Review
The Causal Semantic Generative Model (CSG) presents an approach for learning both semantic and diverse latent causal variables in supervised settings using variational Bayes. Contrary to many similar approaches which assume the label as the causal variable, this assumes a hidden causal variable which produces both the labels and observed features. Furthermore, this approach separates this latent causal variable into two components one for semantics, which impacts label generation and one for the diversity/variation which in combination with the semantic variable generates the observed features.
The authors present variations for learning this model which account for correlation/independence between semantic and diversity latent variables (CSG and CSG-ind) and also extend to settings where some data does contain labels (CSG-DA). The authors also show under which data generating assumptions the modeling holds and show how changes to the prior distribution of semantic and diversity variable distributions impact Out-of-Distribution Generalization error and Domain Adaptation Error.
The work presents empirical results demonstrating the effectiveness of this approach in both OOD settings (no test adaptation) and domain adaptation settings (test adaptation). For the presented experiments the authors show superior results in OOD generalization and competitive results in the domain adaptation setting. While the experiments present a compelling proof of concept, the tasks Shifted MNIST and ImageCLEF-DA are not the most representative challenges in their respective domains. Would be interested to see performance in the ColoredMNIST task for causal identification and OOD generalization as the generative structure is well understood as well as performance capabilities. The same could be said for the domain adaptation task, with Office-Home, VisDA-17, DomainNet or a variety of more challenging and representative tasks giving more empirical credibility to the experiments performed.
ICLR | Title
Learning Causal Semantic Representation for Out-of-Distribution Prediction
Abstract
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domainspecific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causality to model the two factors separately, and learn it on a single training domain for prediction without (OOD generalization) or with unsupervised data (domain adaptation) in a test domain. We prove that CSG identifies the semantic factor on the training domain, and the invariance principle of causality subsequently guarantees the boundedness of OOD generalization error and the success of adaptation. We also design novel and delicate learning methods for both effective learning and easy prediction, following the first principle of variational Bayes and the graphical structure of CSG. Empirical study demonstrates the effect of our methods to improve test accuracy for OOD generalization and domain adaptation.
1 INTRODUCTION
Deep learning has initiated a new era of artificial intelligence where the potential of machine learning models is greatly unleashed. Despite the great success, these methods heavily rely on the independently-and-identically-distributed (IID) assumption. This does not always perfectly hold in practice, and the prediction of output (label, response, outcome) y may be saliently affected in out-of-distribution (OOD) cases, even from an essentially irrelevant change to the input (covariate) x, like a position shift or rotation of the object in an image, or a change of background, illumination or style (Shen et al., 2018; He et al., 2019; Arjovsky et al., 2019). These phenomena pose serious concerns on the robustness and trustworthiness of machine learning methods and severely impede them from risk-sensitive scenarios.
Looking into the problem, although deep learning models allow extracting abstract representation for prediction with their powerful approximation capacity, the representation may be overconfident in the correlation between semantic factors s (e.g., shape of an object) and variation factors v (e.g., background, illumination, object position). The correlation may be domain-specific and spurious, and may change drastically in a new environment. So it has become a desire to learn representation that separates semantics s from variations v (Cai et al., 2019; Ilse et al., 2019). Formally, the importance of this goal is that s represents the cause of y. Causal relations better reflect the fundamental mechanisms of nature, bringing the merit to machine learning that they tend to be universal and invariant across domains (Schölkopf et al., 2012; Peters et al., 2017; Schölkopf, 2019), thus providing the most transferable and confident information to unseen domains. Causality has also been shown to lead to proper domain adaptation (Schölkopf et al., 2012; Zhang et al., 2013), lower adaptation cost and lighter catastrophic forgetting (Peters et al., 2016; Bengio et al., 2019; Ke et al., 2019).
In this work, we propose a Causal Semantic Generative model (CSG) for proper and robust OOD prediction, including OOD generalization and domain adaptation. Both tasks have supervised data from a single training domain, but domain adaptation has unsupervised test-domain data during learning, while OOD generalization has no test-domain data, including cases where queries come sequentially or adaptation is unaffordable. (1) We build the model by cautiously following the principle of causality, where we explicitly separate the latent variables into a (group of) semantic factor s and a (group of) variation factor v. We prove that under appropriate conditions CSG identifies the
semantic factor by fitting training data, even in presence of an s-v correlation. (2) By leveraging the causal invariance, we prove that a well-learned CSG is guaranteed to have a bounded OOD generalization error. The bound shows how causal mechanisms affect the error. (3) We develop a domain adaptation method using CSG and causal invariance, which suggests to fix the causal generative mechanisms and adapt the prior to the new domain. We prove the identification of the new prior and the benefit of adaptation. (4) To learn and adapt the model from data, we design novel and delicate reformulations of the Evidence Lower BOund (ELBO) objective following the graphical structure of CSG, so that the inference models required therein can also serve for prediction, and modeling and optimizing inference models in both domains can be avoided. To our best knowledge, our work is the first to identify semantic factor and leverage latent causal invariance for OOD prediction with guarantees. Empirical improvement in OOD performance and adaptation is demonstrated by experiments on multiple tasks including shifted MNIST and ImageCLEF-DA task.
2 RELATED WORK
There have been works that aim to leverage the merit of causality for OOD prediction. For OOD generalization, some works ameliorate discriminative models towards a causal behavior. Bahadori et al. (2017) introduce a regularizer that reweights input dimensions based on their approximated causal effects to the output, and Shen et al. (2018) reweight training samples by amortizing causal effects among input dimensions. They are extended to nonlinear cases (Bahadori et al., 2017; He et al., 2019) via linear-separable representations. Heinze-Deml & Meinshausen (2019) enforce inference invariance by minimizing prediction variance within each label-identity group. These methods introduce no additional modeling effort, but may also be limited to capture invariant causal mechanisms (they are non-generative) and may only behave quantitatively causal in the training domain.
For domain adaptation/generalization, methods are developed under various causal assumptions (Schölkopf et al., 2012; Zhang et al., 2013) or using learned causal relations (Rojas-Carulla et al., 2018; Magliacane et al., 2018). Zhang et al. (2013); Gong et al. (2016; 2018) also consider certain ways of mechanism shift. The considered causality is among directly observed variables, which may not be suitable for general data like image pixels where causality rather lies between data and conceptual latent factors (Lopez-Paz et al., 2017; Besserve et al., 2018; Kilbertus et al., 2018). To consider latent factors, there are domain adaptation (Pan et al., 2010; Baktashmotlagh et al., 2013; Ganin et al., 2016; Long et al., 2015; 2018) and generalization methods (Muandet et al., 2013; Shankar et al., 2018) that learn a representation with domain-invariant marginal distribution, and have achieved remarkable results. Nevertheless, Johansson et al. (2019); Zhao et al. (2019) point out that this invariance is neither sufficient nor necessary to identify the true semantics and lower the adaptation error (Supplement D). Moreover, these methods and invariance risk minimization (Arjovsky et al., 2019) also assume the invariance in the inference direction (i.e., data→ representation), which may not be as general as causal invariance in the generative direction (Section 3.2).
There are also generative methods for domain adaptation/generalization that model latent factors. Cai et al. (2019); Ilse et al. (2019) introduce a semantic factor and a domain-feature factor. They assume the two factors are independent in both the generative and inference models, which may not meet reality closely. They also do not adapt the prior for domain shift thus resort to inference invariance. Zhang et al. (2020) consider a partially observed manipulation variable, while assume its independence from the output in both the joint and posterior, and the adaptation is inconsistent with causal invariance. Atzmon et al. (2020) consider similar latent factors, but use the same (uniform) prior in all domains. These methods also do not show guarantees to identify their latent factors. Teshima et al. (2020) leverage causal invariance and adapt the prior, while also assume latent independence and do not separate the semantic factor. They require some supervised test-domain data, and their deterministic and invertible mechanism also indicates inference invariance. In addition, most domain generalization methods require multiple training domains, with exceptions (e.g., Qiao et al., 2020) that still seek to augment domains. In contrast, CSG leverages causal invariance, and has guarantee to identify the semantic factor from a single training domain, even with a correlation to the variation factor.
Generative supervised learning is not new (Mcauliffe & Blei, 2008; Kingma et al., 2014), but most works do not consider the encoded causality. Other works consider solving causality tasks, notably causal/treatment effect estimation (Louizos et al., 2017; Yao et al., 2018; Wang & Blei, 2019). The task does not focus on OOD prediction, and requires labels for both treated and controlled groups.
Disentangling latent representations is also of interest in unsupervised learning. Despite some empirical success (Chen et al., 2016; Higgins et al., 2017; Chen et al., 2018), Locatello et al. (2019) conclude that it is impossible to guarantee the disentanglement in unsupervised settings. Khemakhem et al. (2019; 2020) show an encouraging result that disentangled representation can be identified up to a permutation with a cause of the latent variable observed. But the methods cannot separate the semantic factor from variation for supervised learning, and require observing sufficiently many different values of the cause variable, making it hard to leverage labels.
Causality with latent variable has been considered in a rich literature (Verma & Pearl, 1991; Spirtes et al., 2000; Richardson et al., 2002; Hoyer et al., 2008; Shpitser et al., 2014), while most works focus on the consequence on observation-level causality. Others consider identifying the latent variable. Janzing et al. (2009); Lee et al. (2019) show the identifiability under additive noise or similar assumptions. For discrete data, a “simple” latent variable can be identified under various specifications (Janzing et al., 2011; Sgouritsa et al., 2013; Kocaoglu et al., 2018). Romeijn & Williamson (2018) leverage interventional datasets. Over these works, we step further to separate and identify the latent variable as semantic and variation factors, and show the benefit for OOD prediction.
3 THE CAUSAL SEMANTIC GENERATIVE MODEL
To develop the model seriously and soberly based on causality, we require the formal definition of causality: two variables have a causal relation, denoted as “cause→effect”, if externally intervening the cause (by changing variables out of the considered system) may change the effect, but not vice versa (Pearl, 2009; Peters et al., 2017). We then follow the logic below to build our model. 1
(1) It may be a general case that neither y → x (e.g., adding noise to the labels in a dataset does not change the images) nor x → y holds (e.g., intervening an image by e.g. breaking a camera sensor unit when taking the image, does not change how the photographer labels it), as also argued by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018). So we employ a generative model (i.e., not only modeling p(y|x)), and introduce a latent variable z to capture factors with causal relations.
(2) The latent variable z as underlying generating factors (e.g., object features like shape and texture, background and illumination in imaging) is plausible to cause both x (e.g., the change of object shape or background makes a different image, but breaking a camera sensor unit does not change the object shape or background) and y (e.g., the photographer would give a different label if the object shape, texture, etc. had been replaced by those of a different object, but
noise-corrupting the label does not change the object features). So we orient the edges in the generative direction z → (x, y), as also adopted by Mcauliffe & Blei (2008); Peters et al. (2017); Teshima et al. (2020). This is in contrast to Cai et al. (2019); Ilse et al. (2019; 2020); Castro et al. (2020) who treat y as the cause of a semantic factor, which, when y is also a noisy observation, makes unreasonable implications (e.g., adding noise to the labels in a dataset automatically changes object features and consequently the images, and changing the object features does not change the label). This difference is also discussed by Peters et al. (2017, Section 1.4); Kilbertus et al. (2018).
(3) We attribute all x-y relation to the existence of some latent factors (“purely common cause”, Lee et al., 2019; Janzing et al., 2009), and exclude x-y edges. This can be achieved as long as z holds sufficient information of data (e.g., with shape, background etc. fixed, breaking a sensor unit does not change the label, and noise-corrupting the label does not change the image). Promoting this restriction reduces arbitrariness in explaining x-y relation and benefits the identification of z. This is in contrast to Kingma et al. (2014); Zhang et al. (2020); Castro et al. (2020) who treat y as a cause of x since no latent variable is introduced between.
1Supplement C provides more explanations on the model.
(4) Not all latent factors are the causes of y (e.g., changing the shape may alter the label, while changing the background does not). We thus split the latent variable as z = (s, v) and remove the edge v → y, where s represents the semantic factor of x that causes y, and v describes the variation or diversity in generating x. This formalizes the intuition on the concepts in Introduction.
(5) The variation v often has a relation to the semantics s, which is often a spurious correlation (e.g., desks prefer a workspace background, but they can also appear in bedrooms and beds can also appear in workspace). So we keep the undirected s-v edge. Although v is not a cause of y, modeling it explicitly is worth the effort since otherwise it would still be implicitly incorporated in s anyway through the s-v correlation. We summarize these conclusions in the following definition.
Definition 3.1 (CSG). A Causal Semantic Generative Model (CSG) p = (p(s, v), p(x|s, v), p(y|s)) is a generative model on data variables x ∈ X ⊂ RdX and y ∈ Y with semantic s ∈ S ⊂ RdS and variation v ∈ V ⊂ RdV latent variables, following the graphical structure shown in Fig. 1.
3.1 THE CAUSAL INVARIANCE PRINCIPLE
The domain-invariance of causal relations translates to the following principle for CSG:
Principle 3.2 (causal invariance). The causal generative mechanisms p(x|s, v) and p(y|s) in CSG are invariant across domains, and the change of prior p(s, v) is the only source of domain shift.
It is supported by the invariance of basic laws of nature (Schölkopf et al., 2012; Peters et al., 2017; Besserve et al., 2018; Bühlmann, 2018; Schölkopf, 2019). Other works instead introduce domain index (Cai et al., 2019; Ilse et al., 2019; 2020; Castro et al., 2020) or manipulation variables (Zhang et al., 2020; Khemakhem et al., 2019; 2020) to model distribution change explicitly. They require multiple training domains or additional observations, and such changes can also be explained under causal invariance as long as the latent variable includes all shifted factors (e.g., domain change of images can be attributed to a different preference of shape, style, texture, background, etc. and their correlations, while the processes generating image and label from them remain the same).
3.2 COMPARISON WITH INFERENCE INVARIANCE
Domain-invariant-representation-based adaptation and generalization methods, and invariant risk minimization (Arjovsky et al., 2019) for domain generalization, use a shared feature extractor across domains. This effectively assumes the invariance of the process in the other direction, i.e., inferring the latent representation from data. We note that in its supportive examples (e.g., inferring the object position from an image, or extracting the fundamental frequency from a vocal audio), generating mechanisms are nearly deterministic and invertible, so that the posterior is almost determined by the inverse function, and causal invariance implies inference invariance. For noisy or degenerate mechanisms (Fig. 2), ambiguity occurs during inference since there may be multiple values of a latent feature that generate the same observation. The inferred feature would notably rely on the prior through the Bayes rule. Since the prior changes across domains, the inference rule then changes by nature, which challenges the existence of a domain-shared feature extractor. In this case, causal invariance is more reliable than inference invariance.
To leverage causal invariance, we adjust the prior conservatively for OOD generalization (CSG-ind) and data-driven for domain adaptation (CSG-DA), so together with the invariant generative mechanisms, it gives a different and more reliable inference rule than that following inference invariance.
4 METHOD
We develop learning, adaptation and prediction methods for OOD generalization and domain adaptation using CSG following the causal invariance Principle 3.2, and devise practical objectives using variational Bayes. Supplement E.1 details all the derivations.
4.1 METHOD FOR OOD GENERALIZATION
For OOD generalization, a CSG p = (p(s, v), p(x|s, v), p(y|s)) needs to first learn from the supervised data from an underlying data distribution p∗(x, y) on the training domain. Maximizing likelihood Ep∗(x,y)[log p(x, y)] is intractable since p(x, y) given by the CSG p is hard to estimate effectively. We thus adopt the Evidence Lower BOund (ELBO) Lq,p(x, y) := Eq(s,v|x,y)[log( p(s, v, x, y) / q(s, v|x, y) )] (Jordan et al., 1999; Wainwright et al., 2008) as a tractable surrogate, which requires an auxiliary inference model q(s, v|x, y) to estimate the expectation effectively. Maximizing Lq,p w.r.t. q drives q towards the posterior p(s, v|x, y) and meanwhile makes Lq,p a tighter lower bound of log p(x, y). The expected ELBO Ep∗(x,y)[Lq,p(x, y)] then drives p(x, y) towards p∗(x, y).
However, the subtlety with supervised learning is that after fitting data, evaluating p(y|x) for prediction is still hard. We thus propose to employ a model for q(s, v, y|x) instead. The required inference model can be then expressed as q(s, v|x, y) = q(s, v, y|x)/q(y|x) where q(y|x) =∫ q(s, v, y|x) dsdv. It reformulates the expected ELBO as:
Ep∗(x,y)[Lq,p(x, y)] = Ep∗(x)Ep∗(y|x)[log q(y|x)] + Ep∗(x)Eq(s,v,y|x)[ (p∗(y|x)/q(y|x)) log( p(s, v, x, y) / q(s, v, y|x) ) ].   (1)
The first term is the common cross entropy loss (negative) driving q(y|x) towards p∗(y|x). Once this is achieved, the second term becomes the expected ELBO Ep∗(x)[Lq(s,v,y|x),p(x)] that drives q(s, v, y|x) towards p(s, v, y|x) (and p(x) towards p∗(x)). Since the target p(s, v, y|x) admits the factorization p(s, v|x)p(y|s) (since (v, x) ⊥ y|s ) where p(y|s) is already given by the CSG, we can further ease the modeling of q(s, v, y|x) as q(s, v|x)p(y|s). The ELBO is then reformulated as:
Lq,p(x, y) = log q(y|x) + (1/q(y|x)) Eq(s,v|x)[ p(y|s) log( p(s, v)p(x|s, v) / q(s, v|x) ) ],   (2)
where q(y|x) = Eq(s,v|x)[p(y|s)]. The CSG p and q(s, v|x) are to be optimized. The expectations can be estimated by Monte Carlo, and their gradients can be estimated using the reparameterization trick (Kingma & Welling, 2014). When well optimized, q(s, v|x) well approximates p(s, v|x), so q(y|x) then well approximates p(y|x) = Ep(s,v|x)[p(y|s)] for prediction.
CSG-ind To actively mitigate the spurious s-v correlation from the training domain, we also consider a CSG with an independent prior p⊥(s, v) := p(s)p(v) for prediction in the unknown test domain, where p(s) and p(v) are the marginals of p(s, v). The independent prior p⊥(s, v) encourages the model to stay neutral on the s-v correlation. It has a larger entropy than p(s, v) (Cover & Thomas, 2006, Theorem 2.6.6), so it reduces the information of the training-domain-specific prior. The model then relies more on the invariant generative mechanisms, thus better leverages causal invariance and gives more reliable prediction than that following inference invariance.
For the method, note that the prediction is given by p⊥(y|x) = Ep⊥(s,v|x)[p(y|s)], so we use an inference model for q⊥(s, v|x) that approximates p⊥(s, v|x). However, learning on the training domain still requires the original inference model q(s, v|x). To save the cost of building and learning two inference models, we propose to use q⊥(s, v|x) to represent q(s, v|x). Noting that their targets are related by p(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) p⊥(s, v|x), we formulate q(s, v|x) = (p(s, v)/p⊥(s, v)) (p⊥(x)/p(x)) q⊥(s, v|x) accordingly, so that this q(s, v|x) achieves its target once q⊥(s, v|x) does. The ELBO then becomes:
Lq,p(x, y) = log π(y|x) + (1/π(y|x)) Eq⊥(s,v|x)[(p(s, v)/p⊥(s, v)) p(y|s) log(p⊥(s, v)p(x|s, v)/q⊥(s, v|x))], (3)
where π(y|x) := Eq⊥(s,v|x)[(p(s, v)/p⊥(s, v)) p(y|s)]. The CSG p and q⊥(s, v|x) are to be optimized (note that p⊥(s, v) is determined by p(s, v) in the CSG p). Prediction is given by p⊥(y|x) ≈ Eq⊥(s,v|x)[p(y|s)].
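The only changes relative to the sketch of Eq. (2) are the prior-ratio weight p(s, v)/p⊥(s, v) inside the expectation and the independent prior inside the log; a hedged sketch with the same assumed interfaces (prior_ind denoting p⊥) is:

import torch
from torch.distributions import Normal, Independent

def csg_ind_elbo(x, y, encoder, prior, prior_ind, decoder, classifier, n_samples=4):
    # Same assumed interfaces as the sketch above; prior_ind is p(s)p(v), built from the marginals of prior.
    mu, logvar = encoder(x)                               # inference model shared for learning and prediction
    q_sv = Independent(Normal(mu, torch.exp(0.5 * logvar)), 1)
    sv = q_sv.rsample((n_samples,))
    ratio = (prior.log_prob(sv) - prior_ind.log_prob(sv)).exp()   # p(s, v) / p_ind(s, v)
    log_w = prior_ind.log_prob(sv) + decoder.log_prob(x, sv) - q_sv.log_prob(sv)
    p_y_obs = classifier(sv).gather(-1, y.expand(n_samples, -1).unsqueeze(-1)).squeeze(-1)
    pi_y_x = (ratio * p_y_obs).mean(0)                    # pi(y|x) of Eq. (3)
    inner = (ratio * p_y_obs * log_w).mean(0)
    return (torch.log(pi_y_x) + inner / pi_y_x).mean()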
4.2 METHOD FOR DOMAIN ADAPTATION
When unsupervised data is available from an underlying data distribution p̃∗(x) on the test domain, we can leverage it for adaptation. According to the causal invariance Principle 3.2, we only need to adapt for the test-domain prior p̃(s, v) and the corresponding inference model q̃(s, v|x), while the causal mechanisms p(x|s, v) and p(y|s) are not optimized. Adaptation is done by fitting the test
data via maximizing Ep̃∗(x)[Lq̃,p̃(x)], where the ELBO is in the standard form: Lq̃,p̃(x) = Eq̃(s,v|x) [ log ( p̃(s, v)p(x|s, v)/q̃(s, v|x) )] . (4)
Prediction is given by p̃(y|x) ≈ Eq̃(s,v|x)[p(y|s)]. Similar to the case of CSG-ind, we need q̃(s, v|x) for prediction, but q(s, v|x) is still required for learning on the training domain. When data from both domains are available during learning, we can save the effort of modeling and learning q(s, v|x) using a similar technique. We formulate it using q̃(s, v|x) as q(s, v|x) = (p(s, v)/p̃(s, v)) (p̃(x)/p(x)) q̃(s, v|x), following the same relation between their targets, and the ELBO on the training domain becomes:
Lq,p(x, y) = log π(y|x) + (1/π(y|x)) Eq̃(s,v|x)[(p(s, v)/p̃(s, v)) p(y|s) log(p̃(s, v)p(x|s, v)/q̃(s, v|x))], (5)
where π(y|x) := Eq̃(s,v|x)[(p(s, v)/p̃(s, v)) p(y|s)]. The CSG p and q̃(s, v|x) are to be optimized (p̃(s, v) is not optimized by this objective; it is learned from Eq. (4)). The resulting method, termed CSG-DA, solves both optimizations (4, 5) simultaneously.
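A minimal sketch of one adaptation step for Eq. (4), under the same assumed interfaces: only the test-domain prior and inference model receive gradients, while the causal mechanisms stay fixed (the simultaneous optimization of Eq. (5) is omitted for brevity).

import torch
from torch.distributions import Normal, Independent

def csg_da_adapt_step(x_test, encoder_test, prior_test, decoder, optimizer, n_samples=4):
    # `optimizer` holds only the parameters of prior_test and encoder_test; the causal mechanisms
    # p(x|s,v) (decoder) and p(y|s) (classifier) are excluded, following the causal invariance principle.
    mu, logvar = encoder_test(x_test)                     # q~(s, v | x) on the test domain
    q_sv = Independent(Normal(mu, torch.exp(0.5 * logvar)), 1)
    sv = q_sv.rsample((n_samples,))
    elbo = (prior_test.log_prob(sv) + decoder.log_prob(x_test, sv) - q_sv.log_prob(sv)).mean()
    loss = -elbo                                          # maximize the ELBO of Eq. (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()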
For implementing the three methods, note that only one inference model is required in each case. Supplement E.2 shows its implementation from a general discriminative model (e.g., how to select its hidden nodes as s and v). In practice x often has a much larger dimension than y, making the supervised part of the training-domain ELBO (i.e., the first term in its formulation Eq. (1)) scale smaller than the unsupervised part. So we include an additional cross-entropy loss in the objectives.
5 THEORY
We now establish guarantees for the methods on identifying the semantic factor and the subsequent merits for OOD generalization and domain adaptation. We consider only the infinite-data regime, so as to exclude the additional source of error arising from finite data. Supplement A gives all the proofs. Identifiability is hard to achieve for latent-variable models (Koopmans & Reiersol, 1950; Murphy, 2012; Yacoby et al., 2019; Locatello et al., 2019), since it is a task beyond modeling observational relations (Janzing et al., 2009; Peters et al., 2017). Assumptions are required to draw definite conclusions.
Assumption 5.1 (additive noise). There exist nonlinear functions f and g with bounded derivatives up to third-order, and independent random variables µ and ν, such that p(x|s, v) = pµ(x− f(s, v)), and p(y|s) = pν(y − g(s)) for continuous y or p(y|s) = Cat(y|g(s)) for categorical y.
This structure disables describing a bivariate joint distribution in both generating directions (Zhang & Hyvärinen (2009, Theorem 8), Peters et al. (2014, Proposition 23)), and is widely adopted in directed causal discovery (Janzing et al., 2009; Bühlmann et al., 2014). CSG needs this since it should make the causal direction exclusive. It is also easy to implement with deep models (Kingma & Welling, 2014), so does not essentially restrict model capacity.
Assumption 5.2 (bijectivity). Function f is bijective and g is injective.
It is a common assumption for identifiability (Janzing et al., 2009; Shalit et al., 2017; Khemakhem et al., 2019; Lee et al., 2019). Under Assumption 5.1, it is a sufficient condition (Peters et al., 2014, Proposition 17; Peters et al., 2017, Proposition 7.4) of causal minimality (Peters et al., 2014, p.2012; Peters et al., 2017, Definition 6.33), a fundamental requirement for identifiability (Peters et al., 2014, Proposition 7; Peters et al., 2017, p.109). Particularly, s and v are otherwise allowed to have dummy dimensions that f and g simply ignore, raising another ambiguity against identifiability. On the other hand, according to the commonly acknowledged manifold hypothesis (Weinberger & Saul, 2006; Fefferman et al., 2016) that data tends to lie on a lower-dimensional manifold embedded in the data space, we can take X as the manifold and such a bijection exists as a coordinate map, which is an injection to the original data space (thus allowing dS + dV < dX ).
5.1 IDENTIFIABILITY THEORY
We first formalize the goal of identifying the semantic factor.
Definition 5.3 (semantic-equivalence). We say two CSGs p and p′ are semantic-equivalent, if there exists a homeomorphism² Φ on S × V, such that (i) its output dimensions in S are constant of v: ΦS(s, v) = ΦS(s) for any v ∈ V, and (ii) it acts as a reparameterization from p to p′: Φ#[ps,v] = p′s,v, p(x|s, v) = p′(x|Φ(s, v)) and p(y|s) = p′(y|ΦS(s)).
²A transformation is a homeomorphism if it is a continuous bijection with continuous inverse.
It is an equivalence relation if V is connected and is either open or closed in RdV (Supplement A.1). Here, Φ#[ps,v] denotes the pushed-forward distribution³ by Φ, i.e. the distribution of the transformed random variable Φ(s, v) when (s, v) ∼ ps,v. As a reparameterization, Φ allows the two models to have different latent-variable parameterizations while inducing the same distribution on the observed data variables (x, y) (Supplement Lemma A.2). At the heart of the definition, the v-constancy of ΦS implies that Φ is semantic-preserving: one model does not mix the other’s v into its s, so that the s variables of both models hold equivalent information.
We say that a learned CSG p identifies the semantic factor if it is semantic-equivalent to the ground-truth CSG p∗. This identification cannot be characterized by the statistical independence between s and v (as in Cai et al. (2019); Ilse et al. (2019); Zhang et al. (2020)), which is neither sufficient (Locatello et al., 2019) nor necessary (due to the existence of spurious correlation). Another related concept is disentanglement. It requires that a semantic transformation on x changes the learned s only (Higgins et al., 2018; Besserve et al., 2020), while the identification here does not require the learned v to be constant of the ground-truth s.
To identify the semantic factor, the ground-truth model can at most provide its information via the data distribution p∗(x, y). Although semantic-equivalent CSGs induce the same distribution on (x, y), the inverse is nontrivial. The following theorem shows that semantic-identifiability can be achieved under appropriate conditions. Theorem 5.4 (semantic-identifiability). With Assumptions 5.1 and 5.2, a well-learned CSG p with p(x, y) = p∗(x, y) is semantic-equivalent to the ground-truth CSG p∗, if log p(s, v) and log p∗(s, v) have bounded derivatives up to the second order, and if⁴ (i) 1/σ²µ → ∞ where σ²µ := E[µ⊤µ], or (ii) pµ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution).
Remarks. (1) The requirement on p(s, v) and p∗(s, v) excludes extreme training data that show a deterministic s-v relation, which makes the (s, v) density functions unbounded and discontinuous. In that case (e.g., all desks appear in workspace and all beds in bedrooms), one cannot tell whether the label y is caused by s (e.g., the shape) or by v (e.g., the background).
(2) In condition (i), 1/σ²µ measures the intensity of the causal mechanism p(x|s, v). A strong p(x|s, v) helps disambiguate values of (s, v) in generating a given x. The condition makes p(x|s, v) so strong that it is almost deterministic and invertible, so inference invariance also holds (Section 3.2). Supplement A.2 provides a quantitative reference of large intensity for practical consideration, and Supplement B gives a non-asymptotic extension showing how the intensity trades off the tolerance of the equalities in Definition 5.3. Condition (ii) covers more than inference invariance. It roughly implies that different values of (s, v) a.s. produce different distributions p(x|s, v) on X, so their roles in generating x become clear, which helps identification.
(3) The theorem does not contradict the impossibility result of Locatello et al. (2019), which considers disentangling each latent dimension with an unconstrained (s, v) → (x, y), while we identify s as a whole with the edge v → y removed, which breaks the s-v symmetry.
5.2 OOD GENERALIZATION THEORY
The causal invariance Principle 3.2 forms the ground-truth CSG on the test domain as p̃∗ = (p̃∗(s, v), p∗(x|s, v), p∗(y|s)) with the new ground-truth prior p̃∗(s, v), which gives the optimal predictor Ẽ∗[y|x]⁵ on the test domain. The principle also leads to the invariance of identified causal mechanisms, which shows that the OOD generalization error of a CSG is bounded:
Theorem 5.5 (OOD generalization error). With Assumptions 5.1 and 5.2, for a semantically-identified CSG p on the training domain with reparameterization Φ, we have, up to O(σ²µ), that for any x ∈ supp(px) ∩ supp(p̃∗x),
|E[y|x] − Ẽ∗[y|x]| ≤ σ²µ ‖∇g(s)‖₂ ‖Jf⁻¹(x)‖₂² ‖∇ log(p(s, v)/p̃(s, v))‖₂ |(s,v)=f⁻¹(x), (6)
where supp denotes the support of a distribution, Jf⁻¹ is the Jacobian matrix of f⁻¹, and p̃s,v := Φ#[p̃∗s,v] is the test-domain prior under the parameterization of the identified CSG p.⁶
³The definition of Φ#[ps,v] requires Φ to be measurable. This is satisfied by the continuity of Φ as a homeomorphism (as long as the considered σ-field is the Borel σ-field) (Billingsley, 2012, Theorem 13.2).
⁴To be precise, the semantic-equivalent conclusions are that the equalities in Definition 5.3 hold asymptotically in the limit 1/σ²µ → ∞ for condition (i), and hold a.e. for condition (ii).
⁵For categorical y, the expectation of y is taken under the one-hot representation.
⁶The 2-norm ‖·‖₂ for matrices refers to the induced operator norm (not the Frobenius norm).
The result shows that when the causal mechanism p(x|s, v) is strong, especially in the extreme case σµ = 0 where inference invariance also holds, it dominates prediction over the prior and the generalization error diminishes. In more general cases where only causal invariance holds, the prior change deviates the prediction rule. The prior-change term ‖∇ log(p(s, v)/p̃(s, v))‖2 measures the hardness or severity of OOD. It diminishes in IID cases, and makes the bound lose its effect when the two priors do not share their support. Using a CSG to fit training data enforces causal invariance and other assumptions, so its E[y|x] behaves more faithfully in low p∗(x) area and the boundedness becomes more plausible in practice. CSG-ind further actively uses an independent prior whose larger support covers more p̃s,v candidates.
5.3 DOMAIN ADAPTATION THEORY
In cases of a weak causal mechanism or a drastic prior change, the new ground-truth prior p̃∗s,v is important for prediction. The domain adaptation method learns a new prior p̃s,v by fitting unsupervised test-domain data, with the causal mechanisms shared. Once the mechanisms are identified, p̃∗s,v can also be identified under the learned parameterization, and prediction can be made precise.
Theorem 5.6 (domain adaptation error). Under the conditions of Theorem 5.4, for a semantically-identified CSG p on the training domain with reparameterization Φ, if its new prior p̃s,v for the test domain is well-learned with p̃(x) = p̃∗(x), then p̃s,v = Φ#[p̃∗s,v], and Ẽ[y|x] = Ẽ∗[y|x] for any x ∈ supp(p̃∗x).
Different from existing domain adaptation bounds (Supplement D), Theorems 5.5 and 5.6 allow different inference models in the two domains, thus go beyond inference invariance.
6 EXPERIMENTS
For baselines of OOD generalization, apart from the conventional supervised learning optimizing cross entropy (CE), we also consider a causal discriminative method CNBB (He et al., 2019), and a generative method supervised VAE (sVAE) which is a counterpart of CSG that does not separate its latent variable into s and v. For domain adaptation, we consider well-acknowledged DANN (Ganin et al., 2016), DAN (Long et al., 2015) and CDAN (Long et al., 2018) methods implemented in the dalib package (Jiang et al., 2020), and also sVAE using a similar method as CSG-DA. All methods share the same optimization setup. We align the scale of the CE term in the objectives of all methods, and tune their hyperparameters to lie on the margin that makes the final accuracy near 1 on a validation set from the training domain. See Supplement F for details.
6.1 SHIFTED MNIST
We consider an OOD prediction task on MNIST to classify digits “0” and “1”. In the training data, “0”s are horizontally shifted at random by δ pixels with δ ∼ N (−5, 12), and “1”s by δ ∼ N (5, 12) pixels. We consider two test domains where the digits are not moved, or are shifted δ ∼ N (0, 22) pixels. Both domains have balanced classes. We implement all methods using multilayer perceptron which is not naturally shift invariant. We use a larger architecture for discriminative and domain adaptation methods to compensate the additional generative components of generative methods.
The OOD performance is shown in Table 1. For OOD generalization, CSG gives more genuine predictions in unseen domains, thanks to the identification of the semantic factor. CSG-ind performs even better, demonstrating the merit of approaching a CSG with an independent prior. Other methods are more significantly misled by the position factor from the spurious correlation. CNBB ameliorates the position bias, but not as thoroughly, since it has no explicit structures for causal mechanisms. CSG also outperforms sVAE, showing the benefit of separating semantics and variation and modeling the variation explicitly, so that the model consciously drives semantic representation into s. For domain adaptation, existing methods differ a lot, and it is hard for them to perform well on both test domains. When identification fails, adaptation sometimes even worsens the result, as the misleading representation based on position gets strengthened on the unsupervised test data. CSG benefits from adaptation by leveraging test data in a proper way that identifies the semantics.
6.2 IMAGECLEF-DA
ImageCLEF-DA (ima, 2014) is a standard benchmark dataset for the ImageCLEF 2014 domain adaptation challenge. We select a pair of adaptation tasks between two of its domains: Caltech-256 and Pascal VOC 2012. Each domain has 12 classes and 600 images following a different distribution from each other. We adopt the same setup as in Long et al. (2018), including the ResNet50 structure (He et al., 2016) pretrained on ImageNet as the backbone of the discriminative/inference model. For generative methods, we leverage the DCGAN generator (Radford et al., 2015) pretrained on Cifar10.
Table 2 shows the results. We see that CSG(-ind) achieves the best OOD generalization result, and performs comparable with modern domain adaptation methods. On this task, the underlying causal mechanism may be very noisy (e.g., photos taken from inside and outside both count for the aircraft class), making identification hard. So CSG-DA does not make a salient improvement.
7 CONCLUSION AND DISCUSSION
We tackle OOD generalization and domain adaptation tasks by proposing a Causal Semantic Generative model (CSG), which builds upon a causal reasoning, and models semantic and variation factors separately while allowing their correlation. Using the invariance principle of causality, we develop effective and delicate methods for learning, adaptation and prediction, and prove the identification of the semantic factor, the boundedness of OOD generalization error, and the success of adaptation under appropriate conditions. Experiments show the improved performance in both tasks.
The consideration of separating semantics from variation extends to broader examples regarding robustness. Convolutional neural networks are found to change its prediction under a different texture but the same shape (Geirhos et al., 2019; Brendel & Bethge, 2019). Adversarial vulnerability (Szegedy et al., 2014; Goodfellow et al., 2015; Kurakin et al., 2016) extends variation factors to human-imperceptible features, i.e. the adversarial noise, which is shown to have a strong spurious correlation with semantics (Ilyas et al., 2019). The separation also matters for fairness when a sensitive variation factor may change prediction due to a spurious correlation. Our methods are potentially beneficial in these examples.
A PROOFS
We first introduce some handy concepts and results to make the proof succinct. We begin with extended discussions on CSG.
Definition A.1. A homeomorphism Φ on S × V is called a reparameterization from CSG p to CSG p′, if Φ#[ps,v] = p′s,v , and p(x|s, v) = p′(x|Φ(s, v)) and p(y|s) = p′(y|ΦS(s, v)) for any (s, v) ∈ S ×V . A reparameterization Φ is called to be semantic-preserving, if its output dimensions in S is constant of v: ΦS(s, v) = ΦS(s) for any v ∈ V .
Note that a reparameterization does not necessarily have its output dimensions in S, i.e. ΦS(s, v), constant of v. The condition that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V does not indicate that ΦS(s, v) is constant of v, since p′(y|s′) may ignore the change of s′ = ΦS(s, v) caused by the change of v. The following lemma shows the meaning of a reparameterization: it allows a CSG to vary while inducing the same distribution on the observed data variables (x, y) (i.e., holding the same effect on describing data).
Lemma A.2. If there exists a reparameterization Φ from CSG p to CSG p′, then p(x, y) = p′(x, y).
Proof. By the definition of a reparameterization, we have:
p(x, y) = ∫ p(s, v) p(x|s, v) p(y|s) ds dv = ∫ Φ⁻¹#[p′s,v](s, v) p′(x|Φ(s, v)) p′(y|ΦS(s, v)) ds dv
= ∫ p′s,v(s′, v′) p′(x|s′, v′) p′(y|s′) ds′ dv′ = p′(x, y),
where we used the variable substitution (s′, v′) := Φ(s, v) in the second-last equality. Note that by the definition of the pushed-forward distribution and the bijectivity of Φ, Φ#[ps,v] = p′s,v implies ps,v = Φ⁻¹#[p′s,v], and ∫ f(s′, v′) p′s,v(s′, v′) ds′ dv′ = ∫ f(Φ(s, v)) Φ⁻¹#[p′s,v](s, v) ds dv (this can also be verified deductively using the rule of change of variables, i.e. Lemma A.4 below).
The definition of semantic-equivalence (Definition 5.3) can be rephrased by the existence of a semantic-preserving reparameterization. With appropriate model assumptions, we can show that any reparameterization between two CSGs is semantic-preserving, so that semantic-preserving CSGs cannot be converted to each other by a reparameterization that mixes s with v.
Lemma A.3. For two CSGs p and p′, if p′(y|s) has a statistics M ′(s) that is an injective function of s, then any reparameterization Φ from p to p′, if exists, has its ΦS constant of v.
Proof. Let Φ = (ΦS ,ΦV) be any reparameterization from p to p′. Then the condition that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V indicates that M(s) = M ′(ΦS(s, v)). If there exist s ∈ S and v(1) 6= v(2) ∈ V such that ΦS(s, v(1)) 6= ΦS(s, v(2)), then M ′(ΦS(s, v(1))) 6= M ′(ΦS(s, v(2))) since M ′ is injective. This violates M(s) = M ′(ΦS(s, v)) which requires both M ′(ΦS(s, v(1))) and M ′(ΦS(s, v(2))) to be equal to M(s). So ΦS(s, v) must be constant of v.
We then introduce two mathematical facts.
Lemma A.4 (rule of change of variables). Let z be a random variable on a Euclidean space RdZ with density function pz(z), and let Φ be a homeomorphism on RdZ whose inverse Φ⁻¹ is differentiable. Then the distribution of the transformed random variable z′ = Φ(z) has a density function Φ#[pz](z′) = pz(Φ⁻¹(z′)) |JΦ⁻¹(z′)|, where |JΦ⁻¹(z′)| denotes the absolute value of the determinant of the Jacobian matrix (JΦ⁻¹(z′))ia := ∂(Φ⁻¹)a(z′)/∂z′i of Φ⁻¹ at z′.
Proof. See e.g., Billingsley (2012, Theorem 17.2). Note that a homeomorphism is (Borel) measurable since it is continuous (Billingsley, 2012, Theorem 13.2), so the definition of Φ#[pz] is valid.
Lemma A.5. Let µ be a random variable whose characteristic function is a.e. non-zero. For two functions f and f′ on the same space, we have: f ∗ pµ = f′ ∗ pµ ⇐⇒ f = f′ a.e., where (f ∗ pµ)(x) := ∫ f(x − µ) pµ(µ) dµ denotes convolution.
Proof. The function equality f ∗ pµ = f ′ ∗ pµ leads to the equality under Fourier transformation F [f ∗pµ] = F [f ′ ∗pµ], which gives F [f ]F [pµ] = F [f ′]F [pµ]. Since F [pµ] is the characteristic function of pµ, the condition that it is a.e. non-zero indicates that F [f ] = F [f ′] a.e. thus f = f ′ a.e. See also Khemakhem et al. (2019, Theorem 1).
A.1 PROOF OF THE EQUIVALENCE RELATION
Proposition A.6. The semantic-equivalence defined in Definition 5.3 is an equivalence relation if V is connected and is either open or closed in RdV .
Proof. Let Φ be a semantic-preserving reparameterization from one CSG p = (p(s, v), p(x|s, v), p(y|s)) to another p′ = (p′(s, v), p′(x|s, v), p′(y|s)). It has its ΦS constant of v, so we can write Φ(s, v) = (ΦS(s),ΦV(s, v)) =: (φ(s), ψs(v)).
(1) We first show that φ, and ψs for any s ∈ S, are homeomorphisms on S and V , respectively, and that Φ−1(s′, v′) = (φ−1(s′), ψ−1φ−1(s′)(v ′)).
• Since Φ(S × V) = S × V , so φ(S) = ΦS(S) = S, so φ is surjective. • Suppose that there exists s′ ∈ S such that φ−1(s′) = {s(i)}i∈I contains multiple distinct
elements. 1. Since Φ is surjective, for any v′ ∈ V , there exist i ∈ I and v ∈ V such that (s′, v′) = Φ(s(i), v) = (φ(s(i)), ψs(i)(v)), which means that ⋃ i∈I ψs(i)(V) = V .
2. Since Φ is injective, the sets {ψs(i)(V)}i∈I must be mutually disjoint. Otherwise, there would exist i 6= j ∈ I and v(1), v(2) ∈ V such that ψs(i)(v(1)) = ψs(j)(v(2)) thus Φ(s(i), v(1)) = (s′, ψs(i)(v (1))) = (s′, ψs(j)(v (2))) = Φ(s(j), v(2)), which violates
the injectivity of Φ since s(i) 6= s(j). 3. In the case where V is open, then so is any ψs(i)(V) = Φ(s(i),V) since Φ is continuous. But the union of disjoint open sets ⋃ i∈I ψs(i)(V) = V cannot be connected.
This violates the condition that V is connected. 4. A similar argument holds in the case where V is closed.
So φ−1(s′) contains only one unique element for any s′ ∈ S. So φ is injective. • The above argument also shows that for any s′ ∈ S , we have ⋃ i∈I ψs(i)(V) =
ψφ−1(s′)(V) = V . For any s ∈ S, there exists s′ ∈ S such that s = φ−1(s′), so we have ψs(V) = V . So ψs is surjective for any s ∈ S. • Suppose that there exist v(1) 6= v(2) ∈ V such that ψs(v(1)) = ψs(v(2)). Then Φ(s, v(1)) = (φ(s), ψs(v (1))) = (φ(s), ψs(v (2))) = Φ(s, v(2)), which contradicts the injectivity of Φ
since v(1) 6= v(2). So ψs is injective for any s ∈ S. • That Φ is continuous and Φ(s, v) = (φ(s), ψs(v)) indicates that φ and ψs are
continuous. For any (s′, v′) ∈ S × V, we have Φ(φ⁻¹(s′), ψ_{φ⁻¹(s′)}⁻¹(v′)) = (φ(φ⁻¹(s′)), ψ_{φ⁻¹(s′)}(ψ_{φ⁻¹(s′)}⁻¹(v′))) = (s′, v′). Applying Φ⁻¹ to both sides gives Φ⁻¹(s′, v′) = (φ⁻¹(s′), ψ_{φ⁻¹(s′)}⁻¹(v′)).
• Since Φ⁻¹ is continuous, φ⁻¹ and ψs⁻¹ are also continuous.
(2) We now show that the relation is an equivalence relation. It amounts to showing the following three properties.
• Reflexivity. For two identical CSGs, we have p(s, v) = p′(s, v), p(x|s, v) = p′(x|s, v) and p(y|s) = p′(y|s). So the identity map as Φ obviously satisfies all the requirements. • Symmetry. Let Φ be a semantic-preserving reparameterization from p = (p(s, v), p(x|s, v), p(y|s)) to p′ = (p′(s, v), p′(x|s, v), p′(y|s)). From the above conclusion in (1), we know that (Φ−1)S(s′, v′) = φ−1(s′) is semantic-preserving. Also, Φ−1 is a homeomorphism on S × V since Φ is. So we only need to show that Φ−1 is a reparameterization from p′ to p for symmetry.
1. From the definition of pushed-forward distribution, we have Φ−1# [p ′ s,v] = ps,v if
Φ#[ps,v] = p ′ s,v . It can also be verified through the rule of change of variables (Lemma A.4) when Φ and Φ−1 are differentiable. From Φ#[ps,v] = p′s,v , we have for any (s′, v′), ps,v(Φ−1(s′, v′))|JΦ−1(s′, v′)| = p′s,v(s′, v′). Since for any (s, v) there exists (s′, v′) such that (s, v) = Φ−1(s′, v′), this implies that for any (s, v), ps,v(s, v)|JΦ−1(Φ(s, v))| = p′s,v(Φ(s, v)), or ps,v(s, v) = p′s,v(Φ(s, v))/|JΦ−1(Φ(s, v))| = p′s,v(Φ(s, v))|JΦ(s, v)| (inverse function theorem), which means that ps,v = Φ−1# [p ′ s,v] by the rule of change of variables. 2. For any (s′, v′), there exists (s, v) such that (s′, v′) = Φ(s, v), so p′(x|s′, v′) = p′(x|Φ(s, v)) = p(x|s, v) = p(x|Φ−1(s′, v′)), and p′(y|s′) = p′(y|ΦS(s)) = p(y|s) = p(y|(Φ−1)S(s′)). So Φ−1 is a reparameterization from p′ to p. • Transitivity. Given a third CSG p′′ = (p′′(s, v), p′′(x|s, v), p′′(y|s)) that is semantic-
equivalent to p′, there exists a semantic-preserving reparameterization Φ′ from p′ to p′′. It is easy to see that (Φ′ ◦ Φ)S(s, v) = Φ′S(ΦS(s, v)) = Φ′S(ΦS(s)) is constant of v thus semantic-preserving. As the composition of two homeomorphisms Φ and Φ′ on S × V , Φ′ ◦ Φ is also a homeomorphism. So we only need to show that Φ′ ◦ Φ is a reparameterization from p to p′′ for transitivity.
1. From the definition of pushed-forward distribution, we have (Φ′ ◦ Φ)#[ps,v] = Φ′#[Φ#[ps,v]] = Φ ′ #[p ′ s,v] = p ′′ s,v if Φ#[ps,v] = p ′ s,v and Φ ′ #[p ′ s,v] = p ′′ s,v . It can
also be verified through the rule of change of variables (Lemma A.4) when Φ−1 and Φ′−1 are differentiable. For any (s′′, v′′), we have
(Φ′ ◦ Φ)#[ps,v](s′′, v′′) = ps,v((Φ′ ◦ Φ)⁻¹(s′′, v′′)) |J(Φ′◦Φ)⁻¹(s′′, v′′)|
= ps,v(Φ⁻¹(Φ′⁻¹(s′′, v′′))) |JΦ⁻¹(Φ′⁻¹(s′′, v′′))| |JΦ′⁻¹(s′′, v′′)|
= Φ#[ps,v](Φ′⁻¹(s′′, v′′)) |JΦ′⁻¹(s′′, v′′)| = p′s,v(Φ′⁻¹(s′′, v′′)) |JΦ′⁻¹(s′′, v′′)| = Φ′#[p′s,v](s′′, v′′) = p′′s,v(s′′, v′′).
2. For any (s, v), we have: p(x|s, v) = p′(x|Φ(s, v)) = p′′(x|Φ′(Φ(s, v))) = p′′(x|(Φ′ ◦ Φ)(s, v)), p(y|s) = p′(y|ΦS(s)) = p′′(y|Φ′S(ΦS(s))) = p′′(y|(Φ′ ◦ Φ)S(s)).
So Φ′ ◦ Φ is a reparameterization from p to p′′. This completes the proof for an equivalence relation.
A.2 PROOF OF THE SEMANTIC-IDENTIFIABILITY THEOREM 5.4
We present a more general and detailed version of Theorem 5.4 and prove it. The theorem in the main text corresponds to conclusions (ii) and (i) below, obtained by taking the two CSGs p′ and p as the well-learned CSG p and the ground-truth CSG p∗, respectively.
Theorem 5.4’ (semantic-identifiability). Consider CSGs p and p′ for which Assumptions 5.1 and 5.2 hold, with the bounded derivative conditions specified as follows: for both CSGs, f⁻¹ and g are twice and f thrice differentiable with the mentioned derivatives bounded. Further assume that their priors have bounded densities and that their log p(s, v) have bounded derivatives up to the second order. If the two CSGs have p(x, y) = p′(x, y), then they are semantic-equivalent, under any one of the following conditions:⁷
(i) pµ has an a.e. non-zero characteristic function (e.g., a Gaussian distribution);
(ii) 1/σ²µ → ∞, where σ²µ := E[µ⊤µ];
(iii) 1/σ²µ ≫ (B′f⁻¹)² max{ B′log p B′g + (1/2)B″g + (3/2) d B′f⁻¹ B″f B′g, Bp (B′f⁻¹)ᵈ ( (B′log p)² + B″log p + 3 d B′f⁻¹ B″f B′log p + 3 d^{3/2} (B′f⁻¹)² (B″f)² + d³ B‴f B′f⁻¹ ) },
where d := dS + dV, and for both CSGs the constant Bp bounds p(s, v), B′f⁻¹, B′g, B′log p and B″f, B″g, B″log p bound the 2-norms⁸ of the gradients/Jacobians and the Hessians of the respective functions, and B‴f bounds all the 3rd-order derivatives of f.
⁷To be precise, the conclusions are that the equalities in Definition 5.3 hold a.e. for condition (i), hold asymptotically in the limit 1/σ²µ → ∞ for condition (ii), and hold up to a negligible quantity for condition (iii).
⁸As an induced operator norm for matrices (not the Frobenius norm).
Proof. Without loss of generality, we assume that µ and ν (for continuous y) have zero mean. If it is not, we can redefine f(s, v) := f(s, v) + E[µ] and µ := µ − E[µ] (similarly for ν for continuous y) which does not alter the joint distribution p(s, v, x, y) nor violates any assumptions. Also without loss of generality, we consider one scalar component (dimension) l of y, and abuse the use of symbols y and g for yl and gl to avoid unnecessary complication. Note that for continuous y, due to the additive noise structure y = g(s)+ν and that ν has zero mean, we also have E[y|s] = g(s) as the same as the categorical y case (under the one-hot representation). We sometimes denote z := (s, v) for convenience.
First note that for both CSGs and both continuous and categorical y, by construction g(s) is a sufficient statistics of p(y|s) (not only the expectation E[y|s]), and it is injective. So by Lemma A.3, we only need to show that there exists a reparameterization from p to p′. We will show that Φ := f ′−1 ◦ f is such a reparameterization. Since f and f ′ are bijective and continuous, we have Φ−1 = f−1 ◦ f ′, so Φ is bijective and Φ and Φ−1 are continuous. So Φ is a homeomorphism. Also, by construction, we have:
p(x|z) = pµ(x − f(z)) = pµ(x − f′(f′⁻¹(f(z)))) = pµ(x − f′(Φ(z))) = p′(x|Φ(z)). (7)
So we only need to show that p(x, y) = p′(x, y) implies Φ#[pz] = p′z and p(y|s) = p′(y|ΦS(s, v)) for all v ∈ V under the conditions.
Proof under condition (i). We begin with a useful reformulation of the integral ∫ t(z) p(x|z) dz for a general function t of z; we will encounter integrals of this form. By Assumption 5.1, we have p(x|z) = pµ(x − f(z)), so we consider the transformation Ψx(z) := x − f(z) and let µ = Ψx(z). It is invertible, Ψx⁻¹(µ) = f⁻¹(x − µ), and JΨx⁻¹(µ) = −Jf⁻¹(x − µ). By these definitions and the rule of change of variables, we have:
∫ t(z) p(x|z) dz = ∫ t(z) pµ(Ψx(z)) dz = ∫ t(Ψx⁻¹(µ)) p(µ) |JΨx⁻¹(µ)| dµ = ∫ t(f⁻¹(x − µ)) p(µ) |Jf⁻¹(x − µ)| dµ
= Ep(µ)[(t̄V)(x − µ)] (8)
= (f#[t] ∗ pµ)(x), (9)
where we have denoted the functions t̄ := t ◦ f⁻¹ and V := |Jf⁻¹|, and abused the push-forward notation f#[t] for a general function t to formally denote (t ◦ f⁻¹)|Jf⁻¹| = t̄V.
According to the graphical structure of CSG, we have:
p(x) = ∫ p(z) p(x|z) dz, (10)
E[y|x] = (1/p(x)) ∫ y p(x, y) dy = (1/p(x)) ∫∫ y p(z) p(x|z) p(y|s) dz dy = (1/p(x)) ∫ p(z) p(x|z) E[y|s] dz = (1/p(x)) ∫ g(s) p(z) p(x|z) dz. (11)
So from Eq. (9), we have:
p(x) = (f#[pz] ∗ pµ)(x),  E[y|x] = (1/p(x)) (f#[g pz] ∗ pµ)(x). (12)
Matching the data distribution p(x, y) = p′(x, y) indicates both p(x) = p′(x) and E[y|x] = E′[y|x]. Using Lemma A.5 under condition (i), this further indicates f#[pz] = f ′#[p′z] a.e. and f#[gpz] = f ′#[g ′p′z] a.e. The former gives Φ#[pz] = p ′ z . The latter can be reformed as ḡf#[pz] = ḡ ′f ′#[p ′ z] a.e., so ḡ = ḡ′ a.e., where we have denoted ḡ := g ◦ (f−1)S and ḡ′ := g′ ◦ (f ′−1)S similarly. From ḡ = ḡ′, we have for any v ∈ V ,
g(s) = g((f−1 ◦ f)S(s, v)) = g((f−1)S(f(s, v))) = ḡ(f(s, v)) = ḡ′(f(s, v)) = g′((f ′−1)S(f(s, v))) = g′(ΦS(s, v)). (13)
For both continuous and categorical y, g(s) uniquely determines p(y|s). So the above equality means that p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V .
Proof under condition (ii). Applying Eq. (8) to Eqs. (10, 11), we have:
p(x) = Ep(µ)[(p̄zV)(x − µ)],  E[y|x] = (1/p(x)) Ep(µ)[(ḡ p̄zV)(x − µ)],
where we have similarly denoted p̄z := pz ◦ f⁻¹. Under condition (ii), E[µ⊤µ] is infinitesimal, so we can expand the expressions w.r.t. µ. For p(x), we have:
p(x) = Ep(µ)[ p̄zV − ∇(p̄zV)⊤µ + (1/2) µ⊤∇∇⊤(p̄zV)µ + O(E[‖µ‖₂³]) ]
= p̄zV + (1/2) Ep(µ)[ µ⊤∇∇⊤(p̄zV)µ ] + O(σµ³),
where all functions are evaluated at x. For E[y|x], we first expand 1/p(x) using 1/(x + ε) = 1/x − ε/x² + O(ε²) to get: 1/p(x) = 1/(p̄zV) − (1/(2 p̄z²V²)) Ep(µ)[ µ⊤∇∇⊤(p̄zV)µ ] + O(σµ³). The second term is expanded as: ḡ p̄zV + (1/2) Ep(µ)[ µ⊤∇∇⊤(ḡ p̄zV)µ ] + O(σµ³). Combining the two parts, we have:
E[y|x] = ḡ + (1/2) Ep(µ)[ µ⊤( (∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ )µ ] + O(σµ³). (14)
This equation holds for any x ∈ supp(px) since the expectation is taken w.r.t. the distribution p(x, y); in other words, the considered x here is any value generated by the model. So up to O(σµ²),
|p(x) − (p̄zV)(x)| = (1/2) |Ep(µ)[µ⊤∇∇⊤(p̄zV)µ]| ≤ (1/2) Ep(µ)[|µ⊤∇∇⊤(p̄zV)µ|]
≤ (1/2) Ep(µ)[ ‖µ‖₂ ‖∇∇⊤(p̄zV)‖₂ ‖µ‖₂ ] = (1/2) E[µ⊤µ] ‖∇∇⊤(p̄zV)‖₂
= (1/2) E[µ⊤µ] |p̄zV| ‖∇∇⊤ log p̄zV + (∇ log p̄zV)(∇ log p̄zV)⊤‖₂
≤ (1/2) E[µ⊤µ] |p̄zV| ( ‖∇∇⊤ log p̄zV‖₂ + ‖∇ log p̄zV‖₂² ), (15)
|E[y|x] − ḡ(x)| = (1/2) |Ep(µ)[µ⊤( (∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ )µ]|
≤ (1/2) Ep(µ)[ |µ⊤( (∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ )µ| ]
≤ (1/2) Ep(µ)[ ‖µ‖₂ ‖(∇ log p̄zV)∇ḡ⊤ + ∇ḡ(∇ log p̄zV)⊤ + ∇∇⊤ḡ‖₂ ‖µ‖₂ ]
≤ (1/2) E[µ⊤µ] ( ‖(∇ log p̄zV)∇ḡ⊤‖₂ + ‖∇ḡ(∇ log p̄zV)⊤‖₂ + ‖∇∇⊤ḡ‖₂ )
= E[µ⊤µ] ( |(∇ log p̄zV)⊤∇ḡ| + (1/2)‖∇∇⊤ḡ‖₂ ). (16)
Given the bounding conditions in the theorem, the multiplicative factors to E[µ⊤µ] in the last expressions are bounded by a constant. So when 1/σµ² → ∞, i.e. E[µ⊤µ] → 0, p(x) and E[y|x] converge uniformly to (p̄zV)(x) = f#[pz](x) and ḡ(x), respectively. So p(x, y) = p′(x, y) implies f#[pz] = f′#[p′z] and ḡ = ḡ′, which means Φ#[pz] = p′z and p(y|s) = p′(y|ΦS(s, v)) for any v ∈ V, due to Eq. (13) and the explanation that follows.
Proof under condition (iii). We only need to show that when 1/σ²µ is much larger than the given quantity, we still have p(x, y) = p′(x, y) =⇒ p̄zV = p̄′zV′, ḡ = ḡ′ up to a negligible effect. This task amounts to showing that the residuals |p(x) − (p̄zV)(x)| and |E[y|x] − ḡ(x)| controlled by Eqs. (15, 16) are negligible. To achieve this, we need to further expand the controlling functions using the derivatives of f, g and pz explicitly, and bound them by the bounding constants. In the following, we use indices a, b, c for the components of x and i, j, k for those of z. The functions of z appearing in the following (e.g., f, g, pz and their derivatives) are evaluated at z = f⁻¹(x), since we are bounding functions of x.
(1) Bounding |E[y|x] − ḡ(x)| ≤ E[µ⊤µ] ( |(∇ log p̄zV)⊤∇ḡ| + (1/2)‖∇∇⊤ḡ‖₂ ) from Eq. (16).
From the chain rule of differentiation, it is easy to show that:
∇ log p̄z = Jf⁻¹ ∇ log pz,  ∇ḡ = J(f⁻¹)S ∇g = Jf⁻¹ ∇z g, (17)
where ∇z g = (∇g⊤, 0⊤dV)⊤ (recall that g is a function only of s). For the term ∇ log V, we apply Jacobi’s formula for the derivative of the log-determinant:
∂a log V(x) = ∂a log |Jf⁻¹(x)| = tr( Jf⁻¹(x)⁻¹ (∂a Jf⁻¹(x)) ) = Σb,i (Jf⁻¹(x)⁻¹)ib ( ∂a Jf⁻¹(x)bi ) = Σb,i Jf(f⁻¹(x))ib ∂b∂a f⁻¹i(x) = Σi ( Jf (∇∇⊤f⁻¹i) )ia. (18)
However, as bounding Eq. (17) already requires bounding ‖Jf⁻¹‖₂, directly using this expression to bound ‖∇ log V‖₂ would also require bounding ‖Jf‖₂. This requirement to bound the first-order derivatives of both f and f⁻¹ is relatively restrictive. To ease the requirement, we would like to express ∇ log V in terms of Jf⁻¹. This can be achieved by expressing the ∇∇⊤f⁻¹i’s in terms of the ∇∇⊤fc’s. To do this, first consider a general invertible-matrix-valued function A(α) of a scalar α. We have 0 = ∂α(A(α)⁻¹A(α)) = (∂αA⁻¹)A + A⁻¹∂αA, so A⁻¹∂αA = −(∂αA⁻¹)A, and consequently ∂αA = −A(∂αA⁻¹)A. Using this relation (in the fourth equality below), we have:
(∇∇⊤f⁻¹i)ab = ∂a∂b f⁻¹i = ∂a(Jf⁻¹)bi = (∂a Jf⁻¹)bi
= −( Jf⁻¹ (∂a Jf⁻¹⁻¹) Jf⁻¹ )bi = −( Jf⁻¹ (∂a Jf) Jf⁻¹ )bi
= −Σj,c (Jf⁻¹)bj ( ∂a(∂j fc) ) (Jf⁻¹)ci = −Σj,c,k (Jf⁻¹)bj (∂k∂j fc) (∂a f⁻¹k) (Jf⁻¹)ci
= −Σc (Jf⁻¹)ci Σj,k (Jf⁻¹)bj (∂k∂j fc) (Jf⁻¹)ak = −Σc (Jf⁻¹)ci ( Jf⁻¹ (∇∇⊤fc) Jf⁻¹⊤ )ab,
or in matrix form,
∇∇⊤f⁻¹i = −Σc (Jf⁻¹)ci Kᶜ, (19)
where we have defined the matrix Kᶜ := Jf⁻¹ (∇∇⊤fc) Jf⁻¹⊤, which is symmetric. Substituting with this result, we can transform Eq. (18) into the desired form:
∇ log V(x) = Σi ( Jf (∇∇⊤f⁻¹i) )⊤i: = −Σi ( Jf Σc (Jf⁻¹)ci Jf⁻¹ (∇∇⊤fc) Jf⁻¹⊤ )⊤i:
= −Σi ( Σc (Jf⁻¹)ci Jf Jf⁻¹ (∇∇⊤fc) Jf⁻¹⊤ )⊤i: = −Σc,i (Jf⁻¹)ci ( (∇∇⊤fc) Jf⁻¹⊤ )⊤i:
= −Σc ( Jf⁻¹ (∇∇⊤fc) Jf⁻¹⊤ )⊤c: = −Σc (Kᶜc:)⊤ = −Σc Kᶜ:c, (20)
so its norm can be bounded by:
‖∇ log V(x)‖₂ = ‖ Σc Kᶜc: ‖₂ = ‖ Σc (Jf⁻¹)c: (∇∇⊤fc) Jf⁻¹⊤ ‖₂ ≤ Σc ‖(Jf⁻¹)c:‖₂ ‖∇∇⊤fc‖₂ ‖Jf⁻¹‖₂ ≤ B″f B′f⁻¹ Σc ‖(Jf⁻¹)c:‖₂ ≤ d (B′f⁻¹)² B″f, (21)
where we have used the following result in the last inequality:
Σc ‖(Jf⁻¹)c:‖₂ ≤ d^{1/2} √(Σc ‖(Jf⁻¹)c:‖₂²) = d^{1/2} ‖Jf⁻¹‖F ≤ d ‖Jf⁻¹‖₂ ≤ d B′f⁻¹. (22)
Integrating Eq. (17) and Eq. (21), we have:
|(∇ log p̄zV)⊤∇ḡ| = |(Jf⁻¹ ∇ log pz + ∇ log V)⊤ Jf⁻¹ ∇z g|
≤ ( ‖Jf⁻¹‖₂ ‖∇ log pz‖₂ + ‖∇ log V‖₂ ) ‖Jf⁻¹‖₂ ‖∇g‖₂
≤ ( B′f⁻¹ B′log p + d (B′f⁻¹)² B″f ) B′f⁻¹ B′g
= ( B′log p + d B′f⁻¹ B″f ) (B′f⁻¹)² B′g. (23)
For the Hessian of ḡ, direct calculus gives:
∇∇⊤ḡ = J(f⁻¹)S (∇∇⊤g) J(f⁻¹)S⊤ + Σ_{i=1}^{dS} (∇g)si (∇∇⊤f⁻¹si
1. What is the main contribution of the paper, and what are the strengths of the proposed approach?
2. What are the weaknesses of the paper, especially regarding its presentation and readability?
3. How can the authors improve the paper's presentation and make it more concise?
4. What are some questions and concerns regarding the variables used in the paper, particularly in Section 3?
5. Can the authors provide more details about the difference between their approach and IRM, and why they chose not to include IRM in their experiments?
6. What are some potential issues with the assumptions made in the paper, and how might they affect the practical applicability of the proposed method?
7. How might the authors better illustrate the benefits of their proposed framework in future versions of the paper?
Review
This paper proposes a Causal Semantic Generative model (CSG) to model semantic and variation factors separately and also provides a variational approach to learn the model. It is proved that under some (perhaps strong) assumptions, CSG identifies the semantic factor on the training domain. Authors further show the boundedness of OOD generalization error based on the above result. Two experiments are presented to validate the proposed method.
While the paper seems to have many interesting ideas and theoretic results, it is poorly presented and not well prepared, making it hard to evaluate the technical correctness. I also notice that the paper changes the original latex formatting by reducing much vertical space for almost all section titles, theorems, and equations. Also, the top margin of page 8 is heavily reduced. There is approximately 2/3 - 1 page more content than that of the original format with required page limit. Given this consideration, I also lower my score.
In summary, I recommend a rejection for the present version. I highly suggest the authors revise the paper to make it more readable, self-contained, and concise, and resubmit the paper to another top conference/journal.
Please find my questions/suggestions below.
The variables s, v, x, y are introduced in the introduction part but then not mentioned when they are actually used in Section 3 to describe CSG. I suggest introducing the variables in Section 3.
Authors use many italic words, and more crucially, lots of mixed uses of italic and normal fonts, particularly in Section 3. It really affects the reading flow. For introducing the CSG part, I recommend firstly introduce the model and then use another few paragraphs to present the examples for illustration.
For the invariance principle 3.2, p(s, v) is the only source of domain shift. Does it allow p(v) and p(s|v) (or p(s) and p(v|s)) to change while keeping p(s, v) unchanged?
(a) Sometimes you use 'Theorem' and also 'Thm.' in the main text, please be consistent. (b) The places for citation also affects the reading flow. For example, right after Assumption 5.2, 'It is a common (Janzing et al., 2009; Shalit et al., 2017; Khemakhem et al., 2019; Lee et al., 2019) sufficient condition for the fundamental (Peters et al., 2014, Prop. 7) requirement of causal minimality (Peters et al., 2014; 2017) for identifiability.' (c) Some notations are not introduced, e.g., q⊥(x) in CSG-ind.
Why is it necessary to introduce CSG-ind? And in what case shall we consider CSG-ind? I did not find it well explained.
Also for Assumption 5.2, 'It is a common sufficient condition for the fundamental requirement of causal minimality for identifiability.' However, causal minimality is not equivalent to f being bijective.
The paper should provide more details in comparing with IRM, and in fact I didn't get much information about the difference with IRM from Section 3.2. Can the authors give more details about 'For noisy or degenerate mechanisms, ambiguity occurs during inference (Fig. 2), and the inferred result notably relies on the prior.'? Also, IRM is not compared in the experiments.
In my opinion, the biggest contribution of this paper is to propose CSG, within which several theoretic results (identifiability, OOD error bound, etc.) are established under some necessary but maybe strong conditions. However, there is much less content on verifying that CSG does bring many benefits. One way is to conduct extensive experiments to verify so, but I find the experiments are not sufficient and also several methods (like IRM) are not included for comparison.
** after reading rebuttal**
Thanks for clarifications and an improved version. I decide to increase my evaluation to 5.
However, I think that the paper needs to take more content to illustrate the practical benefits of the proposed CSG frameworks, for the following reasons:
the framework is proposed based on empirical observations like 'intervening an image by e.g. breaking a camera sensor unit when taking the image, does not change how the photographer labels it', which is not mathematically rigorous;
the principles and assumptions are rather strong (though I understand one generally has to make assumptions in causality), and in practice it is not clear when such assumptions hold and how many applications satisfy these assumptions;
the interesting derivations and theorems are also based on the CSG framework, which means, if the framework is incorrect, then these results may fail;
the experiment settings are rather limited in the current version. I hope the authors to add further content in their next version, regardless of whether the paper gets accepted or rejected.
Lastly, I still feel it a bit tricky to change the original formatting in the previous submission. |
ICLR | Title
Vi-MIX FOR SELF-SUPERVISED VIDEO REPRESENTATION
Abstract
Contrastive representation learning of videos relies heavily on exhaustive data augmentation strategies. Therefore, towards designing video augmentation for self-supervised learning, we first analyze the best strategy to mix videos to create a new augmented video sample. Then, the question remains: can we make use of the other modalities in videos for data mixing? To this end, we propose Cross-Modal Manifold Cutmix (CMMC) that inserts a video tesseract into another video tesseract in the feature space across two different modalities. We find that our video mixing strategy, Vi-Mix, i.e. preliminary mixing of videos followed by CMMC across different modalities in a video, improves the quality of learned video representations. We exhaustively conduct experiments for two downstream tasks: action recognition and video retrieval on three popular video datasets UCF101, HMDB51, and NTU-60. We show that the performance of Vi-Mix on both downstream tasks is on par with other self-supervised approaches while requiring less training data.
1 INTRODUCTION
The recent advancements in self-supervised representation learning are credited to the success of using discriminative contrastive losses such as InfoNCE (Gutmann & Hyvärinen, 2010). Given a data sample, contrastive representation learning focuses on discriminating its transformed version from a large pool of other instances or their transformations. Thus, while the concept of contrastive learning is applicable to any domain, its effectiveness relies on domain-specific inductive bias, as the transformations are obtained from the same data instance. For images, these transformations are usually standard data augmentation techniques (Chen et al., 2020), while in videos they are typically data artifacts that arise from temporal segments within the same video clip (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016).
Recently, data mixing strategies (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019) have emerged as promising data augmentation techniques for supervised learning. When these mixing strategies are incorporated with contrastive learning, the quality of the learned representation improves drastically, as in Lee et al. (2021); Verma et al. (2021; 2019). Such augmentations introduce semantically meaningful variance for better generalization, which is crucial for learning self-supervised representations. While these mixing strategies have been impactful for learning image representations, they have been explored only to a limited extent in the video domain.
Therefore, in this paper, we study various data mixing strategies for videos, and propose a new approach to overcome their limitation by mixing across modalities. We first investigate and compare the mixing strategies adopted from the image domain, and we find that mixing videos by performing simple interpolation of two video cuboids (Mixup) is more effective than inserting a video cuboid within another (Cutmix). This is in contrast to the observations made in the image domain. Furthermore, unlike learning image representations (Lee et al., 2021), these data mixing strategies are prone to over-fitting when trained for longer, making them limited for videos.
Motivated by the success of previous self-supervised techniques exploiting multiple modalities to learn discriminative video representation as in Arandjelovic & Zisserman (2017a); Chung & Zisserman (2016); Korbar et al. (2018); Arandjelovic & Zisserman (2017b); Owens & Efros (2018); Piergiovanni et al. (2020); Miech et al. (2020), in this paper, we pose the following question: can we take advantage of other modalities for mixing videos while learning self-supervised representation?
Different modalities of a video, like RGB and optical flow, have different distributions; thus, mixing them directly in the input space makes the task of discriminating similar instances from the other instances easier, limiting the quality of the learned representation. To this end, we propose our Cross-Modal Manifold Cutmix (CMMC), which performs the data mixing operation ‘across different modalities’ of a video in their hidden intermediate ‘representations’. Given the video encoders from different modalities pre-trained with a contrastive loss in addition to mixup augmentation, CMMC exploits the underlying structure of the data manifold. This is done by performing a cutmix operation in the feature space across space, time and channels. To the best of our knowledge, this is the first attempt to perform mixing across channels. The channel mixing of the cross-modal feature map enforces the encoder to learn better semantic concepts in the videos. Hence, we train video encoders for different modalities in several stages including the use of the mixup strategy in videos and our proposed CMMC. We call this video augmentation strategy Vi-Mix, which stands for Video instance-Mix for contrastive representation learning.
Empirically, we confirm that Vi-Mix, while being easy to implement, significantly improves contrastive representation learning for videos. We show that Vi-Mix can effectively learn self-supervised representations with a small amount of data for the pretext task and can also take advantage of other modalities of the videos through the manifold mixing strategy. We thoroughly evaluate the quality of the learned representation on two downstream tasks, action recognition and retrieval, on UCF101 and HMDB51. We demonstrate the improvement in transferability of the representation learned with Vi-Mix by conducting training on a large-scale dataset, Kinetics-400, and then finetuning on smaller datasets. Furthermore, we corroborate the robustness of our video data augmentation strategy by observing similar improvements on video skeleton sequences for the task of action recognition.
2 BACKGROUND
In this section, we first review a general contrastive learning mechanism used for learning self-supervised video representation. Then, we review a data mixing formulation for self-supervision in the image domain. Let X ∈ RT×3×H×W be a video sequence. The objective is to learn a mapping f : X → z, where z ∈ RD, that can be effectively used to discriminate video clips for various downstream tasks, e.g. action recognition, retrieval, etc.
Contrastive Learning. Assume a set of augmentation transformations A is applied to X . So, for a particular video there exists a positive (say X̃ ) whereas the other transformed videos in a minibatch are considered as negatives. The encoder f(.) and its exponential average model f̃(.) maps the positives and negatives respectively to embedding vectors. Therefore, the contrastive loss for a sample Xi is formulated as
L(Xi) = −log [ exp(zi · z̃i/τ) / ( exp(zi · z̃i/τ) + Σj∈N exp(zi · zj/τ) ) ] (1)
where τ is a scaling temperature parameter and N is the set of negatives. Note that the embedding vectors zi and z̃i are L2-normalized before the loss computation. Thus, the loss L optimizes the video instances such that the representation of the video instances with the same view are pulled towards each other while pushing away from the other instances.
Data Mix for Contrastive Learning. We revisit the formulation proposed in i-mix (Lee et al., 2021) for mixing data within a batch for contrastive representation learning. Let yi ∈ {0, 1}BS be the virtual labels of the input Xi and X̃i in a batch, where yi,i = 1 and yi,j 6=i = 0. Then, the (N + 1)− way discrimination loss for a sample in a batch is:
L(Xi, yi) = −yi,b · log [ exp(zi · z̃b/τ) / ( exp(zi · z̃b/τ) + Σj∈N exp(zi · zj/τ) ) ] (2)
where b ranges from 0 to BS. Thus, the data instances are mixed within a batch for which the loss is defined as:
LMix((Xi, yi), (Xr, yr), λ) = L(Mix(Xi,Xr;λ), λyi + (1− λ)yr) (3)
where λ ∼ Beta(α, α) is a mixing coefficient, r ∼ rand(BS), and Mix() is a mixing operator. In the following, we will discuss the appropriate mixing operators in the video domain.
3 VI-MIX
In this paper, we use the same i-mix formulation (from the above section) for data mixing while learning discriminative self-supervised representation. First, we investigate the best strategies to define the mixing operator for video domain. Furthermore, we introduce a manifold mixing strategy to make use of the other modalities freely available in videos for data mixing. We integrate both these data augmentation strategies, together called Video-instance Mix (Vi-Mix) for contrastive representation learning of Videos.
3.1 MIXING OPERATOR FOR VIDEOS
Unlike mixing operations in images as in Zhang et al. (2018); Verma et al. (2019); Shen et al. (2020); Yun et al. (2019), videos have a temporal dimension. We argue that handling the temporal dimension in videos is not equivalent to handling the spatial dimension in images. For the mixing operation defined in equation 3, it is straightforward to extend the existing image mixing strategies to videos. Mixup (Verma et al., 2019) in videos performs a weighted averaging of two spatio-temporal stacks of frames. In contrast to the cutmix operator (Yun et al., 2019), the mixup operator retains the temporal information in videos and thus facilitates contrastive representation learning; a sketch of this operator is given below. We empirically corroborate this observation in the experimental analysis. In addition to this, videos possess other modalities, like optical flow, that can be computed without any supervision. The question remains: can we make use of other modalities in videos for mixing instances while learning contrastive representations? To this end, we introduce the Cross-Modal Manifold Cutmix (CMMC) strategy for mixing video instances across different modalities, which is discussed in the next section.
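As an illustration, the following simplified PyTorch-style sketch mixes whole video clips within a batch and forms the mixed virtual-label loss of equation 3; the encoder interfaces and the single-view simplification are assumptions for exposition, not the exact training code.

import torch
import torch.nn.functional as F

def video_mixup_imix_loss(clips, encoder, key_encoder, tau=0.07, alpha=1.0):
    # clips: a batch of video clips of shape [B, T, C, H, W]
    B = clips.size(0)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(B, device=clips.device)
    mixed = lam * clips + (1.0 - lam) * clips[perm]       # Mixup: interpolate whole spatio-temporal cuboids
    z = F.normalize(encoder(mixed), dim=1)                # query embeddings from the mixed clips
    with torch.no_grad():
        z_key = F.normalize(key_encoder(clips), dim=1)    # key embeddings from the clean clips
    logits = z @ z_key.t() / tau                          # [B, B] instance-discrimination logits
    targets = torch.arange(B, device=clips.device)
    # virtual labels: weight lam on the instance itself, (1 - lam) on the mixed-in instance
    return lam * F.cross_entropy(logits, targets) + (1.0 - lam) * F.cross_entropy(logits, perm)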
3.2 CROSS-MODAL MANIFOLD CUTMIX
Different modalities in videos provide additional information that is often exploited for self-supervised learning, as in (Han et al., 2020a; Linguo et al., 2021). In contrast to these approaches, we simply propose to mix these different modalities as another data augmentation strategy for self-supervised representation. However, the dissimilarity in distribution between the different modalities (say, RGB and optical flow) in videos makes it harder to mix them in the input space. Consequently, we propose Cross-Modal Manifold Cutmix to mix such cross-modal representations in the hidden representation space.
As an extension of the previous notation, we now consider two different modalities X1i and X2i for a given video clip Xi. The objective of the self-supervised task is to learn discriminative video representations, i.e. to learn functions f1(·) and f2(·). We decompose the encoder function by f1(X1i) = f1k(g1k(X1i)), where g1k is the part of the video encoder for modality 1 with k layers that maps the input data X1i to a hidden representation. Similarly, f1k maps the hidden representation g1k(X1i) to the embedding vector z1i. Note that we have already trained video encoders f1(·) and f2(·) by exploiting the above-mentioned Mixup strategy among the video instances in a mini-batch while optimizing the contrastive loss. Now, CMMC is trained in a 4-stage fashion. In the first stage, we train the encoder f1(·) of modality 1 in 5 steps as illustrated in figure 1. First, we select random layers k and l from a set of eligible layers in f1(·) and f2(·) respectively such that k ≤ l. This set excludes the input space. Second, we feed a pair of inputs X1i and X2r to their respective video encoders f1 and f2 until they reach layer k and layer l respectively. We obtain g1k(X1i) and g2l(X2r) - a hidden representation (spatio-temporal tesseract) of both videos in modality 1 and 2. Third, we perform a data mixing among the hidden representations across the two modalities as:
gmix1k , λ = CutMix(g1k, g2r;α) (4)
ymix1k = λy1i + (1− λ)y2r (5) where (y1i, y2r) are one-hot labels, hyper-parameter α = 1, and the mixing operator is cutmix as in Yun et al. (2019) which returns the mixing coefficient λ along with the mixed data. For brevity, we omit the input instances in the equation. Fourth, we continue the forward pass in f1(·) only from layer k to the output embedding, now we denote by zmix1i . Fifth, this embedding is used to compute the (N + 1)-way discrimination loss which is reformulated as:
L(X1i, y1i) = −ymix1i,b · log [ exp(zmix1i · z̃1b/τ) / ( exp(zmix1i · z̃1b/τ) + Σj∈N exp(zmix1i · z1j/τ) ) ] (6)
The computed gradients are backpropagated through the entire video encoder f1(·) of modality 1 only. It is to be noted that the video encoder f2(·) of modality 2 is not trained in this stage. In the second stage, we train the video encoder f2(·) of modality 2 while freezing the updated learned weights of f1(·). We continue this cycle twice for each modality, hence 4 stages to learn the self-supervised video representation in f1(·) and f2(·). Algorithm 1 provides the pseudocode of one stage of CMMC for training encoder f1(·). Thus, to sum up, Vi-Mix consists of initially training
Algorithm 1 Pytorch-like style Pseudocode of One stage CMMC for modality 1
alpha = 1.
mix1 = rand(1, L)                                # L is the number of layers in the encoder
mix2 = rand(mix1, L)                             # mixing layer of modality 2, with mix1 <= mix2
x1_q, x1_k = aug(rgb)                            # two augmented views of the RGB (modality 1) clip
x2 = aug(flow)                                   # flow (modality 2) clip
g1 = f_1q.partial_forward(x1_q, 0, mix1)         # hidden representation of modality 1 at layer mix1
g2 = f_2q.partial_forward(x2, 0, mix2)           # hidden representation of modality 2 at layer mix2
g_mix, labels_new, lam = CutMix(g1, g2, alpha)   # cross-modal manifold cutmix
z1 = normalize(f_1q.partial_forward(g_mix, mix1, L))   # continue the forward pass from layer mix1
z2 = normalize(f_1k.forward(x1_k))               # key embedding from the momentum encoder
z2, g2 = z2.detach(), g2.detach()                # no gradient flow through the key and modality 2
logits = matmul(z1, z2.T) / t
loss = lam * CrossEntropyLoss(logits, arange(len(x1_q))) + (1 - lam) * CrossEntropyLoss(logits, labels_new)
Thus, to sum up, Vi-Mix consists of initially training the video encoders of modality 1 and modality 2 independently with the InfoNCE loss as in Chen et al. (2020) and applying mixup augmentation. Then, we perform CMMC among the hidden representations of data from modalities 1 and 2 in 4 stages. This is performed with an alternating training strategy as in Han et al. (2020a) to make use of the latest learned representations in the cross-modal network. The final learned model is obtained after two cycles of training the encoder of each modality.
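To make this staged schedule concrete, the sketch below shows one way the alternating training could be organised. It is only an illustration under our assumptions: run_mixup_pretrain and run_cmmc_stage are hypothetical placeholders for the mixup pre-training and the per-stage CMMC training described above, not functions from the authors' code.

import torch.nn as nn

def set_trainable(model: nn.Module, flag: bool) -> None:
    # Freeze or unfreeze every parameter of an encoder.
    for p in model.parameters():
        p.requires_grad = flag

def train_vi_mix(f_rgb, f_flow, run_mixup_pretrain, run_cmmc_stage):
    # Stage 0: independent InfoNCE pre-training of both encoders with input-space mixup.
    run_mixup_pretrain(f_rgb, modality="rgb", epochs=300)
    run_mixup_pretrain(f_flow, modality="flow", epochs=300)
    # Stages 1-4: train one encoder with CMMC while the cross-modal encoder is frozen,
    # then swap; the cycle is repeated twice per modality (100 epochs per stage).
    for student, teacher in [(f_rgb, f_flow), (f_flow, f_rgb), (f_rgb, f_flow), (f_flow, f_rgb)]:
        set_trainable(teacher, False)
        set_trainable(student, True)
        run_cmmc_stage(student, teacher, epochs=100)
    return f_rgb, f_flow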
CutMix in feature space. Here, we explain how the cutmix operator is applied to the video tesseracts in the feature space. Assume that the hidden representation of the input video sequence X1i in modality 1 is g1k ∈ Rc1×t1×h1×w1, where c1 represents channels, t1 time, and s1 = h1 × w1 is the spatial resolution. We generate a new representation gmix1 by combining the hidden representations g1k and g2l. These hidden representations g1k and g2l may differ in size such that (c1, t1, s1) ≥ (c2, t2, s2). Therefore, we define a cutmix operation that combines the video tesseracts in space, in time and across channels. We define the combining operation as
gmix1 = M ⊙ g1k + (1 − M) ⊙ g2l    (7)
where M ∈ {0, 1}c1×t1×h1×w1 is a binary tensor mask which is decided by sampling the bounding box coordinates bbox = (bc1, bc2, bt1, bt2, bh1, bh2, bw1, bw2) from a uniform distribution. In order to preserve the temporal information in a video, we fix (bt1, bt2) = (0, t2). Similarly, we preserve the channel information processed by the video encoder f2(·) by fixing (bc1, bc2) = (0, c2).
Thus, the bounding box selection follows a random sampling of a center coordinate (bwc, bhc) from (U(0, w2), U(0, h2)). The corner points of the bounding box are determined by
bw1, bw2 = bwc − (w2√λ)/2, bwc + (w2√λ)/2,    bh1, bh2 = bhc − (h2√λ)/2, bhc + (h2√λ)/2    (8)
where λ ∼ U(0, 1). Even after fixing (bt1, bt2) and (bc1, bc2), the resultant video tesseract in one modality may not match the dimensions of the video tesseract in the other modality across channel and time if k < l. So, we select a 4D bounding box with coordinates (Mc1, Mc2, Mt1, Mt2, Mh1, Mh2, Mw1, Mw2) within the defined binary mask M. We randomly sample a center coordinate (Mcc, Mtc, Mhc, Mwc) from U(0, c1), U(0, t1), U(0, h1), and U(0, w1) respectively. The end points of the binary mask M are determined by
Mc1, Mc2 = Mcc − c2/2, Mcc + c2/2,    Mt1, Mt2 = Mtc − t2/2, Mtc + t2/2,    Mh1, Mh2 = Mhc − h2/2, Mhc + h2/2,    Mw1, Mw2 = Mwc − w2/2, Mwc + w2/2    (9)
For the region within this bounding box, the values in the binary mask are set to 0, and to 1 otherwise. A new mixing coefficient is computed as 1 − λnew = Σc,t,h,w Mc,t,h,w / (c1·t1·h1·w1), denoting the complement of the proportion of the volume occupied by the mixed-in region. This new mixing coefficient λnew is returned by the cutmix function to compute the mixed labels in equation 5.
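For illustration, the sketch below implements the feature-space cutmix of equations 7-9 for a single pair of hidden tensors, assuming PyTorch. The rounding and clamping of box coordinates are our own choices and the function is not the authors' implementation; it only mirrors the procedure described above (full channel and temporal extent taken from the cross-modal tensor, a √λ-scaled spatial box, and a random placement inside the larger tensor).

import math
import torch

def cross_modal_cutmix(g1: torch.Tensor, g2: torch.Tensor):
    # g1: (c1, t1, h1, w1), g2: (c2, t2, h2, w2), with every dimension of g2 <= that of g1.
    c1, t1, h1, w1 = g1.shape
    c2, t2, h2, w2 = g2.shape

    lam = torch.rand(1).item()                      # lambda ~ U(0, 1)
    bh = max(1, int(round(h2 * math.sqrt(lam))))    # box height, Eq. (8)
    bw = max(1, int(round(w2 * math.sqrt(lam))))    # box width,  Eq. (8)

    def rand_range(extent, size):
        # Random centre, clamped so that the box of the given size stays inside the extent.
        centre = torch.randint(0, extent, (1,)).item()
        lo = min(max(centre - size // 2, 0), extent - size)
        return lo, lo + size

    # Source box inside g2: random spatial centre, full temporal and channel extent.
    sh1, sh2 = rand_range(h2, bh)
    sw1, sw2 = rand_range(w2, bw)
    patch = g2[:, :, sh1:sh2, sw1:sw2]              # (c2, t2, bh, bw)

    # Target box inside g1 (Eq. (9)): random centre across channel, time and space.
    tc1, tc2 = rand_range(c1, c2)
    tt1, tt2 = rand_range(t1, t2)
    th1, th2 = rand_range(h1, bh)
    tw1, tw2 = rand_range(w1, bw)

    g_mix = g1.clone()
    g_mix[tc1:tc2, tt1:tt2, th1:th2, tw1:tw2] = patch

    # Proportion of g1's volume replaced by the cross-modal patch; we assume a
    # normalisation by the full tensor volume, matching the description above.
    lam_new = (c2 * t2 * bh * bw) / float(c1 * t1 * h1 * w1)
    return g_mix, lam_new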
Thus, we perform a mix operation in videos across all the dimensions: spatial, temporal and channel. However, we preserve the temporal properties of the video instances by retaining a proportion of the channel information. This makes the cutmix operation effective in the feature space.
4 EXPERIMENTS
In this section, we describe the datasets used in our experimental analysis, implementation details, and evaluation setup. We present ablation studies to illustrate the effectiveness of the Vi-Mix video data augmentation and also provide an exhaustive state-of-the-art comparison with our Vi-Mix models.
4.1 DATASETS
We use two video action recognition datasets: UCF101 (Soomro et al., 2012) and Kinetics-400 (Kay et al., 2017) for self-supervised training of the video encoders. UCF101 contains 13k videos with 101 human actions and Kinetics-400 (K400) contains 240k video clips with 400 human actions. We also use a skeleton action recognition dataset: NTU-RGB+D (Shahroudy et al., 2016) for self-supervised training of a skeleton encoder. NTU-RGB+D (NTU-60) contains 58k videos with 60 human actions, all performed indoors. Note that we use the videos or skeleton sequences from the training set only for the self-supervised pre-training. Downstream tasks are evaluated on split1 of UCF101 and HMDB51 (Kuehne et al., 2011), which contains 7k videos with 51 human actions. For evaluation on skeletons, we evaluate on the validation set of NTU-60 with the Cross-subject (xsub) and Cross-View (xview) protocols.
4.2 IMPLEMENTATION DETAILS
Vi-Mix is a simple data augmentation strategy that requires a cutmix operation in the feature space, adopted from (Yun et al., 2019), followed by our temporal and channel mixing. The input modalities in our experiments consist of RGB, optical flow and skeletons (3D poses). The optical flow is computed with the unsupervised TV-L1 algorithm (Sánchez Pérez et al., 2013) and the same pre-processing procedure is used as in Carreira & Zisserman (2017). For the skeleton experiments, the skeleton data X ∈ RC×T×V is acquired using KinectV2 sensors, where the coordinate feature C = 3, # joints V = 25, and # frames T = 50. Following the pre-processing steps in Linguo et al. (2021), we compute the joint and motion cues. For all the RGB and optical flow models, we choose the S3D (Xie et al., 2018) architecture as the backbone, whereas for the skeleton model, we choose ST-GCN (Yan et al., 2018) with the channels in each layer reduced to 1/4 as the backbone. For self-supervised representation learning, we adopt a momentum-updated history queue to cache a large number of video features as in MoCo (He et al., 2019). We attach a non-linear projection head, and remove it for downstream task evaluations as done in SimCLR (Chen et al., 2020).
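As an illustration of the projection head mentioned above, a SimCLR-style two-layer MLP could look as follows; the feature and output dimensions here are assumptions made for the example rather than the exact values used in the paper.

import torch.nn as nn

class ProjectionHead(nn.Module):
    # Non-linear head attached during pre-training and removed for downstream evaluation.
    def __init__(self, in_dim: int = 1024, hidden_dim: int = 1024, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)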
For our experiments with RGB and optical flow, we use 32 frames of 128 × 128 RGB (or flow) input, at 30 fps. For additional data augmentation, we apply clip-wise consistent random crops, horizontal flips, Gaussian blur and color jittering. We also apply random temporal cropping from the same video as used in Han et al. (2020a). For training a MoCo model with Vi-Mix data augmentation, we initially train the RGB and Flow networks for 300 epochs with mixup data augmentation
independently. The mixup operation is applied in the input space. Then, we train these pre-trained networks with CMMC in 4 stages. In each stage, the network of one input modality is trained for 100 epochs while freezing the network of the other modality. In the next stage, we swap the cross-modal networks and continue training the network of the other modality. Finally, after the 4 stages, the resultant models have been trained for 500 epochs in total. For optimization, we use Adam with a 10−3 learning rate and 10−5 weight decay. All the experiments are trained on 4 and 2 V100 GPUs for K400 and the other datasets respectively, with a batch size of 32 videos per GPU.
For our experiments with skeleton sequences, we choose Shear with shearing amplitude 0.5 and Crop with a padding ratio of 0.6 as the augmentation strategy, as used in Linguo et al. (2021). Note that, for CMMC on skeleton data, we perform the cutmix operation only on skeleton vertices, followed by channel and temporal mixing. For training with CMMC, the GCN encoders are initially trained for 150 epochs on Joint and Motion cues. This is followed by a 2-stage training of 150 epochs each, where the encoder of one modality is trained while the other is frozen. For optimization, we use SGD with momentum (0.9) and weight decay (0.0001). The model is trained on 1 V100 GPU with a batch size of 128 skeleton sequences.
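As a point of reference, the Shear augmentation on a skeleton tensor of shape (C=3, T, V) can be sketched as below; the exact parameterisation in Linguo et al. (2021) may differ slightly, so this is an assumption-laden illustration rather than their code.

import numpy as np

def shear(skeleton: np.ndarray, amplitude: float = 0.5) -> np.ndarray:
    # skeleton: (3, T, V) array of 3D joint coordinates; a random shear matrix with unit
    # diagonal and off-diagonal terms in [-amplitude, amplitude] is applied to every joint.
    s = np.random.uniform(-amplitude, amplitude, size=6)
    shear_mat = np.array([[1.0, s[0], s[1]],
                          [s[2], 1.0, s[3]],
                          [s[4], s[5], 1.0]])
    c, t, v = skeleton.shape
    return (shear_mat @ skeleton.reshape(3, -1)).reshape(c, t, v)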
4.3 EVALUATION SETUP FOR DOWNSTREAM TASKS
For experiments with RGB and optical flow, we evaluate on two downstream tasks: (i) action classification and (ii) retrieval. For action classification, we evaluate with (1) a linear probe, where the entire encoder is frozen and a single linear layer followed by a softmax layer is trained with the cross-entropy loss, and (2) finetuning, where the entire encoder along with a linear and softmax layer is trained with the cross-entropy loss. Note that the encoders are initialized with the Vi-Mix learned weights. More details for training the downstream action classification framework are provided in the Appendix. For action retrieval, the features extracted from the encoder pre-trained with Vi-Mix are used for nearest-neighbour (NN) retrieval. We report Recall at k (R@k): if the top-k nearest neighbours comprise at least one video pertaining to the same class, a correct retrieval is counted.
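For completeness, a minimal sketch of the R@k metric described above is given below, assuming L2-normalised query and gallery feature matrices extracted by the frozen encoder; the variable names are our own.

import torch

def recall_at_k(query_feats, gallery_feats, query_labels, gallery_labels, k=5):
    # query_feats: (Nq, D), gallery_feats: (Ng, D), both L2-normalised tensors;
    # a query counts as correct if any of its top-k neighbours shares its class label.
    sims = query_feats @ gallery_feats.t()
    topk = sims.topk(k, dim=1).indices
    hits = (gallery_labels[topk] == query_labels.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()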
4.4 ABLATION STUDIES ON VI-MIX
In this section, we empirically show the correctness of our data augmentation strategy for videos. We also investigate the potential reasons behind the significant improvement of performance with Vi-Mix by conducting relevant experiments.
Which mixing strategy is the best for uni-modal video understanding? In Table 1, we investigate different video mixing strategies based on the mixup and cutmix operators for the downstream action classification and retrieval tasks. For the augmentations based on Cutmix (Yun et al., 2019), we randomly select a sub-cuboid and plug it into another video. We also consider VideoMix (Yun et al., 2020), which performs a cutmix operation across all the frames, clip-wise consistent. For the virtual labels, we perform label smoothing as defined in equation 3. In the image domain, cutmix outperforms the mixup strategy in supervised settings (Yun et al., 2019). However, we find that all strategies using cutmix in the temporal dimension (Temporal cutmix), the spatio-temporal dimension (ST cutmix) and the spatial dimension (VideoMix) perform worse than the simple Mixup strategy. This is because the cutmix operation destroys the temporal structure of the videos, which is crucial for understanding actions in videos. Similarly, VideoMix, where cutmix is performed spatially and not temporally, introduces new contextual information in videos at arbitrary spatial locations. This not only hampers the motion patterns present in the original video but also weakens the similarity between the positive samples in the contrastive loss. Thus, video mixing operators must ensure retention of the temporal characteristics of videos.
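To make the compared strategies explicit, the sketch below shows an input-space video mixup and a temporal cutmix on clips of shape (C, T, H, W); the Beta(α, α) prior follows equation 3, and the remaining details are our assumptions for illustration only.

import torch
from torch.distributions import Beta

def video_mixup(v1, v2, alpha=1.0):
    # Frame-wise interpolation of two clips.
    lam = Beta(torch.tensor(alpha), torch.tensor(alpha)).sample().item()
    return lam * v1 + (1.0 - lam) * v2, lam

def temporal_cutmix(v1, v2, alpha=1.0):
    # Replace a contiguous temporal segment of v1 by the same segment of v2.
    t = v1.shape[1]
    lam = Beta(torch.tensor(alpha), torch.tensor(alpha)).sample().item()
    span = int(round((1.0 - lam) * t))               # length of the replaced segment
    start = torch.randint(0, max(t - span, 1), (1,)).item()
    out = v1.clone()
    out[:, start:start + span] = v2[:, start:start + span]
    return out, 1.0 - span / t                       # effective mixing coefficient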
Why do we need a multi-modal mixing strategy for videos? In fig. 2, we illustrate the downstream action classification accuracy vs # epochs plot. This plot clearly shows the importance of applying data mixing augmentation. However, the model trained without the multi-modal mixing strategy (CMMC) over-fits after 300 epochs, whereas the models trained with multi-modal mixing are still learning discriminative representations. We find that the mixing strategy for video representation learning induces faster training, and with CMMC the models learn cross-modal knowledge without using complicated knowledge distillation techniques as in Crasto et al. (2019); Garcia et al. (2018). It is to be noted that training with cross-modal data augmentation is more beneficial when trained with an alternating strategy. In fig. 2, we show that the RGB model with alternating training outperforms the RGB model which is trained for 200 epochs straightaway with the outdated optical flow model. The alternating training strategy takes benefit of the most up-to-date cross-modal model for data mixing and hence learns more discriminative representations.
Diagnosis of CMMC. In Table 2, we provide the results for different configurations of data mixing in the feature space. The objective is to understand the strategies responsible for boosting the performance of the models on UCF101 and HMDB51 for the downstream tasks. All the models are initialized with weights obtained from pre-training with mixup for 300 epochs. First, we show that cross-modal mixup in the feature space, which exploits the cross-modal representation, outperforms the traditional manifold mixup (Verma et al., 2019) in the feature space, which does not make use of the cross-modal representation, on UCF101. However, we observe that the action classification accuracy on HMDB51 using optical flow is equivalent for both strategies, with or without using cross-modality. This is because HMDB51 mostly consists of static actions with less prominent motion patterns, which limits the optical flow model from learning motion-dominated representations. As a result, this also affects the action classification accuracy on HMDB51 when evaluated with both streams.
Next, we show the influence of mixing different dimensions in the hidden representation of a video. We perform experiments with cross-modal cutmix occurring in the spatial dimension (s), the spatial and temporal (t) dimensions, and finally all the dimensions including channels (c). Note that the manifold cutmix operation is performed within the same data manifold, i.e. the cutmix is performed across the same random layer between the RGB and Flow networks. In contrast to our previous observation of video mixing in the input space, here the temporal cutmix provides a minor boost to the performance of the downstream tasks. This is supported by the fact that the hidden representations of a video retain temporal information due to the preceding convolutional operations on the input sample. Also, the channel mixing further boosts the performance by retaining temporal information that would otherwise be lost in the resultant mixed feature map. Finally, we introduce more randomness in CMMC by
randomizing the selection of the cross-modal network layer (refer to mix2 in algorithm 1) where the mixing takes place. This enables CMMC to take advantage of the features from later layers of the cross-modal network.
4.5 COMPARISON TO THE STATE-OF-THE-ART
In this section, we compare Vi-Mix with previous self-supervised approaches for video/skeleton action classification and video action retrieval. In Table 3, we provide the action classification results on UCF101 and HMDB51 for linear probing and full finetuning of video encoders with models trained on UCF101 and K400. For linear probing, Vi-Mix, only with its strong data augmentation (mixup + CMMC), outperforms CoCLR, which shares the same evaluation setting as Vi-Mix, by 1.9% on UCF101. A similar observation is made for models trained on K400. We also note that our finetuned Vi-Mix encoders outperform approaches using higher spatial resolution (XDC, AVTS, MemDPC) and deeper networks (XDC, MemDPC, GDT). However, the lower action classification accuracy of Vi-Mix on HMDB51 compared to CoCLR indicates the need for positive mining of data samples in contrastive learning, as performed in CoCLR. The large performance gap between Vi-Mix and CVRL is owing to the deeper R3D (49 vs 23 layers) and the input spatial resolution (224 vs 128). Interestingly, our Vi-Mix model pre-trained on K400 performs on par with the models trained on larger datasets, substantiating the impact of simple cross-modal video augmentation.
In Table 4, we provide the video retrieval results on UCF101 and HMDB51 for the Vi-Mix models trained on UCF101. This is a classical test for verifying if the pre-trained model learns semantic information while learning self-supervised representation. We test if a query instance clip and its nearest neighbours belong to the same category. Our Vi-Mix model outperforms all the representative baselines by a significant margin on both the datasets.
In Table 5, we generalize our Vi-Mix strategy to skeleton action representation on NTU-60. Since mixup in the input space hampers the spatial configuration of the skeletons processed by ST-GCN, we only perform CMMC on the hidden skeleton representations in the encoder. We treat Joints and Motion as different input modalities of the given skeleton data. Note that the cutmix operation across the spatial dimension operates on the joint vertices (1-dimensional). The downstream action classification results of a skeleton model pre-trained with contrastive learning (SkeletonCLR) using CMMC outperform its baseline by 2.4% on the cross-subject and by 1.% on the cross-view protocol. The superior results of CrosSCLR are owing to its cross-modal positive mining, which benefits the vanilla SkeletonCLR models. We believe that such positive mining approaches in contrastive learning, such as CoCLR or CrosSCLR, can benefit from using our video mixing strategies.
5 RELATED WORK
Deep neural networks, especially those designed for processing videos, are data-hungry. Since annotating large-scale video data is expensive, many self-supervised video representation learning approaches have recently been proposed to make use of the abundant web videos. On one hand, some methods exploit the temporal structure of the videos, such as predicting whether frames appear in order, in reverse order, or shuffled, or enforcing color consistency across frames (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016; Wang et al., 2019; 2017; Vondrick et al., 2018; Recasens et al., 2021). On the other hand, some methods take advantage of the multiple modalities of videos, such as audio, text and optical flow, by designing pretext tasks for their temporal alignment (Chung & Zisserman, 2016; Korbar et al., 2018; Arandjelovic & Zisserman, 2017b; Owens & Efros, 2018; Piergiovanni et al., 2020; Miech et al., 2020; Arandjelovic & Zisserman, 2017a).
Meanwhile, data mixing strategies have gained popularity as image-domain data augmentations for supervised learning (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019), in addition to their usage for learning self-supervised image representations (Verma et al., 2019; Lee et al., 2021; Verma et al., 2021). A recent (unpublished) work in the spirit of data mixing in the video domain, VideoMix, creates a new training video by inserting a video cuboid into another video in the supervised setting (Yun et al., 2020). In contrast, we focus on mixing video samples for self-supervised representation learning. Different from the observations in VideoMix, we note that mixup in Vi-Mix is a better augmentation tool than strategies involving the removal of a spatio-temporal sub-space from the original videos. Closest to our work, Manifold Mixup (Verma et al., 2019) focuses on interpolating hidden representations of the samples within a mini-batch, whereas our proposed CMMC in Vi-Mix performs the cutmix operation in the data manifold across different modalities. In addition, we also introduce the notion of channel mixing in the feature space. We find that Vi-Mix is simple to implement while being a strong data augmentation tool for learning self-supervised video representations even with small data sizes.
6 CONCLUSION
We have analyzed augmentation strategies for learning self-supervised video representations. We have introduced Vi-Mix, which consists of performing video mixup followed by Cross-Modal Manifold Cutmix to take advantage of the additional modalities present in videos. Vi-Mix improves the quality of the learned representation and thus brings significant improvement in the performance of downstream tasks on the UCF101, HMDB51 and NTU-60 datasets. We believe that Vi-Mix can be a standard video augmentation tool for learning any multi-modal self-supervised video representation.
A APPENDIX
A.1 MORE IMPLEMENTATION DETAILS
Training/Testing specification for downstream finetuning on UCF101 and HMDB51. At training, we apply the same data augmentation as in the pre-training stage mentioned in section 4.2, except for Gaussian blurring. The model is trained with a similar optimization configuration as in the pre-training stage for 500 epochs. At inference, we perform spatially fully convolutional inference on videos by applying ten crops (center crop and 4 corners, with horizontal flipping) and temporally taking clips with overlapping moving windows. The final prediction is the average of the softmax scores of all the clips.
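A condensed sketch of this inference protocol is given below; it assumes the spatial crops and temporal windows have already been produced by the data pipeline, and the helper is ours rather than the authors'.

import torch

@torch.no_grad()
def video_level_prediction(model, clips):
    # clips: list of (C, T, H, W) tensors from ten-crop / sliding-window sampling;
    # the final prediction averages the per-clip softmax scores.
    probs = [torch.softmax(model(c.unsqueeze(0)), dim=1) for c in clips]
    return torch.cat(probs, dim=0).mean(dim=0)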
Training/Testing specification for downstream finetuning on NTU-60. For training the pretrained ST-GCN along with the linear classifier, we apply the same data augmentation as in the pre-training stage. We train for 100 epochs with learning rate 0.1 (multiplied by 0.1 at epoch 80).
History Queue in MoCo. We adopt a momentum-updated history queue as in MoCo (He et al., 2019) to cache a large number of visual features while learning contrastive representations. For our pre-training experiments, we use a softmax temperature τ = 0.07, and a momentum m = 0.999. The queue size of MoCo for the pre-training experiments on UCF101, K400 and NTU-60 is 2048, 16384 and 32768 respectively.
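The momentum update and feature queue referred to above follow the MoCo recipe; a minimal sketch under our assumptions (not the authors' code) is:

import torch

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    # Exponential moving average of the query encoder into the key encoder.
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def dequeue_and_enqueue(queue, ptr, keys):
    # queue: (K, D) feature memory, keys: (B, D) new key features; returns the new pointer.
    k = queue.shape[0]
    b = keys.shape[0]
    idx = (ptr + torch.arange(b)) % k
    queue[idx] = keys
    return int((ptr + b) % k)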
A.2 REGULARIZATION EFFECT OF VI-MIX
In fig. 3, we provide (1) a plot of the training loss of two models, one using Vi-Mix and the other using standard data augmentations, and (2) the (K+1)-way accuracy of the pretext task of the models learning the contrastive representation. We observe a disparity between the training losses (left of the figure) of the models with and without Vi-Mix. This is owing to the hardness of the pretext task, which can be directly correlated with the difficulty of the data transformation via Vi-Mix data augmentation. Meanwhile, we also note that the (K+1)-way accuracy of the Vi-Mix model
while training on contrastive loss is lower than that of the model without using Vi-Mix (at the right of the figure). However, the performance gain of the Vi-Mix model on downstream classification and retrieval tasks shows the regularizing capability of using Vi-Mix type data augmentation. | 1. What is the main contribution of the paper regarding video mixing strategies?
2. What are the strengths and weaknesses of the proposed approach compared to existing methods?
3. Do you have any concerns about the experimental setup or results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions for improving the study or exploring new research directions? | Summary Of The Paper
Review | Summary Of The Paper
The paper explores video mixing strategies for self-supervised learning. Based on the existing approaches in the image domain (Mixup and CutMix), the evaluation is performed on the video domain. Additionally, the paper suggests mixing different modalities of video that lead to the proposed CMMC learning strategy. The overall approach is called Vi-Mix and studied on downstream tasks of video action recognition and video retrieval using three datasets: UCF101, HMDB51, and NTU-60.
Review
Strengths:
The paper is well-written and easy to follow. The related work section is quite brief but sufficient (I would suggest more discussion about video self-supervised methods).
The suggested study of mixing strategies for video self-supervised learning is novel and it extends observations from supervised learning (VideoMix).
The idea of mixing two modalities is very interesting and it opens more directions in further research.
Weaknesses:
The overall approach is highly based on already existing methods (Mixup, CutMix) and it can be considered as just a smart way of combining them.
Most conclusions in the ablation study are made based on the results using the UCF101 dataset as a training set. However, UCF101 is a quite small and biased dataset that can lead to incorrect conclusions. It would be much stronger to use the Kinetics dataset to pre-train the model and then consider UCF101 and HMDB51 only for downstream tasks.
There is no specification of how UCF101 and HMDB51 datasets are used. Which split of UCF101 is used for training? Which split of HMDB51 is used for testing? Are the results in Table 3 and Table 4 are for all splits or just for split 1?
There are some concerns about the results in Table 1. It seems that results are presented for the models that are not fully converged. In Figure 1 the blue line (without mix) is still climbing even after 500 epochs. If we also compare results for MoCo from Table 1 (38.2, 15.3) and Table 2 (46.8, 23.1) we can see that they are significantly different. It raises the question: Does mixing really improve accuracy or does it just speed up convergence?
The statement "This is because cutmix operation destroys the temporal structure of the videos which is crucial for understanding actions in videos." does not apply to the results on HMDB51 in Table 1. Almost every mixing strategy improves over MoCo. It would be beneficial to see more discussion of this effect.
Table 3 and Table 4 are confusing. Many methods explore additional data modalities during training such as optical flow, audio, and text. However, most methods are still using only RGB stream to obtain results for downstream tasks (except for MemDPC and CoCLR that present results for both RGB and Two-stream settings in their papers). Based on the results in Table 2 (last row) and Table 3 (second row) Vi-Mix is using the two-stream setting which requires the computation of optical flow during inference time. It makes the comparison to most of the other methods unfair. It is crucial to also present results for only the RGB stream for the state-of-the-art comparison. These results could be much lower considering Table 2 (only 55.8 vs 74.0 for UCF101).
Also, the statements and insights of the paper would be stronger if other contrastive learning methods (e.g. SimCLR) and backbones (e.g. R3D, R(2+1)D) are considered. Also, the comparison with other methods would be easier. |
ICLR | Title
Vi-MIX FOR SELF-SUPERVISED VIDEO REPRESENTATION
Abstract
Contrastive representation learning of videos highly rely on exhaustive data augmentation strategies. Therefore, towards designing video augmentation for selfsupervised learning, we first analyze the best strategy to mix videos to create a new augmented video sample. Then, the question remains, can we make use of the other modalities in videos for data mixing? To this end, we propose Cross-Modal Manifold Cutmix (CMMC) that inserts a video tesseract into another video tesseract in the feature space across two different modalities. We find that our video mixing strategy: Vi-Mix, i.e. preliminary mixing of videos followed by CMMC across different modalities in a video, improves the quality of learned video representations. We exhaustively conduct experiments for two downstream tasks: action recognition and video retrieval on three popular video datasets UCF101, HMDB51, and NTU-60. We show that the performance of Vi-Mix on both the downstream tasks is on par with the other self-supervised approaches while requiring less training data.
N/A
Contrastive representation learning of videos highly rely on exhaustive data augmentation strategies. Therefore, towards designing video augmentation for selfsupervised learning, we first analyze the best strategy to mix videos to create a new augmented video sample. Then, the question remains, can we make use of the other modalities in videos for data mixing? To this end, we propose Cross-Modal Manifold Cutmix (CMMC) that inserts a video tesseract into another video tesseract in the feature space across two different modalities. We find that our video mixing strategy: Vi-Mix, i.e. preliminary mixing of videos followed by CMMC across different modalities in a video, improves the quality of learned video representations. We exhaustively conduct experiments for two downstream tasks: action recognition and video retrieval on three popular video datasets UCF101, HMDB51, and NTU-60. We show that the performance of Vi-Mix on both the downstream tasks is on par with the other self-supervised approaches while requiring less training data.
1 INTRODUCTION
The recent advancements in self-supervised representation is credited to the success of using discriminative contrastive loss such as InfoNCE (Gutmann & Hyvärinen, 2010). Given a data sample, contrastive representation learning focus on discriminating its transformed version from a large pool of other instances or their transformations. Thus, the concept of contrastive learning while applicable to any domains, its effectiveness rely on the domain-specific inductive bias as the transformations are obtained from the same data instance. For images, these transformations are usually standard data augmentation techniques (Chen et al., 2020) while in videos, data artifacts that arise from temporal segments within the same video clip (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016).
Recently, data mixing strategies (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019) have emerged as one of the promising data augmentation for supervised learning methods. These mixing strategies when incorporated with contrastive learning, the quality of the learned representation improves drastically as in Lee et al. (2021); Verma et al. (2021; 2019). Such augmentations introduce semantically meaningful variance for better generalization which is crucial for learning self-supervised representations. While these mixing strategies have been impactful for learning image representations, mixing strategies have been very limitedly explored in the video domain.
Therefore, in this paper, we study the various data mixing strategies for videos,and propose a new approach to overcome their limitation by mixing across modalities. We first investigate and compare the mixing strategies adopted from the the image domain, and we find that mixing videos by performing simple interpolation of two video cuboids (Mixup) is more effective than inserting a video cuboid within another (Cutmix). This is in contrast to the observations made in the image domain. Furthermore, unlike learning image representations (Lee et al., 2021), these data mixing strategies are prone to over-fitting when trained for longer, making them limited for videos.
Motivated by the success of previous self-supervised techniques exploiting multiple modalities to learn discriminative video representation as in Arandjelovic & Zisserman (2017a); Chung & Zisserman (2016); Korbar et al. (2018); Arandjelovic & Zisserman (2017b); Owens & Efros (2018); Piergiovanni et al. (2020); Miech et al. (2020), in this paper, we pose the following question: can we take advantage of other modalities for mixing videos while learning self-supervised representation?
Different modalities of a video like RGB, optical flow, etc. have different distributions and thus, mixing them directly in the input space makes the task of discriminating similar instances from the other instances easier limiting the quality of the learned representation. To this end, we propose our Cross-Modal Manifold Cutmix (CMMC), that performs data mixing operation ‘across different modalities’ of a video in their hidden intermediate ‘representations’. Given the video encoders from different modalities pre-trained with contrastive loss in addition to mixup augmentation, CMMC exploits the underlying structure of the data manifold. This is done by performing cutmix operation in the feature space across space, time and channels. To the best of our knowledge, this is the first attempt to perform mixing across channels. The channel mixing of the cross-modal feature map enforces the encoder to learn better semantic concepts in the videos. Hence, we train video encoders for different modalities in several stages including the use of mixup strategy in videos and our proposed CMMC. We call this video augmentation strategy - Vi-Mix which stands for Video instance-Mix for contrastive representation learning.
Empirically, we confirm that Vi-Mix being easy to implement, significantly improves contrastive representation learning for videos. We show that Vi-Mix can effectively learn self-supervised representation with small availability of data for pretext task and can also take advantage of other modalities of the videos through manifold mixing strategy. We thoroughly evaluate the quality of the learned representation on two downstream tasks action recognition and retrieval, on UCF101 and HMDB51. We demonstrate the improvement in transferability of the representation learned with Vi-Mix by conducting training on a large scale dataset Kinetics-400 and then finetuning on smaller datasets. Furthermore, we corroborate the robustness of our video data augmentation strategy by observing similar improvements on video skeleton sequences for the task of action recognition.
2 BACKGROUND
In this section, we first review a general contrastive learning mechanism used for learning selfsupervised video representation. Then, we review a data mixing formulation for self-supervision in the image domain. Let X ∈ RT×3×H×W be a sequence of video. The objective is to learn a mapping f : X → z where z ∈ RD, that can be effectively used to discriminate video clips for various downstream tasks, e.g. action recognition , retrieval, etc.
Contrastive Learning. Assume a set of augmentation transformations A is applied to X . So, for a particular video there exists a positive (say X̃ ) whereas the other transformed videos in a minibatch are considered as negatives. The encoder f(.) and its exponential average model f̃(.) maps the positives and negatives respectively to embedding vectors. Therefore, the contrastive loss for a sample Xi is formulated as
L(Xi) = −log exp(zi · z̃i/τ) exp(zi · z̃i/τ) + ∑ j∈N exp(zi · zj/τ) (1)
where τ is a scaling temperature parameter and N is the set of negatives. Note that the embedding vectors zi and z̃i are L2-normalized before the loss computation. Thus, the loss L optimizes the video instances such that the representation of the video instances with the same view are pulled towards each other while pushing away from the other instances.
Data Mix for Contrastive Learning. We revisit the formulation proposed in i-mix (Lee et al., 2021) for mixing data within a batch for contrastive representation learning. Let yi ∈ {0, 1}BS be the virtual labels of the input Xi and X̃i in a batch, where yi,i = 1 and yi,j 6=i = 0. Then, the (N + 1)− way discrimination loss for a sample in a batch is:
L(Xi, yi) = −yi,b · log exp(zi · z̃b/τ) exp(zi · z̃b/τ) + ∑ j∈N exp(zi · zj/τ) (2)
where b ranges from 0 to BS. Thus, the data instances are mixed within a batch for which the loss is defined as:
LMix((Xi, yi), (Xr, yr), λ) = L(Mix(Xi,Xr;λ), λyi + (1− λ)yr) (3)
where λ ∼ Beta(α, α) is a mixing coefficient, r ∼ rand(BS), and Mix() is a mixing operator. In the following, we will discuss the appropriate mixing operators in the video domain.
3 VI-MIX
In this paper, we use the same i-mix formulation (from the above section) for data mixing while learning discriminative self-supervised representation. First, we investigate the best strategies to define the mixing operator for video domain. Furthermore, we introduce a manifold mixing strategy to make use of the other modalities freely available in videos for data mixing. We integrate both these data augmentation strategies, together called Video-instance Mix (Vi-Mix) for contrastive representation learning of Videos.
3.1 MIXING OPERATOR FOR VIDEOS
Unlike mixing operations in images as in Zhang et al. (2018); Verma et al. (2019); Shen et al. (2020); Yun et al. (2019), videos have temporal dimension. We argue that handling temporal dimension in videos is not equivalent to handling spatial dimension in images. For the Mixing operation defined in equation 3, it is straightforward to extend the existing image mixing strategies to videos. Mixup (Verma et al., 2019) in videos perform weighted averaging of two spatio-temporal stack of frames. In contrast to cutmix operator (Yun et al., 2019), mixup operator retains the temporal information in videos and thus facilitates the contrastive representation learning. We empirically corroborate this observation in the experimental analysis. In addition to this, videos possess different modalities like optical flow that can be computed without any supervision. The question remains that can we make use of other modalities in videos for mixing instances while learning contrastive representation? To this end, we introduce Cross-Modal Manifold Cutmix (CMMC) strategy for mixing video instances across different modalities which is discussed in the next section.
3.2 CROSS-MODAL MANIFOLD CUTMIX
Different modalities in videos is an additional information that are often exploited for self-supervised learning as in (Han et al., 2020a; Linguo et al., 2021). In contrast to these approaches, we simply propose to mix these different modalities as another data augmentation strategy for self-supervised representation. However, the dissimilarity in distribution between the different modalities (say, RGB and optical flow) in videos makes it harder to mix them at input space. Consequently, we propose Cross-Modal Manifold Cutmix to mix such cross-modal representations in the hidden representation space.
As an extension of the previous notation, we now consider two different modalities X1i and X2i for a given video clip Xi. The objective of the self-supervised task is to learn discriminative video representation, i.e. to learn functions f1(·) and f2(·). We decompose the encoder function by f1(X1i) = f1k(g1k(X1i)), where g1k is a part of the video encoder for modality 1 with k layers that maps the input data X1i to a hidden representation. Similarly, f1k maps the hidden representation g1k(X1i) to the embedding vector z1i. Note that we already have trained video encoders f1i(·) and f2i(·) by exploiting the above mentioned Mixup strategy among the video instances in a mini-batch while optimizing the contrastive loss. Now, CMMC is trained in a 4 stage fashion. In the first stage, we train the encoder f1i(·) of modality 1 in 5 steps as illustrated in figure 1. First, we select random
layers k and l from a set of eligible layers in f1i(·) and f2i(·) respectively such that k ≤ l. This set excludes the input space. Second, we fed a pair of input X1i and X2r to their respective video encoders f1 and f2 until they reach layer k and layer l respectively. We obtain g1k(X1i) and g2l(X2r) - a hidden representation (spatio-temporal tesseract) of both videos in modality 1 and 2. Third, we perform a data mixing among the hidden representations across two modalities as:
gmix1k , λ = CutMix(g1k, g2r;α) (4)
ymix1k = λy1i + (1− λ)y2r (5) where (y1i, y2r) are one-hot labels, hyper-parameter α = 1, and the mixing operator is cutmix as in Yun et al. (2019) which returns the mixing coefficient λ along with the mixed data. For brevity, we omit the input instances in the equation. Fourth, we continue the forward pass in f1(·) only from layer k to the output embedding, now we denote by zmix1i . Fifth, this embedding is used to compute the (N + 1)-way discrimination loss which is reformulated as:
L(X1i, y1i) = −ymix1i,b · log exp(zmix1i · z̃1b/τ) exp(zmix1i · z̃1b/τ) + ∑ j∈N exp(zmix1i · z1j/τ) (6)
The computed gradients are backpropagated through the entire video encoder f1(·) of modality 1 only. It is to be noted that the video encoder f1(·) of modality 2 is not trained in this stage. In the second stage, we train the video encoder f2(·) for modality 2 while freezing the updated learned weights of f1(·). We continue this cycle twice for each modality, and hence 4 stages to learn the self-supervised video representation in f1(·) and f2(·) . Algorithm 1 provides the pseudocode of one stage of CMMC for training encoder f1(·). Thus, to sum up Vi-Mix consists of initially training
Algorithm 1 Pytorch-like style Pseudocode of One stage CMMC for modality 1 alpha, mix1 = 1., rand(1, L) # L is the layers in the encoder mix2 = rand(mix1, L) x1 q, x1 k = aug(rgb) # Two modalities of the RGB (modality 1) data x2 = aug(flow) # Flow data (modality 2) g1 = f 1q.partial forward(x1 q, 0, mix1). g2 = f 2q.partial forward(x2, 0, mix2) g mix, labels new, lam = CutMix(g1, g2, alpha) z1 = normalize(f 1q.partial forward(x1 q, mix1, L)) z2 = normalize(f 1k.forward(x1 k)) z2, g2 = z2.detach(), g2.detach() # no gradient flow logits = matmul(z1, z2.T) / t loss = lam * CrossEntropyLoss(logits, arange(len(x1 q)) +
(1 - lam) * CrossEntropyLoss(logits, labels new)
video encoders of modality 1 and modality 2 independently with infoNCE loss as in Chen et al. (2020) and applying mixup augmentation. Then, we perform CMMC among the hidden representations of data from modality 1 and 2 in 4 stages. This is performed by alternation training strategy as in Han et al. (2020a) to make use of the latest learned representations in the cross-modal network. The final learned model is obtained after two cycles of training encoder of each modality.
CutMix in feature space. Here, we explain how the cutmix operator is applied on the video tesseracts in the feature space. Assume that the hidden representation of the input video sequence X1i in modality 1, g1k ∈ Rc1×t1×h1×w1 , where c1 represents channel, t1 time, and s1 = h1 × w1 is the spatial resolution. We generate a new representation gmix1 by combining the hidden representations g1k and g2l. These hidden representations g1k and g2l may differ such that (c1, t1, s1) ≥ (c2, t2, s2). Therefore, we define a cutmix operation that combines the video tesseracts in space, time and across channels. We define the combining operation as
gmix1 =M g1k + (1−M) g2l (7) where M ∈ {0, 1}c1×t1×h1×w1 is a binary tensor mask which is decided by sampling the bounding box coordinates bbox = (bc1, bc2, bt1, bt2, bh1, bh2, bw1, bw2) from a uniform distribution. In order to preserve the temporal information in a video, we fix (bt1, bt2) = (0, t2). Similarly, we preserve the channel information processed by the video encoder f2(·) by fixing (bc1, bc2) = (0, c2).
Thus, the bounding box selection follows a random sampling of a center coordinate (bwc, bhc) from (U(0, w2), U(0, h2)). The corner points of the bounding box are determined by
bw1, bw2 = bwc − w2 √ λ
2 , bwc +
w2 √ λ
2 bh1, bh2 = bhc −
h2 √ λ
2 , bhc +
h2 √ λ
2 (8)
where λ ∼ U(0, 1). Even by fixing (bt1, bt2) and (bc1, bc2), the resultant video tesseract in one modality may not match the dimension of the video tesseracts in other modality across channel and time, if k < l. So, we select a 4D bounding box with coordinates (Mc1,Mc2,Mt1,Mt2,Mh1,Mh2,Mw1,Mw2) within the defined binary mask M . We randomly sample a center coordinate (Mcc,Mtc,Mhc,Mwc) from (U(0, c1), U(0, t1)), (U(0, h1), and U(0, w1)) respectively. The end points of the binary mask M are determined by
Mc1,Mc2 =Mcc − c22 ,Mcc + c2 2 Mt1,Mt2 =Mtc − t2 2 ,Mtc + t2 2 Mh1,Mh2 =Mhc − h22 ,Mhc + h2 2 Mw1,Mw2 =Mwc − w2 2 ,Mwc + w2 2
(9)
For the region within this bounding box, the values in the binary mask is filled with 0, otherwise 1. A new mixing coefficient is computed by 1− λnew = ∑ c,t,w,hMc,t,w,h denoting the complementary of the proportion of volume occupied by M . This new mixing coefficient λnew is returned by the cutmix function to compute the mixed labels in equation 5.
Thus, we perform a mix operation in videos across all the dimensions including spatial, temporal and channels. However, we preserve the temporal properties of the video instances by retaining a proportion of channel information. This makes cutmix operation effective in the feature space.
4 EXPERIMENTS
In this section, we describe the datasets used in our experimental analysis, implementation details, and evaluation setup. We present ablation studies to illustrate the effectiveness of Vi-Mix video data augmentation and also, provide an exhaustive state-of-the-art comparison with our Vi-Mix models.
4.1 DATASETS
We use two video action recognition datasets: UCF101 (Soomro et al., 2012) and Kinetics-400 (Kay et al., 2017) for self-supervised training of the video encoders. UCF101 contains 13k videos with 101 human actions and Kinetics-400 (K400) contains 240k video clips with 400 human actions. We also use a skeleton action recognition dataset: NTU-RGB+D (Shahroudy et al., 2016) for selfsupervised training of a skeleton encoder. NTU-RGB+D (NTU-60) contains 58k videos with 60 human action, all performed indoors. Note that we use the videos or skeleton sequences from the training set only for the self-supervised pre-training. Downstream tasks are evaluated on split1 of UCF101 and HMDB51 (Kuehne et al., 2011), which contains 7k videos with 51 human actions. For evaluation on skeletons, we evaluate on the validation set of NTU-60 on Cross-subject (xsub) and Cross-View (xview) protocols.
4.2 IMPLEMENTATION DETAILS
Vi-Mix is a simple data augmentation strategy that requires cutmix operation in the feature space which is adopted from (Yun et al., 2019) followed by our temporal and channel mixing. The input modalities in our experiments consists of RGB, optical flow and skeletons (3D Poses). The optical flow is computed with the un-supervised TV-L1 algorithm (Sánchez Pérez et al., 2013) and the same pre-processing procedure is used as in Carreira & Zisserman (2017). For the skeleton experiments, the skeleton data X ∈ RC×T×V is acquired using KinectV2 sensors, where coordinate feature C = 3, # joints V = 25, and # frames T = 50. Following the pre-processing steps in Linguo et al. (2021), we compute the joints and motion cues. For all the RGB and optical flow models, we choose S3D (Xie et al., 2018) architecture as the backbone whereas for the skeleton model, we choose ST-GCN (Yan et al., 2018) with channels in each layer reduced by 1/4 times as the backbone. For self-supervised representation learning, we adopt a momentum-updated history queue to cache a large number of video features as in MoCo (He et al., 2019). We attach a non-linear projection head, and remove it for downstream task evaluations as done in SimCLR (Chen et al., 2020).
For our experiments with RGB and optical flow, we use 32 128 × 128 frames of RGB (or flow) input,at 30 fps. For additional data augmentation, we apply clip-wise consistent random crops, horizontal flips, Gaussian blur and color jittering. We also apply random temporal cropping from the same video as used in Han et al. (2020a). For training a MoCo model with Vi-Mix data augmentation, we initially train the RGB and Flow networks for 300 epochs with mixup data augmentation
independently. The mixup operation is applied in the input space. Then, we train these pre-train networks with CMMC in 4 stages. In each stage, a network with one input modality is trained for 100 epochs by freezing the network with other modality. In the next stage, we reverse the crossmodal networks and continue training the network with other modality. Finally, after the 4 stages, the resultant models are hence trained for 500 epochs in total. For optimization, we use Adam with 10−3 learning rate and 10−5 weight decay. All the experiments are trained on 4 and 2 V100 GPUs for K400 and others respectively, with a batch size of 32 videos per GPU.
For our experiments with skeleton sequence, we choose Shear with shearing amplitude 0.5 and Crop with a padding ratio of 0.6 as the augmentation strategy as used in Linguo et al. (2021). Note that, for CMMC on skeleton data, we perform cutmix operation only on skeleton vertices followed by channel and temporal mixing. For training with CMMC, the GCN encoders are initially trained for 150 epochs on Joint and Motion cues. This is followed by 2 stage training each with 150 epochs, where encoder with one modality is trained and the other is frozen. For optimization, we use SGD with momentum (0.9) and weight decay (0.0001). The model is trained on 1 V100 with a batch size of 128 skeleton sequences.
4.3 EVALUATION SETUP FOR DOWNSTREAM TASKS
For experiments with RGB and optical flow, we evaluate on two downstream tasks: (i) action classification and (ii) retrieval. For action classification, we evaluate on (1) linear probe where the entire encoder is frozen and a single linear layer followed by a softmax layer is trained with cross-entropy loss, and (2) finetune where the entire encoder along with a linear and softmax layer is trained with cross-entropy loss. Note that the encoders are initialized with the Vi-Mix learned weights. More details for training the downstream action classification framework is provided in the Appendix. For action retrieval, the extracted features from the encoder pre-trained with Vi-Mix are used for nearest-neighbor (NN) retrieval. We report Recall at k (R@k) which implies, if the top k nearest neighbours comprise one video pertaining to the same class, a correct retrieval is counted.
4.4 ABLATION STUDIES ON VI-MIX
In this section, we empirically show the correctness of our data augmentation strategy for videos. We also investigate the potential reasons behind the significant improvement of performance with Vi-Mix by conducting relevant experiments.
Which mixing strategy is the best for uni-modal video understanding? In Table 1, we investigate different video mixing strategies based on mixup and cutmix operator for downstream action classification and retrieval tasks. For the augmentations based on Cutmix (Yun et al., 2019), we randomly select a sub-cuboid and plug it into another video. We also consider Videomix (Yun et al., 2020) that performs a cutmix operation across all the frames clipwise consistent. For the virtual labels, we perform label smoothing as defined in equation 3. In image domain, cutmix outperforms the mixup strategy in supervised settings (Yun et al., 2019). However, we find that all strategies using cutmix in temporal dimension (Temporal cutmix), spatio-temporal dimension (ST cutmix) and spatial cutmix (VideoMix) performs worse than simple Mixup strategy. This is because cutmix operation destroys the temporal structure of the videos which is crucial for understanding actions in videos. Similarly, VideoMix where cutmix is performed spatially and not temporally, introduces new contextual information in videos in arbitrary spatial locations. This not only hampers the motion patterns present in the original video but also weakens the similarity between the positive samples in the contrastive loss. Thus, video mixing operators must ensure retention of temporal characteristics in videos.
Why do we need multi-modal mixing strategy for videos? In fig. 2, we illustrate the downstream action classification accuracy vs # epochs plot. This plot clearly shows the importance of applying data mixing augmentation. However, the model trained without multi-modal mixing strategy (CMMC) over-fits after 300 epochs, whereas the models training with multi-modal mixing are still learning discriminative representation. We find that the mixing strategy on video representation learning induces faster training and with CMMC, the models learn cross-modal knowledge without using complicated knowledge distillation techniques as in Crasto et al. (2019); Garcia et al. (2018). It is to be noted that training with cross-modal data augmentation is more beneficial when trained in alternation strategy. In fig. 2, we show that the RGB model with alternate training outperforms the RGB model which is trained for 200 epochs straightaway with the outdated optical flow model. The alternate training strategy takes benefit of the most updated cross-modal model for data mixing and hence learning more discriminative representation.
Diagnosis of CMMC. In Table 2, we provide the results for different configurations of data mixing in the feature space. The objective is to understand the strategies responsible for boosting the performance of the models on UCF101 and HMDB51 for downstream tasks. All the models are initialized with weights obtained from pre-training with mixup for 300 epochs. First, we show that cross-modal mixup (indicated by + mixup) in the feature space exploiting the cross-modal representation outperforms the traditional manifold mixup (indicated by + mixup) in the feature space (Verma et al., 2019) on UFC-101 which does not make use of the cross-modal representation. However, we observe that the action classification accuracy on HMDB51 using optical flow is equivalent for both strategies with or without using cross-modality. This is because HMDB51 mostly consists of static actions with improminent motion patterns which limits the optical flow model to learn motion dominated representation. As a result, this also affects the action classification accuracy on HMDB51 when evaluated with both the streams.
Next, we show the influence of mixing different dimensions in the hidden representation of a video. We perform experiments with cross-modal cutmix occurring in the spatial dimension (s), spatial and temporal (t) dimensions, and finally all the dimensions including channels (c). Note that the manifold cutmix operation is performed within the same data manifold, i.e. the cutmix is performed across the same random layer between RGB and Flow networks. In contradiction to our previous observation of video mixing in the input space, here the temporal cutmix provides a minor boost to the performance of the downstream tasks. This is supported by the fact that hidden representations of a video retains temporal information due to the preceding convolutional operations on the input sample. Also, the channel mixing further boosts the performance by retaining lost temporal information in the resultant mixed feature map. Finally, we introduce more randomness in CMMC by
randomizing the selection of the cross-modal network layer (refer to mix2 in algorithm 1) where the mixing takes place. This enables CMMC to take advantage of the features from later layers of the cross-modal network.
4.5 COMPARISON TO THE STATE-OF-THE-ART
In this section, we compare Vi-Mix with previous self-supervised approaches for video/skeleton action classification and video action retrieval. In Table 3, we provide the action classification results on UCF101 and HMDB51 for linear-probing and full finetuning of video encoders with models trained on UCF101 and K400. For linear probing, Vi-Mix only with its strong data augmentation (mixup + CMMC) outperforms CoCLR which shares the same evaluation setting with Vi-Mix, by 1.9% on UCF101. Similar, observation is made for models trained with K400. We also note that our finetuned Vi-Mix encoders outperform approaches using higher spatial resolution (XDC, AVTS, MemDPC), deeper layers (XDC, MemDPC, GDT). However, the lower action classification accuracy of Vi-Mix on HMDB51 compared to CoCLR indicates the requirement of positive mining of data samples in contrastive learning as performed in CoCLR. The large performance gap of Vi-Mix with CVRL is owing to the deeper layers of R3D (49 vs 23) and the input spatial resolution (224 vs 128). Interestingly, our Vi-Mix model pre-trained on K400 performs on par with the models trained on larger datasets substantiating the impact of simple cross-modal video augmentation.
In Table 4, we provide the video retrieval results on UCF101 and HMDB51 for the Vi-Mix models trained on UCF101. This is a classical test for verifying if the pre-trained model learns semantic information while learning self-supervised representation. We test if a query instance clip and its nearest neighbours belong to the same category. Our Vi-Mix model outperforms all the representative baselines by a significant margin on both the datasets.
In Table 5, we generalize our Vi-Mix strategy for skeleton action representation on NTU-60. Since, mixup in the input space hampers the spatial configuration of the skeletons processed by ST-GCN, we only perform CMMC with hidden skeleton representations in the encoder. We treat Joints and Motion as different input modalities of given skeleton data. Note that the cutmix operation across spatial dimension represents the joint vertices (1-dimensional). The downstream action classification results of a skeleton model pre-trained with contrastive learning (SkeletonCLR) using CMMC outperforms its baseline by 2.4% on cross-subject and by 1.% on cross-view protocol. The superior results of CrosSCLR is owing to its cross-modal positive mining which benefits the vanilla SkeletonCLR models. We believe that such positive mining approaches in contrastive learning such as CoCLR or CrosSCLR can benefit by using our video mixing strategies.
5 RELATED WORK
Deep neural networks, especially the networks fabricated for processing videos are data-hungry. While annotating large scale video data is expensive, recently many self-supervised video representation learning approaches have been proposed to make use of the abundant web videos. On one hand, some methods have exploited the temporal structure of the videos, such as predicting if frames appear in order, reverse order, shuffled, color-consistency across frames, etc (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016; Wang et al., 2019; 2017; Vondrick et al., 2018; Recasens et al., 2021). On the other hand, some methods have been taking advantage of the multiple modalities of videos like audio, text, optical flow, etc by designing pretext tasks for their temporal alignment (Chung & Zisserman, 2016; Korbar et al., 2018; Arandjelovic & Zisserman, 2017b; Owens & Efros, 2018; Piergiovanni et al., 2020; Miech et al., 2020; Arandjelovic & Zisserman, 2017a).
Meanwhile, data mixing strategies have gained popularity in image-domain data augmentations for supervised learning (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019) in addition to their usage also for learning self-supervised image representation (Verma et al., 2019; Lee et al., 2021; Verma et al., 2021). A recent work (unpublished), in the spirit of data mixing in the video domain, VideoMix creates a new training video by inserting a video cuboid into another video in the supervised setting (Yun et al., 2020). In contrast, we focus on mixing video samples for self-supervised representation. Different from the observations in VideoMix, we note that mixup in Vi-Mix is a better augmentation tool rather than strategies involving removal of spatio-temporal sub-space from the original videos. The most closest to our work, Manifold mixup (Verma et al., 2019) focuses on interpolating hidden representation of the samples within a mini-batch, whereas, our proposed CMMC in Vi-Mix performs cutmix operation in the data manifold across different modalities. In addition, we also introduce the notion of channel mixing in the feature space. We find that Vi-Mix is simple to implement while is a strong data augmentation tool for learning self-supervised video representation even with small data size.
6 CONCLUSION
We have analyzed augmentation strategies for learning self-supervised video representations. We have introduced Vi-Mix, which performs video mixup followed by Cross-Modal Manifold Cutmix to take advantage of the additional modalities present in videos. Vi-Mix improves the quality of the learned representation and thus brings significant improvements in the performance of downstream tasks on the UCF101, HMDB51 and NTU-60 datasets. We believe that Vi-Mix can become a standard video augmentation tool when learning any multi-modal self-supervised video representation.
A APPENDIX
A.1 MORE IMPLEMENTATION DETAILS
Training/Testing specification for downstream finetuning on UCF101 and HMDB51. At training time, we apply the same data augmentation as in the pre-training stage mentioned in section 4.2, except for Gaussian blurring. The model is trained with a similar optimization configuration as in the pre-training stage for 500 epochs. At inference, we perform spatially fully convolutional inference on videos by applying ten crops (center crop and 4 corners, each with horizontal flipping) and temporally take clips with overlapping moving windows. The final prediction is the average of the softmax scores of all the clips.
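A minimal sketch of this inference procedure follows; the ten_crop helper and the tensor layout are placeholders for the actual cropping utility and data format, not the exact test-time code.

import torch

def video_inference(model, frames, ten_crop, clip_len=32, stride=16):
    # frames: full test video as a tensor of shape (T, C, H, W); returns averaged class probabilities
    probs = []
    last_start = max(frames.shape[0] - clip_len, 0)
    for start in range(0, last_start + 1, stride):        # overlapping temporal windows
        clip = frames[start:start + clip_len]
        crops = ten_crop(clip)                            # (10, C, T', H', W'): center + 4 corners, each flipped
        with torch.no_grad():
            probs.append(torch.softmax(model(crops), dim=-1))
    return torch.cat(probs, dim=0).mean(dim=0)            # average softmax scores over all crops and windows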
Training/Testing specification for downstream finetuning on NTU-60. For training the pretrained ST-GCN along with the linear classifier, we apply the same data augmentation as in the pre-training stage. We train for 100 epochs with learning rate 0.1 (multiplied by 0.1 at epoch 80).
History Queue in MoCo. We adopt a momentum-updated history queue as in MoCo (He et al., 2019) to cache a large number of visual features while learning the contrastive representation. For our pre-training experiments, we use a softmax temperature τ = 0.07 and a momentum m = 0.999. The queue sizes of MoCo for the pre-training experiments on UCF101, K400 and NTU-60 are 2048, 16384 and 32768 respectively.
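For reference, the two MoCo components we rely on, the momentum update of the key encoder and the circular feature queue, can be sketched as follows; the names and shapes loosely follow the public MoCo implementation and are given only as an illustration.

import torch

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # The key encoder is an exponential moving average of the query encoder
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def dequeue_and_enqueue(queue, queue_ptr, keys):
    # queue: (D, K) feature cache, queue_ptr: 1-element long tensor, keys: (B, D) newest key features
    batch_size = keys.shape[0]
    ptr = int(queue_ptr)
    queue[:, ptr:ptr + batch_size] = keys.T               # assumes the queue size K is a multiple of B
    queue_ptr[0] = (ptr + batch_size) % queue.shape[1]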
A.2 REGULARIZATION EFFECT OF VI-MIX
In fig. 3, we provide (1) a plot of the training loss of two models, one using Vi-Mix and the other using standard data augmentations, and (2) the (K+1)-way accuracy of the pretext task of the models learning the contrastive representation. We observe a disparity between the training losses (left of the figure) of the models with and without Vi-Mix. This is owing to the hardness of the pretext task, which can be directly correlated with the difficulty of the data transformation introduced by the Vi-Mix data augmentation. Meanwhile, we also note that the (K+1)-way accuracy of the Vi-Mix model
while training on the contrastive loss is lower than that of the model trained without Vi-Mix (right of the figure). However, the performance gain of the Vi-Mix model on the downstream classification and retrieval tasks shows the regularizing capability of the Vi-Mix type data augmentation. | 1. What is the main contribution of the paper regarding video representation learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its training pipeline and performance compared to state-of-the-art methods?
3. Do you have any questions or concerns regarding the effectiveness and significance of the claimed contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors tackle the problem of self-supervised video representation learning. They extend an existing image-domain data augmentation, cutmix, to the multi-modal video domain. They validate the effectiveness of the proposed method on publicly available benchmarks: UCF-101, HMDB-51, and NTU-60.
The claimed contributions of the paper are as follows.
In unimodal learning, they find that mixup is more effective than cutmix.
Cross-modal manifold cutmix is proposed for self-supervised video representation learning.
They claim this is the first attempt to perform mixing across channels.
Review
This paper has the following strengths.
The problem addressed, self-supervised video representation learning, is an interesting and still unsolved problem.
Adding video mixup shows good performance on downstream video classification and retrieval tasks.
The paper is structured well. It is easy for readers to follow except for a few confusing sentences.
However, this paper has weaknesses as well.
The proposed training pipeline is not end-to-end and is quite heavy. We need to 1) independently train RGB and Flow encoders, 2) train the RGB encoder with cutmixed flow features (from the frozen Flow encoder), 3) train the flow encoder with cutmixed RGB features (from the frozen RGB encoder), 4) repeat steps 2 and 3 again, 5) train on downstream tasks. It is fine to be complex if the performance gain is significant. However, the gain does not seem very significant. Is there any way to simplify the training pipeline? E.g., can we jointly train the RGB and Flow encoders?
The performance improvement from cross-modal cutmix is not significant, even though it relies on two modalities and a quite complex training pipeline. In Table 2, cross-modal mixup gives only a 1~2 point improvement on the UCF-101 and HMDB-51 classification/retrieval tasks.
The proposed method does not show competitive performance compared with the state of the art. Despite the very complex training pipeline, the proposed method does not outperform CoCLR when both are pre-trained on Kinetics-400, as shown in Table 3. It shows on-par performance when the much smaller UCF-101 (1/24 in scale) is used for pre-training. This might imply that the proposed method does not learn more transferable/generalizable self-supervised representations than state-of-the-art methods. On the skeleton-based recognition task, the proposed method shows inferior results to CrosSCLR (xsub: 72.5 vs 74.5, xview: 79.1 vs 82.1).
They claim this work is the first attempt to perform mixing across channels. However, it seems that channel mixup does not improve the performance in general in Table 2. Why do we need channel mixup if it does not improve the performance?
There are a few unclear points in the paper. In Table 4, is it the RGB + Flow result or a single-modality result? Please add the modality and other columns (network, resolution, depth) to this table as well. The Table 2 caption says "All the models are trained on training samples of UCF101 for 500 epochs", while the text says "All the models are initialized with weights obtained from pre-training with mixup for 300 epochs". Which one is correct? |
1. What is the main contribution of the paper regarding video representation learning?
2. What are the strengths of the proposed augmentation strategy, particularly in its application to self-supervised learning?
3. What are the weaknesses of the paper, especially in comparison to other works in the field?
4. How does the reviewer assess the effectiveness and generalizability of the cross-modal augmentation used in the proposed approach?
5. Are there any concerns or suggestions regarding the training pipeline and its complexity? | Summary Of The Paper
Review | Summary Of The Paper
This submission proposes an augmentation strategy for self-supervised video representation learning. The augmentation (Vi-Mix) performs video mixup on the RGB and optical flow streams individually, by weighted averaging of two spatio-temporal stacks of frames, followed by a cross-modal Cutmix that mixes the cross-modal representations in the hidden representation space. The self-supervised learning is done on the Kinetics-400 and UCF101 datasets, and the performance is then shown on downstream action recognition and retrieval.
Review
Strengths:
1- The idea of proposing new augmentation strategies for self-supervised video representation learning is interesting, as augmentation is one of the key components of self-supervised learning, especially in the video domain where we have an additional dimension (time).
2- The authors provide different ablation studies showing the effect of using different mix-up strategies in different streams (RGB, optical flow, and both).
Weaknesses:
1- The authors mentioned some motivations for using cutmix as a way of cross-modal augmentation; however, I am still not convinced of the benefit of mixup augmentation in each stream compared to existing simple augmentation methods such as random temporal sampling, as in CVRL and [1].
2- Although the authors mentioned that the benefit of using cutmix is doing augmentation across modalities, I'm curious how this augmentation can be generalized to other modalities such as audio or text. To me, this cross-modal augmentation only works on modalities such as flow, which share the same spatial dimensions as the RGB domain.
3- The comparison to the state-of-the-art results in Table 3 is not convincing. The CVRL approach only uses the RGB stream, yet its final results outperform the proposed approach by a large gap (10%). Although the authors mentioned that this improvement is due to the larger size of the input (224 instead of 112) and the architecture, I'm curious to see how the proposed approach behaves in the same setting (e.g., same resolution and same architecture).
4- A comparison with the recent CVPR 2021 work [1], which has shown results on spatio-temporal MoCo-V2 and obtained better results than the proposed method, is missing.
5- The proposed training pipeline is composed of different stages (individual RGB and Flow training, 4 stages of training with Cutmix, and then another round of joint training). The benefit of self-supervised representation learning is that it can be utilized to train on larger datasets, as we don't need annotations. However, here we can see that the multiple stages of training and the large number of training epochs make the process much more challenging.
[1] Feichtenhofer, Christoph, et al. "A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. |
ICLR | Title
Vi-MIX FOR SELF-SUPERVISED VIDEO REPRESENTATION
Abstract
Contrastive representation learning of videos relies heavily on exhaustive data augmentation strategies. Therefore, towards designing video augmentation for self-supervised learning, we first analyze the best strategy to mix videos to create a new augmented video sample. Then, the question remains: can we make use of the other modalities in videos for data mixing? To this end, we propose Cross-Modal Manifold Cutmix (CMMC), which inserts a video tesseract into another video tesseract in the feature space across two different modalities. We find that our video mixing strategy, Vi-Mix, i.e. preliminary mixing of videos followed by CMMC across different modalities in a video, improves the quality of the learned video representations. We exhaustively conduct experiments for two downstream tasks, action recognition and video retrieval, on three popular video datasets: UCF101, HMDB51, and NTU-60. We show that the performance of Vi-Mix on both downstream tasks is on par with the other self-supervised approaches while requiring less training data.
1 INTRODUCTION
The recent advancements in self-supervised representation learning are credited to the success of using discriminative contrastive losses such as InfoNCE (Gutmann & Hyvärinen, 2010). Given a data sample, contrastive representation learning focuses on discriminating its transformed version from a large pool of other instances or their transformations. Thus, while the concept of contrastive learning is applicable to any domain, its effectiveness relies on domain-specific inductive bias, as the transformations are obtained from the same data instance. For images, these transformations are usually standard data augmentation techniques (Chen et al., 2020), while in videos they also include data artifacts that arise from temporal segments within the same video clip (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016).
Recently, data mixing strategies (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019) have emerged as promising data augmentations for supervised learning methods. When these mixing strategies are incorporated with contrastive learning, the quality of the learned representation improves drastically, as in Lee et al. (2021); Verma et al. (2021; 2019). Such augmentations introduce semantically meaningful variance for better generalization, which is crucial for learning self-supervised representations. While these mixing strategies have been impactful for learning image representations, they have been explored only to a limited extent in the video domain.
Therefore, in this paper, we study the various data mixing strategies for videos, and propose a new approach to overcome their limitations by mixing across modalities. We first investigate and compare the mixing strategies adopted from the image domain, and we find that mixing videos by performing a simple interpolation of two video cuboids (Mixup) is more effective than inserting a video cuboid within another (Cutmix). This is in contrast to the observations made in the image domain. Furthermore, unlike learning image representations (Lee et al., 2021), these data mixing strategies are prone to over-fitting when trained for longer, making them limited for videos.
Motivated by the success of previous self-supervised techniques exploiting multiple modalities to learn discriminative video representation as in Arandjelovic & Zisserman (2017a); Chung & Zisserman (2016); Korbar et al. (2018); Arandjelovic & Zisserman (2017b); Owens & Efros (2018); Piergiovanni et al. (2020); Miech et al. (2020), in this paper, we pose the following question: can we take advantage of other modalities for mixing videos while learning self-supervised representation?
Different modalities of a video, like RGB, optical flow, etc., have different distributions, and thus mixing them directly in the input space makes the task of discriminating similar instances from the other instances easier, limiting the quality of the learned representation. To this end, we propose our Cross-Modal Manifold Cutmix (CMMC), which performs the data mixing operation ‘across different modalities’ of a video on their hidden intermediate ‘representations’. Given the video encoders from different modalities pre-trained with a contrastive loss in addition to mixup augmentation, CMMC exploits the underlying structure of the data manifold. This is done by performing the cutmix operation in the feature space across space, time and channels. To the best of our knowledge, this is the first attempt to perform mixing across channels. The channel mixing of the cross-modal feature map enforces the encoder to learn better semantic concepts in the videos. Hence, we train the video encoders for different modalities in several stages, including the use of the mixup strategy in videos and our proposed CMMC. We call this video augmentation strategy Vi-Mix, which stands for Video instance-Mix for contrastive representation learning.
Empirically, we confirm that Vi-Mix, while being easy to implement, significantly improves contrastive representation learning for videos. We show that Vi-Mix can effectively learn a self-supervised representation with a small amount of data available for the pretext task and can also take advantage of other modalities of the videos through the manifold mixing strategy. We thoroughly evaluate the quality of the learned representation on two downstream tasks, action recognition and retrieval, on UCF101 and HMDB51. We demonstrate the improvement in transferability of the representation learned with Vi-Mix by training on the large-scale Kinetics-400 dataset and then finetuning on smaller datasets. Furthermore, we corroborate the robustness of our video data augmentation strategy by observing similar improvements on video skeleton sequences for the task of action recognition.
2 BACKGROUND
In this section, we first review the general contrastive learning mechanism used for learning self-supervised video representations. Then, we review a data mixing formulation for self-supervision in the image domain. Let X ∈ R^{T×3×H×W} be a video sequence. The objective is to learn a mapping f : X → z, where z ∈ R^D, that can be used effectively to discriminate video clips for various downstream tasks, e.g. action recognition, retrieval, etc.
Contrastive Learning. Assume a set of augmentation transformations A is applied to X. Thus, for a particular video there exists a positive (say X̃), whereas the other transformed videos in a mini-batch are considered as negatives. The encoder f(·) and its exponential moving average model f̃(·) map the positives and negatives respectively to embedding vectors. Therefore, the contrastive loss for a sample X_i is formulated as
L(X_i) = -\log \frac{\exp(z_i \cdot \tilde{z}_i / \tau)}{\exp(z_i \cdot \tilde{z}_i / \tau) + \sum_{j \in N} \exp(z_i \cdot z_j / \tau)} \quad (1)
where τ is a scaling temperature parameter and N is the set of negatives. Note that the embedding vectors z_i and z̃_i are L2-normalized before the loss computation. Thus, the loss L optimizes the video instances such that the representations of a video instance and its transformed version are pulled towards each other while being pushed away from the other instances.
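For illustration, a minimal PyTorch-style sketch of this loss is given below; how the positives and negatives are produced (e.g. from the momentum encoder and the feature queue) is abstracted away, and the function and tensor names are illustrative rather than the exact implementation.

import torch
import torch.nn.functional as F

def info_nce(z, z_pos, z_neg, tau=0.07):
    # z, z_pos: (B, D) anchor and positive embeddings; z_neg: (K, D) negative embeddings
    z, z_pos, z_neg = (F.normalize(t, dim=-1) for t in (z, z_pos, z_neg))
    l_pos = (z * z_pos).sum(dim=-1, keepdim=True) / tau      # (B, 1) positive logits
    l_neg = z @ z_neg.T / tau                                # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(z.shape[0], dtype=torch.long)       # the positive sits at index 0
    return F.cross_entropy(logits, labels)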
Data Mix for Contrastive Learning. We revisit the formulation proposed in i-mix (Lee et al., 2021) for mixing data within a batch for contrastive representation learning. Let y_i ∈ {0, 1}^{BS} be the virtual label of the inputs X_i and X̃_i in a batch, where y_{i,i} = 1 and y_{i,j≠i} = 0. Then, the (N + 1)-way discrimination loss for a sample in a batch is:
L(X_i, y_i) = -y_{i,b} \cdot \log \frac{\exp(z_i \cdot \tilde{z}_b / \tau)}{\exp(z_i \cdot \tilde{z}_b / \tau) + \sum_{j \in N} \exp(z_i \cdot z_j / \tau)} \quad (2)
where b ranges from 0 to BS. Thus, the data instances are mixed within a batch for which the loss is defined as:
L_{Mix}((X_i, y_i), (X_r, y_r), \lambda) = L(\mathrm{Mix}(X_i, X_r; \lambda), \lambda y_i + (1 - \lambda) y_r) \quad (3)
where λ ∼ Beta(α, α) is a mixing coefficient, r ∼ rand(BS), and Mix() is a mixing operator. In the following, we will discuss the appropriate mixing operators in the video domain.
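For illustration, equations 2 and 3 can be instantiated by the following sketch, in which the queries are assumed to have already been mixed and the keys come from the same batch; the tensor names and this particular choice of keys are assumptions rather than the exact i-mix implementation.

import torch
import torch.nn.functional as F

def imix_loss(z_mix, z_keys, perm, lam, tau=0.07):
    # z_mix: (BS, D) embeddings of the mixed queries, z_keys: (BS, D) key embeddings of the unmixed batch
    # perm: (BS,) long tensor with the index r of the instance each query was mixed with
    z_mix, z_keys = F.normalize(z_mix, dim=-1), F.normalize(z_keys, dim=-1)
    logits = z_mix @ z_keys.T / tau                       # similarity of each query to every key in the batch
    targets = torch.arange(z_mix.shape[0])                # virtual label y_i of the anchor itself
    return lam * F.cross_entropy(logits, targets) + (1 - lam) * F.cross_entropy(logits, perm)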
3 VI-MIX
In this paper, we use the same i-mix formulation (from the above section) for data mixing while learning a discriminative self-supervised representation. First, we investigate the best strategies to define the mixing operator for the video domain. Furthermore, we introduce a manifold mixing strategy to make use of the other modalities freely available in videos for data mixing. We integrate both of these data augmentation strategies, together called Video-instance Mix (Vi-Mix), for contrastive representation learning of videos.
3.1 MIXING OPERATOR FOR VIDEOS
Unlike mixing operations on images as in Zhang et al. (2018); Verma et al. (2019); Shen et al. (2020); Yun et al. (2019), videos have a temporal dimension. We argue that handling the temporal dimension in videos is not equivalent to handling the spatial dimensions in images. For the mixing operation defined in equation 3, it is straightforward to extend the existing image mixing strategies to videos. Mixup (Verma et al., 2019) in videos performs a weighted averaging of two spatio-temporal stacks of frames, as sketched below. In contrast to the cutmix operator (Yun et al., 2019), the mixup operator retains the temporal information in videos and thus facilitates contrastive representation learning. We empirically corroborate this observation in the experimental analysis. In addition to this, videos possess different modalities, like optical flow, that can be computed without any supervision. The question remains: can we make use of other modalities in videos for mixing instances while learning a contrastive representation? To this end, we introduce the Cross-Modal Manifold Cutmix (CMMC) strategy for mixing video instances across different modalities, which is discussed in the next section.
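The mixup operator itself reduces to a clip-wise interpolation, as in the short sketch below; the Beta(α, α) sampling and the batch permutation follow the i-mix formulation of section 2, and the names are illustrative.

import torch

def video_mixup(clips, alpha=1.0):
    # clips: (BS, C, T, H, W); every frame of a clip shares the same coefficient, so temporal structure is kept
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(clips.shape[0])
    mixed = lam * clips + (1 - lam) * clips[perm]
    return mixed, perm, lam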
3.2 CROSS-MODAL MANIFOLD CUTMIX
Different modalities in videos are additional information that is often exploited for self-supervised learning, as in (Han et al., 2020a; Linguo et al., 2021). In contrast to these approaches, we simply propose to mix these different modalities as another data augmentation strategy for self-supervised representation learning. However, the dissimilarity in distribution between the different modalities (say, RGB and optical flow) of videos makes it harder to mix them in the input space. Consequently, we propose Cross-Modal Manifold Cutmix to mix such cross-modal representations in the hidden representation space.
As an extension of the previous notation, we now consider two different modalities X_{1i} and X_{2i} for a given video clip X_i. The objective of the self-supervised task is to learn discriminative video representations, i.e., to learn functions f1(·) and f2(·). We decompose the encoder function as f1(X_{1i}) = f_{1k}(g_{1k}(X_{1i})), where g_{1k} is the part of the video encoder for modality 1 with k layers that maps the input data X_{1i} to a hidden representation. Similarly, f_{1k} maps the hidden representation g_{1k}(X_{1i}) to the embedding vector z_{1i}. Note that we have already trained the video encoders f1(·) and f2(·) by exploiting the above-mentioned mixup strategy among the video instances in a mini-batch while optimizing the contrastive loss. Now, CMMC is trained in a 4-stage fashion. In the first stage, we train the encoder f1(·) of modality 1 in 5 steps, as illustrated in figure 1. First, we select random layers k and l from a set of eligible layers in f1(·) and f2(·) respectively, such that k ≤ l. This set excludes the input space. Second, we feed a pair of inputs X_{1i} and X_{2r} to their respective video encoders f1 and f2 up to layer k and layer l respectively. We obtain g_{1k}(X_{1i}) and g_{2l}(X_{2r}), a hidden representation (spatio-temporal tesseract) of each video in modalities 1 and 2. Third, we perform data mixing among the hidden representations across the two modalities as:
g^mix_{1k}, λ = CutMix(g_{1k}, g_{2l}; α)    (4)
y^mix_{1k} = λ y_{1i} + (1 − λ) y_{2r}    (5)
where (y_{1i}, y_{2r}) are one-hot labels, the hyper-parameter α = 1, and the mixing operator is cutmix as in Yun et al. (2019), which returns the mixing coefficient λ along with the mixed data. For brevity, we omit the input instances in the equation. Fourth, we continue the forward pass in f1(·) only, from layer k to the output embedding, which we now denote by z^mix_{1i}. Fifth, this embedding is used to compute the (N + 1)-way discrimination loss, which is reformulated as:
L(X_{1i}, y_{1i}) = −y^mix_{1i,b} · log [ exp(z^mix_{1i} · z̃_{1b} / τ) / ( exp(z^mix_{1i} · z̃_{1b} / τ) + Σ_{j∈N} exp(z^mix_{1i} · z_{1j} / τ) ) ]    (6)
The computed gradients are backpropagated through the entire video encoder f1(·) of modality 1 only. Note that the video encoder f2(·) of modality 2 is not trained in this stage. In the second stage, we train the video encoder f2(·) of modality 2 while freezing the updated weights of f1(·). We continue this cycle twice for each modality, hence 4 stages, to learn the self-supervised video representations in f1(·) and f2(·). Algorithm 1 provides the pseudocode of one stage of CMMC for training encoder f1(·). To sum up, Vi-Mix consists of initially training
Algorithm 1 PyTorch-like pseudocode of one stage of CMMC for modality 1
alpha = 1.
mix1 = rand(1, L)                 # L is the number of layers in the encoder
mix2 = rand(mix1, L)              # cross-modal layer index, chosen so that mix1 <= mix2
x1_q, x1_k = aug(rgb)             # two augmented views of the RGB (modality 1) clip
x2 = aug(flow)                    # optical-flow (modality 2) clip
g1 = f_1q.partial_forward(x1_q, 0, mix1)   # hidden RGB tesseract at layer mix1
g2 = f_2q.partial_forward(x2, 0, mix2)     # hidden flow tesseract at layer mix2
g2 = g2.detach()                           # no gradient flows into the flow encoder
g_mix, labels_new, lam = CutMix(g1, g2, alpha)          # cross-modal manifold cutmix
z1 = normalize(f_1q.partial_forward(g_mix, mix1, L))    # finish the forward pass from the mixed features
z2 = normalize(f_1k.forward(x1_k)).detach()             # key embedding from the momentum encoder
logits = matmul(z1, z2.T) / t
loss = lam * CrossEntropyLoss(logits, arange(len(x1_q))) + \
       (1 - lam) * CrossEntropyLoss(logits, labels_new)
video encoders of modality 1 and modality 2 independently with the InfoNCE loss as in Chen et al. (2020) and applying mixup augmentation. Then, we perform CMMC among the hidden representations of data from modalities 1 and 2 in 4 stages. This is done with an alternating training strategy, as in Han et al. (2020a), to make use of the latest learned representations in the cross-modal network. The final model is obtained after two cycles of training the encoder of each modality.
CutMix in feature space. Here, we explain how the cutmix operator is applied to the video tesseracts in the feature space. Assume the hidden representation of the input video sequence X_{1i} in modality 1 is g_{1k} ∈ R^{c1×t1×h1×w1}, where c1 denotes the channels, t1 the time steps, and s1 = h1 × w1 the spatial resolution. We generate a new representation g^mix_1 by combining the hidden representations g_{1k} and g_{2l}. These hidden representations g_{1k} and g_{2l} may differ such that (c1, t1, s1) ≥ (c2, t2, s2). Therefore, we define a cutmix operation that combines the video tesseracts in space, time and across channels. We define the combining operation as
g^mix_1 = M ⊙ g_{1k} + (1 − M) ⊙ g_{2l}    (7)
where M ∈ {0, 1}^{c1×t1×h1×w1} is a binary tensor mask, ⊙ denotes element-wise multiplication, and M is determined by sampling the bounding-box coordinates bbox = (b_{c1}, b_{c2}, b_{t1}, b_{t2}, b_{h1}, b_{h2}, b_{w1}, b_{w2}) from a uniform distribution. In order to preserve the temporal information in a video, we fix (b_{t1}, b_{t2}) = (0, t2). Similarly, we preserve the channel information processed by the video encoder f2(·) by fixing (b_{c1}, b_{c2}) = (0, c2).
Thus, the bounding box selection follows a random sampling of a center coordinate (bwc, bhc) from (U(0, w2), U(0, h2)). The corner points of the bounding box are determined by
b_{w1}, b_{w2} = b_{wc} − (w2 √λ)/2 , b_{wc} + (w2 √λ)/2        b_{h1}, b_{h2} = b_{hc} − (h2 √λ)/2 , b_{hc} + (h2 √λ)/2    (8)
where λ ∼ U(0, 1). Even after fixing (b_{t1}, b_{t2}) and (b_{c1}, b_{c2}), the resulting video tesseract in one modality may not match the dimensions of the video tesseract in the other modality across channels and time if k < l. We therefore select a 4D bounding box with coordinates (M_{c1}, M_{c2}, M_{t1}, M_{t2}, M_{h1}, M_{h2}, M_{w1}, M_{w2}) within the defined binary mask M. We randomly sample a center coordinate (M_{cc}, M_{tc}, M_{hc}, M_{wc}) from U(0, c1), U(0, t1), U(0, h1), and U(0, w1), respectively. The end points of the binary mask M are determined by
M_{c1}, M_{c2} = M_{cc} − c2/2 , M_{cc} + c2/2        M_{t1}, M_{t2} = M_{tc} − t2/2 , M_{tc} + t2/2
M_{h1}, M_{h2} = M_{hc} − h2/2 , M_{hc} + h2/2        M_{w1}, M_{w2} = M_{wc} − w2/2 , M_{wc} + w2/2    (9)
For the region within this bounding box, the values of the binary mask are set to 0, and to 1 otherwise. A new mixing coefficient is computed via 1 − λ_new = Σ_{c,t,w,h} M_{c,t,w,h}, denoting the complement of the proportion of the volume occupied by M. This new mixing coefficient λ_new is returned by the cutmix function to compute the mixed labels in equation 5.
Thus, we perform a mix operation in videos across all the dimensions, including space, time and channels, while preserving the temporal properties of the video instances by retaining a proportion of the channel information. This makes the cutmix operation effective in the feature space.
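Putting equations 7-9 together, a simplified sketch of the feature-space cutmix is given below. It assumes (c1, t1, h1, w1) ≥ (c2, t2, h2, w2), pastes the full temporal and channel extent of the cross-modal tensor, and, for brevity, reuses the same spatial coordinates in both tensors; these simplifications are ours.

import torch

def manifold_cutmix(g1, g2):
    # g1: (C1, T1, H1, W1) hidden tensor of modality 1, g2: (C2, T2, H2, W2) hidden tensor of modality 2
    C1, T1, H1, W1 = g1.shape
    C2, T2, H2, W2 = g2.shape
    lam = torch.rand(1).item()                                  # lambda ~ U(0, 1)
    bw, bh = int(W2 * lam ** 0.5), int(H2 * lam ** 0.5)         # spatial box scaled by sqrt(lambda), eq. 8
    wc, hc = torch.randint(0, W2, (1,)).item(), torch.randint(0, H2, (1,)).item()
    w1_, w2_ = max(wc - bw // 2, 0), min(wc + bw // 2, W2)
    h1_, h2_ = max(hc - bh // 2, 0), min(hc + bh // 2, H2)
    # random 4D placement of the (C2, T2) sub-tesseract inside g1 (eq. 9), keeping time/channel intact
    cc = torch.randint(0, C1 - C2 + 1, (1,)).item() if C1 > C2 else 0
    tc = torch.randint(0, T1 - T2 + 1, (1,)).item() if T1 > T2 else 0
    g_mix = g1.clone()
    g_mix[cc:cc + C2, tc:tc + T2, h1_:h2_, w1_:w2_] = g2[:, :, h1_:h2_, w1_:w2_]
    # new mixing coefficient = fraction of g1 that is kept outside the pasted region
    lam_new = 1.0 - (C2 * T2 * (h2_ - h1_) * (w2_ - w1_)) / float(C1 * T1 * H1 * W1)
    return g_mix, lam_new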
4 EXPERIMENTS
In this section, we describe the datasets used in our experimental analysis, the implementation details, and the evaluation setup. We present ablation studies to illustrate the effectiveness of the Vi-Mix video data augmentation and also provide an exhaustive state-of-the-art comparison with our Vi-Mix models.
4.1 DATASETS
We use two video action recognition datasets, UCF101 (Soomro et al., 2012) and Kinetics-400 (Kay et al., 2017), for self-supervised training of the video encoders. UCF101 contains 13k videos with 101 human actions, and Kinetics-400 (K400) contains 240k video clips with 400 human actions. We also use a skeleton action recognition dataset, NTU-RGB+D (Shahroudy et al., 2016), for self-supervised training of a skeleton encoder. NTU-RGB+D (NTU-60) contains 58k videos with 60 human actions, all performed indoors. Note that we use the videos or skeleton sequences from the training set only for the self-supervised pre-training. Downstream tasks are evaluated on split 1 of UCF101 and on HMDB51 (Kuehne et al., 2011), which contains 7k videos with 51 human actions. For skeletons, we evaluate on the validation set of NTU-60 under the Cross-Subject (xsub) and Cross-View (xview) protocols.
4.2 IMPLEMENTATION DETAILS
Vi-Mix is a simple data augmentation strategy that requires a cutmix operation in the feature space, adopted from (Yun et al., 2019), followed by our temporal and channel mixing. The input modalities in our experiments consist of RGB, optical flow and skeletons (3D poses). The optical flow is computed with the unsupervised TV-L1 algorithm (Sánchez Pérez et al., 2013), and the same pre-processing procedure is used as in Carreira & Zisserman (2017). For the skeleton experiments, the skeleton data X ∈ R^{C×T×V} is acquired using Kinect V2 sensors, where the coordinate dimension is C = 3, the number of joints is V = 25, and the number of frames is T = 50. Following the pre-processing steps in Linguo et al. (2021), we compute the joint and motion cues. For all the RGB and optical flow models, we choose the S3D (Xie et al., 2018) architecture as the backbone, whereas for the skeleton model we choose ST-GCN (Yan et al., 2018) with the channels in each layer reduced to 1/4 as the backbone. For self-supervised representation learning, we adopt a momentum-updated history queue to cache a large number of video features, as in MoCo (He et al., 2019). We attach a non-linear projection head and remove it for downstream task evaluations, as done in SimCLR (Chen et al., 2020).
For our experiments with RGB and optical flow, we use 32 frames of 128 × 128 RGB (or flow) input at 30 fps. For additional data augmentation, we apply clip-wise consistent random crops, horizontal flips, Gaussian blur and color jittering. We also apply random temporal cropping from the same video, as used in Han et al. (2020a). For training a MoCo model with Vi-Mix data augmentation, we initially train the RGB and Flow networks for 300 epochs with mixup data augmentation
independently. The mixup operation is applied in the input space. Then, we train these pre-trained networks with CMMC in 4 stages. In each stage, the network of one input modality is trained for 100 epochs while the network of the other modality is frozen. In the next stage, we swap the cross-modal networks and continue training the network of the other modality. After the 4 stages, the resulting models have thus been trained for 500 epochs in total. For optimization, we use Adam with a 10^{-3} learning rate and 10^{-5} weight decay. All the experiments are trained on 4 and 2 V100 GPUs for K400 and the other datasets respectively, with a batch size of 32 videos per GPU.
For our experiments with skeleton sequences, we choose Shear with shearing amplitude 0.5 and Crop with a padding ratio of 0.6 as the augmentation strategy, as used in Linguo et al. (2021). Note that, for CMMC on skeleton data, we perform the cutmix operation only on the skeleton vertices, followed by channel and temporal mixing. For training with CMMC, the GCN encoders are initially trained for 150 epochs on joint and motion cues. This is followed by a 2-stage training of 150 epochs each, where the encoder of one modality is trained while the other is frozen. For optimization, we use SGD with momentum (0.9) and weight decay (0.0001). The model is trained on 1 V100 GPU with a batch size of 128 skeleton sequences.
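For reference, the Shear and Crop skeleton augmentations can be sketched roughly as below; the exact shear matrix parameterization and the padding scheme are assumptions based on common practice, not the verbatim implementation of Linguo et al. (2021).

import numpy as np

def shear(x, amplitude=0.5):
    # x: (C=3, T, V) skeleton sequence; random shear of the 3D joint coordinates
    s = np.random.uniform(-amplitude, amplitude, size=6)
    shear_mat = np.array([[1.0, s[0], s[1]],
                          [s[2], 1.0, s[3]],
                          [s[4], s[5], 1.0]])
    return np.einsum('ij,jtv->itv', shear_mat, x)

def temporal_crop(x, padding_ratio=0.6):
    # symmetrically pad in time, then randomly crop back to the original length
    C, T, V = x.shape
    pad = int(T * padding_ratio / 2)
    xp = np.concatenate([x[:, :1].repeat(pad, axis=1), x, x[:, -1:].repeat(pad, axis=1)], axis=1)
    t0 = np.random.randint(0, xp.shape[1] - T + 1)
    return xp[:, t0:t0 + T]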
4.3 EVALUATION SETUP FOR DOWNSTREAM TASKS
For experiments with RGB and optical flow, we evaluate on two downstream tasks: (i) action classification and (ii) retrieval. For action classification, we evaluate (1) a linear probe, where the entire encoder is frozen and a single linear layer followed by a softmax layer is trained with cross-entropy loss, and (2) finetuning, where the entire encoder along with a linear and softmax layer is trained with cross-entropy loss. Note that the encoders are initialized with the Vi-Mix learned weights. More details on training the downstream action classification framework are provided in the Appendix. For action retrieval, the features extracted from the encoder pre-trained with Vi-Mix are used for nearest-neighbour (NN) retrieval. We report Recall at k (R@k): if the top-k nearest neighbours contain at least one video of the same class, the retrieval is counted as correct.
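The retrieval metric can be computed as in the following sketch, using cosine similarity on the frozen features (feature extraction itself is omitted; variable names are ours):

import torch
import torch.nn.functional as F

def recall_at_k(query_feat, query_lab, gallery_feat, gallery_lab, k=1):
    # query_feat: (Nq, D) test-clip features, gallery_feat: (Ng, D) training-clip features
    q = F.normalize(query_feat, dim=1)
    g = F.normalize(gallery_feat, dim=1)
    sim = torch.mm(q, g.t())                       # (Nq, Ng) cosine similarities
    topk = sim.topk(k, dim=1).indices              # indices of the k nearest neighbours
    hit = (gallery_lab[topk] == query_lab.unsqueeze(1)).any(dim=1)
    return hit.float().mean().item()               # fraction of queries with a correct neighbour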
4.4 ABLATION STUDIES ON VI-MIX
In this section, we empirically show the correctness of our data augmentation strategy for videos. We also investigate the potential reasons behind the significant improvement of performance with Vi-Mix by conducting relevant experiments.
Which mixing strategy is the best for uni-modal video understanding? In Table 1, we investigate different video mixing strategies based on the mixup and cutmix operators for downstream action classification and retrieval tasks. For the augmentations based on Cutmix (Yun et al., 2019), we randomly select a sub-cuboid and plug it into another video. We also consider VideoMix (Yun et al., 2020), which performs a cutmix operation across all frames in a clip-wise consistent manner. For the virtual labels, we perform label smoothing as defined in equation 3. In the image domain, cutmix outperforms the mixup strategy in supervised settings (Yun et al., 2019). However, we find that all strategies using cutmix in the temporal dimension (Temporal cutmix), spatio-temporal dimensions (ST cutmix) or spatial dimension (VideoMix) perform worse than the simple mixup strategy. This is because the cutmix operation destroys the temporal structure of the videos, which is crucial for understanding actions. Similarly, VideoMix, where cutmix is performed spatially but not temporally, introduces new contextual information at arbitrary spatial locations. This not only hampers the motion patterns present in the original video but also weakens the similarity between the positive samples in the contrastive loss. Thus, video mixing operators must ensure the retention of the temporal characteristics of videos.
Why do we need a multi-modal mixing strategy for videos? In fig. 2, we plot the downstream action classification accuracy against the number of training epochs. This plot clearly shows the importance of applying data mixing augmentation. However, the model trained without the multi-modal mixing strategy (CMMC) overfits after 300 epochs, whereas the models trained with multi-modal mixing are still learning discriminative representations. We find that the mixing strategy induces faster training, and with CMMC the models learn cross-modal knowledge without using complicated knowledge distillation techniques as in Crasto et al. (2019); Garcia et al. (2018). It is to be noted that training with cross-modal data augmentation is more beneficial when trained with an alternating strategy. In fig. 2, we show that the RGB model with alternating training outperforms the RGB model that is trained for 200 epochs straightaway with the outdated optical flow model. The alternating training strategy takes advantage of the most up-to-date cross-modal model for data mixing and hence learns more discriminative representations.
Diagnosis of CMMC. In Table 2, we provide the results for different configurations of data mixing in the feature space. The objective is to understand which strategies are responsible for boosting the performance of the models on UCF101 and HMDB51 in the downstream tasks. All the models are initialized with weights obtained from pre-training with mixup for 300 epochs. First, we show that cross-modal mixup in the feature space (indicated by + CM mixup), which exploits the cross-modal representation, outperforms on UCF101 the traditional manifold mixup in the feature space (indicated by + mixup) (Verma et al., 2019), which does not make use of the cross-modal representation. However, we observe that the action classification accuracy on HMDB51 using optical flow is equivalent for both strategies, with or without cross-modality. This is because HMDB51 mostly consists of static actions with less prominent motion patterns, which limits the ability of the optical flow model to learn motion-dominated representations. As a result, this also affects the action classification accuracy on HMDB51 when evaluated with both streams.
Next, we show the influence of mixing different dimensions of the hidden representation of a video. We perform experiments with cross-modal cutmix in the spatial dimension (s), the spatial and temporal (t) dimensions, and finally all dimensions including channels (c). Note that the manifold cutmix operation is performed within the same data manifold, i.e., the cutmix is performed across the same random layer of the RGB and Flow networks. In contrast to our earlier observation for video mixing in the input space, here the temporal cutmix provides a minor boost to the performance on the downstream tasks. This is supported by the fact that the hidden representations of a video retain temporal information thanks to the preceding convolutional operations on the input sample. The channel mixing further boosts the performance by retaining temporal information that would otherwise be lost in the resulting mixed feature map. Finally, we introduce more randomness in CMMC by randomizing the selection of the cross-modal network layer (refer to mix2 in algorithm 1) where the mixing takes place. This enables CMMC to take advantage of features from later layers of the cross-modal network.
4.5 COMPARISON TO THE STATE-OF-THE-ART
In this section, we compare Vi-Mix with previous self-supervised approaches on video/skeleton action classification and video action retrieval. In Table 3, we provide the action classification results on UCF101 and HMDB51 for linear probing and full finetuning of video encoders with models trained on UCF101 and K400. For linear probing, Vi-Mix, only with its strong data augmentation (mixup + CMMC), outperforms CoCLR, which shares the same evaluation setting as Vi-Mix, by 1.9% on UCF101. A similar observation is made for models trained on K400. We also note that our finetuned Vi-Mix encoders outperform approaches using higher spatial resolution (XDC, AVTS, MemDPC) or deeper networks (XDC, MemDPC, GDT). However, the lower action classification accuracy of Vi-Mix on HMDB51 compared to CoCLR indicates the need for positive mining of data samples in contrastive learning, as performed in CoCLR. The large performance gap between Vi-Mix and CVRL is owing to the deeper layers of R3D (49 vs 23) and the input spatial resolution (224 vs 128). Interestingly, our Vi-Mix model pre-trained on K400 performs on par with models trained on larger datasets, substantiating the impact of simple cross-modal video augmentation.
In Table 4, we provide the video retrieval results on UCF101 and HMDB51 for the Vi-Mix models trained on UCF101. This is a classical test for verifying if the pre-trained model learns semantic information while learning self-supervised representation. We test if a query instance clip and its nearest neighbours belong to the same category. Our Vi-Mix model outperforms all the representative baselines by a significant margin on both the datasets.
In Table 5, we generalize our Vi-Mix strategy to skeleton action representation on NTU-60. Since mixup in the input space hampers the spatial configuration of the skeletons processed by ST-GCN, we only perform CMMC on the hidden skeleton representations in the encoder. We treat joints and motion as different input modalities of the given skeleton data. Note that the cutmix operation across the spatial dimension here operates on the joint vertices (one-dimensional). The downstream action classification results of a skeleton model pre-trained with contrastive learning (SkeletonCLR) using CMMC outperform its baseline by 2.4% on the cross-subject protocol and by 1.% on the cross-view protocol. The superior results of CrosSCLR are owing to its cross-modal positive mining, which benefits the vanilla SkeletonCLR models. We believe that such positive mining approaches in contrastive learning, such as CoCLR or CrosSCLR, could benefit from using our video mixing strategies.
5 RELATED WORK
Deep neural networks, especially those designed for processing videos, are data-hungry. While annotating large-scale video data is expensive, many self-supervised video representation learning approaches have recently been proposed to make use of the abundant web videos. On one hand, some methods have exploited the temporal structure of videos, for example predicting whether frames appear in order, in reverse order or shuffled, or enforcing color consistency across frames (Lee et al., 2017b;a; Fernando et al., 2017; Pickup et al., 2014; Misra et al., 2016; Wang et al., 2019; 2017; Vondrick et al., 2018; Recasens et al., 2021). On the other hand, some methods take advantage of the multiple modalities of videos, such as audio, text and optical flow, by designing pretext tasks for their temporal alignment (Chung & Zisserman, 2016; Korbar et al., 2018; Arandjelovic & Zisserman, 2017b; Owens & Efros, 2018; Piergiovanni et al., 2020; Miech et al., 2020; Arandjelovic & Zisserman, 2017a).
Meanwhile, data mixing strategies have gained popularity as image-domain data augmentations for supervised learning (Zhang et al., 2018; Shen et al., 2020; Yun et al., 2019), in addition to their usage for learning self-supervised image representations (Verma et al., 2019; Lee et al., 2021; Verma et al., 2021). A recent (unpublished) work in the spirit of data mixing in the video domain, VideoMix, creates a new training video by inserting a video cuboid into another video in the supervised setting (Yun et al., 2020). In contrast, we focus on mixing video samples for self-supervised representation learning. Different from the observations in VideoMix, we find that mixup is a better augmentation tool in Vi-Mix than strategies that remove a spatio-temporal sub-space from the original videos. Closest to our work, Manifold Mixup (Verma et al., 2019) interpolates the hidden representations of samples within a mini-batch, whereas our proposed CMMC in Vi-Mix performs a cutmix operation in the data manifold across different modalities. In addition, we introduce the notion of channel mixing in the feature space. We find that Vi-Mix is simple to implement while being a strong data augmentation tool for learning self-supervised video representations, even with small data sizes.
6 CONCLUSION
We have analyzed the augmentation strategies for learning self-supervised video representation. We have introduced Vi-Mix which includes performing video mixup followed by Cross-modal manifold mixup to take advantage of additional modalities present in videos. Vi-Mix improves the quality of learned representation and thus brings significant improvement in the performance of downstream tasks on UCF101, HMDB51 and NTU-60 datasets. We believe that Vi-Mix can be a standard video augmentation tool while learning any multi-modal self-supervised video representation.
A APPENDIX
A.1 MORE IMPLEMENTATION DETAILS
Training/Testing specification for downstream finetuning on UCF101 and HMDB51. At training time, we apply the same data augmentation as in the pre-training stage mentioned in section 4.2, except for Gaussian blurring. The model is trained with the same optimization configuration as in the pre-training stage for 500 epochs. At inference time, we perform spatially fully convolutional inference on videos by applying ten crops (center crop and 4 corners, with horizontal flipping) and temporally take clips with overlapping moving windows. The final prediction is the average of the softmax scores of all the clips.
Training/Testing specification for downstream finetuning on NTU-60. For training the pretrained ST-GCN along with the linear classifier, we apply the same data augmentation as in the pre-training stage. We train for 100 epochs with learning rate 0.1 (multiplied by 0.1 at epoch 80).
History Queue in MoCo. We adopt a momentum-updated history queue, as in MoCo (He et al., 2019), to cache a large number of visual features while learning the contrastive representation. For our pre-training experiments, we use a softmax temperature τ = 0.07 and a momentum m = 0.999. The queue sizes of MoCo for the pre-training experiments on UCF101, K400 and NTU-60 are 2048, 16384 and 32768, respectively.
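For completeness, the momentum update and queue maintenance follow the standard MoCo recipe, roughly as sketched below (the divisibility assumption on the queue size is ours):

import torch

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    # slowly move the key encoder towards the query encoder
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

@torch.no_grad()
def enqueue_dequeue(queue, keys, ptr):
    # queue: (K, D) cached key features, keys: (B, D) new keys from the current batch
    B = keys.size(0)
    queue[ptr:ptr + B] = keys          # assumes the queue size K is divisible by B
    return (ptr + B) % queue.size(0)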
A.2 REGULARIZATION EFFECT OF VI-MIX
In fig. 3, we provide (1) a plot of the training loss of two models, one using Vi-Mix and the other using standard data augmentations, and (2) the (K+1)-way accuracy of the pretext task of the models learning the contrastive representation. We observe a disparity between the training losses (left of the figure) of the models with and without Vi-Mix. This is owing to the hardness of the pretext task, which can be directly correlated with the difficulty of the data transformation via Vi-Mix data augmentation. Meanwhile, we also note that the (K+1)-way accuracy of the Vi-Mix model
while training on contrastive loss is lower than that of the model without using Vi-Mix (at the right of the figure). However, the performance gain of the Vi-Mix model on downstream classification and retrieval tasks shows the regularizing capability of using Vi-Mix type data augmentation. | 1. What is the focus of the paper regarding video representation learning?
2. What are the strengths of the proposed approach, particularly in its augmentation strategy?
3. Do you have any concerns or questions about the methodology, such as the choice of network layer or the notation errors?
4. How does the reviewer assess the performance gain of the proposed approach compared to previous works?
5. Are there any suggestions for improving the experimental results or validating the method on different datasets? | Summary Of The Paper
Review | Summary Of The Paper
This paper mainly explores the augmentation strategy, especially data mixing, in self-supervised video representation learning. The paper first evaluates the performance of existing data mixup and cutmix methods in the video domain, then proposes the cross-modal manifold cutmix (CMMC), which mixes the extracted multi-modal features, and finally conducts extensive experiments to support the proposed augmentation strategy.
Review
Strength:
The paper provides a comprehensive evaluation and analysis of data-level mixing strategies in self-supervised video representation learning, including mixup, cutmix and some variants.
The paper aims to jointly leverage multi-modal data and mixing strategies to learn video representations. The proposed Vi-Mix performs the mixing augmentation on feature maps, successfully avoiding the problem of distribution inconsistency between multi-modal data, and improves the performance.
The experimental results and analysis provide some insights from the perspective of spatio-temporal characteristics of videos.
Weakness:
There are some previous works on video representation learning that perform multi-modal contrast and uni-modal data augmentations on the feature map, like [1]. If one performs uni-modal manifold cutmix but with a cross-modal contrastive loss, how much performance gain will be obtained? This comparison is desired.
In 3.2, the description of the details of the manifold cutmix operation is difficult to follow and somewhat confusing. First, when choosing the network layers, why set k <= l? What is the motivation? Second, how is (c1,t1,s1)>=(c2,t2,s2) ensured? When the network goes deeper, the spatio-temporal resolution decreases and the channel count increases, which seems conflicting. Third, it seems Eq. 9 defines both the start and end points of the mask M; in this case, how are the temporal and channel information preserved?
There are some mistakes in notations, like in 3.2, 'f_1() of modality 2' -> 'f_2()', in 4.4 'cross-modal mixup (+mixup)' -> 'CM mixup', in algorithm 1, 'z_1 = normalize(f_1q.partial_forward(x1_q, mix1, L))' -> '(f_1q.partial_forward(g_mix, mix1, L)'.
Besides UCF-101 and HMDB-51, other datasets that emphasize motion, like Diving, could better validate the proposed method.
[1] Patrick, Mandela, et al. "Space-Time Crop & Attend: Improving Cross-modal Video Representation Learning." ICCV, 2021. |
ICLR | Title
Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling
Abstract
We propose a generic confidence-based approximation that can be plugged in to simplify the auto-regressive generation process, with proven convergence. We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner using an efficient predictor. Given the past samples and future priors, the mother AR model can post-process the priors while the accompanying confidence predictor decides whether the current sample needs re-sampling or not. Thanks to the i.i.d. assumption, the post-processing can update each sample in parallel, which remarkably accelerates the mother model. Our experiments on different data domains, including sequences and images, show that the proposed method can successfully capture the complex structures of the data and generate meaningful future samples at lower computational cost while preserving the sequential relationships in the data.
1 INTRODUCTION
The auto-regressive (AR) model, which infers and predicts the causal relationship between the previous and future samples in a sequential data, has been widely studied since the beginning of machine learning research. The recent advances of the auto-regressive model brought by the neural network have achieved impressive success in handling complex data including texts (Sutskever et al., 2011), audio signals (Vinyals et al., 2012; Tamamori et al., 2017; van den Oord et al., 2016a), and images (van den Oord et al., 2016b; Salimans et al., 2017).
It is well known that AR models can learn a tractable data distribution p(x) and can easily be extended to both discrete and continuous data. Due to their nature, AR models are especially well suited to sequential data, such as voice generation (van den Oord et al., 2016a), and provide stable training while being free from the mode collapsing problem (van den Oord et al., 2016b). However, these models must infer each element x_i of the data x = [x_1, x_2, · · · , x_i, · · · , x_N] in a serial manner, requiring O(N) sequential inference steps, in contrast to non-sequential estimators that output x at once (Garnelo et al., 2018; Kim et al., 2019; Kingma & Welling, 2014; Goodfellow et al., 2014). Moreover, it is difficult to employ modern parallel computation because AR models always require the previous time step by definition. This largely limits the use of AR models in practice despite their advantages.
To resolve the problem, we introduce a new and generic approximation method, Neural AutoRegressive model Approximator (NARA), which can be easily plugged into any AR model. We show that NARA can reduce the generation complexity of AR models by relaxing an inevitable AR nature and enables AR models to employ the powerful parallelization techniques in the sequential data generation, which was difficult previously.
NARA consists of three modules: (1) a prior-sample predictor, (2) a confidence predictor, and (3) the original AR model. To relax the AR nature, given a set of past samples, we first assume that each sample of the future sequence can be generated in an independent and identical manner. Thanks to the i.i.d. assumption, using the first module of NARA, we can sample a series of future priors; these future priors are post-processed by the original AR model, generating a set of raw predictions. The confidence predictor evaluates the credibility of these raw samples and decides whether the model needs to re-sample or not. The confidence predictor plays an important role in that the approximation errors can accumulate during the sequential AR generation process if the
erroneous samples with low confidence are left unchanged. Therefore, in our model, a sample can be drawn either by the AR model or by the proposed approximation method, and the final selection of the generated samples is guided by the predicted confidence.
We evaluate NARA with various baseline AR models and data domains including simple curves, image sequences (Yoo et al., 2017), CelebA (Liu et al., 2015a), and ImageNet (Deng et al., 2009). For the sequential data (simple curves and golf), we employed the Long Short-Term Memory models (LSTM) (Hochreiter & Schmidhuber, 1997) as a baseline AR model while PixelCNN++ (Salimans et al., 2017) is used for the image generation (CelebA and ImageNet). Our experiments show that NARA can largely reduce the sample inference complexity even with a heavy and complex model on a difficult data domain such as image pixels.
The main contributions of our work can be summarized as follows: (1) we introduce a new and generic approximation method that can accelerate any AR generation procedure. (2) Compared to a full AR generation, the quality of approximated samples remains reliable by the accompanied confidence prediction model that measures the sample credibility. (3) Finally, we show that this is possible because, under a mild condition, the approximated samples from our method can eventually converge toward the true future sample. Thus, our method can effectively reduce the generation complexity of the AR model by partially substituting it with the simple i.i.d. model.
2 PRELIMINARY: AUTO-REGRESSIVE MODELS
An auto-regressive generative model is a probabilistic model that assigns a probability p(x) to data x consisting of n samples. This method considers the data x as a sequence {x_i | i = 1, · · ·, n}, and the probability p(x) is defined in an AR manner as follows:
p(x) = Π_{i=1}^{n} p(x_i | x_1, ..., x_{i−1}),    (1)
From this formulation, the AR model provides a tractable data distribution p(x). Recently, when training the model parameters with training samples x̂_T, computation parallelization has been actively employed for calculating the distance between the real sample x̂_t ∈ x̂_T and the generated sample x_t from equation (1). Still, generating future samples requires O(N) sequential steps by definition.
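The serial cost can be seen in the following sketch of ancestral sampling from a generic neural AR model; the model interface (returning a torch distribution over the next element) is an assumption for illustration.

import torch

@torch.no_grad()
def ar_sample(model, x_init, n_steps):
    # model(prefix) is assumed to return a distribution over the next element given the prefix
    x = list(x_init)
    for _ in range(n_steps):                 # O(N) sequential model evaluations
        dist = model(torch.stack(x))         # each step depends on all previously drawn samples
        x.append(dist.sample())
    return torch.stack(x)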
3 PROPOSED METHOD
3.1 OVERVIEW
Figure 9 shows the concept of the proposed approximator NARA. NARA consists of a prior-sample predictor f_W and a confidence predictor g_V. Given samples x_{≤i} = {x_1, · · · , x_i}, the prior-sample predictor predicts a chunk of M prior values m_{(i,i+M]}. Afterward, using the prior samples, we draw the future samples x_{(i,i+M]} in parallel. We note that this is possible because the priors m_{(i,i+M]} are i.i.d. variables under our assumption. Subsequently, for the predicted x_{(i,i+M]}, the confidence predictor predicts confidence scores ν_{(i,i+M]}. Then, using the predicted confidence, our model decides whether the samples of interest should be redrawn by the AR model (re-sample x_{i+1}) or simply accepted. A detailed explanation is given in the following sections.
3.2 APPROXIMATING SAMPLE DISTRIBUTION OF AR MODEL
Given the samples x_{≤i} = {x_1, · · · , x_i}, an AR model defines the distribution of future samples x_{(i,j]} = {x_{i+1}, · · · , x_j} as follows:
p_θ(x_{(i,j]} | x_{≤i}) = Π_{l=i+1}^{j} p_θ(x_l | x_1, ..., x_{l−1}).    (2)
Here, θ denotes the parameters of the AR model. The indices i, j are assumed to satisfy j > i, ∀i, j ∈ [1, N]. To approximate the distribution p_θ(x_{(i,j]} | x_{≤i}), we introduce a set of prior samples m_{(i,j−1]} = f_W(x_{≤i}; W), which we assume to be i.i.d. given the observation x_{≤i}. Here, W is the model parameter of the prior-sample predictor f_W(·). Based on this, we define an approximated distribution q_{θ,W}(x_{(i,j]} | x_{≤i}, m_{(i,j−1]}) characterized by the original AR model p_θ and the prior-sample predictor f_W(·) as follows:
q_{θ,W}(x_{(i,j]} | x_{≤i}, m_{(i,j−1]}) ≡ p_θ(x_{i+1} | x_{≤i}) Π_{l=i+2}^{j} p_θ(x_l | x_{≤i}, m_{i+1}, ..., m_{l−1})    [computed in parallel, constant time]
(A)≈ p_θ(x_{i+1} | x_{≤i}) Π_{l=i+2}^{j} p_θ(x_l | x_{≤i}, x_{i+1}, ..., x_{l−1})    [computed sequentially, linear time]
= p_θ(x_{(i,j]} | x_{≤i}).    (3)
Here, approximation (A) holds when m_{(i,j−1]} approaches x_{(i,j−1]}. Note that q_{θ,W} can be computed in constant time because we assume the prior variables m_i to be i.i.d., while p_θ requires linear time complexity.
Then, we optimize the network parameters θ and W by minimizing the negative log-likelihood (NLL) of q_{θ,W}(x_{(i,j]} = x̂_{(i,j]} | x_{≤i}, f_W(x_{≤i})), where x̂_{(i,j]} is a set of samples drawn from the baseline AR model. We guide the prior-sample predictor f_W(·) to generate prior samples that are likely to come from the distribution of the original AR model by jointly minimizing over the AR model parameters θ and the prior-sample predictor parameters W as follows:
min_{θ,W} −log p_θ(x^{(g)}_{(i,j]} | x_{≤i}) − E_{p_θ(x_{(i,j]}|x_{≤i})}[ log q_{θ,W}(x_{(i,j]} | x_{≤i}, f_W(x_{≤i})) ],    (4)
where x^{(g)} denotes the ground-truth sample values in the generated region of the training samples. Note that both p_θ(x) and its approximated distribution q_{θ,W}(x) approach the true data distribution when (1) our prior-sample predictor generates prior samples m close to the true samples x, and (2) the NLL of the AR distribution approaches that of the data distribution. Based on our analysis and experiments, we show in the following sections that our model satisfies these conditions both theoretically and empirically.
3.3 CONFIDENCE PREDICTION
Using the prior-sample predictor f_W(·), our model generates future samples based on the previous samples. However, the accumulation of approximation errors in the AR generation may lead to unsuccessful sample generation. To mitigate this problem, we introduce an auxiliary module, referred to as the confidence predictor, that determines whether to accept or reject the approximated samples generated as described in the previous subsection.
First, we define the confidence of the generated samples as follows:
ν_k = q_{θ,W}(x_k = x̂_k | x_{≤i}, f_W(x_{≤i})),    (5)
where x̂_k ∼ p_θ(x_k | x_{≤k−1}) and k ∈ {1, · · · , j}. The confidence value ν_k provides a measure of how likely the sample generated from q_{θ,W}(·) is to be drawn from p_θ(x_k | x_{≤k−1}). Based on the confidence value ν_k, our model decides whether it can accept the sample x_k or not. More specifically, we choose a threshold ε ∈ [0, 1] and accept samples whose confidence is larger than the threshold ε. When ε = 1, our model always redraws the sample using the AR model no matter how high our confidence is; note that our model becomes equivalent to the target AR model when ε = 1. When ε = 0, our model always accepts the approximated samples. In practice, we accept the first k̂(ε) − 1 of the M approximated samples, where k̂(ε) is the first index whose confidence falls below the threshold ε. Subsequently, we re-sample x_{k̂(ε)} from the original AR model and repeat the approximation scheme until the maximum length is reached.
However, it is impractical to calculate equation (5) directly because we would need the samples x̂_k from the original AR model: we would first have to run the AR model forward to obtain the next sample and then go back to calculate the confidence to decide whether to use that sample, which defeats the purpose.
To circumvent this problem, we introduce a network g_V(·) that approximates the binary decision variable h^ε_k = I(ν_k ≥ ε) as follows:
h^ε_{(i,j]} ≈ g_V(x_{≤i}, f_W(x_{≤i})),    (6)
where h^ε_{(i,j]} = {h^ε_{i+1}, · · · , h^ε_j}. The network g_V(·) is implemented by an auto-encoder architecture with a sigmoid output activation, which makes equation (6) equivalent to logistic regression.
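Putting the pieces together, one block of the confidence-guided generation can be sketched as follows. All module interfaces (parallel_sample, step, the prior and confidence networks) are our own placeholders for the components described above, not an existing API.

import torch

@torch.no_grad()
def nara_block(ar_model, prior_net, conf_net, x_past, M, eps):
    m = prior_net(x_past)                            # chunk of M i.i.d. prior samples
    x_hat = ar_model.parallel_sample(x_past, m)      # post-process all priors in parallel
    conf = conf_net(x_past, m)                       # (M,) predicted confidence scores in [0, 1]
    below = (conf < eps).nonzero()
    k = below[0].item() if below.numel() > 0 else M  # first low-confidence position
    accepted = x_hat[:k]
    if k < M:                                        # re-sample the first rejected element serially
        x_next = ar_model.step(torch.cat([x_past, accepted]))
        accepted = torch.cat([accepted, x_next.unsqueeze(0)])
    return torch.cat([x_past, accepted])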
3.4 TRAINING DETAILS
To train the proposed model, we randomly select a split point s^{(i)} ∈ [1, N] for each sequence x^{(i)} in a training batch. Then, we predict l^{(i)} = min(B, N − s^{(i)}) sample values after s^{(i)}, where B denotes the number of samples the prediction considers. To realize equation (4), we minimize the loss over the training samples x^{(k)}, k = 1, · · · , K, and the locations s^{(i)} ∈ [1, N] as
min_{θ,W} −(1/K) Σ_{i=1}^{K} log p_θ(x^{(i)}_{≤ s^{(i)}+l^{(i)}}) − (1/KM) Σ_{i=1}^{K} Σ_{j=1}^{M} log q_{θ,W}(x̂^{(i)(j)}_{(s^{(i)}, s^{(i)}+l^{(i)}]} | x^{(i)}_{≤ s^{(i)}}).    (7)
Here, x̂^{(i)(j)}_{(s^{(i)}, s^{(i)}+l^{(i)}]} for j ∈ {1, · · · , M} denotes the M sequences drawn from the AR distribution p_θ(x_{(s^{(i)}, s^{(i)}+l^{(i)}]} | x^{(i)}_{≤ s^{(i)}}) for the i-th training example. From our experiments, we found that M = 1 sample is enough to train the model. This training scheme guides the distribution produced by NARA to fit the original AR distribution and to generate future samples simultaneously. To train g_V(·), a binary cross-entropy loss is used with h^ε from equation (6), with the other parameters frozen.
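A simplified training step corresponding to equation (7) could look as follows; the helper methods on the AR model (nll, sample, nll_given_priors) are hypothetical names standing in for the quantities defined above.

import torch

def nara_train_step(ar_model, prior_net, x, B_max):
    # x: (N, ...) one training sequence; pick a random split point and a chunk after it
    N = x.size(0)
    s = torch.randint(1, N, (1,)).item()
    l = min(B_max, N - s)
    nll_ar = ar_model.nll(x[:s + l])                  # teacher-forced NLL of the AR model
    with torch.no_grad():
        x_hat = ar_model.sample(x[:s], l)             # one drawn continuation (M = 1 in practice)
    m = prior_net(x[:s])                              # i.i.d. priors for the chunk
    nll_q = ar_model.nll_given_priors(x_hat, x[:s], m)   # NLL of the drawn chunk under q
    return nll_ar + nll_q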
3.5 THEORETICAL EXPLANATION
Here, we show that the proposed NARA is a regularized version of the original AR model. At the extremum, the approximated sample distribution from NARA is equivalent to that of the original AR model. In NARA, our approximate distribution q(x(i,j+1]|x≤i,m(i,j]) is reformulated as follows:
q_{φ,θ}(x_{(i,j+1]} | x_{≤i}, m_{(i,j]}) ≡ p_θ(x_{i+1} | x_{≤i}) Π_{l=i+2}^{j+1} q_φ(x_l | x_{≤i}, m_{i+1}, ..., m_{l−1})
= [ p_θ(x_{≤i+1})/p_θ(x_{≤i}) · p_θ(x_{≤i+2})/p_θ(x_{≤i+1}) · ... · p_θ(x_{≤j+1})/p_θ(x_{≤j}) ] · [ p_θ(x_{≤i+1})/q_φ(x_{≤i}, m_{i+1}) · ... · q_φ(x_{≤i}, m_{(i,j]}, x_{j+1})/p_θ(x_{≤j+1}) ],    (8)
where the first bracket telescopes to p_θ(x_{(i,j+1]} | x_{≤i}) and the second bracket defines R(p_θ, q_φ, m_{(i,j]}),
and the parameter φ denotes the network parameters of the approximated distribution q(·). Therefore, our proposed cost function can be represented as the negative log-likelihood of the AR model with a regularizer −log R(p_θ, q_φ, m_{(i,j]}):
min_{φ,θ} −E_{p_θ(x_{(i,j+1]}|x_{≤i})}[ log p_θ(x_{(i,j+1]} | x_{≤i}) + log R(p_θ, q_φ, m_{(i,j]}) ]    (9)
Note that the proposed cost function is equivalent to that of the original AR model when log R(p_θ, q_φ, m_{(i,j]}) = 0, which holds under the condition m_{(i,j]} = x_{(i,j]} and q_φ(·|·) = p_θ(·|·), where m_{(i,j]} = f_W(x_{≤i}). By minimizing equation (9), R(p_θ, q_φ, m_{(i,j]}) forces the optimization to estimate the probability ratio between q_φ and p_θ while minimizing the gap induced by the ratio q_φ(x_{≤i}, m_{(i,j]}, x_{j+1}) / q_φ(x_{≤i}, m_{(i,j]}), so that m_{(i,j]} = f_W(x_{≤i}) approaches x_{(i,j]}.
4 RELATED WORK
Deep AR and regression models: With the adoption of deep neural networks, AR models handling sequential data have achieved significant improvements on various sequential data, including text (Sutskever et al., 2011), sound (Vinyals et al., 2012; Tamamori et al., 2017), and images (van den Oord et al., 2016b; Salimans et al., 2017). The idea has been extended to flow-based models, which use auto-regressive sample flows (Kingma & Dhariwal, 2018; Germain et al., 2015; Papamakarios et al., 2017; Kingma et al., 2016) to infer complex distributions and have reported meaningful progress. Also, attempts to replace the kernel function of stochastic regression and prediction processes with neural networks (Yoo et al., 2017; Garnelo et al., 2018; Kim et al., 2019) have been proposed to deal with semi-supervised data without imposing an explicit sequential relationship.
Approximated AR methods: Reducing the complexity of the deep AR model has been explored by a number of studies, either targeting multiple domain (Seo et al., 2018; Stern et al., 2018) or specific target such as machine translation (Wang et al., 2018; Ghazvininejad et al., 2019; Welleck et al., 2019; Wang et al., 2018; 2019) and image generation (Ramachandran et al., 2017).
Going one step further than the previous studies, we propose a new general approximation method for AR models by assuming an i.i.d. condition for the "easy to predict" samples. This differentiates our approach from (Seo et al., 2018) in that we do not sequentially approximate the future samples with a smaller AR model but use a chunk-wise predictor to approximate the samples at once. In addition, our confidence prediction module can be seen as a stochastic version of the verification step in (Stern et al., 2018), which helps our model converge toward the original solution. This confidence-guided approximation can easily be combined with other domain-specific AR approximation methods because our method is not limited to domain-specific selection cues such as quotation (Welleck et al., 2019; Ghazvininejad et al., 2019) or nearby convolutional features (Ramachandran et al., 2017).
5 EXPERIMENTS
In this section, we demonstrate the data generation results of the proposed NARA. To check feasibility, we first apply our method to a time-series data generation problem and then to image generation. The detailed model structures and additional results are provided in the Supplementary material. The implementation of the methods will be available soon.
5.1 EXPERIMENTAL SETTING
Time-series data generation problem: In this problem, we used an LSTM as the base model. First, we tested our method with a simple one-dimensional sinusoidal function. Second, we tested video sequence data (golf swings) to demonstrate a more complicated case. Here, we repeated the swing sequences 20 times to make periodic image sequences and resized each image to 64 × 64 resolution. Besides the LSTM, we used an autoencoder structure to embed the images into a latent space. The projected points of the image sequences are linked by the LSTM, similar to (Yoo et al., 2017). For both cases, we used the ADAM optimizer (Kingma & Ba, 2015) with default settings and a learning rate of 0.001.
Image generation: For the image generation task, we used PixelCNN++ (Salimans et al., 2017) as the base model. The number of channels of the network was set to 160 and the number of logistic mixtures was set to 10. See (Salimans et al., 2017) for the detailed explanation of the parameters. In this task, the baseline AR model (PixelCNN++) is much heavier than those used in the previous tasks. Here, we show that the proposed approximated model can significantly reduce the computational burden of the original AR model. The prior-sample predictor fW (·) and the confidence estimator gV (·) were both implemented by U-net structured autoencoder (Ronneberger et al., 2015). We optimized the models using ADAM with learning rate 0.0001. Every module was trained from scratch. We mainly used CelebA (Liu et al., 2015b) resizing the samples to 64× 64 resolution. In the experiments, we randomly pick 36, 000 images for training and 3, 000 images for validation.
Training and evaluation: For the first problem, we used a single GPU (NVIDIA Titan XP), and for the second problem, four to eight GPUs (NVIDIA Tesla P40) were used1. The training and inference code used in this section is implemented with the PyTorch library. For the quantitative evaluation, we measure the error between the true future samples and the generated ones, and we also employ the Fréchet Inception Distance (FID) (Heusel et al., 2017) as a measure of model performance and of the visual quality of the generated images for the second, image generation problem.
5.2 ANALYSIS
5.2.1 TIME-SERIES DATA GENERATION
Figure 2a shows the generation results of the one-dimensional time-series from our approximation model with different acceptance ratios (red, green, and blue) and the baseline LSTM models (black).
1The overall experiments were conducted on the NSML (Sung et al., 2017) GPU system.
From the figure, we can see that both models correctly generate the future samples. Please note that, from the prior sample generation results (magenta), the prior samples m converge to the true samples x, as claimed in Section 3.5.
The graph in Figure 2b shows the acceptance ratio and the ℓ1 error over the confidence threshold ε ∈ (0, 1]. The error denotes the distance between the ground-truth samples x and the generated ones. As expected, our model accepts more samples as the threshold decreases. However, contrary to our initial expectations, the error-threshold graph shows that accepting fewer samples does not always yield more accurate generation results. From the graph, generation with an intermediate acceptance ratio achieved the best result. Interestingly, we report that this tendency between the acceptance ratio and the generation quality was repeatedly observed in the other datasets as well.
Figure 3 shows the image sequence generation results from NARA. From the result, we can see that the proposed approximation method remains effective when the input data dimension becomes much larger and the AR model becomes more complicated. On the golf swing dataset, the proposed approximation model also succeeded in capturing the periodic change of the image sequence. Table 3 shows that a proper amount of approximation can obtain better accuracy than none, similar to the other experiments. One notable observation is that the period of the image sequence changed slightly across different ratios of approximated sample acceptance (Figure 3). One possible explanation is that the approximation module suppresses rapid changes in the samples, which affects the interval of a single cycle.
5.2.2 IMAGE GENERATION
Figure 4 shows that our method can be integrated into PixelCNN++ and generates images with the significant amount of the predicted sample acceptance (white region). We observed that the confidence was mostly low (blue) in the eyes, mouth, and boundary regions of the face, and the PixelCNN is used to generate those regions. This shows that compared to the other homogeneous regions of the image, the model finds it relatively hard to describe the details, which matches with our intuition.
The graphs in Figure 5 present the quantitative analysis regarding the inference time and the NLL in generating images. In Figure 5a, the relation between inference time and the skimming ratio is reported. The results show that the inference speed is significantly improved as more pixels are accepted. Table 2 further supports this that our approximation method generates a fair quality of images while it speeds up the generation procedure 5 ∼ 10 times faster than the base model. In the image generation example also, we found that the fair amount of acceptance can improve the perceptual visual quality of the generated images compared to the vanilla PixelCNN++ (Table 2). Our method benefits from increasing the acceptance ratio to some extent in terms of FID showing a U-shaped trend over the variation, similar to those in Figure 2b. Note that a lower FID score identifies a better model. Consistent with previous results, we can conjecture that the proposed approximation scheme learns the mean-prior of the images and guides the AR model to prevent generating erroneous images. The confidence maps and the graph illustrated in Figure 4, 5a, and 5c
support this conjecture. Complex details such as eyes, mouths and contours are considerably harder to generate than the backgrounds and the remaining face regions.
In Figures 5b and 5c, the graphs support the convergence of the proposed method. The graph in Figure 5b shows the NLL of the base PixelCNN++ and that of our proposed method in the full-accept case, i.e., when we fully trust the approximation results. Note that the NLL converged in both cases, and the PixelCNN achieved a noticeably lower NLL than fully accepting the pixels at every epoch. This was already expected in Section 3.2, as the baseline AR model approaches the data distribution more closely than our module. This supports the necessity of the re-generation procedure using PixelCNN++, especially when the approximation module finds that a pixel has low confidence.
The graph in Figure 5c presents the ℓ1 distance between the generated prior pixels m and the corresponding ground-truth pixels x in the test data reconstruction. Again, similar to the previous time-series experiments, the model successfully converged to the original values (m approaches x). Combined with the result in Figure 5b, this supports the convergence conditions claimed in Section 3.2. Regarding convergence, we compared the NLL of the converged PixelCNN++ distribution from the proposed scheme with that of PixelCNN++ on the CelebA dataset from the original paper (Salimans et al., 2017).
6 CONCLUSION
In this paper, we proposed the efficient neural auto-regressive model approximation method, NARA, which can be used in various auto-regressive (AR) models. By introducing the prior-sampling and confidence prediction modules, we showed that NARA can theoretically and empirically approximate the future samples under a relaxed causal relationships. This approximation simplifies the generation process and enables our model to use powerful parallelization techniques for the sample generation procedure. In the experiments, we showed that NARA can be successfully applied with different AR models in the various tasks from simple to complex time-series data and image pixel generation. These results support that the proposed method can introduce a way to use AR models in a more efficient manner.
B SUPPLEMENTARY EXPLAIN ON PROPOSED SAMPLE GENERATION
Figure 6 shows the detailed process of sampling when the proposed NARA module is attached to the mother AR model. The diagram describes the Sinusoidal function generation example, which is the simplest example in our paper. In each auto-regressive step, we decide if we use the approximated value predicted by our sample predictor (predict the value in chunkwise manner assuming i.i.d condition) or sample the value with the mother auto-regressive model. This decision is conducted based on the confidence estimation by the Confidence predictor of the paper. The chunk-wise estimation step can be boosted by recent parallel computing methods different from the mother autoregressive model, and this can possibly relax the computation burden of the original auto-regressive model. More importantly, this scheme can be applied to various auto-regressive models.
C SUPPLEMENTARY EXPERIMENTS
C.1 SUPPLEMENTARY GENERATION RESULTS
In addition to the results presented in the paper, we show supplementary generation examples in the figures below. Figure 3 and Table 3 present the image sequence generation results for the other golf swing sequence. In this case as well, we observe swing-cycle period changes and acceptance-ratio/error tendencies similar to those reported in the paper. Our approximation slightly affects the cycle of the time-series data, and the result also shows that the approximation can achieve even better prediction results than the no-acceptance case.
Figures 11 and 12 show the additional facial image generation results for ε ∈ [0.0, 1.0]. We can see that pixels in the boundary regions were re-sampled more frequently than other, relatively simple face parts such as the cheeks or forehead. We also tested our model on ImageNet classes, and the results are presented in Figure 10.
C.2 ANALYSIS ON WHEN CONFIDENCE MODULE FAILS.
If the prior-sample predictor fW performs worse than expected, the confidence module gV will reject all prior samples; hence, in this case, the model repeatedly draws samples using the original AR model in a sample-by-sample manner. However, in an even worse situation, the confidence module could always accept prior samples, including low-quality ones. To simulate the failure of both fW and gV, we manually fix the "accept region" regardless of the predicted confidence score. We select ten samples with the lowest confidence scores and report the results in Figure 8. In the figure, we can observe that drastic errors can occur when both the prior-sample predictor fW and the confidence module gV fail. However, in practice, our confidence module gV can detect such drastic errors during the sampling stage, as shown in Figure 8b; hence, such extreme errors will occur rarely.
C.3 APPLYING NARA WITH FASTER PIXELCNN++
To show that our method can be added to diverse AR models, we also combined our skimming method with the fast version of PixelCNN++2. Figure 9 and Table 4 show the generated samples and the generation time using the combined model (skim + Fast PixelCNN++). Due to the lack of time to fully investigate all datasets with the new implementation, we conducted the experiment on CIFAR-10, which is the most widely used dataset in PixelCNN works and is more diverse than the CelebA dataset. From the results, we show that our method, augmented with Fast PixelCNN++, can make the baseline algorithm much faster, as suggested in the paper.
2Ramachandran, Prajit, et al. ”Fast generation for convolutional autoregressive models.” arXiv preprint arXiv:1704.06001 (2017). | 1. What is the main contribution of the paper regarding autoregressive models?
2. How does the proposed approach work, and what are its potential advantages?
3. What are the weaknesses of the paper, particularly in terms of writing and experimentation?
4. Do you have any questions or concerns regarding the connection between the proposed method and Generative Adversarial Networks (GANs)?
5. How would you assess the clarity and quality of the paper's content? | Review | Review
The paper presents a technique for approximately sampling from autoregressive models using something like a a proposal distribution and a critic. The idea is to chunk the output into blocks and, for each block, predict each element in the block independently from a proposal network, ask a critic network whether the block looks sensible and, if not, resampling the block using the autoregressive model itself.
In broad strokes the approach makes sense. It assumes, essentially, that parts of the sequence are hard to predict and parts are easy and, if there are enough easy parts, this procedure should lead to faster inference.
The paper's writing is not ideal. There are some grammatical mistakes that harm reading (for example, the second paragraph of the introduction says "However, these models must infer each element of the data x ∈ RN step by step in a serial manner, requiring O(N) times more than other non-sequential estimators", where it is unclear what is O(N) more than what, how is this measured, etc). That said I was mostly able to follow all key points.
The authors do not point out the obvious connection to GANs, which also rely on a critic network to decide whether a sample looks like it comes from the correct distribution, except in GANs the critic is jointly trained with the generator (as opposed to here where it's trained after) and in GANs the critic is only used at training time, while here the critic is used to accelerate sampling (the better the critic the faster this method can sample).
I wish the experimental results were a little more explicit about the time vs quality tradeoff; I expected to see more plots with pareto curves, since as-is it's hard to judge the magnitude of the tradeoffs involved. I'd also like a more thorough analysis on why there is a non-monotonic tradeoff in some experiments (table 1, figure 2(b)) between the amount of approximation and the sample quality; this makes me think something else is going on here as this approximate inference method should just decrease quality, never increase it.
Overall I lean towards accepting the paper, but I encourage the authors to revise the writing and to add a few plots explicitly showing the time vs quality tradeoff both in likelihood (wrt the full model) and in downstream metrics like FID. |
ICLR | Title
Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling
Abstract
We propose a generic confidence-based approximation that can be plugged into and simplify the auto-regressive generation process, with proven convergence. We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner using an efficient predictor. Given the past samples and future priors, the mother AR model can post-process the priors while the accompanying confidence predictor decides whether the current sample needs resampling or not. Thanks to the i.i.d. assumption, the post-processing can update each sample in parallel, which remarkably accelerates the mother model. Our experiments on different data domains, including sequences and images, show that the proposed method can successfully capture the complex structures of the data and generate meaningful future samples at lower computational cost while preserving the sequential relationships in the data.
1 INTRODUCTION
The auto-regressive (AR) model, which infers and predicts the causal relationship between previous and future samples in sequential data, has been widely studied since the beginning of machine learning research. The recent advances of auto-regressive models brought by neural networks have achieved impressive success in handling complex data including texts (Sutskever et al., 2011), audio signals (Vinyals et al., 2012; Tamamori et al., 2017; van den Oord et al., 2016a), and images (van den Oord et al., 2016b; Salimans et al., 2017).
It is well known that AR models can learn a tractable data distribution p(x) and can be easily extended to both discrete and continuous data. Due to their nature, AR models have shown an especially good fit with sequential data, such as voice generation (van den Oord et al., 2016a), and provide stable training while being free from the mode-collapse problem (van den Oord et al., 2016b). However, these models must infer each element xi of the data x = [x1, x2, · · · , xi, · · · , xN ] in a serial manner, requiring O(N) sequential steps, unlike non-sequential estimators that output x at once (Garnelo et al., 2018; Kim et al., 2019; Kingma & Welling, 2014; Goodfellow et al., 2014). Moreover, it is difficult to employ modern parallel computation because AR models always require the previous time step by definition. This largely limits the use of AR models in practice despite their advantages.
To resolve the problem, we introduce a new and generic approximation method, Neural Auto-Regressive model Approximator (NARA), which can be easily plugged into any AR model. We show that NARA can reduce the generation complexity of AR models by relaxing their inherently sequential nature, enabling AR models to employ powerful parallelization techniques in sequential data generation, which was difficult previously.
NARA consists of three modules: (1) a prior-sample predictor, (2) a confidence predictor, and (3) the original AR model. To relax the AR nature, given a set of past samples, we first assume that each sample of the future sequence can be generated in an independent and identical manner. Thanks to the i.i.d. assumption, using the first module of NARA, we can sample a series of future priors; these future priors are post-processed by the original AR model, generating a set of raw predictions. The confidence predictor evaluates the credibility of these raw samples and decides whether the model needs re-sampling or not. The confidence predictor plays an important role because approximation errors can accumulate during the sequential AR generation process if erroneous samples with low confidence are left unchanged. Therefore, in our model, each sample can be drawn either by the AR model or by the proposed approximation method, and the selection of the generated samples is guided by the predicted confidence.
We evaluate NARA with various baseline AR models and data domains including simple curves, image sequences (Yoo et al., 2017), CelebA (Liu et al., 2015a), and ImageNet (Deng et al., 2009). For the sequential data (simple curves and golf), we employed Long Short-Term Memory (LSTM) models (Hochreiter & Schmidhuber, 1997) as the baseline AR model, while PixelCNN++ (Salimans et al., 2017) was used for image generation (CelebA and ImageNet). Our experiments show that NARA can largely reduce the sample inference complexity even with a heavy and complex model on a difficult data domain such as image pixels.
The main contributions of our work can be summarized as follows: (1) we introduce a new and generic approximation method that can accelerate any AR generation procedure. (2) Compared to full AR generation, the quality of approximated samples remains reliable thanks to the accompanying confidence prediction model that measures sample credibility. (3) Finally, we show that this is possible because, under a mild condition, the approximated samples from our method eventually converge toward the true future samples. Thus, our method can effectively reduce the generation complexity of the AR model by partially substituting it with the simple i.i.d. model.
2 PRELIMINARY: AUTO-REGRESSIVE MODELS
An auto-regressive generative model is a probabilistic model that assigns a probability p(x) to data x consisting of n samples. This method treats the data x as a sequence {xi | i = 1, · · · , n}, and the probability p(x) is defined in an AR manner as follows:
$$p(x) = \prod_{i=1}^{n} p(x_i \mid x_1, \ldots, x_{i-1}), \tag{1}$$
From this formulation, the AR model provides a tractable data distribution p(x). Recently, when training the model parameters on training samples x̂T, computational parallelization has been actively employed for calculating the distance between a real sample x̂t ∈ x̂T and the generated sample xt from equation (1). Still, generating future samples requires O(N) sequential steps by definition.
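To make the cost of this sequential factorization concrete, the following is a minimal sketch (not the paper's code) of plain AR sampling: each element is drawn only after all previous elements are available, so wall-clock time grows linearly with N regardless of available hardware. The `conditional` callable is a placeholder for any trained AR model.

```python
# Minimal sketch of sequential AR sampling (illustrative only).
import numpy as np

def ar_sample(conditional, n, seed=0):
    """conditional(prefix) -> probability vector over the next value."""
    rng = np.random.default_rng(seed)
    x = []
    for _ in range(n):                    # N strictly sequential steps: O(N)
        p = conditional(np.asarray(x))    # p(x_i | x_1, ..., x_{i-1})
        x.append(int(rng.choice(len(p), p=p)))
    return np.asarray(x)
```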
3 PROPOSED METHOD
3.1 OVERVIEW
Figure 9 shows the concept of the proposed approximator NARA. NARA consists of a prior-sample predictor fW and a confidence predictor gV. Given samples x≤i = {x1, · · · , xi}, the prior-sample predictor predicts a chunk of M prior values m(i,i+M]. Afterward, using these prior samples, we draw the future samples x(i,i+M] in parallel. This is possible because the prior m(i,i+M] is an i.i.d. variable under our assumption. Subsequently, for the predicted x(i,i+M], the confidence predictor outputs confidence scores ν(i,i+M]. Using the predicted confidence, our model then decides whether the samples of interest should be redrawn by the AR model (re-sample xi+1) or simply accepted. The details are described in the following sections.
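For orientation, the sketch below summarizes one NARA generation round; the function names (`f_prior`, `ar_postprocess`, `g_conf`) are placeholders standing in for the three modules above, not the authors' released API.

```python
# Schematic sketch of one NARA round (illustrative; not the released code).
def nara_round(x_past, f_prior, ar_postprocess, g_conf, M, eps):
    m = f_prior(x_past, M)              # i.i.d. prior chunk m_{(i, i+M]}
    x_hat = ar_postprocess(x_past, m)   # chunk-wise AR post-processing (parallelisable)
    conf = g_conf(x_past, m)            # confidence score per approximated sample
    k = 0
    while k < M and conf[k] > eps:      # accept the confident prefix
        k += 1
    # x_hat[:k] is accepted; sample index k is re-drawn by the original AR model
    return x_hat[:k], k
```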
3.2 APPROXIMATING SAMPLE DISTRIBUTION OF AR MODEL
Given the samples x≤i = {x1, · · · , xi}, an AR model defines the distribution of future samples x(i,j] = {xi+1, · · · , xj} as follows:
$$p_\theta(x_{(i,j]} \mid x_{\le i}) = \prod_{l=i+1}^{j} p_\theta(x_l \mid x_1, \ldots, x_{l-1}). \tag{2}$$
Here, θ denotes the parameters of the AR model. The indices i, j are assumed to satisfy j > i, ∀i, j ∈ [1, N]. To approximate the distribution pθ(x(i,j]|x≤i), we introduce a set of prior samples m(i,j−1] = fW (x≤i;W), which we assume to be i.i.d. given the observation x≤i. Here, W is the model parameter of the prior-sample predictor fW (·). Based on this, we define an approximated distribution qθ,W (x(i,j]|x≤i,m(i,j−1]) characterized by the original AR model pθ and the prior-sample predictor fW (·) as follows:
$$
\begin{aligned}
q_{\theta,W}(x_{(i,j]} \mid x_{\le i}, m_{(i,j-1]})
&\equiv p_\theta(x_{i+1} \mid x_{\le i}) \underbrace{\prod_{l=i+2}^{j} p_\theta(x_l \mid x_{\le i}, m_{i+1}, \ldots, m_{l-1})}_{\text{computed in parallel (constant time)}} \\
&\overset{(A)}{\simeq} p_\theta(x_{i+1} \mid x_{\le i}) \underbrace{\prod_{l=i+2}^{j} p_\theta(x_l \mid x_{\le i}, x_{i+1}, \ldots, x_{l-1})}_{\text{computed sequentially (linear time)}}
= p_\theta(x_{(i,j]} \mid x_{\le i}).
\end{aligned}
\tag{3}
$$
Here, approximation (A) holds when m(i,j−1] approaches x(i,j−1]. Note that qθ,W can be computed in constant time because we assume the prior variables mi to be i.i.d., whereas pθ requires linear time complexity.
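The parallelism claim can be illustrated with a small sketch: once the priors m are fixed, each conditional in the first product of equation (3) depends only on the observed prefix and the priors, so the conditionals form an embarrassingly parallel batch. The helper below is only illustrative; in a deep-learning framework the list comprehension would be a single batched forward pass.

```python
# Sketch: with i.i.d. priors m fixed, the chunk conditionals are independent
# of one another and can be evaluated as one batch (constant wall-clock time
# with enough parallel hardware), unlike the sequential case.
def chunk_conditionals(ar_conditional, x_past, m):
    contexts = [list(x_past) + list(m[:k]) for k in range(len(m))]
    return [ar_conditional(ctx) for ctx in contexts]  # batchable, no dependency chain
```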
We then optimize the network parameters θ and W by minimizing the negative log-likelihood (NLL) of qθ,W (x(i,j] = x̂(i,j]|x≤i, fW (x≤i)), where x̂(i,j] is a set of samples drawn from the baseline AR model. We guide the prior-sample predictor fW (·) to generate prior samples that are likely to come from the distribution of the original AR model by minimizing over the AR parameters θ and the prior-sample predictor parameters W jointly as follows:
$$\min_{\theta, W} \; -\log p_\theta\big(x^{(g)}_{(i,j]} \mid x_{\le i}\big) - \mathbb{E}_{p_\theta(x_{(i,j]} \mid x_{\le i})}\Big[\log q_{\theta,W}\big(x_{(i,j]} \mid x_{\le i}, f_W(x_{\le i})\big)\Big], \tag{4}$$
where x(g) denotes the ground-truth sample values in the generated region of the training samples. Note that both pθ(x) and its approximation qθ,W (x) approach the true data distribution when (1) our prior-sample predictor generates prior samples m close to the true samples x, and (2) the NLL of the AR distribution approaches that of the data distribution. Based on our analysis and experiments, we show in the following sections that our model can satisfy these conditions theoretically and empirically.
3.3 CONFIDENCE PREDICTION
Using the prior-sample predictor fW (·), our model generates future samples based on the previous samples. However, accumulation of approximation errors in the AR generation may lead to unsuccessful sample generation. To mitigate this problem, we introduce an auxiliary module, referred to as the confidence predictor, that determines whether to accept or reject the approximated samples generated as described in the previous subsection.
First, we define the confidence of the generated samples as follows:
$$\nu_k = q_{\theta,W}\big(x_k = \hat{x}_k \mid x_{\le i}, f_W(x_{\le i})\big), \tag{5}$$
where x̂k ∼ pθ(xk|x≤k−1) and k ∈ {1, · · · , j}. The confidence value νk measures how likely the sample generated from qθ,W (·) is to be drawn from pθ(xk|x≤k−1). Based on the confidence value νk, our model decides whether to accept the sample xk. More specifically, we choose a threshold ε ∈ [0, 1] and accept samples whose confidence score exceeds ε. When ε = 1, our model always redraws the sample using the AR model no matter how high the confidence is; thus our model becomes equivalent to the target AR model when ε = 1. When ε = 0, our model always accepts the approximated samples. In practice, we accept the first k̂(ε) − 1 of the M approximated samples, where k̂(ε) is the smallest index k with νk ≤ ε. Subsequently, we re-sample xk̂(ε) from the original AR model and repeat the approximation scheme until reaching the maximum length.
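A small sketch of this acceptance rule (assuming the confidence scores are already available as an array) may make the role of ε clearer; the edge cases ε = 1 and ε = 0 reduce to the pure AR model and to full acceptance, respectively.

```python
# Illustrative acceptance rule: accept the leading samples whose confidence
# exceeds eps; the first sample at or below eps is re-drawn by the AR model.
def first_reject(nu, eps):
    for k, v in enumerate(nu):
        if v <= eps:
            return k          # accept nu[:k]; index k goes back to the AR model
    return len(nu)            # the whole chunk is accepted

# eps = 1.0 -> nothing accepted (plain AR sampling); eps = 0.0 -> all accepted.
```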
However, it is impractical to compute equation (5) directly because it requires the samples x̂k from the original AR model: we would first have to run the AR model forward to obtain the next sample and then go back to compute the confidence that decides whether to use it, which defeats the purpose.
To detour this problem, we introduce a network gV (·) that approximates the binary decision variable hεk = I(νk ≥ ε) as follows: hε(i,j] ≈ gV (x≤i, fW (x≤i)), (6) where hε(i,j] = {hεi+1, · · · , hεj}. The network gV (·) is implemented with an auto-encoder architecture with a sigmoid output activation, which makes equation (6) equivalent to logistic regression.
3.4 TRAINING DETAILS
To train the proposed model, we randomly select a position s(i) ∈ [1, N] for each sequence x(i) in a training batch. Then, we predict l(i) = min(B, N − s(i)) sample values after s(i), where B denotes the number of samples the prediction considers. To compute equation (4), we minimize the loss over the training samples x(i), i = 1 · · · K, and the locations s(i) ∈ [1, N] as
$$\min_{\theta, W} \; -\frac{1}{K}\sum_{i=1}^{K} \log p_\theta\big(x^{(i)}_{\le s(i)+l(i)}\big) - \frac{1}{KM}\sum_{i=1}^{K}\sum_{j=1}^{M} \log q_{\theta,W}\big(\hat{x}^{(i)(j)}_{(s(i),\, s(i)+l(i)]} \mid x^{(i)}_{\le s(i)}\big). \tag{7}$$
Here, x̂(i)(j)(s(i),s(i)+l(i)] for j ∈ {1 · · · M} denotes the M sequences drawn from the AR distribution pθ(x(s(i),s(i)+l(i)]|x(i)≤s(i)) for the i-th training example. From our experiments, we found that M = 1 sample is enough to train the model. This training scheme guides the distribution drawn by NARA to fit the original AR distribution and to generate future samples simultaneously. To train gV (·), the binary cross-entropy loss is used with hε from equation (6), with the other parameters frozen.
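The two training signals can be summarized in a short PyTorch-style sketch; tensor shapes and variable names are assumptions for illustration, not the authors' released code.

```python
# Hedged sketch of the training losses described above (PyTorch).
import torch.nn.functional as F

def nara_losses(log_p_ar, log_q_approx, conf_logits, accept_targets):
    # Equation (7): NLL of the AR model on ground truth plus NLL of the
    # approximated distribution q on samples drawn from the AR model.
    joint_loss = -(log_p_ar.mean() + log_q_approx.mean())
    # Confidence predictor g_V: logistic regression against binary labels
    # h = 1[nu >= eps], trained with the other parameters frozen.
    conf_loss = F.binary_cross_entropy_with_logits(conf_logits, accept_targets)
    return joint_loss, conf_loss
```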
3.5 THEORETICAL EXPLANATION
Here, we show that the proposed NARA is a regularized version of the original AR model. At the extremum, the approximated sample distribution from NARA is equivalent to that of the original AR model. In NARA, our approximate distribution q(x(i,j+1]|x≤i,m(i,j]) is reformulated as follows:
$$
\begin{aligned}
q_{\phi,\theta}(x_{(i,j+1]} \mid x_{\le i}, m_{(i,j]})
&\equiv p_\theta(x_{i+1} \mid x_{\le i}) \prod_{l=i+2}^{j+1} q_\phi(x_l \mid x_{\le i}, m_{i+1}, \ldots, m_{l-1}) \\
&= \underbrace{\frac{p_\theta(x_{\le i+1})}{p_\theta(x_{\le i})} \cdot \frac{p_\theta(x_{\le i+2})}{p_\theta(x_{\le i+1})} \cdots \frac{p_\theta(x_{\le j+1})}{p_\theta(x_{\le j})}}_{p_\theta(x_{(i,j+1]} \mid x_{\le i})}
\cdot \underbrace{\left[\frac{p_\theta(x_{\le i+1})}{q_\phi(x_{\le i}, m_{i+1})} \cdots \frac{q_\phi(x_{\le i}, m_{(i,j]}, x_{j+1})}{p_\theta(x_{\le j+1})}\right]}_{R(p_\theta,\, q_\phi,\, m_{(i,j]})},
\end{aligned}
\tag{8}
$$
where φ denotes the network parameters of the approximated distribution q(·). Therefore, our proposed cost function can be written as the negative log-likelihood of the AR model with a regularizer − logR(pθ, qφ, m(i,j]):
$$\min_{\phi, \theta} \; -\mathbb{E}_{p_\theta(x_{(i,j+1]} \mid x_{\le i})}\Big[\log p_\theta\big(x_{(i,j+1]} \mid x_{\le i}\big) + \log R\big(p_\theta, q_\phi, m_{(i,j]}\big)\Big] \tag{9}$$
Note that the proposed cost function is equivalent to that of the original AR model when logR(pθ, qφ, m(i,j]) = 0, which holds under the condition m(i,j] = x(i,j] and qφ(·|·) = pθ(·|·), where m(i,j] = fW (x≤i). By minimizing equation (9), R(pθ, qφ, m(i,j]) steers the optimization toward estimating the probability ratio between qφ and pθ while closing the gap between the ratio qφ(x≤i, m(i,j], xj+1)/qφ(x≤i, m(i,j]) and the corresponding AR conditional, so that m(i,j] = fW (x≤i) approaches x(i,j].
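As a quick sanity check on the extremum claim (this follows directly from equation (8) and is not an additional result of the paper), each factor of R cancels under the stated condition:

```latex
% When m_{(i,j]} = x_{(i,j]} and q_\phi(\cdot) = p_\theta(\cdot), we have
% q_\phi(x_{\le i}, m_{(i,l-1]}) = p_\theta(x_{\le l-1}) and
% q_\phi(x_{\le i}, m_{(i,l-1]}, x_l) = p_\theta(x_{\le l}), hence
R(p_\theta, q_\phi, m_{(i,j]})
  = \prod_{l=i+2}^{j+1}
      \frac{p_\theta(x_{\le l-1})}{q_\phi(x_{\le i}, m_{(i,l-1]})}\,
      \frac{q_\phi(x_{\le i}, m_{(i,l-1]}, x_l)}{p_\theta(x_{\le l})}
  = 1,
\qquad \log R(p_\theta, q_\phi, m_{(i,j]}) = 0 .
```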
4 RELATED WORK
Deep AR and regression models: With deep neural networks, AR models for sequential data have achieved significant improvements in handling various kinds of sequential data including text (Sutskever et al., 2011), sound (Vinyals et al., 2012; Tamamori et al., 2017), and images (van den Oord et al., 2016b; Salimans et al., 2017). The idea has also been employed in flow-based models, which use auto-regressive sample flows (Kingma & Dhariwal, 2018; Germain et al., 2015; Papamakarios et al., 2017; Kingma et al., 2016) to infer complex distributions, and has shown meaningful progress. In addition, attempts (Yoo et al., 2017; Garnelo et al., 2018; Kim et al., 2019) to replace the kernel function of stochastic regression and prediction processes with a neural network have been proposed to handle semi-supervised data without imposing an explicit sequential relationship.
Approximated AR methods: Reducing the complexity of deep AR models has been explored in a number of studies, either targeting multiple domains (Seo et al., 2018; Stern et al., 2018) or specific targets such as machine translation (Wang et al., 2018; Ghazvininejad et al., 2019; Welleck et al., 2019; Wang et al., 2018; 2019) and image generation (Ramachandran et al., 2017).
Going one step further than the previous studies, we propose a new, general approximation method for AR models by assuming the i.i.d. condition for the "easy to predict" samples. This differentiates our approach from (Seo et al., 2018) in that we do not sequentially approximate the future samples with a smaller AR model but use a chunk-wise predictor to approximate the samples at once. In addition, our confidence prediction module can be seen as a stochastic version of the verification step in (Stern et al., 2018), which helps our model converge toward the original solution. This confidence-guided approximation can easily be combined with other domain-specific AR approximation methods because our method is not limited to domain-specific selection cues such as quotation (Welleck et al., 2019; Ghazvininejad et al., 2019) or nearby convolutional features (Ramachandran et al., 2017).
5 EXPERIMENTS
In this section, we demonstrate the data generation results of the proposed NARA. To check feasibility, we first test our method on a time-series data generation problem and then on image generation. The detailed model structures and additional results are provided in the Supplementary material. The implementation of the methods will be available soon.
5.1 EXPERIMENTAL SETTING
Time-series data generation problem: In this problem, we used an LSTM as the base model. First, we tested our method with a simple one-dimensional sinusoidal function. Second, we tested video sequence data (golf swings) to demonstrate a more complicated case. Here, we repeated the swing sequences 20 times to make periodic image sequences and resized each image to 64 × 64 resolution. Besides the LSTM, we used autoencoder structures to embed the images into a latent space; the projected points of the image sequences are linked by the LSTM, similar to (Yoo et al., 2017). For both cases, we used the ADAM optimizer (Kingma & Ba, 2015) with default settings and a learning rate of 0.001.
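For concreteness, a rough sketch of this setup is given below: frames are embedded by an autoencoder and the latent sequence is modelled by an LSTM. Layer sizes and module names are assumptions for illustration, not the authors' configuration.

```python
# Rough sketch of the latent-space LSTM used for the image-sequence task.
import torch.nn as nn

class LatentLSTM(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):           # z_seq: (batch, time, latent_dim)
        h, _ = self.lstm(z_seq)
        return self.head(h)             # predicted next latent at every step
```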
Image generation: For the image generation task, we used PixelCNN++ (Salimans et al., 2017) as the base model. The number of channels of the network was set to 160 and the number of logistic mixtures to 10; see (Salimans et al., 2017) for a detailed explanation of the parameters. In this task, the baseline AR model (PixelCNN++) is much heavier than those used in the previous tasks. Here, we show that the proposed approximation model can significantly reduce the computational burden of the original AR model. The prior-sample predictor fW (·) and the confidence estimator gV (·) were both implemented with a U-Net structured autoencoder (Ronneberger et al., 2015). We optimized the models using ADAM with a learning rate of 0.0001, and every module was trained from scratch. We mainly used CelebA (Liu et al., 2015b), resizing the samples to 64 × 64 resolution. In the experiments, we randomly pick 36,000 images for training and 3,000 images for validation.
Training and evaluation: For the first problem, we use a single GPU (NVIDIA Titan XP), and for the second problem, four to eight GPUs (NVIDIA Tesla P40) were used1. The training and inference code used in this section is implemented with the PyTorch library. For quantitative evaluation, we measure the error between the true future samples and the generated ones, and, for the second (image generation) problem, we also employ the Fréchet Inception Distance (FID) score (Heusel et al., 2017) as a measure of model performance and of the visual quality of the generated images.
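For reference, the FID score mentioned above is the standard Fréchet distance between Gaussians fitted to Inception features of real and generated images; the sketch below is the usual formula, not the authors' evaluation script.

```python
# Standard FID computation sketch (feat_* are Inception feature matrices).
import numpy as np
from scipy import linalg

def fid(feat_real, feat_gen):
    mu_r, mu_g = feat_real.mean(0), feat_gen.mean(0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # drop tiny imaginary parts from sqrtm
    return float(((mu_r - mu_g) ** 2).sum() + np.trace(cov_r + cov_g - 2 * covmean))
```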
5.2 ANALYSIS
5.2.1 TIME-SERIES DATA GENERATION
Figure 2a shows the generation results of the one-dimensional time-series from our approximation model with different acceptance ratios (red, green, and blue) and the baseline LSTM models (black).
1The overall experiments were conducted on the NSML (Sung et al., 2017) GPU system.
From the figure, we can see that both models correctly generate the future samples. Note that, as seen from the prior sample generation result (magenta), the prior samples m converge to the true samples x, as claimed in Section 3.5.
The graph in Figure 2b shows the acceptance ratio and the ℓ1-error over the confidence threshold ε ∈ (0, 1]. The error denotes the distance between the ground-truth samples x and the generated ones. As expected, our model accepts more samples as the threshold decreases. However, contrary to our initial expectations, the error-threshold graph shows that accepting fewer samples does not always yield more accurate generation results. From the graph, generation with an intermediate acceptance ratio achieved the best result. Interestingly, this tendency between acceptance ratio and generation quality was repeatedly observed in the other datasets as well.
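The threshold sweep behind Figure 2b can be reproduced with a simple loop of the following form; `generate` and `ground_truth` are placeholders for the trained NARA sampler and the held-out targets (assumed here to be NumPy arrays).

```python
# Illustrative sweep over the confidence threshold eps, recording the
# acceptance ratio and l1 error for each setting.
import numpy as np

def sweep_threshold(generate, ground_truth, thresholds):
    rows = []
    for eps in thresholds:
        pred, n_accepted, n_total = generate(eps)
        l1 = float(np.abs(pred - ground_truth).mean())
        rows.append((eps, n_accepted / n_total, l1))
    return rows   # (threshold, acceptance ratio, l1 error) triples
```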
Figure 3 shows the image sequence generation results from NARA. From the results, we can see that the proposed approximation method is still effective when the input data dimension becomes much larger and the AR model more complicated. On the golf swing dataset, the proposed approximation model also succeeded in capturing the periodic change of the image sequence. Table 3 shows that a proper amount of approximation can achieve better accuracy than none, similar to the other experiments. One notable observation is that the period of the image sequence changed slightly across different ratios of approximated sample acceptance (Figure 3). One possible explanation is that the approximation module suppresses rapid changes in the samples, which affects the interval of a single cycle.
5.2.2 IMAGE GENERATION
Figure 4 shows that our method can be integrated into PixelCNN++ and generates images with a significant amount of predicted-sample acceptance (white region). We observed that the confidence was mostly low (blue) in the eyes, mouth, and boundary regions of the face, and the PixelCNN is used to generate those regions. This shows that, compared to the other homogeneous regions of the image, the model finds it relatively hard to describe the details, which matches our intuition.
The graphs in Figure 5 present the quantitative analysis of the inference time and the NLL in generating images. In Figure 5a, the relation between inference time and the skimming ratio is reported. The results show that inference speed improves significantly as more pixels are accepted. Table 2 further supports this: our approximation method generates images of fair quality while speeding up the generation procedure 5–10 times over the base model. In the image generation example as well, we found that a fair amount of acceptance can improve the perceptual visual quality of the generated images compared to the vanilla PixelCNN++ (Table 2). Our method benefits from increasing the acceptance ratio to some extent in terms of FID, showing a U-shaped trend over the variation, similar to Figure 2b. Note that a lower FID score indicates a better model. Consistent with the previous results, we conjecture that the proposed approximation scheme learns the mean prior of the images and guides the AR model to avoid generating erroneous images. The confidence maps and the graphs in Figures 4, 5a, and 5c support this conjecture: complex details such as eyes, mouths, and contours are considerably harder to approximate than the backgrounds and the rest of the face.
In Figures 5b and 5c, the graphs show results supporting the convergence of the proposed method. Figure 5b shows the NLL of the base PixelCNN++ and that of our proposed method under the full-accept case, i.e., when we fully trust the approximation results. The NLL of both cases converged, and PixelCNN++ achieved a noticeably lower NLL than fully accepting the pixels at every epoch. This is expected from Section 3.2, where the baseline AR model approaches the data distribution more closely than our module. This supports the necessity of the re-generation procedure using PixelCNN++, especially when the approximation module finds that a pixel has low confidence.
Figure 5c presents the ℓ1 distance between the generated prior pixels m and the corresponding ground-truth pixels x in test-data reconstruction. Again, similar to the previous time-series experiments, the model successfully converged to the original values (m approaches x). Combined with the result in Figure 5b, this supports the convergence conditions claimed in Section 3.2. Regarding convergence, we also compared the NLL of the converged PixelCNN++ distribution from the proposed scheme with that of PixelCNN++ on the CelebA dataset from the original paper (Salimans et al., 2017).
6 CONCLUSION
In this paper, we proposed NARA, an efficient approximation method for neural auto-regressive models that can be used with various AR models. By introducing the prior-sampling and confidence prediction modules, we showed that NARA can theoretically and empirically approximate future samples under a relaxed causal relationship. This approximation simplifies the generation process and enables our model to use powerful parallelization techniques in the sample generation procedure. In the experiments, we showed that NARA can be successfully applied to different AR models on various tasks, from simple to complex time-series data and image pixel generation. These results support that the proposed method introduces a way to use AR models more efficiently.
B SUPPLEMENTARY EXPLANATION OF THE PROPOSED SAMPLE GENERATION
Figure 6 shows the detailed sampling process when the proposed NARA module is attached to the mother AR model. The diagram describes the sinusoidal function generation example, the simplest example in our paper. In each auto-regressive step, we decide whether to use the approximated value predicted by our sample predictor (which predicts values chunk-wise under the i.i.d. assumption) or to sample the value with the mother auto-regressive model. This decision is made based on the confidence estimate from the confidence predictor described in the paper. The chunk-wise estimation step can be accelerated by recent parallel computing methods, unlike the mother auto-regressive model, which can relax the computational burden of the original auto-regressive model. More importantly, this scheme can be applied to various auto-regressive models.
C SUPPLEMENTARY EXPERIMENTS
C.1 SUPPLEMENTARY GENERATION RESULTS
In addition to the results presented in the paper, we show supplementary generation examples in the figures below. Figure 3 and Table 3 present the image sequence generation results for another golf swing sequence. In this case as well, we observe swing-cycle period changes and acceptance ratio–error tendencies similar to those reported in the paper. Our approximation slightly affects the cycle of the time-series data, and the results also show that the approximation can achieve even better prediction than the "no-acceptance" case.
Figures 11 and 12 show additional facial image generation results for threshold values ε ∈ [0.0, 1.0]. We can see that pixels from the boundary regions were re-sampled more frequently than relatively simple face parts such as the cheeks or forehead. We also tested our model on ImageNet classes; the results are presented in Figure 10.
C.2 ANALYSIS OF WHEN THE CONFIDENCE MODULE FAILS
If the prior-sample predictor fW performs worse than expected, the confidence module gV will reject all prior samples; in this case, the model falls back to drawing samples with the original AR model in a sample-by-sample manner. In an even worse situation, the confidence module could always accept prior samples, including low-quality ones. To simulate the failure of both fW and gV, we manually fix the "accept region" regardless of the predicted confidence score. We select ten samples with the lowest confidence scores and report the results in Figure 8. In the figure, we can observe that a drastic error can occur when both the prior-sample predictor fW and the confidence module gV fail. In practice, however, our confidence module gV can detect such drastic errors during the sampling stage, as shown in Figure 8b; hence, this extreme error occurs rarely.
C.3 APPLYING NARA WITH FASTER PIXELCNN++
To show that our method can be added to diverse AR models, we also combined our skimming method with the fast version of PixelCNN++2. Figure 9 and Table 4 show the generated samples and the generation time using the combined model (skim + Fast PixelCNN++). Due to the lack of time to fully investigate all datasets with the new implementation, we conducted the experiment on CIFAR-10, which is the most widely used dataset in PixelCNN works and is more diverse than the CelebA dataset. The results show that our method, augmented with Fast PixelCNN++, makes the baseline algorithm much faster, as suggested in the paper.
2 Ramachandran, Prajit, et al. "Fast generation for convolutional autoregressive models." arXiv preprint arXiv:1704.06001 (2017). | 1. What is the main contribution of the paper regarding sampling time series?
2. What are the strengths and weaknesses of the proposed method compared to prior works on video generation?
3. How does the reviewer assess the novelty and distinctive features of the algorithm?
4. What are the minor technical comments and suggestions for improvement? | Review | Review
The authors consider the problem of sampling time series.
To solve the problem they propose a method that is based on the autoregression model. The novelty here lies in the proposed sampling methods: we start with a sampling of a prior and then try to generate data according to the restored distribution. We learn two functions: signal recovery and confidence prediction.
The main hyperparameter of the algorithm $\varepsilon$ identifies how many samples we accept.
The distinctive feature of the algorithm is speed-up for the sample generation process.
Weak reject
There are a significant number of works on video generation, see e.g. [1, 2], references therein and articles that cite these two articles. The problem setting seems pretty similar. It seems like a good idea to compare to these methods (and it seems that video generation is a very resource-demanding procedure, and they don't use parallel techniques similar to those proposed in the paper. What is the reason?) Most of the approaches use only one frame to generate video, but it seems that the LSTM in these methods would benefit from using multiple frames as input (and would be able to transfer information in an autoregressive manner by passing all they need in a hidden state).
The article, in my opinion, would benefit from a comparison to these approaches, or at least from using some benchmarks from these works to demonstrate the feasibility of the considered approach; it also seems that these works are well suited for demonstrating parallelization capabilities (as in many cases the same idea applies).
Not minor comments:
1. In Figure 2 (a) it is not clear how the data and prediction were generated. According to the procedure in Figure 1 and text we use the same input for all approaches. However solid lines for different epsilons are different.
2. The dependence of recovery quality on the threshold in Figure 2 (b) is not explained and is controversial: we get the smallest error for an intermediate acceptance ratio; however, there is also a decrease in error if we further increase the gauge threshold (btw the term gauge threshold is new to the machine learning community, consider replacing it)
3. More simple examples would benefit the paper, as we'll be able to learn more fundamental properties of the proposed approach.
Minor technical comments:
1. s. 3.1. predictor predicts
commas in equation (8)
2. Figure 2: no axis labels for the left plot, use for label "acceptance ratio" red color font & for label "L1 error" blue color font
3. Table 1 bracket after l_1 is missing
4. Maybe $\sigma$ is not the best designation of confidence, as it can be confused with the variance
5. Figure 1: some indexes should be not $x_{i + 2}$, but $x_{i + j}$, $x_{i + M}$. Also, some ">" before \epsilon should be "<"
6. "a auto-encoder architecture" ->
"an auto-encoder architecture"
[1] J. He et al. Probabilistic Video Generation using Holistic Attribute Control. 2018. ECCV
[2] E. Denton, R.Fergus. Stochastic Video Generation with a Learned Prior. 2018. |
ICLR | Title
Neural Approximation of an Auto-Regressive Process through Confidence Guided Sampling
Abstract
We propose a generic confidence-based approximation that can be plugged in and simplify the auto-regressive generation process with a proved convergence. We first assume that the priors of future samples can be generated in an independently and identically distributed (i.i.d.) manner using an efficient predictor. Given the past samples and future priors, the mother AR model can post-process the priors while the accompanied confidence predictor decides whether the current sample needs a resampling or not. Thanks to the i.i.d. assumption, the post-processing can update each sample in a parallel way, which remarkably accelerates the mother model. Our experiments on different data domains including sequences and images show that the proposed method can successfully capture the complex structures of the data and generate the meaningful future samples with lower computational cost while preserving the sequential relationship of the data.
1 INTRODUCTION
The auto-regressive (AR) model, which infers and predicts the causal relationship between the previous and future samples in a sequential data, has been widely studied since the beginning of machine learning research. The recent advances of the auto-regressive model brought by the neural network have achieved impressive success in handling complex data including texts (Sutskever et al., 2011), audio signals (Vinyals et al., 2012; Tamamori et al., 2017; van den Oord et al., 2016a), and images (van den Oord et al., 2016b; Salimans et al., 2017).
It is well known that AR model can learn a tractable data distribution p(x) and can be easily extended for both discrete and continuous data. Due to their nature, AR models have especially shown a good fit with a sequential data, such as voice generation (van den Oord et al., 2016a) and provide a stable training while they are free from the mode collapsing problem (van den Oord et al., 2016b). However, these models must infer each element xi of the data x = [x1, x2, · · · , xi, · · · , xN ] in a serial manner, requiring O(N) times more than the other non-sequential estimators, which outputs x at once (Garnelo et al., 2018; Kim et al., 2019; Kingma & Welling, 2014; Goodfellow et al., 2014). Moreover, it is difficult to employ recent parallel computation because AR models always require a previous time step by definition. This mostly limits the use of the AR models in practice despite their advantages.
To resolve the problem, we introduce a new and generic approximation method, Neural AutoRegressive model Approximator (NARA), which can be easily plugged into any AR model. We show that NARA can reduce the generation complexity of AR models by relaxing an inevitable AR nature and enables AR models to employ the powerful parallelization techniques in the sequential data generation, which was difficult previously.
NARA consists of three modules; (1) a prior-sample predictor, (2) a confidence predictor, and (3) the original AR model. To relax the AR nature, given a set of past samples, we first assume that each sample of the future sequence can be generated in an independent and identical manner. Thanks to the i.i.d. assumption, using the first module of NARA, we can sample a series of future priors and these future priors are post-processed by the original AR model, generating a set of raw predictions. The confidence predictor evaluates the credibility of these raw samples and decide whether the model needs re-sampling or not. The confidence predictor plays an important role in that the approximation errors can be accumulated during the sequential AR generation process if the
erroneous samples with low confidence are left unchanged. Therefore, in our model, the sample can be drawn either by the mixture of the AR model or the proposed approximation method, and finally the selection of the generated samples are guided by the predicted confidence.
We evaluate NARA with various baseline AR models and data domains including simple curves, image sequences (Yoo et al., 2017), CelebA (Liu et al., 2015a), and ImageNet (Deng et al., 2009). For the sequential data (simple curves and golf), we employed the Long Short-Term Memory models (LSTM) (Hochreiter & Schmidhuber, 1997) as a baseline AR model while PixelCNN++ (Salimans et al., 2017) is used for the image generation (CelebA and ImageNet). Our experiments show that NARA can largely reduce the sample inference complexity even with a heavy and complex model on a difficult data domain such as image pixels.
The main contributions of our work can be summarized as follows: (1) we introduce a new and generic approximation method that can accelerate any AR generation procedure. (2) Compared to a full AR generation, the quality of approximated samples remains reliable by the accompanied confidence prediction model that measures the sample credibility. (3) Finally, we show that this is possible because, under a mild condition, the approximated samples from our method can eventually converge toward the true future sample. Thus, our method can effectively reduce the generation complexity of the AR model by partially substituting it with the simple i.i.d. model.
2 PRELIMINARY: AUTO-REGRESSIVE MODELS
Auto-regressive generation model is a probabilistic model to assign a probability p(x) of the data x including n samples. This method considers the data x as a sequence {xi | i = 1, · · ·n}, and the probability p(x) is defined by an AR manner as follows:
p(x) = n∏ i=1 p(xi|x1, ..., xi−1), (1)
From the formulation, the AR model provides a tractable data distribution p(x). Recently, in training the model parameters using the training samples x̂T , the computation parallelization are actively employed for calculating the distance between the real sample x̂t ∈ x̂T and the generated sample xt from equation (1). Still, for generating the future samples, it requires O(N) by definition.
3 PROPOSED METHOD
3.1 OVERVIEW
Figure 9 shows the concept of the proposed approximator NARA. NARA consists of a prior-sample predictor fW and confidence predictor gV . Given samples x≤i = {x1, · · · , xi}, the prior-sample predictor predicts a chunk of M number of the prior values m(i,i+M ]. Afterward, using the prior samples, we draw the future samples x(i,i+M ] in parallel. We note that this is possible because the
prior m(i,i+M ] is i.i.d. variable from our assumption. Subsequently, for the predicted x(i,i+M ], the confidence predictor predicts confidence scores ν(i,i+M ]. Then, using the predicted confidence, our model decides whether the samples of interest should be redrawn by the AR model (re-sample xi+1) or they are just accepted. The detailed explanation will be described in the following sections.
3.2 APPROXIMATING SAMPLE DISTRIBUTION OF AR MODEL
Given the samples x≤i = {x1, · · · , xi}, a AR model defines the distribution of future samples x(i,j] = {xi+1, · · · , xj} as follows:
pθ(x(i,j]|x≤i) = j∏
l=i+1
pθ(xl|x1, ..., xl−1). (2)
Here, θ denotes the parameters of the AR model. The indices i, j are assumed to satisfy the condition j > i, ∀i, j ∈ [1, N ]. To approximate the distribution pθ(x(i,j]|x≤i), we introduce a set of prior samples m(i,j−1] = fW (x≤i;W ), where we assume that they are i.i.d. given the observation x≤i. Here, W is the model parameter of the prior-sample predictor fW (·). Based on this, we define an approximated distribution qθ,W (x(i,j−1]|x≤i,m(i,j−1]) characterized by the original AR model pθ and the prior-sample predictor fW (·) as follows:
qθ,W (x(i,j]|x≤i,m(i,j−1]) ≡ pθ(xi+1|x≤i) j∏
l=i+2 pθ(xl|x≤i,mi+1, . . .ml−1)︸ ︷︷ ︸ Compute in parallel (const time)
(A) ' pθ(xi+1|x≤i) j∏ l=i+2
pθ(xl|x≤i, xi+1, . . . xl−1)︸ ︷︷ ︸ Compute in sequential (linear time)
= pθ(x(i,j]|x≤i).
(3)
Here, approximation (A) is true when m(i,j−1] approaches to x(i,j−1]. Note that it becomes possible to compute qθ,W in a constant time because we assume the prior variable mi to be i.i.d. while pθ requires a linear time complexity.
Then, we optimize the network parameters θ and W by minimizing the negative log-likelihood (NLL) of qθ,W (x(i,j] = x̂(i,j]|x≤i, fW (x≤i)) where x̂(i,j] is a set of samples that are sampled from the baseline AR model. We guide the prior-sample predictor fW (·) to generate the prior samples that is likely to come from the distribution of original AR model m by minimizing the original AR model θ and prior-sample predictor W jointly as follows:
min θ,W − log pθ(x(g)(i,j]|x≤i)− Epθ(x(i,j]|x≤i)[log qθ,W (x(i,j]|x≤i, fW (x≤i))], (4)
where x(g) denotes the ground truth sample value in the generated region of the training samples. Note that both pθ(x) and its approximated distribution qθ,W (x) approaches to the true data distribution when (1) our prior-sample predictor generates the prior samples m close to the true samples x and (2) the NLL of the AR distribution approaches to the data distribution. Based on our analysis and experiments, we later show that our model can satisfy these conditions theoretically and empirically in the following sections.
3.3 CONFIDENCE PREDICTION
Using the prior-sample predictor fW (·), our model generates future samples based on the previous samples. However, accumulation of approximation errors in the AR generation may lead to an unsuccessful sample generation. To mitigate the problem, we introduce an auxiliary module that determines whether to accept or reject the approximated samples generated as described in the previous subsection, referred to as confidence predictor.
First, we define the confidence of the generated samples as follows: νk = qθ,W (xk = x̂k|x≤i, fW (x≤i)), (5)
where x̂k ∼ pθ(xk|x≤k−1) and k ∈ {1, · · · , j}. The confidence value νk provides a measure of how likely the generated samples from qθ,W (·) is drawn from pθ(xk|x≤k−1). Based on the confidence value νk, our model decides whether it can accept the sample xk or not. More specifically, we choose a threshold ∈ [0, 1] and accept samples which have the confidence score larger than the threshold . When the case = 1, our model always redraws the sample using the AR model no matter how our confidence is high. Note that our model becomes equivalent to the target AR model when = 1. When = 0, our model always accepts the approximated samples. In practice, we accept k̂( )− 1 samples among approximated M samples where k̂( ) = argmink νk > . Subsequently, we re-sample xk̂( )) from the original AR model and repeat approximation scheme until reach the maximum length.
However, it is impractical to calculate equation (5) directly because we need the samples x̂k from the original AR model. We first need to go forward using the AR model to see the next sample and come backward to calculate the confidence to decide whether we use the sample or not, which is nonsense.
To detour this problem, we introduce a network gV (·) that approximates the binary decision variable h k = I(νk ≥ ) as follows: h (i,j] ' gV (x≤i, fW (x≤i)), (6) where h (i,j] = {h i+1, · · · , h j}. The network gV (·) is implemented by a auto-encoder architecture with a sigmoid activation output that makes the equation (6) equivalent to the logistic regression.
3.4 TRAINING DETAILS
To train the proposed model, we randomly select the sample s(i) ∈ [1, N ] for the sequence x(i) in a training batch. Then, we predict l(i) = min(B,N − s(i)) sample values after s(i), where B denotes the number of samples the prediction considers. To calculate equation (4), we minimize the loss of the training sample x(k), k = 1 · · ·K, and the locations s(i) ∈ [1, N ] as,
min θ,W − 1 K K∑ i=1 log pθ(x (i) ≤s(i)+l(i))− 1 KM K∑ i=1 M∑ j=1 log qθ,W (x̂ (i)(j) (s(i),s(i)+l(i)] |x(i)≤s(i)). (7)
Here, x̂(i)(j) (s(i),s(i)+l(i)] for j ∈ {1 · · ·M} denotes M number of the sequences from the AR distribution pθ(x(s(i),s(i)+l(i)]|x (i)
≤s(i)) for i-th training data. From the experiment, we found that M = 1 sample is enough to train the model. This training scheme guides the distribution drawn by NARA to fit the original AR distribution as well as to generate future samples, simultaneously. To train gV (·), binary cross-entropy loss is used with h in equation (6), with freezing the other parameters.
3.5 THEORETICAL EXPLANATION
Here, we show that the proposed NARA is a regularized version of the original AR model. At the extremum, the approximated sample distribution from NARA is equivalent to that of the original AR model. In NARA, our approximate distribution q(x(i,j+1]|x≤i,m(i,j]) is reformulated as follows:
qφ,θ(x(i,j+1]|x≤i,m(i,j]) ≡ pθ(xi+1|x≤i) j+1∏ l=i+2 qφ(xl|x≤i,mi+1, ...,ml−1)
= pθ(x≤i+1) pθ(x≤i) · pθ(x≤i+2) pθ(x≤i+1) ·, · · · , ·pθ(x≤j+1) pθ(x≤j)︸ ︷︷ ︸
pθ(x(i,j+1]|x≤i)
· [ pθ(x≤i+1)
qφ(x≤i,mi+1) ·, · · · , ·
qφ(x≤i,m(i,j], xj+1)
pθ(x≤j+1) ] ︸ ︷︷ ︸
R(pθ,qφ,m(i,j])
,
(8)
where the parameter φ denotes the network parameters of the approximated distribution q(·). Therefore, our proposed cost function can be represented as the negative log-likelihood of the AR model with a regularizer − logR(pθ, qφ,m(i,j]):
min φ,θ −Epθ(x(i,j+1]|x≤i)[log pθ(x(i,j+1]|x≤i)) + logR(pθ, qφ,m(i,j])] (9)
Note that the proposed cost function is equivalent to that of the original AR model when logR(pθ, qφ,m(i,j]) = 0, which is true under the condition of m(i,j] = x(i,j] and qφ(·|·) = pθ(·|·). Here, m(i,j] = fW (x≤i). By minimizing the equation (9),R(pθ, qφ,m(i,j]) enforces the direction of the optimization to estimate the probability ratio of qφ and pθ while it minimize the gap between qφ(x≤i,m(i,j],xj+1)
qφ(x≤i,m(i,j]) so that m(i,j] = fW (x≤i) approaches to x(i,j].
4 RELATED WORK
Deep AR and regression models: After employing the deep neural network, the AR models handling sequential data has achieved significant improvements in handling the various sequential data including text (Sutskever et al., 2011), sound (Vinyals et al., 2012; Tamamori et al., 2017), and images (van den Oord et al., 2016b; Salimans et al., 2017). The idea has been employed to “Flow based model” which uses auto-regressive sample flows (Kingma & Dhariwal, 2018; Germain et al., 2015; Papamakarios et al., 2017; Kingma et al., 2016) to infer complex distribution, and reported meaningful progresses. Also, the attempts (Yoo et al., 2017; Garnelo et al., 2018; Kim et al., 2019) to replace the kernel function of the stochastic regression and prediction processes to neural network has been proposed to deal with semi-supervised data not imposing an explicit sequential relationship.
Approximated AR methods: Reducing the complexity of the deep AR model has been explored by a number of studies, either targeting multiple domain (Seo et al., 2018; Stern et al., 2018) or specific target such as machine translation (Wang et al., 2018; Ghazvininejad et al., 2019; Welleck et al., 2019; Wang et al., 2018; 2019) and image generation (Ramachandran et al., 2017).
Adding one step further to the previous studies, we propose a new general approximation method for AR methods by assuming the i.i.d. condition for the “easy to predict” samples. This differentiates our approach to (Seo et al., 2018) in that we do not sequentially approximate the future samples by using a smaller AR model but use a chunk-wise predictor to approximate the samples at once. In addition, our confidence prediction module can be seen as a stochastic version of the verification step in (Stern et al., 2018), which helps our model to converge toward the original solution. This confidence guided approximation can be easily augmented to the other domain specific AR approximation methods because our method is not limited to a domain specific selection queues such as quotation (Welleck et al., 2019; Ghazvininejad et al., 2019) or nearby convolutional features (Ramachandran et al., 2017).
5 EXPERIMENTS
In this section, we demonstrate the data generation results from the proposed NARA. To check the feasibility, we first test our method into time-series data generation problem, and second, into image generation. The detailed model structures and additional results are attached in the Supplementary material. The implementation of the methods will be available soon.
5.1 EXPERIMENTAL SETTING
Time-series data generation problem: In this problem, we used LSTM as the base model. First, we tested our method with a simple one-dimensional sinusoidal function. Second, we tested the video sequence data (golf swing) for demonstrating the more complicated case. In this case, we repeated the swing sequences 20 times to make the periodic image sequences and resize each image to 64× 64 resolution. Also, beside the LSTM, we used autoencoder structures to embed the images into latent space. The projected points for the image sequences are linked by LSTM, similar to (Yoo et al., 2017). For both cases, we used ADAM optimizer (Kingma & Ba, 2015) with a default setting and a learning rate 0.001.
Image generation: For the image generation task, we used PixelCNN++ (Salimans et al., 2017) as the base model. The number of channels of the network was set to 160 and the number of logistic mixtures was set to 10. See (Salimans et al., 2017) for the detailed explanation of the parameters. In this task, the baseline AR model (PixelCNN++) is much heavier than those used in the previous tasks. Here, we show that the proposed approximated model can significantly reduce the computational burden of the original AR model. The prior-sample predictor fW (·) and the confidence estimator gV (·) were both implemented by U-net structured autoencoder (Ronneberger et al., 2015). We optimized the models using ADAM with learning rate 0.0001. Every module was trained from scratch. We mainly used CelebA (Liu et al., 2015b) resizing the samples to 64× 64 resolution. In the experiments, we randomly pick 36, 000 images for training and 3, 000 images for validation.
Training and evaluation: For the first problem, we use single GPU (NVIDIA Titan XP), and for the second problem, four to eight GPUs (NVIDIA Tesla P40) were used1. The training and inference code used in this section are implemented by PyTorch library. For the quantitative evaluation, we measure the error between the true future samples and the generated one, and also employ Fréchet Inception Distance score (FID) (Heusel et al., 2017) as a measure of the model performance and visual quality of the generated images for the second image generation problem.
5.2 ANALYSIS
5.2.1 TIME-SERIES DATA GENERATION
Figure 2a shows the generation results of the one-dimensional time-series from our approximation model with different acceptance ratios (red, green, and blue) and the baseline LSTM models (black).
1The overall expreiments were conducted on NSML (Sung et al., 2017) GPU system.
From the figure, we can see that both models correctly generates the future samples. Please note that, from the prior sample generation result (magenta), the prior samples m converged to the true samples x as claimed in Section 3.5.
The graph in Figure 2b shows the acceptance ratio and the `1-error over the confidence threshold ∈ (0, 1]. The error denotes the distance between the ground truth samples x and the generated ones. As expected, our model accepted more samples as the threshold decreases. However, contrary to our initial expectations, the error-threshold graph shows that the less acceptance of samples does not always bring the more accurate generation results. From the graph, the generation with an intermediate acceptance ratio achieved the best result. Interestingly, we report that this tendency between the acceptance ratio and the generation quality was repeatedly observed in the other datasets as well.
Figure 3 shows the image sequence generation results from NARA. From the result, we can see that the proposed approximation method is still effective when the input data dimension becomes much larger and the AR model becomes more complicated. In the golf swing dataset, the proposed approximation model also succeeded to capture the periodic change of the image sequence. The table 3 shows that the proper amount of approximation can obtain better accuracy than the none, similar to the other previous experiments. One notable observation regarding the phenomenon is that the period of image sequence was slightly changed among different ratio of approximated sample acceptance (Figure 3). One possible explanation would be that the approximation module suppress the rapid change of the samples, and this affects the interval of a single cycle.
5.2.2 IMAGE GENERATION
Figure 4 shows that our method can be integrated into PixelCNN++ and generates images with the significant amount of the predicted sample acceptance (white region). We observed that the confidence was mostly low (blue) in the eyes, mouth, and boundary regions of the face, and the PixelCNN is used to generate those regions. This shows that compared to the other homogeneous regions of the image, the model finds it relatively hard to describe the details, which matches with our intuition.
The graphs in Figure 5 present the quantitative analysis regarding the inference time and the NLL in generating images. In Figure 5a, the relation between inference time and the skimming ratio is reported. The results show that the inference speed is significantly improved as more pixels are accepted. Table 2 further supports this that our approximation method generates a fair quality of images while it speeds up the generation procedure 5 ∼ 10 times faster than the base model. In the image generation example also, we found that the fair amount of acceptance can improve the perceptual visual quality of the generated images compared to the vanilla PixelCNN++ (Table 2). Our method benefits from increasing the acceptance ratio to some extent in terms of FID showing a U-shaped trend over the variation, similar to those in Figure 2b. Note that a lower FID score identifies a better model. Consistent with previous results, we can conjecture that the proposed approximation scheme learns the mean-prior of the images and guides the AR model to prevent generating erroneous images. The confidence maps and the graph illustrated in Figure 4, 5a, and 5c
support this conjecture. Complex details such as eyes, mouths and contours have largely harder than the backgrounds and remaining faces.
In Figure 5b and Figure 5c, the graphs show the results supporting the convergence of the proposed method. The graph in Figure 5b shows the NLL of the base PixelCNN++ and that of our proposed method under the full-accept case, i.e. we fully believe the approximation results. Note that the NLL of both cases converged and the PixelCNN achieved noticeably lower NLL compared to the fully accepting the pixels at every epoch. This is already expected in Section 3.2 that the baseline AR model approaches more closely to the data distribution than our module. This supports the necessity of re-generating procedure by using the PixelCNN++, especially when the approximation module finds the pixel has a low confidence.
The graph in Figure 5c presents the `1 distance between the generated prior pixel m and the corresponding ground-truth pixel x in the test data reconstruction. Again, similar to the previous time-series experiments, the model successfully converged to the original value (m approaches to x). Combined with the result in Figure 5b, this result supports the convergence conditions claimed in section 3.2. Regarding the convergence, we compared the NLL of the converged PixelCNN++ distribution from the proposed scheme and that of PixelCNN++ with CelebA dataset from the original paper (Salimans et al., 2017).
6 CONCLUSION
In this paper, we proposed the efficient neural auto-regressive model approximation method, NARA, which can be used in various auto-regressive (AR) models. By introducing the prior-sampling and confidence prediction modules, we showed that NARA can theoretically and empirically approximate the future samples under a relaxed causal relationships. This approximation simplifies the generation process and enables our model to use powerful parallelization techniques for the sample generation procedure. In the experiments, we showed that NARA can be successfully applied with different AR models in the various tasks from simple to complex time-series data and image pixel generation. These results support that the proposed method can introduce a way to use AR models in a more efficient manner.
B SUPPLEMENTARY EXPLANATION OF THE PROPOSED SAMPLE GENERATION
Figure 6 shows the detailed sampling process when the proposed NARA module is attached to the mother AR model. The diagram describes the sinusoidal function generation example, which is the simplest example in our paper. In each auto-regressive step, we decide whether to use the approximated value predicted by our sample predictor (which predicts values in a chunk-wise manner under an i.i.d. assumption) or to sample the value with the mother auto-regressive model. This decision is made based on the confidence estimated by the confidence predictor described in the paper. Unlike the mother auto-regressive model, the chunk-wise estimation step can be accelerated by recent parallel computing methods, which can relax the computational burden of the original auto-regressive model. More importantly, this scheme can be applied to various auto-regressive models.
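Below is a minimal Python sketch of this decision loop. The callables `prior_sample_predictor`, `confidence_predictor`, and `ar_sample_next`, as well as the chunk size and threshold, are placeholders for the modules described above rather than the actual implementation.

```python
import numpy as np

def nara_generate(seq_len, chunk, prior_sample_predictor, confidence_predictor,
                  ar_sample_next, threshold=0.5):
    """Sketch of confidence-gated generation: accept chunk-wise prior samples when
    the predicted confidence is high, otherwise fall back to the slow AR model."""
    x = []  # generated sequence so far
    while len(x) < seq_len:
        context = np.asarray(x)
        # Chunk-wise prior prediction (parallelizable, i.i.d. within the chunk).
        m = prior_sample_predictor(context, chunk)   # shape: (chunk,)
        c = confidence_predictor(context, m)         # shape: (chunk,)
        for i in range(chunk):
            if len(x) >= seq_len:
                break
            if c[i] >= threshold:
                x.append(m[i])                       # accept the cheap prior sample
            else:
                # Re-generate this position with the mother AR model, then
                # re-predict a fresh chunk from the updated context.
                x.append(ar_sample_next(np.asarray(x)))
                break
    return np.asarray(x)
```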
C SUPPLEMENTARY EXPERIMENTS
C.1 SUPPLEMENTARY GENERATION RESULTS
In addition to the results presented in the paper, we show supplementary generation examples in the figures below. Figure 3 and Table 3 present the image sequence generation results for another golf swing sequence. In this case as well, we observe swing cycle period changes and acceptance ratio-error tendencies similar to those reported in the paper. Our approximation slightly affects the cycle of the time-series data, and the result also shows that the approximation can achieve even better prediction results than the “no-acceptance” case.
Figures 11 and 12 show additional facial image generation results for acceptance ratios in [0.0, 1.0]. We can see that pixels in the boundary regions were re-sampled more frequently than relatively simple face parts such as the cheeks or forehead. We also tested our model on ImageNet classes, and the results are presented in Figure 10.
C.2 ANALYSIS ON WHEN CONFIDENCE MODULE FAILS.
If the prior-sample predictor fW performs worse than our expectation, the confidence module gV will reject all prior samples; hence, in this case, the model falls back to drawing samples with the original AR model in a sample-by-sample manner. However, in an even worse situation, the confidence module could always accept prior samples, including low-quality ones. To simulate the failure of both fW and gV , we manually fix the “accept region” regardless of the predicted confidence score. We select ten samples with the lowest confidence scores and report the results in Figure 8. In the figure, we can observe that drastic errors can occur when both the prior-sample predictor fW and the confidence module gV fail. However, in practice, our confidence module gV can detect such drastic errors during the sampling stage, as shown in Figure 8b; hence, such extreme errors will occur rarely.
C.3 APPLYING NARA WITH FASTER PIXELCNN++
To show that our method can be added to diverse AR models, we also combined our skimming method with the fast version of PixelCNN++2. Figure 9 and Table 4 show the generated samples and the generation time using the incorporated model (skim + Fast PixelCNN++). Due to the lack of time to fully investigate all datasets with the new implementation, we conducted the experiment on CIFAR-10, which is the most widely used dataset in PixelCNN works and is more diverse than the CelebA dataset. The results show that our method, augmented with Fast PixelCNN++, can make the baseline algorithm much faster, as we suggested in the paper.
2Ramachandran, Prajit, et al. ”Fast generation for convolutional autoregressive models.” arXiv preprint arXiv:1704.06001 (2017). | 1. What is the main contribution of the paper regarding autoregressive models?
2. What are the strengths of the proposed approach, particularly in its design and trade-offs?
3. Do you have any concerns or questions regarding the theoretical analysis, specifically on the correction term and approximation quality?
4. How does the reviewer assess the novelty and significance of the paper's content?
5. Are there any relevant works that the reviewer thinks should be included in the comparison? | Review | Review
This paper addresses the sequential limitation of autoregressive model when doing sampling. Specifically, instead of sampling next observations in a sequential fashion, this paper generates future observations in parallel, with the help of i.i.d. future priors. With the help of learned confidence model, the model is able to get trade-off between speed and approximation accuracy. Experiments on synthetic data and image generation with PixelCNN++ show the comparable results while being significantly faster than baseline.
Overall the paper is well motivated, with an interesting design of the variational distribution to approximate the true autoregressive distribution. The design of the confidence model looks a bit heuristic, but the trade-off ability between efficiency and quality it brings is also quite interesting.
Below are some minor comments:
1. The theoretical analysis is basically comment about the objective which is less interesting. However more interesting guarantees would be: 1) with the additional correction term added, how would it help with reducing the variance; 2) As the q_{\theta, \phi} is always in a limited form due to the parallelism requirement, how bad the approximation could be in the worst case ---- I’m not asking for these results, but any form of discussion would be helpful.
2. The author only compared with the raw PixelCNN++. Would any of the existing AR-speedup method be applicable for a comparison? |
ICLR | Title
Topology and Geometry of Half-Rectified Network Optimization
Abstract
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of highdimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) that the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout all the learning phase, suggesting a near convex behavior, but they become exponentially more curvy as the energy level decays, in accordance to what is observed in practice with very low curvature attractors.
1 INTRODUCTION
Optimization is a critical component in deep learning, governing its success in different areas of computer vision, speech processing and natural language processing. The prevalent optimization strategy is Stochastic Gradient Descent, invented by Robbins and Munro in the 50s. The empirical performance of SGD on these models is better than one could expect in generic, arbitrary non-convex loss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hinton et al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoretical questions as to why neural network optimization does not suffer in practice from poor local minima.
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a paradigmatic example of a hard, high-dimensional, non-convex problem. Recent work has explored models from statistical physics such as spin glasses Choromanska et al. (2015), in order to understand the macroscopic properties of the system, but at the expense of strongly simplifying the nonlinear nature of the model. Other authors have advocated that the real danger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al. (2014), although recent results rigorously establish that gradient descent does not get stuck on saddle points Lee et al. (2016) but merely slowed down. Other notable recent contributions are Kawaguchi (2016), which further develops the spin-glass connection from Choromanska et al. (2015) and resolves the linear case by showing that no poor local minima exist; Sagun et al. (2014) which also
∗Currently on leave from UC Berkeley.
discusses the impact of stochastic vs plain gradient, Soudry & Carmon (2016), that studies Empirical Risk Minimization for piecewise multilayer neural networks under overparametrization (which needs to grow with the amount of available data), and Goodfellow et al. (2014), which provided insightful intuitions on the loss surface of large deep learning models and partly motivated our work. Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneous nonlinear networks and shows how overparametrization acts upon these properties, and the pioneering Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives. Lastly, several papers submitted concurrently and independently of this one deserve note, particularly Swirszcz et al. (2016) which analyzes the explicit criteria under which sigmoid-based neural networks become trapped by poor local minima, as well as Tian (2017), which offers a complementary study of two layer ReLU based networks, and their learning dynamics.
In this work, we do not make any linearity assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. The loss surface F (θ) of a given model can be expressed in terms of its level sets Ωλ, which contain for each energy level λ all parameters θ yielding a loss smaller or equal than λ. A first question we address concerns the topology of these level sets, i.e. under which conditions they are connected. Connected level sets imply that one can always find a descent direction at each energy level, and therefore that no poor local minima can exist. In absence of nonlinearities, deep (linear) networks have connected level sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the two layer case) and provide an alternative, more direct proof of the general case. We then move to the half-rectified case and show that the topology is intrinsically different and clearly dependent on the interplay between data distribution and model architecture. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.
Beyond the question of whether the loss contains poor local minima or not, the immediate follow-up question that determines the convergence of algorithms in practice is the local conditioning of the loss surface. It is thus related not to the topology but to the shape or geometry of the level sets. As the energy level decays, one expects the level sets to exhibit more complex irregular structures, which correspond to regions where F (θ) has small curvature. In order to verify this intuition, we introduce an efficient algorithm to estimate the geometric regularity of these level sets by approximating geodesics of each level set starting at two random boundary points. Our algorithm uses dynamic programming and can be efficiently deployed to study mid-scale CNN architectures on MNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical results show that these models have a nearly convex behavior up until their lowest test errors, with a single connected component that becomes more elongated as the energy decays. The rest of the paper is structured as follows. Section 2 presents our theoretical results on the topological connectedness of multilayer networks. Section 3 presents our path discovery algorithm and Section 4 covers the numerical experiments.
2 TOPOLOGY OF LEVEL SETS
Let P be a probability measure on a product space X ×Y , where we assume X and Y are Euclidean vector spaces for simplicity. Let {(xi, yi)}i be an iid sample of size L drawn from P defining the training set. We consider the classic empirical risk minimization of the form
$$F_e(\theta) = \frac{1}{L}\sum_{l=1}^{L} \|\Phi(x_l;\theta) - y_l\|^2 + \kappa R(\theta)~, \tag{1}$$
where Φ(x; θ) encapsulates the feature representation that uses parameters θ ∈ RS and R(θ) is a regularization term. In a deep neural network, θ contains the weights and biases used in all layers. For convenience, in our analysis we will also use the oracle risk minimization:
$$F_o(\theta) = \mathbb{E}_{(X,Y)\sim P}\, \|\Phi(X;\theta) - Y\|^2 + \kappa R(\theta)~. \tag{2}$$
Our setup considers the case where R consists of either ℓ1 or ℓ2 norms, as we shall describe below. They correspond to the well-known sparse and ridge regularization, respectively.
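For concreteness, here is a minimal NumPy sketch of the regularized empirical risk (1) for the half-rectified networks considered later; the random data and weights below are placeholders for illustration only.

```python
import numpy as np

def phi(x, weights):
    """Multilayer half-rectified network: Phi(x) = W_K rho(... rho(W_1 x))."""
    h = x
    for W in weights[:-1]:
        h = np.maximum(0.0, W @ h)   # rho(z) = max(0, z)
    return weights[-1] @ h

def empirical_risk(weights, X, Y, kappa=1e-3, reg="l2"):
    """Eq. (1): mean squared loss plus an l1 or l2 penalty on all parameters."""
    residuals = [np.sum((phi(x, weights) - y) ** 2) for x, y in zip(X, Y)]
    theta = np.concatenate([W.ravel() for W in weights])
    R = np.sum(theta ** 2) if reg == "l2" else np.sum(np.abs(theta))
    return np.mean(residuals) + kappa * R

# Toy usage with random data and a two-hidden-layer network.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10)); Y = rng.normal(size=(50, 1))
weights = [rng.normal(size=(16, 10)), rng.normal(size=(16, 16)), rng.normal(size=(1, 16))]
print(empirical_risk(weights, X, Y))
```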
2.1 POOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESS
We define the level set of F (θ) as
ΩF (λ) = {θ ∈ RS ; F (θ) ≤ λ} . (3)
The first question we study is the structure of critical points of Fe(θ) and Fo(θ) when Φ is a multilayer neural network. For simplicity, we consider first a strict notion of local minima: θ ∈ RS is a strict local minima of F if there is ε > 0 with F (θ′) > F (θ) for all θ′ ∈ B(θ, ε) and θ′ ≠ θ. In particular, we are interested to know whether Fe has local minima which are not global minima. This question is answered by knowing whether ΩF (λ) is connected at each energy level λ: Proposition 2.1. If ΩF (λ) is connected for all λ then every local minima of F (θ) is a global minima.
Strict local minima implies that ∇F (θ) = 0 and HF (θ) ⪰ 0, but avoids degenerate cases where F is constant along a manifold intersecting θ. In that scenario, if Uθ denotes that manifold, our reasoning immediately implies that if ΩF (λ) are connected, then for all ε > 0 there exists θ′ with dist(θ′, Uθ) ≤ ε and F (θ′) < F (θ). In other words, some element at the boundary of Uθ must be a saddle point. A stronger property that eliminates the risk of gradient descent getting stuck at Uθ is that all elements at the boundary of Uθ are saddle points. This can be guaranteed if one can show that there exists a path connecting any θ to the lowest energy level such that F is strictly decreasing along it.
Such degenerate cases arise in deep linear networks in absence of regularization. If θ = (W1, . . . ,WK) denotes any parameter value, with N1, . . . NK denoting the hidden layer sizes, and Fk ∈ GL+Nk(R) are arbitrary elements of the general linear group of invertible Nk × Nk matrices with positive determinant, then
$$\mathcal{U}_\theta = \{W_1 F_1^{-1},\; F_1 W_2 F_2^{-1},\; \dots,\; F_{K-1} W_K~;~ F_k \in GL^{+}_{N_k}(\mathbb{R})\}~.$$
In particular, Uθ has a Lie Group structure. In the half-rectified nonlinear case, the general linear group is replaced by the Lie group of homogeneous invertible matrices Fk = diag(α1, . . . , αNk) with αj > 0.
This proposition shows that a sufficient condition to prevent the existence of poor local minima is having connected level sets, but this condition is not necessary: one can have isolated local minima lying at the same energy level. This can be the case in systems that are defined up to a discrete symmetry group, such as multilayer neural networks. However, as we shall see next, this case puts the system in a brittle position, since one needs to be able to account for all the local minima (and there can be exponentially many of them as the parameter dimensionality increases) and verify that their energy is indeed equal.
2.2 THE LINEAR CASE
We first consider the particularly simple case where F is a multilayer network defined by
Φ(x; θ) = WK . . .W1x , θ = (W1, . . . ,WK) . (4)
and the ridge regression R(θ) = ‖θ‖2. This model defines a non-convex (and non-concave) loss Fe(θ). When κ = 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case, every local minima is a global minima. We provide here an alternative proof of that result that uses a somewhat simpler argument and allows for κ > 0 in the case K = 2. Proposition 2.2. Let W1,W2, . . . ,WK be weight matrices of sizes nk × nk+1, k < K, and let Fe(θ), Fo(θ) denote the risk minimizations using Φ as in (4). Assume that nj ≥ min(n1, nK) for j = 2 . . .K − 1. Then ΩFe(λ) (and ΩFo ) is connected for all λ and all K when κ = 0, and for κ > 0 when K = 2; and therefore there are no poor local minima in these cases. Moreover, any θ can be connected to the lowest energy level with a strictly decreasing path.
Let us highlight that this result is slightly complementary than that of Kawaguchi (2016), Theorem 2.3. Whereas we require nj ≥ min(n1, nK) for j = 2 . . .K − 1 and our analysis does not inform about the order of the saddle points, we do not need full rank assumptions on ΣX nor the weights Wk.
This result does also highlight a certain mismatch between the picture of having no poor local minima and generalization error. Incorporating regularization drastically changes the topology, and the fact that we are able to show connectedness only in the two-layer case with ridge regression is profound; we conjecture that extending it to deeper models requires a different regularization, perhaps using more general atomic norms Bach (2013). But we now move our interest to the nonlinear case, which is more relevant to our purposes.
2.3 HALF-RECTIFIED NONLINEAR CASE
We now study the setting given by
Φ(x; θ) = WKρWK−1ρ . . . ρW1x , θ = (W1, . . . ,WK) , (5)
where ρ(z) = max(0, z). The biases can be implemented by replacing the input vector x with x = (x, 1) and by rebranding each parameter matrix as
$$\overline{W}_i = \begin{pmatrix} W_i & b_i \\ 0 & 1 \end{pmatrix},$$
where bi contains the biases for each layer. For simplicity, we continue to use Wi and x in the following.
2.3.1 NONLINEAR MODELS ARE GENERALLY DISCONNECTED
One may wonder whether the same phenomena of global connectedness also holds in the halfrectified case. A simple motivating counterexample shows that this is not the case in general. Consider a simple setup with X ∈ R2 drawn from a mixture of two Gaussians N−1 and N1, and let Y = (X − µZ) · Z , where Z is the (hidden) mixture component taking {1,−1} values. Let Ŷ = Φ(X; {W1,W2}) be a single-hidden layer ReLU network, with two hidden units. Let θA be a configuration that bisects the two mixture components, and let θB the same configuration, but swapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by letting the covariance of the mixture components go to 0. However, any path that connects θA to θB must necessarily pass through a point in which W1 has rank 1, which leads to an estimator with risk at least 1/2.
In fact, it is easy to see that this counter-example can be extended to any generic half-rectified architecture, if one is allowed to adversarially design a data distribution. For any given Φ(X; θ) with arbitrary architecture and current parameters θ = (Wi), let Pθ = {A1, . . . ,AS} be the underlying tessellation of the input space given by our current choice of parameters; that is, Φ(X; θ) is piece-wise linear and Pθ contains those pieces. Now let X be any arbitrary distribution with density p(x) > 0 for all x ∈ Rn, for example a Gaussian, and let Y | X d= Φ(X; θ) . Since Φ is invariant under a subgroup of permutations θσ of its hidden layers, it is easy to see that one can find two parameter values θA = θ and θB = θσ such that Fo(θA) = Fo(θB) = 0, but any continuous path γ(t) from θA to θB will have a different tessellation and therefore won’t satisfy Fo(γ(t)) = 0. Moreover, one can build on this counter-example to show that not only the level sets are disconnected, but also that there exist poor local minima. Let θ′ be a different set of parameters, and Y ′ | X d= Φ(X; θ′) be a different target distribution. Now consider the data distribution given by the mixture
X | p(x) , z ∼ Bernoulli(π) , Y | X, z d= zΦ(X; θ) + (1− z)Φ(X; θ′) . By adjusting the mixture component π we can clearly change the risk at θ and θ′ and make them different, but we conjecture that this preserves the status of local minima of θ and θ′. Appendix E constructs a counter-example numerically.
This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guarantees that do not depend upon the data distribution. This difficulty is non-existent in the linear case and not easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows that in general we should not expect to obtain connected level sets. However, connectedness can be recovered if one is willing to accept a small increase of energy and make some assumptions on the complexity of the regression task. Our main result shows that the amount by which the energy is allowed to increase is upper bounded by a quantity that trades-off model overparametrization and smoothness in the data distribution.
For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assume Y ∈ R and let us first consider the case with a single hidden layer and `1 regularization: R(θ) = ‖θ‖1.
2.3.2 PRELIMINARIES
Before proving our main result, we need to introduce preliminary notation and results. We first describe the case with a single hidden layer of size m.
We define
$$e(m) = \min_{W_1 \in \mathbb{R}^{m\times n},\ \|W_1(i)\|_2 \le 1,\ W_2 \in \mathbb{R}^m}\ \mathbb{E}\{|\Phi(X;\theta) - Y|^2\} + \kappa\|W_2\|_1~. \tag{6}$$
to be the oracle risk using m hidden units with norm ≤ 1 and using sparse regression. It is a well known result by Hornik and Cybenko that a single hidden layer is a universal approximator under very mild assumptions, i.e. limm→∞ e(m) = 0. This result merely states that our statistical setup is consistent, and it should not be surprising to the reader familiar with classic approximation theory. A more interesting question is the rate at which e(m) decays, which depends on the smoothness of the joint density (X,Y ) ∼ P relative to the nonlinear activation family we have chosen. For convenience, we redefine W = W1 and β = W2 and Z(W ) = max(0,WX). We also write z(w) = max(0, 〈w,X〉) where (X,Y ) ∼ P and w ∈ RN is any deterministic vector. Let ΣX = EPXXT ∈ RN×N be the covariance operator of the random input X . We assume ‖ΣX‖ <∞. A fundamental property that will be essential to our analysis is that, despite the fact that Z is nonlinear, the quantity [w1, w2]Z := EP {z(w1)z(w2)} is locally equivalent to the linear metric 〈w1, w2〉X = EP {wT1 XXTw2} = 〈w1,ΣXw2〉, and that the linearization error decreases with the angle between w1 and w2. Without loss of generality, we assume here that ‖w1‖ = ‖w2‖ = 1, and we write ‖w‖2Z = E{|z(w)|2}. Proposition 2.3. Let α = cos−1(〈w1, w2〉) be the angle between unitary vectors w1 and w2 and let wm =
(w1 + w2)/‖w1 + w2‖ be their unitary bisector. Then
$$\frac{1+\cos\alpha}{2}\,\|w_m\|_Z^2 \;-\; 2\|\Sigma_X\|\left(\frac{1-\cos\alpha}{2} + \sin^2\alpha\right) \;\le\; [w_1, w_2]_Z \;\le\; \frac{1+\cos\alpha}{2}\,\|w_m\|_Z^2~. \tag{7}$$
The term ‖ΣX‖ is overly pessimistic: we can replace it by the energy of X projected into the subspace spanned byw1 andw2 (which is bounded by 2‖ΣX‖). When α is small, a Taylor expansion of the trigonometric terms reveals that
$$\frac{2}{3\|\Sigma_X\|}\langle w_1, w_2\rangle = \frac{2}{3\|\Sigma_X\|}\cos\alpha = \frac{2}{3\|\Sigma_X\|}\left(1 - \frac{\alpha^2}{2} + O(\alpha^4)\right) \le (1-\alpha^2/4)\,\|w_m\|_Z^2 - \|\Sigma_X\|(\alpha^2/4 + \alpha^2) + O(\alpha^4) \le [w_1, w_2]_Z + O(\alpha^4)~,$$
and similarly [w1, w2]Z ≤ 〈w1, w2〉‖wm‖2Z ≤ ‖ΣX‖〈w1, w2〉 .
The local behavior of parameters w1, w2 on our regression problem is thus equivalent to that of having a linear layer, provided w1 and w2 are sufficiently close to each other. This result can be seen as a spoiler of what is coming: increasing the hidden layer dimensionality m will increase the chances to encounter pairs of vectors w1, w2 with small angle; and with it some hope of approximating the previous linear behavior thanks to the small linearization error.
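The following NumPy snippet is an informal Monte Carlo check of this claim, not part of the analysis: for a standard Gaussian input, where ‖ΣX‖ = 1, it compares an empirical estimate of [w1, w2]_Z against the two bounds of Proposition 2.3 for a few angles α.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 5, 200_000
X = rng.normal(size=(N, n))                      # X ~ N(0, I), so ||Sigma_X|| = 1

def kernel(w1, w2, X):
    """Monte Carlo estimate of [w1, w2]_Z = E{ max(0,<X,w1>) max(0,<X,w2>) }."""
    return np.mean(np.maximum(0, X @ w1) * np.maximum(0, X @ w2))

w1 = rng.normal(size=n); w1 /= np.linalg.norm(w1)
for alpha in (0.05, 0.2, 0.5):
    # Build a unit vector w2 at angle alpha from w1.
    u = rng.normal(size=n); u -= (u @ w1) * w1; u /= np.linalg.norm(u)
    w2 = np.cos(alpha) * w1 + np.sin(alpha) * u
    wm = (w1 + w2) / np.linalg.norm(w1 + w2)
    wm_Z = kernel(wm, wm, X)
    upper = (1 + np.cos(alpha)) / 2 * wm_Z
    lower = upper - 2 * 1.0 * ((1 - np.cos(alpha)) / 2 + np.sin(alpha) ** 2)
    print(f"alpha={alpha:.2f}  lower={lower:.4f}  "
          f"[w1,w2]_Z={kernel(w1, w2, X):.4f}  upper={upper:.4f}")
```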
In order to control the connectedness, we need a last definition. Given a hidden layer of size m with current parameters W ∈ Rn×m, we define a “robust compressibility” factor as
$$\delta_W(l, \alpha; m) = \min_{\|\gamma\|_0 \le l,\ \sup_i |\angle(\tilde w_i, w_i)| \le \alpha}\ \mathbb{E}\{|Y - \gamma Z(\tilde W)|^2 + \kappa\|\gamma\|_1\}~, \quad (l \le m)~. \tag{8}$$
This quantity thus measures how easily one can compress the current hidden layer representation, by keeping only a subset of l its units, but allowing these units to move by a small amount controlled by α. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related to robust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).
2.3.3 MAIN RESULT
Our main result considers now a non-asymptotic scenario given by some fixed size m of the hidden layer. Given two parameter values θA = (WA1 , WA2 ) ∈ W and θB = (WB1 , WB2 ) with Fo(θ{A,B}) ≤ λ, we show that there exists a continuous path γ : [0, 1] → W connecting θA and θB such that its oracle risk is uniformly bounded by max(λ, ε), where ε decreases with model overparametrization.
Theorem 2.4. For any θA, θB ∈ W and λ ∈ R satisfying Fo(θ{A,B}) ≤ λ, there exists a continuous path γ : [0, 1]→W such that γ(0) = θA, γ(1) = θB and
$$F_o(\gamma(t)) \le \max(\lambda, \epsilon)~, \text{ with} \tag{9}$$
$$\epsilon = \inf_{l,\alpha}\Big( \max\big\{ e(l),\ \delta_{W_1^A}(m, 0; m),\ \delta_{W_1^A}(m-l, \alpha; m),\ \delta_{W_1^B}(m, 0; m),\ \delta_{W_1^B}(m-l, \alpha; m) \big\} + C_1\alpha + O(\alpha^2) \Big)~, \tag{10–11}$$
where C1 is an absolute constant depending only on κ and P .
Some remarks are in order. First, our regularization term is currently a mix between ℓ2 norm constraints on the first layer and ℓ1 norm constraints on the second layer. We believe this is an artifact of our proof technique, and we conjecture that more general regularizations yield similar results. Next, this result uses the data distribution through the oracle bound e(m) and the covariance term. The extension to empirical risk is accomplished by replacing the probability measure P by the empirical measure $\hat P = \frac{1}{L}\sum_l \delta((x, y) - (x_l, y_l))$. However, our asymptotic analysis has to be carefully reexamined to take into account and avoid the trivial regime when M outgrows L. A consequence of Theorem 2.4 is that as m increases, the model becomes asymptotically connected, as proven in the following corollary.
Corollary 2.5. As m increases, the energy gap ε satisfies ε = O(m^{−1/n}), and therefore the level sets become connected at all energy levels.
This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016) and the general common knowledge amongst deep learning practitioners. Our next sections explore this question, and refine it by considering not only topological properties but also some rough geometrical measure of the level sets.
3 GEOMETRY OF LEVEL SETS
3.1 THE GREEDY ALGORITHM
The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be “easy” to connect two equally powerful models—i.e., two models with FoθA,B ≤ λ. A sensible measure of this ease-of-connectedness is the normalized length of the geodesic connecting one model to the other: |γA,B(t)|/|θA − θB |. This length represents approximately how far of an excursion one must make in the space of models relative to the euclidean distance between a pair of models. Thus, convex models have a geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than 1.
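As a small illustration, the normalized length of a piecewise-linear path through a list of "bead" models (flattened parameter vectors) can be computed as follows; this is a sketch of the quantity defined above, not the authors' code.

```python
import numpy as np

def normalized_path_length(beads):
    """|gamma_{A,B}| / |theta_A - theta_B| for a piecewise-linear path.

    `beads` is an ordered list of flattened parameter vectors from theta_A to
    theta_B; a convex problem gives a ratio of exactly 1."""
    beads = [np.asarray(b).ravel() for b in beads]
    path_len = sum(np.linalg.norm(b1 - b0) for b0, b1 in zip(beads[:-1], beads[1:]))
    endpoint_dist = np.linalg.norm(beads[-1] - beads[0])
    return path_len / endpoint_dist
```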
Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.
For a pair of models with network parameters θi, θj , each with Fe(θ) below a threshold L0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L0. These paths are continuous curves belonging to ΩF (λ), that is, the level sets of the loss function of interest.
Algorithm 1 Greedy Dynamic String Sampling
1: L0 ← Threshold below which path will be found
2: Φ1 ← randomly initialize θ1, train Φ(xi; θ1) to L0
3: Φ2 ← randomly initialize θ2, train Φ(xi; θ2) to L0
4: BeadList ← (Φ1, Φ2)
5: Depth ← 0
6: procedure FINDCONNECTION(Φ1, Φ2)
7:     t∗ ← t such that dγ(θ1, θ2, t)/dt |_t = 0 OR t = 0.5
8:     Φ3 ← train Φ(xi; t∗θ1 + (1 − t∗)θ2) to L0
9:     BeadList ← insert(Φ3, after Φ1, BeadList)
10:    MaxError1 ← max_t(Fe(tθ3 + (1 − t)θ1))
11:    MaxError2 ← max_t(Fe(tθ2 + (1 − t)θ3))
12:    if MaxError1 > L0 then return FindConnection(Φ1, Φ3)
13:    if MaxError2 > L0 then return FindConnection(Φ3, Φ2)
14:    Depth ← Depth + 1
The algorithm recursively builds a string of models in the space of weights which continuously connect θi to θj . Models are added and trained until the pairwise linearly interpolated loss, i.e. maxtFe(tθi + (1 − t)θj) for t ∈ (0, 1), is below the threshold, L0, for every pair of neighboring models on the string. We provide a cartoon of the algorithm in Appendix C.
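Below is a minimal Python sketch of this recursion. The helpers `train_to_threshold` (train a parameter vector until its empirical loss drops below `L0`) and `interp_max_loss` (maximum loss along a linear interpolation) are placeholders for the model-specific routines, so this illustrates the control flow rather than reproducing the authors' implementation.

```python
import numpy as np

def find_connection(theta1, theta2, L0, train_to_threshold, interp_max_loss,
                    depth=0, max_depth=20):
    """Return an ordered list of beads connecting theta1 to theta2 such that
    every pairwise linear interpolation stays below the loss threshold L0."""
    if depth > max_depth:
        raise RuntimeError("failed to certify a connection (possible disconnection)")
    # Midpoint initialization (the t* = 0.5 variant), trained back under the threshold.
    theta3 = train_to_threshold(0.5 * (theta1 + theta2), L0)
    left = ([theta1, theta3] if interp_max_loss(theta1, theta3) <= L0
            else find_connection(theta1, theta3, L0, train_to_threshold,
                                 interp_max_loss, depth + 1, max_depth))
    right = ([theta3, theta2] if interp_max_loss(theta3, theta2) <= L0
             else find_connection(theta3, theta2, L0, train_to_threshold,
                                  interp_max_loss, depth + 1, max_depth))
    return left + right[1:]   # avoid duplicating the shared bead theta3

def interp_max_loss_factory(loss_fn, n_points=20):
    """Maximum loss along the linear interpolation between two parameter vectors."""
    def interp_max_loss(a, b):
        ts = np.linspace(0.0, 1.0, n_points)
        return max(loss_fn((1 - t) * a + t * b) for t in ts)
    return interp_max_loss
```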
3.2 FAILURE CONDITIONS AND PRACTICALITIES
While the algorithm presented will faithfully certify two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment more on diagnosing disconnections more carefully in Appendix E.
Further, if the MaxError exceeds L0 for every new recursive branch as the algorithm progresses, the worst case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth))—at least up until a critical value of L0.
To aid convergence, either of the choices in line 7 of the algorithm works in practice: choosing t∗ at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t∗ = .5 is more stable, but slower. Finally, we find that training Φ3 to αL0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics. We provide further implementation details in Section 4.
4 NUMERICAL EXPERIMENTS
For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of “beads”, or the number intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.
The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required “beads” to form a low-loss connection.
4.1 POLYNOMIAL REGRESSION
We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease-of-analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power-law, as demonstrated in Table 1 Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.
The cubic regression task exhibits an interesting feature around L0 = .15 in Table 1 Fig. 2, where the normalized length spikes, but the number of required beads remains low. Up until this point, the
cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.
4.2 CONVOLUTIONAL NEURAL NETWORKS
To test the algorithm on larger architectures, we ran it on the MNIST hand written digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibits strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk-understanding that MNIST is highly convex and/or “easy”. The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.
4.3 RECURRENT NEURAL NETWORKS
To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1 Fig. 5. Noteably, even for a radically different architecture, loss function, and data set, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets—i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.
5 DISCUSSION
We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles – topological and geometrical aspects – that build on top of each other.
On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how ‘non-convex’ an optimization problem is, and verify that the optimization of quintessential deep learning tasks – CIFAR-10 and MNIST classification using CNNs, and next word prediction using LSTMs – behaves in a nearly convex fashion up until they reach high accuracy levels.
That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:
• Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer case.
• Empirical versus Oracle Risk. A big limitation of our theory is that right now it does not inform us on the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and stochastic gradient in the ability to do small uphill climbs is an open line of research.
• Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.
• Improving numerics with Hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments using the less greedy algorithm described in Appendix A.
ACKNOWLEDGMENTS
We would like to thank Mark Tygert for pointing out the reference to the -nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in early version of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.
A CONSTRAINED DYNAMIC STRING SAMPLING
While the algorithm presented in Sec. 3.1 is fast for sufficiently smooth families of loss surfaces with few saddle points, here we present a slightly modified version which, while slower, provides more control over the convergence of the string. We did not use the algorithm presented in this section for our numerical studies.
Instead of training intermediate models via full SGD to a desired accuracy as in step 8 of the algorithm, intermediate models are be subject to a constraint that ensures they are “close” to the neighboring models on the string. Specifically, intermediate models are constrained to the unique hyperplane in weightspace equidistant from its two neighbors. This can be further modified by additional regularization terms to control the “springy-ness” of the string. These heuristics could be chosen to try to more faithfully sample the geodesic between two models.
In practice, for a given model on the string, θi, these two regularizations augment the standard loss by:
$$\tilde F(\theta) = F(\theta) + \zeta\big(\|\theta_{i-1} - \theta_i\| + \|\theta_{i+1} - \theta_i\|\big) + \kappa\left\| \frac{(\theta_{i-1}-\theta_{i+1})/2}{\|(\theta_{i-1}-\theta_{i+1})/2\|} \cdot \frac{\theta_i - (\theta_{i-1}-\theta_{i+1})/2}{\|\theta_i - (\theta_{i-1}-\theta_{i+1})/2\|} \right\|.$$
The ζ regularization term controls the “springy-ness” of the weight string, and the κ regularization term controls how far off the hyperplane a new model can deviate.
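A hedged NumPy sketch of this augmented objective follows; it uses the reconstruction of the penalty given above, which may differ in detail from the authors' implementation, and `F` stands in for the model's standard loss.

```python
import numpy as np

def augmented_loss(F, theta_i, theta_prev, theta_next, zeta=1e-2, kappa=1e-2):
    """Spring term keeps theta_i close to its neighbors; the second term penalizes
    the normalized component of theta_i's offset along the inter-neighbor axis."""
    spring = (np.linalg.norm(theta_prev - theta_i)
              + np.linalg.norm(theta_next - theta_i))
    axis = (theta_prev - theta_next) / 2.0
    offset = theta_i - (theta_prev - theta_next) / 2.0
    hyperplane = np.abs((axis / np.linalg.norm(axis)) @ (offset / np.linalg.norm(offset)))
    return F(theta_i) + zeta * spring + kappa * hyperplane
```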
Because adapting DSS to use this constraint is straightforward, here we will describe an alternative “breadth-first” approach wherein models are trained in parallel until convergence. This alternative approach has the advantage that it will indicate a disconnection between two models “sooner” in training. The precise geometry of the loss surface will dictate which approach to use in practice.
Given two random models σi and σj where |σi − σj | < L0, we aim to follow the evolution of the family of models connecting σi to σj . Intuitively, almost every continuous path in the space of random models connecting σi to σj has, on average, the same (high) loss. For simplicity, we choose to initialize the string to the linear segment interpolating between these two models. If this entire segment is evolved via gradient descent, the segment will either evolve into a string which is entirely contained in a basin of the loss surface, or some number of points will become fixed at a higher loss. These fixed points are difficult to detect directly, but will be indirectly detected by the persistence of a large interpolated loss between two adjacent models on the string.
The algorithm proceeds as follows:
(0.) Initialize model string to have two models, σi and σj .
1. Begin training all models to the desired loss, keeping the instantaneous loss, L0(t), of all models being trained approximately constant.
2. If the pairwise interpolated loss between σn and σn+1 exceeds L0(t), insert a new model at the maximum of the interpolated loss (or halfway) between these two models.
3. Repeat steps (1) and (2) until all models (and interpolated errors) are below a threshold loss L0(tfinal) := L0, or until a chosen failure condition (see 3.2).
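A minimal sketch of this breadth-first variant, with `train_one_step` and `interp_max_loss` as placeholders for the model-specific routines (the beads are flattened parameter arrays):

```python
def evolve_string(beads, L0_schedule, train_one_step, interp_max_loss):
    """Breadth-first variant: train all beads in parallel toward a decreasing
    threshold, inserting midpoints wherever the interpolated loss lags behind."""
    for L0_t in L0_schedule:                      # decreasing sequence of thresholds
        beads = [train_one_step(b, L0_t) for b in beads]
        refined = [beads[0]]
        for prev, nxt in zip(beads[:-1], beads[1:]):
            if interp_max_loss(prev, nxt) > L0_t:
                refined.append(train_one_step(0.5 * (prev + nxt), L0_t))
            refined.append(nxt)
        beads = refined
    return beads
```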
B PROOFS
B.1 PROOF OF PROPOSITION 2.1
Suppose that θ1 is a local minima and θ2 is a global minima, but F (θ1) > F (θ2). If λ = F (θ1), then clearly θ1 and θ2 both belong to ΩF (λ). Suppose now that ΩF (λ) is connected. Then we
could find a smooth (i.e. continuous and differentiable) path γ(t) with γ(0) = θ1, γ(1) = θ2 and F (γ(t)) ≤ λ = F (θ1). But this contradicts the strict local minima status of θ1, and therefore ΩF (λ) cannot be connected .
B.2 PROOF OF PROPOSITION 2.2
Let us first consider the case with κ = 0. We proceed by induction over the number of layers K. For K = 1, the loss F (θ) is convex. Let θA, θB be two arbitrary points in a level set Ωλ. Thus F (θA) ≤ λ and F (θB) ≤ λ. By definition of convexity, a linear path is sufficient in that case to connect θA and θB :
F ((1− t)θA + tθB) ≤ (1− t)F (θA) + tF (θB) ≤ λ . Suppose the result is true for K − 1. Let θA = (WA1 , . . . ,WAK ) and θB = (WB1 , . . . ,WBK ) with F (θA) ≤ λ, F (θB) ≤ λ. Since nj ≥ min(n1, nK) for j = 2 . . .K − 1, we can find k∗ = {1,K − 1} such that nk∗ ≥ min(nk∗−1, nk∗+1). For each W1, . . . ,WK , we denote W̃j = Wj for j 6= k∗, k∗ − 1 and W̃k∗ = Wk∗−1Wk∗ . By induction hypothesis, the loss expressed in terms of θ̃ = (W̃1, . . . , W̃K−1) is connected between θ̃A and θ̃B . Let W̃k∗(t) the corresponding linear path projected in the layer k∗. We need to produce a path in the variables Wk∗−1(t), Wk∗(t) such that:
i Wk∗−1(0) = WAk∗−1, Wk∗−1(1) = W B k∗−1,
ii Wk∗(0) = WAk∗ , Wk∗(1) = W B k∗ ,
iii Wk∗(t)Wk∗−1(t) = W̃k∗−1(t) for t ∈ (0, 1).
We construct it as follows. Let
Wk∗(t) = tW B k∗ + (1− t)WAk∗ + t(1− t)V ,
Wk∗−1(t) = Wk∗(t) †W̃k∗−1(t) ,
where Wk∗(t)† = (Wk∗(t)TWk∗(t))−1Wk∗(t)T denotes the pseudoinverse and V is a nk∗−1×nk∗ matrix drawn from a iid distribution. Conditions (i) and (ii) are immediate from the definition, and condition (iii) results from the fact that
Wk∗(t)Wk∗(t) † = IN∗k ,
since W ∗k (t) has full rank for all t ∈ (0, 1). Finally, let us prove that the result is also true when K = 2 and κ > 0. We construct the path using the variational properties of atomic norms Bach (2013). When we pick the ridge regression regularization, the corresponding atomic norm is the nuclear norm:
$$\|X\|_* = \min_{UV^T = X}\ \frac{1}{2}\big(\|U\|^2 + \|V\|^2\big)~.$$
The path is constructed by exploiting the convexity of the variational norm ‖X‖∗. Let θ^A = (W_1^A, W_2^A) and θ^B = (W_1^B, W_2^B), and we define W̃ = W_1 W_2. Since W̃^{\{A,B\}} = W_1^{\{A,B\}} W_2^{\{A,B\}}, it results that
$$\|\tilde W^{\{A,B\}}\|_* \le \frac{1}{2}\big(\|W_1^{\{A,B\}}\|^2 + \|W_2^{\{A,B\}}\|^2\big)~. \tag{12}$$
From (12) it results that the loss Fo(W1,W2) can be minored by another loss expressed in terms of W̃ of the form E{|Y − W̃X|2}+ 2κ‖W̃‖∗ , which is convex with respect to W̃ . Thus a linear path in W̃ from W̃A to W̃B is guaranteed to be below Fo(θ{A,B}). Let us define
∀ t , W1(t),W2(t) = arg min UV T=W̃ (t) (‖U‖2 + ‖V ‖2) .
One can verify that we can first consider a path (βA1 (s), β A 2 (s)) from (W A 1 ,W A 2 ) to (W1(0),W2(0) such that ∀ s β1(s)β2(s) = W̃A and ‖β1(s)‖2 + ‖β2(s)‖2 decreases ,
and similarly for (WB1 ,W B 2 ) to (W1(1),W2(1). The path (β A {1,2}(s),W{1,2}(t), β B {1,2}(s)) satisfies (i-iii) by definition. We also verify that
$$\|W_1(t)\|^2 + \|W_2(t)\|^2 = 2\|\tilde W(t)\|_* \le 2(1-t)\|\tilde W(0)\|_* + 2t\|\tilde W(1)\|_* \le (1-t)\big(\|W_1(0)\|^2 + \|W_2(0)\|^2\big) + t\big(\|W_1(1)\|^2 + \|W_2(1)\|^2\big)~.$$
Finally, we verify that the paths we have just created, when applied to θA arbitrary and θB = θ∗ a global minimum, are strictly decreasing, again by induction. For K = 1, this is again an immediate consequence of convexity. ForK > 1, our inductive construction guarantees that for any 0 < t < 1, the path θ(t) = (Wk(t))k≤K satisfies Fo(θ(t)) < Fo(θA). This concludes the proof .
B.3 PROOF OF PROPOSITION 2.3
Let A(w1, w2) = {x ∈ Rn; 〈x,w1〉 ≥ 0 , 〈x,w2〉 ≥ 0} .
By definition, we have
$$\langle w_1, w_2\rangle_Z = \mathbb{E}\{\max(0, \langle X, w_1\rangle)\max(0, \langle X, w_2\rangle)\} \tag{13}$$
$$= \int_{A(w_1,w_2)} \langle x, w_1\rangle \langle x, w_2\rangle \, dP(x)~, \tag{14}$$
$$= \int_{Q(A(w_1,w_2))} \langle Q(x), w_1\rangle \langle Q(x), w_2\rangle \, d\bar P(Q(x))~, \tag{15}$$
whereQ is the orthogonal projection onto the space spanned byw1 andw2 and dP̄ (x) = dP̄ (x1, x2) is the marginal density on that subspace. Since this projection does not interfere with the rest of the proof, we abuse notation by dropping the Q and still referring to dP (x) as the probability density.
Now, let $r = \frac{1}{2}\|w_1 + w_2\|$, so that $r^2 = \frac{1+\cos(\alpha)}{2}$, and let $d = \frac{w_2 - w_1}{2}$. By construction we have
$$w_1 = r w_m - d~, \quad w_2 = r w_m + d~, \quad \text{and thus} \quad \langle x, w_1\rangle\langle x, w_2\rangle = r^2 |\langle x, w_m\rangle|^2 - |\langle x, d\rangle|^2~. \tag{16}$$
By denoting $C(w_m) = \{x \in \mathbb{R}^n;\, \langle x, w_m\rangle \ge 0\}$, observe that $A(w_1, w_2) \subseteq C(w_m)$. Let us denote by $B = C(w_m) \setminus A(w_1, w_2)$ the disjoint complement. It results that
$$\langle w_1, w_2\rangle_Z = \int_{A(w_1,w_2)} \langle x, w_1\rangle\langle x, w_2\rangle\, dP(x)$$
$$= \int_{C(w_m)} \big[r^2|\langle x, w_m\rangle|^2 - |\langle x, d\rangle|^2\big]\, dP(x) - r^2\int_{B} |\langle x, w_m\rangle|^2\, dP(x) + \int_{B} |\langle x, d\rangle|^2\, dP(x)$$
$$= r^2\|w_m\|_Z^2 - \underbrace{r^2\int_{B} |\langle x, w_m\rangle|^2\, dP(x)}_{E_1} - \underbrace{\int_{A(w_1,w_2)} |\langle x, d\rangle|^2\, dP(x)}_{E_2}~. \tag{17}$$
We conclude by bounding each error term E1 and E2 separately:
$$0 \le E_1 \le r^2|\sin(\alpha)|^2 \int_B \|x\|^2\, dP(x) \le r^2 |\sin(\alpha)|^2\, 2\|\Sigma_X\|~, \tag{18}$$
since every point in B by definition has angle greater than π/2 − α from w_m. Also,
$$0 \le E_2 \le \|d\|^2 \int_{A(w_1,w_2)} \|x\|^2\, dP(x) \le \frac{1-\cos(\alpha)}{2}\, 2\|\Sigma_X\| \tag{19}$$
by direct application of Cauchy-Schwartz. The proof is completed by plugging the bounds from (18) and (19) into (17) .
B.4 PROOF OF THEOREM 2.4
Consider a generic α and l ≤ m. A path from θA to θB will be constructed by concatenating the following paths:
1. from θA to θlA, the best linear predictor using the same first layer as θA,
2. from θlA to θsA, the best (m− l)-term approximation using perturbed atoms from θA, 3. from θsA to θ∗ the oracle l term approximation,
4. from θ∗ to θsB , the best (m− l)-term approximation using perturbed atoms from θB , 5. from θsB to θlB , the best linear predictor using the same first layer as θB ,
6. from θlB to θB .
The proof will study the increase in the loss along each subpath and aggregate the resulting increase into a common bound.
Subpaths (1) and (6) only involve changing the parameters of the second layer while leaving the firstlayer weights fixed, which define a convex loss. Therefore a linear path is sufficient to guarantee that the loss along that path will be upper bounded by λ on the first end and δWA1 (m, 0,m) on the other end.
Concerning subpaths (3) and (4), we notice that they can also be constructed using only parameters of the second layer, by observing that one can fit into a single n × m parameter matrix both the (m − l)-term approximation and the oracle l-term approximation. Indeed, let us describe subpath (3) in detail ( subpath (4) is constructed analogously by replacing the role of θsA with θsB). Let W̃A the first-layer parameter matrix associated with the m − l-sparse solution θsA, and let γA denote its second layer coefficients, which is a m-dimensional vector with at most m − l non-zero coefficients. LetW∗ be the first-layer matrix of the l-term oracle approximation, and γ∗ the corresponding second-layer coefficients. Since there are only m − l columns of W̃A that are used, corresponding to the support of γA, we can consider a path θ̄ that replaces the remaining l columns with those from W∗ while keeping the second-layer vector γA fixed. Since the modified columns correspond to zeros in γA, such paths have constant loss. Call W̄ the resulting first-layer matrix, containing both the active m− l active columns of W̃A and the l columns of W∗ in the positions determined by the zeros of γA. Now we can consider the linear subpath that interpolates between γA and γ∗ while keeping the first layer fixed at W̄ . Since again this is a linear subpath that only moves second-layer coefficients, it is non-increasing thanks to the convexity of the loss while fixing the first layer. We easily verify that at the end of this linear subpath we are using the oracle l-term approximation, which has loss e(l), and therefore subpath (3) incurs in a loss that is bounded by its extremal values δWA1 (m− l, α,m) and e(l).
Finally, we need to show how to construct the subpaths (2) and (5), which are the most delicate step since they cannot be bounded using convexity arguments as above. Let W̃A be the resulting perturbed first-layer parameter matrix with m − l sparse coefficients γA. Let us consider an auxiliary regression of the form W̄ = [WA; W̃A] ∈ Rn×2m and regression parameters
β̄1 = [β1; 0] , β̄2 = [0; γA] .
Clearly E{|Y − β1W |2}+ κ‖β1‖1 = E{|Y − β1WA|2}+ κ‖β1‖1
and similarly for β2. By convexity, the augmented linear path η(t) = (1− t)β1 + tβ2 thus satisfies
∀ t , L(t) = E{|Y − η(t)W |2}+ κ‖η(t)‖1 ≤ max(L(0), L(1)) .
Let us now approximate this augmented linear path with a path in terms of first and second layer weights. We consider
η1(t) = (1− t)WA + tW̃A , and η2(t) = (1− t)β1 + tγA .
We have that
$$F_o(\{\eta_1(t), \eta_2(t)\}) = \mathbb{E}\{|Y - \eta_2(t) Z(\eta_1(t))|^2\} + \kappa\|\eta_2(t)\|_1 \tag{20}$$
$$\le \mathbb{E}\{|Y - \eta_2(t) Z(\eta_1(t))|^2\} + \kappa\big((1-t)\|\beta_1\|_1 + t\|\gamma_A\|_1\big) = L(t) + \mathbb{E}\{|Y - \eta_2(t) Z(\eta_1(t))|^2\} - \mathbb{E}\{|Y - (1-t)\beta_1 Z(W^A) - t\gamma_A Z(\tilde W_A)|^2\}~. \tag{21}$$
Finally, we verify that
$$\Big|\mathbb{E}\{|Y - \eta_2(t) Z(\eta_1(t))|^2\} - \mathbb{E}\{|Y - (1-t)\beta_1 Z(W^A) - t\gamma_A Z(\tilde W_A)|^2\}\Big| \le 4\alpha \max(\mathbb{E}|Y|^2, \sqrt{\mathbb{E}|Y^2|})\,\|\Sigma_X\|\,(\kappa^{-1/2} + \alpha\sqrt{\mathbb{E}|Y^2|}\,\kappa^{-1}) + o(\alpha^2)~. \tag{22}$$
Indeed, from Proposition 2.3, and using the fact that
$$\forall\, i \le M,\ t \in [0,1]~, \quad \big|\angle((1-t)w_i^A + t\tilde w_i^A;\, w_i^A)\big| \le \alpha~, \quad \big|\angle((1-t)w_i^A + t\tilde w_i^A;\, \tilde w_i^A)\big| \le \alpha~,$$
we can write
$$(1-t)\beta_{1,i}\, z(w_i^A) + t\gamma_{A,i}\, z(\tilde w_i^A) \stackrel{d}{=} \eta_2(t)_i\, z(\eta_1(t)_i) + n_i~,$$
with $\mathbb{E}\{|n_i|^2\} \le 4|\eta_2(t)_i|^2\|\Sigma_X\|\alpha^2 + O(\alpha^4)$ and $\mathbb{E}|n_i| \le 2|\eta_2(t)_i|\,\alpha\sqrt{\|\Sigma_X\|}$ using concavity of the moments. Thus
$$\Big|\mathbb{E}\{|Y - \eta_2(t)Z(\eta_1(t))|^2\} - \mathbb{E}\{|Y - (1-t)\beta_1 Z(W^A) - t\gamma_A Z(\tilde W_A)|^2\}\Big| \le 2\,\mathbb{E}\Big\{\sum_i (Y - \eta_2(t)Z(\eta_1(t)))\, n_i\Big\} + \mathbb{E}\Big\{\big|\sum_i n_i\big|^2\Big\}$$
$$\le 4\Big(\alpha\sqrt{\mathbb{E}|Y^2|}\,\|\Sigma_X\|\,\|\eta_2\| + \alpha^2(\|\eta_2\|_1)^2\|\Sigma_X\|\Big) \le 4\alpha\max(1, \sqrt{\mathbb{E}|Y^2|})\,\|\Sigma_X\|\,(\|\eta_2\|_1 + \alpha\|\eta_2\|_1^2) + o(\alpha^2)$$
$$\le 4\alpha\max(\sqrt{\mathbb{E}|Y^2|}, \mathbb{E}|Y^2|)\,\|\Sigma_X\|\,(\kappa^{-1} + \alpha\sqrt{\mathbb{E}|Y^2|}\,\kappa^{-2}) + o(\alpha^2)~,$$
which proves (22).
We have just constructed a path from θA to θB , in which all subpaths except (2) and (5) have energy maximized at the extrema due to convexity, given respectively by λ, δW 1A(m, 0,m), δW 1A(m − l, α,m), e(l), δW 1B (m − l, α,m), and δW 1B (m, 0,m). For the two subpaths (2) and (5), (22) shows that it is sufficient to add the corresponding upper bound to the linear subpath, which is of the form Cα + o(α2) where C is an explicit constant independent of θ. Since l and α are arbitrary, we are free to pick the infimum, which concludes the proof.
B.5 PROOF OF COROLLARY 2.5
Let us consider a generic first layer weight matrix W ∈ Rn×m. Without loss of generality, we can assume that ‖wk‖ = 1 for all k, since increasing the norm of ‖wk‖within the unit ball has no penalty in the loss, and we can compensate this scaling in the second layer thanks to the homogeneity of the half-rectification. Since this results in an attenuation of these second layer weights, they too are guaranteed not to increase the loss.
From Vershynin (2010) [Lemma 5.2] we verify that the covering number $\mathcal{N}(S^{n-1}, \epsilon)$ of the Euclidean unit sphere $S^{n-1}$ satisfies
$$\mathcal{N}(S^{n-1}, \epsilon) \le \left(1 + \frac{2}{\epsilon}\right)^n~,$$
which means that we can cover the unit sphere with an $\epsilon$-net of size $\mathcal{N}(S^{n-1}, \epsilon)$.
Let $0 < \eta < n^{-1}(1 + n^{-1})^{-1}$, and let us pick, for each $m$, $\epsilon_m = m^{\frac{\eta-1}{n}}$. Let us consider its corresponding $\epsilon_m$-net of size
$$u_m = \mathcal{N}(S^{n-1}, \epsilon_m) \simeq \left(1 + \frac{2}{\epsilon_m}\right)^n \simeq m^{1-\eta}~.$$
Since we have m vectors in the unit sphere, it results from the pigeonhole principle that at least one element of the net will be associated with at least vm = m·um^{−1} ≃ m^η vectors; in other words, we are guaranteed to find amongst our weight vectors a collection Qm of vm ≃ m^η vectors that are all at an angle at most 2εm apart. Let us now apply Theorem 2.4 by picking l = vm and α = εm. We need to see that the terms involved in the bound all converge to 0 as m→∞. The contribution of the oracle error e(vm) − e(m) goes to zero as m → ∞ by the fact that limm→∞ e(m) exists (it is a decreasing, positive sequence) and that vm →∞. Let us now verify that δ(m − vm, εm; m) also converges to zero. We are going to prune the first layer by removing one by one the vectors in Qm. Removing one of these vectors at a time incurs an error of the order of εm. Indeed, let wk be one of such vectors and let β′ be the solution of
$$\min_{\beta'} E(\beta') = \min_{\beta' = (\beta_f; \beta_k) \in \mathbb{R}^k}\ \mathbb{E}\{|Y - \beta_f^T Z(W_{-k}) - \beta_k z(w_k)|^2\} + \kappa(\|\beta_f\|_1 + |\beta_k|)~,$$
where W−k is a shorthand for the matrix containing the rest of the vectors that have not been discarded yet. Removing the vector wk from the first layer increases the loss by a factor that is upper bounded by E(βp)− E(β), where
$$(\beta_p)_j = \begin{cases} \beta'_j & \text{for } j < k-1~, \\ \beta'_{k-1} + \beta'_k & \text{otherwise,} \end{cases}$$
since now βp is a feasible solution for the pruned first layer.
Let us finally bound $E(\beta_p) - E(\beta)$. Since $\angle(w_k, w_{k-1}) \le \epsilon_m$, it results from Proposition 2.3 that
$$z(w_k) \stackrel{d}{=} z(w_{k-1}) + n~,$$
with $\mathbb{E}\{|n|^2\} \le C\alpha^2$ for some constant $C$ independent of $m$. By redefining $p_1 = Y - \beta_p^T Z(W_{-k}) - \frac{1}{2}n$ and $p_2 = \frac{1}{2}n$, we have
$$\mathbb{E}\{|Y - \beta_p^T Z(W_{-k})|^2\} - \mathbb{E}\{|Y - \beta'^T Z(W_{-k}) - \beta_k z(w_k)|^2\} = \mathbb{E}\{|p_1 + p_2|^2\} - \mathbb{E}\{|p_1 - p_2|^2\} = 4\,\mathbb{E}\{|p_1 p_2|\}$$
$$\le \sqrt{\mathbb{E}\Big\{\Big|Y - \beta_p^T Z(W_{-k}) - \tfrac{1}{2}n\Big|^2\Big\}}\ \sqrt{\mathbb{E}\{|n|^2\}} \le (C + \alpha)\,\alpha \simeq \epsilon_m~,$$
where $C$ only depends on $\mathbb{E}\{|Y|^2\}$. We also verify that $\|\beta_p\|_1 \le \|\beta'\|_1$. It results that removing $|Q_m|$ of such vectors incurs an increase of the loss of at most $|Q_m|\,\epsilon_m \simeq m^\eta\, m^{\frac{\eta-1}{n}} = m^{\eta + \frac{\eta-1}{n}}$. Since we picked $\eta$ such that $\eta + \frac{\eta-1}{n} < 0$, this term converges to zero. The proof is finished.
C CARTOON OF ALGORITHM
Refer to Fig. 2.
D VISUALIZATION OF CONNECTION
Because the weight matrices are anywhere from high to extremely high dimensional, for the purposes of visualization we projected the models on the connecting path into a three dimensionsal subspace. Snapshots of the algorithm in progress for the quadratic regression task are indicated in Fig. 3. This was done by vectorizing all of the weight matrices for all the beads for a given connecting path, and then performing principal component analysis to find the three highest weight projections for the collection of models that define the endpoints of segments for a connecting path—i.e., the
θi discussed in the algorithm. We then projected the connecting string of models onto these three directions.
The color of the strings was chosen to be representative of the test loss under a log mapping, so that extremely high test loss mapped to red, whereas test loss near the threshold mapped to blue. An animation of the connecting path can be seen on our Github page.
Finally, projections onto pairs of principal components are indicated by the black curves.
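A small NumPy sketch of this projection step (an illustration of the procedure, not the original plotting code):

```python
import numpy as np

def project_beads_3d(beads):
    """Project flattened weight vectors of the beads onto their top-3 principal components."""
    W = np.stack([np.asarray(b).ravel() for b in beads])     # (n_beads, n_params)
    W_centered = W - W.mean(axis=0, keepdims=True)
    # Rows of Vt are principal directions; keep the three with largest singular values.
    _, _, Vt = np.linalg.svd(W_centered, full_matrices=False)
    return W_centered @ Vt[:3].T                              # (n_beads, 3)
```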
E A DISCONNECTION
E.1 A DISCONNECTION
As a sanity check for the algorithm, we also applied it to a problem for which we know that it is not possible to connect models of equivalent power by the arguments of section 2.3.1. The input data is 3 points in R2, and the task is to permute the datapoints, i.e. map {x1, x2, x3} → {x2, x3, x1}. This map requires at least 12 parameters in general for the three linear maps which take xi → xj for i, j ∈ {{1, 2}, {2, 3}, {3, 1}}. Our archticture was a 2-3-2 fully connected neural network with a single relu nonlinearity after the hidden layer—a model which clearly has 12 free parameters by construction. The two models we tried to connect were a single model, θ, and a copy of θ with the first two neurons in the hidden layer permuted, θ̃σ . The algorithm fails to converge when initialized with these two models. We provide a visualization of the string of models produced by the algorithm in Fig. 4.
In general, a persistent high interpolated loss between two neighboring beads on the string of models could arise from either a slowly converging, connected pair of models or from a truly disconnected pair of models. “Proving” a disconnection at the level of numerical experiments is intractable in general, but a collection of negative results—i.e., failures to converge—are highly suggestive of a true disconnection. | 1. What is the significance of the paper's contributions in the context of prior research?
2. How do the proposed results compare to those of previous works in terms of strength and technicality?
3. Are there any concerns regarding the validity or convincingness of the main theoretical result?
4. How does the reviewer assess the quality of the paper's writing, experimental section, and overall presentation? | Review | Review
This is an incremental result (several related results that the authors of the paper mentioned here were already published). The authors claim that they can get rid of the technical assumptions from the previous papers but the results they propose are significantly weaker and also quite technical. The main theoretical result - Theorem 2.4 is not convincing at all. Furthermore, the paper is badly written. No theoretical intuition is given, the experimental section is weak and in some places the formatting is wrong. |
1. What are the contributions and key insights provided by the paper on the energy landscape of neural networks?
2. How does the paper's analysis of the loss function connectivity and non-convexity impact our understanding of training neural networks?
3. What are the limitations of the paper's approach and assumptions, particularly regarding its applicability to real-world scenarios?
4. Can the paper's findings be applied to improve the efficiency or effectiveness of optimization algorithms used in deep learning?
5. Are there any open questions or areas for further research related to the paper's topics?

Review
This paper studies the energy landscape of the loss function in neural networks. It is generally clearly written and nicely provides intuitions for the results. One main contribution is to show that the level sets of the loss become connected as the network is increasingly overparameterized. It also quantifies, in a way, the degree of disconnectedness possible in terms of the increase in loss that one must allow to find a connected path. It would seem that this might have some implications for the likelihood of escaping local minima with stochastic gradient descent. The paper also presents a simple algorithm for finding geodesic paths between two networks such that the loss is decreasing along the path. Using this, they show that the loss seems to become more nonconvex when the loss is smaller. This is also quite interesting.
The work does have some significant limitations, which is not surprising given the difficulty of fully analyzing the network loss function. However, the authors are quite clear about these limitations, which especially include not yet analyzing deep networks and analyzing only the oracle loss, and not the empirical loss. I would have also appreciated a little more practical discussion of the bound in Theorem 2.4. It is hard to tell whether this bound is tight enough to be practically relevant.
ICLR | Title
Topology and Geometry of Half-Rectified Network Optimization
Abstract
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a prime example of a high-dimensional non-convex problem. Some insights were recently gained using spin glass models and mean-field approximations, but at the expense of strongly simplifying the nonlinear nature of the model. In this work, we do not make any such assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. Our theoretical work quantifies and formalizes two important folklore facts: (i) the landscape of deep linear networks has a radically different topology from that of deep half-rectified ones, and (ii) the energy landscape in the non-linear case is fundamentally controlled by the interplay between the smoothness of the data distribution and model over-parametrization. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay. The conditioning of gradient descent is the next challenge we address. We study this question through the geometry of the level sets, and we introduce an algorithm to efficiently estimate the regularity of such sets on large-scale networks. Our empirical results show that these level sets remain connected throughout the learning phase, suggesting a near-convex behavior, but they become exponentially more curvy as the energy level decays, in accordance with what is observed in practice with very low curvature attractors.
1 INTRODUCTION
Optimization is a critical component in deep learning, governing its success in different areas of computer vision, speech processing and natural language processing. The prevalent optimization strategy is Stochastic Gradient Descent, invented by Robbins and Monro in the 1950s. The empirical performance of SGD on these models is better than one could expect in generic, arbitrary non-convex loss surfaces, often aided by modifications yielding significant speedups (Duchi et al., 2011; Hinton et al., 2012; Ioffe & Szegedy, 2015; Kingma & Ba, 2014). This raises a number of theoretical questions as to why neural network optimization does not suffer in practice from poor local minima.
The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a paradigmatic example of a hard, high-dimensional, non-convex problem. Recent work has explored models from statistical physics such as spin glasses Choromanska et al. (2015), in order to understand the macroscopic properties of the system, but at the expense of strongly simplifying the nonlinear nature of the model. Other authors have advocated that the real danger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al. (2014), although recent results rigorously establish that gradient descent does not get stuck on saddle points Lee et al. (2016) but merely slowed down. Other notable recent contributions are Kawaguchi (2016), which further develops the spin-glass connection from Choromanska et al. (2015) and resolves the linear case by showing that no poor local minima exist; Sagun et al. (2014) which also
discusses the impact of stochastic vs plain gradient, Soudry & Carmon (2016), that studies Empirical Risk Minimization for piecewise multilayer neural networks under overparametrization (which needs to grow with the amount of available data), and Goodfellow et al. (2014), which provided insightful intuitions on the loss surface of large deep learning models and partly motivated our work. Additionally, the work Safran & Shamir (2015) studies some topological properties of homogeneous nonlinear networks and shows how overparametrization acts upon these properties, and the pioneering Shamir (2016) studied the distribution-specific hardness of optimizing non-convex objectives. Lastly, several papers submitted concurrently and independently of this one deserve note, particularly Swirszcz et al. (2016) which analyzes the explicit criteria under which sigmoid-based neural networks become trapped by poor local minima, as well as Tian (2017), which offers a complementary study of two layer ReLU based networks, and their learning dynamics.
In this work, we do not make any linearity assumption and study conditions on the data distribution and model architecture that prevent the existence of bad local minima. The loss surface F (θ) of a given model can be expressed in terms of its level sets Ωλ, which contain for each energy level λ all parameters θ yielding a loss smaller than or equal to λ. A first question we address concerns the topology of these level sets, i.e. under which conditions they are connected. Connected level sets imply that one can always find a descent direction at each energy level, and therefore that no poor local minima can exist. In the absence of nonlinearities, deep (linear) networks have connected level sets Kawaguchi (2016). We first generalize this result to include ridge regression (in the two layer case) and provide an alternative, more direct proof of the general case. We then move to the half-rectified case and show that the topology is intrinsically different and clearly dependent on the interplay between data distribution and model architecture. Our main theoretical contribution is to prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay.
Beyond the question of whether the loss contains poor local minima or not, the immediate follow-up question that determines the convergence of algorithms in practice is the local conditioning of the loss surface. It is thus related not to the topology but to the shape or geometry of the level sets. As the energy level decays, one expects the level sets to exhibit more complex irregular structures, which correspond to regions where F (θ) has small curvature. In order to verify this intuition, we introduce an efficient algorithm to estimate the geometric regularity of these level sets by approximating geodesics of each level set starting at two random boundary points. Our algorithm uses dynamic programming and can be efficiently deployed to study mid-scale CNN architectures on MNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical results show that these models have a nearly convex behavior up until their lowest test errors, with a single connected component that becomes more elongated as the energy decays. The rest of the paper is structured as follows. Section 2 presents our theoretical results on the topological connectedness of multilayer networks. Section 3 presents our path discovery algorithm and Section 4 covers the numerical experiments.
2 TOPOLOGY OF LEVEL SETS
Let P be a probability measure on a product space X ×Y , where we assume X and Y are Euclidean vector spaces for simplicity. Let {(xi, yi)}i be an iid sample of size L drawn from P defining the training set. We consider the classic empirical risk minimization of the form
Fe(θ) = (1/L) ∑_{l=1}^{L} ‖Φ(x_l; θ) − y_l‖² + κR(θ) , (1)
where Φ(x; θ) encapsulates the feature representation that uses parameters θ ∈ RS and R(θ) is a regularization term. In a deep neural network, θ contains the weights and biases used in all layers. For convenience, in our analysis we will also use the oracle risk minimization:
Fo(θ) = E(X,Y )∼P ‖Φ(X; θ)− Y ‖2 + κR(θ) . (2)
Our setup considers the case where R consists of either ℓ1 or ℓ2 norms, as we shall describe below. They correspond to well-known sparse and ridge regularization respectively.
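To make these objects concrete, the following is a minimal NumPy sketch (not taken from the paper) of the empirical risk (1) for a generic model Φ with either ridge (ℓ2) or sparse (ℓ1) regularization; the linear model and the random data used in the demonstration are placeholder assumptions of this illustration.

```python
import numpy as np

def empirical_risk(phi, theta, X, Y, kappa=1e-3, reg="l2"):
    """Empirical risk F_e(theta) = (1/L) * sum_l ||phi(x_l; theta) - y_l||^2 + kappa * R(theta).

    phi   : callable mapping (X, theta) -> predictions with the same shape as Y
    theta : list of parameter arrays (all layers' weights)
    reg   : "l2" (ridge) or "l1" (sparse), matching the two choices of R discussed above
    """
    residuals = phi(X, theta) - Y
    data_term = np.mean(np.sum(residuals ** 2, axis=-1))
    flat = np.concatenate([p.ravel() for p in theta])
    R = np.sum(flat ** 2) if reg == "l2" else np.sum(np.abs(flat))
    return data_term + kappa * R

# Example: a deep linear model Phi(x; theta) = W2 W1 x (cf. the linear case below).
def phi_linear(X, theta):
    W1, W2 = theta
    return X @ W1.T @ W2.T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5)); Y = rng.normal(size=(100, 2))
theta = [rng.normal(size=(4, 5)), rng.normal(size=(2, 4))]
print(empirical_risk(phi_linear, theta, X, Y))
```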
2.1 POOR LOCAL MINIMA CHARACTERIZATION FROM TOPOLOGICAL CONNECTEDNESS
We define the level set of F (θ) as
ΩF (λ) = {θ ∈ RS ; F (θ) ≤ λ} . (3)
The first question we study is the structure of critical points of Fe(θ) and Fo(θ) when Φ is a multilayer neural network. For simplicity, we consider first a strict notion of local minima: θ ∈ RS is a strict local minimum of F if there is ε > 0 with F (θ′) > F (θ) for all θ′ ∈ B(θ, ε) and θ′ ≠ θ. In particular, we are interested to know whether Fe has local minima which are not global minima. This question is answered by knowing whether ΩF (λ) is connected at each energy level λ: Proposition 2.1. If ΩF (λ) is connected for all λ then every local minimum of F (θ) is a global minimum.
A strict local minimum implies that ∇F (θ) = 0 and HF (θ) ⪰ 0, but avoids degenerate cases where F is constant along a manifold intersecting θ. In that scenario, if Uθ denotes that manifold, our reasoning immediately implies that if ΩF (λ) are connected, then for all ε > 0 there exists θ′ with dist(θ′, Uθ) ≤ ε and F (θ′) < F (θ). In other words, some element at the boundary of Uθ must be a saddle point. A stronger property that eliminates the risk of gradient descent getting stuck at Uθ is that all elements at the boundary of Uθ are saddle points. This can be guaranteed if one can show that there exists a path connecting any θ to the lowest energy level such that F is strictly decreasing along it.
Such degenerate cases arise in deep linear networks in the absence of regularization. If θ = (W1, . . . ,WK) denotes any parameter value, with N1, . . . , NK denoting the hidden layer sizes, and Fk ∈ GL⁺_{Nk}(R) are arbitrary elements of the general linear group of invertible Nk × Nk matrices with positive determinant, then
Uθ = { (W1 F1⁻¹, F1 W2 F2⁻¹, . . . , F_{K−1} WK) ; Fk ∈ GL⁺_{Nk}(R) } .
In particular, Uθ has a Lie Group structure. In the half-rectified nonlinear case, the general linear group is replaced by the Lie group of homogeneous invertible matrices Fk = diag(α1, . . . , αNk) with αj > 0.
This proposition shows that a sufficient condition to prevent the existence of poor local minima is having connected level sets, but this condition is not necessary: one can have isolated local minima lying at the same energy level. This can be the case in systems that are defined up to a discrete symmetry group, such as multilayer neural networks. However, as we shall see next, this case puts the system in a brittle position, since one needs to be able to account for all the local minima (and there can be exponentially many of them as the parameter dimensionality increases) and verify that their energy is indeed equal.
2.2 THE LINEAR CASE
We first consider the particularly simple case where Φ is a multilayer linear network defined by
Φ(x; θ) = WK . . .W1x , θ = (W1, . . . ,WK) . (4)
and the ridge regression R(θ) = ‖θ‖2. This model defines a non-convex (and non-concave) loss Fe(θ). When κ = 0, it has been shown in Saxe et al. (2013) and Kawaguchi (2016) that in this case, every local minima is a global minima. We provide here an alternative proof of that result that uses a somewhat simpler argument and allows for κ > 0 in the case K = 2. Proposition 2.2. Let W1,W2, . . . ,WK be weight matrices of sizes nk × nk+1, k < K, and let Fe(θ), Fo(θ) denote the risk minimizations using Φ as in (4). Assume that nj ≥ min(n1, nK) for j = 2 . . .K − 1. Then ΩFe(λ) (and ΩFo ) is connected for all λ and all K when κ = 0, and for κ > 0 when K = 2; and therefore there are no poor local minima in these cases. Moreover, any θ can be connected to the lowest energy level with a strictly decreasing path.
Let us highlight that this result is complementary to that of Kawaguchi (2016), Theorem 2.3. Whereas we require nj ≥ min(n1, nK) for j = 2 . . .K − 1 and our analysis does not inform about the order of the saddle points, we do not need full rank assumptions on ΣX nor the weights Wk.
This result does also highlight a certain mismatch between the picture of having no poor local minima and generalization error. Incorporating regularization drastically changes the topology, and the fact that we are able to show connectedness only in the two-layer case with ridge regression is profound; we conjecture that extending it to deeper models requires a different regularization, perhaps using more general atomic norms Bach (2013). But we now move our interest to the nonlinear case, which is more relevant to our purposes.
2.3 HALF-RECTIFIED NONLINEAR CASE
We now study the setting given by
Φ(x; θ) = WKρWK−1ρ . . . ρW1x , θ = (W1, . . . ,WK) , (5)
where ρ(z) = max(0, z). The biases can be implemented by replacing the input vector x with x̄ = (x, 1) and by rebranding each parameter matrix as
W̄i = ( Wi  bi ; 0  1 ) ,
where bi contains the biases for each layer. For simplicity, we continue to use Wi and x in the following.
2.3.1 NONLINEAR MODELS ARE GENERALLY DISCONNECTED
One may wonder whether the same phenomenon of global connectedness also holds in the half-rectified case. A simple motivating counterexample shows that this is not the case in general. Consider a simple setup with X ∈ R2 drawn from a mixture of two Gaussians N−1 and N1, and let Y = (X − µZ) · Z , where Z is the (hidden) mixture component taking {1,−1} values. Let Ŷ = Φ(X; {W1,W2}) be a single-hidden layer ReLU network, with two hidden units. Let θA be a configuration that bisects the two mixture components, and let θB be the same configuration, but swapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by letting the covariance of the mixture components go to 0. However, any path that connects θA to θB must necessarily pass through a point in which W1 has rank 1, which leads to an estimator with risk at least 1/2.
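A small sketch of the ingredients needed to reproduce this counterexample numerically is given below; the mixture means ±µ, the reading of (X − µZ) · Z as coordinate-wise multiplication by the scalar Z, and the evaluation grid are assumptions of this illustration, and the two parameter settings passed to risk_along_path would in practice be a trained configuration θA and its unit-swapped copy θB.

```python
import numpy as np

def sample_mixture(n, mu=np.array([1.0, 0.0]), sigma=0.05, rng=None):
    """Data for the two-Gaussian counterexample: X from 0.5*N(mu, s^2 I) + 0.5*N(-mu, s^2 I),
    Z in {+1, -1} the hidden component, and Y = (X - Z*mu) * Z."""
    if rng is None:
        rng = np.random.default_rng(0)
    Z = rng.choice([-1.0, 1.0], size=n)
    X = Z[:, None] * mu + sigma * rng.normal(size=(n, 2))
    Y = (X - Z[:, None] * mu) * Z[:, None]
    return X, Y, Z

def relu_net(X, W1, W2):
    """Single-hidden-layer ReLU network with two hidden units: W2 max(0, W1 x)."""
    return np.maximum(0.0, X @ W1.T) @ W2.T

def risk_along_path(theta_a, theta_b, X, Y, n_grid=11):
    """Mean squared risk along the straight line between two parameter settings."""
    risks = []
    for t in np.linspace(0.0, 1.0, n_grid):
        W1 = (1 - t) * theta_a[0] + t * theta_b[0]
        W2 = (1 - t) * theta_a[1] + t * theta_b[1]
        risks.append(np.mean(np.sum((relu_net(X, W1, W2) - Y) ** 2, axis=1)))
    return risks
```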
In fact, it is easy to see that this counter-example can be extended to any generic half-rectified architecture, if one is allowed to adversarially design a data distribution. For any given Φ(X; θ) with arbitrary architecture and current parameters θ = (Wi), let Pθ = {A1, . . . ,AS} be the underlying tessellation of the input space given by our current choice of parameters; that is, Φ(X; θ) is piece-wise linear and Pθ contains those pieces. Now let X be any arbitrary distribution with density p(x) > 0 for all x ∈ Rn, for example a Gaussian, and let Y | X d= Φ(X; θ) . Since Φ is invariant under a subgroup of permutations θσ of its hidden layers, it is easy to see that one can find two parameter values θA = θ and θB = θσ such that Fo(θA) = Fo(θB) = 0, but any continuous path γ(t) from θA to θB will have a different tessellation and therefore won’t satisfy Fo(γ(t)) = 0. Moreover, one can build on this counter-example to show that not only the level sets are disconnected, but also that there exist poor local minima. Let θ′ be a different set of parameters, and Y ′ | X d= Φ(X; θ′) be a different target distribution. Now consider the data distribution given by the mixture
X ∼ p(x) , z ∼ Bernoulli(π) , Y | X, z d= zΦ(X; θ) + (1− z)Φ(X; θ′) . By adjusting the mixture component π we can clearly change the risk at θ and θ′ and make them different, but we conjecture that this preserves the status of local minima of θ and θ′. Appendix E constructs a counter-example numerically.
This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guarantees that do not depend upon the data distribution. This difficulty is non-existent in the linear case and not easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows that in general we should not expect to obtain connected level sets. However, connectedness can be recovered if one is willing to accept a small increase of energy and make some assumptions on the complexity of the regression task. Our main result shows that the amount by which the energy is allowed to increase is upper bounded by a quantity that trades-off model overparametrization and smoothness in the data distribution.
For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assume Y ∈ R and let us first consider the case with a single hidden layer and ℓ1 regularization: R(θ) = ‖θ‖1.
2.3.2 PRELIMINARIES
Before proving our main result, we need to introduce preliminary notation and results. We first describe the case with a single hidden layer of size m.
We define
e(m) = min_{W1 ∈ R^{m×n}, ‖W1(i)‖2 ≤ 1, W2 ∈ R^m} E{|Φ(X; θ) − Y |²} + κ‖W2‖1 . (6)
to be the oracle risk using m hidden units with norm ≤ 1 and using sparse regression. It is a well known result by Hornik and Cybenko that a single hidden layer is a universal approximator under very mild assumptions, i.e. limm→∞ e(m) = 0. This result merely states that our statistical setup is consistent, and it should not be surprising to the reader familiar with classic approximation theory. A more interesting question is the rate at which e(m) decays, which depends on the smoothness of the joint density (X,Y ) ∼ P relative to the nonlinear activation family we have chosen. For convenience, we redefine W = W1 and β = W2 and Z(W ) = max(0,WX). We also write z(w) = max(0, 〈w,X〉) where (X,Y ) ∼ P and w ∈ RN is any deterministic vector. Let ΣX = EPXXT ∈ RN×N be the covariance operator of the random input X . We assume ‖ΣX‖ <∞. A fundamental property that will be essential to our analysis is that, despite the fact that Z is nonlinear, the quantity [w1, w2]Z := EP {z(w1)z(w2)} is locally equivalent to the linear metric 〈w1, w2〉X = EP {wT1 XXTw2} = 〈w1,ΣXw2〉, and that the linearization error decreases with the angle between w1 and w2. Without loss of generality, we assume here that ‖w1‖ = ‖w2‖ = 1, and we write ‖w‖2Z = E{|z(w)|2}. Proposition 2.3. Let α = cos−1(〈w1, w2〉) be the angle between unitary vectors w1 and w2 and let wm =
(w1 + w2)/‖w1 + w2‖ be their unitary bisector. Then
(1 + cos α)/2 · ‖wm‖²Z − 2‖ΣX‖ ( (1 − cos α)/2 + sin² α ) ≤ [w1, w2]Z ≤ (1 + cos α)/2 · ‖wm‖²Z . (7)
The term ‖ΣX‖ is overly pessimistic: we can replace it by the energy of X projected into the subspace spanned by w1 and w2 (which is bounded by 2‖ΣX‖). When α is small, a Taylor expansion of the trigonometric terms reveals that
(2/3)‖ΣX‖ 〈w1, w2〉 = (2/3)‖ΣX‖ cos α = (2/3)‖ΣX‖ (1 − α²/2 + O(α⁴))
≤ (1 − α²/4)‖wm‖²Z − ‖ΣX‖(α²/4 + α²) + O(α⁴) ≤ [w1, w2]Z + O(α⁴) ,
and similarly
[w1, w2]Z ≤ 〈w1, w2〉 ‖wm‖²Z ≤ ‖ΣX‖ 〈w1, w2〉 .
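Proposition 2.3 can also be checked empirically by Monte Carlo; the sketch below assumes a standard Gaussian input distribution (so that ΣX = I and ‖ΣX‖ = 1), which is an illustrative choice rather than anything required by the proposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 5, 200_000
X = rng.normal(size=(n_samples, n))          # P is standard Gaussian here, so Sigma_X = I

def kernel_Z(w1, w2):                        # [w1, w2]_Z = E{ z(w1) z(w2) }
    return np.mean(np.maximum(0.0, X @ w1) * np.maximum(0.0, X @ w2))

w1 = rng.normal(size=n); w1 /= np.linalg.norm(w1)
w2 = rng.normal(size=n); w2 /= np.linalg.norm(w2)
wm = (w1 + w2) / np.linalg.norm(w1 + w2)

cos_a = float(w1 @ w2)
sin2_a = 1.0 - cos_a ** 2
norm_wm_Z = kernel_Z(wm, wm)                 # ||wm||_Z^2
sigma_norm = 1.0                             # ||Sigma_X|| for the standard Gaussian

upper = (1 + cos_a) / 2 * norm_wm_Z
lower = upper - 2 * sigma_norm * ((1 - cos_a) / 2 + sin2_a)
print(lower, kernel_Z(w1, w2), upper)        # lower <= [w1, w2]_Z <= upper (up to MC error)
```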
The local behavior of parameters w1, w2 on our regression problem is thus equivalent to that of having a linear layer, provided w1 and w2 are sufficiently close to each other. This result can be seen as a spoiler of what is coming: increasing the hidden layer dimensionality m will increase the chances to encounter pairs of vectors w1, w2 with small angle; and with it some hope of approximating the previous linear behavior thanks to the small linearization error.
In order to control the connectedness, we need a last definition. Given a hidden layer of size m with current parameters W ∈ Rn×m, we define a “robust compressibility” factor as
δW (l, α; m) = min_{‖γ‖0 ≤ l, sup_i |∠(w̃i, wi)| ≤ α} E{ |Y − γZ(W̃ )|² + κ‖γ‖1 } , (l ≤ m) . (8)
This quantity thus measures how easily one can compress the current hidden layer representation, by keeping only a subset of l of its units, but allowing these units to move by a small amount controlled by α. It is a form of n-width similar to Kolmogorov width Donoho (2006) and is also related to robust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).
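As an illustration of how δW might be estimated in practice, the sketch below computes a rough proxy for δW (l, 0; m) (i.e. with α = 0, so the atoms are not perturbed) by an ℓ1-penalized fit followed by hard truncation to l units; this greedy surrogate, and the use of scikit-learn's Lasso (whose penalty scaling differs slightly from κ in (8)), are simplifying assumptions rather than the exact minimization.

```python
import numpy as np
from sklearn.linear_model import Lasso

def delta_W_proxy(W, X, Y, l, kappa=1e-2):
    """Rough upper bound on delta_W(l, 0; m): fit a sparse second layer on the
    ReLU features Z(W) and keep only the l largest coefficients."""
    Z = np.maximum(0.0, X @ W.T)                     # (L, m) hidden representation
    gamma = Lasso(alpha=kappa, fit_intercept=False).fit(Z, Y).coef_.copy()
    keep = np.argsort(-np.abs(gamma))[:l]
    mask = np.zeros_like(gamma); mask[keep] = 1.0
    gamma *= mask                                    # enforce ||gamma||_0 <= l
    return np.mean((Y - Z @ gamma) ** 2) + kappa * np.abs(gamma).sum()

# Example with random placeholder data and unit-norm first-layer weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)); Y = rng.normal(size=500)
W = rng.normal(size=(32, 10)); W /= np.linalg.norm(W, axis=1, keepdims=True)
print([delta_W_proxy(W, X, Y, l) for l in (4, 16, 32)])
```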
2.3.3 MAIN RESULT
Our main result considers now a non-asymptotic scenario given by some fixed size m of the hidden layer. Given two parameter values θA = (W1^A, W2^A) ∈ W and θB = (W1^B, W2^B) with Fo(θ^{A,B}) ≤ λ, we show that there exists a continuous path γ : [0, 1] → W connecting θA and θB such that its oracle risk is uniformly bounded by max(λ, ε), where ε decreases with model overparametrization.
Theorem 2.4. For any θA, θB ∈ W and λ ∈ R satisfying Fo(θ^{A,B}) ≤ λ, there exists a continuous path γ : [0, 1] → W such that γ(0) = θA, γ(1) = θB and
Fo(γ(t)) ≤ max(λ, ε) , with (9)
ε = inf_{l,α} ( max { e(l), δ_{W1^A}(m, 0; m), δ_{W1^A}(m − l, α; m), (10)
δ_{W1^B}(m, 0; m), δ_{W1^B}(m − l, α; m) } + C1 α + O(α²) ) , (11)
where C1 is an absolute constant depending only on κ and P .
Some remarks are in order. First, our regularization term is currently a mix between ℓ2 norm constraints on the first layer and ℓ1 norm constraints on the second layer. We believe this is an artifact of our proof technique, and we conjecture that more general regularizations yield similar results. Next, this result uses the data distribution through the oracle bound e(m) and the covariance term. The extension to empirical risk is accomplished by replacing the probability measure P by the empirical measure P̂ = (1/L) ∑_l δ((x, y) − (xl, yl)). However, our asymptotic analysis has to be carefully reexamined to take into account and avoid the trivial regime when M outgrows L. A consequence of Theorem 2.4 is that as m increases, the model becomes asymptotically connected, as proven in the following corollary.
Corollary 2.5. As m increases, the energy gap ε satisfies ε = O(m^{−1/n}) and therefore the level sets become connected at all energy levels.
This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016) and the general common knowledge amongst deep learning practitioners. Our next sections explore this question, and refine it by considering not only topological properties but also some rough geometrical measure of the level sets.
3 GEOMETRY OF LEVEL SETS
3.1 THE GREEDY ALGORITHM
The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be “easy” to connect two equally powerful models—i.e., two models with Fo(θ^{A,B}) ≤ λ. A sensible measure of this ease-of-connectedness is the normalized length of the geodesic connecting one model to the other: |γA,B(t)|/|θA − θB |. This length represents approximately how far of an excursion one must make in the space of models relative to the Euclidean distance between a pair of models. Thus, convex models have a geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than 1.
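A minimal sketch of how this normalized length could be computed from a piecewise-linear path of models is given here; flattening each model's parameters into a single vector is an assumption of this illustration, and the paper's exact implementation may differ.

```python
import numpy as np

def normalized_length(path_models):
    """Total polyline length along a piecewise-linear path of models, divided by the
    straight-line distance between the endpoint models (equals 1 for a straight path).

    path_models : list of models, each given as a list of parameter arrays."""
    pts = [np.concatenate([np.asarray(p).ravel() for p in m]) for m in path_models]
    total = sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))
    return total / np.linalg.norm(pts[-1] - pts[0])
```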
Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.
For a pair of models with network parameters θi, θj , each with Fe(θ) below a threshold L0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L0. These paths are continuous curves belonging to ΩF (λ) – that is, the level sets of the loss function of interest.
Algorithm 1 Greedy Dynamic String Sampling
1: L0 ← Threshold below which path will be found
2: Φ1 ← randomly initialize θ1, train Φ(xi; θ1) to L0
3: Φ2 ← randomly initialize θ2, train Φ(xi; θ2) to L0
4: BeadList ← (Φ1, Φ2)
5: Depth ← 0
6: procedure FINDCONNECTION(Φ1, Φ2)
7:     t∗ ← t such that dγ(θ1, θ2, t)/dt |_t = 0 OR t = 0.5
8:     Φ3 ← train Φ(xi; t∗θ1 + (1 − t∗)θ2) to L0
9:     BeadList ← insert(Φ3, after Φ1, BeadList)
10:    MaxError1 ← max_t(Fe(tθ3 + (1 − t)θ1))
11:    MaxError2 ← max_t(Fe(tθ2 + (1 − t)θ3))
12:    if MaxError1 > L0 then return FindConnection(Φ1, Φ3)
13:    if MaxError2 > L0 then return FindConnection(Φ3, Φ2)
14:    Depth ← Depth + 1
The algorithm recursively builds a string of models in the space of weights which continuously connect θi to θj . Models are added and trained until the pairwise linearly interpolated loss, i.e. maxtFe(tθi + (1 − t)θj) for t ∈ (0, 1), is below the threshold, L0, for every pair of neighboring models on the string. We provide a cartoon of the algorithm in Appendix C.
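The following PyTorch sketch illustrates the recursion; it is not the authors' code. It operates on flattened parameter vectors and a user-supplied loss function, recursively subdivides both halves of a violating segment (the pseudocode above returns on the first violating half), and uses plain gradient descent with placeholder hyperparameters, all of which are assumptions of this illustration.

```python
import torch

def interp_loss_max(loss_fn, theta_a, theta_b, n_grid=20):
    """Maximum of the loss along the straight line between two parameter vectors."""
    ts = torch.linspace(0.0, 1.0, n_grid)
    with torch.no_grad():
        vals = torch.tensor([float(loss_fn((1 - t) * theta_a + t * theta_b)) for t in ts])
    i = int(torch.argmax(vals))
    return float(vals[i]), float(ts[i])

def train_to_threshold(loss_fn, theta_init, L0, lr=1e-2, max_steps=5000):
    """Gradient descent on a flattened parameter vector until the loss drops below L0."""
    theta = theta_init.clone().requires_grad_(True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(max_steps):
        opt.zero_grad()
        loss = loss_fn(theta)
        if float(loss) < L0:
            break
        loss.backward()
        opt.step()
    return theta.detach()

def find_connection(loss_fn, theta1, theta2, L0, depth=0, max_depth=12):
    """Return the intermediate beads of a recursively refined string between theta1 and theta2."""
    max_err, t_star = interp_loss_max(loss_fn, theta1, theta2)
    if max_err <= L0 or depth >= max_depth:
        return []                          # segment already below threshold, or giving up
    theta3 = train_to_threshold(loss_fn, (1 - t_star) * theta1 + t_star * theta2, L0)
    left = find_connection(loss_fn, theta1, theta3, L0, depth + 1, max_depth)
    right = find_connection(loss_fn, theta3, theta2, L0, depth + 1, max_depth)
    return left + [theta3] + right
```

The number of returned beads and the normalized length of the resulting polyline are the quantities tabulated in the experiments of Section 4.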
3.2 FAILURE CONDITIONS AND PRACTICALITIES
While the algorithm presented will faithfully certify two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment more on diagnosing disconnections more carefully in Appendix E.
Further, if the MaxError exceeds L0 for every new recursive branch as the algorithm progresses, the worst case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth))—at least up until a critical value of L0.
To aid convergence, either of the choices in line 7 of the algorithm works in practice—choosing t∗ at a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t∗ = .5 is more stable, but slower. Finally, we find that training Φ3 to αL0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics. We provide further implementation details in Section 4.
4 NUMERICAL EXPERIMENTS
For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of “beads”, or the number of intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.
The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required “beads” to form a low-loss connection.
4.1 POLYNOMIAL REGRESSION
We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease-of-analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power-law, as demonstrated in Table 1 Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.
The cubic regression task exhibits an interesting feature around L0 = .15 in Table 1 Fig. 2, where the normalized length spikes, but the number of required beads remains low. Up until this point, the
cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.
4.2 CONVOLUTIONAL NEURAL NETWORKS
To test the algorithm on larger architectures, we ran it on the MNIST hand written digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibits strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk-understanding that MNIST is highly convex and/or “easy”. The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.
4.3 RECURRENT NEURAL NETWORKS
To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1 Fig. 5. Notably, even for a radically different architecture, loss function, and data set, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets—i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.
5 DISCUSSION
We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles – topological and geometrical aspects – that build on top of each other.
On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how ‘non-convex’ an optimization problem is, and verify that the optimization of quintessential deep learning tasks – CIFAR-10 and MNIST classification using CNNs, and next word prediction using LSTMs – behaves in a nearly convex fashion up until they reach high accuracy levels.
That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:
• Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer case.
• Empirical versus Oracle Risk. A big limitation of our theory is that right now it does not inform us on the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and stochastic gradient in the ability to do small uphill climbs is an open line of research.
• Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.
• Improving numerics with Hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments using the less greedy algorithm described in Appendix A.
ACKNOWLEDGMENTS
We would like to thank Mark Tygert for pointing out the reference to the ε-nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in an early version of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.
A CONSTRAINED DYNAMIC STRING SAMPLING
While the algorithm presented in Sec. 3.1 is fast for sufficiently smooth families of loss surfaces with few saddle points, here we present a slightly modified version which, while slower, provides more control over the convergence of the string. We did not use the algorithm presented in this section for our numerical studies.
Instead of training intermediate models via full SGD to a desired accuracy as in step 8 of the algorithm, intermediate models are subject to a constraint that ensures they are “close” to the neighboring models on the string. Specifically, each intermediate model is constrained to the unique hyperplane in weight space equidistant from its two neighbors. This can be further modified by additional regularization terms to control the “springy-ness” of the string. These heuristics could be chosen to try to more faithfully sample the geodesic between two models.
In practice, for a given model on the string, θi, these two regularizations augment the standard loss by:
F̃(θi) = F(θi) + ζ(‖θi−1 − θi‖ + ‖θi+1 − θi‖) + κ | ((θi−1 − θi+1)/2)/‖(θi−1 − θi+1)/2‖ · (θi − (θi−1 − θi+1)/2)/‖θi − (θi−1 − θi+1)/2‖ | .
The ζ regularization term controls the “springy-ness” of the weightstring, and the κ regularization term controls how far off the hyperplane a new model can deviate.
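A sketch of these two auxiliary terms is given below; interpreting the second term as the cosine between the chord direction θi−1 − θi+1 and the deviation of θi from the segment midpoint is our reading of the expression above, and the coefficient values are placeholders.

```python
import torch

def string_regularizers(theta_prev, theta_i, theta_next, zeta=1e-2, kappa=1e-2, eps=1e-12):
    """Auxiliary terms added to F(theta_i) for a model on the constrained string.

    All arguments are flattened 1-D parameter vectors.
    spring   : keeps theta_i close to both of its neighbours on the string.
    offplane : penalizes the component of theta_i's deviation from the segment midpoint
               that lies along the chord theta_prev - theta_next, i.e. deviation off the
               equidistant hyperplane.
    """
    spring = zeta * (torch.norm(theta_prev - theta_i) + torch.norm(theta_next - theta_i))
    chord = theta_prev - theta_next
    dev = theta_i - 0.5 * (theta_prev + theta_next)
    cos = torch.dot(chord, dev) / (torch.norm(chord) * torch.norm(dev) + eps)
    offplane = kappa * torch.abs(cos)
    return spring + offplane
```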
Because adapting DSS to use this constraint is straightforward, here we will describe an alternative “breadth-first” approach wherein models are trained in parallel until convergence. This alternative approach has the advantage that it will indicate a disconnection between two models “sooner” in training. The precise geometry of the loss surface will dictate which approach to use in practice.
Given two random models σi and σj where |σi − σj | < L0, we aim to follow the evolution of the family of models connecting σi to σj . Intuitively, almost every continuous path in the space of random models connecting σi to σj has, on average, the same (high) loss. For simplicity, we choose to initialize the string to the linear segment interpolating between these two models. If this entire segment is evolved via gradient descent, the segment will either evolve into a string which is entirely contained in a basin of the loss surface, or some number of points will become fixed at a higher loss. These fixed points are difficult to detect directly, but will be indirectly detected by the persistence of a large interpolated loss between two adjacent models on the string.
The algorithm proceeds as follows:
(0.) Initialize model string to have two models, σi and σj .
1. Begin training all models to the desired loss, keeping the instantaneous loss, L0(t), of all models being trained approximately constant.
2. If the pairwise interpolated loss between σn and σn+1 exceeds L0(t), insert a new model at the maximum of the interpolated loss (or halfway) between these two models.
3. Repeat steps (1) and (2) until all models (and interpolated errors) are below a threshold loss L0(tfinal) := L0, or until a chosen failure condition (see 3.2).
B PROOFS
B.1 PROOF OF PROPOSITION 2.1
Suppose that θ1 is a local minima and θ2 is a global minima, but F (θ1) > F (θ2). If λ = F (θ1), then clearly θ1 and θ2 both belong to ΩF (λ). Suppose now that ΩF (λ) is connected. Then we
could find a smooth (i.e. continuous and differentiable) path γ(t) with γ(0) = θ1, γ(1) = θ2 and F (γ(t)) ≤ λ = F (θ1). But this contradicts the strict local minima status of θ1, and therefore ΩF (λ) cannot be connected .
B.2 PROOF OF PROPOSITION 2.2
Let us first consider the case with κ = 0. We proceed by induction over the number of layers K. For K = 1, the loss F (θ) is convex. Let θA, θB be two arbitrary points in a level set Ωλ. Thus F (θA) ≤ λ and F (θB) ≤ λ. By definition of convexity, a linear path is sufficient in that case to connect θA and θB :
F ((1− t)θA + tθB) ≤ (1− t)F (θA) + tF (θB) ≤ λ . Suppose the result is true for K − 1. Let θA = (WA1 , . . . ,WAK ) and θB = (WB1 , . . . ,WBK ) with F (θA) ≤ λ, F (θB) ≤ λ. Since nj ≥ min(n1, nK) for j = 2 . . .K − 1, we can find k∗ = {1,K − 1} such that nk∗ ≥ min(nk∗−1, nk∗+1). For each W1, . . . ,WK , we denote W̃j = Wj for j 6= k∗, k∗ − 1 and W̃k∗ = Wk∗−1Wk∗ . By induction hypothesis, the loss expressed in terms of θ̃ = (W̃1, . . . , W̃K−1) is connected between θ̃A and θ̃B . Let W̃k∗(t) the corresponding linear path projected in the layer k∗. We need to produce a path in the variables Wk∗−1(t), Wk∗(t) such that:
i. Wk∗−1(0) = W^A_{k∗−1}, Wk∗−1(1) = W^B_{k∗−1},
ii. Wk∗(0) = W^A_{k∗}, Wk∗(1) = W^B_{k∗},
iii. Wk∗(t) Wk∗−1(t) = W̃k∗−1(t) for t ∈ (0, 1).
We construct it as follows. Let
Wk∗(t) = t W^B_{k∗} + (1 − t) W^A_{k∗} + t(1 − t)V ,
Wk∗−1(t) = Wk∗(t)† W̃k∗−1(t) ,
where Wk∗(t)† = (Wk∗(t)ᵀ Wk∗(t))⁻¹ Wk∗(t)ᵀ denotes the pseudoinverse and V is an nk∗−1 × nk∗ matrix drawn from an iid distribution. Conditions (i) and (ii) are immediate from the definition, and condition (iii) results from the fact that
Wk∗(t) Wk∗(t)† = I_{Nk∗} ,
since Wk∗(t) has full rank for all t ∈ (0, 1). Finally, let us prove that the result is also true when K = 2 and κ > 0. We construct the path using the variational properties of atomic norms Bach (2013). When we pick the ridge regression regularization, the corresponding atomic norm is the nuclear norm:
‖X‖∗ = min_{UV ᵀ = X} (1/2)(‖U‖² + ‖V ‖²) .
The path is constructed by exploiting the convexity of the variational norm ‖X‖∗. Let θA = (W^A_1, W^A_2) and θB = (W^B_1, W^B_2), and we define W̃ = W1W2. Since W̃^{A,B} = W^{A,B}_1 W^{A,B}_2, it results that
‖W̃^{A,B}‖∗ ≤ (1/2)(‖W^{A,B}_1‖² + ‖W^{A,B}_2‖²) . (12)
From (12) it results that the loss Fo(W1,W2) can be lower bounded by another loss expressed in terms of W̃, of the form E{|Y − W̃X|²} + 2κ‖W̃‖∗, which is convex with respect to W̃. Thus a linear path in W̃ from W̃^A to W̃^B is guaranteed to be below Fo(θ^{A,B}). Let us define
∀ t , (W1(t), W2(t)) = arg min_{UV ᵀ = W̃ (t)} (‖U‖² + ‖V ‖²) .
One can verify that we can first consider a path (β^A_1(s), β^A_2(s)) from (W^A_1, W^A_2) to (W1(0), W2(0)) such that
∀ s , β1(s)β2(s) = W̃^A and ‖β1(s)‖² + ‖β2(s)‖² decreases ,
and similarly for (W^B_1, W^B_2) to (W1(1), W2(1)). The path (β^A_{1,2}(s), W_{1,2}(t), β^B_{1,2}(s)) satisfies (i)–(iii) by definition. We also verify that
‖W1(t)‖² + ‖W2(t)‖² = 2‖W̃(t)‖∗ ≤ 2(1 − t)‖W̃(0)‖∗ + 2t‖W̃(1)‖∗ ≤ (1 − t)(‖W1(0)‖² + ‖W2(0)‖²) + t(‖W1(1)‖² + ‖W2(1)‖²) .
Finally, we verify that the paths we have just created, when applied to θA arbitrary and θB = θ∗ a global minimum, are strictly decreasing, again by induction. For K = 1, this is again an immediate consequence of convexity. For K > 1, our inductive construction guarantees that for any 0 < t < 1, the path θ(t) = (Wk(t))k≤K satisfies Fo(θ(t)) < Fo(θA). This concludes the proof.
B.3 PROOF OF PROPOSITION 2.3
Let A(w1, w2) = {x ∈ Rn; 〈x,w1〉 ≥ 0 , 〈x,w2〉 ≥ 0} .
By definition, we have
〈w1, w2〉Z = E{max(0, 〈X,w1〉) max(0, 〈X,w2〉)} (13)
= ∫ A(w1,w2) 〈x,w1〉〈x,w2〉dP (x) , (14)
= ∫ Q(A(w1,w2)) 〈Q(x), w1〉〈Q(x), w2〉(dP̄ (Q(x))) , (15)
whereQ is the orthogonal projection onto the space spanned byw1 andw2 and dP̄ (x) = dP̄ (x1, x2) is the marginal density on that subspace. Since this projection does not interfere with the rest of the proof, we abuse notation by dropping the Q and still referring to dP (x) as the probability density.
Now, let r = (1/2)‖w1 + w2‖, so that r² = (1 + cos(α))/2, and let d = (w2 − w1)/2. By construction we have
w1 = rwm − d , w2 = rwm + d , and thus 〈x,w1〉〈x,w2〉 = r2|〈x,wm〉|2 − |〈x, d〉|2 . (16) By denoting C(wm) = {x ∈ Rn; 〈x,wm〉 ≥ 0}, observe that A(w1, w2) ⊆ C(wm). Let us denote by B = C(wm) \A(w1, w2) the disjoint complement. It results that
〈w1, w2〉Z = ∫_{A(w1,w2)} 〈x,w1〉〈x,w2〉 dP (x)
= ∫_{C(wm)} [ r²|〈x,wm〉|² − |〈x, d〉|² ] dP (x) − r² ∫_B |〈x,wm〉|² dP (x) + ∫_B |〈x, d〉|² dP (x)
= r²‖wm‖²Z − E1 − E2 , where E1 = r² ∫_B |〈x,wm〉|² dP (x) and E2 = ∫_{A(w1,w2)} |〈x, d〉|² dP (x) . (17)
We conclude by bounding each error term E1 and E2 separately:
0 ≤ E1 ≤ r² |sin(α)|² ∫_B ‖x‖² dP (x) ≤ 2 r² |sin(α)|² ‖ΣX‖ , (18)
since every point in B by definition has angle greater than π/2 − α from wm. Also,
0 ≤ E2 ≤ ‖d‖² ∫_{A(w1,w2)} ‖x‖² dP (x) ≤ ((1 − cos(α))/2) · 2‖ΣX‖ , (19)
by direct application of Cauchy–Schwarz. The proof is completed by plugging the bounds from (18) and (19) into (17).
B.4 PROOF OF THEOREM 2.4
Consider a generic α and l ≤ m. A path from θA to θB will be constructed by concatenating the following paths:
1. from θA to θlA, the best linear predictor using the same first layer as θA,
2. from θlA to θsA, the best (m − l)-term approximation using perturbed atoms from θA,
3. from θsA to θ∗, the oracle l-term approximation,
4. from θ∗ to θsB , the best (m − l)-term approximation using perturbed atoms from θB ,
5. from θsB to θlB , the best linear predictor using the same first layer as θB ,
6. from θlB to θB .
The proof will study the increase in the loss along each subpath and aggregate the resulting increase into a common bound.
Subpaths (1) and (6) only involve changing the parameters of the second layer while leaving the first-layer weights fixed, which defines a convex loss. Therefore a linear path is sufficient to guarantee that the loss along that path will be upper bounded by λ on the first end and δ_{W1^A}(m, 0; m) on the other end.
Concerning subpaths (3) and (4), we notice that they can also be constructed using only parameters of the second layer, by observing that one can fit into a single n × m parameter matrix both the (m − l)-term approximation and the oracle l-term approximation. Indeed, let us describe subpath (3) in detail ( subpath (4) is constructed analogously by replacing the role of θsA with θsB). Let W̃A the first-layer parameter matrix associated with the m − l-sparse solution θsA, and let γA denote its second layer coefficients, which is a m-dimensional vector with at most m − l non-zero coefficients. LetW∗ be the first-layer matrix of the l-term oracle approximation, and γ∗ the corresponding second-layer coefficients. Since there are only m − l columns of W̃A that are used, corresponding to the support of γA, we can consider a path θ̄ that replaces the remaining l columns with those from W∗ while keeping the second-layer vector γA fixed. Since the modified columns correspond to zeros in γA, such paths have constant loss. Call W̄ the resulting first-layer matrix, containing both the active m− l active columns of W̃A and the l columns of W∗ in the positions determined by the zeros of γA. Now we can consider the linear subpath that interpolates between γA and γ∗ while keeping the first layer fixed at W̄ . Since again this is a linear subpath that only moves second-layer coefficients, it is non-increasing thanks to the convexity of the loss while fixing the first layer. We easily verify that at the end of this linear subpath we are using the oracle l-term approximation, which has loss e(l), and therefore subpath (3) incurs in a loss that is bounded by its extremal values δWA1 (m− l, α,m) and e(l).
Finally, we need to show how to construct the subpaths (2) and (5), which are the most delicate step since they cannot be bounded using convexity arguments as above. Let W̃^A be the resulting perturbed first-layer parameter matrix with m − l sparse coefficients γA. Let us consider an auxiliary regression of the form W̄ = [W^A; W̃^A] ∈ R^{n×2m} and regression parameters
β̄1 = [β1; 0] , β̄2 = [0; γA] .
Clearly E{|Y − β̄1 Z(W̄)|²} + κ‖β̄1‖1 = E{|Y − β1 Z(W^A)|²} + κ‖β1‖1
and similarly for β̄2. By convexity, the augmented linear path η(t) = (1 − t)β̄1 + t β̄2 thus satisfies
∀ t , L(t) = E{|Y − η(t) Z(W̄)|²} + κ‖η(t)‖1 ≤ max(L(0), L(1)) .
Let us now approximate this augmented linear path with a path in terms of first and second layer weights. We consider
η1(t) = (1− t)WA + tW̃A , and η2(t) = (1− t)β1 + tγA .
We have that
Fo({η1(t), η2(t)}) = E{|Y − η2(t) Z(η1(t))|²} + κ‖η2(t)‖1 (20)
≤ E{|Y − η2(t) Z(η1(t))|²} + κ((1 − t)‖β1‖1 + t‖γA‖1)
= L(t) + E{|Y − η2(t) Z(η1(t))|²} − E{|Y − (1 − t)β1 Z(W^A) − t γA Z(W̃^A)|²} . (21)
Finally, we verify that
| E{|Y − η2(t)Z(η1(t))|²} − E{|Y − (1 − t)β1Z(W^A) − tγAZ(W̃^A)|²} | ≤ 4α max(E|Y|², √(E|Y²|)) ‖ΣX‖ (κ^{−1/2} + α√(E|Y²|) κ^{−1}) + o(α²) . (22)
Indeed, from Proposition 2.3, and using the fact that
∀ i ≤ M, t ∈ [0, 1] , |∠((1 − t)w^A_i + tw̃^A_i ; w^A_i )| ≤ α , |∠((1 − t)w^A_i + tw̃^A_i ; w̃^A_i )| ≤ α ,
we can write
(1 − t)β1,i z(w^A_i ) + t γA,i z(w̃^A_i ) =_d η2(t)i z(η1(t)i) + ni ,
with E{|ni|²} ≤ 4|η2(t)i|² ‖ΣX‖ α² + O(α⁴) and E|ni| ≤ 2|η2(t)i| α √‖ΣX‖ using concavity of the moments. Thus
| E{|Y − η2(t)Z(η1(t))|²} − E{|Y − (1 − t)β1Z(W^A) − tγAZ(W̃^A)|²} |
≤ 2 E{ ∑_i (Y − η2(t)Z(η1(t))) ni } + E{ |∑_i ni|² }
≤ 4 ( α √(E|Y²|) ‖ΣX‖ ‖η2‖ + α² (‖η2‖1)² ‖ΣX‖ )
≤ 4α max(1, √(E|Y²|)) ‖ΣX‖ (‖η2‖1 + α‖η2‖1²) + o(α²)
≤ 4α max(√(E|Y²|), E|Y²|) ‖ΣX‖ (κ^{−1} + α√(E|Y²|) κ^{−2}) + o(α²) ,
which proves (22).
We have just constructed a path from θA to θB , in which all subpaths except (2) and (5) have energy maximized at the extrema due to convexity, given respectively by λ, δ_{W1^A}(m, 0; m), δ_{W1^A}(m − l, α; m), e(l), δ_{W1^B}(m − l, α; m), and δ_{W1^B}(m, 0; m). For the two subpaths (2) and (5), (22) shows that it is sufficient to add the corresponding upper bound to the linear subpath, which is of the form Cα + o(α²) where C is an explicit constant independent of θ. Since l and α are arbitrary, we are free to pick the infimum, which concludes the proof.
B.5 PROOF OF COROLLARY 2.5
Let us consider a generic first layer weight matrix W ∈ Rn×m. Without loss of generality, we can assume that ‖wk‖ = 1 for all k, since increasing the norm of ‖wk‖ within the unit ball has no penalty in the loss, and we can compensate this scaling in the second layer thanks to the homogeneity of the half-rectification. Since this results in an attenuation of these second layer weights, they too are guaranteed not to increase the loss.
From Vershynin (2010) [Lemma 5.2] we verify that the covering number N(S^{n−1}, ε) of the Euclidean unit sphere S^{n−1} satisfies
N(S^{n−1}, ε) ≤ (1 + 2/ε)^n ,
which means that we can cover the unit sphere with an ε-net of size N(S^{n−1}, ε).
Let 0 < η < n⁻¹(1 + n⁻¹)⁻¹, and let us pick, for each m, εm = m^{(η−1)/n}. Let us consider its corresponding εm-net of size
um = N(S^{n−1}, εm) ≃ (1 + 2/εm)^n ≃ m^{1−η} .
Since we have m vectors in the unit sphere, it results from the pigeonhole principle that at least one element of the net will be associated with at least vm = m u_m⁻¹ ≃ m^η vectors; in other words, we are guaranteed to find amongst our weight vectors W a collection Qm of vm ≃ m^η vectors that are all at an angle at most 2εm apart. Let us now apply Theorem 2.4 by picking l = vm and α = εm. We need to see that the terms involved in the bound all converge to 0 as m → ∞. The contribution of the oracle error e(vm) − e(m) goes to zero as m → ∞ by the fact that limm→∞ e(m) exists (it is a decreasing, positive sequence) and that vm → ∞. Let us now verify that δ(m − vm, εm; m) also converges to zero. We are going to prune the first layer by removing one by one the vectors in Qm. Removing one of these vectors at a time incurs an error of the order of εm. Indeed, let wk be one of such vectors and let β′ be the solution of
min_{β′} E(β′) = min_{β′ = (βf ; βk) ∈ R^k} E{|Y − βfᵀ Z(W−k) − βk z(wk)|²} + κ(‖βf‖1 + |βk|) ,
where W−k is a shorthand for the matrix containing the rest of the vectors that have not been discarded yet. Removing the vector wk from the first layer increases the loss by a factor that is upper bounded by E(βp)− E(β), where
(βp)j = { β′j for j < k − 1 ; β′k−1 + β′k otherwise } ,
since now βp is a feasible solution for the pruned first layer.
Let us finally bound E(βp) − E(β). Since ∠(wk, wk−1) ≤ εm, it results from Proposition 2.3 that
z(wk) =_d z(wk−1) + n ,
with E{|n|²} ≤ Cα² for some constant C independent of m. By redefining p1 = Y − βpᵀ Z(W−k) − (1/2)n and p2 = (1/2)n, we have
E{|Y − βpᵀ Z(W−k)|²} − E{|Y − β′ᵀ Z(W−k) − βk z(wk)|²}
= E{|p1 + p2|²} − E{|p1 − p2|²} = 4E{|p1 p2|}
≤ √( E{ |Y − βpᵀ Z(W−k) − (1/2)n|² } ) · √( E{|n|²} )
≤ (C + α) α ≃ εm ,
where C only depends on E{|Y |²}. We also verify that ‖βp‖1 ≤ ‖β′‖1. It results that removing |Qm| of such vectors incurs an increase of the loss of at most |Qm| εm ≃ m^η m^{(η−1)/n} = m^{η + (η−1)/n}. Since we picked η such that η + (η−1)/n < 0, this term converges to zero. The proof is finished.
C CARTOON OF ALGORITHM
Refer to Fig. 2.
D VISUALIZATION OF CONNECTION
Because the weight matrices are anywhere from high to extremely high dimensional, for the purposes of visualization we projected the models on the connecting path into a three-dimensional subspace. Snapshots of the algorithm in progress for the quadratic regression task are indicated in Fig. 3. This was done by vectorizing all of the weight matrices for all the beads for a given connecting path, and then performing principal component analysis to find the three highest weight projections for the collection of models that define the endpoints of segments for a connecting path—i.e., the
θi discussed in the algorithm. We then projected the connecting string of models onto these three directions.
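A compact sketch of this projection step is shown below; it assumes each bead's weights have already been flattened into one vector, and uses an SVD-based PCA rather than any particular library routine.

```python
import numpy as np

def project_beads(beads, k=3):
    """Project flattened bead weight vectors onto their top-k principal components."""
    W = np.stack([np.asarray(b).ravel() for b in beads])     # (num_beads, num_params)
    W = W - W.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    return W @ Vt[:k].T                                      # (num_beads, k) coordinates
```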
The color of the strings was chosen to be representative of the test loss under a log mapping, so that extremely high test loss mapped to red, whereas test loss near the threshold mapped to blue. An animation of the connecting path can be seen on our Github page.
Finally, projections onto pairs of principal components are indicated by the black curves.
E A DISCONNECTION
E.1 A DISCONNECTION
As a sanity check for the algorithm, we also applied it to a problem for which we know that it is not possible to connect models of equivalent power by the arguments of section 2.3.1. The input data is 3 points in R2, and the task is to permute the datapoints, i.e. map {x1, x2, x3} → {x2, x3, x1}. This map requires at least 12 parameters in general for the three linear maps which take xi → xj for i, j ∈ {{1, 2}, {2, 3}, {3, 1}}. Our architecture was a 2-3-2 fully connected neural network with a single relu nonlinearity after the hidden layer—a model which clearly has 12 free parameters by construction. The two models we tried to connect were a single model, θ, and a copy of θ with the first two neurons in the hidden layer permuted, θ̃σ . The algorithm fails to converge when initialized with these two models. We provide a visualization of the string of models produced by the algorithm in Fig. 4.
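For concreteness, a sketch of this setup is given below; the particular three input points are placeholders (the paper does not specify them), and the network would still need to be fit to the permutation targets before attempting a connection.

```python
import torch

# Three points in R^2 and their cyclic permutation as regression targets (placeholder values).
X = torch.tensor([[0.0, 1.0], [1.0, 0.0], [-1.0, -1.0]])
Y = X[[1, 2, 0]]                                   # x1 -> x2, x2 -> x3, x3 -> x1

class PermNet(torch.nn.Module):
    """2-3-2 fully connected network, single ReLU after the hidden layer, 12 free parameters."""
    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(2, 3, bias=False)
        self.fc2 = torch.nn.Linear(3, 2, bias=False)
    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def permuted_copy(net):
    """Copy of `net` computing the same function, with the first two hidden units swapped."""
    twin = PermNet()
    twin.load_state_dict(net.state_dict())
    perm = torch.tensor([1, 0, 2])
    with torch.no_grad():
        twin.fc1.weight.copy_(net.fc1.weight[perm])      # permute rows of the first layer
        twin.fc2.weight.copy_(net.fc2.weight[:, perm])   # and the matching columns of the second
    return twin
```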
In general, a persistent high interpolated loss between two neighboring beads on the string of models could arise from either a slowly converging, connected pair of models or from a truly disconnected pair of models. “Proving” a disconnection at the level of numerical experiments is intractable in general, but a collection of negative results—i.e., failures to converge—are highly suggestive of a true disconnection. | 1. What are the main contributions of the paper regarding the landscape of deep networks?
2. What are the strengths of the paper, particularly in its theoretical and empirical analyses?
3. What are the limitations of the paper, especially in terms of its scope and practical applicability?
4. How does the reviewer assess the value and interest of the provided contributions despite their specificity?
5. Are there any concerns or suggestions regarding the methodology or results presented in the paper? | Review | Review
This work contributes to understanding the landscape of deep networks in terms of its topology and geometry. The paper analyzes the former theoretically, and studies the latter empirically. Although the provided contributions are very specific (ReLU nets with single hidden layer, and a heuristic to calculate the normalized geodesic), the results are original and of interest. Thus, they could potentially be used as stepping stones for deeper developments in this area.
Pros:
1. Providing new theory about existence of "poor" local minima for ReLU networks with a hidden unit that relies on input distribution properties as well as the size of the hidden layer.
2. Coming up with a heuristic algorithm to compute the normalized geodesic between two solution points. The latter reflects how curved the path between the two is.
Cons:
The results are very specific in both topology and geometry analysis.
1. The analysis is performed only over a "single" hidden layer ReLU network. Given the importance of depth in deep architectures, this result cannot really explain the kinds of architectures we are interested in practically.
2. The normalized geodesic criterion is somewhat limited in representing how easy it is to connect two equally good points. For example, there might exist a straight line between the two (which is considered as easy by the geodesic criterion), but this line might be going through a very narrow valley, challenging gradient based optimization algorithms (and thus extremely difficult to navigate in practice). In addition, the proposed algorithm for computing the normalized geodesic is a greedy heuristic, which as far as I can tell, makes it difficult to know how we can trust in the estimated geodesics obtained by this algorithm.
With all cons said, I stress that I understand both problems tackled in the paper are challenging, and thus I find the contributions valuable and interesting. |
ICLR | Title
Deep Manifold Computing and Visualization Using Elastic Locally Isometric Smoothness
Abstract
The ability to preserve local geometry of highly nonlinear manifolds in high dimensional spaces and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold computing, nonlinear dimensionality reduction (NLDR) and visualization. This paper proposes a novel method, called elastic locally isometric smoothness (ELIS), to empower deep neural networks with such an ability. ELIS requires that a desired metric between points should be preserved across layers in order to preserve local geometry; such a smoothness constraint effectively regularizes vector-based transformations to become well-behaved local metric-preserving homeomorphisms. Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions. The ELIS method incorporates a class of suitable nonlinear similarity functions into a two-way divergence loss and uses hyperparameter continuation in finding optimal solutions. Extensive experiments, comparisons, and ablation study demonstrate that ELIS can deliver results not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading counterparts of manifold and autoencoder learning for NLDR and manifold data generation.
1 INTRODUCTION
Manifold learning aims to find from a set of higher dimensional data its embedding or representation in a low dimensional latent space. Nonlinear dimensionality reduction (NLDR) aims to construct a transformation that is generalizable to unseen data. It is hoped that the lower dimensional representation can be used in conjunction with a simple metric such as the Euclidean distance for downstream tasks such as classification and visualization. Manifold data generation performs the inverse transformation to generate data from samples in the latent space. We call this collection of manifold related problems manifold computing. The basis for manifold computing is the manifold assumption (Mikhail Belkin, 2002; Fefferman et al., 2016).
Great advances have been made in the past two decades in manifold computing and visualization. ISOMAP (Tenenbaum et al., 2000) and LLE (locally linear embedding) (Roweis & Saul, 2000) are classic methods for manifold learning. More recent developments include local geometry-based method (Gashler et al., 2008; Zhang & Wang, 2007; Chen & Buja, 2009; McQueen et al., 2016), graph spectral analysis (Donoho & Grimes, 2003) and latent variable models (Saul, 2020). The most popular high dimensional data visualization methods to date are t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018), with wide applications such as bio-science and technology (Becht et al., 2019; Dorrity et al., 2020). While the aforementioned are traditional machine learning, deep learning-based methods include autoencoders (Hinton & Salakhutdinov, 2006; Moor et al., 2020). The problem can be considered from the viewpoints of geometry deep learning (Bronstein et al., 2017)) and topology data analysis (Wasserman, 2018; Moor et al., 2020).
The ability to preserve geometric structure of nonlinear manifolds and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold-based computing and visualization. Recently, Markov-Lipschitz deep learning (MLDL) (Li et al., 2020) is proposed as a general framework for manifold learning, NLDR, visualization and manifold data generation. The idea is to impose the constraint of geometric isometry across neural network layers to preserve the local
geometric structure of manifold data. This effectively transforms a vector-based transformation of conventional neural networks into a local distance-preserving homeomorphism. Such local homeomorphisms avoid the transformation from collapse, twisting, or crossing, so as to improve generalization, stability, and robustness. Locally isometric smoothness (LIS) (Li et al., 2020), which imposes straight distance-preserving, is proposed as a method in the MLDL framework. LIS has demonstrated significant advantages in manifold learning and NLDR.
This paper proposes a more advanced method in the MLDL framework, called elastic locally isometric smoothness (ELIS), aimed to empower deep neural networks with ability to tackle the high nonlinearity and non-Euclideanity challenges arising from complicated manifolds in high dimension spaces that LIS is unable to cope with. Whereas LIS preserves the straight distances between neighboring points, ELIS is based on a similarity metric that is nonlinear in distance and a two-way divergence loss (of nearby neighbors and far-away pairs, respectively); this renders more flexibility and capacity in tackling the challenges yet under the control of the ELIS regularization. As the result, ELIS bridges gaps between non-Euclidean manifolds in the input space and resulting Euclidean hyperplanes in the learned lower dimensional latent space, with geometric structure of the manifolds preserved. Both ELIS and LIS can be considered as a form of graph neural networks (GNN) (Scarselli et al., 2009) but without the aggregation generally present in GNNs. They are more like what is called “manifold learning 2.0” (Bronstein, 2020).
The distinctive features of ELIS (and LIS) in comparison with related methods are summarized in Table 1. ELIS-based neural networks can accomplish all the functionalities in the general MLDL framework, for which none of the methods can achieve. Extensive experiments, comparisons, and ablation study demonstrate that ELIS-based neural networks produce results not only superior to the SOTA t-SNE and UMAP for NLDR and visualization but also better than other algorithms of manifold and autoencoder learning, including LIS, for NLDR and manifold data generation. The main contributions of this paper are summarized below:
(1) Proposing the ELIS constraint in the MLDL framework, based on a similarity metric which is nonlinear in distance. It inherits the metric-preserving property of LIS so that the resulting layer-wise transformation is geometrically smooth, hence topologically homeomorphic, yet possesses more flexibility than LIS in handling highly nonlinear manifolds in high dimensional spaces.
(2) Proposing conditions for a class of nonlinear similarity functions for converting from distance to similarity, in conjunction with a two-way divergence loss. This ensures the metric-preserving and neighbor-confining properties.
(3) Proposing two instances of ELIS-based neural networks: an ELIS encoder for manifold learning and visualization and an ELIS autoencoder for manifold reconstruction and data generation.
(4) Providing several SOTA results that surpass UMAP and other leading algorithms.
In the following, Section 2 introduces LIS and presents ELIS formulations, and the Section 3 presents extensive experiments. The code is provided in the Supplementary Material.
2 ELASTIC LOCALLY ISOMETRIC SMOOTHNESS
Both ELIS and LIS are formulated in the MLDL framework (illustrated in Fig.A1 in Appendix) which is aimed to regularize neural transformations through imposing the ELIS constraint between
layers to achieve certain well-behaving properties. However, the ELIS formulation tackles challenges of highly nonlinear manifold data in high dimensional spaces in a more flexible and effective way, much inspired by t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018). Let X = {x1, . . . , xM} be a set of M samples in the input space RN with the index set S = {1, . . . ,M}. These samples may come from one or several lower dimensional manifolds MX ⊂ RN . When MX is Riemannian, its tangent subspace Tx(MX) at any x ∈ MX is locally isomorphic to a Euclidean space of dimensionality dim(MX) < N . Therefore, we can use a cascade of nonlinear neural transformations to "unfold" nonlinear manifolds in a high dimensional input space into hyper-planar regions in a lower dimensional latent space.
Both ELIS and LIS aim to accomplish the following 4 tasks, of which few neural networks can do all: (1) Manifold learning: to learn an embedding in a latent spaceMZ ⊂ Rn, where n < N , based on the local structure of X . (2) Representation Learning: to learn the underlying mapping Φ : MX =⇒ MZ for the embedding that is generalizable to unseen data x 6∈ X,x ∈ MX . (3) Visualization: to visualize the embedding in 2D or 3D space. (4) Manifold generation: to find the inverse mapping Φ−1 :MZ =⇒MX and generate new data onMX from samples inMZ . ELIS is aimed to surpass LIS.
2.1 THE LIS CONSTRAINT AND NEURAL NETWORKS
The LIS constraint is aimed to best preserve the local distances of the data between two metric spaces, encouraging a vector-based neural transformation Φ(X |W ), where W is the transformation matrix of the neural network, to become a well-behaved local distance-preserving homeomorphism. This can be achieved by adding the following LIS loss (Li et al., 2020), imposed between two layers (metric spaces) l and l′
$$\mathcal{L}^{(l,l')}_{\mathrm{LIS}}(W) = \sum_{i\in S} \sum_{j\in \mathcal{N}^{(l)}_i} \left| d\big(x^{(l)}_i, x^{(l)}_j\big) - d\big(x^{(l')}_i, x^{(l')}_j\big) \right| \quad (1)$$
where $d : X \times X \to \mathbb{R}_{\geq 0}$ is a dissimilarity metric, $x^{(l')}_i = \Phi(x^{(l)}_i \mid W)$ is the result of the effective transformation $\Phi$ from layer l to l', and $\mathcal{N}_i$ is the set of neighbors of i. Without prior knowledge, d_{ij} is usually computed as the Euclidean distance, albeit it may not well reflect the reality. It is hoped that after a series of proper nonlinear transformations, the input data is transformed into an embedding in the latent space such that the Euclidean distance makes more sense in describing mutual relationships between points. In this work, we aim to find such transformations.
The LIS loss effectively minimizes the bi-Lipschitz constant of Φ. It is through the neighborhood system, N = {N_i | i ∈ S}, that the influence of a point on the others is propagated afar. For this reason, the collection of random variables x^{(l)} constitutes a Markov random field. Equ. (1) is defined w.r.t. N_i (Markovianity) and aimed at minimizing the bi-Lipschitz constant, hence the name Markov-Lipschitz (Li et al., 2020).
The basic LIS loss is augmented by an auxiliary "push-way" term (Li et al., 2020)
$$\mathcal{L}^{(l,l')}_{\mathrm{push}}(W) = -\sum_{i\in S} \sum_{j\notin \mathcal{N}^{(l)}_i} \pi\big[d\big(x^{(l')}_i, x^{(l')}_j\big) < B\big]\; d\big(x^{(l')}_i, x^{(l')}_j\big) \quad (2)$$
in which π[·] ∈ {0, 1} is the indicator function and B is a bound. This term is aimed to help "unfold" nonlinear manifolds, by exerting a spring force to push away from each other those pairs (i, j) which are non-neighbors at layer l but nearby (distance smaller than B) at layer l′.
These two losses are combined to form a LIS-based encoder loss for manifold learning and dimension reduction
$$\mathcal{L}_{\mathrm{Enc}} = \sum_{(l,l')} \mathcal{L}^{(l,l')}_{\mathrm{LIS}}(W) + \mu\, \mathcal{L}^{(l,l')}_{\mathrm{push}}(W) \quad (3)$$
where µ is a weight and (l, l′) is summed over a set of designated layer pairs (currently designed manually). A LIS-based autoencoder can be formulated by applying the LIS constraint between layers within the decoder and between the encoder and decoder layers. LIS-based neural networks have significant advantages (Li et al., 2020).
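For concreteness, the following is a minimal PyTorch sketch of the LIS loss of Equ. (1) and the push-away term of Equ. (2), combined as in Equ. (3). It assumes plain Euclidean distances and a precomputed boolean neighbor mask; the function and variable names are illustrative, not the authors' reference implementation.

import torch

def pairwise_dist(x):
    # Euclidean distance matrix for a batch of row vectors: (M, D) -> (M, M)
    return torch.cdist(x, x, p=2)

def lis_encoder_loss(x_l, x_lp, neighbor_mask, bound_B=3.0, mu=1.0):
    """Sketch of Equ. (3): LIS distance-preserving term plus push-away term.

    x_l, x_lp     : activations at layers l and l', shapes (M, D_l) and (M, D_l')
    neighbor_mask : bool (M, M), True where j is a neighbor of i at layer l
    """
    d_l, d_lp = pairwise_dist(x_l), pairwise_dist(x_lp)

    # Equ. (1): preserve distances of neighboring pairs across the two layers
    lis = (d_l - d_lp).abs()[neighbor_mask].sum()

    # Equ. (2): push apart non-neighbor pairs that remain closer than B at layer l'
    non_nbr = ~neighbor_mask & ~torch.eye(x_l.shape[0], dtype=torch.bool)
    push = -(d_lp * (d_lp < bound_B))[non_nbr].sum()

    return lis + mu * push

# usage sketch: loss = lis_encoder_loss(X0, model(X0), nbr_mask); loss.backward()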
2.2 THE ELIS CONSTRAINT
The proposed ELIS constraint is aimed to tackle difficulties in "flattening" highly nonlinear manifolds in a high dimensional space into hyperplanes in a lower dimensional space. It imposes a more flexible nonlinear similarity-preserving constraint as opposed to the distance-preserving (isometry) constraint of vanilla LIS. More specifically, ELIS transforms a distance into a similarity metric using a nonlinear function and defines a KL loss based on similarities between nearby pairs and far-away pairs. This makes the metric-preserving constraint of ELIS more flexible than the straight distance-preserving of LIS for accomplishing the challenging task.
Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions.
Converting distance to similarity. Following UMAP, we assume that X^{(l)} is fixed (e.g., the input layer) and X^{(l')} at subsequent layers l' are computed as a result of manifold learning. The nonlinear similarities between x_i and x_j at each layer are computed as follows. First, define a nearest neighbor (NN)-normalized distance
$$d_{i|j} \overset{\mathrm{def}}{=} d(x_i, x_j) - \rho_i \;\geq\; 0 \quad (4)$$
where $\rho_i = d(x_i, x_{nn(i)})$, in which $x_{nn(i)}$ denotes the nearest neighbor of x_i. Then, d_{i|j} is converted to a similarity metric $u_{i|j} = g(d_{i|j}) \in [0, 1]$ where g is a nonlinear function.
We require that g(η) satisfy the following necessary conditions ∀η = di|j ≥ 0:
Condition (1) – it is monotonically decreasing, g′(η) < 0 for η > 0; Condition (2) – its first derivative diminishes in the limit, limη→∞ |g′(η)| = 0.
The first condition ensures a monotonic and inverse relationship between the distance and the similarity. The second condition effectively leads to a neighborhood system bounded softly, as opposed to the "hard" bounded neighborhoods in LIS, and provides proper control on the contributions of neighboring points to the back-propagation of neural network learning.
We further require g(η) to be a function of η² – for convenience, not necessity – such that its first derivative takes the form g′(η) = 2ηh(η), where h(η) is also a function of η². h(η) can be called the influence function because it controls how the other neighboring point x_j can influence x_i. Condition (2) above restricts the influence of a "far-away" point x_j on x_i (between which the distance η_{ij} = ‖x_i − x_j‖ is relatively large) to diminish in the back-propagation process. This provides a properly weighted neighborhood system with respect to which the influence between points is adaptively limited in scope.
Specifically for ELIS, we define the following σ_i-data-adaptive, ν-parameterized nonlinear similarity
$$u_{i|j}(\sigma_i, \nu) = g(d_{i|j} \mid \sigma_i, \nu) = C_\nu \left(1 + \frac{d_{i|j}^2}{\sigma_i\, \nu}\right)^{-(\nu+1)}, \quad (5)$$
where ν ∈ R_+ is similar to the degree of freedom (DoF) parameter in the t-distribution,
$$C_\nu = 2\pi \left(\frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}\right)^{2} \quad (6)$$
is a function of ν which sets the limit $\lim_{\nu\to+\infty} g(0 \mid \sigma_i, \nu) = 1$ (∀σ_i > 0), and the data-adaptive parameter σ_i > 0, playing a calibration role, is estimated from the data by best fitting the equation
$$\sum_{j\neq i} u_{i|j}(\sigma_i, \nu) = \log_2 Q \quad (7)$$
for the given perplexity-like hyperparameter Q. While other choices satisfying the aforementioned necessary conditions, including the normalized Gaussian and Cauchy functions used in t-SNE (Maaten, 2014) and the fitted polynomial function used in UMAP (McInnes et al., 2018), can also work for ELIS, we find Equ. (5) a better choice, not only because it produces better results but also because we can use the ν parameter as a continuation tool for preventing the training from converging to bad local minima and for controlling the separation margin between different manifolds, as will be shown in the ablation study.
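As an illustration, the sketch below implements the ν-parameterized similarity of Equ. (5)-(6) in PyTorch and calibrates σ_i by a simple bisection on Equ. (7). The bisection bounds, iteration count and tolerance are arbitrary illustrative choices, not values prescribed by the paper.

import math
import torch

def c_nu(nu):
    # Equ. (6): normalizing factor so that g(0 | sigma, nu) -> 1 as nu -> +inf
    g = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) - 0.5 * math.log(nu * math.pi)
    return 2 * math.pi * math.exp(2 * g)

def similarity(d, sigma, nu):
    # Equ. (5): u_{i|j} = C_nu * (1 + d^2 / (sigma * nu))^{-(nu + 1)}
    return c_nu(nu) * (1 + d ** 2 / (sigma * nu)) ** (-(nu + 1))

def calibrate_sigma(d_row, nu, Q, iters=64):
    """Bisection on sigma_i so that sum_j u_{i|j} ~= log2(Q), cf. Equ. (7).

    d_row : NN-normalized distances d_{i|j} from point i to all other points.
    """
    lo, hi = torch.tensor(1e-6), torch.tensor(1e6)
    target = math.log2(Q)
    for _ in range(iters):
        mid = (lo + hi) / 2
        s = similarity(d_row, mid, nu).sum()
        # the similarity sum increases with sigma, so move the bracketing bound accordingly
        lo, hi = (mid, hi) if s < target else (lo, mid)
    return (lo + hi) / 2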
Computing similarities u_{ij} and u'_{ij}. Because the symmetry u_{i|j} = u_{j|i} does not hold due to differences in σ_i for the input layer (l), the following symmetrization is performed
$$u_{ij} = u_{j|i} + u_{i|j} - u_{j|i}\, u_{i|j}. \quad (8)$$
On the other hand, for the subsequent latent layers, the computation of σ_i and ρ_i for each i would bring about huge computational costs. To overcome this problem, we directly set σ'_i = 1 and ρ'_i = 0 (this also ensures the symmetry u'_{i|j} = u'_{j|i}). While σ_i and ρ_i are needed to deal with unevenness and outliers of the data for the input layer, the necessity becomes less demanding as the layer goes deeper after layers of nonlinear manifold unfolding. From u^{(l)}_{ij} of layer l can be constructed a weighted graph G(S, X^{(l)}, U^{(l)}) consisting of a set S of nodes with node attributes X^{(l)} and edge attributes (weights) $U^{(l)} = \{u^{(l)}_{ij} > 0 \mid \forall i, j \in S\}$. The global structure of a manifold is discovered from the local geometry of data through the graph G.
Formulating the ELIS loss. ELIS transforms the distance metric d_{ij} into a similarity metric using a nonlinear function u_{ij} = g(d_{ij}) and defines the ELIS loss between layers l and l' in terms of the similarities $U^{(l)} = \{u^{(l)}_{ij} \mid i, j \in S, i \neq j\}$ at layer l and their counterpart $U^{(l')} = \{u^{(l')}_{ij} \mid i, j \in S, i \neq j\}$ at layer l'. The ELIS loss is defined by what we call the two-way divergence (a.k.a. the fuzzy information for discrimination (Bhandari & Pal, 1993) and the fuzzy set cross entropy in UMAP (McInnes et al., 2018))
$$\mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} u^{(l)}_{ij} \log \frac{u^{(l)}_{ij}}{u^{(l')}_{ij}} + \big(1 - u^{(l)}_{ij}\big) \log \frac{1 - u^{(l)}_{ij}}{1 - u^{(l')}_{ij}} \quad (9)$$
The first term is the directed divergence of the two fuzzy sets of similarities, in lieu of the LIS’ distance-preserving term of Equ. (1); the second term can be considered as the directed divergence of the two corresponding complement fuzzy sets, replacing the push-way term of Equ. (2).
Equ.(9) is called the "two-way divergence" because the first term on the right side of the equation imposes similarity-based attraction forces between nearby (intra-manifold) pairs whereas the second term exerts dissimilarity-based repulsion forces between far-away (inter-manifold) pairs. In other words, intra-manifold points are transformed to a cluster in the latent space, mainly as the result of the first term whereas inter-manifold point pairs push away from each other to different clusters, mainly due to the second term.
Note also that ELIS applies the two terms in a soft, adaptive way via its weighted neighborhood graph, where the edges are effectively restricted to pairs of corresponding nodes (data points) between which the absolute gradients $\big|\nabla_W \mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')})\big| > 0$ are nonzero, in contrast to the "hard" neighborhood system in LIS.
The ELIS loss can be rearranged as follows
$$\mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} \Big[ u^{(l)}_{ij} \log u^{(l)}_{ij} + \big(1 - u^{(l)}_{ij}\big) \log\big(1 - u^{(l)}_{ij}\big) - u^{(l)}_{ij} \log u^{(l')}_{ij} - \big(1 - u^{(l)}_{ij}\big) \log\big(1 - u^{(l')}_{ij}\big) \Big] \quad (10)$$
When X^{(l)} (hence u^{(l)}_{ij}) is fixed, the optimization only needs to minimize the second part, which involves u^{(l')}_{ij}.
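A minimal PyTorch sketch of the two-way divergence of Equ. (9)/(10) between the similarity matrices of two layers is given below; the clamping epsilon is an illustrative numerical-stability choice, not part of the paper's formulation.

import torch

def two_way_divergence(u_l, u_lp, eps=1e-7):
    """Equ. (9): attraction on similar pairs plus repulsion on dissimilar pairs.

    u_l, u_lp : (M, M) similarity matrices at layers l and l', entries in (0, 1);
                the diagonal (i == j) is excluded from the sum.
    """
    u_l = u_l.clamp(eps, 1 - eps)
    u_lp = u_lp.clamp(eps, 1 - eps)
    attract = u_l * (u_l.log() - u_lp.log())                   # nearby (intra-manifold) pairs
    repel = (1 - u_l) * ((1 - u_l).log() - (1 - u_lp).log())   # far-away (inter-manifold) pairs
    off_diag = ~torch.eye(u_l.shape[0], dtype=torch.bool, device=u_l.device)
    return (attract + repel)[off_diag].sum()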
2.3 ELIS ENCODER AND AUTOENCODER
The ELIS encoder consists of a cascade of nonlinear forward neural transformations constrained by the ELIS loss, aimed for manifold learning and NLDR. An ELIS (and LIS) encoder can learn an NLDR transformation without the need for a decoder (as required by autoencoders), and this encoder can generalize to unseen data (that ISOMAP, LLE, t-SNE and UMAP cannot). The total loss for the ELIS encoder is the sum of all the ELIS losses over a prescribed set of layer pairs (l, l′)
$$\mathcal{L}_{\mathrm{Enc}}(W) = \sum_{(l,l')} \alpha^{(l,l')}\, \mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W) \quad (11)$$
where the $\alpha^{(l,l')}$ weight the relative importance of the $\mathcal{L}^{(l,l')}_{\mathrm{ELIS}}$ terms.
The ELIS autoencoder has two purposes: (1) to further regularize or optimize the ELIS encoder-based manifold learning by using an ELIS decoder, and (2) to enable generation of new data of the learned manifolds by the trained ELIS decoder. The ELIS autoencoder structure consists of the ELIS encoder and decoder in cascade. The ELIS decoder is aimed to approximate the inverse transformations of the ELIS encoder and is made entirely symmetric to the ELIS encoder in its network structure. The overall weight matrix becomes W = [W_{Enc}, W_{Dec}]. The loss function is composed of three terms:
$$\mathcal{L}_{\mathrm{AE}}(W) = \mathcal{L}_{\mathrm{Enc}}(W_{\mathrm{Enc}}) + \mathcal{L}_{\mathrm{Dec}}(W_{\mathrm{Dec}}) + \mathcal{L}_{\mathrm{Rec}}(W) \quad (12)$$
where $\mathcal{L}_{\mathrm{Enc}}(W_{\mathrm{Enc}})$ is the same as Equ. (11), $\mathcal{L}_{\mathrm{Dec}}(W_{\mathrm{Dec}})$ is defined in the same way following the symmetry, and the reconstruction loss $\mathcal{L}_{\mathrm{Rec}}(W)$ is summed over all the corresponding layers
$$\mathcal{L}_{\mathrm{Rec}}(W) = \sum_{l=0}^{L-1} \gamma_l \sum_{i=1}^{M} \big\| x^{(l)}_i - \hat{x}^{(l)}_i \big\|^2 \quad (13)$$
where $\hat{x}^{(l)}_i$ are the data points at the corresponding layer of the decoder and γ_l are the weights. The constraints due to $\mathcal{L}_{\mathrm{Rec}}(W)$ and $\mathcal{L}_{\mathrm{Tie}}(W)$ are illustrated by the dashed lines in Fig. A1 in the Appendix.
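To show how the pieces combine, here is an illustrative sketch of the autoencoder objective of Equ. (12)-(13). It reuses the two_way_divergence helper sketched after Equ. (10); for simplicity it applies the fixed-σ latent-layer similarity everywhere (the input layer would actually use the σ_i calibration of Equ. (7)), it omits the decoder-side ELIS term, and it assumes the caller supplies encoder and decoder activation lists aligned so that dec_acts[l] reconstructs enc_acts[l]. All names are placeholders, not the authors' implementation.

import math
import torch

def latent_similarity(x, nu=100.0):
    # Similarity matrix with sigma_i = 1 and rho_i = 0, as used for latent layers in Section 2.2
    c = 2 * math.pi * math.exp(2 * (math.lgamma((nu + 1) / 2)
                                    - math.lgamma(nu / 2)
                                    - 0.5 * math.log(nu * math.pi)))
    d = torch.cdist(x, x, p=2)
    return c * (1 + d ** 2 / nu) ** (-(nu + 1))

def elis_autoencoder_loss(enc_acts, dec_acts, alpha, gamma, nu=100.0):
    # Equ. (11): weighted cross-layer ELIS terms over the encoder activations
    loss_enc = sum(w * two_way_divergence(latent_similarity(enc_acts[l], nu),
                                          latent_similarity(enc_acts[lp], nu))
                   for (l, lp), w in alpha.items())
    # Equ. (13): layer-wise reconstruction terms between encoder and decoder activations
    loss_rec = sum(g * ((x - xhat) ** 2).sum()
                   for g, x, xhat in zip(gamma, enc_acts, dec_acts))
    return loss_enc + loss_rec  # Equ. (12), with the decoder-side ELIS term omitted here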
3 EXPERIMENTS
The following experiments are aimed to evaluate ELIS in comparison with five other algorithms: UMAP (McInnes et al., 2018), t-SNE (Maaten, 2014) (for visualization), MLLE (Zhang & Wang, 2007), TopoAE (Moor et al., 2020) and LIS (Li et al., 2020) in terms of visual inspection and numerical metrics for manifold computing, visualization and data generation. Nine datasets are used, including five toy datasets: (1) SwissRoll (3-D), (2) S-Curve (3-D), (3) Severed Sphere (3-D), (4) SpheresA (101-D) (see (Moor et al., 2020) for the description) and (5) SpheresB (101-D, a modified composition from SpheresA); and four real-world datasets: (6) Coil20 (16384-D) and (7) Coil100 (49152-D) (Nene et al., 1996), (8) MNIST (784-D) (LeCun, 2013), and (9) Fashion-MNIST (784-D) (Xiao et al., 2017). The toy datasets are used because their geometric and topological structures are clear for the evaluation. The SpheresA dataset (Moor et al., 2020) is composed of 1 large sphere enclosing 10 small ones in 101-D space. SpheresB differs from SpheresA in that its large sphere consists of only 500 samples (whereas that in SpheresA has 5000) – the data is so sparse that the smallest within-sphere distance on the larger sphere can be greater than that between the larger sphere and some small ones. Five performance metrics are used for the evaluation, whose exact definitions are given in Appendix A.2.
The pseudo-codes of the ELIS encoder and autoencoder and the hyperparameter settings are described in Appendix A.1. The implementation uses the PyTorch 1.6.1 library running on Ubuntu 18.04 on an NVIDIA V100 GPU. The time is spent mainly in the computation of neighbors. At present, the ELIS algorithm computes the neighborhood for every point pair, hence has complexity O(M²) for each cross-layer pair (l, l′). The complexity can be reduced to O(M^1.14) if using the nearest-neighbor-descent algorithm of UMAP (McInnes et al., 2018).
3.1 MANIFOLD LEARNING AND GENERATION
Manifold Learning. Table 2 compares performances of the five NLDR methods where bold numbers are the best results, and underline the second best. The ELIS encoder has overall the best performance. Fig. 1 visualizes some representative results, and more results are given in Table. A2, Fig. A2 - Fig. A4 in Appendix A.3.
Next, we delve into embedding details of the Coil20 objects resulting from the ELIS encoder (ELIS-Enc) and UMAP in Fig. 2. First, all the object embeddings in the ELIS-Enc result form closed loops for the 360 degree rotations (refer to Fig. A5 for the embedding-object correspondence). Second, the quality of the ELIS-derived embeddings enables us to infer some symmetries of the objects in the 3D space. Four types of such symmetries are explored and discussed in Appendix A.3. The UMAP result, in contrast, does not possess such quality.
Manifold Data Generation. Fig. 3 compares images generated from interpolated points between two nearest neighbors on the embedding using three autoencoders in comparison. The images generated by the ELIS-AE have clear boundaries and look sharper than those produced by the other two autoencoders. Results for several other objects are shown in Fig. A6 in Appendix A.4.
3.2 ABLATION STUDY
Cross-layer ELIS constraint. The cross-layer ELIS constraint is weighted by α(l, l′) as in Equ. (11). Four weight schemes (1) Head-Tail, (2) Head-Mids, (3) Mids-Tail and (4) Head-Mids + Mids-Tail are designed for an L-layer encoder, as described in details in Appendix A.5. The results are compared in Table. A3 and Fig. A7. Overall, the "Head-Mids + Mids-Tail" scheme, which imposes the most extensive cross-layer ELIS constraints and also needs more computation, achieves the best results. This justifies the use of the proposed ELIS method for performance improvements.
Effect of final ν value. The final ν value has significant influence on the within-manifold and between-manifold scatters. Fig. A8 and Fig. A9 in Appendix A.5 demonstrate the effect of varying ν in the input space and varying ν in the latent space on the results, respectively.
Continuation in ν. Continuation in hyperparameter ν in latent space is used during the ELIS encoder learning process. The algorithm starts with a small ν in latent space to include more global
information. Then it gradually increases the value to focus more locally. The continuation strategy results in significantly better solutions, as shown in Fig. A10 in Appendix A.5.
The major merits of ELIS. Finally, we summarize the comparative results of the experiments in the following table.
                                      ELIS   LIS    UMAP   t-SNE  MLLE  TopoAE
Succeeds in unfolding toy data        Yes    Yes    No     No     Yes   No
Perfect manifold structures on Coil   Yes    No     Maybe  No     No    No
High accuracy                         Most   No     Some   Some   No    No
Good reconstruction quality           Yes    Maybe  N/A    N/A    No    No
4 CONCLUSION
The proposed ELIS method preserves the nonlinear similarity metric locally across layers of deep neural networks by optimizing a two-way divergence loss. It effectively tackles difficulties in deep manifold computing and visualization with the local geometry-preserving property. Empirical results, comparisons, and an ablation study demonstrate that ELIS is not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading manifold and autoencoder learning algorithms for NLDR and manifold data reconstruction and generation. Future work includes the following: (1) extending the unsupervised version of MLDL to self-supervised, semi-supervised and supervised tasks; (2) further formulating MLDL so that the cross-layer link hyperparameters α become part of the learnable hyperparameters.
APPENDIX
A.1 THE MLDL FRAMEWORK AND ELIS
Markov-Lipschitz deep learning (MLDL) framework. The MLDL framework is illustrated in Fig. A1 (from Li et al. (2020)). The ML-AutoEncoder (of LIS or ELIS type) transforms the input X to an embedding X^{(L)} at layer L (the latent layer) using the ML-Encoder, and then reconstructs X̂ using the ML-Decoder. Whereas a standard neural network consists of a cascade of transformations φ^{(l)} (blue arrows), an MLDL network imposes the constraint between any two layers as appropriate (shown in orange arcs and dashed lines) in the form of cross-layer loss functions weighted by α^{(l,l')}. This encourages φ^{(l)} to become well-behaved local homeomorphisms. The latent features X^{(L)} extracted by the learned ML-Encoder can be used for downstream tasks such as visualization and classification, as well as manifold data generation using the learned ML-Decoder.
Figure A1: Illustration of Markov-Lipschitz deep learning (MLDL) framework using an MLAutoEncoder (best viewed in color).
The pseudo-codes for the ELIS encoder and the ELIS autoencoder, related hyperparameter, and a parameter continuation method are described below.
Algorithm 1: ELIS Encoder
Input: data X^(0), learning rate lr, epochs E, number of encoder layers L, weight hyperparameters α, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the neural network {Φ^(1)_Enc(·|W^(1)_Enc), Φ^(2)_Enc(·|W^(2)_Enc), ..., Φ^(L)_Enc(·|W^(L)_Enc)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to L do
        Calculate layer l's embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l' = l to L do
        Calculate the ELIS loss between layer l and layer l', L^(l,l')_Enc, with (10)
    end
    Update parameters: W ← W − lr · Σ_{l=1}^{L} Σ_{l'=l}^{L} α^(l,l') ∂L^(l,l')_Enc / ∂W
end
Algorithm 2: ELIS AutoEncoder
Input: data X^(0), learning rate lr, epochs E, number of encoder layers L, weight hyperparameters α, γ, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the neural network {Φ^(1)_Enc(·|W^(1)_Enc), ..., Φ^(L)_Enc(·|W^(L)_Enc), Φ^(1)_Dec(·|W^(1)_Dec), ..., Φ^(L)_Dec(·|W^(L)_Dec)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to 2L do
        if l ≤ L then calculate layer l's encoder embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        else calculate layer l's decoder embedding X^(l) ← Φ^(l)_Dec(X^(l−1) | W^(l)_Dec)
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l' = l to L do
        Calculate the ELIS loss between layer l and layer l', L^(l,l')_ELIS, with (10)
    end
    Calculate the reconstruction loss between layer l and layer 2L − l, L^(l,2L−l)_Rec, with (13)
    Update parameters: W ← W − lr · ( Σ_{l=1}^{L} Σ_{l'=l}^{L} α^(l,l') ∂L^(l,l')_Enc / ∂W + Σ_{l=1}^{L} γ^(l,2L−l) ∂L^(l,2L−l)_Rec / ∂W )
end
Hyperparameters. Table. A1 summarizes the ELIS hyperparameter setting for different datasets. Other hyperparameters are set the same for all datasets: learning rate lr = 0.01 and number of epochs E = 5000. The LeakyReLU is used as the activation function.
Table A1: Hyperparameters of ELIS for different datasets
Dataset         Points  Network structure (number of parameters)  Q in Equ. (7)  Batch size
Swiss Roll      800     3, 500, 500, 2 (0.252M)                   10             800
S-Curve         800     3, 500, 500, 2 (0.252M)                   10             800
Severed Sphere  800     3, 500, 500, 2 (0.252M)                   10             800
SpheresA        10000   101, 500, 500, 2 (0.301M)                 10             10000
SpheresB        5500    101, 500, 500, 2 (0.301M)                 10             5500
Coil20          1440    16384, 500, 500, 2 (8.443M)               10             1440
Coil100         7200    49152, 1000, 500, 250, 2 (24.82M)         10             2400
MNIST           60000   784, 1000, 500, 300, 2 (1.434M)           15             4000
Fashion-MNIST   60000   784, 1000, 500, 2 (1.285M)                10             4000
Continuation in ν^{(l')}. In the training process, the parameter ν^{(l')} used in computing sample similarities for the latent layer is graduated from a small number to a large number, e.g. ν^{(l')}: 0.01 → 100 (see Equ. (5)), while it is fixed at a large value, e.g. ν^{(l')} = 100, for the input layer. Empirically, the continuation helps training converge to a good solution; the reasons behind this are to be explained in future work.
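A minimal sketch of the kind of ν continuation schedule described above is given below; the geometric interpolation and the epoch count are illustrative choices, not the authors' exact schedule.

import numpy as np

def nu_schedule(n_epochs, nu_start=0.01, nu_end=100.0):
    # Geometric ramp of the latent-space nu from a small value (global focus) to a large one (local focus)
    return np.geomspace(nu_start, nu_end, n_epochs)

# usage sketch: nu_list = nu_schedule(5000); at epoch e, use nu = nu_list[e] for the latent layer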
A.2 DEFINITIONS OF PERFORMANCE METRICS
1. Cont (Continuity) is asymmetric to Trust (from space X^{(l')} to space X^{(l)}):
$$\mathrm{Cont} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\ \sum_{j\in \mathcal{N}^{(l)}_{i,k},\, j\notin \mathcal{N}^{(l')}_{i,k}}\big(r^{(l')}_{i,j}-k\big)\right)$$
where $r^{(l')}_{i,j}$ is the rank of $x^{(l')}_j$ in the k-NN of $x^{(l')}_i$, M is the size of the dataset, and $\mathcal{N}^{(l')}_{i,k}$ is the set of indices of the k-NN of $x^{(l')}_i$. k_1 and k_2 are the lower and upper bounds of the k-NN. For SpheresA and SpheresB, we focus more on global performance, so we set k_1 = [M/14], k_2 = [M/7]. For the other datasets, we set k_1 = 5, k_2 = 10.
2. Trust (Trustworthiness) measures how well the k nearest neighbors of a point are preserved when going from space X^{(l)} to space X^{(l')}:
$$\mathrm{Trust} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\ \sum_{j\in \mathcal{N}^{(l')}_{i,k},\, j\notin \mathcal{N}^{(l)}_{i,k}}\big(r^{(l)}_{i,j}-k\big)\right)$$
where $r^{(l)}_{i,j}$ is the rank of $x^{(l)}_j$ in the k-NN of $x^{(l)}_i$.
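The following NumPy sketch computes Cont and Trust from two pairwise distance matrices via neighbor ranks, following the formulas above; it is an illustrative implementation, not the evaluation code used for the reported numbers.

import numpy as np

def _ranks(dist):
    # ranks[i, j] = rank of point j among the neighbors of point i (1 = nearest), self excluded
    d = dist.copy()
    np.fill_diagonal(d, np.inf)
    order = np.argsort(d, axis=1)
    r = np.empty_like(order)
    rows = np.arange(d.shape[0])[:, None]
    r[rows, order] = np.arange(1, d.shape[1] + 1)
    return r

def _penalty(rank_in, rank_out, k):
    # Penalize points that are k-NN under rank_in but not under rank_out, by (rank_out - k)
    M = rank_in.shape[0]
    mask = (rank_in <= k) & (rank_out > k)
    s = ((rank_out - k) * mask).sum()
    return 1.0 - 2.0 / (M * k * (2 * M - 3 * k - 1)) * s

def continuity(d_l, d_lp, k1=5, k2=10):
    r_l, r_lp = _ranks(d_l), _ranks(d_lp)
    return float(np.mean([_penalty(r_l, r_lp, k) for k in range(k1, k2 + 1)]))

def trustworthiness(d_l, d_lp, k1=5, k2=10):
    r_l, r_lp = _ranks(d_l), _ranks(d_lp)
    return float(np.mean([_penalty(r_lp, r_l, k) for k in range(k1, k2 + 1)]))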
3. ACC (svm)
The ACC (svm) is calculated as follows. (1) Compute nonlinear dimensionality reduction methods to obtain 2-dimensional embeddings. (2) Partition the data by 5-fold cross-validation. (3) For each fold, train the linear kernel SVM classifier using the training set and test it in the test set. (4) Calculate the mean value of the classification accuracy.
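A sketch of the ACC (svm) protocol with scikit-learn, assuming 2-D embeddings Z and labels y; the SVM hyperparameters are scikit-learn defaults, which may differ from those used for the reported numbers.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def acc_svm(Z, y):
    # 5-fold cross-validated accuracy of a linear-kernel SVM on the 2-D embedding
    scores = cross_val_score(SVC(kernel="linear"), Z, y, cv=5, scoring="accuracy")
    return scores.mean()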
4. ACC (NN) is defined as follows:
$$\mathrm{ACC(NN)} = \frac{\sum_{i=1}^{M} \pi\big[\, Y_i = Y_{\mathcal{N}^{(L)}_{i,1}} \big]}{M}$$
where π[·] is the indicator function, Y_i is the label of sample i, and $Y_{\mathcal{N}^{(L)}_{i,1}}$ is the label of $\mathcal{N}^{(L)}_{i,1}$, the nearest neighbor of sample i in layer L.
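An illustrative NumPy version of ACC (NN), the fraction of points whose nearest neighbor in the layer-L embedding shares their label:

import numpy as np

def acc_nn(dist_L, y):
    # dist_L: (M, M) pairwise distances in layer L; y: (M,) labels
    d = dist_L.copy()
    np.fill_diagonal(d, np.inf)   # exclude self-matches
    nn = d.argmin(axis=1)         # index of each point's nearest neighbor
    return float((y == y[nn]).mean())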
5. AUC is defined as follows:
$$\mathrm{AUC}(f) = \frac{\sum_{p_0 \in P_0} \sum_{p_1 \in P_1} \pi[p_0 > p_1]}{|P_0| \cdot |P_1|}$$
$$P_0 = \left\{ \frac{d^{(L)}_{ij} - \min d^{(L)}_{ij}}{\max d^{(L)}_{ij} - \min d^{(L)}_{ij}} \;\middle|\; i, j \in \{1, 2, \ldots, M\},\ Y_i = Y_j \right\}$$
$$P_1 = \left\{ \frac{d^{(L)}_{ij} - \min d^{(L)}_{ij}}{\max d^{(L)}_{ij} - \min d^{(L)}_{ij}} \;\middle|\; i, j \in \{1, 2, \ldots, M\},\ Y_i \neq Y_j \right\}$$
where $d^{(L)}_{ij}$ is the distance in layer L, P_0 is the set of positive (same-label) sample pairs, and P_1 is the set of negative (different-label) sample pairs.
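A literal NumPy sketch of the AUC definition above, following the pairing convention exactly as stated; it replaces the explicit double loop over (p0, p1) pairs with an equivalent sort-and-count, which is a standard identity rather than something prescribed by the paper.

import numpy as np

def pairwise_auc(dist_L, y):
    """AUC over min-max-normalized pairwise distances, same-label pairs as P0."""
    M = dist_L.shape[0]
    iu = np.triu_indices(M, k=1)                   # each unordered pair (i, j), i < j, once
    d = dist_L[iu]
    d = (d - d.min()) / (d.max() - d.min())        # normalization from the definition
    same = y[iu[0]] == y[iu[1]]
    p0, p1 = d[same], d[~same]                     # P0: same-label pairs, P1: different-label pairs
    # count pairs with p0 > p1 by sorting P1 and binary-searching each p0
    order = np.sort(p1)
    greater = np.searchsorted(order, p0, side="left").sum()
    return greater / (len(p0) * len(p1))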
A.3 MANIFOLD LEARNING AND NLDR
Manifold Learning Results. This subsection shows more results of manifold learning and NLDR obtained by using the ELIS-Enc and the ELIS-AE, in comparison with the other methods, on training and testing datasets. Some typical embedding results of manifold learning using the three autoencoder methods are visualized in Fig. A2, where t-SNE and UMAP are not included because these nontransformational methods are unable to generalize to test datasets. LIS-Enc and TopoAE learned poor
Figure A2: Comparison of visualization results of autoencoders on training and testing sets
results on the training set, so they did not work well on the test set either. ELIS-AE, as an autoencoder-based method, has a clear advantage in terms of generalization performance because it can handle the test data, and it is easy to apply to specific tasks such as classification, regression and clustering.
Table A2 compares performance metrics on 8 datasets, where ACC(SVM), ACC(NN) and AUC are absent for SwissRoll and SeveredSphere because these datasets have no class labels.
Fig. A3 and Fig. A4 show the visualization results of the toy and real-world datasets on the training sets. For Swiss Roll, Severed Sphere, and S-Curve, ELIS-Enc, LIS-Enc and MLLE all maintained the topology of the original data; however, the MLLE method did not preserve the relative Euclidean distances (the resulting embedding is square instead of rectangular). For SpheresA, ELIS-Enc, LIS-Enc, and TopoAE show the "big sphere enclosing 10 small spheres" in the 2D embedding, but for SpheresB only
Table A2: Comparison in performance metrics with five other methods on eight datasets

Dataset        Metric     ELIS-Enc  LIS-Enc  UMAP    t-SNE   TopoAE  MLLE
Swiss Roll     Cont       1.0000    1.0000   0.9962  0.9969  0.9716  0.9956
               Trust      1.0000    1.0000   0.9983  0.9993  0.9809  0.9948
SeveredSphere  Cont       0.9997    0.9932   0.9967  0.9985  0.9854  0.9958
               Trust      0.9997    0.9755   0.9989  0.9995  0.9891  0.9836
SpheresA       Cont       0.7850    0.7892   0.7147  0.7548  0.8064  0.7272
               ACC(SVM)   0.5213    0.5000   0.5550  0.4992  0.4982  0.5000
               ACC(NN)    0.9985    0.9912   0.5406  0.7837  0.9944  0.5205
               AUC        0.5698    0.3362   0.5816  0.5603  0.3328  0.5961
SpheresB       Cont       0.9242    0.9255   0.9109  0.9155  0.9245  0.8943
               ACC(SVM)   0.9558    0.9100   0.9100  0.8478  0.9581  0.0965
               ACC(NN)    0.9987    0.9969   0.8469  0.9365  0.9949  0.8265
               AUC        0.9780    0.9318   0.9570  0.9570  0.9870  0.9459
Coil20         Cont       0.9956    0.9973   0.9962  0.9927  0.9901  0.9395
               ACC(SVM)   0.8941    0.8301   0.8472  0.8014  0.7078  0.1556
               ACC(NN)    0.9965    0.9354   0.8917  0.9965  0.8160  0.6410
               AUC        0.9780    0.9537   0.9842  0.9582  0.8916  0.8824
Coil100        Cont       0.9936    0.9967   0.9955  0.9950  0.9903  0.7898
               ACC(SVM)   0.9372    0.7319   0.8299  0.8278  0.5540  0.0363
               ACC(NN)    0.9976    0.8163   0.9232  0.9951  0.4797  0.3350
               AUC        0.9770    0.9667   0.9819  0.9759  0.8735  0.7322
MNIST          Cont       0.9639    0.9749   0.9646  0.9630  0.9618  0.9183
               ACC(SVM)   0.9699    0.7468   0.9690  0.9525  0.7450  0.1100
               ACC(NN)    0.9568    0.7035   0.9528  0.9567  0.7773  0.7423
               AUC        0.9725    0.8779   0.9691  0.9314  0.8000  0.8575
Fashion-MNIST  Cont       0.9848    0.9901   0.9836  0.9777  0.9864  0.9298
               ACC(SVM)   0.7125    0.6908   0.7030  0.5518  0.6067  0.1058
               ACC(NN)    0.7092    0.6427   0.7253  0.7787  0.5718  0.6145
               AUC        0.9121    0.8843   0.9165  0.8256  0.8310  0.7908
ELIS-Enc and LIS-Enc show the "enclosing" phenomenon. For Coil20 and Coil100, ELIS-Enc, UMAP and t-SNE can produce non-intersecting embeddings. However, the ELIS-Enc results are distinguishable and do not cut any of the manifolds. For MNIST and Fashion-MNIST, both UMAP and ELIS-Enc output good embeddings, but in terms of performance metrics, ELIS-Enc has clear advantages.
Symmetry of the objects and ELIS-Enc’s embedding in Coil20. For Coil20, information about the symmetry of the objects in the picture can be obtained by analyzing the embedding generated by ELIS-Enc. Details of the ELIS-Enc embedding of the Coil20 are shown in Fig. A5.
We divided the Coil20’s manifolds into four patterns based on the symmetry of the objects in the image and the shape of the manifold.
(1) Objects that are single plane mirror symmetric have elongated ellipse embedding shapes; For objects with single plane mirror symmetry, an angle can be found from which an image taken by rotating to the left is approximately equal to an image taken by rotating to the right. The corresponding two-dimensional manifolds are therefore elongated ellipse (The endpoints of the two long axes of the ellipse correspond to the two images obtained by taking pictures along the plane of symmetry.).
(2) Objects that are rotational symmetric have round embedding shapes; For rotational symmetric objects, the resulting pictures are always very similar no matter what angle they are taken from, so that the resulting two-dimensional manifold is squeezed inward into a circle.
(3) Objects that are double vertical mirror symmetric have nested double-ring embeddings; for objects with double vertical mirror symmetry, every 180 degrees of rotation the resulting image reappears (the reappeared image is very similar to the one from 180 degrees earlier, and
Figure A3: Comparison of visualization results for toy dataset on training set
is very close in two-dimensional space), thus the resulting manifold consists of two nested rings.
(4) Objects whose symmetry is not evident.
Figure A4: Comparison of visualization results for real-world dataset on training set
Figure A5: Details of the ELIS-Enc’s embedding of the Coil20 and four manifold patterns
A.4 MANIFOLD DATA GENERATION
The manifold generation task generates a complete manifold structure from finite manifold samples. In this experiment, the test steps are as follows:
(1) Training a network (including an encoder and a decoder) that generates a 2-dimensional embedding;
(2) Performing linear interpolation in the embedding; (3) Mapping the interpolation results back to the data space via the decoder (a minimal sketch of steps (2)-(3) is given below).
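The following PyTorch sketch illustrates steps (2)-(3), assuming a trained encoder/decoder pair; the names encoder, elis_decoder, x_a and x_b are placeholders, not the authors' code.

import torch

def interpolate_and_decode(decoder, z_a, z_b, n_steps=10):
    # Linear interpolation between two latent points, mapped back through the decoder
    ts = torch.linspace(0.0, 1.0, n_steps).unsqueeze(1)            # (n_steps, 1)
    z_path = (1 - ts) * z_a.unsqueeze(0) + ts * z_b.unsqueeze(0)   # (n_steps, latent_dim)
    with torch.no_grad():
        return decoder(z_path)                                     # generated samples along the path

# usage sketch: imgs = interpolate_and_decode(elis_decoder, encoder(x_a), encoder(x_b))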
Generation results for comparison with the TopoAE and LIS-AE are shown in Fig. A6. The same network structure was used in the experiments.
Figure A6: Comparison in visualization with LIS-AE and TopoAE in manifold generation. The left side is three embedding result, and black point in manifolds is the location of the interpolation. The right side is the interpolation results. there are 12 images in the right of the figure, the leftmost and the rightmost images are the original images, and ten images in the middle are the generation results with geodesic distance.
ELIS-AE has an advantage over the other two methods. Neither LIS-AE nor TopoAE learns a satisfactory embedding, so their interpolation results are poor. The embedding produced by LIS-AE has overlapping manifolds, so it generates images belonging to other manifolds (e.g. manifold A). The TopoAE embedding is messy, so the decoder reconstructs fuzzy images.
A.5 ABLATION STUDY
Cross-layer ELIS constraint. The effect of the ELIS-Enc constraint is determined by the weights α^{(l,l')} as in Equ. (11). We set the weights α^{(l,l')} in one of four schemes (where Head, Tail and Mids denote the input layer, the latent layer and the intermediate layers, respectively) for an L-layer encoder:
(1) Head-Tail: weight α(0,L) = 1 (the constraint is imposed between the input layer and the latent layer);
(2) Head-Mids: weights α(0,l) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the input layer and each of intermediate layers);
(3) Mids-Tail: weights α(l,L) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the latent layer and each of intermediate layers);
(4) Head-Mids + Mids-Tail: weights α(0,l) = α(l,L) = 1/2L where l ∈ {1, 2, · · · , L} (combination of Head-Mids and Mids-Tail).
In this ablation study, a 10-layer neural network is used and the width of the network is determined depending on the dataset. (Swiss Roll:[3, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresA:[101, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresB:[101, 500, 400, 300, 300, 200, 200, 100, 100, 2], COIL20:[16384, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2])
The evaluation metrics for four different cross-layer schemes are presented in Table. A3. The results of different cross-layer schemes are shown in Fig. A7.
Figure A7: Comparison in visualisation of four different cross-layer schemes.
The visualization results and metrics show that cross-layer scheme (4) has better results in a 10-layer network. The network is very difficult to train if the ELIS loss acts only on the first and last layers (cross-layer scheme (1)). The network is easier to train if ELIS losses act between the first layer and all intermediate layers (cross-layer scheme (2)). The ELIS losses between the intermediate and last layers (cross-layer scheme (3)) do not improve the performance of the embedding if used alone.
Table A3: Comparison in performance metrics of four different cross-layer schemes.

Dataset     Scheme  Cont    Trust   ACC(SVM)  ACC(NN)  AUC
Swiss Roll  (1)     -       -       -         -        -
            (2)     0.9999  0.9999  -         -        -
            (3)     -       -       -         -        -
            (4)     0.9999  0.9999  -         -        -
SpheresA    (1)     -       -       -         -        -
            (2)     0.9402  0.8832  0.9149    0.9478   0.9696
            (3)     -       -       -         -        -
            (4)     0.9376  0.8858  0.9529    0.9784   0.9721
SpheresB    (1)     0.9111  0.6373  0.5225    1.0000   0.5486
            (2)     0.9087  0.6341  0.5145    1.0000   0.5489
            (3)     0.8520  0.6299  0.5388    1.0000   0.4474
            (4)     0.8167  0.6432  0.8740    0.9936   0.7461
Coil20      (1)     -       -       -         -        -
            (2)     0.9955  0.9852  0.8454    0.9792   0.9721
            (3)     0.9904  0.9876  0.8459    0.9847   0.9524
            (4)     0.9947  0.9901  0.8867    0.9986   0.9735
However, if used in conjunction with the cross-layer scheme (4), it will improve the metric of the resulting latent space.
Effect of ν value. Fig. A8 and Fig. A9 show the effect of data space hyperparameter and latent space hyperparameter on embedding results.
Figure A8: Embedding results with varying ν in input space.
ν in the input space controls the range of sensitivity in the data space. If the input space's ν is small, the derivative of the probabilistic mapping function of the input layer will be small, and the probability will be insensitive to distance in the data space; in other words, ELIS-Enc will degenerate toward LIS-Enc. To pay more attention to the global information of the input data, raise ν; to pay more attention to local information of the input data, lower the input space's ν. By default, ELIS-Enc does not change this hyperparameter and uses input space ν = 100.
ν in the latent space controls the degree of detail displayed in the latent space (detailed local information versus global information). If the latent space's ν is small, ELIS will tend to display global information; if the latent space's ν is large, ELIS-Enc will tend to display local information. Fig. A9 shows, from left to right, a process that goes from showing global information to showing excessive local information.
Figure A9: Embedding results with varying ν in latent space.
Continuation strategy. Continuation in hyperparameter latent space’s ν is used during the ELIS encoder learning process. The algorithm starts with a small latent space’s ν value to include more global information. Then it gradually increases the value to focus more locally. The necessity of parameter continuation is shown in Fig. A10.
Figure A10: Ablation study of with and without parameter continuation in latent space ν. (The upper row shows results obtained via parameter continuation ν = 0.001 → ν = 100 in latent space, the lower row shows results with a fixed ν = 100)
Experiments show that the effect of parameter continuation (ν = 0.001 → ν = 100 in the latent space) is substantial, with clear improvements on Swiss Roll, Coil20 and MNIST. | 1. What is the reviewer's opinion regarding the paper's empirical results?
2. What are the reviewer's concerns about the paper's exposition and motivation?
3. What is the reviewer's question regarding the KL-like two-way divergence and its comparison to LIS loss?
4. What does the reviewer find confusing regarding the dependence of g(η) and h(η)?
5. What evidence or justification does the reviewer request regarding ELIS encoder's ability to generalize to unseen data?
6. What does the reviewer find subjective in the table of "merits"?
7. What questions does the reviewer have regarding the hyperparameters ν in input space and latent space? | Review | Review
The empirical results, especially the visualizations, are quite impressive. The Swiss roll is fully unrolled to a rectangle, the MNIST clusters are well-separated, the smaller spheres in SpheresB are correctly clustered, and the classes in the COIL datasets are embedded with correct loop topology.
On the other hand, I found the exposition unclear and unmotivated. The method is called "elastic locally isometric smoothness," but after reading the paper, I am not sure I see why it is elastic, locally isometric, or smooth. The paper is riddled with typos and awkward wording, but Section 2—comprising the theory and algorithm setup—deserves a deeper edit.
Most importantly, the motivation for the choices made in the method is unclear. For example, why is the KL-like two-way divergence a better way to compare distance matrices/metric graphs? Why is this superior to the LIS loss, which compares distances directly? The paper repeatedly stresses that ELIS is more "flexible" than LIS due to nonlinearity: "It imposes a more flexible nonlinear similarity-preserving constraint as opposed to the distance-preserving (isometry) constraint of Vanila LIS. More specifically, ELIS transforms a distance into a similarity metric using a nonlinear function and defines a KL loss based on similarities between nearby pairs and far-away pairs." The use of "constraint" to refer to loss terms is misleading here. And while more "flexibility" sounds good, it is unclear what exactly it means or why it should be true. Adding more nonlinearity generally makes problems harder, right?
Perhaps the justification for these choices is purely empirical: the method achieves good results on a variety of data, so it must be better. If this is the intended argument, the paper should make this explicit. Moreover, the relative complexity of the method compared to LIS should impose a high burden of evidence. Otherwise, if there are theoretical motivations for the choices made in the paper, they should be made explicit.
Other assorted nitpicks:
The "dependence" of
g
(
η
)
and
h
(
η
)
on
η
2
is confusing. The square function is monotonic over nonnegative real numbers, so this talk of dependence is vacuous as long as
η
is nonnegative. If you mean to make the dependence explicit, why not just write
g
(
η
2
)
?
The article asserts that "An ELIS (and LIS) encoder can learn an NLDR transformation without the need for a decoder (as required by autoencoders), and this encoder can generalize to unseen data (that ISOMAP, LLE, t-SNE and UMAP cannot)." This requires evidence/justification.
The table of "merits" on page 8 seems awfully subjective. It is OK to make claims in the text, but they should be backed up by hard data. Putting subjective claims in a table makes it look like they are measurable.
The hyperparameters require clearer explanation. The appendix states on page 21 that "ν in input space controls the range of sensations in data space." What is a range of sensations? Similarly, it states that "ν in latent space controls degree of display in latent space (show detailed information or show global information)." What is a degree of display? Are these parameters related to scale or feature size? How do they scale relative to the data? What "units" are they in? Finally, the appendix uses the same parameter name, ν, for parameters in both the input space and the latent space, referring to, e.g., "latent space's ν." Why not just use the layer indices as in the main text? |
ICLR | Title
Deep Manifold Computing and Visualization Using Elastic Locally Isometric Smoothness
Abstract
The ability to preserve local geometry of highly nonlinear manifolds in high dimensional spaces and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold computing, nonlinear dimensionality reduction (NLDR) and visualization. This paper proposes a novel method, called elastic locally isometric smoothness (ELIS), to empower deep neural networks with such an ability. ELIS requires that a desired metric between points should be preserved across layers in order to preserve local geometry; such a smoothness constraint effectively regularizes vector-based transformations to become well-behaved local metric-preserving homeomorphisms. Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions. The ELIS method incorporates a class of suitable nonlinear similarity functions into a two-way divergence loss and uses hyperparameter continuation in finding optimal solutions. Extensive experiments, comparisons, and ablation study demonstrate that ELIS can deliver results not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading counterparts of manifold and autoencoder learning for NLDR and manifold data generation.
1 INTRODUCTION
Manifold learning aims to find from a set of higher dimensional data its embedding or representation in a low dimensional latent space. Nonlinear dimensionality reduction (NLDR) aims to construct a transformation that is generalizable to unseen data. It is hoped that the lower dimensional representation can be used in conjunction with a simple metric such as the Euclidean distance for downstream tasks such as classification and visualization. Manifold data generation performs the inverse transformation to generate data from samples in the latent space. We call this collection of manifold related problems manifold computing. The basis for manifold computing is the manifold assumption (Mikhail Belkin, 2002; Fefferman et al., 2016).
Great advances have been made in the past two decades in manifold computing and visualization. ISOMAP (Tenenbaum et al., 2000) and LLE (locally linear embedding) (Roweis & Saul, 2000) are classic methods for manifold learning. More recent developments include local geometry-based method (Gashler et al., 2008; Zhang & Wang, 2007; Chen & Buja, 2009; McQueen et al., 2016), graph spectral analysis (Donoho & Grimes, 2003) and latent variable models (Saul, 2020). The most popular high dimensional data visualization methods to date are t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018), with wide applications such as bio-science and technology (Becht et al., 2019; Dorrity et al., 2020). While the aforementioned are traditional machine learning, deep learning-based methods include autoencoders (Hinton & Salakhutdinov, 2006; Moor et al., 2020). The problem can be considered from the viewpoints of geometry deep learning (Bronstein et al., 2017)) and topology data analysis (Wasserman, 2018; Moor et al., 2020).
The ability to preserve geometric structure of nonlinear manifolds and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold-based computing and visualization. Recently, Markov-Lipschitz deep learning (MLDL) (Li et al., 2020) is proposed as a general framework for manifold learning, NLDR, visualization and manifold data generation. The idea is to impose the constraint of geometric isometry across neural network layers to preserve the local
geometric structure of manifold data. This effectively transforms a vector-based transformation of conventional neural networks into a local distance-preserving homeomorphism. Such local homeomorphisms avoid the transformation from collapse, twisting, or crossing, so as to improve generalization, stability, and robustness. Locally isometric smoothness (LIS) (Li et al., 2020), which imposes straight distance-preserving, is proposed as a method in the MLDL framework. LIS has demonstrated significant advantages in manifold learning and NLDR.
This paper proposes a more advanced method in the MLDL framework, called elastic locally isometric smoothness (ELIS), aimed to empower deep neural networks with ability to tackle the high nonlinearity and non-Euclideanity challenges arising from complicated manifolds in high dimension spaces that LIS is unable to cope with. Whereas LIS preserves the straight distances between neighboring points, ELIS is based on a similarity metric that is nonlinear in distance and a two-way divergence loss (of nearby neighbors and far-away pairs, respectively); this renders more flexibility and capacity in tackling the challenges yet under the control of the ELIS regularization. As the result, ELIS bridges gaps between non-Euclidean manifolds in the input space and resulting Euclidean hyperplanes in the learned lower dimensional latent space, with geometric structure of the manifolds preserved. Both ELIS and LIS can be considered as a form of graph neural networks (GNN) (Scarselli et al., 2009) but without the aggregation generally present in GNNs. They are more like what is called “manifold learning 2.0” (Bronstein, 2020).
The distinctive features of ELIS (and LIS) in comparison with related methods are summarized in Table 1. ELIS-based neural networks can accomplish all the functionalities in the general MLDL framework, which none of the other methods can achieve. Extensive experiments, comparisons, and an ablation study demonstrate that ELIS-based neural networks produce results not only superior to the SOTA t-SNE and UMAP for NLDR and visualization but also better than other algorithms of manifold and autoencoder learning, including LIS, for NLDR and manifold data generation. The main contributions of this paper are summarized below:
(1) Proposing the ELIS constraint in the MLDL framework, based on a similarity metric which is nonlinear in distance. It inherits the metric-preserving property of LIS so that the resulting layer-wise transformation is geometrically smooth, hence topologically homeomorphic, yet possesses more flexibility than LIS in handling highly nonlinear manifolds in high dimensional spaces.
(2) Proposing conditions for a class of nonlinear similarity functions for converting from distance to similarity, in conjunction with a two-way divergence loss. This ensures the metric-preserving and neighbor-confining properties.
(3) Proposing two instances of ELIS-based neural networks: an ELIS encoder for manifold learning and visualization and an ELIS autoencoder for manifold reconstruction and data generation.
(4) Providing several SOTA results that surpass UMAP and other leading algorithms.
In the following, Section 2 introduces LIS and presents the ELIS formulations, and Section 3 presents extensive experiments. The code is provided in the Supplementary Material.
2 ELASTIC LOCALLY ISOMETRIC SMOOTHNESS
Both ELIS and LIS are formulated in the MLDL framework (illustrated in Fig.A1 in Appendix) which is aimed to regularize neural transformations through imposing the ELIS constraint between
layers to achieve certain well-behaving properties. However, the ELIS formulation tackles challenges of highly nonlinear manifold data in high dimensional spaces using a more flexible and effective way, much inspired by t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018). Let X = {x1, . . . , xM} be a set of M samples in the input space RN with the index set S = {1, . . . ,M}. These samples may come from one or several lower dimensional manifoldsMX ⊂ RN . WhenMX is Riemannian, its tangent subspace Tx(MX) at any x ∈ MX is locally isomorphic to an Euclidean space of dimensionality dim(MX) < N . Therefore, we can use a cascade of nonlinear neural transformations to "unfold" nonlinear manifolds in a high dimensional input space into hyper-planar regions in a lower dimensional latent space.
Both ELIS and LIS aim to accomplish the following 4 tasks, of which few neural networks can do all: (1) Manifold learning: to learn an embedding in a latent spaceMZ ⊂ Rn, where n < N , based on the local structure of X . (2) Representation Learning: to learn the underlying mapping Φ : MX =⇒ MZ for the embedding that is generalizable to unseen data x 6∈ X,x ∈ MX . (3) Visualization: to visualize the embedding in 2D or 3D space. (4) Manifold generation: to find the inverse mapping Φ−1 :MZ =⇒MX and generate new data onMX from samples inMZ . ELIS is aimed to surpass LIS.
2.1 THE LIS CONSTRAINT AND NEURAL NETWORKS
The LIS constraint is aimed to best preserve the local distances of the data between two metric spaces, encouraging a vector-based neural transformation Φ(X |W ), where W is the transformation matrix of the neural network, to become a well-behaved local distance-preserving homeomorphism. This can be achieved by adding the following LIS loss (Li et al., 2020), imposed between two layers (metric spaces) l and l′
$$\mathcal{L}^{(l,l')}_{\mathrm{LIS}}(W) = \sum_{i\in S} \sum_{j\in \mathcal{N}^{(l)}_i} \left| d\big(x^{(l)}_i, x^{(l)}_j\big) - d\big(x^{(l')}_i, x^{(l')}_j\big) \right| \quad (1)$$
where $d : X \times X \to \mathbb{R}_{\geq 0}$ is a dissimilarity metric, $x^{(l')}_i = \Phi(x^{(l)}_i \mid W)$ is the result of the effective transformation Φ from layer l to l', and $\mathcal{N}_i$ is the set of neighbors of i. Without prior knowledge, d_{ij} is usually computed as the Euclidean distance, albeit it may not well reflect the reality. It is hoped that after a series of proper nonlinear transformations, the input data is transformed into an embedding in the latent space such that the Euclidean distance makes more sense in describing mutual relationships between points. In this work, we aim to find such transformations.
The LIS loss effectively minimizes the bi-Lipschitz constant of Φ. It is through the neighborhood system, N = {N_i | i ∈ S}, that the influence of a point on the others is propagated afar. For this reason, the collection of random variables x^{(l)} constitutes a Markov random field. Equ. (1) is defined w.r.t. N_i (Markovianity) and aimed at minimizing the bi-Lipschitz constant, hence the name Markov-Lipschitz (Li et al., 2020).
The basic LIS loss is augmented by an auxiliary "push-way" term (Li et al., 2020)
$$\mathcal{L}^{(l,l')}_{\mathrm{push}}(W) = -\sum_{i\in S} \sum_{j\notin \mathcal{N}^{(l)}_i} \pi\big[d\big(x^{(l')}_i, x^{(l')}_j\big) < B\big]\; d\big(x^{(l')}_i, x^{(l')}_j\big) \quad (2)$$
in which π[·] ∈ {0, 1} is the indicator function and B is a bound. This term is aimed to help "unfold" nonlinear manifolds, by exerting a spring force to push away from each other those pairs (i, j) which are non-neighbors at layer l but nearby (distance smaller than B) at layer l′.
These two losses are combined to form a LIS-based encoder loss for manifold learning and dimension reduction
$$\mathcal{L}_{\mathrm{Enc}} = \sum_{(l,l')} \mathcal{L}^{(l,l')}_{\mathrm{LIS}}(W) + \mu\, \mathcal{L}^{(l,l')}_{\mathrm{push}}(W) \quad (3)$$
where µ is a weight and (l, l′) is summed over a set of designated layer pairs (currently designed manually). A LIS-based autoencoder can be formulated by applying the LIS constraint between layers within the decoder and between the encoder and decoder layers. LIS-based neural networks have significant advantages (Li et al., 2020).
2.2 THE ELIS CONSTRAINT
The proposed ELIS constraint is aimed to tackle difficulties in "flattening" highly nonlinear manifolds in a high dimensional space into hyperplanes in a lower dimensional space. It imposes a more flexible nonlinear similarity-preserving constraint as opposed to the distance-preserving (isometry) constraint of vanilla LIS. More specifically, ELIS transforms a distance into a similarity metric using a nonlinear function and defines a KL loss based on similarities between nearby pairs and far-away pairs. This makes the metric-preserving constraint of ELIS more flexible than the straight distance-preserving of LIS for accomplishing the challenging task.
Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions.
Converting distance to similarity. Following UMAP, we assume that X^{(l)} is fixed (e.g., the input layer) and X^{(l')} at subsequent layers l' are computed as a result of manifold learning. The nonlinear similarities between x_i and x_j at each layer are computed as follows. First, define a nearest neighbor (NN)-normalized distance
$$d_{i|j} \overset{\mathrm{def}}{=} d(x_i, x_j) - \rho_i \;\geq\; 0 \quad (4)$$
where $\rho_i = d(x_i, x_{nn(i)})$, in which $x_{nn(i)}$ denotes the nearest neighbor of x_i. Then, d_{i|j} is converted to a similarity metric $u_{i|j} = g(d_{i|j}) \in [0, 1]$ where g is a nonlinear function.
We require that g(η) satisfy the following necessary conditions ∀η = di|j ≥ 0:
Condition (1) – it is monotonically decreasing, g′(η) < 0 for η > 0; Condition (2) – its first derivative diminishes in the limit, limη→∞ |g′(η)| = 0.
The first condition ensures a monotonic and inverse relationship between the distance and the similarity. The second condition effectively leads to a neighborhood system bounded softly, as opposed to the "hard" bounded neighborhoods in LIS, and provides proper control on the contributions of neighboring points to the back-propagation of neural network learning.
We further require g(η) to be a function of η² – for convenience, not necessity – such that its first derivative takes the form g′(η) = 2ηh(η), where h(η) is also a function of η². h(η) can be called the influence function because it controls how the other neighboring point x_j can influence x_i. Condition (2) above restricts the influence of a "far-away" point x_j on x_i (between which the distance η_{ij} = ‖x_i − x_j‖ is relatively large) to diminish in the back-propagation process. This provides a properly weighted neighborhood system with respect to which the influence between points is adaptively limited in scope.
Specifically for ELIS, we define the following σ_i-data-adaptive, ν-parameterized nonlinear similarity
$$u_{i|j}(\sigma_i, \nu) = g(d_{i|j} \mid \sigma_i, \nu) = C_\nu \left(1 + \frac{d_{i|j}^2}{\sigma_i\, \nu}\right)^{-(\nu+1)}, \quad (5)$$
where ν ∈ R_+ is similar to the degree of freedom (DoF) parameter in the t-distribution,
$$C_\nu = 2\pi \left(\frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}\right)^{2} \quad (6)$$
is a function of ν which sets the limit $\lim_{\nu\to+\infty} g(0 \mid \sigma_i, \nu) = 1$ (∀σ_i > 0), and the data-adaptive parameter σ_i > 0, playing a calibration role, is estimated from the data by best fitting the equation
$$\sum_{j\neq i} u_{i|j}(\sigma_i, \nu) = \log_2 Q \quad (7)$$
for the given perplexity-like hyperparameter Q. While other choices satisfying the aforementioned necessary conditions, including the normalized Gaussian and Cauchy functions used in t-SNE (Maaten, 2014) and the fitted polynomial function used in UMAP (McInnes et al., 2018), can also work for ELIS, we find Equ. (5) a better choice, not only because it produces better results but also because we can use the ν parameter as a continuation tool for preventing the training from converging to bad local minima and for controlling the separation margin between different manifolds, as will be shown in the ablation study.
Computing similarities u_{ij} and u'_{ij}. Because the symmetry u_{i|j} = u_{j|i} does not hold due to differences in σ_i for the input layer (l), the following symmetrization is performed
$$u_{ij} = u_{j|i} + u_{i|j} - u_{j|i}\, u_{i|j}. \quad (8)$$
On the other hand, for the subsequent latent layers, the computation of σ_i and ρ_i for each i would bring about huge computational costs. To overcome this problem, we directly set σ'_i = 1 and ρ'_i = 0 (this also ensures the symmetry u'_{i|j} = u'_{j|i}). While σ_i and ρ_i are needed to deal with unevenness and outliers of the data for the input layer, the necessity becomes less demanding as the layer goes deeper after layers of nonlinear manifold unfolding. From u^{(l)}_{ij} of layer l can be constructed a weighted graph G(S, X^{(l)}, U^{(l)}) consisting of a set S of nodes with node attributes X^{(l)} and edge attributes (weights) $U^{(l)} = \{u^{(l)}_{ij} > 0 \mid \forall i, j \in S\}$. The global structure of a manifold is discovered from the local geometry of data through the graph G.
Formulating the ELIS loss. ELIS transforms the distance metric d_{ij} into a similarity metric using a nonlinear function u_{ij} = g(d_{ij}) and defines the ELIS loss between layers l and l' in terms of the similarities $U^{(l)} = \{u^{(l)}_{ij} \mid i, j \in S, i \neq j\}$ at layer l and their counterpart $U^{(l')} = \{u^{(l')}_{ij} \mid i, j \in S, i \neq j\}$ at layer l'. The ELIS loss is defined by what we call the two-way divergence (a.k.a. the fuzzy information for discrimination (Bhandari & Pal, 1993) and the fuzzy set cross entropy in UMAP (McInnes et al., 2018))
$$\mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} u^{(l)}_{ij} \log \frac{u^{(l)}_{ij}}{u^{(l')}_{ij}} + \big(1 - u^{(l)}_{ij}\big) \log \frac{1 - u^{(l)}_{ij}}{1 - u^{(l')}_{ij}} \quad (9)$$
The first term is the directed divergence of the two fuzzy sets of similarities, in lieu of the LIS’ distance-preserving term of Equ. (1); the second term can be considered as the directed divergence of the two corresponding complement fuzzy sets, replacing the push-way term of Equ. (2).
Equ.(9) is called the "two-way divergence" because the first term on the right side of the equation imposes similarity-based attraction forces between nearby (intra-manifold) pairs whereas the second term exerts dissimilarity-based repulsion forces between far-away (inter-manifold) pairs. In other words, intra-manifold points are transformed to a cluster in the latent space, mainly as the result of the first term whereas inter-manifold point pairs push away from each other to different clusters, mainly due to the second term.
Note also that ELIS applies the two terms in a soft, adaptive way via its weighted neighborhood graph, where the edges are effectively restricted to pairs of corresponding nodes (data points) between which the absolute gradients $\big|\nabla_W \mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')})\big| > 0$ are nonzero, in contrast to the "hard" neighborhood system in LIS.
The ELIS loss can be rearranged as follows
$$\mathcal{L}^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} \Big[ u^{(l)}_{ij} \log u^{(l)}_{ij} + \big(1 - u^{(l)}_{ij}\big) \log\big(1 - u^{(l)}_{ij}\big) - u^{(l)}_{ij} \log u^{(l')}_{ij} - \big(1 - u^{(l)}_{ij}\big) \log\big(1 - u^{(l')}_{ij}\big) \Big] \quad (10)$$
When X^{(l)} (hence u^{(l)}_{ij}) is fixed, the optimization only needs to minimize the second part, which involves u^{(l')}_{ij}.
2.3 ELIS ENCODER AND AUTOENCODER
The ELIS encoder consists of a cascade of nonlinear forward neural transformations constrained by the ELIS loss, aimed for manifold learning and NLDR. An ELIS (and LIS) encoder can learn an NLDR transformation without the need for a decoder (as required by autoencoders), and this encoder can generalize to unseen data (that ISOMAP, LLE, t-SNE and UMAP cannot). The total loss for the ELIS encoder is the sum of all the ELIS losses over a prescribed set of layer pairs (l, l′)
L_{Enc}(W) = \sum_{(l,l')} \alpha^{(l,l')} L^{(l,l')}_{ELIS}(W)    (11)

where the \alpha^{(l,l')} weight the relative importance of the L^{(l,l')}_{ELIS}.
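A small sketch of how Equ. (11) aggregates the cross-layer losses, assuming a list of per-layer similarity matrices and a dictionary alpha mapping layer pairs (l, l') to their weights; two_way_divergence refers to the sketch above, and the names are illustrative.

    def elis_encoder_loss(similarities, alpha):
        # similarities: list of (M, M) similarity matrices, one per layer (index 0 = input layer)
        # alpha: dict mapping a layer pair (l, l_prime) to its weight alpha^(l, l')
        total = 0.0
        for (l, l_prime), weight in alpha.items():
            total = total + weight * two_way_divergence(similarities[l], similarities[l_prime])
        return total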
The ELIS autoencoder has two purposes: (1) to further regularize or optimize the ELIS encoder-based manifold learning by using an ELIS decoder, and (2) to enable generation of new data of the learned manifolds by the trained ELIS decoder. The ELIS autoencoder structure consists of the ELIS encoder and decoder in cascade. The ELIS decoder is aimed at approximating the inverse transformation of the ELIS encoder and is made entirely symmetric to the ELIS encoder in its network structure. The overall weight matrices become W = [W_{Enc}, W_{Dec}]. The loss function is composed of three terms:
L_{AE}(W) = L_{Enc}(W_{Enc}) + L_{Dec}(W_{Dec}) + L_{Rec}(W)    (12)

where L_{Enc}(W_{Enc}) is the same as Equ. (11), L_{Dec}(W_{Dec}) is defined in the same way following the symmetry, and the reconstruction loss L_{Rec}(W) is summed over all the corresponding layers

L_{Rec}(W) = \sum_{l=0}^{L-1} \gamma_l \sum_{i=1}^{M} \| x^{(l)}_i - \hat{x}^{(l)}_i \|^2    (13)

where the \hat{x}^{(l)}_i are the data points at the corresponding layer of the decoder and the \gamma_l are the weights. The constraints due to L_{Rec}(W) and L_{Tie}(W) are illustrated by the dashed lines in Fig. A1 in Appendix.
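A minimal sketch of the reconstruction term of Equ. (13), assuming the encoder and decoder activations of the corresponding layers are collected in two aligned lists of tensors; the names and the plain summation are illustrative, not the authors' implementation.

    def reconstruction_loss(encoder_feats, decoder_feats, gamma):
        # encoder_feats[l] and decoder_feats[l]: (M, d_l) activations at corresponding layers
        # gamma[l]: per-layer weight gamma_l in Equ. (13)
        loss = 0.0
        for l, (x, x_hat) in enumerate(zip(encoder_feats, decoder_feats)):
            loss = loss + gamma[l] * ((x - x_hat) ** 2).sum()
        return loss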
3 EXPERIMENTS
The following experiments are aimed at evaluating ELIS in comparison with five other algorithms: UMAP (McInnes et al., 2018), t-SNE (Maaten, 2014) (for visualization), MLLE (Zhang & Wang, 2007), TopoAE (Moor et al., 2020) and LIS (Li et al., 2020), in terms of visual inspection and numerical metrics for manifold computing, visualization and data generation. Nine datasets are used, including five toy datasets: (1) SwissRoll (3-D), (2) S-Curve (3-D), (3) Severed Sphere (3-D), (4) SpheresA (101-D) (see (Moor et al., 2020) for the description) and (5) SpheresB (101-D, a modified composition of SpheresA); and four real-world datasets: (6) Coil20 (16384-D) and (7) Coil100 (49152-D) (Nene et al., 1996), (8) MNIST (784-D) (LeCun, 2013), and (9) Fashion-MNIST (784-D) (Xiao et al., 2017). The toy datasets are used because their geometric and topological structures are clear for the evaluation. The SpheresA dataset (Moor et al., 2020) is composed of 1 large sphere enclosing 10 small ones in 101-D space. SpheresB differs from SpheresA in that its large sphere contains only 500 samples (whereas that in SpheresA has 5000); the data is so sparse that the smallest within-sphere distance on the larger sphere can be greater than the distance between the larger sphere and some small ones. Five performance metrics are used for the evaluation; their exact definitions are given in Appendix A.2.
The pseudo-codes of the ELIS encoder and autoencoder and the hyperparameter settings are described in Appendix A.1. The implementation uses the PyTorch 1.6.1 library running on Ubuntu 18.04 with an NVIDIA V100 GPU. The time is spent mainly in the computation of neighbors. At present, the ELIS algorithm computes the similarity for every point pair and hence has a complexity of O(M^2) for each cross-layer pair (l, l'). The complexity can be reduced to O(M^{1.14}) by using the nearest-neighbor-descent algorithm of UMAP (McInnes et al., 2018).
3.1 MANIFOLD LEARNING AND GENERATION
Manifold Learning. Table 2 compares the performance of the five NLDR methods, where bold numbers are the best results and underlined numbers the second best. The ELIS encoder has the best overall performance. Fig. 1 visualizes some representative results; more results are given in Table A2 and Fig. A2 - Fig. A4 in Appendix A.3.
Next, we delve into the embedding details of the Coil20 objects resulting from the ELIS encoder (ELIS-Enc) and UMAP in Fig. 2. First, all the object embeddings in the ELIS-Enc result form closed loops for the 360-degree rotations (refer to Fig. A5 for the embedding-object correspondence). Second, the quality of the ELIS-derived embeddings enables us to infer some symmetries of the objects in 3D space. Four types of such symmetries are explored and discussed in Appendix A.3. The UMAP result, in contrast, does not possess such quality.
Manifold Data Generation. Fig. 3 compares images generated from interpolated points between two nearest neighbors on the embedding using three autoencoders in comparison. The images generated by the ELIS-AE have clear boundaries and look sharper than those produced by the other two autoencoders. Results for several other objects are shown in Fig. A6 in Appendix A.4.
3.2 ABLATION STUDY
Cross-layer ELIS constraint. The cross-layer ELIS constraint is weighted by \alpha^{(l,l')} as in Equ. (11). Four weight schemes, (1) Head-Tail, (2) Head-Mids, (3) Mids-Tail and (4) Head-Mids + Mids-Tail, are designed for an L-layer encoder, as described in detail in Appendix A.5. The results are compared in Table A3 and Fig. A7. Overall, the "Head-Mids + Mids-Tail" scheme, which imposes the most extensive cross-layer ELIS constraints and also needs more computation, achieves the best results. This justifies the use of the proposed ELIS method for performance improvements.
Effect of final ν value. The final ν value has significant influence on the within-manifold and between-manifold scatters. Fig. A8 and Fig. A9 in Appendix A.5 demonstrate the effect of varying ν in the input space and varying ν in the latent space on the results, respectively.
Continuation in ν. Continuation in hyperparameter ν in latent space is used during the ELIS encoder learning process. The algorithm starts with a small ν in latent space to include more global
information. Then it gradually increases the value to focus more locally. The continuation strategy results in significantly better solutions, as shown in Fig. A10 in Appendix A.5.
The major merits of ELIS. Finally, we summarize the comparative results of the experiments in the following table.
ELIS LIS UMAP t-SNE MLLE TopoAE
Succeed in unfolding toy data Yes Yes No No Yes No
Perfect manifold structures on Coil Yes No Maybe No No No
High Accuracy Most No Some Some No No
Good Reconstruction Quality Yes Maybe N/A N/A No No
4 CONCLUSION
The proposed ELIS method preserves the nonlinear similarity metric locally across layers of deep neural networks by optimizing the two-way divergence loss. It effectively tackles difficulties in deep manifold computing and visualization with the local geometry-preserving property. Empirical results, comparisons, and the ablation study demonstrate that ELIS is not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading manifold and autoencoder learning algorithms for NLDR and manifold data reconstruction and generation. Future work includes the following: (1) extending the unsupervised version of MLDL to self-supervised, semi-supervised and supervised tasks; (2) further formulating MLDL so that the cross-layer link hyperparameters α become learnable.
APPENDIX
A.1 THE MLDL FRAMEWORK AND ELIS
Markov-Lipschitz deep learning (MLDL) framework. The MLDL framework is illustrated in Fig. A1 (from Li et al. (2020)). The ML-AutoEncoder (of LIS or ELIS type) transforms the input X to an embedding X^{(L)} at layer L (the latent layer) using the ML-Encoder, and then reconstructs X̂ using the ML-Decoder. Whereas a standard neural network consists of a cascade of transformations φ^{(l)} (blue arrows), an MLDL network imposes the constraint between any two layers as appropriate (shown in orange arcs and dashed lines) in the form of cross-layer loss functions weighted by \alpha^{(l,l')}. This encourages φ^{(l)} to become well-behaved local homeomorphisms. The latent features X^{(L)} extracted by the learned ML-Encoder can be used for downstream tasks such as visualization and classification, as well as for manifold data generation using the learned ML-Decoder.
Figure A1: Illustration of Markov-Lipschitz deep learning (MLDL) framework using an MLAutoEncoder (best viewed in color).
The pseudo-codes for the ELIS encoder and the ELIS autoencoder, related hyperparameter, and a parameter continuation method are described below.
Algorithm 1: ELIS Encoder
Input: data X^{(0)}, learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, νList, Q
Compute d^{(0)}_{i|j} with (4), σ^{(0)}_i with (7) and u^{(0)}_{ij} with (8)
Initialize the neural network {Φ^{(1)}_Enc(·|W^{(1)}_Enc), Φ^{(2)}_Enc(·|W^{(2)}_Enc), ..., Φ^{(L)}_Enc(·|W^{(L)}_Enc)}
for epoch i = 0, 1, ..., E-1 do
    ν ← νList[i]
    for l = 1 to L do
        compute the layer-l embedding X^{(l)} ← Φ^{(l)}_Enc(X^{(l-1)} | W^{(l)}_Enc)
        compute u^{(l)}_{ij} with (5) and (8)
    end for
    for each layer pair (l, l') with 1 ≤ l ≤ l' ≤ L do
        compute the ELIS loss L^{(l,l')}_Enc between layers l and l' with (10)
    end for
    update parameters: W ← W - lr · Σ_{l=1}^{L} Σ_{l'=l}^{L} α^{(l,l')} ∂L^{(l,l')}_Enc / ∂W
end for
Algorithm 2: ELIS AutoEncoder
Input: data X^{(0)}, learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, γ, νList, Q
Compute d^{(0)}_{i|j} with (4), σ^{(0)}_i with (7) and u^{(0)}_{ij} with (8)
Initialize the neural network {Φ^{(1)}_Enc(·|W^{(1)}_Enc), ..., Φ^{(L)}_Enc(·|W^{(L)}_Enc), Φ^{(1)}_Dec(·|W^{(1)}_Dec), ..., Φ^{(L)}_Dec(·|W^{(L)}_Dec)}
for epoch i = 0, 1, ..., E-1 do
    ν ← νList[i]
    for l = 1 to 2L do
        if l ≤ L then compute the encoder embedding X^{(l)} ← Φ^{(l)}_Enc(X^{(l-1)} | W^{(l)}_Enc)
        else compute the decoder embedding X^{(l)} ← Φ^{(l)}_Dec(X^{(l-1)} | W^{(l)}_Dec)
        end if
        compute u^{(l)}_{ij} with (5) and (8)
    end for
    for each layer pair (l, l') with 1 ≤ l ≤ l' ≤ L do
        compute the ELIS loss L^{(l,l')}_ELIS between layers l and l' with (10)
    end for
    for l = 1 to L do
        compute the reconstruction loss L^{(l,2L-l)}_Rec between layers l and 2L-l with (13)
    end for
    update parameters: W ← W - lr · ( Σ_{l=1}^{L} Σ_{l'=l}^{L} α^{(l,l')} ∂L^{(l,l')}_Enc / ∂W + Σ_{l=1}^{L} γ^{(l,2L-l)} ∂L^{(l,2L-l)}_Rec / ∂W )
end for
Hyperparameters. Table A1 summarizes the ELIS hyperparameter settings for the different datasets. Other hyperparameters are the same for all datasets: learning rate lr = 0.01 and number of epochs E = 5000. LeakyReLU is used as the activation function.
Table A1: Hyperparameters of ELIS for different datasets

Dataset          Points   Network structure (number of parameters)   Q in Equ. (7)   Batch size
Swiss Roll       800      3, 500, 500, 2 (0.252M)                    10              800
S-Curve          800      3, 500, 500, 2 (0.252M)                    10              800
Severed Sphere   800      3, 500, 500, 2 (0.252M)                    10              800
SpheresA         10000    101, 500, 500, 2 (0.301M)                  10              10000
SpheresB         5500     101, 500, 500, 2 (0.301M)                  10              5500
Coil20           1440     16384, 500, 500, 2 (8.443M)                10              1440
Coil100          7200     49152, 1000, 500, 250, 2 (24.82M)          10              2400
MNIST            60000    784, 1000, 500, 300, 2 (1.434M)            15              4000
Fashion-MNIST    60000    784, 1000, 500, 2 (1.285M)                 10              4000
Continuation in ν^{(l')}. During training, the parameter ν^{(l')} used in computing sample similarities for the latent layers is graduated from a small to a large value, e.g. ν^{(l')}: 0.01 → 100 (see Equ. (5)), while it is kept fixed at a large value, e.g. ν^{(l)} = 100, for the input layer. Empirically, this continuation helps training converge to a good solution; the reasons behind it are to be explained in future work.
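One simple way to realize such a continuation is a geometric (log-linear) schedule between the two end values. The exact schedule used in the experiments is not specified beyond its end points, so the following NumPy sketch is only an assumption.

    import numpy as np

    def nu_schedule(nu_start=0.01, nu_end=100.0, epochs=5000):
        # Geometric interpolation from nu_start to nu_end, one value per epoch (the "nuList").
        return np.logspace(np.log10(nu_start), np.log10(nu_end), num=epochs)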
A.2 DEFINITIONS OF PERFORMANCE METRICS
1. Cont (Continuity) is the asymmetric counterpart of Trust, measured in the opposite direction (from space X^{(l')} to space X^{(l)}):

Cont = \frac{1}{k_2 - k_1 + 1} \sum_{k=k_1}^{k_2} \left( 1 - \frac{2}{Mk(2M - 3k - 1)} \sum_{i=1}^{M} \sum_{j \in N^{(l)}_{i,k},\, j \notin N^{(l')}_{i,k}} (r^{(l')}_{i,j} - k) \right)

where r^{(l')}_{i,j} is the rank of x^{(l')}_j in the k-NN of x^{(l')}_i, M is the size of the dataset, N^{(l')}_{i,k} is the set of indices of the k-NN of x^{(l')}_i, and k_1 and k_2 are the lower and upper bounds of the k-NN range. For SpheresA and SpheresB we focus more on global performance and set k_1 = [M/14], k_2 = [M/7]; for the other datasets we set k_1 = 5, k_2 = 10.
2. Trust (Trustworthiness) measures how well the k nearest neighbors of a point are preserved when going from space X^{(l)} to space X^{(l')}:

Trust = \frac{1}{k_2 - k_1 + 1} \sum_{k=k_1}^{k_2} \left( 1 - \frac{2}{Mk(2M - 3k - 1)} \sum_{i=1}^{M} \sum_{j \in N^{(l')}_{i,k},\, j \notin N^{(l)}_{i,k}} (r^{(l)}_{i,j} - k) \right)

where r^{(l)}_{i,j} is the rank of x^{(l)}_j in the k-NN of x^{(l)}_i.
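Under the averaged-over-k form above, both metrics can be sketched with scikit-learn's trustworthiness function, using the fact that continuity is trustworthiness with the roles of the two spaces swapped. This is an illustrative sketch, not the evaluation code used in the paper.

    import numpy as np
    from sklearn.manifold import trustworthiness

    def trust_and_cont(X_high, X_low, k1=5, k2=10):
        # Average the single-k trustworthiness over k in [k1, k2]; continuity swaps the two spaces.
        ks = range(k1, k2 + 1)
        trust = np.mean([trustworthiness(X_high, X_low, n_neighbors=k) for k in ks])
        cont = np.mean([trustworthiness(X_low, X_high, n_neighbors=k) for k in ks])
        return trust, cont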
3. ACC (svm)
The ACC (svm) is calculated as follows. (1) Run the nonlinear dimensionality reduction method to obtain 2-dimensional embeddings. (2) Partition the data by 5-fold cross-validation. (3) For each fold, train a linear-kernel SVM classifier on the training set and test it on the test set. (4) Report the mean classification accuracy.
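A minimal scikit-learn sketch of this procedure, assuming labels is an integer array aligned with the rows of the embedding; the solver defaults are assumptions.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def acc_svm(embedding, labels):
        # 5-fold cross-validated accuracy of a linear-kernel SVM on the 2-D embedding.
        clf = SVC(kernel="linear")
        return np.mean(cross_val_score(clf, embedding, labels, cv=5, scoring="accuracy"))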
4. ACC (NN) is defined as follows:

ACC(NN) = \frac{1}{M} \sum_{i=1}^{M} \pi\left[ Y_i = Y_{N^{(L)}_{i,1}} \right]

where \pi[\cdot] is the indicator function, Y_i is the label of sample i, and Y_{N^{(L)}_{i,1}} is the label of sample N^{(L)}_{i,1}, the nearest neighbor of sample i at layer L.
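A small sketch of ACC(NN), assuming embedding and labels are NumPy arrays; the use of scikit-learn's NearestNeighbors is an implementation choice, not from the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def acc_nn(embedding, labels):
        # Fraction of points whose nearest neighbor (excluding the point itself) shares their label.
        labels = np.asarray(labels)
        nn = NearestNeighbors(n_neighbors=2).fit(embedding)
        _, idx = nn.kneighbors(embedding)          # idx[:, 0] is the point itself
        return np.mean(labels[idx[:, 1]] == labels)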
5. AUC is defined as follows:

AUC(f) = \frac{ \sum_{p_0 \in P_0} \sum_{p_1 \in P_1} \pi[p_0 > p_1] }{ |P_0| \cdot |P_1| }

P_0 = \left\{ \frac{d^{(L)}_{ij} - \min d^{(L)}_{ij}}{\max d^{(L)}_{ij} - \min d^{(L)}_{ij}} \;\middle|\; i, j \in \{1, 2, \ldots, M\},\; Y_i = Y_j \right\}

P_1 = \left\{ \frac{d^{(L)}_{ij} - \min d^{(L)}_{ij}}{\max d^{(L)}_{ij} - \min d^{(L)}_{ij}} \;\middle|\; i, j \in \{1, 2, \ldots, M\},\; Y_i \neq Y_j \right\}

where d^{(L)}_{ij} is the distance at layer L, P_0 is the set of (normalized distances of) positive sample pairs, and P_1 is the set of negative sample pairs.
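A sketch of this pairwise AUC, under the assumption that smaller latent distances should score same-label (positive) pairs higher, so the negated normalized distance is used as the score; that sign convention and the scikit-learn/SciPy calls are assumptions, not taken from the paper's code.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from scipy.spatial.distance import pdist, squareform

    def pairwise_auc(embedding, labels):
        # AUC of separating same-label pairs (positives) from different-label pairs (negatives)
        # using normalized pairwise latent distances.
        labels = np.asarray(labels)
        d = squareform(pdist(embedding))
        d = (d - d.min()) / (d.max() - d.min())
        iu = np.triu_indices_from(d, k=1)               # each unordered pair once
        same = (labels[iu[0]] == labels[iu[1]]).astype(int)
        return roc_auc_score(same, -d[iu])              # negate: small distance -> high score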
A.3 MANIFOLD LEARNING AND NLDR
Manifold Learning Results. This subsection shows more results of manifold learning and NLDR obtained using the ELIS-Enc and the ELIS-AE, in comparison with the other methods, on training and testing datasets. Some typical embedding results of manifold learning using the three autoencoder methods are visualized in Fig. A2; t-SNE and UMAP are not included because these non-transformational methods are unable to generalize to test datasets. LIS-Enc and TopoAE learned poor results on the training set, so they did not work well on the test set either. ELIS-AE, as an autoencoder-based method, has a clear advantage in generalization because it can handle test data, and it is easy to apply to downstream tasks such as classification, regression and clustering.

Figure A2: Comparison of visualization results of autoencoders on training and testing sets
Table A2 compares performance metrics on the 8 datasets; ACC(SVM), ACC(NN) and AUC are omitted for SwissRoll and SeveredSphere because these datasets have no class labels.
Fig. A3 and Fig. A4 show the visualization results of the toy and real-world datasets on the training sets. For Swiss Roll, Severed Sphere, and S-Curve, ELIS-Enc, LIS-Enc and MLLE all maintain the topology of the original data; however, MLLE does not preserve the relative Euclidean distances (the resulting embedding is square instead of rectangular). For SpheresA, ELIS-Enc, LIS-Enc, and TopoAE show the "big sphere enclosing 10 small spheres" structure in the 2D embedding, but for SpheresB only
Table A2: Comparison in performance metrics with five other methods on eight datasets

                            ELIS-Enc  LIS-Enc  UMAP    t-SNE   TopoAE  MLLE
Swiss Roll      Cont        1.0000    1.0000   0.9962  0.9969  0.9716  0.9956
                Trust       1.0000    1.0000   0.9983  0.9993  0.9809  0.9948
SeveredSphere   Cont        0.9997    0.9932   0.9967  0.9985  0.9854  0.9958
                Trust       0.9997    0.9755   0.9989  0.9995  0.9891  0.9836
SpheresA        Cont        0.7850    0.7892   0.7147  0.7548  0.8064  0.7272
                ACC(SVM)    0.5213    0.5000   0.5550  0.4992  0.4982  0.5000
                ACC(NN)     0.9985    0.9912   0.5406  0.7837  0.9944  0.5205
                AUC         0.5698    0.3362   0.5816  0.5603  0.3328  0.5961
SpheresB        Cont        0.9242    0.9255   0.9109  0.9155  0.9245  0.8943
                ACC(SVM)    0.9558    0.9100   0.9100  0.8478  0.9581  0.0965
                ACC(NN)     0.9987    0.9969   0.8469  0.9365  0.9949  0.8265
                AUC         0.9780    0.9318   0.9570  0.9570  0.9870  0.9459
Coil20          Cont        0.9956    0.9973   0.9962  0.9927  0.9901  0.9395
                ACC(SVM)    0.8941    0.8301   0.8472  0.8014  0.7078  0.1556
                ACC(NN)     0.9965    0.9354   0.8917  0.9965  0.8160  0.6410
                AUC         0.9780    0.9537   0.9842  0.9582  0.8916  0.8824
Coil100         Cont        0.9936    0.9967   0.9955  0.9950  0.9903  0.7898
                ACC(SVM)    0.9372    0.7319   0.8299  0.8278  0.5540  0.0363
                ACC(NN)     0.9976    0.8163   0.9232  0.9951  0.4797  0.3350
                AUC         0.9770    0.9667   0.9819  0.9759  0.8735  0.7322
MNIST           Cont        0.9639    0.9749   0.9646  0.9630  0.9618  0.9183
                ACC(SVM)    0.9699    0.7468   0.9690  0.9525  0.7450  0.1100
                ACC(NN)     0.9568    0.7035   0.9528  0.9567  0.7773  0.7423
                AUC         0.9725    0.8779   0.9691  0.9314  0.8000  0.8575
Fashion-MNIST   Cont        0.9848    0.9901   0.9836  0.9777  0.9864  0.9298
                ACC(SVM)    0.7125    0.6908   0.7030  0.5518  0.6067  0.1058
                ACC(NN)     0.7092    0.6427   0.7253  0.7787  0.5718  0.6145
                AUC         0.9121    0.8843   0.9165  0.8256  0.8310  0.7908
ELIS-Enc and LIS-Enc show the "enclosing" phenomenon. For Coil20 and Coil100, ELIS-Enc, UMAP and t-SNE can all produce non-intersecting embeddings; however, the ELIS-Enc embeddings are distinguishable and do not cut any of the manifolds. For MNIST and Fashion-MNIST, both UMAP and ELIS-Enc output good embeddings, but in terms of performance metrics ELIS-Enc has a clear advantage.
Symmetry of the objects and ELIS-Enc’s embedding in Coil20. For Coil20, information about the symmetry of the objects in the picture can be obtained by analyzing the embedding generated by ELIS-Enc. Details of the ELIS-Enc embedding of the Coil20 are shown in Fig. A5.
We divided the Coil20’s manifolds into four patterns based on the symmetry of the objects in the image and the shape of the manifold.
(1) Objects with single-plane mirror symmetry have elongated elliptical embeddings. For such objects, an angle can be found from which an image taken by rotating to the left is approximately equal to an image taken by rotating to the right. The corresponding two-dimensional manifolds are therefore elongated ellipses (the endpoints of the long axis of the ellipse correspond to the two images taken along the plane of symmetry).
(2) Objects with rotational symmetry have round embeddings. For rotationally symmetric objects, the resulting pictures are very similar no matter what angle they are taken from, so the resulting two-dimensional manifold is squeezed inward into a circle.
(3) Objects with double vertical mirror symmetry have nested double-ring embeddings. For such objects, the image essentially repeats every 180 degrees of rotation (the reappearing image is very similar to the one from 180 degrees earlier and is very close in the two-dimensional space), so the resulting manifold consists of two nested rings.

Figure A3: Comparison of visualization results for toy dataset on training set
(4) Objects whose symmetry is not evident.
Figure A4: Comparison of visualization results for real-world dataset on training set
Figure A5: Details of the ELIS-Enc’s embedding of the Coil20 and four manifold patterns
A.4 MANIFOLD DATA GENERATION
The manifold generation task generates a complete manifold structure from finite manifold samples. In this experiment, the test steps are as follows:
(1) Train a network (comprising an encoder and a decoder) that generates a 2-dimensional embedding;
(2) Perform linear interpolation between points in the embedding;
(3) Map the interpolation results back to the data space via the decoder (a code sketch of steps (2) and (3) follows below).
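A minimal PyTorch sketch of steps (2) and (3), assuming decoder is the trained ELIS decoder module and z_a, z_b are two embedding points given as 1-D tensors; the names and the number of steps are illustrative.

    import torch

    def interpolate_and_decode(decoder, z_a, z_b, steps=10):
        # Linearly interpolate between two embedding points and decode each interpolated point.
        ts = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
        zs = (1.0 - ts) * z_a.unsqueeze(0) + ts * z_b.unsqueeze(0)
        with torch.no_grad():
            return decoder(zs)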
Generation results for comparison with the TopoAE and LIS-AE are shown in Fig. A6. The same network structure was used in the experiments.
Figure A6: Comparison in visualization with LIS-AE and TopoAE in manifold generation. The left side shows the three embedding results, and the black points in the manifolds mark the locations of the interpolation. The right side shows the interpolation results. There are 12 images on the right of the figure; the leftmost and rightmost images are the original images, and the ten images in the middle are the generation results along the geodesic.
ELIS-AE has an advantage over the other two methods. Neither LIS-AE nor TopoAE learns a satisfactory embedding, so their interpolation results are poor. The embedding of LIS-AE has overlapping manifolds, so it generates images belonging to other manifolds (e.g. manifold A). The TopoAE embedding is messy, so the decoder reconstructs fuzzy images.
A.5 ABLATION STUDY
Cross-layer ELIS constraint. The effect of the ELIS-Enc constraint is determined by the weights \alpha^{(l,l')} as in Equ. (11). We set the weights \alpha^{(l,l')} in one of four schemes (where Head, Tail and Mids denote the input layer, the latent layer and the intermediate layers, respectively) for an L-layer encoder; a small code sketch of the four schemes is given after the list below:
(1) Head-Tail: weight α(0,L) = 1 (the constraint is imposed between the input layer and the latent layer);
(2) Head-Mids: weights α(0,l) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the input layer and each of intermediate layers);
(3) Mids-Tail: weights α(l,L) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the latent layer and each of intermediate layers);
(4) Head-Mids + Mids-Tail: weights α(0,l) = α(l,L) = 1/2L where l ∈ {1, 2, · · · , L} (combination of Head-Mids and Mids-Tail).
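The following sketch expresses the four schemes as Python dictionaries keyed by layer pairs, following the index ranges stated above (a degenerate pair such as (L, L) contributes zero loss under Equ. (9) and is therefore harmless); the function name and dictionary representation are assumptions, not the authors' code.

    def make_alpha(scheme, L):
        # Cross-layer weights alpha^(l, l') for an L-layer encoder (layer 0 is the input).
        if scheme == "head-tail":
            return {(0, L): 1.0}
        if scheme == "head-mids":
            return {(0, l): 1.0 / L for l in range(1, L + 1)}
        if scheme == "mids-tail":
            return {(l, L): 1.0 / L for l in range(1, L + 1)}
        if scheme == "head-mids+mids-tail":
            alpha = {(0, l): 1.0 / (2 * L) for l in range(1, L + 1)}
            alpha.update({(l, L): 1.0 / (2 * L) for l in range(1, L + 1)})
            return alpha
        raise ValueError(scheme)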
In this ablation study, a 10-layer neural network is used and the width of the network is determined depending on the dataset. (Swiss Roll:[3, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresA:[101, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresB:[101, 500, 400, 300, 300, 200, 200, 100, 100, 2], COIL20:[16384, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2])
The evaluation metrics for the four different cross-layer schemes are presented in Table A3. The results of the different cross-layer schemes are shown in Fig. A7.
Figure A7: Comparison in visualisation of four different cross-layer schemes.
The visualization results and metrics show that cross-layer scheme (4) gives the best results for a 10-layer network. The network is very difficult to train if the ELIS loss acts only on the first and last layers (cross-layer scheme (1)). The network is easier to train if ELIS losses act between the first layer and all intermediate layers (cross-layer scheme (2)). The ELIS losses between the intermediate and last layers (cross-layer scheme (3)) do not improve the performance of the embedding if used alone.
Table A3: Comparison in performance metrics of four different cross-layer schemes.

                    Cont     Trust    ACC(SVM)  ACC(NN)  AUC
Swiss Roll   (1)    -        -        -         -        -
             (2)    0.9999   0.9999   -         -        -
             (3)    -        -        -         -        -
             (4)    0.9999   0.9999   -         -        -
SpheresA     (1)    -        -        -         -        -
             (2)    0.9402   0.8832   0.9149    0.9478   0.9696
             (3)    -        -        -         -        -
             (4)    0.9376   0.8858   0.9529    0.9784   0.9721
SpheresB     (1)    0.9111   0.6373   0.5225    1.0000   0.5486
             (2)    0.9087   0.6341   0.5145    1.0000   0.5489
             (3)    0.8520   0.6299   0.5388    1.0000   0.4474
             (4)    0.8167   0.6432   0.8740    0.9936   0.7461
Coil20       (1)    -        -        -         -        -
             (2)    0.9955   0.9852   0.8454    0.9792   0.9721
             (3)    0.9904   0.9876   0.8459    0.9847   0.9524
             (4)    0.9947   0.9901   0.8867    0.9986   0.9735
However, when used in combination with the other constraints as in cross-layer scheme (4), they do improve the metrics of the resulting latent space.
Effect of ν value. Fig. A8 and Fig. A9 show the effect of data space hyperparameter and latent space hyperparameter on embedding results.
Figure A8: Embedding results with varying ν in input space.
ν in the input space controls the range of influence in the data space. If the input-space ν is small, the derivative of the probabilistic mapping function of the input layer will be small and the similarity will be insensitive to distance in the data space; in other words, ELIS-Enc degenerates towards LIS-Enc. To pay more attention to the global information of the input data, raise ν; to pay more attention to local information, lower it. By default, ELIS-Enc does not change this hyperparameter and keeps the input-space ν = 100.
ν in the latent space controls what the latent space displays (detailed local information versus global information). If the latent-space ν is small, ELIS tends to display global information; if it is large, ELIS-Enc tends to display local information. Fig. A9 shows, from left to right, a progression from showing global information to showing excessive local information.
Figure A9: Embedding results with varying ν in latent space.
Continuation strategy. Continuation of the latent-space hyperparameter ν is used during the ELIS encoder learning process. The algorithm starts with a small latent-space ν to include more global information, then gradually increases the value to focus more locally. The necessity of parameter continuation is shown in Fig. A10.
Figure A10: Ablation study of with and without parameter continuation in latent space ν. (The upper row shows results obtained via parameter continuation ν = 0.001 → ν = 100 in latent space, the lower row shows results with a fixed ν = 100)
Experiments show that the effect of parameter continuation (ν = 0.001 → ν = 100 in the latent space) is substantial, with clear improvements on Swiss Roll, Coil20 and MNIST. | 1. What is the main contribution of the paper regarding nonlinear dimensionality reduction?
2. How does the proposed approach compare to implicit mappings such as t-SNE and UMAP?
3. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to achieve meaningful nonlinear dimensionality reduction?
4. How do the chosen performance metrics support or limit the conclusions drawn from the results?
5. What are some potential improvements that could be made to the experimental design and description?
6. Is there any new insight or theory provided by the paper regarding manifold learning?
7. How does the paper's approach differ from LIS, and is the inclusion of LIS useful for understanding ELIS?
8. Are there any grammatical errors or unclear phrasing in the paper that detract from its overall clarity? | Review | Review
The paper investigates objectives for training a multilayer neural network to map data to a lower dimensional space. It investigates an explicit non-linear dimensionality reduction approach compared to implicit approaches (multidimensional scaling, ISOMAP, stochastic neighborhood embedding, locally linear embedding, etc.). To do this, the paper proposes penalties calculated between the pairwise similarities of data point representations at different layers. The penalties themselves use the fact that the similarities are treated as probabilities in [0,1] and penalize the KL divergence. The proposed approach is qualitatively and quantitatively contrasted to implicit mappings (t-SNE and UMAP) and a recent work that also uses an auto-encoder neural network with a topologically based (persistent topology) objective.
Strengths: The approach provides a way of training an explicit mapping that achieves meaningful non-linear dimensionality reduction. In this it takes cues largely from t-SNE and UMAP to define its objectives but uses them to adjust parameters of the mapping rather than the implicit points. The qualitative results seem as good or better than existing state of the art for standard data sets.
Weaknesses: No variation across runs is given. The performance values differ marginally and the hyper-parameter selection objective is not given. It is not clear if the qualitative differences are representative.
Performance metrics are not defined in the main body nor is their description in appendix referenced. After looking at the appendix, it appears the linear SVM metric is pointless. Why would the representation be separable? Overall, the set of chosen performance metrics are not that informative, coherent, or comprehensive. Some are meaningful, others are hand-chosen to illustrate 'deficits', and the results are for one run across the board (no details of variability are given). I would suggest a more interpretable approach such as Peltonen and Kaski's "Generative Modeling for Maximizing Precision and Recall in Information Visualization" in JMLR Workshop and Conference Proceedings 15:579-587, 2011.
The table describing the major merits of ELIS seems largely subjective; there is no criterion on the determination of the entries.
No description of the run time necessary to perform the entire training and hyper-parameter selection is given. Nor is there a description of how the hyper-parameter selection was done. Perhaps the parameters were chosen to optimize the supervised performance metrics, which would not be appropriate for an unsupervised approach. If unsupervised performance metrics are used, then this is fine for visualization purposes, but it is not clear how long this entire scheme would take. One of the strengths of a method like t-SNE is that a minimal number of design choices needs to be made besides hyper-parameters such as Q. Designing a neural network architecture is still non-trivial.
The paper does not provide any new perspectives or theory to support the approach.
The paper needs careful proofreading.
Conclusion: The paper is a borderline contribution. It could be useful in practice; however, the experimental design, description, and results need improvement. Primarily, it is not clear what is entailed in choosing and tuning the network to get this explicit form versus using implicit mappings (UMAP or t-SNE). No new insights or theory about manifold learning are discussed.
Suggestions:
The paper claims "ELIS is aimed to surpass LIS". LIS takes on a similar approach of neural network mapping but uses a nearest-neighbor distance preservation simultaneous with a repulsion term. One page is devoted to this method, but it is not necessary to understand LIS to understand ELIS. Thus, I don't find the inclusion of LIS useful.
In introduction, it is not clear that t-SNE and UMAP should be considered "traditional machine learning". They are techniques that optimize implicit mappings without learning anything.
In abstract, "UMAP and t-SNE for and" has extra for.
Extra ")" after Bronstein in second paragraph.
The phrasing at top of page 2, "avoid the transformation from collapse, twisting, or crossing" is not grammatically correct or mathematically clear. Later, not clear what "straight distance-preserving" is.
Perhaps it would be better to say "nonlinear, but monotonic in distance" rather than "nonlinear in distance".
It is a bit of overreach to say "for which none of the methods can achieve". The performance metrics and objectives differ, and the hyper-parameter selection is not stated clearly as in the TopoAE paper.
There is an extra parenthesis in equation 1.
Shorthand d_ij is not defined.
"the influence of a point on the others is propagated to afar. "
In section 2.2, it is not clear to the reader what specific parts are "Following UMAP".
Equations (5) and (6) are said to be 'similar' to t-distribution when in fact it is is nothing more than a squared and scaled t-distribution function. The approach borrows heavily from SNE and t-SNE in that the scaling of each point " is estimated from the data by best fitting the equation" "for the perplexity-like hyperparameter Q given" , which after rewriting (to fix the grammar) would be clearer.
At the bottom of page 4 it is not clear to the user what "continuation tool" is.
After equation 8 "On the other hand" is dangling without a preceding "On the one hand".
When discussing the choice of neighborhood scale parameter σ_i,
"Formulating the ELIS losse"
"only needs to minimize second part" -> "only needs to minimize the second part"
Table A1 "Coli100"
A.2 References on performance metrics are missing. Details on performance metric are not uniform. Different parameters are used for 'continuity' and 'trust' than on other ones.
"as a autoencoder- based mehtod"
"resules" |
ICLR | Title
Deep Manifold Computing and Visualization Using Elastic Locally Isometric Smoothness
Abstract
The ability to preserve local geometry of highly nonlinear manifolds in high dimensional spaces and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold computing, nonlinear dimensionality reduction (NLDR) and visualization. This paper proposes a novel method, called elastic locally isometric smoothness (ELIS), to empower deep neural networks with such an ability. ELIS requires that a desired metric between points should be preserved across layers in order to preserve local geometry; such a smoothness constraint effectively regularizes vector-based transformations to become well-behaved local metric-preserving homeomorphisms. Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions. The ELIS method incorporates a class of suitable nonlinear similarity functions into a two-way divergence loss and uses hyperparameter continuation in finding optimal solutions. Extensive experiments, comparisons, and ablation study demonstrate that ELIS can deliver results not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading counterparts of manifold and autoencoder learning for NLDR and manifold data generation.
1 INTRODUCTION
Manifold learning aims to find from a set of higher dimensional data its embedding or representation in a low dimensional latent space. Nonlinear dimensionality reduction (NLDR) aims to construct a transformation that is generalizable to unseen data. It is hoped that the lower dimensional representation can be used in conjunction with a simple metric such as the Euclidean distance for downstream tasks such as classification and visualization. Manifold data generation performs the inverse transformation to generate data from samples in the latent space. We call this collection of manifold related problems manifold computing. The basis for manifold computing is the manifold assumption (Mikhail Belkin, 2002; Fefferman et al., 2016).
Great advances have been made in the past two decades in manifold computing and visualization. ISOMAP (Tenenbaum et al., 2000) and LLE (locally linear embedding) (Roweis & Saul, 2000) are classic methods for manifold learning. More recent developments include local geometry-based methods (Gashler et al., 2008; Zhang & Wang, 2007; Chen & Buja, 2009; McQueen et al., 2016), graph spectral analysis (Donoho & Grimes, 2003) and latent variable models (Saul, 2020). The most popular high dimensional data visualization methods to date are t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018), with wide applications in areas such as bio-science and technology (Becht et al., 2019; Dorrity et al., 2020). While the aforementioned are traditional machine learning methods, deep learning-based methods include autoencoders (Hinton & Salakhutdinov, 2006; Moor et al., 2020). The problem can also be considered from the viewpoints of geometric deep learning (Bronstein et al., 2017) and topological data analysis (Wasserman, 2018; Moor et al., 2020).
The ability to preserve geometric structure of nonlinear manifolds and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold-based computing and visualization. Recently, Markov-Lipschitz deep learning (MLDL) (Li et al., 2020) is proposed as a general framework for manifold learning, NLDR, visualization and manifold data generation. The idea is to impose the constraint of geometric isometry across neural network layers to preserve the local
geometric structure of manifold data. This effectively transforms a vector-based transformation of conventional neural networks into a local distance-preserving homeomorphism. Such local homeomorphisms prevent the transformation from collapsing, twisting, or crossing, so as to improve generalization, stability, and robustness. Locally isometric smoothness (LIS) (Li et al., 2020), which imposes direct distance preservation, is proposed as a method in the MLDL framework. LIS has demonstrated significant advantages in manifold learning and NLDR.
This paper proposes a more advanced method in the MLDL framework, called elastic locally isometric smoothness (ELIS), aimed at empowering deep neural networks with the ability to tackle the high nonlinearity and non-Euclideanity challenges arising from complicated manifolds in high dimensional spaces that LIS is unable to cope with. Whereas LIS preserves distances between neighboring points directly, ELIS is based on a similarity metric that is nonlinear in distance and a two-way divergence loss (of nearby neighbors and far-away pairs, respectively); this renders more flexibility and capacity in tackling the challenges, yet under the control of the ELIS regularization. As a result, ELIS bridges gaps between non-Euclidean manifolds in the input space and the resulting Euclidean hyperplanes in the learned lower dimensional latent space, with the geometric structure of the manifolds preserved. Both ELIS and LIS can be considered as a form of graph neural networks (GNN) (Scarselli et al., 2009), but without the aggregation generally present in GNNs. They are more like what is called "manifold learning 2.0" (Bronstein, 2020).
The distinctive features of ELIS (and LIS) in comparison with related methods are summarized in Table 1. ELIS-based neural networks can accomplish all the functionalities in the general MLDL framework, which none of the other methods can achieve. Extensive experiments, comparisons, and an ablation study demonstrate that ELIS-based neural networks produce results not only superior to the SOTA t-SNE and UMAP for NLDR and visualization but also better than other algorithms of manifold and autoencoder learning, including LIS, for NLDR and manifold data generation. The main contributions of this paper are summarized below:
(1) Proposing the ELIS constraint in the MLDL framework, based on a similarity metric which is nonlinear in distance. It inherits the metric-preserving property of LIS so that the resulting layer-wise transformation is geometrically smooth, hence topologically homeomorphic, yet possesses more flexibility than LIS in handling highly nonlinear manifolds in high dimensional spaces.
(2) Proposing conditions for a class of nonlinear similarity functions for converting from distance to similarity, in conjunction with a two-way divergence loss. This ensures the metric-preserving and neighbor-confining properties.
(3) Proposing two instances of ELIS-based neural networks: an ELIS encoder for manifold learning and visualization and an ELIS autoencoder for manifold reconstruction and data generation.
(4) Providing several SOTA results that surpass UMAP and other leading algorithms.
In the following, Section 2 introduces LIS and presents the ELIS formulation, and Section 3 presents extensive experiments. The code is provided in the Supplementary Material.
2 ELASTIC LOCALLY ISOMETRIC SMOOTHNESS
Both ELIS and LIS are formulated in the MLDL framework (illustrated in Fig.A1 in Appendix) which is aimed to regularize neural transformations through imposing the ELIS constraint between
layers to achieve certain well-behaving properties. However, the ELIS formulation tackles challenges of highly nonlinear manifold data in high dimensional spaces in a more flexible and effective way, much inspired by t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018). Let X = {x_1, . . . , x_M} be a set of M samples in the input space R^N with the index set S = {1, . . . , M}. These samples may come from one or several lower dimensional manifolds M_X ⊂ R^N. When M_X is Riemannian, its tangent subspace T_x(M_X) at any x ∈ M_X is locally isomorphic to a Euclidean space of dimensionality dim(M_X) < N. Therefore, we can use a cascade of nonlinear neural transformations to "unfold" nonlinear manifolds in a high dimensional input space into hyper-planar regions in a lower dimensional latent space.
Both ELIS and LIS aim to accomplish the following four tasks, all of which few neural networks can do: (1) Manifold learning: to learn an embedding in a latent space M_Z ⊂ R^n, where n < N, based on the local structure of X. (2) Representation learning: to learn the underlying mapping Φ : M_X ⟹ M_Z for the embedding that is generalizable to unseen data x ∉ X, x ∈ M_X. (3) Visualization: to visualize the embedding in 2D or 3D space. (4) Manifold generation: to find the inverse mapping Φ^{-1} : M_Z ⟹ M_X and generate new data on M_X from samples in M_Z. ELIS is aimed at surpassing LIS.
2.1 THE LIS CONSTRAINT AND NEURAL NETWORKS
The LIS constraint is aimed to best preserve the local distances of the data between two metric spaces, encouraging a vector-based neural transformation Φ(X |W ), where W is the transformation matrix of the neural network, to become a well-behaved local distance-preserving homeomorphism. This can be achieved by adding the following LIS loss (Li et al., 2020), imposed between two layers (metric spaces) l and l′
L^{(l,l')}_{LIS}(W) = \sum_{i \in S} \sum_{j \in N^{(l)}_i} \left| d(x^{(l)}_i, x^{(l)}_j) - d(x^{(l')}_i, x^{(l')}_j) \right|    (1)

where d : X × X → R_{\geq 0} is a dissimilarity metric, x^{(l')}_i = Φ(x^{(l)}_i | W) is the result of the effective transformation Φ from layer l to l', and N_i is the set of neighbors of i. Without prior knowledge, d_{ij} is usually computed as the Euclidean distance, albeit it may not well reflect the reality. It is hoped that after a series of proper nonlinear transformations, the input data is transformed into an embedding in the latent space such that the Euclidean distance makes more sense in describing mutual relationships between points. In this work, we aim to find such transformations.
The LIS loss effectively minimizes the bi-Lipschitz constant of Φ. It is through the neighborhood system, N = {N_i | i ∈ S}, that the influence of a point on the others is propagated afar. For this reason, the collection of random variables x^{(l)} constitutes a Markov random field. Equ. (1) is defined w.r.t. N_i (Markovianity) and aimed at minimizing the bi-Lipschitz constant, hence the name Markov-Lipschitz (Li et al., 2020).
The basic LIS loss is augmented by an auxiliary "push-way" term (Li et al., 2020)
L^{(l,l')}_{push}(W) = - \sum_{i \in S} \sum_{j \notin N^{(l)}_i} \pi\left[ d_{l'}(x^{(l')}_i, x^{(l')}_j) < B \right] \, d_{l'}(x^{(l')}_i, x^{(l')}_j)    (2)
in which π[·] ∈ {0, 1} is the indicator function and B is a bound. This term is aimed to help "unfold" nonlinear manifolds, by exerting a spring force to push away from each other those pairs (i, j) which are non-neighbors at layer l but nearby (distance smaller than B) at layer l′.
These two losses are combined to form a LIS-based encoder loss for manifold learning and dimension reduction
L_{Enc}(W) = \sum_{(l,l')} \left[ L^{(l,l')}_{LIS}(W) + \mu L^{(l,l')}_{push}(W) \right]    (3)
where µ is a weight and (l, l′) is summed over a set of designated layer pairs (currently designed manually). A LIS-based autoencoder can be formulated by applying the LIS constraint between layers within the decoder and between the encoder and decoder layers. LIS-based neural networks have significant advantages (Li et al., 2020).
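For concreteness, a compact PyTorch sketch of Equ. (1)-(3) for a single layer pair, assuming the activations of layers l and l' are given as (M, d) tensors together with a boolean neighborhood mask computed at layer l; the names and the use of torch.cdist are illustrative choices, not the authors' implementation.

    import torch

    def lis_encoder_loss(x_l, x_lp, nbr_mask, B, mu):
        # x_l, x_lp: activations at layers l and l'; nbr_mask[i, j] is True if j is a neighbor of i at layer l.
        d_l = torch.cdist(x_l, x_l)
        d_lp = torch.cdist(x_lp, x_lp)
        lis = (d_l - d_lp).abs()[nbr_mask].sum()                  # Equ. (1): preserve neighbor distances
        eye = torch.eye(x_l.shape[0], dtype=torch.bool, device=x_l.device)
        close = (~nbr_mask) & (d_lp < B) & (~eye)
        push = -d_lp[close].sum()                                 # Equ. (2): push apart non-neighbors closer than B
        return lis + mu * push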
2.2 THE ELIS CONSTRAINT
The proposed ELIS constraint is aimed at tackling difficulties in "flattening" highly nonlinear manifolds in a high dimensional space into hyperplanes in a lower dimensional space. It imposes a more flexible nonlinear similarity-preserving constraint as opposed to the distance-preserving (isometry) constraint of vanilla LIS. More specifically, ELIS transforms a distance into a similarity metric using a nonlinear function and defines a KL-divergence-based loss on the similarities between nearby pairs and far-away pairs. This makes the metric-preserving constraint of ELIS more flexible than the direct distance preservation of LIS for accomplishing the challenging task.
Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions.
Converting distance to similarity. Following UMAP, we assume that X^{(l)} is fixed (e.g., the input layer) and that X^{(l')} at subsequent layers l' is computed as a result of manifold learning. The nonlinear similarities between x_i and x_j at each layer are computed as follows. First, define a nearest neighbor (NN)-normalized distance
d_{i|j} \overset{def}{=} d(x_i, x_j) - ρ_i \geq 0    (4)

where ρ_i = d(x_i, x_{nn(i)}), in which x_{nn(i)} denotes the nearest neighbor of x_i. Then, d_{i|j} is converted to a similarity metric u_{i|j} = g(d_{i|j}) ∈ [0, 1], where g is a nonlinear function.
We require that g(η) satisfy the following necessary conditions ∀η = di|j ≥ 0:
Condition (1) – it is monotonically decreasing, g′(η) < 0 for η > 0; Condition (2) – its first derivative diminishes in the limit, limη→∞ |g′(η)| = 0.
The first condition ensures a monotonic and inverse relationship between the distance and the similarity. The second condition effectively leads to a softly bounded neighborhood system, as opposed to the "hard" bounded neighborhoods in LIS, and provides proper control of the contributions of neighboring points to the back-propagation of neural network learning.
We further require that g(η) be a function of η^2 (for convenience, not necessity), such that its first derivative takes the form g'(η) = 2ηh(η), where h(η) is also a function of η^2. h(η) can be called the influence function because it controls how another neighboring point x_j can influence x_i. Condition (2) above restricts the influence of a "far-away" point x_j on x_i (between which the distance η_{ij} = ‖x_i − x_j‖ is relatively large) to diminish in the back-propagation process. This provides a properly weighted neighborhood system with respect to which the influence between points is adaptively limited to a certain scope.
Specifically for ELIS, we define the following σ_i-data-adaptive, ν-parameterized nonlinear similarity

u_{i|j}(σ_i, ν) = g(d_{i|j} \mid σ_i, ν) = C_ν \left( 1 + \frac{d^2_{i|j}}{σ_i ν} \right)^{-(ν+1)},    (5)
where ν ∈ R+ is similar to the degree of freedom (DoF) parameter in the t-distribution,
C_ν = 2π \left( \frac{ Γ\!\left(\frac{ν+1}{2}\right) }{ \sqrt{νπ}\, Γ\!\left(\frac{ν}{2}\right) } \right)^2    (6)

is a function of ν which sets the limit \lim_{ν→+∞} g(0 \mid σ_i, ν) = 1 (∀ σ_i > 0), and the data-adaptive parameter σ_i > 0, playing a calibration role, is estimated from the data by best fitting the equation

\sum_{j \neq i} u_{i|j}(σ_i, ν) = \log_2 Q    (7)
for a given perplexity-like hyperparameter Q. While other choices satisfying the aforementioned necessary conditions, including the normalized Gaussian and Cauchy functions used in t-SNE (Maaten, 2014) and the fitted polynomial function used in UMAP (McInnes et al., 2018), can also work for ELIS, we find Equ. (5) a better choice, not only because it produces better results but also because we can use the ν parameter as a continuation tool for preventing the training from converging
to bad local minima and for controlling separation margin between different manifolds, as will be shown in the ablation study.
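A NumPy sketch of Equ. (5)-(7) for one point follows, using a simple bisection search for σ_i; the paper only states that σ_i is obtained by "best fitting" Equ. (7), so the search strategy, bounds and tolerance here are assumptions.

    import numpy as np
    from scipy.special import gammaln

    def c_nu(nu):
        # Normalizing constant of Equ. (6), computed in log space for stability.
        return 2.0 * np.pi * np.exp(2.0 * (gammaln((nu + 1) / 2) - 0.5 * np.log(nu * np.pi) - gammaln(nu / 2)))

    def similarities_for_point(d_i, nu, Q, tol=1e-5):
        # d_i: NN-normalized distances d_{i|j} from point i to all other points (Equ. (4)).
        # Bisection on sigma_i so that sum_j u_{i|j} = log2(Q), as in Equ. (7);
        # the sum increases monotonically with sigma_i.
        target = np.log2(Q)
        lo, hi = 1e-6, 1e6
        for _ in range(100):
            sigma = 0.5 * (lo + hi)
            u = c_nu(nu) * (1.0 + d_i ** 2 / (sigma * nu)) ** (-(nu + 1))
            s = u.sum()
            if abs(s - target) < tol:
                break
            if s > target:
                hi = sigma
            else:
                lo = sigma
        return u, sigma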
Computing similarities uij and u′ij . Because the symmetry ui|j = uj|i does not hold due to differences in σi for the input layer (l), the following symmetrization is performed
uij = uj|i + ui|j − uj|iui|j . (8) On the other hand, for the subsequent latent layers, the computation of σi and ρi for each i would bring about huge computational costs. To overcome this problem, we directly set σ′i = 1 and ρ ′ i = 0 (this also ensures the symmetry u′i|j = u ′ j|i). While σi and ρi are needed to deal with unevenness and outliers of the data for the input layer, the necessity becomes not so demanding as the layer goes deeper after layers of nonlinear manifold unfolding. From u(l)ij of layer l can be constructed a weighted graph G(S, X(l), U (l)) consisting of a set S of nodes with node attributes X(l) and edge attributes (weights) U (l) = {u(l)ij ≥ > 0 | ∀i, j ∈ S}. The global structure of a manifold is discovered from local geometry of data through the graph G. Formulating the ELIS losse. ELIS transforms the distance metric dij into a similarity metric using a nonlinear function uij = g(dij) and defines the ELIS loss between layers l and l′, in terms of similarities u(l)ij | i, j ∈ S, i 6= j} at layers l and its counterpart U (l ′) = {u(l ′)
ij | i, j ∈ S, i 6= j} at layers l′. The ELIS loss is defined by what we call the two-way divergence (a.k.a. the fuzzy information for discrimination (Bhandari & Pal, 1993) and the fuzzy set cross entropy in UMAP (McInnes et al., 2018))
L(l,l ′)
ELIS(W | X (l), X(l
′)) = ∑
i,j∈S,i6=j u (l) ij log
u (l) ij u (l′) ij + (1− u(l)ij ) log 1− u(l)ij 1− u(l ′ ) ij
(9)
The first term is the directed divergence of the two fuzzy sets of similarities, in lieu of the LIS’ distance-preserving term of Equ. (1); the second term can be considered as the directed divergence of the two corresponding complement fuzzy sets, replacing the push-way term of Equ. (2).
Equ.(9) is called the "two-way divergence" because the first term on the right side of the equation imposes similarity-based attraction forces between nearby (intra-manifold) pairs whereas the second term exerts dissimilarity-based repulsion forces between far-away (inter-manifold) pairs. In other words, intra-manifold points are transformed to a cluster in the latent space, mainly as the result of the first term whereas inter-manifold point pairs push away from each other to different clusters, mainly due to the second term.
Note also that ELIS applies the two terms in a soft, adaptive way via its weighted neighborhood graph where the edges are effectively restricted by pairs of corresponding nodes (data points) between which the absolute gradients ∣∣∣∇WL(l,l′)ELIS(W | X(l), X(l′))∣∣∣ ≥ > 0 are nonzero, in contrast to the "hard" neighborhood system in LIS.
The ELIS loss can be rearranged as follows
L(l,l ′ )
ELIS(W | X (l), X(l
′ )) = ∑ i,j∈S,i6=j u (l) ij log u (l) ij + (1− u (l) ij ) log(1− u (l) ij )
− u(l)ij log u (l′) ij − (1− u (l) ij ) log(1− u
(l ′ ) ij ) (10)
When X(l) (hence u(l)ij ) fixed, the optimization only needs to minimize second part involving u (l′) ij .
2.3 ELIS ENCODER AND AUTOENCODER
The ELIS encoder consists of a cascade of nonlinear forward neural transformations constrained by the ELIS loss, aimed for manifold learning and NLDR. An ELIS (and LIS) encoder can learn an NLDR transformation without the need for a decoder (as required by autoencoders), and this encoder can generalize to unseen data (that ISOMAP, LLE, t-SNE and UMAP cannot). The total loss for the ELIS encoder is the sum of all the ELIS losses over a prescribed set of layer pairs (l, l′)
LEnc(W ) = ∑ (l,l′) α(l,l ′)L (l,l′) ELIS(W ) (11)
where α(l,l ′) weight the relative importance of L(l,l ′) ELIS .
The ELIS autoencoder has two purposes: (1) to further regularize or optimize the ELIS encoderbased manifold learning by using an ELIS decoder, and (2) to enable generation of new data of the learned manifolds by the trained ELIS decoder. The ELIS autoencoder structure consists of the ELIS encoder and decoder in cascade. The ELIS decoder is aimed to approximate the inverse transformations of the ELIS encoder and is made to be entirely symmetric to the ELIS encoder in its network structure. The overall weight matrices becomes W = [WEnc,WDec]. The loss function is composed of three terms:
LAE(W ) = LEnc(WEnc) + LDec(WDec) + LRec(W ) (12)
where LEnc(WEnc) is the same as Equ. (11), LDec(WDec) is defined in the same way following the symmetry, and the reconstruction loss LRec(W ) is the summed over all the corresponding layers
LRec(W ) = L−1∑ l=0 γl M∑ i=1 ‖ xi(l) − x̂(l)i ‖ 2 (13)
where x̂(l)i are the data points at the corresponding layer of the decoder and γl are the weights. The constraints due to LRec(W ) and LTie(W ) are illustrated by the dashed lines in Fig. A1 in Appendix.
3 EXPERIMENTS
The following experiments are aimed to evaluate ELIS in comparison with other five algorithms: UMAP (McInnes et al., 2018), t-SNE (Maaten, 2014) (for visualization), MLLE (Zhang & Wang, 2007), TopoAE (Moor et al., 2020) and LIS (Li et al., 2020) in terms of visual inspection and numerical metrics for manifold computing, visualization and data generation. Nine datasets are used, including five toy datasets: (1) SwissRoll (3-D), (2) S-Curve (3-D), (3) Servered Sphere (3-D), (4) SpheresA (101-D) (see (Moor et al., 2020) for the description) and (5) SpheresB (101-D, a modified composition from SpheresA); and four real-world datasets: (6) Coil20 (16384-D) and (7) Coil100 (49152-D) (Nene et al., 1996), (8) MNIST (784-D) (LeCun, 2013), and (9) Fashion-MNIST (784-D) (Xiao et al., 2017). The toy datasets are used because their geometric and topological structures are clear for the evaluation. The SpheresA dataset (Moor et al., 2020)is composed of 1 large sphere enclosing 10 small ones in 101-D space. SpheresB differs from SpheresA in that its large sphere consists of only 500 samples (whereas that in SpheresA has 5000) – the data is so sparse that the smallest within-sphere distance on the larger sphere can be greater than that between the larger sphere and some small ones. 5 performance metrics are used for the evaluation, whose exact definitions are given in Appendix A.2.
The pseudo-codes of the ELIS encoder and autoencoder and hyperparameter settings are described in Appendix A.1. The implementation uses the PyTorch 1.6.1 library running on Ubuntu 18.04 on NVIDIA v100 GPU. The time is spent mainly in the computation of neighbors. At present, the ELIS algorithms computes the neighborhood for every point pair, hence have the complexity of O(M2) for each cross-layer pair (l, l′). The complexity can be reduced to O(M1.14) if using the nearest-neighbor-descent algorithm of UMAP (McInnes et al., 2018).
3.1 MANIFOLD LEARNING AND GENERATION
Manifold Learning. Table 2 compares performances of the five NLDR methods where bold numbers are the best results, and underline the second best. The ELIS encoder has overall the best performance. Fig. 1 visualizes some representative results, and more results are given in Table. A2, Fig. A2 - Fig. A4 in Appendix A.3.
Next, we delve into embedding details of of the Coil20 objects resulting from the ELIS encoder (ELIS-Enc) and UMAP in Fig. 2. First, all the object embeddings in the ELIS-Enc result form closed loops for the 360 degree rotations (refer to Fig. A5 for the embedding-object correspondence). Second, the quality of the ELIS-derived embeddings enables us to infer some symmetries of the objects in the 3D space. Four types of such symmetries are explored and discussed in Appendix A.3. The UMAP result, in contrast, does not possess such quality.
Manifold Data Generation. Fig. 3 compares images generated from interpolated points between two nearest neighbors on the embedding using three autoencoders in comparison. The images generated by the ELIS-AE have clear boundaries and look sharper than those produced by the other two autoencoders. Results for several other objects are shown in Fig. A6 in Appendix A.4.
3.2 ABLATION STUDY
Cross-layer ELIS constraint. The cross-layer ELIS constraint is weighted by α(l, l′) as in Equ. (11). Four weight schemes (1) Head-Tail, (2) Head-Mids, (3) Mids-Tail and (4) Head-Mids + Mids-Tail are designed for an L-layer encoder, as described in details in Appendix A.5. The results are compared in Table. A3 and Fig. A7. Overall, the "Head-Mids + Mids-Tail" scheme, which imposes the most extensive cross-layer ELIS constraints and also needs more computation, achieves the best results. This justifies the use of the proposed ELIS method for performance improvements.
Effect of final ν value. The final ν value has significant influence on the within-manifold and between-manifold scatters. Fig. A8 and Fig. A9 in Appendix A.5 demonstrate the effect of varying ν in the input space and varying ν in the latent space on the results, respectively.
Continuation in ν. Continuation in hyperparameter ν in latent space is used during the ELIS encoder learning process. The algorithm starts with a small ν in latent space to include more global
information. Then it gradually increases the value to focus more locally. The continuation strategy results in significant better solutions as shown in Fig. A10 in Appendix A.5.
The major merits of ELIS. Finally, we summarize the comparative results of the experiments in the following table.
ELIS LIS UMAP t-SNE MLLE TopoAE
Succeed in unfolding toy data Yes Yes No No Yes No
Perfect manifold structures on Coil Yes No Maybe No No No
High Accuracy Most No Some Some No No
Good Reconstruction Quality Yes Maybe N/A N/A No No
4 CONCLUSION
The proposed ELIS method preserves the nonlinear similarity metric locally across layers of deep neural networks by optimizing two-way divergence loss. It effectively tackles difficulties in deep manifold computing and visualization with the local geometry-preserving property. Empirical results, comparisons, and ablation study demonstrate that ELIS is not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading manifold and autoencoder learning algorithms for NLDR and manifold data reconstruction and generation. Future work includes the following: (1) extending the unsupervised version of MLDL to self-supervise, semi-supervised and supervised tasks; (2) further formulating MLDL so that cross-layer link hyperparameters α become part of learnable hyperparameters.
APPENDIX
A.1 THE MLDL FRAMEWORK AND ELIS
Markov-Lipschitz deep learning (MLDL) framework. The MLDL framework is illustrated in Fig. A1 (from Li et al. (2020)). The ML-AutoEncoder (of LIS or ELIS type) transforms the input X to an embedding X^(L) at layer L (the latent layer) using the ML-Encoder, and then reconstructs X̂ using the ML-Decoder. Whereas a standard neural network consists of a cascade of transformations φ^(l) (blue arrows), an MLDL network imposes the constraint between any two layers as appropriate (shown in orange arcs and dashed lines) in the form of cross-layer loss functions weighted by α^(l,l′). This encourages φ^(l) to become well-behaved local homeomorphisms. The latent features X^(L) extracted by the learned ML-Encoder can be used for downstream tasks such as visualization and classification, as well as manifold data generation using the learned ML-Decoder.
Figure A1: Illustration of Markov-Lipschitz deep learning (MLDL) framework using an MLAutoEncoder (best viewed in color).
The pseudo-codes for the ELIS encoder and the ELIS autoencoder, the related hyperparameters, and a parameter continuation method are described below.
Algorithm 1: ELIS Encoder
Input: data X^(0), learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the encoder network {Φ^(1)_Enc(·|W^(1)_Enc), Φ^(2)_Enc(·|W^(2)_Enc), · · · , Φ^(L)_Enc(·|W^(L)_Enc)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to L do
        Calculate layer l's embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l = 1 to L do
        for l′ = l to L do
            Calculate the ELIS loss between layer l and layer l′, L^(l,l′)_Enc, with (10)
        end
    end
    Update parameters: W ← W − lr · Σ_{l=1}^{L} Σ_{l′=l}^{L} α^(l,l′) ∂L^(l,l′)_Enc / ∂W
end
Algorithm 2: ELIS AutoEncoder
Input: data X^(0), learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, γ, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the network {Φ^(1)_Enc(·|W^(1)_Enc), · · · , Φ^(L)_Enc(·|W^(L)_Enc), Φ^(1)_Dec(·|W^(1)_Dec), · · · , Φ^(L)_Dec(·|W^(L)_Dec)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to 2L do
        if l ≤ L then
            Calculate layer l's encoder embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        else
            Calculate layer l's decoder embedding X^(l) ← Φ^(l)_Dec(X^(l−1) | W^(l)_Dec)
        end
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l = 1 to L do
        for l′ = l to L do
            Calculate the ELIS loss between layer l and layer l′, L^(l,l′)_ELIS, with (10)
        end
        Calculate the reconstruction loss between layer l and layer 2L − l, L^(l,2L−l)_Rec, with (13)
    end
    Update parameters: W ← W − lr · ( Σ_{l=1}^{L} Σ_{l′=l}^{L} α^(l,l′) ∂L^(l,l′)_Enc / ∂W + Σ_{l=1}^{L} γ^(l,2L−l) ∂L^(l,2L−l)_Rec / ∂W )
end
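To make the training procedure above concrete, the following is a minimal PyTorch-style sketch of one ELIS-encoder training epoch. It is a sketch under stated assumptions, not the authors' released implementation: the helpers `similarity` (Equ. (5) with ρ' = 0, σ' = 1 and the C_ν factor dropped, since C_ν is close to 1 for large ν) and `two_way_divergence` (Equ. (9)), as well as the names `encoder_layers`, `alpha` and `nu`, are illustrative.

```python
import torch

def similarity(d, sigma=1.0, nu=100.0):
    # Equ. (5) with rho = 0; the C_nu factor of Equ. (6) is close to 1 for large nu and is omitted here
    return (1.0 + d.pow(2) / (sigma * nu)).pow(-(nu + 1.0))

def two_way_divergence(u_in, u_out, eps=1e-8):
    # Equ. (9): attraction on nearby (high-similarity) pairs plus repulsion on far-away pairs
    attract = u_in * torch.log((u_in + eps) / (u_out + eps))
    repel = (1.0 - u_in) * torch.log((1.0 - u_in + eps) / (1.0 - u_out + eps))
    return (attract + repel).sum()

def train_epoch(encoder_layers, x0, u0, alpha, nu, optimizer):
    """One epoch of ELIS-encoder training (hypothetical sketch).
    encoder_layers: list of nn.Module, one per layer l = 1..L
    u0: precomputed input-layer similarities (Equ. (4), (7), (8)), constant during training
    alpha: dict mapping (l, l') -> cross-layer weight"""
    xs, us = [x0], [u0]
    for layer in encoder_layers:                      # layer-by-layer forward pass
        x = layer(xs[-1])
        d = torch.cdist(x, x)                         # pairwise Euclidean distances at this layer
        xs.append(x)
        us.append(similarity(d, sigma=1.0, nu=nu))    # sigma' = 1, rho' = 0 in latent layers
    loss = sum(w * two_way_divergence(us[l], us[lp])  # cross-layer ELIS losses, Equ. (11)
               for (l, lp), w in alpha.items())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A full implementation would additionally restrict the sums to a neighborhood graph to mitigate the O(M²) cost per cross-layer pair mentioned in the experiments section.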
Hyperparameters. Table. A1 summarizes the ELIS hyperparameter setting for different datasets. Other hyperparameters are set the same for all datasets: learning rate lr = 0.01 and number of epochs E = 5000. The LeakyReLU is used as the activation function.
Table A1: Hyperparameters of ELIS for different datasets
Dataset          Points   Network structure (number of parameters)    Q in Equ. (7)   Batchsize
Swiss Roll       800      3, 500, 500, 2 (0.252M)                     10              800
S-Curve          800      3, 500, 500, 2 (0.252M)                     10              800
Severed Sphere   800      3, 500, 500, 2 (0.252M)                     10              800
SpheresA         10000    101, 500, 500, 2 (0.301M)                   10              10000
SpheresB         5500     101, 500, 500, 2 (0.301M)                   10              5500
Coil20           1440     16384, 500, 500, 2 (8.443M)                 10              1440
Coil100          7200     49152, 1000, 500, 250, 2 (24.82M)           10              2400
MNIST            60000    784, 1000, 500, 300, 2 (1.434M)             15              4000
Fashion-MNIST    60000    784, 1000, 500, 2 (1.285M)                  10              4000
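For reference, a small sketch of how a network-structure row of Table A1 can be turned into an encoder. The helper name is hypothetical, and whether an activation follows the final 2-D latent layer is an assumption (none is used here); for the cross-layer ELIS losses one would keep the per-layer outputs rather than a flat Sequential.

```python
import torch.nn as nn

def build_encoder(widths):
    """Build an MLP encoder from a width list such as [784, 1000, 500, 300, 2]
    (one linear layer per consecutive pair, LeakyReLU in between, as in Table A1)."""
    layers = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        layers.append(nn.Linear(d_in, d_out))
        layers.append(nn.LeakyReLU())
    return nn.Sequential(*layers[:-1])   # assumption: no activation after the 2-D latent layer

encoder = build_encoder([784, 1000, 500, 300, 2])   # MNIST configuration from Table A1
```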
Continuation in ν^(l′). In the training process, the parameter ν^(l′) used in computing sample similarities for the latent layer is graduated from a small number to a large number, e.g. ν^(l′): 0.01 → 100 (see Equ. (5)), whereas it is fixed at a large value, e.g. ν^(l′) = 100, for the input layer. Empirically, the continuation helps training converge to a good solution; the reasons behind this are to be explained in future work.
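The continuation can be implemented as a simple per-epoch schedule for the latent-layer ν. The geometric ramp and the warm-up fraction below are illustrative choices; the paper only specifies the endpoints (e.g. 0.01 → 100).

```python
import numpy as np

def make_nu_schedule(epochs, nu_start=0.01, nu_end=100.0, warmup_frac=0.8):
    """Hypothetical continuation schedule for the latent-layer nu: ramp geometrically
    from nu_start to nu_end over the first warmup_frac of training, then hold."""
    warmup = int(epochs * warmup_frac)
    ramp = np.geomspace(nu_start, nu_end, num=warmup)
    hold = np.full(epochs - warmup, nu_end)
    return np.concatenate([ramp, hold])

nu_list = make_nu_schedule(epochs=5000)  # matches E = 5000 used in the experiments
```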
A.2 DEFINITIONS OF PERFORMANCE METRICS
1. Cont (Continuity) is asymmetric to Trust (from space X^(l′) to space X^(l)):
$$\mathrm{Cont} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\;\sum_{j\in \mathcal{N}^{(l)}_{i,k},\, j\notin \mathcal{N}^{(l')}_{i,k}}\big(r^{(l')}_{i,j}-k\big)\right)$$
where $r^{(l')}_{i,j}$ is the rank of $x^{(l')}_j$ in the $k$-NN of $x^{(l')}_i$, $M$ is the size of the dataset, and $\mathcal{N}^{(l')}_{i,k}$ is the set of indices of the $k$-NN of $x^{(l')}_i$. $k_1$ and $k_2$ are the lower and upper bounds of the $k$-NN. For SpheresA and SpheresB we focus more on global performance, so we set $k_1 = [M/14]$, $k_2 = [M/7]$; for the other datasets, we set $k_1 = 5$, $k_2 = 10$.
2. Trust (Trustworthiness) measures how well the k nearest neighbors of a point are preserved when going from space X^(l) to space X^(l′):
$$\mathrm{Trust} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\;\sum_{j\in \mathcal{N}^{(l')}_{i,k},\, j\notin \mathcal{N}^{(l)}_{i,k}}\big(r^{(l)}_{i,j}-k\big)\right)$$
where $r^{(l)}_{i,j}$ is the rank of $x^{(l)}_j$ in the $k$-NN of $x^{(l)}_i$.
3. ACC (svm)
The ACC (svm) is calculated as follows. (1) Apply the nonlinear dimensionality reduction method to obtain 2-dimensional embeddings. (2) Partition the data by 5-fold cross-validation. (3) For each fold, train a linear-kernel SVM classifier on the training set and test it on the test set. (4) Report the mean classification accuracy.
4. ACC (NN) is defined as follows:
$$\mathrm{ACC(NN)} = \frac{\sum_{i=1}^{M} \pi\big[\,Y_i = Y_{\mathcal{N}^{(L)}_{i,1}}\big]}{M}$$
where $\pi[\cdot]$ is the indicator function, $Y_i$ is the label of sample $i$, and $Y_{\mathcal{N}^{(L)}_{i,1}}$ is the label of sample $\mathcal{N}^{(L)}_{i,1}$, the nearest neighbor of sample $i$ in layer $L$.
5. AUC is defined as follows:
$$\mathrm{AUC}(f) = \frac{\sum_{p_0\in P_0}\sum_{p_1\in P_1}\pi[p_0 > p_1]}{|P_0|\cdot|P_1|}$$
$$P_0 = \left\{\frac{d^{(L)}_{ij}-\min d^{(L)}_{ij}}{\max d^{(L)}_{ij}-\min d^{(L)}_{ij}} \;\middle|\; i,j\in\{1,2,3,\dots,M\},\ Y_i = Y_j\right\}$$
$$P_1 = \left\{\frac{d^{(L)}_{ij}-\min d^{(L)}_{ij}}{\max d^{(L)}_{ij}-\min d^{(L)}_{ij}} \;\middle|\; i,j\in\{1,2,3,\dots,M\},\ Y_i \neq Y_j\right\}$$
where $d^{(L)}_{ij}$ is the distance in layer $L$, $P_0$ is the set of positive (same-label) sample pairs, and $P_1$ is the set of negative sample pairs.
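A minimal sketch of how Trust, Cont and the pairwise-distance AUC can be computed, assuming scikit-learn's `trustworthiness` (which implements the same per-k penalty as above) and taking the AUC definition literally, with same-label pairs as P0 scored by their latent distance; the min-max normalization is monotone and therefore does not affect the AUC. The helper name is illustrative.

```python
import numpy as np
from sklearn.manifold import trustworthiness
from sklearn.metrics import roc_auc_score
from scipy.spatial.distance import pdist, squareform

def evaluate_embedding(X_high, Z_low, y=None, k1=5, k2=10):
    """Trust/Cont averaged over k in [k1, k2]; AUC over pairwise latent distances."""
    ks = range(k1, k2 + 1)
    trust = np.mean([trustworthiness(X_high, Z_low, n_neighbors=k) for k in ks])
    cont = np.mean([trustworthiness(Z_low, X_high, n_neighbors=k) for k in ks])  # roles swapped
    auc = None
    if y is not None:
        y = np.asarray(y)
        D = squareform(pdist(Z_low))                 # d^(L)_{ij}
        iu = np.triu_indices(len(y), k=1)            # each unordered pair once
        same = (y[iu[0]] == y[iu[1]]).astype(int)    # positive pairs P0: same label
        auc = roc_auc_score(same, D[iu])             # fraction of (p0, p1) with p0 > p1 (ties halved)
    return trust, cont, auc
```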
A.3 MANIFOLD LEARNING AND NLDR
Manifold Learning Results. This subsection shows more results of manifold learning and NLDR obtained by using the ELIS-Enc and the ELIS-AE, in comparison with the other methods, on training and testing datasets. Some typical embedding results of manifold learning using the three autoencoder methods are visualized in Fig. A2, where t-SNE and UMAP are not included because these non-transformational methods are unable to generalize to test datasets. LIS-Enc and TopoAE learned poor
Figure A2: Comparison of visualization results of autoencoders on training and testing sets
results on the training set, so they did not work well on the test set either. ELIS-AE, as an autoencoder-based method, has a clear advantage in generalization performance because it can handle test data, and it is easy to apply to downstream tasks such as classification, regression and clustering.
Table. A2 compares performance metrics on 8 datasets, where ACC(SVM), ACC(NN) and AUC are absent for SwissRoll and SeveredSphere because these datasets have no class labels.
Fig. A3 and Fig. A4 show the visualization results of the toy and real-world datasets on the training datasets. For Swiss Roll, Severed Sphere, and S-Curve, ELIS-Enc, LIS-Enc and MLLE all maintained the topology of the original data; however, the MLLE method did not preserve the relative Euclidean distances (the resulting embedding is square instead of rectangular). For SpheresA, ELIS-Enc, LIS-Enc, and TopoAE show the "big sphere enclosing 10 small spheres" in the 2D embedding, but for SpheresB only
Table A2: Comparison of performance metrics with five different methods on eight datasets

Dataset         Metric     ELIS-Enc  LIS-Enc  UMAP    t-SNE   TopoAE  MLLE
Swiss Roll      Cont       1.0000    1.0000   0.9962  0.9969  0.9716  0.9956
                Trust      1.0000    1.0000   0.9983  0.9993  0.9809  0.9948
SeveredSphere   Cont       0.9997    0.9932   0.9967  0.9985  0.9854  0.9958
                Trust      0.9997    0.9755   0.9989  0.9995  0.9891  0.9836
SpheresA        Cont       0.7850    0.7892   0.7147  0.7548  0.8064  0.7272
                ACC(SVM)   0.5213    0.5000   0.5550  0.4992  0.4982  0.5000
                ACC(NN)    0.9985    0.9912   0.5406  0.7837  0.9944  0.5205
                AUC        0.5698    0.3362   0.5816  0.5603  0.3328  0.5961
SpheresB        Cont       0.9242    0.9255   0.9109  0.9155  0.9245  0.8943
                ACC(SVM)   0.9558    0.9100   0.9100  0.8478  0.9581  0.0965
                ACC(NN)    0.9987    0.9969   0.8469  0.9365  0.9949  0.8265
                AUC        0.9780    0.9318   0.9570  0.9570  0.9870  0.9459
Coil20          Cont       0.9956    0.9973   0.9962  0.9927  0.9901  0.9395
                ACC(SVM)   0.8941    0.8301   0.8472  0.8014  0.7078  0.1556
                ACC(NN)    0.9965    0.9354   0.8917  0.9965  0.8160  0.6410
                AUC        0.9780    0.9537   0.9842  0.9582  0.8916  0.8824
Coil100         Cont       0.9936    0.9967   0.9955  0.9950  0.9903  0.7898
                ACC(SVM)   0.9372    0.7319   0.8299  0.8278  0.5540  0.0363
                ACC(NN)    0.9976    0.8163   0.9232  0.9951  0.4797  0.3350
                AUC        0.9770    0.9667   0.9819  0.9759  0.8735  0.7322
MNIST           Cont       0.9639    0.9749   0.9646  0.9630  0.9618  0.9183
                ACC(SVM)   0.9699    0.7468   0.9690  0.9525  0.7450  0.1100
                ACC(NN)    0.9568    0.7035   0.9528  0.9567  0.7773  0.7423
                AUC        0.9725    0.8779   0.9691  0.9314  0.8000  0.8575
Fashion-MNIST   Cont       0.9848    0.9901   0.9836  0.9777  0.9864  0.9298
                ACC(SVM)   0.7125    0.6908   0.7030  0.5518  0.6067  0.1058
                ACC(NN)    0.7092    0.6427   0.7253  0.7787  0.5718  0.6145
                AUC        0.9121    0.8843   0.9165  0.8256  0.8310  0.7908
ELIS-Enc and LIS-Enc show the "enclosing" phenomenon. For Coil20 and Coil100, ELIS-Enc, UMAP and t-SNE can produce non-intersecting embeddings; however, the ELIS-Enc embeddings remain distinguishable and do not cut any of the manifolds. For MNIST and Fashion-MNIST, both UMAP and ELIS-Enc output good embeddings, but in terms of the performance metrics ELIS-Enc has a clear advantage.
Symmetry of the objects and ELIS-Enc’s embedding in Coil20. For Coil20, information about the symmetry of the objects in the picture can be obtained by analyzing the embedding generated by ELIS-Enc. Details of the ELIS-Enc embedding of the Coil20 are shown in Fig. A5.
We divided the Coil20’s manifolds into four patterns based on the symmetry of the objects in the image and the shape of the manifold.
(1) Objects with single-plane mirror symmetry have elongated elliptical embedding shapes. For such objects, an angle can be found from which an image taken by rotating to the left is approximately equal to an image taken by rotating to the right; the corresponding two-dimensional manifold is therefore an elongated ellipse (the two endpoints of the long axis correspond to the two images taken along the plane of symmetry).
(2) Objects with rotational symmetry have round embedding shapes. For rotationally symmetric objects, the resulting pictures are very similar no matter what angle they are taken from, so the resulting two-dimensional manifold is squeezed inward into a circle.
(3) Objects that are double vertical mirror symmetric and have nested double ring embeddings; For objects with double vertical mirror symmetry, every 180 degrees of rotation, the resulting image reappears (the reappeared image is very similar to the one from 180 degrees ago, and
Figure A3: Comparison of visualization results for toy dataset on training set
is very close in two-dimensional space), thus the resulting manifold consists of two nested rings.
(4) Object’s symmetry is not evident.
Figure A4: Comparison of visualization results for real-world dataset on training set
Figure A5: Details of the ELIS-Enc’s embedding of the Coil20 and four manifold patterns
A.4 MANIFOLD DATA GENERATION
The manifold generation task generates a complete manifold structure from finite manifold samples. In this experiment, the test steps are as follows:
(1) Training a network (includes encoder and decoder) that generating 2-dimensional embedding;
(2) Performing linear interpolation in the embeddings; (3) Mapping the interpolation results back to the data space via the decoder (a minimal sketch of steps (2) and (3) is given right after this list).
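A minimal sketch of steps (2) and (3), assuming a trained ELIS decoder handle named `decoder` (latent space to data space); the name and the number of interpolation steps (matching the ten generated images in Fig. A6) are illustrative.

```python
import torch

def generate_between(decoder, z_a, z_b, steps=10):
    """Interpolate between two neighboring embeddings and decode each interpolated point."""
    ts = torch.linspace(0.0, 1.0, steps + 2)[1:-1]           # 10 interior points between the endpoints
    zs = torch.stack([(1 - t) * z_a + t * z_b for t in ts])  # linear interpolation in the latent space
    with torch.no_grad():
        return decoder(zs)                                    # mapped back to the data space
```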
Generation results for comparison with the TopoAE and LIS-AE are shown in Fig. A6. The same network structure was used in the experiments.
Figure A6: Comparison in visualization with LIS-AE and TopoAE for manifold generation. The left side shows the three embedding results; the black points in the manifolds mark the locations of the interpolation. The right side shows the interpolation results: of the 12 images, the leftmost and the rightmost are the original images, and the ten images in the middle are the generation results along the geodesic path between them.
ELIS-AE has an advantage over the other two methods. Neither LIS-AE nor TopoAE learns a satisfactory embedding, so their interpolation results are poor. The LIS-AE embedding has overlapping manifolds, so it generates images belonging to other manifolds (e.g. manifold A). The TopoAE embedding is messy, so the decoder reconstructs fuzzy images.
A.5 ABLATION STUDY
Cross-layer ELIS constraint. The effect of the ELIS-Enc constraint is determined by the weights α^(l,l′) as in Equ. (11). We set the weights α^(l,l′) in one of four schemes (where Head, Tail and Mids denote the input layer, the latent layer and the intermediate layers, respectively) for an L-layer encoder; an illustrative construction of these weights is sketched after the list:
(1) Head-Tail: weight α(0,L) = 1 (the constraint is imposed between the input layer and the latent layer);
(2) Head-Mids: weights α(0,l) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the input layer and each of intermediate layers);
(3) Mids-Tail: weights α(l,L) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the latent layer and each of intermediate layers);
(4) Head-Mids + Mids-Tail: weights α(0,l) = α(l,L) = 1/2L where l ∈ {1, 2, · · · , L} (combination of Head-Mids and Mids-Tail).
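An illustrative construction of the cross-layer weights α^(l,l′) for the four schemes (layer 0 = Head, layer L = Tail); the dictionary representation and the handling of the degenerate (L, L) pair are assumptions, not taken from the paper.

```python
def make_alpha(L, scheme="head-mids+mids-tail"):
    """Cross-layer weights alpha[(l, l')] for the four schemes described above."""
    if scheme == "head-tail":
        return {(0, L): 1.0}
    if scheme == "head-mids":
        return {(0, l): 1.0 / L for l in range(1, L + 1)}
    if scheme == "mids-tail":
        return {(l, L): 1.0 / L for l in range(1, L + 1)}
    alpha = {}                                  # head-mids + mids-tail combination
    for l in range(1, L + 1):
        alpha[(0, l)] = 1.0 / (2 * L)           # Head-Mids part
        alpha[(l, L)] = 1.0 / (2 * L)           # Mids-Tail part (the (L, L) self-pair may be dropped in practice)
    return alpha
```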
In this ablation study, a 10-layer neural network is used and the width of the network is determined depending on the dataset. (Swiss Roll:[3, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresA:[101, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresB:[101, 500, 400, 300, 300, 200, 200, 100, 100, 2], COIL20:[16384, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2])
The evaluation metrics for four different cross-layer schemes are presented in Table. A3. The results of different cross-layer schemes are shown in Fig. A7.
Figure A7: Comparison in visualisation of four different cross-layer schemes.
The visualization results and metrics show that cross-layer scheme (4) gives the best results for a 10-layer network. The network is very difficult to train if the ELIS loss acts only on the first and last layers (cross-layer scheme (1)). The network becomes easier to train if ELIS losses act between the first layer and all intermediate layers (cross-layer scheme (2)). The ELIS losses between the intermediate and last layers (cross-layer scheme (3)) do not improve the performance of the embedding if used alone.
Table A3: Comparison in performance metrics of four different cross-layer schemes.
Dataset      Scheme  Cont    Trust   ACC(SVM)  ACC(NN)  AUC
Swiss Roll   (1)     -       -       -         -        -
             (2)     0.9999  0.9999  -         -        -
             (3)     -       -       -         -        -
             (4)     0.9999  0.9999  -         -        -
SpheresA     (1)     -       -       -         -        -
             (2)     0.9402  0.8832  0.9149    0.9478   0.9696
             (3)     -       -       -         -        -
             (4)     0.9376  0.8858  0.9529    0.9784   0.9721
SpheresB     (1)     0.9111  0.6373  0.5225    1.0000   0.5486
             (2)     0.9087  0.6341  0.5145    1.0000   0.5489
             (3)     0.8520  0.6299  0.5388    1.0000   0.4474
             (4)     0.8167  0.6432  0.8740    0.9936   0.7461
Coil20       (1)     -       -       -         -        -
             (2)     0.9955  0.9852  0.8454    0.9792   0.9721
             (3)     0.9904  0.9876  0.8459    0.9847   0.9524
             (4)     0.9947  0.9901  0.8867    0.9986   0.9735
However, if used in conjunction with the cross-layer scheme (4), it will improve the metric of the resulting latent space.
Effect of ν value. Fig. A8 and Fig. A9 show the effect of data space hyperparameter and latent space hyperparameter on embedding results.
Figure A8: Embedding results with varying ν in input space.
ν in the input space controls the range of sensitivity in the data space. If the input-space ν is small, the derivative of the probabilistic mapping function of the input layer will be small, and the probability will be insensitive to distance in the data space; in other words, ELIS-Enc will degenerate into LIS-Enc. If more attention is to be paid to the global information of the input data, raise ν; if more attention is to be paid to local information of the input data, lower the input-space ν. By default, ELIS-Enc does not change this hyperparameter and uses ν = 100 in the input space.
ν in the latent space controls the degree of detail displayed in the latent space (local detail versus global structure). If the latent-space ν is small, ELIS tends to display global information; if the latent-space ν is large, ELIS-Enc tends to display local information. Fig. A9 shows, from left to right, a process that goes from showing global information to showing excessive local information.
Figure A9: Embedding results with varying ν in latent space.
Continuation strategy. Continuation in the latent-space hyperparameter ν is used during the ELIS encoder learning process. The algorithm starts with a small latent-space ν to include more global information, and then gradually increases the value to focus more locally. The necessity of parameter continuation is shown in Fig. A10.
Figure A10: Ablation study of with and without parameter continuation in latent space ν. (The upper row shows results obtained via parameter continuation ν = 0.001 → ν = 100 in latent space, the lower row shows results with a fixed ν = 100)
The experiments show that the effect of parameter continuation (ν = 0.001 → ν = 100 in the latent space) is substantial, with clear improvements on Swiss Roll, Coil20 and MNIST.
2. How does the proposed model, ELIS, handle cases that cannot be handled by LIS?
3. What are the concerns regarding the equation described in the manuscript, specifically the difference between two distances in two different spaces?
4. How does the author justify the claims made about the advantages of LIS/ELIS, particularly in terms of generalization, stability, and robustness?
5. What are some examples of subjective statements made in the paper that lack logical arguments?
6. How does the author compare the results of ELIS with other algorithms, including LIS, for NLDR and manifold data generation?
7. What is the definition of a geometrically smooth function, and how does it relate to topological homeomorphic transformation?
8. Can the author provide specific examples to support the claim that ELIS and LIS can accomplish four tasks that few neural networks can do? | Review | Review
This manuscript proposes a model named Elastic Locally Isometric Smoothness (ELIS) that is claimed to support all of the so-called "Manifold Computing" tasks, ranging from visualization and nonlinear dimensionality reduction to manifold data generation and representation learning. The claim here is that the proposed model somehow imposes constraints on the geometric isometry across the network layers to preserve the local geometric structure of the data manifold. The ELIS model is based on another, unknown model/framework (and yet unpublished), the so-called "Markov-Lipschitz" Deep Learning [Li et al. 2020]. The main technical contribution here is that ELIS can handle cases/scenarios that cannot be handled by its predecessor, Locally Isometric Smoothness (LIS), again due to [Li et al. 2020], such as highly nonlinear manifolds, non-Euclidean challenges, and bridging gaps between non-Euclidean manifolds in the input space and the resulting Euclidean hyperplanes!
Comments:
Eq. (1) which seems to describe LIS raises some concerns; it's the difference between two distances in the two different spaces: one for the input before the transformation \Phi function, and the other one is the output of the transformation function. Unfortunately, as presented in the manuscript, these two distances are not comparable and hence they have different scale; this is similar to performing a comparison operation on two different scale systems: metric vs. imperial, say. This is especially more concerning since \Phi is usually a nonlinear transformation.
There is a lot of back-and-forth discussion between LIS and ELIS; LIS is a recent 2020 work that has not yet been published, and the Authors are treating it as legitimate ground truth. In fact, various parts of the manuscript seem to be repackaging and re-selling LIS to the reader, which confuses the reading and the understanding of the manuscript.
What is also disturbing in the current version of the manuscript is the over-selling and high pitch on the power and capabilities of LIS/ELIS; it seems this bad habit of various Authors in the deep learning regime has been spreading constantly to so many papers these days. The manuscript has a significant amount of subjective statements that cannot be justified even by simple logical arguments. Here are a few examples:
Ex. 1: "Such local homeomorphisms avoid the transformation from collapse, twisting, or crossing, so as to improve generalization, stability, and robustness." How local homeomorphisms will improve generalization, stability and robustness, assuming that the proposed model/constraint really preserves local homeomorphisms.
Ex. 2: "LIS has demonstrated significant advantages in manifold learning and NLDR" ?!
Ex. 3: "Extensive experiments, comparisons, and ablation study demonstrate that ELIS-based neural networks produce results not only superior to the SOTA t-SNE and UMAP for NLDR and visualization but also better than other algorithms of manifold and autoencoder learning, including LIS, for NLDR and manifold data generation." The results in Table 2 do not support this claim. In fact, the ELIS results are not significantly different from the UMAP results, and hence one can easily question the merits of LIS/ELIS.
Ex. 4: "It inherits the metric-preserving property of LIS so that the resulting layer-wise transformation is geometrically smooth, hence topologically homeomorphic, yet possesses more flexibility than LIS in handling highly nonlinear manifolds in high dimensional spaces." First, LIS is not known nor proved to have such properties. Second, any mathematical argument or result to support the claim that a geometrically smooth layer-wise transformation will lead to a topological homeomorphic transformation?? What is the definition of a geometrically smooth function?
Ex. 5: "Both ELIS and LIS aim to accomplish the following 4 tasks, of which few neural networks can do all:" Any particular examples to support this claim?
The paper and the work are still not mature yet. |
ICLR | Title
Deep Manifold Computing and Visualization Using Elastic Locally Isometric Smoothness
Abstract
The ability to preserve local geometry of highly nonlinear manifolds in high dimensional spaces and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold computing, nonlinear dimensionality reduction (NLDR) and visualization. This paper proposes a novel method, called elastic locally isometric smoothness (ELIS), to empower deep neural networks with such an ability. ELIS requires that a desired metric between points should be preserved across layers in order to preserve local geometry; such a smoothness constraint effectively regularizes vector-based transformations to become well-behaved local metric-preserving homeomorphisms. Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions. The ELIS method incorporates a class of suitable nonlinear similarity functions into a two-way divergence loss and uses hyperparameter continuation in finding optimal solutions. Extensive experiments, comparisons, and ablation study demonstrate that ELIS can deliver results not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading counterparts of manifold and autoencoder learning for NLDR and manifold data generation.
1 INTRODUCTION
Manifold learning aims to find from a set of higher dimensional data its embedding or representation in a low dimensional latent space. Nonlinear dimensionality reduction (NLDR) aims to construct a transformation that is generalizable to unseen data. It is hoped that the lower dimensional representation can be used in conjunction with a simple metric such as the Euclidean distance for downstream tasks such as classification and visualization. Manifold data generation performs the inverse transformation to generate data from samples in the latent space. We call this collection of manifold related problems manifold computing. The basis for manifold computing is the manifold assumption (Mikhail Belkin, 2002; Fefferman et al., 2016).
Great advances have been made in the past two decades in manifold computing and visualization. ISOMAP (Tenenbaum et al., 2000) and LLE (locally linear embedding) (Roweis & Saul, 2000) are classic methods for manifold learning. More recent developments include local geometry-based method (Gashler et al., 2008; Zhang & Wang, 2007; Chen & Buja, 2009; McQueen et al., 2016), graph spectral analysis (Donoho & Grimes, 2003) and latent variable models (Saul, 2020). The most popular high dimensional data visualization methods to date are t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018), with wide applications such as bio-science and technology (Becht et al., 2019; Dorrity et al., 2020). While the aforementioned are traditional machine learning, deep learning-based methods include autoencoders (Hinton & Salakhutdinov, 2006; Moor et al., 2020). The problem can be considered from the viewpoints of geometry deep learning (Bronstein et al., 2017)) and topology data analysis (Wasserman, 2018; Moor et al., 2020).
The ability to preserve geometric structure of nonlinear manifolds and properly unfold them into lower dimensional hyperplanes is the key to the success of manifold-based computing and visualization. Recently, Markov-Lipschitz deep learning (MLDL) (Li et al., 2020) is proposed as a general framework for manifold learning, NLDR, visualization and manifold data generation. The idea is to impose the constraint of geometric isometry across neural network layers to preserve the local
geometric structure of manifold data. This effectively transforms a vector-based transformation of conventional neural networks into a local distance-preserving homeomorphism. Such local homeomorphisms avoid the transformation from collapse, twisting, or crossing, so as to improve generalization, stability, and robustness. Locally isometric smoothness (LIS) (Li et al., 2020), which imposes straight distance-preserving, is proposed as a method in the MLDL framework. LIS has demonstrated significant advantages in manifold learning and NLDR.
This paper proposes a more advanced method in the MLDL framework, called elastic locally isometric smoothness (ELIS), aimed to empower deep neural networks with ability to tackle the high nonlinearity and non-Euclideanity challenges arising from complicated manifolds in high dimension spaces that LIS is unable to cope with. Whereas LIS preserves the straight distances between neighboring points, ELIS is based on a similarity metric that is nonlinear in distance and a two-way divergence loss (of nearby neighbors and far-away pairs, respectively); this renders more flexibility and capacity in tackling the challenges yet under the control of the ELIS regularization. As the result, ELIS bridges gaps between non-Euclidean manifolds in the input space and resulting Euclidean hyperplanes in the learned lower dimensional latent space, with geometric structure of the manifolds preserved. Both ELIS and LIS can be considered as a form of graph neural networks (GNN) (Scarselli et al., 2009) but without the aggregation generally present in GNNs. They are more like what is called “manifold learning 2.0” (Bronstein, 2020).
The distinctive features of ELIS (and LIS) in comparison with related methods are summarized in Table 1. ELIS-based neural networks can accomplish all the functionalities in the general MLDL framework, for which none of the methods can achieve. Extensive experiments, comparisons, and ablation study demonstrate that ELIS-based neural networks produce results not only superior to the SOTA t-SNE and UMAP for NLDR and visualization but also better than other algorithms of manifold and autoencoder learning, including LIS, for NLDR and manifold data generation. The main contributions of this paper are summarized below:
(1) Proposing the ELIS constraint in the MLDL framework, based on a similarity metric which is nonlinear in distance. It inherits the metric-preserving property of LIS so that the resulting layer-wise transformation is geometrically smooth, hence topologically homeomorphic, yet possesses more flexibility than LIS in handling highly nonlinear manifolds in high dimensional spaces.
(2) Proposing conditions for a class of nonlinear similarity functions for converting from distance to similarity, in conjunction with a two-way divergence loss. This ensures the metric-preserving and neighbor-confining properties.
(3) Proposing two instances of ELIS-based neural networks: an ELIS encoder for manifold learning and visualization and an ELIS autoencoder for manifold reconstruction and data generation.
(4) Providing several SOTA results that surpass UMAP and other leading algorithms.
In the following, Section 2 introduces LIS and presents ELIS formulations, and the Section 3 presents extensive experiments. The code is provided in the Supplementary Material.
2 ELASTIC LOCALLY ISOMETRIC SMOOTHNESS
Both ELIS and LIS are formulated in the MLDL framework (illustrated in Fig.A1 in Appendix) which is aimed to regularize neural transformations through imposing the ELIS constraint between
layers to achieve certain well-behaving properties. However, the ELIS formulation tackles the challenges of highly nonlinear manifold data in high dimensional spaces in a more flexible and effective way, much inspired by t-SNE (Maaten, 2014) and UMAP (McInnes et al., 2018). Let X = {x_1, . . . , x_M} be a set of M samples in the input space R^N with the index set S = {1, . . . , M}. These samples may come from one or several lower dimensional manifolds M_X ⊂ R^N. When M_X is Riemannian, its tangent subspace T_x(M_X) at any x ∈ M_X is locally isomorphic to a Euclidean space of dimensionality dim(M_X) < N. Therefore, we can use a cascade of nonlinear neural transformations to "unfold" nonlinear manifolds in a high dimensional input space into hyper-planar regions in a lower dimensional latent space.
Both ELIS and LIS aim to accomplish the following 4 tasks, of which few neural networks can do all: (1) Manifold learning: to learn an embedding in a latent space M_Z ⊂ R^n, where n < N, based on the local structure of X. (2) Representation learning: to learn the underlying mapping Φ : M_X ⟹ M_Z for the embedding that is generalizable to unseen data x ∉ X, x ∈ M_X. (3) Visualization: to visualize the embedding in 2D or 3D space. (4) Manifold generation: to find the inverse mapping Φ^{-1} : M_Z ⟹ M_X and generate new data on M_X from samples in M_Z. ELIS is aimed to surpass LIS.
2.1 THE LIS CONSTRAINT AND NEURAL NETWORKS
The LIS constraint is aimed to best preserve the local distances of the data between two metric spaces, encouraging a vector-based neural transformation Φ(X |W ), where W is the transformation matrix of the neural network, to become a well-behaved local distance-preserving homeomorphism. This can be achieved by adding the following LIS loss (Li et al., 2020), imposed between two layers (metric spaces) l and l′
$$L^{(l,l')}_{\mathrm{LIS}}(W) = \sum_{i\in S}\;\sum_{j\in \mathcal{N}^{(l)}_i}\Big|\,d\big(x^{(l)}_i, x^{(l)}_j\big) - d\big(x^{(l')}_i, x^{(l')}_j\big)\Big| \qquad (1)$$
where $d: X \times X \to \mathbb{R}_{\ge 0}$ is a dissimilarity metric, $x^{(l')}_i = \Phi(x^{(l)}_i \mid W)$ is the result of the effective transformation $\Phi$ from layer $l$ to $l'$, and $\mathcal{N}_i$ is the set of neighbors of $i$. Without prior knowledge, $d_{ij}$ is usually computed as the Euclidean distance, albeit it may not well reflect the reality. It is hoped that after a series of proper nonlinear transformations, the input data is transformed into an embedding in the latent space such that the Euclidean distance makes more sense in describing mutual relationships between points. In this work, we aim to find such transformations.
The LIS loss effectively minimizes the bi-Lipschitz constant of Φ. It is through the neighborhood system, N = {Ni | i ∈ S}, that the influence of a point on the others is propagated to afar. For this reason, the collection of random variable x(l) constitutes a Markov random field. Equ. (1) is defined w.r.t. Ni (Markovianity) and aimed to minimizing the bi-Lipschitz constant, hence the name Markov-Lipschitz (Li et al., 2020).
The basic LIS loss is augmented by an auxiliary "push-way" term (Li et al., 2020)
$$L^{(l,l')}_{\mathrm{push}}(W) = -\sum_{i\in S}\;\sum_{j\notin \mathcal{N}^{(l)}_i}\pi\big[d_{l'}(x^{(l')}_i, x^{(l')}_j) < B\big]\; d_{l'}(x^{(l')}_i, x^{(l')}_j) \qquad (2)$$
in which π[·] ∈ {0, 1} is the indicator function and B is a bound. This term is aimed to help "unfold" nonlinear manifolds, by exerting a spring force to push away from each other those pairs (i, j) which are non-neighbors at layer l but nearby (distance smaller than B) at layer l′.
These two losses are combined to form a LIS-based encoder loss for manifold learning and dimension reduction
$$L_{\mathrm{Enc}} = \sum_{(l,l')} L^{(l,l')}_{\mathrm{LIS}}(W) + \mu L^{(l,l')}_{\mathrm{push}}(W) \qquad (3)$$
where µ is a weight and (l, l′) is summed over a set of designated layer pairs (currently designed manually). A LIS-based autoencoder can be formulated by applying the LIS constraint between layers within the decoder and between the encoder and decoder layers. LIS-based neural networks have significant advantages (Li et al., 2020).
2.2 THE ELIS CONSTRAINT
The proposed ELIS constraint is aimed to tackle difficulties in "flattening" highly nonlinear manifolds in a high dimensional space into hyperplanes in a lower dimensional space. It imposes a more flexible nonlinear similarity-preserving constraint as opposed to the straight distance-preserving (isometry) constraint of vanilla LIS. More specifically, ELIS transforms a distance into a similarity metric using a nonlinear function and defines a KL loss based on similarities between nearby pairs and far-away pairs. This makes the metric-preserving constraint of ELIS more flexible than the straight distance-preserving of LIS for accomplishing the challenging task.
Moreover, ELIS requires that the smoothness should be imposed in a way to render sufficient flexibility for tackling complicated nonlinearity and non-Euclideanity; this is achieved layer-wisely via nonlinearity in both the similarity and activation functions.
Converting distance to similarity. Following UMAP, we assume that X^(l) is fixed (e.g., the input layer) and X^(l') at subsequent layers l' are computed as a result of manifold learning. The nonlinear similarities between x_i and x_j at each layer are computed as follows. First, define a nearest-neighbor (NN)-normalized distance
$$d_{i|j} \overset{\mathrm{def}}{=} d(x_i, x_j) - \rho_i \;\ge\; 0 \qquad (4)$$
where ρ_i = d(x_i, x_{nn(i)}), in which x_{nn(i)} denotes the nearest neighbor of x_i. Then, d_{i|j} is converted to a similarity metric u_{i|j} = g(d_{i|j}) ∈ [0, 1], where g is a nonlinear function.
We require that g(η) satisfy the following necessary conditions ∀η = di|j ≥ 0:
Condition (1) – it is monotonically decreasing, g′(η) < 0 for η > 0; Condition (2) – its first derivative diminishes in the limit, limη→∞ |g′(η)| = 0.
The first condition ensues a monotonic and inverse relationship between the distance and the similarity. The second condition effectively leads to a neighborhood system bounded softly as opposed to the "hard" bounded neighborhoods in the LIS and provides proper control on contributions of neighboring points to the back-propagation of neural network learning.
We further require g(η) to be a function of η² – for convenience, not necessity – such that its first derivative takes the form g′(η) = 2η h(η), where h(η) is also a function of η². h(η) can be called the influence function because it controls how another neighboring point x_j can influence x_i. Condition (2) above restricts the influence of a "far-away" point x_j on x_i (between which the distance η_ij = ‖x_i − x_j‖ is relatively large) to diminish in the back-propagation process. This provides a properly weighted neighborhood system with respect to which the influence between points is adaptively limited in scope.
Specifically for ELIS, we define the following σi-data-adaptive, ν-parameterized nonlinear similarity
$$u_{i|j}(\sigma_i, \nu) = g(d_{i|j} \mid \sigma_i, \nu) = C_\nu \left(1 + \frac{d^2_{i|j}}{\sigma_i \nu}\right)^{-(\nu+1)}, \qquad (5)$$
where ν ∈ R+ is similar to the degree of freedom (DoF) parameter in the t-distribution,
$$C_\nu = 2\pi\left(\frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}\right)^{2} \qquad (6)$$
is a function of ν which sets the limit lim_{ν→+∞} g(0 | σ_i, ν) = 1 (∀σ_i > 0), and the data-adaptive parameter σ_i > 0, playing a calibration role, is estimated from the data by best fitting the equation
$$\sum_{j\neq i} u_{i|j}(\sigma_i, \nu) = \log_2 Q \qquad (7)$$
for the perplexity-like hyperparameter Q given. While other choices satisfying the aforementioned necessary conditions, including the normalized Gaussian and Cauchy functions used in t-SNE (Maaten, 2014) and the fitted polynomial function used in UMAP (McInnes et al., 2018), can also work for ELIS, we find Equ. (5) a better choice not only because it produces better results but also because we can use the ν parameter as a continuation tool for preventing the training from converging
to bad local minima and for controlling separation margin between different manifolds, as will be shown in the ablation study.
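To make Equ. (5)-(7) concrete, here is a minimal NumPy sketch of the similarity kernel and of the per-point calibration of σ_i by binary search so that the row sum matches log2(Q). The search bounds, iteration count and the use of a geometric midpoint are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.special import gammaln

def C_nu(nu):
    # Normalizing factor of Equ. (6), written with log-Gamma for numerical stability
    return 2.0 * np.pi * np.exp(2.0 * (gammaln((nu + 1) / 2) - gammaln(nu / 2)) - np.log(nu * np.pi))

def g(d, sigma, nu):
    # Equ. (5): nu-parameterized similarity of an NN-normalized distance d >= 0
    return C_nu(nu) * (1.0 + d ** 2 / (sigma * nu)) ** (-(nu + 1.0))

def calibrate_sigma(d_row, nu, Q=10, iters=64, lo=1e-6, hi=1e6):
    """Binary search for sigma_i such that sum_j g(d_{i|j}) matches log2(Q)  (Equ. (7)).
    d_row: NN-normalized distances from point i to all other points."""
    target = np.log2(Q)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)                    # geometric midpoint of the search interval
        if g(d_row, mid, nu).sum() > target:
            hi = mid                               # too much total similarity -> shrink sigma
        else:
            lo = mid
    return np.sqrt(lo * hi)
```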
Computing similarities uij and u′ij . Because the symmetry ui|j = uj|i does not hold due to differences in σi for the input layer (l), the following symmetrization is performed
$$u_{ij} = u_{j|i} + u_{i|j} - u_{j|i}\,u_{i|j}. \qquad (8)$$
On the other hand, for the subsequent latent layers, the computation of σ_i and ρ_i for each i would bring about huge computational costs. To overcome this problem, we directly set σ'_i = 1 and ρ'_i = 0 (this also ensures the symmetry u'_{i|j} = u'_{j|i}). While σ_i and ρ_i are needed to deal with unevenness and outliers of the data for the input layer, the necessity becomes less demanding as the layer goes deeper after layers of nonlinear manifold unfolding. From u^(l)_{ij} of layer l can be constructed a weighted graph G(S, X^(l), U^(l)) consisting of a set S of nodes with node attributes X^(l) and edge attributes (weights) U^(l) = {u^(l)_{ij} ≥ ε > 0 | ∀i, j ∈ S}. The global structure of a manifold is discovered from the local geometry of the data through the graph G.
Formulating the ELIS loss. ELIS transforms the distance metric d_{ij} into a similarity metric using a nonlinear function u_{ij} = g(d_{ij}) and defines the ELIS loss between layers l and l' in terms of the similarities U^(l) = {u^(l)_{ij} | i, j ∈ S, i ≠ j} at layer l and its counterpart U^(l') = {u^(l')_{ij} | i, j ∈ S, i ≠ j} at layer l'. The ELIS loss is defined by what we call the two-way divergence (a.k.a. the fuzzy information for discrimination (Bhandari & Pal, 1993) and the fuzzy set cross entropy in UMAP (McInnes et al., 2018))
$$L^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} u^{(l)}_{ij}\,\log\frac{u^{(l)}_{ij}}{u^{(l')}_{ij}} + \big(1-u^{(l)}_{ij}\big)\log\frac{1-u^{(l)}_{ij}}{1-u^{(l')}_{ij}} \qquad (9)$$
The first term is the directed divergence of the two fuzzy sets of similarities, in lieu of the LIS’ distance-preserving term of Equ. (1); the second term can be considered as the directed divergence of the two corresponding complement fuzzy sets, replacing the push-way term of Equ. (2).
Equ.(9) is called the "two-way divergence" because the first term on the right side of the equation imposes similarity-based attraction forces between nearby (intra-manifold) pairs whereas the second term exerts dissimilarity-based repulsion forces between far-away (inter-manifold) pairs. In other words, intra-manifold points are transformed to a cluster in the latent space, mainly as the result of the first term whereas inter-manifold point pairs push away from each other to different clusters, mainly due to the second term.
Note also that ELIS applies the two terms in a soft, adaptive way via its weighted neighborhood graph, where the edges are effectively restricted to pairs of corresponding nodes (data points) between which the absolute gradients |∇_W L^(l,l')_ELIS(W | X^(l), X^(l'))| ≥ ε > 0 are nonzero, in contrast to the "hard" neighborhood system in LIS.
The ELIS loss can be rearranged as follows
$$L^{(l,l')}_{\mathrm{ELIS}}(W \mid X^{(l)}, X^{(l')}) = \sum_{i,j\in S,\, i\neq j} \Big[ u^{(l)}_{ij}\log u^{(l)}_{ij} + \big(1-u^{(l)}_{ij}\big)\log\big(1-u^{(l)}_{ij}\big) - u^{(l)}_{ij}\log u^{(l')}_{ij} - \big(1-u^{(l)}_{ij}\big)\log\big(1-u^{(l')}_{ij}\big)\Big] \qquad (10)$$
When X^(l) (hence u^(l)_{ij}) is fixed, the optimization only needs to minimize the second part, which involves u^(l')_{ij}.
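A minimal PyTorch sketch of the loss in the form of Equ. (10), for the case discussed above where X^(l) (hence u^(l)) is treated as fixed: the trainable part is exactly a binary cross-entropy with soft targets u^(l), and the remaining entropy term is a constant. Function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def elis_loss(u_l, u_lp, eps=1e-8):
    """Two-way divergence of Equ. (9)/(10) between similarity matrices u_l (layer l, fixed)
    and u_lp (layer l'). Only the cross-entropy part carries gradients."""
    u_l = u_l.detach()                                        # layer-l similarities treated as fixed targets
    off_diag = ~torch.eye(u_l.shape[0], dtype=torch.bool, device=u_l.device)
    ce = F.binary_cross_entropy(u_lp.clamp(eps, 1 - eps), u_l, reduction="none")
    entropy = -(u_l * torch.log(u_l + eps) + (1 - u_l) * torch.log(1 - u_l + eps))
    return (ce - entropy)[off_diag].sum()                     # constant entropy term restored for the full divergence
```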
2.3 ELIS ENCODER AND AUTOENCODER
The ELIS encoder consists of a cascade of nonlinear forward neural transformations constrained by the ELIS loss, aimed for manifold learning and NLDR. An ELIS (and LIS) encoder can learn an NLDR transformation without the need for a decoder (as required by autoencoders), and this encoder can generalize to unseen data (that ISOMAP, LLE, t-SNE and UMAP cannot). The total loss for the ELIS encoder is the sum of all the ELIS losses over a prescribed set of layer pairs (l, l′)
$$L_{\mathrm{Enc}}(W) = \sum_{(l,l')} \alpha^{(l,l')} L^{(l,l')}_{\mathrm{ELIS}}(W) \qquad (11)$$
where the α^(l,l') weight the relative importance of the L^(l,l')_ELIS terms.
The ELIS autoencoder has two purposes: (1) to further regularize or optimize the ELIS encoder-based manifold learning by using an ELIS decoder, and (2) to enable generation of new data on the learned manifolds by the trained ELIS decoder. The ELIS autoencoder structure consists of the ELIS encoder and decoder in cascade. The ELIS decoder is aimed to approximate the inverse transformations of the ELIS encoder and is made entirely symmetric to the ELIS encoder in its network structure. The overall weight matrices become W = [W_Enc, W_Dec]. The loss function is composed of three terms:
$$L_{\mathrm{AE}}(W) = L_{\mathrm{Enc}}(W_{\mathrm{Enc}}) + L_{\mathrm{Dec}}(W_{\mathrm{Dec}}) + L_{\mathrm{Rec}}(W) \qquad (12)$$
where L_Enc(W_Enc) is the same as Equ. (11), L_Dec(W_Dec) is defined in the same way following the symmetry, and the reconstruction loss L_Rec(W) is summed over all the corresponding layers
$$L_{\mathrm{Rec}}(W) = \sum_{l=0}^{L-1} \gamma_l \sum_{i=1}^{M} \big\| x^{(l)}_i - \hat{x}^{(l)}_i \big\|^2 \qquad (13)$$
where x̂^(l)_i are the data points at the corresponding layer of the decoder and γ_l are the weights. The constraints due to L_Rec(W) and L_Tie(W) are illustrated by the dashed lines in Fig. A1 in the Appendix.
3 EXPERIMENTS
The following experiments are aimed to evaluate ELIS in comparison with five other algorithms: UMAP (McInnes et al., 2018), t-SNE (Maaten, 2014) (for visualization), MLLE (Zhang & Wang, 2007), TopoAE (Moor et al., 2020) and LIS (Li et al., 2020), in terms of visual inspection and numerical metrics for manifold computing, visualization and data generation. Nine datasets are used, including five toy datasets: (1) SwissRoll (3-D), (2) S-Curve (3-D), (3) Severed Sphere (3-D), (4) SpheresA (101-D) (see (Moor et al., 2020) for the description) and (5) SpheresB (101-D, a modified composition from SpheresA); and four real-world datasets: (6) Coil20 (16384-D) and (7) Coil100 (49152-D) (Nene et al., 1996), (8) MNIST (784-D) (LeCun, 2013), and (9) Fashion-MNIST (784-D) (Xiao et al., 2017). The toy datasets are used because their geometric and topological structures are clear for the evaluation. The SpheresA dataset (Moor et al., 2020) is composed of 1 large sphere enclosing 10 small ones in 101-D space. SpheresB differs from SpheresA in that its large sphere consists of only 500 samples (whereas that in SpheresA has 5000) – the data is so sparse that the smallest within-sphere distance on the larger sphere can be greater than that between the larger sphere and some small ones. 5 performance metrics are used for the evaluation, whose exact definitions are given in Appendix A.2.
The pseudo-codes of the ELIS encoder and autoencoder and the hyperparameter settings are described in Appendix A.1. The implementation uses the PyTorch 1.6.1 library running on Ubuntu 18.04 on an NVIDIA V100 GPU. The time is spent mainly in the computation of neighbors. At present, the ELIS algorithm computes the neighborhood for every point pair, hence has a complexity of O(M²) for each cross-layer pair (l, l'). The complexity can be reduced to O(M^1.14) if using the nearest-neighbor-descent algorithm of UMAP (McInnes et al., 2018).
3.1 MANIFOLD LEARNING AND GENERATION
Manifold Learning. Table 2 compares the performance of the five NLDR methods, where bold numbers indicate the best results and underlined numbers the second best. The ELIS encoder has the best overall performance. Fig. 1 visualizes some representative results, and more results are given in Table. A2 and Fig. A2 - Fig. A4 in Appendix A.3.
Next, we delve into the embedding details of the Coil20 objects produced by the ELIS encoder (ELIS-Enc) and UMAP in Fig. 2. First, all the object embeddings in the ELIS-Enc result form closed loops corresponding to the 360-degree rotations (refer to Fig. A5 for the embedding-object correspondence). Second, the quality of the ELIS-derived embeddings allows us to infer some symmetries of the objects in 3D space. Four types of such symmetries are explored and discussed in Appendix A.3. The UMAP result, in contrast, does not possess this quality.
Manifold Data Generation. Fig. 3 compares images generated from interpolated points between two nearest neighbors on the embedding using three autoencoders in comparison. The images generated by the ELIS-AE have clear boundaries and look sharper than those produced by the other two autoencoders. Results for several other objects are shown in Fig. A6 in Appendix A.4.
3.2 ABLATION STUDY
Cross-layer ELIS constraint. The cross-layer ELIS constraint is weighted by α(l, l′) as in Equ. (11). Four weight schemes (1) Head-Tail, (2) Head-Mids, (3) Mids-Tail and (4) Head-Mids + Mids-Tail are designed for an L-layer encoder, as described in detail in Appendix A.5. The results are compared in Table. A3 and Fig. A7. Overall, the "Head-Mids + Mids-Tail" scheme, which imposes the most extensive cross-layer ELIS constraints and also needs more computation, achieves the best results. This justifies imposing the cross-layer ELIS constraints for performance improvement.
Effect of final ν value. The final ν value has significant influence on the within-manifold and between-manifold scatters. Fig. A8 and Fig. A9 in Appendix A.5 demonstrate the effect of varying ν in the input space and varying ν in the latent space on the results, respectively.
Continuation in ν. Continuation in the latent-space hyperparameter ν is used during the ELIS encoder learning process. The algorithm starts with a small ν in the latent space to include more global
information, and then gradually increases the value to focus more locally. The continuation strategy results in significantly better solutions, as shown in Fig. A10 in Appendix A.5.
The major merits of ELIS. Finally, we summarize the comparative results of the experiments in the following table.
                                      ELIS   LIS    UMAP   t-SNE  MLLE   TopoAE
Succeed in unfolding toy data         Yes    Yes    No     No     Yes    No
Perfect manifold structures on Coil   Yes    No     Maybe  No     No     No
High accuracy                         Most   No     Some   Some   No     No
Good reconstruction quality           Yes    Maybe  N/A    N/A    No     No
4 CONCLUSION
The proposed ELIS method preserves the nonlinear similarity metric locally across the layers of deep neural networks by optimizing a two-way divergence loss. It effectively tackles difficulties in deep manifold computing and visualization with its local geometry-preserving property. Empirical results, comparisons, and the ablation study demonstrate that ELIS is not only superior to UMAP and t-SNE for NLDR and visualization but also better than other leading manifold and autoencoder learning algorithms for NLDR and manifold data reconstruction and generation. Future work includes the following: (1) extending the unsupervised version of MLDL to self-supervised, semi-supervised and supervised tasks; (2) further formulating MLDL so that the cross-layer link hyperparameters α become learnable.
APPENDIX
A.1 THE MLDL FRAMEWORK AND ELIS
Markov-Lipschitz deep learning (MLDL) framework. The MLDL framework is illustrated in Fig. A1 (from Li et al. (2020)). The ML-AutoEncoder (of LIS or ELIS type) transforms the input X to an embedding X^(L) at layer L (the latent layer) using the ML-Encoder, and then reconstructs X̂ using the ML-Decoder. Whereas a standard neural network consists of a cascade of transformations φ^(l) (blue arrows), an MLDL network imposes the constraint between any two layers as appropriate (shown in orange arcs and dashed lines) in the form of cross-layer loss functions weighted by α^(l,l′). This encourages φ^(l) to become well-behaved local homeomorphisms. The latent features X^(L) extracted by the learned ML-Encoder can be used for downstream tasks such as visualization and classification, as well as manifold data generation using the learned ML-Decoder.
Figure A1: Illustration of Markov-Lipschitz deep learning (MLDL) framework using an MLAutoEncoder (best viewed in color).
The pseudo-codes for the ELIS encoder and the ELIS autoencoder, related hyperparameter, and a parameter continuation method are described below.
Algorithm 1: ELIS Encoder
Input: data X^(0), learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the encoder network {Φ^(1)_Enc(·|W^(1)_Enc), Φ^(2)_Enc(·|W^(2)_Enc), · · · , Φ^(L)_Enc(·|W^(L)_Enc)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to L do
        Calculate layer l's embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l = 1 to L do
        for l′ = l to L do
            Calculate the ELIS loss between layer l and layer l′, L^(l,l′)_Enc, with (10)
        end
    end
    Update parameters: W ← W − lr · Σ_{l=1}^{L} Σ_{l′=l}^{L} α^(l,l′) ∂L^(l,l′)_Enc / ∂W
end
Algorithm 2: ELIS AutoEncoder
Input: data X^(0), learning rate lr, number of epochs E, number of encoder layers L, weight hyperparameters α, γ, νList, Q
Calculate d^(0)_{i|j} with (4); calculate σ^(0)_i with (7); calculate u^(0)_{ij} with (8)
Initialize the network {Φ^(1)_Enc(·|W^(1)_Enc), · · · , Φ^(L)_Enc(·|W^(L)_Enc), Φ^(1)_Dec(·|W^(1)_Dec), · · · , Φ^(L)_Dec(·|W^(L)_Dec)}
for i = 0 to E − 1 do
    ν ← νList[i]
    for l = 1 to 2L do
        if l ≤ L then
            Calculate layer l's encoder embedding X^(l) ← Φ^(l)_Enc(X^(l−1) | W^(l)_Enc)
        else
            Calculate layer l's decoder embedding X^(l) ← Φ^(l)_Dec(X^(l−1) | W^(l)_Dec)
        end
        Calculate u^(l)_{ij} with (5) and (8)
    end
    for l = 1 to L do
        for l′ = l to L do
            Calculate the ELIS loss between layer l and layer l′, L^(l,l′)_ELIS, with (10)
        end
        Calculate the reconstruction loss between layer l and layer 2L − l, L^(l,2L−l)_Rec, with (13)
    end
    Update parameters: W ← W − lr · ( Σ_{l=1}^{L} Σ_{l′=l}^{L} α^(l,l′) ∂L^(l,l′)_Enc / ∂W + Σ_{l=1}^{L} γ^(l,2L−l) ∂L^(l,2L−l)_Rec / ∂W )
end
Hyperparameters. Table. A1 summarizes the ELIS hyperparameter setting for different datasets. Other hyperparameters are set the same for all datasets: learning rate lr = 0.01 and number of epochs E = 5000. The LeakyReLU is used as the activation function.
Table A1: Hyperparameters of ELIS for different datasets
Dataset          Points   Network structure (number of parameters)    Q in Equ. (7)   Batchsize
Swiss Roll       800      3, 500, 500, 2 (0.252M)                     10              800
S-Curve          800      3, 500, 500, 2 (0.252M)                     10              800
Severed Sphere   800      3, 500, 500, 2 (0.252M)                     10              800
SpheresA         10000    101, 500, 500, 2 (0.301M)                   10              10000
SpheresB         5500     101, 500, 500, 2 (0.301M)                   10              5500
Coil20           1440     16384, 500, 500, 2 (8.443M)                 10              1440
Coil100          7200     49152, 1000, 500, 250, 2 (24.82M)           10              2400
MNIST            60000    784, 1000, 500, 300, 2 (1.434M)             15              4000
Fashion-MNIST    60000    784, 1000, 500, 2 (1.285M)                  10              4000
Continuation in ν^(l′). In the training process, the parameter ν^(l′) used in computing sample similarities for the latent layer is graduated from a small number to a large number, e.g. ν^(l′): 0.01 → 100 (see Equ. (5)), whereas it is fixed at a large value, e.g. ν^(l′) = 100, for the input layer. Empirically, the continuation helps training converge to a good solution; the reasons behind this are to be explained in future work.
A.2 DEFINITIONS OF PERFORMANCE METRICS
1. Cont (Continuity) is asymmetric to Trust (from space X^(l′) to space X^(l)):
$$\mathrm{Cont} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\;\sum_{j\in \mathcal{N}^{(l)}_{i,k},\, j\notin \mathcal{N}^{(l')}_{i,k}}\big(r^{(l')}_{i,j}-k\big)\right)$$
where $r^{(l')}_{i,j}$ is the rank of $x^{(l')}_j$ in the $k$-NN of $x^{(l')}_i$, $M$ is the size of the dataset, and $\mathcal{N}^{(l')}_{i,k}$ is the set of indices of the $k$-NN of $x^{(l')}_i$. $k_1$ and $k_2$ are the lower and upper bounds of the $k$-NN. For SpheresA and SpheresB we focus more on global performance, so we set $k_1 = [M/14]$, $k_2 = [M/7]$; for the other datasets, we set $k_1 = 5$, $k_2 = 10$.
2. Trust (Trustworthiness) measures how well the k nearest neighbors of a point are preserved when going from space X^(l) to space X^(l′):
$$\mathrm{Trust} = \frac{1}{k_2-k_1+1}\sum_{k=k_1}^{k_2}\left(1-\frac{2}{Mk(2M-3k-1)}\sum_{i=1}^{M}\;\sum_{j\in \mathcal{N}^{(l')}_{i,k},\, j\notin \mathcal{N}^{(l)}_{i,k}}\big(r^{(l)}_{i,j}-k\big)\right)$$
where $r^{(l)}_{i,j}$ is the rank of $x^{(l)}_j$ in the $k$-NN of $x^{(l)}_i$.
3. ACC (svm)
The ACC (svm) is calculated as follows. (1) Compute nonlinear dimensionality reduction methods to obtain 2-dimensional embeddings. (2) Partition the data by 5-fold cross-validation. (3) For each fold, train the linear kernel SVM classifier using the training set and test it in the test set. (4) Calculate the mean value of the classification accuracy.
4. ACC (NN) is defined as follows:
$$\mathrm{ACC(NN)} = \frac{\sum_{i=1}^{M} \pi\big[\,Y_i = Y_{\mathcal{N}^{(L)}_{i,1}}\big]}{M}$$
where $\pi[\cdot]$ is the indicator function, $Y_i$ is the label of sample $i$, and $Y_{\mathcal{N}^{(L)}_{i,1}}$ is the label of sample $\mathcal{N}^{(L)}_{i,1}$, the nearest neighbor of sample $i$ in layer $L$.
5. AUC is defined as follows:
$$\mathrm{AUC}(f) = \frac{\sum_{p_0\in P_0}\sum_{p_1\in P_1}\pi[p_0 > p_1]}{|P_0|\cdot|P_1|}$$
$$P_0 = \left\{\frac{d^{(L)}_{ij}-\min d^{(L)}_{ij}}{\max d^{(L)}_{ij}-\min d^{(L)}_{ij}} \;\middle|\; i,j\in\{1,2,3,\dots,M\},\ Y_i = Y_j\right\}$$
$$P_1 = \left\{\frac{d^{(L)}_{ij}-\min d^{(L)}_{ij}}{\max d^{(L)}_{ij}-\min d^{(L)}_{ij}} \;\middle|\; i,j\in\{1,2,3,\dots,M\},\ Y_i \neq Y_j\right\}$$
where $d^{(L)}_{ij}$ is the distance in layer $L$, $P_0$ is the set of positive (same-label) sample pairs, and $P_1$ is the set of negative sample pairs.
A.3 MANIFOLD LEARNING AND NLDR
Manifold Learning Results. This subsection shows more results of manifold learning and NLDR obtained by using the ELIS-Enc and the ELIS-AE, in comparison with the other methods, on training and testing datasets. Some typical embedding results of manifold learning using the three autoencoder methods are visualized in Fig. A2, where t-SNE and UMAP are not included because these non-transformational methods are unable to generalize to test datasets. LIS-Enc and TopoAE learned poor
Figure A2: Comparison of visualization results of autoencoders on training and testing sets
results on the training set, so they did not work well on the test set either. ELIS-AE, as an autoencoder-based method, has a clear advantage in generalization performance because it can handle test data, and it is easy to apply to downstream tasks such as classification, regression and clustering.
Table. A2 compares performance metrics on 8 datasets, where ACC(SVM), ACC(NN) and AUC are absent for SwissRoll and SeveredSphere because these datasets have no class labels.
Fig. A3 and Fig. A4 show the visualization results of the toy and real-world datasets on the training datasets. For Swiss Roll, Severed Sphere, and S-Curve, ELIS-Enc, LIS-Enc and MLLE all maintained the topology of the original data; however, the MLLE method did not preserve the relative Euclidean distances (the resulting embedding is square instead of rectangular). For SpheresA, ELIS-Enc, LIS-Enc, and TopoAE show the "big sphere enclosing 10 small spheres" in the 2D embedding, but for SpheresB only
Table A2: Comparison of performance metrics with five different methods on eight datasets

Dataset         Metric     ELIS-Enc  LIS-Enc  UMAP    t-SNE   TopoAE  MLLE
Swiss Roll      Cont       1.0000    1.0000   0.9962  0.9969  0.9716  0.9956
                Trust      1.0000    1.0000   0.9983  0.9993  0.9809  0.9948
SeveredSphere   Cont       0.9997    0.9932   0.9967  0.9985  0.9854  0.9958
                Trust      0.9997    0.9755   0.9989  0.9995  0.9891  0.9836
SpheresA        Cont       0.7850    0.7892   0.7147  0.7548  0.8064  0.7272
                ACC(SVM)   0.5213    0.5000   0.5550  0.4992  0.4982  0.5000
                ACC(NN)    0.9985    0.9912   0.5406  0.7837  0.9944  0.5205
                AUC        0.5698    0.3362   0.5816  0.5603  0.3328  0.5961
SpheresB        Cont       0.9242    0.9255   0.9109  0.9155  0.9245  0.8943
                ACC(SVM)   0.9558    0.9100   0.9100  0.8478  0.9581  0.0965
                ACC(NN)    0.9987    0.9969   0.8469  0.9365  0.9949  0.8265
                AUC        0.9780    0.9318   0.9570  0.9570  0.9870  0.9459
Coil20          Cont       0.9956    0.9973   0.9962  0.9927  0.9901  0.9395
                ACC(SVM)   0.8941    0.8301   0.8472  0.8014  0.7078  0.1556
                ACC(NN)    0.9965    0.9354   0.8917  0.9965  0.8160  0.6410
                AUC        0.9780    0.9537   0.9842  0.9582  0.8916  0.8824
Coil100         Cont       0.9936    0.9967   0.9955  0.9950  0.9903  0.7898
                ACC(SVM)   0.9372    0.7319   0.8299  0.8278  0.5540  0.0363
                ACC(NN)    0.9976    0.8163   0.9232  0.9951  0.4797  0.3350
                AUC        0.9770    0.9667   0.9819  0.9759  0.8735  0.7322
MNIST           Cont       0.9639    0.9749   0.9646  0.9630  0.9618  0.9183
                ACC(SVM)   0.9699    0.7468   0.9690  0.9525  0.7450  0.1100
                ACC(NN)    0.9568    0.7035   0.9528  0.9567  0.7773  0.7423
                AUC        0.9725    0.8779   0.9691  0.9314  0.8000  0.8575
Fashion-MNIST   Cont       0.9848    0.9901   0.9836  0.9777  0.9864  0.9298
                ACC(SVM)   0.7125    0.6908   0.7030  0.5518  0.6067  0.1058
                ACC(NN)    0.7092    0.6427   0.7253  0.7787  0.5718  0.6145
                AUC        0.9121    0.8843   0.9165  0.8256  0.8310  0.7908
ELIS-Enc and LIS-Enc show the "enclosing" phenomenon. For Coil20 and Coil100, ELIS-Enc, UMAP and t-SNE can produce non-intersecting embeddings; however, the ELIS-Enc embeddings remain distinguishable and do not cut any of the manifolds. For MNIST and Fashion-MNIST, both UMAP and ELIS-Enc output good embeddings, but in terms of the performance metrics ELIS-Enc has a clear advantage.
Symmetry of the objects and ELIS-Enc’s embedding in Coil20. For Coil20, information about the symmetry of the objects in the picture can be obtained by analyzing the embedding generated by ELIS-Enc. Details of the ELIS-Enc embedding of the Coil20 are shown in Fig. A5.
We divided the Coil20’s manifolds into four patterns based on the symmetry of the objects in the image and the shape of the manifold.
(1) Objects with single-plane mirror symmetry have elongated elliptical embeddings. For such objects, an angle can be found from which an image taken by rotating to the left is approximately equal to an image taken by rotating to the right; the corresponding two-dimensional manifold is therefore an elongated ellipse (the two endpoints of the long axis correspond to the two images taken along the plane of symmetry).
(2) Objects with rotational symmetry have round embeddings. For rotationally symmetric objects, the images are very similar no matter what angle they are taken from, so the resulting two-dimensional manifold is squeezed inward into a circle.
(3) Objects with double vertical mirror symmetry have nested double-ring embeddings. For such objects, the image approximately repeats every 180 degrees of rotation (the reappearing image is very similar to the one from 180 degrees earlier, and
Figure A3: Comparison of visualization results for toy dataset on training set
is very close to it in the two-dimensional embedding space), so the resulting manifold consists of two nested rings.
(4) Objects whose symmetry is not evident.
Figure A4: Comparison of visualization results for real-world dataset on training set
Figure A5: Details of the ELIS-Enc’s embedding of the Coil20 and four manifold patterns
A.4 MANIFOLD DATA GENERATION
The manifold generation task generates a complete manifold structure from a finite set of manifold samples. In this experiment, the test steps are as follows:
(1) Training a network (consisting of an encoder and a decoder) that generates a 2-dimensional embedding;
(2) Performing linear interpolation in the embeddings; (3) Mapping the interpolation result back to the data space via the decoder.
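To make the procedure concrete, a minimal sketch is given below; `encoder` and `decoder` stand in for the trained ELIS-AE modules, and the function names are illustrative rather than the authors' code.

```python
import torch

def generate_along_manifold(encoder, decoder, x_a, x_b, n_steps=10):
    """Interpolate between two samples in the learned 2-D embedding,
    then map the interpolated codes back to data space with the decoder."""
    with torch.no_grad():
        z_a, z_b = encoder(x_a), encoder(x_b)          # 2-D embeddings of the two endpoints
        ts = torch.linspace(0.0, 1.0, n_steps)         # interpolation coefficients
        z_path = torch.stack([(1 - t) * z_a + t * z_b for t in ts])  # linear path in latent space
        return decoder(z_path)                         # generated samples along the manifold
```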
Generation results for comparison with the TopoAE and LIS-AE are shown in Fig. A6. The same network structure was used in the experiments.
Figure A6: Comparison of manifold generation with LIS-AE and TopoAE. The left side shows the three embedding results, with black points marking the interpolation locations. The right side shows the interpolation results: of the 12 images in each row, the leftmost and rightmost are the original images, and the ten images in the middle are the generation results along the geodesic.
ELIS-AE has an advantage over the other two methods. Neither LIS-AE nor TopoAE learns a satisfactory embedding, so their interpolation results are poor. The LIS-AE embedding has overlapping manifolds, so it generates images belonging to other manifolds (e.g., manifold A). The TopoAE embedding is messy, so the decoder reconstructs blurry images.
A.5 ABLATION STUDY
Cross-layer ELIS constraint. The effect of the ELIS-Enc constraint is determined by the weights α(l, l′) as in Eq. (11). We set the weights α(l, l′) in one of four schemes (where Head, Tail and Mids denote the input layer, the latent layer and the intermediate layers, respectively) for an L-layer encoder:
(1) Head-Tail: weight α(0,L) = 1 (the constraint is imposed between the input layer and the latent layer);
(2) Head-Mids: weights α(0,l) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the input layer and each of intermediate layers);
(3) Mids-Tail: weights α(l,L) = 1/L where l ∈ {1, 2, · · · , L} (the constraints are imposed between the latent layer and each of intermediate layers);
(4) Head-Mids + Mids-Tail: weights α(0,l) = α(l,L) = 1/2L where l ∈ {1, 2, · · · , L} (combination of Head-Mids and Mids-Tail).
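A minimal sketch of how the weight table α(l, l′) could be populated for each scheme follows; this is an illustration under the definitions above, not the authors' code.

```python
def cross_layer_weights(L, scheme):
    """Return a dict mapping layer pairs (l, l') to the weight alpha(l, l')."""
    if scheme == "head_tail":            # (1) input layer <-> latent layer only
        return {(0, L): 1.0}
    if scheme == "head_mids":            # (2) input layer <-> each intermediate layer
        return {(0, l): 1.0 / L for l in range(1, L + 1)}
    if scheme == "mids_tail":            # (3) each intermediate layer <-> latent layer
        return {(l, L): 1.0 / L for l in range(1, L + 1)}
    if scheme == "head_mids+mids_tail":  # (4) combination of (2) and (3)
        w = {(0, l): 1.0 / (2 * L) for l in range(1, L + 1)}
        w.update({(l, L): 1.0 / (2 * L) for l in range(1, L + 1)})
        return w
    raise ValueError(f"unknown scheme: {scheme}")
```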
In this ablation study, a 10-layer neural network is used and the width of the network is determined depending on the dataset. (Swiss Roll:[3, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresA:[101, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2], SpheresB:[101, 500, 400, 300, 300, 200, 200, 100, 100, 2], COIL20:[16384, 50, 45, 35, 30, 25, 20, 15, 10, 5, 2])
The evaluation metrics for the four cross-layer schemes are presented in Table A3. The corresponding visualization results are shown in Fig. A7.
Figure A7: Comparison in visualisation of four different cross-layer schemes.
The visualization results and metrics show that cross-layer scheme (4) gives the best results for a 10-layer network. The network is very difficult to train if the ELIS loss acts only on the first and last layers (scheme (1)). The network is easier to train if ELIS losses act between the first layer and all intermediate layers (scheme (2)). ELIS losses between the intermediate and last layers (scheme (3)) do not improve the performance of the embedding when used alone.
Table A3: Comparison of performance metrics for the four cross-layer schemes.
Scheme           Cont    Trust   ACC(SVM)  ACC(NN)  AUC

Swiss Roll
  (1)            -       -       -         -        -
  (2)            0.9999  0.9999  -         -        -
  (3)            -       -       -         -        -
  (4)            0.9999  0.9999  -         -        -

SpheresA
  (1)            -       -       -         -        -
  (2)            0.9402  0.8832  0.9149    0.9478   0.9696
  (3)            -       -       -         -        -
  (4)            0.9376  0.8858  0.9529    0.9784   0.9721

SpheresB
  (1)            0.9111  0.6373  0.5225    1.0000   0.5486
  (2)            0.9087  0.6341  0.5145    1.0000   0.5489
  (3)            0.8520  0.6299  0.5388    1.0000   0.4474
  (4)            0.8167  0.6432  0.8740    0.9936   0.7461

Coil20
  (1)            -       -       -         -        -
  (2)            0.9955  0.9852  0.8454    0.9792   0.9721
  (3)            0.9904  0.9876  0.8459    0.9847   0.9524
  (4)            0.9947  0.9901  0.8867    0.9986   0.9735
However, when combined with scheme (2), as in scheme (4), these losses improve the metrics of the resulting latent space.
Effect of ν value. Fig. A8 and Fig. A9 show the effect of the data-space and latent-space hyperparameters ν on the embedding results.
Figure A8: Embedding results with varying ν in input space.
The ν in input space controls the range of sensitivity in data space. If the input-space ν is small, the derivative of the probabilistic mapping function of the input layer will be small, and the probability will be insensitive to distances in data space; in other words, ELIS-Enc will degenerate into LIS-Enc. To pay more attention to the global information of the input data, raise ν; to pay more attention to local information, lower the input-space ν. By default, ELIS-Enc does not change this hyperparameter and uses an input-space ν = 100.
The ν in latent space controls what the latent space displays (detailed local information versus global information). If the latent-space ν is small, ELIS tends to display global information; if it is large, ELIS-Enc tends to display local information. Fig. A9 shows, from left to right, the transition from showing global information to showing excessive local information.
Figure A9: Embedding results with varying ν in latent space.
Continuation strategy. Continuation in the latent-space hyperparameter ν is used during the ELIS encoder learning process. The algorithm starts with a small latent-space ν to capture more global information and then gradually increases the value to focus more locally. The necessity of parameter continuation is shown in Fig. A10.
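A minimal sketch of such a continuation schedule is shown below; the exponential ramp from ν = 0.001 to ν = 100 is an assumption, since the text only specifies the start and end values.

```python
def latent_nu_schedule(epoch, total_epochs, nu_start=1e-3, nu_end=100.0):
    """Exponentially interpolate the latent-space nu from nu_start to nu_end over training."""
    frac = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return nu_start * (nu_end / nu_start) ** frac

# Example usage: increase nu gradually, shifting focus from global to local structure.
# for epoch in range(total_epochs):
#     nu_latent = latent_nu_schedule(epoch, total_epochs)
#     ...use nu_latent in the latent-space similarity kernel...
```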
Figure A10: Ablation study with and without parameter continuation in the latent-space ν. (The upper row shows results obtained via parameter continuation ν = 0.001 → ν = 100 in latent space; the lower row shows results with a fixed ν = 100.)
The experiments show that the effect of parameter continuation (ν = 0.001 → ν = 100 in latent space) is substantial, with large improvements on Swiss Roll, Coil20 and MNIST. | 1. What is the main contribution of the paper on NLDR?
2. What are the weaknesses of the paper regarding its claims and experiments?
3. How does the reviewer assess the clarity and quality of the paper's content?
4. Do you have any concerns about the method's ability to preserve locality or show clusters better compared to other NLDR methods?
5. Are there any issues with the naming convention used in the paper, specifically the choice of "ELIS"? | Review | Review
This is yet another NLDR paper. But I don't see any shining point here. It is a marginal modification of UMAP, parameterized by neural networks.
What is the problem you are solving? You must be aware of tens of existing NLDR methods. Have you identified a severe problem among them, and does your method solve that problem? Throughout the paper I don't see any existing problem being addressed, just a non-targeted proposal.
If your method preserves locality better (just a guess from your title), then your experiments should have quantified the improvement in this sense. However, Section 3 instead uses a lot of classification benchmarks.
If your method aims at showing clusters better, clearly ELIS loses to UMAP. Once the colors are removed, users cannot see the cluster boundaries in ELIS visualizations.
I doubt the Coil20 NNACC for ELIS-Enc. For 1440 instances, 0.9965 NNACC means there are only five misclassified points. But as shown in Fig.2(a), several classes, e.g.,(3), (4), (6) are heavily mixed.
It says ELIS is scalable to large datasets, but all experimented datasets are quite small.
The name ELIS is not good. LIS is just a rename of multidimensional scaling (MDS), and reference to MDS is missing.
Captions of figures and tables are too short. Descriptions should be moved from the text to the captions. |
ICLR | Title
Meta Learning with Minimax Regularization
Abstract
Even though meta-learning has attracted wide attention in recent years, the generalization problem of meta-learning is still not well addressed. Existing works focus on meta-generalization to unseen tasks at the meta-level, while ignoring that adapted models may not generalize to the task domain at the adaptation level, which cannot be solved trivially. To this end, we propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to be more sensitive to the new task, and minimize the regularizer in the outer loop to resist overfitting of the meta-model. This adversarial regularization forces the meta-algorithm to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the generalization of meta-learning. We conduct extensive experiments on representative meta-learning scenarios to verify our proposed method, including few-shot learning and robust reweighting. The results show that our method consistently improves the performance of the meta-learning algorithms and demonstrates the effectiveness of Minimax-Meta Regularization.
1 INTRODUCTION
Meta-learning has been proven to be a powerful paradigm for extracting well-generalized knowledge from data and accelerating the learning process for new tasks (Thrun & Pratt, 2012). It simulates the machine learning process with a bi-level objective (Finn et al., 2017), evaluating the query (meta-validation) set with an adapted model learned from the meta-model on the support (meta-training) set. Meta-learning has received increasing attention in many machine learning settings such as few-shot learning (Sung et al., 2018; Sun et al., 2019; Wang et al., 2020) and robust learning (Ren et al., 2018; Shu et al., 2019; Li et al., 2019), and can be deployed in many practical applications (Kang et al., 2019; Dou et al., 2019; Yu et al., 2018; Madotto et al., 2019). Despite this success, the additional level of learning creates another potential source of overfitting (Rajendran et al., 2020b), which significantly challenges the generalization of meta-learning algorithms. Specifically, the meta-model should generalize to unseen tasks (meta-generalization). Meanwhile, the adapted model should generalize to the domain of a specific task, which we call adaptation-generalization (Figure 1). A key challenge is how to regularize meta-algorithms to ensure this two-level generalization.
Deep neural networks tend to overfit sampling bias due to their representation power, leading to poor generalization (Song et al., 2020). Regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can effectively prevent the model from overfitting and enhance generalization. However, directly applying such regularization to the networks limits the flexibility of fast adaptation in the inner loop (meta-training) of meta-learning (Yao et al., 2021). Recent works aim to address the meta-generalization problem with meta-regularizations, such as constraining the meta-initialization space (Yin et al., 2019), enforcing similar performance of the meta-model across different tasks (Jamal & Qi, 2019), and augmenting meta-training data (Rajendran et al., 2020b; Ni et al., 2021; Yao et al., 2021). These methods significantly enhance generalization to unseen tasks. However, they ignore the adaptation-generalization to the data distribution of the meta-testing tasks (Figure 1), which is not negligible.
This work takes a first step toward optimizing both meta-generalization and adaptation-generalization for meta-learning. However, adaptation-generalization is significantly challenging
for meta-learning, where we face a dilemma between fast adaptation and generality: 1) regularizing the model at meta-testing time can enhance generalization to the task domain, but limits the fast adaptation that is the goal of meta-learning; 2) exacerbating overfitting to the few-shot samples at meta-testing time can enhance fast adaptation, but limits generality to the task domain.
To address this challenge, we consider learning a meta-model that is resistant to adapted-model overfitting at meta-testing time. To achieve this, we design a general mechanism called Minimax-Meta Regularization for meta-learning. During meta-training, we push the adapted model to overfit the support data by adding an inverse (negative) regularization in the inner loop, and push the meta-model to generalize on the test samples by adding a positive regularization in the outer loop. By doing so, the learned meta-model can be meta-generalized, making adapted models perform well on the query (meta-validation) set even when they are prone to overfit the support (meta-training) set. Therefore, during meta-testing, the adapted model can still generalize to the task domain even though it overfits the few-shot samples. In particular, Minimax-Meta Regularization is general enough to be implemented in any bi-level optimization framework without additional computational cost.
To verify the above intuition, we conduct experiments with the basic MAML (Finn et al., 2017) framework. We find that both positively regularizing the outer-loop meta-training and negatively regularizing the inner-loop adaptation can significantly enhance few-shot classification. Another interesting finding is that adding positive regularization in the inner loop impairs performance, which indirectly supports our proposal. We conduct extensive experiments on few-shot regression, few-shot classification, and robust reweighting (Ren et al., 2018). The experimental results show that Minimax-Meta Regularization generally improves the performance of bi-level meta-learning algorithms and is compatible with common methodologies for enhancing meta-learning. Moreover, Minimax-Meta Regularization shows the capability to improve the generalization of meta-learning algorithms and helps address meta-overfitting problems to a certain extent.
Our Contributions. 1) We identify a limitation of previous works on meta-generalization, namely that they ignore adaptation-generalization; 2) we design a general mechanism named Minimax-Meta Regularization for meta-learning, which aims to capture a meta-model that is both meta-generalized
and resistant to adaptation overfitting; 3) we empirically verify the intuition behind Minimax-Meta Regularization and give possible explanations; 4) we conduct three different bi-level optimization tasks to show the efficacy of the proposed method.
2 PRELIMINARY
We first give a brief introduction to and notation for meta-learning. In the meta-learning problem setting that we consider, the goal is to learn a generalized initialization model that adapts to new tasks from only a few samples. To achieve this, it requires a set of support (meta-training) data $\{\mathcal{D}^s_i = \{x^s_{i,j}, y^s_{i,j}\}_{j=1}^{k}\}_{i=1}^{n}$ and query (meta-testing) data $\{\mathcal{D}^q_i = \{x^q_{i,j}, y^q_{i,j}\}_{j=1}^{m}\}_{i=1}^{n}$ sampled from tasks $\{\mathcal{T}_i\}_{i=1}^{n}$ drawn from distribution $p(\mathcal{T})$, where $k$ and $m$ denote the number of data samples in the support and query sets, and $n$ is the number of tasks. Denote by $L$ and $\mu$ the loss function and the inner-loop learning rate.
Meta-learning (Finn et al., 2017) simulates the adaptation and evaluation procedure of machine learning, and aims to learn a well-generalized model $f$ parameterized by $\theta^*$ through the following bi-level optimization:
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}^q_i} L(\phi_i(\theta,\mathcal{D}^s_i), z), \quad \text{s.t.} \quad \phi_i(\theta,\mathcal{D}^s_i) = \theta - \mu\nabla_\theta \sum_{z\in\mathcal{D}^s_i} L(\theta, z) \qquad (1)$$
where $z$ represents a data sample $(x, y)$. The outer loop (representing the meta-validation phase) measures the generalization performance of the adapted model $\phi_i$ on the query data $\mathcal{D}^q_i$. The inner loop (representing the meta-training phase) specifies that the adapted model $\phi_i$ is fine-tuned from the initialization $\theta$ by gradient descent on the support data $\mathcal{D}^s_i$. Note that more than one gradient step can be taken; formulation (1) is written with a single step for brevity.
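For concreteness, a minimal PyTorch-style sketch of this bi-level update with a single inner step is shown below; `model_loss(params, data)` is an assumed placeholder for evaluating L with an explicit parameter list, and task batching and higher-order details are simplified.

```python
import torch

def maml_step(theta, tasks, model_loss, mu=0.01, meta_lr=0.001):
    """One outer-loop update of formulation (1): adapt on support data, evaluate on query data."""
    meta_loss = 0.0
    for support, query in tasks:
        inner_loss = model_loss(theta, support)
        grad = torch.autograd.grad(inner_loss, theta, create_graph=True)  # keep graph for the meta-gradient
        phi = [p - mu * g for p, g in zip(theta, grad)]                    # adapted parameters phi_i
        meta_loss = meta_loss + model_loss(phi, query)                     # outer-loop (query) loss
    meta_loss = meta_loss / len(tasks)
    meta_grad = torch.autograd.grad(meta_loss, theta)
    return [p - meta_lr * g for p, g in zip(theta, meta_grad)]             # updated meta-parameters
```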
3 META LEARNING WITH MINIMAX-META REGULARIZATION
We aim to learn a well-generalized meta-initialization that can quickly adapt to new tasks with robust performance. To achieve this, the meta-learner should be meta-generalized, i.e., learn a meta-model $\theta$ that generalizes across the task distribution $p(\mathcal{T})$, and adaptation-generalized, i.e., the adapted model $\phi(\theta,\mathcal{D}^s)$, $\mathcal{D}^s \sim \mathcal{T}$, should generalize to the data distribution of the task domain $\mathcal{T}$. The meta-generalization problem has been studied in many previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and can be addressed by designing regularization in the outer loop. However, due to the limited number of samples in $\mathcal{D}^s$ and $\mathcal{D}^q$, the adaptation-generalization problem is significantly challenging for meta-learning.
To address this, we propose a novel and general regularization framework for meta-learning – Minimax-Meta Regularization. In this section, we first present the training objective for minimax-regularized meta-learning together with the intuition behind the design, and then run a simulation to verify the high-level insight.
3.1 TRAINING OBJECTIVE
Based on formulation (1) for meta-learning, we present the minimax-regularized meta-learning training objective as follows, where we add a positive regularization in the outer loop to achieve meta-generalization, and an inverse (negative) regularization in the inner loop to achieve adaptation-generalization.
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}^q_i} L(\phi_i(\theta,\mathcal{D}^s_i), z) + \lambda_{out}\frac{1}{n}\sum_{i=1}^{n} \mathrm{Reg}_{out}(\phi_i(\theta,\mathcal{D}^s_i)), \qquad (2)$$
$$\text{s.t.} \quad \phi_i(\theta,\mathcal{D}^s_i) = \arg\min_{\phi} \Big\langle \mu\nabla_\theta \sum_{z\in\mathcal{D}^s_i} L(\theta, z), \phi \Big\rangle + \frac{1}{2}\|\phi-\theta\|^2 - \lambda_{in}\frac{1}{n}\sum_{i=1}^{n}\mathrm{Reg}_{in}(\phi), \qquad (3)$$
where $\mathrm{Reg}_{out}$ and $\mathrm{Reg}_{in}$ are the regularizers in the outer and inner loops, while $\lambda_{out} \geq 0$ and $\lambda_{in} \geq 0$ are their coefficients. Note that the formulation $\phi_i(\theta,\mathcal{D}^s_i) = \arg\min_{\phi} \langle \mu\nabla_\theta \sum_{z\in\mathcal{D}^s_i} L(\theta, z), \phi \rangle + \frac{1}{2}\|\phi-\theta\|^2$ in the inner loop is the equivalent mirror-descent (Beck & Teboulle, 2003) version of gradient descent. We next introduce the intuition behind this design.
Outer positive regularization. As defined in Eq. (2), we add a positive regularization $\mathrm{Reg}_{out}(\phi_i(\theta,\mathcal{D}^s_i))$ that penalizes overfitting of the adapted model $\phi_i(\theta,\mathcal{D}^s_i)$. By doing so, the meta-learner is forced to learn a generalized meta-model $\theta^*$ such that the adapted model $\phi_i(\theta^*,\mathcal{D}^s_i)$ on each task does not overfit and generalizes to the query data. This idea has been studied in previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and has been shown to significantly enhance meta-generalization.
Inner inverse regularization. The generalization performance depends not only on the complexity of the adapted model $\phi_i(\theta,\mathcal{D}^s_i)$, but also on the adaptation rule, i.e., the formulation of the inner-loop function. As defined in Eq. (3), we add an inverse regularization $\mathrm{Reg}_{in}(\phi)$ that negatively regularizes the model complexity of the adapted model $\phi_i(\theta,\mathcal{D}^s_i)$. By doing so, the inner-loop function simulates the adaptation overfitting of meta-testing by forcing the adapted model to overfit during meta-training. Therefore, by learning with minimax-regularized meta-learning, the learned meta-model $\theta^*$ becomes resistant to adaptation overfitting.
From the above discussion, Minimax-Meta Regularization enables meta-learning to capture a meta-model that is both meta-generalized and adaptation-generalized. The framework introduces essentially no additional computational cost. In addition, Minimax-Meta Regularization applies to any bi-level optimization formulation, and can thus be directly applied to different meta-algorithms on different bi-level learning problems.
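Continuing the earlier sketch, the only change Minimax-Meta Regularization would require is subtracting the regularizer in the inner step and adding it in the outer objective; this is an illustrative sketch, with `reg(params)` standing for a generic positive regularizer such as an l2-norm or negative-entropy term.

```python
import torch

def minimax_maml_step(theta, tasks, model_loss, reg, lam_in=0.1, lam_out=0.1,
                      mu=0.01, meta_lr=0.001):
    """One outer-loop update of Eqs. (2)-(3): inverse regularization inside, positive outside."""
    meta_loss = 0.0
    for support, query in tasks:
        inner_loss = model_loss(theta, support) - lam_in * reg(theta)       # inner loop: negative reg
        grad = torch.autograd.grad(inner_loss, theta, create_graph=True)
        phi = [p - mu * g for p, g in zip(theta, grad)]
        meta_loss = meta_loss + model_loss(phi, query) + lam_out * reg(phi)  # outer loop: positive reg
    meta_loss = meta_loss / len(tasks)
    meta_grad = torch.autograd.grad(meta_loss, theta)
    return [p - meta_lr * g for p, g in zip(theta, meta_grad)]
```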
3.2 EMPIRICAL VERIFICATION.
We next verify the design with a simulation that runs the basic MAML framework (Finn et al., 2017) under different regularization types. As illustrated in Table 1, we make the following observations:
Outer positive regularization enhances the generalization performance. Comparing the results of "no regularization" and "regularize the outer-loop", we observe that adding outer regularization yields 1.56% and 4.42% accuracy improvements in the 1-shot and 5-shot experiments, which verifies the efficacy of the outer regularization. This aligns with the intuition that outer regularization enhances meta-generalization, leading to better performance.
Inner negative regularization enhances the generalization performance. Comparing the results of "no regularization" and "inverse regularize the inner-loop", we observe that adding inner inverse regularization yields 1.17% and 1.24% accuracy improvements in the 1-shot and 5-shot experiments, which verifies the efficacy of the inner inverse regularization. This aligns with the intuition that inner inverse regularization enhances adaptation-generalization, thus improving performance.
The outer regularization and inner inverse regularization are compatible. Comparing the results of "Minimax-Meta Regularization", "regularize the outer-loop", and "inverse regularize the inner-loop", we observe that Minimax-Meta Regularization yields 0.52% (1-shot)/0.61% (5-shot) and 0.92% (1-shot)/3.79% (5-shot) accuracy improvements over solely regularizing the outer loop and solely inverse-regularizing the inner loop, respectively, which verifies the compatibility of the inner inverse regularization. This aligns with the intuition that meta-generalization and adaptation-generalization are not in conflict.
Inner positive regularization impairs the generalization performance. Comparing the results of "no regularization" and "regularize the inner-loop", we observe that adding inner positive regularization causes -1.86% and -0.34% accuracy drops in the 1-shot and 5-shot experiments, which aligns with the intuition that positive regularization limiting the adaptation in the inner loop impairs adaptation-generalization.
4 RELATED WORK
Meta-learning. A line of meta-learning methods has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016). However, they need to place constraints on the model architecture. Another line aims to learn a transferable metric space between samples from previous tasks (Vinyals et al., 2016; Snell et al., 2017; Mishra et al., 2018; Oreshkin et al., 2018), but it is limited to classification problems. In this paper, we focus on gradient-based meta-learning methods that learn a meta-initialization (Finn et al., 2017; 2018; Li et al., 2017; Finn & Levine, 2018; Grant et al., 2018; Lee & Choi, 2018; Park & Oliva, 2019; Flennerhag et al., 2020), which is well-generalized for meta-training tasks and agnostic to both model architecture and problem type. However, these approaches have been shown to overfit the meta-training tasks and generalize poorly to meta-testing tasks (Yoon et al., 2018; Collins et al., 2020; Rothfuss et al., 2021; Yao et al., 2021).
Meta-Regularization. Standard regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can significantly enhance the generality of single-loop machine learning. However, straightforwardly regularizing the neural networks limits the flexibility of fast adaptation in the inner loop (Yao et al., 2021). Recently, a few works have proposed meta-regularizations to improve meta-generalization. MR-MAML (Yin et al., 2019) constrains the search space of the meta-model and allows the adaptation to be sufficient in the inner loop. Jamal & Qi (2019) proposed TAML to enforce the meta-model to perform similarly across tasks. Rajendran et al. (2020a) explored an information-theoretic framework of meta-augmentation by adding randomness to labels of both support and query sets. Yao et al. (2021) proposed two task augmentation methods – MetaMix and Channel Shuffle – which are theoretically proved to generalize to unseen tasks. Ni et al. (2021) investigated the distinct ways in which data augmentation can be integrated at both the image and class levels. Rothfuss et al. (2021) addressed the meta-generalization problem using the PAC-Bayesian framework and proposed PACOH, which is PAC-optimal with Gaussian processes. However, these works focus only on meta-generalization, i.e., generalization to unseen tasks, while adaptation-generalization, which measures how the adapted model generalizes to the task domain, is barely considered.
This paper proposes the Minimax-Meta Regularization for meta-learning, implementing a positive regularization in the outer-loop and a negative regularization in the inner-loop. The framework can enhance both meta-generalization and adaptation-generalization, and thus improve the performance.
5 EXPERIMENTS
In this section, we conduct extensive experiments on three types of classical meta-learning tasks, including few-shot classification, few-shot regression, and robust reweighting with meta-learning, to demonstrate the efficacy of our proposed method. With these experiments, we demonstrate that our method i) outperforms previous meta-learning algorithms in terms of predictive accuracy; and ii) mitigates meta-overfitting effectively. We introduce the experimental setup, results, and analysis in the following subsections.
5.1 FEW-SHOT CLASSIFICATION
We first carry out experiments on the few-shot classification task, one of the most popular tasks for evaluating meta-learning algorithms. To verify the effectiveness of our approach, we integrate Minimax-Meta Regularization into bi-level optimization meta-learning algorithms and benchmark them against other methods.
5.1.1 EXPERIMENTAL SETUP
Datasets. For the few-shot classification task, we experiment on the publicly released datasets Mini-Imagenet (Ravi & Larochelle, 2017; Vinyals et al., 2016) and Omniglot (Lake et al., 2015), following the few-shot benchmark setting provided in (Antoniou et al., 2018). The Omniglot dataset is a collection of 1623 character classes from different alphabets, each with 20 instances. In the experiment, all character classes are shuffled and then divided into a training set, a validation set, and a test set, with 1150, 50, and 423 classes respectively. Rotation augmentation with 90-degree increments is applied to the images to create new classes. The second dataset used in the few-shot classification experiment is Mini-Imagenet (Ravi & Larochelle, 2017), which is sampled from ImageNet and contains 100 classes with 600 instances each. Each image is resized to 84 × 84. Following (Ravi & Larochelle, 2017), we split the Mini-Imagenet dataset into 64 classes for training, 12 classes for validation, and 24 classes for testing.
Experimental details. We select MAML (Finn et al., 2017) as the representative bi-level optimization meta-learning model. To evaluate the effectiveness of Minimax-Meta Regularization, we first run the baseline MAML on the 5-way 1/5-shot Mini-Imagenet setting. Then, on top of the original MAML, we implement Minimax-MAML by adding Minimax-Meta Regularization. We then compare Minimax-MAML with the original MAML and other meta-learning baselines on the 5-way 1/5-shot Mini-Imagenet setting and the 20-way 1-shot Omniglot setting. The compared baselines include Matching Networks (Vinyals et al., 2016), Meta-SGD (Li et al., 2017), Meta-Networks (Munkhdalai & Yu, 2017), Siamese Nets (Koch et al., 2015), Neural Statistician (Edwards & Storkey, 2016), and Memory Module (Kaiser et al., 2017). We also include MAML++ (Antoniou et al., 2018) and further implement Minimax-MAML++ for comparison. MAML++ is an improved version of MAML that combines 6 specific methodologies to improve the performance of MAML. We include MAML++ to study two questions: i) by comparing Minimax-MAML with MAML++, we analyze whether Minimax-Meta Regularization, as a general improving mechanism, has the potential to outperform algorithm-specific methodologies; ii) by comparing Minimax-MAML++ with MAML++, we evaluate whether Minimax-Meta Regularization is compatible with complicated model-specific improving methodologies in bi-level optimization models. Note that regularization is only added during the training phase. All the MAML/MAML++ experiments involving regularization share the same form of regularization objective, which combines l2-norm regularization and output entropy regularization. More detailed experimental settings can be found in Appendix B.
5.1.2 RESULTS AND ANALYSIS
The baseline comparison results under the Omniglot and Mini-Imagenet settings are shown in Table 2 and Table 3. Minimax-Meta Regularization is shown to improve both the original MAML and the MAML++ frameworks. In the Omniglot 20-way 1-shot classification experiment, the mean accuracy of MAML and MAML++ improves from 94.20% and 97.21% to 95.76% and 97.77% respectively. Both methods had unstable results in these experiments; after adopting Minimax-Meta Regularization, the std values of their final accuracy are significantly reduced, indicating better stability. Minimax-MAML++ reaches the best performance in this setting compared to the other baselines, with good stability. Significant improvements from Minimax-Meta Regularization are also shown in the Mini-Imagenet 5-way 1/5-shot classification experiments. In the 1-shot experiments, the original MAML cannot outperform the Meta-SGD and Meta-Networks baselines; Minimax-Meta Regularization improves the accuracy of MAML from an average of 48.75% to 50.84%, which enables MAML to outperform these baselines. In the 5-shot experiments, Minimax-MAML outperforms MAML++ by 1.02%. Considering that MAML++ adopts 6 individual techniques specifically designed for MAML, this result shows the strong effectiveness of Minimax-Meta Regularization as a general methodology.
5.2 FEW-SHOT REGRESSION
5.2.1 EXPERIMENTAL SETUP
Datasets. For the few-shot regression task, we consider a non-mutually-exclusive regression problem based on the Sinusoids synthetic dataset. Each Sinusoids regression task involves regressing from the input to the output of a generated sine wave, where the amplitudes of the sinusoids differ among tasks. In our experiment, we follow the setting provided by Yin et al. (2019). The Sinusoids data are created as follows: the amplitude A of the sinusoid is uniformly sampled from a set of 20 scalars {0.1, 0.3, · · · , 4}; u is sampled uniformly from [−5, 5]; and y is sampled from $\mathcal{N}(A\sin(u), 0.1^2)$. Experimental details. During training, both u and A are provided as input to the models, i.e., x = (u, A). At test time, we expand the range of the tasks by sampling the amplitude A uniformly from [0.1, 4] and use a random one-hot vector as the input of the network. The meta-training tasks are a proper subset of the meta-test tasks. Under this setting, the amplitude input at training time makes the regression problem non-mutually-exclusive, which makes the meta-learning model prone to the memorization problem (Yin et al., 2019) during training. In the experiments, we compare with the representative bi-level optimization meta-learning baseline MAML (Finn et al., 2017) and the meta-regularized MAML (MR-MAML) (Yin et al., 2019), where the regularization is either on the activations (MR-MAML(A)) or on the weights (MR-MAML(W)). Both MR-MAML(A) and MR-MAML(W) were originally designed for solving the memorization problem. Minimax-Meta Regularization is implemented for all three methods above, with the l2-norm as the regularization objective for both the inner and outer loops.
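For concreteness, a minimal sketch of the task sampler under this setting is shown below; the 20-element amplitude grid is approximated with `np.linspace`, and the names are illustrative rather than the authors' code.

```python
import numpy as np

def sample_sinusoid_task(k_shot, amplitudes=np.linspace(0.1, 4.0, 20), noise_std=0.1):
    """Sample one non-mutually-exclusive sinusoid regression task: x = (u, A), y ~ N(A sin(u), 0.1^2)."""
    A = np.random.choice(amplitudes)                  # amplitude identifies the task
    u = np.random.uniform(-5.0, 5.0, size=k_shot)     # input locations
    y = np.random.normal(A * np.sin(u), noise_std)    # noisy targets
    x = np.stack([u, np.full(k_shot, A)], axis=1)     # amplitude is fed as an input feature
    return x, y
```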
5.2.2 RESULTS AND ANALYSIS
The original MAML was shown to be capable of solving the standard sinusoid few-shot regression problem (Finn et al., 2017). However, the results of the non-mutually-exclusive sinusoid regression suggest that the added amplitude input makes MAML suffer from the memorization problem and gives poor test results. From the experimental results, we observe that Minimax-Meta Regularization improves the performance of MAML on both the 5-shot and 10-shot tasks. In the 10-shot task, the
test MSE of MAML improves from 0.153 to 0.125 with Minimax-Meta Regularization, which is close to MR-MAML(A). This observation suggests that the minimax regularization can help the meta-learning model become more resistant to the memorization problem to some extent. Moreover, by comparing the MR-MAML methods with and without Minimax-Meta Regularization, we find that both MR-MAML(A) and MR-MAML(W) gain performance improvements with Minimax-Meta Regularization on both the 5-shot and 10-shot tasks, and the smaller std values indicate improved stability. This shows that the minimax regularization is compatible with methods specifically designed for addressing the memorization problem and can further improve their performance.
5.3 ROBUST REWEIGHTING WITH META-LEARNING
5.3.1 EXPERIMENTAL SETUP
To verify the general effectiveness of our proposed method, we further conduct experiments on the task of robust reweighting with meta-learning. For this experiment, we compare the performance of our method and the baselines on a noisy MNIST dataset, which is created by randomly flipping the labels of 40% of the training images. Each image has a dimension of 28×28. The task is to classify each image into the 0 to 9 handwritten digits, where the 10000 training images contain 40% noisy labels. The validation set consists of 100 correctly-labeled images that are randomly selected from the correctly-labeled samples in the training set, to ensure that the reweighting method does not have the privilege of training on more data. We use LeNet-5 as the backbone model and train the model for 1000 epochs. The learning rates for the first 1/3, the middle 1/3, and the last 1/3 of the training epochs are set to 1e-2, 1e-3, and 1e-4 respectively. The basic meta-learning baseline we evaluate here is Meta-Reweighting, introduced by Ren et al. (2018). The Meta-Reweighting algorithm learns to assign weights to training examples for robust learning. To determine the example weights, Meta-Reweighting performs a meta gradient descent step on the mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our method adds Minimax-Meta Regularization on top of Meta-Reweighting. We add regularization on the outer loop, where the optimal weights are calculated and adopted for the meta-update. The inverted regularization is added on the inner loop, where the weighted inner model fits the clean unbiased validation set for optimal weight calculation. Intuitively, such regularization makes the model more conservative when updating on the noisy training data in the outer loop and values the diversity of predictions more, thereby resisting overfitting. At the same time, the inner model is encouraged to make sharper predictions on the clean validation set by the inverted regularization, so that the potential of the clean data can be more fully utilized. The regularization objective used in our method is based on the output entropy: the outer loop encourages high prediction entropy on the noisy training data, while the inverted regularization in the inner loop encourages low entropy on the clean validation set. We call our method Minimax Reweighting. Detailed information on the implementation of Minimax Reweighting is provided in Appendix C.
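A condensed sketch of the two regularized losses used by Minimax Reweighting is given below, following Algorithm 3 in Appendix C; the example-weight computation via the perturbation variables is omitted, and the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def per_example_entropy(logits):
    """Shannon entropy of the softmax prediction for each example."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

def minimax_reweighting_losses(train_logits, train_labels, weights,
                               valid_logits, valid_labels, gamma_in=0.25, gamma_out=2.0):
    """Clean-validation loss with an inverted (entropy-minimizing) term, and weighted noisy-train
    loss with an entropy-maximizing term, as used in Minimax Reweighting."""
    valid_loss = F.cross_entropy(valid_logits, valid_labels) \
                 + gamma_in * per_example_entropy(valid_logits).mean()
    train_ce = F.cross_entropy(train_logits, train_labels, reduction="none")
    train_loss = (weights * (train_ce - gamma_out * per_example_entropy(train_logits))).sum()
    return valid_loss, train_loss
```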
5.3.2 RESULTS AND ANALYSIS
Under this setting, models experience long training with a large initial learning rate and are extremely prone to overfitting the training dataset. To understand the performance of the models under this robust learning setting, we first look at the training curves of the models (Figures 2 and 3). Since the training set is noisy, models that overfit the training set show a significant performance drop on the clean test set. From the perspective of robust learning, the directly trained model sets the lower performance bound to some extent: since it does not have any denoising ability, it quickly overfits the training set, reaching peak accuracy on the clean test set around the 80th epoch, after which overfitting begins. We can identify the overfitting from the training and testing accuracy curves. Since 40% of the labels in the training set are wrong, once the model predicts the training data with accuracy larger than 60%, it is fitting the distribution of the noisy training data instead of the ground-truth distribution, and at the same time the performance on the clean test set begins to drop. Finally, we observe that the training and testing accuracy of the directly trained model converge to nearly 100% and 60% respectively, which indicates a complete overfit. On the contrary, a model with optimal learning robustness should never overfit the training set: it would maintain a training accuracy close to 60% (since only 60% of the training labels are correct) and keep optimal performance on the clean test set. Compared to direct training, the training curve of the Meta-Reweighting baseline (Ren et al., 2018) shows a significant improvement in learning robustness. However, it still suffers from overfitting: it neither completely overfits the training dataset nor ignores all the noise, and its training accuracy converges to around 70%. The Meta-Reweighting model finally maintains a test accuracy of around 87.5%, experiencing a continual test accuracy drop after around the 100th epoch. Minimax-Reweighting nearly reaches the optimal learning robustness under this setting: its training accuracy stays around 60% with hardly any change throughout the training phase, and its testing accuracy maintains a peak value of around 95.5% without an observable drop. To further evaluate the effectiveness of Minimax-Reweighting, we implement outer-loop-only regularization on top of the Meta-Reweighting algorithm for comparison. The results indicate that only regularizing the outer loop at the meta-level cannot reach the performance of Minimax-Meta Regularization. Quantitative results of the final accuracy are shown in Table 5. As for training accuracy, the original Meta-Reweighting algorithm reaches 70.38%, which indicates a certain degree of overfitting. On the contrary, after adding regularization, both Minimax Reweighting and outer-loop regularized Meta-Reweighting preserve a training accuracy of around 60%, which represents resistance to training-set overfitting. However, Minimax Reweighting outperforms outer-loop regularized Meta-Reweighting in clean test set accuracy.
6 CONCLUSION
This paper studies the generalization problem of meta-learning. We go one step deeper and propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to fit an "aggressive, more specific, prone to overfitting" hypothesis, and minimize the regularizer in the outer loop to fit a "conservative, more general, resistant to overfitting" hypothesis. Such adversarial regularization forces the meta-model to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the robustness of the meta-model. In the experiments, representative meta-learning scenarios, including few-shot learning and robust reweighting, are used to verify our method. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrates the advantage of Minimax-Meta Regularization.
A GENERAL FORM OF MINIMAX-META REGULARIZATION IN META-LEARNING
Algorithm 1 General Form of Minimax-Meta Regularization in Meta-Learning
Require: Meta-training set D_meta-train, Learner M with parameters φ
Require: Meta-Learner R with parameters θ
Ensure: φ_T
 1: randomly initialize φ
 2: for d = 1, n do
 3:     D_support, D_query ← random dataset from D_meta-train
 4:     φ_0 ← c_0
 5:     for t = 1, T do
 6:         X_t, Y_t ← random batch from D_support
 7:         L_t ← L(M(X_t; φ_{t-1}), Y_t) + InverseRegObjective(M(X_t; φ_{t-1}), Y_t, φ_{t-1})
 8:         c_t ← R((∇_{φ_{t-1}} L_t, L_t); θ_{d-1})
 9:         φ_t ← c_t
10:     end for
11:     X, Y ← D_query
12:     L_test ← L(M(X; φ_T), Y) + RegObjective(M(X; φ_T), Y, θ_{d-1})
13:     Update θ_d using ∇_{φ_T} L_test
14: end for
B DETAILS OF FEW-SHOT CLASSIFICATION EXPERIMENT
B.1 IMPLEMENTATION OF MINIMAX-MAML
Algorithm 2 Minimax-MAML
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters; γ_e, γ_n: regularization rates for information entropy and L2 norm
Ensure: θ_T
 1: randomly initialize θ
 2: while not done do
 3:     for all T_i do
 4:         Evaluate ∇_θ L_{T_i}(f_θ) with respect to K examples
 5:         Compute adapted parameters with gradient descent:
            θ'_i = θ − α ∇_θ ( L_{T_i}(f_θ) + γ_e Entropy_{T_i}(f_θ) − γ_n L2_Norm(θ) )
 6:     end for
 7:     Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} ( L_{T_i}(f_{θ'_i}) − γ_e Entropy_{T_i}(f_{θ'_i}) + γ_n L2_Norm(θ'_i) )
 8: end while
Pseudo code is shown in Algorithm 2.
For all few-shot classification experiments, we use γ_e = 2 and γ_n = 5e-5.
All the MAML/MAML++ experiments involving regularization share the same form of regularization objective. The regularization is achieved by combining the l2-norm regularization and output
entropy regularization. The bi-level optimization objective could be written as:
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}^q_i} \Big[ L(\phi_i(\theta,\mathcal{D}^s_i), z) + \delta_{out}\cdot\big(\gamma_n \cdot 0.5\|\phi_i(\theta,\mathcal{D}^s_i)\|^2 - \gamma_e H(\phi_i(\theta,\mathcal{D}^s_i), z)\big) \Big], \qquad (4)$$
$$\text{s.t.} \quad \phi_i(\theta,\mathcal{D}^s_i) = \theta - \mu\nabla_\theta \sum_{z\in\mathcal{D}^s_i} \Big[ L(\theta, z) + \delta_{in}\cdot\big(\gamma_n \cdot 0.5\|\theta\|^2 - \gamma_e H(\theta, z)\big) \Big], \qquad (5)$$
where H(θ, z) denotes the information entropy of the prediction for z using θ as the model parameters. Here δ_in and δ_out determine the type of regularization for the inner loop and outer loop respectively. Their values can be 1, 0 or -1, corresponding to normal regularization, no regularization, and inverse regularization. The original MAML has δ_in = 0 and δ_out = 0; MAML becomes Minimax-MAML when δ_in and δ_out are set to -1 and 1. The selection of δ_in and δ_out values for the other experiments can be found in Table 1. γ_n and γ_e are hyper-parameters controlling the regularization rate; we use γ_n = 0.0005 and γ_e = 2 for all the experiments. All the MAML experiments take 5 inner steps. In one experiment, the training takes 100 epochs, and each epoch consists of 500 iterations. After each epoch, the performance of the model is evaluated on the validation set. When the training is complete, predictions on the test set are made by an ensemble of the top 5 models on the validation set. Each experiment is repeated 3 times. The Adam optimizer is adopted for model training, with a learning rate of 0.001, β1 = 0.9 and β2 = 0.99. The task batch size for all Omniglot experiments is 16; Mini-Imagenet experiments use task batch sizes of 4 and 2 for the 1-shot and 5-shot experiments respectively.
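As a concrete illustration (not the authors' code), the combined regularizer of Eqs. (4)–(5) could be computed as follows; `params` and `logits` are assumed to be the model parameters and its predictions on a batch.

```python
import torch
import torch.nn.functional as F

def combined_regularizer(params, logits, delta, gamma_n=5e-4, gamma_e=2.0):
    """delta = 1 applies the regularizer, delta = -1 inverts it, delta = 0 disables it."""
    l2 = 0.5 * sum(p.pow(2).sum() for p in params)              # 0.5 * ||theta||^2
    log_p = F.log_softmax(logits, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()          # mean prediction entropy H
    return delta * (gamma_n * l2 - gamma_e * entropy)
```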
As for the empirical verification experiment, on top of the original MAML, we implement different individual regularization methods and run experiments for each one separately. The regularization methods include outer-loop-only regularization, inner-loop-only regularization, inverse inner-loop regularization, loss function regularization, and Minimax-Meta Regularization. This stage of experiments completes the empirical verification of the method discussed in Section 3.2.
C IMPLEMENTATION DETAIL OF MINIMAX META-REWEIGHTING
Pseudo code is shown in Algorithm 3. In our experiment, we use γ_in = 0.25 and γ_out = 2.
Algorithm 3 Weighted Minimax Meta-Reweighting
Require: model θ_0, train set D_f, validation set D_g, n, m, γ_in, γ_out
Ensure: θ_T
 1: for t = 0 ... T − 1 do
 2:     {X_f, y_f} ← SampleMiniBatch(D_f, n)
 3:     {X_g, y_g} ← SampleMiniBatch(D_g, m)
 4:     ŷ_f ← Forward(X_f, θ_t)
 5:     ε ← 0;  l_f ← Σ_{i=1}^{n} ε_i C(y_{f,i}, ŷ_{f,i})
 6:     ∇θ_t ← BackwardAD(l_f, θ_t)
 7:     θ̂_t ← θ_t − α ∇θ_t
 8:     ŷ_g ← Forward(X_g, θ̂_t)
 9:     l_g ← (1/m) Σ_{i=1}^{m} ( C(y_{g,i}, ŷ_{g,i}) + γ_in Entropy(ŷ_{g,i}) )
10:     ∇ε ← BackwardAD(l_g, ε)
11:     w̃ ← max(−∇ε, 0);  w ← w̃ / ( Σ_j w̃_j + δ(Σ_j w̃_j) )
12:     l̂_f ← Σ_{i=1}^{n} w_i ( C(y_{f,i}, ŷ_{f,i}) − γ_out Entropy(ŷ_{f,i}) )
13:     ∇θ_t ← BackwardAD(l̂_f, θ_t)
14:     θ_{t+1} ← OptimizerStep(θ_t, ∇θ_t)
15: end for | 1. What is the focus and contribution of the paper on meta-learning?
2. What are the strengths and weaknesses of the proposed Minimax-Meta Regularization?
3. Do you have any concerns regarding the writing and explanations in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or issues with the empirical studies conducted in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes the Minimax-Meta Regularization to improve the meta-generalization and adaptation generalization for meta-learning at the same time.
Review
Strength:
The method is simple to use.
The paper conducts several empirical studies with both classification and regression problems.
Weakness:
A major problem of this paper is that the writing in multiple places is not precise, which makes it hard for the reader to parse. Since the method proposed by this paper is heuristic, clearly explaining the intuition is essential. For example: (i) "regularizing the model during meta-testing time can enhance the generalization to the task domain" – what does meta-testing mean here? It usually means the evaluation on a new task after all the training. (ii) "a meta model θ that is robust to tasks distribution", "robust to the data distribution of the task domain T" – how can one be robust to a distribution or a domain? (iii) From Figure 1, it is hard to understand what the proposed approach is.
The meta-generalization should be clearly defined. This paper defines it as "generalize to the unseen tasks". If it means making good predictions on the query set of an unseen task, how does meta-generalization differ from adaptation-generalization?
In the related work, the paper mentions that the works (Yin et al. 2019) (Yao et al. 2021) are solving meta-overfitting, which is defined here as "meta-model overfits to meta-training tasks due to the limited number of tasks". But the overfitting problem studied in these related works is not because the number of tasks is limited, but rather because the tasks are not mutually exclusive.
The reviewer thinks the intuitive explanation of why the proposed regularization should work is not convincing yet, and there is no analytical result to support this intuition. The empirical advantage, as shown in Table 1, is also small.
There are several other places where the writing is inaccurate, though it only influences the understanding slightly: the proposed method "without additional computational cost"; really? Surprisingly, we find (this method works); why surprising after introducing the intuition? What is μ in Eq. (3)? Where are the results for Sec. 5.2.2?
ICLR | Title
Meta Learning with Minimax Regularization
Abstract
Even though meta-learning has attracted wide attention in recent years, the generalization problem of meta-learning is still not well addressed. Existing works focus on meta-generalization to unseen tasks at the meta-level, while ignoring that adapted models may not generalize to the task domain at the adaptation level, which cannot be solved trivially. To this end, we propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to be more sensitive to the new task, and minimize the regularizer in the outer loop to resist overfitting of the meta-model. This adversarial regularization forces the meta-algorithm to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the generalization of meta-learning. We conduct extensive experiments on representative meta-learning scenarios to verify our proposed method, including few-shot learning and robust reweighting. The results show that our method consistently improves the performance of the meta-learning algorithms and demonstrates the effectiveness of Minimax-Meta Regularization.
1 INTRODUCTION
Meta-learning has been proven to be a powerful paradigm for extracting well-generalized knowledge from data and accelerating the learning process for new tasks (Thrun & Pratt, 2012). It simulates the machine learning process with a bi-level objective (Finn et al., 2017), evaluating the query (meta-validation) set with an adapted model learned from the meta-model on the support (meta-training) set. Meta-learning has received increasing attention in many machine learning settings such as few-shot learning (Sung et al., 2018; Sun et al., 2019; Wang et al., 2020) and robust learning (Ren et al., 2018; Shu et al., 2019; Li et al., 2019), and can be deployed in many practical applications (Kang et al., 2019; Dou et al., 2019; Yu et al., 2018; Madotto et al., 2019). Despite this success, the additional level of learning creates another potential source of overfitting (Rajendran et al., 2020b), which significantly challenges the generalization of meta-learning algorithms. Specifically, the meta-model should generalize to unseen tasks (meta-generalization). Meanwhile, the adapted model should generalize to the domain of a specific task, which we call adaptation-generalization (Figure 1). A key challenge is how to regularize meta-algorithms to ensure this two-level generalization.
Deep neural networks tend to overfit sampling bias due to their representation power, leading to poor generalization (Song et al., 2020). Regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can effectively prevent the model from overfitting and enhance generalization. However, directly applying such regularization to the networks limits the flexibility of fast adaptation in the inner loop (meta-training) of meta-learning (Yao et al., 2021). Recent works aim to address the meta-generalization problem with meta-regularizations, such as constraining the meta-initialization space (Yin et al., 2019), enforcing similar performance of the meta-model across different tasks (Jamal & Qi, 2019), and augmenting meta-training data (Rajendran et al., 2020b; Ni et al., 2021; Yao et al., 2021). These methods significantly enhance generalization to unseen tasks. However, they ignore the adaptation-generalization to the data distribution of the meta-testing tasks (Figure 1), which is not negligible.
This work takes a first step toward optimizing both meta-generalization and adaptation-generalization for meta-learning. However, adaptation-generalization is significantly challenging
for meta-learning, where we face a dilemma between fast adaptation and generality: 1) regularizing the model at meta-testing time can enhance generalization to the task domain, but limits the fast adaptation that is the goal of meta-learning; 2) exacerbating overfitting to the few-shot samples at meta-testing time can enhance fast adaptation, but limits generality to the task domain.
To address this challenge, we consider learning a meta-model that is resistant to adapted-model overfitting at meta-testing time. To achieve this, we design a general mechanism called Minimax-Meta Regularization for meta-learning. During meta-training, we push the adapted model to overfit the support data by adding an inverse (negative) regularization in the inner loop, and push the meta-model to generalize on the test samples by adding a positive regularization in the outer loop. By doing so, the learned meta-model can be meta-generalized, making adapted models perform well on the query (meta-validation) set even when they are prone to overfit the support (meta-training) set. Therefore, during meta-testing, the adapted model can still generalize to the task domain even though it overfits the few-shot samples. In particular, Minimax-Meta Regularization is general enough to be implemented in any bi-level optimization framework without additional computational cost.
To verify the above intuition, we conduct experiments with the basic MAML (Finn et al., 2017) framework. We find that both positively regularizing the outer-loop meta-training and negatively regularizing the inner-loop adaptation can significantly enhance few-shot classification. Another interesting finding is that adding positive regularization in the inner loop impairs performance, which indirectly supports our proposal. We conduct extensive experiments on few-shot regression, few-shot classification, and robust reweighting (Ren et al., 2018). The experimental results show that Minimax-Meta Regularization generally improves the performance of bi-level meta-learning algorithms and is compatible with common methodologies for enhancing meta-learning. Moreover, Minimax-Meta Regularization shows the capability to improve the generalization of meta-learning algorithms and helps address meta-overfitting problems to a certain extent.
Our Contributions. 1) We identify a limitation of previous works on meta-generalization, namely that they ignore adaptation-generalization; 2) we design a general mechanism named Minimax-Meta Regularization for meta-learning, which aims to capture a meta-model that is both meta-generalized
and resistant to adaptation overfitting; 3) we empirically verify the intuition behind Minimax-Meta Regularization and give possible explanations; 4) we conduct three different bi-level optimization tasks to show the efficacy of the proposed method.
2 PRELIMINARY
We first give a brief introduction to and notation for meta-learning. In the meta-learning problem setting that we consider, the goal is to learn a generalized initialization model that adapts to new tasks from only a few samples. To achieve this, it requires a set of support (meta-training) data $\{\mathcal{D}^s_i = \{x^s_{i,j}, y^s_{i,j}\}_{j=1}^{k}\}_{i=1}^{n}$ and query (meta-testing) data $\{\mathcal{D}^q_i = \{x^q_{i,j}, y^q_{i,j}\}_{j=1}^{m}\}_{i=1}^{n}$ sampled from tasks $\{\mathcal{T}_i\}_{i=1}^{n}$ drawn from distribution $p(\mathcal{T})$, where $k$ and $m$ denote the number of data samples in the support and query sets, and $n$ is the number of tasks. Denote by $L$ and $\mu$ the loss function and the inner-loop learning rate.
Meta-learning (Finn et al., 2017) simulates the adaptation and evaluation procedure of machine learning, and aims to learn a well-generalized model $f$ parameterized by $\theta^*$ through the following bi-level optimization:
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}^q_i} L(\phi_i(\theta,\mathcal{D}^s_i), z), \quad \text{s.t.} \quad \phi_i(\theta,\mathcal{D}^s_i) = \theta - \mu\nabla_\theta \sum_{z\in\mathcal{D}^s_i} L(\theta, z) \qquad (1)$$
where $z$ represents a data sample $(x, y)$. The outer loop (representing the meta-validation phase) measures the generalization performance of the adapted model $\phi_i$ on the query data $\mathcal{D}^q_i$. The inner loop (representing the meta-training phase) specifies that the adapted model $\phi_i$ is fine-tuned from the initialization $\theta$ by gradient descent on the support data $\mathcal{D}^s_i$. Note that more than one gradient step can be taken; formulation (1) is written with a single step for brevity.
3 META LEARNING WITH MINIMAX-META REGULARIZATION
We aim to learn a well-generalized meta-initialization that can quickly adapt to new tasks with robust performance. To achieve this, the meta-learner should be meta-generalized, i.e. learn a meta-model $\theta$ that is robust to the task distribution $p(\mathcal{T})$, and adaptation-generalized, i.e. the adapted model $\phi(\theta,\mathcal{D}^s), \mathcal{D}^s \sim \mathcal{T}$ should be robust to the data distribution of the task domain $\mathcal{T}$. The meta-generalization problem has been studied in many previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and can be addressed by designing regularization in the outer loop. However, due to the limited number of samples in $\mathcal{D}^s, \mathcal{D}^q$, the adaptation-generalization problem is significantly more challenging for meta-learning.
To address this, we propose a novel and general regularization framework for meta-learning – Minimax-Meta Regularization. In this section, we first present the training objective for minimax-regularized meta-learning together with the intuition behind the design, and then run a simulation to verify the high-level insight.
3.1 TRAINING OBJECTIVE
Based on formulation (1) for meta-learning, we present the minimax-regularized meta-learning training objective as follows, where we add a positive regularization in the outer loop to achieve meta-generalization, and an inverse (negative) regularization in the inner loop to achieve adaptation-generalization:
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}_i^q} \mathcal{L}\big(\phi_i(\theta,\mathcal{D}_i^s), z\big) + \lambda_{out}\frac{1}{n}\sum_{i=1}^{n}\mathrm{Reg}_{out}\big(\phi_i(\theta,\mathcal{D}_i^s)\big), \qquad (2)$$
$$\text{s.t. } \phi_i(\theta,\mathcal{D}_i^s) = \arg\min_{\phi}\; \Big\langle \mu\nabla_{\theta}\sum_{z\in\mathcal{D}_i^s}\mathcal{L}(\theta,z),\, \phi\Big\rangle + \frac{1}{2}\|\phi-\theta\|^2 - \lambda_{in}\,\mathrm{Reg}_{in}(\phi), \qquad (3)$$
where $\mathrm{Reg}_{out}$ and $\mathrm{Reg}_{in}$ are the regularizers in the outer and inner loops, and $\lambda_{out} \ge 0$ and $\lambda_{in} \ge 0$ are their respective coefficients. Note that the formulation $\phi_i(\theta,\mathcal{D}_i^s) = \arg\min_{\phi}\langle \mu\nabla_{\theta}\sum_{z\in\mathcal{D}_i^s}\mathcal{L}(\theta,z), \phi\rangle + \frac{1}{2}\|\phi-\theta\|^2$ in the inner loop is the equivalent mirror-descent (Beck & Teboulle, 2003) form of gradient descent: setting the gradient of this proximal objective with respect to $\phi$ to zero gives $\phi = \theta - \mu\nabla_{\theta}\sum_{z\in\mathcal{D}_i^s}\mathcal{L}(\theta,z)$, which is exactly the inner update of Eq. (1). We next introduce the intuition behind this design.
Outer positive regularization. As defined in Eq. 2, we add a positive regularization $\mathrm{Reg}_{out}(\phi_i(\theta,\mathcal{D}_i^s))$ that penalizes overfitting of the adapted-model $\phi_i(\theta,\mathcal{D}_i^s)$. By doing so, the meta-learner is forced to learn a generalized meta-model $\theta^*$ such that the adapted-model $\phi_i(\theta^*,\mathcal{D}_i^s)$ on each task is not overfitting and generalizes to the query data. This idea has been studied in previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and has been shown to significantly enhance meta-generalization.
Inner inverse regularization. The generalization performance depends not only on the complexity of the adapted-model $\phi_i(\theta,\mathcal{D}_i^s)$, but also on the adaptation rule, i.e. the formulation of the inner-loop function. As defined in Eq. 3, we add an inverse regularization $\mathrm{Reg}_{in}(\phi)$ that negatively regularizes the model complexity of the adapted-model $\phi_i(\theta,\mathcal{D}_i^s)$. By doing so, the inner-loop function simulates the adaptation overfitting that occurs during meta-testing by enforcing the adapted-model to overfit during meta-training. Therefore, when learning with minimax-regularized meta-learning, the learned meta-model $\theta^*$ becomes resistant to adaptation overfitting.
From the above discussion, Minimax-Meta Regularization enables meta-learning to capture a meta-model that is both meta-generalized and adaptation-generalized. The framework is computationally efficient and incurs no additional computational cost. In addition, Minimax-Meta Regularization applies to any bi-level optimization formulation, and can thus be directly applied to different meta-algorithms on different bi-level learning problems.
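As a sketch of Eqs. (2)-(3), the per-task loss above can be modified as follows, using an L2-norm regularizer as one concrete (assumed) choice for Reg_in and Reg_out; the inner update takes a plain gradient step on the inversely regularized support loss, which is the practical form of the mirror-descent constraint in Eq. (3). All names are placeholders.

import torch
import torch.nn.functional as F
from torch.func import functional_call

def l2_reg(params):
    # example regularizer: 0.5 * sum of squared parameter norms
    return 0.5 * sum((p ** 2).sum() for p in params.values())

def minimax_task_loss(model, support, query, mu, lam_in, lam_out):
    x_s, y_s = support
    x_q, y_q = query
    theta = dict(model.named_parameters())
    # inner loop (Eq. 3): the regularizer is subtracted (inverse regularization),
    # pushing the adapted model to fit the support set aggressively
    inner = F.cross_entropy(functional_call(model, theta, (x_s,)), y_s) - lam_in * l2_reg(theta)
    grads = torch.autograd.grad(inner, list(theta.values()), create_graph=True)
    phi = {name: p - mu * g for (name, p), g in zip(theta.items(), grads)}
    # outer loop (Eq. 2): positive regularization of the adapted model on the query set
    outer = F.cross_entropy(functional_call(model, phi, (x_q,)), y_q) + lam_out * l2_reg(phi)
    return outer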
3.2 EMPIRICAL VERIFICATION
We next verify the design with an empirical test, running the basic MAML framework (Finn et al., 2017) with different regularization types. As illustrated in Table 1, we make the following observations:
Outer positive regularization enhances the generalization performance. Comparing the results of “no regularization” and “regularize the outer-loop”, we observe that adding outer regularization yields 1.56% and 4.42% accuracy improvements in the 1-shot and 5-shot experiments, which verifies the efficacy of the outer regularization. This is aligned with the intuition that outer regularization enhances meta-generalization, leading to better performance.
Inner negative regularization enhances the generalization performance. Comparing the results of “no regularization” and “inverse regularize the inner-loop”, we observe that adding inner inverse regularization yields 1.17% and 1.24% accuracy improvements in the 1-shot and 5-shot experiments, which verifies the efficacy of the inner inverse regularization. This is aligned with the intuition that inner inverse regularization enhances adaptation-generalization, thus improving performance.
The outer regularization and inner inverse regularization are compatible. Comparing the results of “Minimax-Meta Regularization”, “regularize the outer-loop”, and “inverse regularize the inner-loop”, we observe that Minimax-Meta Regularization achieves 0.52% (1-shot)/0.61% (5-shot) and 0.92% (1-shot)/3.79% (5-shot) accuracy improvements over solely regularizing the outer loop and solely inverse-regularizing the inner loop, respectively, which verifies the compatibility of the inner inverse regularization. This is aligned with the intuition that meta-generalization and adaptation-generalization are not in conflict.
Inner positive regularization impairs the generalization performance. Comparing the results of “no regularization” and “regularize the inner-loop”, we observe that adding inner positive regularization suffers accuracy drops of 1.86% and 0.34% in the 1-shot and 5-shot experiments, which aligns with the intuition that positive regularization limiting the adaptation in the inner loop impairs adaptation-generalization.
4 RELATED WORK
Meta-learning. One line of meta-learning methods trains recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016). However, these methods need to place constraints on the model architecture. Another line aims to learn a transferable metric space between samples from previous tasks (Vinyals et al., 2016; Snell et al., 2017; Mishra et al., 2018; Oreshkin et al., 2018), but is limited to classification problems. In this paper, we focus on gradient-based meta-learning methods that learn a meta-initialization (Finn et al., 2017; 2018; Li et al., 2017; Finn & Levine, 2018; Grant et al., 2018; Lee & Choi, 2018; Park & Oliva, 2019; Flennerhag et al., 2020), which is well-generalized across meta-training tasks while being agnostic to both model architecture and problem type. However, these approaches have been shown to overfit the meta-training tasks and to generalize poorly to meta-testing tasks (Yoon et al., 2018; Collins et al., 2020; Rothfuss et al., 2021; Yao et al., 2021).
Meta-Regularization. Standard regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can significantly enhance the generality of single-loop machine learning. However, straightforwardly regularizing the neural network limits the flexibility of fast adaptation in the inner loop (Yao et al., 2021). Recently, a few works have designed meta-regularizations to improve meta-generalization. MR-MAML (Yin et al., 2019) constrains the search space of the meta-model while allowing the adaptation to be sufficient in the inner loop. Jamal & Qi (2019) proposed TAML to enforce the meta-model to perform similarly across tasks. Rajendran et al. (2020a) explored an information-theoretic framework of meta-augmentation by adding randomness to the labels of both support and query sets. Yao et al. (2021) proposed two task augmentation methods – MetaMix and Channel Shuffle – which are theoretically shown to generalize to unseen tasks. Ni et al. (2021) investigated the distinct ways in which data augmentation can be integrated at both the image and class levels. Rothfuss et al. (2021) addressed the meta-generalization problem using the PAC-Bayesian framework and proposed PACOH, which is PAC-optimal with Gaussian processes. However, these works focus only on meta-generalization, i.e., generalization to unseen tasks, while adaptation-generalization, which measures how the adapted-model generalizes to the task domain, is barely considered.
This paper proposes Minimax-Meta Regularization for meta-learning, which implements a positive regularization in the outer loop and a negative regularization in the inner loop. The framework enhances both meta-generalization and adaptation-generalization, and thus improves performance.
5 EXPERIMENTS
In this section, we conduct extensive experiments on three types of classical meta-learning tasks, including few-shot classification, few-shot regression, and robust reweighting with meta-learning, to demonstrate the efficacy of our proposed methods. With these experiments, we demonstrate that our methods i) outperform previous meta-learning algorithms in terms of predictive accuracy and ii) mitigate meta-overfitting effectively. We introduce the experimental setup, results, and analysis in the following subsections.
5.1 FEW-SHOT CLASSIFICATION
We first carry out experiments on the few-shot classification task, one of the most popular tasks for evaluating meta-learning algorithms. To verify the effectiveness of our approach, we integrate Minimax-Meta Regularization into bi-level optimization meta-learning algorithms and benchmark them against other methods.
5.1.1 EXPERIMENTAL SETUP
Datasets. For the few-shot classification task, we experiment on the publicly released datasets Mini-Imagenet (Ravi & Larochelle, 2017; Vinyals et al., 2016) and Omniglot (Lake et al., 2015), following the few-shot benchmark setting provided in (Antoniou et al., 2018). The Omniglot dataset is a collection of 1623 character classes from different alphabets. Each class in the dataset contains 20 instances. In the experiment, all character classes are shuffled, and the shuffled classes are divided into a training set, validation set, and test set of 1150, 50, and 423 classes respectively. Rotation augmentation is applied to the images in 90-degree increments to create new classes. The second dataset used in the few-shot classification experiment is Mini-Imagenet (Ravi & Larochelle, 2017), which is sampled from ImageNet and contains 600 instances for each of 100 classes. Each image is resized to 84 × 84. Following Ravi & Larochelle (2017), we split the Mini-Imagenet dataset into 64 classes for training, 12 classes for validation, and 24 classes for testing.
Experimental details. We select MAML (Finn et al., 2017) as the representative bi-level optimization meta-learning model. To evaluate the effectiveness of Minimax-Meta Regularization, we first run the baseline MAML on the 5-way 1/5-shot Mini-Imagenet setting. Then, on top of the original MAML, we implement Minimax-MAML by adding Minimax-Meta Regularization. We compare Minimax-MAML with the original MAML and other meta-learning baselines on the 5-way 1/5-shot Mini-Imagenet setting and the 20-way 1-shot Omniglot setting. The compared baselines include Matching Networks (Vinyals et al., 2016), Meta-SGD (Li et al., 2017), Meta-Networks (Munkhdalai & Yu, 2017), Siamese Nets (Koch et al., 2015), Neural Statistician (Edwards & Storkey, 2016), and Memory Module (Kaiser et al., 2017). We also include MAML++ (Antoniou et al., 2018) in the experiment and further implement Minimax-MAML++ for comparison. MAML++ is an improved version of MAML that adds 6 specific methodologies for performance improvement. We include MAML++ to study two questions: i) by comparing Minimax-MAML with MAML++, we analyze whether Minimax-Meta Regularization, as a general improving mechanism, has the potential to outperform algorithm-specific methodologies; ii) by comparing Minimax-MAML++ with MAML++, we evaluate whether Minimax-Meta Regularization is compatible with complex model-specific improvements in bi-level optimization models. Note that regularization is only added during the training phase. All MAML/MAML++ experiments involving regularization share the same form of regularization objective, which combines l2-norm regularization and output-entropy regularization, as sketched below. More detailed experimental settings can be found in Appendix B.
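A sketch of this combined regularization objective is shown below, assuming a classifier with logit outputs; the function name and interface are illustrative, and γe, γn are the coefficients defined in Appendix B.

import torch.nn.functional as F

def reg_objective(logits, params, gamma_n, gamma_e):
    # l2-norm term over the (adapted or meta) parameters
    l2 = 0.5 * sum((p ** 2).sum() for p in params)
    # mean Shannon entropy of the predicted class distributions
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    # added to the outer-loop loss (delta_out = +1) and subtracted in the inner loop (delta_in = -1)
    return gamma_n * l2 - gamma_e * entropy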
5.1.2 RESULTS AND ANALYSIS
The baseline comparison results under the Omniglot and Mini-Imagenet settings are shown in Table 2 and Table 3. Minimax-Meta Regularization is shown to improve both the original MAML and the MAML++ frameworks. In the Omniglot 20-way 1-shot classification experiment, the mean accuracy of MAML and MAML++ improves from 94.20% and 97.21% to 95.76% and 97.77% respectively. Both methods had unstable results in these experiments; after adopting Minimax-Meta Regularization, the standard deviations of their final accuracies are significantly reduced, indicating better stability. Minimax-MAML++ reaches the best performance in this setting compared to the other baselines, with good stability. Significant improvements from Minimax-Meta Regularization are also observed in the Mini-Imagenet 5-way 1/5-shot classification experiments. In the 1-shot experiments, the original MAML cannot outperform the Meta-SGD and Meta-Networks baselines; Minimax-Meta Regularization improves the accuracy of MAML from an average of 48.75% to 50.84%, which enables MAML to outperform the other baselines. In the 5-shot experiments, Minimax-MAML outperforms MAML++ by 1.02%. Considering that MAML++ adopts 6 individual techniques specifically designed for MAML, this result demonstrates the strong effectiveness of Minimax-Meta Regularization as a general methodology.
5.2 FEW-SHOT REGRESSION
5.2.1 EXPERIMENTAL SETUP
Datasets. For the few-shot regression task, we consider a non-mutually-exclusive regression problem based on the synthetic Sinusoids dataset. Each Sinusoids regression task involves regressing from the input to the output of a generated sine wave, where the amplitudes of the sinusoids differ among tasks. In our experiment, we follow the setting provided by Yin et al. (2019). The Sinusoids data is created as follows: the amplitude $A$ of the sinusoid is uniformly sampled from a set of 20 scalars $\{0.1, 0.3, \dots, 4\}$; $u$ is sampled uniformly from $[-5, 5]$; and $y$ is sampled from $\mathcal{N}(A\sin(u), 0.1^2)$. A data-generation sketch is given below.

Experimental details. During training, both $u$ and $A$ are provided as input to the models, i.e. $x = (u, A)$. At test time, we expand the range of the tasks by sampling the amplitude $A$ uniformly from $[0.1, 4]$ and use a random one-hot vector as the amplitude input of the network. The meta-training tasks are a proper subset of the meta-test tasks. Under this setting, the amplitude input in the training phase makes the regression problem non-mutually-exclusive, which makes the meta-learning model prone to the memorization problem (Yin et al., 2019) during training. In the experiments, we compare with the representative bi-level optimization meta-learning baseline MAML (Finn et al., 2017), and with meta-regularized MAML (MR-MAML) (Yin et al., 2019), where the regularization is either on the activations (MR-MAML(A)) or on the weights (MR-MAML(W)). Both MR-MAML(A) and MR-MAML(W) were originally designed for solving the memorization problem. Minimax-Meta Regularization is implemented for all three methods with the l2-norm as the regularization objective for both the inner and outer loops.
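For concreteness, a sketch of the task sampler described above follows; the 0.2-spaced amplitude grid and the input encoding are assumptions consistent with the description rather than a verbatim copy of our data pipeline.

import numpy as np

def sample_sinusoid_task(k_shot, rng=np.random):
    # 20 candidate amplitudes {0.1, 0.3, ..., 3.9} (the text lists the grid as {0.1, 0.3, ..., 4})
    amplitudes = 0.1 + 0.2 * np.arange(20)
    A = rng.choice(amplitudes)
    u = rng.uniform(-5.0, 5.0, size=k_shot)
    y = rng.normal(A * np.sin(u), 0.1)              # y ~ N(A sin(u), 0.1^2)
    x = np.stack([u, np.full(k_shot, A)], axis=1)   # training-time input x = (u, A)
    return x, y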
5.2.2 RESULTS AND ANALYSIS
Original MAML has been shown to be capable of solving the standard sinusoid few-shot regression problem (Finn et al., 2017). However, the non-mutually-exclusive sinusoid regression results in Table 4 suggest that the added amplitude input makes MAML suffer from the memorization problem and yields poor test results. From the experimental results, we observe that Minimax-Meta Regularization improves the performance of MAML on both 5-shot and 10-shot tasks. In the 10-shot task, the test MSE of MAML improves from 0.153 to 0.125 with Minimax-Meta Regularization, which is close to MR-MAML(A). This observation suggests that the minimax regularization can make the meta-learning model more resistant to the memorization problem to some extent. Moreover, comparing the MR-MAML methods with Minimax-Meta Regularization against the original MR-MAML models, we find that both MR-MAML(A) and MR-MAML(W) gain performance improvements with Minimax-Meta Regularization on both 5-shot and 10-shot tasks, and the smaller standard deviations indicate improved stability. This shows that Minimax-Meta Regularization is compatible with methods specifically designed to address the memorization problem and can further improve their performance.
5.3 ROBUST REWEIGHTING WITH META-LEARNING
5.3.1 EXPERIMENTAL SETUP
To verify the general effectiveness of our proposed methods, we further conduct experiments on the task of robust reweighting with meta-learning. For this experiment, we compare the performance of our method and the baselines on a noisy MNIST dataset, which is created by randomly flipping the labels of 40% of the training images. Each image has a dimension of 28×28. The task is to classify each image into the handwritten digits 0 to 9, where 40% of the 10000 training images have noisy labels. The validation set consists of 100 correctly-labeled images that are randomly selected from the correctly-labeled samples in the training set, ensuring that the reweighting method does not have the privilege of training on more data. We use LeNet-5 as the backbone model and train it for 1000 epochs. The learning rates for the first 1/3, the middle 1/3, and the last 1/3 of the training epochs are set to 1e-2, 1e-3, and 1e-4 respectively. The basic meta-learning baseline we evaluate here is Meta-Reweighting (Ren et al., 2018). The Meta-Reweighting algorithm learns to assign weights to training examples for robust learning. To determine the example weights, Meta-Reweighting performs a meta gradient descent step on the mini-batch example weights (initialized to zero) to minimize the loss on a clean, unbiased validation set. Our method adds Minimax-Meta Regularization on top of Meta-Reweighting. We add regularization in the outer loop, where the optimal weights are calculated and adopted for the meta-update, and inverted regularization in the inner loop, where the weighted inner model is evaluated on the clean, unbiased validation set to compute the optimal weights. Intuitively, this makes the model more conservative when updating on the noisy training data in the outer loop, valuing the diversity of predictions more and thereby resisting overfitting. At the same time, the inner model is encouraged by the inverted regularization to make sharper predictions on the clean validation set, so that the potential of the clean data can be utilized more fully. The regularization objective used in our method is based on output entropy: the outer loop subtracts the output entropy from the weighted loss (encouraging high-entropy, conservative predictions on the noisy training data), while the inner loop adds the output entropy to the validation loss (encouraging low-entropy, sharp predictions on the clean validation set). We call our method Minimax Reweighting; a sketch of one update step is given below. Detailed information on the implementation of Minimax Reweighting is provided in Appendix C.
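The sketch below illustrates a single Minimax Reweighting update, mirroring Algorithm 3 in Appendix C; it assumes a PyTorch classifier, uses illustrative names, and is meant to show where the two entropy terms enter rather than to reproduce our exact training code.

import torch
import torch.nn.functional as F
from torch.func import functional_call

def per_example_entropy(logits):
    probs = F.softmax(logits, dim=1)
    return -(probs * F.log_softmax(logits, dim=1)).sum(dim=1)

def minimax_reweight_step(model, optimizer, x_f, y_f, x_g, y_g, lr, gamma_in, gamma_out):
    # x_f, y_f: noisy training mini-batch; x_g, y_g: clean validation mini-batch
    params = dict(model.named_parameters())
    eps = torch.zeros(x_f.size(0), requires_grad=True)          # per-example weights
    loss_f = (eps * F.cross_entropy(model(x_f), y_f, reduction="none")).sum()
    grads = torch.autograd.grad(loss_f, list(params.values()), create_graph=True)
    lookahead = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
    # inner loop: clean validation loss with inverted (entropy-minimizing) regularization
    logits_g = functional_call(model, lookahead, (x_g,))
    loss_g = F.cross_entropy(logits_g, y_g) + gamma_in * per_example_entropy(logits_g).mean()
    grad_eps = torch.autograd.grad(loss_g, eps)[0]
    w = torch.clamp(-grad_eps, min=0)
    w = w / (w.sum() + (w.sum() == 0).float())                  # normalize, avoiding division by zero
    # outer loop: weighted noisy-batch loss with entropy-maximizing regularization
    logits_f = model(x_f)
    per_ex = F.cross_entropy(logits_f, y_f, reduction="none") - gamma_out * per_example_entropy(logits_f)
    loss_outer = (w.detach() * per_ex).sum()
    optimizer.zero_grad()
    loss_outer.backward()
    optimizer.step()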
5.3.2 RESULTS AND ANALYSIS
Under this setting, models undergo long training with a large initial learning rate and are extremely prone to overfitting the training dataset. To understand model performance in this robust-learning setting, we first look at the training curves (Figures 2, 3). Since the training set is noisy, models that overfit the training set show significant performance degradation on the clean test set. From the perspective of robust learning, the directly-trained model sets the lower performance bound to some extent: since it has no denoising ability, it quickly overfits the training set. It reaches peak accuracy on the clean test set around the 80th epoch, and overfitting begins after that. We can identify the overfitting characteristic from the training and testing accuracy curves. Since 40% of the labels in the training set are wrong, once the model predicts the training data with accuracy greater than 60%, it is fitting the distribution of the noisy training data instead of the ground-truth distribution, and performance on the clean test set starts to degrade. Finally, the training and testing accuracy of the directly-trained model converge to nearly 100% and 60% respectively, indicating a complete overfit. In contrast, a model with optimal learning robustness should never overfit the training set: it would maintain a training accuracy close to 60% (since only 60% of the training labels are correct) while keeping optimal performance on the clean test set. Compared to direct training, the training curve of the Meta-Reweighting baseline (Ren et al., 2018) shows a significant improvement in learning robustness. However, it still suffers from overfitting: it neither completely overfits the training dataset nor ignores all the noise, and its training accuracy converges to around 70%. The Meta-Reweighting model maintains a final test accuracy of around 87.5%, after a continual test-accuracy decline starting around the 100th epoch. Minimax Reweighting nearly reaches optimal learning robustness under this setting: its training accuracy stays around 60% with hardly any change throughout training, and its test accuracy maintains a peak value of around 95.5% without observable decline. To further evaluate the effectiveness of Minimax Reweighting, we also implemented outer-loop-only regularization on top of the Meta-Reweighting algorithm for comparison. The results indicate that only regularizing the outer loop at the meta-level cannot reach the performance of Minimax-Meta Regularization. Quantitative final-accuracy results are shown in Table 5. In terms of training accuracy, the original Meta-Reweighting algorithm reaches 70.38%, indicating a certain degree of overfitting. In contrast, after adding regularization, both Minimax Reweighting and outer-loop-regularized Meta-Reweighting preserve a training accuracy of around 60%, which reflects resistance to training-set overfitting. However, Minimax Reweighting outperforms outer-loop-regularized Meta-Reweighting in clean test set accuracy.
6 CONCLUSION
This paper studies the generalization problem of meta-learning. We go one step deeper than prior work and propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted-model to fit an “aggressive, more specific, prone to overfitting” hypothesis, and minimize the regularizer in the outer loop to fit a “conservative, more general, resistant to overfitting” hypothesis. Such adversarial regularization forces the meta-model to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the robustness of the meta-model. In the experiments, representative meta-learning scenarios, including few-shot regression, few-shot classification, and robust reweighting, are used to verify our method. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrates the advantage of Minimax-Meta Regularization.
A GENERAL FORM OF MINIMAX-META REGULARIZATION IN META-LEARNING
Algorithm 1 General Form of Minimax-Meta Regularization in Meta-Learning
Require: Meta-training set Dmeta-train, learner M with parameters φ
Require: Meta-learner R with parameters θ
Ensure: φT
1: randomly initialize φ
2: for d = 1, n do
3:   Dsupport, Dquery ← random dataset from Dmeta-train
4:   φ0 ← c0
5:   for t = 1, T do
6:     Xt, Yt ← random batch from Dsupport
7:     Lt ←
8:       L(M(Xt; φt−1), Yt) + InverseRegObjective(M(Xt; φt−1), Yt, φt−1)
9:     ct ← R((∇φt−1 Lt, Lt); θd−1)
10:    φt ← ct
11:  end for
12:  X, Y ← Dquery
13:  Ltest ← L(M(X; φT), Y) + RegObjective(M(X; φT), Y, θd−1)
14:  Update θd using ∇φT Ltest
15: end for
B DETAILS OF FEW-SHOT CLASSIFICATION EXPERIMENT
B.1 IMPLEMENTATION OF MINIMAX-MAML
Algorithm 2 Minimax-MAML
Require: p(T): distribution over tasks
Require: α, β: step-size hyperparameters; γe, γn: regularization-rate hyperparameters for information entropy and L2 norm
Ensure: θT
1: randomly initialize θ
2: while not done do
3:   for all Ti do
4:     Evaluate ∇θ LTi(fθ) with respect to K examples
5:     Compute adapted parameters with gradient descent:
6:     θ′i = θ − α∇θ(LTi(fθ) + γe EntropyTi(fθ) − γn L2Norm(θ))
7:   end for
8:   Update θ ← θ − β∇θ ΣTi∼p(T)(LTi(fθ′i) − γe EntropyTi(fθ′i) + γn L2Norm(θ′i))
9: end while
Pseudo code is shown in Algorithm 2.
For all few-shot classification experiments, we use γe = 2 and γn = 5e-5.
All MAML/MAML++ experiments involving regularization share the same form of regularization objective. The regularization is achieved by combining l2-norm regularization and output-entropy regularization. The bi-level optimization objective can be written as:
$$\theta^* = \arg\min_{\theta} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{m}\sum_{z\in\mathcal{D}_i^q} \Big[\mathcal{L}\big(\phi_i(\theta,\mathcal{D}_i^s), z\big) + \delta_{out}\cdot\big(\gamma_n \cdot 0.5\,\|\phi_i(\theta,\mathcal{D}_i^s)\|^2 - \gamma_e H(\phi_i(\theta,\mathcal{D}_i^s), z)\big)\Big], \qquad (4)$$
$$\text{s.t. } \phi_i(\theta,\mathcal{D}_i^s) = \theta - \mu\nabla_{\theta}\sum_{z\in\mathcal{D}_i^s}\Big[\mathcal{L}(\theta, z) + \delta_{in}\cdot\big(\gamma_n \cdot 0.5\,\|\theta\|^2 - \gamma_e H(\theta, z)\big)\Big], \qquad (5)$$
where H(θ, z) denotes the information entropy of the prediction for z using θ as the model parameters. Here δin and δout determine the type of regularization for the inner loop and the outer loop respectively. The values of δin and δout can be 1, 0, or -1, corresponding to normal regularization, no regularization, and inverse regularization respectively. The original MAML has δin = 0 and δout = 0; MAML becomes Minimax-MAML when δin and δout are set to -1 and 1. The (δin, δout) settings for the other experiment variants correspond to the configurations in Table 1, as summarized below. γn and γe are hyper-parameters controlling the regularization rate; we use γn = 0.0005 and γe = 2 for all experiments. All MAML experiments take 5 inner steps. Each training run takes 100 epochs, and each epoch consists of 500 iterations. After each epoch, the performance of the model is evaluated on the validation set. When training is complete, predictions on the test set are made by an ensemble of the top-5 models on the validation set. Each experiment is repeated 3 times. The Adam optimizer is adopted for model training, with a learning rate of 0.001, β1 = 0.9, and β2 = 0.99. The task batch size for all Omniglot experiments is 16. The Mini-Imagenet experiments use task batch sizes of 4 and 2 for 1-shot and 5-shot experiments respectively.
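For reference, the regularization variants of Section 3.2 (Table 1) map to the following (δin, δout) settings; this is a notational summary rather than code from our implementation.

# Mapping from the ablation variants of Table 1 to the switches in Eqs. (4)-(5).
REG_VARIANTS = {
    "no regularization":                 (0, 0),
    "regularize the inner-loop":         (+1, 0),
    "inverse regularize the inner-loop": (-1, 0),
    "regularize the outer-loop":         (0, +1),
    "Minimax-Meta Regularization":       (-1, +1),   # delta_in = -1, delta_out = +1
}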
For the empirical verification experiment, on top of the original MAML, we implement each regularization method individually and run experiments for each one separately. The regularization methods include outer-loop-only regularization, inner-loop-only regularization, inverse inner-loop regularization, loss-function regularization, and Minimax-Meta Regularization. This stage of experiments completes the empirical verification of the method discussed in Section 3.2.
C IMPLEMENTATION DETAIL OF MINIMAX META-REWEIGHTING
Pseudo code is shown in Algorithm 3. In our experiment, we use γin = 0.25 and γout = 2.
Algorithm 3 Weighted Minimax Meta-Reweighting
Require: model θ0, training set Df, validation set Dg, n, m, γin, γout
Ensure: θT
1: for t = 0 ... T − 1 do
2:   {Xf, yf} ← SampleMiniBatch(Df, n)
3:   {Xg, yg} ← SampleMiniBatch(Dg, m)
4:   ŷf ← Forward(Xf, θt)
5:   ε ← 0; lf ← Σ_{i=1}^{n} εi C(yf,i, ŷf,i)
6:   ∇θt ← BackwardAD(lf, θt)
7:   θ̂t ← θt − α∇θt
8:   ŷg ← Forward(Xg, θ̂t)
9:   lg ← (1/m) Σ_{i=1}^{m} (C(yg,i, ŷg,i) + γin Entropy(ŷg,i))
10:  ∇ε ← BackwardAD(lg, ε)
11:  w̃ ← max(−∇ε, 0); w ← w̃ / (Σ_j w̃j + δ(Σ_j w̃j))
12:  l̂f ← Σ_{i=1}^{n} wi (C(yf,i, ŷf,i) − γout Entropy(ŷf,i))
13:  ∇θt ← BackwardAD(l̂f, θt)
14:  θt+1 ← OptimizerStep(θt, ∇θt)
15: end for
| 1. What is the focus and contribution of the paper on meta-learning?
2. How does the proposed approach differ from standard meta-learning techniques in terms of regularization mechanisms?
3. What are the strengths and weaknesses of the paper regarding its technical soundness, experimentation, and significance?
4. Do you have any concerns or suggestions regarding the limitations of the paper, such as the lack of theoretical analysis and narrow experimental scope?
5. What are some potential future directions or improvements that could be explored in this area of research? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents a minimax meta-learning algorithm to address the generalization gap in standard meta-learning techniques. The proposed algorithm adds a new regularization mechanism that maximizes the regularizer in the inner-loop to encourage the adapted-model to be more sensitive to the new task, and minimizes the regularizer in the outer loop to resist overfitting. Empirically, experiments were conducted for few-shot learning and robust re-weighting. The results show that the proposed method improves the performance of the meta-learning algorithms in comparison to standard approaches.
Review
Originality
The novelty of the paper is limited. The proposed approach adds an additional regularization term to the standard meta-learning objective function, and the meta-learning algorithm is trained using a minimax objective.
Quality
The paper is technically sound. However, the experimental evaluation shows marginal gains over alternative meta-learning algorithms. The evaluation lacks experiments beyond computer vision applications, and state-of-the-art meta-learning algorithms like MetaOptNet and R2D2 are not included.
Clarity
The paper is clear and easy to follow.
Significance
The paper is relevant to researchers in the meta-learning community; however, the novelty of the paper is limited as described above.
Limitations
The paper lacks theoretical analysis.
Experimental analysis is limited to sine-wave regression and image classification.
Gains are marginal and the novelty is limited.
Questions to Authors
How does the performance of the proposed approach compare to other meta-learning algorithms beyond image classification and sine-wave prediction?
Can the authors provide theoretical guarantees of improvement for the proposed algorithm? |
To address the challenge, we consider learning a meta-model resistant to the adapted-model overfitting during the meta-testing time. To achieve this, we design a well-general mechanism called Minimax-Meta Regularization for meta-learning. During the meta-training, we enforce the adaptedmodel to be more overfitting to the support data by adding a inverse (negative) regularization in the inner loop, and enforce the meta-model to be more generalized on the test samples by adding a positive regularization in the outer loop. By doing so, the learned meta-model can be meta-generalized, making adapted-models perform well on the query (meta-validation) set, even when the adaptedmodels are prone to overfit to the support (meta-training) set. Therefore during the meta testing, the adapted-model can still be generalized to the task domain, even though they are overfitting to fewshot samples. In particular, the Minimax-Meta Regularization is well general to be implemented in all bi-level optimization frameworks without additional computational cost.
To verify the above intuition, we conduct experiments of the basic MAML (Finn et al., 2017) framework. Surprisingly, we find that both positively regularizing the outer loop meta-training and negatively regularizing the inner loop adaption can significantly enhance the few-shot classification. Another interesting finding is that adding positive regularization in the inner loop impairs the performance, which indirectly proves the efficacy of our proposal. We conduct extensive experiments on few-shot regression, few-shot classification, and robust reweighting (Ren et al., 2018). The experimental results show that Minimax-Meta Regularization generally improves the performance of bi-level meta-learning algorithms and is compatible with common methodologies for enhancing meta-learning. Moreover, Minimax-Meta Regularization shows the capability to improve the generalization of meta-learning algorithms and help address meta-overfitting problems to a certain extent.
Our Contributions. 1) we propose a limitation of previous works on meta-generalization that ignore the adaptation-generalization; 2) we design a general mechanism named Minimax-Meta Regularization for meta-learning, which aims to capture a meta-model that is both meta-generalized
and resistant to the adaptation overfitting; 3) we empirically verify the intuition of Minimax-Meta Regularization and give possible reasons; 4) we conduct three different bi-level optimization tasks to show the efficacy of the proposed method.
2 PRELIMINARY
We first give a brief introduction and notation of meta-learning. In the meta-learning problem setting that we consider, the goal is to learn a generalized initialization model for better adapting to new tasks from only a few samples. To achieve this, it requires a set of support (meta-training) data{ Dsi = {xsi,j , ysi,j}kj=1 }n i=1 and query (meta-testing) data { Dqi = {x q i,j , y q i,j}mj=1 }n i=1
sampled from tasks {Ti}ni=1 drawn from distribution p(T ), where k and m denote the number of data samples from support and query data, and n is the number of tasks. Denote L and µ to be the loss function and inner-loop learning rate.
Meta-learning (Finn et al., 2017) simulates the adaptation and evaluation procedure of machine learning, and aims to learn a well-generalized model f parameterized by θ∗ by the following bilevel optimization
θ∗ = argmin θ
1
n n∑ i=1 1 m ∑ z∈Dqi L(φi(θ,Dsi ), z), s.t.φi(θ,Dsi ) = θ − µ∇θ ∑ z∈Dsi L(θ, z) (1)
where z represents the data sample (x, y). The outer loop (represents the meta-validation phase) measures the generalization performance of the adapted-model φi by the query data Dqi . The inner loop (represents the meta-training phase) defines that the adapted-model φi is finetuned from initialization θ by multiple steps gradient descent with the support data Dsi . Note that gradient steps can be more than one, the formulation 1 is written for shortness.
3 META LEARNING WITH MINIMAX-META REGULARIZATION
We aim to learn a well-generalized meta-initialization that can fast adapt to new tasks with robust performance. To achieve this, the meta-learner should be meta-generalized, i.e. learn a metamodel θ that is robust to tasks distribution p(T ), and adaptation-generalized, i.e. the adapted model φ(θ,Ds),Ds ∼ T should be robust to the data distribution of the task domain T . The metageneralization problem has been studied in many previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and can be addressed by designing regularization in the outer loop. However, due to the limited number of samples of Ds,Dq , the adaptation-generalization problem is significantly challenging for meta-learning.
To address this, we propose a novel and well-general regularization framework for meta-learning – Minimax-Meta Regularization. In this section, we first present the training objective for minimaxregularized meta-learning while giving the intuition behind the design, and run a simulation to verify the high-level insight.
3.1 TRAINING OBJECTIVE
Based on the formulation 1 for meta-learning, we present the minimax-regularized meta-learning training objective as follows, where we add a positive regularization in the outer loop to achieve meta-generalization, and an inverse (negative) regularization in the inner loop to achieve adaptationgeneralization.
θ∗ = argmin θ
1
n n∑ i=1 1 m ∑ z∈Dqi L(φi(θ,Dsi ), z) + λout 1 n n∑ i=1 Regout(φi(θ,Dsi )), (2)
s.t. φi(θ,Dsi ) = argmin φ 〈µ∇θ ∑ z∈Dsi L(θ, z), φ〉+ 1 2 ‖φ− θ‖2 − λin 1 n n∑ i=1 Regin(φ), (3)
where Regout and Regin are the regularizations in the outer and inner loop, while λout ≥ 0 and λin ≥ 0 are the coefficients respectively. Note the the formulation φi(θ,Dsi ) =
argminφ〈µ∇θ ∑ z∈Dsi L(θ, z), φ〉 + 12‖φ − θ‖ 2 in the inner loop is the equivalent mirror descent (Beck & Teboulle, 2003) version of the gradient descent. We next introduce the intuition behind this design.
Outer positive regularization. As defined in Eq 2, we add a positive regularization Regout(φi(θ,Dsi )) that regularizes the model-overfitting of the adapted-model φi(θ,Dsi ). By doing so, the meta learner is enforced to learn a generalized meta-model θ∗ such that the adapted-model φi(θ
∗,Dsi ) on each tasks is not overfitting and generalized to query data. This idea has been studied in previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021), and has been shown to significantly enhance the meta-generalization.
Inner inverse regularization. The generalization performance depends not only on the complexity of the adapted-model φi(θ,Dsi ), but also the adaptation rule, i.e. the formulation of the inner loop function. As defined in Eq 3, we add a inverse regularization Regin(φ) that negatively regularizes the model-complexity of the adapted-model φi(θ,Dsi ). By doing so, the inner loop function simulates the adaptation overfitting during meta-testing by enforcing the adapted-model to be overfitting during meta-training. Therefore, learning from the minimax regularized meta-learning, the learned meta-model θ∗ can be resistant to adaptation overfitting.
From the above discussion, the Minimax-Meta Regularization enables the meta-learning to capture a meta-model that is both meta-generalized and adaptation-generalized. This framework is computational efficient without additional computational cost. In addition, the Minimax-Meta Regularization is general to all bi-level optimization formulation, thus can be directly applied to different meta-algorithms on different bi-level learning problems.
3.2 EMPIRICAL VERIFICATION.
We next verify the design by a simulation test by conducting the basic MAML framework (Finn et al., 2017) with different regularization types. As illustrated in Table 1, we make the following observations:
Outer positive regularization enhances the generalization performance. Compare the results from “no regularization” and “regularize the outer-loop”, we observe that adding outer regularization can get 1.56% and 4.42% accuracy improvements in 1-shot and 5-shot experiments, which verifies the efficacy of the outer regularization. This is aligned with the intuition that outer regularization enhances the meta-generalization, leading to better performance.
Inner negative regularization enhances the generalization performance. Compare the results from “no regularization” and “inverse regularize the inner-loop”, we observe that adding inner inverse regularization can get 1.17% and 1.24% accuracy improvements in 1-shot and 5-shot experiments, which verifies the efficacy of the inner inverse regularization. This is aligned with the intuition that inner reverse regularization enhances adaptation-generalization, thus improving performance.
The outer regularization and inner inverse regularization are compatible. Compare the results from “Minimax-Meta Regularization”, “regularize the outer-loop”, and “inverse regularize the innerloop”, we observe that Minimax-Meta Regularization can get 0.52% (1-shot)/0.61% (5-shot) and 0.92% (1-shot)/3.79% (5-shot) accuracy improvements than solely regularizing the outer-loop and inverse regularizing the inner-loop, which verifies the compatibility of the inner inverse regulariza-
tion. This is aligned with the intuition that meta-generalization and adaptation-generalization are not in conflict.
Inner positive regularization impairs the generalization performance. Compare the results from “no regularization” and “regularize the inner-loop”, we observe that adding inner positive regularization suffers from -1.86% and -0.34% accuracy impairments in 1-shot and 5-shot experiments, which aligns with the intuition that positive regularization that limits the adaptation in the inner-loop impairs the adaptation-generalization.
4 RELATED WORK
Meta-learning. A line of meta-learning methods has sought to train recurrent neural networks that ingest entire datasets Santoro et al. (2016); Duan et al. (2016). However, they need to place constraints on the model architecture. Another line aims to learn a transferable metric space between samples from previous tasks (Vinyals et al., 2016; Snell et al., 2017; Mishra et al., 2018; Oreshkin et al., 2018). However, it is limited to classification problems. In this paper, we focus on gradientbased meta-learning methods that learn a meta-initialization (Finn et al., 2017; 2018; Li et al., 2017; Finn & Levine, 2018; Grant et al., 2018; Lee & Choi, 2018; Park & Oliva, 2019; Flennerhag et al., 2020), which is a well-generalized for meta-training tasks, being agnostic to both model architecture and problems. However, these approaches are shown to be overfitting the meta-training tasks and generalizing poorly to meta-testing tasks (Yoon et al., 2018; Collins et al., 2020; Rothfuss et al., 2021; Yao et al., 2021).
Meta-Regularization. Standard regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018), which can significantly enhance the generality of single-loop machine learning. However, the straightforward method that regularizes the neural networks limits the flexibility of fast adaptation in the inner loop (Yao et al., 2021). Recently, a few works were proposed to design the meta-regularization to improve meta-generalization. MR-MAML (Yin et al., 2019) constrains the search space of the meta-model, and allows the adaptation to be sufficient in the inner loop. Jamal & Qi (2019) proposed TAML to enforce the meta-model to perform similarly across tasks. Rajendran et al. (2020a) explored an information-theoretic framework of meta-augmentation by adding randomness to labels of both support and query sets. Yao et al. (2021) proposed two task augmentation methods – MetaMix and Channel Shuffle, which is theoretically proved to be generalized to unseen tasks. Ni et al. (2021) investigated the distinct ways where data augmentation can be integrated at both the image and class levels. Rothfuss et al. (2021) addressed the meta-generalization problem using the PAC-Bayesian framework, and proposed PACOH that is PAC-optimal with Gaussian processes. However, these works focus only on the metageneralization, i.e., generalize to the unseen tasks, while the adaptation-generalization that measures how the adapted-model generalizes to the task domain is merely considered.
This paper proposes the Minimax-Meta Regularization for meta-learning, implementing a positive regularization in the outer-loop and a negative regularization in the inner-loop. The framework can enhance both meta-generalization and adaptation-generalization, and thus improve the performance.
5 EXPERIMENTS
In this section, we conduct extensive experiments on three types of classical meta-learning tasks including, few-shot classification, few-shot regression, and robust reweighting with meta-learning, to demonstrate the efficacy of our proposed methods. With these experiments, we demonstrate that our methods i) outperform previous meta-learning algorithms in terms of predictive accuracy; ii) mitigate the meta-overfitting effectively. We will introduce the experimental setup, results, and analysis in the following subsections.
5.1 FEW-SHOT CLASSIFICATION
We first carry out experiments on the few-shot classification task, one of the most popular tasks to evaluate meta-learning algorithms. To verify the effectiveness of our approach, we adapt Minimax-
Meta Regularization into bi-level optimization meta-learning algorithms and make a benchmark to compare with other methods.
5.1.1 EXPERIMENTAL SETUP
Datasets. For the few-shot classification task, we experiment on the public released datasets MiniImagenet (Ravi & Larochelle, 2017; Vinyals et al., 2016) and Omniglot (Lake et al., 2015), following the few-shot benchmark setting provided in (Antoniou et al., 2018). The Omniglot dataset is a collection of 1623 character classes with different alphabets. Each class in the dataset contains 20 instances. In the experiment, all the character classes are shuffled, and then the shuffled classes are divided into the training set, validation set, and test set, with 1150, 50, and 423 instances respectively. Rotation augmentation is applied to the images with 90-degree increments to create new classes. The second dataset used in the few-shot classification experiment is Mini-Imagenet (Ravi & Larochelle, 2017), which is sampled from ImageNet with 600 instances of 100 classes. Each image is resized into 84 × 84. Following the work (Ravi & Larochelle, 2017), we split the Mini-Imagenet dataset into 64 classes for training, 12 classes for validation, and 24 classes for testing.
Experimental details. We select MAML (Finn et al., 2017) as the representative bi-level optimization meta-leanring model. To evaluate the effectiveness of Minimax-Meta Regularization, we first begin the experiment with the baseline MAML on the 5-way 1/5-shot Mini-Imagenet setting. Then, on top of the original MAML, we implement Minimax-MAML by adding Minimax-Meta Regularization. We then compare Minimax-MAML with original MAML and other meta-learning baselines on the 5-way 1/5-shot Mini-Imagenet setting and the the 20-way 1-shot Omniglot setting. The compared baselines include Matching Networks (Vinyals et al., 2016), Meta-SGD (Li et al., 2017), Meta-Networks (Munkhdalai & Yu, 2017), Siamese Nets (Koch et al., 2015), Neural Statistician (Edwards & Storkey, 2016), and Memory Module (Kaiser et al., 2017). Here we also include MAML++ (Antoniou et al., 2018) in the experiment and further implement Minimax-MAML++ for comparison. MAML++ is an improved version of MAML, with 6 specific methodologies added together for the performance improvement of MAML. We include MAML++ in our experiment for studying two questions: i) By comparing Minimax-MAML with MAML++, we want to analyze if Minimax-Meta Regularization, as a general improving mechanism, has the potential to outperform algorithm-specific methodologies. ii) By comparing Minimax-MAML++ with MAML++, we want to evaluate if Minimax-Meta Regularization is compatible with complicated model-specific improving methodologies in bi-level optimization models. Note regularization is only added during the training phase. All the MAML/MAML++ experiments involving regularization share the same form of regularization objective. The regularization is achieved by combining the l2-norm regularization and output entropy regularization. More detailed experiment setting information could be find in Appendix B.
5.1.2 RESULTS AND ANALYSIS
The baseline comparison results under Omniglot and Mini-Imagenet settings are shown in Table 2 and Table 3. Minimax-Meta Regularization are shown to improve both the original MAML and the MAML++ frameworks. In the Omniglot 20-way 1-shot classification experiment, the mean accuracy of MAML and MAML++ are improved from 94.20% and 97.21% to 95.76% and 97.77% respectively. Both the methods had unstable results in these experiments. After adopting Minimax-Meta Regularization, the std values of the final accuracy of these two methods have been significantly reduced, indicating better performance stability. The Minimax-MAML++ reached the best performance in this setting compared to other baselines with good stability. Significant improvements from Minimax-Meta Regularization are also shown in Mini-Imagenet 5-way 1/5-shot classification experiments. In the 1-shot experiments, the original MAML cannot outperform Meta-SGD and Meta-Networks baselines. The Minimax-Meta Regularization improves the accuracy of MAML from the average of 48.75% to 50.84%, which enables MAML to outperform other baselines. In the 5-shot experiments, Minimax-MAML could outperform MAML++ by 1.02%. Considering that MAML++ adopts 6 individual techniques specifically designed for MAML, Minimax-Meta Regularization shows strong effectiveness in this outperform as a general methodology.
5.2 FEW-SHOT REGRESSION
5.2.1 EXPERIMENTAL SETUP
Datasets. For the few-shot regression task, we consider a non-mutually-exclusive regression problem based on the Sinusoids synthetic dataset. Each task of Sinusoids regression involves the regressing from the input to the output of a generated sine wave, where the amplitudes of the sinusoids are different among tasks. In our experiment, we follow the setting provided by (Yin et al., 2019). The Sinusoids data is created in the following way: the amplitude A of the sinusoid is uniformly sampled from a set of 20 scalars {0.1, 0.3, · · · , 4}; u is sampled uniformly from [−5, 5] ; and y is sampled from N (Asin(u), 0.12). Experimental details. During the training, both u and A are provided as input of models, i.e. x = (u,A). During the test time, we expand the range of the tasks by randomly sampling the amplitude A uniformly from [0.1, 4] and use a random one-hot vector as the input of the network. The meta-training tasks are a proper subset of the meta-test tasks. Under this setting, the amplitude input at the training phase makes this regression problem non-mutually-exclusive, which makes the meta-learning model prone to the memorization problem(Yin et al., 2019) during training. In the experiments, we compare with the representative bi-level optimization meta-learning baseline MAML (Finn et al., 2017), and the meta-regularized MAML (MR-MAML) (Yin et al., 2019) where the regularization is either on the activations (MR-MAML(A)) or the weights (MR-MAML(W)). Both MR-MAML(A) and MR-MAML(W) are initially designed for solving the memorization problem. The Minimax-Meta Regularization is implemented for all the above 3 methods with l2-norm as the regularization objectives for both the inner-loop and outer-loop.
5.2.2 RESULTS AND ANALYSIS
Original MAML was shown to be capable of solving normal sinusoid few-shot regression problem(Finn et al., 2017). However, the results of non-mutually-exclusive sinusoid regression 5.2.2 suggest that added amplitude input makes MAML suffer from memorization problem and give poor test result. From the experiment result5.2.2, we could observe that Minimax-Meta Regularization improves the performance of MAML on both 5-shot and 10-shot tasks. In the 10-shot task, the
test MSE of MAML improved from 0.153 to 0.125 with Minimax-Meta Regularization, which is close to the MR-MAML(A). This observation suggests that the minimax-regularization could help the meta-learning model be more resistant to the memorization problem to some extent. Moreover, by comparing the results of MR-MAML methods with Minimax-Meta Regularization and the original MR-MAML models, we could find that both MAML(A) and MAML(W) gained performance improvements with added Minimax-Meta Regularization on both 5-shot and 10-shot tasks. And the smaller std values indicate the promotion of stability. This shows that minimax could be compatible with methods specifically designed for addressing memorization problem and further improve the performance.
5.3 ROBUST REWEIGHTING WITH META-LEARNING
5.3.1 EXPERIMENTAL SETUP
To verify the general effectiveness of our proposed methods, we further conduct the experiments on the task of robust reweighting with meta-learning. For this experiment, we compare the performance of our method and baselines on the noisy MNIST dataset, which is created by randomly flipping the labels of 40% training images. Each image has a dimension of 28×28. The task is to classify each image into 0 to 9 handwritten numbers, where the 10000 training images have 40% noisy labeled data. The validation set consists of 100 correctly-labeled images that are randomly selected from the correctly-labeled samples in the training set to ensure that the reweight method does not have the privilege of training on more data. We use the LeNet-5 as the backbone model and train the model for 1000 epochs. The learning rates for the first 1/3, the middle 1/3, and the last 1/3 training epochs are set to be 1e-2, 1e-3, and 1e-4 respectively. The basic meta-learning baselines we evaluate here is Meta-Reweighting introduced by the work (Ren et al., 2018). The Meta-Reweighitng algorithm learns to assign weights to training examples for robust learning. To determine the example weights, Meta-Reweighting performs a meta gradient descent step on the mini-batch example weights (which are initialized from zero) to minimize the loss on a clean unbiased validation set. Our method adds the Minimax-Meta Regularization on top of Meta-Reweighting. We add regularization on the outerloop, where the optimal weights are calculated and adopted for meta-update. The inverted regularization is added on the inner-loop, where the weighted inner-model fits the clean unbiased validation set for optimal weight calculation. Intuitively, such a regularization method makes the model becomes more conservative when updating based on noise train data in the outer loop and values the diversity of predictions more, thereby resisting overfit. At the same time, the inner model was encouraged to make sharper predictions on the clean validation set by the inverted regularization, so that the potential of the clean data set can be more fully utilized. The regularization objective used in our method is maximizing output entropy (minimizing output entropy in the outer-loop). We call our method Minimax Reweighting. Detailed information of the implementation of Minimax Reweighting is provided in Appendix C.
5.3.2 RESULTS AND ANALYSIS
Under this setting, models undergo long training with a large initial learning rate and are therefore extremely prone to overfitting the training dataset. To understand the performance of the models under this robust-learning setting, we first look at their training curves (Figures 2 and 3). Since the training set is noisy, a model that overfits the training set shows a significant performance drop on the clean test set. From the perspective of robust learning, the directly trained model sets the lower performance bound to some extent: since it has no denoising ability, it quickly overfits the training set. It reaches peak accuracy on the clean test set around the 80th epoch, after which overfitting begins. We can identify this overfitting from the training and testing accuracy curves. Since 40% of the labels in the training set are wrong, once the model predicts the training data with accuracy greater than 60%, it is fitting the distribution of the noisy training data instead of the ground-truth distribution, and at the same time the performance on the clean test set starts to degrade. Finally, the training and testing accuracy of the directly trained model converge to nearly 100% and 60% respectively, indicating complete overfitting. In contrast, a model with optimal learning robustness should never overfit the training set: it would maintain a training accuracy close to 60% (since only 60% of the training labels are correct) while keeping optimal performance on the clean test set. Compared to direct training, the training curve of the Meta-Reweighting baseline (Ren et al., 2018) shows a significant improvement in learning robustness. However, it still suffers from overfitting: it neither completely overfits the training dataset nor ignores all the noise, and its training accuracy converges to around 70%. The Meta-Reweighting model ultimately maintains a test accuracy of around 87.5%, with a continual decline in test accuracy after roughly the 100th epoch. Minimax Reweighting nearly reaches the optimal learning robustness under this setting: its training accuracy stays at around 60% with hardly any change throughout training, and its test accuracy maintains a peak value of around 95.5% without observable degradation. To further evaluate the effectiveness of Minimax Reweighting, we implemented outer-loop-only regularization on top of the Meta-Reweighting algorithm for comparison. The results indicate that regularizing only the outer loop at the meta-level cannot reach the performance of Minimax-Meta Regularization. Quantitative results for the final accuracy are shown in Table 5. In terms of training accuracy, the original Meta-Reweighting algorithm reaches 70.38%, which indicates a degree of overfitting. In contrast, after adding regularization, both Minimax Reweighting and outer-loop-regularized Meta-Reweighting preserve a training accuracy of around 60%, reflecting resistance to training-set overfitting. However, Minimax Reweighting outperforms outer-loop-regularized Meta-Reweighting in clean test-set accuracy.
6 CONCLUSION
This paper studies the generalization problem of meta-learning. We go one step further than prior work and propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to fit an "aggressive, more specific, prone-to-overfitting" hypothesis, and minimize the regularizer in the outer loop to fit a "conservative, more general, resistant-to-overfitting" hypothesis. Such adversarial regularization forces the meta-model to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the robustness of the meta-model. In the experiments, representative meta-learning scenarios, including few-shot learning and robust reweighting, are used to verify our method. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrate the advantage of Minimax-Meta Regularization.
A GENERAL FORM OF MINIMAX-META REGULARIZATION IN META-LEARNING
Algorithm 1 General Form of Minimax-Meta Regularization in Meta-Learning
Require: Meta-training set D_meta-train, learner M with parameters φ
Require: Meta-learner R with parameters θ
Ensure: φ_T
1: randomly initialize φ
2: for d = 1, ..., n do
3:   D_support, D_query ← random dataset from D_meta-train
4:   φ_0 ← c_0
5:   for t = 1, ..., T do
6:     X_t, Y_t ← random batch from D_support
7:     L_t ← L(M(X_t; φ_{t−1}), Y_t) + InverseRegObjective(M(X_t; φ_{t−1}), Y_t, φ_{t−1})
8:     c_t ← R((∇_{φ_{t−1}} L_t, L_t); θ_{d−1})
9:     φ_t ← c_t
10:   end for
11:   X, Y ← D_query
12:   L_test ← L(M(X; φ_T), Y) + RegObjective(M(X; φ_T), Y, θ_{d−1})
13:   Update θ_d using ∇_{φ_T} L_test
14: end for
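As a concrete illustration of Algorithm 1, the sketch below instantiates the update rule R with plain SGD (a common special case) and leaves the two regularization objectives as pluggable callables. This is our own simplified sketch assuming PyTorch 2.x; it is not the authors' implementation, and all names are placeholders.

```python
import torch
from torch.func import functional_call

def meta_train_step(model, meta_opt, support, query, loss_fn,
                    inverse_reg, reg, inner_lr=0.01, inner_steps=5):
    # phi_0 starts from the current meta-parameters theta.
    phi = {n: p.clone() for n, p in model.named_parameters()}

    # Inner loop on the support set; inverse_reg should return a *negative*
    # penalty (e.g. -gamma * l2(phi)), implementing the inverse regularization.
    x_s, y_s = support
    for _ in range(inner_steps):
        out = functional_call(model, phi, (x_s,))
        l_t = loss_fn(out, y_s) + inverse_reg(out, y_s, phi)
        grads = torch.autograd.grad(l_t, list(phi.values()), create_graph=True)
        phi = {n: p - inner_lr * g for (n, p), g in zip(phi.items(), grads)}

    # Outer loop on the query set with the positive regularization.
    x_q, y_q = query
    out_q = functional_call(model, phi, (x_q,))
    l_test = loss_fn(out_q, y_q) + reg(out_q, y_q, phi)

    meta_opt.zero_grad()
    l_test.backward()          # differentiates through the whole inner loop
    meta_opt.step()
    return l_test.item()
```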
B DETAILS OF FEW-SHOT CLASSIFICATION EXPERIMENT
B.1 IMPLEMENTATION OF MINIMAX-MAML
Algorithm 2 Minimax-MAML
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters; γ_e, γ_n: regularization rates for the information-entropy and L2-norm terms
Ensure: θ_T
1: randomly initialize θ
2: while not done do
3:   for all T_i do
4:     Evaluate ∇_θ L_{T_i}(f_θ) with respect to K examples
5:     Compute adapted parameters with gradient descent: θ'_i = θ − α ∇_θ ( L_{T_i}(f_θ) + γ_e Entropy_{T_i}(f_θ) − γ_n L2Norm(θ) )
6:   end for
7:   Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} ( L_{T_i}(f_{θ'_i}) − γ_e Entropy_{T_i}(f_{θ'_i}) + γ_n L2Norm(θ'_i) )
8: end while
Pseudo code is shown in Algorithm 2.
For all few-shot classification experiments, we use γ_e = 2 and γ_n = 5e-5.
All the MAML/MAML++ experiments involving regularization share the same form of regularization objective. The regularization is achieved by combining the l2-norm regularization and output
entropy regularization. The bi-level optimization objective could be written as:
\theta^* = \arg\min_{\theta} \; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{m}\sum_{z \in D_i^{q}} \Big[ L(\phi_i(\theta, D_i^{s}), z) + \delta_{out} \cdot \big( \gamma_n \cdot 0.5\,\|\phi_i(\theta, D_i^{s})\|^2 - \gamma_e H(\phi_i(\theta, D_i^{s}), z) \big) \Big], \qquad (4)

\text{s.t.} \quad \phi_i(\theta, D_i^{s}) = \theta - \mu \nabla_{\theta} \sum_{z \in D_i^{s}} \Big[ L(\theta, z) + \delta_{in} \cdot \big( \gamma_n \cdot 0.5\,\|\theta\|^2 - \gamma_e H(\theta, z) \big) \Big], \qquad (5)
where H(θ, z) denotes the information entropy of the prediction for z made with θ as the model parameters. Here δ_in and δ_out determine the type of regularization for the inner loop and the outer loop respectively. Their values can be 1, 0, or -1, corresponding to normal regularization, no regularization, and inverse regularization. Original MAML has δ_in = 0 and δ_out = 0; MAML becomes Minimax-MAML when δ_in and δ_out are set to -1 and 1. The δ_in and δ_out values selected for the other experiments can be found in Table 1. γ_n and γ_e are hyper-parameters controlling the regularization rate; we use γ_n = 0.0005 and γ_e = 2 for all the experiments. All the MAML experiments take 5 inner steps. For each experiment, training takes 100 epochs, and each epoch consists of 500 iterations. After each epoch, the performance of the model is evaluated on the validation set. When training is complete, predictions on the test set are made by an ensemble of the top-5 models on the validation set. Each experiment is repeated 3 times. The Adam optimizer is adopted for model training, with a learning rate of 0.001, β1 = 0.9, and β2 = 0.99. The task batch size for all Omniglot experiments is 16; Mini-Imagenet experiments use task batch sizes of 4 and 2 for the 1-shot and 5-shot experiments respectively. A small illustrative helper for the combined regularization term is sketched below.
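The snippet below is our own sketch of the combined regularization term of Eqs. (4)-(5) (the function name and interface are placeholders, not the authors' code); delta = 1, 0, or -1 selects normal, none, or inverse regularization.

```python
import torch
import torch.nn.functional as F

def combined_regularizer(logits, params, delta, gamma_n=5e-4, gamma_e=2.0):
    # delta = 1 (normal), 0 (none) or -1 (inverse) regularization, as in Eqs. (4)-(5).
    if delta == 0:
        return torch.zeros((), device=logits.device)
    l2 = 0.5 * sum(p.pow(2).sum() for p in params)             # 0.5 * ||theta||^2
    log_p = F.log_softmax(logits, dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()        # H of the predictions
    return delta * (gamma_n * l2 - gamma_e * entropy)

# Usage: inner-loop loss = task_loss + combined_regularizer(logits, phi.values(), delta=-1)
#        outer-loop loss = task_loss + combined_regularizer(logits, phi.values(), delta=+1)
```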
As for the empirical verification experiment, on top of the original MAML we implement each individual regularization method and run experiments for each one separately. The regularization methods include outer-loop-only regularization, inner-loop-only regularization, inverse inner-loop regularization, loss-function regularization, and Minimax-Meta Regularization. This stage of experiments completes the empirical verification of the method discussed in Section 3.2.
C IMPLEMENTATION DETAIL OF MINIMAX META-REWEIGHTING
Pseudo-code is shown in Algorithm 3. In our experiment, we use γ_in = 0.25 and γ_out = 2.
Algorithm 3 Weighted Minimax Meta-Reweighting
Require: model θ_0, training set D_f, validation set D_g, n, m, γ_in, γ_out
Ensure: θ_T
1: for t = 0, ..., T − 1 do
2:   {X_f, y_f} ← SampleMiniBatch(D_f, n)
3:   {X_g, y_g} ← SampleMiniBatch(D_g, m)
4:   ŷ_f ← Forward(X_f, θ_t)
5:   ε ← 0;  l_f ← Σ_{i=1}^{n} ε_i C(y_{f,i}, ŷ_{f,i})
6:   ∇θ_t ← BackwardAD(l_f, θ_t)
7:   θ̂_t ← θ_t − α ∇θ_t
8:   ŷ_g ← Forward(X_g, θ̂_t)
9:   l_g ← (1/m) Σ_{i=1}^{m} ( C(y_{g,i}, ŷ_{g,i}) + γ_in Entropy(ŷ_{g,i}) )
10:  ∇ε ← BackwardAD(l_g, ε)
11:  w̃ ← max(−∇ε, 0);  w ← w̃ / ( Σ_j w̃_j + δ(Σ_j w̃_j) )
12:  l̂_f ← Σ_{i=1}^{n} w_i ( C(y_{f,i}, ŷ_{f,i}) − γ_out Entropy(ŷ_{f,i}) )
13:  ∇θ_t ← BackwardAD(l̂_f, θ_t)
14:  θ_{t+1} ← OptimizerStep(θ_t, ∇θ_t)
15: end for | 1. What is the focus of the paper regarding meta-learning?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness?
3. What are the weaknesses of the paper regarding its presentation, writing, and lack of discussions?
4. How do the regularizers used in the paper work, and how do they contribute to the results?
5. Can you provide a clearer explanation of the concepts of meta-generalization and adaptation-generalization?
6. How do the improvements in the quantitative results relate to the improvements in generalization properties?
7. What are some minor issues with the paper, such as table formatting and precision?
8. What optimization methods were used in the baseline models, and how do they compare to the proposed method?
9. Will the proposed method potentially lead to more robust models against adversarial attacks? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new method to enhance generalization in the meta-learning problem.
Review
Strengths: The paper discusses an interesting problem for meta-learning; Quantitative results show the effectiveness of the proposed method;
Weaknesses: The presentation and writing need to be improved; some important discussions and explanations seem to be missing.
How are the regularizers used in this paper designed? Do the authors propose new regularizers or simply add existing regularizers to both the inner and outer loops? Do the authors try different regularizers? How will different regularizers influence the results? A more detailed discussion of the regularizers would enhance the paper;
Discussions on meta-generalization and adaptation-generalization seem vague and too heuristic; the improvements in the quantitative results cannot be clearly connected to improvements in generalization properties: do the improvements really indicate both better meta-generalization and adaptation-generalization?
Minor problems: bottom border missing in Table 4; results have different precisions in Table 5;
Some questions & comments:
Are the baseline models in the experiments based on gradient descent or mirror descent? Are all the methods compared with the same optimization methods?
Considering the motivation of the proposed method, will the proposed method lead to more robust models w.r.t different types of adversarial attacks? |
ICLR | Title
Meta Learning with Minimax Regularization
Abstract
Even though meta-learning has attracted wide attention in recent years, the generalization problem of meta-learning is still not well addressed. Existing works focus on meta-generalization to unseen tasks at the meta-level, while ignoring that adapted models may not generalize to the task domain at the adaptation level, a problem which cannot be solved trivially. To this end, we propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to be more sensitive to the new task, and minimize the regularizer in the outer loop to resist overfitting of the meta-model. This adversarial regularization forces the meta-algorithm to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the generalization of meta-learning. We conduct extensive experiments on representative meta-learning scenarios to verify our method, including few-shot learning and robust reweighting. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrates the effectiveness of Minimax-Meta Regularization.
1 INTRODUCTION
Meta-learning has been proven to be a powerful paradigm for extracting well-generalized knowledge from data and accelerating the learning process for new tasks (Thrun & Pratt, 2012). It simulates the machine learning process with a bi-level objective (Finn et al., 2017), evaluating on the query (meta-validation) set an adapted model learned from the meta-model using the support (meta-training) set. Meta-learning has received increasing attention in many machine learning settings such as few-shot learning (Sung et al., 2018; Sun et al., 2019; Wang et al., 2020) and robust learning (Ren et al., 2018; Shu et al., 2019; Li et al., 2019), and can be deployed in many practical applications (Kang et al., 2019; Dou et al., 2019; Yu et al., 2018; Madotto et al., 2019). Despite this success, the additional level of learning creates another potential source of overfitting (Rajendran et al., 2020b), which significantly challenges the generalization of meta-learning algorithms. Specifically, the meta-model should generalize to unseen tasks (meta-generalization). Meanwhile, the adapted model should generalize to the domain of the specific task, which we call adaptation-generalization (Figure 1). A key challenge is how to regularize meta-algorithms to ensure generalization at both levels.
Deep neural networks tend to overfit sampling bias due to their representation power, leading to poor generalization (Song et al., 2020). Regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can effectively prevent the model from overfitting and enhance generalization. However, directly applying such regularization to the network limits the flexibility of fast adaptation in the inner loop (meta-training) of meta-learning (Yao et al., 2021). Recent works aim to address the meta-generalization problem with meta-regularizations, such as constraining the meta-initialization space (Yin et al., 2019), enforcing similar performance of the meta-model across different tasks (Jamal & Qi, 2019), and augmenting the meta-training data (Rajendran et al., 2020b; Ni et al., 2021; Yao et al., 2021). These methods significantly enhance generalization to unseen tasks. However, they ignore the adaptation-generalization to the data distribution of the meta-testing tasks (Figure 1), which is not negligible.
This work takes a first step toward optimizing both meta-generalization and adaptation-generalization for meta-learning. However, adaptation-generalization is significantly challenging for meta-learning, where we face a dilemma between fast adaptation and generality: 1) regularizing the model at meta-testing time can enhance generalization to the task domain, but limits the fast adaptation that is the goal of meta-learning; 2) exacerbating the overfitting to the few-shot samples at meta-testing time can enhance fast adaptation, but limits the generality to the task domain.
To address this challenge, we consider learning a meta-model that is resistant to adapted-model overfitting at meta-testing time. To achieve this, we design a general mechanism called Minimax-Meta Regularization for meta-learning. During meta-training, we encourage the adapted model to overfit the support data by adding an inverse (negative) regularization in the inner loop, and encourage the meta-model to generalize to the test samples by adding a positive regularization in the outer loop. By doing so, the learned meta-model can be meta-generalized, making adapted models perform well on the query (meta-validation) set even when they are prone to overfitting the support (meta-training) set. Therefore, during meta-testing the adapted model can still generalize to the task domain even though it overfits the few-shot samples. In particular, Minimax-Meta Regularization is general enough to be implemented in any bi-level optimization framework without additional computational cost.
To verify the above intuition, we conduct experiments with the basic MAML (Finn et al., 2017) framework. Surprisingly, we find that both positively regularizing the outer-loop meta-training and negatively regularizing the inner-loop adaptation can significantly improve few-shot classification. Another interesting finding is that adding positive regularization in the inner loop impairs performance, which indirectly supports the efficacy of our proposal. We conduct extensive experiments on few-shot regression, few-shot classification, and robust reweighting (Ren et al., 2018). The experimental results show that Minimax-Meta Regularization generally improves the performance of bi-level meta-learning algorithms and is compatible with common methodologies for enhancing meta-learning. Moreover, Minimax-Meta Regularization shows the capability to improve the generalization of meta-learning algorithms and helps address meta-overfitting problems to a certain extent.
Our Contributions. 1) We identify a limitation of previous works on meta-generalization, namely that they ignore adaptation-generalization; 2) we design a general mechanism named Minimax-Meta Regularization for meta-learning, which aims to capture a meta-model that is both meta-generalized and resistant to adaptation overfitting; 3) we empirically verify the intuition behind Minimax-Meta Regularization and give possible reasons; 4) we conduct three different bi-level optimization tasks to show the efficacy of the proposed method.
2 PRELIMINARY
We first give a brief introduction to and notation for meta-learning. In the meta-learning problem setting that we consider, the goal is to learn a generalized model initialization that can adapt to new tasks from only a few samples. To achieve this, we require a set of support (meta-training) data $\{ D_i^{s} = \{(x_{i,j}^{s}, y_{i,j}^{s})\}_{j=1}^{k} \}_{i=1}^{n}$ and query (meta-testing) data $\{ D_i^{q} = \{(x_{i,j}^{q}, y_{i,j}^{q})\}_{j=1}^{m} \}_{i=1}^{n}$ sampled from tasks $\{T_i\}_{i=1}^{n}$ drawn from a distribution $p(T)$, where $k$ and $m$ denote the number of support and query samples and $n$ is the number of tasks. Let $L$ and $\mu$ denote the loss function and the inner-loop learning rate.
Meta-learning (Finn et al., 2017) simulates the adaptation and evaluation procedure of machine learning, and aims to learn a well-generalized model f parameterized by θ∗ by the following bilevel optimization
\theta^* = \arg\min_{\theta} \; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{m}\sum_{z \in D_i^{q}} L(\phi_i(\theta, D_i^{s}), z), \quad \text{s.t.} \;\; \phi_i(\theta, D_i^{s}) = \theta - \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z) \qquad (1)
where z represents a data sample (x, y). The outer loop (representing the meta-validation phase) measures the generalization performance of the adapted model φ_i on the query data D_i^q. The inner loop (representing the meta-training phase) specifies that the adapted model φ_i is fine-tuned from the initialization θ by gradient descent on the support data D_i^s. Note that more than one gradient step can be taken; formulation (1) is written with a single step for brevity.
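For reference, a minimal PyTorch 2.x sketch of the inner and outer computations of Eq. (1) for a single task is given below. This is our own illustration under the stated notation (the model, loss function, and batches are placeholders), not code from the paper.

```python
import torch
from torch.func import functional_call

def adapted_params(model, loss_fn, support_x, support_y, mu=0.01, steps=1):
    # Inner loop of Eq. (1): phi_i(theta, D_i^s) obtained by gradient descent from theta.
    phi = {n: p for n, p in model.named_parameters()}
    for _ in range(steps):
        loss = loss_fn(functional_call(model, phi, (support_x,)), support_y)
        grads = torch.autograd.grad(loss, list(phi.values()), create_graph=True)
        phi = {n: p - mu * g for (n, p), g in zip(phi.items(), grads)}
    return phi

def task_query_loss(model, loss_fn, phi, query_x, query_y):
    # Outer objective for one task: loss of the adapted model on the query set.
    return loss_fn(functional_call(model, phi, (query_x,)), query_y)
```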
3 META LEARNING WITH MINIMAX-META REGULARIZATION
We aim to learn a well-generalized meta-initialization that can quickly adapt to new tasks with robust performance. To achieve this, the meta-learner should be meta-generalized, i.e., learn a meta-model θ that is robust to the task distribution p(T), and adaptation-generalized, i.e., the adapted model φ(θ, D^s), D^s ∼ T, should be robust to the data distribution of the task domain T. The meta-generalization problem has been studied in many previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021) and can be addressed by designing regularization in the outer loop. However, due to the limited number of samples in D^s and D^q, the adaptation-generalization problem is significantly challenging for meta-learning.
To address this, we propose a novel and general regularization framework for meta-learning – Minimax-Meta Regularization. In this section, we first present the training objective for minimax-regularized meta-learning along with the intuition behind the design, and then run a simulation to verify the high-level insight.
3.1 TRAINING OBJECTIVE
Based on formulation (1) for meta-learning, we present the minimax-regularized meta-learning training objective as follows, where we add a positive regularization in the outer loop to achieve meta-generalization, and an inverse (negative) regularization in the inner loop to achieve adaptation-generalization.
\theta^* = \arg\min_{\theta} \; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{m}\sum_{z \in D_i^{q}} L(\phi_i(\theta, D_i^{s}), z) + \lambda_{out} \frac{1}{n}\sum_{i=1}^{n} \mathrm{Reg}_{out}(\phi_i(\theta, D_i^{s})), \qquad (2)

\text{s.t.} \quad \phi_i(\theta, D_i^{s}) = \arg\min_{\phi} \; \Big\langle \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z), \phi \Big\rangle + \frac{1}{2}\|\phi - \theta\|^2 - \lambda_{in} \mathrm{Reg}_{in}(\phi), \qquad (3)
where Reg_out and Reg_in are the regularizers in the outer and inner loops, and λ_out ≥ 0 and λ_in ≥ 0 are their respective coefficients. Note that the formulation $\phi_i(\theta, D_i^{s}) = \arg\min_{\phi} \langle \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z), \phi \rangle + \frac{1}{2}\|\phi - \theta\|^2$ in the inner loop is the mirror-descent (Beck & Teboulle, 2003) form equivalent to gradient descent. We next introduce the intuition behind this design.
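This equivalence, which the text states without derivation, follows by setting the gradient of the (unregularized, λ_in = 0) inner objective with respect to φ to zero:

\nabla_{\phi} \Big[ \Big\langle \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z), \phi \Big\rangle + \frac{1}{2}\|\phi - \theta\|^2 \Big] = \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z) + \phi - \theta = 0 \;\;\Longrightarrow\;\; \phi = \theta - \mu \nabla_{\theta} \sum_{z \in D_i^{s}} L(\theta, z),

which is exactly the gradient-descent update of Eq. (1).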
Outer positive regularization. As defined in Eq. (2), we add a positive regularization Reg_out(φ_i(θ, D_i^s)) that penalizes overfitting of the adapted model φ_i(θ, D_i^s). By doing so, the meta-learner is forced to learn a generalized meta-model θ* such that the adapted model φ_i(θ*, D_i^s) for each task does not overfit and generalizes to the query data. This idea has been studied in previous works (Yin et al., 2019; Collins et al., 2020; Yao et al., 2021; Ni et al., 2021) and has been shown to significantly enhance meta-generalization.
Inner inverse regularization. The generalization performance depends not only on the complexity of the adapted model φ_i(θ, D_i^s), but also on the adaptation rule, i.e., the formulation of the inner-loop function. As defined in Eq. (3), we add an inverse regularization Reg_in(φ) that negatively regularizes the model complexity of the adapted model φ_i(θ, D_i^s). By doing so, the inner-loop function simulates the adaptation overfitting that occurs during meta-testing by enforcing the adapted model to overfit during meta-training. Therefore, through minimax-regularized meta-learning, the learned meta-model θ* becomes resistant to adaptation overfitting.
From the above discussion, Minimax-Meta Regularization enables meta-learning to capture a meta-model that is both meta-generalized and adaptation-generalized. The framework is computationally efficient and incurs no additional computational cost. In addition, Minimax-Meta Regularization applies to any bi-level optimization formulation and can thus be directly applied to different meta-algorithms on different bi-level learning problems.
3.2 EMPIRICAL VERIFICATION.
We next verify the design with a simulation study, running the basic MAML framework (Finn et al., 2017) with different regularization types. As illustrated in Table 1, we make the following observations:
Outer positive regularization enhances the generalization performance. Comparing the results of "no regularization" and "regularize the outer-loop", we observe that adding outer regularization yields 1.56% and 4.42% accuracy improvements in the 1-shot and 5-shot experiments respectively, which verifies the efficacy of the outer regularization. This is aligned with the intuition that outer regularization enhances meta-generalization, leading to better performance.
Inner negative regularization enhances the generalization performance. Comparing the results of "no regularization" and "inverse regularize the inner-loop", we observe that adding inner inverse regularization yields 1.17% and 1.24% accuracy improvements in the 1-shot and 5-shot experiments respectively, which verifies the efficacy of the inner inverse regularization. This is aligned with the intuition that inner inverse regularization enhances adaptation-generalization, thus improving performance.
The outer regularization and inner inverse regularization are compatible. Comparing the results of "Minimax-Meta Regularization", "regularize the outer-loop", and "inverse regularize the inner-loop", we observe that Minimax-Meta Regularization achieves 0.52% (1-shot)/0.61% (5-shot) and 0.92% (1-shot)/3.79% (5-shot) accuracy improvements over solely regularizing the outer loop and solely inverse-regularizing the inner loop respectively, which verifies the compatibility of the two regularizations. This is aligned with the intuition that meta-generalization and adaptation-generalization are not in conflict.
Inner positive regularization impairs the generalization performance. Comparing the results of "no regularization" and "regularize the inner-loop", we observe that adding inner positive regularization causes -1.86% and -0.34% accuracy drops in the 1-shot and 5-shot experiments, which aligns with the intuition that positive regularization limiting the adaptation in the inner loop impairs adaptation-generalization.
4 RELATED WORK
Meta-learning. One line of meta-learning methods has sought to train recurrent neural networks that ingest entire datasets (Santoro et al., 2016; Duan et al., 2016); however, these need to place constraints on the model architecture. Another line aims to learn a transferable metric space between samples from previous tasks (Vinyals et al., 2016; Snell et al., 2017; Mishra et al., 2018; Oreshkin et al., 2018), but is limited to classification problems. In this paper, we focus on gradient-based meta-learning methods that learn a meta-initialization (Finn et al., 2017; 2018; Li et al., 2017; Finn & Levine, 2018; Grant et al., 2018; Lee & Choi, 2018; Park & Oliva, 2019; Flennerhag et al., 2020) that is well generalized across meta-training tasks while being agnostic to both model architecture and problem type. However, these approaches have been shown to overfit the meta-training tasks and generalize poorly to meta-testing tasks (Yoon et al., 2018; Collins et al., 2020; Rothfuss et al., 2021; Yao et al., 2021).
Meta-Regularization. Standard regularizations such as weight decay (Krogh & Hertz, 1992), dropout (Gal & Ghahramani, 2016), and incorporating noise (Tishby & Zaslavsky, 2015; Alemi et al., 2016; Achille & Soatto, 2018) can significantly enhance the generality of single-level machine learning. However, straightforwardly regularizing the neural network limits the flexibility of fast adaptation in the inner loop (Yao et al., 2021). Recently, a few works have proposed meta-regularizations to improve meta-generalization. MR-MAML (Yin et al., 2019) constrains the search space of the meta-model and allows the adaptation to be sufficient in the inner loop. Jamal & Qi (2019) proposed TAML to enforce the meta-model to perform similarly across tasks. Rajendran et al. (2020a) explored an information-theoretic framework of meta-augmentation by adding randomness to the labels of both support and query sets. Yao et al. (2021) proposed two task augmentation methods – MetaMix and Channel Shuffle, which are theoretically shown to generalize to unseen tasks. Ni et al. (2021) investigated the distinct ways in which data augmentation can be integrated at both the image and class levels. Rothfuss et al. (2021) addressed the meta-generalization problem using the PAC-Bayesian framework and proposed PACOH, which is PAC-optimal with Gaussian processes. However, these works focus only on meta-generalization, i.e., generalizing to unseen tasks, while the adaptation-generalization that measures how the adapted model generalizes to the task domain is barely considered.
This paper proposes the Minimax-Meta Regularization for meta-learning, implementing a positive regularization in the outer-loop and a negative regularization in the inner-loop. The framework can enhance both meta-generalization and adaptation-generalization, and thus improve the performance.
5 EXPERIMENTS
In this section, we conduct extensive experiments on three types of classical meta-learning tasks, including few-shot classification, few-shot regression, and robust reweighting with meta-learning, to demonstrate the efficacy of our proposed methods. With these experiments, we demonstrate that our methods i) outperform previous meta-learning algorithms in terms of predictive accuracy and ii) effectively mitigate meta-overfitting. We introduce the experimental setup, results, and analysis in the following subsections.
5.1 FEW-SHOT CLASSIFICATION
We first carry out experiments on the few-shot classification task, one of the most popular tasks for evaluating meta-learning algorithms. To verify the effectiveness of our approach, we integrate Minimax-Meta Regularization into bi-level optimization meta-learning algorithms and benchmark them against other methods.
5.1.1 EXPERIMENTAL SETUP
Datasets. For the few-shot classification task, we experiment on the publicly released datasets Mini-Imagenet (Ravi & Larochelle, 2017; Vinyals et al., 2016) and Omniglot (Lake et al., 2015), following the few-shot benchmark setting provided in (Antoniou et al., 2018). The Omniglot dataset is a collection of 1623 character classes from different alphabets, with 20 instances per class. In the experiment, all character classes are shuffled and then divided into training, validation, and test sets of 1150, 50, and 423 classes respectively. Rotation augmentation is applied to the images in 90-degree increments to create new classes. The second dataset used in the few-shot classification experiment is Mini-Imagenet (Ravi & Larochelle, 2017), which is sampled from ImageNet and contains 600 instances for each of 100 classes. Each image is resized to 84 × 84. Following Ravi & Larochelle (2017), we split the Mini-Imagenet dataset into 64 classes for training, 12 classes for validation, and 24 classes for testing.
Experimental details. We select MAML (Finn et al., 2017) as the representative bi-level optimization meta-learning model. To evaluate the effectiveness of Minimax-Meta Regularization, we first run the baseline MAML on the 5-way 1/5-shot Mini-Imagenet setting. Then, on top of the original MAML, we implement Minimax-MAML by adding Minimax-Meta Regularization. We compare Minimax-MAML with the original MAML and other meta-learning baselines on the 5-way 1/5-shot Mini-Imagenet setting and the 20-way 1-shot Omniglot setting. The compared baselines include Matching Networks (Vinyals et al., 2016), Meta-SGD (Li et al., 2017), Meta-Networks (Munkhdalai & Yu, 2017), Siamese Nets (Koch et al., 2015), Neural Statistician (Edwards & Storkey, 2016), and Memory Module (Kaiser et al., 2017). We also include MAML++ (Antoniou et al., 2018) and further implement Minimax-MAML++ for comparison. MAML++ is an improved version of MAML that combines six specific methodologies to improve MAML's performance. We include MAML++ in our experiments to study two questions: i) by comparing Minimax-MAML with MAML++, we analyze whether Minimax-Meta Regularization, as a general improvement mechanism, has the potential to outperform algorithm-specific methodologies; ii) by comparing Minimax-MAML++ with MAML++, we evaluate whether Minimax-Meta Regularization is compatible with complicated model-specific improvement methodologies in bi-level optimization models. Note that the regularization is only added during the training phase. All the MAML/MAML++ experiments involving regularization share the same form of regularization objective, which combines l2-norm regularization and output-entropy regularization. More detailed experimental settings can be found in Appendix B.
5.1.2 RESULTS AND ANALYSIS
The baseline comparison results under the Omniglot and Mini-Imagenet settings are shown in Table 2 and Table 3. Minimax-Meta Regularization is shown to improve both the original MAML and the MAML++ frameworks. In the Omniglot 20-way 1-shot classification experiment, the mean accuracies of MAML and MAML++ are improved from 94.20% and 97.21% to 95.76% and 97.77% respectively. Both methods had unstable results in these experiments; after adopting Minimax-Meta Regularization, the standard deviations of their final accuracies are significantly reduced, indicating better stability. Minimax-MAML++ reaches the best performance in this setting compared to the other baselines, with good stability. Significant improvements from Minimax-Meta Regularization are also observed in the Mini-Imagenet 5-way 1/5-shot classification experiments. In the 1-shot experiments, the original MAML cannot outperform the Meta-SGD and Meta-Networks baselines; Minimax-Meta Regularization improves the average accuracy of MAML from 48.75% to 50.84%, which enables MAML to outperform the other baselines. In the 5-shot experiments, Minimax-MAML outperforms MAML++ by 1.02%. Considering that MAML++ adopts six individual techniques specifically designed for MAML, this result shows the strong effectiveness of Minimax-Meta Regularization as a general methodology.
5.2 FEW-SHOT REGRESSION
5.2.1 EXPERIMENTAL SETUP
Datasets. For the few-shot regression task, we consider a non-mutually-exclusive regression problem based on the synthetic Sinusoids dataset. Each Sinusoids regression task involves regressing from the input to the output of a generated sine wave, where the amplitudes of the sinusoids differ among tasks. In our experiment, we follow the setting provided by Yin et al. (2019). The Sinusoids data are created as follows: the amplitude A of the sinusoid is uniformly sampled from a set of 20 scalars {0.1, 0.3, ..., 4}; u is sampled uniformly from [−5, 5]; and y is sampled from N(A sin(u), 0.1^2). Experimental details. During training, both u and A are provided as input to the models, i.e., x = (u, A). At test time, we expand the range of the tasks by sampling the amplitude A uniformly from [0.1, 4] and use a random one-hot vector as the input of the network. The meta-training tasks are a proper subset of the meta-test tasks. Under this setting, the amplitude input at training time makes the regression problem non-mutually-exclusive, which makes the meta-learning model prone to the memorization problem (Yin et al., 2019) during training. In the experiments, we compare with the representative bi-level optimization meta-learning baseline MAML (Finn et al., 2017) and with meta-regularized MAML (MR-MAML) (Yin et al., 2019), where the regularization is either on the activations (MR-MAML(A)) or on the weights (MR-MAML(W)). Both MR-MAML(A) and MR-MAML(W) were originally designed to address the memorization problem. Minimax-Meta Regularization is implemented for all three of the above methods, with the l2 norm as the regularization objective for both the inner loop and the outer loop. A sketch of the task sampler is given below.
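The following short snippet is our own sketch of the non-mutually-exclusive sinusoid task generation described above (the amplitude grid and the function names are our placeholders; the paper does not provide code here).

```python
import numpy as np

AMPLITUDES = np.linspace(0.1, 4.0, 20)   # 20 evenly spaced amplitudes in [0.1, 4]

def sample_sinusoid_task(k_shot, rng=np.random):
    A = rng.choice(AMPLITUDES)
    u = rng.uniform(-5.0, 5.0, size=(k_shot, 1))
    y = rng.normal(A * np.sin(u), 0.1)                     # y ~ N(A sin(u), 0.1^2)
    x = np.concatenate([u, np.full_like(u, A)], axis=1)    # training input x = (u, A)
    return x, y
```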
5.2.2 RESULTS AND ANALYSIS
Original MAML was shown to be capable of solving the standard sinusoid few-shot regression problem (Finn et al., 2017). However, the results of the non-mutually-exclusive sinusoid regression experiment suggest that the added amplitude input makes MAML suffer from the memorization problem and yields poor test results. From the experimental results, we observe that Minimax-Meta Regularization improves the performance of MAML on both 5-shot and 10-shot tasks. In the 10-shot task, the test MSE of MAML improved from 0.153 to 0.125 with Minimax-Meta Regularization, which is close to that of MR-MAML(A). This observation suggests that the minimax regularization can, to some extent, help the meta-learning model become more resistant to the memorization problem. Moreover, by comparing the MR-MAML methods with and without Minimax-Meta Regularization, we find that both MR-MAML(A) and MR-MAML(W) gain performance improvements from the added Minimax-Meta Regularization on both 5-shot and 10-shot tasks, and the smaller standard deviations indicate improved stability. This shows that the minimax regularization is compatible with methods specifically designed to address the memorization problem and can further improve their performance.
5.3 ROBUST REWEIGHTING WITH META-LEARNING
5.3.1 EXPERIMENTAL SETUP
To verify the general effectiveness of our proposed method, we further conduct experiments on the task of robust reweighting with meta-learning. For this experiment, we compare the performance of our method and the baselines on a noisy MNIST dataset, created by randomly flipping the labels of 40% of the training images. Each image has a dimension of 28×28. The task is to classify each image into one of the ten handwritten digits (0 to 9), where 40% of the 10,000 training images carry noisy labels. The validation set consists of 100 correctly-labeled images randomly selected from the correctly-labeled samples in the training set, ensuring that the reweighting method does not have the privilege of training on more data. We use LeNet-5 as the backbone model and train it for 1000 epochs. The learning rates for the first, middle, and last thirds of the training epochs are set to 1e-2, 1e-3, and 1e-4 respectively. The basic meta-learning baseline we evaluate here is Meta-Reweighting (Ren et al., 2018). The Meta-Reweighting algorithm learns to assign weights to training examples for robust learning. To determine the example weights, Meta-Reweighting performs a meta gradient descent step on the mini-batch example weights (initialized to zero) to minimize the loss on a clean, unbiased validation set. Our method adds Minimax-Meta Regularization on top of Meta-Reweighting. We add regularization on the outer loop, where the optimal weights are computed and adopted for the meta-update; the inverted regularization is added on the inner loop, where the weighted inner model fits the clean, unbiased validation set for the optimal weight calculation. Intuitively, such regularization makes the model more conservative when updating on the noisy training data in the outer loop and places more value on the diversity of its predictions, thereby resisting overfitting. At the same time, the inner model is encouraged by the inverted regularization to make sharper predictions on the clean validation set, so that the potential of the clean data can be utilized more fully. The regularization objective used in our method is the output entropy, which is maximized in the outer loop and minimized in the inner loop. We call our method Minimax Reweighting. Detailed information on the implementation of Minimax Reweighting is provided in Appendix C.
5.3.2 RESULTS AND ANALYSIS
Under this setting, models undergo long training with a large initial learning rate and are therefore extremely prone to overfitting the training dataset. To understand the performance of the models under this robust-learning setting, we first look at their training curves (Figures 2 and 3). Since the training set is noisy, a model that overfits the training set shows a significant performance drop on the clean test set. From the perspective of robust learning, the directly trained model sets the lower performance bound to some extent: since it has no denoising ability, it quickly overfits the training set. It reaches peak accuracy on the clean test set around the 80th epoch, after which overfitting begins. We can identify this overfitting from the training and testing accuracy curves. Since 40% of the labels in the training set are wrong, once the model predicts the training data with accuracy greater than 60%, it is fitting the distribution of the noisy training data instead of the ground-truth distribution, and at the same time the performance on the clean test set starts to degrade. Finally, the training and testing accuracy of the directly trained model converge to nearly 100% and 60% respectively, indicating complete overfitting. In contrast, a model with optimal learning robustness should never overfit the training set: it would maintain a training accuracy close to 60% (since only 60% of the training labels are correct) while keeping optimal performance on the clean test set. Compared to direct training, the training curve of the Meta-Reweighting baseline (Ren et al., 2018) shows a significant improvement in learning robustness. However, it still suffers from overfitting: it neither completely overfits the training dataset nor ignores all the noise, and its training accuracy converges to around 70%. The Meta-Reweighting model ultimately maintains a test accuracy of around 87.5%, with a continual decline in test accuracy after roughly the 100th epoch. Minimax Reweighting nearly reaches the optimal learning robustness under this setting: its training accuracy stays at around 60% with hardly any change throughout training, and its test accuracy maintains a peak value of around 95.5% without observable degradation. To further evaluate the effectiveness of Minimax Reweighting, we implemented outer-loop-only regularization on top of the Meta-Reweighting algorithm for comparison. The results indicate that regularizing only the outer loop at the meta-level cannot reach the performance of Minimax-Meta Regularization. Quantitative results for the final accuracy are shown in Table 5. In terms of training accuracy, the original Meta-Reweighting algorithm reaches 70.38%, which indicates a degree of overfitting. In contrast, after adding regularization, both Minimax Reweighting and outer-loop-regularized Meta-Reweighting preserve a training accuracy of around 60%, reflecting resistance to training-set overfitting. However, Minimax Reweighting outperforms outer-loop-regularized Meta-Reweighting in clean test-set accuracy.
6 CONCLUSION
This paper studies the generalization problem of meta-learning. We go one step further than prior work and propose a new regularization mechanism for meta-learning – Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to fit an "aggressive, more specific, prone-to-overfitting" hypothesis, and minimize the regularizer in the outer loop to fit a "conservative, more general, resistant-to-overfitting" hypothesis. Such adversarial regularization forces the meta-model to maintain generality at the meta-level even when it is easy to learn specific assumptions at the task-specific level, thereby improving the robustness of the meta-model. In the experiments, representative meta-learning scenarios, including few-shot learning and robust reweighting, are used to verify our method. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrate the advantage of Minimax-Meta Regularization.
A GENERAL FORM OF MINIMAX-META REGULARIZATION IN META-LEARNING
Algorithm 1 General Form of Minimax-Meta Regularization in Meta-Learning
Require: Meta-training set D_meta-train, learner M with parameters φ
Require: Meta-learner R with parameters θ
Ensure: φ_T
1: randomly initialize φ
2: for d = 1, ..., n do
3:   D_support, D_query ← random dataset from D_meta-train
4:   φ_0 ← c_0
5:   for t = 1, ..., T do
6:     X_t, Y_t ← random batch from D_support
7:     L_t ← L(M(X_t; φ_{t−1}), Y_t) + InverseRegObjective(M(X_t; φ_{t−1}), Y_t, φ_{t−1})
8:     c_t ← R((∇_{φ_{t−1}} L_t, L_t); θ_{d−1})
9:     φ_t ← c_t
10:   end for
11:   X, Y ← D_query
12:   L_test ← L(M(X; φ_T), Y) + RegObjective(M(X; φ_T), Y, θ_{d−1})
13:   Update θ_d using ∇_{φ_T} L_test
14: end for
B DETAILS OF FEW-SHOT CLASSIFICATION EXPERIMENT
B.1 IMPLEMENTATION OF MINIMAX-MAML
Algorithm 2 Minimax-MAML
Require: p(T): distribution over tasks
Require: α, β: step size hyperparameters; γ_e, γ_n: regularization rates for the information-entropy and L2-norm terms
Ensure: θ_T
1: randomly initialize θ
2: while not done do
3:   for all T_i do
4:     Evaluate ∇_θ L_{T_i}(f_θ) with respect to K examples
5:     Compute adapted parameters with gradient descent: θ'_i = θ − α ∇_θ ( L_{T_i}(f_θ) + γ_e Entropy_{T_i}(f_θ) − γ_n L2Norm(θ) )
6:   end for
7:   Update θ ← θ − β ∇_θ Σ_{T_i ∼ p(T)} ( L_{T_i}(f_{θ'_i}) − γ_e Entropy_{T_i}(f_{θ'_i}) + γ_n L2Norm(θ'_i) )
8: end while
Pseudo code is shown in Algorithm 2.
For all few-shot classification experiments, we use γ_e = 2 and γ_n = 5e-5.
All the MAML/MAML++ experiments involving regularization share the same form of regularization objective. The regularization is achieved by combining the l2-norm regularization and output
entropy regularization. The bi-level optimization objective could be written as:
\theta^* = \arg\min_{\theta} \; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{m}\sum_{z \in D_i^{q}} \Big[ L(\phi_i(\theta, D_i^{s}), z) + \delta_{out} \cdot \big( \gamma_n \cdot 0.5\,\|\phi_i(\theta, D_i^{s})\|^2 - \gamma_e H(\phi_i(\theta, D_i^{s}), z) \big) \Big], \qquad (4)

\text{s.t.} \quad \phi_i(\theta, D_i^{s}) = \theta - \mu \nabla_{\theta} \sum_{z \in D_i^{s}} \Big[ L(\theta, z) + \delta_{in} \cdot \big( \gamma_n \cdot 0.5\,\|\theta\|^2 - \gamma_e H(\theta, z) \big) \Big], \qquad (5)
where H(θ, z) denotes the information entropy of the prediction for z made with θ as the model parameters. Here δ_in and δ_out determine the type of regularization for the inner loop and the outer loop respectively. Their values can be 1, 0, or -1, corresponding to normal regularization, no regularization, and inverse regularization. Original MAML has δ_in = 0 and δ_out = 0; MAML becomes Minimax-MAML when δ_in and δ_out are set to -1 and 1. The δ_in and δ_out values selected for the other experiments can be found in Table 1. γ_n and γ_e are hyper-parameters controlling the regularization rate; we use γ_n = 0.0005 and γ_e = 2 for all the experiments. All the MAML experiments take 5 inner steps. For each experiment, training takes 100 epochs, and each epoch consists of 500 iterations. After each epoch, the performance of the model is evaluated on the validation set. When training is complete, predictions on the test set are made by an ensemble of the top-5 models on the validation set. Each experiment is repeated 3 times. The Adam optimizer is adopted for model training, with a learning rate of 0.001, β1 = 0.9, and β2 = 0.99. The task batch size for all Omniglot experiments is 16; Mini-Imagenet experiments use task batch sizes of 4 and 2 for the 1-shot and 5-shot experiments respectively.
As for the empirical verification experiment, on top of the original MAML we implement each individual regularization method and run experiments for each one separately. The regularization methods include outer-loop-only regularization, inner-loop-only regularization, inverse inner-loop regularization, loss-function regularization, and Minimax-Meta Regularization. This stage of experiments completes the empirical verification of the method discussed in Section 3.2.
C IMPLEMENTATION DETAIL OF MINIMAX META-REWEIGHTING
Pseudo-code is shown in Algorithm 3. In our experiment, we use γ_in = 0.25 and γ_out = 2.
Algorithm 3 Weighted Minimax Meta-Reweighting
Require: model θ_0, training set D_f, validation set D_g, n, m, γ_in, γ_out
Ensure: θ_T
1: for t = 0, ..., T − 1 do
2:   {X_f, y_f} ← SampleMiniBatch(D_f, n)
3:   {X_g, y_g} ← SampleMiniBatch(D_g, m)
4:   ŷ_f ← Forward(X_f, θ_t)
5:   ε ← 0;  l_f ← Σ_{i=1}^{n} ε_i C(y_{f,i}, ŷ_{f,i})
6:   ∇θ_t ← BackwardAD(l_f, θ_t)
7:   θ̂_t ← θ_t − α ∇θ_t
8:   ŷ_g ← Forward(X_g, θ̂_t)
9:   l_g ← (1/m) Σ_{i=1}^{m} ( C(y_{g,i}, ŷ_{g,i}) + γ_in Entropy(ŷ_{g,i}) )
10:  ∇ε ← BackwardAD(l_g, ε)
11:  w̃ ← max(−∇ε, 0);  w ← w̃ / ( Σ_j w̃_j + δ(Σ_j w̃_j) )
12:  l̂_f ← Σ_{i=1}^{n} w_i ( C(y_{f,i}, ŷ_{f,i}) − γ_out Entropy(ŷ_{f,i}) )
13:  ∇θ_t ← BackwardAD(l̂_f, θ_t)
14:  θ_{t+1} ← OptimizerStep(θ_t, ∇θ_t)
15: end for | 1. What is the focus of the paper regarding generalization in meta-learning?
2. What are the strengths and weaknesses of the proposed Minimax-Meta Regularization for meta-learning?
3. How does the reviewer assess the clarity and quality of the writing in the paper?
4. What additional information should be provided in the experimental details?
5. How can the effectiveness of the approach be improved?
6. Are there any concerns regarding the datasets used in the experiments?
7. Any suggestions for improving the readability of the paper, such as providing a clear caption for Table 1 and avoiding confusing captioning in double columns? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies the problem of generalization in meta-learning. It considers two different kinds of generalization for meta-learning: meta-generalization and adaptation-generalization. The authors argue that recent works ignore adaptation-generalization, and they propose Minimax-Meta Regularization for meta-learning to address both problems. They then empirically study the effect of their proposed regularization for MAML and MAML++ on three different tasks, namely Few-shot Classification, Few-shot Regression and Robust Reweighting.
Review
Strengths
The separation into two different kinds of generalization is sound and interesting.
The proposed approach gives promising results.
The experiments are conducted on three different tasks, covering different problems to which meta-learning can be applied.
Weaknesses
The quality and clarity of the writing should be improved. A lot of sentences are not grammatically correct, which hinders readability. E.g, in Section 1, second paragraph, second sentence, "However, direct applying the regularization to the networks limited the flexibility of fast adaptation in the inner loop (meta-training) of meta-learning".
The experimental details for each task are limited; they are not sufficient to reproduce the results. At the very least, the architecture used and the training hyperparameters should appear.
The experiments are only conducted on MAML and MAML++. While the authors claim that their regularization is general and can be applied to all bi-level optimization formulations, it remains to be proven that it is effective for other methods.
The authors do not compare their results with other methods that tackle generalization in meta-learning. Although the authors claim that previous methods do not address the adaptation-generalization, this claim is not proven through experiments, nor do they prove that their approach improves the method in this regard.
The datasets considered are limited to MNIST for the Robust Reweighting task, Omniglot and mini-imagenet for the Few-shot Classification task and synthetic Sinusoids for the Few-shot Regression task. All these datasets are considered as either toy problems or easy datasets nowadays. To really assess the effectiveness of the approach, the authors should consider more complicated datasets.
Other remarks
There is no caption for Table 1 and no explanation of the experiment in the text. This hinders the understanding of the table, even more so as the table appears before the text referring to it.
The figures and tables in double columns lead to really confusing captioning.
In Section 2, the authors denote the model by f, but then use ϕ in the remainder of the paper. |
ICLR | Title
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Abstract
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
1 INTRODUCTION
Many of the properties of dynamical systems only become apparent when they move or change as the result of forces applied to them. In most applications we are interested in behavior in terms of positions, velocities, and accelerations, and in some cases the properties of interest may only be observed in subtle variations in the higher-order dynamics (e.g., acceleration). Whether monitoring the flight of a drone to create a control mechanism for stabilization or analyzing the fluid dynamics of the cardiovascular system in the human body, there can be a need to recover these dynamics accurately. However, most video-based systems are trained on lower-order signals, such as position in the case of landmark tracking or velocity/rate-of-change (optical flow) in the case of visual odometry (Nister et al., 2004). Thus, they optimize for lower (zeroth or first) order dynamics. Does this harm their ability to estimate higher order changes? We hypothesize that networks trained to predict temporal signals will benefit from combined multi-derivative learning objectives. To test this hypothesis, we explore video-based cardiac measurement as an example application with a complex dynamical system (the cardiovascular system) and introduce simple but effective changes to the inputs and outputs to significantly improve the measurement of clinically relevant parameters.
Photoplethysmography (PPG) is a low-cost and non-invasive method for measuring the cardiovascular blood volume pulse (BVP). There are many clinical applications for PPG as the signal contains substantial information about health state and risk of cardiovascular diseases (Elgendi et al., 2019; Reisner et al., 2008; Pereira et al., 2020). In the current world, an acutely relevant application of PPG is for pulse oximetry (i.e. measuring pulse rate and blood oxygen saturation) as it can be used to detect low blood oxygen levels associated with the onset of COVID-19 (Greenhalgh et al., 2021). The COVID-19 pandemic has accelerated the adoption of telehealth systems (Annis et al., 2020) with more and more clinical consultations being conducted virtually. Therefore, techniques for remotely monitoring physiological vital signs are becoming increasingly important (Gawałko et al., 2021; Rohmetra et al., 2021). As one might expect, with many clinical applications the precision with which the PPG signal can be recovered is of critical importance when it comes to accurate inference of downstream conditions and the confidence of practitioners in the technology.
To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative (Chen & McDuff, 2018; Liu et al., 2020; 2021; Poh et al., 2010a). However, the second derivative of the PPG signal highlights subtle features that can be difficult to discern from those in the lower derivatives. Since the second derivative reflects the acceleration (Takazawa, 1993) or the rate-of rate-of change of the blood volume, it is more closely related to the change in pressure applied by the heart on blood vessels and its relation to vascular health.
An example of a particular feature accentuated in the second-derivative (i.e. acceleration) PPG is the dicrotic notch (see Fig. 1), which occurs when the heart’s aortic valve closes due to the pressure gradient between the aorta and the left ventricle. The dicrotic notch may only manifest as an inflection in the raw PPG wave; however, in the second derivative this inflection is a maxima. Inoue et al. (2017) found that the second derivative of the PPG signal can be used as an indicator of arterial stiffness - which itself is an indicator of cardiac disease. Takazawa et al. (1998) evaluated the second derivative of the PPG waveform and found that its characteristic shape can be used to estimate vascular aging, which was higher in subjects with a history of diabetes mellitus, hypertension, hypercholesterolemia, and ischemic heart disease compared to age-matched subjects without.
While the second derivative of a signal can be a rich source of information, often the zeroth- or first-order dynamics are given priority. For example, Chen & McDuff (2018) observed that training video- or imaging-based PPG (iPPG) models using first-derivative (difference) frames as input with an objective function of minimizing the mean squared error between the prediction and the first derivative of the target BVP signal was effective. This approach was used because the authors were designing their system to measure systolic time intervals only, which are most prominent in the lower order signals. However, they did not combine this with higher-order derivatives nor did they do any systematic comparison across derivative objectives.
We argue that a model trained with an explicit second-derivative (acceleration) objective should produce feature representations that better preserve/recover these dynamics than methods that simply derive acceleration from velocity. We observe that providing the model with a second derivative input also helps the network to better predict both the first and second derivative signals.
Finally, as diverse labeled data for training supervised models for predicting dynamical signals is often difficult to come by, we build on promising work in simulation to obtain our training data. Since light is absorbed and reflected differently for different skin tones (Bent et al., 2020; Dasari et al., 2021) having a training set that represents the true diversity of the target population is crucial for sufficient generalization. Our results show that models trained with synthetic data can learn parameters that successfully generalize to real human subjects. While this is not a central focus of our paper, we believe that it presents a promising proof-of-concept for future work.
To summarize, in this paper, we 1) demonstrate that directly incorporating higher-order dynamics into the loss function improves the quality of the estimated higher-order signals in terms of waveform morphology, 2) show that adding second-derivative inputs additionally improves performance, and 3) describe a novel deep learning architecture that incorporates second-derivative input frames and target signals, and evaluate it against clinical-grade contact sensor measurements.
2 BACKGROUND
Learning Higher-Order Motion from Videos. Despite its significance in many tasks, acceleration is often not explicitly modeled in many computer vision methods. However, there is a small body of literature that has considered how to recover (Edison & Jiji, 2017) and amplify optical acceleration (Zhang et al., 2017; Takeda et al., 2018). Given that acceleration can be equally as important as position and velocity in understanding dynamical systems, we argue that this topic deserves further attention.
A particularly relevant problem to ours is identifying small changes in videos (Wu et al., 2012; Zhang et al., 2017; Chen & McDuff, 2020; Takeda et al., 2018), and specifically in acceleration in the presence of relatively large motion. As an example, in the iPPG prediction task the aim is to identify minor changes in skin coloring due to variation in blood flow patterns, while ignoring major pixel changes due to subject or background motion. One method proposed by Zhang et al. (2017) for overcoming this signal separation problem is Video Acceleration Magnification, in which
large motions are assumed to be linear on the temporal scale of small changes while small changes deviate from this linearity. An extension to this method focused on making it more robust to sudden motions (Takeda et al., 2018). In both cases, a combination of Eulerian and Lagrangian approaches was used, rather than utilizing a supervised learning paradigm. Of relevance here is also work magnifying subtle physiological changes using neural architectures (Chen & McDuff, 2020), which have been shown to effectively separate signal and noise in both the spatial and temporal domains.
Our work might be most closely related to prior research into feature descriptors for optical acceleration (Edison & Jiji, 2017). One example uses histograms of optical acceleration to effectively encode the motion information. However, this work also defined handcrafted features, rather than learning representations from data. Our work is also related conceptually to architectures such as SlowFast (Feichtenhofer et al., 2019) in that it utilizes multiple “pathways” to learn different properties of the dynamics within a video. We were inspired by this approach; however, unlike SlowFast, we focus specifically on higher-order pathways rather than slower and faster frame sequences.
Video-based Cardiac Measurement. Diffuse reflections from the body vary depending on how much light is absorbed in the peripheral layers of the skin, and this is influenced by the volume of blood in the capillaries. Digital cameras can capture these very subtle changes in light, which can then be used to recover the PPG signal (Wu et al., 2000; Takano & Ohta, 2007; Verkruysse et al., 2008; Poh et al., 2010a). The task then becomes separating pixel changes due to blood flow from those due to body motions, ambient lighting variation, and other environmental factors that we consider noise in this context. While earlier methods leveraged source separation algorithms (Wang et al., 2016), such as ICA (Poh et al., 2010a) or PCA (Lewandowska et al., 2011), neural models provide the current state-of-the-art in this domain (Chen & McDuff, 2018; Liu et al., 2020; 2021; Song et al., 2021; Lu et al., 2021). These architectures support learning spatial attention and source-specific temporal variations, and separating these from various sources of noise. Typically, the input to these models is normalized video frames and the output is a 1-D time series prediction of the PPG waveform or the heart rate. The vast majority of work has evaluated these methods based on errors in heart rate estimation, which considers the dominant or “systolic” frequency alone. Only a few papers have used more challenging evaluation criteria, such as the estimation of systolic to diastolic peaks (McDuff et al., 2014).
3 OPTICAL BASIS
We start by providing an optical basis for the measurement of the pulse wave using a camera, and specifically of its second derivative signal. Starting with Shafer’s Dichromatic Reflection Model (DRM) (Wang et al., 2016; Chen & McDuff, 2018; Liu et al., 2020), we want to understand how higher-order changes in the blood volume pulse impact pixel intensities, to motivate the design of our inputs and loss function. Based on the DRM, the RGB values captured by the cameras are given by:
C_k(t) = I(t) \cdot \left(v_s(t) + v_d(t)\right) + v_n(t) \qquad (1)
where I(t) is the luminance intensity level, modulated by the specular reflection vs(t) and the diffuse reflection vd(t). Quantization noise of the camera sensor is captured by vn(t). The diffuse component vd(t) can be decomposed into stationary and time-varying parts (Wang et al., 2016):
v_d(t) = u_d \cdot d_0 + u_p \cdot p(t) \qquad (2)
where ud is the unit color vector of the skin tissue; d0 is the stationary reflection strength; up is the relative pulsatile strength caused by hemoglobin and melanin absorption; and p(t) represents the physiological changes. Let us assume for simplicity that the luminance I (i.e., the illumination in the video) is constant rather than time varying, which is a reasonable assumption for short videos and for those in which the subject can control their environment (e.g., indoors). Then, differentiating twice with respect to time t:
\frac{\partial^2 C_k(t)}{\partial t^2} = I \cdot \left( \frac{\partial^2 v_s(t)}{\partial t^2} + \frac{\partial^2 (u_d \cdot d_0)}{\partial t^2} + \frac{\partial^2 (u_p \cdot p(t))}{\partial t^2} + \frac{\partial^2 v_n(t)}{\partial t^2} \right) \qquad (3)
The non-time varying part ud · d0 becomes zero. Thus simplifying the equation to:
\frac{\partial^2 C_k(t)}{\partial t^2} = I \cdot \left( \frac{\partial^2 v_s(t)}{\partial t^2} + \frac{\partial^2 (u_p \cdot p(t))}{\partial t^2} + \frac{\partial^2 v_n(t)}{\partial t^2} \right) \qquad (4)
Furthermore, if specular reflections do not vary over time (e.g., if the camera and subject are stationary), the vs(t) term will also become zero. This means that the second derivative changes in pixel intensities are a sum of second derivative changes in PPG and camera noise. With current camera technology, and little video compression, image noise is typically much smaller than the PPG signal. Therefore, we would expect the pixel changes to be dominated by second derivative variations in the blood volume pulse:
\frac{\partial^2 C_k(t)}{\partial t^2} = I \cdot \frac{\partial^2 (u_p \cdot p(t))}{\partial t^2} \qquad (5)
As such, we can infer that, when attempting to estimate the second derivative of the PPG signal from videos without very large motions or illumination changes, second-derivative changes in pixel space should be helpful, and that minimizing the loss between the second-derivative prediction and the ground truth will be the simplest learning task for the algorithm when the input consists of second-derivative pixel changes.
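To make this implication concrete, the following minimal NumPy sketch simulates a pixel time series under the DRM assumptions above (the toy pulsatile waveform, constant illumination, and noise level are assumed values for illustration only) and verifies that its second differences are dominated by the second differences of the pulsatile term, mirroring Eq. 5:

```python
import numpy as np

fs = 30.0                                  # assumed camera frame rate (Hz)
t = np.arange(0, 6, 1 / fs)                # 6-second clip
p = 0.5 * np.sin(2 * np.pi * 1.2 * t)      # toy pulsatile signal p(t), ~72 BPM

I = 1.0                                    # constant illumination (assumption above)
u_d, d_0, u_p = 0.8, 1.0, 0.05             # assumed stationary color / pulsatile strength
v_n = 1e-4 * np.random.randn(t.size)       # small camera quantization noise

# DRM pixel intensity with stationary specular and diffuse terms (Eqs. 1-2)
C = I * (u_d * d_0 + u_p * p) + v_n

def second_diff(x):
    """Discrete second derivative: x''(t) = (x(t) - x(t-1)) - (x(t-1) - x(t-2))."""
    return np.diff(x, n=2)

# The stationary term u_d * d_0 vanishes under double differencing, so the pixel
# second difference is dominated by I * u_p * p''(t), as in Eq. 5.
print(np.max(np.abs(second_diff(C) - I * u_p * second_diff(p))))  # ~ noise level only
```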
4 OUR MODEL
[Figure 1: Example BVP (PPG) waveform with its first and second derivatives plotted over time (s), annotated with the systolic foot, systolic peak, dicrotic notch, and the left ventricle ejection time (LVET) interval.]
Our architecture builds on the convolutional attention network (CAN) design, in which soft attention masks learned from the appearance frames highlight regions that carry the pulse signal (e.g. the participant’s skin) and ignore noisy regions (e.g. background). These attention masks are shared between the first-derivative branch and the second-derivative branch, as we expect the same spatial regions to contain first- and second-derivative information. After feature representations are extracted from frames within each derivative-input branch, the features are concatenated together for each time step and the target signals are then generated using recurrent neural network (RNN) layers. A diagram depicting the architecture used in our experiments is shown in Fig. 2.
4.1 PREDICTING MULTI-DERIVATIVE TARGET SIGNALS
The goal of iPPG is to obtain an estimate of the underlying PPG signal p(t) (as in Eq. 2) while only observing video frames X(t) containing a subject’s skin (in this case the face). Mathematically, this can be described as learning a function p̂(t) = f(X(t)) or, because we are interested in changes in blood volume, estimating the first derivative of the PPG signal, p̂′(t) = f(X(t), X′(t)), where the first-derivative PPG signal is defined as p′(t) = p(t) − p(t − 1). Using prior methods, to obtain an estimate of the PPG signal’s second derivative one would either differentiate the predicted PPG signal twice, or differentiate the predicted first-derivative PPG once, rather than calculating the acceleration PPG directly. In contrast, we explicitly predict the acceleration PPG waveform as a target signal. We define the second-derivative waveform as the difference between consecutive first-derivative time points: p′′(t) = p′(t) − p′(t − 1). We then train our model to predict the second-derivative waveform p̂′′(t) = f(X(t), X′(t)) given a set of input video frames X(t) and the corresponding normalized difference frames X′(t). To optimize our model parameters we minimize the mean squared difference between the true and predicted second-derivative waveforms:
\mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \left( p''(t) - \hat{p}''(t) \right)^2 \qquad (6)
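For concreteness, a minimal NumPy sketch of the discrete derivative targets and the loss in Eq. 6 is given below; the function and variable names are illustrative, not those of our released implementation:

```python
import numpy as np

def derivative(x, order=1):
    """Discrete derivative as defined above, e.g. p'(t) = p(t) - p(t-1)."""
    return np.diff(np.asarray(x, dtype=float), n=order)

def second_derivative_mse(p_true, p_sd_pred):
    """Eq. 6: mean squared error between true and predicted second-derivative waveforms."""
    p_sd_true = derivative(p_true, order=2)
    n = min(len(p_sd_true), len(p_sd_pred))   # length alignment convention is an assumption
    return float(np.mean((p_sd_true[:n] - np.asarray(p_sd_pred)[:n]) ** 2))
```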
4.2 LEVERAGING MULTI-DERIVATIVE INPUTS
It has been previously shown that the normalized difference frames are useful for predicting the first derivative PPG waveforms. Therefore, we hypothesized that incorporating the second derivative of the raw video frames X ′′(t) = X ′(t) − X ′(t − 1) (i.e. the difference-of-difference frames) may also be useful for predicting the PPG signal and its derivatives. Similar to the difference frames, we added a separate convolutional attention branch, where the attention mask is shared between both branches (see Fig. 2). Sharing the attention mask is a reasonable assumption as we would expect skin regions to all exhibit the signal and similar dynamics. After the feature maps in each branch are pooled into a single value per feature at each time step, the learned representations are concatenated together. These concatenated features over time are used as input sequences to the recurrent layers that generate the target waveforms.
Given that difference frames X ′(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X ′′(t) may be beneficial for predicting the second derivative PPG signal. In theory, if difference-of-difference features are indeed useful for predicting the acceleration PPG, then the CAN network should be able to learn those features
from the difference frames due to the 3D convolutional operations. However, manually adding the difference-of-difference frames could help guide the model. To examine the effect of combining higher-order inputs and target signals, we fit a model p̂′′(t) = f(X(t), X ′(t), X ′′(t)) to predict the second-derivative PPG.
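The construction of the derivative inputs can be sketched as follows; this is a minimal NumPy example, and the normalization constant, toy clip length, and array names are assumptions based on the description in Sec. 5.2 rather than the exact pipeline:

```python
import numpy as np

def normalized_difference_frames(frames, eps=1e-7):
    """X'(t): consecutive frame differences normalized by the sum of the two frames."""
    frames = frames.astype(np.float32)            # shape (num_frames, H, W, C)
    return (frames[1:] - frames[:-1]) / (frames[1:] + frames[:-1] + eps)

def difference_of_difference_frames(diff_frames):
    """X''(t): differences of consecutive normalized difference frames."""
    return diff_frames[1:] - diff_frames[:-1]

raw = np.random.rand(31, 36, 36, 3).astype(np.float32)   # toy clip of 31 raw frames
x_fd = normalized_difference_frames(raw)                  # T = 30 difference frames
x_sd = difference_of_difference_frames(x_fd)              # T - 1 = 29 second-derivative frames
print(x_fd.shape, x_sd.shape)
```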
5 EXPERIMENTS
In this section we will describe the data used to train and evaluate our method and perform a systematic ablation study in which we test different combinations of inputs and outputs.
5.1 DATA
Training To train our models using a large and diverse set of subjects, we leverage recent work that uses highly-parameterized synthetic avatars to generate videos containing simulated subjects with various movements and backgrounds (McDuff et al., 2020). To drive changes in the synthetic avatars’ appearance, the PPG signal is used to manipulate the base skin color and the subsurface radius (McDuff et al., 2020). The subsurface scattering is spatially weighted using an artist-created subsurface scattering radius texture that captures variations in the thickness of the skin across the face. Using physiological waveform signals from the MIMIC PhysioNet (Goldberger et al., 2000) database, we randomly sampled windows of PPG waveforms from real patients. The physiological waveform data were sampled to maximize examples from different patients. Using the synthetic avatar pipeline and MIMIC waveforms, we generated 2,800 6-second videos, where half of the videos were generated using hand-crafted facial motion/action signals, and the other half using facial motion/action signals extracted using landmark detection on real videos. Examples of the avatars can be found in Appendix A.1.1.
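For illustration, a minimal sketch of the waveform-window sampling step is shown below; the resampling assumption, windows-per-record count, and function names are illustrative choices, not the exact pipeline of McDuff et al. (2020):

```python
import numpy as np

def sample_ppg_windows(waveforms, fs=30.0, duration_s=6.0, per_record=2, rng=None):
    """Sample fixed-length PPG windows from a list of per-patient waveform arrays.

    Assumes the waveforms have already been resampled to `fs`; the per-record
    count and resampling step are assumptions made for this example.
    """
    rng = np.random.default_rng() if rng is None else rng
    win = int(duration_s * fs)
    out = []
    for w in waveforms:
        if len(w) < win:
            continue
        for _ in range(per_record):
            start = rng.integers(0, len(w) - win + 1)
            out.append(np.asarray(w[start:start + win], dtype=float))
    return np.stack(out) if out else np.empty((0, win))
```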
Testing Given that we are focusing on recovering very subtle changes in pixel intensities due to the blood volume pulse, we use a highly controlled and very accurately annotated dataset of real videos for evaluation. The AFRL dataset (Estepp et al., 2014) consists of 300 videos from 25 participants (17 male and 8 female). Each video in the dataset has a resolution of 658x492 pixels sampled at 30 Hz. Ground truth PPG signals were recorded using a contact reflective PPG sensor attached to the subject’s index finger. Each participant was instructed to perform three head motion tasks including rotating the head along the horizontal axis, rotating the head along the vertical axis, and rotating the head randomly once every second to one of nine predefined locations. Since our goal in this work was to compare methods for estimating subtle waveform dynamics, which can be more difficult to do in the presence of large motion, we focused here on the first two AFRL tasks where participant motion is minimal. Examples of AFRL participants can be found in Appendix A.1.1.
5.2 IMPLEMENTATION DETAILS
We trained our models using a large dataset of generated synthetic avatars and evaluated model performance on the AFRL dataset, which consists of real human subjects. For each video, we first cropped the video frames so that the face was approximately centered. Next, we reduced the resolution of the video to 36x36 pixels to reduce noise and computational requirements while maintaining a useful spatial signal (Verkruysse et al., 2008; Wang et al., 2017; Poh et al., 2010b). The input to the attention branch was T raw video frames. The input to the first-derivative branch was a set of T normalized difference frames, calculated by subtracting consecutive frames and normalizing by their sum. The input to the second-derivative branch was a set of T − 1 difference-of-difference frames (second-derivative frames), calculated by subtracting consecutive normalized difference frames (i.e. the T frames used as input to the motion branch). In our experiments, we used a window size of T = 30 video frames to predict the target signals for the corresponding 30 time points. During training, a sliding window of 15 frames (i.e. 50% overlap between consecutive windows) was used to increase the total number of training examples. The model was implemented using TensorFlow (Abadi et al., 2016) and trained for eight epochs using the Adam (Kingma & Ba, 2017) optimizer with a learning rate of 0.001 and a batch size of 16.
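A minimal sketch of the sliding-window construction described above is shown below; the array names are illustrative and the model-building code is omitted:

```python
import numpy as np

WINDOW = 30   # frames per training example (T)
STRIDE = 15   # 50% overlap between consecutive windows

def make_windows(frames, targets):
    """Slice a clip (N, H, W, C) and its target waveform (N,) into overlapping windows."""
    xs, ys = [], []
    for start in range(0, frames.shape[0] - WINDOW + 1, STRIDE):
        xs.append(frames[start:start + WINDOW])
        ys.append(targets[start:start + WINDOW])
    return np.stack(xs), np.stack(ys)

# Training configuration described above (model construction omitted):
#   Adam optimizer with learning rate 1e-3, mean squared error on the derivative
#   targets, batch size 16, eight epochs.
```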
5.3 SYSTEMATIC EVALUATION
To measure the effect of using multi-derivative inputs and outputs, we systematically removed the second-derivative parts of the model and used quantitative and qualitative methods to examine the change in model performance. To quantitatively measure the quality of the predicted signal, we calculated two clinically important parameters - heart rate (HR) and the left ventricular ejection time (LVET) interval (see Appendix A.1.3 for details). Video-based HR prediction has been a major focus of iPPG applications, with many methods showing highly-accurate results. HR can be determined through peak detection or by determining the dominant frequency in the signal (e.g. using fast Fourier transform). Since current iPPG methods are able to achieve sufficiently-low error rates on the HR estimation task, we believe that metrics that capture the quality of waveform morphology should also be considered.
The LVET interval is defined as the time between the opening and closing of the heart’s aortic valve, i.e. the systolic phase when the heart is contracting (see Fig. 1). In the PPG waveform, this interval begins at the diastolic point (i.e. the global minimum pressure within a heartbeat cycle) and ends with the dicrotic notch (i.e. the local minimum occurring after the systolic peak, marking the end of the systolic phase and the beginning of the diastolic phase). LVET is typically correlated with cardiac output (stroke volume × heart rate) (Hamada et al., 1990), and has been shown to be an indicator of future heart failure, as the time interval decreases with left-ventricle dysfunction (Biering-Sørensen et al., 2018).
Calculating LVET requires identification of the diastolic point and the dicrotic notch. The diastolic point is a (global) minimum point within a heart beat, meaning it corresponds to a positive peak
in the second derivative signal according to the second-derivative test. Similarly, the dicrotic notch is a (local) minimum in the PPG signal, and appears as a positive peak in the second derivative following the diastolic peak in time. Because the dicrotic notch can often be a subtle feature, it is much easier to identify in the PPG’s second derivative compared to the raw signal. Therefore, it is a good example of clinically-important waveform morphology that is best captured by higher-order dynamics.
Removing the second-derivative frames In Table 1, quantitative evaluation metrics (HR and LVET) are shown for all experiments in our ablation study, using tasks 1 and 2 from the AFRL dataset. Removing the second-derivative (SD) frames results in the model configurations in the top three rows of Table 1. When SD frames are removed, the result is a general decrease in the HR error. However, there is also a general increase in LVET interval prediction error, which suggests that including the SD frames leads to improved estimation of waveform morphology.
Removing the first-derivative target signal Intuitively, models that are optimized using a loss function focusing on a single objective will perform better in terms of that objective than models trained with loss functions containing multiple objectives. By removing the first-derivative target signal from the training objective, the model is forced to focus exclusively on the second-derivative (SD) objective. Empirically, this leads the SD-Optimized model to have the lowest LVET MAE of any model configuration (last row of Table 1). While the SD-Optimized model achieves the lowest LVET error, its HR error is the highest of any configuration. These results suggest that there are performance trade-offs to consider when designing a system for particular downstream tasks.
Removing the second-derivative target signal When the second-derivative target signal is removed from the model, the optimization procedure is purely focused on improving the prediction of the first derivative. The FD-Optimized model (first row of Table 1) serves as a form of baseline, since previous works have focused on using first-derivative (FD) frames to predict the first-derivative PPG signal. Fig. 4 shows a Bland-Altman plot (Martin Bland & Altman, 1986) comparing the FD-Optimized and SD-Optimized error distributions as a function of the ground-truth values for both HR and LVET intervals.
Perhaps unsurprisingly, our results show the FD-Optimized model achieves the lowest HR MAE (0.66 ± 2.07 BPM) of any model configuration examined and, in particular, improves HR estimation compared to models without the first derivative target signal. However, the FD-Optimized model also has the worst performance in terms of the LVET MAE (108.26 ± 56.19 ms) of any model configuration. This suggests that while the configuration provides an accurate assessment of the heartbeat frequency, the quality of predicted waveform morphology can be improved by incorporating second-derivative information. We observe similar results when evaluating the models on the UBFC (Bobbia et al., 2019) and PURE (Stricker et al., 2014) datasets (see Appendix Table 3).
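For reference, the error metrics reported here can be computed as in the following short sketch; these are illustrative helper functions rather than our exact evaluation code:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, e.g. for HR (BPM) or LVET (ms) estimates."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def bland_altman(y_true, y_pred):
    """Bias and 95% limits of agreement between predicted and reference values."""
    diff = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```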
Qualitative comparisons For a qualitative comparison, in Fig. 3 we plot the ground-truth, FD-Optimized, and SD-Optimized PPG, first derivative, and second derivative. Additionally, in the bottom panel of Fig. 3 we overlay the true and predicted LVET intervals for each signal to demonstrate model performance. For additional qualitative comparisons, see Appendix A.2.
6 CONCLUSIONS
Using the task of video-based cardiac measurement, we have shown that, when learning representations for dynamical systems, appropriately designing the inputs and optimizing for the derivatives of interest can make a significant difference in model performance. Specifically, there is a trade-off between optimizing for lower-order and higher-order dynamics. Given the importance of second derivatives (i.e., acceleration) in this and many other video understanding tasks, we believe it is important to understand the trade-off between optimizing for targets that capture different dynamic properties. In cardiac measurement in particular, the LVET is one of the more important clinical parameters and can be better estimated using higher-order information. While we have investigated the importance of higher-order dynamics in the context of video-based cardiac measurement, this paradigm is generally applicable. We believe future work will continue to showcase the importance of explicitly incorporating higher-order dynamics.
7 ETHICS STATEMENT
Camera-based cardiac measurement could help improve the quality of remote health care, as well as enable less invasive measurement of important physiological signals. The COVID-19 pandemic has revealed the importance of tools to support remote care. These needs are likely to be particularly acute in low-resource settings where distance, travel costs, and time are a great barrier to access quality healthcare. However, given the non-contact nature of the technology, it could also be used to measure personal data without the knowledge of the subject. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used, and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. New bio-metrics laws can help protect people from unwanted physiological monitoring, or discrimination based on pre-existing health conditions detected via non-contact monitoring. However, social norms also need to be constructed around the use of this technology.
In this work, data were collected under informed consent from the participants.
A APPENDIX
A.1 SUPPLEMENTAL METHODS
A.1.1 EXAMPLE VIDEO FRAMES
A.1.2 MODEL ARCHITECTURE
The first two 3D convolutional layers in each branch have 16 filters each and the final two 3D convolutional layers in each branch have 32 filters each. All 3D convolutional layers in the network use a filter size of 3x3x3 and are padded such that the height, width, and number of time steps are preserved between consecutive layers. Convolutional layers use the hyperbolic tangent activation function, except for the convolutional layers used for the attention masks, which use a sigmoid activation function to generate the soft masks. Attention masks (one per time step) are applied via an element-wise multiplication of the attention mask with each 3D convolutional feature map. Average pooling layers reduce the height and width of the frames by a factor of two, except for the final average pooling layer, which pools over the entire frame (i.e. reduces each feature map to a single value per time step). Dropout (25% probability) is applied after every pooling layer to reduce overfitting.
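As a rough illustration of this description, a minimal TensorFlow/Keras sketch of the derivative-input branches is given below; the exact points at which masks, pooling, and dropout are applied, the handling of the shorter second-derivative input, and the tensor names are assumptions rather than the precise implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers

T, H, W, C = 30, 36, 36, 3

def conv_block(x, filters):
    """Two padded 3x3x3 convolutions with tanh activations."""
    x = layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)

def derivative_branch(motion_in, appearance_feat):
    """One derivative-input branch with a shared soft attention mask."""
    mask = layers.Conv3D(1, 1, padding="same", activation="sigmoid")(appearance_feat)
    x = conv_block(motion_in, 16)
    x = layers.Lambda(lambda ab: ab[0] * ab[1])([x, mask])      # element-wise attention masking
    x = layers.AveragePooling3D(pool_size=(1, 2, 2))(x)         # halve height/width, keep time
    x = layers.Dropout(0.25)(x)
    x = conv_block(x, 32)
    # pool over the whole frame -> one value per feature per time step
    return layers.Lambda(lambda v: tf.reduce_mean(v, axis=[2, 3]))(x)

appearance_in = layers.Input((T, H, W, C))    # raw frames (attention branch)
fd_in = layers.Input((T, H, W, C))            # normalized difference frames
sd_in = layers.Input((T, H, W, C))            # difference-of-difference frames (padded to T)

appearance_feat = conv_block(appearance_in, 16)
features = layers.Concatenate(axis=-1)([
    derivative_branch(fd_in, appearance_feat),
    derivative_branch(sd_in, appearance_feat),
])                                            # shape (batch, T, concatenated features)
```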
After the final pooling layer, the learned features for each time step in a branch are concatenated together (i.e. combined across branches to share information). Each target signal uses its own set of (2) RNN layers to read the concatenated features over time and generate a target sequence. The first RNN layer is implemented as a bi-directional GRU (hyperbolic tangent activation function) with 64 total units (32 each direction). The second RNN layer is a GRU (linear activation function) layer with 1 output value per time step.
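A corresponding sketch of the recurrent head is shown below; it is again illustrative, and the concatenated feature size is an assumed placeholder:

```python
from tensorflow.keras import Model, layers

T, F = 30, 64                                  # time steps and concatenated feature size (assumed)
features_in = layers.Input((T, F))

def recurrent_head(seq):
    """Per-target head: bi-directional GRU (32 units per direction) then a 1-unit GRU readout."""
    x = layers.Bidirectional(layers.GRU(32, return_sequences=True))(seq)
    return layers.GRU(1, activation="linear", return_sequences=True)(x)

fd_out = recurrent_head(features_in)           # first-derivative PPG sequence
sd_out = recurrent_head(features_in)           # second-derivative PPG sequence
head = Model(features_in, [fd_out, sd_out])
```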
A.1.3 METRIC CALCULATION
Heart Rate (HR) estimation To estimate the heart rate, we use a fast Fourier transform (FFT)-based method to calculate the dominant frequency in the signal, which corresponds to the heart rate. We first estimate the power spectral density using the “periodogram” function from the scipy.signal (Virtanen et al., 2020) library. Then we band-pass filter the PPG signal with cutoff frequencies of 0.75-4.0 Hz (corresponding to a minimum HR of 45 BPM and a maximum HR of 240 BPM). Finally, we select the frequency with the maximum power and use this as our estimated HR.
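A minimal SciPy sketch of this estimator is shown below; restricting the periodogram to the 0.75-4.0 Hz band plays the role of the band-pass step here, and the function name is illustrative:

```python
import numpy as np
from scipy.signal import periodogram

def estimate_hr_bpm(ppg, fs=30.0, lo=0.75, hi=4.0):
    """Estimate heart rate as the dominant in-band frequency of the PPG signal."""
    freqs, power = periodogram(np.asarray(ppg, dtype=float), fs=fs)
    band = (freqs >= lo) & (freqs <= hi)           # 45-240 BPM band
    dominant_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * dominant_hz                       # Hz -> beats per minute
```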
Left Ventricle Ejection Time (LVET) estimation The LVET is defined as the time interval between the diastolic point and the dicrotic notch. To calculate this interval, we first identified the diastolic point in the second derivative (SD) of the PPG signal, which, because it is a “global” minimum in the PPG heartbeat, appears as a “global” maximum (positive SD value) in the SD PPG. Then, in each predicted SD PPG waveform, we identified candidate dicrotic notch points. Since the dicrotic notch manifests as a “local” minimum in the PPG signal, it appears as a “local” maximum (positive SD value) in the SD PPG. Using peak detection (the “find peaks” function in the scipy.signal library (Virtanen et al., 2020)), we identify candidate dicrotic notch points by finding local peaks that occur after a diastolic point, and use the dicrotic notch candidate that is closest in time to the reference diastolic point.
Because both the ground truth PPG (and therefore its derivatives) and, in particular, the predicted PPG (and its derivatives), contain signal artifacts and noise, the peak detection process is not perfect. To reduce variability in the LVET interval estimates due to noise, we apply a smoothing operation. Specifically, we estimate the mean LVET interval within a 10-second non-overlapping window and use this as our estimate of true/predicted LVET. See Appendix Fig. 7 for example LVET intervals over time, and the estimated LVET intervals after smoothing within windows.
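A simplified sketch of this procedure is given below; the peak-detection thresholds and the rule used to separate diastolic peaks from dicrotic-notch candidates are assumptions for illustration, not the exact criteria described above:

```python
import numpy as np
from scipy.signal import find_peaks

def lvet_intervals_ms(sd_ppg, fs=30.0):
    """Sketch of per-beat LVET estimation from the second-derivative (SD) PPG.

    Diastolic points and dicrotic notches both appear as positive peaks in the SD
    signal; each diastolic peak is paired with the nearest later candidate peak.
    The prominence rule separating the two peak types is an assumption.
    """
    peaks, props = find_peaks(sd_ppg, height=0.0, distance=int(0.15 * fs))
    if peaks.size == 0:
        return np.array([])
    heights = props["peak_heights"]
    diastolic = peaks[heights >= np.percentile(heights, 75)]   # most prominent positive peaks
    intervals = []
    for d in diastolic:
        later = peaks[peaks > d]
        if later.size:
            intervals.append((later[0] - d) * 1000.0 / fs)     # samples -> milliseconds
    # Per-beat intervals are then averaged within non-overlapping 10-second
    # windows (as described above) to reduce the effect of noisy peak detections.
    return np.asarray(intervals)
```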
A.2 SUPPLEMENTAL RESULTS

1. What is the focus and contribution of the paper regarding remote cardiac measurement?
2. What are the strengths of the proposed multi-derivative architecture?
3. What are the weaknesses of the paper, particularly regarding novelty and experimental comparisons?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's methodology, results, or conclusions?

Summary Of The Paper
This paper investigates the effectiveness of higher-order dynamics (second derivatives) in the context of remote cardiac measurement. A multi-derivative architecture is designed based on the basic network of previous work (Chen & McDuff, 2018). The model is trained on a synthetic dataset (McDuff et al., 2020) and tested on the AFRL dataset (Estepp et al., 2014) with real subjects. Experimental results illustrate the effectiveness of second-order dynamics for measuring the detailed cardiac signal and indicate the trade-off between low- and high-order dynamics in terms of LVET error and HR error.
Review
Strengths:
The investigation of higher-order dynamics in remote cardiac measurement is inspiring. The discussion of the trade-off between low- and high-order dynamics gives useful insights for future work.
As this paper argues, instead of only predicting heart rate, the community should pay more attention to measuring the detailed waveform, which would benefit more clinical monitoring applications.
Weaknesses:
The novelty of the proposed multi-derivative architecture is limited, since it is an updated version of the basic network proposed in (Chen & McDuff, 2018) with minor modifications. The 3D convolutions and final RNN are not new, as they have been used in other rPPG estimation works.
Since low- and high-order dynamics can be a general design choice for rPPG estimation, the experiments should also include other popular rPPG estimation networks.
Intuitively, the higher-order dynamics can also be sensitive to motion interference, such as facial motion or facial expressions. It is not clear what degree of motion and illumination variation is tolerated by the constraints of the physical model (Eq. 5). Unconscious facial motion and expressions are inevitable in practice. Do we need heavy pre-processing that strictly aligns the face region in each frame? What are the pre-processing steps in this work? Is this one of the reasons that the network is trained on a synthetic dataset?
Also, to my understanding, the cardiac measurement is based on skin color variation, which is a kind of low-level feature. It is not clear why a large and diverse set of subjects is needed to train the network.
The experiment should also include popular publicly available datasets such as PURE and UBFC-rPPG. It is also expected to compare the proposed method with the state-of-the-art on these datasets. |
ICLR | Title
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Abstract
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
N/A
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
1 INTRODUCTION
Many of the properties of dynamical systems only become apparent when they move or change as the result of forces applied to them. In most applications we are interested in behavior in terms of positions, velocities, and accelerations, and in some cases the properties of interest may only be observed in subtle variations in the higher-order dynamics (e.g., acceleration). Whether monitoring the flight of a drone to create a control mechanism for stabilization or analyzing the fluid dynamics of the cardiovascular system in the human body, there can be a need to recover these dynamics accurately. However, most video-based systems are trained on lower-order signals, such as position in the case of landmark tracking or velocity/rate-of-change (optical flow) in the case of visual odometry (Nister et al., 2004). Thus, they optimize for lower (zeroth or first) order dynamics. Does this harm their ability to estimate higher order changes? We hypothesize that networks trained to predict temporal signals will benefit from combined multi-derivative learning objectives. To test this hypothesis, we explore video-based cardiac measurement as an example application with a complex dynamical system (the cardiovascular system) and introduce simple but effective changes to the inputs and outputs to significantly improve the measurement of clinically relevant parameters.
Photoplethysmography (PPG) is a low-cost and non-invasive method for measuring the cardiovascular blood volume pulse (BVP). There are many clinical applications for PPG as the signal contains substantial information about health state and risk of cardiovascular diseases (Elgendi et al., 2019; Reisner et al., 2008; Pereira et al., 2020). In the current world, an acutely relevant application of PPG is for pulse oximetry (i.e. measuring pulse rate and blood oxygen saturation) as it can be used to detect low blood oxygen levels associated with the onset of COVID-19 (Greenhalgh et al., 2021). The COVID-19 pandemic has accelerated the adoption of teleheath systems (Annis et al., 2020) with more and more clinical consultations being conducted virtually. Therefore, techniques for remotely monitoring physiological vital signs are becoming increasingly important (Gawałko et al., 2021; Rohmetra et al., 2021). As one might expect, with many clinical applications the precision with which the PPG signal can be recovered is of critical importance when it comes to accurate inference of downstream conditions and the confidence of practitioners in the technology.
To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative (Chen & McDuff, 2018; Liu et al., 2020; 2021; Poh et al., 2010a). However, the second derivative of the PPG signal highlights subtle features that can be difficult to discern from those in the lower derivatives. Since the second derivative reflects the acceleration (Takazawa, 1993) or the rate-of rate-of change of the blood volume, it is more closely related to the change in pressure applied by the heart on blood vessels and its relation to vascular health.
An example of a particular feature accentuated in the second-derivative (i.e. acceleration) PPG is the dicrotic notch (see Fig. 1), which occurs when the heart’s aortic valve closes due to the pressure gradient between the aorta and the left ventricle. The dicrotic notch may only manifest as an inflection in the raw PPG wave; however, in the second derivative this inflection is a maxima. Inoue et al. (2017) found that the second derivative of the PPG signal can be used as an indicator of arterial stiffness - which itself is an indicator of cardiac disease. Takazawa et al. (1998) evaluated the second derivative of the PPG waveform and found that its characteristic shape can be used to estimate vascular aging, which was higher in subjects with a history of diabetes mellitus, hypertension, hypercholesterolemia, and ischemic heart disease compared to age-matched subjects without.
While the second derivative of a signal can be a rich source of information, often the zeroth- or first-order dynamics are given priority. For example, Chen & McDuff (2018) observed that training video- or imaging-based PPG (iPPG) models using first-derivative (difference) frames as input with an objective function of minimizing the mean squared error between the prediction and the first derivative of the target BVP signal was effective. This approach was used because the authors were designing their system to measure systolic time intervals only, which are most prominent in the lower order signals. However, they did not combine this with higher-order derivatives nor did they do any systematic comparison across derivative objectives.
We argue that a model trained with an explicit second-derivative (acceleration) objective should produce feature representations that better preserve/recover these dynamics than methods that simply derive acceleration from velocity. We observe that providing the model with a second derivative input also helps the network to better predict both the first and second derivative signals.
Finally, as diverse labeled data for training supervised models for predicting dynamical signals is often difficult to come by, we build on promising work in simulation to obtain our training data. Since light is absorbed and reflected differently for different skin tones (Bent et al., 2020; Dasari et al., 2021) having a training set that represents the true diversity of the target population is crucial for sufficient generalization. Our results show that models trained with synthetic data can learn parameters that successfully generalize to real human subjects. While this is not a central focus of our paper, we believe that it presents a promising proof-of-concept for future work.
To summarize, in this paper, we 1) demonstrate that directly incorporating higher-order dynamics into the loss function improves the quality of the estimated higher-order signals in terms of waveform morphology, 2) show that adding second-derivative inputs additionally improves performance, and 3) we describe a novel deep learning architecture that incorporates the second derivative input frames and target signals and evaluate it against clinical-grade contact sensor measurements.
2 BACKGROUND
Learning Higher-Order Motion from Videos. Despite its significance in many tasks, acceleration is often not explicitly modeled in many computer vision methods. However, there is a small body of literature that has considered how to recover (Edison & Jiji, 2017) and amplify optical acceleration (Zhang et al., 2017; Takeda et al., 2018). Given that acceleration can be equally as important as position and velocity in understanding dynamical systems, we argue that this topic deserves further attention.
A particularly relevant problem to ours is identifying small changes in videos (Wu et al., 2012; Zhang et al., 2017; Chen & McDuff, 2020; Takeda et al., 2018), and specifically in acceleration in the presence of relatively large motion. As an example, in the iPPG prediction task the aim is to identify minor changes in skin coloring due to variation in blood flow patterns, while ignoring major pixel changes due to subject or background motion. One method proposed by Zhang et al. (2017) for overcoming this signal separation problem is Video Acceleration Magnification, in which
large motions are assumed to be linear on the temporal scale of small changes while small changes deviate from this linearity. An extension to this method focused on making it more robust to sudden motions (Takeda et al., 2018). In both cases, a combination of Eulerian and Lagrangian approaches was used, rather than utilizing a supervised learning paradigm. Of relevance here is also work magnifying subtle physiological changes using neural architectures (Chen & McDuff, 2020), which have been shown to effectively separate signal and noise in both the spatial and temporal domains.
Our work might be most closely related to prior research into feature descriptors for optical acceleration (Edison & Jiji, 2017). One example uses histograms of optical acceleration to effectively encode the motion information. However, this work also defined handcrafted features, rather than learning representations from data. Our work is also related conceptually to architectures such as SlowFast (Feichtenhofer et al., 2019) in that it utilizes multiple “pathways” to learn different properties of the dynamics within a video. We were inspired by this approach; however, unlike SlowFast, we focus specifically on higher-order pathways rather than slower and faster frame sequences.
Video-based Cardiac Measurement. Diffuse reflections from the body vary depending on how much light is absorbed in the peripheral layers of the skin and this is influenced by the volume of blood in the capillaries. Digital cameras can capture these very subtle changes in light which can then be used to recover the PPG signal (Wu et al., 2000; Takano & Ohta, 2007; Verkruysse et al., 2008; Poh et al., 2010a). The task then becomes separating pixel changes due to blood flow from those due to body motions, ambient lighting variation, and other environmental factors that we consider noise in this context. While earlier methods leveraged source separation algorithms (Wang et al., 2016), such as ICA (Poh et al., 2010a) or PCA (Lewandowska et al., 2011), neural models provide the current state-of-the-art in this domain (Chen & McDuff, 2018; Liu et al., 2020; 2021; Song et al., 2021; Lu et al., 2021). These architectures support learning spatial attention and sourcespecific temporal variations and separating these from various sources of noise. Typically, the input to these models are normalized video frames and the output is a 1-D time series prediction of the PPG waveform or the heart rate. A vast majority of work has evaluated these methods based errors in heart rate estimation, which considers the dominant or “systolic” frequency alone. Only a few papers have used more challenging evaluation criteria, such as the estimation of systolic to diastolic peaks (McDuff et al., 2014).
3 OPTICAL BASIS
We start by providing an optical basis for the measurement of the pulse wave using a camera and specifically the second derivative signal. Starting with Shafer’s Dichromatic Reflection Model (DRM)(Wang et al., 2016; Chen & McDuff, 2018; Liu et al., 2020), we want to understand how higher order changes in the blood volume pulse impact pixel intensities to motivate the design of our inputs and loss function. Based on the DRM model the RGB values captured by the cameras as given by:
Ck(t) = I(t) · (vs(t) + vd(t)) + vn(t) (1)
where I(t) is the luminance intensity level, modulated by the specular reflection vs(t) and the diffuse reflection vd(t). Quantization noise of the camera sensor is captured by vn(t). I(t) can be decomposed into stationary and time-varying parts vs(t) and vd(t) (Wang et al., 2016):
vd(t) = ud · d0 + up · p(t) (2)
where ud is the unit color vector of the skin-tissue; d0 is the stationary reflection strength; up is the relative pulsatile strengths caused by hemoglobin and melanin absorption; p(t) represents the physiological changes. Let us assume for simplicity in this case that the luminance, I (i.e., illumination in the video) is constant, not time varying, which is a reasonable assumption for short videos and those in which the subject can control their environment (e.g., indoors). Then differentiating twice with respect to time, t:
∂2Ck(t)
∂t2 = I · (∂
2vs(t)
∂t2 + ∂2ud ∂t2 + ∂2up(t) ∂t2 + ∂2vn(t) ∂t2 ) (3)
The non-time varying part ud · d0 becomes zero. Thus simplifying the equation to:
∂2Ck(t)
∂t2 = I · (∂
2vs(t)
∂t2 +
∂2up(t)
∂t2 +
∂2vn(t)
∂t2 ) (4)
Furthermore, if specular reflections do not vary over time (e.g., if the camera and subject are stationary), the vs(t) term will also become zero. This means that the second derivative changes in pixel intensities are a sum of second derivative changes in PPG and camera noise. With current camera technology, and little video compression, image noise is typically much smaller than the PPG signal. Therefore, we would expect the pixel changes to be dominated by second derivative variations in the blood volume pulse:
∂2Ck(t)
∂t2 = I · ∂
2up(t)
∂t2 (5)
As such, we can infer that when attempting to estimate the second derivative of the PPG signal from videos without very large motions or illumination changes, second derivative changes in the pixel space would appear helpful and that minimizing the loss between the second derivative prediction and ground truth will be the simplest learning task for the algorithm when the input is secondderivative pixel changes.
4 OUR MODEL
time (s)
BVP
First Derivative
Second Derivative
Left Ventricle Ejection Time (LVET) Systolic Peak
Dicrotic Norch
Systolic Foot
pant’s skin) and ignore noisy regions (e.g. background). These attention masks are shared between the first-derivative branch and the second-derivative branch as we expect the same spatial regions to contain first and second derivative information. After feature representations are extracted from frames within each derivative-input branch, the features are concatenated together for each time step and the target signals are then generated using recurrent neural network (RNN) layers. A diagram depicting the architecture used for our experimentation is shown in Fig. 2.
4.1 PREDICTING MULTI-DERIVATIVE TARGET SIGNALS
The goal of iPPG is to obtain an estimate of the underlying PPG signal p(t) (as in Eq. 2), while only observing video frames X(t) containing a subject’s skin (in this case the face). Mathematically, this can be described as learning a function: p̂(t) = f(X(t)) or, because we are interested in changes in blood volume changes, estimating the first derivative of the PPG signal: p̂′(t) = f(X(t), X ′(t)) , where the first derivative PPG signal is defined as: p′(t) = p(t)− p(t− 1). Using prior methods, to obtain an estimate of the PPG signal’s second derivative, one would either differentiate the predicted PPG signal twice, or differentiate the predicted first-derivative PPG once, rather than calculate the acceleration PPG directly. In contrast, we explicitly predict the acceleration PPG waveform as a target signal. We define the second derivative waveform as the difference between consecutive first-derivative time points: p′′(t) = p′(t) − p′(t − 1). Then we train our model to predict the second derivative waveform p̂′′(t) = f(X(t), X ′(t)) given a set of input video frames X(t) and the corresponding normalized difference frames X ′(t). To optimize our model parameters we minimize the mean squared difference between the true and predicted second derivative waveforms:
L = 1
T T∑ t=1 (p′′(t)− p̂′′(t))2 (6)
4.2 LEVERAGING MULTI-DERIVATIVE INPUTS
It has been previously shown that the normalized difference frames are useful for predicting the first derivative PPG waveforms. Therefore, we hypothesized that incorporating the second derivative of the raw video frames X ′′(t) = X ′(t) − X ′(t − 1) (i.e. the difference-of-difference frames) may also be useful for predicting the PPG signal and its derivatives. Similar to the difference frames, we added a separate convolutional attention branch, where the attention mask is shared between both branches (see Fig. 2). Sharing the attention mask is a reasonable assumption as we would expect skin regions to all exhibit the signal and similar dynamics. After the feature maps in each branch are pooled into a single value per feature at each time step, the learned representations are concatenated together. These concatenated features over time are used as input sequences to the recurrent layers that generate the target waveforms.
Given that difference frames X ′(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X ′′(t) may be beneficial for predicting the second derivative PPG signal. In theory, if difference-of-difference features are indeed useful for predicting the acceleration PPG, then the CAN network should be able to learn those features
from the difference frames due to the 3D convolutional operations. However, manually adding the difference-of-difference frames could help guide the model. To examine the effect of combining higher-order inputs and target signals, we fit a model p̂′′(t) = f(X(t), X ′(t), X ′′(t)) to predict the second-derivative PPG.
5 EXPERIMENTS
In this section we will describe the data used to train and evaluate our method and perform a systematic ablation study in which we test different combinations of inputs and outputs.
5.1 DATA
Training To train our models using a large and diverse set of subjects, we leverage recent work that uses highly-parameterized synthetic avatars to generate videos containing simulated subjects with various movements and backgrounds (McDuff et al., 2020). To drive changes in the synthetic avatars’ appearance, the PPG signal is used to manipulate the base skin color and the subsurface radius (McDuff et al., 2020). The subsurface scattering is spatially weighted using an artist-created subsurface scattering radius texture that captures variations in the thickness of the skin across the face. Using physiological waveforms signals from the MIMIC Physionet (Goldberger Ary L. et al., 2000) database, we randomly sampled windows of PPG waveforms from real patients. The physiological waveform data were sampled to maximize examples from different patients. Using the synthetic avatar pipeline and MIMIC waveforms, we generated 2,800 6-second videos, where half of the videos were generated using hand-crafted facial motion/action signals, and the other half using facial motion/action signals extracted using landmark detection on real videos. Examples of the avatars can be found in Appendix A.1.1.
Testing Given that we are focusing on recovering very subtle changes in pixel intensities due to the blood volume pulse, we use a highly controlled and very accurately annotated dataset of real videos for evaluation. The AFRL dataset (Estepp et al., 2014) consists of 300 videos from 25 participants (17 male and 8 female). Each video in the dataset has a resolution of 658x492 pixels sampled at 30 Hz. Ground truth PPG signals were recorded using a contact reflective PPG sensor attached to the subject’s index finger. Each participant was instructed to perform three head motion tasks including rotating the head along the horizontal axis, rotating the head along the vertical axis, and rotating the head randomly once every second to one of nine predefined locations. Since our goal in this work was to compare methods for estimating subtle waveform dynamics, which can be more difficult to do in the presence of large motion, we focused here on the first two AFRL tasks where participant motion is minimal. Examples of AFRL participants can be found in Appendix A.1.1.
5.2 IMPLEMENTATION DETAILS
We trained our models using a large dataset of generated synthetic avatars and evaluated model performance on the AFRL dataset, which consists of real human subjects. For each video, we first cropped the video frames so that the face was approximately centered. Next, we reduced the resolution of the video to 36x36 pixels to reduce noise and computational requirements while maintaining useful spatial signal Verkruysse et al. (2008); Wang et al. (2017); Poh et al. (2010b). The input to the attention branch was T raw video frames. The input to the first-derivative branch was a set of T normalized difference frames, calculated by subtracting consecutive frames and normalizing by the sum. The input to the second-derivative branch was a set of T − 1 difference-of-difference frames (second derivative frames), calculated by subtracting consecutive normalized difference frames (i.e. the T frames used as input to the motion branch). In our experiments, we used a window size of T = 30 video frames to predict the target signals for the corresponding 30 time points. During training, a sliding window of 15 frames (i.e. 50% overlap between consecutive windows) was used to increase the total number of training examples. The model was implemented using Tensorflow (Abadi et al., 2016) and trained for eight epochs using the Adam (Kingma & Ba, 2017) optimizer with a learning rate of 0.001, and a batch size of 16.
5.3 SYSTEMATIC EVALUATION
To measure the effect of using multi-derivative inputs and outputs, we systematically removed the second-derivative parts of the model and used quantitative and qualitative methods to examine the change in model performance. To quantitatively measure the quality of the predicted signal, we calculated two clinically important parameters - heart rate (HR) and the left ventricular ejection time (LVET) interval (see Appendix A.1.3 for details). Video-based HR prediction has been a major focus of iPPG applications, with many methods showing highly-accurate results. HR can be determined through peak detection or by determining the dominant frequency in the signal (e.g. using fast Fourier transform). Since current iPPG methods are able to achieve sufficiently-low error rates on the HR estimation task, we believe that metrics that capture the quality of waveform morphology should also be considered.
The LVET interval is defined as the time between the opening and closing of the heart’s aortic valve, i.e. the systolic phase when the heart is contracting (see Fig. 1). In the PPG waveform, this interval begins at the diastolic point (i.e. the global minimum pressure within a heartbeat cycle) and ends with the dicrotic notch (i.e. local minimum occurring after systolic peak, marking the end of the systolic phase and the beginning of the diastolic phase). LVET typically is correlated with cardiac output (stroke volume × heart rate)(Hamada et al., 1990), and has been shown to be an indicator of future heart failure as the time interval decreases with left-ventricle dysfunction (Biering-Sørensen et al., 2018).
Calculating LVET requires identification of the diastolic point and the dicrotic notch. The diastolic point is a (global) minimum point within a heart beat, meaning it corresponds to a positive peak
in the second derivative signal according to the second-derivative test. Similarly, the dicrotic notch is a (local) minimum in the PPG signal, and appears as a positive peak in the second derivative following the diastolic peak in time. Because the dicrotic notch can often be a subtle feature, it is much easier to identify in the PPG’s second derivative compared to the raw signal. Therefore, it is a good example of clinically-important waveform morphology that is best captured by higher-order dynamics.
Removing the second-derivative frames In Table 1, quantitative evaluation metrics (HR and LVET) are shown for all experiments in our ablation study, using tasks 1 and 2 from the AFRL dataset. Removing the second-derivative (SD) frames results in the model configurations in the top three rows of Table 1. When SD frames are removed, the result is a general decrease in the HR error. However, there is also a general increase in LVET interval prediction error, which suggests that including the SD frames leads to improved estimation of waveform morphology.
Removing the first-derivative target signal Intuitively, models that are optimized using a loss function specifically focusing on a single objective will perform better in terms of that objective compared to models trained with loss functions containing multiple objectives. By removing the first-derivative target signal from the training objective, the model is focused to exclusively focus on the second-derivative (SD) objective. Empirically, this leads the SD-Optimized model to have the lowest LVET MAE of any model configuration (last row of Table 1). While the SD-Optimized model achieves the lowest LVET error, the HR error is the highest of any configuration. These results suggest that there are performance trade-offs to consider when designing a system for particular downstream tasks.
Removing the second-derivative target signal When the second-derivative target signal is removed from the model, the optimization procedure focuses purely on improving the prediction of the first derivative. The FD-Optimized model (first row of Table 1) serves as a form of baseline, since previous works have focused on using first-derivative (FD) frames to predict the first-derivative PPG signal. Fig. 4 shows a Bland-Altman plot (Martin Bland & Altman, 1986) comparing the FD-Optimized and SD-Optimized error distributions as a function of the ground-truth values for both HR and LVET intervals.
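For readers less familiar with this analysis, a Bland-Altman plot shows the per-sample difference between two measurements against their mean, together with the bias and the 95% limits of agreement. A minimal sketch (the ground-truth and predicted values below are made up for illustration; this is not the paper's plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical ground-truth and predicted values (e.g. HR in BPM or LVET in ms).
truth = np.array([62.0, 71.0, 80.0, 95.0, 104.0, 120.0])
pred = np.array([60.5, 72.3, 78.9, 97.1, 101.8, 123.4])

mean = (truth + pred) / 2
diff = pred - truth
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)             # 95% limits of agreement

plt.scatter(mean, diff)
plt.axhline(bias, linestyle="--", label=f"bias = {bias:.2f}")
plt.axhline(bias + loa, linestyle=":", label="+1.96 SD")
plt.axhline(bias - loa, linestyle=":", label="-1.96 SD")
plt.xlabel("Mean of ground truth and prediction")
plt.ylabel("Prediction minus ground truth")
plt.legend()
plt.show()
```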
Perhaps unsurprisingly, our results show the FD-Optimized model achieves the lowest HR MAE (0.66 ± 2.07 BPM) of any model configuration examined and, in particular, improves HR estimation compared to models without the first derivative target signal. However, the FD-Optimized model also has the worst performance in terms of the LVET MAE (108.26 ± 56.19 ms) of any model configuration. This suggests that while the configuration provides an accurate assessment of the heartbeat frequency, the quality of predicted waveform morphology can be improved by incorporating second-derivative information. We observe similar results when evaluating the models on the UBFC (Bobbia et al., 2019) and PURE (Stricker et al., 2014) datasets (see Appendix Table 3).
Qualitative comparisons For a qualitative comparison, in Fig. 3 we plot the ground-truth, FD-Optimized, and SD-Optimized PPG, first derivative, and second derivative. Additionally, in the bottom panel of Fig. 3 we overlay the true and predicted LVET intervals for each signal to demonstrate model performance. For additional qualitative comparisons, see Appendix A.2.
6 CONCLUSIONS
Using the task of video-based cardiac measurement, we have shown that when learning representations for dynamical systems, appropriately designing the inputs and optimizing for the derivatives of interest can make a significant difference in model performance. Specifically, there is a trade-off between optimizing for lower-order and higher-order dynamics. Given the importance of second derivatives (i.e., acceleration) in this and many other video understanding tasks, we believe it is important to understand the trade-off between optimizing for targets that capture different dynamic properties. In cardiac measurement in particular, the LVET is one of the more important clinical parameters and can be better estimated using higher-order information. While we have investigated the importance of higher-order dynamics in the context of video-based cardiac measurement, this paradigm is generally applicable. We believe future work will continue to showcase the importance of explicitly incorporating higher-order dynamics.
7 ETHICS STATEMENT
Camera-based cardiac measurement could help improve the quality of remote health care, as well as enable less invasive measurement of important physiological signals. The COVID-19 pandemic has revealed the importance of tools to support remote care. These needs are likely to be particularly acute in low-resource settings where distance, travel costs, and time are a great barrier to access quality healthcare. However, given the non-contact nature of the technology, it could also be used to measure personal data without the knowledge of the subject. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used, and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. New bio-metrics laws can help protect people from unwanted physiological monitoring, or discrimination based on pre-existing health conditions detected via non-contact monitoring. However, social norms also need to be constructed around the use of this technology.
In this work, data were collected under informed consent from the participants.
A APPENDIX
A.1 SUPPLEMENTAL METHODS
A.1.1 EXAMPLE VIDEO FRAMES
A.1.2 MODEL ARCHITECTURE
The first two 3D convolutional layers in each branch have 16 filters each, and the final two 3D convolutional layers in each branch have 32 filters each. All 3D convolutional layers in the network use a filter size of 3x3x3 and are padded so that height, width, and the number of time steps are preserved from layer to layer. Convolutional layers use the hyperbolic tangent activation function, except for the layers used for the attention masks, which use a sigmoid activation function to generate the soft masks. Attention masks (one per time step) are applied via element-wise multiplication with each 3D convolutional feature map. Average pooling layers reduce the height and width of the frames by a factor of two, except for the final average pooling layer, which pools over the entire frame (i.e. reduces each feature map to a single value per time step). Dropout (25% probability) is applied after every pooling layer to reduce overfitting.
After the final pooling layer, the learned features for each time step in a branch are concatenated together (i.e. combined across branches to share information). Each target signal uses its own set of (2) RNN layers to read the concatenated features over time and generate a target sequence. The first RNN layer is implemented as a bi-directional GRU (hyperbolic tangent activation function) with 64 total units (32 each direction). The second RNN layer is a GRU (linear activation function) layer with 1 output value per time step.
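The description above can be translated into a simplified tf.keras sketch. This is our reconstruction from the text, not the authors' released code: it uses a single shared attention mask, feeds the second-derivative branch T frames (the paper uses T − 1), and approximates the placement of pooling and dropout.

```python
import tensorflow as tf
from tensorflow.keras import layers

T, H, W = 30, 36, 36  # window length and frame size from Section 5.2

def conv_block(x, filters):
    # Two 3x3x3 convolutions with tanh activations, padded to preserve shape.
    x = layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="tanh")(x)
    return x

def derivative_branch(frames, attention_mask):
    feat = conv_block(frames, 16)
    feat = feat * attention_mask                          # element-wise soft attention
    feat = layers.AveragePooling3D((1, 2, 2))(feat)       # halve height and width
    feat = layers.Dropout(0.25)(feat)
    feat = conv_block(feat, 32)
    # Final pooling collapses each feature map to a single value per time step.
    feat = layers.AveragePooling3D((1, feat.shape[2], feat.shape[3]))(feat)
    feat = layers.Dropout(0.25)(feat)
    return layers.Reshape((T, 32))(feat)

def recurrent_head(x):
    # Bi-directional GRU (32 units per direction) followed by a linear GRU
    # producing one output value per time step.
    x = layers.Bidirectional(layers.GRU(32, return_sequences=True))(x)
    return layers.GRU(1, activation=None, return_sequences=True)(x)

raw = tf.keras.Input((T, H, W, 3), name="raw_frames")
fd = tf.keras.Input((T, H, W, 3), name="difference_frames")
sd = tf.keras.Input((T, H, W, 3), name="difference_of_difference_frames")

# One soft attention mask per time step, computed from the raw frames and
# shared by both derivative branches.
mask = layers.Conv3D(1, 3, padding="same", activation="sigmoid")(conv_block(raw, 16))

features = layers.Concatenate()([derivative_branch(fd, mask),
                                 derivative_branch(sd, mask)])

model = tf.keras.Model(inputs=[raw, fd, sd],
                       outputs=[recurrent_head(features), recurrent_head(features)])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.summary()
```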
A.1.3 METRIC CALCULATION
Heart Rate (HR) estimation To estimate the heart rate, we use a fast Fourier transform (FFT)-based method to calculate the dominant frequency in the signal, which corresponds to the heart rate. We first estimate the power spectral density using the “periodogram” function from the scipy.signal library (Virtanen et al., 2020). Then we band-pass filter the PPG signal with cutoff frequencies of 0.75–4.0 Hz (corresponding to a minimum HR of 45 BPM and a maximum HR of 240 BPM). Finally, we select the frequency with the maximum power and use this as our estimated HR.
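A minimal sketch of this procedure using scipy (the synthetic test signal is illustrative, and for brevity the band-pass step is implemented by restricting the periodogram to the stated 0.75–4.0 Hz band rather than filtering the time-domain signal):

```python
import numpy as np
from scipy.signal import periodogram

fs = 30.0                                  # video frame rate (Hz)
t = np.arange(0, 30, 1 / fs)               # 30 s of signal, for illustration
ppg = np.sin(2 * np.pi * 1.2 * t)          # synthetic pulse at 1.2 Hz (72 BPM)

freqs, power = periodogram(ppg, fs=fs)

# Keep only the plausible HR band (0.75-4.0 Hz, i.e. 45-240 BPM).
band = (freqs >= 0.75) & (freqs <= 4.0)
hr_hz = freqs[band][np.argmax(power[band])]
print(f"Estimated HR: {60.0 * hr_hz:.1f} BPM")
```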
Left Ventricle Ejection Time (LVET) estimation The LVET is defined as the time interval between the diastolic point and the dicrotic notch. To calculate this interval, we first identified the diastolic point in the second derivative (SD) of the PPG signal, which, because it is a “global” minimum within the PPG heartbeat, appears as a “global” maximum (positive SD value) in the SD PPG. Then, in each predicted SD PPG waveform, we identified candidate dicrotic notch points. Since the dicrotic notch manifests as a “local” minimum in the PPG signal, it appears as a “local” maximum in the PPG SD signal (positive SD value). Using peak detection (the “find_peaks” function in the scipy.signal library (Virtanen et al., 2020)), we identify candidate dicrotic notch points by finding local peaks that occur after a diastolic point, and use the candidate that is closest in time to the reference diastolic point.
Because both the ground truth PPG (and therefore its derivatives) and, in particular, the predicted PPG (and its derivatives), contain signal artifacts and noise, the peak detection process is not perfect. To reduce variability in the LVET interval estimates due to noise, we apply a smoothing operation. Specifically, we estimate the mean LVET interval within a 10-second non-overlapping window and use this as our estimate of true/predicted LVET. See Appendix Fig. 7 for example LVET intervals over time, and the estimated LVET intervals after smoothing within windows.
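The procedure described in the two paragraphs above can be sketched as follows. This is a simplified reading, not the authors' exact implementation: it assumes the diastolic points are the most prominent positive peaks in the second-derivative trace and takes the next local peak after each one as the dicrotic notch.

```python
import numpy as np
from scipy.signal import find_peaks

def lvet_from_sd_ppg(sd_ppg, fs=30.0, window_s=10.0):
    """Estimate smoothed LVET intervals (s) from a second-derivative PPG trace."""
    diastolic, _ = find_peaks(sd_ppg, prominence=np.std(sd_ppg))  # one per beat (assumed)
    candidates, _ = find_peaks(sd_ppg)                            # all local peaks

    times, intervals = [], []
    for d in diastolic:
        later = candidates[candidates > d]
        if len(later):
            times.append(d / fs)
            intervals.append((later[0] - d) / fs)   # closest candidate in time

    if not times:
        return np.array([])

    # Smooth by averaging within non-overlapping windows (10 s by default).
    times, intervals = np.array(times), np.array(intervals)
    smoothed = []
    for start in np.arange(0.0, times.max() + window_s, window_s):
        in_window = (times >= start) & (times < start + window_s)
        if in_window.any():
            smoothed.append(intervals[in_window].mean())
    return np.array(smoothed)
```

In use, the model's predicted SD PPG (sampled at the 30 Hz frame rate) would be passed directly to this function.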
A.2 SUPPLEMENTAL RESULTS | 1. What is the focus and contribution of the paper on contactless cardiac monitoring?
2. What are the strengths of the proposed deep learning approach, particularly in its ability to estimate clinical features?
3. What are the weaknesses of the paper, especially regarding its comparison with other works and its handling of pathological rhythms?
4. Do you have any concerns about the pre-processing of video data and the use of artificial data for training?
5. What is the significance of the proposed approach, and what are the implications for future research and practical applications? | Summary Of The Paper
Review | Summary Of The Paper
This paper describes a novel approach for extracting information from contactless (video-based) cardiac monitoring. The authors highlight the potential of analyzing second derivatives of the PPG signal, especially for the estimation of Left Ventricle Ejection Time intervals. The authors propose a deep learning approach with multiple inputs (the video, but also its first and second derivatives) and a multi-task objective (the first and/or second derivatives of the contact PPG).
Review
The paper is interesting, as the subject is a hot topic in the community and the development of contactless monitoring solutions is seen as key for multiple applications, from in-hospital use (pediatric ICU, patients under dialysis) to remote medical appointments (especially during the COVID pandemic). The authors have evaluated their technique with relevant tests, such as the estimation of HR and LVET. Such an evaluation is important, as biomedical applications need to be assessed on their final goal, and the authors did well not to look only at the MSE of the estimated PPG or its derivatives, but also at the accuracy of the estimated clinical features. I have however several concerns regarding the submission:
- As noted by the authors, the use of normalized differences for the prediction of the first-derivative PPG has already been suggested; the innovative aspect of the paper is therefore quite limited.
- The authors do not compare their approach with other techniques from the state of the art; it is therefore hard to evaluate the performance of the suggested technique.
- How would the technique cope with pathological rhythms, which affect the quality and/or morphology of the PPG?
- The pre-processing of the videos consists in rescaling the images to 36x36 pixels. Do the authors believe that such a low resolution is enough for real-life applications?
- The use of artificial data is often key for the development of machine-learning techniques in the medical field due to the scarcity of real data. I have, however, some doubts about the realism of the training data: do the authors believe that the avatar-generated videos are realistic enough? Does the addition of the physiological variation onto the avatar skin really reflect real and noisy data?
- Given the final results, it is unclear which approach the authors are suggesting. Do they believe a specific network should be trained for each application (HR, LVET, other)?
ICLR | Title
Learning Higher-Order Dynamics in Video-Based Cardiac Measurement
Abstract
Computer vision methods typically optimize for first-order dynamics (e.g., optical flow). However, in many cases the properties of interest are subtle variations in higher-order changes, such as acceleration. This is true in the cardiac pulse, where the second derivative can be used as an indicator of blood pressure and arterial disease. Recent developments in camera-based vital sign measurement have shown that cardiac measurements can be recovered with impressive accuracy from videos; however, the majority of research has focused on extracting summary statistics such as heart rate. Less emphasis has been put on the accuracy of waveform morphology that is necessary for many clinically impactful scenarios. In this work, we provide evidence that higher-order dynamics are better estimated by neural models when explicitly optimized for in the loss function. Furthermore, adding second-derivative inputs also improves performance when estimating second-order dynamics. By incorporating the second derivative of both the input frames and the target vital sign signals into the training procedure, our model is better able to estimate left ventricle ejection time (LVET) intervals.
1 INTRODUCTION
Many of the properties of dynamical systems only become apparent when they move or change as the result of forces applied to them. In most applications we are interested in behavior in terms of positions, velocities, and accelerations, and in some cases the properties of interest may only be observed in subtle variations in the higher-order dynamics (e.g., acceleration). Whether monitoring the flight of a drone to create a control mechanism for stabilization or analyzing the fluid dynamics of the cardiovascular system in the human body, there can be a need to recover these dynamics accurately. However, most video-based systems are trained on lower-order signals, such as position in the case of landmark tracking or velocity/rate-of-change (optical flow) in the case of visual odometry (Nister et al., 2004). Thus, they optimize for lower (zeroth or first) order dynamics. Does this harm their ability to estimate higher order changes? We hypothesize that networks trained to predict temporal signals will benefit from combined multi-derivative learning objectives. To test this hypothesis, we explore video-based cardiac measurement as an example application with a complex dynamical system (the cardiovascular system) and introduce simple but effective changes to the inputs and outputs to significantly improve the measurement of clinically relevant parameters.
Photoplethysmography (PPG) is a low-cost and non-invasive method for measuring the cardiovascular blood volume pulse (BVP). There are many clinical applications for PPG as the signal contains substantial information about health state and risk of cardiovascular diseases (Elgendi et al., 2019; Reisner et al., 2008; Pereira et al., 2020). In the current world, an acutely relevant application of PPG is for pulse oximetry (i.e. measuring pulse rate and blood oxygen saturation) as it can be used to detect low blood oxygen levels associated with the onset of COVID-19 (Greenhalgh et al., 2021). The COVID-19 pandemic has accelerated the adoption of telehealth systems (Annis et al., 2020), with more and more clinical consultations being conducted virtually. Therefore, techniques for remotely monitoring physiological vital signs are becoming increasingly important (Gawałko et al., 2021; Rohmetra et al., 2021). As one might expect, with many clinical applications the precision with which the PPG signal can be recovered is of critical importance when it comes to accurate inference of downstream conditions and the confidence of practitioners in the technology.
To date, in video-based PPG measurement the primary focus of analysis and evaluation has been on features extracted from the raw waveform or its first derivative (Chen & McDuff, 2018; Liu et al., 2020; 2021; Poh et al., 2010a). However, the second derivative of the PPG signal highlights subtle features that can be difficult to discern from those in the lower derivatives. Since the second derivative reflects the acceleration (Takazawa, 1993), i.e. the rate of change of the rate of change of the blood volume, it is more closely related to the change in pressure applied by the heart on the blood vessels, and hence to vascular health.
An example of a particular feature accentuated in the second-derivative (i.e. acceleration) PPG is the dicrotic notch (see Fig. 1), which occurs when the heart’s aortic valve closes due to the pressure gradient between the aorta and the left ventricle. The dicrotic notch may only manifest as an inflection in the raw PPG wave; however, in the second derivative this inflection is a maxima. Inoue et al. (2017) found that the second derivative of the PPG signal can be used as an indicator of arterial stiffness - which itself is an indicator of cardiac disease. Takazawa et al. (1998) evaluated the second derivative of the PPG waveform and found that its characteristic shape can be used to estimate vascular aging, which was higher in subjects with a history of diabetes mellitus, hypertension, hypercholesterolemia, and ischemic heart disease compared to age-matched subjects without.
While the second derivative of a signal can be a rich source of information, often the zeroth- or first-order dynamics are given priority. For example, Chen & McDuff (2018) observed that training video- or imaging-based PPG (iPPG) models using first-derivative (difference) frames as input with an objective function of minimizing the mean squared error between the prediction and the first derivative of the target BVP signal was effective. This approach was used because the authors were designing their system to measure systolic time intervals only, which are most prominent in the lower order signals. However, they did not combine this with higher-order derivatives nor did they do any systematic comparison across derivative objectives.
We argue that a model trained with an explicit second-derivative (acceleration) objective should produce feature representations that better preserve/recover these dynamics than methods that simply derive acceleration from velocity. We observe that providing the model with a second derivative input also helps the network to better predict both the first and second derivative signals.
Finally, as diverse labeled data for training supervised models for predicting dynamical signals is often difficult to come by, we build on promising work in simulation to obtain our training data. Since light is absorbed and reflected differently for different skin tones (Bent et al., 2020; Dasari et al., 2021) having a training set that represents the true diversity of the target population is crucial for sufficient generalization. Our results show that models trained with synthetic data can learn parameters that successfully generalize to real human subjects. While this is not a central focus of our paper, we believe that it presents a promising proof-of-concept for future work.
To summarize, in this paper, we 1) demonstrate that directly incorporating higher-order dynamics into the loss function improves the quality of the estimated higher-order signals in terms of waveform morphology, 2) show that adding second-derivative inputs additionally improves performance, and 3) we describe a novel deep learning architecture that incorporates the second derivative input frames and target signals and evaluate it against clinical-grade contact sensor measurements.
2 BACKGROUND
Learning Higher-Order Motion from Videos. Despite its significance in many tasks, acceleration is often not explicitly modeled in many computer vision methods. However, there is a small body of literature that has considered how to recover (Edison & Jiji, 2017) and amplify optical acceleration (Zhang et al., 2017; Takeda et al., 2018). Given that acceleration can be equally as important as position and velocity in understanding dynamical systems, we argue that this topic deserves further attention.
A particularly relevant problem to ours is identifying small changes in videos (Wu et al., 2012; Zhang et al., 2017; Chen & McDuff, 2020; Takeda et al., 2018), and specifically in acceleration in the presence of relatively large motion. As an example, in the iPPG prediction task the aim is to identify minor changes in skin coloring due to variation in blood flow patterns, while ignoring major pixel changes due to subject or background motion. One method proposed by Zhang et al. (2017) for overcoming this signal separation problem is Video Acceleration Magnification, in which
large motions are assumed to be linear on the temporal scale of small changes while small changes deviate from this linearity. An extension to this method focused on making it more robust to sudden motions (Takeda et al., 2018). In both cases, a combination of Eulerian and Lagrangian approaches was used, rather than utilizing a supervised learning paradigm. Of relevance here is also work magnifying subtle physiological changes using neural architectures (Chen & McDuff, 2020), which have been shown to effectively separate signal and noise in both the spatial and temporal domains.
Our work might be most closely related to prior research into feature descriptors for optical acceleration (Edison & Jiji, 2017). One example uses histograms of optical acceleration to effectively encode the motion information. However, this work also defined handcrafted features, rather than learning representations from data. Our work is also related conceptually to architectures such as SlowFast (Feichtenhofer et al., 2019) in that it utilizes multiple “pathways” to learn different properties of the dynamics within a video. We were inspired by this approach; however, unlike SlowFast, we focus specifically on higher-order pathways rather than slower and faster frame sequences.
Video-based Cardiac Measurement. Diffuse reflections from the body vary depending on how much light is absorbed in the peripheral layers of the skin and this is influenced by the volume of blood in the capillaries. Digital cameras can capture these very subtle changes in light which can then be used to recover the PPG signal (Wu et al., 2000; Takano & Ohta, 2007; Verkruysse et al., 2008; Poh et al., 2010a). The task then becomes separating pixel changes due to blood flow from those due to body motions, ambient lighting variation, and other environmental factors that we consider noise in this context. While earlier methods leveraged source separation algorithms (Wang et al., 2016), such as ICA (Poh et al., 2010a) or PCA (Lewandowska et al., 2011), neural models provide the current state-of-the-art in this domain (Chen & McDuff, 2018; Liu et al., 2020; 2021; Song et al., 2021; Lu et al., 2021). These architectures support learning spatial attention and source-specific temporal variations and separating these from various sources of noise. Typically, the inputs to these models are normalized video frames and the output is a 1-D time series prediction of the PPG waveform or the heart rate. The vast majority of work has evaluated these methods based on errors in heart rate estimation, which considers the dominant or “systolic” frequency alone. Only a few papers have used more challenging evaluation criteria, such as the estimation of systolic to diastolic peaks (McDuff et al., 2014).
3 OPTICAL BASIS
We start by providing an optical basis for the measurement of the pulse wave, and specifically its second derivative, using a camera. Starting with Shafer’s Dichromatic Reflection Model (DRM) (Wang et al., 2016; Chen & McDuff, 2018; Liu et al., 2020), we want to understand how higher-order changes in the blood volume pulse impact pixel intensities, in order to motivate the design of our inputs and loss function. Based on the DRM, the RGB values captured by the camera are given by:
Ck(t) = I(t) · (vs(t) + vd(t)) + vn(t) (1)
where I(t) is the luminance intensity level, modulated by the specular reflection vs(t) and the diffuse reflection vd(t). Quantization noise of the camera sensor is captured by vn(t). The diffuse reflection vd(t) can be decomposed into a stationary part and a time-varying part (Wang et al., 2016):
vd(t) = ud · d0 + up · p(t) (2)
where ud is the unit color vector of the skin-tissue; d0 is the stationary reflection strength; up is the relative pulsatile strength caused by hemoglobin and melanin absorption; and p(t) represents the physiological changes. Let us assume for simplicity that the luminance I (i.e., the illumination in the video) is constant rather than time varying, which is a reasonable assumption for short videos and those in which the subject can control their environment (e.g., indoors). Then, differentiating twice with respect to time t:
∂²Ck(t)/∂t² = I · (∂²vs(t)/∂t² + ∂²(ud · d0)/∂t² + ∂²(up · p(t))/∂t² + ∂²vn(t)/∂t²) (3)
The non-time varying part ud · d0 becomes zero. Thus simplifying the equation to:
∂²Ck(t)/∂t² = I · (∂²vs(t)/∂t² + ∂²(up · p(t))/∂t² + ∂²vn(t)/∂t²) (4)
Furthermore, if specular reflections do not vary over time (e.g., if the camera and subject are stationary), the vs(t) term will also become zero. This means that the second derivative changes in pixel intensities are a sum of second derivative changes in PPG and camera noise. With current camera technology, and little video compression, image noise is typically much smaller than the PPG signal. Therefore, we would expect the pixel changes to be dominated by second derivative variations in the blood volume pulse:
∂²Ck(t)/∂t² = I · ∂²(up · p(t))/∂t² (5)
As such, we can infer that, when attempting to estimate the second derivative of the PPG signal from videos without very large motions or illumination changes, second-derivative changes in the pixel space should be a helpful input, and that minimizing the loss between the second-derivative prediction and the ground truth will be the simplest learning task for the algorithm when the input is second-derivative pixel changes.
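The practical consequence of Eq. 5 can be checked numerically: constant and (approximately) linear terms are suppressed by double differencing, leaving only the pulsatile component. A toy sketch with synthetic numbers (not real video data; the amplitudes are arbitrary):

```python
import numpy as np

fs = 30.0
t = np.arange(0, 10, 1 / fs)

stationary = 0.6 * np.ones_like(t)            # u_d * d_0: constant skin reflection
drift = 0.01 * t                              # slow, approximately linear illumination drift
pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)    # u_p * p(t): small pulsatile component

pixel = stationary + drift + pulse

# The second difference of the pixel trace is (numerically) just the second
# difference of the pulsatile component: constants and linear drift vanish.
print(np.allclose(np.diff(pixel, n=2), np.diff(pulse, n=2)))   # True
```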
4 OUR MODEL
[Figure: BVP (PPG) waveform and its first and second derivatives plotted over time, with the systolic foot, systolic peak, dicrotic notch, and left ventricle ejection time (LVET) interval annotated.]
Our model uses soft attention masks to focus on regions of interest (e.g. the participant’s skin) and ignore noisy regions (e.g. background). These attention masks are shared between the first-derivative branch and the second-derivative branch, as we expect the same spatial regions to contain first- and second-derivative information. After feature representations are extracted from the frames within each derivative-input branch, the features are concatenated together for each time step, and the target signals are then generated using recurrent neural network (RNN) layers. A diagram depicting the architecture used for our experimentation is shown in Fig. 2.
4.1 PREDICTING MULTI-DERIVATIVE TARGET SIGNALS
The goal of iPPG is to obtain an estimate of the underlying PPG signal p(t) (as in Eq. 2), while only observing video frames X(t) containing a subject’s skin (in this case the face). Mathematically, this can be described as learning a function: p̂(t) = f(X(t)) or, because we are interested in changes in blood volume, estimating the first derivative of the PPG signal: p̂′(t) = f(X(t), X′(t)), where the first-derivative PPG signal is defined as: p′(t) = p(t) − p(t−1). Using prior methods, to obtain an estimate of the PPG signal’s second derivative, one would either differentiate the predicted PPG signal twice, or differentiate the predicted first-derivative PPG once, rather than calculate the acceleration PPG directly. In contrast, we explicitly predict the acceleration PPG waveform as a target signal. We define the second-derivative waveform as the difference between consecutive first-derivative time points: p′′(t) = p′(t) − p′(t−1). Then we train our model to predict the second-derivative waveform p̂′′(t) = f(X(t), X′(t)) given a set of input video frames X(t) and the corresponding normalized difference frames X′(t). To optimize our model parameters we minimize the mean squared difference between the true and predicted second-derivative waveforms:
L = (1/T) Σ_{t=1}^{T} (p′′(t) − p̂′′(t))² (6)
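In code, constructing the derivative targets and evaluating the loss in Eq. 6 amounts to simple differencing and a mean squared error. A minimal NumPy sketch (illustrative window length; not the training pipeline itself):

```python
import numpy as np

def derivative_targets(ppg):
    """Build first- and second-derivative targets from a contact PPG trace."""
    first = np.diff(ppg)            # p'(t) = p(t) - p(t-1)
    second = np.diff(first)         # p''(t) = p'(t) - p'(t-1)
    return first, second

def second_derivative_loss(true_sd, pred_sd):
    """Mean squared error between true and predicted acceleration PPG (Eq. 6)."""
    return np.mean((true_sd - pred_sd) ** 2)

# Example with an illustrative 30-sample window (T = 30 as in Section 5.2).
ppg = np.sin(2 * np.pi * 1.2 * np.arange(30) / 30.0)
fd_target, sd_target = derivative_targets(ppg)
print(second_derivative_loss(sd_target, sd_target + 0.01))
```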
4.2 LEVERAGING MULTI-DERIVATIVE INPUTS
It has been previously shown that the normalized difference frames are useful for predicting the first derivative PPG waveforms. Therefore, we hypothesized that incorporating the second derivative of the raw video frames X ′′(t) = X ′(t) − X ′(t − 1) (i.e. the difference-of-difference frames) may also be useful for predicting the PPG signal and its derivatives. Similar to the difference frames, we added a separate convolutional attention branch, where the attention mask is shared between both branches (see Fig. 2). Sharing the attention mask is a reasonable assumption as we would expect skin regions to all exhibit the signal and similar dynamics. After the feature maps in each branch are pooled into a single value per feature at each time step, the learned representations are concatenated together. These concatenated features over time are used as input sequences to the recurrent layers that generate the target waveforms.
Given that difference frames X ′(t) are useful for predicting the first derivative PPG waveforms, features learned from the difference-of-difference frames X ′′(t) may be beneficial for predicting the second derivative PPG signal. In theory, if difference-of-difference features are indeed useful for predicting the acceleration PPG, then the CAN network should be able to learn those features
from the difference frames due to the 3D convolutional operations. However, manually adding the difference-of-difference frames could help guide the model. To examine the effect of combining higher-order inputs and target signals, we fit a model p̂′′(t) = f(X(t), X ′(t), X ′′(t)) to predict the second-derivative PPG.
5 EXPERIMENTS
In this section we will describe the data used to train and evaluate our method and perform a systematic ablation study in which we test different combinations of inputs and outputs.
5.1 DATA
Training To train our models using a large and diverse set of subjects, we leverage recent work that uses highly-parameterized synthetic avatars to generate videos containing simulated subjects with various movements and backgrounds (McDuff et al., 2020). To drive changes in the synthetic avatars’ appearance, the PPG signal is used to manipulate the base skin color and the subsurface radius (McDuff et al., 2020). The subsurface scattering is spatially weighted using an artist-created subsurface scattering radius texture that captures variations in the thickness of the skin across the face. Using physiological waveform signals from the MIMIC Physionet database (Goldberger et al., 2000), we randomly sampled windows of PPG waveforms from real patients. The physiological waveform data were sampled to maximize examples from different patients. Using the synthetic avatar pipeline and MIMIC waveforms, we generated 2,800 6-second videos, where half of the videos were generated using hand-crafted facial motion/action signals, and the other half using facial motion/action signals extracted using landmark detection on real videos. Examples of the avatars can be found in Appendix A.1.1.
Testing Given that we are focusing on recovering very subtle changes in pixel intensities due to the blood volume pulse, we use a highly controlled and very accurately annotated dataset of real videos for evaluation. The AFRL dataset (Estepp et al., 2014) consists of 300 videos from 25 participants (17 male and 8 female). Each video in the dataset has a resolution of 658x492 pixels sampled at 30 Hz. Ground truth PPG signals were recorded using a contact reflective PPG sensor attached to the subject’s index finger. Each participant was instructed to perform three head motion tasks including rotating the head along the horizontal axis, rotating the head along the vertical axis, and rotating the head randomly once every second to one of nine predefined locations. Since our goal in this work was to compare methods for estimating subtle waveform dynamics, which can be more difficult to do in the presence of large motion, we focused here on the first two AFRL tasks where participant motion is minimal. Examples of AFRL participants can be found in Appendix A.1.1.
5.2 IMPLEMENTATION DETAILS
We trained our models using a large dataset of generated synthetic avatars and evaluated model performance on the AFRL dataset, which consists of real human subjects. For each video, we first cropped the video frames so that the face was approximately centered. Next, we reduced the resolution of the video to 36x36 pixels to reduce noise and computational requirements while maintaining useful spatial signal (Verkruysse et al., 2008; Wang et al., 2017; Poh et al., 2010b). The input to the attention branch was T raw video frames. The input to the first-derivative branch was a set of T normalized difference frames, calculated by subtracting consecutive frames and normalizing by the sum. The input to the second-derivative branch was a set of T − 1 difference-of-difference frames (second derivative frames), calculated by subtracting consecutive normalized difference frames (i.e. the T frames used as input to the motion branch). In our experiments, we used a window size of T = 30 video frames to predict the target signals for the corresponding 30 time points. During training, a sliding window of 15 frames (i.e. 50% overlap between consecutive windows) was used to increase the total number of training examples. The model was implemented using TensorFlow (Abadi et al., 2016) and trained for eight epochs using the Adam (Kingma & Ba, 2017) optimizer with a learning rate of 0.001 and a batch size of 16.
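A minimal sketch of the input preprocessing described above, with a random tensor standing in for the cropped, downsampled video frames (shapes are illustrative):

```python
import numpy as np

def normalized_difference_frames(frames, eps=1e-7):
    """X'(t): subtract consecutive frames and normalize by their sum."""
    num = frames[1:] - frames[:-1]
    den = frames[1:] + frames[:-1] + eps
    return num / den

def second_derivative_frames(diff_frames):
    """X''(t): differences of the normalized difference frames."""
    return diff_frames[1:] - diff_frames[:-1]

# Illustrative stand-in for T + 1 = 31 cropped, downsampled RGB frames (36x36).
frames = np.random.rand(31, 36, 36, 3).astype(np.float32)
X_fd = normalized_difference_frames(frames)    # 30 difference frames
X_sd = second_derivative_frames(X_fd)          # 29 difference-of-difference frames
print(X_fd.shape, X_sd.shape)
```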
5.3 SYSTEMATIC EVALUATION
To measure the effect of using multi-derivative inputs and outputs, we systematically removed the second-derivative parts of the model and used quantitative and qualitative methods to examine the change in model performance. To quantitatively measure the quality of the predicted signal, we calculated two clinically important parameters - heart rate (HR) and the left ventricular ejection time (LVET) interval (see Appendix A.1.3 for details). Video-based HR prediction has been a major focus of iPPG applications, with many methods showing highly-accurate results. HR can be determined through peak detection or by determining the dominant frequency in the signal (e.g. using fast Fourier transform). Since current iPPG methods are able to achieve sufficiently-low error rates on the HR estimation task, we believe that metrics that capture the quality of waveform morphology should also be considered.
The LVET interval is defined as the time between the opening and closing of the heart’s aortic valve, i.e. the systolic phase when the heart is contracting (see Fig. 1). In the PPG waveform, this interval begins at the diastolic point (i.e. the global minimum pressure within a heartbeat cycle) and ends with the dicrotic notch (i.e. the local minimum occurring after the systolic peak, marking the end of the systolic phase and the beginning of the diastolic phase). LVET is typically correlated with cardiac output (stroke volume × heart rate) (Hamada et al., 1990), and has been shown to be an indicator of future heart failure, as the interval decreases with left-ventricle dysfunction (Biering-Sørensen et al., 2018).
Calculating LVET requires identification of the diastolic point and the dicrotic notch. The diastolic point is a (global) minimum point within a heart beat, meaning it corresponds to a positive peak
in the second derivative signal according to the second-derivative test. Similarly, the dicrotic notch is a (local) minimum in the PPG signal, and appears as a positive peak in the second derivative following the diastolic peak in time. Because the dicrotic notch can often be a subtle feature, it is much easier to identify in the PPG’s second derivative compared to the raw signal. Therefore, it is a good example of clinically-important waveform morphology that is best captured by higher-order dynamics.
Removing the second-derivative frames In Table 1, quantitative evaluation metrics (HR and LVET) are shown for all experiments in our ablation study, using tasks 1 and 2 from the AFRL dataset. Removing the second-derivative (SD) frames results in the model configurations in the top three rows of Table 1. When SD frames are removed, the result is a general decrease in the HR error. However, there is also a general increase in LVET interval prediction error, which suggests that including the SD frames leads to improved estimation of waveform morphology.
Removing the first-derivative target signal Intuitively, models optimized with a loss function that focuses on a single objective will perform better on that objective than models trained with loss functions containing multiple objectives. By removing the first-derivative target signal from the training objective, the model is forced to focus exclusively on the second-derivative (SD) objective. Empirically, this leads the SD-Optimized model to have the lowest LVET MAE of any model configuration (last row of Table 1). While the SD-Optimized model achieves the lowest LVET error, its HR error is the highest of any configuration. These results suggest that there are performance trade-offs to consider when designing a system for particular downstream tasks.
Removing the second-derivative target signal When the second-derivative target signal is removed from the model, the optimization procedure focuses purely on improving the prediction of the first derivative. The FD-Optimized model (first row of Table 1) serves as a form of baseline, since previous works have focused on using first-derivative (FD) frames to predict the first-derivative PPG signal. Fig. 4 shows a Bland-Altman plot (Martin Bland & Altman, 1986) comparing the FD-Optimized and SD-Optimized error distributions as a function of the ground-truth values for both HR and LVET intervals.
Perhaps unsurprisingly, our results show the FD-Optimized model achieves the lowest HR MAE (0.66 ± 2.07 BPM) of any model configuration examined and, in particular, improves HR estimation compared to models without the first derivative target signal. However, the FD-Optimized model also has the worst performance in terms of the LVET MAE (108.26 ± 56.19 ms) of any model configuration. This suggests that while the configuration provides an accurate assessment of the heartbeat frequency, the quality of predicted waveform morphology can be improved by incorporating second-derivative information. We observe similar results when evaluating the models on the UBFC (Bobbia et al., 2019) and PURE (Stricker et al., 2014) datasets (see Appendix Table 3).
Qualitative comparisons For a qualitative comparison, in Fig. 3 we plot the ground-truth, FD-Optimized, and SD-Optimized PPG, first derivative, and second derivative. Additionally, in the bottom panel of Fig. 3 we overlay the true and predicted LVET intervals for each signal to demonstrate model performance. For additional qualitative comparisons, see Appendix A.2.
6 CONCLUSIONS
Using the task of video-based cardiac measurement, we have shown that when learning representations for dynamical systems, appropriately designing the inputs and optimizing for the derivatives of interest can make a significant difference in model performance. Specifically, there is a trade-off between optimizing for lower-order and higher-order dynamics. Given the importance of second derivatives (i.e., acceleration) in this and many other video understanding tasks, we believe it is important to understand the trade-off between optimizing for targets that capture different dynamic properties. In cardiac measurement in particular, the LVET is one of the more important clinical parameters and can be better estimated using higher-order information. While we have investigated the importance of higher-order dynamics in the context of video-based cardiac measurement, this paradigm is generally applicable. We believe future work will continue to showcase the importance of explicitly incorporating higher-order dynamics.
7 ETHICS STATEMENT
Camera-based cardiac measurement could help improve the quality of remote health care, as well as enable less invasive measurement of important physiological signals. The COVID-19 pandemic has revealed the importance of tools to support remote care. These needs are likely to be particularly acute in low-resource settings where distance, travel costs, and time are a great barrier to access quality healthcare. However, given the non-contact nature of the technology, it could also be used to measure personal data without the knowledge of the subject. Just as is the case with traditional contact sensors, it must be made transparent when these methods are being used, and subjects should be required to consent before physiological data is measured or recorded. There should be no penalty for individuals who decline to be measured. New bio-metrics laws can help protect people from unwanted physiological monitoring, or discrimination based on pre-existing health conditions detected via non-contact monitoring. However, social norms also need to be constructed around the use of this technology.
In this work, data were collected under informed consent from the participants.
A APPENDIX
A.1 SUPPLEMENTAL METHODS
A.1.1 EXAMPLE VIDEO FRAMES
A.1.2 MODEL ARCHITECTURE
The first two 3D convolutional layers in each branch have 16 filters each, and the final two 3D convolutional layers in each branch have 32 filters each. All 3D convolutional layers in the network use a filter size of 3x3x3 and are padded so that height, width, and the number of time steps are preserved from layer to layer. Convolutional layers use the hyperbolic tangent activation function, except for the layers used for the attention masks, which use a sigmoid activation function to generate the soft masks. Attention masks (one per time step) are applied via element-wise multiplication with each 3D convolutional feature map. Average pooling layers reduce the height and width of the frames by a factor of two, except for the final average pooling layer, which pools over the entire frame (i.e. reduces each feature map to a single value per time step). Dropout (25% probability) is applied after every pooling layer to reduce overfitting.
After the final pooling layer, the learned features for each time step in a branch are concatenated together (i.e. combined across branches to share information). Each target signal uses its own set of (2) RNN layers to read the concatenated features over time and generate a target sequence. The first RNN layer is implemented as a bi-directional GRU (hyperbolic tangent activation function) with 64 total units (32 each direction). The second RNN layer is a GRU (linear activation function) layer with 1 output value per time step.
A.1.3 METRIC CALCULATION
Heart Rate (HR) estimation To estimate the heart rate, we use a fast Fourier transform (FFT)-based method to calculate the dominant frequency in the signal, which corresponds to the heart rate. We first estimate the power spectral density using the “periodogram” function from the scipy.signal library (Virtanen et al., 2020). Then we band-pass filter the PPG signal with cutoff frequencies of 0.75–4.0 Hz (corresponding to a minimum HR of 45 BPM and a maximum HR of 240 BPM). Finally, we select the frequency with the maximum power and use this as our estimated HR.
Left Ventricle Ejection Time (LVET) estimation The LVET is defined as the time interval between the diastolic point and the dicrotic notch. To calculate this interval, we first identified the diastolic point in the second derivative (SD) of the PPG signal, which, because it is a “global” minimum within the PPG heartbeat, appears as a “global” maximum (positive SD value) in the SD PPG. Then, in each predicted SD PPG waveform, we identified candidate dicrotic notch points. Since the dicrotic notch manifests as a “local” minimum in the PPG signal, it appears as a “local” maximum in the PPG SD signal (positive SD value). Using peak detection (the “find_peaks” function in the scipy.signal library (Virtanen et al., 2020)), we identify candidate dicrotic notch points by finding local peaks that occur after a diastolic point, and use the candidate that is closest in time to the reference diastolic point.
Because both the ground truth PPG (and therefore its derivatives) and, in particular, the predicted PPG (and its derivatives), contain signal artifacts and noise, the peak detection process is not perfect. To reduce variability in the LVET interval estimates due to noise, we apply a smoothing operation. Specifically, we estimate the mean LVET interval within a 10-second non-overlapping window and use this as our estimate of true/predicted LVET. See Appendix Fig. 7 for example LVET intervals over time, and the estimated LVET intervals after smoothing within windows.
A.2 SUPPLEMENTAL RESULTS | 1. What is the focus and contribution of the paper regarding the estimation of left ventricle ejection time interval (LVET)?
2. What are the strengths of the proposed approach, particularly in terms of its application and the use of deep learning techniques?
3. What are the weaknesses of the paper, especially regarding the algorithmic novelty and the reliance on a somewhat novel dataset?
4. Are there any concerns or limitations regarding the use of photoplethysmography (PPG) labels as training labels and video of faces as input?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This is an application paper providing an algorithm for estimating the left ventricle ejection time interval (LVET), which is the time for heart systole to occur. This metric has various clinical applications. The authors extract this metric using Photoplethysmography (PPG) labels as training labels, and video of faces as input. The authors use a DNN to extract from the images subtle changes in skin color and thus estimate blood flow changes that enable them to estimate the LVET. The authors argue that by extracting second order derivatives in the trained DNN they are able to obtain improved results
Review
Strengths:
- This is an interesting application. It is somewhat surprising how much physiological information can be extracted from simple video. This in turn brings up interesting privacy & ethical concerns, which the authors briefly discuss in Section 7. I think quite a few researchers will find it interesting that so much physiological information can be extracted from a simple video of someone.
- The paper is for the most part well written.
- Source code will be provided. However, the dataset cannot be released.
Weaknesses:
- Algorithmically the work is not really novel. It is a rather vanilla neural network trained on a somewhat novel dataset.
- Section 3 feels a bit detached from the rest of the paper. As far as I can tell this does not contribute to the paper's main argument.
- In Section 1 the authors can provide a better description of what PPG is. Perhaps provide there a figure illustrating what type of graph data is typically provided. They may also want to consider providing up-front a diagram describing their system.
- The authors are using 30fps video. How reliable are the second derivatives in such a case? If this was a high frame rate camera, would they expect better results? How do intrinsic camera parameters like autogain & shutter speed affect the system reliability? How does environment illumination affect system reliability? Do the subjects need to be in a controlled environment for this to work? A discussion on this would be helpful.