Venue: ICLR
Title
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution

Abstract
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.

1 INTRODUCTION
Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017).
For delayed rewards, temporal difference (TD) learning suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand, Monte Carlo (MC) estimation has high variance since it must average over all possible futures (Arjona-Medina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model like an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurately long time. Humans and animals obtain high-reward episodes from teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically only a few demonstrations are available. However, RUDDER’s LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a), as a deep learning method, requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER’s LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences. Like RUDDER, Align-RUDDER performs reward redistribution, here using an alignment model, which considerably speeds up learning even if only a few demonstrations are available.

Our main contributions are:
• We suggest a reinforcement learning algorithm that works well for sparse and delayed rewards, where standard exploration fails but a few demonstrations with high rewards are available.
• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.
• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks, which in turn allow for hierarchical reinforcement learning.

2 REVIEW OF RUDDER
Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore, the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return, and it increases (i.e., makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) to increase the return by performing actions that cause the step and (ii) to sample episodes with a larger return for learning. An approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step function is mostly constant. Therefore, predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, the LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed.
Therefore, both the change of the prediction and opening input gates indicate Q-function steps through an LSTM network that predicts the return of an episode. Reward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return by an LSTM network or by an alignment model in this work. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function as they decrease the prediction error most. Fortunately, just identifying the largest steps even with poor predictions speeds up learning considerably. See Figure 1 for a description of the reward redistribution. Learning methods based on reward redistribution. The redistributed reward serves as reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments. (C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989). LSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a stateaction pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences (s, a)0:t = (s0, a0, . . . , st, at), the redistributed reward is Rt+1 = g((s, a)0:t) − g((s, a)0:t−1), where g is an LSTM model that predicts the return of the episode. The LSTM model learns at first to approximate the largest steps of the Q-function since they reduce the prediction error the most. 3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree to the profile model. Align-RUDDER uses such alignment techniques to align two or more high return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other analog to being evolutionary related. If the agent generates a state-action sequence (s, a)0:t−1, then this sequence is aligned to the profile model g giving a score g((s, a)0:t−1). The next action of the agent extends the state-action sequence by one state-action pair (st, at). The extended sequence (s, a)0:t is also aligned to the profile model g giving another score g((s, a)0:t). 
The redistributed reward Rt+1 is the difference of these scores: Rt+1 = g((s, a)0:t)− g((s, a)0:t−1) (see Eq. (1)). This difference indicates how much of the return is gained or lost by a adding another sequence element. Align-RUDDER scores how close an agent follows an underlying strategy, which has been extracted by the profile model. Similar to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER’s theory for reward redistribution is valid for LSTM, other recurrent networks, attention mechanisms, or sequence and profile models. Advantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well as known from bioinformatics. Second, expert demonstrations have high rewards. Therefore random demonstrations with very low rewards have to be generated. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related; that is, they belong to the same category (expert demonstrations). Reward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model like a PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1). In the following, the five steps of Align-RUDDER’s reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft. (I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest to use the “successor representation” (Dayan, 1993) or “successor features” (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation. A sequence of events is obtained from a state-action sequence by mapping states s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g. 20 events. If there are too many events, good fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). (II) Determining the Alignment Scoring System. A scoring matrix S with entries si,j determines the score for aligning event i with j. 
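To make step (I) above concrete, the following is a minimal sketch of turning state sequences into event sequences. It assumes scikit-learn's AffinityPropagation, the function and variable names are illustrative, and, for simplicity, it clusters raw state differences under the default Euclidean similarity rather than the successor-representation similarity suggested above.

```python
# Minimal sketch of step (I): map state sequences to event sequences by
# clustering consecutive state differences (names and shapes are illustrative).
import numpy as np
from sklearn.cluster import AffinityPropagation

def to_event_sequences(demonstrations):
    """demonstrations: list of state sequences, each an array of shape (T+1, d)."""
    # State differences capture changes caused by important events (e.g., sub-goals).
    diffs = [np.diff(states, axis=0) for states in demonstrations]
    all_diffs = np.concatenate(diffs, axis=0)

    # Affinity propagation does not need the number of clusters in advance.
    ap = AffinityPropagation(random_state=0).fit(all_diffs)

    # Map each state difference to its cluster identifier, i.e., the event.
    event_sequences, start = [], 0
    for d in diffs:
        event_sequences.append(ap.labels_[start:start + len(d)].tolist())
        start += len(d)
    return event_sequences, ap
```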
A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set s_{i,j} = 1/p_i for i = j and s_{i,j} = α for i ≠ j. Here, p_i is the relative frequency of event i in the demonstrations. α is a hyper-parameter, which is typically a small negative number. This scoring scheme encourages alignment of rare events, for which p_i is small. For more details see Appendix Sec. A.3.

(III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores S_MSA = Σ_{i<j} Σ_{t=0}^{L} s_{i,j,t_i,t_j,t} in an alignment, where s_{i,j,t_i,t_j,t} is the score at alignment column t for aligning the event at position t_i in sequence i to the event at position t_j in sequence j. L ≥ T is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.

(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3.

(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = Σ_{l=0}^{L} s_{l,t_l}. Here, s_{l,t_l} is the alignment score for event e_{t_l} at position l in the alignment. Alignment gaps are columns to which no event was aligned; they have t_l = T + 1 with gap penalty s_{l,T+1}. If τ_t = e_{0:t} is the prefix sequence of τ of length t + 1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is

R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}),   R_{T+2} = G̃_0 − Σ_{t=0}^{T} R_{t+1},   (1)

where C = E_demo[G̃_0] / E_demo[Σ_{t=0}^{T} (S(τ_t) − S(τ_{t−1}))] with S(τ_{−1}) = 0. The original return of the sequence τ is G̃_0 = Σ_{t=0}^{T} R̃_{t+1} and the expectation of the return over demonstrations is E_demo. The constant C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence (Arjona-Medina et al., 2019) by G_0 = Σ_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past: R_{t+1} = h((s, a)_{0:t}).

Sub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores s in the PSSM. Reward redistribution also determines the terminal states of sub-tasks since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used.

Higher Order Markov Reward Redistributions. Align-RUDDER may lead to a higher-order Markov redistribution. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions, provided the expected redistributed higher-order Markov reward is the difference of Q-values.
In that case the redistribution is optimal, and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the Appendix, Corollary 2 states that under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values. 4 EXPERIMENTS Align-RUDDER is compared on three artificial tasks with sparse & delayed rewards and few demonstrations to Behavioral Cloning with Q-learning (BC+Q), Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), RUDDER (LSTM), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018). GAIL (Ho & Ermon, 2016) failed to solve the two artificial tasks, as reported previously for similar tasks (Reddy et al., 2020). Then, we test Align-RUDDER on the complex MineCraft ObtainDiamond task with episodic rewards (Guss et al., 2019b). All experiments use finite time MDPs with γ = 1 and episodic reward. More details are in Appendix Sec. A.6. Alignment vs LSTM in 1D key-chest environment. We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low data regime compared to an LSTM model. The agent has to collect the key and then open the chest to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment. As the key-events (important state-action pairs) in this environment are known, we can compute the key-event detection rate of a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5, and 10 training episodes and test on 1000 test episodes, averaged over ten trials. Align-RUDDER significantly outperforms LSTM (RUDDER) for detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models overall dataset sizes. See Appendix Fig. A.10 for the detailed results. Artificial tasks (I) and (II). They are variations of the gridworld rooms example (Sutton et al., 1999), where cells are the MDP states. In our setting, the states do not have to be time-aware for ensuring stationary optimal policies, but the unobserved used-up time introduces a random effect. The grid is divided into rooms. The agent’s goal is to reach a target from an initial state with the fewest steps. It has to cross different rooms, which are connected by doors, except for the first room, which is only connected to the second room by a teleportation portal. The portal is introduced to avoid BC initialization alone, solving the task. It enforces that going to the portal entry cells is learned when they are at positions not observed in demonstrations. At every location, the agent can move up, down, left, right. The state transitions are stochastic. An episode ends after T = 200 time steps. Suppose the agent arrives at the target. In that case, it goes into an absorbing state where it stays until T = 200 without receiving further rewards. The reward is only given at the end of the episode. Demonstrations are generated by an optimal policy with a 0.2 exploration rate. The five steps of Align-RUDDER’s reward redistribution are: (1) Events are clusters of states obtained by Affinity Propagation using as similarity the successor representation based on demonstrations. 
(2) The scoring matrix is obtained according to (II), using ε = 0 and setting all off-diagonal values of the scoring matrix to −1. (3) ClustalW is used for the MSA of the demonstrations with zero gap penalties and no biological options. (4) The MSA supplies a profile model and a PSSM as in (IV). (5) Sequences generated by the agent are mapped to sequences of events according to (I). The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM.

The reward redistribution determines sub-tasks like doors or portal arrival. The sub-tasks partition the Q-table into sub-tables that represent a sub-agent. However, we optimize a single Q-table in these experiments. Defining sub-tasks has no effect on learning in the tabular case. All compared methods learn a Q-table and use an ε-greedy policy with ε = 0.2. The Q-table is initialized by behavioral cloning (BC). The state-action pairs which are not initialized, since they are not visited in the demonstrations, get an initialization by drawing a sample from a normal distribution. Align-RUDDER learns the Q-table via RUDDER’s Q-value estimation (learning method (A) from Sec. 2). For BC+Q, RUDDER (LSTM), SQIL, and DQfD a Q-table is learned by Q-learning. Hyperparameters are selected via grid search using the same amount of time for each method. For different numbers of demonstrations, performance is measured by the number of episodes to achieve 80% of the average return of the demonstrations. A Wilcoxon rank-sum test determines the significance of performance differences between Align-RUDDER and the other methods.

Task (I) environment is a 12×12 gridworld with four rooms. The target is in room #4, and the start is in room #1 with 20 portal entry locations. The state contains the portal entry for each episode. Fig. 5 shows the number of episodes required for achieving 80% of the average reward of the demonstrations for different numbers of demonstrations. Results are averaged over 100 trials. Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10⁻¹⁰).

Task (II) is a 12×24 gridworld with eight rooms: the target is in room #8, and the start is in room #1 with 20 portal entry locations. Fig. 5 shows the results with settings as in Task (I). Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10⁻¹⁹). We also conduct an ablation study on the performance of Align-RUDDER while changing various parameters, like environment stochasticity (see Sec. A.6.4) and the number of clusters (see Sec. A.6.5).

MineCraft. We further test Align-RUDDER on the MineCraft ObtainDiamond task from the MineRL dataset (Guss et al., 2019b). We do not use intermediate rewards given by achieving sub-goals from the challenge, since Align-RUDDER is supposed to discover such sub-goals automatically via reward redistribution. We only give a reward for mining the diamond. This requires resource gathering and tool building in a hierarchical way. To the best of our knowledge, no pure learning method (sub-goals are also learned) has mined a diamond yet (Scheller et al., 2020). The dataset contains demonstrations which are insufficient to directly learn a single policy (117 demonstrations, 67 mined a diamond). Implementation: (1) A state consists of visual input and an inventory (incl. equip state). Both inputs are normalized to the same information, that is, the same number of components and the same variance.
We cluster the differences of consecutive states (Arjona-Medina et al., 2019). Very large clusters are removed, and small ones are merged, giving 19 clusters corresponding to events, which are characterized by inventory changes. Finally, demonstrations are mapped to sequences of events. (2) The scoring matrix is computed according to (II). (3) The ten shortest demonstrations that obtained a diamond are aligned by ClustalW with zero gap penalties and no biological options. (4) The multiple alignment gives a profile model and a PSSM. (5) The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM.

Based on the reward redistribution, we define sub-goals. Sub-goals are identified as profile model positions that obtain an average redistributed reward above a threshold for the demonstrations. Demonstration sub-sequences between sub-goals are considered as demonstrations for the sub-tasks. New sub-sequences generated by the agent are aligned to the profile model to determine whether a sub-goal is achieved. The redistributed reward between two sub-goals is given at the end of the sub-sequence; therefore, the sub-tasks also have an episodic reward. Fig. 4 shows how sub-goals are identified (see also the sketch below). Sub-agents are pre-trained on the demonstrations for the sub-tasks using BC, and further trained in the environment using Proximal Policy Optimization (PPO) (Schulman et al., 2018). BC pre-training corresponds to RUDDER’s Q-value estimation (learning method (A) from above), while PPO corresponds to RUDDER’s PPO training (learning method (B) from above). Our main agent can perform all actions but additionally can execute sub-agents and learns via the redistributed reward. The main agent corresponds to and is treated like a Manager module (Vezhnevets et al., 2017). The main agent is initialized by executing sub-agents according to the alignment but can deviate from this strategy. When a sub-agent successfully completes its task, the main agent executes the next sub-agent according to the alignment. More details can be found in Appendix Sec. A.7.1.

Using only ten demonstrations, Align-RUDDER is able to learn to mine a diamond. A diamond is obtained in 0.1% of the cases. With 0.5 success probability for each of the 31 extracted sub-tasks (skilled agents, not random agents), the resulting success rate for mining the diamond would be 4.66 × 10⁻¹⁰. Tab. 1 shows a comparison of methods on the MineCraft MineRL dataset by the maximum item score (Milani et al., 2020). Results are taken from (Milani et al., 2020), in particular from Figure 2, and completed by (Skrynnik et al., 2019; Kanervisto et al., 2020; Scheller et al., 2020). Align-RUDDER was not evaluated during the MineCraft MineRL challenge, but it follows the timestep limit (8 million) imposed by the challenge. Align-RUDDER did not receive the intermediate rewards provided by the challenge that hint at sub-tasks, and thus tries to solve a more difficult task. Recently, ForgER++ (Skrynnik et al., 2020) was able to mine a diamond in 0.0667% of the cases. We do not include it in Table 1 as it did not have any limitations on the number of timesteps. Also, ForgER++ generates sub-goals for MineCraft using a heuristic, while Align-RUDDER uses the redistributed reward to automatically obtain sub-goals.

Analysis of MineCraft Agent Behaviour. For each agent and its sub-task, we estimate the success rate and its improvement during fine-tuning by averaging over the returns of multiple runs (see Fig. 6).
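As a small illustration of the sub-goal extraction described above, the following sketch (hypothetical names; the per-position redistributed rewards of the demonstrations are assumed to be pre-computed) marks profile positions whose average redistributed reward exceeds a threshold, and checks the 0.5^31 success-rate figure quoted in the text.

```python
# Minimal sketch of sub-goal extraction from redistributed rewards (illustrative names).
import numpy as np

def extract_subgoals(redistributed_rewards, threshold):
    """redistributed_rewards: array of shape (num_demos, L), reward per profile position."""
    mean_reward = redistributed_rewards.mean(axis=0)   # average over demonstrations
    return np.flatnonzero(mean_reward > threshold)     # indices of sub-goal positions

# Success-rate argument from the text: with probability 0.5 per sub-task,
# chaining all 31 extracted sub-tasks succeeds with probability 0.5**31.
print(0.5 ** 31)  # ~4.66e-10
```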
For earlier sub-tasks, the agent has a relatively higher sub-task success rate. This also corresponds to the agent having access to much more data for earlier sub-tasks. During learning from demonstrations, much less data is available for training later sub-tasks, as not all expert demonstrations achieve the later tasks. During online training using reinforcement learning, an agent has to successfully complete all earlier sub-tasks to generate trajectories for later sub-tasks. This is exponentially difficult. The lack of demonstrations and the difficulty of the learned agent in generating data for later sub-tasks lead to a degradation of the success rate in MineCraft.

5 RELATED WORK
Learning from demonstrations has been widely studied over the last 50 years (Billard et al., 2008). An example is imitation learning, which uses supervised techniques when the number of demonstrations is large enough (Michie et al., 1990; Pomerleau, 1991; Michie & Camacho, 1994; Schaal, 1996; Kakade & Langford, 2002). However, policies trained with imitation learning tend to drift away from demonstration trajectories due to a distribution shift (Ross & Bagnell, 2010). This effect can be mitigated (Daumé III et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Judah et al., 2014; Sun et al., 2017; 2018). Many approaches use demonstrations for initialization, e.g. of policy networks (Taylor et al., 2011; Silver et al., 2016), value function networks (Hester et al., 2017; 2018), both networks (Zhang & Ma, 2018; Nair et al., 2018), or an experience replay buffer (Hosu & Rebedea, 2016). Beyond initialization, demonstrations are used to define constraints (Kim et al., 2013), generate sub-goals (Eysenbach et al., 2019), enforce regularization (Reddy et al., 2020), guide exploration (Subramanian et al., 2016; Jing et al., 2019), or shape rewards (Judah et al., 2014; Brys et al., 2015; Suay et al., 2016). Demonstrations may serve for inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016), which aims at learning a (non-sparse) reward function that best explains the demonstrations. Learning reward functions requires a large number of demonstrations (Syed & Schapire, 2007; Ziebart et al., 2008; Silva et al., 2019). Some approaches rely on few-shot and/or meta learning (Duan et al., 2017; Finn et al., 2017; Zhou et al., 2020). However, few-shot and meta learning demand a large set of auxiliary tasks or prerecorded data. Concluding, most methods that learn from demonstrations rely on the availability of many demonstrations (Khardon, 1999; Lopes et al., 2009), in particular when using deep learning methods (Bengio & Lecun, 2007; Lakshminarayanan et al., 2016). Some methods can learn from few demonstrations, like Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018).

6 DISCUSSION AND CONCLUSION
Discussion. Firstly, reward redistributions do not change the optimal policies (see Theorem 1 in the Appendix). Thus, suboptimal reward redistributions due to alignment errors or choosing events that are non-essential for reaching the goal might not speed up learning, but they also do not change the optimal policies. Secondly, while Align-RUDDER can speed up learning even in complex environments, the resulting performance depends on the quality of the alignment model. A low-quality alignment model can arise from multiple factors, one of which is having a large number (≫ 20) of distinct events.
Clustering can be used to reduce the number of events, which could also lead to a low quality alignment model if too many relevant events are clustered together. While the optimal policy is not changed by poor demonstration alignment, the benefit of employing reward redistribution based on it diminishes. Thirdly, the alignment could fail if the demonstrations have different underlying strategies i.e no events are common in the demonstrations. We assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other and can be aligned. However, if no underlying strategy exists, then identifying those relevant events via alignment, which should receive high redistributed rewards, may fail. In this case, reward is given at sequence end, when the redistributed reward is corrected, which leads to an episodic reward without reducing the delay of the rewards and speeding up learning. Conclusions. We have introduced Align-RUDDER to solve highly complex tasks with delayed and sparse reward from few demonstrations. We have shown experimentally that Align-RUDDER outperforms state of the art methods designed for learning from demonstrations in the regime of few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is, to the best of our knowledge, the first pure learning method to mine a diamond. ETHICS STATEMENT Impact on ML and related scientific fields. Our research has the potential to positively impact a wide variety of fields of life due to its general applicability. Most importantly, it has the potential to reduce the cost for training and deploying agents in real world applications and therefore enable systems that have not been possible until now. However, any new development in machine learning can be applied for good or for bad. Our system can be used for medical applications where it can save life but it could be used for malevolent systems. It is the society that decides how new technology is employed. However, we as scientist have to inform the society and the decision makers about our technologies. We have to show the limits of our technology, to give ideas of possible applications, to point out possible misuse or erroneous operation of our new technology. Impact on society. A big danger is that users rely too much on our new approach and use it without reflecting on the outcomes. For example, in medical treatment decisions doctors may rely on the technical system and push are the responsibility toward the machine: “The machine suggested this treatment, therefore it is not my fault”. Another example is self-driving cars where we see that drivers become more careless even if they are supposed to pay attention and keep the hands on the steering wheel. They trust too much in the technology, even if the technology does not justify this trust or is not mature. Finally, our method can be deployed in companies for job automation. Therefore there is the danger that some people lose their jobs, particularly those whose work is to perform predictable and repetitive tasks. An often used example is the taxi driver who would lose their job because of self-driving cars. The same holds for many jobs in production industry where automation can replace jobs. However all industrialization led to loss of jobs but new jobs have been created. Consequences of failures of the method. Depending on the application area, a failure of this method might be of lesser concern, such as a failed execution of a computer program. 
If our method is employed within a larger automation system, a failure can result in damages such as a car accident. However, this holds for almost all reinforcement learning methods, and usage and testing falls within the responsibility of the application area. We note that in this work, the method was only used in computer game environments. Leveraging of biases in the data and potential discrimination. Our proposed method relies on human demonstrations and thereby human decisions, which are usually strongly biased. As almost all machine learning methods trained on human-influenced data, our method could learn to use and exploit those biases and make similar decisions (Solaiman et al., 2019). Therefore, the responsible use of our method depends on a careful selection of the training data and awareness of the potential biases within those. REPRODUCIBILITY STATEMENT Code for experiments on the FourRooms and EightRooms environment is included as supplementary material. The README contains step-by-step instructions to set up an environment and run the experiments. We have specified all the training details ex. hyperparameters and how they were chosen in the Appendix (See Section A.6). We trained 100 replicates for each datapoint of the first set of experiments and are shown in Fig. 5. Using the code in the supplementary material, it is quite easy to reproduce our results for these experiments. We also include code for the experiments done for MineCraft in the supplementary materials. All the preprocessing steps, hyperparameters and other implementation details are given in the Appendix (See Section A.7). We also provide a deeper overview of the RUDDER (Arjona-Medina et al., 2019) theory in the Appendix (See Section A.2) as it is important for many design choices in Align-RUDDER. Finally, a video showcasing the MineCraft agent is also provided as supplementary material. A APPENDIX CONTENTS OF THE APPENDIX A.1 Introduction to the Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.2 Review Reward Redistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 A.3 The Five Steps of Align-RUDDER’s Reward Redistribution . . . . . . . . . . . . . 25 A.4 Sequence Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 A.5 Extended Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 A.6 Artificial Task Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 A.6.1 Hyperparameter Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 29 A.6.2 Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 A.6.3 Artificial Task p-values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 A.6.4 Stochastic Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 A.6.5 Changing number of Clusters . . . . . . . . . . . . . . . . . . . . . . . . 32 A.6.6 Key-Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 A.7 Minecraft Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 A.7.1 MineCraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 A.7.2 Related Work and Steps Towards a General Agent . . . . . . . . . . . . . 34 A.7.3 The Five Steps of Align-RUDDER Demonstrated on Minecraft . . . . . . 36 A.7.4 Implementation of our Algorithm for Minecraft . . . . . . . . . . . . . . . 38 A.7.5 Policy and Value Network Architecture . . . . . . . . . . . . . . . . . . . 
39 A.7.6 Imitation Learning of Sub-Task Agents . . . . . . . . . . . . . . . . . . . 40 A.7.7 Reinforcement Learning on Sub-Task Agents . . . . . . . . . . . . . . . . 41 A.8 Reproducing the Artificial Task Results . . . . . . . . . . . . . . . . . . . . . . . 41 A.9 Software Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 A.10 Compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 LIST OF FIGURES A.2 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 29 A.3 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30 A.4 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30 A.5 FourRooms and EightRooms environments . . . . . . . . . . . . . . . . . . . . . 31 A.6 Reward redistribution for the FourRooms and EightRooms environments . . . . . . 31 A.11 Step (I): Define events and map demonstrations into sequences of events. First, we extract the sequence of states from human demonstrations, transform images into feature vectors using a pre-trained network and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). We cluster the resulting state deltas and remove clusters with a large number of members and merge smaller clusters. In the case of demonstrations for the ObtainDiamond task in Minecraft the resulting clusters correspond to obtaining specific resources and items required to solve the task. Then we map the demonstrations to sequences of events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 A.12 Step (II): Construct a scoring matrix using event probabilities from demonstrations for diagonal elements and setting off-diagonal to a constant value. The scores in the diagonal position are proportional to the inverse of the event frequencies. Thus, aligning rare events has higher score. Darker colors signify higher score values. . . 36 A.13 Step (III) Perform multipe sequence alignment (MSA) of the demonstrations. The MSA algorithm maximizes the pairwise sum of scores of all alignments. The score of an alignment at each position is given by the scoring matrix. As the off-diagonal entries are negative, the algorithm will always try to align an event to itself, while giving preference to events which give higher scores. . . . . . . . . . . . . . . . . 37 A.14 Step (IV) Compute a position-specific scoring matrix (PSSM). This matrix can be computed using the MSA (Step (III)) and the scoring matrix (Step (II)). Every column entry is for a position from the MSA. The score at a position (column) and for an event (row) depends on the frequency of that event at that position in the MSA. For example, the event in the last position is present in all the sequences, and thus gets a high score at the last position. But it is absent in the remaining position, and thus gets a score of zero elsewhere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 A.15 Step (V) A new sequence is aligned step by step to the profile model using the PSSM, resulting in an alignment score for each sub-sequence. The redistributed reward is then proportional to the difference of scores of subsequent alignments. . . . . . . . 37 A.16 Conceptual overview of our MineRL agent . . . . . . . . . . . . . . . . . . . . . . 38 A.17 Conceptual architecture of Align-RUDDER MineRL policy and value networks . . 39 A.18 Discretization and interpolation of camera angles . . . . . . . . . . . . . . . . 
. . 40 A.19 Mapping of clusters to letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 A.20 Trajectory replay given by an exemplary consensus . . . . . . . . . . . . . . . . . 41 A.1 INTRODUCTION TO THE APPENDIX This is the appendix to the paper “Align-RUDDER: Learning from few Demonstrations by Reward Redistribution”. The appendix aims at supporting the main document and provides more detailed information about the implementation of our method for different tasks. The content of this document is summarized as follows: • Section A.3 describes the five steps of Align-RUDDER’s reward redistribution in more detail. In particular, the scoring systems are described in more detail. • Section A.4 provides a brief overview of sequence alignment methods and the hyperparameters used in our experiments. • Section A.6 provides figures and tables to support the results of the experiments in Artificial Tasks (I) and (II). • Section A.7 explains in detail the experiments conducted in the Minecraft ObtainDiamond task. A.2 REVIEW REWARD REDISTRIBUTION Reward redistribution and return decomposition are concepts introduced in RUDDER but also apply to Align-RUDDER as it is a variant of RUDDER. Reward redistribution based on return decomposition eliminates – or at least mitigates – delays of rewards while preserving the same optimal policies. Align-RUDDER is justified by the theory of return decomposition and reward redistribution when using multiple sequence alignment for constructing a reward redistribution model. In this section, we review the concepts of return decomposition and reward redistribution. Preliminaries. We consider a finite MDP defined by the 5-tuple P = (S,A,R, p, γ) where the state space S and the action space A are sets of finite states s and actions a and R the set of bounded rewards r. For a given time step t, the corresponding random variables are St, At and Rt+1. Furthermore, P has transition-reward distributions p(St+1 = s′, Rt+1 = r | St = s,At = a), and a discount factor γ ∈ (0, 1], which we keep at γ = 1. A Markov policy π(a | s) is a probability of an action a given a state s. We consider MDPs with finite time horizon or with an absorbing state. The discounted return of a sequence of length T at time t is Gt = ∑T−t k=0 γ kRt+k+1. As usual, the Q-function for a given policy π is qπ(s, a) = Eπ [Gt | St = s,At = a]. Eπ[x | s, a] is the expectation of x, where the random variable is a sequence of states, actions, and rewards that is generated with transition-reward distribution p, policy π, and starting at (s, a). The goal is to find an optimal policy π∗ = argmax π Eπ[G0] maximizing the expected return at t = 0. We assume that the states s are time-aware (time t can be extracted from each state) in order to assure stationary optimal policies. According to Proposition 4.4.3 in (Puterman, 2005), a deterministic optimal policy π∗ exists. Definitions. A sequence-Markov decision process (SDP) is defined as a decision process that has Markov transition probabilities but a reward probability that is not required to be Markov. Two SDPs P̃ and P with different reward probabilities are return-equivalent if they have the same expected return at t = 0 for each policy π, and strictly return-equivalent if they additionally have the same expected return for every episode. Since for every π the expected return at t = 0 is the same, return-equivalent SDPs have the same optimal policies. 
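To illustrate the notion of strict return equivalence used in these definitions and in the reward redistribution introduced next, here is a small sketch with γ = 1; the redistributed reward values are made up for illustration only.

```python
# Minimal sketch (gamma = 1): a delayed-reward episode and a redistributed-reward
# episode are strictly return-equivalent if their returns G_0 coincide.
def episodic_return(rewards, gamma=1.0):
    return sum(gamma ** k * r for k, r in enumerate(rewards))

# Delayed reward: the whole return is given only at the end of the episode.
delayed = [0.0, 0.0, 0.0, 0.0, 1.0]

# A redistribution moves reward to the steps where sub-tasks are solved
# (values are made up for illustration); the total return is preserved.
redistributed = [0.0, 0.5, 0.0, 0.25, 0.25]

assert episodic_return(delayed) == episodic_return(redistributed)  # same G_0
```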
A reward redistribution is a procedure that —for a given sequence of a delayed reward SDP P̃— redistributes the realization or expectation of its return G̃0 along the sequence. This yields a new SDP P with R as random variable for the redistributed reward and the same optimal policies as P̃: Theorem 1 (Arjona-Medina et al. (2019)). Both the SDP P̃ with delayed reward R̃t+1 and the SDP P with redistributed reward Rt+1 have the same optimal policies. Proof. The proof can be found in (Arjona-Medina et al., 2019). The delay of rewards is captured by the expected future rewards κ(m, t − 1) at time (t − 1). κ is defined as κ(m, t− 1) := Eπ [ ∑m τ=0Rt+1+τ | st−1, at−1], that is, at time (t− 1) the expected sum of future rewards from Rt+1 to Rt+1+m but not the immediate reward Rt. A reward redistribution is defined to be optimal, if κ(T − t − 1, t) = 0 for 0 6 t 6 T − 1, which is equivalent to Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1): Theorem 2 (Arjona-Medina et al. (2019)). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a second order Markov reward redistribution, which ensures that P is return-equivalent to P̃ . For a specific π, the following two statements are equivalent: (I) κ(T − t− 1, t) = 0, i.e. the reward redistribution is optimal, (II) Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (2) An optimal reward redistribution fulfills for 1 6 t 6 T and 0 6 m 6 T − t: κ(m, t− 1) = 0. Proof. The proof can be found in (Arjona-Medina et al., 2019). This theorem shows that an optimal reward redistribution relies on steps q̃π(st, at)− q̃π(st−1, at−1) of the Q-function. Identifying the largest steps in the Q-function detects the largest rewards that have to be redistributed, which makes the largest progress towards obtaining an optimal reward redistribution. Corollary 1 (Higher order Markov reward redistribution optimality conditions). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is return-equivalent to P̃ . If for a specific π Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (3) holds, then the higher order reward redistribution Rt+1 is optimal, that is, κ(T − t− 1, t) = 0. Proof. The proof is just PART (II) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness. We assume that Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) , (4) where we abbreviate the expected Rt+1 by ht: Eπ [Rt+1 | st−1, at−1, st, at] = ht . (5) The expectations Eπ [. | st−1, at−1] like Eπ [ R̃T+1 | st−1, at−1 ] are expectations over all episodes that contain the state-action pair (st−1, at−1) at time t−1. The expectations Eπ [. | st−1, at−1, st, at] like Eπ [ R̃T+1 | st−1, at−1, st, at ] are expectations over all episodes that contain the state-action pairs (st−1, at−1) at time t− 1 and (st, at) at time t. The Q-values are defined as q̃π(st, at) = Eπ [ T−t∑ k=0 R̃t+k+1 | st, at ] = Eπ [ R̃T+1 | st, at ] , (6) qπ(st, at) = Eπ [ T−t∑ k=0 Rt+k+1 | st, at ] , (7) which are expectations over all trajectories that contain (st, at) at time t. Since P̃ is Markov, for q̃π only the suffix trajectories beginning at (st, at) enter the expectation. The definition of κ(m, t − 1) for 1 6 t 6 T and 0 6 m 6 T − t was κ(m, t − 1) = Eπ [ ∑m τ=0Rt+1+τ | st−1, at−1]. We have to proof κ(T − t− 1, t) = 0. 
First, we consider m = 0 and 1 6 t 6 T , therefore κ(0, t − 1) = Eπ [Rt+1 | st−1, at−1]. Since the original MDP P̃ has episodic reward, we have r̃(st−1, at−1) = E [ R̃t | st−1, at−1 ] = 0 for 1 6 t 6 T . Therefore, we obtain: q̃π(st−1, at−1) = r̃(st−1, at−1) + ∑ st,at p(st, at | st−1, at−1) q̃π(st, at) (8) = ∑ st,at p(st, at | st−1, at−1) q̃π(st, at) . Using this equation we obtain for 1 6 t 6 T : κ(0, t− 1) = Eπ [Rt+1 | st−1, at−1] (9) = Est,at [q̃ π(st, at) − q̃π(st−1, at−1) | st−1, at−1] = ∑ st,at p(st, at | st−1, at−1) (q̃π(st, at) − q̃π(st−1, at−1)) = q̃π(st−1, at−1) − ∑ st,at p(st, at | st−1, at−1) q̃π(st−1, at−1) = q̃π(st−1, at−1) − q̃π(st−1, at−1) = 0 . Next, we consider the expectation of ∑m τ=0Rt+1+τ for 1 6 t 6 T and 1 6 m 6 T − t (for m > 0) κ(m, t− 1) = Eπ [ m∑ τ=0 Rt+1+τ | st−1, at−1 ] (10) = Eπ [ m∑ τ=0 (q̃π(sτ+t, aτ+t) − q̃π(sτ+t−1, aτ+t−1)) | st−1, at−1 ] = Eπ [q̃ π(st+m, at+m) − q̃π(st−1, at−1) | st−1, at−1] = Eπ [ Eπ [ T∑ τ=t+m R̃τ+1 | st+m, at+m ] | st−1, at−1 ] − Eπ [ Eπ [ T∑ τ=t−1 R̃τ+1 | st−1, at−1 ] | st−1, at−1 ] = Eπ [ R̃T+1 | st−1, at−1 ] − Eπ [ R̃T+1 | st−1, at−1 ] = 0 . We used that R̃t+1 = 0 for t < T . For the particualr cases t = τ + 1 and m = T − t = T − τ − 1 we have κ(T − τ − 1, τ) = 0 . (11) That is exactly what we wanted to proof. Corollary 1 explicitly states that the optimality criterion ensures an optimal reward redistribution even if the reward redistribution is higher order Markov. For Align-RUDDER we may obtain a higher order Markov reward redistribution due to the profile alignment of the sub-sequences. Corollary 2 (Higher order Markov reward redistribution optimality representation). We assume a delayed reward MDP P̃ , with episodic reward and that a new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is strictly return-equivalent to P̃ . We assume that the reward redistribuition is optimal, that is, κ(T − t − 1, t) = 0. If the condition Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] (12) holds, then Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) . (13) Proof. By and large, the proof is PART (I) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness. We assume that the reward redistribution is optimal, that is, κ(T − t− 1, t) = 0 . (14) We abbreviate the expected Rt+1 by ht: Eπ [Rt+1 | st−1, at−1, st, at] = ht . (15) In (Arjona-Medina et al., 2019) Lemma A4 is as follows. Lemma 1. Two strictly return-equivalent SDPs P̃ and P have the same expected return for each start state-action sub-sequence (s0, a0, . . . , st, at), 0 6 t 6 T : Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] . (16) The assumptions of Lemma 1 hold for for the delayed reward MDP P̃ and the redistributed reward SDP P , since a reward redistribution ensures strictly return-equivalent SDPs. Therefore for a given state-action sub-sequence (s0, a0, . . . , st, at), 0 6 t 6 T : Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] (17) with G0 = ∑T τ=0Rτ+1 and G̃0 = R̃T+1. The Markov property of the MDP P̃ ensures that the future reward from t+ 1 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1: Eπ [ T−t∑ τ=0 R̃t+1+τ | st, at ] = Eπ [ T−t∑ τ=0 R̃t+1+τ | s0, a0, . . . , st, at ] . (18) According to Eq. (12), the future reward from t + 2 on is independent of the past sub-sequence s0, a0, . . . 
, st−1, at−1: Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] . (19) Using these properties we obtain q̃π(st, at) = Eπ [ T−t∑ τ=0 R̃t+1+τ | st, at ] (20) = Eπ [ T−t∑ τ=0 R̃t+1+τ | s0, a0, . . . , st, at ] = Eπ [ R̃T+1 | s0, a0, . . . , st, at ] = Eπ [ T∑ τ=0 R̃τ+1 | s0, a0, . . . , st, at ] = Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] = Eπ [ T∑ τ=0 Rτ+1 | s0, a0, . . . , st, at ] = Eπ [ T−t−1∑ τ=0 Rt+2+τ | s0, a0, . . . , st, at ] + t∑ τ=0 hτ = Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] + t∑ τ=0 hτ = κ(T − t− 1, t) + t∑ τ=0 hτ = t∑ τ=0 hτ . We used the optimality condition κ(T − t− 1, t) = Eπ [ T−t−1∑ τ=0 Rt+2+τ | st, at ] = 0 . (21) It follows that Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) . (22) This is exactly what we wanted to proof. This corollary shows that optimal reward redistributions can be expressed as difference of Q-values if Eq. (12) holds. Eq. (12) states that the past can be averaged out. However, there may exist optimal reward redistributions for which Eq. (12) does not hold. If the reward redistribution is optimal, the Q-values of P are given by qπ(st, at) = q̃π(st, at) − ψπ(st) and therefore P̃ and P have the same advantage function: Theorem 3 (Arjona-Medina et al. (2019)). If the reward redistribution is optimal, then the Q-values of the SDP P are qπ(st, at) = r(st, at) and qπ(st, at) = q̃ π(st, at) − Est−1,at−1 [q̃π(st−1, at−1) | st] = q̃π(st, at) − ψπ(st) . (23) The SDP P and the original MDP P̃ have the same advantage function. Proof. The proof can be found in (Arjona-Medina et al., 2019). For an optimal reward redistribution only the expectation of the immediate reward r(st, at) = Eπ [Rt+1 | st, at] must be estimated. This considerably simplifies learning. Learning methods according to Arjona-Medina et al. (2019). The redistributed reward serves as reward for a subsequent learning method, which can be Type A, B, and C as described in ArjonaMedina et al. (2019). Type A methods estimate the Q-values. They can be estimated directly according to Eq. (23) assuming an optimal redistribution (Type A variant i). Q-values can be corrected for a non-optimal reward redistribution by additionally estimating κ (Type A variant ii). Q-value estimation can use eligibility traces (Type A variant iii). Type B methods use the redistributed rewards for policy gradients like Proximal Policy Optimization (PPO) Schulman et al. (2018). Type C methods use TD learning like Q-learning Watkins (1989), where immediate and future reward must be drawn together as typically done. For all these learning methods, demonstrations can be used for initialization (e.g. experience replay buffer) or pre-training (e.g. policy network with behavioral cloning). Recently, the convergence of RUDDER learning methods has been proven under commonly used assumptions (Holzleitner et al., 2020). Non-optimal reward redistribution and Align-RUDDER. According to Theorem 1, non-optimal reward redistributions do not change the optimal policies. The value κ(T − t− 1, t) measures the remaining delayed reward. The smaller κ is, the faster is the learning process. For Monte Carlo (MC) estimates, smaller κ reduces the variance of the future rewards, and, therefore the variance of the estimation. For temporal difference (TD) estimates, smaller κ reduces the amount of information that has to flow back. 
Align-RUDDER dramatically reduces the amount of delayed rewards by identifying key events via multiple sequence alignment, to which reward is redistributed. For an episodic MDP, a reward that is redistributed to time t reduces all κ(m, τ) with t ≤ τ < T by the expectation of the reward. Therefore, in most cases Align-RUDDER makes κ-values much smaller.

A.3 THE FIVE STEPS OF ALIGN-RUDDER’S REWARD REDISTRIBUTION
The new reward redistribution approach consists of five steps, see Fig. A.1: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model and the PSSM. (V) Redistribute the reward: Each sub-sequence τ_t of a new episode τ is aligned to the profile. The redistributed reward R_{t+1} is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. R_{t+1} ∝ S(τ_t) − S(τ_{t−1}).

(I) Defining Events. Alignment techniques assume that sequences consist of few symbols, e.g. about 20 symbols, the events. It is crucial to keep the number of events small in order to increase the difference between a random alignment and an alignment of demonstrations. If there are many events, then two demonstrations might have few events that can be matched, which cannot be well distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). The events can be the original state-action pairs, clusters thereof, or other representations of state-action pairs, e.g. indicating changes of inventory, health, energy, skills, etc. In general, we define events as a cluster of states or state-actions. A sequence of events is obtained from a state-action sequence by substituting states or state-actions by their cluster identifier. In order to cluster states, a similarity measure between them is required. We suggest using the “successor representation” (Dayan, 1993) of the states, which gives a similarity matrix based on how connected two states are given a policy. Successor representations have been used before (Machado et al., 2017; Ramesh et al., 2019) to obtain important events for option learning. For computing the successor representation, we use the demonstrations combined with state-action sequences generated by a random policy. For high-dimensional state spaces, “successor features” (Barreto et al., 2017) can be used. We use similarity-based clustering methods like affinity propagation (AP) (Frey & Dueck, 2007). For AP the similarity matrix does not have to be symmetric and the number of clusters need not be known. State-action pairs (s, a) are mapped to events e.

(II) Determining the Alignment Scoring System. Alignment algorithms distinguish similar sequences from dissimilar sequences using a scoring system. A scoring matrix S has entries s_{i,j} that give the score for aligning event i with j. The MSA score S_MSA of a multiple sequence alignment is the sum of all pairwise scores: S_MSA = Σ_{i<j} Σ_{t=0}^{L} s_{x_{i,t},x_{j,t}}, where x_{i,t} is the event at position t for sequence τ_i = e_{i,0:T} in the alignment, analogously for x_{j,t} and the sequence τ_j = e_{j,0:T}, and L is the alignment length. Note that L ≥ T and x_{i,t} ≠ e_{i,t}, since gaps are present in the alignment. In the alignment, events should have the same probability of being aligned as they would have if we knew the strategy and aligned demonstrations accordingly.
The theory of high-scoring segments gives a scoring scheme with these alignment probabilities (Karlin & Altschul, 1990; Karlin et al., 1990; Altschul et al., 1990). Event $i$ is observed with probability $p_i$ in the demonstrations, therefore a random alignment aligns event $i$ with $j$ with probability $p_i p_j$. An alignment algorithm maximizes the MSA score $S_{\mathrm{MSA}}$ and, thereby, aligns events $i$ and $j$ with probability $q_{ij}$ for demonstrations. High values of $q_{ij}$ mean that the MSA often aligns events $i$ and $j$ in the demonstrations using the scoring matrix $\mathrm{S}$ with entries $s_{i,j}$. According to Theorem 2 and Equation [3] in Karlin & Altschul (1990), asymptotically with the sequence length, we have $s_{i,j} = \ln\!\big(q_{ij}/(p_i p_j)\big)/\lambda^*$, where $\lambda^*$ is the unique positive root of $\sum_{i=1}^{n}\sum_{j=1}^{n} p_i p_j \exp(\lambda s_{i,j}) = 1$ (Equation [4] in Karlin & Altschul (1990)). We can now choose a desired probability $q_{ij}$ and then compute the scoring matrix $\mathrm{S}$ with entries $s_{i,j}$. High values of $q_{ij}$ should indicate relevant events for the strategy. A priori, we only know that a relevant event should be aligned to itself, while we do not know which events are relevant. Therefore we set $q_{ij}$ to large values for every $i = j$ and to low values for $i \neq j$. Concretely, we set $q_{ij} = p_i - \epsilon$ for $i = j$ and $q_{ij} = \epsilon/(n-1)$ for $i \neq j$, where $n$ is the number of different possible events. Events with smaller $p_i$ receive a higher score $s_{i,i}$ when aligned to themselves, since this self-match is less often observed when randomly matching events ($p_i p_i$ is the probability of a random self-match). Any prior knowledge about events should be incorporated into $q_{ij}$.

(III) Multiple sequence alignment (MSA). MSA first produces pairwise alignments between all demonstrations. Then, a guiding tree (agglomerative hierarchical clustering) is produced by hierarchically clustering the sequences according to their pairwise alignment scores. Demonstrations which follow the same strategy appear in the same cluster of the guiding tree. Each cluster is aligned separately via MSA to address different strategies. However, if the demonstrations do not form a cluster, then the alignment will fail. MSA methods like ClustalW (Thompson et al., 1994) or MUSCLE (Edgar, 2004) can be used.

(IV) Position-Specific Scoring Matrix (PSSM) and Profile. From the final alignment, we construct a) an MSA profile (column-wise event frequencies $q_{i,t}$) and b) a PSSM (Stormo et al., 1982), which is used for aligning new sequences to the profile of the MSA. To compute the PSSM (column-wise scores $s_{i,t}$), we apply Theorem 2 and Equation [3] in Karlin & Altschul (1990). Event $i$ is observed with probability $p_i$ in the data. For each position $t$ in the alignment, we compute $q_{i,t}$, which indicates the frequency of event $i$ at position $t$. The PSSM is $s_{i,t} = \ln(q_{i,t}/p_i)/\lambda^*_t$, where $\lambda^*_t$ is the unique positive root of $\sum_{i=1}^{n} p_i \exp(\lambda s_{i,t}) = 1$ (Equation [1] in Karlin & Altschul (1990)). If we align a new sequence that follows the underlying strategy (a new demonstration) to the profile model, we would see that event $i$ is aligned to position $t$ in the profile with probability $q_{i,t}$.

(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence $\tau = e_{0:T}$ ($e_t$ is the event at position $t$) is aligned to the profile, which gives the score $S(\tau) = \sum_{t=0}^{L} s_{x_t, t}$. Here, $s_{i,t}$ is the alignment score for event $i$ at position $t$, and $x_t$ is the event of $\tau$ at position $t$ in the alignment. $L$ is the profile length, where $L \geq T$ and $x_t \neq e_t$ because of gaps in the alignment.
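Before turning to how these scores become rewards, here is a minimal sketch of how the PSSM of step (IV) might be computed from column-wise event frequencies. Using scipy's brentq for the root $\lambda^*_t$, the bracketing interval, and the small pseudo-count that avoids $\log(0)$ are implementation choices of this sketch, not values prescribed by the paper.

```python
import numpy as np
from scipy.optimize import brentq

def pssm(q, p, eps=1e-8):
    """Position-specific scores s[i, t] = ln(q[i, t] / p[i]) / lambda*_t.

    q: (n_events, n_positions) column-wise event frequencies from the MSA profile.
    p: (n_events,) background event probabilities in the demonstrations.
    lambda*_t is computed here as the positive root of sum_i p[i]*exp(lambda*s0[i, t]) = 1
    for the unscaled log-odds s0; for a properly normalized profile column this root is 1
    and the score reduces to the plain log-odds.
    """
    s0 = np.log((q + eps) / (p[:, None] + eps))             # unscaled log-odds scores
    scores = np.zeros_like(s0)
    for t in range(s0.shape[1]):
        f = lambda lam, col=s0[:, t]: np.sum(p * np.exp(lam * col)) - 1.0
        lam_star = brentq(f, 1e-6, 5.0)                      # assumed bracket for the root
        scores[:, t] = s0[:, t] / lam_star
    return scores
```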
If $\tau_t = e_{0:t}$ is the prefix sequence of $\tau$ of length $t+1$, then the reward redistribution $R_{t+1}$ for $0 \leq t \leq T$ is
$$R_{t+1} \;=\; \big(S(\tau_t) - S(\tau_{t-1})\big)\, C \;=\; g\big((s,a)_{0:t}\big) - g\big((s,a)_{0:t-1}\big), \qquad R_{T+2} \;=\; \tilde{G}_0 - \sum_{t=0}^{T} R_{t+1}, \tag{24}$$
where $C = \mathrm{E}_{\mathrm{demo}}\big[\tilde{G}_0\big] \,/\, \mathrm{E}_{\mathrm{demo}}\big[\sum_{t=0}^{T} S(\tau_t) - S(\tau_{t-1})\big]$, $\tilde{G}_0 = \sum_{t=0}^{T} \tilde{R}_{t+1}$ is the original return of the sequence $\tau$, and $S(\tau_{-1}) = 0$. $\mathrm{E}_{\mathrm{demo}}$ is the expectation over demonstrations, and $C$ scales $R_{t+1}$ to the range of $\tilde{G}_0$. $R_{T+2}$ is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: $\mathrm{E}_{\mathrm{demo}}[R_{T+2}] = 0$. Since $\tau_t = e_{0:t}$ and $e_t = f(s_t, a_t)$, we can set $g((s,a)_{0:t}) = S(\tau_t)\, C$. We ensure strict return equivalence, since $G_0 = \sum_{t=0}^{T+1} R_{t+1} = \tilde{G}_0$. The redistributed reward depends only on the past, that is, $R_{t+1} = h((s,a)_{0:t})$. For computational efficiency, the alignment of $\tau_{t-1}$ can be extended to one for $\tau_t$, just as exact matches are extended to high-scoring sequence pairs by the BLAST algorithm (Altschul et al., 1990; 1997).

Sub-tasks. The reward redistribution identifies sub-tasks, which are alignment positions with high redistributed reward. It also determines the terminal states and automatically assigns reward for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the reward is Markov. For redistributed reward that is Markov, the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or recursive composition of option models (Silver & Ciosek, 2012) can be used as subsequent approaches to hierarchical reinforcement learning.

A.4 SEQUENCE ALIGNMENT

In bioinformatics, sequence alignment identifies regions of significant similarity among different biological sequences to establish evolutionary relationships between those sequences. In 1970, Needleman and Wunsch proposed a global alignment method based on dynamic programming (Needleman & Wunsch, 1970). This approach ensures the best possible alignment given a substitution matrix, such as PAM (Dayhoff, 1978) or BLOSUM (Henikoff & Henikoff, 1992), and other parameters to penalize gaps in the alignment. The method of Needleman and Wunsch is of $O(mn)$ complexity both in memory and time, which can be prohibitive for long sequences like genomes. An optimization of this method by Hirschberg (1975) reduces memory to $O(m+n)$, but still requires $O(mn)$ time. Later, Smith and Waterman developed a local alignment method for sequences (Smith & Waterman, 1981). It is a variation of Needleman and Wunsch's method, keeping the substitution matrix and the gap-scoring scheme but setting cells in the similarity matrix with negative scores to zero. The complexity of this algorithm is $O(n^2 m)$. Osamu Gotoh published an optimization of this method, running in $O(mn)$ time (Gotoh, 1982). The main differences between the two methods are the following:

• The global alignment method by Needleman and Wunsch aligns the sequences fixing the first and the last position of both sequences. It attempts to align every symbol in the sequence, allowing some gaps, but the main purpose is to get a global alignment. This is especially useful when the two sequences are highly similar. For instance:

ATCGGATCGACTGGCTAGATCATCGCTGG
CGAGCATC-ACTGTCT-GATCGACCTTAG
* *** **** ** **** * * *

• As an alternative to global methods, the local method of Smith and Waterman aligns the sequences with a higher degree of freedom, allowing the alignment to start or end with gaps.
This is extremely useful when the two sequences are substantially dissimilar in general but suspected of having a highly related sub region. ATCAAGGAGATCATCGCTGGACTGAGTGGCT----ACGTGGTATGT ATC----CGATCATCGCTGG-CTGATCGACCTTCTACGT------*** ************ **** * * **** A.4.0.1 Multiple Sequence Alignment algorithms. The sequence alignment algorithms by Needleman and Wunsch and Smith and Waterman are limited to aligning two sequences. The approaches for generalizing these algorithms to multiple sequences can be classified into four categories: • Exact methods (Wang & Jiang, 1994). • Progressive methods: ClustalW (Thompson et al., 1994), Clustal Omega (Sievers et al., 2014), T-Coffee (Notredame et al., 2000). • Iterative and search algorithms: DIALIGN (Morgenstern, 2004), MultiAlign (Corpet, 1988). • Local methods: eMOTIF (Mccammon & Wolynes, 1998), PROSITE (Bairoch & Bucher, 1994). For more details, visit Sequence Comparison: Theory and methods (Chao & Zhang, 2009). In our experiments, we use ClustalW from Biopython (Cock et al., 2009) with the following parameters: clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE " \ "-INFILE={infile} -OUTFILE={outfile} " \ "-PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 " \ "-MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER " \ "-NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP " \ "-NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE where the PWMATRIX and MATRIX are computed according to step (II) in Sec. 3 of the main paper. A.5 EXTENDED RELATED WORK Align-RUDDER allows to identify sub-goals and sub-tasks, therefore it is related to hierarchical reinforcement learning (HRL) approaches like the option framework (Sutton et al., 1999),
1. What is the focus and contribution of the paper regarding reward redistribution? 2. What are the strengths of the proposed approach, particularly in its novel application of biological sequence alignment? 3. What are the weaknesses of the paper, especially in terms of explanatory figures and methodology clarity? 4. How does the reviewer assess the empirical results and their validation? 5. Are there any minor issues or suggestions for improvement in the paper?
Summary Of The Paper Review
Summary Of The Paper This paper presents an improvement over RUDDER to allow for RUDDER-style reward redistribution when given limited sets of demonstrations by using a biological sequence alignment model. Review Paper Strengths Motivation: The authors make very clear the motivation. Improving RUDDER, a reward redistribution method, so that it can work on tasks in which demonstrations of high-reward trajectories are given because the task is too hard for exploration alone. Method Novelty: Applying a sequence alignment mechanism seems relevant and a novel contribution, also leading to good empirical results. Experiment Results: Align-RUDDER seems to have great performance improvements over RUDDER, especially with a very limited number of demonstrations. The compared baselines also seem to be relevant, demonstrating Align-RUDDER's good performance. Furthermore, the MineCraft diamond mining demonstrates impressive performance gains. The authors also perform hypothesis testing, further validating the results of their experiments. Supplementary Material: The authors provide a detailed set of background knowledge for RUDDER, sequence alignment, and extra figures/examples in the appendix. Furthermore, the supplementary video was nice and informative. Paper Weaknesses Grammatical Issues: A few grammatical issues throughout, it doesn't detract too much but occasionally causes a hiccup in the flow when reading, so please fix these. Method Clarity: The paper figures do not explain the method too well. Figure 2 is a decent figure, but the caption does not really explain the right half, nor does it draw any parallel between the two parts of the figures. I think this figure should be simplified, labelled, and used to better show how the reward redistribution using conservation score mechanism works in the biological sequence alignment case, and in the reward redistribution case. Figure 3 is nice, but it should have some text explaining things better in the caption. As the actual explanation for how the method + alignment strategy works is very complicated, these figures are critical for showing it clearly at a higher level. Furthermore, there should be some more intuition presented earlier about why sequence alignment is easier than RUDDER's LSTM. In Section 3, "Reward Redistribution by Sequence Alignment," to someone unfamiliar with biological alignment techniques, you should explain why alignment will intuitively help first, before diving into details. In fact, in general, I think at least to me the amount of detail given to the alignment explanation should be redistributed for a conference like ICLR. The authors should rewrite the method section to have far fewer details (except those explicitly needed) about the alignment algorithm itself, this should all be in the appendix. Instead, the entire reward redistribution scheme should be used to explain both intuitively and in detail how this 1) encourages alignment of similar trajectories in terms of states and actions, 2) better alignment scores result in a better reward redistribution scheme, and 3) is far easier to learn with few demonstrations than RUDDER's LSTM. Of course, some details should be kept but in general I think there is too much detail placed in the nitty-gritty of how alignment is done in the main paper. Experiments: To me, the experimental setup seems valid. 
But along the same lines as what was stated above, there should be a bit more analysis in the main text about why Align-RUDDER performs the way it does in comparison to the other methods.

Questions
• Appendix page 33: "We...transform images into feature vectors using a standard pre-trained network." What standard pre-trained network was used?

Minor Issues
• Contributions 2 and 4 in the intro are saying basically the same thing.
• "Sub-tasks" on page 5: is this meant to have a new line before it and be bolded instead?
• Page 7, describing the hierarchical setup, says "more details can be found in the appendix"; please link an appendix section for easier referencing here.
• The reference to learning methods should be a clickable link back to the subsection describing the learning methods (I had forgotten the learning methods by the time I reached page 6, where they are mentioned in the experiments).
ICLR
Title Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth

Abstract Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations.

1 INTRODUCTION Sinusoidal networks are neural networks with sine nonlinearities, instead of the traditional ReLU or hyperbolic tangent. They have been recently popularized, particularly for applications in implicit representation models, in the form of SIRENs (Sitzmann et al., 2020). However, despite their popularity, many aspects of their behavior and comparative advantages are not yet fully understood. Particularly, some initialization and parametrization choices for sinusoidal networks are often defined arbitrarily, without a clear understanding of how to optimize these settings in order to maximize performance. In this paper, we first propose a simplified version of such sinusoidal networks that allows for easier implementation and theoretical analysis. We show that these simple sinusoidal networks can match and outperform SIRENs in implicit representation learning tasks, such as fitting videos, images and audio signals. We then analyze sinusoidal networks from a neural tangent kernel (NTK) perspective (Jacot et al., 2018), demonstrating that their NTK approximates a low-pass filter with adjustable bandwidth. We confirm through an empirical analysis that this theoretically predicted behavior also holds approximately in practice. We then use the insights from this analysis to inform the choices of initialization and parameters for sinusoidal networks. We demonstrate that we can optimize the performance of a sinusoidal network by tuning the bandwidth of its kernel to the maximum frequency present in the input signal being learned.
Finally, we apply these insights in practice, demonstrating that “well tuned” sinusoidal networks outperform other networks in learning implicit representation models with good interpolation outside the training points, and in learning the solution to differential equations. 2 BACKGROUND AND RELATED WORK Sinusoidal networks. Sinusoidal networks have been recently popularized for implicit modelling tasks by sinusoidal representation networks (SIRENs) (Sitzmann et al., 2020). They have also been evaluated for physics-informed learning, demonstrating promising results in a series of domains (Raissi et al., 2019b; Song et al., 2021; Huang et al., 2021b;a; Wong et al., 2022). Among the benefits of such networks is the fact that the mapping of inputs through an (initially) random linear layer followed by a sine function is mathematically equivalent to a transformation to a random Fourier basis, rendering them close to networks with Fourier feature transforms (Tancik et al., 2020; Rahimi & Recht, 2007), and possibly able to address spectral bias (Basri et al., 2019; Rahaman et al., 2019; Wang et al., 2021). Sinusoidal networks also have the property that the derivative of their outputs is given simply by another sinusoidal network, due to the fact that the derivative of sine function is a phase-shifted sine. Neural tangent kernel. An important prior result to the neural tangent kernel (NTK) is the neural network Gaussian process (NNGP). At random initialization of its parameters θ, the output function of a neural network of depth L with nonlinearity σ, converges to a Gaussian process, called the NNGP, as the width of its layers n1, . . . , nL → ∞. (Neal, 1994; Lee et al., 2018). This result, though interesting, does not say much on its own about the behavior of trained neural networks. This role is left to the NTK, which is defined as the kernel given by Θ(x, x̃) = ⟨∇θfθ(x),∇θfθ(x̃)⟩. It can be shown that this kernel can be written out as a recursive expression involving the NNGP. Importantly, Jacot et al. (2018) demonstrated that, again as the network layer widths n1, . . . , nL → ∞, the NTK is (1) deterministic at initialization and (2) constant throughout training. Finally, it has also been demonstrated that under some assumptions on its parametrization, the output function of the trained neural network fθ converges to the kernel regression solution using the NTK (Lee et al., 2020; Arora et al., 2019). In other words, under certain assumptions the behavior of a trained deep neural network can be modeled as kernel regression using the NTK. Physics-informed neural networks. Physics-informed neural networks (Raissi et al., 2019a) are a method for approximating the solution to differential equations using neural networks (NNs). In this method, a neural network û(t, x; θ), with learned parameters θ, is trained to approximate the actual solution function u(t, x) to a given partial differential equation (PDE). Importantly, PINNs employ not only a standard “supervised” data loss, but also a physics-informed loss, which consists of the differential equation residual N . Thus, the training loss consists of a linear combination of two loss terms, one directly supervised from data and one informed by the underlying differential equations. 3 SIMPLE SINUSOIDAL NETWORKS There are many details that complicate the practical implementation of current sinusoidal networks. 
We aim to propose a simplified version of such networks in order to facilitate theoretical analysis and practical implementation, by removing such complications. As an example we can look at SIRENs, which have their layer activations defined as $f_l(x) = \sin(\omega(W_l x + b_l))$. Then, in order to cancel the $\omega$ factor, layers after the first one have their weight initialization follow a uniform distribution with range $\big[-\sqrt{6/n}/\omega,\ \sqrt{6/n}/\omega\big]$, where $n$ is the size of the layer. Unlike the other layers, the first layer is sampled from a uniform distribution with range $[-1/n, 1/n]$. We instead propose a simple sinusoidal network, with the goal of formulating an architecture that mainly amounts to substituting its activation functions with the sine function. We will, however, keep the $\omega$ parameter, since (as we will see in future analyses) it is in fact a useful tool for allowing the network to fit inputs of diverse frequencies. The layer activation equations of our simple sinusoidal network, with parameter $\omega$, are defined as
$$f_1(x) = \sin\big(\omega\,(W_1 x + b_1)\big), \qquad f_l(x) = \sin(W_l x + b_l),\ \ l > 1. \tag{1}$$
Finally, instead of utilizing a uniform initialization as in SIRENs (with different bounds for the first and subsequent layers), we propose initializing all parameters in our simple sinusoidal network using a default Kaiming (He) normal initialization scheme. This choice not only greatly simplifies the initialization scheme of the network, but it also facilitates theoretical analysis of the behavior of the network under the NTK framework, as we will see in Section 4. A minimal code sketch of this architecture is given below.

Analysis of the initialization scheme. The initialization scheme proposed above differs from the one implemented in SIRENs. We will now show that this particular choice of initialization distribution preserves the variance of the originally proposed SIREN initialization distribution. As a consequence, the original theoretical justifications for its initialization scheme still hold under this activation, namely that the distributions of activations across layers are stable, well-behaved and shift-invariant. Due to space constraints, proofs are presented in Appendix A. Moreover, we also demonstrate empirically that these properties are maintained in practice.

Lemma 1. Given any $c$, for $X \sim \mathcal{N}\!\left(0, \tfrac{1}{3}c^2\right)$ and $Y \sim U(-c, c)$, we have $\mathrm{Var}[X] = \mathrm{Var}[Y] = \tfrac{1}{3}c^2$.

This simple lemma relates to Lemma 1.7 in Sitzmann et al. (2020), showing that the initialization we propose here has the same variance as the one proposed for SIRENs. Using this result we can translate the result from the main Theorem 1.8 from Sitzmann et al. (2020), which claims that the SIREN initialization indeed has the desired properties, to our proposed initialization (see footnote 1): For a uniform input in $[-1, 1]$, the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance $2/n$, where $n$ is a layer's fan-in.

Empirical evaluation of initialization scheme. To empirically demonstrate that the proposed simple initialization scheme preserves the properties of the SIREN initialization scheme, we perform the same analysis performed by Sitzmann et al. (2020). We observe that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers.
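For concreteness, the following is a minimal PyTorch sketch of the simple sinusoidal network described above: a plain MLP with sine activations, the ω factor applied only to the first layer, and Kaiming (He) normal initialization everywhere. The layer count, the zero bias initialization, and the default kaiming_normal_ settings (which give weight variance 2/fan_in) are illustrative choices of this sketch rather than a definitive reference implementation.

```python
import torch
import torch.nn as nn

class SimpleSinusoidalNetwork(nn.Module):
    """MLP with sine activations; omega scales only the first layer, as in Eq. (1)."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_hidden=5, omega=32.0):
        super().__init__()
        self.omega = omega
        dims = [in_dim] + [hidden_dim] * n_hidden
        self.hidden = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(n_hidden)]
        )
        self.out = nn.Linear(hidden_dim, out_dim)
        for layer in list(self.hidden) + [self.out]:
            nn.init.kaiming_normal_(layer.weight)  # He normal: weight variance 2 / fan_in
            nn.init.zeros_(layer.bias)             # bias init is an assumption of this sketch

    def forward(self, x):
        x = torch.sin(self.omega * self.hidden[0](x))  # first layer carries the omega factor
        for layer in self.hidden[1:]:
            x = torch.sin(layer(x))
        return self.out(x)

# Example: an implicit image model mapping (x, y) coordinates to intensity values.
model = SimpleSinusoidalNetwork(in_dim=2, hidden_dim=256, out_dim=1, omega=32.0)
```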
These results are reported in detail in the Appendix B. 3.1 COMPARISON TO SIREN In order to demonstrate our simplified sinusoidal network has comparable performance to a standard SIREN, in this section we reproduce the main results from Sitzmann et al. (2020). Table 1 compiles the results for all experiments. In order to be fair, we compare the simplified sinusoidal network proposed in this chapter with both the results directly reported in Sitzmann et al. (2020), and our own reproduction of the SIREN results (using the same parameters and settings as the original). We can see from the numbers reported in the table that the performance of the simple sinusoidal network proposed in this chapter matches the performance of the SIREN in all cases, in fact surpassing it in most of the experiments. Qualitative results are presented in Appendix C. It is important to note that this is not a favorable setting for simple sinusoidal networks, given that the training durations were very short. The SIREN favors quickly converging to a solution, though it does not have as strong asymptotic behavior. This effect is likely due to the multiplicative factor applied to later layers described in Section 3. We observe that indeed in almost all cases we can compensate for this effect by simply increasing the learning rate in the Adam optimizer (Kingma & Ba, 2014). Finally, we observe that besides being able to surpass the performance of SIREN in most cases in a short training regimen, the simple sinusoidal network performs even more strongly with longer training. To demonstrate this, we repeated some experiments from above, but with longer training durations. These results are shown in Table 4 in Appendix C. 4 NEURAL TANGENT KERNEL ANALYSIS In the following we derive the NTK for sinusoidal networks. This analysis will show us that the sinusoidal networks NTK is approximately a low-pass filter, with its bandwidth directly defined by ω. We support these findings with an empirical analysis as well in the following section. Finally, we demonstrate how the insights from the NTK can be leveraged to properly “tune” sinusoidal networks to the spectrum of the desired signal. Full derivations and extensive, detailed analysis are left to Appendix D. The NTK for a simple sinusoidal network with a single hidden layer is presented in the theorem below. The NTK for siren with 1 and 6 hidden layers are shown in Figure 1. Theorem 2. Shallow SSN NTK. For a simple sinusoidal network with one hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. 1We note that despite being named Theorem 1.8 in Sitzmann et al. (2020), this result is not fully formal, due to the Gaussian distribution being approximated without a formal analysis of this approximation. Additionally, a CLT result is employed which assumes infinite width, which is not applicable in this context. We thus refrain from calling our equivalent result a theorem. Nevertheless, to the extent that the argument is applicable, it would still hold for our proposed initialization, due to its dependence solely on the variance demonstrated in Lemma 1 above. We can see that for values of ω > 2, the second term quickly vanishes due to the e−2ω 2 factor. This leaves us with only the first term, which has a Gaussian form. 
Due to the linear scaling term xT x̃, this is only approximately Gaussian, but the approximation improves as ω increases. We can thus observe that this kernel approximates a Gaussian kernel, which is a low-pass filter, with its bandwidth defined by ω. Figure 1 presents visualizations for NTKs for the simple sinusoidal network, compared to a (scaled) pure Gaussian with variance ω−2, showing there is a close match between the two. If we write out the NTK for networks with more than one hidden layer, it quickly becomes un-interpretable due to the recursive nature of the NTK definition (see Appendix D). However, as shown empirically in Figure 1, these kernels are still approximated by Gaussians with variance ω−2. We also observe that the NTK for a SIREN with a single hidden layer is analogous, but with a sinc form, which is also a low-pass filter. Theorem 3. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. For deeper SIREN networks, the kernels defined by the later layers are in fact Gaussian too, as discussed in Appendix D. This leads to an NTK that is approximated by a product of a sinc function and a Gaussian. These SIREN kernels are also presented in Figure 1. 5 EMPIRICAL ANALYSIS As shown above, neural tangent kernel theory suggests that sinusoidal networks work as low-pass filters, with their bandwidth controlled by the parameter ω. In this section, we demonstrate empirically that we can observe this predicted behavior even in real sinusoidal networks. For this experiment, we generate a 512× 512 monochromatic image by super-imposing two orthogonal sinusoidal signals, each consisting of a single frequency, f(x, y) = cos(128πx) + cos(32πy). This function is sampled in the domain [−1, 1]2 to generate the image on the left of Figure 2. To demonstrate what we can expect from applying low-pass filters of different bandwidths to this signal, we perform a discrete Fourier transform (DFT), cut off frequencies above a certain value, and perform an inverse transform to recover the (filtered) image. The MSE of the reconstruction, as a function of the cutoff frequency, is shown in Figure 3. We can see that due to the simple nature of the signal, containing only two frequencies, there are only three loss levels. If indeed the NTK analysis is correct and sinusoidal networks act as low-pass filters, with bandwidth controlled by ω, we should be able to observe similar behavior with sinusoidal networks with different ω values. We plot the final training loss and training curves for sinusoidal networks with different ω in Figure 3. We can observe, again, that there are three consistent loss levels following the magnitude of the ω parameter, in line with the intuition that the sinusoidal network is working as a low-pass filter. This is also observable in Figure 2, where we see example reconstructions for networks of various ω values after training. However, unlike with the DFT low-pass filter (which does not involve any learning), we see in Figure 3 that during training some sinusoidal networks shift from one loss level to a lower one. This demonstrates that sinusoidal networks differ from true low-pass filters in that their weights can change, which implies that the bandwidth defined by ω also changes with learning. 
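As a minimal, self-contained version of the low-pass-filter baseline described above, the sketch below synthesizes the two-frequency test image and measures the reconstruction error after discarding DFT coefficients above a cutoff. Frequencies are counted in cycles over the full image, matching the convention in the text; the radial cutoff and the specific cutoff values are illustrative choices.

```python
import numpy as np

# Two-frequency test image on [-1, 1]^2, sampled at 512 x 512: cos(128*pi*x) + cos(32*pi*y).
n = 512
coords = np.linspace(-1, 1, n, endpoint=False)
x, y = np.meshgrid(coords, coords, indexing="ij")
image = np.cos(128 * np.pi * x) + np.cos(32 * np.pi * y)

def lowpass_mse(img, cutoff):
    """MSE after zeroing all DFT coefficients above `cutoff` (cycles per image)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    freqs = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=1.0 / img.shape[0]))
    fx, fy = np.meshgrid(freqs, freqs, indexing="ij")
    spectrum[np.sqrt(fx**2 + fy**2) > cutoff] = 0.0
    recon = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return float(np.mean((img - recon) ** 2))

# Cutoffs below 32, between 32 and 128, and above 128 reproduce the three loss levels.
for cutoff in (16, 64, 256):
    print(cutoff, lowpass_mse(image, cutoff))
```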
We know the weights W1 in the first layer of a sinusoidal network, given by f1(x) = sin ( ω ·WT1 x+ b1 ) , will change with training. Empirically, we observed that the spectral norm of W1 increases throughout training for small ω values. We can interpret that as the overall magnitude of the term ω ·WT1 x increasing, which is functionally equivalent to an increase in ω itself. In Figure 3, we observe that sinusoidal networks with smaller values of ω take a longer time to achieve a lower loss (if at all). Intuitively, this happens because, due to the effect described above, lower ω values require a larger increase in magnitude by the weights W1. Given that all networks were trained with the same learning rate, the ones with a smaller ω require their weights to move a longer distance, and thus take more training steps to achieve a lower loss. 6 TUNING ω As shown in the previous section, though the bandwidth of a network can change throughout training, the choice of ω still influences how easily and quickly (if at all) it can learn a given signal. The value of the ω parameter is thus crucial for the learning of the network. Despite this fact, in SIRENs, for example, this value is not adjusted for each task (except for the audio fitting experiments), and is simply set empirically to an arbitrary value. In this section, we seek to justify a proper initialization for this parameter, such that it can be chosen appropriately for each given task. Moreover, it is often not the case that we simply want to fit only the exact training samples but instead want to find a good interpolation (i.e., generalize well). Setting ω too high, and thus allowing the network to model frequencies that are much larger than the ones present in the actual signal is likely to cause overfitting. This is demonstrated empirically in Figure 4. Consequently, we want instead to tune the network to the highest frequency present in the signal. However, we do not always have the knowledge of what is the value of the highest frequency in the true underlying signal of interest. Moreover, we have also observed that, since the network learns and its weights change in magnitude, that value in fact changes with training. Therefore, the most we can hope for is to have a good heuristic to guide the choice of ω. Nevertheless, having a reasonable guess for ω is also likely sufficient for good performance, precisely due to the ability of the network to adapt during training and compensate for a possibly slightly suboptimal choice. Choosing ω from the Nyquist frequency. One source of empirical information on the relationship between ω and the sinusoidal network’s “learnable frequencies” is the previous section’s empirical analysis. Taking into account the scaling, we can see from Fig. 3 that around ω = 16 the network starts to be able to learn the full signal (freq. 128). We can similarly note that at about ω = 4 the sinusoidal network starts to be able to efficiently learn a signal with frequency 32, but not the one with frequency 128. This scaling suggests a heuristic of setting ω to about 1/8 of the signal’s maximum frequency. For natural signals, such as pictures, it is common for frequencies up to the Nyquist frequency of the discrete sampling to be present. We provide an example for the “camera” image we have utilized so far in Figure 23 in Appendix E, where we can see that the reconstruction loss through a low-pass filter continues to decrease significantly up to the Nyquist frequency for the image resolution. 
In light of this information, analyzing the choices of ω for the experiments in Section 3.1 again suggests that ω should be set around 1/8 of the Nyquist frequency of the signal. These values of ω are summarized in Table 2 in the "Fitting ω" column. For example, the image fitting experiment shows that, for an image of shape 512 × 512 (and thus a Nyquist frequency of 256 for each dimension), this heuristic suggests an ω value of 256/8 = 32, which is the value found to work best empirically through search. We find similar results for the audio fitting experiments. The audio signals used in the audio fitting experiments contained approximately 300,000 and 500,000 points, and thus maximum frequencies of approximately 150,000 and 250,000. This suggests reasonable values for ω of 18,750 and 31,250, which are close to the ones found empirically to work well. In examples such as the video fitting experiments, in which each dimension has a different frequency, it is not completely clear how to pick a single ω to fit all dimensions. This suggests that having independent values of ω for each dimension might be useful for such cases, as discussed in the next section. Finally, when performing the generalization experiments in Section 7, we show that the best-performing ω ended up being half the value of the best ω used in the fitting tasks from Section 3.1. This follows intuitively, since for the generalization task we set apart half the points for training and the other half for testing, thus dividing the maximum possible frequency in the training sample in half, providing further evidence of the relationship between ω and the maximum frequency in the input signal.

Multi-dimensional ω. In many problems, such as the video fitting and PDE problems, not only is the input space multi-dimensional, it also contains time and space dimensions (which are additionally possibly of different shape). This suggests that employing a multi-dimensional ω, specifying different frequencies for each dimension, might be beneficial. In practice, if we employ a scaling factor $\lambda = [\lambda_1\ \lambda_2\ \ldots\ \lambda_d]^T$, we have the first layer of the sinusoidal network given by
$$f_1(x) = \sin\big(\omega\,(W_1(\lambda \odot x) + b_1)\big) = \sin\big(W_1(\Omega \odot x) + \omega b_1\big), \tag{2}$$
where $\Omega = [\lambda_1\omega\ \lambda_2\omega\ \ldots\ \lambda_d\omega]^T$ works as a multi-dimensional ω. In the following experiments, we apply this approach to three-dimensional problems, in which we have time and differently shaped space domains, namely the video fitting and physics-informed neural network PDE experiments. For these experiments, we report ω in the form of the (already scaled) Ω vector for simplicity.

Choosing ω from available information. Finally, in many problems we do have some knowledge of the underlying signal that we can leverage, such as in the case of inverse problems. For example, let's say we have velocity fields for a fluid and we are trying to solve for the coupled pressure field and the Reynolds number using a physics-informed neural network (as done in Section 7). In this case, we have access to two components of the solution field. Performing a Fourier transform on the training data we have can reveal the relevant spectrum and inform our choice of ω. If the maximum frequency in the signal is lower than the Nyquist frequency implied by the sampling, this can lead to a more appropriate choice of ω than suggested purely by the sampling.
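A minimal sketch of this spectrum-based heuristic for a one-dimensional signal is given below: estimate the highest significant frequency from the FFT magnitude and set ω to roughly one eighth of it. The 1% magnitude threshold used to decide which frequencies count as significant is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def suggest_omega(signal):
    """Suggest omega as ~1/8 of the highest significant frequency of a 1-D signal.

    Frequencies are measured in cycles over the full sampled domain, matching the
    convention used for the Nyquist-based heuristic in the text.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / len(signal))   # cycles per domain
    significant = freqs[spectrum > 0.01 * spectrum.max()]        # assumed 1% threshold
    f_max = significant.max() if significant.size else freqs.max()
    return f_max / 8.0

# Example: a signal with content up to frequency 128 suggests omega around 16.
t = np.linspace(-1, 1, 512, endpoint=False)
print(suggest_omega(np.cos(128 * np.pi * t) + np.cos(32 * np.pi * t)))
```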
7 EXPERIMENTS In this section, we first perform experiments to demonstrate how the optimal value of ω influences the generalization error of a sinusoidal network, following the discussion in Section 6. After that, we demonstrate that sinusoidal networks with properly tuned ω values outperform traditional physicsinformed neural networks in classic PDE tasks. 7.1 EVALUATING GENERALIZATION We now evaluate the simple sinusoidal network generalization capabilities. To do this, in all experiments in this section we segment the input signal into training and test sets using a checkerboard pattern – along all axis-aligned directions, points alternate between belonging to train and test set. We perform audio, image and video fitting experiments. When performing these experiments, we search for the best performing ω value for generalization (defined as performance on the held-out points). We report the best values on Table 2. We observe that, as expected from the discussion in Section 6, the best performing ω values follow the heuristic discussed above, and are in fact half the best-performing value found in the previous fitting experiments from Section 3.1, confirming our expectation. This is also demonstrated in the plot in Figure 4. Using a higher ω leads to overfitting and poor generalization outside the training points. This is demonstrated in Figure 4, in which we can see that choosing an appropriate ω value from the heuristics described previously leads to a good fit and interpolation. Setting ω too high leads to interpolation artifacts, due to overfitting of spurious high-frequency components. For the video signals, which have different size along each axis, we employ a multi-dimensional ω. We scale each dimension of ω proportional to the size of the input signal along the corresponding axis. 7.2 SOLVING DIFFERENTIAL EQUATIONS Finally, we apply our analysis to physics-informed learning. We compare the performance of simple sinusoidal networks to the tanh networks that are commonly used for these tasks. Results are summarized in Table 3. Details for the Schrödinger and Helmholtz experiments are presented in Appendix E. 7.2.1 BURGERS EQUATION (IDENTIFICATION) This experiment reproduces the Burgers equation identification experiment from Raissi et al. (2019a). Here we are identifying the parameters λ1 and λ2 of a 1D Burgers equation, ut+λ1uux−λ2uxx = 0, given a known solution field. The ground truth value of the parameters are λ1 = 1.0 and λ2 = 0.01/π. In order to find a good value for ω, we perform a low-pass reconstruction of the solution as before. We can observe in Figure 5 that the solution does not have high bandwidth, with most of the loss being minimized with only the lower half of the spectrum. Note that the sampling performed for the training data (N = 2, 000) is sufficient to support such frequencies. This suggests an ω value in the range 8− 10. Indeed, we observe that ω = 10 gives the best identification of the desired parameters, with errors of 0.0071% and 0.0507% for λ1 and λ2 respectively, against errors of 0.0521% and 0.4522% of the baseline. This value of ω also achieves the lowest reconstruction loss against the known solution, with an MSE of 8.034 · 10−4. Figure 5 shows the reconstructed solution using the identified parameters. 7.2.2 NAVIER-STOKES (IDENTIFICATION) This experiment reproduces the Navier-Stokes identification experiment from Raissi et al. (2019a). 
In this experiment, we are trying to identify the parameters λ1, λ2 and the pressure field p of the 2D Navier-Stokes equations, given by $\partial u/\partial t + \lambda_1 u \cdot \nabla u = -\nabla p + \lambda_2 \nabla^2 u$, for known velocity fields u and v. The ground truth values of the parameters are λ1 = 1.0 and λ2 = 0.01. Unlike the 1D Burgers case, in this case the number of points sampled for the training set (N = 5,000) is not high compared to the size of the full solution volume, and is thus the limiting factor for the bandwidth of the input signal. Given the random sampling of points from the full solution, the generalized sampling theorem applies. The original solution has dimensions of 100 × 50 × 200. With the 5,000 randomly sampled points, the average sampling rate per dimension is approximately 17, corresponding to a Nyquist frequency of approximately 8.5. Furthermore, given the multi-dimensional nature of this problem, with both spatial and temporal axes, we employ an independent scaling of ω for each dimension. The analysis above suggests an average ω ≈ 1, with the dimensions of the problem suggesting scaling factors of $[0.5\ 1\ 2]^T$. Indeed, we observe that $\Omega = [0.3\ 0.6\ 1.2]^T$ gives the best results, with errors of 0.0038% and 1.782% for λ1 and λ2 respectively, against errors of 0.0046% and 2.093% for the baseline. Figure 6 shows the identified pressure field. Note that given the nature of the problem, this field can only be identified up to a constant. (Correct Burgers PDE: $u_t + u u_x - 0.0031831\, u_{xx} = 0$; identified from clean data: $u_t + 1.00007\, u u_x - 0.0031847\, u_{xx} = 0$. Identified Navier-Stokes PDE from clean data: $u_t + 1.000\,(u u_x + v u_y) = -p_x + 0.01018\,(u_{xx} + u_{yy})$ and $v_t + 1.000\,(u v_x + v v_y) = -p_y + 0.01018\,(v_{xx} + v_{yy})$.)

8 CONCLUSION

In this work, we have presented a simplified formulation for sinusoidal networks. Analysis of this architecture from the neural tangent kernel perspective, combined with empirical results, reveals that the kernel for sinusoidal networks corresponds to a low-pass filter with adjustable bandwidth. We leverage this information in order to initialize these networks appropriately, choosing their bandwidth such that it is tuned to the signal being learned. Employing this strategy, we demonstrated improved results in both implicit modelling and physics-informed learning tasks.

A SIMPLE SINUSOIDAL NETWORK INITIALIZATION

We present here the proofs for the initialization scheme of the simple sinusoidal network from Section 3.

Lemma 4. Given any $c$, for $X \sim \mathcal{N}\!\left(0, \tfrac{1}{3}c^2\right)$ and $Y \sim U(-c, c)$, we have $\mathrm{Var}[X] = \mathrm{Var}[Y] = \tfrac{1}{3}c^2$.

Proof. By definition, $\mathrm{Var}[X] = \sigma^2 = \tfrac{1}{3}c^2$. For $Y$, we know that the variance of a uniformly distributed random variable with bounds $[a, b]$ is given by $\tfrac{1}{12}(b-a)^2$. Thus, $\mathrm{Var}[Y] = \tfrac{1}{12}(2c)^2 = \tfrac{1}{3}c^2$.

Theorem 5. For a uniform input in $[-1, 1]$, the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance $2/n$, where $n$ is the layer's fan-in.

Proof. The proof follows exactly the proof for Theorem 1.8 in Sitzmann et al.
(2020), only using Lemma 4 when necessary to show that the initialization proposed here has the same variance necessary for the proof to follow.

B EMPIRICAL EVALUATION OF SSN INITIALIZATION

Here we report an empirical analysis of the initialization scheme of simple sinusoidal networks, referenced in Section 3. For this analysis we use a sinusoidal MLP with 6 hidden layers of 2048 units, and single-dimensional input and output. This MLP is initialized using the simplified scheme described above. For testing, $2^8$ equally spaced inputs from the range $[-1, 1]$ are passed through the network. We then plot the histogram of activations after each linear operation (before the sine non-linearity) and after each sine non-linearity. To match the original plot, we also plot the 1D Fast Fourier Transform of all activations in a layer, and the gradient of this output with respect to each activation. These results are presented in Figure 8. The main conclusion from this figure is that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. We also reproduced the same result up to 50 layers.

We then perform an additional experiment in which the exact same setup as above is employed, yet the 1D inputs are shifted by a large value (i.e., $x \to x + 1000$). We then show the same plot as before in Figure 9. We can see that there is essentially no change from the previous plot, which demonstrates the sinusoidal network's shift-invariance in the input space, one of its important desirable properties, as discussed previously.

C EXPERIMENTAL DETAILS FOR COMPARISON TO SIREN

Below, we present qualitative results and describe experimental details for each experiment. As these are a reproduction of the experiments in Sitzmann et al. (2020), we refer to their details as well for further information.

C.1 IMAGE

In the image fitting experiment, we treat an image as a function from the spatial domain to color values, (x, y) → (r, g, b). In the case of a monochromatic image, used here, this function maps instead to one-dimensional intensity values. We try to learn a function f : R² → R, parametrized as a sinusoidal network, in order to fit such an image. Figure 7 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal network. The gradient and Laplacian of the learned function are also presented, demonstrating that higher-order derivatives are also learned appropriately.

Table 4: Comparison of the simple sinusoidal network and SIREN on some experiments, with a longer training duration. The specific durations are described below in the details for each experiment. We can see that the simple sinusoidal network has stronger asymptotic performance. Values above the horizontal center line are peak signal-to-noise ratio (PSNR), values below are mean squared error (MSE). †Audio experiments utilized a different learning rate for the first layer, see the full description below for details.

Experiment            Simple Sinusoidal Network    SIREN [ours]
Image                 54.70                        52.43
Poisson (Gradient)    39.51                        38.70
Poisson (Laplacian)   22.09                        20.82
Video (cat)           34.64                        32.26
Video (bikes)         37.71                        34.07
Audio (Bach)†         5.66 · 10−7                  3.02 · 10−6
Audio (counting)†     4.02 · 10−5                  6.33 · 10−5

Figure 7 (panels: image, gradient, Laplacian). Top row: ground truth image. Bottom: reconstructed with the sinusoidal network.

Training parameters. The input image used is 512 × 512, mapped to an input domain [−1, 1]².
The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 32. The Adam optimizer is used with a learning rate of 3 · 10−3, trained for 10,000 steps in the short duration training results and for 20,000 steps in the long duration training results.

C.2 POISSON

These tasks are similar to the image fitting experiment, but instead of supervising directly on the ground truth image, the fitted sinusoidal network is supervised on its derivatives, constituting a Poisson problem. We perform the experiment by supervising both on the input image's gradient and Laplacian, and report the reconstruction of the image and its gradients in each case. Figure 10 shows the image used in this experiment, and the reconstructions from the fitted sinusoidal networks. Since reconstruction from derivatives can only be correct up to a scaling factor, we scale the reconstructions for visualization. As in the original SIREN results, we can observe that the reconstruction from the gradient is of higher quality than the one from the Laplacian.

Training parameters. The input image used is of size 256 × 256, mapped from an input domain [−1, 1]². The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For both experiments, the parameter ω is set to 32 and the Adam optimizer is used. For the gradient experiments, in short and long training results, a learning rate of 1 · 10−4 is used, trained for 10,000 and 20,000 steps respectively. For the Laplace experiments, in short and long training results, a learning rate of 1 · 10−3 is used, trained for 10,000 and 20,000 steps respectively.

C.3 VIDEO

These tasks are similar to the image fitting experiment, but we instead fit a video, which also has a temporal input dimension, (t, x, y) → (r, g, b). We learn a function f : R³ → R³, parametrized as a sinusoidal network, in order to fit such a video. Figures 11 and 12 show sampled frames from the videos used in this experiment, and their respective reconstructions from the fitted sinusoidal networks.

Training parameters. The cat video contains 300 frames of size 512 × 512. The bikes video contains 250 frames of size 272 × 640. These signals are fitted from the input domain [−1, 1]³. The sinusoidal network used is a 5-layer MLP with hidden size 1024, following the proposed initialization scheme above. The parameter ω is set to 8. The Adam optimizer is used, with a learning rate of 3 · 10−4, trained for 100,000 steps in the short duration training results and for 200,000 steps in the long duration training results.

C.4 AUDIO

In the audio experiments, we fit an audio signal in the temporal domain as a waveform t → w. We learn a function f : R → R, parametrized as a sinusoidal network, in order to fit the audio. Figure 13 shows the waveforms for the input audios and the reconstructed audios from the fitted sinusoidal network. In this experiment, we utilized a lower learning rate for the first layer compared to the rest of the network. This was used to compensate for the very large ω used (in the 15,000−30,000 range, compared to the 10−30 range for all other experiments). One might argue that this is re-introducing complexity, counteracting the purpose of the proposed simplification.
However, we would claim (1) that this is only limited to cases with extremely high ω, which was not present in any case except for fitting audio waves, and (2) that adjusting the learning rate for an individual layer is still an approach that is simpler and more in line with standard machine learning practice compared to multiplying all layers by a scaling factor and then adjusting their initialization variance by the same amount. Training parameters. Both audios use a sampling rate of 44100Hz. The Bach audio is 7s long and the counting audio is approximately 12s long. These signals are fitted from the input domain [−1, 1]. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For short and long training results, training is performed for 5, 000 and 50, 000 steps respectively. For the Bach experiment, the parameter ω is set to 15, 000. The Adam optimizer is used, with a general learning rate of 3 · 10−3. A separate learning rate of 1 · 10−6 is used for the first layer to stabilize training due to the large ω value. For the counting experiment, the parameter ω is set to 32, 000. The Adam optimizer is used, with a general learning rate of 1 · 10−3 and a first layer learning rate of 1 · 10−6. C.5 HELMHOLTZ EQUATION In this experiment we solve for the unknown wavefield Φ : R2 → R2 in the Helmholtz equation (∆ + k2)Φ(x) = −f(x), (3) with known wavenumber k and source function f (a Gaussian with µ = 0 and σ2 = 10−4). We solve this differential equation using a sinusoidal network supervised with the physics-informed loss ∫ Ω ∥(∆ + k2)Φ(x) + f(x)∥1dx, evaluated at random points sampled uniformly in the domain Ω = [−1, 1]2. Figure 14 shows the real and imaginary components of the ground truth solution to the differential equation and the solution recovered by the fitted sinusoidal network. Training parameters. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 16. The Adam optimizer is used, with a learning rate of 3 · 10−4 trained for 50, 000 steps. C.6 SIGNED DISTANCE FUNCTION (SDF) In these tasks we learn a 3D signed distance function. We learn a function f : R3 → R, parametrized as a sinusoidal network, to model a signed distance function representing a 3D scene. This function is supervised indirectly from point cloud data of the scene. Figures 16 and 15 show 3D renderings of the volumes inferred from the learned SDFs. Training parameters. The statue point cloud contains 4, 999, 996 points. The room point cloud contains 10, 250, 688 points. These signals are fitted from the input domain [−1, 1]3. The sinusoidal network used is a 5-layer MLP with hidden size 256 for the statue and 1024 for the room. The parameter ω is set to 4. The Adam optimizer is used, with a learning rate of 8 · 10−4 and a batch size of 1400. All models are trained for 190, 000 steps for the statue experiment and for 410, 000 steps for the room experiment. D NEURAL TANGENT KERNEL ANALYSIS AND PROOFS D.1 PRELIMINARIES In order to perform the subsequent NTK analysis, we first need to formalize definitions for simple sinusoidal networks and SIRENs. The definitions used here adhere to the common NTK analysis practices, and thus differ slightly from practical implementation. Definition 1. 
For the purposes of the following proofs, a (sinusoidal) fully-connected neural network with L hidden layers that takes as input x ∈ Rn0 , is defined as the function f (L) : Rn0 → RnL+1 , recursively given by f (0)(x) = ω ( W (0)x+ b(0) ) , f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L), where ω ∈ R. The parameters { W (j) }L j=0 have shape nj+1×nj and all have each element sampled independently either from N (0, 1) (for simple sinusoidal networks) or from U(−c, c) with some bound c ∈ R (for SIRENs). The { b(j) }L j=0 are nj+1-dimensional vectors sampled independently from N (0, Inj+1). With this definition, we now state the general formulation of the NTK, which applies in general to fully-connected networks with Lipschitz non-linearities, and consequently in particular to the sinusoidal networks studied here as well. Let us first define the NNGP, which has covariance recursively defined by Σ(L+1)(x, x̃) = Ef∼N (0,Σ(L)) [σ(f(x))σ(f(x̃))] + β2, with base case Σ(1)(x, x̃) = 1n0x T x̃+ β2, and where β gives the variance of the bias terms in the neural network layers (Neal, 1994; Lee et al., 2018). Now the NTK is given by the following theorem. Theorem 6. For a neural network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, the neural tangent kernel (NTK) of f (L) converges in probability to the deterministic kernel Θ(L) defined recursively as Θ(0)(x, x̃) = Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) , Θ(L)(x, x̃) = Θ(L−1)(x, x̃)Σ̇(L)(x, x̃) + Σ(L)(x, x̃), where { Σ(l) }L l=0 are the neural network Gaussian processes (NNGPs) corresponding to each f (l) and Σ̇(l)(x, x̃) = E(u,v)∼Σ(l−1)(x,x̃) [cos(u)cos(v)] . Proof. This is a standard general NTK theorem, showing that the limiting kernel recursively in terms of the network’s NNGPs and the previous layer’s NTK. For brevity we omit the proof here and refer the reader to, for example, Jacot et al. (2020). The only difference is for the base case Σ(0), due to the fact that we have an additional ω parameter in the first layer. It is simple to see that the neural network with 0 hidden layers, i.e. the linear model ω ( W (0)x+ b(0) ) will lead to the same Gaussian process covariance kernel as the original proof, xT x̃+ 1, only adjusted by the additional variance factor ω2. Theorem 6 demonstrates that the NTK can be constructed as a recursive function of the NTK of previous layers and the network’s NNGPs. In the following sections we will derive the NNGPs for the SIREN and the simple sinusoidal network directly. We will then use these NNGPs with Theorem 6 to derive their NTKs as well. To finalize this preliminary section, we also provide two propositions that will be useful in following proofs in this section. Proposition 7. For any ω ∈ R, x ∈ Rd, Ew∼N (0,Id) [ eiω(w T x) ] = e− ω2 2 ∥x∥ 2 2 Proof. Omitting w ∼ N (0, Id) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 iwjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− w2j 2 dwj . Completing the square, we get d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− 1 2w 2 j dwj = d∏ j=1 1√ 2π ∫ ∞ −∞ e 1 2 (i 2ω2x2j−i 2ω2x2j+2ixjwj−w 2 j)dwj = d∏ j=1 e 1 2 i 2ω2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (i 2ω2x2j−2iω 2xjwj+w 2 j)dwj = d∏ j=1 e− 1 2ω 2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (wj−iωxj) 2 dwj . 
Since the integral and its preceding factor constitute a Gaussian pdf, they integrate to 1, leaving the final result d∏ j=1 e− ω2 2 x 2 j = e− ω2 2 ∑d j=1 x 2 j = e− ω2 2 ∥xj∥ 2 2 . Proposition 8. For any c, ω ∈ R, x ∈ Rd, Ew∼Ud(−c,c) [ eiω(w T x) ] = d∏ j=1 sinc(c ωxj). Proof. Omitting w ∼ Ud(−c, c) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 wjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 ∫ c −c eiω wjxj 1 2c dwj = d∏ j=1 1 2c ∫ c −c eiω wjxjdwj . Now, focusing on the integral above, we have∫ c −c eiω wjxjdwj = ∫ c −c cos(ω wjxj)dwj + i ∫ c −c sin(ω wjxj)dwj = sin(ω wjxj) ωxj ∣∣∣∣∣ c −c − icos(ω wjxj) ωxj ∣∣∣∣∣ c −c = 2sin(c ωxj) ωxj . Finally, plugging this back into the product above, we get d∏ j=1 1 2c ∫ c −c eiω wjxjdwj = d∏ j=1 1 2c 2sin(c ωxj) ωxj = d∏ j=1 sinc(c ωxj). D.2 SHALLOW SINUSOIDAL NETWORKS For the next few proofs, we will be focusing on neural networks with a single hidden layer, i.e. L = 1. Expanding the definition above, such a network is given by f (1)(x) = W (1) 1 √ n1 sin ( ω ( W (0)x+ b(0) )) + b(1). (4) The advantage of analysing such shallow networks is that their NNGPs and NTKs have formulations that are intuitively interpretable, providing insight into their characteristics. We later extend these derivations to networks of arbitrary depth. D.2.1 SIREN First, let us derive the NNGP for a SIREN with a single hidden layer. Theorem 9. Shallow SIREN NNGP. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. We first show that despite the usage of a uniform distribution for the weights, this initialization scheme still leads to an NNGP. In this initial part, we follow an approach similar to Lee et al. (2018), with the modifications necessary for this conclusion to hold. From our neural network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is uniformly distributed with finite variance and zero mean, the f (1)(x)j become normally distributed with mean zero as n1 → ∞ by the (Lyapunov) central limit theorem (CLT). Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Now that we have concluded that this initialization scheme still entails an NNGP, we have that its covariance is determined by σ2WΣ (1) + σ2b = c2 3 Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the law of large number (LLN) the limit above converges to Ew∼Un0 (−c,c), b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. 
Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Propositions 7 and 8 to each expectation above and noting that the sinc function is even, we are left with − 1 4 2 n0∏ j=1 sinc(c ω (xj + x̃j))− 2e−2ω 2 n0∏ j=1 sinc(c ω (xj − x̃j)) = 1 2 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) . For simplicity, if we take the case of a one-dimensional output (e.g., an audio signal or a monochromatic image) with the standard SIREN setting of c = √ 6, the NNGP reduces to Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) − e−2ω 2 sinc (√ 6ω (x+ x̃) ) + 1. We can already notice that this kernel is composed of sinc functions. The sinc function is the ideal low-pass filter. For any value of ω > 1, we can see the the first term in the expression above will completely dominate the expression, due to the exponential e−2ω 2 factor. In practice, ω is commonly set to values at least one order of magnitude above 1, if not multiple orders of magnitude above that in certain cases (e.g., high frequency audio signals). This leaves us with simply Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) + 1. Notice that not only does our kernel reduce to the sinc function, but it also reduces to a function solely of ∆x = x − x̃. This agrees with the shift-invariant property we observe in SIRENs, since the NNGP is dependent only on ∆x, but not on the particular values of x and x̃. Notice also that ω defines the bandwidth of the sinc function, thus determining the maximum frequencies it allows to pass. The general sinc form and the shift-invariance of this kernel can be visualized in Figure 17, along with the effect of varying ω on the bandwidth of the NNGP kernel. We can see that the NTK of the shallow SIREN, derived below, maintains the same relevant characteristics as the NNGP. We first derive Σ̇ in the Lemma below. Lemma 10. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. The proof follows the same pattern as Theorem 9, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Now we can derive the NTK for the shallow SIREN. Corollary 11. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 ))c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 + c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 9 and Lemma 10 to Theorem 6. Though the expressions become more complex due to the formulation of the NTK, we can see that many of the same properties from the NNGP still apply. Again, for reasonable values of ω, the term with the exponential factor e−2ω 2 will be of negligible relative magnitude. 
With c = √ 6, this leaves us with ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc (√ 6ω (xj − x̃j) ) + ω2 ( xT x̃+ 1 ) + 1, which is of the same form as the NNGP, with some additional linear terms xT x̃. Though these linear terms break the pure shift-invariance, we still have a strong diagonal and the sinc form with bandwidth determined by ω, as can be seen in Figure 18. Similarly to the NNGP, the SIREN NTK suggests that training a shallow SIREN is approximately equivalent to performing kernel regression with a sinc kernel, a low-pass filter, with its bandwidth defined by ω. This agrees intuitively with the experimental observations from the paper that in order to fit higher frequencies signals, a larger ω is required. D.2.2 SIMPLE SINUSOIDAL NETWORK Just as we did in the last section, we will now first derive the NNGP for a simple sinusoidal network, and then use that in order to obtain its NTK as well. As we will see, the Gaussian initialization employed in the SSN has the benefit of rendering the derivations cleaner, while retaining the relevant properties from the SIREN initialization. We observe that a similar derivation of this NNGP (using cosine functions instead of sine) can be found in Pearce et al. (2019), with a focus on a Bayesian perspective for the result. Theorem 12. Shallow SSN NNGP. For a single hidden layer simple sinusoidal network f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. We again initially follow an approach similar to the one described in Lee et al. (2018). From our sinusoidal network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is Gaussian with finite variance and zero mean, the f (1)(x)j are also normally distributed with mean zero by the CLT. Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Therefore, its covariance is determined by σ2WΣ (1) + σ2b = Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the LLN the limit above converges to Ew∼N (0,In0 ),b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Proposition 7 to each expectation above, it becomes −1 4 ( e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 − e−ω 2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) . We an once again observe that, for practical values of ω, the NNGP simplifies to 1 2 e− ω2 2 ∥x−x̃∥ 2 2 + 1. 
This takes the form of a Gaussian kernel, which is also a low-pass filter, with its bandwidth determined by ω. We note that, similar to the c = √ 6 setting from SIRENs, in practice a scaling factor of √ 2 is applied to the normal activations, as described in Section 3, which cancels out the 1/2 factors from the kernels, preserving the variance magnitude. Moreover, we can also observe again that the kernel is a function solely of ∆x, in agreement with the shift invariance that is also observed in simple sinusoidal networks. Visualizations of this NNGP are provided in Figure 19. We will now proceed to derive the NTK, which requires first obtaining Σ̇. Lemma 13. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ̇(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. The proof follows the same pattern as Theorem 12, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Corollary 14. Shallow SSN NTK. For a simple sinusoidal network with a single hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 )) [1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 ] + 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 12 and Lemma 13 to Theorem 6. We again note the vanishing factor e−2ω 2 , which leaves us with 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 + ω2 ( xT x̃+ 1 ) + 1. (5) As with the SIREN before, this NTK is still of the same form as its corresponding NNGP. While again we have additional linear terms xT x̃ in the NTK compared to the NNGP, in this case as well the kernel preserves its strong diagonal. It is still close to a Gaussian kernel, with its bandwidth determined directly by ω. We demonstrate this in Figure 20, where the NTK for different values of ω is shown. Additionally, we also plot a pure Gaussian kernel with variance ω2, scaled to match the maximum and minimum values of the NTK. We can observe the NTK kernel closely matches the Gaussian. Moreover, we can also observe that, at x̃ = 0 the maximum value is predicted by k ≈ ω2/2, as expected from the scaling factors in the kernel in Equation 5. This NTK suggests that training a simple sinusoidal network is approximately equivalent to performing kernel regression with a Gaussian kernel, a low-pass filter, with its bandwidth defined by ω. We note that even though this sinusoidal network kernel approximates a Gaussian kernel, an actual Gaussian kernel can be recovered if a combination of sine and cosine activations are employed, as demonstrated in Tsuchida (2020) (Proposition 18). D.3 DEEP SINUSOIDAL NETWORKS We will now look at the full NNGP and NTK for sinusoidal networks of arbitrary depth. As we will see, due to the recursive nature of these kernels, for networks deeper than the ones analyzed in the previous section, their full unrolled expressions quickly become intractable intuitively, especially for the NTK. Nevertheless, these kernels can still provide some insight, into the behavior of their corresponding networks. Moreover, despite their symbolic complexity, we will also demonstrate empirically that the resulting kernels can be approximated by simple Gaussian kernels, even for deep networks. 
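Before moving to the deep case, the claim that for practical ω the shallow SSN NTK of Corollary 14 reduces to the Gaussian-form expression in Equation 5 can be checked numerically. The sketch below is our own illustration and not code from the paper; the function names and the choice ω = 8 are ours.

```python
# Numerical check that the shallow SSN NTK (Corollary 14) is dominated by its
# Gaussian-form approximation (Equation 5) once omega is moderately large.
# Assumptions: NumPy only; function names and omega = 8 are our own choices.
import numpy as np

def ssn_ntk_shallow(x, y, omega):
    """Closed-form Theta^(1)(x, y) from Corollary 14, for vector inputs."""
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    lin = omega**2 * (x @ y + 1.0)                               # omega^2 (x^T y + 1)
    gauss_diff = np.exp(-0.5 * omega**2 * np.sum((x - y) ** 2))  # e^{-omega^2 ||x - y||^2 / 2}
    gauss_sum = np.exp(-0.5 * omega**2 * np.sum((x + y) ** 2)) * np.exp(-2.0 * omega**2)
    return 0.5 * (lin + 1.0) * gauss_diff - 0.5 * (lin - 1.0) * gauss_sum + lin + 1.0

def ssn_ntk_gaussian_form(x, y, omega):
    """Equation 5: the NTK with the negligible e^{-2 omega^2} term dropped."""
    x, y = np.atleast_1d(x).astype(float), np.atleast_1d(y).astype(float)
    lin = omega**2 * (x @ y + 1.0)
    return 0.5 * (lin + 1.0) * np.exp(-0.5 * omega**2 * np.sum((x - y) ** 2)) + lin + 1.0

omega = 8.0
grid = np.linspace(-1.0, 1.0, 9)
max_rel_err = max(
    abs(ssn_ntk_shallow(a, b, omega) - ssn_ntk_gaussian_form(a, b, omega))
    / abs(ssn_ntk_shallow(a, b, omega))
    for a in grid for b in grid
)
print(f"max relative error of the Gaussian-form approximation: {max_rel_err:.2e}")
```

For any ω of the magnitude used in practice, the reported relative error is vanishingly small, confirming that the e^{-2ω²} term can be safely ignored.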
D.3.1 SIMPLE SINUSOIDAL NETWORK As demonstrated in the previous section, simple sinusoidal networks produce simpler NNGP and NTK kernels due to their Gaussian initialization. We thus begin this section by now analyzing SSNs first, starting with their general NNGP. Theorem 15. SSN NNGP. For a simple sinusoidal network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, f (L) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(L)(x, x̃), recursively defined as Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1. Proof. We will proceed by induction on the depth L, demonstrating the NNGP for successive layers as n1, . . . , nL → ∞ sequentially. To demonstrate the base case L = 1, let us rearrange Σ(1) from Theorem 12 in order to express it in terms of inner products, Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 [ e− ω2 2 (x T x−2xT x̃+x̃T x̃) − e− ω2 2 (x T x+2xT x̃+x̃T x̃)e−2ω 2 ] + 1 = 1 2 [ e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]+ω2(xT x̃+1) − e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]−ω2(xT x̃+1) ] + 1. Given the definition of Σ(0), this is equivalent to 1 2 e− 1 2 (Σ (0)(x,x)+Σ(0)(x̃,x̃)) ( eΣ (0)(x,x̃) − e−Σ (0)(x,x̃) ) + 1, which concludes this case. Now given the inductive hypothesis, as n1, . . . , nL−1 → ∞ we have that the first L− 1 layers define a network f (L−1) with NNGP given by Σ(L−1)(x, x̃). Now it is left to show that as nL → ∞, we get the NNGP given by Σ(L). Following the same argument in Theorem 12, the network f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L) constitutes a Gaussian process given the outputs of the previous layer, due to the distributions of W (L) and b(L). Its covariance is given by σ2WΣ (L) + σ2b = Σ (L) + 1, where Σ(L)(x, x̃) = lim nL→∞ [ 1 nL 〈 sin ( f (L−1)(x) ) , sin ( f (L−1)(x̃) )〉] = lim nL→∞ 1 nL nL∑ j=1 sin ( f (L−1)(x) ) j sin ( f (L−1)(x̃) ) j . By inductive hypothesis, f (L−1) is a Gaussian process Σ(L−1)(x, x̃). Thus by the LLN the limit above equals E(u,v)∼N(0,Σ(L−1)(x,x̃)) [sin(u)sin(v)] . Omitting the distribution from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiu − e−iu ) 1 2i ( eiv − e−iv )] = −1 4 [ E [ ei(u+v) ] − E [ ei(u−v) ] − E [ e−i(u−v) ] + E [ e−i(u+v) ]] . Since u and v are jointly Gaussian, p = u+ v and m = u− v are also Gaussian, with mean 0 and variance σ2p = σ 2 u + σ 2 v + 2Cov[u, v] = Σ (L−1)(x, x) + Σ(L−1)(x̃, x̃) + 2Σ(L−1)(x, x̃), σ2m = σ 2 u + σ 2 v − 2Cov[u, v] = Σ(L−1)(x, x) + Σ(L−1)(x̃, x̃)− 2Σ(L−1)(x, x̃). We can now rewriting the expectations in terms of normalized variables −1 4 [ Ez∼N (0,1) [ eiσpz ] − Ez∼N (0,1) [ eiσmz ] − Ez∼N (0,1) [ e−iσmz ] + Ez∼N (0,1) [ e−iσpz ]] . Applying Proposition 7 to each expectation, we get 1 2 [ e− 1 2σ 2 m − e− 12σ 2 p ] = 1 2 [ e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)−2Σ(L−1)(x,x̃)) − e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)+2Σ(L−1)(x,x̃)) ] = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃)) ) Unrolling the definition beyond L = 1 leads to expressions that are difficult to parse. However, without unrolling, we can rearrange the terms in the NNGP above as Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1 = 1 2 [ e− 1 2 (Σ (L−1)(x,x)−2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) − e− 1 2 (Σ (L−1)(x,x)+2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) ] + 1. 
Since the covariance matrix Σ^(L−1) is positive semi-definite, we can observe that the exponent expressions can be reformulated into quadratic forms analogous to the ones in Theorem 12. We can thus observe that the same structure is essentially preserved through the composition of layers, except for the ω factor present in the first layer. Moreover, given this recursive definition, since the NNGP at any given depth L is a function only of the preceding kernels, the resulting kernel will also be shift-invariant. Let us now derive the Σ̇ kernel, required for the NTK.
Lemma 16. For ω ∈ R, Σ̇^(L)(x, x̃) : R^{n0} × R^{n0} → R is given by Σ̇^(L)(x, x̃) = (1/2) e^{−(1/2)(Σ^(L−1)(x,x) + Σ^(L−1)(x̃,x̃))} ( e^{Σ^(L−1)(x,x̃)} + e^{−Σ^(L−1)(x,x̃)} ) + 1.
Proof. The proof follows the same pattern as Theorem 15, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine.
As done in the previous section, it would be simple to now derive the full NTK for a simple sinusoidal network of arbitrary depth by applying Theorem 6 with the NNGP kernels from above. However, there is not much to be gained by writing out the convoluted NTK expression explicitly, beyond what we have already gleaned from the NNGP above. Nevertheless, some insight can be gained from the recursive expression of the NTK itself, as defined in Theorem 6. First, note that, as before, for practical values of ω, Σ̇ ≈ Σ, both converging to simply a single Gaussian kernel. Thus, our NTK recursion becomes Θ^(L)(x, x̃) ≈ ( Θ^(L−1)(x, x̃) + 1 ) Σ^(L)(x, x̃). Now, note that when expanded, this NTK recursion is essentially a product of the Gaussian Σ kernels, Θ^(L)(x, x̃) ≈ ( ( ⋯ ( ( Σ^(0)(x, x̃) + 1 ) Σ^(1)(x, x̃) + 1 ) ⋯ ) Σ^(L−1)(x, x̃) + 1 ) Σ^(L)(x, x̃) = ( ( ⋯ ( ( ω² ( xᵀx̃ + 1 ) + 1 ) Σ^(1)(x, x̃) + 1 ) ⋯ ) Σ^(L−1)(x, x̃) + 1 ) Σ^(L)(x, x̃). (6) We know that the product of two Gaussian kernels is Gaussian, and thus the general form of this expression remains approximately a Gaussian kernel, with its bandwidth still governed by ω.
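To make the recursion above concrete, the following minimal NumPy sketch (our own illustration, not code from the paper; all names are ours) evaluates the depth-L SSN NNGP from Theorem 15 and the approximate NTK recursion Θ^(L) ≈ (Θ^(L−1) + 1) Σ^(L) behind Equation 6, for a single pair of inputs.

```python
# Minimal sketch of the deep SSN kernel recursions (Theorem 15 and Equation 6).
# Assumptions: NumPy only; the practical-omega approximation Sigma_dot ~ Sigma is used,
# so the returned Theta is the approximate NTK, not the exact one.
import numpy as np

def ssn_kernels(x, x_tilde, omega, depth):
    """Return (Sigma^(depth)(x, x_tilde), approximate Theta^(depth)(x, x_tilde))."""
    x, x_tilde = np.asarray(x, dtype=float), np.asarray(x_tilde, dtype=float)

    def sigma0(a, b):                      # Sigma^(0)(a, b) = omega^2 (a^T b + 1)
        return omega**2 * (a @ b + 1.0)

    def next_sigma(saa, sbb, sab):         # Theorem 15 recursion, written in an overflow-safe form
        return 0.5 * (np.exp(sab - 0.5 * (saa + sbb)) - np.exp(-sab - 0.5 * (saa + sbb))) + 1.0

    sxx, stt, sxt = sigma0(x, x), sigma0(x_tilde, x_tilde), sigma0(x, x_tilde)
    theta = sxt                            # Theta^(0) = Sigma^(0)
    for _ in range(depth):
        sxx, stt, sxt = (next_sigma(sxx, sxx, sxx),
                         next_sigma(stt, stt, stt),
                         next_sigma(sxx, stt, sxt))
        theta = (theta + 1.0) * sxt        # Theta^(L) ~ (Theta^(L-1) + 1) Sigma^(L)
    return sxt, theta

# Example: kernels of a 6-hidden-layer SSN evaluated at two scalar inputs in [-1, 1].
sigma, theta = ssn_kernels([0.3], [0.1], omega=8.0, depth=6)
print(sigma, theta)
```

Sweeping x_tilde over a grid with this function reproduces the qualitative picture described above: the resulting kernel stays strongly diagonal and close to a Gaussian whose width shrinks as ω grows.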
1. What is the focus of the paper regarding neural networks and their dynamics?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and computational aspects?
3. What are the weaknesses of the paper, especially regarding its dependence on prior works and the scaling factor?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper proposes a restriction of the SIREN structure (neural network with sine activation function), named SSN. SSN has a tunable frequency in the first layer only (instead of all the layers), and its weights are initialized with a Gaussian distribution instead of a uniform distribution. They compute and compare the Neural Tangent Kernel (NTK) of SIREN and SSN, and conclude that both perform low-pass filtering.

Strengths And Weaknesses
Strengths
- The NTK computation for SIREN and SSN is new. It can definitely be useful to understand the dynamics of SIREN and SSN neural networks.

Doubts
Dependence on Sitzmann's proofs. Theorem 2 depends entirely on Sitzmann's proof (Implicit Neural Representations with Periodic Activation Functions, 2020). However, it appears that, by using their Theorem 1.5 (Central Limit Theorem), they implicitly assume that they are at the infinite-width limit, that is, that the number of neurons per layer tends to infinity. So, this theorem is not valid with a finite number of neurons per layer. (Or I may have missed an argument.) I recall that the tails of the distributions of the pre-activations in a ReLU NN tend to become heavier and heavier after each layer (in a finite-width setup). So, the term "approximately standard normal" should be made more explicit: what is the considered distance?
Scaling ω. At the beginning of Section 3, the authors recall that, when initializing SIRENs, the weights are sampled from U([−c/ω, c/ω]), excluding the weights of the first layer (why?). This choice is understandable in common NNs, and leads to different learning trajectories (while it does not change the function represented by the NN at initialization). My questions are the following:
- when initializing SSNs, the weights are Gaussian, but what is their variance?
- it seems that, for SSNs and SIRENs, the weights are initialized without being scaled by 1/ω. Why?
- if we scale the variance of the initialization distribution of the weights by 1/ω, do we recover similar results when computing the NTKs?

Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is very well written.
Quality: As mentioned above, I have some doubts about Theorem 2 and the role of the scaling ω. Besides, I am not a specialist of periodic activation functions, so I cannot evaluate the impact of such theoretical results.
Novelty: The results are definitely new.
ICLR
Title Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth Abstract Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. N/A Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. 1 INTRODUCTION Sinusoidal networks are neural networks with sine nonlinearities, instead of the traditional ReLU or hyperbolic tangent. They have been recently popularized, particularly for applications in implicit representation models, in the form of SIRENs (Sitzmann et al., 2020). However, despite their popularity, many aspects of their behavior and comparative advantages are not yet fully understood. Particularly, some initialization and parametrization choices for sinusoidal networks are often defined arbitrarily, without a clear understanding of how to optimize these settings in order to maximize performance. In this paper, we first propose a simplified version of such sinusoidal networks, that allows for easier implementation and theoretical analysis. We show that these simple sinusoidal networks can match and outperform SIRENs in implicit representation learning tasks, such as fitting videos, images and audio signals. We then analyze sinusoidal networks from a neural tangent kernel (NTK) perspective (Jacot et al., 2018), demonstrating that their NTK approximates a low-pass filter with adjustable bandwidth. We confirm, through an empirical analysis this theoretically predicted behavior also holds approximately in practice. We then use the insights from this analysis to inform the choices of initialization and parameters for sinusoidal networks. We demonstrate we can optimize the performance of a sinusoidal network by tuning the bandwidth of its kernel to the maximum frequency present in the input signal being learned. 
Finally, we apply these insights in practice, demonstrating that “well tuned” sinusoidal networks outperform other networks in learning implicit representation models with good interpolation outside the training points, and in learning the solution to differential equations. 2 BACKGROUND AND RELATED WORK Sinusoidal networks. Sinusoidal networks have been recently popularized for implicit modelling tasks by sinusoidal representation networks (SIRENs) (Sitzmann et al., 2020). They have also been evaluated for physics-informed learning, demonstrating promising results in a series of domains (Raissi et al., 2019b; Song et al., 2021; Huang et al., 2021b;a; Wong et al., 2022). Among the benefits of such networks is the fact that the mapping of inputs through an (initially) random linear layer followed by a sine function is mathematically equivalent to a transformation to a random Fourier basis, rendering them close to networks with Fourier feature transforms (Tancik et al., 2020; Rahimi & Recht, 2007), and possibly able to address spectral bias (Basri et al., 2019; Rahaman et al., 2019; Wang et al., 2021). Sinusoidal networks also have the property that the derivative of their outputs is given simply by another sinusoidal network, due to the fact that the derivative of sine function is a phase-shifted sine. Neural tangent kernel. An important prior result to the neural tangent kernel (NTK) is the neural network Gaussian process (NNGP). At random initialization of its parameters θ, the output function of a neural network of depth L with nonlinearity σ, converges to a Gaussian process, called the NNGP, as the width of its layers n1, . . . , nL → ∞. (Neal, 1994; Lee et al., 2018). This result, though interesting, does not say much on its own about the behavior of trained neural networks. This role is left to the NTK, which is defined as the kernel given by Θ(x, x̃) = ⟨∇θfθ(x),∇θfθ(x̃)⟩. It can be shown that this kernel can be written out as a recursive expression involving the NNGP. Importantly, Jacot et al. (2018) demonstrated that, again as the network layer widths n1, . . . , nL → ∞, the NTK is (1) deterministic at initialization and (2) constant throughout training. Finally, it has also been demonstrated that under some assumptions on its parametrization, the output function of the trained neural network fθ converges to the kernel regression solution using the NTK (Lee et al., 2020; Arora et al., 2019). In other words, under certain assumptions the behavior of a trained deep neural network can be modeled as kernel regression using the NTK. Physics-informed neural networks. Physics-informed neural networks (Raissi et al., 2019a) are a method for approximating the solution to differential equations using neural networks (NNs). In this method, a neural network û(t, x; θ), with learned parameters θ, is trained to approximate the actual solution function u(t, x) to a given partial differential equation (PDE). Importantly, PINNs employ not only a standard “supervised” data loss, but also a physics-informed loss, which consists of the differential equation residual N . Thus, the training loss consists of a linear combination of two loss terms, one directly supervised from data and one informed by the underlying differential equations. 3 SIMPLE SINUSOIDAL NETWORKS There are many details that complicate the practical implementation of current sinusoidal networks. 
We aim to propose a simplified version of such networks in order to facilitate theoretical analysis and practical implementation, by removing such complications. As an example we can look at SIRENs, which have their layer activations defined as fl(x) = sin(ω(Wlx+ bl)). Then, in order to cancel the ω factor, layers after the first one have their weight initialization follow a uniform distribution with range [− √ 6/n ω , √ 6/n ω ], where n is the size of the layer. Unlike the other layers, the first layer is sampled from a uniform distribution with range [−1/n, 1/n]. We instead propose a simple sinusoidal network, with the goal of formulating an architecture that mainly amounts to substituting its activation functions by the sine function. We will, however, keep the ω parameter, since (as we will see in future analyses) it is in fact a useful tool for allowing the network to fit inputs of diverse frequencies. The layer activation equations of our simple sinusoidal network, with parameter ω, are defined as f1(x) = sin(ω (W1x+ b1)), fl(x) = sin(Wlx+ bl), l > 1. (1) Finally, instead of utilizing a uniform initialization as in SIRENs (with different bounds for the first and subsequent layers), we propose initializing all parameters in our simple sinusoidal network using a default Kaiming (He) normal initialization scheme. This choice not only greatly simplifies the initialization scheme of the network, but it also facilitates theoretical analysis of the behavior of the network under the NTK framework, as we will see in Section 4. Analysis of the initialization scheme. The initialization scheme proposed above differs from the one implemented in SIRENs. We will now show that this particular choice of initialization distribution preserves the variance of the original proposed SIREN initialization distribution. As a consequence, the original theoretical justifications for its initialization scheme still hold under this activation, namely that the distribution of activations across layers are stable, well-behaved and shift-invariant. Due to space constraints, proofs are presented in Appendix A. Moreover, we also demonstrate empirically that these properties are maintained in practice. Lemma 1. Given any c, for X ∼ N ( 0, 13c 2 ) and Y ∼ U (−c, c), we have Var[X] = Var[Y ] = 13c 2. This simple Lemma and relates to Lemma 1.7 in Sitzmann et al. (2020), showing that the initialization we propose here has the same variance as the one proposed for SIRENs. Using this result we can translate the result from the main Theorem 1.8 from Sitzmann et al. (2020), which claims that the SIREN initialization indeed has the desired properties, to our proposed initialization:1 For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2n , where n is a layer’s fan-in. Empirical evaluation of initialization scheme. To empirically demonstrate the proposed simple initialization scheme preserves the properties from the SIREN initialization scheme, we perform the same analysis performed by Sitzmann et al. (2020). We observe that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. 
These results are reported in detail in the Appendix B. 3.1 COMPARISON TO SIREN In order to demonstrate our simplified sinusoidal network has comparable performance to a standard SIREN, in this section we reproduce the main results from Sitzmann et al. (2020). Table 1 compiles the results for all experiments. In order to be fair, we compare the simplified sinusoidal network proposed in this chapter with both the results directly reported in Sitzmann et al. (2020), and our own reproduction of the SIREN results (using the same parameters and settings as the original). We can see from the numbers reported in the table that the performance of the simple sinusoidal network proposed in this chapter matches the performance of the SIREN in all cases, in fact surpassing it in most of the experiments. Qualitative results are presented in Appendix C. It is important to note that this is not a favorable setting for simple sinusoidal networks, given that the training durations were very short. The SIREN favors quickly converging to a solution, though it does not have as strong asymptotic behavior. This effect is likely due to the multiplicative factor applied to later layers described in Section 3. We observe that indeed in almost all cases we can compensate for this effect by simply increasing the learning rate in the Adam optimizer (Kingma & Ba, 2014). Finally, we observe that besides being able to surpass the performance of SIREN in most cases in a short training regimen, the simple sinusoidal network performs even more strongly with longer training. To demonstrate this, we repeated some experiments from above, but with longer training durations. These results are shown in Table 4 in Appendix C. 4 NEURAL TANGENT KERNEL ANALYSIS In the following we derive the NTK for sinusoidal networks. This analysis will show us that the sinusoidal networks NTK is approximately a low-pass filter, with its bandwidth directly defined by ω. We support these findings with an empirical analysis as well in the following section. Finally, we demonstrate how the insights from the NTK can be leveraged to properly “tune” sinusoidal networks to the spectrum of the desired signal. Full derivations and extensive, detailed analysis are left to Appendix D. The NTK for a simple sinusoidal network with a single hidden layer is presented in the theorem below. The NTK for siren with 1 and 6 hidden layers are shown in Figure 1. Theorem 2. Shallow SSN NTK. For a simple sinusoidal network with one hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. 1We note that despite being named Theorem 1.8 in Sitzmann et al. (2020), this result is not fully formal, due to the Gaussian distribution being approximated without a formal analysis of this approximation. Additionally, a CLT result is employed which assumes infinite width, which is not applicable in this context. We thus refrain from calling our equivalent result a theorem. Nevertheless, to the extent that the argument is applicable, it would still hold for our proposed initialization, due to its dependence solely on the variance demonstrated in Lemma 1 above. We can see that for values of ω > 2, the second term quickly vanishes due to the e−2ω 2 factor. This leaves us with only the first term, which has a Gaussian form. 
Due to the linear scaling term xT x̃, this is only approximately Gaussian, but the approximation improves as ω increases. We can thus observe that this kernel approximates a Gaussian kernel, which is a low-pass filter, with its bandwidth defined by ω. Figure 1 presents visualizations for NTKs for the simple sinusoidal network, compared to a (scaled) pure Gaussian with variance ω−2, showing there is a close match between the two. If we write out the NTK for networks with more than one hidden layer, it quickly becomes un-interpretable due to the recursive nature of the NTK definition (see Appendix D). However, as shown empirically in Figure 1, these kernels are still approximated by Gaussians with variance ω−2. We also observe that the NTK for a SIREN with a single hidden layer is analogous, but with a sinc form, which is also a low-pass filter. Theorem 3. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. For deeper SIREN networks, the kernels defined by the later layers are in fact Gaussian too, as discussed in Appendix D. This leads to an NTK that is approximated by a product of a sinc function and a Gaussian. These SIREN kernels are also presented in Figure 1. 5 EMPIRICAL ANALYSIS As shown above, neural tangent kernel theory suggests that sinusoidal networks work as low-pass filters, with their bandwidth controlled by the parameter ω. In this section, we demonstrate empirically that we can observe this predicted behavior even in real sinusoidal networks. For this experiment, we generate a 512× 512 monochromatic image by super-imposing two orthogonal sinusoidal signals, each consisting of a single frequency, f(x, y) = cos(128πx) + cos(32πy). This function is sampled in the domain [−1, 1]2 to generate the image on the left of Figure 2. To demonstrate what we can expect from applying low-pass filters of different bandwidths to this signal, we perform a discrete Fourier transform (DFT), cut off frequencies above a certain value, and perform an inverse transform to recover the (filtered) image. The MSE of the reconstruction, as a function of the cutoff frequency, is shown in Figure 3. We can see that due to the simple nature of the signal, containing only two frequencies, there are only three loss levels. If indeed the NTK analysis is correct and sinusoidal networks act as low-pass filters, with bandwidth controlled by ω, we should be able to observe similar behavior with sinusoidal networks with different ω values. We plot the final training loss and training curves for sinusoidal networks with different ω in Figure 3. We can observe, again, that there are three consistent loss levels following the magnitude of the ω parameter, in line with the intuition that the sinusoidal network is working as a low-pass filter. This is also observable in Figure 2, where we see example reconstructions for networks of various ω values after training. However, unlike with the DFT low-pass filter (which does not involve any learning), we see in Figure 3 that during training some sinusoidal networks shift from one loss level to a lower one. This demonstrates that sinusoidal networks differ from true low-pass filters in that their weights can change, which implies that the bandwidth defined by ω also changes with learning. 
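For reference, the low-pass-filter baseline described above can be reproduced with a few lines of NumPy. The sketch below is our own illustration, not code from the paper; the box-shaped frequency cutoff and the specific cutoff values are our choices, and the cutoff is one possible reading of "cutting off frequencies above a certain value".

```python
# Minimal sketch of the Section 5 baseline: the two-frequency test image and an ideal
# DFT low-pass filter. Assumptions: NumPy only; box cutoff and cutoff values are ours.
import numpy as np

n = 512
coords = np.linspace(-1.0, 1.0, n, endpoint=False)
x, y = np.meshgrid(coords, coords, indexing="ij")
image = np.cos(128 * np.pi * x) + np.cos(32 * np.pi * y)

def lowpass_mse(img, cutoff):
    """Zero all DFT coefficients above `cutoff` (cycles over the sampled window), return reconstruction MSE."""
    spectrum = np.fft.fft2(img)
    freqs = np.fft.fftfreq(img.shape[0], d=1.0 / img.shape[0])   # integer frequency bins
    fx, fy = np.meshgrid(freqs, freqs, indexing="ij")
    mask = (np.abs(fx) <= cutoff) & (np.abs(fy) <= cutoff)
    reconstruction = np.fft.ifft2(spectrum * mask).real
    return float(np.mean((img - reconstruction) ** 2))

for cutoff in (16, 64, 256):
    print(f"cutoff {cutoff:3d}: MSE = {lowpass_mse(image, cutoff):.3f}")
# Expected: three plateaus (both components removed, one kept, both kept), mirroring
# the three loss levels observed for sinusoidal networks as omega increases.
```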
We know the weights W1 in the first layer of a sinusoidal network, given by f1(x) = sin ( ω ·WT1 x+ b1 ) , will change with training. Empirically, we observed that the spectral norm of W1 increases throughout training for small ω values. We can interpret that as the overall magnitude of the term ω ·WT1 x increasing, which is functionally equivalent to an increase in ω itself. In Figure 3, we observe that sinusoidal networks with smaller values of ω take a longer time to achieve a lower loss (if at all). Intuitively, this happens because, due to the effect described above, lower ω values require a larger increase in magnitude by the weights W1. Given that all networks were trained with the same learning rate, the ones with a smaller ω require their weights to move a longer distance, and thus take more training steps to achieve a lower loss. 6 TUNING ω As shown in the previous section, though the bandwidth of a network can change throughout training, the choice of ω still influences how easily and quickly (if at all) it can learn a given signal. The value of the ω parameter is thus crucial for the learning of the network. Despite this fact, in SIRENs, for example, this value is not adjusted for each task (except for the audio fitting experiments), and is simply set empirically to an arbitrary value. In this section, we seek to justify a proper initialization for this parameter, such that it can be chosen appropriately for each given task. Moreover, it is often not the case that we simply want to fit only the exact training samples but instead want to find a good interpolation (i.e., generalize well). Setting ω too high, and thus allowing the network to model frequencies that are much larger than the ones present in the actual signal is likely to cause overfitting. This is demonstrated empirically in Figure 4. Consequently, we want instead to tune the network to the highest frequency present in the signal. However, we do not always have the knowledge of what is the value of the highest frequency in the true underlying signal of interest. Moreover, we have also observed that, since the network learns and its weights change in magnitude, that value in fact changes with training. Therefore, the most we can hope for is to have a good heuristic to guide the choice of ω. Nevertheless, having a reasonable guess for ω is also likely sufficient for good performance, precisely due to the ability of the network to adapt during training and compensate for a possibly slightly suboptimal choice. Choosing ω from the Nyquist frequency. One source of empirical information on the relationship between ω and the sinusoidal network’s “learnable frequencies” is the previous section’s empirical analysis. Taking into account the scaling, we can see from Fig. 3 that around ω = 16 the network starts to be able to learn the full signal (freq. 128). We can similarly note that at about ω = 4 the sinusoidal network starts to be able to efficiently learn a signal with frequency 32, but not the one with frequency 128. This scaling suggests a heuristic of setting ω to about 1/8 of the signal’s maximum frequency. For natural signals, such as pictures, it is common for frequencies up to the Nyquist frequency of the discrete sampling to be present. We provide an example for the “camera” image we have utilized so far in Figure 23 in Appendix E, where we can see that the reconstruction loss through a low-pass filter continues to decrease significantly up to the Nyquist frequency for the image resolution. 
In light of this information, analyzing the choices of ω for the experiments in Section 3.1 again suggests that ω should be set around 1/8 of the Nyquist frequency of the signal. These values of ω are summarized in Table 2 in the “Fitting ω” column. For example, the image fitting experiment shows that, for an image of shape 512× 512 (and thus Nyquist frequency of 256 for each dimension), this heuristic suggests an ω value of 256/8 = 32, which is the value found to work best empirically through search. We find similar results for the audio fitting experiments. The audio signals used in the audio fitting experiment contained approximately 300, 000 and 500, 000 points, and thus maximum frequencies of approximately 150, 00 and 250, 000. This suggests reasonable values for ω of 18, 750 and 31, 250, which are close to the ones found empirically to work well. In examples such as the video fitting experiments, in which each dimension has a different frequency, it is not completely clear how to pick a single ω to fit all dimensions. This suggests that having independent values of ω for each dimension might be useful for such cases, as discussed in the next section. Finally, when performing the generalization experiments in Section 7, we show the best performing ω ended up being half the value of the best ω used in the fitting tasks from Section 3.1. This follows intuitively, since for the generalization task we set apart half the points for training and the other half for testing, thus dividing the maximum possible frequency in the training sample in half, providing further evidence of the relationship between ω and the maximum frequency in the input signal. Multi-dimensional ω. In many problems, such as the video fitting and PDE problems, not only is the input space multi-dimensional, it also contains time and space dimensions (which are additionally possibly of different shape). This suggests that employing a multi-dimensional ω, specifying different frequencies for each dimension might be beneficial. In practice, if we employ a scaling factor λ = [λ1 λ2 . . . λd] T , we have the first layer of the sinusoidal network given by f1(x) = sin(ω (W1 (λ⊙ x) + b1)) = sin(W1 (Ω⊙ x) + ωb1), (2) where Ω = [λ1ω λ2ω . . . λdω] T works as a multi-dimensional ω. In the following experiments, we employ this approach to three-dimensional problems, in which we have time and differently shaped space domains, namely the video fitting and physics-informed neural network PDE experiments. For these experiments, we report the ω in the form of the (already scaled) Ω vector for simplicity. Choosing ω from available information Finally, in many problems we do have some knowledge of the underlying signal we can leverage, such as in the case of inverse problems. For example, let’s say we have velocity fields for a fluid and we are trying to solve for the coupled pressure field and the Reynolds number using a physics-informed neural network (as done in Section 7). In this case, we have access to two components of the solution field. Performing a Fourier transform on the training data we have can reveal the relevant spectrum and inform our choice of ω. If the maximum frequency in the signal is lower than the Nyquist frequency implied by the sampling, this can lead to a more appropriate choice of ω than suggested purely from the sampling. 
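As a concrete illustration of this heuristic, the sketch below (our own code; the 99% spectral-energy threshold and all names are our choices, not values from the paper) estimates the highest frequency carrying significant energy in a sampled 1D signal via an FFT and returns ω as roughly 1/8 of it, falling back toward the Nyquist bin for broadband signals. The same idea applies per dimension, yielding the multi-dimensional Ω of Equation 2.

```python
# Minimal sketch of the omega heuristic: set omega to about 1/8 of the highest frequency
# that carries significant energy in the training signal. Assumptions: NumPy only; the
# 99% energy threshold and function names are our own choices.
import numpy as np

def suggest_omega(signal, energy_fraction=0.99):
    """Suggest omega for a 1D signal sampled uniformly on [-1, 1]."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    cumulative = np.cumsum(spectrum) / np.sum(spectrum)
    # Smallest frequency bin (in cycles over the sampled window) retaining the requested
    # energy; for a broadband signal this approaches the Nyquist bin len(signal) // 2.
    max_freq = max(int(np.searchsorted(cumulative, energy_fraction)), 1)
    return max_freq / 8.0

# Example: a band-limited toy signal whose highest component sits at bin 128,
# for which the heuristic suggests omega = 16.
t = np.linspace(-1.0, 1.0, 1024, endpoint=False)
toy = np.sin(64 * np.pi * t) + 0.5 * np.sin(128 * np.pi * t)
print(suggest_omega(toy))
```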
7 EXPERIMENTS In this section, we first perform experiments to demonstrate how the optimal value of ω influences the generalization error of a sinusoidal network, following the discussion in Section 6. After that, we demonstrate that sinusoidal networks with properly tuned ω values outperform traditional physicsinformed neural networks in classic PDE tasks. 7.1 EVALUATING GENERALIZATION We now evaluate the simple sinusoidal network generalization capabilities. To do this, in all experiments in this section we segment the input signal into training and test sets using a checkerboard pattern – along all axis-aligned directions, points alternate between belonging to train and test set. We perform audio, image and video fitting experiments. When performing these experiments, we search for the best performing ω value for generalization (defined as performance on the held-out points). We report the best values on Table 2. We observe that, as expected from the discussion in Section 6, the best performing ω values follow the heuristic discussed above, and are in fact half the best-performing value found in the previous fitting experiments from Section 3.1, confirming our expectation. This is also demonstrated in the plot in Figure 4. Using a higher ω leads to overfitting and poor generalization outside the training points. This is demonstrated in Figure 4, in which we can see that choosing an appropriate ω value from the heuristics described previously leads to a good fit and interpolation. Setting ω too high leads to interpolation artifacts, due to overfitting of spurious high-frequency components. For the video signals, which have different size along each axis, we employ a multi-dimensional ω. We scale each dimension of ω proportional to the size of the input signal along the corresponding axis. 7.2 SOLVING DIFFERENTIAL EQUATIONS Finally, we apply our analysis to physics-informed learning. We compare the performance of simple sinusoidal networks to the tanh networks that are commonly used for these tasks. Results are summarized in Table 3. Details for the Schrödinger and Helmholtz experiments are presented in Appendix E. 7.2.1 BURGERS EQUATION (IDENTIFICATION) This experiment reproduces the Burgers equation identification experiment from Raissi et al. (2019a). Here we are identifying the parameters λ1 and λ2 of a 1D Burgers equation, ut+λ1uux−λ2uxx = 0, given a known solution field. The ground truth value of the parameters are λ1 = 1.0 and λ2 = 0.01/π. In order to find a good value for ω, we perform a low-pass reconstruction of the solution as before. We can observe in Figure 5 that the solution does not have high bandwidth, with most of the loss being minimized with only the lower half of the spectrum. Note that the sampling performed for the training data (N = 2, 000) is sufficient to support such frequencies. This suggests an ω value in the range 8− 10. Indeed, we observe that ω = 10 gives the best identification of the desired parameters, with errors of 0.0071% and 0.0507% for λ1 and λ2 respectively, against errors of 0.0521% and 0.4522% of the baseline. This value of ω also achieves the lowest reconstruction loss against the known solution, with an MSE of 8.034 · 10−4. Figure 5 shows the reconstructed solution using the identified parameters. 7.2.2 NAVIER-STOKES (IDENTIFICATION) This experiment reproduces the Navier-Stokes identification experiment from Raissi et al. (2019a). 
In this experiment, we are trying to identify the parameters λ1, λ2 and the pressure field p of the 2D Navier-Stokes equations given by ∂u/∂t + λ1 u · ∇u = −∇p + λ2 ∇²u, given known velocity fields u and v. The ground truth values of the parameters are λ1 = 1.0 and λ2 = 0.01. Unlike the 1D Burgers case, here the number of points sampled for the training set (N = 5,000) is not high compared to the size of the full solution volume, and is thus the limiting factor for the bandwidth of the input signal. Given the random sampling of points from the full solution, the generalized sampling theorem applies.
[Figure residue: a panel listing the correct PDEs alongside the PDEs identified from clean data and from data with 1% noise for the Burgers and Navier-Stokes experiments; e.g., for clean Navier-Stokes data the identified equations are u_t + 1.000 (u u_x + v u_y) = -p_x + 0.01018 (u_{xx} + u_{yy}) and v_t + 1.000 (u v_x + v v_y) = -p_y + 0.01018 (v_{xx} + v_{yy}).]
The original solution has dimensions of 100 × 50 × 200. With the 5,000 randomly sampled points, the average sampling rate per dimension is approximately 17, corresponding to a Nyquist frequency of approximately 8.5. Furthermore, given the multi-dimensional nature of this problem, with both spatial and temporal axes, we employ an independent scaling of ω for each dimension. The analysis above suggests an average ω ≈ 1, with the dimensions of the problem suggesting scaling factors of [0.5 1 2]ᵀ. Indeed, we observe that Ω = [0.3 0.6 1.2]ᵀ gives the best results, with errors of 0.0038% and 1.782% for λ1 and λ2 respectively, against errors of 0.0046% and 2.093% for the baseline. Figure 6 shows the identified pressure field. Note that given the nature of the problem, this field can only be identified up to a constant.

8 CONCLUSION
In this work, we have presented a simplified formulation for sinusoidal networks. Analysis of this architecture from the neural tangent kernel perspective, combined with empirical results, reveals that the kernel for sinusoidal networks corresponds to a low-pass filter with adjustable bandwidth. We leverage this information in order to initialize these networks appropriately, choosing their bandwidth such that it is tuned to the signal being learned. Employing this strategy, we demonstrated improved results in both implicit modelling and physics-informed learning tasks.

A SIMPLE SINUSOIDAL NETWORK INITIALIZATION
We present here the proofs for the initialization scheme of the simple sinusoidal network from Section 3.
Lemma 4. Given any c, for X ∼ N(0, (1/3)c²) and Y ∼ U(−c, c), we have Var[X] = Var[Y] = (1/3)c².
Proof. By definition, Var[X] = σ² = (1/3)c². For Y, we know that the variance of a uniformly distributed random variable with bounds [a, b] is given by (1/12)(b − a)². Thus, Var[Y] = (1/12)(2c)² = (1/3)c².
Theorem 5. For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2/n, where n is the layer's fan-in.
Proof. The proof follows exactly the proof for Theorem 1.8 in Sitzmann et al.
(2020), only using Lemma 4 when necessary to show that the initialization proposed here has the same variance necessary for the proof to follow.

B EMPIRICAL EVALUATION OF SSN INITIALIZATION
Here we report an empirical analysis of the initialization scheme of simple sinusoidal networks, referenced in Section 3. For this analysis we use a sinusoidal MLP with 6 hidden layers of 2048 units, and single-dimensional input and output. This MLP is initialized using the simplified scheme described above. For testing, 2^8 equally spaced inputs from the range [−1, 1] are passed through the network. We then plot the histogram of activations after each linear operation (before the sine non-linearity) and after each sine non-linearity. To match the original plot, we also plot the 1D Fast Fourier Transform of all activations in a layer, and the gradient of this output with respect to each activation. These results are presented in Figure 8. The main conclusion from this figure is that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. We also reproduced the same result up to 50 layers. We then perform an additional experiment in which the exact same setup as above is employed, yet the 1D inputs are shifted by a large value (i.e., x → x + 1000). We then show the same plot as before in Figure 9. We can see that there is essentially no change from the previous plot, which demonstrates the sinusoidal network's shift-invariance in the input space, one of its important desirable properties, as discussed previously.

C EXPERIMENTAL DETAILS FOR COMPARISON TO SIREN
Below, we present qualitative results and describe experimental details for each experiment. As these are a reproduction of the experiments in Sitzmann et al. (2020), we refer to their details as well for further information.

C.1 IMAGE
In the image fitting experiment, we treat an image as a function from the spatial domain to color values (x, y) → (r, g, b). In the case of a monochromatic image, used here, this function maps instead to one-dimensional intensity values. We try to learn a function f : R² → R, parametrized as a sinusoidal network, in order to fit such an image. Figure 7 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal network. The gradient and Laplacian for the learned function are also presented, demonstrating that higher order derivatives are also learned appropriately.

Table 4: Comparison of the simple sinusoidal network and SIREN on some experiments, with a longer training duration. The specific durations are described below in the details for each experiment. We can see that the simple sinusoidal network has stronger asymptotic performance. Values above the horizontal center line are peak signal to noise ratio (PSNR), values below are mean squared error (MSE). †Audio experiments utilized a different learning rate for the first layer, see the full description below for details.

Experiment           | Simple Sinusoidal Network | SIREN [ours]
Image                | 54.70                     | 52.43
Poisson (Gradient)   | 39.51                     | 38.70
Poisson (Laplacian)  | 22.09                     | 20.82
Video (cat)          | 34.64                     | 32.26
Video (bikes)        | 37.71                     | 34.07
---------------------|---------------------------|-------------
Audio (Bach)†        | 5.66 · 10−7               | 3.02 · 10−6
Audio (counting)†    | 4.02 · 10−5               | 6.33 · 10−5

Figure 7 (panels: Image, Gradient, Laplacian): Top row: Ground truth image. Bottom: Reconstructed with sinusoidal network.

Training parameters. The input image used is 512 × 512, mapped to an input domain [−1, 1]².
The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 32. The Adam optimizer is used with a learning rate of 3 · 10−3, trained for 10, 000 steps in the short duration training results and for 20, 000 steps in the long duration training results. C.2 POISSON These tasks are similar to the image fitting experiment, but instead of supervising directly on the ground truth image, the learned fitted sinusoidal network is supervised on its derivatives, constituting a Poisson problem. We perform the experiment by supervising both on the input image’s gradient and Laplacian, and report the reconstruction of the image and it’s gradients in each case. Figure 10 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal networks. Since reconstruction from derivatives can only be correct up to a scaling factor, we scale the reconstructions for visualization. As in the original SIREN results, we can observe that the reconstruction from the gradient is of higher quality than the one from the Laplacian. Training parameters. The input image used is of size 256× 256, mapped from an input domain [−1, 1]2. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For both experiments, the parameter ω is set to 32 and the Adam optimizer is used. For the gradient experiments, in short and long training results, a learning rate of 1 · 10−4 is used, trained for 10, 000 and 20, 000 steps respectively. For the Laplace experiments, in short and long training results, a learning rate of 1 · 10−3 is used, trained for 10, 000 and 20, 000 steps respectively. C.3 VIDEO These tasks are similar to the image fitting experiment, but we instead fit a video, which also has a temporal input dimension, (t, x, y) → (r, g, b). We learn a function f : R3 → R3, parametrized as a sinusoidal network, in order to fit such a video. Figures 11 and 12 show sampled frames from the videos used in this experiment, and their respective reconstructions from the fitted sinusoidal networks. Training parameters. The cat video contains 300 frames of size 512 × 512. The bikes video contains 250 frames of size 272× 640. These signals are fitted from the input domain [−1, 1]3. The sinusoidal network used is a 5-layer MLP with hidden size 1024, following the proposed initialization scheme above. The parameter ω is set to 8. The Adam optimizer is used, with a learning rate of 3 · 10−4 trained for 100, 000 steps in the short duration training results and for 200, 000 steps in the long duration training results. C.4 AUDIO In the audio experiments, we fit an audio signal in the temporal domain as a waveform t → w. We to learn a function f : R → R, parametrized as a sinusoidal network, in order to fit the audio. Figure 13 shows the waveforms for the input audios and the reconstructed audios from the fitted sinusoidal network. In this experiment, we utilized a lower learning rate for the first layer compared to the rest of the network. This was used to compensate the very large ω used (in the 15, 000−30, 000 range, compared to the 10−30 range for all other experiments). One might argue that this is re-introducing complexity, counteracting the purpose the proposed simplification. 
However, we would claim (1) that this is only limited to cases with extremely high ω, which was not present in any case except for fitting audio waves, and (2) that adjusting the learning rate for an individual layer is still an approach that is simpler and more in line with standard machine learning practice compared to multiplying all layers by a scaling factor and then adjusting their initialization variance by the same amount. Training parameters. Both audios use a sampling rate of 44100Hz. The Bach audio is 7s long and the counting audio is approximately 12s long. These signals are fitted from the input domain [−1, 1]. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For short and long training results, training is performed for 5, 000 and 50, 000 steps respectively. For the Bach experiment, the parameter ω is set to 15, 000. The Adam optimizer is used, with a general learning rate of 3 · 10−3. A separate learning rate of 1 · 10−6 is used for the first layer to stabilize training due to the large ω value. For the counting experiment, the parameter ω is set to 32, 000. The Adam optimizer is used, with a general learning rate of 1 · 10−3 and a first layer learning rate of 1 · 10−6. C.5 HELMHOLTZ EQUATION In this experiment we solve for the unknown wavefield Φ : R2 → R2 in the Helmholtz equation (∆ + k2)Φ(x) = −f(x), (3) with known wavenumber k and source function f (a Gaussian with µ = 0 and σ2 = 10−4). We solve this differential equation using a sinusoidal network supervised with the physics-informed loss ∫ Ω ∥(∆ + k2)Φ(x) + f(x)∥1dx, evaluated at random points sampled uniformly in the domain Ω = [−1, 1]2. Figure 14 shows the real and imaginary components of the ground truth solution to the differential equation and the solution recovered by the fitted sinusoidal network. Training parameters. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 16. The Adam optimizer is used, with a learning rate of 3 · 10−4 trained for 50, 000 steps. C.6 SIGNED DISTANCE FUNCTION (SDF) In these tasks we learn a 3D signed distance function. We learn a function f : R3 → R, parametrized as a sinusoidal network, to model a signed distance function representing a 3D scene. This function is supervised indirectly from point cloud data of the scene. Figures 16 and 15 show 3D renderings of the volumes inferred from the learned SDFs. Training parameters. The statue point cloud contains 4, 999, 996 points. The room point cloud contains 10, 250, 688 points. These signals are fitted from the input domain [−1, 1]3. The sinusoidal network used is a 5-layer MLP with hidden size 256 for the statue and 1024 for the room. The parameter ω is set to 4. The Adam optimizer is used, with a learning rate of 8 · 10−4 and a batch size of 1400. All models are trained for 190, 000 steps for the statue experiment and for 410, 000 steps for the room experiment. D NEURAL TANGENT KERNEL ANALYSIS AND PROOFS D.1 PRELIMINARIES In order to perform the subsequent NTK analysis, we first need to formalize definitions for simple sinusoidal networks and SIRENs. The definitions used here adhere to the common NTK analysis practices, and thus differ slightly from practical implementation. Definition 1. 
For the purposes of the following proofs, a (sinusoidal) fully-connected neural network with L hidden layers that takes as input x ∈ Rn0 , is defined as the function f (L) : Rn0 → RnL+1 , recursively given by f (0)(x) = ω ( W (0)x+ b(0) ) , f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L), where ω ∈ R. The parameters { W (j) }L j=0 have shape nj+1×nj and all have each element sampled independently either from N (0, 1) (for simple sinusoidal networks) or from U(−c, c) with some bound c ∈ R (for SIRENs). The { b(j) }L j=0 are nj+1-dimensional vectors sampled independently from N (0, Inj+1). With this definition, we now state the general formulation of the NTK, which applies in general to fully-connected networks with Lipschitz non-linearities, and consequently in particular to the sinusoidal networks studied here as well. Let us first define the NNGP, which has covariance recursively defined by Σ(L+1)(x, x̃) = Ef∼N (0,Σ(L)) [σ(f(x))σ(f(x̃))] + β2, with base case Σ(1)(x, x̃) = 1n0x T x̃+ β2, and where β gives the variance of the bias terms in the neural network layers (Neal, 1994; Lee et al., 2018). Now the NTK is given by the following theorem. Theorem 6. For a neural network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, the neural tangent kernel (NTK) of f (L) converges in probability to the deterministic kernel Θ(L) defined recursively as Θ(0)(x, x̃) = Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) , Θ(L)(x, x̃) = Θ(L−1)(x, x̃)Σ̇(L)(x, x̃) + Σ(L)(x, x̃), where { Σ(l) }L l=0 are the neural network Gaussian processes (NNGPs) corresponding to each f (l) and Σ̇(l)(x, x̃) = E(u,v)∼Σ(l−1)(x,x̃) [cos(u)cos(v)] . Proof. This is a standard general NTK theorem, showing that the limiting kernel recursively in terms of the network’s NNGPs and the previous layer’s NTK. For brevity we omit the proof here and refer the reader to, for example, Jacot et al. (2020). The only difference is for the base case Σ(0), due to the fact that we have an additional ω parameter in the first layer. It is simple to see that the neural network with 0 hidden layers, i.e. the linear model ω ( W (0)x+ b(0) ) will lead to the same Gaussian process covariance kernel as the original proof, xT x̃+ 1, only adjusted by the additional variance factor ω2. Theorem 6 demonstrates that the NTK can be constructed as a recursive function of the NTK of previous layers and the network’s NNGPs. In the following sections we will derive the NNGPs for the SIREN and the simple sinusoidal network directly. We will then use these NNGPs with Theorem 6 to derive their NTKs as well. To finalize this preliminary section, we also provide two propositions that will be useful in following proofs in this section. Proposition 7. For any ω ∈ R, x ∈ Rd, Ew∼N (0,Id) [ eiω(w T x) ] = e− ω2 2 ∥x∥ 2 2 Proof. Omitting w ∼ N (0, Id) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 iwjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− w2j 2 dwj . Completing the square, we get d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− 1 2w 2 j dwj = d∏ j=1 1√ 2π ∫ ∞ −∞ e 1 2 (i 2ω2x2j−i 2ω2x2j+2ixjwj−w 2 j)dwj = d∏ j=1 e 1 2 i 2ω2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (i 2ω2x2j−2iω 2xjwj+w 2 j)dwj = d∏ j=1 e− 1 2ω 2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (wj−iωxj) 2 dwj . 
Since the integral and its preceding factor constitute a Gaussian pdf, they integrate to 1, leaving the final result d∏ j=1 e− ω2 2 x 2 j = e− ω2 2 ∑d j=1 x 2 j = e− ω2 2 ∥xj∥ 2 2 . Proposition 8. For any c, ω ∈ R, x ∈ Rd, Ew∼Ud(−c,c) [ eiω(w T x) ] = d∏ j=1 sinc(c ωxj). Proof. Omitting w ∼ Ud(−c, c) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 wjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 ∫ c −c eiω wjxj 1 2c dwj = d∏ j=1 1 2c ∫ c −c eiω wjxjdwj . Now, focusing on the integral above, we have∫ c −c eiω wjxjdwj = ∫ c −c cos(ω wjxj)dwj + i ∫ c −c sin(ω wjxj)dwj = sin(ω wjxj) ωxj ∣∣∣∣∣ c −c − icos(ω wjxj) ωxj ∣∣∣∣∣ c −c = 2sin(c ωxj) ωxj . Finally, plugging this back into the product above, we get d∏ j=1 1 2c ∫ c −c eiω wjxjdwj = d∏ j=1 1 2c 2sin(c ωxj) ωxj = d∏ j=1 sinc(c ωxj). D.2 SHALLOW SINUSOIDAL NETWORKS For the next few proofs, we will be focusing on neural networks with a single hidden layer, i.e. L = 1. Expanding the definition above, such a network is given by f (1)(x) = W (1) 1 √ n1 sin ( ω ( W (0)x+ b(0) )) + b(1). (4) The advantage of analysing such shallow networks is that their NNGPs and NTKs have formulations that are intuitively interpretable, providing insight into their characteristics. We later extend these derivations to networks of arbitrary depth. D.2.1 SIREN First, let us derive the NNGP for a SIREN with a single hidden layer. Theorem 9. Shallow SIREN NNGP. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. We first show that despite the usage of a uniform distribution for the weights, this initialization scheme still leads to an NNGP. In this initial part, we follow an approach similar to Lee et al. (2018), with the modifications necessary for this conclusion to hold. From our neural network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is uniformly distributed with finite variance and zero mean, the f (1)(x)j become normally distributed with mean zero as n1 → ∞ by the (Lyapunov) central limit theorem (CLT). Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Now that we have concluded that this initialization scheme still entails an NNGP, we have that its covariance is determined by σ2WΣ (1) + σ2b = c2 3 Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the law of large number (LLN) the limit above converges to Ew∼Un0 (−c,c), b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. 
Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Propositions 7 and 8 to each expectation above and noting that the sinc function is even, we are left with − 1 4 2 n0∏ j=1 sinc(c ω (xj + x̃j))− 2e−2ω 2 n0∏ j=1 sinc(c ω (xj − x̃j)) = 1 2 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) . For simplicity, if we take the case of a one-dimensional output (e.g., an audio signal or a monochromatic image) with the standard SIREN setting of c = √ 6, the NNGP reduces to Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) − e−2ω 2 sinc (√ 6ω (x+ x̃) ) + 1. We can already notice that this kernel is composed of sinc functions. The sinc function is the ideal low-pass filter. For any value of ω > 1, we can see the the first term in the expression above will completely dominate the expression, due to the exponential e−2ω 2 factor. In practice, ω is commonly set to values at least one order of magnitude above 1, if not multiple orders of magnitude above that in certain cases (e.g., high frequency audio signals). This leaves us with simply Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) + 1. Notice that not only does our kernel reduce to the sinc function, but it also reduces to a function solely of ∆x = x − x̃. This agrees with the shift-invariant property we observe in SIRENs, since the NNGP is dependent only on ∆x, but not on the particular values of x and x̃. Notice also that ω defines the bandwidth of the sinc function, thus determining the maximum frequencies it allows to pass. The general sinc form and the shift-invariance of this kernel can be visualized in Figure 17, along with the effect of varying ω on the bandwidth of the NNGP kernel. We can see that the NTK of the shallow SIREN, derived below, maintains the same relevant characteristics as the NNGP. We first derive Σ̇ in the Lemma below. Lemma 10. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. The proof follows the same pattern as Theorem 9, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Now we can derive the NTK for the shallow SIREN. Corollary 11. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 ))c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 + c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 9 and Lemma 10 to Theorem 6. Though the expressions become more complex due to the formulation of the NTK, we can see that many of the same properties from the NNGP still apply. Again, for reasonable values of ω, the term with the exponential factor e−2ω 2 will be of negligible relative magnitude. 
With c = √ 6, this leaves us with ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc (√ 6ω (xj − x̃j) ) + ω2 ( xT x̃+ 1 ) + 1, which is of the same form as the NNGP, with some additional linear terms xT x̃. Though these linear terms break the pure shift-invariance, we still have a strong diagonal and the sinc form with bandwidth determined by ω, as can be seen in Figure 18. Similarly to the NNGP, the SIREN NTK suggests that training a shallow SIREN is approximately equivalent to performing kernel regression with a sinc kernel, a low-pass filter, with its bandwidth defined by ω. This agrees intuitively with the experimental observations from the paper that in order to fit higher frequencies signals, a larger ω is required. D.2.2 SIMPLE SINUSOIDAL NETWORK Just as we did in the last section, we will now first derive the NNGP for a simple sinusoidal network, and then use that in order to obtain its NTK as well. As we will see, the Gaussian initialization employed in the SSN has the benefit of rendering the derivations cleaner, while retaining the relevant properties from the SIREN initialization. We observe that a similar derivation of this NNGP (using cosine functions instead of sine) can be found in Pearce et al. (2019), with a focus on a Bayesian perspective for the result. Theorem 12. Shallow SSN NNGP. For a single hidden layer simple sinusoidal network f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. We again initially follow an approach similar to the one described in Lee et al. (2018). From our sinusoidal network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is Gaussian with finite variance and zero mean, the f (1)(x)j are also normally distributed with mean zero by the CLT. Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Therefore, its covariance is determined by σ2WΣ (1) + σ2b = Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the LLN the limit above converges to Ew∼N (0,In0 ),b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Proposition 7 to each expectation above, it becomes −1 4 ( e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 − e−ω 2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) . We an once again observe that, for practical values of ω, the NNGP simplifies to 1 2 e− ω2 2 ∥x−x̃∥ 2 2 + 1. 
This takes the form of a Gaussian kernel, which is also a low-pass filter, with its bandwidth determined by ω. We note that, similar to the c = √ 6 setting from SIRENs, in practice a scaling factor of √ 2 is applied to the normal activations, as described in Section 3, which cancels out the 1/2 factors from the kernels, preserving the variance magnitude. Moreover, we can also observe again that the kernel is a function solely of ∆x, in agreement with the shift invariance that is also observed in simple sinusoidal networks. Visualizations of this NNGP are provided in Figure 19. We will now proceed to derive the NTK, which requires first obtaining Σ̇. Lemma 13. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ̇(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. The proof follows the same pattern as Theorem 12, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Corollary 14. Shallow SSN NTK. For a simple sinusoidal network with a single hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 )) [1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 ] + 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 12 and Lemma 13 to Theorem 6. We again note the vanishing factor e−2ω 2 , which leaves us with 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 + ω2 ( xT x̃+ 1 ) + 1. (5) As with the SIREN before, this NTK is still of the same form as its corresponding NNGP. While again we have additional linear terms xT x̃ in the NTK compared to the NNGP, in this case as well the kernel preserves its strong diagonal. It is still close to a Gaussian kernel, with its bandwidth determined directly by ω. We demonstrate this in Figure 20, where the NTK for different values of ω is shown. Additionally, we also plot a pure Gaussian kernel with variance ω2, scaled to match the maximum and minimum values of the NTK. We can observe the NTK kernel closely matches the Gaussian. Moreover, we can also observe that, at x̃ = 0 the maximum value is predicted by k ≈ ω2/2, as expected from the scaling factors in the kernel in Equation 5. This NTK suggests that training a simple sinusoidal network is approximately equivalent to performing kernel regression with a Gaussian kernel, a low-pass filter, with its bandwidth defined by ω. We note that even though this sinusoidal network kernel approximates a Gaussian kernel, an actual Gaussian kernel can be recovered if a combination of sine and cosine activations are employed, as demonstrated in Tsuchida (2020) (Proposition 18). D.3 DEEP SINUSOIDAL NETWORKS We will now look at the full NNGP and NTK for sinusoidal networks of arbitrary depth. As we will see, due to the recursive nature of these kernels, for networks deeper than the ones analyzed in the previous section, their full unrolled expressions quickly become intractable intuitively, especially for the NTK. Nevertheless, these kernels can still provide some insight, into the behavior of their corresponding networks. Moreover, despite their symbolic complexity, we will also demonstrate empirically that the resulting kernels can be approximated by simple Gaussian kernels, even for deep networks. 
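One simple way to perform such an empirical check, without unrolling the recursion symbolically, is to estimate the finite-width NTK directly from its definition Θ(x, x̃) = ⟨∇θ f(x), ∇θ f(x̃)⟩ on a grid of inputs and compare its central slice to a Gaussian of variance 1/ω². The sketch below, assuming PyTorch, is one possible instantiation of this check; the network construction, width, depth and grid are illustrative choices, not the exact ones used for the figures.

import torch
import torch.nn as nn

def make_ssn(width=256, depth=4, omega=8.0):
    layers = [nn.Linear(1 if i == 0 else width, width) for i in range(depth)]
    head = nn.Linear(width, 1)
    for m in layers + [head]:
        nn.init.kaiming_normal_(m.weight)
    params = [p for m in layers + [head] for p in m.parameters()]

    def f(x):
        h = torch.sin(omega * layers[0](x))
        for layer in layers[1:]:
            h = torch.sin(layer(h))
        return head(h)

    return f, params

def empirical_ntk(f, params, xs):
    # Theta(x, x~) = <grad_theta f(x), grad_theta f(x~)>
    grads = []
    for x in xs:
        g = torch.autograd.grad(f(x.view(1, 1)).squeeze(), params)
        grads.append(torch.cat([gi.reshape(-1) for gi in g]))
    G = torch.stack(grads)   # (n_points, n_params)
    return G @ G.T           # kernel matrix

omega = 8.0
f, params = make_ssn(omega=omega)
xs = torch.linspace(-1, 1, 41)
K = empirical_ntk(f, params, xs)

center = len(xs) // 2
ntk_slice = K[center]
gaussian = torch.exp(-0.5 * (omega * (xs - xs[center])) ** 2)

Up to an overall scale, an offset and the linear x^T x̃ terms, the central slice follows the Gaussian profile, which is the behavior the closed-form derivations below predict.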
D.3.1 SIMPLE SINUSOIDAL NETWORK As demonstrated in the previous section, simple sinusoidal networks produce simpler NNGP and NTK kernels due to their Gaussian initialization. We thus begin this section by now analyzing SSNs first, starting with their general NNGP. Theorem 15. SSN NNGP. For a simple sinusoidal network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, f (L) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(L)(x, x̃), recursively defined as Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1. Proof. We will proceed by induction on the depth L, demonstrating the NNGP for successive layers as n1, . . . , nL → ∞ sequentially. To demonstrate the base case L = 1, let us rearrange Σ(1) from Theorem 12 in order to express it in terms of inner products, Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 [ e− ω2 2 (x T x−2xT x̃+x̃T x̃) − e− ω2 2 (x T x+2xT x̃+x̃T x̃)e−2ω 2 ] + 1 = 1 2 [ e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]+ω2(xT x̃+1) − e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]−ω2(xT x̃+1) ] + 1. Given the definition of Σ(0), this is equivalent to 1 2 e− 1 2 (Σ (0)(x,x)+Σ(0)(x̃,x̃)) ( eΣ (0)(x,x̃) − e−Σ (0)(x,x̃) ) + 1, which concludes this case. Now given the inductive hypothesis, as n1, . . . , nL−1 → ∞ we have that the first L− 1 layers define a network f (L−1) with NNGP given by Σ(L−1)(x, x̃). Now it is left to show that as nL → ∞, we get the NNGP given by Σ(L). Following the same argument in Theorem 12, the network f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L) constitutes a Gaussian process given the outputs of the previous layer, due to the distributions of W (L) and b(L). Its covariance is given by σ2WΣ (L) + σ2b = Σ (L) + 1, where Σ(L)(x, x̃) = lim nL→∞ [ 1 nL 〈 sin ( f (L−1)(x) ) , sin ( f (L−1)(x̃) )〉] = lim nL→∞ 1 nL nL∑ j=1 sin ( f (L−1)(x) ) j sin ( f (L−1)(x̃) ) j . By inductive hypothesis, f (L−1) is a Gaussian process Σ(L−1)(x, x̃). Thus by the LLN the limit above equals E(u,v)∼N(0,Σ(L−1)(x,x̃)) [sin(u)sin(v)] . Omitting the distribution from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiu − e−iu ) 1 2i ( eiv − e−iv )] = −1 4 [ E [ ei(u+v) ] − E [ ei(u−v) ] − E [ e−i(u−v) ] + E [ e−i(u+v) ]] . Since u and v are jointly Gaussian, p = u+ v and m = u− v are also Gaussian, with mean 0 and variance σ2p = σ 2 u + σ 2 v + 2Cov[u, v] = Σ (L−1)(x, x) + Σ(L−1)(x̃, x̃) + 2Σ(L−1)(x, x̃), σ2m = σ 2 u + σ 2 v − 2Cov[u, v] = Σ(L−1)(x, x) + Σ(L−1)(x̃, x̃)− 2Σ(L−1)(x, x̃). We can now rewriting the expectations in terms of normalized variables −1 4 [ Ez∼N (0,1) [ eiσpz ] − Ez∼N (0,1) [ eiσmz ] − Ez∼N (0,1) [ e−iσmz ] + Ez∼N (0,1) [ e−iσpz ]] . Applying Proposition 7 to each expectation, we get 1 2 [ e− 1 2σ 2 m − e− 12σ 2 p ] = 1 2 [ e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)−2Σ(L−1)(x,x̃)) − e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)+2Σ(L−1)(x,x̃)) ] = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃)) ) Unrolling the definition beyond L = 1 leads to expressions that are difficult to parse. However, without unrolling, we can rearrange the terms in the NNGP above as Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1 = 1 2 [ e− 1 2 (Σ (L−1)(x,x)−2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) − e− 1 2 (Σ (L−1)(x,x)+2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) ] + 1. 
Since the covariance matrix Σ(L−1) is positive semi-definite, we can observe that the exponent expressions can be reformulated into a quadratic forms analogous to the ones in Theorem 12. We can thus observe that the same structure is essentially preserved through the composition of layers, except for the ω factor present in the first layer. Moreover, given this recursive definition, since the NNGP at any given depth L is a function only of the preceding kernels, the resulting kernel will also be shift-invariant. Let us now derive the Σ̇ kernel, required for the NTK. Lemma 16. For ω ∈ R, Σ̇(L)(x, x̃) : Rn0 × Rn0 → R, is given by Σ̇(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) + e−Σ (L−1)(x,x̃) ) + 1. Proof. The proof follows the same pattern as Theorem 15, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. As done in the previous section, it would be simple to now derive the full NTK for a simple sinusoidal network of arbitrary depth by applying Theorem 6 with the NNGP kernels from above. However, there is not much to be gained by writing the convoluted NTK expression explicitly, beyond what we have already gleaned from the NNGP above. Nevertheless, some insight can be gained from the recursive expression of the NTK itself, as defined in Theorem 6. First, note that, as before, for practical values of ω, Σ̇ ≈ Σ, both converging to simply a single Gaussian kernel. Thus, our NTK recursion becomes Θ(L)(x, x̃) ≈ ( Θ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). Now, note that when expanded, the form of this NTK recursion is essentially as a product of the Gaussian Σ kernels, Θ(L)(x, x̃) ≈ (( . . . (( Σ(0)(x, x̃) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃) = (( . . . (( ω2 ( xT x̃+ 1 ) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). (6) We know that the product of two Gaussian kernels is Gaussian and thus the general form of
1. What is the focus and contribution of the paper on simplified sinusoidal networks? 2. What are the strengths of the proposed approach, particularly in terms of ease of implementation and theoretical analysis? 3. What are the weaknesses of the paper regarding comparisons with other works and potential applications? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a simplified version of the sinusoidal networks originally introduced as SIRENs, which allows for easier implementation and theoretical analysis. The proposed sinusoidal networks outperform SIRENs in implicit representation learning tasks. The paper analyzes the proposed sinusoidal networks from an NTK perspective, showing that the NTK approximates a low-pass filter with adjustable bandwidth, and demonstrates that the performance of a sinusoidal network can be optimized by tuning the bandwidth implied by the NTK approximation. Strengths And Weaknesses Strengths: The modification over the original SIREN networks is simple, effective and easier to analyze. The theoretical framework is solid and clear. It is a significant discovery that sinusoidal networks work approximately as low-pass filters with bandwidth controlled by ω, which can be utilized to design the initialization parameters of sinusoidal networks. Weaknesses: This paper compared the Simple Sinusoidal Network with the original implementation of SIREN using different initialization strategies. I wonder how much the Kaiming normal initialization and the different network models each contribute. What happens if the proposed initialization method is applied to the original sinusoidal networks? Will it achieve comparable or worse performance? One question: You claimed that the modified sinusoidal networks are easier to analyze theoretically, but from Corollary 12 it seems that it is possible to derive the NTK expression for the original SIREN networks as well. In what respect does the modified model offer better potential for analysis? Clarity, Quality, Novelty And Reproducibility Quality and Clarity: Great. The writing and organization of this paper are clear and well structured. The basic idea and methodology are easy for readers to understand, and the theoretical analysis and its supporting framework are solid. Originality: Good. This paper is mainly based on two existing foundations, sinusoidal networks and NTK theory. The authors modified the original sinusoidal network to make it easier to analyze, and it outperforms the original one empirically. Most of the computation in the NTK analysis uses the existing NTK framework, but the results are interesting and striking enough. Besides, the theoretical results match the experiments closely, supporting the correctness of the theoretical analysis.
ICLR
Title Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth Abstract Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. N/A Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. 1 INTRODUCTION Sinusoidal networks are neural networks with sine nonlinearities, instead of the traditional ReLU or hyperbolic tangent. They have been recently popularized, particularly for applications in implicit representation models, in the form of SIRENs (Sitzmann et al., 2020). However, despite their popularity, many aspects of their behavior and comparative advantages are not yet fully understood. Particularly, some initialization and parametrization choices for sinusoidal networks are often defined arbitrarily, without a clear understanding of how to optimize these settings in order to maximize performance. In this paper, we first propose a simplified version of such sinusoidal networks, that allows for easier implementation and theoretical analysis. We show that these simple sinusoidal networks can match and outperform SIRENs in implicit representation learning tasks, such as fitting videos, images and audio signals. We then analyze sinusoidal networks from a neural tangent kernel (NTK) perspective (Jacot et al., 2018), demonstrating that their NTK approximates a low-pass filter with adjustable bandwidth. We confirm, through an empirical analysis this theoretically predicted behavior also holds approximately in practice. We then use the insights from this analysis to inform the choices of initialization and parameters for sinusoidal networks. We demonstrate we can optimize the performance of a sinusoidal network by tuning the bandwidth of its kernel to the maximum frequency present in the input signal being learned. 
Finally, we apply these insights in practice, demonstrating that “well tuned” sinusoidal networks outperform other networks in learning implicit representation models with good interpolation outside the training points, and in learning the solution to differential equations. 2 BACKGROUND AND RELATED WORK Sinusoidal networks. Sinusoidal networks have been recently popularized for implicit modelling tasks by sinusoidal representation networks (SIRENs) (Sitzmann et al., 2020). They have also been evaluated for physics-informed learning, demonstrating promising results in a series of domains (Raissi et al., 2019b; Song et al., 2021; Huang et al., 2021b;a; Wong et al., 2022). Among the benefits of such networks is the fact that the mapping of inputs through an (initially) random linear layer followed by a sine function is mathematically equivalent to a transformation to a random Fourier basis, rendering them close to networks with Fourier feature transforms (Tancik et al., 2020; Rahimi & Recht, 2007), and possibly able to address spectral bias (Basri et al., 2019; Rahaman et al., 2019; Wang et al., 2021). Sinusoidal networks also have the property that the derivative of their outputs is given simply by another sinusoidal network, due to the fact that the derivative of sine function is a phase-shifted sine. Neural tangent kernel. An important prior result to the neural tangent kernel (NTK) is the neural network Gaussian process (NNGP). At random initialization of its parameters θ, the output function of a neural network of depth L with nonlinearity σ, converges to a Gaussian process, called the NNGP, as the width of its layers n1, . . . , nL → ∞. (Neal, 1994; Lee et al., 2018). This result, though interesting, does not say much on its own about the behavior of trained neural networks. This role is left to the NTK, which is defined as the kernel given by Θ(x, x̃) = ⟨∇θfθ(x),∇θfθ(x̃)⟩. It can be shown that this kernel can be written out as a recursive expression involving the NNGP. Importantly, Jacot et al. (2018) demonstrated that, again as the network layer widths n1, . . . , nL → ∞, the NTK is (1) deterministic at initialization and (2) constant throughout training. Finally, it has also been demonstrated that under some assumptions on its parametrization, the output function of the trained neural network fθ converges to the kernel regression solution using the NTK (Lee et al., 2020; Arora et al., 2019). In other words, under certain assumptions the behavior of a trained deep neural network can be modeled as kernel regression using the NTK. Physics-informed neural networks. Physics-informed neural networks (Raissi et al., 2019a) are a method for approximating the solution to differential equations using neural networks (NNs). In this method, a neural network û(t, x; θ), with learned parameters θ, is trained to approximate the actual solution function u(t, x) to a given partial differential equation (PDE). Importantly, PINNs employ not only a standard “supervised” data loss, but also a physics-informed loss, which consists of the differential equation residual N . Thus, the training loss consists of a linear combination of two loss terms, one directly supervised from data and one informed by the underlying differential equations. 3 SIMPLE SINUSOIDAL NETWORKS There are many details that complicate the practical implementation of current sinusoidal networks. 
We aim to propose a simplified version of such networks in order to facilitate theoretical analysis and practical implementation, by removing such complications. As an example we can look at SIRENs, which have their layer activations defined as fl(x) = sin(ω(Wlx+ bl)). Then, in order to cancel the ω factor, layers after the first one have their weight initialization follow a uniform distribution with range [− √ 6/n ω , √ 6/n ω ], where n is the size of the layer. Unlike the other layers, the first layer is sampled from a uniform distribution with range [−1/n, 1/n]. We instead propose a simple sinusoidal network, with the goal of formulating an architecture that mainly amounts to substituting its activation functions by the sine function. We will, however, keep the ω parameter, since (as we will see in future analyses) it is in fact a useful tool for allowing the network to fit inputs of diverse frequencies. The layer activation equations of our simple sinusoidal network, with parameter ω, are defined as f1(x) = sin(ω (W1x+ b1)), fl(x) = sin(Wlx+ bl), l > 1. (1) Finally, instead of utilizing a uniform initialization as in SIRENs (with different bounds for the first and subsequent layers), we propose initializing all parameters in our simple sinusoidal network using a default Kaiming (He) normal initialization scheme. This choice not only greatly simplifies the initialization scheme of the network, but it also facilitates theoretical analysis of the behavior of the network under the NTK framework, as we will see in Section 4. Analysis of the initialization scheme. The initialization scheme proposed above differs from the one implemented in SIRENs. We will now show that this particular choice of initialization distribution preserves the variance of the original proposed SIREN initialization distribution. As a consequence, the original theoretical justifications for its initialization scheme still hold under this activation, namely that the distribution of activations across layers are stable, well-behaved and shift-invariant. Due to space constraints, proofs are presented in Appendix A. Moreover, we also demonstrate empirically that these properties are maintained in practice. Lemma 1. Given any c, for X ∼ N ( 0, 13c 2 ) and Y ∼ U (−c, c), we have Var[X] = Var[Y ] = 13c 2. This simple Lemma and relates to Lemma 1.7 in Sitzmann et al. (2020), showing that the initialization we propose here has the same variance as the one proposed for SIRENs. Using this result we can translate the result from the main Theorem 1.8 from Sitzmann et al. (2020), which claims that the SIREN initialization indeed has the desired properties, to our proposed initialization:1 For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2n , where n is a layer’s fan-in. Empirical evaluation of initialization scheme. To empirically demonstrate the proposed simple initialization scheme preserves the properties from the SIREN initialization scheme, we perform the same analysis performed by Sitzmann et al. (2020). We observe that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. 
These results are reported in detail in the Appendix B. 3.1 COMPARISON TO SIREN In order to demonstrate our simplified sinusoidal network has comparable performance to a standard SIREN, in this section we reproduce the main results from Sitzmann et al. (2020). Table 1 compiles the results for all experiments. In order to be fair, we compare the simplified sinusoidal network proposed in this chapter with both the results directly reported in Sitzmann et al. (2020), and our own reproduction of the SIREN results (using the same parameters and settings as the original). We can see from the numbers reported in the table that the performance of the simple sinusoidal network proposed in this chapter matches the performance of the SIREN in all cases, in fact surpassing it in most of the experiments. Qualitative results are presented in Appendix C. It is important to note that this is not a favorable setting for simple sinusoidal networks, given that the training durations were very short. The SIREN favors quickly converging to a solution, though it does not have as strong asymptotic behavior. This effect is likely due to the multiplicative factor applied to later layers described in Section 3. We observe that indeed in almost all cases we can compensate for this effect by simply increasing the learning rate in the Adam optimizer (Kingma & Ba, 2014). Finally, we observe that besides being able to surpass the performance of SIREN in most cases in a short training regimen, the simple sinusoidal network performs even more strongly with longer training. To demonstrate this, we repeated some experiments from above, but with longer training durations. These results are shown in Table 4 in Appendix C. 4 NEURAL TANGENT KERNEL ANALYSIS In the following we derive the NTK for sinusoidal networks. This analysis will show us that the sinusoidal networks NTK is approximately a low-pass filter, with its bandwidth directly defined by ω. We support these findings with an empirical analysis as well in the following section. Finally, we demonstrate how the insights from the NTK can be leveraged to properly “tune” sinusoidal networks to the spectrum of the desired signal. Full derivations and extensive, detailed analysis are left to Appendix D. The NTK for a simple sinusoidal network with a single hidden layer is presented in the theorem below. The NTK for siren with 1 and 6 hidden layers are shown in Figure 1. Theorem 2. Shallow SSN NTK. For a simple sinusoidal network with one hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. 1We note that despite being named Theorem 1.8 in Sitzmann et al. (2020), this result is not fully formal, due to the Gaussian distribution being approximated without a formal analysis of this approximation. Additionally, a CLT result is employed which assumes infinite width, which is not applicable in this context. We thus refrain from calling our equivalent result a theorem. Nevertheless, to the extent that the argument is applicable, it would still hold for our proposed initialization, due to its dependence solely on the variance demonstrated in Lemma 1 above. We can see that for values of ω > 2, the second term quickly vanishes due to the e−2ω 2 factor. This leaves us with only the first term, which has a Gaussian form. 
Due to the linear scaling term xT x̃, this is only approximately Gaussian, but the approximation improves as ω increases. We can thus observe that this kernel approximates a Gaussian kernel, which is a low-pass filter, with its bandwidth defined by ω. Figure 1 presents visualizations for NTKs for the simple sinusoidal network, compared to a (scaled) pure Gaussian with variance ω−2, showing there is a close match between the two. If we write out the NTK for networks with more than one hidden layer, it quickly becomes un-interpretable due to the recursive nature of the NTK definition (see Appendix D). However, as shown empirically in Figure 1, these kernels are still approximated by Gaussians with variance ω−2. We also observe that the NTK for a SIREN with a single hidden layer is analogous, but with a sinc form, which is also a low-pass filter. Theorem 3. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. For deeper SIREN networks, the kernels defined by the later layers are in fact Gaussian too, as discussed in Appendix D. This leads to an NTK that is approximated by a product of a sinc function and a Gaussian. These SIREN kernels are also presented in Figure 1. 5 EMPIRICAL ANALYSIS As shown above, neural tangent kernel theory suggests that sinusoidal networks work as low-pass filters, with their bandwidth controlled by the parameter ω. In this section, we demonstrate empirically that we can observe this predicted behavior even in real sinusoidal networks. For this experiment, we generate a 512× 512 monochromatic image by super-imposing two orthogonal sinusoidal signals, each consisting of a single frequency, f(x, y) = cos(128πx) + cos(32πy). This function is sampled in the domain [−1, 1]2 to generate the image on the left of Figure 2. To demonstrate what we can expect from applying low-pass filters of different bandwidths to this signal, we perform a discrete Fourier transform (DFT), cut off frequencies above a certain value, and perform an inverse transform to recover the (filtered) image. The MSE of the reconstruction, as a function of the cutoff frequency, is shown in Figure 3. We can see that due to the simple nature of the signal, containing only two frequencies, there are only three loss levels. If indeed the NTK analysis is correct and sinusoidal networks act as low-pass filters, with bandwidth controlled by ω, we should be able to observe similar behavior with sinusoidal networks with different ω values. We plot the final training loss and training curves for sinusoidal networks with different ω in Figure 3. We can observe, again, that there are three consistent loss levels following the magnitude of the ω parameter, in line with the intuition that the sinusoidal network is working as a low-pass filter. This is also observable in Figure 2, where we see example reconstructions for networks of various ω values after training. However, unlike with the DFT low-pass filter (which does not involve any learning), we see in Figure 3 that during training some sinusoidal networks shift from one loss level to a lower one. This demonstrates that sinusoidal networks differ from true low-pass filters in that their weights can change, which implies that the bandwidth defined by ω also changes with learning. 
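For reference, the DFT low-pass baseline used above takes only a few lines to reproduce. The sketch below assumes NumPy and measures the cutoff in cycles per image; that cutoff convention is our assumption rather than a detail stated for Figure 3. In this convention the two components of the signal sit at frequencies 32 and 128, and the Nyquist frequency is 256.

import numpy as np

n = 512
coords = np.linspace(-1.0, 1.0, n, endpoint=False)
x, y = np.meshgrid(coords, coords, indexing="ij")
img = np.cos(128 * np.pi * x) + np.cos(32 * np.pi * y)

# Frequency bins in cycles per image along each axis.
freqs = np.fft.fftfreq(n, d=1.0 / n)
fx, fy = np.meshgrid(freqs, freqs, indexing="ij")

def lowpass_mse(cutoff):
    spectrum = np.fft.fft2(img)
    mask = (np.abs(fx) <= cutoff) & (np.abs(fy) <= cutoff)
    recon = np.fft.ifft2(spectrum * mask).real
    return np.mean((recon - img) ** 2)

for cutoff in [16, 64, 192]:
    print(cutoff, lowpass_mse(cutoff))

The three cutoffs land on the three loss plateaus discussed above: neither component passes, only the lower-frequency component passes, or both components pass.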
We know the weights W1 in the first layer of a sinusoidal network, given by f1(x) = sin ( ω ·WT1 x+ b1 ) , will change with training. Empirically, we observed that the spectral norm of W1 increases throughout training for small ω values. We can interpret that as the overall magnitude of the term ω ·WT1 x increasing, which is functionally equivalent to an increase in ω itself. In Figure 3, we observe that sinusoidal networks with smaller values of ω take a longer time to achieve a lower loss (if at all). Intuitively, this happens because, due to the effect described above, lower ω values require a larger increase in magnitude by the weights W1. Given that all networks were trained with the same learning rate, the ones with a smaller ω require their weights to move a longer distance, and thus take more training steps to achieve a lower loss. 6 TUNING ω As shown in the previous section, though the bandwidth of a network can change throughout training, the choice of ω still influences how easily and quickly (if at all) it can learn a given signal. The value of the ω parameter is thus crucial for the learning of the network. Despite this fact, in SIRENs, for example, this value is not adjusted for each task (except for the audio fitting experiments), and is simply set empirically to an arbitrary value. In this section, we seek to justify a proper initialization for this parameter, such that it can be chosen appropriately for each given task. Moreover, it is often not the case that we simply want to fit only the exact training samples but instead want to find a good interpolation (i.e., generalize well). Setting ω too high, and thus allowing the network to model frequencies that are much larger than the ones present in the actual signal is likely to cause overfitting. This is demonstrated empirically in Figure 4. Consequently, we want instead to tune the network to the highest frequency present in the signal. However, we do not always have the knowledge of what is the value of the highest frequency in the true underlying signal of interest. Moreover, we have also observed that, since the network learns and its weights change in magnitude, that value in fact changes with training. Therefore, the most we can hope for is to have a good heuristic to guide the choice of ω. Nevertheless, having a reasonable guess for ω is also likely sufficient for good performance, precisely due to the ability of the network to adapt during training and compensate for a possibly slightly suboptimal choice. Choosing ω from the Nyquist frequency. One source of empirical information on the relationship between ω and the sinusoidal network’s “learnable frequencies” is the previous section’s empirical analysis. Taking into account the scaling, we can see from Fig. 3 that around ω = 16 the network starts to be able to learn the full signal (freq. 128). We can similarly note that at about ω = 4 the sinusoidal network starts to be able to efficiently learn a signal with frequency 32, but not the one with frequency 128. This scaling suggests a heuristic of setting ω to about 1/8 of the signal’s maximum frequency. For natural signals, such as pictures, it is common for frequencies up to the Nyquist frequency of the discrete sampling to be present. We provide an example for the “camera” image we have utilized so far in Figure 23 in Appendix E, where we can see that the reconstruction loss through a low-pass filter continues to decrease significantly up to the Nyquist frequency for the image resolution. 
In light of this information, analyzing the choices of ω for the experiments in Section 3.1 again suggests that ω should be set around 1/8 of the Nyquist frequency of the signal. These values of ω are summarized in Table 2 in the “Fitting ω” column. For example, the image fitting experiment shows that, for an image of shape 512× 512 (and thus Nyquist frequency of 256 for each dimension), this heuristic suggests an ω value of 256/8 = 32, which is the value found to work best empirically through search. We find similar results for the audio fitting experiments. The audio signals used in the audio fitting experiment contained approximately 300, 000 and 500, 000 points, and thus maximum frequencies of approximately 150, 00 and 250, 000. This suggests reasonable values for ω of 18, 750 and 31, 250, which are close to the ones found empirically to work well. In examples such as the video fitting experiments, in which each dimension has a different frequency, it is not completely clear how to pick a single ω to fit all dimensions. This suggests that having independent values of ω for each dimension might be useful for such cases, as discussed in the next section. Finally, when performing the generalization experiments in Section 7, we show the best performing ω ended up being half the value of the best ω used in the fitting tasks from Section 3.1. This follows intuitively, since for the generalization task we set apart half the points for training and the other half for testing, thus dividing the maximum possible frequency in the training sample in half, providing further evidence of the relationship between ω and the maximum frequency in the input signal. Multi-dimensional ω. In many problems, such as the video fitting and PDE problems, not only is the input space multi-dimensional, it also contains time and space dimensions (which are additionally possibly of different shape). This suggests that employing a multi-dimensional ω, specifying different frequencies for each dimension might be beneficial. In practice, if we employ a scaling factor λ = [λ1 λ2 . . . λd] T , we have the first layer of the sinusoidal network given by f1(x) = sin(ω (W1 (λ⊙ x) + b1)) = sin(W1 (Ω⊙ x) + ωb1), (2) where Ω = [λ1ω λ2ω . . . λdω] T works as a multi-dimensional ω. In the following experiments, we employ this approach to three-dimensional problems, in which we have time and differently shaped space domains, namely the video fitting and physics-informed neural network PDE experiments. For these experiments, we report the ω in the form of the (already scaled) Ω vector for simplicity. Choosing ω from available information Finally, in many problems we do have some knowledge of the underlying signal we can leverage, such as in the case of inverse problems. For example, let’s say we have velocity fields for a fluid and we are trying to solve for the coupled pressure field and the Reynolds number using a physics-informed neural network (as done in Section 7). In this case, we have access to two components of the solution field. Performing a Fourier transform on the training data we have can reveal the relevant spectrum and inform our choice of ω. If the maximum frequency in the signal is lower than the Nyquist frequency implied by the sampling, this can lead to a more appropriate choice of ω than suggested purely from the sampling. 
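The heuristics in this section can be packaged into a small helper that proposes a per-dimension Ω, either from the sampling resolution alone or from the spectrum of whatever training data is available. The sketch below assumes NumPy; the function names, the 1/8 factor exposed as an argument and the 99% energy threshold are our illustrative choices rather than prescriptions from the text.

import numpy as np

def omega_from_nyquist(n_samples_per_dim, factor=8.0):
    # Nyquist frequency per dimension is n/2 cycles over the domain;
    # the heuristic sets omega to roughly 1/8 of that.
    return np.array([n / 2.0 / factor for n in n_samples_per_dim])

def omega_from_data(signal, factor=8.0, energy_threshold=0.99):
    # Estimate the highest relevant frequency per dimension from the data's
    # spectrum (smallest band holding `energy_threshold` of the energy),
    # then apply the same 1/8 scaling.
    omegas = []
    for axis in range(signal.ndim):
        spec = np.abs(np.fft.rfft(signal, axis=axis)) ** 2
        other = tuple(a for a in range(signal.ndim) if a != axis)
        energy = spec.sum(axis=other) if other else spec
        cum = np.cumsum(energy) / np.sum(energy)
        max_freq = int(np.searchsorted(cum, energy_threshold))
        omegas.append(max_freq / factor)
    return np.array(omegas)

# Example: a 512 x 512 image gives omega of about 32 per dimension, matching
# the value used in the image fitting experiments.
print(omega_from_nyquist((512, 512)))

For multi-dimensional problems with differently sized axes, the resulting vector plays the role of Ω in Equation 2, with each entry scaling the corresponding input dimension.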
7 EXPERIMENTS In this section, we first perform experiments to demonstrate how the optimal value of ω influences the generalization error of a sinusoidal network, following the discussion in Section 6. After that, we demonstrate that sinusoidal networks with properly tuned ω values outperform traditional physicsinformed neural networks in classic PDE tasks. 7.1 EVALUATING GENERALIZATION We now evaluate the simple sinusoidal network generalization capabilities. To do this, in all experiments in this section we segment the input signal into training and test sets using a checkerboard pattern – along all axis-aligned directions, points alternate between belonging to train and test set. We perform audio, image and video fitting experiments. When performing these experiments, we search for the best performing ω value for generalization (defined as performance on the held-out points). We report the best values on Table 2. We observe that, as expected from the discussion in Section 6, the best performing ω values follow the heuristic discussed above, and are in fact half the best-performing value found in the previous fitting experiments from Section 3.1, confirming our expectation. This is also demonstrated in the plot in Figure 4. Using a higher ω leads to overfitting and poor generalization outside the training points. This is demonstrated in Figure 4, in which we can see that choosing an appropriate ω value from the heuristics described previously leads to a good fit and interpolation. Setting ω too high leads to interpolation artifacts, due to overfitting of spurious high-frequency components. For the video signals, which have different size along each axis, we employ a multi-dimensional ω. We scale each dimension of ω proportional to the size of the input signal along the corresponding axis. 7.2 SOLVING DIFFERENTIAL EQUATIONS Finally, we apply our analysis to physics-informed learning. We compare the performance of simple sinusoidal networks to the tanh networks that are commonly used for these tasks. Results are summarized in Table 3. Details for the Schrödinger and Helmholtz experiments are presented in Appendix E. 7.2.1 BURGERS EQUATION (IDENTIFICATION) This experiment reproduces the Burgers equation identification experiment from Raissi et al. (2019a). Here we are identifying the parameters λ1 and λ2 of a 1D Burgers equation, ut+λ1uux−λ2uxx = 0, given a known solution field. The ground truth value of the parameters are λ1 = 1.0 and λ2 = 0.01/π. In order to find a good value for ω, we perform a low-pass reconstruction of the solution as before. We can observe in Figure 5 that the solution does not have high bandwidth, with most of the loss being minimized with only the lower half of the spectrum. Note that the sampling performed for the training data (N = 2, 000) is sufficient to support such frequencies. This suggests an ω value in the range 8− 10. Indeed, we observe that ω = 10 gives the best identification of the desired parameters, with errors of 0.0071% and 0.0507% for λ1 and λ2 respectively, against errors of 0.0521% and 0.4522% of the baseline. This value of ω also achieves the lowest reconstruction loss against the known solution, with an MSE of 8.034 · 10−4. Figure 5 shows the reconstructed solution using the identified parameters. 7.2.2 NAVIER-STOKES (IDENTIFICATION) This experiment reproduces the Navier-Stokes identification experiment from Raissi et al. (2019a). 
7.2.2 NAVIER-STOKES (IDENTIFICATION)

This experiment reproduces the Navier-Stokes identification experiment from Raissi et al. (2019a). In this experiment, we identify the parameters λ1, λ2 and the pressure field p of the 2D Navier-Stokes equations, given by ∂u/∂t + λ1 (u · ∇) u = −∇p + λ2 ∇²u, from known velocity fields u and v. The ground-truth values of the parameters are λ1 = 1.0 and λ2 = 0.01. Unlike the 1D Burgers case, here the number of points sampled for the training set (N = 5,000) is not high compared to the size of the full solution volume, and is thus the limiting factor for the bandwidth of the input signal. Given the random sampling of points from the full solution, the generalized sampling theorem applies. The original solution has dimensions of 100 × 50 × 200. With the 5,000 randomly sampled points, the average sampling rate per dimension is approximately 17, corresponding to a Nyquist frequency of approximately 8.5. Furthermore, given the multi-dimensional nature of this problem, with both spatial and temporal axes, we employ an independent scaling of ω for each dimension. The analysis above suggests an average ω ≈ 1, with the dimensions of the problem suggesting scaling factors of [0.5 1 2]^T. Indeed, we observe that Ω = [0.3 0.6 1.2]^T gives the best results, with errors of 0.0038% and 1.782% for λ1 and λ2 respectively, against errors of 0.0046% and 2.093% for the baseline (on clean data, the identified equations are u_t + 1.000 (u u_x + v u_y) = −p_x + 0.01018 (u_xx + u_yy) and v_t + 1.000 (u v_x + v v_y) = −p_y + 0.01018 (v_xx + v_yy)). Figure 6 shows the identified pressure field. Note that, given the nature of the problem, this field can only be identified up to a constant.

8 CONCLUSION

In this work, we have presented a simplified formulation for sinusoidal networks. Analysis of this architecture from the neural tangent kernel perspective, combined with empirical results, reveals that the kernel for sinusoidal networks corresponds to a low-pass filter with adjustable bandwidth. We leverage this information in order to initialize these networks appropriately, choosing their bandwidth such that it is tuned to the signal being learned. Employing this strategy, we demonstrated improved results in both implicit modelling and physics-informed learning tasks.

A SIMPLE SINUSOIDAL NETWORK INITIALIZATION

We present here the proofs for the initialization scheme of the simple sinusoidal network from Section 3. Lemma 4. Given any c, for X ∼ N(0, (1/3)c²) and Y ∼ U(−c, c), we have Var[X] = Var[Y] = (1/3)c². Proof. By definition, Var[X] = σ² = (1/3)c². For Y, we know that the variance of a uniformly distributed random variable with bounds [a, b] is given by (1/12)(b − a)². Thus, Var[Y] = (1/12)(2c)² = (1/3)c². A small numerical check of this lemma is given below.
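The following minimal snippet checks the variance identity from Lemma 4 by sampling; the bound c = √6 is an arbitrary example value.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.sqrt(6.0)     # arbitrary example bound
n = 1_000_000

x = rng.normal(0.0, np.sqrt(c ** 2 / 3.0), size=n)   # X ~ N(0, c^2 / 3)
y = rng.uniform(-c, c, size=n)                        # Y ~ U(-c, c)

print(x.var(), y.var(), c ** 2 / 3.0)                 # all three should be close to 2.0
```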
Theorem 5. For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2/n, where n is the layer's fan-in. Proof. The proof follows exactly the proof for Theorem 1.8 in Sitzmann et al. (2020), only using Lemma 4 when necessary to show that the initialization proposed here has the same variance necessary for the proof to follow.

B EMPIRICAL EVALUATION OF SSN INITIALIZATION

Here we report an empirical analysis of the initialization scheme of simple sinusoidal networks, referenced in Section 3. For this analysis we use a sinusoidal MLP with 6 hidden layers of 2048 units, and single-dimensional input and output. This MLP is initialized using the simplified scheme described above. For testing, 2^8 (256) equally spaced inputs from the range [−1, 1] are passed through the network. We then plot the histogram of activations after each linear operation (before the sine non-linearity) and after each sine non-linearity. To match the original plot, we also plot the 1D Fast Fourier Transform of all activations in a layer, and the gradient of this output with respect to each activation. These results are presented in Figure 8. The main conclusion from this figure is that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. We also reproduced the same result up to 50 layers. We then perform an additional experiment in which the exact same setup as above is employed, yet the 1D inputs are shifted by a large value (i.e., x → x + 1000). We then show the same plot as before in Figure 9. We can see that there is essentially no change from the previous plot, which demonstrates the sinusoidal network's shift-invariance in the input space, one of its important desirable properties, as discussed previously.

C EXPERIMENTAL DETAILS FOR COMPARISON TO SIREN

Below, we present qualitative results and describe experimental details for each experiment. As these are a reproduction of the experiments in Sitzmann et al. (2020), we refer to their details as well for further information.

C.1 IMAGE

In the image fitting experiment, we treat an image as a function from the spatial domain to color values, (x, y) → (r, g, b). In the case of a monochromatic image, used here, this function maps instead to one-dimensional intensity values. We try to learn a function f : R² → R, parametrized as a sinusoidal network, in order to fit such an image. Figure 7 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal network. The gradient and Laplacian of the learned function are also presented, demonstrating that higher-order derivatives are also learned appropriately. A sketch of this fitting setup is given further below.

Table 4: Comparison of the simple sinusoidal network and SIREN on some experiments, with a longer training duration. The specific durations are described below in the details for each experiment. We can see that the simple sinusoidal network has stronger asymptotic performance. Values above the horizontal center line are peak signal-to-noise ratio (PSNR), values below are mean squared error (MSE). †Audio experiments utilized a different learning rate for the first layer, see the full description below for details.

Experiment | Simple Sinusoidal Network | SIREN [ours]
Image | 54.70 | 52.43
Poisson (Gradient) | 39.51 | 38.70
Poisson (Laplacian) | 22.09 | 20.82
Video (cat) | 34.64 | 32.26
Video (bikes) | 37.71 | 34.07
Audio (Bach)† | 5.66 · 10−7 | 3.02 · 10−6
Audio (counting)† | 4.02 · 10−5 | 6.33 · 10−5

Figure 7: Top row: ground truth image. Bottom: reconstructed with the sinusoidal network. (Columns: image, gradient, Laplacian.)
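The sketch below illustrates one way to set up this image fitting: a coordinate grid over [−1, 1]², a simple sinusoidal MLP as in Equation 1 with the Kaiming-style normal initialization from Section 3, and a plain MSE objective, using the hyperparameters listed in the training-parameters paragraph that follows. It is a schematic reconstruction rather than the exact experimental code; in particular, the use of the scikit-image "camera" test image and the reading of "5-layer MLP" as five linear layers are our own assumptions.

```python
import math
import torch
import torch.nn as nn
from skimage import data  # assumed source for a 512 x 512 monochromatic test image

class SimpleSinusoidalNet(nn.Module):
    """Equation 1: sin(omega (W1 x + b1)) in the first layer, sin(W x + b) afterwards."""
    def __init__(self, in_dim, hidden, out_dim, n_layers, omega):
        super().__init__()
        self.omega = omega
        dims = [in_dim] + [hidden] * (n_layers - 1) + [out_dim]
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers)])
        for layer in self.layers:  # Kaiming-style normal initialization (Section 3)
            nn.init.normal_(layer.weight, std=math.sqrt(2.0 / layer.in_features))
            nn.init.zeros_(layer.bias)

    def forward(self, x):
        h = torch.sin(self.omega * self.layers[0](x))
        for layer in self.layers[1:-1]:
            h = torch.sin(layer(h))
        return self.layers[-1](h)

# Monochromatic 512 x 512 image, with pixel coordinates mapped to [-1, 1]^2.
img = torch.tensor(data.camera() / 255.0, dtype=torch.float32)
grid = torch.meshgrid(torch.linspace(-1, 1, img.shape[0]),
                      torch.linspace(-1, 1, img.shape[1]), indexing="ij")
coords = torch.stack(grid, dim=-1).reshape(-1, 2)
target = img.reshape(-1, 1)

net = SimpleSinusoidalNet(in_dim=2, hidden=256, out_dim=1, n_layers=5, omega=32.0)
opt = torch.optim.Adam(net.parameters(), lr=3e-3)
for step in range(10_000):
    # Full-batch steps over all pixels; one may subsample pixels to reduce memory.
    opt.zero_grad()
    loss = ((net(coords) - target) ** 2).mean()
    loss.backward()
    opt.step()
```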
Training parameters. The input image used is 512 × 512, mapped to an input domain [−1, 1]². The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 32. The Adam optimizer is used with a learning rate of 3 · 10−3, trained for 10,000 steps in the short-duration training results and for 20,000 steps in the long-duration training results.

C.2 POISSON

These tasks are similar to the image fitting experiment, but instead of supervising directly on the ground truth image, the fitted sinusoidal network is supervised on its derivatives, constituting a Poisson problem. We perform the experiment by supervising both on the input image's gradient and Laplacian, and report the reconstruction of the image and its gradients in each case. Figure 10 shows the image used in this experiment, and the reconstructions from the fitted sinusoidal networks. Since reconstruction from derivatives can only be correct up to a scaling factor, we scale the reconstructions for visualization. As in the original SIREN results, we can observe that the reconstruction from the gradient is of higher quality than the one from the Laplacian.

Training parameters. The input image used is of size 256 × 256, mapped from an input domain [−1, 1]². The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For both experiments, the parameter ω is set to 32 and the Adam optimizer is used. For the gradient experiments, in short and long training results, a learning rate of 1 · 10−4 is used, trained for 10,000 and 20,000 steps respectively. For the Laplace experiments, in short and long training results, a learning rate of 1 · 10−3 is used, trained for 10,000 and 20,000 steps respectively.

C.3 VIDEO

These tasks are similar to the image fitting experiment, but we instead fit a video, which also has a temporal input dimension, (t, x, y) → (r, g, b). We learn a function f : R³ → R³, parametrized as a sinusoidal network, in order to fit such a video. Figures 11 and 12 show sampled frames from the videos used in this experiment, and their respective reconstructions from the fitted sinusoidal networks.

Training parameters. The cat video contains 300 frames of size 512 × 512. The bikes video contains 250 frames of size 272 × 640. These signals are fitted from the input domain [−1, 1]³. The sinusoidal network used is a 5-layer MLP with hidden size 1024, following the proposed initialization scheme above. The parameter ω is set to 8. The Adam optimizer is used, with a learning rate of 3 · 10−4, trained for 100,000 steps in the short-duration training results and for 200,000 steps in the long-duration training results.

C.4 AUDIO

In the audio experiments, we fit an audio signal in the temporal domain as a waveform t → w. We learn a function f : R → R, parametrized as a sinusoidal network, in order to fit the audio. Figure 13 shows the waveforms for the input audios and the reconstructed audios from the fitted sinusoidal network. In this experiment, we utilized a lower learning rate for the first layer compared to the rest of the network. This was used to compensate for the very large ω values used (in the 15,000-30,000 range, compared to the 10-30 range for all other experiments). One might argue that this is re-introducing complexity, counteracting the purpose of the proposed simplification.
However, we would claim (1) that this is only limited to cases with extremely high ω, which was not present in any case except for fitting audio waves, and (2) that adjusting the learning rate for an individual layer is still an approach that is simpler and more in line with standard machine learning practice compared to multiplying all layers by a scaling factor and then adjusting their initialization variance by the same amount. Training parameters. Both audios use a sampling rate of 44100Hz. The Bach audio is 7s long and the counting audio is approximately 12s long. These signals are fitted from the input domain [−1, 1]. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For short and long training results, training is performed for 5, 000 and 50, 000 steps respectively. For the Bach experiment, the parameter ω is set to 15, 000. The Adam optimizer is used, with a general learning rate of 3 · 10−3. A separate learning rate of 1 · 10−6 is used for the first layer to stabilize training due to the large ω value. For the counting experiment, the parameter ω is set to 32, 000. The Adam optimizer is used, with a general learning rate of 1 · 10−3 and a first layer learning rate of 1 · 10−6. C.5 HELMHOLTZ EQUATION In this experiment we solve for the unknown wavefield Φ : R2 → R2 in the Helmholtz equation (∆ + k2)Φ(x) = −f(x), (3) with known wavenumber k and source function f (a Gaussian with µ = 0 and σ2 = 10−4). We solve this differential equation using a sinusoidal network supervised with the physics-informed loss ∫ Ω ∥(∆ + k2)Φ(x) + f(x)∥1dx, evaluated at random points sampled uniformly in the domain Ω = [−1, 1]2. Figure 14 shows the real and imaginary components of the ground truth solution to the differential equation and the solution recovered by the fitted sinusoidal network. Training parameters. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 16. The Adam optimizer is used, with a learning rate of 3 · 10−4 trained for 50, 000 steps. C.6 SIGNED DISTANCE FUNCTION (SDF) In these tasks we learn a 3D signed distance function. We learn a function f : R3 → R, parametrized as a sinusoidal network, to model a signed distance function representing a 3D scene. This function is supervised indirectly from point cloud data of the scene. Figures 16 and 15 show 3D renderings of the volumes inferred from the learned SDFs. Training parameters. The statue point cloud contains 4, 999, 996 points. The room point cloud contains 10, 250, 688 points. These signals are fitted from the input domain [−1, 1]3. The sinusoidal network used is a 5-layer MLP with hidden size 256 for the statue and 1024 for the room. The parameter ω is set to 4. The Adam optimizer is used, with a learning rate of 8 · 10−4 and a batch size of 1400. All models are trained for 190, 000 steps for the statue experiment and for 410, 000 steps for the room experiment. D NEURAL TANGENT KERNEL ANALYSIS AND PROOFS D.1 PRELIMINARIES In order to perform the subsequent NTK analysis, we first need to formalize definitions for simple sinusoidal networks and SIRENs. The definitions used here adhere to the common NTK analysis practices, and thus differ slightly from practical implementation. Definition 1. 
For the purposes of the following proofs, a (sinusoidal) fully-connected neural network with L hidden layers that takes as input x ∈ Rn0 , is defined as the function f (L) : Rn0 → RnL+1 , recursively given by f (0)(x) = ω ( W (0)x+ b(0) ) , f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L), where ω ∈ R. The parameters { W (j) }L j=0 have shape nj+1×nj and all have each element sampled independently either from N (0, 1) (for simple sinusoidal networks) or from U(−c, c) with some bound c ∈ R (for SIRENs). The { b(j) }L j=0 are nj+1-dimensional vectors sampled independently from N (0, Inj+1). With this definition, we now state the general formulation of the NTK, which applies in general to fully-connected networks with Lipschitz non-linearities, and consequently in particular to the sinusoidal networks studied here as well. Let us first define the NNGP, which has covariance recursively defined by Σ(L+1)(x, x̃) = Ef∼N (0,Σ(L)) [σ(f(x))σ(f(x̃))] + β2, with base case Σ(1)(x, x̃) = 1n0x T x̃+ β2, and where β gives the variance of the bias terms in the neural network layers (Neal, 1994; Lee et al., 2018). Now the NTK is given by the following theorem. Theorem 6. For a neural network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, the neural tangent kernel (NTK) of f (L) converges in probability to the deterministic kernel Θ(L) defined recursively as Θ(0)(x, x̃) = Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) , Θ(L)(x, x̃) = Θ(L−1)(x, x̃)Σ̇(L)(x, x̃) + Σ(L)(x, x̃), where { Σ(l) }L l=0 are the neural network Gaussian processes (NNGPs) corresponding to each f (l) and Σ̇(l)(x, x̃) = E(u,v)∼Σ(l−1)(x,x̃) [cos(u)cos(v)] . Proof. This is a standard general NTK theorem, showing that the limiting kernel recursively in terms of the network’s NNGPs and the previous layer’s NTK. For brevity we omit the proof here and refer the reader to, for example, Jacot et al. (2020). The only difference is for the base case Σ(0), due to the fact that we have an additional ω parameter in the first layer. It is simple to see that the neural network with 0 hidden layers, i.e. the linear model ω ( W (0)x+ b(0) ) will lead to the same Gaussian process covariance kernel as the original proof, xT x̃+ 1, only adjusted by the additional variance factor ω2. Theorem 6 demonstrates that the NTK can be constructed as a recursive function of the NTK of previous layers and the network’s NNGPs. In the following sections we will derive the NNGPs for the SIREN and the simple sinusoidal network directly. We will then use these NNGPs with Theorem 6 to derive their NTKs as well. To finalize this preliminary section, we also provide two propositions that will be useful in following proofs in this section. Proposition 7. For any ω ∈ R, x ∈ Rd, Ew∼N (0,Id) [ eiω(w T x) ] = e− ω2 2 ∥x∥ 2 2 Proof. Omitting w ∼ N (0, Id) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 iwjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− w2j 2 dwj . Completing the square, we get d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− 1 2w 2 j dwj = d∏ j=1 1√ 2π ∫ ∞ −∞ e 1 2 (i 2ω2x2j−i 2ω2x2j+2ixjwj−w 2 j)dwj = d∏ j=1 e 1 2 i 2ω2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (i 2ω2x2j−2iω 2xjwj+w 2 j)dwj = d∏ j=1 e− 1 2ω 2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (wj−iωxj) 2 dwj . 
Since the integral and its preceding factor constitute a Gaussian pdf, they integrate to 1, leaving the final result d∏ j=1 e− ω2 2 x 2 j = e− ω2 2 ∑d j=1 x 2 j = e− ω2 2 ∥xj∥ 2 2 . Proposition 8. For any c, ω ∈ R, x ∈ Rd, Ew∼Ud(−c,c) [ eiω(w T x) ] = d∏ j=1 sinc(c ωxj). Proof. Omitting w ∼ Ud(−c, c) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 wjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 ∫ c −c eiω wjxj 1 2c dwj = d∏ j=1 1 2c ∫ c −c eiω wjxjdwj . Now, focusing on the integral above, we have∫ c −c eiω wjxjdwj = ∫ c −c cos(ω wjxj)dwj + i ∫ c −c sin(ω wjxj)dwj = sin(ω wjxj) ωxj ∣∣∣∣∣ c −c − icos(ω wjxj) ωxj ∣∣∣∣∣ c −c = 2sin(c ωxj) ωxj . Finally, plugging this back into the product above, we get d∏ j=1 1 2c ∫ c −c eiω wjxjdwj = d∏ j=1 1 2c 2sin(c ωxj) ωxj = d∏ j=1 sinc(c ωxj). D.2 SHALLOW SINUSOIDAL NETWORKS For the next few proofs, we will be focusing on neural networks with a single hidden layer, i.e. L = 1. Expanding the definition above, such a network is given by f (1)(x) = W (1) 1 √ n1 sin ( ω ( W (0)x+ b(0) )) + b(1). (4) The advantage of analysing such shallow networks is that their NNGPs and NTKs have formulations that are intuitively interpretable, providing insight into their characteristics. We later extend these derivations to networks of arbitrary depth. D.2.1 SIREN First, let us derive the NNGP for a SIREN with a single hidden layer. Theorem 9. Shallow SIREN NNGP. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. We first show that despite the usage of a uniform distribution for the weights, this initialization scheme still leads to an NNGP. In this initial part, we follow an approach similar to Lee et al. (2018), with the modifications necessary for this conclusion to hold. From our neural network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is uniformly distributed with finite variance and zero mean, the f (1)(x)j become normally distributed with mean zero as n1 → ∞ by the (Lyapunov) central limit theorem (CLT). Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Now that we have concluded that this initialization scheme still entails an NNGP, we have that its covariance is determined by σ2WΣ (1) + σ2b = c2 3 Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the law of large number (LLN) the limit above converges to Ew∼Un0 (−c,c), b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. 
Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Propositions 7 and 8 to each expectation above and noting that the sinc function is even, we are left with − 1 4 2 n0∏ j=1 sinc(c ω (xj + x̃j))− 2e−2ω 2 n0∏ j=1 sinc(c ω (xj − x̃j)) = 1 2 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) . For simplicity, if we take the case of a one-dimensional output (e.g., an audio signal or a monochromatic image) with the standard SIREN setting of c = √ 6, the NNGP reduces to Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) − e−2ω 2 sinc (√ 6ω (x+ x̃) ) + 1. We can already notice that this kernel is composed of sinc functions. The sinc function is the ideal low-pass filter. For any value of ω > 1, we can see the the first term in the expression above will completely dominate the expression, due to the exponential e−2ω 2 factor. In practice, ω is commonly set to values at least one order of magnitude above 1, if not multiple orders of magnitude above that in certain cases (e.g., high frequency audio signals). This leaves us with simply Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) + 1. Notice that not only does our kernel reduce to the sinc function, but it also reduces to a function solely of ∆x = x − x̃. This agrees with the shift-invariant property we observe in SIRENs, since the NNGP is dependent only on ∆x, but not on the particular values of x and x̃. Notice also that ω defines the bandwidth of the sinc function, thus determining the maximum frequencies it allows to pass. The general sinc form and the shift-invariance of this kernel can be visualized in Figure 17, along with the effect of varying ω on the bandwidth of the NNGP kernel. We can see that the NTK of the shallow SIREN, derived below, maintains the same relevant characteristics as the NNGP. We first derive Σ̇ in the Lemma below. Lemma 10. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. The proof follows the same pattern as Theorem 9, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Now we can derive the NTK for the shallow SIREN. Corollary 11. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 ))c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 + c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 9 and Lemma 10 to Theorem 6. Though the expressions become more complex due to the formulation of the NTK, we can see that many of the same properties from the NNGP still apply. Again, for reasonable values of ω, the term with the exponential factor e−2ω 2 will be of negligible relative magnitude. 
With c = √ 6, this leaves us with ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc (√ 6ω (xj − x̃j) ) + ω2 ( xT x̃+ 1 ) + 1, which is of the same form as the NNGP, with some additional linear terms xT x̃. Though these linear terms break the pure shift-invariance, we still have a strong diagonal and the sinc form with bandwidth determined by ω, as can be seen in Figure 18. Similarly to the NNGP, the SIREN NTK suggests that training a shallow SIREN is approximately equivalent to performing kernel regression with a sinc kernel, a low-pass filter, with its bandwidth defined by ω. This agrees intuitively with the experimental observations from the paper that in order to fit higher frequencies signals, a larger ω is required. D.2.2 SIMPLE SINUSOIDAL NETWORK Just as we did in the last section, we will now first derive the NNGP for a simple sinusoidal network, and then use that in order to obtain its NTK as well. As we will see, the Gaussian initialization employed in the SSN has the benefit of rendering the derivations cleaner, while retaining the relevant properties from the SIREN initialization. We observe that a similar derivation of this NNGP (using cosine functions instead of sine) can be found in Pearce et al. (2019), with a focus on a Bayesian perspective for the result. Theorem 12. Shallow SSN NNGP. For a single hidden layer simple sinusoidal network f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. We again initially follow an approach similar to the one described in Lee et al. (2018). From our sinusoidal network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is Gaussian with finite variance and zero mean, the f (1)(x)j are also normally distributed with mean zero by the CLT. Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Therefore, its covariance is determined by σ2WΣ (1) + σ2b = Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the LLN the limit above converges to Ew∼N (0,In0 ),b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Proposition 7 to each expectation above, it becomes −1 4 ( e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 − e−ω 2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) . We an once again observe that, for practical values of ω, the NNGP simplifies to 1 2 e− ω2 2 ∥x−x̃∥ 2 2 + 1. 
This takes the form of a Gaussian kernel, which is also a low-pass filter, with its bandwidth determined by ω. We note that, similar to the c = √ 6 setting from SIRENs, in practice a scaling factor of √ 2 is applied to the normal activations, as described in Section 3, which cancels out the 1/2 factors from the kernels, preserving the variance magnitude. Moreover, we can also observe again that the kernel is a function solely of ∆x, in agreement with the shift invariance that is also observed in simple sinusoidal networks. Visualizations of this NNGP are provided in Figure 19. We will now proceed to derive the NTK, which requires first obtaining Σ̇. Lemma 13. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ̇(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. The proof follows the same pattern as Theorem 12, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Corollary 14. Shallow SSN NTK. For a simple sinusoidal network with a single hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 )) [1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 ] + 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 12 and Lemma 13 to Theorem 6. We again note the vanishing factor e−2ω 2 , which leaves us with 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 + ω2 ( xT x̃+ 1 ) + 1. (5) As with the SIREN before, this NTK is still of the same form as its corresponding NNGP. While again we have additional linear terms xT x̃ in the NTK compared to the NNGP, in this case as well the kernel preserves its strong diagonal. It is still close to a Gaussian kernel, with its bandwidth determined directly by ω. We demonstrate this in Figure 20, where the NTK for different values of ω is shown. Additionally, we also plot a pure Gaussian kernel with variance ω2, scaled to match the maximum and minimum values of the NTK. We can observe the NTK kernel closely matches the Gaussian. Moreover, we can also observe that, at x̃ = 0 the maximum value is predicted by k ≈ ω2/2, as expected from the scaling factors in the kernel in Equation 5. This NTK suggests that training a simple sinusoidal network is approximately equivalent to performing kernel regression with a Gaussian kernel, a low-pass filter, with its bandwidth defined by ω. We note that even though this sinusoidal network kernel approximates a Gaussian kernel, an actual Gaussian kernel can be recovered if a combination of sine and cosine activations are employed, as demonstrated in Tsuchida (2020) (Proposition 18). D.3 DEEP SINUSOIDAL NETWORKS We will now look at the full NNGP and NTK for sinusoidal networks of arbitrary depth. As we will see, due to the recursive nature of these kernels, for networks deeper than the ones analyzed in the previous section, their full unrolled expressions quickly become intractable intuitively, especially for the NTK. Nevertheless, these kernels can still provide some insight, into the behavior of their corresponding networks. Moreover, despite their symbolic complexity, we will also demonstrate empirically that the resulting kernels can be approximated by simple Gaussian kernels, even for deep networks. 
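Although the unrolled expressions are unwieldy, the Gaussian approximation mentioned above is easy to probe numerically. The sketch below computes the empirical NTK of a finite-width sinusoidal network (initialized as in Definition 1) with PyTorch autograd and compares one row of it against an affinely rescaled Gaussian of standard deviation 1/ω. The choices ω = 6, width 1024 and a single hidden layer are arbitrary illustrative settings, and deeper networks can be probed by increasing n_hidden; since the NTK also contains the linear term ω²(xᵀx̃ + 1), the match is only approximate.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
omega = 6.0          # illustrative value; any omega well above 2 shows the same behavior
width = 1024         # wider layers bring the empirical NTK closer to its infinite-width limit

class SinusoidalNet(nn.Module):
    """A sinusoidal network following Definition 1 (here with one hidden layer)."""
    def __init__(self, in_dim=1, hidden=width, n_hidden=1):
        super().__init__()
        dims = [in_dim] + [hidden] * n_hidden + [1]
        self.layers = nn.ModuleList([nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)])
        for i, layer in enumerate(self.layers):
            # W entries ~ N(0, 1), with the 1 / sqrt(fan-in) factor of the later layers
            # folded into the weight standard deviation; biases ~ N(0, 1), as in Definition 1.
            std = 1.0 if i == 0 else 1.0 / math.sqrt(layer.in_features)
            nn.init.normal_(layer.weight, std=std)
            nn.init.normal_(layer.bias, std=1.0)

    def forward(self, x):
        h = torch.sin(omega * self.layers[0](x))
        for layer in self.layers[1:-1]:
            h = torch.sin(layer(h))
        return self.layers[-1](h)

net = SinusoidalNet()
params = list(net.parameters())

def grad_vec(x):
    y = net(x.view(1, -1)).squeeze()
    grads = torch.autograd.grad(y, params)
    return torch.cat([g.reshape(-1) for g in grads])

xs = torch.linspace(-1.0, 1.0, 21).unsqueeze(-1)
jac = torch.stack([grad_vec(x) for x in xs])   # one parameter-gradient row per input
ntk = jac @ jac.T                              # empirical NTK at initialization

# Compare the row of the NTK centered at x = 0 against an affinely rescaled Gaussian
# with standard deviation 1 / omega in delta-x; the two should roughly agree.
c = len(xs) // 2
row = ntk[c]
gauss = torch.exp(-0.5 * (omega * (xs.squeeze() - xs[c])) ** 2)
fit = row.min() + (row.max() - row.min()) * gauss
for xi, a, b in zip(xs.squeeze().tolist(), row.tolist(), fit.tolist()):
    print(f"x = {xi:+.2f}   empirical NTK = {a:8.2f}   Gaussian fit = {b:8.2f}")
```

Because the row is taken at x = 0, the linear term reduces to a constant offset, which is what the affine rescaling absorbs; at finite width the remaining deviation reflects fluctuations around the limiting kernel.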
D.3.1 SIMPLE SINUSOIDAL NETWORK As demonstrated in the previous section, simple sinusoidal networks produce simpler NNGP and NTK kernels due to their Gaussian initialization. We thus begin this section by now analyzing SSNs first, starting with their general NNGP. Theorem 15. SSN NNGP. For a simple sinusoidal network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, f (L) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(L)(x, x̃), recursively defined as Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1. Proof. We will proceed by induction on the depth L, demonstrating the NNGP for successive layers as n1, . . . , nL → ∞ sequentially. To demonstrate the base case L = 1, let us rearrange Σ(1) from Theorem 12 in order to express it in terms of inner products, Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 [ e− ω2 2 (x T x−2xT x̃+x̃T x̃) − e− ω2 2 (x T x+2xT x̃+x̃T x̃)e−2ω 2 ] + 1 = 1 2 [ e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]+ω2(xT x̃+1) − e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]−ω2(xT x̃+1) ] + 1. Given the definition of Σ(0), this is equivalent to 1 2 e− 1 2 (Σ (0)(x,x)+Σ(0)(x̃,x̃)) ( eΣ (0)(x,x̃) − e−Σ (0)(x,x̃) ) + 1, which concludes this case. Now given the inductive hypothesis, as n1, . . . , nL−1 → ∞ we have that the first L− 1 layers define a network f (L−1) with NNGP given by Σ(L−1)(x, x̃). Now it is left to show that as nL → ∞, we get the NNGP given by Σ(L). Following the same argument in Theorem 12, the network f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L) constitutes a Gaussian process given the outputs of the previous layer, due to the distributions of W (L) and b(L). Its covariance is given by σ2WΣ (L) + σ2b = Σ (L) + 1, where Σ(L)(x, x̃) = lim nL→∞ [ 1 nL 〈 sin ( f (L−1)(x) ) , sin ( f (L−1)(x̃) )〉] = lim nL→∞ 1 nL nL∑ j=1 sin ( f (L−1)(x) ) j sin ( f (L−1)(x̃) ) j . By inductive hypothesis, f (L−1) is a Gaussian process Σ(L−1)(x, x̃). Thus by the LLN the limit above equals E(u,v)∼N(0,Σ(L−1)(x,x̃)) [sin(u)sin(v)] . Omitting the distribution from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiu − e−iu ) 1 2i ( eiv − e−iv )] = −1 4 [ E [ ei(u+v) ] − E [ ei(u−v) ] − E [ e−i(u−v) ] + E [ e−i(u+v) ]] . Since u and v are jointly Gaussian, p = u+ v and m = u− v are also Gaussian, with mean 0 and variance σ2p = σ 2 u + σ 2 v + 2Cov[u, v] = Σ (L−1)(x, x) + Σ(L−1)(x̃, x̃) + 2Σ(L−1)(x, x̃), σ2m = σ 2 u + σ 2 v − 2Cov[u, v] = Σ(L−1)(x, x) + Σ(L−1)(x̃, x̃)− 2Σ(L−1)(x, x̃). We can now rewriting the expectations in terms of normalized variables −1 4 [ Ez∼N (0,1) [ eiσpz ] − Ez∼N (0,1) [ eiσmz ] − Ez∼N (0,1) [ e−iσmz ] + Ez∼N (0,1) [ e−iσpz ]] . Applying Proposition 7 to each expectation, we get 1 2 [ e− 1 2σ 2 m − e− 12σ 2 p ] = 1 2 [ e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)−2Σ(L−1)(x,x̃)) − e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)+2Σ(L−1)(x,x̃)) ] = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃)) ) Unrolling the definition beyond L = 1 leads to expressions that are difficult to parse. However, without unrolling, we can rearrange the terms in the NNGP above as Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1 = 1 2 [ e− 1 2 (Σ (L−1)(x,x)−2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) − e− 1 2 (Σ (L−1)(x,x)+2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) ] + 1. 
Since the covariance matrix Σ(L−1) is positive semi-definite, we can observe that the exponent expressions can be reformulated into a quadratic forms analogous to the ones in Theorem 12. We can thus observe that the same structure is essentially preserved through the composition of layers, except for the ω factor present in the first layer. Moreover, given this recursive definition, since the NNGP at any given depth L is a function only of the preceding kernels, the resulting kernel will also be shift-invariant. Let us now derive the Σ̇ kernel, required for the NTK. Lemma 16. For ω ∈ R, Σ̇(L)(x, x̃) : Rn0 × Rn0 → R, is given by Σ̇(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) + e−Σ (L−1)(x,x̃) ) + 1. Proof. The proof follows the same pattern as Theorem 15, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. As done in the previous section, it would be simple to now derive the full NTK for a simple sinusoidal network of arbitrary depth by applying Theorem 6 with the NNGP kernels from above. However, there is not much to be gained by writing the convoluted NTK expression explicitly, beyond what we have already gleaned from the NNGP above. Nevertheless, some insight can be gained from the recursive expression of the NTK itself, as defined in Theorem 6. First, note that, as before, for practical values of ω, Σ̇ ≈ Σ, both converging to simply a single Gaussian kernel. Thus, our NTK recursion becomes Θ(L)(x, x̃) ≈ ( Θ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). Now, note that when expanded, the form of this NTK recursion is essentially as a product of the Gaussian Σ kernels, Θ(L)(x, x̃) ≈ (( . . . (( Σ(0)(x, x̃) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃) = (( . . . (( ω2 ( xT x̃+ 1 ) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). (6) We know that the product of two Gaussian kernels is Gaussian and thus the general form of
1. What is the focus of the paper regarding neural networks and their applications? 2. What are the strengths of the proposed approach, particularly in terms of mathematical correctness? 3. Do you have any concerns or suggestions regarding the paper's content, such as typos, similarities to other works, or minor issues? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors analyse the NNGP kernel and NTK of neural networks with sinusoidal activations. The NTK has an adjustable band-width parameter, which allows users to understand such kernel models in terms of a low-pass filter. A discussion on how the cutoff frequency is chosen is provided. The authors show that such kernels can be applied to implicit models and differential equations. Strengths And Weaknesses Strengths: To the best of my knowledge, this is the first paper that studies in-depth the way in which infinite-width neural network theory can help inform neural networks with sinusoidal activations that are trained using gradient-based optimisers. Mathematical correctness: Theorem 2. In particular, the phrase "approximately standard normal distributed" is not precisely mathematically defined anywhere in the paper before the theorem is presented. Either change remove the theorem status of this statement, or define "approximately standard normal distributed". More importantly: asymptotic normality will hold (due to the previously cited results of Lee et al. 2018). If all you need is Lee et al.'s result, then just state this. "We can thus observe that this kernel approximates a Gaussian kernel, which is a low-pass filter, with its bandwidth define by ω". If you really want a squared exponential, RBF kernel, you can obtain this using half sine and half cosine activations. You can observe that summing the two kernels cancels out some terms. For example, see Proposition 18 of "Results on Infinitely Wide Multi-layer Perceptrons". Minor: Theorem 13 has typo in "shalow". Note this kernel is very similar to the kernel in "Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions", which uses cosine instead of sine (but the analysis is pretty much identical). Maybe it is identical to Lemma 17. It is perhaps also worth mentioning that it is straight-forward to handle non-zero mean weights for these activations in the NNGP setting, since the integrals also have a closed-form expression. In general, this work might be worth citing as a Bayesian counterpart to the gradient-descent approach considered here (with the other difference that cosine is replaced with sine, which is not too important). Clarity, Quality, Novelty And Reproducibility The paper is well-written and easy to follow. The experiments are described in adequate detail from a reproducibility perspective. While the individual elements presented in this paper are not entirely novel, they are synthesised into a coherent and reasonably compelling story.
ICLR
Title Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth Abstract Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. N/A Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. 1 INTRODUCTION Sinusoidal networks are neural networks with sine nonlinearities, instead of the traditional ReLU or hyperbolic tangent. They have been recently popularized, particularly for applications in implicit representation models, in the form of SIRENs (Sitzmann et al., 2020). However, despite their popularity, many aspects of their behavior and comparative advantages are not yet fully understood. Particularly, some initialization and parametrization choices for sinusoidal networks are often defined arbitrarily, without a clear understanding of how to optimize these settings in order to maximize performance. In this paper, we first propose a simplified version of such sinusoidal networks, that allows for easier implementation and theoretical analysis. We show that these simple sinusoidal networks can match and outperform SIRENs in implicit representation learning tasks, such as fitting videos, images and audio signals. We then analyze sinusoidal networks from a neural tangent kernel (NTK) perspective (Jacot et al., 2018), demonstrating that their NTK approximates a low-pass filter with adjustable bandwidth. We confirm, through an empirical analysis this theoretically predicted behavior also holds approximately in practice. We then use the insights from this analysis to inform the choices of initialization and parameters for sinusoidal networks. We demonstrate we can optimize the performance of a sinusoidal network by tuning the bandwidth of its kernel to the maximum frequency present in the input signal being learned. 
Finally, we apply these insights in practice, demonstrating that “well tuned” sinusoidal networks outperform other networks in learning implicit representation models with good interpolation outside the training points, and in learning the solution to differential equations. 2 BACKGROUND AND RELATED WORK Sinusoidal networks. Sinusoidal networks have been recently popularized for implicit modelling tasks by sinusoidal representation networks (SIRENs) (Sitzmann et al., 2020). They have also been evaluated for physics-informed learning, demonstrating promising results in a series of domains (Raissi et al., 2019b; Song et al., 2021; Huang et al., 2021b;a; Wong et al., 2022). Among the benefits of such networks is the fact that the mapping of inputs through an (initially) random linear layer followed by a sine function is mathematically equivalent to a transformation to a random Fourier basis, rendering them close to networks with Fourier feature transforms (Tancik et al., 2020; Rahimi & Recht, 2007), and possibly able to address spectral bias (Basri et al., 2019; Rahaman et al., 2019; Wang et al., 2021). Sinusoidal networks also have the property that the derivative of their outputs is given simply by another sinusoidal network, due to the fact that the derivative of sine function is a phase-shifted sine. Neural tangent kernel. An important prior result to the neural tangent kernel (NTK) is the neural network Gaussian process (NNGP). At random initialization of its parameters θ, the output function of a neural network of depth L with nonlinearity σ, converges to a Gaussian process, called the NNGP, as the width of its layers n1, . . . , nL → ∞. (Neal, 1994; Lee et al., 2018). This result, though interesting, does not say much on its own about the behavior of trained neural networks. This role is left to the NTK, which is defined as the kernel given by Θ(x, x̃) = ⟨∇θfθ(x),∇θfθ(x̃)⟩. It can be shown that this kernel can be written out as a recursive expression involving the NNGP. Importantly, Jacot et al. (2018) demonstrated that, again as the network layer widths n1, . . . , nL → ∞, the NTK is (1) deterministic at initialization and (2) constant throughout training. Finally, it has also been demonstrated that under some assumptions on its parametrization, the output function of the trained neural network fθ converges to the kernel regression solution using the NTK (Lee et al., 2020; Arora et al., 2019). In other words, under certain assumptions the behavior of a trained deep neural network can be modeled as kernel regression using the NTK. Physics-informed neural networks. Physics-informed neural networks (Raissi et al., 2019a) are a method for approximating the solution to differential equations using neural networks (NNs). In this method, a neural network û(t, x; θ), with learned parameters θ, is trained to approximate the actual solution function u(t, x) to a given partial differential equation (PDE). Importantly, PINNs employ not only a standard “supervised” data loss, but also a physics-informed loss, which consists of the differential equation residual N . Thus, the training loss consists of a linear combination of two loss terms, one directly supervised from data and one informed by the underlying differential equations. 3 SIMPLE SINUSOIDAL NETWORKS There are many details that complicate the practical implementation of current sinusoidal networks. 
We aim to propose a simplified version of such networks in order to facilitate theoretical analysis and practical implementation, by removing such complications. As an example we can look at SIRENs, which have their layer activations defined as fl(x) = sin(ω(Wlx+ bl)). Then, in order to cancel the ω factor, layers after the first one have their weight initialization follow a uniform distribution with range [− √ 6/n ω , √ 6/n ω ], where n is the size of the layer. Unlike the other layers, the first layer is sampled from a uniform distribution with range [−1/n, 1/n]. We instead propose a simple sinusoidal network, with the goal of formulating an architecture that mainly amounts to substituting its activation functions by the sine function. We will, however, keep the ω parameter, since (as we will see in future analyses) it is in fact a useful tool for allowing the network to fit inputs of diverse frequencies. The layer activation equations of our simple sinusoidal network, with parameter ω, are defined as f1(x) = sin(ω (W1x+ b1)), fl(x) = sin(Wlx+ bl), l > 1. (1) Finally, instead of utilizing a uniform initialization as in SIRENs (with different bounds for the first and subsequent layers), we propose initializing all parameters in our simple sinusoidal network using a default Kaiming (He) normal initialization scheme. This choice not only greatly simplifies the initialization scheme of the network, but it also facilitates theoretical analysis of the behavior of the network under the NTK framework, as we will see in Section 4. Analysis of the initialization scheme. The initialization scheme proposed above differs from the one implemented in SIRENs. We will now show that this particular choice of initialization distribution preserves the variance of the original proposed SIREN initialization distribution. As a consequence, the original theoretical justifications for its initialization scheme still hold under this activation, namely that the distribution of activations across layers are stable, well-behaved and shift-invariant. Due to space constraints, proofs are presented in Appendix A. Moreover, we also demonstrate empirically that these properties are maintained in practice. Lemma 1. Given any c, for X ∼ N ( 0, 13c 2 ) and Y ∼ U (−c, c), we have Var[X] = Var[Y ] = 13c 2. This simple Lemma and relates to Lemma 1.7 in Sitzmann et al. (2020), showing that the initialization we propose here has the same variance as the one proposed for SIRENs. Using this result we can translate the result from the main Theorem 1.8 from Sitzmann et al. (2020), which claims that the SIREN initialization indeed has the desired properties, to our proposed initialization:1 For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2n , where n is a layer’s fan-in. Empirical evaluation of initialization scheme. To empirically demonstrate the proposed simple initialization scheme preserves the properties from the SIREN initialization scheme, we perform the same analysis performed by Sitzmann et al. (2020). We observe that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. 
These results are reported in detail in the Appendix B. 3.1 COMPARISON TO SIREN In order to demonstrate our simplified sinusoidal network has comparable performance to a standard SIREN, in this section we reproduce the main results from Sitzmann et al. (2020). Table 1 compiles the results for all experiments. In order to be fair, we compare the simplified sinusoidal network proposed in this chapter with both the results directly reported in Sitzmann et al. (2020), and our own reproduction of the SIREN results (using the same parameters and settings as the original). We can see from the numbers reported in the table that the performance of the simple sinusoidal network proposed in this chapter matches the performance of the SIREN in all cases, in fact surpassing it in most of the experiments. Qualitative results are presented in Appendix C. It is important to note that this is not a favorable setting for simple sinusoidal networks, given that the training durations were very short. The SIREN favors quickly converging to a solution, though it does not have as strong asymptotic behavior. This effect is likely due to the multiplicative factor applied to later layers described in Section 3. We observe that indeed in almost all cases we can compensate for this effect by simply increasing the learning rate in the Adam optimizer (Kingma & Ba, 2014). Finally, we observe that besides being able to surpass the performance of SIREN in most cases in a short training regimen, the simple sinusoidal network performs even more strongly with longer training. To demonstrate this, we repeated some experiments from above, but with longer training durations. These results are shown in Table 4 in Appendix C. 4 NEURAL TANGENT KERNEL ANALYSIS In the following we derive the NTK for sinusoidal networks. This analysis will show us that the sinusoidal networks NTK is approximately a low-pass filter, with its bandwidth directly defined by ω. We support these findings with an empirical analysis as well in the following section. Finally, we demonstrate how the insights from the NTK can be leveraged to properly “tune” sinusoidal networks to the spectrum of the desired signal. Full derivations and extensive, detailed analysis are left to Appendix D. The NTK for a simple sinusoidal network with a single hidden layer is presented in the theorem below. The NTK for siren with 1 and 6 hidden layers are shown in Figure 1. Theorem 2. Shallow SSN NTK. For a simple sinusoidal network with one hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. 1We note that despite being named Theorem 1.8 in Sitzmann et al. (2020), this result is not fully formal, due to the Gaussian distribution being approximated without a formal analysis of this approximation. Additionally, a CLT result is employed which assumes infinite width, which is not applicable in this context. We thus refrain from calling our equivalent result a theorem. Nevertheless, to the extent that the argument is applicable, it would still hold for our proposed initialization, due to its dependence solely on the variance demonstrated in Lemma 1 above. We can see that for values of ω > 2, the second term quickly vanishes due to the e−2ω 2 factor. This leaves us with only the first term, which has a Gaussian form. 
Due to the linear scaling term xT x̃, this is only approximately Gaussian, but the approximation improves as ω increases. We can thus observe that this kernel approximates a Gaussian kernel, which is a low-pass filter, with its bandwidth defined by ω. Figure 1 presents visualizations for NTKs for the simple sinusoidal network, compared to a (scaled) pure Gaussian with variance ω−2, showing there is a close match between the two. If we write out the NTK for networks with more than one hidden layer, it quickly becomes un-interpretable due to the recursive nature of the NTK definition (see Appendix D). However, as shown empirically in Figure 1, these kernels are still approximated by Gaussians with variance ω−2. We also observe that the NTK for a SIREN with a single hidden layer is analogous, but with a sinc form, which is also a low-pass filter. Theorem 3. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. For deeper SIREN networks, the kernels defined by the later layers are in fact Gaussian too, as discussed in Appendix D. This leads to an NTK that is approximated by a product of a sinc function and a Gaussian. These SIREN kernels are also presented in Figure 1. 5 EMPIRICAL ANALYSIS As shown above, neural tangent kernel theory suggests that sinusoidal networks work as low-pass filters, with their bandwidth controlled by the parameter ω. In this section, we demonstrate empirically that we can observe this predicted behavior even in real sinusoidal networks. For this experiment, we generate a 512× 512 monochromatic image by super-imposing two orthogonal sinusoidal signals, each consisting of a single frequency, f(x, y) = cos(128πx) + cos(32πy). This function is sampled in the domain [−1, 1]2 to generate the image on the left of Figure 2. To demonstrate what we can expect from applying low-pass filters of different bandwidths to this signal, we perform a discrete Fourier transform (DFT), cut off frequencies above a certain value, and perform an inverse transform to recover the (filtered) image. The MSE of the reconstruction, as a function of the cutoff frequency, is shown in Figure 3. We can see that due to the simple nature of the signal, containing only two frequencies, there are only three loss levels. If indeed the NTK analysis is correct and sinusoidal networks act as low-pass filters, with bandwidth controlled by ω, we should be able to observe similar behavior with sinusoidal networks with different ω values. We plot the final training loss and training curves for sinusoidal networks with different ω in Figure 3. We can observe, again, that there are three consistent loss levels following the magnitude of the ω parameter, in line with the intuition that the sinusoidal network is working as a low-pass filter. This is also observable in Figure 2, where we see example reconstructions for networks of various ω values after training. However, unlike with the DFT low-pass filter (which does not involve any learning), we see in Figure 3 that during training some sinusoidal networks shift from one loss level to a lower one. This demonstrates that sinusoidal networks differ from true low-pass filters in that their weights can change, which implies that the bandwidth defined by ω also changes with learning. 
We know the weights W1 in the first layer of a sinusoidal network, given by f1(x) = sin ( ω ·WT1 x+ b1 ) , will change with training. Empirically, we observed that the spectral norm of W1 increases throughout training for small ω values. We can interpret that as the overall magnitude of the term ω ·WT1 x increasing, which is functionally equivalent to an increase in ω itself. In Figure 3, we observe that sinusoidal networks with smaller values of ω take a longer time to achieve a lower loss (if at all). Intuitively, this happens because, due to the effect described above, lower ω values require a larger increase in magnitude by the weights W1. Given that all networks were trained with the same learning rate, the ones with a smaller ω require their weights to move a longer distance, and thus take more training steps to achieve a lower loss. 6 TUNING ω As shown in the previous section, though the bandwidth of a network can change throughout training, the choice of ω still influences how easily and quickly (if at all) it can learn a given signal. The value of the ω parameter is thus crucial for the learning of the network. Despite this fact, in SIRENs, for example, this value is not adjusted for each task (except for the audio fitting experiments), and is simply set empirically to an arbitrary value. In this section, we seek to justify a proper initialization for this parameter, such that it can be chosen appropriately for each given task. Moreover, it is often not the case that we simply want to fit only the exact training samples but instead want to find a good interpolation (i.e., generalize well). Setting ω too high, and thus allowing the network to model frequencies that are much larger than the ones present in the actual signal is likely to cause overfitting. This is demonstrated empirically in Figure 4. Consequently, we want instead to tune the network to the highest frequency present in the signal. However, we do not always have the knowledge of what is the value of the highest frequency in the true underlying signal of interest. Moreover, we have also observed that, since the network learns and its weights change in magnitude, that value in fact changes with training. Therefore, the most we can hope for is to have a good heuristic to guide the choice of ω. Nevertheless, having a reasonable guess for ω is also likely sufficient for good performance, precisely due to the ability of the network to adapt during training and compensate for a possibly slightly suboptimal choice. Choosing ω from the Nyquist frequency. One source of empirical information on the relationship between ω and the sinusoidal network’s “learnable frequencies” is the previous section’s empirical analysis. Taking into account the scaling, we can see from Fig. 3 that around ω = 16 the network starts to be able to learn the full signal (freq. 128). We can similarly note that at about ω = 4 the sinusoidal network starts to be able to efficiently learn a signal with frequency 32, but not the one with frequency 128. This scaling suggests a heuristic of setting ω to about 1/8 of the signal’s maximum frequency. For natural signals, such as pictures, it is common for frequencies up to the Nyquist frequency of the discrete sampling to be present. We provide an example for the “camera” image we have utilized so far in Figure 23 in Appendix E, where we can see that the reconstruction loss through a low-pass filter continues to decrease significantly up to the Nyquist frequency for the image resolution. 
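A small helper capturing this heuristic might look as follows; this is a sketch under our own naming, the 1/8 factor is the empirical value discussed above, and frequencies are counted in the same units as the Nyquist frequency of the sampling grid.

```python
def suggest_omega(num_samples_per_dim, max_signal_freq=None, factor=1.0 / 8.0):
    # Heuristic: tune omega to roughly 1/8 of the highest frequency expected in
    # the signal; when that frequency is unknown, fall back to the Nyquist
    # frequency implied by the sampling resolution.
    nyquist = num_samples_per_dim / 2.0
    target = nyquist if max_signal_freq is None else min(max_signal_freq, nyquist)
    return factor * target

# Example: a 512 x 512 image has a per-dimension Nyquist frequency of 256,
# so the heuristic suggests omega = 256 / 8 = 32.
print(suggest_omega(512))  # 32.0
```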
In light of this information, analyzing the choices of ω for the experiments in Section 3.1 again suggests that ω should be set around 1/8 of the Nyquist frequency of the signal. These values of ω are summarized in Table 2 in the “Fitting ω” column. For example, the image fitting experiment shows that, for an image of shape 512× 512 (and thus Nyquist frequency of 256 for each dimension), this heuristic suggests an ω value of 256/8 = 32, which is the value found to work best empirically through search. We find similar results for the audio fitting experiments. The audio signals used in the audio fitting experiment contained approximately 300, 000 and 500, 000 points, and thus maximum frequencies of approximately 150, 00 and 250, 000. This suggests reasonable values for ω of 18, 750 and 31, 250, which are close to the ones found empirically to work well. In examples such as the video fitting experiments, in which each dimension has a different frequency, it is not completely clear how to pick a single ω to fit all dimensions. This suggests that having independent values of ω for each dimension might be useful for such cases, as discussed in the next section. Finally, when performing the generalization experiments in Section 7, we show the best performing ω ended up being half the value of the best ω used in the fitting tasks from Section 3.1. This follows intuitively, since for the generalization task we set apart half the points for training and the other half for testing, thus dividing the maximum possible frequency in the training sample in half, providing further evidence of the relationship between ω and the maximum frequency in the input signal. Multi-dimensional ω. In many problems, such as the video fitting and PDE problems, not only is the input space multi-dimensional, it also contains time and space dimensions (which are additionally possibly of different shape). This suggests that employing a multi-dimensional ω, specifying different frequencies for each dimension might be beneficial. In practice, if we employ a scaling factor λ = [λ1 λ2 . . . λd] T , we have the first layer of the sinusoidal network given by f1(x) = sin(ω (W1 (λ⊙ x) + b1)) = sin(W1 (Ω⊙ x) + ωb1), (2) where Ω = [λ1ω λ2ω . . . λdω] T works as a multi-dimensional ω. In the following experiments, we employ this approach to three-dimensional problems, in which we have time and differently shaped space domains, namely the video fitting and physics-informed neural network PDE experiments. For these experiments, we report the ω in the form of the (already scaled) Ω vector for simplicity. Choosing ω from available information Finally, in many problems we do have some knowledge of the underlying signal we can leverage, such as in the case of inverse problems. For example, let’s say we have velocity fields for a fluid and we are trying to solve for the coupled pressure field and the Reynolds number using a physics-informed neural network (as done in Section 7). In this case, we have access to two components of the solution field. Performing a Fourier transform on the training data we have can reveal the relevant spectrum and inform our choice of ω. If the maximum frequency in the signal is lower than the Nyquist frequency implied by the sampling, this can lead to a more appropriate choice of ω than suggested purely from the sampling. 
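Before moving to the experiments, the multi-dimensional ω of Equation 2 above can be made concrete with a brief sketch. This is our own code, not the paper's; the layer width, the zero bias initialization, and the example inputs are illustrative assumptions, while the weight variance follows the simplified initialization of Appendix A.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_first_layer(n_in, n_hidden):
    # Weights ~ N(0, 2 / fan_in), following the simplified initialization scheme;
    # the zero bias initialization here is an arbitrary choice for illustration.
    W1 = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    return W1, b1

def first_layer(x, W1, b1, omega, lam):
    # Equation 2: sin(omega * (W1 (lam ⊙ x) + b1)) = sin(W1 (Omega ⊙ x) + omega b1),
    # where Omega = omega * lam acts as a per-dimension omega.
    return np.sin(omega * (W1 @ (lam * x) + b1))

# Example: a 3D (t, x, y) input, with a smaller effective omega along time.
omega = 8.0
lam = np.array([0.5, 1.0, 2.0])   # illustrative per-dimension scaling factors
W1, b1 = init_first_layer(n_in=3, n_hidden=64)
features = first_layer(np.array([0.1, -0.3, 0.7]), W1, b1, omega, lam)
print(features.shape)             # (64,)
```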
7 EXPERIMENTS In this section, we first perform experiments to demonstrate how the optimal value of ω influences the generalization error of a sinusoidal network, following the discussion in Section 6. After that, we demonstrate that sinusoidal networks with properly tuned ω values outperform traditional physicsinformed neural networks in classic PDE tasks. 7.1 EVALUATING GENERALIZATION We now evaluate the simple sinusoidal network generalization capabilities. To do this, in all experiments in this section we segment the input signal into training and test sets using a checkerboard pattern – along all axis-aligned directions, points alternate between belonging to train and test set. We perform audio, image and video fitting experiments. When performing these experiments, we search for the best performing ω value for generalization (defined as performance on the held-out points). We report the best values on Table 2. We observe that, as expected from the discussion in Section 6, the best performing ω values follow the heuristic discussed above, and are in fact half the best-performing value found in the previous fitting experiments from Section 3.1, confirming our expectation. This is also demonstrated in the plot in Figure 4. Using a higher ω leads to overfitting and poor generalization outside the training points. This is demonstrated in Figure 4, in which we can see that choosing an appropriate ω value from the heuristics described previously leads to a good fit and interpolation. Setting ω too high leads to interpolation artifacts, due to overfitting of spurious high-frequency components. For the video signals, which have different size along each axis, we employ a multi-dimensional ω. We scale each dimension of ω proportional to the size of the input signal along the corresponding axis. 7.2 SOLVING DIFFERENTIAL EQUATIONS Finally, we apply our analysis to physics-informed learning. We compare the performance of simple sinusoidal networks to the tanh networks that are commonly used for these tasks. Results are summarized in Table 3. Details for the Schrödinger and Helmholtz experiments are presented in Appendix E. 7.2.1 BURGERS EQUATION (IDENTIFICATION) This experiment reproduces the Burgers equation identification experiment from Raissi et al. (2019a). Here we are identifying the parameters λ1 and λ2 of a 1D Burgers equation, ut+λ1uux−λ2uxx = 0, given a known solution field. The ground truth value of the parameters are λ1 = 1.0 and λ2 = 0.01/π. In order to find a good value for ω, we perform a low-pass reconstruction of the solution as before. We can observe in Figure 5 that the solution does not have high bandwidth, with most of the loss being minimized with only the lower half of the spectrum. Note that the sampling performed for the training data (N = 2, 000) is sufficient to support such frequencies. This suggests an ω value in the range 8− 10. Indeed, we observe that ω = 10 gives the best identification of the desired parameters, with errors of 0.0071% and 0.0507% for λ1 and λ2 respectively, against errors of 0.0521% and 0.4522% of the baseline. This value of ω also achieves the lowest reconstruction loss against the known solution, with an MSE of 8.034 · 10−4. Figure 5 shows the reconstructed solution using the identified parameters. 7.2.2 NAVIER-STOKES (IDENTIFICATION) This experiment reproduces the Navier-Stokes identification experiment from Raissi et al. (2019a). 
In this experiment, we are trying to identify the parameters λ1, λ2 and the pressure field p of the 2D Navier-Stokes equations given by ∂u/∂t + λ1 u · ∇u = −∇p + λ2 ∇²u, given known velocity fields u and v. The ground truth values of the parameters are λ1 = 1.0 and λ2 = 0.01.

Unlike the 1D Burgers case, in this case the number of points sampled for the training set (N = 5,000) is not high compared to the size of the full solution volume, and is thus the limiting factor for the bandwidth of the input signal. Given the random sampling of points from the full solution, the generalized sampling theorem applies. The original solution has dimensions of 100 × 50 × 200. With the 5,000 randomly sampled points, the average sampling rate per dimension is approximately 17, corresponding to a Nyquist frequency of approximately 8.5. Furthermore, given the multi-dimensional nature of this problem, with both spatial and temporal axes, we employ an independent scaling to ω for each dimension. The analysis above suggests an average ω ≈ 1, with the dimensions of the problem suggesting scaling factors of [0.5 1 2]ᵀ. Indeed, we observe that Ω = [0.3 0.6 1.2]ᵀ gives the best results, with errors of 0.0038% and 1.782% for λ1 and λ2 respectively, against errors of 0.0046% and 2.093% for the baseline. Figure 6 shows the identified pressure field. Note that given the nature of the problem, this field can only be identified up to a constant.

8 CONCLUSION

In this work, we have presented a simplified formulation for sinusoidal networks. Analysis of this architecture from the neural tangent kernel perspective, combined with empirical results, reveals that the kernel for sinusoidal networks corresponds to a low-pass filter with adjustable bandwidth. We leverage this information in order to initialize these networks appropriately, choosing their bandwidth such that it is tuned to the signal being learned. Employing this strategy, we demonstrated improved results in both implicit modelling and physics-informed learning tasks.

A SIMPLE SINUSOIDAL NETWORK INITIALIZATION

We present here the proofs for the initialization scheme of the simple sinusoidal network from Section 3.

Lemma 4. Given any c, for X ∼ N(0, (1/3)c²) and Y ∼ U(−c, c), we have Var[X] = Var[Y] = (1/3)c².

Proof. By definition, Var[X] = σ² = (1/3)c². For Y, we know that the variance of a uniformly distributed random variable with bounds [a, b] is given by (1/12)(b − a)². Thus, Var[Y] = (1/12)(2c)² = (1/3)c².

Theorem 5. For a uniform input in [−1, 1], the activations throughout a sinusoidal network are approximately standard normal distributed before each sine non-linearity and arcsine-distributed after each sine non-linearity, irrespective of the depth of the network, if the weights are distributed normally, with mean 0 and variance 2/n, where n is the layer's fan-in.

Proof. The proof follows exactly the proof for Theorem 1.8 in Sitzmann et al.
(2020), only using Lemma 4 when necessary to show that the initialization proposed here has the same variance necessary for the proof to follow.

B EMPIRICAL EVALUATION OF SSN INITIALIZATION

Here we report an empirical analysis of the initialization scheme of simple sinusoidal networks, referenced in Section 3. For this analysis we use a sinusoidal MLP with 6 hidden layers of 2048 units, and single-dimensional input and output. This MLP is initialized using the simplified scheme described above. For testing, 28 equally spaced inputs from the range [−1, 1] are passed through the network. We then plot the histogram of activations after each linear operation (before the sine non-linearity) and after each sine non-linearity. To match the original plot, we also plot the 1D Fast Fourier Transform of all activations in a layer, and the gradient of this output with respect to each activation. These results are presented in Figure 8. The main conclusion from this figure is that the distribution of activations matches the predicted normal (before the non-linearity) and arcsine (after the non-linearity) distributions, and that this behavior is stable across many layers. We also reproduced the same result up to 50 layers.

We then perform an additional experiment in which the exact same setup as above is employed, yet the 1D inputs are shifted by a large value (i.e., x → x + 1000). We then show the same plot as before in Figure 9. We can see that there is essentially no change from the previous plot, which demonstrates the sinusoidal network's shift-invariance in the input space, one of its important desirable properties, as discussed previously.

C EXPERIMENTAL DETAILS FOR COMPARISON TO SIREN

Below, we present qualitative results and describe experimental details for each experiment. As these are a reproduction of the experiments in Sitzmann et al. (2020), we refer to their details as well for further information.

C.1 IMAGE

In the image fitting experiment, we treat an image as a function from the spatial domain to color values (x, y) → (r, g, b). In the case of a monochromatic image, used here, this function maps instead to one-dimensional intensity values. We try to learn a function f : R² → R, parametrized as a sinusoidal network, in order to fit such an image. Figure 7 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal network. The gradient and Laplacian for the learned function are also presented, demonstrating that higher order derivatives are also learned appropriately.

Table 4: Comparison of the simple sinusoidal network and SIREN on some experiments, with a longer training duration. The specific durations are described below in the details for each experiment. We can see that the simple sinusoidal network has stronger asymptotic performance. Values above the horizontal center line are peak signal to noise ratio (PSNR), values below are mean squared error (MSE). †Audio experiments utilized a different learning rate for the first layer, see the full description below for details.

Experiment            Simple Sinusoidal Network    SIREN [ours]
Image                 54.70                        52.43
Poisson (Gradient)    39.51                        38.70
Poisson (Laplacian)   22.09                        20.82
Video (cat)           34.64                        32.26
Video (bikes)         37.71                        34.07
Audio (Bach)†         5.66 · 10⁻⁷                  3.02 · 10⁻⁶
Audio (counting)†     4.02 · 10⁻⁵                  6.33 · 10⁻⁵

Figure 7 (columns: image, gradient, Laplacian): Top row: ground truth image. Bottom: reconstructed with sinusoidal network.

Training parameters. The input image used is 512 × 512, mapped to an input domain [−1, 1]².
The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 32. The Adam optimizer is used with a learning rate of 3 · 10⁻³, trained for 10,000 steps in the short duration training results and for 20,000 steps in the long duration training results.

C.2 POISSON

These tasks are similar to the image fitting experiment, but instead of supervising directly on the ground truth image, the learned fitted sinusoidal network is supervised on its derivatives, constituting a Poisson problem. We perform the experiment by supervising both on the input image's gradient and Laplacian, and report the reconstruction of the image and its gradients in each case. Figure 10 shows the image used in this experiment, and the reconstruction from the fitted sinusoidal networks. Since reconstruction from derivatives can only be correct up to a scaling factor, we scale the reconstructions for visualization. As in the original SIREN results, we can observe that the reconstruction from the gradient is of higher quality than the one from the Laplacian.

Training parameters. The input image used is of size 256 × 256, mapped from an input domain [−1, 1]². The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For both experiments, the parameter ω is set to 32 and the Adam optimizer is used. For the gradient experiments, in short and long training results, a learning rate of 1 · 10⁻⁴ is used, trained for 10,000 and 20,000 steps respectively. For the Laplace experiments, in short and long training results, a learning rate of 1 · 10⁻³ is used, trained for 10,000 and 20,000 steps respectively.

C.3 VIDEO

These tasks are similar to the image fitting experiment, but we instead fit a video, which also has a temporal input dimension, (t, x, y) → (r, g, b). We learn a function f : R³ → R³, parametrized as a sinusoidal network, in order to fit such a video. Figures 11 and 12 show sampled frames from the videos used in this experiment, and their respective reconstructions from the fitted sinusoidal networks.

Training parameters. The cat video contains 300 frames of size 512 × 512. The bikes video contains 250 frames of size 272 × 640. These signals are fitted from the input domain [−1, 1]³. The sinusoidal network used is a 5-layer MLP with hidden size 1024, following the proposed initialization scheme above. The parameter ω is set to 8. The Adam optimizer is used, with a learning rate of 3 · 10⁻⁴, trained for 100,000 steps in the short duration training results and for 200,000 steps in the long duration training results.

C.4 AUDIO

In the audio experiments, we fit an audio signal in the temporal domain as a waveform t → w. We learn a function f : R → R, parametrized as a sinusoidal network, in order to fit the audio. Figure 13 shows the waveforms for the input audios and the reconstructed audios from the fitted sinusoidal network. In this experiment, we utilized a lower learning rate for the first layer compared to the rest of the network. This was used to compensate for the very large ω used (in the 15,000−30,000 range, compared to the 10−30 range for all other experiments). One might argue that this is re-introducing complexity, counteracting the purpose of the proposed simplification.
However, we would claim (1) that this is only limited to cases with extremely high ω, which was not present in any case except for fitting audio waves, and (2) that adjusting the learning rate for an individual layer is still an approach that is simpler and more in line with standard machine learning practice compared to multiplying all layers by a scaling factor and then adjusting their initialization variance by the same amount. Training parameters. Both audios use a sampling rate of 44100Hz. The Bach audio is 7s long and the counting audio is approximately 12s long. These signals are fitted from the input domain [−1, 1]. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. For short and long training results, training is performed for 5, 000 and 50, 000 steps respectively. For the Bach experiment, the parameter ω is set to 15, 000. The Adam optimizer is used, with a general learning rate of 3 · 10−3. A separate learning rate of 1 · 10−6 is used for the first layer to stabilize training due to the large ω value. For the counting experiment, the parameter ω is set to 32, 000. The Adam optimizer is used, with a general learning rate of 1 · 10−3 and a first layer learning rate of 1 · 10−6. C.5 HELMHOLTZ EQUATION In this experiment we solve for the unknown wavefield Φ : R2 → R2 in the Helmholtz equation (∆ + k2)Φ(x) = −f(x), (3) with known wavenumber k and source function f (a Gaussian with µ = 0 and σ2 = 10−4). We solve this differential equation using a sinusoidal network supervised with the physics-informed loss ∫ Ω ∥(∆ + k2)Φ(x) + f(x)∥1dx, evaluated at random points sampled uniformly in the domain Ω = [−1, 1]2. Figure 14 shows the real and imaginary components of the ground truth solution to the differential equation and the solution recovered by the fitted sinusoidal network. Training parameters. The sinusoidal network used is a 5-layer MLP with hidden size 256, following the proposed initialization scheme above. The parameter ω is set to 16. The Adam optimizer is used, with a learning rate of 3 · 10−4 trained for 50, 000 steps. C.6 SIGNED DISTANCE FUNCTION (SDF) In these tasks we learn a 3D signed distance function. We learn a function f : R3 → R, parametrized as a sinusoidal network, to model a signed distance function representing a 3D scene. This function is supervised indirectly from point cloud data of the scene. Figures 16 and 15 show 3D renderings of the volumes inferred from the learned SDFs. Training parameters. The statue point cloud contains 4, 999, 996 points. The room point cloud contains 10, 250, 688 points. These signals are fitted from the input domain [−1, 1]3. The sinusoidal network used is a 5-layer MLP with hidden size 256 for the statue and 1024 for the room. The parameter ω is set to 4. The Adam optimizer is used, with a learning rate of 8 · 10−4 and a batch size of 1400. All models are trained for 190, 000 steps for the statue experiment and for 410, 000 steps for the room experiment. D NEURAL TANGENT KERNEL ANALYSIS AND PROOFS D.1 PRELIMINARIES In order to perform the subsequent NTK analysis, we first need to formalize definitions for simple sinusoidal networks and SIRENs. The definitions used here adhere to the common NTK analysis practices, and thus differ slightly from practical implementation. Definition 1. 
For the purposes of the following proofs, a (sinusoidal) fully-connected neural network with L hidden layers that takes as input x ∈ Rn0 , is defined as the function f (L) : Rn0 → RnL+1 , recursively given by f (0)(x) = ω ( W (0)x+ b(0) ) , f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L), where ω ∈ R. The parameters { W (j) }L j=0 have shape nj+1×nj and all have each element sampled independently either from N (0, 1) (for simple sinusoidal networks) or from U(−c, c) with some bound c ∈ R (for SIRENs). The { b(j) }L j=0 are nj+1-dimensional vectors sampled independently from N (0, Inj+1). With this definition, we now state the general formulation of the NTK, which applies in general to fully-connected networks with Lipschitz non-linearities, and consequently in particular to the sinusoidal networks studied here as well. Let us first define the NNGP, which has covariance recursively defined by Σ(L+1)(x, x̃) = Ef∼N (0,Σ(L)) [σ(f(x))σ(f(x̃))] + β2, with base case Σ(1)(x, x̃) = 1n0x T x̃+ β2, and where β gives the variance of the bias terms in the neural network layers (Neal, 1994; Lee et al., 2018). Now the NTK is given by the following theorem. Theorem 6. For a neural network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, the neural tangent kernel (NTK) of f (L) converges in probability to the deterministic kernel Θ(L) defined recursively as Θ(0)(x, x̃) = Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) , Θ(L)(x, x̃) = Θ(L−1)(x, x̃)Σ̇(L)(x, x̃) + Σ(L)(x, x̃), where { Σ(l) }L l=0 are the neural network Gaussian processes (NNGPs) corresponding to each f (l) and Σ̇(l)(x, x̃) = E(u,v)∼Σ(l−1)(x,x̃) [cos(u)cos(v)] . Proof. This is a standard general NTK theorem, showing that the limiting kernel recursively in terms of the network’s NNGPs and the previous layer’s NTK. For brevity we omit the proof here and refer the reader to, for example, Jacot et al. (2020). The only difference is for the base case Σ(0), due to the fact that we have an additional ω parameter in the first layer. It is simple to see that the neural network with 0 hidden layers, i.e. the linear model ω ( W (0)x+ b(0) ) will lead to the same Gaussian process covariance kernel as the original proof, xT x̃+ 1, only adjusted by the additional variance factor ω2. Theorem 6 demonstrates that the NTK can be constructed as a recursive function of the NTK of previous layers and the network’s NNGPs. In the following sections we will derive the NNGPs for the SIREN and the simple sinusoidal network directly. We will then use these NNGPs with Theorem 6 to derive their NTKs as well. To finalize this preliminary section, we also provide two propositions that will be useful in following proofs in this section. Proposition 7. For any ω ∈ R, x ∈ Rd, Ew∼N (0,Id) [ eiω(w T x) ] = e− ω2 2 ∥x∥ 2 2 Proof. Omitting w ∼ N (0, Id) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 iwjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− w2j 2 dwj . Completing the square, we get d∏ j=1 1√ 2π ∫ ∞ −∞ eiω wjxje− 1 2w 2 j dwj = d∏ j=1 1√ 2π ∫ ∞ −∞ e 1 2 (i 2ω2x2j−i 2ω2x2j+2ixjwj−w 2 j)dwj = d∏ j=1 e 1 2 i 2ω2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (i 2ω2x2j−2iω 2xjwj+w 2 j)dwj = d∏ j=1 e− 1 2ω 2x2j 1√ 2π ∫ ∞ −∞ e− 1 2 (wj−iωxj) 2 dwj . 
Since the integral and its preceding factor constitute a Gaussian pdf, they integrate to 1, leaving the final result d∏ j=1 e− ω2 2 x 2 j = e− ω2 2 ∑d j=1 x 2 j = e− ω2 2 ∥xj∥ 2 2 . Proposition 8. For any c, ω ∈ R, x ∈ Rd, Ew∼Ud(−c,c) [ eiω(w T x) ] = d∏ j=1 sinc(c ωxj). Proof. Omitting w ∼ Ud(−c, c) from the expectation for brevity, we have E [ eiω(w T x) ] = E [ eiω ∑d j=1 wjxj ] . By independence of the components of w and the definition of expectation, E [ eiω ∑d j=1 wjxj ] = d∏ j=1 E [ eiω wjxj ] = d∏ j=1 ∫ c −c eiω wjxj 1 2c dwj = d∏ j=1 1 2c ∫ c −c eiω wjxjdwj . Now, focusing on the integral above, we have∫ c −c eiω wjxjdwj = ∫ c −c cos(ω wjxj)dwj + i ∫ c −c sin(ω wjxj)dwj = sin(ω wjxj) ωxj ∣∣∣∣∣ c −c − icos(ω wjxj) ωxj ∣∣∣∣∣ c −c = 2sin(c ωxj) ωxj . Finally, plugging this back into the product above, we get d∏ j=1 1 2c ∫ c −c eiω wjxjdwj = d∏ j=1 1 2c 2sin(c ωxj) ωxj = d∏ j=1 sinc(c ωxj). D.2 SHALLOW SINUSOIDAL NETWORKS For the next few proofs, we will be focusing on neural networks with a single hidden layer, i.e. L = 1. Expanding the definition above, such a network is given by f (1)(x) = W (1) 1 √ n1 sin ( ω ( W (0)x+ b(0) )) + b(1). (4) The advantage of analysing such shallow networks is that their NNGPs and NTKs have formulations that are intuitively interpretable, providing insight into their characteristics. We later extend these derivations to networks of arbitrary depth. D.2.1 SIREN First, let us derive the NNGP for a SIREN with a single hidden layer. Theorem 9. Shallow SIREN NNGP. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. We first show that despite the usage of a uniform distribution for the weights, this initialization scheme still leads to an NNGP. In this initial part, we follow an approach similar to Lee et al. (2018), with the modifications necessary for this conclusion to hold. From our neural network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is uniformly distributed with finite variance and zero mean, the f (1)(x)j become normally distributed with mean zero as n1 → ∞ by the (Lyapunov) central limit theorem (CLT). Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Now that we have concluded that this initialization scheme still entails an NNGP, we have that its covariance is determined by σ2WΣ (1) + σ2b = c2 3 Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the law of large number (LLN) the limit above converges to Ew∼Un0 (−c,c), b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. 
Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Propositions 7 and 8 to each expectation above and noting that the sinc function is even, we are left with − 1 4 2 n0∏ j=1 sinc(c ω (xj + x̃j))− 2e−2ω 2 n0∏ j=1 sinc(c ω (xj − x̃j)) = 1 2 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) . For simplicity, if we take the case of a one-dimensional output (e.g., an audio signal or a monochromatic image) with the standard SIREN setting of c = √ 6, the NNGP reduces to Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) − e−2ω 2 sinc (√ 6ω (x+ x̃) ) + 1. We can already notice that this kernel is composed of sinc functions. The sinc function is the ideal low-pass filter. For any value of ω > 1, we can see the the first term in the expression above will completely dominate the expression, due to the exponential e−2ω 2 factor. In practice, ω is commonly set to values at least one order of magnitude above 1, if not multiple orders of magnitude above that in certain cases (e.g., high frequency audio signals). This leaves us with simply Σ(1)(x, x̃) = sinc (√ 6ω (x− x̃) ) + 1. Notice that not only does our kernel reduce to the sinc function, but it also reduces to a function solely of ∆x = x − x̃. This agrees with the shift-invariant property we observe in SIRENs, since the NNGP is dependent only on ∆x, but not on the particular values of x and x̃. Notice also that ω defines the bandwidth of the sinc function, thus determining the maximum frequencies it allows to pass. The general sinc form and the shift-invariance of this kernel can be visualized in Figure 17, along with the effect of varying ω on the bandwidth of the NNGP kernel. We can see that the NTK of the shallow SIREN, derived below, maintains the same relevant characteristics as the NNGP. We first derive Σ̇ in the Lemma below. Lemma 10. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ(1)(x, x̃) = c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1. Proof. The proof follows the same pattern as Theorem 9, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Now we can derive the NTK for the shallow SIREN. Corollary 11. Shallow SIREN NTK. For a single hidden layer SIREN f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 ))c2 6 n0∏ j=1 sinc(c ω (xj − x̃j))− e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 + c2 6 n0∏ j=1 sinc(c ω (xj − x̃j)) + e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + 1 = c2 6 ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc(c ω (xj − x̃j)) − c 2 6 ( ω2 ( xT x̃+ 1 ) − 1 ) e−2ω 2 n0∏ j=1 sinc(c ω (xj + x̃j)) + ω 2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 9 and Lemma 10 to Theorem 6. Though the expressions become more complex due to the formulation of the NTK, we can see that many of the same properties from the NNGP still apply. Again, for reasonable values of ω, the term with the exponential factor e−2ω 2 will be of negligible relative magnitude. 
With c = √ 6, this leaves us with ( ω2 ( xT x̃+ 1 ) + 1 ) n0∏ j=1 sinc (√ 6ω (xj − x̃j) ) + ω2 ( xT x̃+ 1 ) + 1, which is of the same form as the NNGP, with some additional linear terms xT x̃. Though these linear terms break the pure shift-invariance, we still have a strong diagonal and the sinc form with bandwidth determined by ω, as can be seen in Figure 18. Similarly to the NNGP, the SIREN NTK suggests that training a shallow SIREN is approximately equivalent to performing kernel regression with a sinc kernel, a low-pass filter, with its bandwidth defined by ω. This agrees intuitively with the experimental observations from the paper that in order to fit higher frequencies signals, a larger ω is required. D.2.2 SIMPLE SINUSOIDAL NETWORK Just as we did in the last section, we will now first derive the NNGP for a simple sinusoidal network, and then use that in order to obtain its NTK as well. As we will see, the Gaussian initialization employed in the SSN has the benefit of rendering the derivations cleaner, while retaining the relevant properties from the SIREN initialization. We observe that a similar derivation of this NNGP (using cosine functions instead of sine) can be found in Pearce et al. (2019), with a focus on a Bayesian perspective for the result. Theorem 12. Shallow SSN NNGP. For a single hidden layer simple sinusoidal network f (1) : Rn0 → Rn2 following Definition 1, as the size of the hidden layer n1 → ∞, f (1) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. We again initially follow an approach similar to the one described in Lee et al. (2018). From our sinusoidal network definition, each element f (1)(x)j in the output vector is a weighted combination of elements in W (1) and b(1). Conditioning on the outputs from the first layer (L = 0), since the sine function is bounded and each of the parameters is Gaussian with finite variance and zero mean, the f (1)(x)j are also normally distributed with mean zero by the CLT. Since any subset of elements in f (1)(x) is jointly Gaussian, we have that this outer layer is described by a Gaussian process. Therefore, its covariance is determined by σ2WΣ (1) + σ2b = Σ (1) + 1, where Σ(1)(x, x̃) = lim n1→∞ [ 1 n1 〈 sin ( f (0)(x) ) , sin ( f (0)(x̃) )〉] = lim n1→∞ 1 n1 n1∑ j=1 sin ( f (0)(x) ) j sin ( f (0)(x̃) ) j = lim n1→∞ 1 n1 n1∑ j=1 sin ( ω ( W (0) j x+ b (0) j )) sin ( ω ( W (0) j x̃+ b (0) j )) . Now by the LLN the limit above converges to Ew∼N (0,In0 ),b∼N (0,1) [ sin ( ω ( wTx+ b )) sin ( ω ( wT x̃+ b ))] , where w ∈ Rn0 and b ∈ R. Omitting the distributions from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiω(w T x+b) − e−iω(w T x+b) ) 1 2i ( eiω(w T x̃+b) − e−iω(w T x̃+b) )] = −1 4 E [ eiω(w T x+b)+iω(wT x̃+b) − eiω(w T x+b)−iω(wT x̃+b) − e−iω(w T x+b)+iω(wT x̃+b) + e−iω(w T x+b)−iω(wT x̃+b) ] = −1 4 [ E [ eiω(w T (x+x̃)) ] E [ e2iωb ] − E [ eiω(w T (x−x̃)) ] − E [ eiω(w T (x̃−x)) ] + E [ eiω(w T (−x−x̃)) ] E [ e−2iωb ]] Applying Proposition 7 to each expectation above, it becomes −1 4 ( e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 − e−ω 2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) . We an once again observe that, for practical values of ω, the NNGP simplifies to 1 2 e− ω2 2 ∥x−x̃∥ 2 2 + 1. 
This takes the form of a Gaussian kernel, which is also a low-pass filter, with its bandwidth determined by ω. We note that, similar to the c = √ 6 setting from SIRENs, in practice a scaling factor of √ 2 is applied to the normal activations, as described in Section 3, which cancels out the 1/2 factors from the kernels, preserving the variance magnitude. Moreover, we can also observe again that the kernel is a function solely of ∆x, in agreement with the shift invariance that is also observed in simple sinusoidal networks. Visualizations of this NNGP are provided in Figure 19. We will now proceed to derive the NTK, which requires first obtaining Σ̇. Lemma 13. For ω ∈ R, Σ̇(1)(x, x̃) : Rn0 × Rn0 → R is given by Σ̇(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1. Proof. The proof follows the same pattern as Theorem 12, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. Corollary 14. Shallow SSN NTK. For a simple sinusoidal network with a single hidden layer f (1) : Rn0 → Rn2 following Definition 1, its neural tangent kernel (NTK), as defined in Theorem 6, is given by Θ(1)(x, x̃) = ( ω2 ( xT x̃+ 1 )) [1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 ] + 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 − e−ω 2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 − 1 2 ( ω2 ( xT x̃+ 1 ) − 1 ) e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 + ω2 ( xT x̃+ 1 ) + 1. Proof. Follows trivially by applying Theorem 12 and Lemma 13 to Theorem 6. We again note the vanishing factor e−2ω 2 , which leaves us with 1 2 ( ω2 ( xT x̃+ 1 ) + 1 ) e− ω2 2 ∥x−x̃∥ 2 2 + ω2 ( xT x̃+ 1 ) + 1. (5) As with the SIREN before, this NTK is still of the same form as its corresponding NNGP. While again we have additional linear terms xT x̃ in the NTK compared to the NNGP, in this case as well the kernel preserves its strong diagonal. It is still close to a Gaussian kernel, with its bandwidth determined directly by ω. We demonstrate this in Figure 20, where the NTK for different values of ω is shown. Additionally, we also plot a pure Gaussian kernel with variance ω2, scaled to match the maximum and minimum values of the NTK. We can observe the NTK kernel closely matches the Gaussian. Moreover, we can also observe that, at x̃ = 0 the maximum value is predicted by k ≈ ω2/2, as expected from the scaling factors in the kernel in Equation 5. This NTK suggests that training a simple sinusoidal network is approximately equivalent to performing kernel regression with a Gaussian kernel, a low-pass filter, with its bandwidth defined by ω. We note that even though this sinusoidal network kernel approximates a Gaussian kernel, an actual Gaussian kernel can be recovered if a combination of sine and cosine activations are employed, as demonstrated in Tsuchida (2020) (Proposition 18). D.3 DEEP SINUSOIDAL NETWORKS We will now look at the full NNGP and NTK for sinusoidal networks of arbitrary depth. As we will see, due to the recursive nature of these kernels, for networks deeper than the ones analyzed in the previous section, their full unrolled expressions quickly become intractable intuitively, especially for the NTK. Nevertheless, these kernels can still provide some insight, into the behavior of their corresponding networks. Moreover, despite their symbolic complexity, we will also demonstrate empirically that the resulting kernels can be approximated by simple Gaussian kernels, even for deep networks. 
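One way to check this claim without unrolling the recursion symbolically is to estimate the NNGP of a deeper network by Monte Carlo over random initializations. The sketch below is our own; the widths, ω, and number of draws are arbitrary choices. It samples many networks following Definition 1 and inspects a slice of the resulting output covariance, which should resemble a Gaussian bump (plus an offset) whose width shrinks as ω grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssn_forward(X, omega, widths, params):
    # Definition 1 applied to a batch of scalar inputs X with shape (1, num_points):
    # f^(0) = omega * (W0 X + b0), f^(l) = W_l (1/sqrt(n_l)) sin(f^(l-1)) + b_l,
    # with weights ~ N(0, 1) and biases ~ N(0, I).
    Ws, bs = params
    H = omega * (Ws[0] @ X + bs[0][:, None])
    for l in range(1, len(Ws)):
        H = Ws[l] @ (np.sin(H) / np.sqrt(widths[l - 1])) + bs[l][:, None]
    return H

def sample_params(n_in, widths, n_out):
    dims = [n_in] + list(widths) + [n_out]
    Ws = [rng.normal(size=(dims[i + 1], dims[i])) for i in range(len(dims) - 1)]
    bs = [rng.normal(size=dims[i + 1]) for i in range(len(dims) - 1)]
    return Ws, bs

omega, widths, n_draws = 4.0, (256, 256, 256), 2000
xs = np.linspace(-1.0, 1.0, 41)
X = xs[None, :]                     # shape (1, 41): batch of scalar inputs

outs = np.stack([ssn_forward(X, omega, widths, sample_params(1, widths, 1))[0]
                 for _ in range(n_draws)])
center = outs[:, len(xs) // 2]      # outputs at x = 0
nngp_slice = (outs * center[:, None]).mean(axis=0)
# Empirically, this covariance slice looks like a Gaussian bump plus an offset,
# with a width governed by omega, even for this 3-hidden-layer network.
print(np.round(nngp_slice, 2))
```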
D.3.1 SIMPLE SINUSOIDAL NETWORK As demonstrated in the previous section, simple sinusoidal networks produce simpler NNGP and NTK kernels due to their Gaussian initialization. We thus begin this section by now analyzing SSNs first, starting with their general NNGP. Theorem 15. SSN NNGP. For a simple sinusoidal network with L hidden layers f (L) : Rn0 → RnL+1 following Definition 1, as the size of the hidden layers n1, . . . , nL → ∞ sequentially, f (L) tends (by law of large numbers) to the neural network Gaussian Process (NNGP) with covariance Σ(L)(x, x̃), recursively defined as Σ(0)(x, x̃) = ω2 ( xT x̃+ 1 ) Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1. Proof. We will proceed by induction on the depth L, demonstrating the NNGP for successive layers as n1, . . . , nL → ∞ sequentially. To demonstrate the base case L = 1, let us rearrange Σ(1) from Theorem 12 in order to express it in terms of inner products, Σ(1)(x, x̃) = 1 2 ( e− ω2 2 ∥x−x̃∥ 2 2 + e− ω2 2 ∥x+x̃∥ 2 2e−2ω 2 ) + 1 = 1 2 [ e− ω2 2 (x T x−2xT x̃+x̃T x̃) − e− ω2 2 (x T x+2xT x̃+x̃T x̃)e−2ω 2 ] + 1 = 1 2 [ e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]+ω2(xT x̃+1) − e− 1 2 [ω 2(xT x+1)+ω2(x̃T x̃+1)]−ω2(xT x̃+1) ] + 1. Given the definition of Σ(0), this is equivalent to 1 2 e− 1 2 (Σ (0)(x,x)+Σ(0)(x̃,x̃)) ( eΣ (0)(x,x̃) − e−Σ (0)(x,x̃) ) + 1, which concludes this case. Now given the inductive hypothesis, as n1, . . . , nL−1 → ∞ we have that the first L− 1 layers define a network f (L−1) with NNGP given by Σ(L−1)(x, x̃). Now it is left to show that as nL → ∞, we get the NNGP given by Σ(L). Following the same argument in Theorem 12, the network f (L)(x) = W (L) 1 √ nL sin ( f (L−1) ) + b(L) constitutes a Gaussian process given the outputs of the previous layer, due to the distributions of W (L) and b(L). Its covariance is given by σ2WΣ (L) + σ2b = Σ (L) + 1, where Σ(L)(x, x̃) = lim nL→∞ [ 1 nL 〈 sin ( f (L−1)(x) ) , sin ( f (L−1)(x̃) )〉] = lim nL→∞ 1 nL nL∑ j=1 sin ( f (L−1)(x) ) j sin ( f (L−1)(x̃) ) j . By inductive hypothesis, f (L−1) is a Gaussian process Σ(L−1)(x, x̃). Thus by the LLN the limit above equals E(u,v)∼N(0,Σ(L−1)(x,x̃)) [sin(u)sin(v)] . Omitting the distribution from the expectation for brevity and expanding the exponential definition of sine, we have E [ 1 2i ( eiu − e−iu ) 1 2i ( eiv − e−iv )] = −1 4 [ E [ ei(u+v) ] − E [ ei(u−v) ] − E [ e−i(u−v) ] + E [ e−i(u+v) ]] . Since u and v are jointly Gaussian, p = u+ v and m = u− v are also Gaussian, with mean 0 and variance σ2p = σ 2 u + σ 2 v + 2Cov[u, v] = Σ (L−1)(x, x) + Σ(L−1)(x̃, x̃) + 2Σ(L−1)(x, x̃), σ2m = σ 2 u + σ 2 v − 2Cov[u, v] = Σ(L−1)(x, x) + Σ(L−1)(x̃, x̃)− 2Σ(L−1)(x, x̃). We can now rewriting the expectations in terms of normalized variables −1 4 [ Ez∼N (0,1) [ eiσpz ] − Ez∼N (0,1) [ eiσmz ] − Ez∼N (0,1) [ e−iσmz ] + Ez∼N (0,1) [ e−iσpz ]] . Applying Proposition 7 to each expectation, we get 1 2 [ e− 1 2σ 2 m − e− 12σ 2 p ] = 1 2 [ e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)−2Σ(L−1)(x,x̃)) − e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)+2Σ(L−1)(x,x̃)) ] = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃)) ) Unrolling the definition beyond L = 1 leads to expressions that are difficult to parse. However, without unrolling, we can rearrange the terms in the NNGP above as Σ(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) − e−Σ (L−1)(x,x̃) ) + 1 = 1 2 [ e− 1 2 (Σ (L−1)(x,x)−2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) − e− 1 2 (Σ (L−1)(x,x)+2Σ(L−1)(x,x̃)+Σ(L−1)(x̃,x̃)) ] + 1. 
Since the covariance matrix Σ(L−1) is positive semi-definite, we can observe that the exponent expressions can be reformulated into a quadratic forms analogous to the ones in Theorem 12. We can thus observe that the same structure is essentially preserved through the composition of layers, except for the ω factor present in the first layer. Moreover, given this recursive definition, since the NNGP at any given depth L is a function only of the preceding kernels, the resulting kernel will also be shift-invariant. Let us now derive the Σ̇ kernel, required for the NTK. Lemma 16. For ω ∈ R, Σ̇(L)(x, x̃) : Rn0 × Rn0 → R, is given by Σ̇(L)(x, x̃) = 1 2 e− 1 2 (Σ (L−1)(x,x)+Σ(L−1)(x̃,x̃)) ( eΣ (L−1)(x,x̃) + e−Σ (L−1)(x,x̃) ) + 1. Proof. The proof follows the same pattern as Theorem 15, with the only difference being a few sign changes after the exponential expansion of the trigonometric functions, due to the different identities for sine and cosine. As done in the previous section, it would be simple to now derive the full NTK for a simple sinusoidal network of arbitrary depth by applying Theorem 6 with the NNGP kernels from above. However, there is not much to be gained by writing the convoluted NTK expression explicitly, beyond what we have already gleaned from the NNGP above. Nevertheless, some insight can be gained from the recursive expression of the NTK itself, as defined in Theorem 6. First, note that, as before, for practical values of ω, Σ̇ ≈ Σ, both converging to simply a single Gaussian kernel. Thus, our NTK recursion becomes Θ(L)(x, x̃) ≈ ( Θ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). Now, note that when expanded, the form of this NTK recursion is essentially as a product of the Gaussian Σ kernels, Θ(L)(x, x̃) ≈ (( . . . (( Σ(0)(x, x̃) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃) = (( . . . (( ω2 ( xT x̃+ 1 ) + 1 ) Σ(1)(x, x̃) + 1 ) . . . ) Σ(L−1)(x, x̃) + 1 ) Σ(L)(x, x̃). (6) We know that the product of two Gaussian kernels is Gaussian and thus the general form of
1. What is the main contribution of the paper regarding sinusoidal neural networks? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are the key findings and insights provided by the paper regarding NTK and its application to sinusoidal networks? 5. Can you provide more explanation and examples to help readers understand the concepts discussed in the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper analyzes sinusoidal neural networks from the Neural Tangent Kernel (NTK) perspective. This analysis leads to a number of important observations. The most important finding, perhaps, is that their NTK approximates a tuneable low-pass filter. This insight is subsequently used to develop guidelines for optimizing the performance of a sinusoidal network by tuning the bandwidths of its kernels according to the maximum frequency present in the input signal. The paper also suggests an initialization scheme for sinusoidal networks that leads to improved results. The ideas developed in this work are evaluated using two tasks: 1) learning implicit models and 2) solving differential equations. The results suggest that the ideas developed in this work have merit.

Strengths And Weaknesses
This is a well-written paper. I quite liked the narrative structure of this manuscript. The paper begins by constructing a simplified sinusoidal network model that mimics the key characteristics of SIREN but is much more amenable to theoretical analysis. In addition to providing mathematical reasoning that confirms that the simplified sinusoidal model used in this work is similar to SIREN, the paper also provides empirical results that show that the simplified network achieves performance similar to that attained by SIREN. Section 4 shows that the kernel of the simplified network approximates a Gaussian kernel whose width can be tuned. Section 5 uses a toy example to show how the network behaves as this "width" parameter value shifts. It is not immediately obvious to me how to parse the results presented in Figure 3. The following sentence confuses me, "We can see that due to the simple nature of the signal, containing only two frequencies, there are only three loss levels." Why is this? It would be useful for a reader like me to include a sentence for why there are only three loss levels for the current two-frequency problem setup. Section 6 discusses how to tune the aforementioned "width" parameter. The motivation being that this value is "crucial for the learning of the network." This discussion presents a heuristic for setting this "width" parameter to one-eighth of the maximum frequency in the signal. Results in Figure 3 seem to imply this heuristic. This section of the paper is somewhat underwhelming. Why one-eighth? Why not, say, one-tenth, given that it is okay to select a slightly suboptimal value for this "width" parameter since the network is able to adjust it during training? It is, however, clear that using too large a value may result in overfitting. Sections 7 and 8 present results and conclusions.

Clarity, Quality, Novelty And Reproducibility
This is a well-written paper. Sinusoidal neural networks are increasingly being used to learn implicit models, and the work presented in this paper sheds light on the inner workings of these networks. In addition, the work also presents guidelines that can help an interested reader design sinusoidal networks that exhibit better performance and achieve faster convergence. This is all good news.
ICLR
Title Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization Abstract Standard dynamics models for continuous control make use of feedforward computation to predict the conditional distribution of next state and reward given current state and action using a multivariate Gaussian with a diagonal covariance structure. This modeling choice assumes that different dimensions of the next state and reward are conditionally independent given the current state and action and may be driven by the fact that fully observable physics-based simulation environments entail deterministic transition dynamics. In this paper, we challenge this conditional independence assumption and propose a family of expressive autoregressive dynamics models that generate different dimensions of the next state and reward sequentially conditioned on previous dimensions. We demonstrate that autoregressive dynamics models indeed outperform standard feedforward models in log-likelihood on heldout transitions. Furthermore, we compare different model-based and model-free off-policy evaluation (OPE) methods on RL Unplugged, a suite of offline MuJoCo datasets, and find that autoregressive dynamics models consistently outperform all baselines, achieving a new state-of-the-art. Finally, we show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer through data augmentation and improving performance using model-based planning. 1 INTRODUCTION Model-based Reinforcement Learning (RL) aims to learn an approximate model of the environment’s dynamics from existing logged interactions to facilitate efficient policy evaluation and optimization. Early work on Model-based RL uses simple tabular (Sutton, 1990; Moore and Atkeson, 1993; Peng and Williams, 1993) and locally linear (Atkeson et al., 1997) dynamics models, which often result in a large degree of model bias (Deisenroth and Rasmussen, 2011). Recent work adopts feedforward neural networks to model complex transition dynamics and improve generalization to unseen states and actions, achieving a high level of performance on standard RL benchmarks (Chua et al., 2018; Wang et al., 2019). However, standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action, which may lead to a poor estimation of uncertainty and unclear effects on RL applications. In this work, we propose a new family of autoregressive dynamics models and study their effectiveness for off-policy evaluation (OPE) and offline policy optimization on continuous control. Autoregressive dynamics models generate each dimension of the next state conditioned on previous dimensions of the next state, in addition to the current state and action (see Figure 1). This means that to sample the next state from an autoregressive dynamics model, one needs n sequential steps, where n is the number of state dimensions, and one more step to generate the reward. By contrast, standard feedforward dynamics models take current state and action as input and predict the distribution of the next state and reward as a multivariate Gaussian with a diagonal covariance structure (e.g., Chua et al. (2018); Janner et al. (2019)). This modeling choice assumes that different state dimensions are conditionally independent. ∗Work done as an intern at Google Brain. 
Autoregressive generative models have seen success in generating natural images (Parmar et al., 2018), text (Brown et al., 2020), and speech (Oord et al., 2016), but they have not seen use in Model-based RL for continuous control. We find that autoregressive dynamics models achieve higher log-likelihood compared to their feedforward counterparts on heldout validation transitions of all DM continuous control tasks (Tassa et al., 2018) from the RL Unplugged dataset (Gulcehre et al., 2020). To determine the impact of improved transition dynamics models, we primarily focus on OPE because it allows us to isolate contributions of the dynamics model in value estimation vs. the many other factors of variation in policy optimization and data collection. We find that autoregressive dynamics models consistently outperform existing Model-based and Model-free OPE baselines on continuous control in both ranking and value estimation metrics. We expect that our advances in model-based OPE will improve offline policy selection for offline RL (Paine et al., 2020). Finally, we show that our autoregressive dynamics models can help improve offline policy optimization by model predictive control, achieving a new state-of-the-art on cheetah-run and fish-swim from RL Unplugged (Gulcehre et al., 2020). Key contributions of this paper include: • We propose autoregressive dynamics models to capture dependencies between state dimensions in forward prediction. We show that autoregressive models improve log-likelihood over nonautoregressive models for continuous control tasks from the DM Control Suite (Tassa et al., 2018). • We apply autoregressive dynamics models to Off-Policy Evaluation (OPE), surpassing the performance of state-of-the art baselines in median absolute error, rank correlation, and normalized top-5 regret across 9 control tasks. • We show that autoregressive dynamics models are more useful than feedforward models for offline policy optimization, serving as a way to enrich experience replay by data augmentation and improving performance via model-based planning. 2 PRELIMINARIES Here we introduce relevant notation and discuss off-policy (offline) policy evaluation (OPE). We refer the reader to Lange et al. (2012) and Levine et al. (2020) for background on offline RL, which is also known as batch RL in the literature. A finite-horizon Markov Decision Process (MDP) is defined by a tupleM = (S,A, T , d0, r, γ), where S is a set of states s ∈ S, A is a set of actions a ∈ A, T defines transition probability distributions p(st+1|st, at), d0 defines the initial state distribution d0 ≡ p(s0), r defines a reward function r : S × A → R, and γ is a scalar discount factor. A policy π(a | s) defines a conditional distribution over actions conditioned on states. A trajectory consists of a sequence of states and actions τ = (s0, a0, s1, a1, . . . , sH) of horizon length H . We use st,i to denote the i-th dimension of the state at time step t (and similarly for actions). In reinforcement learning, the objective is to maximize the expected sum of discounted rewards over the trajectory distribution induced by the policy: Vγ(π) = Eτ∼pπ(τ) [ H∑ t=0 γtr(st, at) ] . (1) The trajectory distribution is characterized by the initial state distribution, policy, and transition probability distribution: pπ(τ) = d0(s0) H−1∏ t=0 π(at|st)p(st+1|st, at). (2) In offline RL, we are given access to a dataset of transitions D = {(sit, ait, rit+1, sit+1)}Ni=1 and a set of initial states S0. 
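As a small illustration of these definitions (not from the paper; the sampling interfaces below are our own assumptions), a trajectory from p_π(τ) in Eq. (2) and its discounted return in Eq. (1) can be computed as follows.

```python
def sample_trajectory(policy, transition, d0, horizon, rng):
    # tau ~ p_pi(tau), Eq. (2): s0 ~ d0, a_t ~ pi(.|s_t), (s_{t+1}, r_{t+1}) ~ p(.|s_t, a_t).
    # `policy`, `transition`, and `d0` are assumed to be sampling callables.
    s, rewards = d0(rng), []
    for _ in range(horizon):
        a = policy(s, rng)
        s, r = transition(s, a, rng)
        rewards.append(r)
    return rewards

def discounted_return(rewards, gamma):
    # Eq. (1): sum_t gamma^t * r for a single sampled trajectory.
    return sum(gamma**t * r for t, r in enumerate(rewards))
```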
Offline RL is inherently a data-driven approach since the agent needs to optimize the same objective as in Eq. (1) but is not allowed additional interactions with the environment. Even though offline RL offers the promise of leveraging existing logged datasets, current offline RL algorithms (Fujimoto et al., 2019; Agarwal et al., 2020; Kumar et al., 2019) are typically evaluated using online interaction, which limits their applicability in the real world. The problem of off-policy (offline) policy evaluation (OPE) entails estimating $V_\gamma(\pi)$, the value of a target policy $\pi$, based on a fixed dataset of transitions denoted $\mathcal{D}$, without access to the environment’s dynamics. Some OPE methods assume that $\mathcal{D}$ is generated from a known behavior (logging) policy $\mu$ and assume access to $\mu$ in addition to $\mathcal{D}$. In practice, the logged dataset $\mathcal{D}$ may be the result of following some existing system that does not have a probabilistic form. Hence, in our work, we will assume no access to the original behavior policy $\mu$ for OPE. That said, for methods that require access to $\mu$, we train a behavior cloning policy on $\mathcal{D}$.

3 PROBABILISTIC DYNAMICS MODELS

Feedforward dynamics model. In the context of our paper, we use the term “model” to jointly refer to the forward dynamics model $p_s(s_{t+1} \mid s_t, a_t)$ and reward model $p_r(r_{t+1} \mid s_t, a_t)$. We use neural nets to parameterize both distributions since they are powerful function approximators that have been effective for model-based RL (Chua et al., 2018; Nagabandi et al., 2018; Janner et al., 2019). Let $\theta$ denote the parameters of a fully connected network used to model $p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t)$. We expect joint modeling of the next state and reward to benefit from sharing intermediate network features. Similar to prior work (Janner et al., 2019), our baseline feedforward model outputs the mean and log variance of all state dimensions and the reward simultaneously, as follows:

$$p_\theta(s_{t+1}, r_{t+1} \mid s_t, a_t) = \mathcal{N}\big(\mu(s_t, a_t), \mathrm{Diag}(\exp\{l(s_t, a_t)\})\big), \qquad (3)$$

where $\mu(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the mean for the concatenation of the next state and reward, $l(s_t, a_t) \in \mathbb{R}^{n+1}$ denotes the log variance, and $\mathrm{Diag}(v)$ is an operator that creates a diagonal matrix with the main diagonal specified by the vector $v$. During training, we seek to minimize the negative log-likelihood of the parameters given observed transitions in the dataset $\mathcal{D}$:

$$\ell(\theta \mid \mathcal{D}) = -\sum_{(s, a, r', s') \in \mathcal{D}} \log p_\theta(s', r' \mid s, a). \qquad (4)$$

While it is possible to place different weights on the loss for next state and reward prediction, we did not apply any special weighting and treated the reward as an additional state dimension in all of our experiments. This is straightforward to implement and does not require tuning an additional hyperparameter, which is challenging for OPE. Note that the input has $|s| + |a|$ dimensions.

Autoregressive dynamics model. We now describe our autoregressive model. We seek to demonstrate the utility of predicting state dimensions in an autoregressive way. Therefore, rather than using a complex neural network architecture, where improvements in log-likelihood and policy evaluation are confounded by architectural differences, we opt to make simple modifications to the feedforward model described above. This allows us to isolate the source of performance improvements. The autoregressive model we use is a fully connected model that predicts the mean and log variance of a single state dimension. We augment the input space of the baseline with the previously predicted state dimensions and a one-hot encoding to indicate which dimension to predict.
This is illustrated in Figure 1. The autoregressive model therefore has $3|s| + |a|$ input dimensions. Hence, the autoregressive model has a small number of additional weights in the first fully connected layer, but as will be shown in our experiments, these extra parameters are not the reason for a performance gain. At training time, the autoregressive model has a similar computational cost to the fully connected model, as we can mask ground truth states and use data parallelism to compute all state dimensions simultaneously. At inference, the autoregressive model requires additional forward passes, on the order of the number of state dimensions in a given environment. We use the default ordering for the state dimensions in a given environment, though it is interesting to explore different orderings in future work. The negative log-likelihood for an autoregressive model takes the form of:

$$\ell(\theta \mid \mathcal{D}) = -\sum_{(s, a, r', s') \in \mathcal{D}} \Big[ \log p_\theta(r' \mid s, a, s') + \sum_{i=1}^{n} \log p_\theta(s'_i \mid s, a, s'_1, \ldots, s'_{i-1}) \Big], \qquad (5)$$

where we use the chain rule to factorize the joint probability $p(s', r' \mid s, a)$. The main advantage of the autoregressive model is that it makes no conditional independence assumption between next state dimensions. This class of models can therefore capture non-unimodal dependencies, e.g., between different joint angles of a robot. Paduraru (2007) demonstrates this increased expressivity in the tabular setting, constructing an example on which a model assuming conditional independence fails. While the expressive power of autoregressive models has been shown in various generative models (Parmar et al., 2018; Oord et al., 2016), autoregressive dynamics models have not seen much use in Model-based RL for continuous control before this work.

Algorithm 1 Model-based OPE
Require: Number of rollouts $n$, discount factor $\gamma$, horizon length $H$, policy $\pi$, dynamics model $p$, set of initial states $S_0$
for $i = 1, 2, \ldots, n$ do
  $R_i \leftarrow 0$
  sample initial state $s_0 \sim S_0$
  for $t = 0, 1, 2, \ldots, H-1$ do
    sample from policy: $a_t \sim \pi(\cdot \mid s_t)$
    sample from the dynamics model: $s_{t+1}, r_{t+1} \sim p(\cdot, \cdot \mid s_t, a_t)$
    $R_i \leftarrow R_i + \gamma^t r_{t+1}$
  end for
end for
return $\frac{1}{n} \sum_{i=1}^{n} R_i$

Model-based OPE. Once a dynamics model is trained from offline data, OPE can be performed in a direct and primitive way. We let the policy and model interact: the policy generates the next action, the model plays the role of the environment and generates the next state and reward. Due to the stochasticity in the model and the policy, we estimate the return for a policy with Monte-Carlo sampling and monitor the standard error. See Algorithm 1 for pseudocode.

4 RELATED WORK

Our work follows a long line of OPE research, which is especially relevant to many practical domains such as medicine (Murphy et al., 2001), recommendation systems (Li et al., 2011), and education (Mandel et al., 2014) in order to avoid the costs and risks associated with online evaluation. There exists a large body of work on OPE, including methods based on importance weighting (Precup, 2000; Li et al., 2014) and Lagrangian duality (Nachum et al., 2019; Yang et al., 2020; Uehara and Jiang, 2019). The model-based approach that we focus on in this paper lies within the class of algorithms referred to as the direct method (Kostrikov and Nachum, 2020; Dudík et al., 2011; Voloshin et al., 2019), which approximate the value of a new policy by either explicitly or implicitly estimating the transition and reward functions of the environment.
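To make the two model families of Section 3 concrete, below is a minimal PyTorch sketch; it is not the authors' implementation, and the layer sizes, the `sample` interface, and the way the reward step is indicated to the autoregressive network are illustrative assumptions rather than details given in the paper:

```python
import torch
import torch.nn as nn

class FeedforwardDynamicsModel(nn.Module):
    # Predicts mean and log-variance of [next_state, reward] in one pass (Eqs. 3-4).
    def __init__(self, state_dim, action_dim, hidden=512):
        super().__init__()
        out = state_dim + 1  # next state plus reward
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * out),
        )

    def nll(self, s, a, s_next, r_next):
        # r_next is expected with a trailing dimension of size 1.
        mu, logvar = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        target = torch.cat([s_next, r_next], dim=-1)
        dist = torch.distributions.Normal(mu, (0.5 * logvar).exp())
        return -dist.log_prob(target).sum(dim=-1).mean()

class AutoregressiveDynamicsModel(nn.Module):
    # Predicts one dimension per forward pass; the input concatenates the current
    # state, action, the next-state dimensions generated so far (zero-padded), and
    # a one-hot indicator of the dimension to predict, giving 3|s| + |a| inputs.
    def __init__(self, state_dim, action_dim, hidden=512):
        super().__init__()
        self.state_dim = state_dim
        self.net = nn.Sequential(
            nn.Linear(3 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # mean and log-variance of a single dimension
        )

    def _predict_dim(self, s, a, generated, indicator):
        x = torch.cat([s, a, generated, indicator], dim=-1)
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, (0.5 * logvar).exp()

    @torch.no_grad()
    def sample(self, s, a):
        # n sequential steps for the next state (default dimension ordering),
        # followed by one more step for the reward, as described in the text.
        n = self.state_dim
        generated = torch.zeros_like(s)
        for i in range(n):
            indicator = torch.zeros_like(s)
            indicator[..., i] = 1.0
            mu, sigma = self._predict_dim(s, a, generated, indicator)
            generated[..., i:i + 1] = mu + sigma * torch.randn_like(mu)
        # Reward step, conditioned on the fully generated next state. Reusing the
        # network with an all-zero indicator is an assumption made for this sketch.
        mu_r, sigma_r = self._predict_dim(s, a, generated, torch.zeros_like(s))
        reward = mu_r + sigma_r * torch.randn_like(mu_r)
        return generated, reward
```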
While model-based policy evaluation has been considered by previous works (Paduraru, 2007; Thomas and Brunskill, 2016a; Hanna et al., 2017), it has largely been confined to simple domains with finite state and action spaces where function approximation is not necessary. By contrast, our work provides an extensive demonstration of model-based OPE in challenging continuous control benchmark domains. Previous instances of the use of function approximation for model-based OPE (Hallak et al., 2015) impose strong assumptions on the probabilistic dynamics models, such as factorability of the MDP. Our results indicate that even seemingly benign assumptions about the independence of different state dimensions can have detrimental consequences for the effectiveness of a model-based OPE estimate. While the use of model-based principles in OPE has been relatively rare, it has been more commonly used for policy optimization. The field of model-based RL has matured in recent years to yield impressive results for both online (Nagabandi et al., 2018; Chua et al., 2018; Kurutach et al., 2018; Janner et al., 2019) and offline (Matsushima et al., 2020; Kidambi et al., 2020; Yu et al., 2020; Argenson and Dulac-Arnold, 2020) policy optimization. Several of the techniques we employ, such as the normalization of the observation space, are borrowed from this previous literature (Nagabandi et al., 2018; Chua et al., 2018). Conversely, we present strong empirical evidence that the benefits of our introduced autoregressive generative models of state observations do carry over to model-based policy optimization, at least in the offline setting, and this is an interesting avenue for future work. 5 RESULTS We conduct our experiments on the DeepMind control suite (Tassa et al., 2018), a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We use the offline datasets from RL Unplugged (Gulcehre et al., 2020), the details of which are provided in Table 1. These environments capture a wide range of complexity, from 40K transitions in a 5-dimensional cartpole environment to 1.5 million transitions on complex manipulation tasks. We follow the evaluation protocol in the Deep OPE (Fu et al., 2021) benchmark and use policies generated by four different algorithms: behavioral cloning (Bain, 1995), D4PG (Barth-Maron et al., 2018), Critic Regularized Regression (Wang et al., 2020), and ABM (Siegel et al., 2019). With varied hyperparameters, these form a diverse set of policies of varying quality. We perform a thorough hyperparameter sweep in the experiments and use standard practice from generative modeling to improve the quality of the models. We allocate 80% of the data for training and 20% of the data for model selection. We vary the depth and width of the neural networks (number of layers ∈ {3, 4}, layer size ∈ {512, 1024}), add different amounts of noise to input states and actions, and consider two levels of weight decay for regularization (input noise ∈ {0, 1e−6, 1e−7}, weight decay ∈ {0, 1e−6}). For the choice of optimizer, we consider both Adam (Kingma and Ba, 2014) and SGD with momentum and find Adam to be more effective at maximizing log-likelihood across all tasks in preliminary experiments. We thus use Adam in all of our experiments with two learning rates ∈ {1e−3, 3e−4}. We decay the optimizer’s learning rate linearly to zero throughout training, finding this choice to outperform a constant learning rate. Lastly, we find that longer training often improves log-likelihood results. 
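The sweep above can be enumerated directly; a small sketch of the grid follows (the dictionary keys are illustrative names, not the authors' configuration schema):

```python
from itertools import product

layers        = [3, 4]
layer_sizes   = [512, 1024]
input_noise   = [0.0, 1e-6, 1e-7]
weight_decay  = [0.0, 1e-6]
learning_rate = [1e-3, 3e-4]

configs = [
    dict(num_layers=nl, width=w, noise=nz, wd=wd, lr=lr)
    for nl, w, nz, wd, lr in product(layers, layer_sizes, input_noise,
                                     weight_decay, learning_rate)
]
assert len(configs) == 48  # 2 * 2 * 3 * 2 * 2 combinations per model family
```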
We use 500 epochs for training final models. For each task we consider in total 48 hyperparameter combinations (listed above) for both models and pick the best model in each model family based on validation log-likelihood. This model is then used for model-based OPE and policy optimization. Note that, in our experiments, 20% of the transitions are used only for validation, but we believe one can re-train the models with the best hyperparameter configuration on the full transition datasets to improve the results even further. 5.1 AUTOREGRESSIVE DYNAMICS MODELS OUTPERFORM FEEDFORWARD MODELS IN NLL To evaluate the effectiveness of autoregressive dynamics models compared to feedforward counterparts, Table 2 reports negative log-likelihood (NLL) on the heldout validation set for the best performing models from our hyperparameter sweep. For each environment, we report the NLL for the best-performing model (Top-1) and the average NLL across the Top-5 models. The autoregressive model has lower NLL on all environments, indicating that it generalizes better to unseen data. To study the impact of model size on NLL, Figure 2 shows validation NLL as a function of parameter count. We find that on small datasets large models hurt, but more importantly autoregressive models outperform feedforward models regardless of the parameter count regime, i.e., even small autoregressive models attain a lower validation NLL compared to big feedforward models. This indicates that autoregressive models have a better inductive bias in modeling the transition dynamics than feedforward models that make a conditional independence assumption. 5.2 ARE DYNAMICS MODELS WITH LOWER NLL BETTER FOR MODEL-BASED OPE? We ultimately care not just about the log-likelihood numbers, but also whether or not the dynamics models are useful in policy evaluation and optimization. To study the relationship of NLL and OPE performance for model-based methods, we compute OPE estimates via Algorithm 1 and compute the Pearson correlation between the OPE estimates and the true discounted returns. This serves as a measure of the effectiveness of the model for OPE. We repeat this for all 96 dynamics models we trained on a given environment and plot the correlation coefficients against validation NLL in Figure 3. Models with low NLL are generally more accurate in OPE. Lambert et al. (2020) have previously demonstrated that in Model-based RL, “training cost does not hold a strong correlation to maximization of episode reward." We use validation NLL instead, and our results on policy evaluation decouple the model from policy optimization, suggesting a more nuanced picture: low validation NLL numbers generally correspond to accurate policy evaluation, while higher NLL numbers are generally less meaningful. In other words, if the dynamics model does not capture the transition dynamics accurately enough, then it is very hard to predict its performance on OPE. However, once the model starts to capture the dynamics faithfully, we conjecture that NLL starts to become a reasonable metric for model selection. For instance, validation NLL does not seem to be a great metric for ranking feedforward models, whereas it is more reasonable for autoregressive models. 5.3 COMPARISON WITH OTHER OPE METHODS We adopt a recently proposed benchmark for OPE (Fu et al., 2021) and compare our model-based approaches with state-of-the-art OPE baselines therein. 
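The comparisons in this and the following subsection rest on two ingredients: Monte-Carlo returns computed in the learned model (Algorithm 1) and correlation/regret statistics between those estimates and the true returns. A minimal sketch is given below, assuming generic `policy.sample` and `model.sample` interfaces and an illustrative rollout count; it is not the authors' evaluation code:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def model_based_ope(policy, model, initial_states, gamma, horizon, num_rollouts=64):
    # Algorithm 1: average discounted return of model rollouts started from S0.
    returns = []
    for _ in range(num_rollouts):
        s = initial_states[np.random.randint(len(initial_states))]
        ret, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy.sample(s)
            s, r = model.sample(s, a)
            ret += discount * r
            discount *= gamma
        returns.append(ret)
    return float(np.mean(returns))

def ope_summary(estimates, true_returns, k=5):
    # Statistics used to compare OPE methods (see also Appendix A.1).
    estimates, true_returns = np.asarray(estimates), np.asarray(true_returns)
    top_k = np.argsort(estimates)[-k:]  # policies the estimates rank highest
    return dict(
        pearson=pearsonr(estimates, true_returns)[0],
        rank_correlation=spearmanr(estimates, true_returns)[0],
        median_abs_error=float(np.median(np.abs(estimates - true_returns))),
        regret_at_k=float(true_returns.max() - true_returns[top_k].max()),
    )
```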
Figures 4 and B.4 compare OPE estimates from two Fitted-Q Evaluation (FQE) baselines (Le et al., 2019; Kostrikov and Nachum, 2020; Paine et al., 2020), our feedforward models, and the autoregressive approach. Each plot reports the Pearson correlation between the OPE estimates and the true returns. The autoregressive model consistently outperforms the feedforward model and FQE methods on most environments. We report ensembling results in the appendix, but compare single models for fairness in the rest of the paper. We compute summary statistics for OPE methods in Table 3, Table A.1, and Table A.2. These tables report the Spearman’s rank correlation, regret, and absolute error, respectively. These metrics capture different desirable properties of OPE methods (Fu et al., 2021); more details about how they are computed are in the appendix. In all three metrics, the autoregressive model achieves the best median performance across nine environments, whereas the baseline model is not as good as FQE. The only environment in which the autoregressive model has negative rank correlation is manipulator insert ball. In addition, a major advantage of our model-based approach over FQE is that the model only needs to be trained once per environment—we do not need to perform additional policy-specific optimization, whereas FQE needs to optimize a separate Q-function approximator per policy. 5.4 AUTOREGRESSIVE DYNAMICS MODELS FOR OFFLINE POLICY OPTIMIZATION Policy evaluation is an integral part of reinforcement learning. Improvement in policy evaluation can therefore be adapted for policy optimization. In this section, we explore two possibilities of using models to improve offline reinforcement learning. In all experiments, we use Critic Regularized Regression (CRR) as a base offline reinforcement learning algorithm (Wang et al., 2020). First, we utilize the model during test time for planning by using a modified version of Model Predictive Path Integral (MPPI) (Williams et al., 2015). Unlike MPPI, we truncate the planning process after 10 steps of rollout and use the CRR critic to evaluate future discounted returns. We provide additional details in the appendix. Secondly, we use the model to augment the transition dataset to learn a better critic for CRR. More precisely, given sit ∼ D, and the current policy π, we can generate additional data using the following process: âit ∼ π(·|sit), ŝit+1, r̂it+1 ∼ p(·, ·|sit, ât). These two options are orthogonal and can be applied jointly. We implemented both techniques on top of the CRR exp variant (Wang et al., 2020) and show their combined effect in Figure 5. The figure shows that autoregressive dynamics models also outperform feedforward ones in the policy optimization context. Notably, in the case of cheetah run and fish swim, using autoregressive models for planning as well as data augmentation enables us to outperform the previous state-of-the-art on these offline datasets. Additionally, when using autoregressive dynamics models, both techniques improve performance. In the appendix, we show this result as well as more ablations. 6 CONCLUSION This paper shows the promise of autoregressive models in learning transition dynamics for continuous control, showing strong results for off-policy policy evaluation and offline policy optimization. Our contributions to offline model-based policy optimization are orthogonal to prior work that uses ensembles to lower the values when ensemble components disagree (Kidambi et al., 2020). 
Incorporating conservative value estimation into our method is an interesting avenue for future research. We use relatively primitive autoregressive neural architectures in this paper to enable a fair comparison with existing feedforward dynamics models. That said, it will be exciting to apply more sophisticated autoregressive neural network architectures with cross attention (Bahdanau et al., 2014) and self-attention (Vaswani et al., 2017) to Model-based RL for continuous control.

Acknowledgements We thank Jimmy Ba, William Chan, Rishabh Agarwal, Dale Schuurmans, and Silviu Pitis for fruitful discussions on our work. We are also grateful for the helpful comments from Lihong Li, Jenny Liu, Harris Chan, Keiran Paster, Sheng Jia, and Tingwu Wang on earlier drafts.

A OFFLINE POLICY EVALUATION

We use the baseline results in Fu et al. (2021). For convenience, we replicate their description of the OPE baselines and metrics.

A.1 OPE METRICS

To evaluate the OPE algorithms, we compute three different metrics between the estimated returns and the ground truth returns:
1. Rank correlation: This metric assesses how well estimated values rank policies. It is equal to the correlation between the ranking (sorted order) by the OPE estimates and the ranking by the ground truth values.
2. Absolute Error: This metric measures the deviations of the estimates from the ground truth and does not directly assess the usefulness for ranking.
3. Regret@k: This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. Regret@k is the difference between the actual expected return of the best policy in the entire set and the actual value of the best policy in the top-k set.

A.2 OPE BASELINES

Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy $\pi_e$ by bootstrapping from $Q(s', \pi_e(s'))$. We tried two different implementations, one from Kostrikov and Nachum (2020) and another from Paine et al. (2020).

Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov and Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Liu et al., 2018; Nachum et al., 2019). Since the behavior policy is not known explicitly, we learn an estimate of it via a max-likelihood objective over the dataset $\mathcal{D}$, as advocated by Hanna et al. (2019). In order to be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 for all deterministic target policies.

Doubly-Robust (DR) We perform weighted doubly-robust policy evaluation based on Thomas and Brunskill (2016b), using the implementation of Kostrikov and Nachum (2020). Specifically, this method combines the IS technique above with a value estimator for variance reduction. The value estimator is learned according to Kostrikov and Nachum (2020), using deep FQE with an L2 loss function.

DICE This method uses a saddle-point objective to estimate marginalized importance weights $d^\pi(s, a) / d^{\pi_B}(s, a)$; these weights are then used to compute a weighted average of reward over the offline dataset, and this serves as an estimate of the policy's value in the MDP. We use the implementation from Yang et al. (2020) corresponding to the algorithm BestDICE.
Variational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights $d^\pi(s, a) / d^{\pi_B}(s, a)$ without knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards, similar to the DICE method. Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter $\lambda$, the learning rates $\alpha_\theta$ and $\alpha_v$, and the number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks.

A.3 ENSEMBLING

As in Chua et al. (2018); Janner et al. (2019), we can form an ensemble using our best-performing models. We generate rollouts using the procedure detailed in Janner et al. (2019), forming an ensemble with 4 models. We see some improvement in policy evaluation results, as shown in Figure A.1. Ensembling could likely be further improved by forcing unique hyperparameter settings and seeds.

Algorithm 2 Model Predictive Path Integral Planning
Require: state $s$, policy $\pi$, dynamics model $p$, critic $Q$, temperature $\beta$, and noise variance $\sigma^2$.
for $m = 1, \ldots, M$ do
  for $n = 1, \ldots, N$ do
    $s^0_n \leftarrow s$
    $R_n \leftarrow 0$
    for $\tau = 0, \ldots, H-1$ do
      $a^\tau_n \sim \pi(\cdot \mid s^\tau_n)$
      $s^{\tau+1}_n, r^{\tau+1}_n \sim p(\cdot, \cdot \mid s^\tau_n, a^\tau_n)$
      $R_n \leftarrow R_n + \gamma^\tau r^{\tau+1}_n$
    end for
    $a^H_n \sim \pi(\cdot \mid s^H_n)$
    $R_n \leftarrow R_n + \gamma^H Q(s^H_n, a^H_n)$
  end for
  Re-define $\pi$ such that $\pi(\cdot \mid \hat{s}^\tau) = \sum_n \frac{\exp(R_n / \beta)}{\sum_m \exp(R_m / \beta)} \, \mathcal{N}(\cdot \mid a^\tau_n, \sigma^2 I)$. ($\pi$ depends on $\tau$ and not $\hat{s}$.)
end for
sample final action $a \sim \sum_n \frac{\exp(R_n / \beta)}{\sum_m \exp(R_m / \beta)} \, \delta(a^0_n)$
return $a$

B ADDITIONAL DETAILS REGARDING POLICY OPTIMIZATION

To test dynamics models for policy optimization, we implement the two methods discussed in Section 5.4 on top of CRR exp, one of the CRR variants (Wang et al., 2020). We use the RL Unplugged datasets (Gulcehre et al., 2020) for all environments studied in this section. When using data augmentation, we adopt a 1-to-1 ratio between the original dataset and the augmented dataset. To take advantage of the dynamics models at test time, we use a variant of Model Predictive Path Integral (MPPI) for planning. To reduce the planning horizon, we truncate the model rollout using CRR critics. The details of the planning procedure are summarized in Algorithm 2. All hyperparameter tuning for the planning process is conducted on the “cartpole swingup” task. The hyperparameters used in the planning process are $M = 3$, $N = 16$, $H = 10$, $\beta = 0.1$, and $\sigma^2 = 0.01$. To match the temperature used in the planning component, we choose $\beta = 0.1$ for the CWP component of CRR. This change, however, does not impact the baseline CRR agent performance much. With the exception of $\beta$ and the planning component, all hyperparameters are kept the same as CRR exp. We compare the agents’ performance with and without the planning procedure to test its effects. As shown in Figure B.2, planning using an autoregressive model significantly increases performance. Data augmentation does not change the agents’ performance on cartpole swingup, fish swim, or finger turn hard. It, however, boosts performance considerably on cheetah run. In Figure B.3, we show the effects of data augmentation on cheetah run.
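A minimal sketch of how the learned model is used on top of CRR follows, covering both the data-augmentation step from Section 5.4 and a simplified version of the truncated MPPI planner above. Here `policy`, `model`, `critic`, and `replay_buffer` are assumed interfaces, the single reweighting round stands in for the M refinement rounds of Algorithm 2, and the discount factor is an illustrative choice:

```python
import numpy as np

def augment_replay(replay_buffer, logged_states, policy, model):
    # Data augmentation: relabel logged states with model-generated actions,
    # rewards, and next states (mixed 1-to-1 with the original data, per Appendix B).
    for s in logged_states:
        a_hat = policy.sample(s)                    # a ~ pi(. | s)
        s_hat, r_hat = model.sample(s, a_hat)       # s', r ~ p(., . | s, a)
        replay_buffer.add(s, a_hat, r_hat, s_hat)

def truncated_mppi_action(s, policy, model, critic, N=16, H=10, beta=0.1, gamma=0.99):
    # Roll out N candidate action sequences for H steps in the model, bootstrap
    # with the CRR critic, and sample the first action with softmax weights.
    first_actions, returns = [], []
    for _ in range(N):
        s_tau, ret, first_action = s, 0.0, None
        for tau in range(H):
            a_tau = policy.sample(s_tau)
            if tau == 0:
                first_action = a_tau
            s_tau, r = model.sample(s_tau, a_tau)
            ret += (gamma ** tau) * r
        a_H = policy.sample(s_tau)
        ret += (gamma ** H) * critic.q(s_tau, a_H)  # truncate rollout with the critic
        first_actions.append(first_action)
        returns.append(ret)
    logits = np.asarray(returns) / beta
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                        # exp(R_n / beta) / sum_m exp(R_m / beta)
    return first_actions[int(np.random.choice(N, p=weights))]
```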
1. What is the focus of the paper regarding policy evaluation and optimization?
2. What are the strengths of the proposed approach using autoregressive models?
3. What are the weaknesses or limitations of the method?
4. Are there any concerns about the experimental setup or results?
5. How does the reviewer assess the overall quality and contribution of the paper?
Review
Review Summary The paper studies offline policy evaluation (OPE) and optimization in the model-based setting. The main methodological contribution of the paper is using autoregressive models for the next state and reward prediction. The authors demonstrate that autoregressive models achieve higher likelihood compared to feedforward models on 9 environments from RL Unplugged [1] offline dataset. Given that model likelihood is only a proxy quality metric in OPE and control, they further demonstrate a positive correlation between likelihood and OPE estimates. The paper shows quantitatively that using autoregressive models results in more accurate OPE estimates than for feedforward models and model-free benchmarks. Finally, the authors apply autoregressive models for offline control and achieve higher returns than for feedforward models. Strengths The paper is written clearly and generally easy to follow. The proposed modification is simple, straightforward to implement, and demonstrates convincing results consistently on different environments. For example, the median rank correlation between OPE and ground truth is the best against 7 OPE baselines on 9 environments from RL Unplugged. The experimental setup follows the standard practices (e.g. using a validation set for hyperparameter selection) and the details necessary for the reproduction of the results are provided (e.g. optimizers, learning rate schedules, number of epochs, architectures). Weaknesses The authors claim that “standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action”. In other words, p(s’,r|s,a) is claimed to be equal to p(s’_1|s,a) … p(s’_n|s,a) p(r|s,a) when using a feedforward model. The statement is incorrect unless we use a linear function approximator as a model. However, this mistake does not affect much the quality of the paper. Using autoregressive models does not address aspects that are specific to the offline setting. Providing results for the online setting will be helpful for understanding whether autoregressive models should be favored in general for model-based reinforcement learning. Recommendation The reviewer votes for accepting the paper. The paper is well-written, the proposed extension is simple to implement and convincingly outperforms baselines on a variety of environments. Notes Appendix A.2 is identical to Section 5.1 of another submission [2]. The abbreviation FQE is used throughout the paper but expanded only in the appendix. References [1] Gulcehre, Caglar, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio Gómez Colmenarejo, Konrad Zolna, Rishabh Agarwal et al. "Rl unplugged: Benchmarks for offline reinforcement learning." arXiv preprint arXiv:2006.13888 (2020). [2] Anonymous. Benchmarks for deep off-policy evaluation. In Submission to International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=kWSeGEeHvF8. Under review.
ICLR
1. What is the focus of the paper regarding dynamics models for continuous control tasks? 2. What are the strengths of the proposed approach, particularly in terms of its ability to fit data better and provide a richer model? 3. Do you have any concerns or questions regarding the relevance of the approach only in continuous control tasks, the nature of the inductive bias, and how the model copes with arbitrary orderings? 4. How does the proposed approach compare to prior works, especially regarding its ability to enrich replay buffers? 5. Can you clarify the relationship between the feedforward model and explicit conditional independence assumptions? 6. What is the ultimate goal of using dynamics models in policy evaluation and optimization, and how does the paper address this issue? 7. Why should we not care about log-likelihood numbers when evaluating the performance of dynamics models?
Review
Review The paper proposes extra conditioning in a dynamics model, wherein each dimension of the next state is generated conditioned on previous dimensions as well as the previous state and action. This allows for a richer model (as a model without conditioning on previous dimensions is a special case). The paper claims that this additional conditioning adds a better inductive bias in certain tasks. The new models fit the data better (DeepMind suite, RL Unplugged data) in terms of log-likelihood. The authors study the impact of better models on OPE and on policy optimization. The paper is generally well written. The contribution seems straightforward. Why is this relevant only in continuous control tasks? Is this inductive bias really a general pattern observed in multiple tasks? What happens when there is no structure a priori (e.g., independence holds): do you lose in terms of sample efficiency? Shouldn't any dynamics model be able to enrich the replay buffer? The default ordering may be completely arbitrary; how is the new dynamics model able to cope with this in the experiments? In other words, it is unclear if P(s_i|s_{j<i},...) is appropriate at all without knowing the ordering. Also, since p(s|s_prev,a) = \prod_{i} p(s_i|s_prev,a,s_{j<i}) by definition, how is the feedforward model restrictive, and how does it make an explicit conditional independence assumption? It seems that the feedforward model is not restrictive but too general, and explicitly capturing this sequential dependency across states is useful. We ultimately care not about the log-likelihood numbers, but whether or not the dynamics models are faithful in policy evaluation and optimization. The above needs clarification. What is the faithfulness property? Also, why would we not care about log-likelihood?
ICLR
Title Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization Abstract Standard dynamics models for continuous control make use of feedforward computation to predict the conditional distribution of next state and reward given current state and action using a multivariate Gaussian with a diagonal covariance structure. This modeling choice assumes that different dimensions of the next state and reward are conditionally independent given the current state and action and may be driven by the fact that fully observable physics-based simulation environments entail deterministic transition dynamics. In this paper, we challenge this conditional independence assumption and propose a family of expressive autoregressive dynamics models that generate different dimensions of the next state and reward sequentially conditioned on previous dimensions. We demonstrate that autoregressive dynamics models indeed outperform standard feedforward models in log-likelihood on heldout transitions. Furthermore, we compare different model-based and model-free off-policy evaluation (OPE) methods on RL Unplugged, a suite of offline MuJoCo datasets, and find that autoregressive dynamics models consistently outperform all baselines, achieving a new state-of-the-art. Finally, we show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer through data augmentation and improving performance using model-based planning. 1 INTRODUCTION Model-based Reinforcement Learning (RL) aims to learn an approximate model of the environment’s dynamics from existing logged interactions to facilitate efficient policy evaluation and optimization. Early work on Model-based RL uses simple tabular (Sutton, 1990; Moore and Atkeson, 1993; Peng and Williams, 1993) and locally linear (Atkeson et al., 1997) dynamics models, which often result in a large degree of model bias (Deisenroth and Rasmussen, 2011). Recent work adopts feedforward neural networks to model complex transition dynamics and improve generalization to unseen states and actions, achieving a high level of performance on standard RL benchmarks (Chua et al., 2018; Wang et al., 2019). However, standard feedforward dynamics models assume that different dimensions of the next state and reward are conditionally independent given the current state and action, which may lead to a poor estimation of uncertainty and unclear effects on RL applications. In this work, we propose a new family of autoregressive dynamics models and study their effectiveness for off-policy evaluation (OPE) and offline policy optimization on continuous control. Autoregressive dynamics models generate each dimension of the next state conditioned on previous dimensions of the next state, in addition to the current state and action (see Figure 1). This means that to sample the next state from an autoregressive dynamics model, one needs n sequential steps, where n is the number of state dimensions, and one more step to generate the reward. By contrast, standard feedforward dynamics models take current state and action as input and predict the distribution of the next state and reward as a multivariate Gaussian with a diagonal covariance structure (e.g., Chua et al. (2018); Janner et al. (2019)). This modeling choice assumes that different state dimensions are conditionally independent. ∗Work done as an intern at Google Brain. 
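To make the contrast concrete, here is a minimal sketch (not taken from the paper) of how sampling differs between the two model families; ff_model and ar_model are hypothetical callables returning the mean and standard deviation of a Gaussian.

import numpy as np

def sample_feedforward(ff_model, s, a):
    # One forward pass predicts all next-state dimensions and the reward at once,
    # with a diagonal covariance (dimensions conditionally independent given s, a).
    mu, sigma = ff_model(s, a)                    # each of shape (n + 1,)
    return mu + sigma * np.random.randn(*mu.shape)

def sample_autoregressive(ar_model, s, a, n_dims):
    # n + 1 sequential passes: dimension i is conditioned on the dimensions
    # generated so far, in addition to the current state and action.
    generated = []
    for i in range(n_dims + 1):                   # the last index is the reward
        mu_i, sigma_i = ar_model(s, a, np.array(generated), i)
        generated.append(mu_i + sigma_i * np.random.randn())
    return np.array(generated)                    # next-state dimensions, then the reward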
Autoregressive generative models have seen success in generating natural images (Parmar et al., 2018), text (Brown et al., 2020), and speech (Oord et al., 2016), but they have not seen use in Model-based RL for continuous control. We find that autoregressive dynamics models achieve higher log-likelihood compared to their feedforward counterparts on heldout validation transitions of all DM continuous control tasks (Tassa et al., 2018) from the RL Unplugged dataset (Gulcehre et al., 2020). To determine the impact of improved transition dynamics models, we primarily focus on OPE because it allows us to isolate contributions of the dynamics model in value estimation vs. the many other factors of variation in policy optimization and data collection. We find that autoregressive dynamics models consistently outperform existing Model-based and Model-free OPE baselines on continuous control in both ranking and value estimation metrics. We expect that our advances in model-based OPE will improve offline policy selection for offline RL (Paine et al., 2020). Finally, we show that our autoregressive dynamics models can help improve offline policy optimization by model predictive control, achieving a new state-of-the-art on cheetah-run and fish-swim from RL Unplugged (Gulcehre et al., 2020). Key contributions of this paper include: • We propose autoregressive dynamics models to capture dependencies between state dimensions in forward prediction. We show that autoregressive models improve log-likelihood over non-autoregressive models for continuous control tasks from the DM Control Suite (Tassa et al., 2018). • We apply autoregressive dynamics models to Off-Policy Evaluation (OPE), surpassing the performance of state-of-the-art baselines in median absolute error, rank correlation, and normalized top-5 regret across 9 control tasks. • We show that autoregressive dynamics models are more useful than feedforward models for offline policy optimization, serving as a way to enrich experience replay by data augmentation and improving performance via model-based planning. 2 PRELIMINARIES Here we introduce relevant notation and discuss off-policy (offline) policy evaluation (OPE). We refer the reader to Lange et al. (2012) and Levine et al. (2020) for background on offline RL, which is also known as batch RL in the literature. A finite-horizon Markov Decision Process (MDP) is defined by a tuple M = (S, A, T, d0, r, γ), where S is a set of states s ∈ S, A is a set of actions a ∈ A, T defines transition probability distributions p(s_{t+1}|s_t, a_t), d0 defines the initial state distribution d0 ≡ p(s_0), r defines a reward function r : S × A → R, and γ is a scalar discount factor. A policy π(a | s) defines a conditional distribution over actions conditioned on states. A trajectory consists of a sequence of states and actions τ = (s_0, a_0, s_1, a_1, . . . , s_H) of horizon length H. We use s_{t,i} to denote the i-th dimension of the state at time step t (and similarly for actions). In reinforcement learning, the objective is to maximize the expected sum of discounted rewards over the trajectory distribution induced by the policy: V_γ(π) = E_{τ∼p_π(τ)} [ Σ_{t=0}^{H} γ^t r(s_t, a_t) ]. (1) The trajectory distribution is characterized by the initial state distribution, policy, and transition probability distribution: p_π(τ) = d0(s_0) Π_{t=0}^{H−1} π(a_t|s_t) p(s_{t+1}|s_t, a_t). (2) In offline RL, we are given access to a dataset of transitions D = {(s_t^i, a_t^i, r_{t+1}^i, s_{t+1}^i)}_{i=1}^{N} and a set of initial states S0.
Offline RL is inherently a data-driven approach since the agent needs to optimize the same objective as in Eq. (1) but is not allowed additional interactions with the environment. Even though offline RL offers the promise of leveraging existing logged datasets, current offline RL algorithms (Fujimoto et al., 2019; Agarwal et al., 2020; Kumar et al., 2019) are typically evaluated using online interaction, which limits their applicability in the real world. The problem of off-policy (offline) policy evaluation (OPE) entails estimating V_γ(π), the value of a target policy π, based on a fixed dataset of transitions denoted D, without access to the environment’s dynamics. Some OPE methods assume that D is generated from a known behavior (logging) policy µ and assume access to µ in addition to D. In practice, the logged dataset D may be the result of following some existing system that does not have a probabilistic form. Hence, in our work, we will assume no access to the original behavior policy µ for OPE. That said, for methods that require access to µ, we train a behavior cloning policy on D. 3 PROBABILISTIC DYNAMICS MODELS Feedforward dynamics model. In the context of our paper, we use the term “model” to jointly refer to the forward dynamics model p_s(s_{t+1}|s_t, a_t) and reward model p_r(r_{t+1}|s_t, a_t). We use neural nets to parameterize both distributions since they are powerful function approximators that have been effective for model-based RL (Chua et al., 2018; Nagabandi et al., 2018; Janner et al., 2019). Let θ denote the parameters of a fully connected network used to model p_θ(s_{t+1}, r_{t+1} | s_t, a_t). We expect joint modeling of the next state and reward to benefit from sharing intermediate network features. Similar to prior work (Janner et al., 2019), our baseline feedforward model outputs the mean and log variance of all state dimensions and the reward simultaneously, as follows: p_θ(s_{t+1}, r_{t+1} | s_t, a_t) = N( µ(s_t, a_t), Diag(exp{l(s_t, a_t)}) ), (3) where µ(s_t, a_t) ∈ R^{n+1} denotes the mean for the concatenation of the next state and reward, l(s_t, a_t) ∈ R^{n+1} denotes the log variance, and Diag(v) is an operator that creates a diagonal matrix with the main diagonal specified by the vector v. During training, we seek to minimize the negative log likelihood of the parameters given observed transitions in the dataset D: ℓ(θ | D) = − Σ_{(s,a,r′,s′)∈D} log p_θ(s′, r′ | s, a). (4) While it is possible to place different weights on the loss for next state and reward prediction, we did not apply any special weighting and treated the reward as an additional state dimension in all of our experiments. This is straightforward to implement and does not require tuning an additional hyperparameter, which is challenging for OPE. Note that the input has |s| + |a| dimensions. Autoregressive dynamics model. We now describe our autoregressive model. We seek to demonstrate the utility of predicting state dimensions in an autoregressive way. Therefore, rather than using a complex neural network architecture, where improvements in log-likelihood and policy evaluation are confounded by architectural differences, we opt to make simple modifications to the feedforward model described above. This allows us to isolate the source of performance improvements. The autoregressive model we use is a fully connected model that predicts the mean and log variance of a single state dimension. We augment the input space of the baseline with the previously predicted state dimensions and a one-hot encoding to indicate which dimension to predict.
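A minimal PyTorch sketch of this input construction and the per-dimension Gaussian negative log-likelihood is given below; the layer sizes, the teacher forcing via masking, and the omission of the reward dimension are simplifying assumptions rather than the exact architecture used in the paper.

import torch
import torch.nn as nn

class AutoregressiveDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=512):
        super().__init__()
        # Input: current state, action, previously generated dims (masked), one-hot index,
        # giving 3|s| + |a| input dimensions.
        in_dim = 3 * state_dim + action_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # mean and log-variance of a single dimension
        )
        self.state_dim = state_dim

    def nll(self, s, a, s_next):
        # Teacher forcing: for dimension i, condition on the ground-truth dims < i.
        losses = []
        for i in range(self.state_dim):
            mask = torch.zeros_like(s_next)
            mask[:, :i] = 1.0
            onehot = torch.zeros_like(s_next)
            onehot[:, i] = 1.0
            x = torch.cat([s, a, s_next * mask, onehot], dim=-1)
            mean, log_var = self.net(x).chunk(2, dim=-1)
            dist = torch.distributions.Normal(mean, torch.exp(0.5 * log_var))
            losses.append(-dist.log_prob(s_next[:, i:i + 1]))
        return torch.cat(losses, dim=-1).sum(-1).mean()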
This input augmentation is illustrated in Figure 1. The autoregressive model therefore has 3|s| + |a| input dimensions. Hence, the autoregressive model has a small number of additional weights in the first fully connected layer, but as will be shown in our experiments, these extra parameters are not the reason for a performance gain. At training time, the autoregressive model has a similar computational cost to the fully connected model as we can mask ground truth states and use data parallelism to compute all state dimensions simultaneously. At inference, the autoregressive model requires additional forward passes, on the order of the number of state dimensions in a given environment. We use the default ordering for the state dimensions in a given environment, though it is interesting to explore different orderings in future work. The negative log-likelihood for an autoregressive model takes the form of: ℓ(θ | D) = − Σ_{(s,a,r′,s′)∈D} [ log p_θ(r′ | s, a, s′) + Σ_{i=1}^{n} log p_θ(s′_i | s, a, s′_1, . . . , s′_{i−1}) ], (5) where we use the chain rule to factorize the joint probability p(s′, r′ | s, a). The main advantage of the autoregressive model is that it makes no conditional independence assumption between next state dimensions. This class of models can therefore capture non-unimodal dependencies, e.g., between different joint angles of a robot. Paduraru (2007) demonstrates this increased expressivity in the tabular setting, constructing an example on which a model assuming conditional independence fails. While the expressive power of autoregressive models has been shown in various generative models (Parmar et al., 2018; Oord et al., 2016), autoregressive dynamics models have not seen much use in Model-based RL for continuous control before this work. Algorithm 1 Model-based OPE. Require: number of rollouts n, discount factor γ, horizon length H, policy π, dynamics model p, set of initial states S0. for i = 1, 2, . . . , n do: R_i ← 0; sample initial state s_0 ∼ S0; for t = 0, 1, 2, . . . , H − 1 do: sample from the policy: a_t ∼ π(· | s_t); sample from the dynamics model: s_{t+1}, r_{t+1} ∼ p(·, · | s_t, a_t); R_i ← R_i + γ^t r_{t+1}; end for; end for; return (1/n) Σ_{i=1}^{n} R_i. Model-based OPE. Once a dynamics model is trained from offline data, OPE can be performed in a direct and primitive way. We let the policy and model interact—the policy generates the next action, the model plays the role of the environment and generates the next state and reward. Due to the stochasticity in the model and the policy, we estimate the return for a policy with Monte-Carlo sampling and monitor standard error. See Algorithm 1 for pseudocode. 4 RELATED WORK Our work follows a long line of OPE research, which is especially relevant to many practical domains such as medicine (Murphy et al., 2001), recommendation systems (Li et al., 2011), and education (Mandel et al., 2014) in order to avoid the costs and risks associated with online evaluation. There exists a large body of work on OPE, including methods based on importance weighting (Precup, 2000; Li et al., 2014) and Lagrangian duality (Nachum et al., 2019; Yang et al., 2020; Uehara and Jiang, 2019). The model-based approach that we focus on in this paper lies within the class of algorithms referred to as the direct method (Kostrikov and Nachum, 2020; Dudík et al., 2011; Voloshin et al., 2019), which approximate the value of a new policy by either explicitly or implicitly estimating the transition and reward functions of the environment.
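As a concrete illustration of this direct, model-based estimate, a minimal sketch of Algorithm 1 follows; policy.sample and model.sample are assumed interfaces.

import numpy as np

def model_based_ope(policy, model, init_states, n_rollouts, horizon, gamma):
    # Let the policy and the learned model interact; average the discounted returns.
    returns = []
    for _ in range(n_rollouts):
        s = init_states[np.random.randint(len(init_states))]
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy.sample(s)                  # a_t ~ pi(.|s_t)
            s, r = model.sample(s, a)             # s_{t+1}, r_{t+1} ~ p(.,.|s_t, a_t)
            total += discount * r
            discount *= gamma
        returns.append(total)
    returns = np.array(returns)
    return returns.mean(), returns.std() / np.sqrt(len(returns))   # value estimate, standard error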
While model-based policy evaluation has been considered by previous works (Paduraru, 2007; Thomas and Brunskill, 2016a; Hanna et al., 2017), it has largely been confined to simple domains with finite state and action spaces where function approximation is not necessary. By contrast, our work provides an extensive demonstration of model-based OPE in challenging continuous control benchmark domains. Previous instances of the use of function approximation for model-based OPE (Hallak et al., 2015) impose strong assumptions on the probabilistic dynamics models, such as factorability of the MDP. Our results indicate that even seemingly benign assumptions about the independence of different state dimensions can have detrimental consequences for the effectiveness of a model-based OPE estimate. While the use of model-based principles in OPE has been relatively rare, it has been more commonly used for policy optimization. The field of model-based RL has matured in recent years to yield impressive results for both online (Nagabandi et al., 2018; Chua et al., 2018; Kurutach et al., 2018; Janner et al., 2019) and offline (Matsushima et al., 2020; Kidambi et al., 2020; Yu et al., 2020; Argenson and Dulac-Arnold, 2020) policy optimization. Several of the techniques we employ, such as the normalization of the observation space, are borrowed from this previous literature (Nagabandi et al., 2018; Chua et al., 2018). Conversely, we present strong empirical evidence that the benefits of our introduced autoregressive generative models of state observations do carry over to model-based policy optimization, at least in the offline setting, and this is an interesting avenue for future work. 5 RESULTS We conduct our experiments on the DeepMind control suite (Tassa et al., 2018), a set of control tasks implemented in MuJoCo (Todorov et al., 2012). We use the offline datasets from RL Unplugged (Gulcehre et al., 2020), the details of which are provided in Table 1. These environments capture a wide range of complexity, from 40K transitions in a 5-dimensional cartpole environment to 1.5 million transitions on complex manipulation tasks. We follow the evaluation protocol in the Deep OPE (Fu et al., 2021) benchmark and use policies generated by four different algorithms: behavioral cloning (Bain, 1995), D4PG (Barth-Maron et al., 2018), Critic Regularized Regression (Wang et al., 2020), and ABM (Siegel et al., 2019). With varied hyperparameters, these form a diverse set of policies of varying quality. We perform a thorough hyperparameter sweep in the experiments and use standard practice from generative modeling to improve the quality of the models. We allocate 80% of the data for training and 20% of the data for model selection. We vary the depth and width of the neural networks (number of layers ∈ {3, 4}, layer size ∈ {512, 1024}), add different amounts of noise to input states and actions, and consider two levels of weight decay for regularization (input noise ∈ {0, 1e−6, 1e−7}, weight decay ∈ {0, 1e−6}). For the choice of optimizer, we consider both Adam (Kingma and Ba, 2014) and SGD with momentum and find Adam to be more effective at maximizing log-likelihood across all tasks in preliminary experiments. We thus use Adam in all of our experiments with two learning rates ∈ {1e−3, 3e−4}. We decay the optimizer’s learning rate linearly to zero throughout training, finding this choice to outperform a constant learning rate. Lastly, we find that longer training often improves log-likelihood results. 
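For illustration, the hyperparameter sweep described above can be enumerated as follows; train_model and validation_nll are placeholders for the actual training and evaluation routines, and only the grid values come from the text (48 combinations in total).

from itertools import product

grid = {
    "n_layers": [3, 4],
    "layer_size": [512, 1024],
    "input_noise": [0.0, 1e-6, 1e-7],
    "weight_decay": [0.0, 1e-6],
    "learning_rate": [1e-3, 3e-4],
}

def sweep(train_model, validation_nll, train_data, val_data):
    best_cfg, best_nll = None, float("inf")
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        model = train_model(train_data, **cfg)      # Adam with a linearly decayed learning rate
        nll = validation_nll(model, val_data)       # 20% held-out transitions
        if nll < best_nll:
            best_cfg, best_nll = cfg, nll
    return best_cfg, best_nll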
We use 500 epochs for training final models. For each task we consider in total 48 hyperparameter combinations (listed above) for both models and pick the best model in each model family based on validation log-likelihood. This model is then used for model-based OPE and policy optimization. Note that, in our experiments, 20% of the transitions are used only for validation, but we believe one can re-train the models with the best hyperparameter configuration on the full transition datasets to improve the results even further. 5.1 AUTOREGRESSIVE DYNAMICS MODELS OUTPERFORM FEEDFORWARD MODELS IN NLL To evaluate the effectiveness of autoregressive dynamics models compared to feedforward counterparts, Table 2 reports negative log-likelihood (NLL) on the heldout validation set for the best performing models from our hyperparameter sweep. For each environment, we report the NLL for the best-performing model (Top-1) and the average NLL across the Top-5 models. The autoregressive model has lower NLL on all environments, indicating that it generalizes better to unseen data. To study the impact of model size on NLL, Figure 2 shows validation NLL as a function of parameter count. We find that on small datasets large models hurt, but more importantly autoregressive models outperform feedforward models regardless of the parameter count regime, i.e., even small autoregressive models attain a lower validation NLL compared to big feedforward models. This indicates that autoregressive models have a better inductive bias in modeling the transition dynamics than feedforward models that make a conditional independence assumption. 5.2 ARE DYNAMICS MODELS WITH LOWER NLL BETTER FOR MODEL-BASED OPE? We ultimately care not just about the log-likelihood numbers, but also whether or not the dynamics models are useful in policy evaluation and optimization. To study the relationship of NLL and OPE performance for model-based methods, we compute OPE estimates via Algorithm 1 and compute the Pearson correlation between the OPE estimates and the true discounted returns. This serves as a measure of the effectiveness of the model for OPE. We repeat this for all 96 dynamics models we trained on a given environment and plot the correlation coefficients against validation NLL in Figure 3. Models with low NLL are generally more accurate in OPE. Lambert et al. (2020) have previously demonstrated that in Model-based RL, “training cost does not hold a strong correlation to maximization of episode reward." We use validation NLL instead, and our results on policy evaluation decouple the model from policy optimization, suggesting a more nuanced picture: low validation NLL numbers generally correspond to accurate policy evaluation, while higher NLL numbers are generally less meaningful. In other words, if the dynamics model does not capture the transition dynamics accurately enough, then it is very hard to predict its performance on OPE. However, once the model starts to capture the dynamics faithfully, we conjecture that NLL starts to become a reasonable metric for model selection. For instance, validation NLL does not seem to be a great metric for ranking feedforward models, whereas it is more reasonable for autoregressive models. 5.3 COMPARISON WITH OTHER OPE METHODS We adopt a recently proposed benchmark for OPE (Fu et al., 2021) and compare our model-based approaches with state-of-the-art OPE baselines therein. 
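For reference, the metrics used in these comparisons can be computed roughly as follows; this is a sketch rather than the benchmark's reference implementation.

import numpy as np
from scipy import stats

def ope_metrics(estimates, true_values, k=5):
    # Pearson and Spearman (rank) correlation plus normalized top-k regret between
    # OPE estimates and ground-truth policy values.
    est, true = np.asarray(estimates), np.asarray(true_values)
    pearson = stats.pearsonr(est, true)[0]
    spearman = stats.spearmanr(est, true)[0]
    top_k = np.argsort(est)[-k:]                        # policies the estimate ranks best
    regret_at_k = true.max() - true[top_k].max()        # gap to the truly best policy
    return pearson, spearman, regret_at_k / (true.max() - true.min())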
Figures 4 and B.4 compare OPE estimates from two Fitted-Q Evaluation (FQE) baselines (Le et al., 2019; Kostrikov and Nachum, 2020; Paine et al., 2020), our feedforward models, and the autoregressive approach. Each plot reports the Pearson correlation between the OPE estimates and the true returns. The autoregressive model consistently outperforms the feedforward model and FQE methods on most environments. We report ensembling results in the appendix, but compare single models for fairness in the rest of the paper. We compute summary statistics for OPE methods in Table 3, Table A.1, and Table A.2. These tables report the Spearman’s rank correlation, regret, and absolute error, respectively. These metrics capture different desirable properties of OPE methods (Fu et al., 2021); more details about how they are computed are in the appendix. In all three metrics, the autoregressive model achieves the best median performance across nine environments, whereas the baseline model is not as good as FQE. The only environment in which the autoregressive model has negative rank correlation is manipulator insert ball. In addition, a major advantage of our model-based approach over FQE is that the model only needs to be trained once per environment—we do not need to perform additional policy-specific optimization, whereas FQE needs to optimize a separate Q-function approximator per policy. 5.4 AUTOREGRESSIVE DYNAMICS MODELS FOR OFFLINE POLICY OPTIMIZATION Policy evaluation is an integral part of reinforcement learning. Improvement in policy evaluation can therefore be adapted for policy optimization. In this section, we explore two possibilities of using models to improve offline reinforcement learning. In all experiments, we use Critic Regularized Regression (CRR) as a base offline reinforcement learning algorithm (Wang et al., 2020). First, we utilize the model during test time for planning by using a modified version of Model Predictive Path Integral (MPPI) (Williams et al., 2015). Unlike MPPI, we truncate the planning process after 10 steps of rollout and use the CRR critic to evaluate future discounted returns. We provide additional details in the appendix. Secondly, we use the model to augment the transition dataset to learn a better critic for CRR. More precisely, given sit ∼ D, and the current policy π, we can generate additional data using the following process: âit ∼ π(·|sit), ŝit+1, r̂it+1 ∼ p(·, ·|sit, ât). These two options are orthogonal and can be applied jointly. We implemented both techniques on top of the CRR exp variant (Wang et al., 2020) and show their combined effect in Figure 5. The figure shows that autoregressive dynamics models also outperform feedforward ones in the policy optimization context. Notably, in the case of cheetah run and fish swim, using autoregressive models for planning as well as data augmentation enables us to outperform the previous state-of-the-art on these offline datasets. Additionally, when using autoregressive dynamics models, both techniques improve performance. In the appendix, we show this result as well as more ablations. 6 CONCLUSION This paper shows the promise of autoregressive models in learning transition dynamics for continuous control, showing strong results for off-policy policy evaluation and offline policy optimization. Our contributions to offline model-based policy optimization are orthogonal to prior work that uses ensembles to lower the values when ensemble components disagree (Kidambi et al., 2020). 
Incorporating conservative value estimation into our method is an interesting avenue for future research. We use relatively primitive autoregressive neural architectures in this paper to enable a fair comparison with existing feedforward dynamics models. That said, it will be exciting to apply more sophisticated autoregressive neural network architectures with cross attention (Bahdanau et al., 2014) and self-attention (Vaswani et al., 2017) to Model-based RL for continuous control. Acknowledgements We thank Jimmy Ba, William Chan, Rishabh Agarwal, Dale Schuurmans, and Silviu Pitis for fruitful discussions on our work. We are also grateful for the helpful comments from Lihong Li, Jenny Liu, Harris Chan, Keiran Paster, Sheng Jia, and Tingwu Wang on earlier drafts. A OFFLINE POLICY EVALUATION We use the baseline results in Fu et al. (2021). For convenience, we replicate their description of the OPE baselines and metrics. A.1 OPE METRICS To evaluate the OPE algorithms, we compute three different metrics between the estimated returns and the ground truth returns: 1. Rank correlation This metric assesses how well estimated values rank policies. It is equal to the correlation between the ranking (sorted order) by the OPE estimates and the ranking by the ground truth values. 2. Absolute Error: This metric measures the deviations of the estimates from the ground truth and does not directly access the usefulness for ranking. 3. Regret@k This metric measures how much worse the best policies identified by the estimates are than the best policy in the entire set. Regret@k is the difference between the actual expected return of the best policy in the entire set, and the actual value of the best policy in the top-k set. A.2 OPE BASELINES Fitted Q-Evaluation (FQE) As in Le et al. (2019), we train a neural network to estimate the value of the evaluation policy πe by bootstrapping fromQ(s′, πe(s′)). We tried two different implementations, one from Kostrikov and Nachum (2020) and another from Paine et al. (2020). Importance Sampling (IS) We perform importance sampling with a learned behavior policy. We use the implementation from Kostrikov and Nachum (2020), which uses self-normalized (also known as weighted) step-wise importance sampling (Liu et al., 2018; Nachum et al., 2019). Since the behavior policy is not known explicitly, we learn an estimate of it via a max-likelihood objective over the dataset D, as advocated by Hanna et al. (2019). In order to be able to compute log-probabilities when the target policy is deterministic, we add artificial Gaussian noise with standard deviation 0.01 for all deterministic target policies. Doubly-Robust (DR) We perform weighted doubly-robust policy evaluation based on Thomas and Brunskill (2016b) and using the implementation of Kostrikov and Nachum (2020). Specifically, this method combines the IS technique above with a value estimator for variance reduction. The value estimator is learned according to Kostrikov and Nachum (2020), using deep FQE with an L2 loss function. DICE This method uses a saddle-point objective to estimate marginalized importance weights dπ(s, a)/dπB (s, a); these weights are then used to compute a weighted average of reward over the offline dataset, and this serves as an estimate of the policy’s value in the MDP. We use the implementation from Yang et al. (2020) corresponding to the algorithm BestDICE. 
Variational Power Method (VPM) This method runs a variational power iteration algorithm to estimate the importance weights d^π(s, a)/d^{πB}(s, a) without knowledge of the behavior policy. It then estimates the target policy value using a weighted average of rewards, similar to the DICE method. Our implementation is based on the same network and hyperparameters for the OPE setting as in Wen et al. (2020). We further tune the hyperparameters, including the regularization parameter λ, the learning rates α_θ and α_v, and the number of iterations, on the Cartpole swingup task using the ground-truth policy value, and then fix them for all other tasks. A.3 ENSEMBLING As in Chua et al. (2018); Janner et al. (2019), we can form an ensemble using our best-performing models. We generate rollouts using the procedure detailed in Janner et al. (2019), forming an ensemble with 4 models. We see some improvement in policy evaluation results, as shown in Figure A.1. Ensembling could likely be further improved by forcing unique hyperparameter settings and seeds. Algorithm 2 Model Predictive Path Integral Planning. Require: state s, policy π, dynamics model p, critic Q, temperature β, and noise variance σ^2. for m = 1, . . . , M do: for n = 1, . . . , N do: s_n^0 ← s; R_n ← 0; for τ = 0, . . . , H − 1 do: a_n^τ ∼ π(·|s_n^τ); s_n^{τ+1}, r_n^{τ+1} ∼ p(·, ·|s_n^τ, a_n^τ); R_n ← R_n + γ^τ r_n^{τ+1}; end for; a_n^H ∼ π(·|s_n^H); R_n ← R_n + γ^H Q(s_n^H, a_n^H); end for; re-define π such that π(·|ŝ^τ) = Σ_n [exp(R_n/β) / Σ_m exp(R_m/β)] N(·|a_n^τ, σ^2 I) (π depends on τ and not on ŝ); end for; sample the final action a ∼ Σ_n [exp(R_n/β) / Σ_m exp(R_m/β)] δ(a_n^0); return a. B ADDITIONAL DETAILS REGARDING POLICY OPTIMIZATION To test dynamics models for policy optimization, we implement the two methods discussed in Section 5.4 on top of CRR exp, one of the CRR variants (Wang et al., 2020). We use the RL Unplugged datasets (Gulcehre et al., 2020) for all environments studied in this section. When using data augmentation, we adopt a 1-to-1 ratio between the original dataset and the augmented dataset. To take advantage of the dynamics models at test time, we use a variant of Model Predictive Path Integral (MPPI) for planning. To reduce the planning horizon, we truncate the model rollout using CRR critics. The details of the planning procedure are summarized in Algorithm 2. All hyperparameter tuning for the planning process is conducted on the “cartpole swingup” task. The hyperparameters used in the planning process are M = 3, N = 16, H = 10, β = 0.1, and σ^2 = 0.01. To match the temperature used in the planning component, we choose β = 0.1 for the CWP component of CRR. This change, however, does not impact the baseline CRR agent performance much. With the exception of β and the planning component, all hyperparameters are kept the same as CRR exp. We compare the agents’ performance with and without the planning procedure to test its effects. As shown in Figure B.2, planning using an autoregressive model significantly increases performance. Data augmentation does not change the agents’ performance on cartpole swingup, fish swim, or finger turn hard. It, however, boosts performance considerably on cheetah run. In Figure B.3, we show the effects of data augmentation on cheetah run.
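A minimal sketch of the model-based data augmentation described above is given below; policy.sample and model.sample are assumed interfaces, and the 1-to-1 ratio means one augmented transition is generated per logged state.

def augment_transitions(dataset_states, policy, model):
    # For each logged state, sample an action from the current policy and a next
    # state / reward from the learned dynamics model; the synthetic transitions
    # are added to the replay data used to train the CRR critic.
    augmented = []
    for s in dataset_states:
        a_hat = policy.sample(s)                      # a ~ pi(.|s)
        s_next_hat, r_hat = model.sample(s, a_hat)    # s', r ~ p(.,.|s, a)
        augmented.append((s, a_hat, r_hat, s_next_hat))
    return augmented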
1. What is the main contribution of the paper regarding the usage of autoregressive dynamics models for batch model-based RL? 2. What are the strengths of the proposed approach, particularly in its empirical results and superiority over standard feed-forward models? 3. Do you have any concerns or questions about the novelty of the application of autoregressive models in this setting? 4. Can you provide more intuition on why autoregressive models perform better than feedforward ones in these domains? 5. How does the reviewer assess the limitations of the proposed approach, such as the ordering of state variables and its potential impact on performance? 6. Are there any minor comments or questions regarding the paper's content, equations, citations, or figures?
Review
Review Summary The authors consider the usage of autoregressive dynamics models for batch model-based RL, where state-variable/reward predictions are performed sequentially conditioned on previously-predicted variables. Extensive numerical results are provided in several continuous domains for both policy evaluation and optimization problems. The results showcase the effectiveness of autoregressive models and, in particular, their superiority over standard feed-forward models. Pros The paper is very well-written and easy to follow. The experiments are described with sufficient details to understand the results The usage of these autoregressive models for model-based RL is, to my knowledge, novel The paper presents extensive experiments on several challenging domains. The results are convincing and significant. In particular, they show that autoregressive models are superior to feedforward ones Cons The paper's sole contribution seems to be empirical since autoregressive models are (as acknowledged) not novel, though their application to this setting is. While the empirical results are very convincing, I did not find much intuition on where this big improvement over feedforward models comes from (see detailed comments below). The ordering of the state variables might be a limitation (again, see below). Detailed comments As mentioned above, I did not find much intuition on the better performances of autoregressive models vs feedforward ones. As I am not entirely familiar with the system dynamics of the considered domains, do you think that they possess any property which makes autoregressive models more suitable than feedforward ones (e.g., strong correlations between next-state variables)? Aren't the transition dynamics deterministic in most of the considered domains? Since the reward in most of the considered domains is (I suppose) a function of state, action, and next-state, could it be that one of the reasons behind the worse performance of feedforward models is that they try to predict the reward as a function of state-action only? Would their performance change if they explicitly modeled the reward as a function of s,a,s'? Related to the previous point, the autoregressive model naturally predicts a reward as a function of s,a,s' since r is considered as the (n+1)-th state component. But what if we re-ordered the state variables with r as the first component instead of the last one? Would the performance change? More generally, do you think that the ordering of the state variables might be a limitation? For instance, could there be an ordering of these variables that makes the model perform well and one that makes it perform poorly? While in, e.g., image/text generation problems where autoregressive models are applied we have a natural ordering between the variables involved (e.g., by space or time), here there seems to be no particular relationship between state variables with similar index. Maybe some additional experiments could help in clarifying whether this could be a limitation or not. Some minor comments/questions: In Eq. 2, should the product be up to H-1? Before Sec. 3, a citation for "behavioral cloning" could be added Sec. 5.3: the FQE acronym was not introduced Fig. 4: what is "r" above each plot?
ICLR
Title Multi-Reward Fusion: Learning from Other Policies by Distilling Abstract Designing rewards is crucial for applying reinforcement learning in practice. However, it is difficult to design a shaping reward which can accelerate agents’ learning process without biasing the original task’s optimization objective. Moreover, the low-dimensional representation of the reward and value function (i.e., a scalar value) may also be an obstruction during the learning process. This paper contributes towards tackling these challenges by proposing a new method, called Multi-Reward Fusion (MRF). MRF takes as input a list of human-designed rewards, which contains information from multiple perspectives about the task, and learns separate policies for each component of the reward list. We formulate the problem of learning the target policy as a distillation task, propose a novel method which selectively distills knowledge from the auxiliary policies, and theoretically show the feasibility of this method. We conduct extensive experiments and show that the MRF method performs better than state-of-the-art reward shaping methods. 1 INTRODUCTION For applying reinforcement learning in real-world tasks, designing a suitable reward function is a challenging problem. A common way of addressing this problem is reward shaping (RS), which transforms human prior knowledge into a shaping reward so that the agent can be guided to learn faster and better with the combination of the original and new rewards. In early works, hand-crafted reward functions were used in robot behavior learning Dorigo & Colombetti (1994); Randløv & Alstrøm (1998). However, the introduced shaping reward may drive the converged policy away from the optimal policy of the original task. The potential-based reward shaping (PBRS) method Ng et al. (1999) first solved this problem by designing the shaping reward as a difference of potential values, which guarantees policy invariance. Although PBRS and its variants Devlin & Kudenko (2012); Grzes & Kudenko (2008); Harutyunyan et al. (2015); Wiewiora et al. (2003) have good mathematical characteristics, they sometimes do not work due to their weak driving force. This phenomenon results from the invariance of the state-action value function under PBRS, whose aid to policy learning is therefore less direct. Moreover, the automatic shaping approaches Marthi (2007); Hu et al. (2020); Fu et al. (2019) learn to take advantage of multiple auxiliary shaping reward functions by adjusting the weight vector of the reward functions. All of these existing works on reward shaping try to find a feasible way to introduce human prior knowledge into the agent’s learning process. However, when using these architectures we face a dilemma: how do we generate useful shaping rewards? The design of the rewards directly affects the results of training. For example, PBRS and its variants may only work when the shaping rewards derived from prior knowledge are genuinely helpful. Although the automatic shaping approaches can alleviate this problem to some extent, their computational complexity is prohibitive. We consider that the multiple-sources-of-reward setting, recommended in hybrid reward architectures (HRA) Van Seijen et al. (2017) and RD2 Lin et al. (2020), may be more suitable than the traditional scalar form of the reward, because a useful reward usually contains information from multiple perspectives of the same task.
For example, when we design reward functions for training a Doom agent Lample & Chaplot (2017), designers should consider multiple perspectives such as object pickup, shooting, losing health, and losing ammo. These multi-perspective sources of rewards carry high-dimensional information; directly mapping them to a scalar is crude and loses a lot of information. In this paper, we use a list of shaping rewards sourced from different perspectives to train a list of critics. Each critic corresponds to a policy, and we use the relationship between the critics to decide how to learn from these policies by distillation. In this process, except for the target policy and target critic, all of the other auxiliary critics and policies are trained in an offline way An et al. (2021). The contributions of this paper are as follows: • We use multi-perspective sources of rewards to train the agent to prevent information loss caused by dimensionality reduction (i.e., summing the shaping rewards into one scalar). In this process, we transform the optimization objective of soft actor-critic Haarnoja et al. (2018a) into a new form of policy distillation, and prove the equivalence of these two optimization objectives theoretically. • We provide a gradient similarity-based regularization method to eliminate the effects of adverse rewards automatically. This regularization can improve convergence efficiency. • Empirically, our method makes a better trade-off between policy invariance and the driving power of the shaping rewards. Moreover, the auxiliary policies’ offline training can proceed in parallel, which saves training time. 2 BACKGROUND 2.1 SOFT ACTOR CRITIC In this paper, we consider the soft actor critic framework Haarnoja et al. (2018a; 2017) of reinforcement learning and adopt the Markov decision process (MDP) as the mathematical model. Formally, an MDP can be denoted as a tuple M = <S, A, P, r, p_0, γ>, where S is the state space, A is the action space, P : S × A × S → [0, 1] denotes the state transition function, r : S × A → R is the reward function, p_0 : S → [0, 1] is the probability distribution of the initial state, and γ ∈ [0, 1] is the discount rate. Normally, the purpose of reinforcement learning is to find a policy of an agent π : S × A → [0, 1] in an MDP which can maximize the expectation of the accumulated rewards E_{s∼ρ_π, a∼π}[r(s, a)]. Here, ρ_π(s) = ∫_S Σ_{t=1}^{∞} γ^{t−1} p_0(s′) p(s′ → s, t, π) ds′ denotes the distribution of the state, and p(s′ → s, t, π) is the probability that state s is visited after t steps from state s′ under policy π. Usually, we represent the policy π by a neural network, whose parameters are denoted as θ. According to the soft actor critic (SAC) method, the entropy of the policy also needs to be maximized to stimulate exploration, and the soft Bellman backup can be denoted as: Q_ϕ(s, a) = r(s, a) + γ E_{s′∼ρ_π}[V(s′)], where V(s) = E_{a∼π}[Q(s, a) − α log π(a|s)], and ϕ is the parameter vector of the critic neural network. Here, α is the temperature parameter. In Haarnoja et al. (2018b), α is used to maintain the entropy constraint E_{s∼ρ_π, a∼π}[− log π(a|s)] ≥ H. Moreover, it is worth mentioning that the policy parameters are learned by minimizing the expected KL divergence E_{s∼ρ_π}[D_KL(π_θ(·|s) || exp(Q_ϕ(s,·)) / Z_ϕ(s))], where Z_ϕ(s) is a normalization term, which can be ignored during training.
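As a compact illustration of the SAC quantities used throughout this paper, a minimal sketch of the soft Bellman target and the policy loss is given below; tensor shapes and network details are assumed.

import torch

def soft_bellman_target(reward, next_q, next_log_prob, alpha, gamma):
    # Q(s, a) target: r(s, a) + gamma * E_{a'~pi}[ Q(s', a') - alpha * log pi(a'|s') ]
    return reward + gamma * (next_q - alpha * next_log_prob)

def sac_policy_loss(q_values, log_probs, alpha):
    # Minimizing E_s[ KL( pi(.|s) || exp(Q(s,.)/alpha) / Z(s) ) ] is, up to the
    # constant log Z(s), equivalent to minimizing E[ alpha * log pi(a|s) - Q(s, a) ].
    return (alpha * log_probs - q_values).mean()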
2.2 RELATED WORK 2.2.1 REWARD SHAPING Traditional reward shaping usually means modifying the original reward with shaping reward functions that introduce domain knowledge, such as PBRS and its theoretical analysis Wiewiora et al. (2003); Laud & DeJong (2003), automatic reward shaping Marthi (2007); Grzes & Kudenko (2008), multi-agent reward shaping Devlin & Kudenko (2011); Sun et al. (2018); Wang et al. (2022), belief reward shaping Marom & Rosman (2018), ethics shaping Wu & Lin (2018), and reward shaping via meta learning Zou et al. (2019). The automatic reward shaping methods, such as the automatic successive reinforcement learning (ASR) framework Fu et al. (2019) and the bi-level optimization of parameterized reward shaping (BiPaRS) Hu et al. (2020), have a similar motivation to this paper, namely letting the agent learn from the knowledge that should be focused on. Instead of considering how to adjust the weight of each component of the shaping reward, we directly learn multi-perspective state-action value functions and corresponding policies from multi-perspective rewards. It is worth mentioning that each component of the traditional shaping reward can be seen as one source of the multi-perspective rewards. 2.3 HYBRID REWARD ARCHITECTURE Hybrid Reward Architecture (HRA) Van Seijen et al. (2017) proposes a hybrid architecture to model the value functions of rewards sourced from different perspectives. Their work shows empirically that learning from multi-perspective sources of rewards can improve sample efficiency. HRA is built upon the Horde architecture Sutton et al. (2011), which trains a separate general value function (GVF) for each pseudo-reward function. The work on reward decomposition Lin et al. (2020) also demonstrates that learning from multiple reward functions is beneficial. Structured policy iteration Boutilier et al. (1995) also supports this viewpoint. In these works, the multiple sources of rewards can be seen as auxiliary tasks, which serve as additional supervision for the agent to learn multi-perspective value functions and better representations of the task from different perspectives. However, in this paper, we not only learn multi-perspective value functions via multi-perspective sources of rewards, but also learn the corresponding multi-perspective policies, which are supplied for the distillation of the agent. Through our method, we only need to add the information we want the agent to focus on into the multi-perspective reward list, without paying attention to how to adjust the weight of each reward component. Furthermore, our method learns the target policy by distilling from the auxiliary policies, which is more direct than previous methods and has higher data efficiency. 3 METHOD Given a Markov decision process <S, A, P, r, p_0, γ> and a shaping reward function f : S × A → R^{n×1}, we denote the output of the reward function as f(s, a) = [r_1, r_2, · · ·, r_n, 0]^T. The vector of multi-perspective rewards, which is composed of these shaping rewards and the original reward, can be formalized in the additive form r = r_o + f. In this paper, we label derivatives (such as Q_o, π_o) of the original reward r_o with the subscript o. 3.1 RELATIONSHIP BETWEEN AUXILIARY POLICY π_i AND TARGET POLICY π_o Let Q_i represent the state-action value function learned via r_i (i.e., the i-th component of r), and let Q = [Q_1, Q_2, · · ·, Q_n, Q_o]^T correspond to the state-action value function of r.
Since all the rewards describe the same task from different perspectives, there must exist some implicit connection between the components of Q. We use the additive form to describe the relationship among the components of Q as: Q^π_o = Σ_{i=1}^{n} w_i Q^π_i, (1) where w_i : S × A → R is the weight of each Q_i (i.e., the i-th component of Q). As long as n ≥ 2, Eq. (1) must have a solution w = [w_1, w_2, · · ·, w_n] Marcus & Minc (1992). When we use the SAC framework, the loss function of the policy π_o can be represented as L_{π_o} = E_{s∼ρ_{π_o}}[D_KL(π_{θ_o}(·|s) || exp((1/α) Q_{ϕ_o}(s, ·)) / Z_{ϕ_o}(s))]. Furthermore, via this loss function, the relationship between the policies π = [π_1, π_2, · · ·, π_n] and the target policy π_o can be described in the form of distillation as shown in Eq. (2): L_{π_o} = E_{s∼ρ_{π_o}}[D_KL(π_{θ_o}(·|s) || exp((1/α) Σ_i w_i Q_i^{π_o}(s, ·)) / Z(s))] ≤ E_{s∼ρ_{π_o}}[ Σ_i λ_i(s, ξ) D_KL(π_{θ_o} || π_i) + ∫_a π_{θ_o} log Σ_i λ_i(s, a) π_i / exp(q_i/α) ], where λ_i = |w_i| / Σ_j |w_j| and q_i = Q_i^{π_o} w_i Σ_j |w_j| / |w_i|, (2) where λ_i is the confidence coefficient of the corresponding policy π_i, and ξ ∈ A is the outcome of using the mean value theorem for integrals. λ_i is normalized to the range [0, 1], which is helpful for using Jensen’s inequality Rudin (1987). Since the determinacy of policy π_o gradually increases during the training process, we approximate ξ by the mean of the distribution of π_o, which is usually represented by a Gaussian distribution, to simplify the calculation. 3.2 λ’S LEARNING ARCHITECTURE The architecture of the λ calculator unit is shown in Fig. (1). To satisfy the equality constraint in Eq. (1), we describe the feasible set in an affine form as: Q̂ = [Q_1, Q_2, · · ·, Q_n]^T, {w | <Q̂, w> = Q_o} = {Fz + ŵ | z ∈ R^{(n−1)×1}}, (3) where F ∈ R^{n×(n−1)}, ŵ ∈ R^{n×1}, and z ∈ R^{(n−1)×1} is the output of the neural network. The LU decomposition method Trefethen & Bau III (1997) can be used here to calculate F and ŵ: Q̂ = PLU, L = [L_1; L_2], ŵ = P [L_1^{−T} U^{−T} Q_o; 0], F = P [−L_1^{−T} L_2^T; I], (4) where P ∈ R^{n×n} is a permutation matrix, L ∈ R^{n×1} is the unit lower triangular matrix (L_1 ∈ R^{1×1}, L_2 ∈ R^{(n−1)×1}), and U ∈ R^{1×1} is a nonsingular upper triangular matrix. 3.3 THE SIMILARITY REGULARIZATION FOR LEARNING λ Sometimes, some components of r are not helpful for the agent tackling the task. The corresponding state-action value function Q_i ∈ Q may also diverge or become meaningless, which may be an obstacle to the convergence of the λ calculator. Hence, we propose a gradient similarity-based regularization to mitigate this problem: M = (∂Q̂/∂a) · (∂Q_o/∂a)^T = [<∂Q_1/∂a, ∂Q_o/∂a>, <∂Q_2/∂a, ∂Q_o/∂a>, · · ·, <∂Q_n/∂a, ∂Q_o/∂a>]^T, L_reg = <1_{M<0}, w ◦ w>, (5) where ∂Q_i/∂a, i ∈ [1, 2, · · ·, n, o], is the gradient of Q_i with respect to the action, M ∈ R^{n×1} is the mask matrix, and 1_{M<0} : R^{n×1} → R^{n×1} is the indicator function. We use this form to restrict the outputs of the λ calculator corresponding to the useless Q_i. 3.4 MULTI-REWARD FUSION ARCHITECTURE MRF breaks through the traditional way of using shaping rewards. Instead of working on a better representation of the state-action value, we directly learn the policy by distillation from the auxiliary policies. Since our method is based on SAC, the loss function of the critics can be formulated as: L_Q(ϕ) = Σ_i E_{(s,a,s′)∼D}[ (1/2) ( Q_{ϕ_i}(s, a) − (r_i(s, a) + γ E_{a′∼π_i}[Q_{ϕ̄_i}(s′, a′) − α_i log π_i(a′|s′)]) )^2 ], where i ∈ [1, 2, · · ·, n, o], (6) where D represents the replay buffer and ϕ̄_i is the parameter vector of the target state-action value network.
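To illustrate the affine parameterization of the feasible set in Eqs. (3)-(4), the following is a small numpy sketch; for simplicity it uses a pseudo-inverse particular solution and an SVD null-space basis instead of the LU decomposition used in the paper, which yields the same affine set.

import numpy as np

def affine_parameterization(q_hat, q_o):
    # Return (F, w_hat) such that every w = F @ z + w_hat satisfies <q_hat, w> = q_o.
    q_hat = np.asarray(q_hat, dtype=float).reshape(1, -1)   # shape (1, n)
    w_hat = np.linalg.pinv(q_hat) @ np.array([[q_o]])       # particular solution, shape (n, 1)
    _, _, vt = np.linalg.svd(q_hat)
    F = vt[1:].T                                            # null-space basis, shape (n, n - 1)
    return F, w_hat

# Usage: the lambda calculator outputs z, and the constrained weights are
# w = F @ z + w_hat, so <q_hat, w> = q_o holds by construction.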
For the auxiliary policies, we design their loss function in the form of SAC: L_{π_aux}(θ) = E_{s∼D}[ Σ_i E_{a∼π_i}[α_i log π_{θ_i}(a|s) − Q_i(s, a)] ], where i ∈ [1, 2, · · ·, n]. (7) In Section 3.1, the upper bound of L_{π_o} has been derived (more details can be found in the appendix). Here, we use this upper bound to optimize the target policy of the agent, and the optimization objective of the policy can be replaced by: L_{π_o}(θ_o, φ) = E_{s∼D}[ Σ_i λ_{φ_i}(s, ā) D_KL(π_{θ_o}(·|s) || π_i(·|s)) + E_{a∼π_o}[log Σ_i λ_{φ_i}(s, a) π_i / exp(q_i/α_i)] ], where i ∈ [1, 2, · · ·, n], (8) where φ is the parameter of the λ calculator, and ā is the mean of the Gaussian policy π_o, as mentioned in Section 3.1. The optimization objective of the automatic temperature α is shown in Eq. (9): L_α(α) = Σ_i E_{a∼π_o}[−α_i log π_i(a|s) − α_i H]. (9) Here H is the target entropy. By minimizing Eq. (9), the policy π_o can gradually satisfy the constraint E_{s∼ρ_π, a∼π}[− log π(a|s)] ≥ H; meanwhile, the auxiliary policies π_i will not become similar to π_o. We name the resulting actor-critic algorithm Multi-Reward Fusion (MRF) and present the detailed procedure in Algorithm 1 and Fig. (2). Algorithm 1 Multi-Reward Fusion (MRF). Initialize parameter vectors ϕ, ϕ̄, θ_aux, θ_o, φ. for each iteration do: for each environment step do: a ∼ π_{θ_o}(·|s); s′ ∼ p(·|s, a); D ← D ∪ {(s, a, r(s, a), s′)}, where r(s, a) ∈ R^{(n+1)×1}; end for; for each gradient step do: ϕ_i ← ϕ_i − β_Q ∇_ϕ L_{Q_i} for i ∈ {1, 2}; θ_aux ← θ_aux − β_{π_aux} ∇_{θ_aux} L_{π_aux}; fix θ_o, then φ ← φ − β_φ ∇_φ (L_{π_o} + L_reg); fix φ, then θ_o ← θ_o − β_{π_o} ∇_{θ_o} L_{π_o}; α ← α − β_α ∇_α L_α; ϕ̄ ← τ ϕ + (1 − τ) ϕ̄; end for; end for. Here β_Q, β_{π_aux}, β_φ, β_{π_o}, β_α, and τ are the hyperparameters of MRF; more details can be found in Appendix B. 4 EXPERIMENTS 4.1 EFFECTIVENESS OF COSINE SIMILARITY REGULARIZATION In this experiment we mainly examine whether the similarity-based regularization proposed in Section 3.3 is effective. To facilitate the analysis, we choose the Random Walk task Sutton & Barto (2018) as the experimental scenario. It is a simple discrete environment, where the agent only needs to choose to turn left or right to approach the goal state. Except at the goal state, where the agent receives a +100 reward, the agent gets a reward of −0.1 from the environment at each step. A diagram of the Random Walk environment is shown in Fig. (3). S_L and S_G are used to denote the leftmost terminal state and the goal state, respectively. We adopt the basic MRF algorithm as the base learner, which does not include L_reg as a component of the λ calculator’s optimization objective, and compare its performance with MRF using the similarity regularization. Test Setting: Each method is tested for 1,000,000 training steps. During the training process, a 10-episode evaluation is conducted every 1,000 steps. The maximal length of an episode is 10. The shaping reward functions we designed are as follows: r_1(s) = −∥S_G − s∥_2, r_2(s) = ∥S_L − s∥_2, r_3(s) = ∥S_G − s∥_2, r = r_o + [r_1, r_2, r_3, 0.0]^T, (10) where r_1 and r_2 encourage the agent to approach the goal state, and r_3 is an interference term. Empirically, r_1 is more direct than r_2, because using r_2 alone may make the agent unwilling to terminate the task. Results: The performance of these two methods is shown in Fig. (4). The 'worker:i', i ∈ [0, 1, 2], corresponds to the multi-perspective auxiliary policies, and 'worker:3' is the behavior policy. Firstly, we recognize that different shaping rewards may result in different scales of the value function, as shown in Fig. (4.1).
The scale difference results in convergence difficulties for the λ calculator, as mentioned in Section 3.3. The phenomenon, demonstrated in Fig. (4.1) and Fig. (4.1), also implies that a large gap between the input Q values may make the λ calculator ineffective. With such a huge difference in the input values, the λ calculator cannot make the right decision about which shaping reward needs attention. After using the proposed regularization, we find that the λ calculator can easily learn which policy needs to be distilled. It is worth mentioning that the mean similarity in Fig. (4.1) and Fig. (4.1) is calculated by Eq. (5); the higher the mean similarity, the more likely the corresponding auxiliary policy is to be ignored when we use the regularization. 4.2 EFFECT OF THE SHAPING REWARDS’ NUMBER In this part we examine the effect of the number of shaping rewards. We suspect that quantity will not directly affect performance, because we regard the essence of reward shaping as introducing effective information, and the amount of effective information for a task is limited. We choose Hungry-Thirsty Singh et al. (2009) as the experimental environment, for which it is difficult to devise a good reward signal via intuition alone. In this environment, the agent has movement actions and two special actions available: a) eat, with which the agent can consume food at the food location; b) drink, with which the agent can consume water at the water location. When the agent eats food, it becomes not-hungry for one time step, after which it becomes hungry again. When the agent drinks water, it becomes not-thirsty for a random period of time (when not-thirsty, it becomes thirsty with probability 0.1 at each successive time step). Test Setting: The maximal episode length is set to 200, and the training process lasts 1,000,000 update steps. The shaping rewards are designed as follows: ρ_0 = [1_eat, 1_drink]^T, ρ_1 = [1_{HT}, 1_{HT̄}, 1_{H̄T}, 1_{H̄T̄}]^T, w_1 = [N_eat, N_drink]^T, w_2 = [−0.05, −0.01, 1.0, 0.5]^T, r_1 = <w_1, ρ_0>, r_2 = <w_1, w_1>, r_3 = <w_2, ρ_1>, r_4 = −∥S_eat − s∥_2, r_5 = −∥S_drink − s∥_2, (11) where N denotes the count of the corresponding events (eating food and drinking water), and the subscripts H and T indicate that the agent is hungry or thirsty, respectively, with a bar denoting the negation. We mainly compare the performance of MRF using different numbers of shaping rewards. We set MRF 0 (namely SAC without shaping rewards), MRF 5 (r_o + [r_1, r_2, r_3, 0.0, 0.0]), and MRF 7 (r_o + [r_1, r_2, r_3, r_4, r_5, 0.0, 0.0]), where r_o is the original reward of the task. We usually place two '0.0' entries in the reward list, because the multi-perspective rewards correspond to Q_1, Q_2, · · ·, Q_n, Q_o, and we want to make it easy for the policy to be polarized towards the auxiliary policy π_n under the constraint in Eq. (1). Results: The performance of MRF using different numbers of shaping rewards is shown in Fig. (7). We find that the introduced information encourages the agent’s learning, but the performance gap between MRF 5 and MRF 7 is limited. This indicates that an appropriate amount of decoupled information can promote agent learning, but the effect of the introduced information becomes saturated as the number of shaping rewards increases. 4.3 PERFORMANCE OF MRF We believe that our method can show an advantage on energy-optimal control problems, because such problems share sparse rewards: only at the terminal state does the reward include information about the completion of the task. Hence, we choose two environments: Mountain Car and Lunar Lander.
We use the continuous versions of these tasks in Gym (Brockman et al., 2016) and convert them into energy-optimal control problems, i.e., the environment rewards become sparse. The performance of MRF is shown in Fig. 4.3. In Mountain Car, we provide two shaping rewards whose forms are similar to those described in Section 4.1. The complete figure is shown in Appendix C. Here, to focus on the performance of our method, we do not show the performance corresponding to the 'away' shaping reward (whose episode reward converges to about -20). Combining the results in Fig. 4.3, MRF performs better than traditional reward shaping and basic SAC. However, the introduced regularization causes some performance loss, which implies that when we are confident in the quality of the multi-perspective rewards, it is better not to use the regularization. Nevertheless, every variant of MRF performs better than the other methods. We also test MRF on a different style of task, Hopper, where the rewards are dense enough that shaping rewards may be unnecessary. In relatively complex tasks, the out-of-distribution (OOD) problem becomes more serious; we therefore use the trick of EDAC (An et al., 2021), and the performance is shown in Fig. 4.3. With the correction from EDAC, our method also improves data efficiency.

5 CONCLUSION

In this work, we propose a novel reward shaping architecture, Multi-Reward Fusion (MRF), which learns the policy by distilling from a series of auxiliary policies. We formulate the relationship between the auxiliary policies and the policy to be optimized via the multi-perspective state-action value functions. Furthermore, we propose a gradient similarity-based regularization to reduce the influence of useless shaping rewards. The results on Mountain Car and other tasks show that our algorithm can exploit the information from shaping rewards without deviating from the optimization objective of the original task. Moreover, with the proposed similarity regularization, unbeneficial shaping rewards can be ignored.
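As a concrete illustration of the sparse, energy-optimal reward conversion used for the Gym tasks in Section 4.3, the following is a minimal sketch of a reward wrapper for MountainCarContinuous. The wrapper, the reward values, and the goal-position threshold are assumptions for illustration (using the classic 4-tuple Gym step API); the paper does not specify how the conversion was implemented.

```python
import gym
import numpy as np

class SparseEnergyReward(gym.Wrapper):
    """Replace the per-step reward with a pure energy cost and reveal task
    completion only at the terminal state (classic 4-tuple Gym step API)."""

    def __init__(self, env, success_reward=100.0, energy_coef=0.1, goal_position=0.45):
        super().__init__(env)
        self.success_reward = success_reward
        self.energy_coef = energy_coef
        self.goal_position = goal_position   # assumed MountainCar goal threshold

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        reward = -self.energy_coef * float(np.sum(np.square(action)))  # energy cost
        if done and obs[0] >= self.goal_position:
            reward += self.success_reward    # completion signal only at termination
        return obs, reward, done, info

# Usage (hypothetical): env = SparseEnergyReward(gym.make("MountainCarContinuous-v0"))
```

With such a wrapper, the only informative reward arrives at termination, which matches the sparse, energy-optimal setting described above.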
1. What is the main contribution of the paper regarding policy learning and shaped rewards? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and experimental results? 3. Do you have any concerns regarding the presentation and clarity of the paper? 4. How does the reviewer assess the scalability and practicality of the method? 5. What are the limitations of the experimental setup and comparisons with other works? 6. How could the authors improve the discussion and presentation of their results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a way to improve policy learning by taking advantage of shaped rewards designed by human experts. Instead of fusing the shaped rewards into a single reward, the method learns a separate critic for each reward function and then combines policies through policy distillation. The empirical results demonstrate the benefit of the proposed method against a simple single reward learning. The experimental environments include random walk, mountain car, lunar lander and hopper environments. Strengths And Weaknesses Strengths: The main idea of the paper (combining various rewards through the policy distillation of policies learnt with various reward functions) is interesting and novel. Some experimental results look promising. They demonstrate that learning a policy with a single (not shaped) reward might be challenging, while combining policies learnt for various shaped rewards results in good performance. Weaknesses: The quality of the paper presentation makes it very difficult to follow. This includes language issues, structure and illustrations quality. For the language: there are multiple grammatical errors (e.g., just in the first paragraph "the phenomenon resultS", "multiple perspectiveS", "All of these existING works" etc), sentence structure is not clear or incomplete (e.g., "Because, a useful reward usually contains information from multiple perspective of the same task."). In terms of formatting, the references are not properly included (in many cases they need to be in brackets and punctuation marks are missing). In terms of structure, for example, in the experiments, the authors first talk about various ablations and only later get to the main result. I believe that the clarity could be improved by reversing this order. Illustrations are hard to read and they are also not always referenced in the text (for example, Figure 5), or referenced incorrectly (for example, "in Fig.(4.1) and Fig.(4.1)"). Finally, a lot of terms in the paper (which are widely used) do not have a clear and precise scientific meaning (for example, "weak driving force", "completely useful", "original reward" -- what is the original reward? was it introduced?, "useless Q"). All of this makes reading very laborious and it is easy to miss or misunderstand the message of the paper. Motivation is not clear, the advantages of the method are not clearly demonstrated. The paper proposes to combine several rewards through policy distillation. Where would these rewards come from? In the current experiments, all rewards were designed by a human who understands the tasks very well. How scalable is this approach? Why would it be useful in practice? The experiments could contain a bit more realistic example of such a use case. The experiments are simplistic and not very well discussed, the baselines are not sufficient. The only baseline relies only on a single non-shaped reward which makes the comparison not very fair as it does not take advantage of any reward shaping at all. For example, I think the experiments should show the performance of a simple baseline that just sums up all the rewards. Also, the related literature review mentions several methods with similar motivation, but none of them is included as a baseline. Additionally, the environments in the experiments are very simple. How would the method perform in more realistic complex settings? Finally, the experimental results are not well discussed. 
I mentioned above the problems with the order of experiments and figure referencing. Additionally, the discussion is very limited and does not include the limitations of the method. Clarity, Quality, Novelty And Reproducibility Clarity and reproducibility: poor, see my comments about the writing, presentation and vague language. Quality and Novelty: medium, the method sounds interesting, although I have difficulties in assessing its quality precisely due to the presentation quality of the paper.
ICLR
1. What is the focus of the paper regarding reward shaping? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of presentation, language, and scientific writing? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any questions or concerns about the paper's claims, experiments, and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper claims to propose a method for reward shaping by combining different human-designed rewards using a policy distillation framework. Strengths And Weaknesses The biggest weakness of the work is readability and presentation. The language is very imprecise, unfit for scientific writing. There are several grammatical and writing errors, which make me wonder if the paper was written using a language translator. I had a hard time reading the paper with phrases like: "directly mapping them to a scalar is very rude and will lose a lot of information". The citations need to be changed as well. The equations are neither self-sufficient nor consistent with the description proposed. The work seems to confuse the problem of reward design with reward shaping. What is meant by "For the reason that all the rewards describe the same task from different perspectives, there must exist some implicit connection between each component of Q", and why should this be the case? There is no guarantee that different humans would provide the same reward specification. If they do, then why is the proposed method required? Why not approaches like potential-based rewards? Clarity, Quality, Novelty And Reproducibility As mentioned before, the work needs to improve a lot on clarity. Further, the quality of the results and methods is also inadequate; there is no comparison with any of the existing reward shaping methods.
ICLR
1. What is the main contribution of the paper regarding reward shaping? 2. What are the strengths and weaknesses of the proposed MRF algorithm? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What motivates the random walk environment for evaluating cosine similarity regularization? 5. Why do the authors regard the number of shaping rewards as limited, and how might poorly defined or unhelpful rewards impact their hypothesis? 6. Can you provide additional explanations or examples to help follow the method and results, such as explaining the purpose of each shaping reward used in the experiments? 7. How does the proposed method differ from previous works learning a set of value functions, and what are the benefits of using policies instead? 8. Are there any implementation details missing from the PDF's appendix that could aid in reproducing the results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper approaches the problem of reward shaping by learning a policy per reward shaping term instead of either combining the scalar rewards into a single scalar reward values or learning a value function per reward shaping term. The final task policy is then learned by distilling it from the set of policies learned per reward shaping term. Each shaping reward is intended to correspond to a different aspect of task success, where true task success requires optimizing for all rewards. The authors evaluate the proposed method on several tasks to assess the contribution of the different proposed components such the importance of cosine similarity regularization and the number of reward shaping terms. The authors additionally evaluate how well MRF performs in sparse reward settings. Strengths And Weaknesses Strengths: The authors provide the MRF algorithm, which is helpful to understand how the approach is working. The authors' experiments ablate various aspects of the proposed MRF algorithm to make it clear exactly what each proposed component contributes. Weaknesses: It does not look like the authors compare the method against the related work making it difficult to assess how impactful their contribution is. The majority of results appear to compare different aspects of the proposed method and not against other methods. Section 4.3 compares against SAC with and without shaping, however it is not clear if this is a comparison to a state-of-the-art method. The authors do not motivate why the random walk environment is a good environment to assess "effectiveness of cosine similarity regularization". The authors hypothesize that the number of shaping rewards is not likely to directly impact policy performance, because "we regard the essence of reward shaping as introducing effective information, and the amount of effective information for a task is limited." However, the shaping rewards are ultimately provided by humans who are not perfect and may include either poorly defined shaping reward or reward that do not contribute useful information. It seems reasonable that the more shaping rewards humans provide the greater the probability one or more of the shaping rewards are do not convey "effective information". The author's experiments do not assess the scenario where additionally shaping rewards are subpar. The paper would benefit from edits to improve the clarity of the writing to make it easier to follow the method and the results. Clarity, Quality, Novelty And Reproducibility In terms of clarity/quality: I struggled to follow the exact contribution and motivation of the work. The paper would benefit from further rounds of review and feedback from individuals not involved in the paper. The citation formatting looks like it is off and made following portions of the papers more challenging. It would be helpful put citations within parenthesis, especially long lists of citations. The figure references are off and maybe refer to sections. This makes it very difficult to follow the experiments and results sections. It would helpful to walk through what types of behavior the shaping rewards used in the experiment are intended to encourage in the policy. In terms of novelty: The authors distill a policy from a set of policies (one per reward shaping component) and contrast this against prior approaches that learn a set of value functions. It was not clear exactly how the proposed method differed from prior work learning a set of value functions. 
A discussion more clearly motivating why the use of policies is better and how the resulting algorithm differs would help establish the approach's novelty. In terms of reproducibility: The authors reference that implementation details are included in the appendix; however, an appendix is not included in the PDF.
ICLR
Title Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae Abstract Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. The state of the art in analyzing embeddings consists of projecting them onto two-dimensional planes without any interpretable semantics associated with the axes of the projection, which makes detailed analyses and comparison among multiple sets of embeddings challenging. In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology. This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces. We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases. 1 INTRODUCTION Learning representations is an important part of modern machine learning and natural language processing research. Those representations are often real-valued vectors, also called embeddings, and are obtained either as byproducts of supervised learning or as the direct goal of unsupervised methods. Independently of how the embeddings are learned, there is much value in understanding what information they capture, how they relate to each other and how the data they are learned from influences them. A better understanding of the embedded space may lead to a better understanding of the data, of the problem and the behavior of the model, and may lead to critical insights in improving such models. Because of their high-dimensional nature, they are hard to visualize effectively, and the most adopted approach is to project them into a bi-dimensional space. Projections have a few shortcomings: 1) they may not preserve distance in the original space, 2) they are not comparable across models and 3) they do not provide interpretable dimensions of variability to project to, preventing more detailed analysis and understanding. For these reasons, there is value in mapping embeddings into a more specific, controllable and interpretable semantic space. Principal Component Analysis (PCA) (Pearson, 1901) and t-Distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten & Hinton, 2008) are two projection techniques often used for visualizing embeddings in two dimensions, although other techniques can be used. PCA projects embeddings on a lower dimensional space that has the directions of the highest variance in the dataset as axes. Those dimensions do not carry any interpretable meaning, making interpretation difficult. By visualizing the first two dimensions of a PCA projection, the only insight obtainable is semantic relatedness (Budanitsky & Hirst, 2006) between points by observing their relative closeness, and therefore topical clusters can be identified.
The downside is that embeddings that end up being close in the projected space may not be close in the original embedding space and vice versa. Moreover, as the directions of highest variance differ from embedding space to embedding space, the projections are incompatible among different embedding spaces, and this makes them not comparable, a common issue among dimensionality reduction techniques. t-SNE, differently from PCA, optimizes a loss that encourages embeddings that are close in the original high-dimensional space to be close in the lower dimensional projection space. This helps in visualizing clusters better than with PCA, as t-SNE puts each point in the projected space so that distance in the original space with respect to its nearest neighbors is preserved as much as possible. Visualizations obtained in this way reflect the original embedding space more closely and topical clusters are more clearly distinguishable, but this does not solve the issue of comparability of two different sets of embeddings, nor does it solve the lack of interpretability of the axes, and it still does not allow for fine-grained inspection. Moreover, t-SNE is quite sensitive to hyperparameters, making it unclear how much the projection reflects the data. In this paper, a new and simple method to inspect, explore and debug embedding spaces at a fine-grained level is proposed. It consists in defining explicitly the axes of projection through formulae in vector algebra over the embeddings themselves. Explicit axis definition gives an interpretable and fine-grained semantics to the axes of projection. Defining axes explicitly makes it possible to analyze in a detailed way how embeddings relate to each other with respect to interpretable dimensions of variability, as carefully crafted formulae can map (to a certain extent) to semantically meaningful portions of the learned spaces. The explicit axes definition also allows for comparison of embeddings obtained from different datasets, as long as they have common labels. We demonstrate three visualizations for analyzing subspaces of interest of embedding spaces and a set of example case studies including bias detection, polysemy analysis and fine-grained embedding analysis. Additional tasks that may be performed using the proposed methodology and visualization are diachronic analysis and analysis of representations learned from graphs and knowledge bases. The proposed visualizations can moreover be used for debugging purposes and in general for obtaining a better understanding of the embedding spaces learned by different models and representation learning approaches. We are releasing an open-source interactive tool that implements the proposed visualizations (to be made available after the review period to preserve double-blindness), in order to enable researchers in the fields of machine learning, computational linguistics, natural language processing, social sciences and digital humanities to perform exploratory analysis and better understand the semantics of their embeddings. The main contribution of this work lies in the use of explicit user-defined algebraic formulae as axes for projecting embedding spaces into semantically meaningful subspaces that, when visualized, provide interpretable axes. We show how this methodology can be widely used through a series of case studies on well known models and data, and we furthermore validate how the visualizations are more interpretable through a user study.
2 RELATED WORK 2.1 EMBEDDING METHODS AND APPLICATIONS Several methods for learning embeddings from symbolic data have been recently proposed (Pennington et al., 2014; Mikolov et al., 2013; Mnih & Kavukcuoglu, 2013; Lebret & Collobert, 2014; Ji et al., 2016; Rudolph et al., 2016; Nickel et al., 2016). The learned representations have been used for a variety of tasks like recommendation (Barkan & Koenigstein, 2016), link prediction on graphs (Grover & Leskovec, 2016), discovery of drug-drug interaction (Abdelaziz et al., 2017) and many more. In particular, positive results in learning embeddings for words using a surrogate prediction task (Mikolov et al., 2013) started the resurgence of interest in those methods, while a substantial body of research from the distributional semantics community using count and matrix factorization based methods (Deerwester et al., 1990; Baroni & Lenci, 2010; Kanerva et al., 2000; Levy & Goldberg, 2014; Biemann & Riedl, 2013; Gabrilovich & Markovitch, 2007) was previously developed. Refer to Lenci (2018) for a comprehensive overview. 2.2 EMBEDDING VISUALIZATION In their recent paper, Heimerl & Gleicher (2018) extracted a list of routinely conducted tasks where embeddings are employed in visual analytics for NLP, such as comparing concepts, finding analogies, and predicting contexts. iVisClustering (Lee et al., 2012) represents topic clusters as their most representative keywords and displays them as a 2D scatter plot and a set of linked visualization components supporting interactively constructing topic hierarchies. ConceptVector (Park et al., 2018) makes use of multiple keyword sets to encode the relevance scores of documents and topics: positive words, negative words, and irrelevant words. It allows users to select and build a concept iteratively. Liu et al. (2018) display pairs of analogous words obtained through analogy by projecting them on a 2D plane obtained through a PCA and an SVM to find the plane that separates words on the two sides of the analogy. Besides word embedding, textual visualization has been used to understand topic modeling (Chuang et al., 2012) and how topic models evolve over time (Havre et al., 2002). Compared to this literature, our work allows more fine-grained control over the conceptual axes and the filtering logic, e.g., allowing users to define concepts based on explicit algebraic formulae beyond single keywords (Section 3), metadata-based filtering, as well as multi-dimensional and multi-data-source comparison beyond the common 2D scatter plot view (Section 4). 3 METHODOLOGY The basic insight of this work is that goal-oriented inspection of embedding spaces can be defined in terms of items and dimensions of variability. For instance, if the goal is to discover if a dataset (and by consequence an embedding model trained on it) includes gender bias, a user may define professions as specific items of interest and analyze how they are distributed among the concepts of “male” and “female”, the two dimensions of variability. We use this as a running example in this section, while in the next section we present how to turn goal definitions into visualizations. The dimensions of variability are defined as algebraic formulae that use embedding labels as atoms. Algebraic formulae are compositions of vector math operators (e.g., add, sub, mul) applied to vectors (referenced by their label in the data, i.e.
the vector of “apple” is obtained by using the keyword “apple” in the formula). They are used as the axes of a subspace of the entire embedding space and can be interpreted as concepts. In our example we can define two axes a_male = man and a_female = woman. These are the simplest formulae, as they are made of only one literal, but any formula using algebraic operations can be used instead. For instance a_male = man + him and a_female = woman + her could be used instead. Defining axes explicitly as algebraic formulae gives an interpretable semantics to the dimensions of variability and, by consequence, to the axes of the visualization. To project on the axes, different distance and similarity measures can be used (euclidean distance, correlation, dot product); in particular we will use cosine similarity in the remainder of the paper, defined as cossim(a, b) = (a · b) / (‖a‖ ‖b‖). The items of the goal are a set defined by extension (items = {item_1, . . . , item_n}) or by intension (with rules). The rules use items’ embeddings or items’ metadata, like word frequencies, parts of speech, sentiment or categories the label belongs to. Intensions identify a semantically coherent region of the embedding space through logical formulae. Rules using an item’s embedding are defined as r_e = ⟨d, e, c, v⟩, where d is a distance or similarity function, e is an algebraic formula that uses embedding names as atoms and is resolved into a vector, c ∈ {<, ≤, =, ≥, >}, and v is a numeric value. They can be used, for instance, to select all the items whose cosine similarity (d) with respect to e = job + profession is greater than or equal to (c = ≥) the value v = 0.5. Rules using an item’s metadata instead use typed metadata associated with each item. An item can have categorical fields (e.g., words can be stop-words or not), set fields (e.g., the parts of speech a word can belong to) and numerical fields (e.g., unigram frequencies in a corpus) associated with it. Rules are defined as inclusion in a set: r_m = i_cat ∩ set ≠ ∅, where i_cat is the set of categories associated with an item, containing only one element in the case of categorical fields or multiple values in the case of set fields, while for numerical fields they are defined as ranges. Following on with our example, we can select some professions like “doctor”, “nurse”, “teacher” and “astronaut” as our items, or we can define the items of interest as the set of all words in the embedding space that are close to the word “profession” (e.g., cosine similarity greater than 0.7), that are not too frequent (inside a range of frequency from 100 to 1000) and that are not stop-words. 4 VISUALIZATIONS Goals defined in terms of dimensions of variability and items identify a subspace of the entire embedding space to visualize, and the best way to visualize it depends on some characteristics of the goal. In the case of few dimensions of variability (one to three) and potentially many items of interest, like the ones obtained by an empty set of rules, a Cartesian view is ideal, where each axis is the vector obtained by evaluating the algebraic formula it is associated with and the coordinates displayed are similarities or distances of the items with respect to each axis. An example of a bi-dimensional Cartesian view is depicted in the left side of Figure 1. In the case where the goal is defined in terms of many dimensions of variability, the Cartesian view cannot be used, and a polar view is preferred.
By visualizing each dimension of variability in a circle, the polar view can visualize many more axes, but it is limited in the number of items it can display, as each item will be displayed as a polygon with each vertex lying on the axis defined for each dimension of variability, and many overlapping polygons make the visualization cluttered. An example of a five-dimensional polar view is depicted in Figure 5. The use of explicit axes allows for straightforward and interpretable comparison of different embedding spaces, for instance embeddings trained on different corpora or on the same corpora but with different models. The only requirement for embedding spaces to be comparable is that they contain embeddings for all labels present in the formulae defining the axes. Moreover, embeddings in the two spaces do not need to be of the same dimension. Items will now have two sets of coordinates, one for each embedding space, thus they will be displayed as lines. Short lines are interpreted as items being embedded similarly in the subspaces defined by the axes in both original embedding spaces, while long lines can be interpreted as substantially different locations in the subspaces, and their direction gives insight on how items shift in the two subspaces. Those two embedding spaces could be, for instance, embeddings trained on a clean corpus like Wikipedia as opposed to a noisy corpus like tweets from Twitter, or the two corpora could be two different time slices of the same corpus, in order to compare how words changed over time. The right side of Figure 1 shows an example of how to use the Cartesian comparison view to compare two datasets.
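To make the methodology of Sections 3 and 4 concrete, the following is a minimal sketch (not the authors' released tool) of how items could be projected onto explicitly defined axes with cosine similarity to produce the coordinates of a Cartesian view. The embedding dictionary, the averaging-based formula evaluation, and the example labels and thresholds are illustrative assumptions.

```python
import numpy as np

def cossim(a, b):
    # Cosine similarity between two vectors: (a . b) / (|a| |b|).
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def axis_vector(emb, labels):
    # Evaluate a very simple algebraic formula: the average of the
    # embeddings of the given labels, e.g. avg(he, him).
    return np.mean([emb[w] for w in labels], axis=0)

def select_items(emb, formula_labels, threshold):
    # Rule-based item selection: keep words whose cosine similarity
    # to the formula vector is at least the threshold.
    target = axis_vector(emb, formula_labels)
    return [w for w in emb if cossim(emb[w], target) >= threshold]

def cartesian_view(emb, items, x_labels, y_labels):
    # Coordinates of each item are its cosine similarities to the two axes.
    ax_x, ax_y = axis_vector(emb, x_labels), axis_vector(emb, y_labels)
    return {w: (cossim(emb[w], ax_x), cossim(emb[w], ax_y)) for w in items}

# Hypothetical usage with a dict of pre-trained embeddings (label -> vector):
# items = select_items(glove, ["profession"], threshold=0.7)
# coords = cartesian_view(glove, items, x_labels=["he", "him"], y_labels=["she", "her"])
```

The resulting coordinate dictionary could then be scatter-plotted directly, with one point per item and one explicitly labeled axis per formula.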
The Cartesian comparison view comparing the embeddings trained on Wikipedia and Twitter is shown in the right side of Figure 1. Only the embeddings with a line length above 0.05 are displayed. The most interesting words in this visualization are the ones that shift the most in the direction of negative slope. In this case, chef and doctor are closer to the “male” axis in Twitter than in Wikipedia, while dancer and secretary are closer to the bisector in Twitter than in Wikipedia. Additional analysis of how words tend to shift in the two embedding spaces would be needed in order to derive provable conclusions about the significance of the shift, for instance through a permutation test with respect to all possible pairs, but the visualization can help inform the most promising words to perform the test on. 5.2 POLYSEMY ANALYSIS Embedding methods conflate different meanings of a word into the same vector. A few methods have been proposed to obtain more fine-grained representations by clustering contexts and representing words with multiple vectors (Huang et al., 2012; Neelakantan et al., 2014), but widely used pretrained GloVe vectors still conflate different meanings in the same embedding. Widdows (2003) showed how, using a binary orthonormalization operator with ties to the quantum-logic not operator, it is possible to remove part of the conflated meaning from the embedding of a polysemous word. The author defines the operator nqnot(a, b) = a − (a · b / |b|²) b, and we show with a comparison plot how it can help distinguish the different meanings of a word. For illustrative purposes we choose the same polysemous word used by Widdows (2003), suit, and use the nqnot operator to orthonormalize with respect to lawsuit and dress, the two main meanings used as dimensions of variability. The items in our goal are the 20000 most frequent words in the Wikipedia embedding space, removing stop-words. In the top of Figure 2 we show the overall plot and we zoom in on the items that are closer to each axis. Words closer to the axis negating lawsuit are all related to dresses and the act of wearing something, while words closer to the axis negating dress are related to law. We chose another polysemous word, apple, and orthonormalized with respect to fruit and computer. In the bottom of Figure 2, words that have a higher similarity with respect to the first axis are all tech related, while the ones that have a higher similarity with respect to the second axis are mostly other fruits or food. Both examples confirm the ability of the nqnot operator to disentangle multiple meanings from polysemous embeddings, and the proposed visualizations convey this clearly. 5.3 FINE-GRAINED EMBEDDING ANALYSIS We consider embeddings that are close in the embedding space to be semantically related, but even close embeddings may have nuances that distinguish them. When projecting in two dimensions through PCA or t-SNE we are conflating a multidimensional notion of similarity to a bi-dimensional one, losing the fine-grained distinctions among different embeddings. The Cartesian view allows for a more fine-grained visualization of similarities and differences among embeddings that emphasizes nuances that could otherwise go unnoticed.
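Returning to the nqnot operator from Section 5.2, the following is a minimal NumPy sketch of how such an operator could be implemented, under the assumption that embeddings are plain dense vectors; it is not the paper's actual implementation, and the usage lines with the emb dictionary are hypothetical.

```python
import numpy as np

def nqnot(a, b):
    # Quantum-logic-inspired negation: remove from a its component along b,
    # i.e. a - (a . b / |b|^2) * b, leaving the part of a orthogonal to b.
    return a - (np.dot(a, b) / np.dot(b, b)) * b

# Hypothetical usage for the polysemy analysis of "suit":
# axis_not_lawsuit = nqnot(emb["suit"], emb["lawsuit"])
# axis_not_dress = nqnot(emb["suit"], emb["dress"])
```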
To demonstrate this capability we select as dimensions of variability formulae made of just single words that are in close vicinity to each other in the Wikipedia embedding space: google and microsoft, as google is the closest word to microsoft and microsoft is the 3rd closest word to google. As items we pick the 30000 most frequent words, removing stop-words and the 500 most frequent words (as they are too generic), and keeping only the words that have a cosine similarity of at least 0.4 with both google and microsoft while having a cosine similarity below 0.75 with respect to the formula google + microsoft, as we are interested in the most polarized words. The left side of Figure 3 shows how, even if those embeddings are close to each other, it is easy to identify peculiar words (highlighted with red dots). The ones that relate to web companies and services (twitter, youtube, myspace) are much closer to the google axis. Words related to legal issues (lawsuit, antitrust), videogames (ps3, nintendo, xbox) and traditional IT companies are closer to the microsoft axis. In Figure 4 we show the same words using google and microsoft orthonormalized with respect to each other as axes. The top left and the bottom right corners are the most interesting ones, as they contain terms that are related to one word after having negated the other. The pattern that emerges is similar to the one highlighted in the left side of Figure 3, but now also operating-system terms (unix, os/2) appear in the microsoft corner, while advertisement and tracking appear in the google corner. For contrast, the t-SNE projection is shown in the right side of Figure 3: it is hard to appreciate the similarities and differences among those embeddings other than seeing them being close in the projected space. This confirms on one hand that the notion of similarity between terms in an embedding space hides many nuances that are captured in those representations, and on the other hand that the proposed methodology enables a more detailed inspection of the embedded space. Multi-dimensional similarity nuances can be visualized using the polar view. In Figure 5 we show an example of how to visualize a small number of items on more than two axes, specifically five food-related items compared over five country axes. The most typical food from a specific country is the closest to the country axis, with sushi being predominantly close to Japan and China, dumplings being close to both Asian countries and Italy, pasta being predominantly closer to Italy’s axis, chocolate being close to European countries and champagne being closer to France and Italy. This same approach could also be used for bias detection, where the axes are concepts capturing the notion of ethnicity and the items could be adjectives, or the two could be swapped, depending on the analysis goal. 6 USER STUDY We conducted a series of user studies to quantify the effectiveness of the proposed method. The goal is to find out if and how visualizations using user-defined semantically meaningful algebraic formulae as their axes help users achieve their analysis goals. What we are not testing for is the quality of the projection itself, as in PCA and t-SNE the projection axes are obtained algorithmically, while in our case they are explicitly defined by the user. We formalized the research questions as: Q1) Does Explicit Formulae outperform t-SNE in goal-oriented tasks? Q2) Can Explicit Formulae reduce time to complete goal-oriented tasks w.r.t. t-SNE? Q3) Which visualization do users prefer?
To answer these questions we invited twelve subjects among data scientists and machine learning researchers, all acquainted with interpreting dimensionality reduction results. We defined two types of tasks, namely Commonality and Polarization, in which subjects were given a visualization together with a pair of words (used as axes in Explicit Formulae or highlighted with a big font and red dot in the case of t-SNE). We asked the subjects to identify either common or polarized words w.r.t. the two provided ones. The provided pairs were: banana & strawberry, google & microsoft, nerd & geek, book & magazine. The test subjects were given a list of eight questions, four per task type, and their proposed lists of five words were compared with a gold standard provided by a committee of two computational linguistics experts. The tasks were fully randomized within subjects to prevent learning effects. In addition, we obfuscated half of our questions by replacing the words with a random numeric ID to prevent prior knowledge from affecting the judgment. We employed three measures: accuracy, in which we calculate the number of words provided by the subjects that are present in the gold standard set; speed, recording the amount of time users spend to answer the questions normalized by the number of words (commonality: 1, polarization: 2); and we also collected an overall preference for either visualization. As reported in Table 1, two-way ANOVA tests revealed significant differences in Accuracy for the factor of Projection (Explicit Formulae (µ = 2.02, σ = 1.40) and t-SNE (µ = 0.50, σ = 0.71)) against both Task (F(1, 91) = 46.11, p = 1.078 × 10^-9) and Obfuscation (F(1, 91) = 57.73, p = 2.446 × 10^-11), which is a strong indicator that the proposed Explicit Formulae method outperforms t-SNE in terms of accuracy in both Commonality and Polarization tasks. We also observed significant differences (F(1, 91) = 23.93, p = 4.228 × 10^-6) in Obfuscation: subjects tend to have better accuracy when the words are not obfuscated (µ = 1.75, σ = 1.55 vs. µ = 0.77, σ = 0.88 when obfuscated), but are significantly slower (F(1, 91) = 5.901, p = 0.017). We ran post-hoc t-tests that confirmed that the accuracy of Explicit Formulae on Non-obfuscated is better than on Obfuscated (t = 4.172, p < 0.0001), which in turn is better than t-SNE Non-obfuscated (t = 2.137, p = 0.0190), which is better than t-SNE Obfuscated (t = 2.563, p = 0.007). One explanation is that the subjects relied on both the visualization and linguistic knowledge to perform the task, but the fact that Explicit Formulae Obfuscated is still better than t-SNE Non-obfuscated suggests that Explicit Formulae, even with obfuscated labels, is consistently more reliable than t-SNE. Concerning Speed, we did not observe signs that the subjects performed faster with Explicit Formulae compared to t-SNE. Concerning Preference, nine out of twelve (75%) subjects chose Explicit Formulae over t-SNE, while the remaining three preferred t-SNE because of familiarity, indicating that there is still a non-negligible learning curve for our proposed methods. In conclusion, our answers to the research questions are that (Q1) Explicit Formulae leads to better accuracy in goal-oriented tasks, (Q2) there is no significant difference between the two techniques in terms of speed and (Q3) users prefer Explicit Formulae over t-SNE.
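As a rough illustration of the kind of statistical analysis reported above, here is a minimal sketch of a two-way ANOVA over a hypothetical per-answer results table using pandas and statsmodels; the column names and numbers are invented for illustration and are not the study's actual data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical per-answer results: one row per subject and question.
df = pd.DataFrame({
    "accuracy":   [3, 1, 2, 0, 4, 1, 2, 0],
    "projection": ["formulae", "tsne"] * 4,
    "obfuscated": [False, False, True, True] * 2,
})

# Two-way ANOVA with Projection and Obfuscation as factors.
model = ols("accuracy ~ C(projection) * C(obfuscated)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```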
7 CONCLUSIONS We presented a simple methodology for projecting embeddings into lower-dimensional semantically-meaningful subspaces through explicit vector algebra formulae operating on the embeddings themselves. Classical projection methods are useful to gather an overall coarse-grained view of the embedding space and how embeddings cluster, but we showed how our approach allows goal-oriented analyses with more fine-grained comparison and enables cross-dataset comparison, through a series of case studies and a user study. This is possible thanks to the ability of the proposed methodology to assign an explicit semantics to the measures of variability used as axes of the visualization, which in turn makes them interpretable and widely applicable to many use cases in computational linguistics, natural language processing, machine learning, social sciences and digital humanities. A APPENDIX Figure 8: Plot of embeddings in Wikipedia with suit negated with respect to lawsuit and dress respectively as axes. Figure 9: Plot of embeddings in Wikipedia with apple negated with respect to fruit and computer respectively as axes. Figure 11: Fine-grained comparison of the subspace on the axes nqnot(google, microsoft) and nqnot(microsoft, google) in Wikipedia.
1. What is the focus and contribution of the paper on visualizing embedding spaces? 2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and case studies? 3. Do you have concerns regarding the novelty of the idea, considering similar works by other authors? 4. How do you assess the reliability of the user study with a small sample size?
Review
Review The paper presents a new and simple method to visualize the embedding space geometry rather than standard t-SNE or PCA. The key is to carefully select items to be visualized/embedded and interpretable dimensions. A few case studies and a user study were conducted to show the benefit of the proposed approach. - on algebraic formulae (AF): it would be good to clarify the definition of AF explicitly. Rules/extension/axes are not very clear or mathematically consistent in Section 3. Would the projection idea apply to arbitrary AFs? - while the idea is simple, I am not quite confident about the novelty. For example, for the de-bias application, Bolukbasi et al. had already done the same plot along the he-she axis. Similar plots of polysemous word embeddings can be found in Arora et al., 2017, etc. - A user study with n = 10 is typically less reliable for any p-value evaluation.
ICLR
Title Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae Abstract Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. State of the art in analyzing embeddings consists in projecting them in two-dimensional planes without any interpretable semantics associated to the axes of the projection, which makes detailed analyses and comparison among multiple sets of embeddings challenging. In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology. This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces. We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases. 1 INTRODUCTION Learning representations is an important part of modern machine learning and natural language processing research. Those representations are often real-valued vectors also called embeddings and are obtained both as byproducts of supervised learning or as the direct goal of unsupervised methods. Independently of how the embeddings are learned, there is much value in understanding what information they capture, how they relate to each other and how the data they are learned from influences them. A better understanding of the embedded space may lead to a better understanding of the data, of the problem and the behavior of the model, and may lead to critical insights in improving such models. Because of their high-dimensional nature, they are hard to visualize effectively, and the most adopted approach is to project them in a bi-dimensional space. Projections have a few shortcomings: 1) they may not preserve distance in the original space, 2) they are not comparable across models and 3) do not provide interpretable dimensions of variability to project to, preventing for more detailed analysis and understanding. For these reasons, there is value in mapping embeddings into a more specific, controllable and interpretable semantic space. Principal Component Analysis (PCA) (Pearson, 1901) and t-Distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten & Hinton, 2008) are two projection techniques often used for visualizing embeddings in two dimensions, although other techniques can be used. PCA projects embeddings on a lower dimensional space that has the directions of the highest variance in the dataset as axes. Those dimensions do not carry any interpretable meaning, making interpretation difficult. By visualizing the first two dimensions of a PCA projection, the only insight obtainable is semantic relatedness (Budanitsky & Hirst, 2006) between points by observing their relative closeness and therefore topical clusters can be identified. 
The downside is that embeddings that end up being close in the projected space may not be close in the original embedding space and vice versa. Moreover, as the directions of highest variance are different from embedding space to embedding space, the projections are incompatible among different embeddings spaces, and this makes them not comparable, a common issue among dimensionality reduction techniques. t-SNE, differently from PCA, optimizes a loss that encourages embeddings that are close in the original high-dimensional space to be close in the lower dimensional projection space. This helps in visualizing clusters better than with PCA, as t-SNE puts each point in the projected space so that distance in the original space with respect to its nearest neighbors is preserved as much as possible. Visualizations obtained in this way reflect more the original embedding space and topical clusters are more clearly distinguishable, but doesn’t solve the issue of comparability of two different sets of embeddings, nor it solves the lack of interpretability of the axes and still doesn’t allow for finegrained inspection. Moreover, t-SNE is pretty sensible to hyperparameters, making it unclear how much the projection reflects the data. In this paper, a new and simple method to inspect, explore and debug embedding spaces at a finegrained level is proposed. It consists in defining explicitly the axes of projection through formulae in vector algebra over the embeddings themselves. Explicit axis definition gives an interpretable and fine-grained semantics to the axes of projection. Defining axes explicitly makes it possible to analyze in a detailed way how embeddings relate to each other with respect to interpretable dimensions of variability, as carefully crafted formulas can map (to a certain extent) to semantically meaningful portions of the learned spaces. The explicit axes definition also allows for comparing of embeddings obtained from different datasets, as long as they have common labels. We demonstrate three visualizations for analyzing subspaces of interest of embedding spaces and a set of example case studies including bias detection, polysemy analysis and fine-grained embedding analysis. Additional tasks that may be performed using the proposed methodology and visualization are diachronic analysis and analysis of representations learned from graphs and knowledge bases. The proposed visualizations can moreover be used for debugging purposes and in general for obtaining a better understanding of the embedding spaces learned by different models and representation learning approaches. We are releasing an open-source 1 interactive tool that implements the proposed visualizations, in order to enable researchers in the fields of machine learning, computational linguistics, natural language processing, social sciences and digital humanities to perform exploratory analysis and better understand the semantics of their embeddings. The main contribution of this work lies in the use of explicit user-defined algebraic formulae as axes for projecting embedding spaces into semantically-meaningful subspaces that when visualized provide interpretable axes. We show how this methodology can be widely used through a series of case studies on well known models and data and we furthermore validate the how the visualizations are more interpretable through a user study. 
2 RELATED WORK 2.1 EMBEDDING METHODS AND APPLICATIONS Several methods for learning embeddings from symbolic data have been recently proposed (Pennington et al., 2014; Mikolov et al., 2013; Mnih & Kavukcuoglu, 2013; Lebret & Collobert, 2014; Ji et al., 2016; Rudolph et al., 2016; Nickel et al., 2016). The learned representations have been used for a variety of tasks like recommendation (Barkan & Koenigstein, 2016), link prediction on graphs (Grover & Leskovec, 2016), discovery of drug-drug interaction (Abdelaziz et al., 2017) and many more. In particular, positive results in learning embeddings for words using a surrogate prediction task (Mikolov et al., 2013) started the resurgence of interest in those methods, while a substantial body of research from the distributional semantics community using count and matrix factorization based methods (Deerwester et al., 1990; Baroni & Lenci, 2010; Kanerva et al., 2000; Levy & Goldberg, 2014; Biemann & Riedl, 2013; Gabrilovich & Markovitch, 2007) was previously developed. Refer to Lenci (2018) for a comprehensive overview. 2.2 EMBEDDING VISUALIZATION In their recent paper, Heimerl & Gleicher (2018) extracted a list of routinely conducted tasks where embeddings are employed in visual analytics for NLP, such as compare concepts, finding analogies, and predict contexts. iVisClustering (Lee et al., 2012) represents topic clusters as their most 1The tool will be made available after the review period to preserve double-blindness representative keywords and displays them as a 2D scatter plot and a set of linked visualization components supporting interactively constructing topic hierarchies. ConceptVector (Park et al., 2018) makes use of multiple keyword sets to encode the relevance scores of documents and topics: positive words, negative words, and irrelevant words. It allows users to select and build a concept iteratively. Liu et al. (2018) display pairs of analogous words obtained through analogy by projecting them on a 2D plane obtained through a PCA and an SVM to find the plane that separates words on the two sides of the analogy. Besides word embedding, textual visualization has been used to understand topic modeling (Chuang et al., 2012) and how topic models evolve over time (Havre et al., 2002). Compared to literature, our work allows more fine-grained control over the conceptual axes and the filtering logic, e.g., allowing users to define concept based on explicit algebraic formulae beyond single keywords (Section 3), metadata based filtering, as well as multidimensional and multi-data source comparison beyond the common 2D scatter plot view. (Sec 4) 3 METHODOLOGY The basic insight of this work is that goal-oriented inspection of embedding spaces can be defined in terms of items and dimensions of variability. For instance, if the goal is to discover if a dataset (and by consequence an embedding model trained on it) includes gender bias, a user may define professions as specific items of interest and analyze how they are distributed among the concept of “male” and “female”, the two dimensions of variability. We use this as a running example in this section, while in the next section we present how to turn goal definitions into visualizations. The dimensions of variability are defined as algebraic formulae that use embedding labels as atoms. Algebraic formulae are a composition vector math operators (e.g., add, sub, mul) to be applied on vectors (referenced by their label in the data, i.e. 
the vector of “apple” is obtained by using using the keyword “apple” in the formula). They are used as the axes of a subspace of the entire embedding space and can be interpreted as concepts. In our example we can define two axes amale = man and afemale = woman. These are the most simple formulae as they are made of only one literal, but any formula using algebraic operation can be used instead. For instance amale = man + him and afemale = woman + her could be used instead. Defining axes explicitly as algebraic formulae gives an interpretable semantics to the dimensions of variability and by consequence to the axes of the visualization. To project on the axes, different distance and similarity measures can be used (euclidean distance, correlation, dot product), in particular we will use cosine similarity in the remaining of the paper, defined as cossim(a,b) = a·b‖a‖‖b‖ The items of the goal are a set defined by extention (items = {item1, . . . , itemn}) or by intention (with rules). The rules use items’ embeddings or items’ metadata, like word frequencies, parts of speech, sentiment or categories the label belongs to. Intentions identify a semantically coherent region of the embedding space through logical formulae. Rules using item’s embeddings are defined as re = 〈d, e, c, v〉 where d is a distance or similarity function, e is an algebraic formula that uses embeddings names as atoms and is resolved into a vector, c ∈ {<,≤,=,≥, >}, v is a numeric value. They can be used, for instance, to select all the items that have a d = cosinesimilarity with respect to e = job + profession that is c =≥ then v = 0.5. Rules using item’s metadata instead use typed metadata associated with each item. An item can have categorical fields (e.g., words can be stop-words or not), set fields (e.g., the parts of speech a word can belongs to) and numerical fields (e.g., unigram frequencies in a corpus) associated with it. Rules are be defined as inclusion in a set: rm = icat ∩ set 6= ∅ where icat is the set of categories associated with an item, containing only one element in the case of categorical fields or multiple values in the case of set fields, while for numerical fields they are defined as ranges. Following on with our example, we can select some professions like “doctor”, “nurse”, “teacher” and “astronaut” as our items, or we can define the items of interest as the set of all words in the embedding space that are close to the word “profession” (e.g., cosine similarity greater than 0.7), that are not too frequent (inside a range of frequency from 100 to 1000) and that are not stop-words. 4 VISUALIZATIONS Goals defined in terms of dimensions of variability and items identify a subspace of the entire embedding space to visualize and the best way to visualize it depends on some characteristics of the goal. In the case of few dimensions of variability (one to three) and potentially many items of interest, like the ones obtained by an empty set of rules, a Cartesian view is ideal, where each axis is the vector obtained by evaluating the algebraic formula it is associated with and the coordinates displayed are similarities or distances of the items with respect to each axis. An example of a bi-dimensional Cartesian view is depicted in the left side of Figure 1. In the case where the goal is defined in terms of many dimensions of variability, the Cartesian view can’t be used, and a polar view is preferred. 
By visualizing each dimension of variability in circle, the polar view can visualize many more axes, but it is limited in the number of items it can display, as each item will be displayed as a polygon with each vertex lying on the axis defined for each dimension of variability and many overlapping polygons make the visualization cluttered. An example of a five-dimensional polar view is depicted in Figure 5. The use of explicit axes allows for straightforward and interpretable comparison of different embedding spaces. For instance, embeddings trained on different corpora or on the same corpora but with different models. The only requirement for embedding spaces to be comparable is that they contain embeddings for all labels present in the formulae defining the axes. Moreover, embeddings in the two spaces do not need to be of the same dimension. Items will now have two sets of coordinates, one for each embedding space, thus they will be displayed as lines. Short lines are interpreted as items being embedded similarly in the subspaces defined by the axes in both original embedding spaces, while long lines can be interpreted as really different locations in the subspaces, and their direction gives insight on how items shift in the two subspaces. Those two embedding spaces could be, for instance, embeddings trained on a clean corpus like Wikipedia as opposed to a noisy corpus like tweets from Twitter, or the two corpora could be two different time slices of the same corpus, in order to compare how words changed over time. The right side of Figure 1 shows an example of how to use the Cartesian comparison view to compare two datasets. 5 CASE STUDIES The methodology and visualizations can be used fruitfully in many analysis tasks in linguistics, digital humanities, in social studies based on empirical methods, and can also be used by researchers in computational linguistics and machine learning to inspect, debug and ultimately better understand the representations learned by their models. Here few of those use cases are presented, but the methodology is flexible enough to allow many other unforeseen uses. For those tasks, we used 50-dimensional publicly available GloVe (Pennington et al., 2014) embeddings trained on a corpus obtained concatenating a 2014 dump of Wikipedia and Gigaword 5 containing 6 billion tokens (for short Wikipedia) and a set of 2 billion tweets containing 27 billion tokens (for short Twitter). 5.1 BIAS DETECTION The task of bias detection is to identify, and in some cases correct for, bias in data that is reflected in the embeddings trained on such data. Studies have shown how embeddings incorporate gender and ethnic biases (Garg et al. (2018); Bolukbasi et al. (2016); Islam et al. (2017)), while other studies focused on warping spaces in order to de-bias the resulting embeddings (Bolukbasi et al. (2016); Zhao et al. (2017)). We show how our proposed methodology can help visualize biases. To visualize gender bias with respect to professions, the goal is defined with the formulae avg(he, him) and avg(she, her) as two dimensions of variability, in a similar vein to Garg et al. (2018). A subset of the professions used by Bolukbasi et al. (2016) is selected as items and cosine similarity is adopted as the measure for the projection. The Cartesian view visualizing Wikipedia embeddings is shown in the left side of Figure 1. Nurse, dancer, and maid are the professions that end up closer to the “female” axis, while boss, captain, and commander end up closer to the “male” axis. 
The Cartesian comparison view comparing the embeddings trained on Wikipedia and Twitter is shown in the right side of Figure 1. Only the embeddings with a line length above 0.05 are displayed. The most interesting words in this visualization are the ones that shift the most in the direction of negative slope. In this case, are chef and doctor are closer to the “male” axis in Twitter than in Wikipedia, while dancer and secretary are closer to the bisector in Twitter than in Wikipedia. Additional analysis of how words tend to shift in the two embedding spaces would be needed in order to derive provable conclusions about the significance of the shift, for instance through a permutation test with respect to all possible pairs, but the visualization can help inform the most promising words to perform the test on. 5.2 POLYSEMY ANALYSIS Embedding methods conflate different meanings of a word into the same vector A few methods have been proposed to obtain more fine-grained representations by clustering contexts and representing words with multiple vectors (Huang et al., 2012; Neelakantan et al., 2014), but widely used pretrained GloVe vectors still conflate different meanings in the same embedding. Widdows (2003) showed how using a binary orthonormalization operator that has ties with the quantum logic not operator it is possible to remove from the embedding of a polysemous word part of the conflated meaning. The authors define the operator nqnot(a, b) = a− a·b|b|2 b and we show with a comparison plot how it can help distinguish the different meanings of a word. For illustrative purposes we choose the same polysemous word used by Widdows (2003), suit, and use the nqnot operator to orthonormalize with respect to lawsuit and dress, the two main meanings used as dimensions of variability. The items in our goal are the 20000 most frequent words in the Wikipedia embedding space removing stop-words. In the top of Figure 2 we show the overall plot and we zoom on the items that are closer to each axis. Words closer to the axis negating lawsuit are all related to dresses and the act of wearing something, while words closer to the axis negating dress are related to law. We chose another polysemous word, apple, and orthonornalized with respect to fruit and computer. In the bottom of Figure 2 words that have a higher similarity with respect to the first axis are all tech related, while the ones that have a higher similarity with respect to the second axis are mostly other fruits or food. Both examples confirm the ability of the nqnot operator to disentangle multiple meanings from polysemous embeddings and show how the proposed visualizations are able to show it clearly. 5.3 FINE-GRAINED EMBEDDING ANALYSIS We consider embeddings that are close in the embedding space to be semantically related, but even close embeddings may have nuances that distinguish them. When projecting in two dimensions through PCA or t-SNE we are conflating a multidimensional notion of similarity to a bi-dimensional one, loosing the fine grained distinctions among different embeddings. The Cartesian view allows for a more fine-grained visualization of similarities and differences among embeddings that emphasizes nuances that could go otherwise unnoticed. 
To demonstrate this capability we select as dimensions of variability formulae made of just single words that are in close vicinity to each other in the Wikipedia embedding space: google and microsoft, as google is the closest word to microsoft and microsoft is the 3rd closest word to google. As items we pick the 30000 most frequent words removing stop-words and the 500 most frequent words (as they are too generic) and keeping only the words that have a cosine similarity of at least 0.4 with both google and microsoft while having a cosine similarity below 0.75 with respect to the formula google+microsoft, as we are interested in the most polarized words. The left side of Figure 3 shows how even if those embeddings are close to each other it is easy to identify peculiar words (highlighted with red dots). The ones that relate to web companies and services (twitter, youtube, myspace) are much closer to the google axis. Words related to both legal issues (lawsuit, antitrust) and videogames (ps3, nintendo, xbox) and traditional IT companies are closer to the microsoft axis. In Figure 4 we the same words using google and microsoft orthonormalized with respect to each other as axes. The top left and the bottom right corners are the most interesting ones, as they contain terms that are related to one word after having negated the other. The pattern that emerges is similar to the one highlighted in the left side of Figure 3, but now also operating systems terms (unix, os/2) appear in the microsoft corner, while advertisement and tracking appear in the google corner. For contrast, the t-SNE projection is shown in the right side of Figure 3: it is hard to appreciate the similarities and differences among those embeddings other than seeing them being close in the projected space. This confirms on one hand that the notion of similarity between terms in an embedding space hides many nuances that are captured in those representations and on the other hand that the proposed methodology enables for a more detailed inspection of the embedded space. Multi-dimensional similarity nuances can be visualized using the polar view. In Figure 5 we show an example of how to visualize a small number of items on more than two axes, specifically five foodrelated items compared over five countries axes. The most typical food from a specific country is the closest to the country axis, with sushi being predominantly close to Japan and China, dumplings being close to both Asian countries and Italy, pasta being predominantly closer to Italy’s axis, chocolate being close to European countries and champagne being closer to France and Italy. This same approach could be used also for bias detection where the axes are concepts capturing the notion of ethnicity and items could be adjectives, or the two could be swapped, depending. 6 USER STUDY We conducted a series of user studies to quantify the effectiveness of the proposed method. The goal is to find out if and how visualizations using user-defined semantically meaningful algebraic formulae as their axes help users achieve their analysis goals. What we are not testing for is the quality of projection itself, as in PCA AND t-SNE the projection axes are obtained algorithmically, while in our case they are explicitly defined by the user. We formalized the research questions as: Q1) Does Explicit Formulae outperform t-SNE in goal-oriented tasks? Q2) Can Explicit Formulae reduce time to complete goal-oriented tasks wrt. t-SNE? Q3) Which visualization do users prefer? 
To answer these questions we invited twelve subjects among data scientists and machine learning researchers, all acquainted with interpreting dimensionality reduction results. We defined two types of tasks, namely Commonality and Polarization, in which subjects were given a visualization together with a pair of words (used as axes in Explicit Formulae or highlighted with a big font and red dot in case of t-SNE). We asked the subjects to identify either common or polarized words w.r.t. the two provided ones. The provided pairs were: banana & strawberry, google & microsoft, nerd & geek, book & magazine. The test subjects were given a list of eight questions, four per task type, and their proposed lists of five words are compared with a gold standard provided by a committee of two computational linguistics experts. The tasks are fully randomized within the subject to prevent from learning effects. In addition, we obfuscated half of our questions by replacing the words with a random numeric ID to prevent prior knowledge from affecting the judgment. We employed three measures: accuracy in which we calculate the number of words provided by the subjects that are present in the gold standard set, speed recording the amount of time users spend to answer the questions normalized by the number of words (commonality: 1, polarization: 2), and we also collected an overall preference for either visualizations. As reported in Table 1, two-way ANOVA tests revealed significant differences in Accuracy for the factor of Projection (Explicit Formulae (µ = 2.02, σ = 1.40) and t-SNE (µ = 0.50, σ = 0.71)) against both Task (F1,91 = 46.11, p = 1.078× 10−9) and Obfuscation (F1,91 = 57.73, p = 2.446× 10−11), which is a strong indicator that the proposed Explicit Formulae method outperforms t-SNE in terms of accuracy in both Commonality and Polarization tasks. We also observed significant differences (F1,91 = 23.93, p = 4.228× 10−6) in Obfuscation: subjects tend to have better accuracy when the words are not obfuscated (µ = 1.75, σ = 1.55 vs. µ = 0.77, σ = 0.88 when obfuscated), but are significantly slower (F1,91 = 5.901, p = 0.017). We run post-hoc t-tests that confirmed how accuracy of Explicit Formulae on Non-obfuscated is better than Obfuscated (t = 4.172, p < 0.0001), which in turn is better that t-SNE Non-obfuscated (t = 2.137, p = 0.0190), which is better than t-SNE Obfuscated (t = 2.563, p = 0.007). One explanation is that the subjects relied on both the visualization and linguistic knowledge to perform the task, but the fact that Explicit Formulae Obfuscated is still better than t-SNE Non-obfuscated suggests that Explicit Formulae, even with obfuscated labels, is consistently more reliable than t-SNE. Concerning Speed, we did not observe signs that the subjects performed faster with Explicit Formulae comparing to t-SNE. Concerning Preference, nine out of all twelve (75%) subjects chose Explicit Formulae over t-SNE, while the rest three prefers t-SNE because of familiarity, indicating there is still non-negligible learning curve of our proposed methods. In conclusion, our answers to the research questions are that (Q1) Explicit Formulae leads to better ac curacy in goal-oriented tasks, (Q2) there is no significant difference between the two techniques in terms of speed and (Q3) users prefer Explicit Formulae over t-SNE. 
7 CONCLUSIONS We presented a simple methodology for projecting embeddings into lower-dimensional, semantically meaningful subspaces through explicit vector algebra formulae operating on the embeddings themselves. Classical projection methods are useful to gather an overall coarse-grained view of the embedding space and of how embeddings cluster, but we showed how our approach allows goal-oriented analyses with more fine-grained comparison and enables cross-dataset comparison, through a series of case studies and a user study. This is possible thanks to the ability of the proposed methodology to assign an explicit semantics to the measures of variability used as axes of the visualization, which in turn makes them interpretable and widely applicable to many use cases in computational linguistics, natural language processing, machine learning, social sciences and digital humanities. A APPENDIX Figure 8: Plot of embeddings in Wikipedia with suit negated with respect to lawsuit and dress respectively as axes. Figure 9: Plot of embeddings in Wikipedia with apple negated with respect to fruit and computer respectively as axes. Figure 11: Fine-grained comparison of the subspace on the axes nqnot(google, microsoft) and nqnot(microsoft, google) in Wikipedia.
1. What is the main contribution of the paper, and how does it differ from previous works? 2. What are the strengths and weaknesses of the proposed methodology? 3. How does the reviewer assess the novelty and significance of the visualization strategies presented in the paper? 4. What are some potential limitations and drawbacks of the evaluation methods used in the paper? 5. How could the authors improve the clarity and effectiveness of their visualizations?
Review
Review The idea of analyzing embedding spaces in a non-parametric (example-based) way is well-motivated. However, the main technical contribution of this paper is otherwise not clear - the methodology section covers a very broad set of techniques but doesn't provide a clear picture of what is novel; furthermore, it makes a strong assumption about linear structure in the embedding space that may not hold. (It's worth noting that t-SNE does not make this assumption.) The visualization strategies presented don't appear to be particularly novel. In particular, projection onto a linear subspace defined by particular attributes was done in the original word2vec and GloVe papers for the analogy task. There's also a lot of other literature on interpreting deeper models using locally-linear predictors, see for example LIME (Ribeiro et al. 2016) or TCAV (Kim et al. 2018). Evaluations are exclusively qualitative, which is disappointing because there are quantitative ways of evaluating a projection - for example, how well the reduced dimensions predict a particular attribute relative to the entire vector. Five-axis polar plots can pack in more information than a 2-dimensional plot in some ways, but quickly become cluttered. The authors might consider using heatmaps or bar plots, as are commonly used elsewhere in the literature (e.g. for visualizing activation maps or attention vectors). The user study is hard to evaluate. What were the specific formulae used in the comparison? Did subjects just see a list of nearest neighbors, or did they see the 2D projection? If the latter, I'd imagine it would be easy to tell which was the t-SNE plot, since most researchers are familiar with how these look.
ICLR
Title Visualizing and Understanding the Semantics of Embedding Spaces via Algebraic Formulae Abstract Embeddings are a fundamental component of many modern machine learning and natural language processing models. Understanding them and visualizing them is essential for gathering insights about the information they capture and the behavior of the models. State of the art in analyzing embeddings consists in projecting them in two-dimensional planes without any interpretable semantics associated to the axes of the projection, which makes detailed analyses and comparison among multiple sets of embeddings challenging. In this work, we propose to use explicit axes defined as algebraic formulae over embeddings to project them into a lower dimensional, but semantically meaningful subspace, as a simple yet effective analysis and visualization methodology. This methodology assigns an interpretable semantics to the measures of variability and the axes of visualizations, allowing for both comparisons among different sets of embeddings and fine-grained inspection of the embedding spaces. We demonstrate the power of the proposed methodology through a series of case studies that make use of visualizations constructed around the underlying methodology and through a user study. The results show how the methodology is effective at providing more profound insights than classical projection methods and how it is widely applicable to many other use cases. 1 INTRODUCTION Learning representations is an important part of modern machine learning and natural language processing research. Those representations are often real-valued vectors also called embeddings and are obtained both as byproducts of supervised learning or as the direct goal of unsupervised methods. Independently of how the embeddings are learned, there is much value in understanding what information they capture, how they relate to each other and how the data they are learned from influences them. A better understanding of the embedded space may lead to a better understanding of the data, of the problem and the behavior of the model, and may lead to critical insights in improving such models. Because of their high-dimensional nature, they are hard to visualize effectively, and the most adopted approach is to project them in a bi-dimensional space. Projections have a few shortcomings: 1) they may not preserve distance in the original space, 2) they are not comparable across models and 3) do not provide interpretable dimensions of variability to project to, preventing for more detailed analysis and understanding. For these reasons, there is value in mapping embeddings into a more specific, controllable and interpretable semantic space. Principal Component Analysis (PCA) (Pearson, 1901) and t-Distributed Stochastic Neighbor Embedding (t-SNE) (van der Maaten & Hinton, 2008) are two projection techniques often used for visualizing embeddings in two dimensions, although other techniques can be used. PCA projects embeddings on a lower dimensional space that has the directions of the highest variance in the dataset as axes. Those dimensions do not carry any interpretable meaning, making interpretation difficult. By visualizing the first two dimensions of a PCA projection, the only insight obtainable is semantic relatedness (Budanitsky & Hirst, 2006) between points by observing their relative closeness and therefore topical clusters can be identified. 
The downside is that embeddings that end up being close in the projected space may not be close in the original embedding space and vice versa. Moreover, as the directions of highest variance are different from embedding space to embedding space, the projections are incompatible among different embedding spaces, and this makes them not comparable, a common issue among dimensionality reduction techniques. t-SNE, differently from PCA, optimizes a loss that encourages embeddings that are close in the original high-dimensional space to be close in the lower-dimensional projection space. This helps in visualizing clusters better than with PCA, as t-SNE puts each point in the projected space so that distance in the original space with respect to its nearest neighbors is preserved as much as possible. Visualizations obtained in this way reflect the original embedding space more closely and topical clusters are more clearly distinguishable, but this does not solve the issue of comparability of two different sets of embeddings, nor does it solve the lack of interpretability of the axes, and it still does not allow for fine-grained inspection. Moreover, t-SNE is quite sensitive to hyperparameters, making it unclear how much the projection reflects the data. In this paper, a new and simple method to inspect, explore and debug embedding spaces at a fine-grained level is proposed. It consists of explicitly defining the axes of projection through formulae in vector algebra over the embeddings themselves. Explicit axis definition gives an interpretable and fine-grained semantics to the axes of projection. Defining axes explicitly makes it possible to analyze in a detailed way how embeddings relate to each other with respect to interpretable dimensions of variability, as carefully crafted formulae can map (to a certain extent) to semantically meaningful portions of the learned spaces. The explicit axes definition also allows for the comparison of embeddings obtained from different datasets, as long as they have common labels. We demonstrate three visualizations for analyzing subspaces of interest of embedding spaces and a set of example case studies including bias detection, polysemy analysis and fine-grained embedding analysis. Additional tasks that may be performed using the proposed methodology and visualizations are diachronic analysis and analysis of representations learned from graphs and knowledge bases. The proposed visualizations can moreover be used for debugging purposes and in general for obtaining a better understanding of the embedding spaces learned by different models and representation learning approaches. We are releasing an open-source1 interactive tool that implements the proposed visualizations, in order to enable researchers in the fields of machine learning, computational linguistics, natural language processing, social sciences and digital humanities to perform exploratory analysis and better understand the semantics of their embeddings. The main contribution of this work lies in the use of explicit user-defined algebraic formulae as axes for projecting embedding spaces into semantically meaningful subspaces that, when visualized, provide interpretable axes. We show how this methodology can be widely used through a series of case studies on well-known models and data, and we furthermore validate how the visualizations are more interpretable through a user study.
2 RELATED WORK 2.1 EMBEDDING METHODS AND APPLICATIONS Several methods for learning embeddings from symbolic data have been recently proposed (Pennington et al., 2014; Mikolov et al., 2013; Mnih & Kavukcuoglu, 2013; Lebret & Collobert, 2014; Ji et al., 2016; Rudolph et al., 2016; Nickel et al., 2016). The learned representations have been used for a variety of tasks like recommendation (Barkan & Koenigstein, 2016), link prediction on graphs (Grover & Leskovec, 2016), discovery of drug-drug interaction (Abdelaziz et al., 2017) and many more. In particular, positive results in learning embeddings for words using a surrogate prediction task (Mikolov et al., 2013) started the resurgence of interest in those methods, while a substantial body of research from the distributional semantics community using count and matrix factorization based methods (Deerwester et al., 1990; Baroni & Lenci, 2010; Kanerva et al., 2000; Levy & Goldberg, 2014; Biemann & Riedl, 2013; Gabrilovich & Markovitch, 2007) was previously developed. Refer to Lenci (2018) for a comprehensive overview. 2.2 EMBEDDING VISUALIZATION In their recent paper, Heimerl & Gleicher (2018) extracted a list of routinely conducted tasks where embeddings are employed in visual analytics for NLP, such as compare concepts, finding analogies, and predict contexts. iVisClustering (Lee et al., 2012) represents topic clusters as their most 1The tool will be made available after the review period to preserve double-blindness representative keywords and displays them as a 2D scatter plot and a set of linked visualization components supporting interactively constructing topic hierarchies. ConceptVector (Park et al., 2018) makes use of multiple keyword sets to encode the relevance scores of documents and topics: positive words, negative words, and irrelevant words. It allows users to select and build a concept iteratively. Liu et al. (2018) display pairs of analogous words obtained through analogy by projecting them on a 2D plane obtained through a PCA and an SVM to find the plane that separates words on the two sides of the analogy. Besides word embedding, textual visualization has been used to understand topic modeling (Chuang et al., 2012) and how topic models evolve over time (Havre et al., 2002). Compared to literature, our work allows more fine-grained control over the conceptual axes and the filtering logic, e.g., allowing users to define concept based on explicit algebraic formulae beyond single keywords (Section 3), metadata based filtering, as well as multidimensional and multi-data source comparison beyond the common 2D scatter plot view. (Sec 4) 3 METHODOLOGY The basic insight of this work is that goal-oriented inspection of embedding spaces can be defined in terms of items and dimensions of variability. For instance, if the goal is to discover if a dataset (and by consequence an embedding model trained on it) includes gender bias, a user may define professions as specific items of interest and analyze how they are distributed among the concept of “male” and “female”, the two dimensions of variability. We use this as a running example in this section, while in the next section we present how to turn goal definitions into visualizations. The dimensions of variability are defined as algebraic formulae that use embedding labels as atoms. Algebraic formulae are a composition vector math operators (e.g., add, sub, mul) to be applied on vectors (referenced by their label in the data, i.e. 
the vector of “apple” is obtained by using the keyword “apple” in the formula). They are used as the axes of a subspace of the entire embedding space and can be interpreted as concepts. In our example we can define two axes a_male = man and a_female = woman. These are the simplest formulae, as they are made of only one literal, but any formula using algebraic operations can be used instead. For instance, a_male = man + him and a_female = woman + her could be used instead. Defining axes explicitly as algebraic formulae gives an interpretable semantics to the dimensions of variability and, by consequence, to the axes of the visualization. To project on the axes, different distance and similarity measures can be used (Euclidean distance, correlation, dot product); in particular, we will use cosine similarity in the remainder of the paper, defined as cossim(a, b) = (a · b) / (‖a‖ ‖b‖). The items of the goal are a set defined by extension (items = {item_1, ..., item_n}) or by intension (with rules). The rules use items' embeddings or items' metadata, like word frequencies, parts of speech, sentiment or categories the label belongs to. Intensions identify a semantically coherent region of the embedding space through logical formulae. Rules using items' embeddings are defined as r_e = 〈d, e, c, v〉 where d is a distance or similarity function, e is an algebraic formula that uses embedding names as atoms and is resolved into a vector, c ∈ {<, ≤, =, ≥, >}, and v is a numeric value. They can be used, for instance, to select all the items that have a cosine similarity (d) with respect to e = job + profession that is greater than or equal to (c) the value v = 0.5. Rules using items' metadata instead use typed metadata associated with each item. An item can have categorical fields (e.g., words can be stop-words or not), set fields (e.g., the parts of speech a word can belong to) and numerical fields (e.g., unigram frequencies in a corpus) associated with it. Rules are defined as inclusion in a set: r_m = i_cat ∩ set ≠ ∅, where i_cat is the set of categories associated with an item, containing only one element in the case of categorical fields or multiple values in the case of set fields, while for numerical fields they are defined as ranges. Following on with our example, we can select some professions like “doctor”, “nurse”, “teacher” and “astronaut” as our items, or we can define the items of interest as the set of all words in the embedding space that are close to the word “profession” (e.g., cosine similarity greater than 0.7), that are not too frequent (inside a range of frequency from 100 to 1000) and that are not stop-words. 4 VISUALIZATIONS Goals defined in terms of dimensions of variability and items identify a subspace of the entire embedding space to visualize, and the best way to visualize it depends on some characteristics of the goal. In the case of few dimensions of variability (one to three) and potentially many items of interest, like the ones obtained by an empty set of rules, a Cartesian view is ideal, where each axis is the vector obtained by evaluating the algebraic formula it is associated with, and the coordinates displayed are similarities or distances of the items with respect to each axis. An example of a bi-dimensional Cartesian view is depicted in the left side of Figure 1. In the case where the goal is defined in terms of many dimensions of variability, the Cartesian view cannot be used, and a polar view is preferred.
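To make the projection step concrete, the following is a minimal Python sketch of projecting items onto two formula-defined axes with cosine similarity. It is our own illustration rather than the released tool; the embedding file name, the gender-axis formulae and the item list are assumptions used only as an example.

```python
import numpy as np

def load_embeddings(path):
    """Load a GloVe-style text file (word followed by its vector); the path below is illustrative."""
    emb = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            emb[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return emb

def cossim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = load_embeddings("glove.6B.50d.txt")  # assumed pre-trained GloVe embeddings

# Axes defined by algebraic formulae over the embeddings, as in the gender-bias running example.
a_male = (emb["he"] + emb["him"]) / 2.0      # avg(he, him)
a_female = (emb["she"] + emb["her"]) / 2.0   # avg(she, her)

items = ["doctor", "nurse", "teacher", "astronaut", "maid", "captain"]  # illustrative item set
for w in items:
    x, y = cossim(emb[w], a_male), cossim(emb[w], a_female)
    print(f"{w:10s} male-axis={x:+.3f} female-axis={y:+.3f}")
```

Metadata or embedding rules such as the frequency range mentioned above would simply filter this item list before the projection is computed.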
By visualizing each dimension of variability in circle, the polar view can visualize many more axes, but it is limited in the number of items it can display, as each item will be displayed as a polygon with each vertex lying on the axis defined for each dimension of variability and many overlapping polygons make the visualization cluttered. An example of a five-dimensional polar view is depicted in Figure 5. The use of explicit axes allows for straightforward and interpretable comparison of different embedding spaces. For instance, embeddings trained on different corpora or on the same corpora but with different models. The only requirement for embedding spaces to be comparable is that they contain embeddings for all labels present in the formulae defining the axes. Moreover, embeddings in the two spaces do not need to be of the same dimension. Items will now have two sets of coordinates, one for each embedding space, thus they will be displayed as lines. Short lines are interpreted as items being embedded similarly in the subspaces defined by the axes in both original embedding spaces, while long lines can be interpreted as really different locations in the subspaces, and their direction gives insight on how items shift in the two subspaces. Those two embedding spaces could be, for instance, embeddings trained on a clean corpus like Wikipedia as opposed to a noisy corpus like tweets from Twitter, or the two corpora could be two different time slices of the same corpus, in order to compare how words changed over time. The right side of Figure 1 shows an example of how to use the Cartesian comparison view to compare two datasets. 5 CASE STUDIES The methodology and visualizations can be used fruitfully in many analysis tasks in linguistics, digital humanities, in social studies based on empirical methods, and can also be used by researchers in computational linguistics and machine learning to inspect, debug and ultimately better understand the representations learned by their models. Here few of those use cases are presented, but the methodology is flexible enough to allow many other unforeseen uses. For those tasks, we used 50-dimensional publicly available GloVe (Pennington et al., 2014) embeddings trained on a corpus obtained concatenating a 2014 dump of Wikipedia and Gigaword 5 containing 6 billion tokens (for short Wikipedia) and a set of 2 billion tweets containing 27 billion tokens (for short Twitter). 5.1 BIAS DETECTION The task of bias detection is to identify, and in some cases correct for, bias in data that is reflected in the embeddings trained on such data. Studies have shown how embeddings incorporate gender and ethnic biases (Garg et al. (2018); Bolukbasi et al. (2016); Islam et al. (2017)), while other studies focused on warping spaces in order to de-bias the resulting embeddings (Bolukbasi et al. (2016); Zhao et al. (2017)). We show how our proposed methodology can help visualize biases. To visualize gender bias with respect to professions, the goal is defined with the formulae avg(he, him) and avg(she, her) as two dimensions of variability, in a similar vein to Garg et al. (2018). A subset of the professions used by Bolukbasi et al. (2016) is selected as items and cosine similarity is adopted as the measure for the projection. The Cartesian view visualizing Wikipedia embeddings is shown in the left side of Figure 1. Nurse, dancer, and maid are the professions that end up closer to the “female” axis, while boss, captain, and commander end up closer to the “male” axis. 
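A Cartesian comparison view such as the Wikipedia-versus-Twitter comparison discussed next can be rendered in a few lines. The sketch below assumes two embedding dictionaries sharing the item labels and is only one possible rendering of the view, not the released tool; the axis formulae and the word lists in the commented usage are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def cossim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def comparison_view(emb_a, emb_b, items, axis_formula_x, axis_formula_y, min_len=0.05):
    """Plot each item as a line from its coordinates in space A to its coordinates in space B.
    emb_a / emb_b map labels to vectors; axis_formula_* evaluate an algebraic formula in a given space."""
    for w in items:
        xa, ya = cossim(emb_a[w], axis_formula_x(emb_a)), cossim(emb_a[w], axis_formula_y(emb_a))
        xb, yb = cossim(emb_b[w], axis_formula_x(emb_b)), cossim(emb_b[w], axis_formula_y(emb_b))
        if np.hypot(xa - xb, ya - yb) < min_len:
            continue  # hide items that barely move between the two spaces, as in the case study
        plt.plot([xa, xb], [ya, yb], color="gray", linewidth=0.8)
        plt.annotate(w, (xb, yb), fontsize=8)
    plt.xlabel("axis 1")
    plt.ylabel("axis 2")
    plt.show()

# Illustrative usage with two embedding dicts (e.g. Wikipedia vs. Twitter GloVe) and gender axes:
# comparison_view(emb_wiki, emb_twitter, professions,
#                 axis_formula_x=lambda e: (e["he"] + e["him"]) / 2.0,
#                 axis_formula_y=lambda e: (e["she"] + e["her"]) / 2.0)
```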
The Cartesian comparison view comparing the embeddings trained on Wikipedia and Twitter is shown in the right side of Figure 1. Only the embeddings with a line length above 0.05 are displayed. The most interesting words in this visualization are the ones that shift the most in the direction of negative slope. In this case, chef and doctor are closer to the “male” axis in Twitter than in Wikipedia, while dancer and secretary are closer to the bisector in Twitter than in Wikipedia. Additional analysis of how words tend to shift in the two embedding spaces would be needed in order to derive provable conclusions about the significance of the shift, for instance through a permutation test with respect to all possible pairs, but the visualization can help identify the most promising words to perform the test on. 5.2 POLYSEMY ANALYSIS Embedding methods conflate different meanings of a word into the same vector. A few methods have been proposed to obtain more fine-grained representations by clustering contexts and representing words with multiple vectors (Huang et al., 2012; Neelakantan et al., 2014), but widely used pretrained GloVe vectors still conflate different meanings in the same embedding. Widdows (2003) showed how, using a binary orthonormalization operator that has ties with the quantum logic not operator, it is possible to remove from the embedding of a polysemous word part of the conflated meaning. The authors define the operator nqnot(a, b) = a − (a · b / ‖b‖^2) b, and we show with a comparison plot how it can help distinguish the different meanings of a word. For illustrative purposes we choose the same polysemous word used by Widdows (2003), suit, and use the nqnot operator to orthonormalize with respect to lawsuit and dress, the two main meanings used as dimensions of variability. The items in our goal are the 20000 most frequent words in the Wikipedia embedding space, after removing stop-words. In the top of Figure 2 we show the overall plot and we zoom in on the items that are closer to each axis. Words closer to the axis negating lawsuit are all related to dresses and the act of wearing something, while words closer to the axis negating dress are related to law. We chose another polysemous word, apple, and orthonormalized with respect to fruit and computer. In the bottom of Figure 2, words that have a higher similarity with respect to the first axis are all tech related, while the ones that have a higher similarity with respect to the second axis are mostly other fruits or food. Both examples confirm the ability of the nqnot operator to disentangle multiple meanings from polysemous embeddings, and the proposed visualizations show this clearly. 5.3 FINE-GRAINED EMBEDDING ANALYSIS We consider embeddings that are close in the embedding space to be semantically related, but even close embeddings may have nuances that distinguish them. When projecting in two dimensions through PCA or t-SNE we are conflating a multidimensional notion of similarity into a bi-dimensional one, losing the fine-grained distinctions among different embeddings. The Cartesian view allows for a more fine-grained visualization of similarities and differences among embeddings that emphasizes nuances that could otherwise go unnoticed.
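For reference, the nqnot operator used in the polysemy analysis above is straightforward to reproduce. The following sketch is our own illustration; the embedding dictionary and the vocabulary of frequent non-stop-words in the commented usage are assumptions.

```python
import numpy as np

def nqnot(a, b):
    """Quantum-logic negation (Widdows, 2003): remove from a its component along b."""
    return a - (np.dot(a, b) / np.dot(b, b)) * b

def cossim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def polysemy_axes(emb, word, sense_a, sense_b):
    """Return the two axes used in the polysemy case study, e.g. word='suit', senses 'lawsuit'/'dress'."""
    return nqnot(emb[word], emb[sense_a]), nqnot(emb[word], emb[sense_b])

def closest_items(emb, vocab, axis, k=10):
    """Rank vocabulary items by cosine similarity with an axis and keep the top k."""
    return sorted(vocab, key=lambda w: -cossim(emb[w], axis))[:k]

# Illustrative usage with an embedding dict `emb` and a list `vocab` of frequent non-stop-words:
# ax_not_lawsuit, ax_not_dress = polysemy_axes(emb, "suit", "lawsuit", "dress")
# print(closest_items(emb, vocab, ax_not_lawsuit))  # expected to surface clothing-related words
# print(closest_items(emb, vocab, ax_not_dress))    # expected to surface legal terms
```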
To demonstrate this capability we select as dimensions of variability formulae made of just single words that are in close vicinity to each other in the Wikipedia embedding space: google and microsoft, as google is the closest word to microsoft and microsoft is the 3rd closest word to google. As items we pick the 30000 most frequent words, removing stop-words and the 500 most frequent words (as they are too generic) and keeping only the words that have a cosine similarity of at least 0.4 with both google and microsoft while having a cosine similarity below 0.75 with respect to the formula google+microsoft, as we are interested in the most polarized words. The left side of Figure 3 shows how, even if those embeddings are close to each other, it is easy to identify peculiar words (highlighted with red dots). The ones that relate to web companies and services (twitter, youtube, myspace) are much closer to the google axis. Words related to legal issues (lawsuit, antitrust), videogames (ps3, nintendo, xbox) and traditional IT companies are closer to the microsoft axis. In Figure 4 we plot the same words using google and microsoft orthonormalized with respect to each other as axes. The top left and the bottom right corners are the most interesting ones, as they contain terms that are related to one word after having negated the other. The pattern that emerges is similar to the one highlighted in the left side of Figure 3, but now also operating system terms (unix, os/2) appear in the microsoft corner, while advertisement and tracking appear in the google corner. For contrast, the t-SNE projection is shown in the right side of Figure 3: it is hard to appreciate the similarities and differences among those embeddings other than seeing them being close in the projected space. This confirms on one hand that the notion of similarity between terms in an embedding space hides many nuances that are captured in those representations, and on the other hand that the proposed methodology enables a more detailed inspection of the embedded space. Multi-dimensional similarity nuances can be visualized using the polar view. In Figure 5 we show an example of how to visualize a small number of items on more than two axes, specifically five food-related items compared over five country axes. The most typical food from a specific country is the closest to the country axis, with sushi being predominantly close to Japan and China, dumplings being close to both Asian countries and Italy, pasta being predominantly closer to Italy's axis, chocolate being close to European countries and champagne being closer to France and Italy. This same approach could also be used for bias detection, where the axes are concepts capturing the notion of ethnicity and the items could be adjectives, or the two could be swapped, depending on the analysis goal. 6 USER STUDY We conducted a series of user studies to quantify the effectiveness of the proposed method. The goal is to find out if and how visualizations using user-defined semantically meaningful algebraic formulae as their axes help users achieve their analysis goals. What we are not testing for is the quality of the projection itself, as in PCA and t-SNE the projection axes are obtained algorithmically, while in our case they are explicitly defined by the user. We formalized the research questions as: Q1) Does Explicit Formulae outperform t-SNE in goal-oriented tasks? Q2) Can Explicit Formulae reduce time to complete goal-oriented tasks w.r.t. t-SNE? Q3) Which visualization do users prefer?
To answer these questions we invited twelve subjects among data scientists and machine learning researchers, all acquainted with interpreting dimensionality reduction results. We defined two types of tasks, namely Commonality and Polarization, in which subjects were given a visualization together with a pair of words (used as axes in Explicit Formulae or highlighted with a big font and a red dot in the case of t-SNE). We asked the subjects to identify either common or polarized words w.r.t. the two provided ones. The provided pairs were: banana & strawberry, google & microsoft, nerd & geek, book & magazine. The test subjects were given a list of eight questions, four per task type, and their proposed lists of five words were compared with a gold standard provided by a committee of two computational linguistics experts. The tasks were fully randomized within subjects to prevent learning effects. In addition, we obfuscated half of our questions by replacing the words with a random numeric ID to prevent prior knowledge from affecting the judgment. We employed three measures: accuracy, in which we calculate the number of words provided by the subjects that are present in the gold standard set; speed, recording the amount of time users spend to answer the questions normalized by the number of words (commonality: 1, polarization: 2); and an overall preference for either visualization. As reported in Table 1, two-way ANOVA tests revealed significant differences in Accuracy for the factor of Projection (Explicit Formulae (µ = 2.02, σ = 1.40) and t-SNE (µ = 0.50, σ = 0.71)) against both Task (F(1, 91) = 46.11, p = 1.078 × 10^-9) and Obfuscation (F(1, 91) = 57.73, p = 2.446 × 10^-11), which is a strong indicator that the proposed Explicit Formulae method outperforms t-SNE in terms of accuracy in both Commonality and Polarization tasks. We also observed significant differences (F(1, 91) = 23.93, p = 4.228 × 10^-6) in Obfuscation: subjects tend to have better accuracy when the words are not obfuscated (µ = 1.75, σ = 1.55 vs. µ = 0.77, σ = 0.88 when obfuscated), but are significantly slower (F(1, 91) = 5.901, p = 0.017). We ran post-hoc t-tests that confirmed that the accuracy of Explicit Formulae on Non-obfuscated is better than on Obfuscated (t = 4.172, p < 0.0001), which in turn is better than t-SNE Non-obfuscated (t = 2.137, p = 0.0190), which is better than t-SNE Obfuscated (t = 2.563, p = 0.007). One explanation is that the subjects relied on both the visualization and linguistic knowledge to perform the task, but the fact that Explicit Formulae Obfuscated is still better than t-SNE Non-obfuscated suggests that Explicit Formulae, even with obfuscated labels, is consistently more reliable than t-SNE. Concerning Speed, we did not observe signs that the subjects performed faster with Explicit Formulae compared to t-SNE. Concerning Preference, nine out of twelve (75%) subjects chose Explicit Formulae over t-SNE, while the remaining three preferred t-SNE because of familiarity, indicating that there is still a non-negligible learning curve for our proposed method. In conclusion, our answers to the research questions are that (Q1) Explicit Formulae leads to better accuracy in goal-oriented tasks, (Q2) there is no significant difference between the two techniques in terms of speed, and (Q3) users prefer Explicit Formulae over t-SNE.
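As an indication of how such an analysis can be reproduced, the following is a minimal sketch using statsmodels and scipy. The DataFrame columns, factor encodings and file name are illustrative assumptions, not the actual study data or analysis script.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per answered question; column names are assumptions for illustration:
# accuracy, projection ("explicit_formulae" / "tsne"), task ("commonality" / "polarization"),
# obfuscated (boolean).
df = pd.read_csv("user_study_results.csv")

# Two-way ANOVA of accuracy for Projection crossed with Task (type-II sums of squares).
model = smf.ols("accuracy ~ C(projection) * C(task)", data=df).fit()
print(anova_lm(model, typ=2))

# Post-hoc comparison, e.g. Explicit Formulae vs. t-SNE on non-obfuscated questions.
ef = df[(df.projection == "explicit_formulae") & (~df.obfuscated)]["accuracy"]
ts = df[(df.projection == "tsne") & (~df.obfuscated)]["accuracy"]
print(stats.ttest_ind(ef, ts, equal_var=False))
```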
7 CONCLUSIONS We presented a simple methodology for projecting embeddings into lower-dimensional, semantically meaningful subspaces through explicit vector algebra formulae operating on the embeddings themselves. Classical projection methods are useful to gather an overall coarse-grained view of the embedding space and of how embeddings cluster, but we showed how our approach allows goal-oriented analyses with more fine-grained comparison and enables cross-dataset comparison, through a series of case studies and a user study. This is possible thanks to the ability of the proposed methodology to assign an explicit semantics to the measures of variability used as axes of the visualization, which in turn makes them interpretable and widely applicable to many use cases in computational linguistics, natural language processing, machine learning, social sciences and digital humanities. A APPENDIX Figure 8: Plot of embeddings in Wikipedia with suit negated with respect to lawsuit and dress respectively as axes. Figure 9: Plot of embeddings in Wikipedia with apple negated with respect to fruit and computer respectively as axes. Figure 11: Fine-grained comparison of the subspace on the axes nqnot(google, microsoft) and nqnot(microsoft, google) in Wikipedia.
1. What is the main contribution of the paper in terms of methodological ideas for visualizing and analyzing representations? 2. What kind of analysis or case study would help better understand the proposed approach and its potential benefits? 3. How does the reviewer assess the novelty and impact of the paper's content, particularly compared to existing works in the field? 4. What are the strengths and weaknesses of the paper regarding its clarity, focus, and potential usefulness in machine learning research?
Review
Review To the best of my understanding the paper proposes some methodological ideas for visualizing and analyzing representations. The paper is unclear mainly because it is a bit difficult to pinpoint the contribution and its audience. What would help me better understand and potentially raise my rating is an analysis of a classical model on a known dataset as a case study; some interesting findings would help make it more exciting and give the readers more incentive to try this out. For example, train AlexNet and VGG ImageNet models and show that the embeddings are better aligned with the WordNet taxonomy in one or the other. This should be possible with their approach if I understand it correctly. pros: - visualization and analysis is a very exciting and important topic in machine learning - this is clearly useful if it worked cons: - not sure what the contribution claim for the paper is, since these types of plots already existed in the literature (is it a popularization claim?)
ICLR
Title AutoGrow: Automatic Layer Growing in Deep Convolutional Networks Abstract Depth is a key component of Deep Neural Networks (DNNs); however, designing depth is heuristic and requires much human effort. We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, it stops growing and thus discovers the depth. We propose robust growing and stopping policies to generalize to different network architectures and datasets. Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets of MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. For example, in terms of the accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts. Our AutoGrow is efficient. It discovers depth within a time similar to that of training a single DNN. 1 INTRODUCTION Layer depth is one of the decisive factors of the success of Deep Neural Networks (DNNs). For example, image classification accuracy keeps improving as the depth of network models grows (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Huang et al., 2017). Although shallow networks cannot ensure high accuracy, DNNs composed of too many layers may suffer from over-fitting and convergence difficulty in training. How to obtain the optimal depth for a DNN still remains mysterious. For instance, ResNet-152 (He et al., 2016) uses 3, 8, 36 and 3 residual blocks under output sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7, respectively, which do not show an obvious quantitative relation. In practice, people usually rely on heuristic trials and tests to obtain the depth of a network: they first design a DNN with a specific depth and then train and evaluate the network on a given dataset; finally, they change the depth and repeat the procedure until the accuracy meets the requirement. Besides the high computational cost induced by the iteration process, such trial & test iterations must be repeated whenever the dataset changes. In this paper, we propose AutoGrow, which can automate depth discovery given a layer architecture. We will show that AutoGrow generalizes to different datasets and layer architectures. There are some previous works which add or morph layers to increase the depth in DNNs. VggNet (Simonyan & Zisserman, 2014) and DropIn (Smith et al., 2016) added new layers into shallower DNNs; Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015) morphed each layer to multiple layers to increase the depth while preserving the function of the shallower net. Table 1 summarizes the differences between previous works and this work. Their goal was to overcome the difficulty of training deeper DNNs or to accelerate it. Our goal is to automatically find an optimal depth. Moreover, previous works applied layer growth once or a few times at pre-defined locations to grow a pre-defined number of layers; in contrast, ours automatically learns the number of new layers and growth locations without limiting growing times. We will summarize more related works in Section 4. Figure 1 illustrates an example of AutoGrow. It starts from the shallowest backbone network and gradually grows sub-modules (a sub-module can be one or more layers, e.g., a residual block); the growth stops once a stopping policy is satisfied.
We studied multiple initializers of new layers and multiple growing policies, and surprisingly found that: (1) a random initializer works equally well as or better than complicated Network Morphism; (2) it is more effective to grow before a shallow net converges. We hypothesize that this is because a converged shallow net is an inadequate initialization for training a deeper net, while random initialization can help to escape from a bad starting point. Motivated by this, we intentionally avoid full convergence during the growing by using (1) random initialization of new layers, (2) a constant large learning rate, and (3) a short growing interval.

Figure 1: A simple example of AutoGrow (figure labels: growing sub-nets, sub-modules, stopped sub-nets, initializer, epochs, seed, discovered).

Table 1: Comparison with previous works about layer growth. Goal: ease training (previous works) vs. depth automation (ours); times: once or a few vs. unlimited; locations: human defined vs. learned; layer #: human defined vs. learned.

Our contributions are: (1) We propose AutoGrow to automate DNN layer growing and depth discovery. AutoGrow is very robust. With the same hyper-parameters, it adapts network depth to various datasets including MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. Moreover, AutoGrow can also discover shallower DNNs when the dataset is a subset. (2) AutoGrow demonstrates high efficiency and scales up to ImageNet, because the layer growing is as fast as training a single DNN. On ImageNet, it discovers new ResNets with a better trade-off between accuracy and computational complexity. (3) We challenge the idea of Network Morphism, as random initialization works equally well or better when growing layers. (4) We find that it is beneficial to rapidly grow layers before a shallower net converges, contradicting previous intuition.

2 AutoGrow – A DEPTH GROWING ALGORITHM

Algorithm 1 AutoGrow Algorithm.
Input: a seed shallow network g(X0) composed of M sub-networks F = {f_i(·; W_i) : i = 0 ... M−1}, where each sub-network has only one sub-module (a dimension-reduction sub-module); an epoch interval K to check growing and stopping policies; the number of fine-tuning epochs N after growing.
Initialization: a circular linked list of sub-networks under growing: subNetList = f_0(·; W_0) → ··· → f_{M−1}(·; W_{M−1}); the current growing sub-network: growingSubNet = subNetList.head() = f_0(·; W_0); the recently grown sub-network: grownSubNet = None.
Process:
  while subNetList.size() > 0 do          # while there exist growing sub-network(s)
    train(g(X0), K)                       # train the whole network g(X0) for K epochs
    if meetStoppingPolicy() then
      subNetList.delete(grownSubNet)      # remove a sub-network from the growing list if its growth did not improve accuracy
    end
    if meetGrowingPolicy() and subNetList.size() > 0 then
      # the current growing sub-network is growingSubNet == f_i(·; W_i)
      W_i = W_i ∪ Wnew                    # stack a new sub-module with parameters Wnew on top of f_i(·; W_i)
      initializer(Wnew)                   # initialize the new sub-module Wnew
      grownSubNet = growingSubNet         # record the recently grown sub-network
      growingSubNet = subNetList.next(growingSubNet)   # iterate to the next sub-network
    end
  end
  Fine-tune the discovered network g(X0) for N epochs.
Output: a trained neural network g(X0) with learned depth.

Figure 1 gives an overview of the proposed AutoGrow. In this paper, we use network, sub-networks, sub-modules and layers to describe the architecture hierarchy. A network is composed of a cascade of sub-networks.
A sub-network is composed of sub-modules, which typical share the same output size. A sub-module (e.g. a residual block) is an elementary growing block composed of one or a few layers. In this section, we rigorously formulate a generic version of AutoGrow which will be materialized in subsections. A deep convolutional network g(X0) is a cascade of sub-networks by composing functions as g(X0) = l (fM−1 (fM−2 (· · ·f1 (f0 (X0)) · · · ))), where X0 is an input image, M is the number of sub-networks, l(·) is a loss function, and Xi+1 = fi (Xi) is a sub-network that operates on an input image or a feature tensor Xi ∈ Rci×hi×wi . Here, ci is the number of channels, and hi and wi are spatial dimensions. fi (Xi) is a simplified notation of fi (Xi;Wi), where Wi is a set of sub-modules’ parameters within the i-th sub-network. Thus W = {Wi : i = 0 . . .M − 1} denotes the whole set of parameters in the DNN. To facilitate growing, the following properties are supported within a sub-network: (1) the first sub-module usually reduces the size of input feature maps, e.g., using pooling or convolution with a stride; and (2) all sub-modules in a sub-network maintain the same output size. As such, our framework can support popular networks, including VggNet-like plain networks (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNets (He et al., 2016) and DenseNets (Huang et al., 2017). In this paper, we select ResNets and VggNet-like nets as representatives of DNNs with and without shortcuts, respectively. With above notations, Algorithm 1 rigorously describes the AutoGrow algorithm. In brief, AutoGrow starts with the shallowest net where every sub-network has only one sub-module for spatial dimension reduction. AutoGrow loops over all growing sub-networks in order. For each sub-network, AutoGrow stacks a new sub-module. When the new sub-module does not improve the accuracy, the growth in corresponding sub-network will be permanently stopped. The details of our method will be materialized in the following subsections. 2.1 SEED SHALLOW NETWORKS AND SUB-MODULES In this paper, in all datasets except ImageNet, we explore growing depth for four types of DNNs: (1) Basic3ResNet: the same ResNet used for CIFAR10 in He et al. (2016), which has 3 residual subnetworks with output spatial sizes of 32×32, 16×16 and 8×8, respectively; (2) Basic4ResNet: a variant of ResNet used for ImageNet in He et al. (2016) built by basic residual blocks (each of which contains two convolutions and one shortcut). There are 4 sub-networks with output spatial sizes of 32× 32, 16× 16, 8× 8 and 4× 4, respectively; (3) Plain3Net: a VggNet-like plain net by removing shortcuts in Basic3ResNet; (4) Plain4Net: a VggNet-like plain net by removing shortcuts in Basic4ResNet. In AutoGrow, the architectures of seed shallow networks and sub-modules are pre-defined. In plain DNNs, a sub-module is a stack of convolution, Batch Normalization and ReLU; in residual DNNs, a sub-module is a residual block. In AutoGrow, a sub-network is a stack of all sub-modules with the same output spatial size. Unlike He et al. (2016) which manually designed the depth, AutoGrow starts from a seed architecture in which each sub-network has only one sub-module and automatically learns the number of sub-modules. On ImageNet, we apply the same backbones in He et al. (2016) as the seed architectures. A seed architecture has only one sub-module under each output spatial size. 
For a ResNet using basic residual blocks or bottleneck residual blocks (He et al., 2016), we respectively name it Basic4ResNet or Bottleneck4ResNet. Plain4Net is also obtained by removing shortcuts in Basic4ResNet. 2.2 SUB-MODULE INITIALIZERS Here we explain how to initialize a new sub-module Wnew in initializer(Wnew), mentioned in Algorithm 1. Network Morphism changes the DNN architecture while preserving the loss function via special initialization of new layers, that is, g(X0; W) = g(X0; W ∪ Wnew) ∀X0. A residual sub-module shows a nice property: when stacking a residual block and initializing the last Batch Normalization layer as zeros, the function of the shallower net is preserved but the DNN is morphed to a deeper net. Thus, Network Morphism can be easily implemented by this zero initialization (ZeroInit). In this work, all layers in Wnew are initialized using default randomization, except for a special treatment of the last Batch Normalization layer in a residual sub-module. Besides ZeroInit, we propose a new AdamInit for Network Morphism. In AdamInit, we freeze all parameters except the last Batch Normalization layer in Wnew, and then use the Adam optimizer (Kingma & Ba, 2014) to optimize the last Batch Normalization layer for a maximum of 10 epochs, until the training accuracy of the deeper net is as good as that of the shallower one. After AdamInit, all parameters are jointly optimized. We view AdamInit as a Network Morphism because the training loss is similar after AdamInit. We empirically find that AdamInit can usually find a solution in less than 3 epochs. We also study random initialization of the last Batch Normalization layer using uniform (UniInit) or Gaussian (GauInit) noise with a standard deviation of 1.0. We will show that GauInit obtains the best result, challenging the idea of Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015). 2.3 GROWING AND STOPPING POLICIES In Algorithm 1, a growing policy refers to meetGrowingPolicy(), which returns true when the network should grow a sub-module. Two growing policies are studied here: 1. Convergent Growth: meetGrowingPolicy() returns true when the improvement of validation accuracy is less than τ in the last K epochs. That is, in Convergent Growth, AutoGrow only grows when the current network has converged. This is similar to the growing criterion adopted in previous works (Elsken et al., 2017; Cai et al., 2018a;b). 2. Periodic Growth: meetGrowingPolicy() always returns true, that is, the network always grows every K epochs. Therefore, K is also the growing period. In the best practice of AutoGrow, K is small (e.g. K = 3) such that it grows before the current network converges. Our experiments will show that Periodic Growth outperforms Convergent Growth. We hypothesize that a fully converged shallower net is an inadequate initialization to train a deeper net. We will perform experiments to test this hypothesis and visualize the optimization trajectory to illustrate it. A stopping policy denotes meetStoppingPolicy() in Algorithm 1. When Convergent Growth is adopted, meetStoppingPolicy() returns true if a recent growth does not improve validation accuracy by more than τ within K epochs. We use a similar stopping policy for Periodic Growth; however, as it can grow rapidly with a small period K (e.g. K = 3) before it converges, we use a larger window size J for stopping. Specifically, when Periodic Growth is adopted, meetStoppingPolicy() returns true when the validation accuracy improves less than τ in the last J epochs, where J ≫ K.
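Before turning to how these hyper-parameters are set, here is a concrete illustration of the sub-module initializers from Section 2.2. The PyTorch-style sketch below shows a basic residual block whose last Batch Normalization layer is initialized as zeros (ZeroInit) or with random noise (UniInit/GauInit); it is a minimal sketch of one possible implementation under our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """A minimal basic residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)  # the "last Batch Normalization" of this sub-module

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)

def init_new_submodule(block, mode="GauInit"):
    """Apply the special treatment to the last BN of a freshly grown block; other layers keep default init."""
    if mode == "ZeroInit":      # Network Morphism: the deeper net starts as the identity of the shallower one
        nn.init.zeros_(block.bn2.weight)
        nn.init.zeros_(block.bn2.bias)
    elif mode == "GauInit":     # Gaussian noise with std 1.0 (whether the bias also gets noise is our assumption)
        nn.init.normal_(block.bn2.weight, mean=0.0, std=1.0)
        nn.init.normal_(block.bn2.bias, mean=0.0, std=1.0)
    elif mode == "UniInit":
        nn.init.uniform_(block.bn2.weight, -1.0, 1.0)
        nn.init.uniform_(block.bn2.bias, -1.0, 1.0)
    return block

# Illustrative growth step: stack one newly initialized block onto a sub-network (an nn.Sequential of blocks).
# sub_network.append(init_new_submodule(BasicBlock(64), mode="GauInit"))
```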
Hyper-parameters τ , J and K control the operation of AutoGrow and can be easily setup and generalize well. τ denotes the significance of accuracy improvement for classification. We simply set τ = 0.05% in all experiments. J represents how many epochs to wait for an accuracy improvement before stopping the growth of a sub-network. It is more meaningful to consider stopping when the new net is trained to some extent. As such, we set J to the number of epochs T under the largest learning rate when training a baseline. K means how frequently AutoGrow checks the polices. In Convergent Growth, we simply setK = T , which is long enough to ensure convergence. In Periodic Growth, K is set to a small fraction of T to enable fast growth before convergence; more importantly, K = 3 is very robust to all networks and datasets. Therefore, all those hyper-parameters are very robust and strongly correlated to design considerations. 3 EXPERIMENTS In this paper, we use Basic3ResNet-2-3-2, for instance, to denote a model architecture which contains 2, 3 and 2 sub-modules in the first, second and third sub-networks, respectively. Sometimes we simplify it as 2-3-2 for convenience. AutoGrow always starts from the shallowest depth of 1-1-1 and uses the maximum validation accuracy as the metric to guide growing and stopping. All DNN baselines are trained by SGD with momentum 0.9 using staircase learning rate. The initial learning rate is 0.1 in ResNets and 0.01 in plain networks. On ImageNet, baselines are trained using batch size 256 for 90 epochs, within which learning rate is decayed by 0.1× at epoch 30 and 60. In all other smaller datasets, baselines are trained using batch size 128 for 200 epochs and learning rate is decayed by 0.1× at epoch 100 and 150. Our early experiments followed prior wisdom by growing layers with Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015; Elsken et al., 2017; Cai et al., 2018a;b), i.e., AutoGrow with ZeroInit (or AdamInit) and Convergent Growth policy; however, it stopped early with very shallow DNNs, failing to find optimal depth. We hypothesize that a converged shallow net with Network Morphism gives a bad initialization to train a deeper neural network. Section 3.1 experimentally test that the hypothesis is valid. To tackle this issue, we intentionally avoid convergence during growing by three simple solutions, which are evaluated in Section 3.2. Finally, Section 3.3 and Section 3.4 include extensive experiments to show the effectiveness of our final AutoGrow. 3.1 SUBOPTIMUM OF NETWORK MORPHISM AND CONVERGENT GROWTH In this section, we study Network Morphism itself and its integration into our AutoGrow under Convergent Growth. When studying Network Morphism, we take the following steps: 1) train a shallower ResNet to converge, 2) stack residual blocks on top of each sub-network to morph to a deeper net, 3) use ZeroInit or AdamInit to initialize new layers, and 4) train the deeper net in a standard way. We compare the accuracy difference (“∆”) between Network Morphism and training the deeper net from scratch. Table 2 summaries our results. Network Morphism has a lower accuracy (negative “∆”) in all the cases, which validates our hypothesis that a converged shallow network with Network Morphism gives a bad initialization to train a deeper net. We visualize the optimization trajectories in Appendix A.0.1 to illustrate the hypothesis. To further validate our hypothesis, we integrate Network Morphism as the initializer in AutoGrow with Convergent Growth policy. 
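A minimal sketch of how the growing and stopping checks could be coded is given below. It assumes a list of per-epoch validation accuracies is available, and the helper names mirror Algorithm 1, but it is our own illustration of the policies rather than the reference implementation.

```python
def meets_growing_policy(val_acc_history, policy="periodic", K=3, tau=0.0005):
    """Periodic Growth always grows every K epochs; Convergent Growth waits for convergence first."""
    if policy == "periodic":
        return True
    # Convergent Growth: grow only if validation accuracy improved by less than tau over the last K epochs.
    if len(val_acc_history) <= K:
        return False
    return max(val_acc_history[-K:]) - max(val_acc_history[:-K]) < tau

def meets_stopping_policy(val_acc_history, J=100, tau=0.0005):
    """Stop growing a sub-network if validation accuracy improved by less than tau in the last J epochs
    (J corresponds to the larger window used for Periodic Growth; tau = 0.05% as in the paper)."""
    if len(val_acc_history) <= J:
        return False
    return max(val_acc_history[-J:]) - max(val_acc_history[:-J]) < tau
```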
We refer to this version of AutoGrow as c-AutoGrow with “c-” denoting “Convergent.” More specific, we take ZeroInit or AdamInit as sub-module initializer and “Convergent Growth” policy in Algorithm 1. To recap, in this setting, AutoGrow trains a shallower net till it converges, then grows a sub-module which is initialized by Network Morphism, and repeats the same process till there is no further accuracy improvement. In every interval of K training epochs (train(g(X0),K) in Algorithm 1), “staircase” learning rate is used. The learning rate is reset to 0.1 at the first epoch, and decayed by 0.1× at epoch K2 and 3K 4 . The results are shown in Table 3 by “staircase” rows, which illustrate that c-AutoGrow can grow a DNN multiple times and finally find a depth. However, there are two problems: 1) the final accuracy is lower than training the found net from scratch, as indicated by “∆”, validating our hypothesis; 2) the depth learning stops too early with a relatively shallower net, while a deeper net beyond the found depth can achieve a higher accuracy as we will show in Table 6. These problems provide a circumstantial evidence of the hypothesis that a converged shallow net with Network Morphism gives a bad initialization. Thus, AutoGrow cannot receive signals to continue growing after a limited number of growths. In Appendix A.0.1, Figure 6(a) visualizes the trajectory of c-AutoGrow corresponding to row “2-3-6” in Table 3. 3.2 ABLATION STUDY FOR AutoGrow DESIGN Based on the findings in Section 3.1, we propose three simple but effective solutions to further enhance AutoGrow and refer it as p-AutoGrow, with “p-” denoting “Periodic”: (1) Use a large constant learning rate for growing, i.e., 0.1 for residual networks and 0.01 for plain networks. Stochastic gradient descent with a large learning rate intrinsically introduces noises, which help to avoid a full convergence into a bad initialization from a shallower net. Note that staircase learning rate is still used for fine-tuning after discovering the final DNN; (2) Use random initialization (UniInit or GauInit) as noises to escape from an inadequate initialization; (3) Grow rapidly before a shallower net converges by taking Periodic Growth with a small K. p-AutoGrow is our final AutoGrow. In the rest part of this section, we perform ablation study to prove that the three solutions are effective. We start from c-AutoGrow, and incrementally add above solutions one by one and eventually obtain p-AutoGrow. In Table 3, first, we replace the staircase learning rate with a constant learning rate, the accuracy of AutoGrow improves and therefore “∆” improves; second, we further replace Network Morphism (ZeroInit or AdamInit) with a random initializer (UniInit or GauInit) and result in a bigger gain. Overall, combining a constant learning rate with GauInit performs the best. Thus, constant learning rate and GauInit are adopted in the remaining experiments, unless we explicitly specify them. Note that, in this paper, we are more interested in automating depth discovery to find a final DNN (“found net”) with a high accuracy (“accu”). Ideally, the “found net” has a minimum depth, a larger depth than which cannot further improve “accu”. We will show in Figure 3 that AutoGrow discovers a depth approximately satisfying this property. 
The “∆” is a metric to indicate how well shallower nets initialize deeper nets; a negative “∆” indicates that weight initialization from a shallower net hurts training of a deeper net; while a positive “∆” indicates AutoGrow helps training a deeper net, which is a byproduct of this work. Finally, we apply the last solution – Periodic Growth, and obtain our final p-AutoGrow. Our ablation study results for p-AutoGrow are summarized in Table 5 and Table 4. Table 5 analyzes the impact of the growing period K. In general, K is a hyper-parameter to trade off speed and accuracy: a smaller K takes a longer learning time but discovers a deeper net, vice versa. Our results validate the preference of a faster growth (i.e. a smaller K). On CIFAR10/CIFAR100, the accuracy reaches plateau/peak at K = 3; further reducing K produces a deeper net while the accuracy gain is marginal/impossible. In the following, we simply select K = 3 for robustness test. More importantly, our quantitative results in Table 5 show that p-AutoGrow finds much deeper nets, overcoming the very-early stop issue in c-AutoGrow in Table 3. That is, Periodic Growth proposed in this work is much more effective than Convergent Growth utilized in previous work. For sanity check, we perform the ablation study of initializers for p-AutoGrow. The results are in Table 8 in Appendix A.0.3, which further validates our wisdom on selecting GauInit. The motivation of Network Morphism in previous work was to start a deeper net from a loss function that has been well optimized by a shallower net, so as not to restart the deeper net training from scratch (Wei et al., 2016; 2017; Chen et al., 2015; Elsken et al., 2017; Cai et al., 2018a;b). In all our experiments, we find this is sure even with random initialization. Figure 2 plots the convergence curves and learning process for “42-42-42” in Table 5. Even with GauInit, the loss and accuracy rapidly recover and no restart is observed. The convergence pattern in the “Growing” stage is similar to the “Fine-tuning” stage under the same learning rate (the initial learning rate 0.1). Similar results on ImageNet will be shown in Figure 8. Our results challenge the necessity of Network Morphism when growing neural networks. At last, we perform the ablation study on the initial depth of the seed network. Table 4 demonstrates that a shallowest DNN works as well as a deeper seed. This implies that AutoGrow can appropriately stop regardless of the depth of the seed network. As the focus of this work is on depth automation, we prefer starting with the shallowest seed to avoid a manual search of a seed depth. 3.3 ADAPTABILITY OF AutoGrow To verify the adaptability of AutoGrow, we use an identical configuration (p-AutoGrow withK = 3) and test over 5 datasets and 4 seed architectures. Table 6 includes the results of all 20 combinations. Figure 3 compares AutoGrow with manual search which is obtained by training many DNNs with different depths from scratch. The results lead to the following conclusions and contributions: Finally, our supposition is that: when the size of dataset is smaller, the optimal depth should be smaller. Under this supposition, we test the effectiveness of AutoGrow by sampling a subset of dataset and verify if AutoGrow can discover a shallower depth. In Appendix A.0.3, Table 11 summarizes the results. As expected, our experiments show that AutoGrow adapts to shallower networks when the datasets are smaller. 
3.4 SCALING TO IMAGENET AND EFFICIENCY

On ImageNet, K = 3 should generalize well, but we explore AutoGrow with K = 2 and K = 5 to obtain an accuracy-depth trade-off line for comparison with human experts. The larger K = 5 enables AutoGrow to obtain a smaller DNN that trades off accuracy against model size (computation), and the smaller K = 2 achieves higher accuracy. The results are shown in Table 7, which shows that AutoGrow automatically finds a good depth without any tuning. As a byproduct, the accuracy is even higher than training the found net from scratch, indicating that the Periodic Growth in AutoGrow helps train deeper nets. The comparison of AutoGrow and manual depth design (He et al., 2016) is in Figure 4, which shows that AutoGrow achieves a better trade-off between accuracy and computation (measured by floating point operations). In Appendix A.0.3, Table 10 summarizes the breakdown of wall-clock time in AutoGrow. The growing/searching time is as efficient as (often more efficient than) fine-tuning the single discovered DNN. The scalability of AutoGrow comes from two intrinsic features: (1) it grows quickly with a short period K and stops immediately if no improvement is observed; and (2) the network is small at the beginning of growing.

4 RELATED WORK

Neural Architecture Search (NAS) (Zoph & Le, 2016) and neural evolution (Miikkulainen et al., 2019; Angeline et al., 1994; Stanley & Miikkulainen, 2002; Liu et al., 2017a; Real et al., 2017) can search network architectures from a gigantic search space. In NAS, the depth of DNNs in the search space is fixed, while AutoGrow learns the depth. Some NAS methods (Bender et al., 2018; Liu et al., 2018b; Cortes et al., 2017) can find DNNs with different depths; however, the maximum depth is pre-defined and shallower nets are obtained by padding zero operations or selecting shallower branches, while our AutoGrow learns the depth in an open domain to find a minimum depth, beyond which no accuracy improvement can be obtained. Moreover, NAS is very computation and memory intensive. To accelerate NAS, one-shot models (Saxena & Verbeek, 2016; Pham et al., 2018; Bender et al., 2018), DARTS (Liu et al., 2018b) and NAS with Transferable Cell (Zoph et al., 2018; Liu et al., 2018a) were proposed. The search time reduces dramatically but is still long from a practical perspective. It is still very challenging to deploy these methods to larger datasets such as ImageNet. In contrast, our AutoGrow can scale up to ImageNet thanks to its short depth-learning time, which is as efficient as training a single DNN.

In addition to architecture search, which requires training many DNNs from scratch, there are also many studies on learning neural structures within a single training run. Structure pruning and growing were proposed for different goals, such as efficient inference (Wen et al., 2016; Li et al., 2016; Lebedev & Lempitsky, 2016; He et al., 2017; Luo et al., 2017; Liu et al., 2017b; Dai et al., 2017; Huang et al., 2018; Gordon et al., 2018; Du et al., 2019), lifelong learning (Yoon et al., 2017) and model adaptation (Feng & Darrell, 2015; Philipp & Carbonell, 2017). However, those works fixed the network depth and limited structure learning to the existing layers. Optimization over a DNN with fixed depth is easier as the skeleton architecture is known. AutoGrow operates in a scenario where the DNN depth is unknown; hence, it needs to search for the optimal depth.
A APPENDIX

A.0.1 OPTIMIZATION TRAJECTORIES OF NETWORK MORPHISM

We hypothesize that a converged shallower net may not be an adequate initialization. Figure 5 visualizes and compares the optimization trajectories of Network Morphism and training from scratch. In this figure, the shallower net is Basic3ResNet-3-3-3 (ResNet-20) and the deeper one is Basic3ResNet-5-5-5 (ResNet-32) in Table 2. The initializer is ZeroInit. The visualization method is extended from Li et al. (2018). Points on the trajectory are evenly sampled every few epochs. To maximize the variance of the trajectory, we use PCA to project from the high-dimensional space to a 2D space and use the first two Principal Components (PCs) to form the axes in Figure 5. The contours of the training loss function and the trajectory are visualized around the final minimum of the deeper net. When projecting a shallower net into the deeper net's space, zeros are padded for the parameters that do not exist in the shallower net. We must note that the loss increase along the trajectory does not truly represent the situation in the high-dimensional space, as the trajectory is just a projection. It is possible that the loss keeps decreasing in the high-dimensional space while it appears the opposite way in the 2D space. The sharp detour at “Morphing” in Figure 5(a) may indicate that the shallower net converges to a point from which the deeper net struggles to escape. In contrast, Figure 5(b) shows that the trajectory of the direct optimization in the deeper space smoothly converges to a better minimum.

Figure 6(a) visualizes the trajectory of c-AutoGrow corresponding to row “2-3-6” in Table 3. Along the trajectory, there are many attempts to detour and escape from an initialization given by a shallower net. Figure 6(b) visualizes the trajectory corresponding to row “2-4-3” in Table 3, which is much smoother compared to Figure 6(a). Figure 6(c)(d) visualize the trajectories of p-AutoGrow with K = 50 and 3. The 2D projection gives limited information to reveal the advantages of p-AutoGrow compared to c-AutoGrow in Figure 6(b), although the trajectory of our final p-AutoGrow in Figure 6(d) is plausibly more similar to that of training from scratch in Figure 5(b).

A.0.2 VISUALIZATION OF LOSS SURFACES AROUND MINIMA

Figure 7 visualizes the loss surfaces around the minima found by AutoGrow and the baseline. Intuitively, AutoGrow finds wider or deeper minima with less chaotic landscapes.

A.0.3 MORE EXPERIMENTAL RESULTS

Figure 8 plots the growing and converging curves for two DNNs in Table 10. Table 11 summarizes the adaptability of AutoGrow to the size of the dataset. In each set of experiments, the dataset is randomly down-sampled to 100%, 75%, 50% and 25%. For a fair comparison, K is divided by the percentage of the dataset such that the number of mini-batches between growths remains the same. As expected, our experiments show that AutoGrow adapts to shallower networks when the dataset sizes are smaller.
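The projection procedure described in A.0.1 can be sketched as follows. This is an illustrative reconstruction, not the authors' visualization code; the snapshot format and the embed helper (the zero-padding map from the shallower net into the deeper net's parameter space) are assumptions.

```python
# Sketch of the trajectory projection described in A.0.1.
# Assumptions: parameter snapshots are dicts/lists of NumPy arrays sampled every few
# epochs; `embed` zero-pads the parameters that do not exist in the shallower net.
import numpy as np
from sklearn.decomposition import PCA

def flatten(params):
    """Concatenate a list of parameter arrays into one flat vector."""
    return np.concatenate([p.ravel() for p in params])

def embed(shallow_params, deep_template):
    """Place a shallower net into the deeper net's parameter space.
    `shallow_params` maps parameter names to arrays; `deep_template` is a list of
    (name, array) pairs for the deeper net. Missing parameters are zero-padded."""
    out = []
    for name, deep_p in deep_template:
        p = shallow_params.get(name)
        out.append(p if p is not None else np.zeros_like(deep_p))
    return out

def project_trajectory(snapshots_deep_space):
    """`snapshots_deep_space`: list of flattened parameter vectors, one per sampled epoch.
    Returns 2-D coordinates along the first two principal components."""
    X = np.stack(snapshots_deep_space)
    return PCA(n_components=2).fit_transform(X)
```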
1. What is the main contribution of the paper regarding neural network training?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the novelty and additional value of the current study?
4. What are the peculiarities in the results that suggest a bias in the algorithm or a shared property among datasets?
5. How would you explain the regularity of sub-network depths across multiple datasets?
6. What would be a more appropriate way to present the results in Figure 3 to account for different baseline networks?
7. Would it be beneficial to investigate non-regular architectures for improved performance, reduced computational cost, or both?
Review
Review
This paper's contribution is a method for automatically growing the depth of a neural network during training. It compares several heuristics that may be used to successfully achieve this goal and identifies a set of choices that work well together on multiple datasets. The paper focuses on CNNs that conform to a popular design pattern where the network is organized into a series of sub-networks, each consisting of a series of sub-modules (sometimes called blocks) operating at the same resolution. To be precise, the proposed method aims to learn the length of each series of sub-modules. A main contribution of the paper is the demonstration that it is not necessary to train a network until convergence before adding new sub-modules as proposed in past work. Instead, it is better to grow the network after training for a short while.

My current decision for this paper is a weak rejection due to the points below. However, I am open to revising my opinion if these points are addressed satisfactorily.

- The growing strategy identified in the paper as a superior alternative seems to be already known and used, at least in the speech recognition community. Seide et al. (2011) called it Discriminative Pre-training, and showed that it outperforms greedy layer-wise pretraining and DBN pre-training. Zeyer et al. (2017) reported that a similar method also enables the training of very deep LSTM networks, which is otherwise notoriously hard. In general, the existence of prior work with the same ideas does not preclude acceptance, but the existence of this work needs to be clearly stated early on and the additional value of the current study sufficiently clarified.

- I find it strange that the final networks found by the proposed method usually have the same/similar number of sub-modules per sub-network (Tables 4, 5, 6) on multiple datasets. The only exceptions appear to be Basic4ResNet/CIFAR100 in Table 6 and about 50% of ImageNet results in Table 7. This regularity suggests that either A) the proposed algorithm prefers to set the same number of sub-modules per sub-network due to its design, or B) datasets except ImageNet have an inherent shared property that produces this result. Since option A suggests a bias in the algorithm, this peculiarity of the results needs to be investigated or explained further.

- Figure 3 constitutes the main evidence that AutoGrow finds approximately optimal depths as compared to manual searching, but it is not clear how the plot for baselines is obtained. For any given parameter budget, there are multiple baseline networks possible since the sub-networks can have different numbers of sub-modules (see previous point). This does not appear to be accounted for in Figure 3. Further, when dealing with CNNs, it would be more useful to have the computation budget on the x-axis instead of the parameter budget. This would better account for the difference between increasing depth in an earlier sub-network vs. a later one.

- The reported results appear to be for single trials throughout the paper. This does not seem sufficient, especially for the results in Tables 2 and 3 where many differences are rather small, and so drawing conclusions from these tables would be unscientific.

References:
Seide, Frank, et al. "Feature engineering in context-dependent deep neural networks for conversational speech transcription." 2011 IEEE Workshop on Automatic Speech Recognition & Understanding. IEEE, 2011.
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/FeatureEngineeringInCD-DNN-ASRU2011-pub.pdf
Zeyer, Albert, et al. "A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition." 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017. https://arxiv.org/abs/1606.06871

Update after rebuttal
-----------------------------
I'm sympathetic to the unfortunate situation that the authors are in, since the underlying growing strategy has already been covered by prior work. As I mentioned earlier, a sufficient rewrite of the paper can clearly state what has been done already so as not to take credit from the earlier authors. A revised version of the paper has not been uploaded; I suggest that the authors do so in the future. I agree that the focus of this paper is learning the 'optimal' depth by using the growing strategy. But I am not convinced that the technique indeed finds optimal depths, based on the regularity of the sub-network depths mentioned in my review. The rebuttal suggests reasons for the obtained regularity, but does not prove that these regular structures are indeed optimal and not an artifact of the algorithm itself. The baselines are also using the same regular architectures, which distorts the overall picture because it is possible that a non-regular architecture provides a better trade-off. While my rating doesn't change, I do think that the work is in an interesting direction. My final suggestions for the future are:
- Investigate where non-regular architectures (unequal sub-network depths) sit in the trade-off between accuracy, flops and parameters.
- Investigate whether the proposed algorithm can be modified to easily find non-regular architectures if they can yield equally good performance as regular ones at similar or lower cost.
ICLR
Title
AutoGrow: Automatic Layer Growing in Deep Convolutional Networks

Abstract
Depth is a key component of Deep Neural Networks (DNNs); however, designing depth is heuristic and requires much human effort. We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, it stops growing and thus discovers the depth. We propose robust growing and stopping policies that generalize to different network architectures and datasets. Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets: MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. For example, in terms of the accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts. AutoGrow is also efficient: it discovers depth within a time similar to that of training a single DNN.

1 INTRODUCTION
Layer depth is one of the decisive factors of the success of Deep Neural Networks (DNNs). For example, image classification accuracy keeps improving as the depth of network models grows (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Huang et al., 2017). Although shallow networks cannot ensure high accuracy, DNNs composed of too many layers may suffer from over-fitting and convergence difficulty in training. How to obtain the optimal depth for a DNN still remains mysterious. For instance, ResNet-152 (He et al., 2016) uses 3, 8, 36 and 3 residual blocks under output sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7, respectively, which do not show an obvious quantitative relation. In practice, people usually rely on heuristic trials and tests to obtain the depth of a network: they first design a DNN with a specific depth, then train and evaluate the network on a given dataset; finally, they change the depth and repeat the procedure until the accuracy meets the requirement. Besides the high computational cost induced by this iterative process, such trial & test iterations must be repeated whenever the dataset changes.

In this paper, we propose AutoGrow, which can automate depth discovery given a layer architecture. We will show that AutoGrow generalizes to different datasets and layer architectures. There are some previous works which add or morph layers to increase the depth of DNNs. VggNet (Simonyan & Zisserman, 2014) and DropIn (Smith et al., 2016) added new layers into shallower DNNs; Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015) morphed each layer into multiple layers to increase the depth while preserving the function of the shallower net. Table 1 summarizes the differences of this work. Their goal was to overcome the difficulty of training deeper DNNs or to accelerate training. Our goal is to automatically find an optimal depth. Moreover, previous works applied layer growth once or a few times at pre-defined locations to grow a pre-defined number of layers; in contrast, ours automatically learns the number of new layers and the growth locations without limiting the number of growths. We summarize more related works in Section 4.

Figure 1 illustrates an example of AutoGrow. It starts from the shallowest backbone network and gradually grows sub-modules (a sub-module can be one or more layers, e.g., a residual block); the growth stops once a stopping policy is satisfied.
We studied multiple initializers of new layers and multiple growing policies, and surprisingly found that: (1) a random initializer works equally well as, or better than, the more complicated Network Morphism; (2) it is more effective to grow before a shallow net converges. We hypothesize that this is because a converged shallow net is an inadequate initialization for training a deeper net, while random initialization can help to escape from a bad starting point. Motivated by this, we intentionally avoid full convergence during growing by using (1) random initialization of new layers, (2) a constant large learning rate, and (3) a short growing interval.

Figure 1: A simple example of AutoGrow (legend: growing sub-nets, sub-modules, stopped sub-nets, initializer, epochs, seed, discovered net).

Table 1: Comparison with previous works about layer growth.
              | Previous works | Ours
Goal          | Ease training  | Depth automation
Times         | Once or a few  | Unlimited
Locations     | Human defined  | Learned
Layer #       | Human defined  | Learned

Our contributions are: (1) We propose AutoGrow to automate DNN layer growing and depth discovery. AutoGrow is very robust: with the same hyper-parameters, it adapts network depth to various datasets including MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. Moreover, AutoGrow can also discover shallower DNNs when the dataset is a subset. (2) AutoGrow demonstrates high efficiency and scales up to ImageNet, because the layer growing is as fast as training a single DNN. On ImageNet, it discovers new ResNets with a better trade-off between accuracy and computation complexity. (3) We challenge the idea of Network Morphism, as random initialization works equally well or better when growing layers. (4) We find that it is beneficial to rapidly grow layers before a shallower net converges, contradicting previous intuition.

2 AutoGrow – A DEPTH GROWING ALGORITHM

Algorithm 1: AutoGrow Algorithm.
Input:
  A seed shallow network g(X_0) composed of M sub-networks F = {f_i(·; W_i) : i = 0 ... M−1}, where each sub-network has only one sub-module (a dimension-reduction sub-module); an epoch interval K to check the growing and stopping policies; the number of fine-tuning epochs N after growing.
Initialization:
  A circular linked list of sub-networks under growing: subNetList = f_0(·; W_0) → ... → f_{M−1}(·; W_{M−1}) → back to f_0(·; W_0);
  The current growing sub-network: growingSubNet = subNetList.head() = f_0(·; W_0);
  The recently grown sub-network: grownSubNet = None;
Process:
  # while there exist growing sub-network(s)
  while subNetList.size() > 0 do
    train(g(X_0), K)  # train the whole network g(X_0) for K epochs
    if meetStoppingPolicy() then
      # remove a sub-network from the growing list if its growth did not improve accuracy
      subNetList.delete(grownSubNet);
    end
    if meetGrowingPolicy() and subNetList.size() > 0 then
      # current growing sub-network growingSubNet == f_i(·; W_i)
      W_i = W_i ∪ W'  # stack a new sub-module W' on top of f_i(·; W_i)
      initializer(W');  # initialize the new sub-module W'
      # record the recently grown sub-network and iterate to the next sub-network
      grownSubNet = growingSubNet;
      growingSubNet = subNetList.next(growingSubNet);
    end
  end
  Fine-tune the discovered network g(X_0) for N epochs;
Output:
  A trained neural network g(X_0) with learned depth.

Figure 1 gives an overview of the proposed AutoGrow. In this paper, we use network, sub-networks, sub-modules and layers to describe the architecture hierarchy. A network is composed of a cascade of sub-networks.
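For concreteness, the following Python sketch is one way Algorithm 1 could be realized. It is a simplified rendering rather than a reference implementation: train, meet_growing_policy, meet_stopping_policy and the sub-networks' grow method are all assumed placeholders for the procedures defined in the subsections below, and the circular pointer over sub-networks is handled with a simple index instead of a linked list.

```python
# Minimal sketch of Algorithm 1; none of these names come from a released implementation.

def autogrow(network, sub_networks, initializer, K, num_finetune_epochs, train,
             meet_growing_policy, meet_stopping_policy):
    growing = list(sub_networks)         # sub-networks still allowed to grow
    idx = 0                              # circular pointer over `growing`
    grown = None                         # the most recently grown sub-network
    while growing:
        train(network, epochs=K)
        if meet_stopping_policy() and grown is not None:
            growing.remove(grown)        # its last growth brought no accuracy gain
            grown = None
        if growing and meet_growing_policy():
            target = growing[idx % len(growing)]
            target.grow(initializer)     # stack and initialize one new sub-module
            grown = target
            idx += 1                     # move to the next sub-network in the cycle
    train(network, epochs=num_finetune_epochs)  # fine-tune the discovered depth
    return network
```

With Periodic Growth, meet_growing_policy always returns True, so the network grows every K epochs until each sub-network's growth stops improving validation accuracy.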
A sub-network is composed of sub-modules, which typically share the same output size. A sub-module (e.g., a residual block) is an elementary growing block composed of one or a few layers. In this section, we rigorously formulate a generic version of AutoGrow, which will be materialized in the following subsections.

A deep convolutional network g(X_0) is a cascade of sub-networks composed as g(X_0) = l(f_{M−1}(f_{M−2}(··· f_1(f_0(X_0)) ···))), where X_0 is an input image, M is the number of sub-networks, l(·) is a loss function, and X_{i+1} = f_i(X_i) is a sub-network that operates on an input image or a feature tensor X_i ∈ R^{c_i × h_i × w_i}. Here, c_i is the number of channels, and h_i and w_i are the spatial dimensions. f_i(X_i) is a simplified notation of f_i(X_i; W_i), where W_i is the set of sub-modules' parameters within the i-th sub-network. Thus W = {W_i : i = 0 ... M−1} denotes the whole set of parameters in the DNN.

To facilitate growing, the following properties are supported within a sub-network: (1) the first sub-module usually reduces the size of the input feature maps, e.g., using pooling or convolution with a stride; and (2) all sub-modules in a sub-network maintain the same output size. As such, our framework can support popular networks, including VggNet-like plain networks (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNets (He et al., 2016) and DenseNets (Huang et al., 2017). In this paper, we select ResNets and VggNet-like nets as representatives of DNNs with and without shortcuts, respectively.

With the above notations, Algorithm 1 rigorously describes the AutoGrow algorithm. In brief, AutoGrow starts with the shallowest net, where every sub-network has only one sub-module for spatial dimension reduction. AutoGrow loops over all growing sub-networks in order. For each sub-network, AutoGrow stacks a new sub-module. When the new sub-module does not improve the accuracy, the growth of the corresponding sub-network is permanently stopped. The details of our method are materialized in the following subsections.

2.1 SEED SHALLOW NETWORKS AND SUB-MODULES

In this paper, on all datasets except ImageNet, we explore depth growing for four types of DNNs: (1) Basic3ResNet: the same ResNet used for CIFAR10 in He et al. (2016), which has 3 residual sub-networks with output spatial sizes of 32×32, 16×16 and 8×8, respectively; (2) Basic4ResNet: a variant of the ResNet used for ImageNet in He et al. (2016), built from basic residual blocks (each of which contains two convolutions and one shortcut). There are 4 sub-networks with output spatial sizes of 32×32, 16×16, 8×8 and 4×4, respectively; (3) Plain3Net: a VggNet-like plain net obtained by removing the shortcuts in Basic3ResNet; (4) Plain4Net: a VggNet-like plain net obtained by removing the shortcuts in Basic4ResNet. In AutoGrow, the architectures of the seed shallow networks and sub-modules are pre-defined. In plain DNNs, a sub-module is a stack of convolution, Batch Normalization and ReLU; in residual DNNs, a sub-module is a residual block. In AutoGrow, a sub-network is a stack of all sub-modules with the same output spatial size. Unlike He et al. (2016), which manually designed the depth, AutoGrow starts from a seed architecture in which each sub-network has only one sub-module and automatically learns the number of sub-modules. On ImageNet, we apply the same backbones as in He et al. (2016) for the seed architectures. A seed architecture has only one sub-module under each output spatial size.
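A minimal PyTorch sketch of such a growable sub-network is given below. The class and layer choices are illustrative assumptions (e.g., the concrete reduction sub-module and channel widths), not the architecture definitions from the paper's code.

```python
# Sketch (PyTorch): a growable sub-network is a stack of residual sub-modules sharing
# one output size, where `grow` appends one new block. Names are illustrative only.
import torch.nn as nn

class BasicBlock(nn.Module):
    """A basic residual sub-module: two 3x3 convolutions with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

class SubNetwork(nn.Module):
    """Seed state: only the dimension-reduction sub-module; all later sub-modules
    keep the output size. `grow` stacks one new residual block."""
    def __init__(self, in_channels, channels):
        super().__init__()
        self.channels = channels
        self.reduce = nn.Sequential(
            nn.Conv2d(in_channels, channels, 3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential()          # no extra sub-modules in the seed

    def grow(self, initializer):
        block = BasicBlock(self.channels)
        initializer(block)                     # e.g. zero or Gaussian init of its last BN
        self.blocks.add_module(str(len(self.blocks)), block)

    def forward(self, x):
        return self.blocks(self.reduce(x))
```

Here grow simply appends one sub-module of the same width, matching the convention that all sub-modules within a sub-network keep one output size.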
For a ResNet using basic residual blocks or bottleneck residual blocks (He et al., 2016), we name it Basic4ResNet or Bottleneck4ResNet, respectively. Plain4Net is again obtained by removing the shortcuts in Basic4ResNet.

2.2 SUB-MODULE INITIALIZERS

Here we explain how a new sub-module W' is initialized by initializer(W') in Algorithm 1. Network Morphism changes the DNN architecture while preserving the loss function via a special initialization of the new layers, that is, g(X_0; W) = g(X_0; W ∪ W') for all X_0. A residual sub-module has a nice property: when stacking a residual block and initializing its last Batch Normalization layer as zeros, the function of the shallower net is preserved but the DNN is morphed into a deeper net. Thus, Network Morphism can be easily implemented by this zero initialization (ZeroInit). In this work, all layers in W' are initialized using default randomization, except for a special treatment of the last Batch Normalization layer in a residual sub-module. Besides ZeroInit, we propose a new AdamInit for Network Morphism. In AdamInit, we freeze all parameters except the last Batch Normalization layer in W', and then use the Adam optimizer (Kingma & Ba, 2014) to optimize this last Batch Normalization layer for a maximum of 10 epochs, until the training accuracy of the deeper net is as good as that of the shallower one. After AdamInit, all parameters are jointly optimized. We view AdamInit as a form of Network Morphism because the training loss is similar after AdamInit. We empirically find that AdamInit can usually find a solution in fewer than 3 epochs. We also study random initialization of the last Batch Normalization layer using uniform (UniInit) or Gaussian (GauInit) noise with a standard deviation of 1.0. We will show that GauInit obtains the best results, challenging the idea of Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015).

2.3 GROWING AND STOPPING POLICIES

In Algorithm 1, a growing policy refers to meetGrowingPolicy(), which returns true when the network should grow a sub-module. Two growing policies are studied here:
1. Convergent Growth: meetGrowingPolicy() returns true when the improvement of validation accuracy is less than τ over the last K epochs. That is, in Convergent Growth, AutoGrow only grows when the current network has converged. This is similar to the growing criterion adopted in previous works (Elsken et al., 2017; Cai et al., 2018a;b).
2. Periodic Growth: meetGrowingPolicy() always returns true, that is, the network grows every K epochs. Therefore, K is also the growing period. In the best practice of AutoGrow, K is small (e.g., K = 3), such that the network grows before it converges.
Our experiments will show that Periodic Growth outperforms Convergent Growth. We hypothesize that a fully converged shallower net is an inadequate initialization to train a deeper net. We will perform experiments to test this hypothesis and visualize the optimization trajectory to illustrate it.

A stopping policy denotes meetStoppingPolicy() in Algorithm 1. When Convergent Growth is adopted, meetStoppingPolicy() returns true if a recent growth does not improve the validation accuracy by more than τ within K epochs. We use a similar stopping policy for Periodic Growth; however, as the network can grow rapidly with a small period K (e.g., K = 3) before it converges, we use a larger window size J for stopping. Specifically, when Periodic Growth is adopted, meetStoppingPolicy() returns true when the validation accuracy improves by less than τ in the last J epochs, where J ≫ K.
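The sketch below gives hedged PyTorch versions of the three initializers from Section 2.2 and a simple accuracy-window check that both policies of Section 2.3 could share. The argument new_bn (the last Batch Normalization layer of a freshly grown residual sub-module), the data loader, and the window bookkeeping are assumptions for illustration, not the paper's released code.

```python
# Sketch of the sub-module initializers and of a shared policy check; illustrative only.
import torch
import torch.nn as nn

def zero_init(new_bn):
    nn.init.zeros_(new_bn.weight)          # ZeroInit: the morphed deeper net preserves the function

def gau_init(new_bn, std=1.0):
    nn.init.normal_(new_bn.weight, mean=0.0, std=std)   # GauInit: best-performing choice in the paper

def adam_init(model, new_bn, criterion, loader, max_epochs=10):
    """AdamInit: freeze all parameters except the new block's last BN and briefly tune it
    with Adam until the deeper net roughly recovers the shallower net's training accuracy."""
    for p in model.parameters():
        p.requires_grad_(False)
    new_bn.weight.requires_grad_(True)
    new_bn.bias.requires_grad_(True)
    opt = torch.optim.Adam([new_bn.weight, new_bn.bias])
    for _ in range(max_epochs):
        for x, y in loader:
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()
    for p in model.parameters():           # afterwards all parameters are optimized jointly
        p.requires_grad_(True)

def improved(val_acc_history, window, tau=0.0005):
    """Did validation accuracy improve by more than tau within the last `window` epochs?
    Stopping uses window = J; Convergent Growth uses window = K."""
    if len(val_acc_history) <= window:
        return True
    return max(val_acc_history[-window:]) - max(val_acc_history[:-window]) > tau
```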
The hyper-parameters τ, J and K control the operation of AutoGrow; they can be easily set up and generalize well. τ denotes the significance of an accuracy improvement for classification; we simply set τ = 0.05% in all experiments. J represents how many epochs to wait for an accuracy improvement before stopping the growth of a sub-network. It is more meaningful to consider stopping once the new net has been trained to some extent; as such, we set J to the number of epochs T under the largest learning rate when training a baseline. K determines how frequently AutoGrow checks the policies. In Convergent Growth, we simply set K = T, which is long enough to ensure convergence. In Periodic Growth, K is set to a small fraction of T to enable fast growth before convergence; more importantly, K = 3 is very robust across all networks and datasets. Therefore, all these hyper-parameters are very robust and strongly tied to the design considerations.

3 EXPERIMENTS

In this paper, we use Basic3ResNet-2-3-2, for instance, to denote a model architecture which contains 2, 3 and 2 sub-modules in the first, second and third sub-networks, respectively. Sometimes we simplify it to 2-3-2 for convenience. AutoGrow always starts from the shallowest depth of 1-1-1 and uses the maximum validation accuracy as the metric to guide growing and stopping. All DNN baselines are trained by SGD with momentum 0.9 using a staircase learning rate. The initial learning rate is 0.1 for ResNets and 0.01 for plain networks. On ImageNet, baselines are trained using batch size 256 for 90 epochs, within which the learning rate is decayed by 0.1× at epochs 30 and 60. On all other, smaller datasets, baselines are trained using batch size 128 for 200 epochs, and the learning rate is decayed by 0.1× at epochs 100 and 150.

Our early experiments followed prior wisdom by growing layers with Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015; Elsken et al., 2017; Cai et al., 2018a;b), i.e., AutoGrow with ZeroInit (or AdamInit) and the Convergent Growth policy; however, it stopped early with very shallow DNNs, failing to find the optimal depth. We hypothesize that a converged shallow net with Network Morphism gives a bad initialization for training a deeper neural network. Section 3.1 experimentally tests this hypothesis. To tackle this issue, we intentionally avoid convergence during growing through three simple solutions, which are evaluated in Section 3.2. Finally, Section 3.3 and Section 3.4 include extensive experiments showing the effectiveness of our final AutoGrow.

3.1 SUBOPTIMUM OF NETWORK MORPHISM AND CONVERGENT GROWTH

In this section, we study Network Morphism itself and its integration into our AutoGrow under Convergent Growth. When studying Network Morphism, we take the following steps: 1) train a shallower ResNet to convergence, 2) stack residual blocks on top of each sub-network to morph to a deeper net, 3) use ZeroInit or AdamInit to initialize the new layers, and 4) train the deeper net in a standard way. We compare the accuracy difference (“∆”) between Network Morphism and training the deeper net from scratch. Table 2 summarizes our results. Network Morphism has a lower accuracy (negative “∆”) in all cases, which validates our hypothesis that a converged shallow network with Network Morphism gives a bad initialization for training a deeper net. We visualize the optimization trajectories in Appendix A.0.1 to illustrate the hypothesis. To further validate our hypothesis, we integrate Network Morphism as the initializer in AutoGrow with the Convergent Growth policy.
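As a rough sketch of the baseline training protocol described above (not the authors' training script), the following PyTorch snippet wires up SGD with momentum 0.9 and the staircase schedule used on the smaller datasets; the model and dataset objects are placeholders.

```python
# Sketch of the baseline training setup for the smaller datasets (CIFAR-style):
# batch size 128, 200 epochs, base LR 0.1, decayed by 0.1x at epochs 100 and 150.
import torch
from torch.utils.data import DataLoader

def train_baseline(model, train_set, epochs=200, base_lr=0.1, batch_size=128):
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr, momentum=0.9)
    # Staircase schedule; on ImageNet the milestones would be epochs 30 and 60 instead.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```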
1. What is the main contribution of the paper regarding growing network depth?
2. What are the strengths and weaknesses of the proposed method in terms of empirical results and insights into the 'growing networks' paradigm?
3. Do you have any suggestions or criticisms regarding the experimental design, such as the choice of sub-module or the use of random initialization?
4. How does the paper's contribution compare to other works in the literature of NAS, and what are the implications of the findings for ZeroInit and GauInit?
5. Would including experiments on other data modalities or addressing some of the suggested improvements make the paper stronger, and what kind of theoretical understanding could be developed for this paradigm of growing networks?
Review
Review
Contributions: This paper best fits in the literature that explores growing network depth. The main framework here is to interleave training a shallower network and adding new layers. This paper (their final algorithm) differs from existing methods in that they: 1) initialize the new layers using standard initialization as opposed to the commonly used zero-init in this literature, 2) grow at a fixed interval, and this interval is short (to avoid the shallower nets being overly trained), 3) use a large and constant learning rate during the growing phase. Empirically, they show competitive results on standard image benchmarks. More interestingly (to me), they provide interesting insights into this paradigm of ‘growing networks’.

Comments/Questions: Section 2 of the paper describes the proposed method in good detail. Section 3 of the paper describes the experiments. Since for now I see the contribution of this paper as mostly empirical, I will give my detailed feedback here.

3.1 (Suboptimum of Network Morphism (NM)) Table 2 shows NM is worse than training from scratch, and this isn’t fixed by AdamInit. Table 3 shows c-AutoGrow (in between p-AutoGrow and NM) still does worse than from scratch, pinpointing the problem to converged subnetworks.

3.2 (p-AutoGrow) Table 3 shows +Constant LR helps, then +RandomInit helps. Tables 4, 5 show +Periodic gets the best performance.

*Suggestion* The found nets in Tables 4, 5 are significantly deeper than those in Tables 2, 3; also, there are no \Delta values. Also, although within this write-up those are the highest numbers, in the broader literature of NAS this doesn’t seem to be that good. From a quick search, many methods in Table 1 of [1] seem to give >96% accuracy on CIFAR10, some even close to 98%. It might be good to at least discuss why this method is limited from achieving that.

I do like the finding that ZeroInit is unnecessary, as reported in the rest of this subsection. However, it is unsatisfying to me that many past works (as cited by the authors) required this ZeroInit without ever trying RandomInit.

*Suggestion* I would love to see a more thorough discussion on why GauInit is better than ZeroInit, not just more numbers. For example, even just a text description of why past works found ZeroInit useful, and countering some of those claims, would be interesting. A more controlled experiment rather than training 2 networks by swapping this would be interesting. ZeroInit is used in more contexts than just NM. For example, good flow models like Glow also use such initialization, likely for a different reason, but I wonder if the findings here have any implication for ZeroInit more generally.

3.3 (Many datasets) Table 6 is a strong result. One odd thing is how deep the found net has to be for MNIST. This actually suggests to me that AutoGrow does not have the ability to stop early when it can. And in the discussion, the authors argue that by using a better sub-module like in NAS they can do better. This raises the question of why the authors did not choose to use it. I would believe it if the proposed method had obvious reasons that it can transfer to different architectures, but for now I cannot jump to the conclusion that, say, p-AutoGrow with GauInit will necessarily work when using a different sub-module. Perhaps the reason past NM works didn’t use GauInit was also due to the fact that past sub-modules didn’t work with GauInit.

3.4 (Scale to ImageNet) It’d be good to add reference results from other papers.
Minor details: There is some good content in this work, but for it to be a strong *empirical* contribution, perhaps it would be more useful to include experiments on other data modalities where things are not so well tuned, and show state-of-the-art results. For it to be a strong *analysis* paper, it should be expanded, at least addressing some of the *suggestions* mentioned above. Unrelated to my evaluation of this work, reading this makes me think we should (and can) develop theoretical understanding of this paradigm of growing networks.
References:
[1] https://arxiv.org/pdf/1905.13360.pdf
ICLR
Title AutoGrow: Automatic Layer Growing in Deep Convolutional Networks Abstract Depth is a key component of Deep Neural Networks (DNNs), however, designing depth is heuristic and requires many human efforts. We propose AutoGrow to automate depth discovery in DNNs: starting from a shallow seed architecture, AutoGrow grows new layers if the growth improves the accuracy; otherwise, stops growing and thus discovers the depth. We propose robust growing and stopping policies to generalize to different network architectures and datasets. Our experiments show that by applying the same policy to different network architectures, AutoGrow can always discover near-optimal depth on various datasets of MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. For example, in terms of accuracy-computation trade-off, AutoGrow discovers a better depth combination in ResNets than human experts. Our AutoGrow is efficient. It discovers depth within similar time of training a single DNN. 1 INTRODUCTION Layer depth is one of the decisive factors of the success of Deep Neural Networks (DNNs). For example, image classification accuracy keeps improving as the depth of network models grows (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Huang et al., 2017). Although shallow networks cannot ensure high accuracy, DNNs composed of too many layers may suffer from over-fitting and convergence difficulty in training. How to obtain the optimal depth for a DNN still remains mysterious. For instance, ResNet-152 (He et al., 2016) uses 3, 8, 36 and 3 residual blocks under output sizes of 56 × 56, 28 × 28, 14 × 14 and 7 × 7, respectively, which don’t show an obvious quantitative relation. In practice, people usually reply on some heuristic trials and tests to obtain the depth of a network: they first design a DNN with a specific depth and then train and evaluate the network on a given dataset; finally, they change the depth and repeat the procedure until the accuracy meets the requirement. Besides the high computational cost induced by the iteration process, such trial & test iterations must be repeated whenever dataset changes. In this paper, we propose AutoGrow that can automate depth discovery given a layer architecture. We will show that AutoGrow generalizes to different datasets and layer architectures. There are some previous works which add or morph layers to increase the depth in DNNs. VggNet (Simonyan & Zisserman, 2014) and DropIn (Smith et al., 2016) added new layers into shallower DNNs; Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015) morphed each layer to multiple layers to increase the depth meanwhile preserving the function of the shallower net. Table 1 summarizes differences in this work. Their goal was to overcome difficulty of training deeper DNNs or accelerate it. Our goal is to automatically find an optimal depth. Moreover, previous works applied layer growth by once or a few times at pre-defined locations to grow a pre-defined number of layers; in contrast, ours automatically learns the number of new layers and growth locations without limiting growing times. We will summarize more related works in Section 4. Figure 1 illustrates an example of AutoGrow. It starts from the shallowest backbone network and gradually grows sub-modules (A sub-module can be one or more layers, e.g., a residual block); the growth stops once a stopping policy is satisfied. 
We studied multiple initializers of new layers and multiple growing policies, and surprisingly find that: (1) a random initializer works equally or better than complicated Network Morphism; (2) it is more effective to grow before a shallow net converges. We hypothesize that this is because a converged shallow net is an inadequate initialization for training deeper net, while random initialization can help to escape from a bad starting point. Motivated by this, we intentionally avoid full convergence during the growing by using (1) random initialization of new layers, (2) a constant large learning rate, and (3) a short growing interval. Growing sub-nets Sub-modules Stopped sub-nets Initializer Epochs Seed Discovered Figure 1: A simple example of AutoGrow. Previous works Ours Goal Ease training Depth automation Times Once or a few Unlimited Locations Human defined Learned Layer # Human defined Learned Table 1: Comparison with previous works about layer growth. Our contributions are: (1) We propose AutoGrow to automate DNN layer growing and depth discovery. AutoGrow is very robust. With the same hyper-parameters, it adapts network depth to various datasets including MNIST, FashionMNIST, SVHN, CIFAR10, CIFAR100 and ImageNet. Moreover, AutoGrow can also discover shallower DNNs when the dataset is a subset. (2) AutoGrow demonstrates high efficiency and scales up to ImageNet, because the layer growing is as fast as training a single DNN. On ImageNet, it discovers a new ResNets with better trade-off between accuracy and computation complexity. (3) We challenge the idea of Network Morphism, as random initialization works equally or better when growing layers. (4) We find that it is beneficial to rapidly grow layers before a shallower net converge, contradicting previous intuition. 2 AutoGrow – A DEPTH GROWING ALGORITHM Algorithm 1 AutoGrow Algorithm. Input : 1 A seed shallow network g(X0) composed of M sub-networks F = {fi (·;Wi) : i = 0 . . .M − 1}, where each sub-network has only one sub-module (a dimension reduction sub-module); an epoch interval K to check growing and stopping policies; the number of fine-tuning epochs N after growing. Initialization: 2 A Circular Linked List of sub-networks under growing: subNetList = f0 (·;W0)→ · · · → fM−1 (·;WM−1)←−−−−−−−−−−−−−−−−−−−−−−−−−−− ; 3 The current growing sub-network: growingSubNet = subNetList.head() = f0 (·;W0); 4 The recent grown sub-network: grownSubNet = None; Process : 5 # if there exist growing sub-network(s) 6 while subNetList.size()>0 do 7 train(g(X0), K) # train the whole network g(X0) for K epochs 8 if meetStoppingPolicy() then 9 # remove a sub-network from the growing list if its growth did not improve accuracy 10 subNetList.delete(grownSubNet); 11 end 12 if meetGrowingPolicy() and subNetList.size()>0 then 13 # current growing sub-network growingSubNet== fi (·;Wi) 14 Wi = Wi ∪W # stack a sub-module on top of fi (·;Wi) 15 initializer(W); # initialize the new sub-moduleW 16 # record the recent grown sub-network and iterate to a next sub-network 17 grownSubNet= growingSubNet; 18 growingSubNet= subNetList.next(growingSubNet); 19 end 20 end 21 Fine-tune the discovered network g(X0) for N epochs; Output : 22 A trained neural network g(X0) with learned depth. Figure 1 gives an overview of the proposed AutoGrow. In this paper, we use network, sub-networks, sub-modules and layers to describe the architecture hierarchy. A network is composed of a cascade of sub-networks. 
A sub-network is composed of sub-modules, which typical share the same output size. A sub-module (e.g. a residual block) is an elementary growing block composed of one or a few layers. In this section, we rigorously formulate a generic version of AutoGrow which will be materialized in subsections. A deep convolutional network g(X0) is a cascade of sub-networks by composing functions as g(X0) = l (fM−1 (fM−2 (· · ·f1 (f0 (X0)) · · · ))), where X0 is an input image, M is the number of sub-networks, l(·) is a loss function, and Xi+1 = fi (Xi) is a sub-network that operates on an input image or a feature tensor Xi ∈ Rci×hi×wi . Here, ci is the number of channels, and hi and wi are spatial dimensions. fi (Xi) is a simplified notation of fi (Xi;Wi), where Wi is a set of sub-modules’ parameters within the i-th sub-network. Thus W = {Wi : i = 0 . . .M − 1} denotes the whole set of parameters in the DNN. To facilitate growing, the following properties are supported within a sub-network: (1) the first sub-module usually reduces the size of input feature maps, e.g., using pooling or convolution with a stride; and (2) all sub-modules in a sub-network maintain the same output size. As such, our framework can support popular networks, including VggNet-like plain networks (Simonyan & Zisserman, 2014), GoogLeNet (Szegedy et al., 2015), ResNets (He et al., 2016) and DenseNets (Huang et al., 2017). In this paper, we select ResNets and VggNet-like nets as representatives of DNNs with and without shortcuts, respectively. With above notations, Algorithm 1 rigorously describes the AutoGrow algorithm. In brief, AutoGrow starts with the shallowest net where every sub-network has only one sub-module for spatial dimension reduction. AutoGrow loops over all growing sub-networks in order. For each sub-network, AutoGrow stacks a new sub-module. When the new sub-module does not improve the accuracy, the growth in corresponding sub-network will be permanently stopped. The details of our method will be materialized in the following subsections. 2.1 SEED SHALLOW NETWORKS AND SUB-MODULES In this paper, in all datasets except ImageNet, we explore growing depth for four types of DNNs: (1) Basic3ResNet: the same ResNet used for CIFAR10 in He et al. (2016), which has 3 residual subnetworks with output spatial sizes of 32×32, 16×16 and 8×8, respectively; (2) Basic4ResNet: a variant of ResNet used for ImageNet in He et al. (2016) built by basic residual blocks (each of which contains two convolutions and one shortcut). There are 4 sub-networks with output spatial sizes of 32× 32, 16× 16, 8× 8 and 4× 4, respectively; (3) Plain3Net: a VggNet-like plain net by removing shortcuts in Basic3ResNet; (4) Plain4Net: a VggNet-like plain net by removing shortcuts in Basic4ResNet. In AutoGrow, the architectures of seed shallow networks and sub-modules are pre-defined. In plain DNNs, a sub-module is a stack of convolution, Batch Normalization and ReLU; in residual DNNs, a sub-module is a residual block. In AutoGrow, a sub-network is a stack of all sub-modules with the same output spatial size. Unlike He et al. (2016) which manually designed the depth, AutoGrow starts from a seed architecture in which each sub-network has only one sub-module and automatically learns the number of sub-modules. On ImageNet, we apply the same backbones in He et al. (2016) as the seed architectures. A seed architecture has only one sub-module under each output spatial size. 
For a ResNet using basic residual blocks or bottleneck residual blocks (He et al., 2016), we name it Basic4ResNet or Bottleneck4ResNet, respectively. Plain4Net is also obtained by removing the shortcuts in Basic4ResNet.

2.2 SUB-MODULE INITIALIZERS

Here we explain how to initialize a new sub-module W in initializer(W) mentioned in Algorithm 1. Network Morphism changes the DNN architecture while preserving the loss function via a special initialization of the new layers, that is, g(X0;W) = g(X0;W ∪W) ∀X0 (i.e., the network function is unchanged after adding the new sub-module W to the whole parameter set W). A residual sub-module has a nice property: when stacking a residual block and initializing its last Batch Normalization layer as zeros, the function of the shallower net is preserved while the DNN is morphed to a deeper net. Thus, Network Morphism can be easily implemented by this zero initialization (ZeroInit). In this work, all layers in W are initialized using default randomization, except for a special treatment of the last Batch Normalization layer in a residual sub-module. Besides ZeroInit, we propose a new AdamInit for Network Morphism. In AdamInit, we freeze all parameters except the last Batch Normalization layer in W, and then use the Adam optimizer (Kingma & Ba, 2014) to optimize that last Batch Normalization layer for at most 10 epochs, until the training accuracy of the deeper net is as good as that of the shallower one. After AdamInit, all parameters are jointly optimized. We view AdamInit as a Network Morphism because the training loss is similar after AdamInit. We empirically find that AdamInit can usually find a solution in less than 3 epochs. We also study random initialization of the last Batch Normalization layer using uniform (UniInit) or Gaussian (GauInit) noise with a standard deviation of 1.0. We will show that GauInit obtains the best result, challenging the idea of Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015).

2.3 GROWING AND STOPPING POLICIES

In Algorithm 1, a growing policy refers to meetGrowingPolicy(), which returns true when the network should grow a sub-module. Two growing policies are studied here:
1. Convergent Growth: meetGrowingPolicy() returns true when the improvement of validation accuracy is less than τ in the last K epochs. That is, in Convergent Growth, AutoGrow only grows when the current network has converged. This is a growing criterion similar to those adopted in previous works (Elsken et al., 2017; Cai et al., 2018a;b).
2. Periodic Growth: meetGrowingPolicy() always returns true, that is, the network grows every K epochs. Therefore, K is also the growing period. In the best practice of AutoGrow, K is small (e.g. K = 3) such that it grows before the current network converges.
Our experiments will show that Periodic Growth outperforms Convergent Growth. We hypothesize that a fully converged shallower net is an inadequate initialization to train a deeper net. We will perform experiments to test this hypothesis and visualize the optimization trajectories to illustrate it. A stopping policy denotes meetStoppingPolicy() in Algorithm 1. When Convergent Growth is adopted, meetStoppingPolicy() returns true if a recent growth does not improve the validation accuracy by more than τ within K epochs. We use a similar stopping policy for Periodic Growth; however, as it can grow rapidly with a small period K (e.g. K = 3) before convergence, we use a larger window size J for stopping. Specifically, when Periodic Growth is adopted, meetStoppingPolicy() returns true when the validation accuracy improves by less than τ in the last J epochs, where J ≫ K.
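As a concrete illustration of the initializers above, the following PyTorch-style sketch shows how the last Batch Normalization layer of a newly stacked residual sub-module might be initialized under ZeroInit, UniInit and GauInit (AdamInit is omitted for brevity). The block definition, the uniform range, whether the bias is also randomized, and the helper names are assumptions made for this sketch rather than details taken from the paper.

```python
# Sketch of sub-module initializers for a grown residual block (Sec. 2.2).
# Only the last BatchNorm is treated specially; every other layer keeps
# PyTorch's default random initialization.
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)   # the "last Batch Normalization"

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        return torch.relu(x + self.bn2(self.conv2(out)))

def init_new_block(block, mode="GauInit"):
    """Initialize the last BN of a newly stacked block.
    ZeroInit realizes Network Morphism: the residual branch starts as zero."""
    bn = block.bn2
    if mode == "ZeroInit":
        nn.init.zeros_(bn.weight)
        nn.init.zeros_(bn.bias)
    elif mode == "UniInit":
        nn.init.uniform_(bn.weight, -1.0, 1.0)   # range is an assumption
        nn.init.uniform_(bn.bias, -1.0, 1.0)
    elif mode == "GauInit":
        nn.init.normal_(bn.weight, mean=0.0, std=1.0)
        nn.init.normal_(bn.bias, mean=0.0, std=1.0)  # randomizing the bias is an assumption
    return block

if __name__ == "__main__":
    x = torch.randn(2, 16, 8, 8)
    morphed = init_new_block(BasicBlock(16), "ZeroInit")
    # With ZeroInit the grown block outputs relu(x): the shallower net's function
    # is preserved whenever its inputs are already post-ReLU activations.
    print(torch.allclose(morphed(x), torch.relu(x)))
```

With ZeroInit the residual branch initially contributes nothing, so stacking the block keeps the shallower net's function, which is exactly the Network Morphism property discussed above; GauInit deliberately breaks this preservation to act as noise.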
Hyper-parameters τ, J and K control the operation of AutoGrow, can be easily set up, and generalize well. τ denotes the significance of accuracy improvement for classification. We simply set τ = 0.05% in all experiments. J represents how many epochs to wait for an accuracy improvement before stopping the growth of a sub-network. It is more meaningful to consider stopping when the new net is trained to some extent. As such, we set J to the number of epochs T under the largest learning rate when training a baseline. K determines how frequently AutoGrow checks the policies. In Convergent Growth, we simply set K = T, which is long enough to ensure convergence. In Periodic Growth, K is set to a small fraction of T to enable fast growth before convergence; more importantly, K = 3 is very robust across all networks and datasets. Therefore, all these hyper-parameters are robust and follow directly from the above design considerations.

3 EXPERIMENTS

In this paper, we use Basic3ResNet-2-3-2, for instance, to denote a model architecture which contains 2, 3 and 2 sub-modules in the first, second and third sub-networks, respectively. Sometimes we simplify it to 2-3-2 for convenience. AutoGrow always starts from the shallowest depth of 1-1-1 and uses the maximum validation accuracy as the metric to guide growing and stopping. All DNN baselines are trained by SGD with momentum 0.9 using a staircase learning rate. The initial learning rate is 0.1 in ResNets and 0.01 in plain networks. On ImageNet, baselines are trained using batch size 256 for 90 epochs, within which the learning rate is decayed by 0.1× at epochs 30 and 60. On all other, smaller datasets, baselines are trained using batch size 128 for 200 epochs and the learning rate is decayed by 0.1× at epochs 100 and 150. Our early experiments followed prior wisdom by growing layers with Network Morphism (Wei et al., 2016; 2017; Chen et al., 2015; Elsken et al., 2017; Cai et al., 2018a;b), i.e., AutoGrow with ZeroInit (or AdamInit) and the Convergent Growth policy; however, it stopped early with very shallow DNNs, failing to find an optimal depth. We hypothesize that a converged shallow net with Network Morphism gives a bad initialization to train a deeper neural network. Section 3.1 experimentally validates this hypothesis. To tackle this issue, we intentionally avoid convergence during growing using three simple solutions, which are evaluated in Section 3.2. Finally, Section 3.3 and Section 3.4 include extensive experiments to show the effectiveness of our final AutoGrow.

3.1 SUBOPTIMUM OF NETWORK MORPHISM AND CONVERGENT GROWTH

In this section, we study Network Morphism itself and its integration into our AutoGrow under Convergent Growth. When studying Network Morphism, we take the following steps: 1) train a shallower ResNet to converge, 2) stack residual blocks on top of each sub-network to morph to a deeper net, 3) use ZeroInit or AdamInit to initialize the new layers, and 4) train the deeper net in a standard way. We compare the accuracy difference (“∆”) between Network Morphism and training the deeper net from scratch. Table 2 summarizes our results. Network Morphism has a lower accuracy (negative “∆”) in all cases, which validates our hypothesis that a converged shallow network with Network Morphism gives a bad initialization to train a deeper net. We visualize the optimization trajectories in Appendix A.0.1 to illustrate the hypothesis. To further validate our hypothesis, we integrate Network Morphism as the initializer in AutoGrow with the Convergent Growth policy.
We refer to this version of AutoGrow as c-AutoGrow, with “c-” denoting “Convergent.” More specifically, we take ZeroInit or AdamInit as the sub-module initializer and the “Convergent Growth” policy in Algorithm 1. To recap, in this setting, AutoGrow trains a shallower net till it converges, then grows a sub-module which is initialized by Network Morphism, and repeats the same process till there is no further accuracy improvement. In every interval of K training epochs (train(g(X0), K) in Algorithm 1), a “staircase” learning rate is used. The learning rate is reset to 0.1 at the first epoch, and decayed by 0.1× at epochs K/2 and 3K/4. The results are shown in the “staircase” rows of Table 3, which illustrate that c-AutoGrow can grow a DNN multiple times and finally find a depth. However, there are two problems: 1) the final accuracy is lower than training the found net from scratch, as indicated by “∆”, validating our hypothesis; 2) the depth learning stops too early at a relatively shallow net, while a deeper net beyond the found depth can achieve a higher accuracy, as we will show in Table 6. These problems provide circumstantial evidence for the hypothesis that a converged shallow net with Network Morphism gives a bad initialization. Thus, AutoGrow cannot receive signals to continue growing after a limited number of growths. In Appendix A.0.1, Figure 6(a) visualizes the trajectory of c-AutoGrow corresponding to row “2-3-6” in Table 3.

3.2 ABLATION STUDY FOR AutoGrow DESIGN

Based on the findings in Section 3.1, we propose three simple but effective solutions to further enhance AutoGrow and refer to it as p-AutoGrow, with “p-” denoting “Periodic”: (1) Use a large constant learning rate for growing, i.e., 0.1 for residual networks and 0.01 for plain networks. Stochastic gradient descent with a large learning rate intrinsically introduces noise, which helps to avoid a full convergence into a bad initialization from a shallower net. Note that a staircase learning rate is still used for fine-tuning after discovering the final DNN; (2) Use random initialization (UniInit or GauInit) as noise to escape from an inadequate initialization; (3) Grow rapidly before a shallower net converges by adopting Periodic Growth with a small K. p-AutoGrow is our final AutoGrow. In the rest of this section, we perform an ablation study to show that the three solutions are effective. We start from c-AutoGrow and incrementally add the above solutions one by one, eventually obtaining p-AutoGrow. In Table 3, first, when we replace the staircase learning rate with a constant learning rate, the accuracy of AutoGrow improves and therefore “∆” improves; second, further replacing Network Morphism (ZeroInit or AdamInit) with a random initializer (UniInit or GauInit) results in a bigger gain. Overall, combining a constant learning rate with GauInit performs best. Thus, a constant learning rate and GauInit are adopted in the remaining experiments, unless explicitly specified otherwise. Note that, in this paper, we are more interested in automating depth discovery to find a final DNN (“found net”) with a high accuracy (“accu”). Ideally, the “found net” has the minimum depth beyond which “accu” cannot be further improved. We will show in Figure 3 that AutoGrow discovers a depth approximately satisfying this property.
The “∆” is a metric to indicate how well shallower nets initialize deeper nets; a negative “∆” indicates that weight initialization from a shallower net hurts training of a deeper net; while a positive “∆” indicates AutoGrow helps training a deeper net, which is a byproduct of this work. Finally, we apply the last solution – Periodic Growth, and obtain our final p-AutoGrow. Our ablation study results for p-AutoGrow are summarized in Table 5 and Table 4. Table 5 analyzes the impact of the growing period K. In general, K is a hyper-parameter to trade off speed and accuracy: a smaller K takes a longer learning time but discovers a deeper net, vice versa. Our results validate the preference of a faster growth (i.e. a smaller K). On CIFAR10/CIFAR100, the accuracy reaches plateau/peak at K = 3; further reducing K produces a deeper net while the accuracy gain is marginal/impossible. In the following, we simply select K = 3 for robustness test. More importantly, our quantitative results in Table 5 show that p-AutoGrow finds much deeper nets, overcoming the very-early stop issue in c-AutoGrow in Table 3. That is, Periodic Growth proposed in this work is much more effective than Convergent Growth utilized in previous work. For sanity check, we perform the ablation study of initializers for p-AutoGrow. The results are in Table 8 in Appendix A.0.3, which further validates our wisdom on selecting GauInit. The motivation of Network Morphism in previous work was to start a deeper net from a loss function that has been well optimized by a shallower net, so as not to restart the deeper net training from scratch (Wei et al., 2016; 2017; Chen et al., 2015; Elsken et al., 2017; Cai et al., 2018a;b). In all our experiments, we find this is sure even with random initialization. Figure 2 plots the convergence curves and learning process for “42-42-42” in Table 5. Even with GauInit, the loss and accuracy rapidly recover and no restart is observed. The convergence pattern in the “Growing” stage is similar to the “Fine-tuning” stage under the same learning rate (the initial learning rate 0.1). Similar results on ImageNet will be shown in Figure 8. Our results challenge the necessity of Network Morphism when growing neural networks. At last, we perform the ablation study on the initial depth of the seed network. Table 4 demonstrates that a shallowest DNN works as well as a deeper seed. This implies that AutoGrow can appropriately stop regardless of the depth of the seed network. As the focus of this work is on depth automation, we prefer starting with the shallowest seed to avoid a manual search of a seed depth. 3.3 ADAPTABILITY OF AutoGrow To verify the adaptability of AutoGrow, we use an identical configuration (p-AutoGrow withK = 3) and test over 5 datasets and 4 seed architectures. Table 6 includes the results of all 20 combinations. Figure 3 compares AutoGrow with manual search which is obtained by training many DNNs with different depths from scratch. The results lead to the following conclusions and contributions: Finally, our supposition is that: when the size of dataset is smaller, the optimal depth should be smaller. Under this supposition, we test the effectiveness of AutoGrow by sampling a subset of dataset and verify if AutoGrow can discover a shallower depth. In Appendix A.0.3, Table 11 summarizes the results. As expected, our experiments show that AutoGrow adapts to shallower networks when the datasets are smaller. 
3.4 SCALING TO IMAGENET AND EFFICIENCY In ImageNet, K = 3 should generalize well, but we explore AutoGrow with K = 2 and K = 5 to obtain an accuracy-depth trade-off line for comparison with human experts. The larger K = 5 enables AutoGrow to obtain a smaller DNN to trade-off accuracy and model size (computation) and the smaller K = 2 achieves higher accuracy. The results are shown in Table 7, which proves that AutoGrow automatically finds a good depth without any tuning. As a byproduct, the accuracy is even higher than training the found net from scratch, indicating that the Periodic Growth in AutoGrow helps training deeper nets. The comparison of AutoGrow and manual depth design (He et al., 2016) is in Figure 4, which shows that AutoGrow achieves better trade-off between accuracy and computation (measured by floating point operations). In Appendix A.0.3, Table 10 summarizes the breakdown of wall-clock time in AutoGrow. The growing/searching time is as efficient as (often more efficient than) fine-tuning the single discovered DNN. The scalability of AutoGrow comes from its intrinsic features that (1) it grows quickly with a short period K and stops immediately if no improvement is sensed; and (2) the network is small at the beginning of growing. 4 RELATED WORK Neural Architecture Search (NAS) (Zoph & Le, 2016) and neural evolution (Miikkulainen et al., 2019; Angeline et al., 1994; Stanley & Miikkulainen, 2002; Liu et al., 2017a; Real et al., 2017) can search network architectures from a gigantic search space. In NAS, the depth of DNNs in the search space is fixed, while AutoGrow learns the depth. Some NAS methods (Bender et al., 2018; Liu et al., 2018b; Cortes et al., 2017) can find DNNs with different depths, however, the maximum depth is pre-defined and shallower nets are obtained by padding zero operations or selecting shallower branches, while our AutoGrow learns the depth in an open domain to find a minimum depth, beyond which no accuracy improvement can be obtained. Moreover, NAS is very computation and memory intensive. To accelerate NAS, one-shot models (Saxena & Verbeek, 2016; Pham et al., 2018; Bender et al., 2018), DARTS (Liu et al., 2018b) and NAS with Transferable Cell (Zoph et al., 2018; Liu et al., 2018a) were proposed. The search time reduces dramatically but is still long from practical perspective. It is still very challenging to deploy these methods to larger datasets such as ImageNet. In contrast, our AutoGrow can scale up to ImageNet thanks to its short depth learning time, which is as efficient as training a single DNN. In addition to architecture search which requires to train lots of DNNs from scratch, there are also many studies on learning neural structures within a single training. Structure pruning and growing were proposed for different goals, such as efficient inference (Wen et al., 2016; Li et al., 2016; Lebedev & Lempitsky, 2016; He et al., 2017; Luo et al., 2017; Liu et al., 2017b; Dai et al., 2017; Huang et al., 2018; Gordon et al., 2018; Du et al., 2019), lifelong learning (Yoon et al., 2017) and model adaptation (Feng & Darrell, 2015; Philipp & Carbonell, 2017). However, those works fixed the network depth and limited structure learning within the existing layers. Optimization over a DNN with fixed depth is easier as the skeleton architecture is known. AutoGrow performs in a scenario where the DNN depth is unknown hence we need to seek for the optimal depth. 
A APPENDIX A.0.1 OPTIMIZATION TRAJECTORIES OF NETWORK MORPHISM We hypothesize that a converged shallower net may not be an adequate initialization. Figure 5 visualizes and compares the optimization trajectories of Network Morphism and the training from scratch. In this figure, the shallower net is Basic3ResNet-3-3-3 (ResNet-20) and the deeper one is Basic3ResNet-5-5-5 (ResNet-32) in Table 2. The initializer is ZeroInit. The visualization method is extended from Li et al. (2018). Points on the trajectory are evenly sampled every a few epochs. To maximize the variance of trajectory, we use PCA to project from a high dimensional space to a 2D space and use the first two Principle Components (PC) to form the axes in Figure 5. The contours of training loss function and the trajectory are visualized around the final minimum of the deeper net. When projecting a shallower net to a deeper net space, zeros are padded for the parameters not existing in the deeper net. We must note that the loss increase along the trajectory does not truly represent the situation in high dimensional space, as the trajectory is just a projection. It is possible that the loss remains decreasing in the high dimension while it appears in an opposite way in the 2D space. The sharp detour at “Morphing” in Figure 5(a) may indicate that the shallower net plausibly converges to a point that the deeper net struggles to escape. In contrast, Figure 5(b) shows that the trajectory of the direct optimization in the deeper space smoothly converges to a better minimum. Figure 6(a) visualizes the trajectory of c-AutoGrow corresponding to row “2-3-6” in Table 3. Along the trajectory, there are many trials to detour and escape an initialization from a shallower net. Figure 6(b) visualizes the trajectory corresponding to row “2-4-3” in Table 3, which is much smoother compared to Figure 6(a). Figure 6(c)(d) visualize the trajectories of p-AutoGrow with K = 50 and 3. The 2D projection gives limited information to reveal the advantages of p-AutoGrow comparing to c-AutoGrow in Figure 6(b), although the trajectory of our final p-AutoGrow in Figure 6(d) is plausibly more similar to the one of training from scratch in Figure 5(b). A.0.2 VISUALIZATION OF LOSS SURFACES AROUND MINIMA Figure 7 visualizes loss surfaces around minima by AutoGrow and baseline. Intuitively, AutoGrow finds wider or deeper minima with less chaotic landscapes. A.0.3 MORE EXPERIMENTAL RESULTS Figure 8 plots the growing and converging curves for two DNNs in Table 10. Table 11 summarizes the adaptability of AutoGrow to the sizes of dataset. In each set of experiments, dataset is randomly down-sampled to 100%, 75%, 50% and 25%. For a fair comparison, K is divided by the percentage of dataset such that the number of mini-batches between growths remains the same. As expected, our experiments show that AutoGrow adapts to shallower networks when the sizes are smaller.
1. What is the focus of the paper regarding neural network depth determination? 2. What is the reviewer's opinion on the simplicity of the proposed approach? 3. How does the reviewer assess the significance and usefulness of the contribution? 4. What do the tables and visualization figures reveal about the method's performance?
Review
Review The paper presents a meta-learning algorithm to automatically determine the depth of a neural network through a policy that adds depth whenever doing so improves accuracy. I have a reserved opinion because the technique used here is extremely simple, basically an implementation of a naive greedy algorithm in this scenario, which implies the problem may not be intrinsically hard, or even useful. The paper consists of a detailed narrative about how these procedures are conducted, but still, it is really hard for me to find the true merit to appreciate, and to see why this brings a nontrivial and useful contribution. The tables and visualization figures also do not say much about whether this is more than overfitting on previous works with hand-chosen depths.
ICLR
Title SupportNet: solving catastrophic forgetting in class incremental learning with support data Abstract A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and even reaches similar performance as the deep learning model trained from scratch on both old and new data. 1 INTRODUCTION Since the breakthrough in 2012 (Krizhevsky et al., 2012), deep learning has achieved great success in various fields (LeCun et al., 2015; Silver et al., 2016; Sutskever et al., 2014; He et al., 2016; Alipanahi et al., 2015; Li et al., 2018b; Dai et al., 2017). However, despite its impressive achievements, there are still several bottlenecks related to the practical part of deep learning waiting to be solved (Papernot et al., 2016; Lipton, 2016; Kemker et al., 2017). One of those bottlenecks is catastrophic forgetting (Kemker et al., 2017), which means that a well-trained deep learning model tends to completely forget all the previously learned information when learning new information (McCloskey & Cohen, 1989). That is, once a deep learning model is trained to perform a specific task, it cannot be trained easily to perform a new similar task without affecting the original task’s performance dramatically. Unlike human and animals, deep learning models do not have the ability to continuously learn over time and different datasets by incorporating the new information while retaining the previously learned experience, which is known as incremental learning. Two major theories have been proposed to explain human’s ability to perform incremental learning. The first theory is Hebbian learning (Hebb, 1949) with homeostatic plasticity (Zenke et al., 2017), which suggests that human brain’s plasticity will decrease as people learn more knowledge to protect the previously learned information. The second theory is the complementary learning system (CLS) theory (Mcclelland et al., 1995; OReilly et al., 2014), which suggests that human beings extract high-level structural information and store the high level information in a different brain area while retaining episodic memories. Inspired by the above two major neurophysiological theories, people have proposed a number of methods to deal with catastrophic forgetting. The most straightforward and pragmatic method to avoid catastrophic forgetting is to retrain a deep learning model completely from scratch with all the old data and new data (Parisi et al., 2018). However, this method is proved to be very inefficient (Parisi et al., 2018). 
Moreover, the new model learned from scratch may share very low similarity with the old one, which results in poor learning robustness. In addition to the straightforward method, there are three categories of methods. The first category is the regularization approach (Kirkpatrick et al., 2017; Li & Hoiem, 2016; Jung et al., 2016), which is inspired by the plasticity theory (Benna & Fusi, 2016). The core idea of such methods is to incorporate the plasticity information of the neural network model into the loss function to prevent the parameters from varying significantly when learning new information. These approaches are proved to be able to protect the consolidated knowledge (Kemker et al., 2017). However, due to the fixed size of the neural network, there is a trade-off between the performance of the old and new tasks (Kemker et al., 2017). The second class uses dynamic neural network architectures (Rebuffi et al., 2016; Rusu et al., 2016; Lopez-Paz & Ranzato, 2017). To accommodate the new knowledge, these methods dynamically allocate neural resources or retrain the model with an increasing number of neurons or layers. Intuitively, these approaches can prevent catastrophic forgetting but may also lead to scalability and generalization issues due to the increasing complexity of the network (Parisi et al., 2018). The last category utilizes the dual-memory learning system, which is inspired by the CLS theory (Hinton & Plaut, 1987; Lopez-Paz & Ranzato, 2017; Gepperth & Karaoguz, 2016). Most of these systems either use dual weights or take advantage of pseudo-rehearsal, which draw training samples from a generative model and replay them to the model when training with new data. However, how to build an effective generative model remains a difficult problem. Recent researches on the optimization and generalization of deep neural networks suggested the potential relationship between deep learning and SVM (Soudry et al., 2017; Li et al., 2018a). Based on that idea, we propose a novel and easy-to-implement method to perform class incremental deep learning efficiently when encountering data from new classes (Fig. 1). Our method maintains a support dataset for each old class, which is much smaller than the original dataset of that class, and shows the support datasets to the deep learning model every time there is a new class coming in so that the model can “review” the representatives of the old classes while learning new information. Although this rehearsal idea is not new (Rebuffi et al., 2016), our method is innovative in the sense that we show how to select the support data in a systematic and generic way to preserve as much information as possible. We demonstrate that it is more efficient to select the support vectors of an SVM, which is used to approximate the neural network’s last layer, as the support data, both theoretically and empirically. Meanwhile, since we divide the network into two parts, the last layer and all the previous layers, in order to stabilize the learned representation of old data before the last layer and retain the performance for the old classes, following the idea of the Hebbian learning theory, we utilize two consolidation regularizers, to reduce the plasticity of the deep learning model and constrain the deep learning model to produce similar representation for old data. The framework of our method is show in Fig. 2. 
In summary, this paper has the following main contributions: • We propose a novel way of selecting support data through the combination of deep learning and SVM, and demonstrate its efficiency with comprehensive experiments on various tasks. • We propose a novel regularizer, namely, the consolidation regularizer, which stabilizes the deep learning network and maintains the high-level feature representation of the old information.

2 METHODS

2.1 DEEP LEARNING AND SVM

In this subsection, we show which data are more important for training a deep neural network model. Following the setting in Soudry et al. (2017); Li et al. (2018a), let us consider a dataset {x_n, ỹ_n}_{n=1}^{N}, with x_n ∈ R^D being the feature and ỹ_n ∈ R^K being the one-hot encoding of the label. K is the total number of classes and N is the size of the dataset. Denote the input of the last layer (the learned representation) as δ_n ∈ R^T for x_n. We use W to denote the parameters of the last layer and define z_n = Wδ_n. After applying the softmax activation function to z_n, we obtain the output of the whole deep neural network for the input x_n as o_n. Consequently, we have:

o_{n,i} = \frac{\exp(z_{n,i})}{\sum_{k=1}^{K} \exp(z_{n,k})} = \frac{\exp(W_{i,:}\,\delta_n)}{\sum_{k=1}^{K} \exp(W_{k,:}\,\delta_n)}.   (1)

For deep learning, we usually use the cross-entropy loss as the loss function:

L = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \tilde{y}_{n,k} \log(o_{n,k}).   (2)

Consider the negative gradient of the loss function with respect to w_{j,i} (the derivation of Equation (3) is given in Section A of the Appendices):

-\frac{\partial L}{\partial w_{j,i}} = \frac{1}{N} \sum_{n=1}^{N} (\tilde{y}_{n,i} - o_{n,i})\,\delta_{n,j} = \frac{1}{N} \sum_{n=1}^{N} \Big(\tilde{y}_{n,i} - \frac{\exp(W_{i,:}\,\delta_n)}{\sum_{k=1}^{K} \exp(W_{k,:}\,\delta_n)}\Big)\,\delta_{n,j}.   (3)

According to Soudry et al. (2017); Li et al. (2018a), after the learned representation becomes stable, the last weight layer converges to the SVM solution. That is, we can write W = a(t)Ŵ + B(t), where Ŵ is the corresponding SVM solution, t denotes the t-th iteration of SGD, a(t) → ∞ and B(t) is bounded. Thus, Equation (3) becomes:

-\frac{\partial L}{\partial w_{j,i}} = \frac{1}{N} \sum_{n=1}^{N} \Big(\tilde{y}_{n,i} - \frac{\exp(a(t)\hat{W}_{i,:}\,\delta_n)\,\exp(B(t)_{i,:}\,\delta_n)}{\sum_{k=1}^{K} \exp(a(t)\hat{W}_{k,:}\,\delta_n)\,\exp(B(t)_{k,:}\,\delta_n)}\Big)\,\delta_{n,j}.   (4)

Since ỹ_{n,i} takes values in {0, 1}, a term with ỹ_{n,i} = 0 does not contribute to the loss in Equation (2). Only when ỹ_{n,i} = 1 can the data contribute to the loss and thus to the gradient. Under that circumstance, since a(t) → ∞, only the data with the smallest exponential numerator can contribute to the gradient. Those data are precisely the ones with the smallest margin Ŵ_{i,:}δ_n for class i, which are the support vectors.

2.2 SUPPORT DATA SELECTOR

According to Sirois et al. (2008); Pallier et al. (2003), even human beings, who are proficient in incremental learning, cannot deal with catastrophic forgetting perfectly. On the other hand, a common strategy for human beings to overcome forgetting during learning is to review the old knowledge frequently (Murre & Dros, 2015). Actually, during reviewing, we usually do not review all the details, but rather the important ones, which are often enough for us to grasp the knowledge. Inspired by this, we design the support dataset and the review training process. During incremental learning, we maintain a support dataset for each class, which is fed to the model together with the new data of the new classes. In other words, we want the model to review the representatives of the previous classes when learning new information. The main question is thus how to build an effective support data selector to construct such support data, which we denote as {x^S_n, ỹ^S_n}_{n=1}^{N_S}.
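The next paragraph describes this selector formally; as a preview, the sketch below shows the selection step that the margin argument above motivates: fit a linear SVM on the penultimate-layer features and keep the original examples behind its support vectors, subsampling if a per-class budget is exceeded. The use of scikit-learn, the budget handling and all names are illustrative assumptions rather than the paper's released code.

```python
# Sketch of the support data selector (Sec. 2.2): fit a linear SVM on the
# deep features and keep the original examples behind its support vectors.
import numpy as np
from sklearn.svm import SVC

def select_support_data(features, labels, budget_per_class, rng=None):
    """features: (N, T) penultimate-layer representations delta_n,
    labels: (N,) integer class labels, budget_per_class: max kept per class.
    Returns indices (into the original dataset) of the selected support data."""
    rng = rng or np.random.default_rng(0)
    svm = SVC(kernel="linear", C=1.0)
    svm.fit(features, labels)
    sv_idx = svm.support_                     # indices of support-vector examples
    selected = []
    for c in np.unique(labels):
        cls_sv = sv_idx[labels[sv_idx] == c]  # support-vector candidates of class c
        if len(cls_sv) > budget_per_class:    # subsample if over budget
            cls_sv = rng.choice(cls_sv, size=budget_per_class, replace=False)
        selected.append(cls_sv)
    return np.concatenate(selected)

if __name__ == "__main__":
    # toy demo with random "deep features" for 3 old classes
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(300, 16)) + np.repeat(np.arange(3), 100)[:, None]
    labels = np.repeat(np.arange(3), 100)
    keep = select_support_data(feats, labels, budget_per_class=20, rng=rng)
    print(len(keep), "support examples kept out of", len(labels))
```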
According to the discussion in Section 2.1, we know that the data corresponding to the support vectors of the SVM solution contribute more to the deep learning model training. Based on that, we obtain the high-level feature representations of the original input using the deep learning mapping function and train an SVM classifier on these features. By performing the SVM training, we detect the support vectors from each class, which are of crucial importance for the deep learning model training. We define the original data which correspond to these support vectors as the support data candidates, which we denote as {x^{SV}_n, ỹ^{SV}_n}_{n=1}^{N_{SV}}. If the required number of preserved data is smaller than the number of support vectors, we sample from the support data candidates to obtain the required number. Formally:

\{x^{S}_{n}, \tilde{y}^{S}_{n}\}_{n=1}^{N_S} \subset \{x^{SV}_{n}, \tilde{y}^{SV}_{n}\}_{n=1}^{N_{SV}}.   (5)

Denoting the newly arriving data as {x^{new}_n, ỹ^{new}_n}_{n=1}^{N_{new}}, the new training data for the model are:

\{x^{S}_{n}, \tilde{y}^{S}_{n}\}_{n=1}^{N_S} \cup \{x^{new}_{n}, \tilde{y}^{new}_{n}\}_{n=1}^{N_{new}}.   (6)

2.3 CONSOLIDATION REGULARIZERS

Since the support data selection depends on the high-level representation produced by the deep learning layers, which are fine-tuned on the new data, the feature representations of the old data may change over time. As a result, the previous support vectors for the old data may no longer be support vectors for the new data, which makes the support data invalid (here we assume the support vectors remain the same as long as the representations are largely fixed, which is discussed in more detail in Section 4.2). To solve this issue, we add two consolidation regularizers to consolidate the learned knowledge: the feature regularizer, which forces the model to produce a fixed representation for the old data over time, and the EWC regularizer, which consolidates, via the loss function, the important weights that contribute significantly to the old class classification.

2.3.1 FEATURE REGULARIZER

We add the following feature regularizer to the loss function to force the mapping function to produce a fixed representation for the old data. Following the setting in Section 2.1, δ_n depends on φ, the parameters of the deep learning mapping function. The feature regularizer is defined as:

R_f(\varphi) = \sum_{n=1}^{N_S} \|\delta_n(\varphi_{new}) - \delta_n(\varphi_{old})\|_2^2,   (7)

where φ_new denotes the parameters of the deep learning architecture trained with the support data from the old classes and the new data from the new class(es); φ_old denotes the parameters of the mapping function for the old data; and N_S is the number of support data. This regularizer requires the model to preserve the feature representation produced by the deep learning architecture for each support data point, which could lead to a memory overhead. However, since it operates on a very high-level representation, which is of much lower dimensionality than the original input, the overhead is negligible.
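A minimal PyTorch-style sketch of this feature regularizer is given below. It assumes the old representations δ_n(φ_old) of the support data were cached before training on the new classes; the names and the reduction (a plain sum, following Eq. (7)) are illustrative assumptions, not the paper's code.

```python
# Sketch of the feature regularizer R_f (Eq. 7): penalize drift of the
# penultimate-layer representation of the support data during further training.
import torch

def feature_regularizer(feature_extractor, support_x, old_representations):
    """feature_extractor: maps inputs to the penultimate representation delta_n
    under the current parameters phi_new; old_representations: cached
    delta_n(phi_old) for the same support inputs (computed before the model saw
    the new classes). Returns the squared-L2 drift summed over the support data."""
    new_representations = feature_extractor(support_x)
    return ((new_representations - old_representations) ** 2).sum()

# usage sketch: total_loss = cross_entropy + lambda_f * feature_regularizer(net.features, xs, cached)
```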
2.3.2 EWC REGULARIZER

According to the Hebbian learning theory, after learning, the related synaptic strength and connectivity are enhanced while the degree of plasticity decreases to protect the learned knowledge. Guided by this neurophysiological theory, the EWC regularizer (Kirkpatrick et al., 2017) was designed to consolidate the old information while learning new knowledge. The core idea of this regularizer is to constrain those parameters which contribute significantly to the classification of the old data. Specifically, the more a certain parameter contributes to the previous classification, the harder the constraint we apply to it to make it unlikely to be changed. That is, we make those parameters that are closely related to the previous classification less “plastic”. In order to achieve this goal, we calculate the Fisher information for each parameter, which measures its contribution to the final prediction, and apply the regularizer accordingly. Formally, the Fisher information for the parameters θ = {φ, W} can be calculated as:

F(\theta) = E\Big[\Big(\frac{\partial}{\partial \theta} \log f(X;\theta)\Big)^2 \,\Big|\, \theta\Big] = \int \Big(\frac{\partial}{\partial \theta} \log f(x;\theta)\Big)^2 f(x;\theta)\,dx,   (8)

where f(x; θ) is the functional mapping of the entire neural network. The EWC regularizer is defined as follows:

R_{ewc}(\theta) = \sum_i F(\theta_{old_i})\,(\theta_{new_i} - \theta_{old_i})^2,   (9)

where i iterates over all the parameters of the model. There are two major benefits of using the EWC regularizer in our framework. Firstly, the EWC regularizer reduces the “plasticity” of the parameters that are important to the old classes and thus guarantees stable performance on the old classes. Secondly, by reducing the capacity of the deep learning model, the EWC regularizer prevents overfitting to a certain degree. The effect of the EWC regularizer can be viewed as steering the learning trajectory toward the region where the loss is low for both the old and the new data.

2.3.3 LOSS FUNCTION

After adding the feature regularizer and the EWC regularizer, the loss function becomes:

\tilde{L}(\theta) = L + \lambda_f R_f(\varphi) + \lambda_{ewc} R_{ewc}(\theta),   (10)

where λ_f and λ_ewc are the coefficients for the feature regularizer and the EWC regularizer, respectively. After plugging Eq. (2), (7) and (9) into Eq. (10), we obtain the regularized loss function:

\tilde{L}(\theta) = -\frac{1}{N_S + N_{new}} \sum_{n=1}^{N_S + N_{new}} \sum_{k=1}^{K_t} \tilde{y}_{n,k} \log(o_{n,k}) + \lambda_f \sum_{n=1}^{N_S} \|\delta_n(\varphi_{new}) - \delta_n(\varphi_{old})\|_2^2 + \sum_i \lambda_{ewc} (\theta_{new_i} - \theta_{old_i})^2 \int \Big(\frac{\partial}{\partial \theta_{new}} \log f(x;\theta_{new})\Big)^2 f(x;\theta_{new})\,dx,   (11)

where K_t is the total number of classes at the incremental learning time point t.

2.4 SUPPORTNET

Combining the deep learning model, which consists of the deep learning architecture mapping function and the final fully connected classification layer, the novel support data selector, and the two consolidation regularizers, we propose a highly effective framework, SupportNet (Fig. 2), which can perform class incremental learning without catastrophic forgetting. Our framework resolves the catastrophic forgetting issue in two ways. Firstly, the support data help the model to review the old information during future training. Despite the small size of the support data, they preserve the distribution of the old data quite well, as will be shown in Section 4.1. Secondly, the two consolidation regularizers consolidate the high-level representation of the old data and reduce the plasticity of those weights which are of vital importance for the old classes.

3 RESULTS

3.1 DATASETS

During our experiments, we used six datasets: (1) MNIST, (2) CIFAR-10 and CIFAR-100, (3) Enzyme function data (Li et al., 2018c), (4) HeLa (Boland & Murphy, 2001) and (5) BreakHis (Spanhol et al., 2016). MNIST, CIFAR-10 and CIFAR-100 are commonly used benchmark datasets in the computer vision field. MNIST consists of 70K 28×28 single-channel images belonging to 10 classes. CIFAR-10 contains 60K 32×32 RGB images belonging to 10 classes, while CIFAR-100 is composed of the same images but the images are further classified into 100 classes. The latter three datasets are from bioinformatics.
Enzyme function data1 is composed of 22,168 low-homologous enzyme sequences belonging to 6 classes. The HeLa dataset2 contains around 700 512*384 grayscale images for subcellular structures in HeLa cells belonging to 10 classes. BreakHis3 is composed of 9,109 microscopic images of the breast tumor tissue belonging to 8 classes. Each image is a 3- channel RGB image, whose dimensionality is 700 by 460. 3.2 COMPARED METHODS We compared our method with numerous methods. We refer the first method as the “All Data” method. When data from a new class appear, this method trains a deep learning model from scratch for multi-class classification, using all the new and old data. It can be expected that this method should have the highest classification performance. The second method is the iCaRL method (Rebuffi et al., 2016), which is the state-of-the-art method for class incremental learning in computer vision field Kemker et al. (2017). The third method is EWC . The fourth method is the “Fine Tune” method, in which we only use the new data to tune the model, without using any old data or regularizers. The fifth method is the baseline “Random Guess” method, which assigns the label of each test data sample randomly without using any model. We also compared with a number of recently proposed methods, including three versions of Variational Continual Learning (VCL) methods (Nguyen et al., 2018), Deep Generative Replay (DGR) (Shin et al., 2017), Gradient Episodic Memory (GEM) (Lopez-Paz et al., 2017), and Incremental Moment Matching (IMM) (Lee et al., 2017) on MNIST. In terms of the deep learning architecture, for the enzyme function data, we used the same architecture from Li et al. (2018c). As for the other datasets, we used the residual network with 32 layers. Regarding the SVM in SupportNet framework, based on the result from Soudry et al. (2017); Li et al. (2018a), we used linear kernel. 3.3 PERFORMANCE COMPARISON For all the tasks, we started with binary classification. Then each time we incrementally gave data from one or two new classes to each method, until all the classes were fed to the model. For enzyme data, we fed one class each time. For the other five datasets, we fed two classes in each round. Fig. 3 shows the accuracy comparison on the multi-class classification performance of different methods, over the six datasets, along the incremental learning process. 1http://www.cbrc.kaust.edu.sa/DEEPre/dataset.html 2http://murphylab.web.cmu.edu/data/2DHeLa 3https://web.inf.ufpr.br/vri/breast-cancer-database/ As expected, the “All Data” method has the best classification performance because it has access to all the data and retrains a brand new model each time. The performance of this “All Data” method can be considered as the empirical upper bound of the performance of the incremental learning methods. All the incremental learning methods have performance decrease to different degrees. EWC and “Fine Tune” have quite similar performance which drops quickly when the number of classes increases. The iCaRL method is much more robust than these two methods. In contrast, SupportNet has significantly better performance than all the other incremental learning methods across the five datasets. In fact, its performance is quite close to the “All Data” method and stays stable when the number of classes increases for the MNIST and enzyme datasets. On the MNIST dataset, VCL with K-center Coreset can also achieve very impressive performance. Nevertheless, SupportNet can outperform it along the process. 
Specifically, the performance of SupportNet has less than 1% on MNIST and 5% on enzyme data difference compared to that of the “All Data” method. We also show the importance of SupportNet’s components in Fig. 3 (C). As shown in the figure, all the three components (support data, EWC regularizer and feature regularizer) contribute to the performance of SupportNet to different degrees. Notice that even with only support data, SupportNet can already outperform iCaRL, which shows the effectiveness of our support data selector. The result on CIFAR-100 will be discussed in more detail in Section 4.2. Detailed results about different methods’ performance on different classes (confusion matrix) and on the old classes and the new classes separately (accuracy matrix) can be referred to Section B and C in the Appendices. We also show the effectiveness of the consolidation regularizers on stabilizing the learned feature representation in Section D with t-SNE visualization (Maaten & Hinton, 2008) in the Appendices. Furthermore, we compared SupportNet with iCaRL on an additional dataset, tiny ImageNet, which contains 200 classes. The results are shown in Section F in the Appendices, which further demonstrate the effectiveness of SupportNet. 3.4 SUPPORT DATA SIZE AND RUNNING TIME As reported by the previous study (Rebuffi et al., 2016), the preserved dataset size can affect the performance of the final model significantly. We investigated that in details here. As shown in Fig. 4 (A), the performance degradation of SupportNet from the “All Data” method decreases gradually as the support data size increases, which is consistent with the previous study using the rehearsal method (Rebuffi et al., 2016). What is interesting is that the performance degradation decreases very quickly at the beginning of the curve, so the performance loss is already very small with a small number of support data. That trend demonstrates the effectiveness of our support data selector, i.e., being able to select a small while representative support dataset. We also show the performance of SupportNet with 2000, 1500, 1000, 500, 200 support data, respectively, in Section E in the Appendices, which further demonstrates the effective of our method. On the other hand, this decent property of our framework is very useful when the users need to trade off the performance with the computational resources and running time. As shown in Fig. 4 (B), on MNIST, SupportNet outperforms the “All Data” method significantly regarding the accumulated running time with only less than 1% performance deviation, trained on the same hardware (GTX 1080 Ti). 3.5 REGULARIZER COEFFICIENT Although the performance of the EWC method on incremental learning is not impressive (Fig. 3), the EWC regularizer plays an important role in our method. Here, we evaluated our method by varying the EWC regularizer coefficient from 1 to 100,000, and compared it with the “All Data” method and iCaRL (Table 1). We can find that the performance of SupportNet varies with different EWC regularier coefficients, with the highest one very close to the “All Data” method, which is the upper bound of all the incremental learning methods, whereas the lowest one having around 13% performance degradation. The results make sense because from the neurophysiological point of view, SupportNet is trying to reach the stability-plasticity balance point for this classification task. 
If the coefficient is too small, which means we do not impose enough constraint on those weights which contribute significantly to the old class classification, the deep learning model will be too plastic and the old knowledge tends to be lost. If the coefficient is too large, which means that we impose very strong constraint on those weights even when they are not important to the old class classification, the deep learning model will be too stable and does not have enough capacity to incorporate new knowledge. In general, our results are consistent with the stability-plasticity dilemma. 4 DISCUSSION 4.1 UNDERFITTING AND OVERFITTING When training a deep learning model, one can encounter the notorious overfitting issue almost all the time. It is still the case for training an incremental learning model, but we found that there are some unique issues of such learning methods. Table 2 shows the performance of SupportNet and iCaRL on the real training data (i.e., the new data plus the support data for SupportNet and examplars for iCaRL), all the training data (i.e., the new data plus all the old data), and the test data. It can be seen that both methods perform almost perfectly on the real training data, which is as expected. However, the performances of iCaRL on the test data and all the training data are almost the same, both of which are much worse than that on the real training data. This indicates that iCaRL is overfitted to the real training data but underfitted to all the training data. As for SupportNet, the issue is much less severe than iCaRL as the performance degradation from the real training data to all the training data reduces from 37% as in iCaRL to 7% in SupportNet. This suggests that the support data selected by SupportNet are indeed critical for the deep learning training for the old classes. We can find the same pattern on the MNIST dataset. 4.2 SUPPORT VECTOR EVOLVING Despite the impressive performance of SupportNet as shown in Fig. 3, we have to admit the limitation of SupporNet. In fact, using our method, we assume the support vectors of one class will stay static if the learned representation is largely fixed. However, this assumption does not hold under all the circumstances. For example, suppose we perform a binary classification for one very specific type of cat, such as Chartreux, and one very specific type of dog, such as Rottweiler. Later, we need to equip the classifier with the function to recognize another very specific type of cat, such as British Shorthair. We may find that the support vectors of Chartreux change as British Shorthair comes in because Chartreux and British Shorthair are so similar that using the previous support vectors, we are unable to distinguish them. Although SupportNet can still reach the state-of-the-art performance even under this circumstance, as shown in Fig. 3 (F), more work should be done in the future to handle this support vector evolving problem. 5 CONCLUSION In this paper, we proposed a novel class incremental learning method, SupportNet, to solve the catastrophic forgetting problem by combining the strength of deep learning and SVM. SupportNet can identify the support data from the old data efficiently, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. 
With the help of two powerful consolidation regularizers, the support data can effectively help the deep learning model prevent the catastrophic forgetting issue, eliminate the necessity of retraining the model from scratch, and maintain a stable learned representation across the old and the new data.

A DERIVATION OF EQUATION 3 FROM EQUATION 2

In this section, we use the chain rule to derive the following equation

-\frac{\partial L}{\partial w_{j,i}} = \frac{1}{N} \sum_{n=1}^{N} (\tilde{y}_{n,i} - o_{n,i})\,\delta_{n,j},   (12)

from

L = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} \tilde{y}_{n,k} \log(o_{n,k}).   (13)

Let us first consider just one data sample:

L_n = -\sum_{k=1}^{K} \tilde{y}_{n,k} \log(o_{n,k}).   (14)

Using the chain rule, we have

-\frac{\partial L_n}{\partial w_{j,i}} = -\sum_{l=1}^{K} \frac{\partial L_n}{\partial o_{n,l}} \frac{\partial o_{n,l}}{\partial z_{n,i}} \frac{\partial z_{n,i}}{\partial w_{j,i}}.   (15)

For the first term in Eq. 15, we have

\frac{\partial L_n}{\partial o_{n,l}} = \frac{\partial \big(-\sum_{k=1}^{K} \tilde{y}_{n,k} \log(o_{n,k})\big)}{\partial o_{n,l}} = -\frac{\tilde{y}_{n,l}}{o_{n,l}}.   (16)

For the second term in Eq. 15, we have

\frac{\partial o_{n,l}}{\partial z_{n,i}} = \frac{\partial}{\partial z_{n,i}} \frac{\exp(z_{n,l})}{\sum_{k=1}^{K} \exp(z_{n,k})} = \frac{\frac{\partial \exp(z_{n,l})}{\partial z_{n,i}} \sum_{k=1}^{K} \exp(z_{n,k}) - \exp(z_{n,l})\exp(z_{n,i})}{\big(\sum_{k=1}^{K} \exp(z_{n,k})\big)^2} = \begin{cases} o_{n,i}(1 - o_{n,i}), & l = i \\ -o_{n,i}\,o_{n,l}, & l \neq i. \end{cases}   (17)

For the third term in Eq. 15, we have

\frac{\partial z_{n,i}}{\partial w_{j,i}} = \frac{\partial W_{i,:}\,\delta_n}{\partial w_{j,i}} = \delta_{n,j}.   (18)

Putting Eq. 16, Eq. 17 and Eq. 18 into Eq. 15, we have:

-\frac{\partial L_n}{\partial w_{j,i}} = \Big(\frac{\tilde{y}_{n,i}}{o_{n,i}}\,o_{n,i}(1 - o_{n,i}) + \sum_{l \neq i}^{K} \frac{\tilde{y}_{n,l}}{o_{n,l}}\,(-o_{n,i}\,o_{n,l})\Big)\,\delta_{n,j} = \Big(\tilde{y}_{n,i} - o_{n,i} \sum_{l=1}^{K} \tilde{y}_{n,l}\Big)\,\delta_{n,j} \overset{(1)}{=} (\tilde{y}_{n,i} - o_{n,i})\,\delta_{n,j},   (19)

where (1) follows from the fact that we use one-hot encoding for the label, so \sum_{l=1}^{K} \tilde{y}_{n,l} = 1. From Eq. 19, we can easily obtain Eq. 12 by considering all the data points.

B CONFUSION MATRICES

We investigate the confusion matrices of the “Random Guess” method, the “Fine Tune” method, iCaRL and SupportNet (Fig. 5) after the last batch of classes on the EC data. As expected, the “Fine Tune” method only considers the new data from the new class, and is thus overfitted to the new class (Fig. 5(B)). The iCaRL method partially solves this issue by combining deep learning with nearest-mean-of-exemplars classification, which is a variant of KNN (Fig. 5(C)). SupportNet, on the other hand, combines the advantages of SVM and deep learning by using the SVM to find the important support data, which efficiently preserve the knowledge of the old data, and utilizing deep learning as the final classifier. This novel combination can efficiently and effectively solve the incremental learning problem (Fig. 5(D)). Notice that the upper-left diagonal of SupportNet's confusion matrix has much higher values than that of iCaRL's confusion matrix, which indicates that the performance improvement comes from more accurate predictions on the old classes.

C ACCURACY MATRICES

In this section, we investigate the performance composition of SupportNet on MNIST shown in Fig. 3 (A). Fig. 3 (A) only shows the overall performance of different methods on all the testing data, averaging the performances on the old test data and the new test data, which can obscure how the methods perform on the old data. To avoid that, we further check the performance of different methods on the old data and the new data separately; the results are shown in Fig. 6. As shown in Fig. 6 (B), iCaRL can maintain its performance on the oldest class batch very well; however, it is unable to maintain its performance on the intermediate class batches. GEM (Fig. 6 (A)) can outperform iCaRL on the middle class batches; however, it cannot maintain the performance on the oldest class batch. VCL (Fig. 6 (C)) further outperforms GEM on the middle class batches; however, it suffers from the same problem as GEM, being unable to preserve the performance on the oldest class batch.
On the other hand, both VCL with K-center Coreset and SupportNet can maintain their performance on the old data classes almost perfectly, no matter for the intermediate class batches or the oldest class batch. However, because of the difference between the two algorithms, their trade-offs are different. Although VCL with K-center Coreset can maintain the performance of old classes almost exactly, there is a trade-off of the methods on the newest classes, with the newest model being unable to achieve the optimal performance on the newest class. As for SupportNet, it allows slight performance degradation on the old classes while can achieve optimal performance on the newest class batch. D T-SNE VISUALIZATION OF FEATURE REPRESENTATION The feature representation learned by the deep learning models during the incremental learning process is worth investigating, since it can suggest why SupportNet works to a certain degree. We take the EC dataset and the MNIST dataset as examples and use t-SNE (Maaten & Hinton, 2008) to investigate the learned representation. For each dataset, we randomly select 2000 data points from the training data at the first training time point. Then, after each future training time point, we apply the further trained model to the selected data points and extract the input of the deep learning model’s last layer as the learned feature representation. After obtaining those raw feature representations, we apply t-SNE to them and visualize them in 2D space. For each dataset, we investigated both the SupportNet with consolidation regularizers and SupportNet without any regularizers. The result of EC data can be referred to Fig. 7 and the result of MNIST data can be referred to Fig. 8. As shown in those figures, although the feature representation of the standard SupportNet still varies, compared to the SupportNet without any regularizers, the variance is much smaller, which suggests that the consolidation regularizes help the model stabilize the learned feature representation. E PERFORMANCE ON MNIST WITH LESS SUPPORT DATA In this section, we further investigate the performance of SupportNet with less support data as a supplement of Section 3.4. We run the experiments of SupportNet with the support data size as 2000, 1500, 1000, 500, and 200, respectively, whose results are shown in Fig. 9. As shown in the figure, even SupportNet with 500 support data points can outperform iCaRL with 2000 examplars, which further demonstrates the effectiveness of our support data selecting strategy . F PERFORMANCE ON TINY IMAGENET To further evaluate SupportNet’s performance on class incremental learning setting with more classes, we tested it on tiny ImageNet dataset4, comparing it with iCaRL. The setting of tiny ImageNet dataset is similar to that of ImageNet. However, its data size is much smaller than ImageNet. Tiny ImageNet has 200 classes while each class only has 500 training images and 50 testing images, which means that it is even harder than ImageNet. The performance of SupportNet and iCaRL on this dataset is shown in Fig. 10. As illustrated in the figure, SupportNet can outperform iCaRL significantly on this dataset. Furthermore, as suggested by the red line, which shows the performance difference between SupportNet and iCaRL, SupportNet’s performance superiority is increasingly significant as the class incremental learning setting goes further. This phenomenon demonstrates the effectiveness of SupportNet in combating catastrophic forgetting. 4https://tiny-imagenet.herokuapp.com/
1. What is the focus and contribution of the paper regarding preventing catastrophic forgetting? 2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental results and comparisons with other works? 3. Do you have any concerns or questions about the method's ability to ensure similar decision boundaries between SVM and NN? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or omissions in the related work section that the reviewer identifies?
Review
Review This paper presents a hybrid concept of deep neural network and support vector machine (SVM) for preventing catastrophic forgetting. The authors consider the last layer and the softmax function as SVM, and obtain support vectors, which are used as important samples of the old dataset. Merging the support vector data and new data, the network can keep the knowledge on the previous task. The use of support vector concept is interesting, but this paper has some issues to be improved. Pros and Cons (+) Interesting idea (+) Diverse experimental results on six datasets including benchmark and real-world datasets (-) Lack of related work on recent catastrophic forgetting (-) Limited comparing results (-) Limited analysis of feature regularizers Detailed comments - I am curious how we can assure that SVM's decision boundary is similar or same to NN's boundary - SupportNet is a method to use some of the previous data. For fair comparisons, SupportNet needs to be compared with other models using previous samples such as GEM [Lopez-Paz and Ranzato, 2017]. - Following papers are omitted in related work: 1. Lee et al. Overcoming Catastrophic Forgetting by Incremental Moment Matching, NIPS 2017. 2. Shin et al. Continual Learning with Deep Generative Replay, NIPS 2017. Also, the model needs to be compared with two models. - There is no result and analysis for feature regularizers. As the authors referred, the features of support vector data continuously change as the learning goes on. So, I am curious how the feature regularizer has effects on the performance. This can be performed by visualizing the change of support vector features via t-SNE as the incremental learning proceeds - The authors used 2000 support vectors for MNIST, Cifar-10, and Cifar-100. However, this size might be quite large considering their difficulty. - How is the pattern of EwC using some samples in the old dataset? - iCaRL was evaluated on ImageNet. Is there any reason not to be evaluated on ImageNet? - What kind of NNs is used for each dataset? And what kind of kernel is used for SVM?
ICLR
Title SupportNet: solving catastrophic forgetting in class incremental learning with support data Abstract A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and even reaches similar performance as the deep learning model trained from scratch on both old and new data. 1 INTRODUCTION Since the breakthrough in 2012 (Krizhevsky et al., 2012), deep learning has achieved great success in various fields (LeCun et al., 2015; Silver et al., 2016; Sutskever et al., 2014; He et al., 2016; Alipanahi et al., 2015; Li et al., 2018b; Dai et al., 2017). However, despite its impressive achievements, there are still several bottlenecks related to the practical part of deep learning waiting to be solved (Papernot et al., 2016; Lipton, 2016; Kemker et al., 2017). One of those bottlenecks is catastrophic forgetting (Kemker et al., 2017), which means that a well-trained deep learning model tends to completely forget all the previously learned information when learning new information (McCloskey & Cohen, 1989). That is, once a deep learning model is trained to perform a specific task, it cannot be trained easily to perform a new similar task without affecting the original task’s performance dramatically. Unlike human and animals, deep learning models do not have the ability to continuously learn over time and different datasets by incorporating the new information while retaining the previously learned experience, which is known as incremental learning. Two major theories have been proposed to explain human’s ability to perform incremental learning. The first theory is Hebbian learning (Hebb, 1949) with homeostatic plasticity (Zenke et al., 2017), which suggests that human brain’s plasticity will decrease as people learn more knowledge to protect the previously learned information. The second theory is the complementary learning system (CLS) theory (Mcclelland et al., 1995; OReilly et al., 2014), which suggests that human beings extract high-level structural information and store the high level information in a different brain area while retaining episodic memories. Inspired by the above two major neurophysiological theories, people have proposed a number of methods to deal with catastrophic forgetting. The most straightforward and pragmatic method to avoid catastrophic forgetting is to retrain a deep learning model completely from scratch with all the old data and new data (Parisi et al., 2018). However, this method is proved to be very inefficient (Parisi et al., 2018). 
Moreover, the new model learned from scratch may share very low similarity with the old one, which results in poor learning robustness. In addition to the straightforward method, there are three categories of methods. The first category is the regularization approach (Kirkpatrick et al., 2017; Li & Hoiem, 2016; Jung et al., 2016), which is inspired by the plasticity theory (Benna & Fusi, 2016). The core idea of such methods is to incorporate the plasticity information of the neural network model into the loss function to prevent the parameters from varying significantly when learning new information. These approaches are proved to be able to protect the consolidated knowledge (Kemker et al., 2017). However, due to the fixed size of the neural network, there is a trade-off between the performance of the old and new tasks (Kemker et al., 2017). The second class uses dynamic neural network architectures (Rebuffi et al., 2016; Rusu et al., 2016; Lopez-Paz & Ranzato, 2017). To accommodate the new knowledge, these methods dynamically allocate neural resources or retrain the model with an increasing number of neurons or layers. Intuitively, these approaches can prevent catastrophic forgetting but may also lead to scalability and generalization issues due to the increasing complexity of the network (Parisi et al., 2018). The last category utilizes the dual-memory learning system, which is inspired by the CLS theory (Hinton & Plaut, 1987; Lopez-Paz & Ranzato, 2017; Gepperth & Karaoguz, 2016). Most of these systems either use dual weights or take advantage of pseudo-rehearsal, which draw training samples from a generative model and replay them to the model when training with new data. However, how to build an effective generative model remains a difficult problem. Recent researches on the optimization and generalization of deep neural networks suggested the potential relationship between deep learning and SVM (Soudry et al., 2017; Li et al., 2018a). Based on that idea, we propose a novel and easy-to-implement method to perform class incremental deep learning efficiently when encountering data from new classes (Fig. 1). Our method maintains a support dataset for each old class, which is much smaller than the original dataset of that class, and shows the support datasets to the deep learning model every time there is a new class coming in so that the model can “review” the representatives of the old classes while learning new information. Although this rehearsal idea is not new (Rebuffi et al., 2016), our method is innovative in the sense that we show how to select the support data in a systematic and generic way to preserve as much information as possible. We demonstrate that it is more efficient to select the support vectors of an SVM, which is used to approximate the neural network’s last layer, as the support data, both theoretically and empirically. Meanwhile, since we divide the network into two parts, the last layer and all the previous layers, in order to stabilize the learned representation of old data before the last layer and retain the performance for the old classes, following the idea of the Hebbian learning theory, we utilize two consolidation regularizers, to reduce the plasticity of the deep learning model and constrain the deep learning model to produce similar representation for old data. The framework of our method is show in Fig. 2. 
In summary, this paper has the following main contributions: • We propose a novel way of selecting support data through the combination of deep learning and SVM, and demonstrate its efficiency with comprehensive experiments on various tasks. • We propose a novel regularizer, namely, consolidation regularizer, which stabilizes the deep learning network and maintains the high level feature representation of the old information. 2 METHODS 2.1 DEEP LEARNING AND SVM In this subsection, we will show what data is more important for deep neural network model training. Following the setting in Soudry et al. (2017); Li et al. (2018a), let us consider a dataset {xn, ỹn}Nn=1, with xn ∈ RD being the feature, and ỹn ∈ RK being the one-hot encoding of the label. K is the total number of classes andN is the size of the dataset. Denote the input of the last layer (the learned representation) as δn ∈ RT for xn. We use W to denote the parameter of the last layer and define zn = Wδn. After applying softmax activation function to zn, we obtain the output of the whole deep neural network for the input xn as on. Consequently, we have: on,i = exp(zn,i)∑K k=1 exp(zn,k) = exp(Wi,:δn)∑K k=1 exp(Wk,:δn) . (1) For deep learning, we usually use the cross-entropy loss as the loss function: L = − 1 N N∑ n=1 K∑ k=1 ỹn,k log(on,k), (2) Consider the negative gradient of the loss function on wj,i (the derivation of Equation (3) can be referred to Section A in the Appendices): − ∂L ∂wj,i = 1 N N∑ n=1 (ỹn,i − on,i)δn,j = 1 N N∑ n=1 (ỹn,i − exp(Wi,:δn)∑K k=1 exp(Wk,:δn) )δn,j , (3) according to Soudry et al. (2017); Li et al. (2018a), after the learned representation becoming stable, the last weight layer will converge to the SVM solution. That is, we can write W = a(t)Ŵ + B(t), where Ŵ is the corresponding SVM solution; t represent the t-th iteration of SGD; a(t) → ∞ and B(t) is bounded. Thus, Equation (3) becomes: − ∂L ∂wj,i = 1 N N∑ n=1 (ỹn,i − exp(a(t)Ŵi,:δn) exp(B(t)i,:δn)∑K k=1 exp(a(t)Ŵk,:δn) exp(B(t)k,:δn) )δn,j . (4) Since the candidate value of ỹn,i is {0, 1} and if ỹn,i = 0, that term in Equation (2) does not contribute to the loss. Only when ỹn,i = 1 can the data contribute the loss and thus the gradient. Under that circumstance, since a(t) → ∞, only the data with the smallest exponential nominator can contribute to the gradient. Those data are precisely the ones with the smallest margin Ŵi,:δn, which are the support vectors, for class i. 2.2 SUPPORT DATA SELECTOR According to Sirois et al. (2008); Pallier et al. (2003), even human beings, who are proficient in incremental learning, cannot deal with catastrophic forgetting perfectly. On the other hand, a common strategy for human beings to overcome forgetting during learning is to review the old knowledge frequently (Murre & Dros, 2015). Actually, during reviewing, we usually do not review all the details, but rather the important ones, which are often enough for us to grasp the knowledge. Inspired by this, we design the support dataset and the review training process. During incremental learning, we maintain a support dataset for each class, which is fed to the model together with the new data of the new classes. In other words, we want the model to review the representatives of the previous classes when learning new information. The main question is thus how to build an effective support data selector to construct such support data, which we denote as {xSn , ỹSn} NS n=1. 
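As a quick numerical sanity check of Eq. (3) (a self-contained NumPy example of ours, not part of the original paper), the analytic negative gradient of the cross-entropy loss with respect to a last-layer weight can be compared against a finite-difference estimate:

import numpy as np

rng = np.random.default_rng(0)
N, T, K = 8, 5, 3                      # samples, representation size, classes
delta = rng.normal(size=(N, T))        # last-layer inputs delta_n
W = rng.normal(size=(K, T))            # last-layer weights, row i = W_{i,:}
y = np.eye(K)[rng.integers(0, K, N)]   # one-hot labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(W):
    o = softmax(delta @ W.T)                            # Eq. (1)
    return -np.mean(np.sum(y * np.log(o), axis=1))      # Eq. (2)

o = softmax(delta @ W.T)
neg_grad = (y - o).T @ delta / N       # Eq. (3): entry [i, j] = -dL/dw_{j,i}

i, j, eps = 1, 2, 1e-6                 # finite-difference check of one entry
Wp, Wm = W.copy(), W.copy()
Wp[i, j] += eps
Wm[i, j] -= eps
print(np.isclose(neg_grad[i, j], -(loss(Wp) - loss(Wm)) / (2 * eps)))   # True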
According to the discussion in Section 2.1, we know that the data corresponding to the support vectors in SVM solution contribute more to the deep learning model training. Based on that, we obtain the high level feature representations of the original input using deep learning mapping function and train an SVM classifier with these features. By performing the SVM training, we detect the support vectors from each class, which are of crucial importance for the deep learning model training. We define the original data which correspond to these support vectors as the support data candidates, which we denote as {xSVn , ỹSVn } NSV n=1 . If the required number of preserved data is smaller than that of the support vectors, we will sample support data candidates to obtain the required number. Formally: {xSn , ỹSn} NS n=1 ⊂ {xSVn , ỹSVn } NSV n=1 . (5) Denote the new coming data as {xnewn , ỹnewn } Nnew n=1 , we have the new training data for the model as: {xSn , ỹSn} NS n=1 ∪ {xnewn , ỹnewn } Nnew n=1 , (6) 2.3 CONSOLIDATION REGULARIZERS Since the support data selection depends on the high level representation produced by the deep learning layers, which are fine tuned on new data, the old data feature representations may change over time. As a result, the previous support vectors for the old data may no longer be support vectors for the new data, which makes the support data invalid (here we assume the support vectors will remain the same as long as the representations are largely fixed, which will be discussed in more details in Section 4.2). To solve the issue, we add two consolidation regularizers to consolidate the learned knowledge: the feature regularizer, which forces the model to produce fixed representation for the old data over time, and the EWC regularizer, which consolidates the important weights that contribute to the old class classification significantly into the loss function. 2.3.1 FEATURE REGULARIZER We add the following feature regularizer into the loss function to force the mapping function to produce fixed representation for old data. Following the setting in Section 2.1, δn depends on φ, which is the parameters of the deep learning mapping function. The feature regularizer is defined as: Rf (φ) = NS∑ n=1 ‖δn(φnew)− δn(φold)‖22 , (7) where φnew is the parameters for the deep learning architecture trained with the support data from the old classes and the new data from the new class(es); φold is the parameters for the mapping function of the old data; and Ns is the number of support data. This regularizer requires the model to preserve the feature representation produced by the deep learning architecture for each support data, which could lead to potential memory overhead. However, since it operates on a very high level representation, which is of much less dimensionality than the original input, the overhead is neglectable. 2.3.2 EWC REGULARIZER According to the Hebbian learning theory, after learning, the related synaptic strength and connectivity are enhanced while the degree of plasticity decreases to protect the learned knowledge. Guided by this neurophysiological theory, the EWC regularizer (Kirkpatrick et al., 2017) was designed to consolidate the old information while learning new knowledge. The core idea of this regularizer is to constrain those parameters which contribute significantly to the classification of the old data. 
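Before the EWC regularizer is detailed further below, the support data selection of Section 2.2 can be summarized in a short sketch. This is our own illustration built on scikit-learn (the paper reports using a linear-kernel SVM on the learned representations); the helper name and the subsampling rule for oversized candidate sets are assumptions.

import numpy as np
from sklearn.svm import SVC

def select_support_data(features, labels, per_class, seed=0):
    # features: high-level representations delta_n of the old data
    # labels:   integer class labels of the old data (numpy array)
    # per_class: number of preserved samples per class
    rng = np.random.default_rng(seed)
    svm = SVC(kernel="linear")
    svm.fit(features, labels)
    chosen = []
    for c in np.unique(labels):
        # Support-vector indices of class c: the support data candidates (Eq. 5).
        cand = svm.support_[labels[svm.support_] == c]
        if len(cand) > per_class:
            cand = rng.choice(cand, per_class, replace=False)
        chosen.extend(np.asarray(cand).tolist())
    return np.array(chosen)

# The returned indices select the raw old inputs that are replayed together
# with the new-class data in the next round of training (Eq. 6).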
Specifically, the more a certain parameter contributes to the previous classification, the harder constrain we apply to it to make it unlikely to be changed. That is, we make those parameters that are closely related to the previous classification less “plastic”. In order to achieve this goal, we calculate the Fisher information for each parameter, which measures its contribution to the final prediction, and apply the regularizer accordingly. Formally, the Fisher information for the parameters θ = {φ,W} can be calculated as: F (θ) = E[( ∂ ∂θ log f(X; θ))2|θ] = ∫ ( ∂ ∂θ log f(x; θ))2f(x; θ)dx, (8) where f(x; θ) is the functional mapping of the entire neural network. The EWC regularizer is defined as follows: Rewc(θ) = ∑ i F (θoldi)(θnewi − θoldi)2, (9) where i iterates over all the parameters of the model. There are two major benefits of using the EWC regularizer in our framework. Firstly, the EWC regularizer reduces the “plasticity” of the parameters that are important to the old classes and thus guarantees stable performance over the old classes. Secondly, by reducing the capacity of the deep learning model, the EWC regularizer prevents overfitting to a certain degree. The function of the EWC regularizer could be considered as changing the learning trajectory pointing to the region where the loss is low for both the old and new data. 2.3.3 LOSS FUNCTION After adding the feature regularizer and the EWC regularier, the loss function becomes: L̃(θ) = L+ λfRf (φ) + λewcRewc(θ), (10) where λf and λewc are the coefficients for the feature regularizer and the EWC regularizer, respectively. After plugging Eq. (2), (7) and (9) into Eq. (10), we obtain the regularized loss function: L̃(θ) = − 1 NS +Nnew NS+Nnew∑ n=1 Kt∑ k=1 ỹn,k log(on,k)+ NS∑ n=1 ‖δn(φnew)− δn(φold)‖22 +∑ i λewc(θnewi − θoldi)2 ∫ ( ∂ ∂θnew log f(x; θnew)) 2f(x; θnew)dx, (11) where Kt is the total number of classes at the incremental learning time point t. 2.4 SUPPORTNET Combining the deep learning model, which consists of the deep learning architecture mapping function and the final fully connected classification layer, the novel support data selector, and the two consolidation regularizers together, we propose a highly effective framework, SupportNet (Fig. 2), which can perform class incremental learning without catastrophic forgetting. Our framework can resolve the catastrophic forgetting issue in two ways. Firstly, the support data can help the model to review the old information during future training. Despite the small size of the support data, they can preserve the distribution of the old data quite well, which will be shown in Section 4.1. Secondly, the two consolidation regularizers consolidate the high level representation of the old data and reduce the plasticity of those weights, which are of vital importance for the old classes. 3 RESULTS 3.1 DATASETS During our experiments, we used six datasets: (1) MNIST, (2) CIFAR-10 and CIFAR-100, (3) Enzyme function data (Li et al., 2018c), (4) HeLa (Boland & Murphy, 2001) and (5) BreakHis (Spanhol et al., 2016). MNIST, CIFAR-10 and CIFAR-100 are commonly used benchmark datasets in the computer vision field. MNIST consists of 70K 28*28 single channel images belonging to 10 classes. CIFAR-10 contains 60K 32*32 RGB images belonging to 10 classes, while CIFAR-100 is composed of the same images but the images are further classified into 100 classes. The latter three datasets are from bioinformatics. 
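To make the combined objective of Section 2.3.3 concrete before the experimental details continue, the following PyTorch sketch assembles the cross-entropy term with the feature regularizer of Eq. (7) and the EWC regularizer of Eq. (9). It is our own illustration, assuming that the Fisher values, the old parameters, and the old representations of the support data were stored at the previous incremental step; in practice the Fisher values can be estimated by averaging the squared gradients of the log-likelihood over the old data, as in Eq. (8).

import torch
import torch.nn.functional as F

def supportnet_loss(logits, targets, feats_new, feats_old,
                    params_new, params_old, fisher,
                    lambda_f=1.0, lambda_ewc=100.0):
    # Cross-entropy over the mixed batch of support and new-class data (Eq. 2).
    ce = F.cross_entropy(logits, targets)
    # Feature regularizer (Eq. 7): keep the support-data representation fixed.
    r_f = ((feats_new - feats_old) ** 2).sum()
    # EWC regularizer (Eq. 9): penalize changes of parameters with high Fisher
    # information for the old classes; fisher and params_old are saved tensors.
    r_ewc = sum((f * (p - p_old) ** 2).sum()
                for f, p, p_old in zip(fisher, params_new, params_old))
    return ce + lambda_f * r_f + lambda_ewc * r_ewc      # Eq. (10)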
Enzyme function data1 is composed of 22,168 low-homologous enzyme sequences belonging to 6 classes. The HeLa dataset2 contains around 700 512*384 grayscale images for subcellular structures in HeLa cells belonging to 10 classes. BreakHis3 is composed of 9,109 microscopic images of the breast tumor tissue belonging to 8 classes. Each image is a 3- channel RGB image, whose dimensionality is 700 by 460. 3.2 COMPARED METHODS We compared our method with numerous methods. We refer the first method as the “All Data” method. When data from a new class appear, this method trains a deep learning model from scratch for multi-class classification, using all the new and old data. It can be expected that this method should have the highest classification performance. The second method is the iCaRL method (Rebuffi et al., 2016), which is the state-of-the-art method for class incremental learning in computer vision field Kemker et al. (2017). The third method is EWC . The fourth method is the “Fine Tune” method, in which we only use the new data to tune the model, without using any old data or regularizers. The fifth method is the baseline “Random Guess” method, which assigns the label of each test data sample randomly without using any model. We also compared with a number of recently proposed methods, including three versions of Variational Continual Learning (VCL) methods (Nguyen et al., 2018), Deep Generative Replay (DGR) (Shin et al., 2017), Gradient Episodic Memory (GEM) (Lopez-Paz et al., 2017), and Incremental Moment Matching (IMM) (Lee et al., 2017) on MNIST. In terms of the deep learning architecture, for the enzyme function data, we used the same architecture from Li et al. (2018c). As for the other datasets, we used the residual network with 32 layers. Regarding the SVM in SupportNet framework, based on the result from Soudry et al. (2017); Li et al. (2018a), we used linear kernel. 3.3 PERFORMANCE COMPARISON For all the tasks, we started with binary classification. Then each time we incrementally gave data from one or two new classes to each method, until all the classes were fed to the model. For enzyme data, we fed one class each time. For the other five datasets, we fed two classes in each round. Fig. 3 shows the accuracy comparison on the multi-class classification performance of different methods, over the six datasets, along the incremental learning process. 1http://www.cbrc.kaust.edu.sa/DEEPre/dataset.html 2http://murphylab.web.cmu.edu/data/2DHeLa 3https://web.inf.ufpr.br/vri/breast-cancer-database/ As expected, the “All Data” method has the best classification performance because it has access to all the data and retrains a brand new model each time. The performance of this “All Data” method can be considered as the empirical upper bound of the performance of the incremental learning methods. All the incremental learning methods have performance decrease to different degrees. EWC and “Fine Tune” have quite similar performance which drops quickly when the number of classes increases. The iCaRL method is much more robust than these two methods. In contrast, SupportNet has significantly better performance than all the other incremental learning methods across the five datasets. In fact, its performance is quite close to the “All Data” method and stays stable when the number of classes increases for the MNIST and enzyme datasets. On the MNIST dataset, VCL with K-center Coreset can also achieve very impressive performance. Nevertheless, SupportNet can outperform it along the process. 
Specifically, the performance of SupportNet differs from that of the “All Data” method by less than 1% on MNIST and 5% on the enzyme data. We also show the importance of SupportNet’s components in Fig. 3 (C). As shown in the figure, all three components (support data, EWC regularizer and feature regularizer) contribute to the performance of SupportNet to different degrees. Notice that even with only support data, SupportNet can already outperform iCaRL, which shows the effectiveness of our support data selector. The result on CIFAR-100 will be discussed in more detail in Section 4.2. Detailed results about different methods’ performance on different classes (confusion matrix) and on the old classes and the new classes separately (accuracy matrix) can be found in Sections B and C in the Appendices. We also show the effectiveness of the consolidation regularizers in stabilizing the learned feature representation in Section D with t-SNE visualization (Maaten & Hinton, 2008) in the Appendices. Furthermore, we compared SupportNet with iCaRL on an additional dataset, tiny ImageNet, which contains 200 classes. The results are shown in Section F in the Appendices, which further demonstrate the effectiveness of SupportNet. 3.4 SUPPORT DATA SIZE AND RUNNING TIME As reported by the previous study (Rebuffi et al., 2016), the preserved dataset size can affect the performance of the final model significantly. We investigated this in detail here. As shown in Fig. 4 (A), the performance degradation of SupportNet relative to the “All Data” method decreases gradually as the support data size increases, which is consistent with the previous study using the rehearsal method (Rebuffi et al., 2016). What is interesting is that the performance degradation decreases very quickly at the beginning of the curve, so the performance loss is already very small with a small number of support data. That trend demonstrates the effectiveness of our support data selector, i.e., being able to select a small yet representative support dataset. We also show the performance of SupportNet with 2000, 1500, 1000, 500, and 200 support data points, respectively, in Section E in the Appendices, which further demonstrates the effectiveness of our method. On the other hand, this property of our framework is very useful when users need to trade off performance against computational resources and running time. As shown in Fig. 4 (B), on MNIST, SupportNet outperforms the “All Data” method significantly regarding the accumulated running time with less than 1% performance deviation, trained on the same hardware (GTX 1080 Ti). 3.5 REGULARIZER COEFFICIENT Although the performance of the EWC method on incremental learning is not impressive (Fig. 3), the EWC regularizer plays an important role in our method. Here, we evaluated our method by varying the EWC regularizer coefficient from 1 to 100,000, and compared it with the “All Data” method and iCaRL (Table 1). We find that the performance of SupportNet varies with different EWC regularizer coefficients, with the highest one very close to the “All Data” method, which is the upper bound of all the incremental learning methods, whereas the lowest one has around 13% performance degradation. The results make sense because, from the neurophysiological point of view, SupportNet is trying to reach the stability-plasticity balance point for this classification task.
If the coefficient is too small, which means we do not impose enough constraint on those weights which contribute significantly to the old class classification, the deep learning model will be too plastic and the old knowledge tends to be lost. If the coefficient is too large, which means that we impose very strong constraint on those weights even when they are not important to the old class classification, the deep learning model will be too stable and does not have enough capacity to incorporate new knowledge. In general, our results are consistent with the stability-plasticity dilemma. 4 DISCUSSION 4.1 UNDERFITTING AND OVERFITTING When training a deep learning model, one can encounter the notorious overfitting issue almost all the time. It is still the case for training an incremental learning model, but we found that there are some unique issues of such learning methods. Table 2 shows the performance of SupportNet and iCaRL on the real training data (i.e., the new data plus the support data for SupportNet and examplars for iCaRL), all the training data (i.e., the new data plus all the old data), and the test data. It can be seen that both methods perform almost perfectly on the real training data, which is as expected. However, the performances of iCaRL on the test data and all the training data are almost the same, both of which are much worse than that on the real training data. This indicates that iCaRL is overfitted to the real training data but underfitted to all the training data. As for SupportNet, the issue is much less severe than iCaRL as the performance degradation from the real training data to all the training data reduces from 37% as in iCaRL to 7% in SupportNet. This suggests that the support data selected by SupportNet are indeed critical for the deep learning training for the old classes. We can find the same pattern on the MNIST dataset. 4.2 SUPPORT VECTOR EVOLVING Despite the impressive performance of SupportNet as shown in Fig. 3, we have to admit the limitation of SupporNet. In fact, using our method, we assume the support vectors of one class will stay static if the learned representation is largely fixed. However, this assumption does not hold under all the circumstances. For example, suppose we perform a binary classification for one very specific type of cat, such as Chartreux, and one very specific type of dog, such as Rottweiler. Later, we need to equip the classifier with the function to recognize another very specific type of cat, such as British Shorthair. We may find that the support vectors of Chartreux change as British Shorthair comes in because Chartreux and British Shorthair are so similar that using the previous support vectors, we are unable to distinguish them. Although SupportNet can still reach the state-of-the-art performance even under this circumstance, as shown in Fig. 3 (F), more work should be done in the future to handle this support vector evolving problem. 5 CONCLUSION In this paper, we proposed a novel class incremental learning method, SupportNet, to solve the catastrophic forgetting problem by combining the strength of deep learning and SVM. SupportNet can identify the support data from the old data efficiently, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. 
With the help of two powerful consolidation regularizers, the support data can effectively help the deep learning model prevent the catastrophic forgetting issue, eliminate the necessity of retraining the model from scratch, and maintain stable learned representation between the old and the new data. A DERIVATION OF EQUATION 3 FROM EQUATION 2 In the section, we use chain rule to derive the following equation − ∂L ∂wj,i = 1 N N∑ n=1 (ỹn,i − on,i)δn,j , (12) from L = − 1 N N∑ n=1 K∑ k=1 ỹn,k log(on,k). (13) Let us first consider just one data sample: Ln = − K∑ k=1 ỹn,k log(on,k). (14) Using chain rule, we have − ∂Ln ∂wj,i = − K∑ l=1 ∂Ln ∂on,l ∂on,l ∂zn,i ∂zn,i ∂wj,i , (15) For the first term in Eq. 15, we have ∂Ln ∂on,l = ∂ − ∑K k=1 ỹn,k log(on,k) ∂on,l = − ỹn,l on,l . (16) For the second term in Eq. 15, we have ∂on,l ∂zn,i = ∂ exp(zn,l)∑K k=1 exp(zn,k) ∂zn,i = ∂ exp(zn,l) ∂zn,i ∑K k=1 exp(zn,k)− exp(zn,l) exp(zn,i) ( ∑K k=1 exp(zn,k)) 2 = { on,i(1− on,i), l = i −on,ion,l, l 6= i. (17) For the third term in Eq. 15, we have ∂zn,i ∂wj,i = ∂Wi,:δn ∂wj,i = δn,j . (18) Put Eq. 16, Eq. 17, and Eq. 18 into Eq. 15, we have: − ∂Ln ∂wj,i = ( ỹn,i on,i on,i(1− on,i) + K∑ l 6=i ỹn,l on,l (−on,ion,l))δn,j = (ỹn,i − on,i K∑ l=1 ỹn,l)δn,j (1) = (ỹn,i − on,i)δn,j , (19) where (1) is the result of the fact that we use one hot encoding for the label and ∑K l=1 ỹn,l = 1. From Eq. 19, we can easily get Eq. 12 by considering all the data points. B CONFUSION MATRICES We investigate the confusion matrices of the “Random Guess” method, the “Fine Tune” method, iCaRL and SupportNet (Fig. 5) after the last batch of classes on the EC data. As expected, the “Fine Tune” method only considers the new data from the new class, and thus is overfitted to the new class (Fig. 5(B)). The iCaRL method partially solves this issue by combining deep learning with nearestmean-examplars, which is a variant of KNN (Fig 5(C)). SupportNet, on the other hand, combines the advantage of SVM and deep learning by using SVM to find the important support data, which efficiently preserve the knowledge of the old data, and utilizing deep learning as the final classifier. This novel combination can efficiently and effectively solve the incremental learning problem (Fig 5(D)). Notice that the upper left diagonal of the SupportNet’s confusion matrix has much higher values than those of the iCaRL’s confusion matrix, which indicates the performance improvement comes from the accuracy prediction of the old classes. C ACCURACY MATRICES In this section, we investigate the performance composition of SupportNet on MNIST shown in Fig. 3 (A). Fig. 3 (A) only shows the overall performance of different methods on all the testing data, averaging the performances on the old test data and the new test data, which can lose the insight of different methods’ performance on old data. To avoid that, we further check the performance of different methods on the old data and the new data separately, whose results can be referred to Fig. 6. As shown in Fig. 6 (B), iCaRL can maintain its performance on the oldest class batch very well, however, it is unable to maintain its performance on the intermediate class batches. GEM (Fig. 6 (A)) can outperform iCaRL on the middle class batches, however, it cannot maintain the performance of the oldest class batch. VCL (Fig. 6 (C)) further outperforms GEM in terms of middle class batches, however it suffers from the same problem as GEM, being unable to preserve the performance on the oldest class batch. 
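As a supplement to the accuracy matrices discussed in Appendix C (and continued below), the evaluation protocol can be sketched as follows. This is an illustrative helper of ours, assuming one class batch is added per incremental step; a confusion matrix as in Appendix B can then be obtained directly from the final model's predictions.

import numpy as np

def accuracy_matrix(predict_fns, batch_test_sets):
    # predict_fns:     one prediction function per incremental training step
    # batch_test_sets: one (x, y) test set per class batch, in arrival order
    T, B = len(predict_fns), len(batch_test_sets)
    acc = np.full((T, B), np.nan)
    for t, predict in enumerate(predict_fns):
        for b, (x, y) in enumerate(batch_test_sets[: t + 1]):  # batches seen so far
            acc[t, b] = float(np.mean(predict(x) == y))
    return acc  # row t, column b: accuracy of the step-t model on class batch b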
On the other hand, both VCL with K-center Coreset and SupportNet can maintain their performance on the old classes almost perfectly, whether for the intermediate class batches or the oldest class batch. However, because of the difference between the two algorithms, their trade-offs are different. Although VCL with K-center Coreset maintains the performance on the old classes almost exactly, it trades this off against the newest classes, with the newest model being unable to achieve optimal performance on the newest class batch. SupportNet, in contrast, allows slight performance degradation on the old classes while achieving optimal performance on the newest class batch. D T-SNE VISUALIZATION OF FEATURE REPRESENTATION The feature representation learned by the deep learning models during the incremental learning process is worth investigating, since it can suggest, to a certain degree, why SupportNet works. We take the EC dataset and the MNIST dataset as examples and use t-SNE (Maaten & Hinton, 2008) to investigate the learned representation. For each dataset, we randomly select 2000 data points from the training data at the first training time point. Then, after each subsequent training time point, we apply the further trained model to the selected data points and extract the input of the deep learning model’s last layer as the learned feature representation. After obtaining these raw feature representations, we apply t-SNE to them and visualize them in 2D space. For each dataset, we investigated both SupportNet with consolidation regularizers and SupportNet without any regularizers. The results for the EC data are shown in Fig. 7 and those for the MNIST data in Fig. 8. As shown in those figures, although the feature representation of the standard SupportNet still varies, the variance is much smaller than that of SupportNet without any regularizers, which suggests that the consolidation regularizers help the model stabilize the learned feature representation. E PERFORMANCE ON MNIST WITH LESS SUPPORT DATA In this section, we further investigate the performance of SupportNet with less support data as a supplement to Section 3.4. We run experiments with SupportNet using support data sizes of 2000, 1500, 1000, 500, and 200, respectively, whose results are shown in Fig. 9. As shown in the figure, even SupportNet with 500 support data points can outperform iCaRL with 2000 exemplars, which further demonstrates the effectiveness of our support data selection strategy. F PERFORMANCE ON TINY IMAGENET To further evaluate SupportNet’s performance on the class incremental learning setting with more classes, we tested it on the tiny ImageNet dataset4, comparing it with iCaRL. The setting of the tiny ImageNet dataset is similar to that of ImageNet, but its data size is much smaller. Tiny ImageNet has 200 classes, while each class has only 500 training images and 50 testing images, which makes it even harder than ImageNet. The performance of SupportNet and iCaRL on this dataset is shown in Fig. 10. As illustrated in the figure, SupportNet outperforms iCaRL significantly on this dataset. Furthermore, as suggested by the red line, which shows the performance difference between SupportNet and iCaRL, SupportNet’s performance superiority becomes increasingly significant as the class incremental learning setting goes further. This phenomenon demonstrates the effectiveness of SupportNet in combating catastrophic forgetting. 4https://tiny-imagenet.herokuapp.com/
1. What is the main contribution of the paper regarding incremental learning? 2. What are the strengths of the proposed approach, particularly in addressing catastrophic forgetting? 3. What are the weaknesses of the paper, especially in terms of experimental analysis and methodology? 4. Do you have any concerns regarding the novelty and applicability of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review Summary: The authors offer a novel incremental learning method called SupportNet to combat catastrophic forgetting that can be seen in standard deep learning models. Catastrophic forgetting is the phenomenon where the networks don’t retain old knowledge when they learn new knowledge. SupportNet uses resnet network with 32 layers, trains an SVM on the last layer and the support vector points from this SVM are given to the network along with the new data. Furthermore, two regularizers, feature and EWC regularizer, are added to the network. The feature regularizer forces the network to produce fixed representation for the old data, since if the feature representation for the old data changes when the network is fine-tuned on the new data then the support vectors generated from the old feature representation of the old data would become invalid. The EWC regularizer works by constraining parameters crucial for the classification of the old data, making it harder for the network to change them. SupportNet is compared to five methods (all data: network is re-trained with new and old data, upper bound for performance, iCaRL: state-of-the-art method for incremental learning, EWC: Only EWC regularizer added, Fine tune: Only new data, Random guessing: Random guess to assign labels) on six datasets (MNIST, CIFAR-10, CIFAR-100, Enzyme Function Prediction, HeLa Subcellular Structure Classification, Breast Tumor Classification). It shows some improvement in overall accuracy with each newly added class when compared to iCaRL, EWC, Fine Tune and Random guessing. Additionally, they show that overfitting for the real training data (a chosen subset of old data and the new data) is a problem for the competition iCaRL and affects SupportNet to a much lesser degree. Pros: (1) The authors propose a sensible approach, which is also novel to be best of our knowledge, using SVM to select support data from old data to be fed to the network along with the new data in the incremental learning framework to avoid catastrophic forgetting. Additionally, they offer a feature regularizer that penalizes the network for changing the feature representation of the support data when training the network on new data and an EWC regularizer that constrains the parameters that are crucial for the classification of the old data and makes it harder to change them. (2) The authors use six different datasets and several other approaches (subsets of their method’s components, other competing methods) to show these three components alleviate catastrophic forgetting and show improvement in overall accuracy. (3) The paper is well written and easy to follow. Cons: Major Points: (1) To show that the method proposed in the paper addresses catastrophic forgetting, in addition to the overall accuracy shown in Figure 3, it is also necessary to show the accuracy of different models on old classes when new classes are added to the network. This will strengthen the argument that the improvement in accuracy is indeed due to correct classification on old data. (2) The authors claim that iCaRL suffers from overfitting on real training data (section 4.1) however Table 2 shows iCaRL only on the enzyme function prediction which is also the dataset where the difference in performance between iCaRL and SupportNet is the largest. To support the general overfitting claim made in section 4.1, the authors should repeat this analysis on any of the other five datasets where the performance difference between the two methods is much smaller. 
SupportNet also suffers from overfitting (Table 3, Accuracy: test data: 83.9%, real training data: 98.7%) although to a lesser extent than iCaRL. (3) The individual impact of the support points and the joint impact of support points with feature regularizer on accuracy is not assessed. To prove their usefulness, add two methods to Figure 3: (a)A method that uses support points without any regularizer. (b) A method that uses support points with just the feature regularizer. Other points: (1) In section 2.3.2, EWC regularizer, Eq. 9: We think F(theta_new) should be F(theta_old) since we want to constrain parameters crucial for classification of old data and should be computing Fisher Information for the old parameters. (2) In section 2.1 Deep Learning and SVM: additional steps are needed to show how Eq. 3 is derived from Eq. 2. (3) In section 2.1 Deep Learning and SVM: In the line before Eq. 4. “t represtent” instead of “t represents”. (4) Figures are small and hard to read. Please increase the size and resolution of the figures.
ICLR
Title SupportNet: solving catastrophic forgetting in class incremental learning with support data Abstract A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strength of deep learning and support vector machine (SVM), where SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and even reaches similar performance as the deep learning model trained from scratch on both old and new data. 1 INTRODUCTION Since the breakthrough in 2012 (Krizhevsky et al., 2012), deep learning has achieved great success in various fields (LeCun et al., 2015; Silver et al., 2016; Sutskever et al., 2014; He et al., 2016; Alipanahi et al., 2015; Li et al., 2018b; Dai et al., 2017). However, despite its impressive achievements, there are still several bottlenecks related to the practical part of deep learning waiting to be solved (Papernot et al., 2016; Lipton, 2016; Kemker et al., 2017). One of those bottlenecks is catastrophic forgetting (Kemker et al., 2017), which means that a well-trained deep learning model tends to completely forget all the previously learned information when learning new information (McCloskey & Cohen, 1989). That is, once a deep learning model is trained to perform a specific task, it cannot be trained easily to perform a new similar task without affecting the original task’s performance dramatically. Unlike human and animals, deep learning models do not have the ability to continuously learn over time and different datasets by incorporating the new information while retaining the previously learned experience, which is known as incremental learning. Two major theories have been proposed to explain human’s ability to perform incremental learning. The first theory is Hebbian learning (Hebb, 1949) with homeostatic plasticity (Zenke et al., 2017), which suggests that human brain’s plasticity will decrease as people learn more knowledge to protect the previously learned information. The second theory is the complementary learning system (CLS) theory (Mcclelland et al., 1995; OReilly et al., 2014), which suggests that human beings extract high-level structural information and store the high level information in a different brain area while retaining episodic memories. Inspired by the above two major neurophysiological theories, people have proposed a number of methods to deal with catastrophic forgetting. The most straightforward and pragmatic method to avoid catastrophic forgetting is to retrain a deep learning model completely from scratch with all the old data and new data (Parisi et al., 2018). However, this method is proved to be very inefficient (Parisi et al., 2018). 
Moreover, the new model learned from scratch may share very low similarity with the old one, which results in poor learning robustness. In addition to the straightforward method, there are three categories of methods. The first category is the regularization approach (Kirkpatrick et al., 2017; Li & Hoiem, 2016; Jung et al., 2016), which is inspired by the plasticity theory (Benna & Fusi, 2016). The core idea of such methods is to incorporate the plasticity information of the neural network model into the loss function to prevent the parameters from varying significantly when learning new information. These approaches are proved to be able to protect the consolidated knowledge (Kemker et al., 2017). However, due to the fixed size of the neural network, there is a trade-off between the performance of the old and new tasks (Kemker et al., 2017). The second class uses dynamic neural network architectures (Rebuffi et al., 2016; Rusu et al., 2016; Lopez-Paz & Ranzato, 2017). To accommodate the new knowledge, these methods dynamically allocate neural resources or retrain the model with an increasing number of neurons or layers. Intuitively, these approaches can prevent catastrophic forgetting but may also lead to scalability and generalization issues due to the increasing complexity of the network (Parisi et al., 2018). The last category utilizes the dual-memory learning system, which is inspired by the CLS theory (Hinton & Plaut, 1987; Lopez-Paz & Ranzato, 2017; Gepperth & Karaoguz, 2016). Most of these systems either use dual weights or take advantage of pseudo-rehearsal, which draw training samples from a generative model and replay them to the model when training with new data. However, how to build an effective generative model remains a difficult problem. Recent researches on the optimization and generalization of deep neural networks suggested the potential relationship between deep learning and SVM (Soudry et al., 2017; Li et al., 2018a). Based on that idea, we propose a novel and easy-to-implement method to perform class incremental deep learning efficiently when encountering data from new classes (Fig. 1). Our method maintains a support dataset for each old class, which is much smaller than the original dataset of that class, and shows the support datasets to the deep learning model every time there is a new class coming in so that the model can “review” the representatives of the old classes while learning new information. Although this rehearsal idea is not new (Rebuffi et al., 2016), our method is innovative in the sense that we show how to select the support data in a systematic and generic way to preserve as much information as possible. We demonstrate that it is more efficient to select the support vectors of an SVM, which is used to approximate the neural network’s last layer, as the support data, both theoretically and empirically. Meanwhile, since we divide the network into two parts, the last layer and all the previous layers, in order to stabilize the learned representation of old data before the last layer and retain the performance for the old classes, following the idea of the Hebbian learning theory, we utilize two consolidation regularizers, to reduce the plasticity of the deep learning model and constrain the deep learning model to produce similar representation for old data. The framework of our method is show in Fig. 2. 
In summary, this paper has the following main contributions: • We propose a novel way of selecting support data through the combination of deep learning and SVM, and demonstrate its efficiency with comprehensive experiments on various tasks. • We propose a novel regularizer, namely, consolidation regularizer, which stabilizes the deep learning network and maintains the high level feature representation of the old information. 2 METHODS 2.1 DEEP LEARNING AND SVM In this subsection, we will show what data is more important for deep neural network model training. Following the setting in Soudry et al. (2017); Li et al. (2018a), let us consider a dataset {xn, ỹn}Nn=1, with xn ∈ RD being the feature, and ỹn ∈ RK being the one-hot encoding of the label. K is the total number of classes andN is the size of the dataset. Denote the input of the last layer (the learned representation) as δn ∈ RT for xn. We use W to denote the parameter of the last layer and define zn = Wδn. After applying softmax activation function to zn, we obtain the output of the whole deep neural network for the input xn as on. Consequently, we have: on,i = exp(zn,i)∑K k=1 exp(zn,k) = exp(Wi,:δn)∑K k=1 exp(Wk,:δn) . (1) For deep learning, we usually use the cross-entropy loss as the loss function: L = − 1 N N∑ n=1 K∑ k=1 ỹn,k log(on,k), (2) Consider the negative gradient of the loss function on wj,i (the derivation of Equation (3) can be referred to Section A in the Appendices): − ∂L ∂wj,i = 1 N N∑ n=1 (ỹn,i − on,i)δn,j = 1 N N∑ n=1 (ỹn,i − exp(Wi,:δn)∑K k=1 exp(Wk,:δn) )δn,j , (3) according to Soudry et al. (2017); Li et al. (2018a), after the learned representation becoming stable, the last weight layer will converge to the SVM solution. That is, we can write W = a(t)Ŵ + B(t), where Ŵ is the corresponding SVM solution; t represent the t-th iteration of SGD; a(t) → ∞ and B(t) is bounded. Thus, Equation (3) becomes: − ∂L ∂wj,i = 1 N N∑ n=1 (ỹn,i − exp(a(t)Ŵi,:δn) exp(B(t)i,:δn)∑K k=1 exp(a(t)Ŵk,:δn) exp(B(t)k,:δn) )δn,j . (4) Since the candidate value of ỹn,i is {0, 1} and if ỹn,i = 0, that term in Equation (2) does not contribute to the loss. Only when ỹn,i = 1 can the data contribute the loss and thus the gradient. Under that circumstance, since a(t) → ∞, only the data with the smallest exponential nominator can contribute to the gradient. Those data are precisely the ones with the smallest margin Ŵi,:δn, which are the support vectors, for class i. 2.2 SUPPORT DATA SELECTOR According to Sirois et al. (2008); Pallier et al. (2003), even human beings, who are proficient in incremental learning, cannot deal with catastrophic forgetting perfectly. On the other hand, a common strategy for human beings to overcome forgetting during learning is to review the old knowledge frequently (Murre & Dros, 2015). Actually, during reviewing, we usually do not review all the details, but rather the important ones, which are often enough for us to grasp the knowledge. Inspired by this, we design the support dataset and the review training process. During incremental learning, we maintain a support dataset for each class, which is fed to the model together with the new data of the new classes. In other words, we want the model to review the representatives of the previous classes when learning new information. The main question is thus how to build an effective support data selector to construct such support data, which we denote as {xSn , ỹSn} NS n=1. 
According to the discussion in Section 2.1, we know that the data corresponding to the support vectors in SVM solution contribute more to the deep learning model training. Based on that, we obtain the high level feature representations of the original input using deep learning mapping function and train an SVM classifier with these features. By performing the SVM training, we detect the support vectors from each class, which are of crucial importance for the deep learning model training. We define the original data which correspond to these support vectors as the support data candidates, which we denote as {xSVn , ỹSVn } NSV n=1 . If the required number of preserved data is smaller than that of the support vectors, we will sample support data candidates to obtain the required number. Formally: {xSn , ỹSn} NS n=1 ⊂ {xSVn , ỹSVn } NSV n=1 . (5) Denote the new coming data as {xnewn , ỹnewn } Nnew n=1 , we have the new training data for the model as: {xSn , ỹSn} NS n=1 ∪ {xnewn , ỹnewn } Nnew n=1 , (6) 2.3 CONSOLIDATION REGULARIZERS Since the support data selection depends on the high level representation produced by the deep learning layers, which are fine tuned on new data, the old data feature representations may change over time. As a result, the previous support vectors for the old data may no longer be support vectors for the new data, which makes the support data invalid (here we assume the support vectors will remain the same as long as the representations are largely fixed, which will be discussed in more details in Section 4.2). To solve the issue, we add two consolidation regularizers to consolidate the learned knowledge: the feature regularizer, which forces the model to produce fixed representation for the old data over time, and the EWC regularizer, which consolidates the important weights that contribute to the old class classification significantly into the loss function. 2.3.1 FEATURE REGULARIZER We add the following feature regularizer into the loss function to force the mapping function to produce fixed representation for old data. Following the setting in Section 2.1, δn depends on φ, which is the parameters of the deep learning mapping function. The feature regularizer is defined as: Rf (φ) = NS∑ n=1 ‖δn(φnew)− δn(φold)‖22 , (7) where φnew is the parameters for the deep learning architecture trained with the support data from the old classes and the new data from the new class(es); φold is the parameters for the mapping function of the old data; and Ns is the number of support data. This regularizer requires the model to preserve the feature representation produced by the deep learning architecture for each support data, which could lead to potential memory overhead. However, since it operates on a very high level representation, which is of much less dimensionality than the original input, the overhead is neglectable. 2.3.2 EWC REGULARIZER According to the Hebbian learning theory, after learning, the related synaptic strength and connectivity are enhanced while the degree of plasticity decreases to protect the learned knowledge. Guided by this neurophysiological theory, the EWC regularizer (Kirkpatrick et al., 2017) was designed to consolidate the old information while learning new knowledge. The core idea of this regularizer is to constrain those parameters which contribute significantly to the classification of the old data. 
Specifically, the more a certain parameter contributes to the previous classification, the harder constrain we apply to it to make it unlikely to be changed. That is, we make those parameters that are closely related to the previous classification less “plastic”. In order to achieve this goal, we calculate the Fisher information for each parameter, which measures its contribution to the final prediction, and apply the regularizer accordingly. Formally, the Fisher information for the parameters θ = {φ,W} can be calculated as: F (θ) = E[( ∂ ∂θ log f(X; θ))2|θ] = ∫ ( ∂ ∂θ log f(x; θ))2f(x; θ)dx, (8) where f(x; θ) is the functional mapping of the entire neural network. The EWC regularizer is defined as follows: Rewc(θ) = ∑ i F (θoldi)(θnewi − θoldi)2, (9) where i iterates over all the parameters of the model. There are two major benefits of using the EWC regularizer in our framework. Firstly, the EWC regularizer reduces the “plasticity” of the parameters that are important to the old classes and thus guarantees stable performance over the old classes. Secondly, by reducing the capacity of the deep learning model, the EWC regularizer prevents overfitting to a certain degree. The function of the EWC regularizer could be considered as changing the learning trajectory pointing to the region where the loss is low for both the old and new data. 2.3.3 LOSS FUNCTION After adding the feature regularizer and the EWC regularier, the loss function becomes: L̃(θ) = L+ λfRf (φ) + λewcRewc(θ), (10) where λf and λewc are the coefficients for the feature regularizer and the EWC regularizer, respectively. After plugging Eq. (2), (7) and (9) into Eq. (10), we obtain the regularized loss function: L̃(θ) = − 1 NS +Nnew NS+Nnew∑ n=1 Kt∑ k=1 ỹn,k log(on,k)+ NS∑ n=1 ‖δn(φnew)− δn(φold)‖22 +∑ i λewc(θnewi − θoldi)2 ∫ ( ∂ ∂θnew log f(x; θnew)) 2f(x; θnew)dx, (11) where Kt is the total number of classes at the incremental learning time point t. 2.4 SUPPORTNET Combining the deep learning model, which consists of the deep learning architecture mapping function and the final fully connected classification layer, the novel support data selector, and the two consolidation regularizers together, we propose a highly effective framework, SupportNet (Fig. 2), which can perform class incremental learning without catastrophic forgetting. Our framework can resolve the catastrophic forgetting issue in two ways. Firstly, the support data can help the model to review the old information during future training. Despite the small size of the support data, they can preserve the distribution of the old data quite well, which will be shown in Section 4.1. Secondly, the two consolidation regularizers consolidate the high level representation of the old data and reduce the plasticity of those weights, which are of vital importance for the old classes. 3 RESULTS 3.1 DATASETS During our experiments, we used six datasets: (1) MNIST, (2) CIFAR-10 and CIFAR-100, (3) Enzyme function data (Li et al., 2018c), (4) HeLa (Boland & Murphy, 2001) and (5) BreakHis (Spanhol et al., 2016). MNIST, CIFAR-10 and CIFAR-100 are commonly used benchmark datasets in the computer vision field. MNIST consists of 70K 28*28 single channel images belonging to 10 classes. CIFAR-10 contains 60K 32*32 RGB images belonging to 10 classes, while CIFAR-100 is composed of the same images but the images are further classified into 100 classes. The latter three datasets are from bioinformatics. 
Enzyme function data1 is composed of 22,168 low-homologous enzyme sequences belonging to 6 classes. The HeLa dataset2 contains around 700 512*384 grayscale images for subcellular structures in HeLa cells belonging to 10 classes. BreakHis3 is composed of 9,109 microscopic images of the breast tumor tissue belonging to 8 classes. Each image is a 3- channel RGB image, whose dimensionality is 700 by 460. 3.2 COMPARED METHODS We compared our method with numerous methods. We refer the first method as the “All Data” method. When data from a new class appear, this method trains a deep learning model from scratch for multi-class classification, using all the new and old data. It can be expected that this method should have the highest classification performance. The second method is the iCaRL method (Rebuffi et al., 2016), which is the state-of-the-art method for class incremental learning in computer vision field Kemker et al. (2017). The third method is EWC . The fourth method is the “Fine Tune” method, in which we only use the new data to tune the model, without using any old data or regularizers. The fifth method is the baseline “Random Guess” method, which assigns the label of each test data sample randomly without using any model. We also compared with a number of recently proposed methods, including three versions of Variational Continual Learning (VCL) methods (Nguyen et al., 2018), Deep Generative Replay (DGR) (Shin et al., 2017), Gradient Episodic Memory (GEM) (Lopez-Paz et al., 2017), and Incremental Moment Matching (IMM) (Lee et al., 2017) on MNIST. In terms of the deep learning architecture, for the enzyme function data, we used the same architecture from Li et al. (2018c). As for the other datasets, we used the residual network with 32 layers. Regarding the SVM in SupportNet framework, based on the result from Soudry et al. (2017); Li et al. (2018a), we used linear kernel. 3.3 PERFORMANCE COMPARISON For all the tasks, we started with binary classification. Then each time we incrementally gave data from one or two new classes to each method, until all the classes were fed to the model. For enzyme data, we fed one class each time. For the other five datasets, we fed two classes in each round. Fig. 3 shows the accuracy comparison on the multi-class classification performance of different methods, over the six datasets, along the incremental learning process. 1http://www.cbrc.kaust.edu.sa/DEEPre/dataset.html 2http://murphylab.web.cmu.edu/data/2DHeLa 3https://web.inf.ufpr.br/vri/breast-cancer-database/ As expected, the “All Data” method has the best classification performance because it has access to all the data and retrains a brand new model each time. The performance of this “All Data” method can be considered as the empirical upper bound of the performance of the incremental learning methods. All the incremental learning methods have performance decrease to different degrees. EWC and “Fine Tune” have quite similar performance which drops quickly when the number of classes increases. The iCaRL method is much more robust than these two methods. In contrast, SupportNet has significantly better performance than all the other incremental learning methods across the five datasets. In fact, its performance is quite close to the “All Data” method and stays stable when the number of classes increases for the MNIST and enzyme datasets. On the MNIST dataset, VCL with K-center Coreset can also achieve very impressive performance. Nevertheless, SupportNet can outperform it along the process. 
Specifically, the performance of SupportNet has less than 1% on MNIST and 5% on enzyme data difference compared to that of the “All Data” method. We also show the importance of SupportNet’s components in Fig. 3 (C). As shown in the figure, all the three components (support data, EWC regularizer and feature regularizer) contribute to the performance of SupportNet to different degrees. Notice that even with only support data, SupportNet can already outperform iCaRL, which shows the effectiveness of our support data selector. The result on CIFAR-100 will be discussed in more detail in Section 4.2. Detailed results about different methods’ performance on different classes (confusion matrix) and on the old classes and the new classes separately (accuracy matrix) can be referred to Section B and C in the Appendices. We also show the effectiveness of the consolidation regularizers on stabilizing the learned feature representation in Section D with t-SNE visualization (Maaten & Hinton, 2008) in the Appendices. Furthermore, we compared SupportNet with iCaRL on an additional dataset, tiny ImageNet, which contains 200 classes. The results are shown in Section F in the Appendices, which further demonstrate the effectiveness of SupportNet. 3.4 SUPPORT DATA SIZE AND RUNNING TIME As reported by the previous study (Rebuffi et al., 2016), the preserved dataset size can affect the performance of the final model significantly. We investigated that in details here. As shown in Fig. 4 (A), the performance degradation of SupportNet from the “All Data” method decreases gradually as the support data size increases, which is consistent with the previous study using the rehearsal method (Rebuffi et al., 2016). What is interesting is that the performance degradation decreases very quickly at the beginning of the curve, so the performance loss is already very small with a small number of support data. That trend demonstrates the effectiveness of our support data selector, i.e., being able to select a small while representative support dataset. We also show the performance of SupportNet with 2000, 1500, 1000, 500, 200 support data, respectively, in Section E in the Appendices, which further demonstrates the effective of our method. On the other hand, this decent property of our framework is very useful when the users need to trade off the performance with the computational resources and running time. As shown in Fig. 4 (B), on MNIST, SupportNet outperforms the “All Data” method significantly regarding the accumulated running time with only less than 1% performance deviation, trained on the same hardware (GTX 1080 Ti). 3.5 REGULARIZER COEFFICIENT Although the performance of the EWC method on incremental learning is not impressive (Fig. 3), the EWC regularizer plays an important role in our method. Here, we evaluated our method by varying the EWC regularizer coefficient from 1 to 100,000, and compared it with the “All Data” method and iCaRL (Table 1). We can find that the performance of SupportNet varies with different EWC regularier coefficients, with the highest one very close to the “All Data” method, which is the upper bound of all the incremental learning methods, whereas the lowest one having around 13% performance degradation. The results make sense because from the neurophysiological point of view, SupportNet is trying to reach the stability-plasticity balance point for this classification task. 
If the coefficient is too small, which means we do not impose enough constraint on those weights which contribute significantly to the old class classification, the deep learning model will be too plastic and the old knowledge tends to be lost. If the coefficient is too large, which means that we impose very strong constraint on those weights even when they are not important to the old class classification, the deep learning model will be too stable and does not have enough capacity to incorporate new knowledge. In general, our results are consistent with the stability-plasticity dilemma. 4 DISCUSSION 4.1 UNDERFITTING AND OVERFITTING When training a deep learning model, one can encounter the notorious overfitting issue almost all the time. It is still the case for training an incremental learning model, but we found that there are some unique issues of such learning methods. Table 2 shows the performance of SupportNet and iCaRL on the real training data (i.e., the new data plus the support data for SupportNet and examplars for iCaRL), all the training data (i.e., the new data plus all the old data), and the test data. It can be seen that both methods perform almost perfectly on the real training data, which is as expected. However, the performances of iCaRL on the test data and all the training data are almost the same, both of which are much worse than that on the real training data. This indicates that iCaRL is overfitted to the real training data but underfitted to all the training data. As for SupportNet, the issue is much less severe than iCaRL as the performance degradation from the real training data to all the training data reduces from 37% as in iCaRL to 7% in SupportNet. This suggests that the support data selected by SupportNet are indeed critical for the deep learning training for the old classes. We can find the same pattern on the MNIST dataset. 4.2 SUPPORT VECTOR EVOLVING Despite the impressive performance of SupportNet as shown in Fig. 3, we have to admit the limitation of SupporNet. In fact, using our method, we assume the support vectors of one class will stay static if the learned representation is largely fixed. However, this assumption does not hold under all the circumstances. For example, suppose we perform a binary classification for one very specific type of cat, such as Chartreux, and one very specific type of dog, such as Rottweiler. Later, we need to equip the classifier with the function to recognize another very specific type of cat, such as British Shorthair. We may find that the support vectors of Chartreux change as British Shorthair comes in because Chartreux and British Shorthair are so similar that using the previous support vectors, we are unable to distinguish them. Although SupportNet can still reach the state-of-the-art performance even under this circumstance, as shown in Fig. 3 (F), more work should be done in the future to handle this support vector evolving problem. 5 CONCLUSION In this paper, we proposed a novel class incremental learning method, SupportNet, to solve the catastrophic forgetting problem by combining the strength of deep learning and SVM. SupportNet can identify the support data from the old data efficiently, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. 
With the help of the two powerful consolidation regularizers, the support data can effectively help the deep learning model prevent the catastrophic forgetting issue, eliminate the necessity of retraining the model from scratch, and maintain a stable learned representation between the old and the new data.

A DERIVATION OF EQUATION 3 FROM EQUATION 2

In this section, we use the chain rule to derive

$-\frac{\partial L}{\partial w_{j,i}} = \frac{1}{N}\sum_{n=1}^{N}(\tilde{y}_{n,i} - o_{n,i})\,\delta_{n,j}$, (12)

from

$L = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\tilde{y}_{n,k}\log(o_{n,k})$. (13)

Let us first consider just one data sample:

$L_n = -\sum_{k=1}^{K}\tilde{y}_{n,k}\log(o_{n,k})$. (14)

Using the chain rule, we have

$-\frac{\partial L_n}{\partial w_{j,i}} = -\sum_{l=1}^{K}\frac{\partial L_n}{\partial o_{n,l}}\frac{\partial o_{n,l}}{\partial z_{n,i}}\frac{\partial z_{n,i}}{\partial w_{j,i}}$. (15)

For the first term in Eq. (15), we have

$\frac{\partial L_n}{\partial o_{n,l}} = \frac{\partial\big(-\sum_{k=1}^{K}\tilde{y}_{n,k}\log(o_{n,k})\big)}{\partial o_{n,l}} = -\frac{\tilde{y}_{n,l}}{o_{n,l}}$. (16)

For the second term in Eq. (15), we have

$\frac{\partial o_{n,l}}{\partial z_{n,i}} = \frac{\partial}{\partial z_{n,i}}\frac{\exp(z_{n,l})}{\sum_{k=1}^{K}\exp(z_{n,k})} = \frac{\frac{\partial \exp(z_{n,l})}{\partial z_{n,i}}\sum_{k=1}^{K}\exp(z_{n,k}) - \exp(z_{n,l})\exp(z_{n,i})}{\big(\sum_{k=1}^{K}\exp(z_{n,k})\big)^2} = \begin{cases} o_{n,i}(1-o_{n,i}), & l = i \\ -o_{n,i}\,o_{n,l}, & l \neq i. \end{cases}$ (17)

For the third term in Eq. (15), we have

$\frac{\partial z_{n,i}}{\partial w_{j,i}} = \frac{\partial W_{i,:}\,\delta_n}{\partial w_{j,i}} = \delta_{n,j}$. (18)

Putting Eq. (16), Eq. (17), and Eq. (18) into Eq. (15), we have:

$-\frac{\partial L_n}{\partial w_{j,i}} = \Big(\frac{\tilde{y}_{n,i}}{o_{n,i}}o_{n,i}(1-o_{n,i}) + \sum_{l\neq i}^{K}\frac{\tilde{y}_{n,l}}{o_{n,l}}(-o_{n,i}o_{n,l})\Big)\delta_{n,j} = \Big(\tilde{y}_{n,i} - o_{n,i}\sum_{l=1}^{K}\tilde{y}_{n,l}\Big)\delta_{n,j} \overset{(1)}{=} (\tilde{y}_{n,i} - o_{n,i})\,\delta_{n,j}$, (19)

where (1) follows from the fact that we use one-hot encoding for the label, so $\sum_{l=1}^{K}\tilde{y}_{n,l} = 1$. From Eq. (19), we easily obtain Eq. (12) by considering all the data points.

B CONFUSION MATRICES

We investigate the confusion matrices of the “Random Guess” method, the “Fine Tune” method, iCaRL and SupportNet (Fig. 5) after the last batch of classes on the EC data. As expected, the “Fine Tune” method only considers the new data from the new class, and is thus overfitted to the new class (Fig. 5(B)). The iCaRL method partially solves this issue by combining deep learning with nearest-mean-exemplars, a variant of KNN (Fig. 5(C)). SupportNet, on the other hand, combines the advantages of SVM and deep learning by using the SVM to find the important support data, which efficiently preserve the knowledge of the old data, while utilizing deep learning as the final classifier. This combination can efficiently and effectively solve the incremental learning problem (Fig. 5(D)). Notice that the upper-left diagonal of SupportNet’s confusion matrix has much higher values than that of iCaRL’s confusion matrix, which indicates that the performance improvement comes from more accurate prediction of the old classes.

C ACCURACY MATRICES

In this section, we investigate the performance composition of SupportNet on MNIST shown in Fig. 3 (A). Fig. 3 (A) only shows the overall performance of different methods on all the test data, averaging the performance on the old test data and the new test data, which can hide how each method performs on the old data. To avoid that, we further check the performance of different methods on the old data and the new data separately; the results are shown in Fig. 6. As shown in Fig. 6 (B), iCaRL maintains its performance on the oldest class batch very well, but it is unable to maintain its performance on the intermediate class batches. GEM (Fig. 6 (A)) outperforms iCaRL on the middle class batches, but it cannot maintain the performance on the oldest class batch. VCL (Fig. 6 (C)) further outperforms GEM on the middle class batches, but it suffers from the same problem as GEM, being unable to preserve the performance on the oldest class batch.
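As an aside to the derivation in Appendix A, the closed form in Eq. (19) can be checked numerically against automatic differentiation; the short script below (PyTorch, random data) is our own illustration and is not part of the paper.

```python
import torch

torch.manual_seed(0)
K, D = 5, 8                      # number of classes, feature dimension
delta = torch.randn(D)           # high-level feature vector delta_n
W = torch.randn(K, D, requires_grad=True)
y = torch.zeros(K); y[2] = 1.0   # one-hot label

z = W @ delta                    # logits z_n = W delta_n
o = torch.softmax(z, dim=0)      # softmax outputs o_n
loss = -(y * torch.log(o)).sum() # cross-entropy L_n (Eq. 14)
loss.backward()

# Closed form from Eq. (19): -dL_n/dw_{j,i} = (y_i - o_i) * delta_j,
# i.e. dL_n/dW = outer(o - y, delta).
closed_form = torch.outer(o.detach() - y, delta)
print(torch.allclose(W.grad, closed_form, atol=1e-6))  # expected: True
```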
On the other hand, both VCL with K-center Coreset and SupportNet can maintain their performance on the old data classes almost perfectly, no matter for the intermediate class batches or the oldest class batch. However, because of the difference between the two algorithms, their trade-offs are different. Although VCL with K-center Coreset can maintain the performance of old classes almost exactly, there is a trade-off of the methods on the newest classes, with the newest model being unable to achieve the optimal performance on the newest class. As for SupportNet, it allows slight performance degradation on the old classes while can achieve optimal performance on the newest class batch. D T-SNE VISUALIZATION OF FEATURE REPRESENTATION The feature representation learned by the deep learning models during the incremental learning process is worth investigating, since it can suggest why SupportNet works to a certain degree. We take the EC dataset and the MNIST dataset as examples and use t-SNE (Maaten & Hinton, 2008) to investigate the learned representation. For each dataset, we randomly select 2000 data points from the training data at the first training time point. Then, after each future training time point, we apply the further trained model to the selected data points and extract the input of the deep learning model’s last layer as the learned feature representation. After obtaining those raw feature representations, we apply t-SNE to them and visualize them in 2D space. For each dataset, we investigated both the SupportNet with consolidation regularizers and SupportNet without any regularizers. The result of EC data can be referred to Fig. 7 and the result of MNIST data can be referred to Fig. 8. As shown in those figures, although the feature representation of the standard SupportNet still varies, compared to the SupportNet without any regularizers, the variance is much smaller, which suggests that the consolidation regularizes help the model stabilize the learned feature representation. E PERFORMANCE ON MNIST WITH LESS SUPPORT DATA In this section, we further investigate the performance of SupportNet with less support data as a supplement of Section 3.4. We run the experiments of SupportNet with the support data size as 2000, 1500, 1000, 500, and 200, respectively, whose results are shown in Fig. 9. As shown in the figure, even SupportNet with 500 support data points can outperform iCaRL with 2000 examplars, which further demonstrates the effectiveness of our support data selecting strategy . F PERFORMANCE ON TINY IMAGENET To further evaluate SupportNet’s performance on class incremental learning setting with more classes, we tested it on tiny ImageNet dataset4, comparing it with iCaRL. The setting of tiny ImageNet dataset is similar to that of ImageNet. However, its data size is much smaller than ImageNet. Tiny ImageNet has 200 classes while each class only has 500 training images and 50 testing images, which means that it is even harder than ImageNet. The performance of SupportNet and iCaRL on this dataset is shown in Fig. 10. As illustrated in the figure, SupportNet can outperform iCaRL significantly on this dataset. Furthermore, as suggested by the red line, which shows the performance difference between SupportNet and iCaRL, SupportNet’s performance superiority is increasingly significant as the class incremental learning setting goes further. This phenomenon demonstrates the effectiveness of SupportNet in combating catastrophic forgetting. 4https://tiny-imagenet.herokuapp.com/
1. What is the main contribution of the paper in continual learning? 2. What are the strengths and weaknesses of the proposed method, SupportNet? 3. How does the reviewer assess the novelty and effectiveness of the method compared to prior works? 4. Are there any concerns regarding the applicability and limitations of the proposed approach?
Review
Review This paper presents a continual learning method that aims to overcome the catastrophic forgetting problem by holding out a small number of samples from each task to be reused when training on new tasks. Specifically, these representative samples for each task are selected as the support vectors of an SVM trained on it. The proposed method, SupportNet, is validated on a class-incremental classification task against two existing continual learning approaches, which it outperforms. Pros - The idea of using an SVM to identify the most important samples for classification makes sense. Cons - The idea of storing a small subset of the original dataset for each task has already been explored in [Nguyen et al. 18], and thus is not novel. - Thus the contribution of this work reduces to the use of an SVM to identify the most important samples, but the effectiveness of this approach is not validated since it does not compare against [Nguyen et al. 18]. - It also leaves out much of the recent work on continual learning. - The idea of using an SVM to identify important samples is not very attractive since an SVM will have a very different decision boundary from that of the trained DNN. - Also, this method is only applicable to classification tasks and not to other tasks such as regression or RL. Thus, considering the lack of novelty and experimental validation, I recommend rejecting this paper. [Nguyen et al. 18] Variational Continual Learning, ICLR 2018
ICLR
Title Text Generation with Efficient (Soft) $Q$-Learning Abstract Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL) on the other hand offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of sequences. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw from the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates, and learn effectively from sparse reward. We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and the previous RL methods. 1 INTRODUCTION Recent natural language generation systems have made remarkable progress in producing wellformed coherent text, especially with the massive pretrained language models (LMs) (Radford et al., 2019; Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2019). Those models are typically trained using maximum likelihood estimation (MLE) with a large amount of data supervisions. Despite its successful outcomes, the standard training method suffers from limited applicability to many emerging text generation problems, where little or no supervised data is available. Prominent examples of such low-data problems include generating prompts to control the massive LMs (Yin et al., 2019; Shin et al., 2020; Zhong et al., 2021), learning text generation from noisy or even negative data, generating adversarial text attacks for robustness study (Wallace et al., 2019; Atanasova et al., 2020), and others (Figure 1, right). Due to the failure of standard MLE, people have had to devise specialized algorithms for those problems respectively. On the other hand, reinforcement learning (RL) (Sutton & Barto, 2018) offers an alternative principled framework for learning from arbitrary reward functions, and has achieved great advances in robotic and game control. However, RL by far has made limited success for training text generation, primarily due to the key challenges of sparse reward (i.e., a single reward signal is received only after the whole text sequence is generated) and large action space (i.e., a vocabulary of millions of words). For instance, a popular family of RL algorithms studied extensively for text generation is the policy-based (Williams, 1992) or actor-critic based (Bahdanau et al., 2016; Rennie et al., 2017) algorithms, with policy gradient (PG) being the most prevalent example (Ranzato et al., 2016; Li et al., 2016; Rennie et al., 2017; Tan et al., 2018; Pasunuru & Bansal, 2018; Paulus et al., 2018). Those algorithms train the model with on-policy updates, i.e., the text samples used for estimating policy gradients are from the target model itself. 
Due to the exponentially large space of sequences, onpolicy updates often suffer from extremely high variance and low data efficiency (e.g., most model samples are not useful for learning). Thus directly training with PG from scratch is usually impossible. In practice, the model has to be initialized by MLE training, followed by PG as finetuning, which often leads to limited improvement (Choshen et al., 2020; Wu et al., 2018). Another set of work has resorted to off-policy RL. The key advantage is that samples from other sources, e.g., human-written text, can be used, making them more data efficient than on-policy methods. Previous work has used either importance weighted PG (Pang & He, 2021; Zhou et al., 2017; Kandasamy et al., 2017) or Q-learning based algorithms (Guo, 2015; Jaques et al., 2020; Narasimhan et al., 2015). However, off-policy methods have been considered to be less stable. For example, theQ-learning performance relies heavily on how accurate the learnedQ-function assesses the quality of intermediate subsequences – a challenging task due to the sparse reward signals. In this paper, we develop a new RL formulation for text generation that tackles the above issues (Figure 1, left). We reframe the text generation problem from the soft Q-learning perspective originally developed in robotics (Haarnoja et al., 2017; Schulman et al., 2017). The resulting connection allows us to seamlessly take advantage of the latest successful techniques from the RL literature. In particular, we introduce and adapt the principled path consistency learning (Nachum et al., 2017) to text generation, that (1) offers a natural way to train the model with both on- and off-policy updates, hence combining the best of the two strategies, (2) bridges the sparse reward signal to directly supervise the Q function learning, leading to more accurate Q estimation and credit assignment, and (3) makes efficient updates to Q-values by considering all candidate actions together. The generality and efficiency of the proposed method allows us to train text generation in a wide range of applications: (1) With noisy and negative training examples, our approach learns to generate accurate entailment text that greatly improves upon the data itself as well as other various training methods; (2) Our approach also manages to train an effective adversarial text generator for robustness test for classifiers; (3) We train a prompt generator with our algorithm to achieve controllable generation of pretrained LMs in terms of topics. On all the three tasks, our approach consistently improves over not only previous RL algorithms for text generation, but also diverse task-specialized methods designed specifically for each of the problems, respectively. In the appendix (§A.1.4), we also show that on standard supervised tasks where MLE prevails, our approach is competitive to train text generation models from scratch, which was usually impossible for previous RL algorithms. 2 BACKGROUND AND CHALLENGES The goal of text generation is to produce coherent text y = (y0, ..., yT ) of certain properties for a given task, where yt is a token from a vocabulary V , and T is the text length. The generation can condition on arbitrary input context, which we omit for simplicity of notations. 
We aim to learn a generation model pθ(y) which is typically decomposed autoregressively as pθ(y) = ∏T t=0 pθ(yt | y<t), where y<t = (y0, ..., yt−1) is the prefix, and the distribution at each step t is obtained by applying the softmax function on the output logits: pθ(yt | y<t) = exp fθ(yt | y<t)∑ y′∈V exp fθ(y ′ | y<t) . (1) Here fθ(y | y<t) is the logit of token y computed by the generation model. Given a training example y∗, maximum likelihood training (MLE) updates the model with the gradient ∇θLMLE(θ)= ∑T t=0∇θ log pθ (y∗t | y∗<t). Despite its popularity, MLE-based training only applies when clean supervised data y∗ is available, and cannot be used to optimize arbitrary task metrics (e.g., BLEU, entailment score) which are typically the goal in many text generation tasks. 2.1 REINFORCEMENT LEARNING (RL) FORMULATIONS FOR TEXT GENERATION Notations. Previous research has formulated text generation as an RL problem by considering the following finite-time Markov Decision Process (MDP). At each time step t, let the “state” be st = y<t, namely the partial sequence generated so far. The model, also known as the “agent”, takes as input the current state st and outputs a token, also called “action”, at ∈ V according to a policy π(at | st). The agent then receives a reward rt = r(st, at) and deterministically transitions to next state st+1 (i.e., the concatenation of the tokens in st and the new token at). Following the notation convention in RL, let τ be the trajectory (i.e., text sample) generated by the policy. The agent’s objective is to maximize the accumulative reward, J(π) = Eτ∼π [∑T t=0 γ trt ] , where γ ∈ (0, 1] is the discount factor. A central concept in RL is the Q-function of policy π, defined as Qπ(st, at) = Eπ [∑T t′=t γ t′rt′ | st, at ] , which is the expected future reward of taking action at (i.e., generating token at) in state st and continuing with the policy π. Challenges. Text generation poses significant challenges to RL, particularly because (1) the reward signal is usually sparse, i.e., rt = 0, ∀t < T and the agent receives a non-zero reward rT only after it generates the full sequence, (2) the action space (i.e., the vocabulary V) is extremely large, often containing millions of words. The challenges have led to difficulties of the two major families of RL approaches applied to text generation problems, as detailed below. Policy-based RL techniques directly parameterize the policy πθ with parameters θ. Thus the policy πθ(at | st) exactly corresponds to the above generation model pθ(yt | y<t). Policy gradient (PG) is one of the most widely used algorithms for text generation (Ranzato et al., 2016). It optimizes the cumulative reward with the policy gradient: ∇θJ(πθ) = −Eτ∼πθ [∑T t=0 Q̂(st, at)∇θ log πθ (at | st) ] , (2) where Q̂(st, at) is the estimated Qπθ value with sample τ . Notice that the expectation is taken w.r.t. the policy πθ, which makes PG an on-policy algorithm, meaning that the sample τ needs to come from the the current policy πθ itself. In practice, however, optimizing this objective alone from scratch is unlikely going to work because most samples τ ∼ πθ are just gibberish with zero reward, failing to provide meaningful training signals for updating the policy. Previous literature either initializes the policy πθ with MLE training, and/or use a combination of MLE and PG updates, which often leads to marginal gains in practice (Wu et al., 2018; Choshen et al., 2020). 
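To make Eq. (2) concrete, here is a minimal sketch of the REINFORCE-style surrogate loss for one sampled sequence with a sparse terminal reward; the function name, the tensor layout, and the baseline-free estimator are illustrative simplifications rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def policy_gradient_loss(logits, actions, seq_reward, gamma=1.0):
    """REINFORCE-style surrogate loss for one sampled sequence (cf. Eq. 2).

    logits:     [T, V] model outputs f_theta(. | y_<t) along the sampled sequence
    actions:    [T]    sampled token ids y_t
    seq_reward: scalar reward observed only at the end of the sequence
    """
    T = actions.shape[0]
    log_probs = F.log_softmax(logits, dim=-1)              # log pi_theta(. | s_t)
    chosen = log_probs[torch.arange(T), actions]           # log pi_theta(a_t | s_t)
    # With a sparse terminal reward, the return from step t is gamma^(T-1-t) * r_T.
    returns = seq_reward * gamma ** torch.arange(T - 1, -1, -1, dtype=torch.float)
    # Minimizing -sum_t Q_hat(s_t, a_t) * log pi(a_t | s_t) gives the gradient of Eq. (2).
    return -(returns.detach() * chosen).sum()
```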
Value-based RL techniques, such as Q-learning, implicitly learn the policy π by approximating the value Qπ(s, a) directly. Specifically, let Q∗(s, a) = maxπ Qπ(s, a) denote the optimal value over policies. Thus the optimal policy π∗ is simply taking the action of maximal Q∗ value at each state. The approximation of Q∗ is based on the well-known Bellman temporal consistency: Q∗(st, at) = rt + γmaxat+1 Q ∗(st+1, at+1). (3) Deep Q-learning (Mnih et al., 2013) parameterizes the Q-function as Qθ(x, a) (e.g., a neural network), and train the parameters by minimizing the following regression objective: L(θ) = Eπ′ [ 0.5 · ( rt + γmaxat+1 Qθ̄(st+1, at+1)−Qθ(st, at) )2] , (4) Single-Step PCL Training Multi-Step PCL Training . . . sequence reward . . . sequence reward Figure 2: SoftQ-Learning with path consistency learning (PCL) objectives, where we illustrate with a vocabulary of size 3. Left: Single-step objective (Eq.9), where for each (st, at), the computation involves step t and t+1. Dashed boxes in dark green and gray indicate the regression target, where the intermediate reward rt is often 0 due to sparsity. The gradient is applied to parameters θ at step t (indicated by orange color). Right: Multi-step objective (Eq.11) which aggregates from step t all the way to T . In this way, the final-step non-zero reward rT is used as the regression target. where θ̄ is the parameters of the target Q-network, which is a slow copy of θ and considered as constant for gradient computation of θ. Here π′ is an behavior policy which can be an arbitrary distribution over text, such as the data distribution or replay buffer (Mnih et al., 2013). This makes Q-learning an off-policy algorithm because of its ability to use samples coming from other policies. After learning Qθ, one can induce a policy π from it that takes arg maxaQθ(s, a) at each state s. Jaques et al. (2017) instead sample tokens from the softmax function applied to Qθ. However, the training can be unstable and inefficient due to several challenges: (1) The bootstrapping nature of the above regression problem can make the training unstable. That is, the regression target rt + γmaxat+1 Qθ̄(st+1, at+1) itself is derived from the Q-function to be learned (Kumar et al., 2019). The problem is exacerbated in the presence of sparse reward in text generation, where the real observed signal rt is zero for all intermediate t < T ; (2) The large action space (e.g., 104) in text generation results in slow updates. In particular, notice that Eq.(4) applies the gradient update to the Qθ-value of the only one particular token at (out of the 104 candidate tokens in the vocabulary), making the training inefficient; (3) Besides, pure off-policy updates could be highly sensitive to the quality of training data, and miss the opportunity of on-policy exploration that maximizes the reward of interest in a more direct way. 3 THE SOFT Q-LEARNING FRAMEWORK In this section, we combat the difficulties of previous RL methods by introducing the softQ-learning (SQL) formulation of text generation. We show that the formulation is seamlessly compatible with the common architecture of text generation model (Eq.1), permitting easy implementation (§3.1). The formulation further allows us to integrate the latest advances in RL, notably path consistency learning (Nachum et al., 2017) that makes the RL training efficient and stable in practice (§3.2). Figure 2 and Algorithm 1 summarizes the resulting SQL framework for efficient training. 
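For later contrast with the path-consistency objectives of Section 3.2, the vanilla one-step Q-learning regression of Eq. (4) can be sketched as follows, with the model logits playing the role of Q-values; the batching and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def q_learning_loss(q_logits, target_logits, actions, rewards, done, gamma=0.99):
    """Vanilla deep Q-learning regression (cf. Eq. 4) over a batch of transitions.

    q_logits:      [B, V] Q_theta(s_t, .) for the current states
    target_logits: [B, V] Q_theta_bar(s_{t+1}, .) from the slow target network
    actions:       [B]    token generated at step t
    rewards:       [B]    r_t (zero for all intermediate steps in text generation)
    done:          [B]    1.0 at the terminal step, else 0.0
    """
    q_taken = q_logits.gather(1, actions.unsqueeze(1)).squeeze(1)   # Q(s_t, a_t)
    with torch.no_grad():
        bootstrap = target_logits.max(dim=1).values                 # max_a' Q_bar(s_{t+1}, a')
        target = rewards + gamma * (1.0 - done) * bootstrap
    # Only the single entry Q(s_t, a_t) receives a gradient -- one source of inefficiency
    # discussed above.
    return 0.5 * F.mse_loss(q_taken, target)
```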
3.1 SOFT Q-LEARNING FORMULATION FOR TEXT GENERATION SoftQ-learning (Haarnoja et al., 2017; Schulman et al., 2017; Nachum et al., 2017) is an maximumentropy (MaxEnt) extension to the standard (hard) Q-learning (Mnih et al., 2015; Sutton & Barto, 2018). Under this framework, the agent is encouraged to optimize the reward while staying as stochastic as possible, with the objective JMaxEnt(π) = Eτ∼π [∑T t=0 γ trt + αH (π (· | st)) ] , which augments the vanilla J(π) with the additional Shannon entropy term H with coefficient α.1 This is appealing because it seamlessly connects the Q-values to the familiar output logits of a text generation model, which enables straightforward implementation of the SQL formulation. Q-values as Generation Model Logits. We show the connection of the Q-values with the logits, i.e., the model outputs right before the softmax layer. Concretely, with the SQL objective, the following relationship between optimal policy π∗ and action-value Q∗ holds (Haarnoja et al., 2017; Schulman et al., 2017): π∗(a | s) = expQ ∗(s, a)∑ a′ expQ ∗ (s, a′) . (5) 1WLOG, we can assume α=1, as it can be folded into the reward function by scaling the latter with 1/α. This form is highly reminiscent of the softmax layer of the generation model in Eq.(1). The connection suggests that we can naturally parameterize the Q-function in SQL as the generation model logit function, i.e., Qθ(s, a) ≡ fθ(a | s). In other words, the model output fθ(a | s), originally interpretted as the “logit” of token a given the preceding tokens s, is now re-interpretted as the Qvalue of action a in state s. When achieving optimality, fθ∗(a | s), namely Q∗(s, a), represents the best possible future reward achievable by generating token a in state s. Similarly, the full generation model pθ(a | s) in Eq.(1) that applies softmax to fθ now precisely corresponds to the policy πθ induced from Qθ(s, a). That is, πθ(a | s) = expQθ(s, a)∑ a′ expQθ (s, a ′) ≡ exp fθ(a | s)∑ a′ exp fθ (a ′ | s) = pθ(a | s). (6) We could further gain even more intuitive interpretation of the above generation policy π∗ from the lens of advantage function (Sutton & Barto, 2018). Specifically, in SQL, the optimal state-value function is the log-normalizer of the optimalQ-values (Haarnoja et al., 2017; Schulman et al., 2017). This allows us to rewrite Eq.(5) into a more concise form: V ∗ (s) = log ∑ a′ expQ∗ ( s, a′ ) , π∗(a | s)= exp ( Q∗(s, a)−V ∗(s) ) = expA∗(s, a), (7) where A∗ is the optimal advantage function. The equation says that, in the proposed text generation SQL formulation, the optimal policy generates token a in state s according to the token’s advantage. 3.2 EFFICIENT TRAINING WITH PATH CONSISTENCY The above section has described parameterizing the Q-function with the common generation model with parameters θ. Now we present how to learn theQθ function within the SQL framework. Vanilla training based on the Bellman temporal consistency can suffer from the instability and inefficiency issues similar to the conventional Q-learning (§2.1), as we discuss more in the appendix (§A.3.2). Fortunately, our SQL formulation allows us to import latest advances of RL techniques to the text generation setting that overcome the difficulties. Specifically, we adapt the unified path consistency learning (PCL) that has excelled in game control (Nachum et al., 2017). The PCL-based training updates Q-values of all tokens at once through a connection between the value function and the induced policy. More specifically, it is shown in Nachum et al. 
(2017) that the optimal policy π∗ (Eq.5) and the optimal state value function V ∗ (Eq.7) in SQL must satisfy the following consistency property for all states and actions: V ∗ (st)− γV ∗ (st+1) = rt − log π∗ (at | st) , ∀st, at. (8) Accordingly, the PCL-based training attempts to encourage the satisfaction of the consistency with the following regression objective: LSQL, PCL(θ) = Eπ′ [ 1 2 ( − Vθ̄ (st) + γVθ̄ (st+1) + rt − log πθ (at | st) )2] , (9) where πθ is the induced policy defined in Eq.(6); Vθ̄ is defined similarly as in Eq.(7) but depends on the target Qθ̄ network (i.e., a slow copy of the Qθ to be learned), and recall that π ′ is an arbitrary behavior policy (e.g., data distribution). Please see Figure 2 (left) for an illustration. Crucially, notice that the gradient update is applied to θ through the log πθ term which explicitly involves the Qθ-values of all tokens a in the vocabulary. This shows an important difference from the above vanilla training in conventional Q-learning (§2.1) where Qθ is updated only through the particular at token. The PCL training thus offers more efficient updates for the Qθ function. Multi-step PCL for Sparse Reward. The above PCL objective Eq.(9) alone does not resolve the potential instability issue due to the bootstrapped Vθ̄(st+1) value and the sparse reward (i.e., r(st, at) = 0 for t < T ). Our SQL formulation allows us to additionally incorporate the multi-step variant of the PCL training (Nachum et al., 2017) to resolve the issue. Specifically, by applying a telescoping sum on the consistency equation (Eq.8) starting from t up to T , we arrive at the multistep temporal consistency: V ∗ (st)− γT−tV ∗ (sT+1) = ∑T−t l=0 γl ( rt+l − log π∗ (at+l | st+l) ) , (10) where the value of past-terminal state is zero, V ∗ (sT+1) = 0; and the rewards are only available at the end, ∑T−t l=0 γ lrt+l = γ T−trT . We can then come to the following multi-step objective function, LSQL, PCL-ms(θ) = Eπ′ [ 1 2 ( −Vθ̄ (st) + γ T−trT − ∑T−t l=0 γl log πθ (at+l | st+l) )2] . (11) We can see the objective side-steps the need to bootstrap intermediate value functions Vθ̄(st′) for t′ > t. Instead, it directly uses the non-zero end reward rT to derive the update for θ. Please see Figure 2 (right) for an illustration. In practice, we combine the single- and multi-step objectives (Eqs.9 and 11) together for training. Joint On- and Off-policy Training. Finally, we highlight that the behavior policy π′ involved in the objectives Eqs.(9) and (11) can be an arbitrary policy (i.e., distribution over text sequences), from which we can draw trajectories τ (i.e., text samples). For example, π′ can be a (possibly noisy) text dataset, or a set of text samples produced by other generation models, resulting in off-policy training. We can also set π′ to be the current generation model πθ to be learned, resulting in onpolicy training. In practice, we could first train the model with only off-policy data for warming up, and then continue with joint on- and off-policy training to further maximize the reward. 
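A minimal sketch of the two PCL objectives (Eqs. 9 and 11) for a single trajectory with a terminal-only reward is given below, taking γ = 1 for brevity; the tensor layout and the plain sum of the two losses are our own simplifications of the description above (the paper combines the single- and multi-step objectives but the exact weighting is not shown here).

```python
import torch

def pcl_losses(logits, target_logits, actions, final_reward):
    """Single- and multi-step path consistency losses (Eqs. 9 and 11) for one
    sequence of length T with reward only at the final step (gamma = 1).

    logits:        [T, V] Q_theta(s_t, .) along the trajectory
    target_logits: [T, V] Q_theta_bar(s_t, .) from the slow target network
    actions:       [T]    a_t drawn from the behavior policy (data or model samples)
    """
    T = actions.shape[0]
    idx = torch.arange(T)
    log_pi = logits - torch.logsumexp(logits, dim=-1, keepdim=True)   # log pi_theta(.|s_t), Eq. 6
    log_pi_a = log_pi[idx, actions]                                   # log pi_theta(a_t|s_t)
    with torch.no_grad():
        v_bar = torch.logsumexp(target_logits, dim=-1)                # V_theta_bar(s_t), Eq. 7

    rewards = torch.zeros(T)          # r_t = 0 for t < T, r_T = final_reward
    rewards[-1] = final_reward

    # Single-step PCL (Eq. 9): -V(s_t) + V(s_{t+1}) + r_t - log pi(a_t|s_t), with V(s_{T+1}) = 0.
    v_next = torch.cat([v_bar[1:], torch.zeros(1)])
    single = 0.5 * (-v_bar + v_next + rewards - log_pi_a).pow(2).mean()

    # Multi-step PCL (Eq. 11): -V(s_t) + r_T - sum_{l >= t} log pi(a_l|s_l).
    suffix_log_pi = torch.flip(torch.cumsum(torch.flip(log_pi_a, [0]), dim=0), [0])
    multi = 0.5 * (-v_bar + final_reward - suffix_log_pi).pow(2).mean()

    return single + multi
```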
Algorithm 1 Efficient Soft Q-Learning for Text Generation Input: Qθ function (i.e., generation model logit function fθ in Eq.1) Reward function r(s, t) Training examples D (for off-policy updates; optional) 1: Initialize θ and target model parameters θ̄ 2: repeat 3: Draw a batch of off-policy samples {τoff} ∼ D 4: Draw a batch of on-policy samples {τon} by decoding with policy πθ(at | st) (Eq.6) 5: Compute Qθ(st, at) values (i.e., the model logits) and target Qθ̄(st, at) for (st, at) ∈ {τoff} ∪ {τon} 6: Compute the objectives in Eqs.(9) and (11) 7: Update the model parameters θ via gradient descent 8: Update the target model parameters θ̄ by θ̄ ← ρθ̄ + (1− ρ)θ with update rate ρ 9: until convergence Output: The trained Qθ∗ function and the induced generator πθ∗ 4 APPLICATIONS AND EXPERIMENTS 4.1 LEARNING FROM NOISY (NEGATIVE) TEXT The popular MLE algorithm learns by (blindly) imitating training data. However, it is often expensive to curate clean quality data. It is thus highly desirable to be able to learn from data with noises, or even negative examples. With the guidance of task metrics (rewards), the model can even learn to “outperform” the training data and achieve desired generation behaviors. To this end, we consider the task of entailment generation (Pasunuru & Bansal, 2017). Given a sentence (premise), the goal is to generate a new sentence (hypothesis) that logically follows the premise. For example, given source sentence “Sophie is walking a dog outside her house”, the hypotheses “Sophie is outdoor” is considered entailed, but “Sophie is inside her house” is not and even is a negative (contradictive) sentence. Setup (more in the appendix §A.2.1). We sub-sampled 50k training examples from the SNLI dataset (Bowman et al., 2015), a commonly used entailment classification dataset. The hypotheses have an average entailment probability of only 50%, and over 2/5 of them less than 20% (negative/contradictive examples). This poses a significant challenge for the models to learn from the noises. The rewards used in RL algorithms include (1) the entailment score of the generation measured by a robust entailment classifier (Nie et al., 2020), (2) the log-likelihood of the generation as an indicator of language quality measured by a GPT-2 language model (Radford et al., 2019), and (3) BLEU score w.r.t the input premises as another language quality reward that avoids trivial outputs. We sum together all rewards with weights 1.0. We compare our approach with a broad range of baselines, including (1) the standard MLE training (MLE); (2) MLE+reward, where we use the reward function to filter examples; (3) joint MLE and PG training with MLE initialization (MLE+PG), where we initialize the model with MLE training, then train it with combined MLE and PG losses; previous text-generation RL algorithms including (4) MIXER (Ranzato et al., 2016), (5) Self-critic (Rennie et al., 2017), and (6) one of the latest methods GOLD-s (Pang & He, 2021) which is a pure off-policy method based on importancesampling PG. To ablate the effect of multi-step training (§3.2), we additionally compare with a simplified variant of our approach that uses only vanilla single-step PCL training (SQL(single)). In the appendix (§A.1.1) we compare and discuss more baselines such as MLE weighted by rewards. We evaluate generation results in terms of entailment rate, language quality (perplexity), and diversity which is measured by the Shannon entropy over unigrams and bigrams (H1, H2) (Gehrmann et al., 2021). 
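Returning to Algorithm 1, its outer loop can be sketched as below; `model.sample()`, `model.logits()`, `reward_fn`, and the reuse of the `pcl_losses` helper sketched earlier are stand-in assumptions rather than the authors' code.

```python
import copy
import torch

def train_sql(model, reward_fn, data_iter, steps=10_000, lr=1e-4, rho=0.999):
    """Outer loop of Algorithm 1: joint off-policy (data) and on-policy (sampled)
    updates with a slowly-updated target network."""
    target = copy.deepcopy(model)              # theta_bar, a slow copy of theta
    for p in target.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    for step in range(steps):
        off_batch = next(data_iter)                         # off-policy trajectories from data D
        on_batch = model.sample()                           # on-policy trajectories from pi_theta
        loss = 0.0
        for traj in list(off_batch) + list(on_batch):
            logits = model.logits(traj.tokens)              # Q_theta(s_t, .)
            with torch.no_grad():
                tgt_logits = target.logits(traj.tokens)     # Q_theta_bar(s_t, .)
            loss = loss + pcl_losses(logits, tgt_logits, traj.tokens, reward_fn(traj))

        opt.zero_grad()
        loss.backward()
        opt.step()

        # Target update (Algorithm 1, line 8): theta_bar <- rho*theta_bar + (1-rho)*theta.
        with torch.no_grad():
            for tp, p in zip(target.parameters(), model.parameters()):
                tp.mul_(rho).add_((1.0 - rho) * p)
    return model
```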
Since text generation models intrinsically trade off diversity and quality (Caccia et al., 2019; Hashimoto et al., 2019), we vary the generation diversity by generating samples via top-p sampling (Holtzman et al., 2019) with different p values, and plot the entailment rate and perplexity against diversity, resp. We also evaluate the samples produced by beam-search decoding. Results. Figure 3 (left) shows the results. First, notice that MLE performs poorly, while MLE+reward improves upon it. This is not surprising as the training data contain noisy/negative examples. Similarly, since the pure off-policy algorithm GOLD-s relies heavily on the data distribution, we observed that it achieves sub-optimal performance. The on-policy MLE+PG with MLE initialization gives better entailment rate. In comparison, our full SQL framework achieves the best entailment-diversity trade-off. The comparison between SQL and SQL(single) highlights the importance of having the multi-step objective which directly uses the end reward rather than bootstrapping intermediate Q-values for supervision. 4.2 Universal ADVERSARIAL ATTACKS We next study the application in text adversarial attacks, where again no supervised data is available. Adversarial attacks is an increasingly important research topic as they reveal models’ vulnerabilities and flaws. This is especially true for universal attacks (Wallace et al., 2019; Atanasova et al., 2020), where we want to generate universal examples that trick the model on all possible inputs. For instance, consider the context of entailment classification. Our goal is to find universal humanreadable hypotheses that are going to be classified as “entailment” with as high probability as possible, regardless of the input premises. This is a more challenging setting compared to previous instance-specific attack (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017) where the attack model conditions on a premise and generates an adversarial hypothesis specific to the premise. Setup (more in the appendix §A.2.2). We aim to attack one of the most popular MultiNLI (Williams et al., 2018) entailment classifiers on HuggingFaceHub.2 The attack generation model generates adversarial text without conditioning on any inputs so that the generated attacks are universal to all premises. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and the off-policy updates for our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the challenging universal attack setting (Morris et al., 2020; 2https://github.com/pytorch/fairseq/tree/master/examples/roberta Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). We use similar settings as in §4.1 to explore the diversity-quality trade-off by plotting the entailment rate and perplexity against diversity, respectively. The entailment classifier to be attacked is used as entailment score reward functions. We additionally include a token-level repetition penalty reward for readability. Results. Figure 3 (right) shows the results, and Table 4 (appendix) shows samples. We can see that SQL outperforms MLE+PG consistently across different diversity values. The outputs from MLE+PG are not diverse even with high p’s, indicating the model collapses and can only generate a small set of unique adversarial examples. 
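The diversity measure used on the x-axes of these plots, the Shannon entropy over unigrams and bigrams (H1, H2), can be computed as in the short sketch below; whitespace tokenization and base-2 logarithms are our own assumptions, not a specification from the paper or from Gehrmann et al. (2021).

```python
import math
from collections import Counter

def shannon_entropy(items):
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total, 2) for c in counts.values())

def diversity_h1_h2(samples):
    """H1/H2: Shannon entropy of the unigram and bigram distributions over samples."""
    tokens = [tok for s in samples for tok in s.split()]
    bigrams = list(zip(tokens, tokens[1:]))
    return shannon_entropy(tokens), shannon_entropy(bigrams)
```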
The model by SQL discovers the pattern “saint-pierre-et-saint-paul” (an entity name), and exploits this to generate samples with high universal entailment rate. 4.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS A reward function does not just have to be a metric like the BLEU score, but also a complicated pipeline that eventually returns a score. To demonstrate this, we consider the emerging task of prompting a large pretrained LM for controllable generation (Hu et al., 2017; Radford et al., 2019; Brown et al., 2020). The goal is to learn to generate text prompts that steer the LM to generate sentences of certain desired attributes (e.g., topics). The problem of controlling the generation of pretrained LMs was previously approached through specialized algorithms such as modifying the LM hidden states during decoding (Dathathri et al., 2020; Krause et al., 2020; Qin et al., 2020). Here we show that prompts offer an easier, faster, more effective way for controlled generation. Learning to generate/tune prompts is gaining increasing attention recently. It side-steps the needs for expensive LM fine-tuning, and adapts LMs to new scenarios with prompt as the (computefriendly) interface. Most existing approaches (Wallace et al., 2019; Li & Liang, 2021; Lester et al., 2021) rely on gradient backpropagation and are applicable only when the whole training pipeline is differentiable. This does not hold for the text generation setting, as illustrated in Figure 4. In contrast, the RL framework is generally applicable to any differentiable or discrete pipelines. Setup (more in the appendix §A.2.3). Following (Dathathri et al., 2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences for the pretrained LM to generate continuation sentences. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt length 5, 10, and 15. As ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without on-policy exploration), denoted as SQL(off), and compare it with vanilla MLE training. Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model3 as the pretrained LM to be controlled. For rewards, we use the topic accuracy of the continuation sentences measured by a zero-shot classifier, plus the the log-likelihood of continuation sentences as the language quality reward measured by a distilled GPT-2. Results Figure 5 shows the topic accuracy of the controlled LM outputs averaged across the 7 topics, and Table 1 shows the respective language quality results. More detailed topic accuracy results and samples are provided in the appendix (§A.1.3) (where GeDi obtained low accuracy on 2 of the 7 topics, possibly because the topic tokens are tokenized into two subwords for which the model released by the authors was not specifically trained). We can see that the prompts generated by our SQL cause the LM to generate sentences with high topic accuracy while maintaining low perplexity in most settings. 
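The pipeline of Figure 4 that produces these topic rewards can be sketched as follows; the HuggingFace identifiers `distilgpt2` and `facebook/bart-large-mnli` are the models named in the appendix, while the sampling settings, the helper structure, and the omission of the language-quality term are simplifications for illustration.

```python
from transformers import pipeline

# Frozen pretrained LM to be controlled, and the zero-shot topic classifier used as reward model.
generator = pipeline("text-generation", model="distilgpt2")
topic_clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def prompt_reward(prompt, input_sentence, topic, topics, n_continuations=2):
    """Reward for a generated prompt: average zero-shot probability that the LM's
    continuations of [prompt + input sentence] belong to the target topic."""
    context = prompt + " " + input_sentence
    outputs = generator(context, max_new_tokens=30, do_sample=True, top_k=50,
                        num_return_sequences=n_continuations)
    scores = []
    for out in outputs:
        continuation = out["generated_text"][len(context):]
        result = topic_clf(continuation, candidate_labels=topics)
        scores.append(result["scores"][result["labels"].index(topic)])
    return sum(scores) / len(scores)

# Illustrative call (TOPICS would be the list of 7 topic names):
# prompt_reward("space rockets research", "The issue focused on", "science", TOPICS)
```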
Increasing the prompt length positively impacts the topic accuracy, which makes sense because longer prompts give more flexible for steering the LM. The comparison between MLE and SQL(off) shows that the off-policy component of SQL is better than standard MLE training, as it incorporates reward signals instead of just blindly following the (noisy) data. Next, comparing with the previous steered decoding such as PPLM and GeDi, we can see the prompt-based control trained with RL achieves better trade-off between topic accuracy and language quality. Moreover, once a prompt is produced, we can use the pretrained LM to generate text of desired topics efficiently, with the same time cost as standard non-controlled decoding. In comparison, the dedicated steered decoding is often orders-of-magnitude slower, as shown in Table 2. 5 RELATED WORK Standard RL algorithms maximizing the external rewards can sometimes be over-sensitive to the randomness in the environment. Recent works have considered maximum-entropy RL (MaxEnt RL) extensions, such as the soft Q-learning (SQL) (Haarnoja et al., 2017; Nachum et al., 2017; Schulman et al., 2017), that maximize the entropy of policy besides the rewards, and have demonstrated substantial improvement in robotic and game control (Ziebart et al., 2008; O’Donoghue et al., 2017; Nachum et al., 2018; Eysenbach & Levine, 2021). Our work is the first to adapt SQL and its advanced variants (in particular the path consistency learning (Nachum et al., 2017)) to the challenging text generation problem and show significant results on diverse applications. Applying RL for text generation has been discussed in alleviating the exposure bias problem and optimizing task metrics (Li et al., 2016; Wu et al., 2016; Rennie et al., 2017; Paulus et al., 2018; Chen & Bansal, 2018). For example, Ranzato et al. (2016) used the REINFORCE algorithm (Williams, 1992), and Bahdanau et al. (2016) used the actor-critic algorithm; Guo et al. (2018) and Shi et al. (2018) tried to relieve the sparsity problem via hierarchical and inverse RL methods, resp. They are all on-policy RL algorithms with the need of pretraining their models using MLE. Another line of work focused mostly on using only off-policy data, often for offline training of chatbots (Jaques et al., 2020; Kandasamy et al., 2017; Zhou et al., 2017; Pang & He, 2021). As a result, the opportunity of directly improving the reward (as in on-policy updates) for other rich tasks is missed. Our proposed framework combines on- and off-policy training, and further offers solutions for efficient training from scratch in the presence of large action space and sparse sequence-level reward in text generation. 6 CONCLUSION We develop a new RL formulation for text generation based on softQ-learning and path consistency learning. We conduct experiments on learning with noisy and negative data, black box adversarial attack, prompting a pretrained language model for controllable generation, and finally, on standard supervised tasks. The RL formulation opens up enormous new opportunities to integrate more advances made in the fertile RL literature to improve text and other sequence generation problems. 3https://huggingface.co/distilgpt2 7 ETHICS STATEMENT This work develops a new RL formulation for text generation. While we demonstrate the framework in four applications, it could be adapted to other (emerging) applications. 
One major component in these applications is the design of the reward function, which influences the behavior of the trained agent. While we believe the MaxEnt RL framework is more robust against reward misspecification (Eysenbach & Levine, 2021), the potential failures of sub-optimal reward functions are widely known and discussed.4 To this end, deploying this model to the wild requires careful and extensive examination, using tools such as Ribeiro et al. (2020). Further, we highlight the application for (black-box) adversarial attacks in the paper, with the intention of using adversarial attacks to understand the model’s inner workings. That being said, this could potentially be misused to conduct malicious attacks against systems. Hence, users of this framework might want to conduct adversarial attacks against their own models to avoid being attacked by other people with bad intentions. 8 REPRODUCIBILITY STATEMENT We provide code in the supplementary materials, and additional experiment details in the appendix. A APPENDIX A.1 APPLICATIONS AND EXPERIMENTS A.1.1 LEARNING FROM NOISY (NEGATIVE) TEXT Please see Table 5 for beam search results, Figure 6 for additional results for MLE+reward, and Table 7 for examples. A.1.2 Universal ADVERSARIAL ATTACKS Please see Table 4 for examples. A.1.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS Please see Table 6 for detailed results breakdown, and Table 8-11 for examples. Examples are in the format: topic: [prompt] input sentence generated text. A.1.4 SUPERVISED TEXT GENERATION TASKS Finally, we conduct experiment on standard generation tasks where clean supervised data is available. The study is to examine the capabilities of the proposed RL method to train a text generation model from scratch, which has been considered as exceedingly challenging for previous RL algorithms. Setup. We study on two tasks, E2E (Novikova et al., 2017) and CommonGEN (Lin et al., 2020), and use the respective datasets pre-processed by (Gehrmann et al., 2021) which allow sequenceto-sequence modeling with standard transformers. We run four sets of methods: the standard MLE training (MLE); PG training from scratch (PG); joint MLE and PG training, with MLE ini- tialization (MLE+PG); and our SQL training from scratch with both off-policy and on-policy updates (SQL). We use the standard BLEU as reward. We additionally investigate the training stability and sensitivity w.r.t hyperparameters, in particular the scale of reward. To this end, for MLE+PG and SQL, we vary the reward scale in {1, 10, 50, 100, 500, 1000} and evaluate the respective performance under different scales. Results. Table 3 shows the performance on E2E of different models whose hyperparameters are picked using the validation set. We can see the proposed SQL that trains models from scratch achieves competitive results with the common MLE and MLE+PG. In contrast, the PG algorithm alone without MLE fails the training. Figure 7 (left) shows the respective training curves (on the validation set), demonstrating that SQL converges in an efficient and stable way as MLE. We further demonstrate the sensitive of MLE+PG and SQL w.r.t the reward scale as a key hyperparameter. Figure 7 (middle and right) shows the training curves of the two methods with varying reward scales. We can see SQL is significantly more robust as reward scale changes, while MLE+PG tends to collapse with improper reward scale configurations. 
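The BLEU reward and the reward-scale hyperparameter studied above can be made concrete with a short sketch; NLTK's sentence-level BLEU with smoothing is an illustrative stand-in for whatever BLEU implementation the experiments actually used.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_reward(hypothesis, references, scale=10.0):
    """Sequence-level BLEU reward, multiplied by the reward-scale hyperparameter
    varied over {1, 10, 50, 100, 500, 1000} in the sensitivity study."""
    smooth = SmoothingFunction().method1
    refs = [r.split() for r in references]
    return scale * sentence_bleu(refs, hypothesis.split(), smoothing_function=smooth)
```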
A.2 SETUP DETAILS

Our evaluation follows the GEM Benchmark (Gehrmann et al., 2021) when applicable,5 and otherwise uses the same metrics as the reward functions used in training. We use a transformer model (Vaswani et al., 2017) based on Texar-Pytorch (Hu et al., 2019) by default, with hidden dimension 64, 3 blocks, and 4 heads. For experiments that involve policy gradient training, we initialize the model with maximum likelihood training by default unless specified otherwise. We train the soft Q-learning model from scratch with both off-policy (using data) and on-policy (using samples) updates by default, except in §4.1 and §4.3, in which we find it beneficial to warm up the model with just off-policy training. We apply similar tuning budgets to both the soft Q-learning and policy-gradient models (tuning mostly the reward scale and top-k), based on performance on the validation dataset and sample quality.

Reward Functions. We use the robust entailment classifier (Nie et al., 2020) in §4.1,6 one of the most used entailment classifiers on HuggingFaceHub in §4.2,7 and a zero-shot classifier based on BART (Lewis et al., 2020) to compute the topic score in §4.3.8 To compute perplexities, we use a GPT-2 model (Radford et al., 2019) fine-tuned on the corresponding datasets in §4.1 and §4.2, and a distilled GPT-2 model without fine-tuning in §4.3.9 We simply set all reward weights to 1.0, except in §4.2, where we set the entailment weight to 0.5 and the log-likelihood and repetition-penalty weights to 5.0.

A.2.1 SETUP DETAILS: §4.1

We use the SNLI dataset (Bowman et al., 2015), a dataset commonly used for training entailment classifiers. The original dataset contains (premise, hypothesis) sentence pairs, where the hypothesis may or may not be entailed by the premise. We sub-sampled 50,000 training examples from the corpus such that the hypotheses have an average entailment probability of only 50% w.r.t. their premises, and over 2/5 of the examples have entailment probabilities below 20%, which can be seen as negative (contradictive) examples. The resulting training set poses a significant challenge for the models to learn from the noise. The RL algorithms (including PG and ours) permit us to plug in arbitrary reward functions to drive learning. Based on the goal of the task, we use the following intuitive rewards to ensure entailment accuracy and language quality: (1) a robust entailment classifier (Nie et al., 2020) that measures the entailment score of a generation with respect to the input premise, (2) a GPT-2 language model (Radford et al., 2019) that measures the log-likelihood of the generation as an indicator of language quality, and (3) the BLEU score w.r.t. the input premise as another language-quality reward that avoids trivial outputs. We sum all rewards together with weights 1.0.

A.2.2 SETUP DETAILS: §4.2

We study the task of attacking an entailment classifier. In particular, we aim to attack one of the most popular entailment classifiers on HuggingFaceHub.10 The attack generation model generates adversarial text without conditioning on any inputs, so that the generated attacks are universal to all premises. The generation model is trained with mostly the same setting as in §4.1, where the entailment classifier to be attacked is used as the entailment-score reward function.

5https://github.com/GEM-benchmark/GEM-metrics
6https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli
7https://github.com/pytorch/fairseq/tree/master/examples/roberta. This classifier is ranked #1 (as of May 20, 2021) based on https://huggingface.co/models?search=nli.
8https://huggingface.co/facebook/bart-large-mnli
9https://huggingface.co/distilgpt2
10https://github.com/pytorch/fairseq/tree/master/examples/roberta, which is ranked #1 as of May 20, 2021 based on https://huggingface.co/models?search=nli.

Besides, we additionally include a token-level repetition penalty reward, which empirically benefits readability. Finally, we use the MultiNLI dataset (Williams et al., 2018), which includes more diverse examples than the SNLI dataset used above. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and for the off-policy updates of our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the universal attack setting (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). Besides, it is worth noting that the general RL algorithms have the additional advantage of enabling black-box attacks. That is, the algorithms only require the ability to query the entailment classifier for an entailment probability, without needing to know the internal structure of the classifier (e.g., for computing gradients) as in previous attack algorithms (Ebrahimi et al., 2017; Wallace et al., 2019). For top-p sampling results, we sample one hypothesis for each premise and measure the average attack rate across the dataset. This is because sampling multiple hypotheses per premise and measuring the performance of each against all premises would be expensive. Since the hypotheses are sampled independently of the inputs, this should be a good approximation.

A.2.3 SETUP DETAILS: §4.3

Following Dathathri et al. (2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences (Figure 4) for the pretrained LM to generate continuation sentences. There is no direct supervision data available for training the prompt generator. We randomly create some noisy text as the training data for the MLE baselines below and for the off-policy updates of our algorithm. Specifically, the noisy text is created by sampling keywords and topics from the list used in Dathathri et al. (2020) and applying a paraphrase generation model. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. At training time, for each generated prompt sample, the pretrained LM generates 2 continuation sentences for evaluating the average reward. We use a zero-shot classifier to evaluate the topic accuracy of the continuation sentences. That is, we do not assume access to classifiers pretrained on topic-specific sentences, because generating such topic-specific sentences is the goal of the task in the first place. We additionally use an LM to evaluate the log-likelihood of continuation sentences for measuring language quality. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt lengths of 5, 10, and 15. As an ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without on-policy exploration), denoted SQL(off), and compare it with vanilla MLE training. At test time, given a topic, the trained prompt generator produces one prompt using beam-search decoding. For each generated prompt, the pretrained LM generates 100 sentences using top-k decoding (with k = 50) for evaluation, as sketched below.
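For concreteness, the following is a minimal sketch of this test-time evaluation loop. It is our illustration under stated assumptions rather than the authors' released code: the model names follow footnotes 8 and 9 (distilgpt2 and facebook/bart-large-mnli), the topic list shown is abbreviated (the paper uses 7 topics), and generation settings other than top-k = 50 and no_repeat_ngram_size = 2 are illustrative.

```python
# Minimal sketch (assumptions, not the authors' code): evaluate a generated prompt by
# prepending it to an input sentence, sampling continuations from the pretrained LM with
# top-k decoding, and scoring topic accuracy with a zero-shot classifier.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")   # pretrained LM to be controlled
lm = AutoModelForCausalLM.from_pretrained("distilgpt2")
topic_scorer = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def topic_accuracy(prompt, input_sentence, target_topic,
                   topics=("science", "politics", "sports"), num_samples=100):
    """Fraction of sampled continuations whose top zero-shot label matches the target topic."""
    ids = tokenizer(prompt + " " + input_sentence, return_tensors="pt").input_ids
    outputs = lm.generate(ids, do_sample=True, top_k=50, max_new_tokens=30,
                          num_return_sequences=num_samples, no_repeat_ngram_size=2,
                          pad_token_id=tokenizer.eos_token_id)
    continuations = [tokenizer.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in outputs]
    hits = sum(topic_scorer(c, candidate_labels=list(topics))["labels"][0] == target_topic
               for c in continuations)
    return hits / num_samples
```

Note that, as argued in Table 2 of the main text, the only overhead relative to standard non-controlled decoding is the handful of extra prompt tokens prepended to the input.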
Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model11 as the pretrained LM to be controlled. We use the paraphrase generation model based on Zhang et al. (2019).12 During decoding, we set no_repeat_ngram_size = 2, which improves readability.13

11https://huggingface.co/distilgpt2
12https://huggingface.co/tuner007/pegasus_paraphrase
13https://huggingface.co/blog/how-to-generate

A.3 THE SOFT Q-LEARNING FRAMEWORK

A.3.1 COMPARISON WITH MLE OBJECTIVE

It is interesting to take a closer look at the above objective and compare it with the common MLE training. Specifically, we notice the relations between the optimal $Q^*$, $V^*$, and $A^*$ functions: $A^*(s_t, a_t) = Q^*(s_t, a_t) - V^*(s_t) = r_t + \gamma V^*(s_{t+1}) - V^*(s_t)$, where the first equality is the definition of $A^*$ (see Eq.7) and the second equality is due to Eqs.(12) and (7). We thus can see the regression target in the above objective as an approximation to the advantage function: $\tilde{A}_{\bar{\theta}}(s_t, a_t) := -V_{\bar{\theta}}(s_t) + \gamma V_{\bar{\theta}}(s_{t+1}) + r_t$. Therefore, by optimizing the regression objective, $\log \pi_{\theta}(a_t \mid s_t)$, which is the log probability of generating token $a_t$ given the preceding tokens $s_t$, is encouraged to match the approximate advantage value $\tilde{A}_{\bar{\theta}}(s_t, a_t)$, no more and no less. This is different from the objective of MLE, where the model is trained to (blindly) increase the probability of the observed token $a_t$ given $s_t$ and decrease the probabilities of the rest.

A.3.2 VANILLA TRAINING WITH TEMPORAL CONSISTENCY

Much like the Bellman temporal consistency in standard Q-learning (Eq.3), in SQL the optimal action-value function follows the softmax form of temporal consistency (Ziebart et al., 2008; Ziebart, 2010; Fox et al., 2016; Nachum et al., 2017):
$Q^*(s_t, a_t) = r_t + \gamma \log \sum_{a_{t+1}} \exp Q^*(s_{t+1}, a_{t+1})$. (12)
We thus can derive a regression objective similar to that of standard Q-learning (Eq.4):
$\mathcal{L}_{\text{SQL, vanilla}}(\theta) = \mathbb{E}_{\pi'}\Big[\, 0.5 \cdot \big( r_t + \gamma \log \sum_{a_{t+1}} \exp Q_{\bar{\theta}}(s_{t+1}, a_{t+1}) - Q_{\theta}(s_t, a_t) \big)^2 \Big]$. (13)
Recall that $\pi'$ is an arbitrary behavior policy (e.g., the data distribution), and $Q_{\bar{\theta}}$ is the target Q-network, which is a slow copy of the $Q_{\theta}$ to be learned and is held fixed during the gradient updates. However, the above objective is inefficient for exactly the same reasons as in standard Q-learning discussed earlier, namely the unstable per-step bootstrapping-style training with sparse reward signals, plus the slow updates w.r.t. only one token $a_t$ out of the large vocabulary (action space).
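To make Eq.(13) concrete in the text generation setting, below is a minimal PyTorch sketch of the vanilla soft Q-learning regression loss, using the identification of the generation model's output logits with Q-values from §3.1. The tensor shapes, function name, and batching scheme are our assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (illustration under assumptions, not the authors' implementation):
# the vanilla soft Q-learning regression objective of Eq.(13). The model logits play
# the role of Q_theta, target_logits come from a frozen slow copy Q_theta_bar, and
# rewards are sparse (non-zero only at the final step).
import torch

def sql_vanilla_loss(logits,          # Q_theta(s_t, .):     (batch, T, vocab)
                     target_logits,   # Q_theta_bar(s_t, .): (batch, T, vocab), no grad
                     actions,         # taken tokens a_t:    (batch, T), long
                     rewards,         # r_t, mostly zeros:   (batch, T)
                     gamma: float = 1.0):
    # Q_theta(s_t, a_t): logit of the taken token at each step.
    q_taken = logits.gather(-1, actions.unsqueeze(-1)).squeeze(-1)        # (batch, T)

    # V_theta_bar(s_{t+1}) = log sum_a exp Q_theta_bar(s_{t+1}, a);
    # the value of the past-terminal state s_{T+1} is defined as 0.
    v_target = torch.logsumexp(target_logits, dim=-1)                     # (batch, T)
    v_next = torch.cat([v_target[:, 1:], torch.zeros_like(v_target[:, :1])], dim=1)

    # Bootstrapped regression target r_t + gamma * V_theta_bar(s_{t+1}), held fixed.
    target = (rewards + gamma * v_next).detach()
    return (0.5 * (target - q_taken) ** 2).mean()
```

As the section notes, this per-step bootstrapped target is exactly what the single- and multi-step PCL objectives (Eqs. 9 and 11) replace, by regressing log-policy terms against value differences and, in the multi-step case, against the non-zero end-of-sequence reward directly.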
1. What is the main contribution of the paper regarding text generation models? 2. What are the strengths and weaknesses of the proposed approach compared to previous RL-based methods and task-specific methods? 3. How does the reviewer assess the novelty and applicability of the adapted path consistency learning approach in the text generation setting? 4. Are there any concerns or suggestions regarding the experiments and performance comparisons in the paper? 5. Can you provide additional explanations or references regarding the related work mentioned in the review, such as the SBEED algorithm?
Summary Of The Paper Review
Summary Of The Paper This paper considers the problem of learning text generation models using reinforcement learning. The problem is challenging in that RL algorithms become inefficient or unstable when dealing with the large action space and sparse reward situations in text generation. To address these problems, this paper adapts the path consistency learning approach to the text generation setting. It allows training the model with both on- and off-policy samples, bridges the sparse reward signal to directly supervise the Q-function learning, and makes efficient updates to Q-values by considering all candidate actions together. Experimental results show that the proposed approach is effective in solving a wide range of applications where MLE is not applicable, and it achieves better performance than previous RL-based methods as well as task-specific methods. In addition, on standard MLE-based tasks, the proposed approach can also achieve competitive performance when training the models from scratch. Review Strength: This paper establishes a connection between the text generation problem and path consistency learning. It leads to a principled and elegant RL-based text generation method that naturally inherits many desirable advantages from the PCL algorithm, such as being capable of using both on- and off-policy samples. It is widely applicable to text generation tasks where MLE training is not directly applicable, and demonstrates superior performance on 3 such tasks. Weakness: The work is almost a direct application of PCL to text generation, and there is no newly developed RL algorithm for text generation. This may not be a serious weakness given that this paper mainly focuses on the RL-based text generation problem, and the good performance could be encouraging for research on RL-based text generation methods. The experiments on standard MLE-based tasks are relatively small-scale. Demonstrating competitive performance of SQL on standard machine translation tasks would be more convincing. Comments: It would be helpful to explain the intuition of the single-step PCL loss (9) and the multi-step PCL loss (10) in the text generation context. In particular, comparing their intuition to the MLE loss could make the method better received in the NLP community. It seems that the perplexity of the proposed method (SQL full) is not as good as the others (Figure 3 Middle and also Table 1). The authors may need to discuss this more thoroughly. Compared to the original PCL learning, the SBEED algorithm [Dai et al 2018] provides a provably stable algorithm for path consistency learning when there is nonlinear function approximation (e.g., by using deep neural networks to parameterize the Q-function, the state-value function, and the policy here). [Dai et al 2018]: “SBEED: Convergent reinforcement learning with nonlinear function approximation”, Proc. ICML 2018.
ICLR
Title Text Generation with Efficient (Soft) $Q$-Learning Abstract Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL) on the other hand offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of sequences. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw from the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates, and learn effectively from sparse reward. We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and the previous RL methods. 1 INTRODUCTION Recent natural language generation systems have made remarkable progress in producing wellformed coherent text, especially with the massive pretrained language models (LMs) (Radford et al., 2019; Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2019). Those models are typically trained using maximum likelihood estimation (MLE) with a large amount of data supervisions. Despite its successful outcomes, the standard training method suffers from limited applicability to many emerging text generation problems, where little or no supervised data is available. Prominent examples of such low-data problems include generating prompts to control the massive LMs (Yin et al., 2019; Shin et al., 2020; Zhong et al., 2021), learning text generation from noisy or even negative data, generating adversarial text attacks for robustness study (Wallace et al., 2019; Atanasova et al., 2020), and others (Figure 1, right). Due to the failure of standard MLE, people have had to devise specialized algorithms for those problems respectively. On the other hand, reinforcement learning (RL) (Sutton & Barto, 2018) offers an alternative principled framework for learning from arbitrary reward functions, and has achieved great advances in robotic and game control. However, RL by far has made limited success for training text generation, primarily due to the key challenges of sparse reward (i.e., a single reward signal is received only after the whole text sequence is generated) and large action space (i.e., a vocabulary of millions of words). For instance, a popular family of RL algorithms studied extensively for text generation is the policy-based (Williams, 1992) or actor-critic based (Bahdanau et al., 2016; Rennie et al., 2017) algorithms, with policy gradient (PG) being the most prevalent example (Ranzato et al., 2016; Li et al., 2016; Rennie et al., 2017; Tan et al., 2018; Pasunuru & Bansal, 2018; Paulus et al., 2018). Those algorithms train the model with on-policy updates, i.e., the text samples used for estimating policy gradients are from the target model itself. 
Due to the exponentially large space of sequences, onpolicy updates often suffer from extremely high variance and low data efficiency (e.g., most model samples are not useful for learning). Thus directly training with PG from scratch is usually impossible. In practice, the model has to be initialized by MLE training, followed by PG as finetuning, which often leads to limited improvement (Choshen et al., 2020; Wu et al., 2018). Another set of work has resorted to off-policy RL. The key advantage is that samples from other sources, e.g., human-written text, can be used, making them more data efficient than on-policy methods. Previous work has used either importance weighted PG (Pang & He, 2021; Zhou et al., 2017; Kandasamy et al., 2017) or Q-learning based algorithms (Guo, 2015; Jaques et al., 2020; Narasimhan et al., 2015). However, off-policy methods have been considered to be less stable. For example, theQ-learning performance relies heavily on how accurate the learnedQ-function assesses the quality of intermediate subsequences – a challenging task due to the sparse reward signals. In this paper, we develop a new RL formulation for text generation that tackles the above issues (Figure 1, left). We reframe the text generation problem from the soft Q-learning perspective originally developed in robotics (Haarnoja et al., 2017; Schulman et al., 2017). The resulting connection allows us to seamlessly take advantage of the latest successful techniques from the RL literature. In particular, we introduce and adapt the principled path consistency learning (Nachum et al., 2017) to text generation, that (1) offers a natural way to train the model with both on- and off-policy updates, hence combining the best of the two strategies, (2) bridges the sparse reward signal to directly supervise the Q function learning, leading to more accurate Q estimation and credit assignment, and (3) makes efficient updates to Q-values by considering all candidate actions together. The generality and efficiency of the proposed method allows us to train text generation in a wide range of applications: (1) With noisy and negative training examples, our approach learns to generate accurate entailment text that greatly improves upon the data itself as well as other various training methods; (2) Our approach also manages to train an effective adversarial text generator for robustness test for classifiers; (3) We train a prompt generator with our algorithm to achieve controllable generation of pretrained LMs in terms of topics. On all the three tasks, our approach consistently improves over not only previous RL algorithms for text generation, but also diverse task-specialized methods designed specifically for each of the problems, respectively. In the appendix (§A.1.4), we also show that on standard supervised tasks where MLE prevails, our approach is competitive to train text generation models from scratch, which was usually impossible for previous RL algorithms. 2 BACKGROUND AND CHALLENGES The goal of text generation is to produce coherent text y = (y0, ..., yT ) of certain properties for a given task, where yt is a token from a vocabulary V , and T is the text length. The generation can condition on arbitrary input context, which we omit for simplicity of notations. 
We aim to learn a generation model pθ(y) which is typically decomposed autoregressively as pθ(y) = ∏T t=0 pθ(yt | y<t), where y<t = (y0, ..., yt−1) is the prefix, and the distribution at each step t is obtained by applying the softmax function on the output logits: pθ(yt | y<t) = exp fθ(yt | y<t)∑ y′∈V exp fθ(y ′ | y<t) . (1) Here fθ(y | y<t) is the logit of token y computed by the generation model. Given a training example y∗, maximum likelihood training (MLE) updates the model with the gradient ∇θLMLE(θ)= ∑T t=0∇θ log pθ (y∗t | y∗<t). Despite its popularity, MLE-based training only applies when clean supervised data y∗ is available, and cannot be used to optimize arbitrary task metrics (e.g., BLEU, entailment score) which are typically the goal in many text generation tasks. 2.1 REINFORCEMENT LEARNING (RL) FORMULATIONS FOR TEXT GENERATION Notations. Previous research has formulated text generation as an RL problem by considering the following finite-time Markov Decision Process (MDP). At each time step t, let the “state” be st = y<t, namely the partial sequence generated so far. The model, also known as the “agent”, takes as input the current state st and outputs a token, also called “action”, at ∈ V according to a policy π(at | st). The agent then receives a reward rt = r(st, at) and deterministically transitions to next state st+1 (i.e., the concatenation of the tokens in st and the new token at). Following the notation convention in RL, let τ be the trajectory (i.e., text sample) generated by the policy. The agent’s objective is to maximize the accumulative reward, J(π) = Eτ∼π [∑T t=0 γ trt ] , where γ ∈ (0, 1] is the discount factor. A central concept in RL is the Q-function of policy π, defined as Qπ(st, at) = Eπ [∑T t′=t γ t′rt′ | st, at ] , which is the expected future reward of taking action at (i.e., generating token at) in state st and continuing with the policy π. Challenges. Text generation poses significant challenges to RL, particularly because (1) the reward signal is usually sparse, i.e., rt = 0, ∀t < T and the agent receives a non-zero reward rT only after it generates the full sequence, (2) the action space (i.e., the vocabulary V) is extremely large, often containing millions of words. The challenges have led to difficulties of the two major families of RL approaches applied to text generation problems, as detailed below. Policy-based RL techniques directly parameterize the policy πθ with parameters θ. Thus the policy πθ(at | st) exactly corresponds to the above generation model pθ(yt | y<t). Policy gradient (PG) is one of the most widely used algorithms for text generation (Ranzato et al., 2016). It optimizes the cumulative reward with the policy gradient: ∇θJ(πθ) = −Eτ∼πθ [∑T t=0 Q̂(st, at)∇θ log πθ (at | st) ] , (2) where Q̂(st, at) is the estimated Qπθ value with sample τ . Notice that the expectation is taken w.r.t. the policy πθ, which makes PG an on-policy algorithm, meaning that the sample τ needs to come from the the current policy πθ itself. In practice, however, optimizing this objective alone from scratch is unlikely going to work because most samples τ ∼ πθ are just gibberish with zero reward, failing to provide meaningful training signals for updating the policy. Previous literature either initializes the policy πθ with MLE training, and/or use a combination of MLE and PG updates, which often leads to marginal gains in practice (Wu et al., 2018; Choshen et al., 2020). 
Value-based RL techniques, such as Q-learning, implicitly learn the policy π by approximating the value Qπ(s, a) directly. Specifically, let Q∗(s, a) = maxπ Qπ(s, a) denote the optimal value over policies. Thus the optimal policy π∗ is simply taking the action of maximal Q∗ value at each state. The approximation of Q∗ is based on the well-known Bellman temporal consistency: Q∗(st, at) = rt + γmaxat+1 Q ∗(st+1, at+1). (3) Deep Q-learning (Mnih et al., 2013) parameterizes the Q-function as Qθ(x, a) (e.g., a neural network), and train the parameters by minimizing the following regression objective: L(θ) = Eπ′ [ 0.5 · ( rt + γmaxat+1 Qθ̄(st+1, at+1)−Qθ(st, at) )2] , (4) Single-Step PCL Training Multi-Step PCL Training . . . sequence reward . . . sequence reward Figure 2: SoftQ-Learning with path consistency learning (PCL) objectives, where we illustrate with a vocabulary of size 3. Left: Single-step objective (Eq.9), where for each (st, at), the computation involves step t and t+1. Dashed boxes in dark green and gray indicate the regression target, where the intermediate reward rt is often 0 due to sparsity. The gradient is applied to parameters θ at step t (indicated by orange color). Right: Multi-step objective (Eq.11) which aggregates from step t all the way to T . In this way, the final-step non-zero reward rT is used as the regression target. where θ̄ is the parameters of the target Q-network, which is a slow copy of θ and considered as constant for gradient computation of θ. Here π′ is an behavior policy which can be an arbitrary distribution over text, such as the data distribution or replay buffer (Mnih et al., 2013). This makes Q-learning an off-policy algorithm because of its ability to use samples coming from other policies. After learning Qθ, one can induce a policy π from it that takes arg maxaQθ(s, a) at each state s. Jaques et al. (2017) instead sample tokens from the softmax function applied to Qθ. However, the training can be unstable and inefficient due to several challenges: (1) The bootstrapping nature of the above regression problem can make the training unstable. That is, the regression target rt + γmaxat+1 Qθ̄(st+1, at+1) itself is derived from the Q-function to be learned (Kumar et al., 2019). The problem is exacerbated in the presence of sparse reward in text generation, where the real observed signal rt is zero for all intermediate t < T ; (2) The large action space (e.g., 104) in text generation results in slow updates. In particular, notice that Eq.(4) applies the gradient update to the Qθ-value of the only one particular token at (out of the 104 candidate tokens in the vocabulary), making the training inefficient; (3) Besides, pure off-policy updates could be highly sensitive to the quality of training data, and miss the opportunity of on-policy exploration that maximizes the reward of interest in a more direct way. 3 THE SOFT Q-LEARNING FRAMEWORK In this section, we combat the difficulties of previous RL methods by introducing the softQ-learning (SQL) formulation of text generation. We show that the formulation is seamlessly compatible with the common architecture of text generation model (Eq.1), permitting easy implementation (§3.1). The formulation further allows us to integrate the latest advances in RL, notably path consistency learning (Nachum et al., 2017) that makes the RL training efficient and stable in practice (§3.2). Figure 2 and Algorithm 1 summarizes the resulting SQL framework for efficient training. 
3.1 SOFT Q-LEARNING FORMULATION FOR TEXT GENERATION SoftQ-learning (Haarnoja et al., 2017; Schulman et al., 2017; Nachum et al., 2017) is an maximumentropy (MaxEnt) extension to the standard (hard) Q-learning (Mnih et al., 2015; Sutton & Barto, 2018). Under this framework, the agent is encouraged to optimize the reward while staying as stochastic as possible, with the objective JMaxEnt(π) = Eτ∼π [∑T t=0 γ trt + αH (π (· | st)) ] , which augments the vanilla J(π) with the additional Shannon entropy term H with coefficient α.1 This is appealing because it seamlessly connects the Q-values to the familiar output logits of a text generation model, which enables straightforward implementation of the SQL formulation. Q-values as Generation Model Logits. We show the connection of the Q-values with the logits, i.e., the model outputs right before the softmax layer. Concretely, with the SQL objective, the following relationship between optimal policy π∗ and action-value Q∗ holds (Haarnoja et al., 2017; Schulman et al., 2017): π∗(a | s) = expQ ∗(s, a)∑ a′ expQ ∗ (s, a′) . (5) 1WLOG, we can assume α=1, as it can be folded into the reward function by scaling the latter with 1/α. This form is highly reminiscent of the softmax layer of the generation model in Eq.(1). The connection suggests that we can naturally parameterize the Q-function in SQL as the generation model logit function, i.e., Qθ(s, a) ≡ fθ(a | s). In other words, the model output fθ(a | s), originally interpretted as the “logit” of token a given the preceding tokens s, is now re-interpretted as the Qvalue of action a in state s. When achieving optimality, fθ∗(a | s), namely Q∗(s, a), represents the best possible future reward achievable by generating token a in state s. Similarly, the full generation model pθ(a | s) in Eq.(1) that applies softmax to fθ now precisely corresponds to the policy πθ induced from Qθ(s, a). That is, πθ(a | s) = expQθ(s, a)∑ a′ expQθ (s, a ′) ≡ exp fθ(a | s)∑ a′ exp fθ (a ′ | s) = pθ(a | s). (6) We could further gain even more intuitive interpretation of the above generation policy π∗ from the lens of advantage function (Sutton & Barto, 2018). Specifically, in SQL, the optimal state-value function is the log-normalizer of the optimalQ-values (Haarnoja et al., 2017; Schulman et al., 2017). This allows us to rewrite Eq.(5) into a more concise form: V ∗ (s) = log ∑ a′ expQ∗ ( s, a′ ) , π∗(a | s)= exp ( Q∗(s, a)−V ∗(s) ) = expA∗(s, a), (7) where A∗ is the optimal advantage function. The equation says that, in the proposed text generation SQL formulation, the optimal policy generates token a in state s according to the token’s advantage. 3.2 EFFICIENT TRAINING WITH PATH CONSISTENCY The above section has described parameterizing the Q-function with the common generation model with parameters θ. Now we present how to learn theQθ function within the SQL framework. Vanilla training based on the Bellman temporal consistency can suffer from the instability and inefficiency issues similar to the conventional Q-learning (§2.1), as we discuss more in the appendix (§A.3.2). Fortunately, our SQL formulation allows us to import latest advances of RL techniques to the text generation setting that overcome the difficulties. Specifically, we adapt the unified path consistency learning (PCL) that has excelled in game control (Nachum et al., 2017). The PCL-based training updates Q-values of all tokens at once through a connection between the value function and the induced policy. More specifically, it is shown in Nachum et al. 
(2017) that the optimal policy π∗ (Eq.5) and the optimal state value function V ∗ (Eq.7) in SQL must satisfy the following consistency property for all states and actions: V ∗ (st)− γV ∗ (st+1) = rt − log π∗ (at | st) , ∀st, at. (8) Accordingly, the PCL-based training attempts to encourage the satisfaction of the consistency with the following regression objective: LSQL, PCL(θ) = Eπ′ [ 1 2 ( − Vθ̄ (st) + γVθ̄ (st+1) + rt − log πθ (at | st) )2] , (9) where πθ is the induced policy defined in Eq.(6); Vθ̄ is defined similarly as in Eq.(7) but depends on the target Qθ̄ network (i.e., a slow copy of the Qθ to be learned), and recall that π ′ is an arbitrary behavior policy (e.g., data distribution). Please see Figure 2 (left) for an illustration. Crucially, notice that the gradient update is applied to θ through the log πθ term which explicitly involves the Qθ-values of all tokens a in the vocabulary. This shows an important difference from the above vanilla training in conventional Q-learning (§2.1) where Qθ is updated only through the particular at token. The PCL training thus offers more efficient updates for the Qθ function. Multi-step PCL for Sparse Reward. The above PCL objective Eq.(9) alone does not resolve the potential instability issue due to the bootstrapped Vθ̄(st+1) value and the sparse reward (i.e., r(st, at) = 0 for t < T ). Our SQL formulation allows us to additionally incorporate the multi-step variant of the PCL training (Nachum et al., 2017) to resolve the issue. Specifically, by applying a telescoping sum on the consistency equation (Eq.8) starting from t up to T , we arrive at the multistep temporal consistency: V ∗ (st)− γT−tV ∗ (sT+1) = ∑T−t l=0 γl ( rt+l − log π∗ (at+l | st+l) ) , (10) where the value of past-terminal state is zero, V ∗ (sT+1) = 0; and the rewards are only available at the end, ∑T−t l=0 γ lrt+l = γ T−trT . We can then come to the following multi-step objective function, LSQL, PCL-ms(θ) = Eπ′ [ 1 2 ( −Vθ̄ (st) + γ T−trT − ∑T−t l=0 γl log πθ (at+l | st+l) )2] . (11) We can see the objective side-steps the need to bootstrap intermediate value functions Vθ̄(st′) for t′ > t. Instead, it directly uses the non-zero end reward rT to derive the update for θ. Please see Figure 2 (right) for an illustration. In practice, we combine the single- and multi-step objectives (Eqs.9 and 11) together for training. Joint On- and Off-policy Training. Finally, we highlight that the behavior policy π′ involved in the objectives Eqs.(9) and (11) can be an arbitrary policy (i.e., distribution over text sequences), from which we can draw trajectories τ (i.e., text samples). For example, π′ can be a (possibly noisy) text dataset, or a set of text samples produced by other generation models, resulting in off-policy training. We can also set π′ to be the current generation model πθ to be learned, resulting in onpolicy training. In practice, we could first train the model with only off-policy data for warming up, and then continue with joint on- and off-policy training to further maximize the reward. 
Algorithm 1 Efficient Soft Q-Learning for Text Generation Input: Qθ function (i.e., generation model logit function fθ in Eq.1) Reward function r(s, t) Training examples D (for off-policy updates; optional) 1: Initialize θ and target model parameters θ̄ 2: repeat 3: Draw a batch of off-policy samples {τoff} ∼ D 4: Draw a batch of on-policy samples {τon} by decoding with policy πθ(at | st) (Eq.6) 5: Compute Qθ(st, at) values (i.e., the model logits) and target Qθ̄(st, at) for (st, at) ∈ {τoff} ∪ {τon} 6: Compute the objectives in Eqs.(9) and (11) 7: Update the model parameters θ via gradient descent 8: Update the target model parameters θ̄ by θ̄ ← ρθ̄ + (1− ρ)θ with update rate ρ 9: until convergence Output: The trained Qθ∗ function and the induced generator πθ∗ 4 APPLICATIONS AND EXPERIMENTS 4.1 LEARNING FROM NOISY (NEGATIVE) TEXT The popular MLE algorithm learns by (blindly) imitating training data. However, it is often expensive to curate clean quality data. It is thus highly desirable to be able to learn from data with noises, or even negative examples. With the guidance of task metrics (rewards), the model can even learn to “outperform” the training data and achieve desired generation behaviors. To this end, we consider the task of entailment generation (Pasunuru & Bansal, 2017). Given a sentence (premise), the goal is to generate a new sentence (hypothesis) that logically follows the premise. For example, given source sentence “Sophie is walking a dog outside her house”, the hypotheses “Sophie is outdoor” is considered entailed, but “Sophie is inside her house” is not and even is a negative (contradictive) sentence. Setup (more in the appendix §A.2.1). We sub-sampled 50k training examples from the SNLI dataset (Bowman et al., 2015), a commonly used entailment classification dataset. The hypotheses have an average entailment probability of only 50%, and over 2/5 of them less than 20% (negative/contradictive examples). This poses a significant challenge for the models to learn from the noises. The rewards used in RL algorithms include (1) the entailment score of the generation measured by a robust entailment classifier (Nie et al., 2020), (2) the log-likelihood of the generation as an indicator of language quality measured by a GPT-2 language model (Radford et al., 2019), and (3) BLEU score w.r.t the input premises as another language quality reward that avoids trivial outputs. We sum together all rewards with weights 1.0. We compare our approach with a broad range of baselines, including (1) the standard MLE training (MLE); (2) MLE+reward, where we use the reward function to filter examples; (3) joint MLE and PG training with MLE initialization (MLE+PG), where we initialize the model with MLE training, then train it with combined MLE and PG losses; previous text-generation RL algorithms including (4) MIXER (Ranzato et al., 2016), (5) Self-critic (Rennie et al., 2017), and (6) one of the latest methods GOLD-s (Pang & He, 2021) which is a pure off-policy method based on importancesampling PG. To ablate the effect of multi-step training (§3.2), we additionally compare with a simplified variant of our approach that uses only vanilla single-step PCL training (SQL(single)). In the appendix (§A.1.1) we compare and discuss more baselines such as MLE weighted by rewards. We evaluate generation results in terms of entailment rate, language quality (perplexity), and diversity which is measured by the Shannon entropy over unigrams and bigrams (H1, H2) (Gehrmann et al., 2021). 
Since text generation models intrinsically trade off diversity and quality (Caccia et al., 2019; Hashimoto et al., 2019), we vary the generation diversity by generating samples via top-p sampling (Holtzman et al., 2019) with different p values, and plot the entailment rate and perplexity against diversity, resp. We also evaluate the samples produced by beam-search decoding. Results. Figure 3 (left) shows the results. First, notice that MLE performs poorly, while MLE+reward improves upon it. This is not surprising as the training data contain noisy/negative examples. Similarly, since the pure off-policy algorithm GOLD-s relies heavily on the data distribution, we observed that it achieves sub-optimal performance. The on-policy MLE+PG with MLE initialization gives better entailment rate. In comparison, our full SQL framework achieves the best entailment-diversity trade-off. The comparison between SQL and SQL(single) highlights the importance of having the multi-step objective which directly uses the end reward rather than bootstrapping intermediate Q-values for supervision. 4.2 Universal ADVERSARIAL ATTACKS We next study the application in text adversarial attacks, where again no supervised data is available. Adversarial attacks is an increasingly important research topic as they reveal models’ vulnerabilities and flaws. This is especially true for universal attacks (Wallace et al., 2019; Atanasova et al., 2020), where we want to generate universal examples that trick the model on all possible inputs. For instance, consider the context of entailment classification. Our goal is to find universal humanreadable hypotheses that are going to be classified as “entailment” with as high probability as possible, regardless of the input premises. This is a more challenging setting compared to previous instance-specific attack (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017) where the attack model conditions on a premise and generates an adversarial hypothesis specific to the premise. Setup (more in the appendix §A.2.2). We aim to attack one of the most popular MultiNLI (Williams et al., 2018) entailment classifiers on HuggingFaceHub.2 The attack generation model generates adversarial text without conditioning on any inputs so that the generated attacks are universal to all premises. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and the off-policy updates for our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the challenging universal attack setting (Morris et al., 2020; 2https://github.com/pytorch/fairseq/tree/master/examples/roberta Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). We use similar settings as in §4.1 to explore the diversity-quality trade-off by plotting the entailment rate and perplexity against diversity, respectively. The entailment classifier to be attacked is used as entailment score reward functions. We additionally include a token-level repetition penalty reward for readability. Results. Figure 3 (right) shows the results, and Table 4 (appendix) shows samples. We can see that SQL outperforms MLE+PG consistently across different diversity values. The outputs from MLE+PG are not diverse even with high p’s, indicating the model collapses and can only generate a small set of unique adversarial examples. 
The model by SQL discovers the pattern “saint-pierre-et-saint-paul” (an entity name), and exploits this to generate samples with high universal entailment rate. 4.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS A reward function does not just have to be a metric like the BLEU score, but also a complicated pipeline that eventually returns a score. To demonstrate this, we consider the emerging task of prompting a large pretrained LM for controllable generation (Hu et al., 2017; Radford et al., 2019; Brown et al., 2020). The goal is to learn to generate text prompts that steer the LM to generate sentences of certain desired attributes (e.g., topics). The problem of controlling the generation of pretrained LMs was previously approached through specialized algorithms such as modifying the LM hidden states during decoding (Dathathri et al., 2020; Krause et al., 2020; Qin et al., 2020). Here we show that prompts offer an easier, faster, more effective way for controlled generation. Learning to generate/tune prompts is gaining increasing attention recently. It side-steps the needs for expensive LM fine-tuning, and adapts LMs to new scenarios with prompt as the (computefriendly) interface. Most existing approaches (Wallace et al., 2019; Li & Liang, 2021; Lester et al., 2021) rely on gradient backpropagation and are applicable only when the whole training pipeline is differentiable. This does not hold for the text generation setting, as illustrated in Figure 4. In contrast, the RL framework is generally applicable to any differentiable or discrete pipelines. Setup (more in the appendix §A.2.3). Following (Dathathri et al., 2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences for the pretrained LM to generate continuation sentences. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt length 5, 10, and 15. As ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without on-policy exploration), denoted as SQL(off), and compare it with vanilla MLE training. Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model3 as the pretrained LM to be controlled. For rewards, we use the topic accuracy of the continuation sentences measured by a zero-shot classifier, plus the the log-likelihood of continuation sentences as the language quality reward measured by a distilled GPT-2. Results Figure 5 shows the topic accuracy of the controlled LM outputs averaged across the 7 topics, and Table 1 shows the respective language quality results. More detailed topic accuracy results and samples are provided in the appendix (§A.1.3) (where GeDi obtained low accuracy on 2 of the 7 topics, possibly because the topic tokens are tokenized into two subwords for which the model released by the authors was not specifically trained). We can see that the prompts generated by our SQL cause the LM to generate sentences with high topic accuracy while maintaining low perplexity in most settings. 
Increasing the prompt length positively impacts the topic accuracy, which makes sense because longer prompts give more flexible for steering the LM. The comparison between MLE and SQL(off) shows that the off-policy component of SQL is better than standard MLE training, as it incorporates reward signals instead of just blindly following the (noisy) data. Next, comparing with the previous steered decoding such as PPLM and GeDi, we can see the prompt-based control trained with RL achieves better trade-off between topic accuracy and language quality. Moreover, once a prompt is produced, we can use the pretrained LM to generate text of desired topics efficiently, with the same time cost as standard non-controlled decoding. In comparison, the dedicated steered decoding is often orders-of-magnitude slower, as shown in Table 2. 5 RELATED WORK Standard RL algorithms maximizing the external rewards can sometimes be over-sensitive to the randomness in the environment. Recent works have considered maximum-entropy RL (MaxEnt RL) extensions, such as the soft Q-learning (SQL) (Haarnoja et al., 2017; Nachum et al., 2017; Schulman et al., 2017), that maximize the entropy of policy besides the rewards, and have demonstrated substantial improvement in robotic and game control (Ziebart et al., 2008; O’Donoghue et al., 2017; Nachum et al., 2018; Eysenbach & Levine, 2021). Our work is the first to adapt SQL and its advanced variants (in particular the path consistency learning (Nachum et al., 2017)) to the challenging text generation problem and show significant results on diverse applications. Applying RL for text generation has been discussed in alleviating the exposure bias problem and optimizing task metrics (Li et al., 2016; Wu et al., 2016; Rennie et al., 2017; Paulus et al., 2018; Chen & Bansal, 2018). For example, Ranzato et al. (2016) used the REINFORCE algorithm (Williams, 1992), and Bahdanau et al. (2016) used the actor-critic algorithm; Guo et al. (2018) and Shi et al. (2018) tried to relieve the sparsity problem via hierarchical and inverse RL methods, resp. They are all on-policy RL algorithms with the need of pretraining their models using MLE. Another line of work focused mostly on using only off-policy data, often for offline training of chatbots (Jaques et al., 2020; Kandasamy et al., 2017; Zhou et al., 2017; Pang & He, 2021). As a result, the opportunity of directly improving the reward (as in on-policy updates) for other rich tasks is missed. Our proposed framework combines on- and off-policy training, and further offers solutions for efficient training from scratch in the presence of large action space and sparse sequence-level reward in text generation. 6 CONCLUSION We develop a new RL formulation for text generation based on softQ-learning and path consistency learning. We conduct experiments on learning with noisy and negative data, black box adversarial attack, prompting a pretrained language model for controllable generation, and finally, on standard supervised tasks. The RL formulation opens up enormous new opportunities to integrate more advances made in the fertile RL literature to improve text and other sequence generation problems. 3https://huggingface.co/distilgpt2 7 ETHICS STATEMENT This work develops a new RL formulation for text generation. While we demonstrate the framework in four applications, it could be adapted to other (emerging) applications. 
One major component in these applications is the design of the reward function, which influences the behavior of the trained agent. While we believe the MaxEnt RL framework is more robust against reward misspecification (Eysenbach & Levine, 2021), the potential failures of sub-optimal reward functions are widely known and discussed.4 To this end, deploying this model to the wild requires careful and extensive examination, using tools such as Ribeiro et al. (2020). Further, we highlight the application for (black-box) adversarial attacks in the paper, with the intention of using adversarial attacks to understand the model’s inner workings. That being said, this could potentially be misused to conduct malicious attacks against systems. Hence, users of this framework might want to conduct adversarial attacks against their own models to avoid being attacked by other people with bad intentions. 8 REPRODUCIBILITY STATEMENT We provide code in the supplementary materials, and additional experiment details in the appendix. A APPENDIX A.1 APPLICATIONS AND EXPERIMENTS A.1.1 LEARNING FROM NOISY (NEGATIVE) TEXT Please see Table 5 for beam search results, Figure 6 for additional results for MLE+reward, and Table 7 for examples. A.1.2 Universal ADVERSARIAL ATTACKS Please see Table 4 for examples. A.1.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS Please see Table 6 for detailed results breakdown, and Table 8-11 for examples. Examples are in the format: topic: [prompt] input sentence generated text. A.1.4 SUPERVISED TEXT GENERATION TASKS Finally, we conduct experiment on standard generation tasks where clean supervised data is available. The study is to examine the capabilities of the proposed RL method to train a text generation model from scratch, which has been considered as exceedingly challenging for previous RL algorithms. Setup. We study on two tasks, E2E (Novikova et al., 2017) and CommonGEN (Lin et al., 2020), and use the respective datasets pre-processed by (Gehrmann et al., 2021) which allow sequenceto-sequence modeling with standard transformers. We run four sets of methods: the standard MLE training (MLE); PG training from scratch (PG); joint MLE and PG training, with MLE ini- tialization (MLE+PG); and our SQL training from scratch with both off-policy and on-policy updates (SQL). We use the standard BLEU as reward. We additionally investigate the training stability and sensitivity w.r.t hyperparameters, in particular the scale of reward. To this end, for MLE+PG and SQL, we vary the reward scale in {1, 10, 50, 100, 500, 1000} and evaluate the respective performance under different scales. Results. Table 3 shows the performance on E2E of different models whose hyperparameters are picked using the validation set. We can see the proposed SQL that trains models from scratch achieves competitive results with the common MLE and MLE+PG. In contrast, the PG algorithm alone without MLE fails the training. Figure 7 (left) shows the respective training curves (on the validation set), demonstrating that SQL converges in an efficient and stable way as MLE. We further demonstrate the sensitive of MLE+PG and SQL w.r.t the reward scale as a key hyperparameter. Figure 7 (middle and right) shows the training curves of the two methods with varying reward scales. We can see SQL is significantly more robust as reward scale changes, while MLE+PG tends to collapse with improper reward scale configurations. 
A.2 SETUP DETAILS Our evaluation follows the GEM Benchmark (Gehrmann et al., 2021) when applicable,5 and otherwise same with the reward function used in training. We use a transformer model (Vaswani et al., 2017) based on Texar-Pytorch (Hu et al., 2019) by default, with 64 hidden dimension, 3 blocks, and 4 heads. For experiments that involve policy gradient training, we initialize the model with maximum likelihood training by default unless specified otherwise. We train soft Q-learning model from scratch with both off-policy (using data) and on-policy (using samples) by default except in §4.1 and §4.3, in which we find it beneficial to warm-up the model with just off-policy training. We apply similar tuning budgets to both soft Q-learning model, and policy-gradient (mostly the reward scale and top-k), based on performance on the validation dataset and sample qualities. Reward Functions We use the robust entailment classifier (Nie et al., 2020) in §4.1,6 one of the most used entailment classifiers on HuggingFaceHub in §4.2.7 and a zero-shot classifier based on BART (Lewis et al., 2020) to compute the topic score in §4.3.8 To compute perplexities, we use a GPT-2 model (Radford et al., 2019) fine-tuned on the corresponding datasets for computing perplexity in §4.1 and 4.2, and a distilled GPT-2 model in §4.3 without fine-tuning.9 We simply set reward weights to 1.0, except in §4.2, where we changed the entailment weight to 0.5, log-likelihood and repetition penalty weight to 5.0. A.2.1 SETUP DETAILS: §4.1 We study using the SNLI dataset (Bowman et al., 2015), a dataset commonly used in training an entailment classifier. The original dataset contains (premise, hypothesis) sentence pairs, where the hypothesis may or may not entail the premise. We sub-sampled 50, 000 training examples from the corpus such that the hypotheses have an average entailment probability of only 50% in terms of the premises, and over 2/5 examples have entailment probabilities less than 20%, which can be seen as negative (contradictive) examples. The resulting training set poses a significant challenge for the models to learn from the noises. The RL algorithms (including PG and ours) permit us to plug in arbitrary reward functions to drive learning. Based on the goal of the task, we use the following intuitive rewards to ensure entailment accuracy and language quality: (1) a robust entailment classifier (Nie et al., 2020) that measures the entailment score of a generation in terms of the input premise, (2) a GPT-2 language model (Radford et al., 2019) that measures the log-likelihood of the generation as an indicator of language quality, and (3) BLEU score w.r.t the input premises as another language quality reward that avoids trivial outputs. We sum together all rewards with weights 1.0. A.2.2 SETUP DETAILS: §4.2 We study the task of attacking an entailment classifier. In particular, we aim to attack one of the most popular entailment classifiers on HuggingFaceHub.10 The attack generation model generates adversarial text without conditioning on any inputs so that the generated attacks are universal to all premises. The generation model is trained with mostly the same setting as in §4.1, where the entailment classifier to be 5https://github.com/GEM-benchmark/GEM-metrics 6https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_ R3-nli 7https://github.com/pytorch/fairseq/tree/master/examples/roberta. This classifier is ranked #1 (as of May 20, 2021) based on https://huggingface.co/models?search=nli. 
8https://huggingface.co/facebook/bart-large-mnli 9https://huggingface.co/distilgpt2 10https://github.com/pytorch/fairseq/tree/master/examples/roberta, which is ranked #1 as of May 20, 2021 based on https://huggingface.co/models?search=nli. attacked is used as entailment score reward functions. Besides, we additionally include a tokenlevel repetition penalty reward, which empirically benefits readability. Finally, we use the MultiNLI dataset (Williams et al., 2018) which includes more diverse examples than the SNLI used above. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and the off-policy updates for our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the universal attack setting (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). Besides, it is worth noting that the general RL algorithms have an additional advantage of doing black-box attacks. That is, the algorithms only require the ability to query the entailment classifier for entailment probability, without need of knowing the internal structure of the classifier (e.g., for computing gradients) as in previous attack algorithms (Ebrahimi et al., 2017; Wallace et al., 2019). For top-p sampling results, we sample a hypothesis for each premise and measure the average attack rate across the dataset. This is because sampling multiple hypotheses, each for all premises, and measure performance are expensive. Since the hypotheses are sampled input-independently, this should be a good approximation. A.2.3 SETUP DETAILS: §4.3 Following (Dathathri et al., 2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences (Figure 4) for the pretrained LM to generate continuation sentences. There is no direct supervision data available for training the prompt generator. We randomly create some noisy text as the training data for MLE baselines below and for off-policy updates for our algorithm. Specifically, the noisy text is created by sampling keywords and topics from the list used in (Dathathri et al., 2020) and a paraphrase generation model. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. At training time, for each generated prompt sample, the pretrained LM generates 2 continuation sentences for evaluating average reward. We use a zero-shot classifier to evaluate the topic accuracy of the continuation sentences. That is, we do not assume access to classifiers pretrained on topic-specific sentences, because generating such topic-specific sentences is the goal of the task in the first place. We additionally use an LM to evaluate the log-likelihood of continuation sentences for measuring language quality. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt length 5, 10, and 15. As ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without onpolicy exploration), denoted as SQL(off), and compare it with vanilla MLE training. At test time, given a topic, the trained prompt generator produces one prompt using beam search decoding. For each generated prompt, the pretrained LM generates 100 sentences using top-k decoding (with k = 50) for evaluation. 
Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model11 as the pretrained LM to be controlled. We use the paraphrase generation model based on Zhang et al. (2019).12 During decoding, we set no_repeat_ngram_size = 2, which improves readability.13

11 https://huggingface.co/distilgpt2
12 https://huggingface.co/tuner007/pegasus_paraphrase
13 https://huggingface.co/blog/how-to-generate

A.3 THE SOFT Q-LEARNING FRAMEWORK

A.3.1 COMPARISON WITH MLE OBJECTIVE

It is interesting to take a closer look at the above objective and compare it with the common MLE training. Specifically, we notice the relations between the optimal Q∗, V∗, and A∗ functions: A∗(st, at) = Q∗(st, at) − V∗(st) = rt + γV∗(st+1) − V∗(st), where the first equation is the definition of A∗ (see Eq.7) and the second equation is due to Eqs.(12) and (7). We thus can see the regression target in the above objective as an approximation to the advantage function: Ãθ̄(st, at) := −Vθ̄(st) + γVθ̄(st+1) + rt. Therefore, by optimizing the regression objective, log πθ(at|st), which is the log probability of generating token at given preceding tokens st, is encouraged to match the approximate advantage value Ãθ̄(st, at), no more and no less. This is different from the objective of MLE, where the model is trained to (blindly) increase the probability of the observed token at given st and decrease the probability of the rest.

A.3.2 VANILLA TRAINING WITH TEMPORAL CONSISTENCY

Much like the Bellman temporal consistency in standard Q-learning (Eq.3), in SQL the optimal action-value function follows the softmax form of the temporal consistency (Ziebart et al., 2008; Ziebart, 2010; Fox et al., 2016; Nachum et al., 2017):

Q∗(st, at) = rt + γ log Σ_{at+1} exp Q∗(st+1, at+1). (12)

We thus can derive a regression objective similar to that of standard Q-learning (Eq.4):

L_SQL,vanilla(θ) = E_{π′} [ 0.5 · ( rt + γ log Σ_{at+1} exp Qθ̄(st+1, at+1) − Qθ(st, at) )² ]. (13)

Recall that π′ is an arbitrary behavior policy (e.g., the data distribution), and Qθ̄ is the target Q-network, which is a slow copy of the Qθ to be learned and is held fixed during the gradient updates. However, the above objective is inefficient for exactly the same reasons as in standard Q-learning discussed earlier, namely the unstable per-step bootstrapping-style training with sparse reward signals, plus the slow updates w.r.t. only one token at out of the large vocabulary (action space).
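As a concrete reference for Eq.(13), below is a minimal PyTorch-style sketch of the vanilla SQL regression objective, treating the model logits as Q-values; it replaces the hard max of standard Q-learning with a log-sum-exp backup from a frozen target network. Tensor shapes, the terminal-state convention, and the omission of padding masks are our assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def sql_vanilla_loss(logits, target_logits, actions, rewards, gamma=1.0):
    """
    Eq.(13): 0.5 * (r_t + gamma * logsumexp_a' Q_tgt(s_{t+1}, a') - Q(s_t, a_t))^2
    logits, target_logits: (B, T, V)  Q_theta and Q_theta_bar over the vocabulary
    actions:               (B, T)     observed tokens a_t
    rewards:               (B, T)     sparse; typically non-zero only at the final step
    """
    q_sa = logits.gather(-1, actions.unsqueeze(-1)).squeeze(-1)             # Q(s_t, a_t), shape (B, T)
    v = torch.logsumexp(target_logits, dim=-1)                              # soft value of each state s_t
    v_next = torch.cat([v[:, 1:], torch.zeros_like(v[:, :1])], dim=1)       # shift; terminal value = 0
    target = rewards + gamma * v_next
    return 0.5 * F.mse_loss(q_sa, target.detach())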
1. What is the focus of the paper, and how does it relate to sequence generation? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty and applicability to central tasks in the field? 3. Do you have any concerns about the technical details and claims made in the paper, especially regarding the role of SQL and path consistency learning? 4. How does the reviewer assess the significance and impact of the paper's contributions, considering its relevance to EMNLP or other related fields? 5. Are there any suggestions for improving the paper, such as including ablation analyses or exploring applications of path consistency learning to other RL problems?
Summary Of The Paper Review
Summary Of The Paper Authors proposed to use soft Q-learning (SQL) to formulate the RL problem in sequence generation. The stability can be increased when techniques of path consistency learning is corporated. Review I think when mentioning text generation, people will recall language modeling or machine translation. This paper only did experiments on some tasks which I don't think it's central to the topic. I think it's a misleading title and content. Since authors used this title, I think at least one of these 2 central tasks should be added in experiments. I think overall all the technical details are not new (correct me if I am wrong, and in this case kindly pointed out what's the contribution). It reads to me that the major contribution is trying some existing technical methods in these sort of problems. If that's the case, I feel it's more appropriate to submit to EMNLP I guess. On technical issues, I feel like the claim is not well justified. In many places, authors claims SQL is the important thing in overcoming the existing problems. However, in SQL the major concept to me is the entropy term. If that's the case, that means we want the term to be large and it translates to we want an evenly distributed actions space and in this thinking, normal SARSA or Q-learning or with larger epsilon greedy search should also work. I think if SQL is the reason, this should also be verified. Somehow it reads to me that SQL itself is not even enough to overcome the challenges so path consistency learning is the most important. So I think 1) there should be an ablation analysis on SQL only, and 2) I feel path consistency learning can be applied to other RL problems as well, which should also be included. And if it cannot be combined, please discuss the reason.
ICLR
Title Text Generation with Efficient (Soft) $Q$-Learning Abstract Maximum likelihood estimation (MLE) is the predominant algorithm for training text generation models. This paradigm relies on direct supervision examples, which is not applicable to many emerging applications, such as generating adversarial attacks or generating prompts to control language models. Reinforcement learning (RL) on the other hand offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward. Yet previous RL algorithms for text generation, such as policy gradient (on-policy RL) and Q-learning (off-policy RL), are often notoriously inefficient or unstable to train due to the large sequence space and the sparse reward received only at the end of sequences. In this paper, we introduce a new RL formulation for text generation from the soft Q-learning (SQL) perspective. It enables us to draw from the latest RL advances, such as path consistency learning, to combine the best of on-/off-policy updates, and learn effectively from sparse reward. We apply the approach to a wide range of text generation tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation. Experiments show our approach consistently outperforms both task-specialized algorithms and the previous RL methods. 1 INTRODUCTION Recent natural language generation systems have made remarkable progress in producing wellformed coherent text, especially with the massive pretrained language models (LMs) (Radford et al., 2019; Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2019). Those models are typically trained using maximum likelihood estimation (MLE) with a large amount of data supervisions. Despite its successful outcomes, the standard training method suffers from limited applicability to many emerging text generation problems, where little or no supervised data is available. Prominent examples of such low-data problems include generating prompts to control the massive LMs (Yin et al., 2019; Shin et al., 2020; Zhong et al., 2021), learning text generation from noisy or even negative data, generating adversarial text attacks for robustness study (Wallace et al., 2019; Atanasova et al., 2020), and others (Figure 1, right). Due to the failure of standard MLE, people have had to devise specialized algorithms for those problems respectively. On the other hand, reinforcement learning (RL) (Sutton & Barto, 2018) offers an alternative principled framework for learning from arbitrary reward functions, and has achieved great advances in robotic and game control. However, RL by far has made limited success for training text generation, primarily due to the key challenges of sparse reward (i.e., a single reward signal is received only after the whole text sequence is generated) and large action space (i.e., a vocabulary of millions of words). For instance, a popular family of RL algorithms studied extensively for text generation is the policy-based (Williams, 1992) or actor-critic based (Bahdanau et al., 2016; Rennie et al., 2017) algorithms, with policy gradient (PG) being the most prevalent example (Ranzato et al., 2016; Li et al., 2016; Rennie et al., 2017; Tan et al., 2018; Pasunuru & Bansal, 2018; Paulus et al., 2018). Those algorithms train the model with on-policy updates, i.e., the text samples used for estimating policy gradients are from the target model itself. 
Due to the exponentially large space of sequences, onpolicy updates often suffer from extremely high variance and low data efficiency (e.g., most model samples are not useful for learning). Thus directly training with PG from scratch is usually impossible. In practice, the model has to be initialized by MLE training, followed by PG as finetuning, which often leads to limited improvement (Choshen et al., 2020; Wu et al., 2018). Another set of work has resorted to off-policy RL. The key advantage is that samples from other sources, e.g., human-written text, can be used, making them more data efficient than on-policy methods. Previous work has used either importance weighted PG (Pang & He, 2021; Zhou et al., 2017; Kandasamy et al., 2017) or Q-learning based algorithms (Guo, 2015; Jaques et al., 2020; Narasimhan et al., 2015). However, off-policy methods have been considered to be less stable. For example, theQ-learning performance relies heavily on how accurate the learnedQ-function assesses the quality of intermediate subsequences – a challenging task due to the sparse reward signals. In this paper, we develop a new RL formulation for text generation that tackles the above issues (Figure 1, left). We reframe the text generation problem from the soft Q-learning perspective originally developed in robotics (Haarnoja et al., 2017; Schulman et al., 2017). The resulting connection allows us to seamlessly take advantage of the latest successful techniques from the RL literature. In particular, we introduce and adapt the principled path consistency learning (Nachum et al., 2017) to text generation, that (1) offers a natural way to train the model with both on- and off-policy updates, hence combining the best of the two strategies, (2) bridges the sparse reward signal to directly supervise the Q function learning, leading to more accurate Q estimation and credit assignment, and (3) makes efficient updates to Q-values by considering all candidate actions together. The generality and efficiency of the proposed method allows us to train text generation in a wide range of applications: (1) With noisy and negative training examples, our approach learns to generate accurate entailment text that greatly improves upon the data itself as well as other various training methods; (2) Our approach also manages to train an effective adversarial text generator for robustness test for classifiers; (3) We train a prompt generator with our algorithm to achieve controllable generation of pretrained LMs in terms of topics. On all the three tasks, our approach consistently improves over not only previous RL algorithms for text generation, but also diverse task-specialized methods designed specifically for each of the problems, respectively. In the appendix (§A.1.4), we also show that on standard supervised tasks where MLE prevails, our approach is competitive to train text generation models from scratch, which was usually impossible for previous RL algorithms. 2 BACKGROUND AND CHALLENGES The goal of text generation is to produce coherent text y = (y0, ..., yT ) of certain properties for a given task, where yt is a token from a vocabulary V , and T is the text length. The generation can condition on arbitrary input context, which we omit for simplicity of notations. 
We aim to learn a generation model pθ(y) which is typically decomposed autoregressively as pθ(y) = ∏T t=0 pθ(yt | y<t), where y<t = (y0, ..., yt−1) is the prefix, and the distribution at each step t is obtained by applying the softmax function on the output logits: pθ(yt | y<t) = exp fθ(yt | y<t)∑ y′∈V exp fθ(y ′ | y<t) . (1) Here fθ(y | y<t) is the logit of token y computed by the generation model. Given a training example y∗, maximum likelihood training (MLE) updates the model with the gradient ∇θLMLE(θ)= ∑T t=0∇θ log pθ (y∗t | y∗<t). Despite its popularity, MLE-based training only applies when clean supervised data y∗ is available, and cannot be used to optimize arbitrary task metrics (e.g., BLEU, entailment score) which are typically the goal in many text generation tasks. 2.1 REINFORCEMENT LEARNING (RL) FORMULATIONS FOR TEXT GENERATION Notations. Previous research has formulated text generation as an RL problem by considering the following finite-time Markov Decision Process (MDP). At each time step t, let the “state” be st = y<t, namely the partial sequence generated so far. The model, also known as the “agent”, takes as input the current state st and outputs a token, also called “action”, at ∈ V according to a policy π(at | st). The agent then receives a reward rt = r(st, at) and deterministically transitions to next state st+1 (i.e., the concatenation of the tokens in st and the new token at). Following the notation convention in RL, let τ be the trajectory (i.e., text sample) generated by the policy. The agent’s objective is to maximize the accumulative reward, J(π) = Eτ∼π [∑T t=0 γ trt ] , where γ ∈ (0, 1] is the discount factor. A central concept in RL is the Q-function of policy π, defined as Qπ(st, at) = Eπ [∑T t′=t γ t′rt′ | st, at ] , which is the expected future reward of taking action at (i.e., generating token at) in state st and continuing with the policy π. Challenges. Text generation poses significant challenges to RL, particularly because (1) the reward signal is usually sparse, i.e., rt = 0, ∀t < T and the agent receives a non-zero reward rT only after it generates the full sequence, (2) the action space (i.e., the vocabulary V) is extremely large, often containing millions of words. The challenges have led to difficulties of the two major families of RL approaches applied to text generation problems, as detailed below. Policy-based RL techniques directly parameterize the policy πθ with parameters θ. Thus the policy πθ(at | st) exactly corresponds to the above generation model pθ(yt | y<t). Policy gradient (PG) is one of the most widely used algorithms for text generation (Ranzato et al., 2016). It optimizes the cumulative reward with the policy gradient: ∇θJ(πθ) = −Eτ∼πθ [∑T t=0 Q̂(st, at)∇θ log πθ (at | st) ] , (2) where Q̂(st, at) is the estimated Qπθ value with sample τ . Notice that the expectation is taken w.r.t. the policy πθ, which makes PG an on-policy algorithm, meaning that the sample τ needs to come from the the current policy πθ itself. In practice, however, optimizing this objective alone from scratch is unlikely going to work because most samples τ ∼ πθ are just gibberish with zero reward, failing to provide meaningful training signals for updating the policy. Previous literature either initializes the policy πθ with MLE training, and/or use a combination of MLE and PG updates, which often leads to marginal gains in practice (Wu et al., 2018; Choshen et al., 2020). 
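For concreteness, here is a minimal sketch of the on-policy policy-gradient update of Eq.(2) for an autoregressive generator, using the sparse sequence-level reward as the return estimate Q̂; baselines, variance reduction, and padding handling are omitted, and the tensor interface is an assumption rather than the paper's implementation.

import torch
import torch.nn.functional as F

def policy_gradient_loss(logits, sampled_tokens, sequence_reward):
    """
    REINFORCE-style loss for text: -(sum_t log pi(a_t | s_t)) * R(tau).
    logits:          (B, T, V) model outputs on its *own* samples (on-policy)
    sampled_tokens:  (B, T)    tokens the model sampled
    sequence_reward: (B,)      scalar reward per generated sequence (received at the end)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, sampled_tokens.unsqueeze(-1)).squeeze(-1)   # (B, T)
    # Because intermediate rewards are zero, the return-to-go collapses to the final reward.
    return -(token_logp.sum(dim=1) * sequence_reward).mean()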
Value-based RL techniques, such as Q-learning, implicitly learn the policy π by approximating the value Qπ(s, a) directly. Specifically, let Q∗(s, a) = max_π Qπ(s, a) denote the optimal value over policies. Thus the optimal policy π∗ simply takes the action of maximal Q∗ value at each state. The approximation of Q∗ is based on the well-known Bellman temporal consistency:

Q∗(st, at) = rt + γ max_{at+1} Q∗(st+1, at+1). (3)

Deep Q-learning (Mnih et al., 2013) parameterizes the Q-function as Qθ(s, a) (e.g., a neural network), and trains the parameters by minimizing the following regression objective:

L(θ) = E_{π′} [ 0.5 · ( rt + γ max_{at+1} Qθ̄(st+1, at+1) − Qθ(st, at) )² ], (4)

where θ̄ denotes the parameters of the target Q-network, which is a slow copy of θ and is considered constant for the gradient computation of θ. Here π′ is a behavior policy which can be an arbitrary distribution over text, such as the data distribution or a replay buffer (Mnih et al., 2013). This makes Q-learning an off-policy algorithm because of its ability to use samples coming from other policies.

Figure 2: Soft Q-Learning with path consistency learning (PCL) objectives (single-step vs. multi-step PCL training), where we illustrate with a vocabulary of size 3. Left: Single-step objective (Eq.9), where for each (st, at), the computation involves step t and t+1. Dashed boxes in dark green and gray indicate the regression target, where the intermediate reward rt is often 0 due to sparsity. The gradient is applied to parameters θ at step t (indicated by orange color). Right: Multi-step objective (Eq.11), which aggregates from step t all the way to T. In this way, the final-step non-zero reward rT is used as the regression target.

After learning Qθ, one can induce a policy π from it that takes arg max_a Qθ(s, a) at each state s. Jaques et al. (2017) instead sample tokens from the softmax function applied to Qθ. However, the training can be unstable and inefficient due to several challenges: (1) The bootstrapping nature of the above regression problem can make the training unstable. That is, the regression target rt + γ max_{at+1} Qθ̄(st+1, at+1) is itself derived from the Q-function to be learned (Kumar et al., 2019). The problem is exacerbated in the presence of sparse reward in text generation, where the real observed signal rt is zero for all intermediate t < T; (2) The large action space (e.g., 10^4) in text generation results in slow updates. In particular, notice that Eq.(4) applies the gradient update to the Qθ-value of only one particular token at (out of the 10^4 candidate tokens in the vocabulary), making the training inefficient; (3) Besides, pure off-policy updates can be highly sensitive to the quality of the training data, and miss the opportunity of on-policy exploration that maximizes the reward of interest in a more direct way.
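The regression objective of Eq.(4) can likewise be sketched in a few lines; note how the gradient only reaches the logit of the single observed token at, which is exactly the inefficiency discussed in point (2) above. Tensor shapes and the terminal-value convention are assumptions.

import torch
import torch.nn.functional as F

def q_learning_loss(logits, target_logits, actions, rewards, gamma=1.0):
    """
    Eq.(4): 0.5 * (r_t + gamma * max_a' Q_tgt(s_{t+1}, a') - Q(s_t, a_t))^2.
    Only the Q-value (logit) of the observed token a_t receives a gradient.
    """
    q_sa = logits.gather(-1, actions.unsqueeze(-1)).squeeze(-1)               # (B, T)
    v_next = target_logits.max(dim=-1).values                                # hard max over the vocabulary
    v_next = torch.cat([v_next[:, 1:], torch.zeros_like(v_next[:, :1])], 1)  # shift; terminal value = 0
    return 0.5 * F.mse_loss(q_sa, (rewards + gamma * v_next).detach())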
3.1 SOFT Q-LEARNING FORMULATION FOR TEXT GENERATION SoftQ-learning (Haarnoja et al., 2017; Schulman et al., 2017; Nachum et al., 2017) is an maximumentropy (MaxEnt) extension to the standard (hard) Q-learning (Mnih et al., 2015; Sutton & Barto, 2018). Under this framework, the agent is encouraged to optimize the reward while staying as stochastic as possible, with the objective JMaxEnt(π) = Eτ∼π [∑T t=0 γ trt + αH (π (· | st)) ] , which augments the vanilla J(π) with the additional Shannon entropy term H with coefficient α.1 This is appealing because it seamlessly connects the Q-values to the familiar output logits of a text generation model, which enables straightforward implementation of the SQL formulation. Q-values as Generation Model Logits. We show the connection of the Q-values with the logits, i.e., the model outputs right before the softmax layer. Concretely, with the SQL objective, the following relationship between optimal policy π∗ and action-value Q∗ holds (Haarnoja et al., 2017; Schulman et al., 2017): π∗(a | s) = expQ ∗(s, a)∑ a′ expQ ∗ (s, a′) . (5) 1WLOG, we can assume α=1, as it can be folded into the reward function by scaling the latter with 1/α. This form is highly reminiscent of the softmax layer of the generation model in Eq.(1). The connection suggests that we can naturally parameterize the Q-function in SQL as the generation model logit function, i.e., Qθ(s, a) ≡ fθ(a | s). In other words, the model output fθ(a | s), originally interpretted as the “logit” of token a given the preceding tokens s, is now re-interpretted as the Qvalue of action a in state s. When achieving optimality, fθ∗(a | s), namely Q∗(s, a), represents the best possible future reward achievable by generating token a in state s. Similarly, the full generation model pθ(a | s) in Eq.(1) that applies softmax to fθ now precisely corresponds to the policy πθ induced from Qθ(s, a). That is, πθ(a | s) = expQθ(s, a)∑ a′ expQθ (s, a ′) ≡ exp fθ(a | s)∑ a′ exp fθ (a ′ | s) = pθ(a | s). (6) We could further gain even more intuitive interpretation of the above generation policy π∗ from the lens of advantage function (Sutton & Barto, 2018). Specifically, in SQL, the optimal state-value function is the log-normalizer of the optimalQ-values (Haarnoja et al., 2017; Schulman et al., 2017). This allows us to rewrite Eq.(5) into a more concise form: V ∗ (s) = log ∑ a′ expQ∗ ( s, a′ ) , π∗(a | s)= exp ( Q∗(s, a)−V ∗(s) ) = expA∗(s, a), (7) where A∗ is the optimal advantage function. The equation says that, in the proposed text generation SQL formulation, the optimal policy generates token a in state s according to the token’s advantage. 3.2 EFFICIENT TRAINING WITH PATH CONSISTENCY The above section has described parameterizing the Q-function with the common generation model with parameters θ. Now we present how to learn theQθ function within the SQL framework. Vanilla training based on the Bellman temporal consistency can suffer from the instability and inefficiency issues similar to the conventional Q-learning (§2.1), as we discuss more in the appendix (§A.3.2). Fortunately, our SQL formulation allows us to import latest advances of RL techniques to the text generation setting that overcome the difficulties. Specifically, we adapt the unified path consistency learning (PCL) that has excelled in game control (Nachum et al., 2017). The PCL-based training updates Q-values of all tokens at once through a connection between the value function and the induced policy. More specifically, it is shown in Nachum et al. 
(2017) that the optimal policy π∗ (Eq.5) and the optimal state value function V ∗ (Eq.7) in SQL must satisfy the following consistency property for all states and actions: V ∗ (st)− γV ∗ (st+1) = rt − log π∗ (at | st) , ∀st, at. (8) Accordingly, the PCL-based training attempts to encourage the satisfaction of the consistency with the following regression objective: LSQL, PCL(θ) = Eπ′ [ 1 2 ( − Vθ̄ (st) + γVθ̄ (st+1) + rt − log πθ (at | st) )2] , (9) where πθ is the induced policy defined in Eq.(6); Vθ̄ is defined similarly as in Eq.(7) but depends on the target Qθ̄ network (i.e., a slow copy of the Qθ to be learned), and recall that π ′ is an arbitrary behavior policy (e.g., data distribution). Please see Figure 2 (left) for an illustration. Crucially, notice that the gradient update is applied to θ through the log πθ term which explicitly involves the Qθ-values of all tokens a in the vocabulary. This shows an important difference from the above vanilla training in conventional Q-learning (§2.1) where Qθ is updated only through the particular at token. The PCL training thus offers more efficient updates for the Qθ function. Multi-step PCL for Sparse Reward. The above PCL objective Eq.(9) alone does not resolve the potential instability issue due to the bootstrapped Vθ̄(st+1) value and the sparse reward (i.e., r(st, at) = 0 for t < T ). Our SQL formulation allows us to additionally incorporate the multi-step variant of the PCL training (Nachum et al., 2017) to resolve the issue. Specifically, by applying a telescoping sum on the consistency equation (Eq.8) starting from t up to T , we arrive at the multistep temporal consistency: V ∗ (st)− γT−tV ∗ (sT+1) = ∑T−t l=0 γl ( rt+l − log π∗ (at+l | st+l) ) , (10) where the value of past-terminal state is zero, V ∗ (sT+1) = 0; and the rewards are only available at the end, ∑T−t l=0 γ lrt+l = γ T−trT . We can then come to the following multi-step objective function, LSQL, PCL-ms(θ) = Eπ′ [ 1 2 ( −Vθ̄ (st) + γ T−trT − ∑T−t l=0 γl log πθ (at+l | st+l) )2] . (11) We can see the objective side-steps the need to bootstrap intermediate value functions Vθ̄(st′) for t′ > t. Instead, it directly uses the non-zero end reward rT to derive the update for θ. Please see Figure 2 (right) for an illustration. In practice, we combine the single- and multi-step objectives (Eqs.9 and 11) together for training. Joint On- and Off-policy Training. Finally, we highlight that the behavior policy π′ involved in the objectives Eqs.(9) and (11) can be an arbitrary policy (i.e., distribution over text sequences), from which we can draw trajectories τ (i.e., text samples). For example, π′ can be a (possibly noisy) text dataset, or a set of text samples produced by other generation models, resulting in off-policy training. We can also set π′ to be the current generation model πθ to be learned, resulting in onpolicy training. In practice, we could first train the model with only off-policy data for warming up, and then continue with joint on- and off-policy training to further maximize the reward. 
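Below is a compact sketch of the single- and multi-step PCL objectives (Eqs. 9 and 11) under the Q-as-logits parameterization, with V computed as a log-sum-exp over the vocabulary and log πθ as the corresponding log-softmax. Tensor shapes, padding handling, and the exact discount indexing are assumptions (with γ = 1, the common setting, the off-by-one in the exponent is immaterial).

import torch

def soft_v(logits):
    # V(s_t) = logsumexp_a Q(s_t, a); logits: (B, T, V) -> (B, T)
    return torch.logsumexp(logits, dim=-1)

def log_pi(logits, actions):
    # log pi(a_t | s_t) = Q(s_t, a_t) - V(s_t), i.e., the log-softmax of the logits
    q_sa = logits.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
    return q_sa - soft_v(logits)

def pcl_single_step(logits, target_logits, actions, rewards, gamma=1.0):
    # Eq.(9): 0.5 * (-V_bar(s_t) + gamma * V_bar(s_{t+1}) + r_t - log pi(a_t | s_t))^2
    v_bar = soft_v(target_logits)                                              # frozen target values
    v_bar_next = torch.cat([v_bar[:, 1:], torch.zeros_like(v_bar[:, :1])], 1)  # terminal value = 0
    resid = -v_bar + gamma * v_bar_next + rewards - log_pi(logits, actions)
    return 0.5 * (resid ** 2).mean()

def pcl_multi_step(logits, target_logits, actions, final_reward, gamma=1.0):
    # Eq.(11) with t = 0: 0.5 * (-V_bar(s_0) + gamma^(T-1) * r_T - sum_l gamma^l log pi(a_l | s_l))^2
    B, T, _ = logits.shape
    discounts = gamma ** torch.arange(T, dtype=logits.dtype, device=logits.device)
    disc_logp = (discounts * log_pi(logits, actions)).sum(dim=1)               # (B,)
    resid = -soft_v(target_logits)[:, 0] + gamma ** (T - 1) * final_reward - disc_logp
    return 0.5 * (resid ** 2).mean()

In a training step one would combine the two losses on both off-policy and on-policy batches and softly update the target parameters, as summarized in Algorithm 1 below.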
Algorithm 1 Efficient Soft Q-Learning for Text Generation Input: Qθ function (i.e., generation model logit function fθ in Eq.1) Reward function r(s, t) Training examples D (for off-policy updates; optional) 1: Initialize θ and target model parameters θ̄ 2: repeat 3: Draw a batch of off-policy samples {τoff} ∼ D 4: Draw a batch of on-policy samples {τon} by decoding with policy πθ(at | st) (Eq.6) 5: Compute Qθ(st, at) values (i.e., the model logits) and target Qθ̄(st, at) for (st, at) ∈ {τoff} ∪ {τon} 6: Compute the objectives in Eqs.(9) and (11) 7: Update the model parameters θ via gradient descent 8: Update the target model parameters θ̄ by θ̄ ← ρθ̄ + (1− ρ)θ with update rate ρ 9: until convergence Output: The trained Qθ∗ function and the induced generator πθ∗ 4 APPLICATIONS AND EXPERIMENTS 4.1 LEARNING FROM NOISY (NEGATIVE) TEXT The popular MLE algorithm learns by (blindly) imitating training data. However, it is often expensive to curate clean quality data. It is thus highly desirable to be able to learn from data with noises, or even negative examples. With the guidance of task metrics (rewards), the model can even learn to “outperform” the training data and achieve desired generation behaviors. To this end, we consider the task of entailment generation (Pasunuru & Bansal, 2017). Given a sentence (premise), the goal is to generate a new sentence (hypothesis) that logically follows the premise. For example, given source sentence “Sophie is walking a dog outside her house”, the hypotheses “Sophie is outdoor” is considered entailed, but “Sophie is inside her house” is not and even is a negative (contradictive) sentence. Setup (more in the appendix §A.2.1). We sub-sampled 50k training examples from the SNLI dataset (Bowman et al., 2015), a commonly used entailment classification dataset. The hypotheses have an average entailment probability of only 50%, and over 2/5 of them less than 20% (negative/contradictive examples). This poses a significant challenge for the models to learn from the noises. The rewards used in RL algorithms include (1) the entailment score of the generation measured by a robust entailment classifier (Nie et al., 2020), (2) the log-likelihood of the generation as an indicator of language quality measured by a GPT-2 language model (Radford et al., 2019), and (3) BLEU score w.r.t the input premises as another language quality reward that avoids trivial outputs. We sum together all rewards with weights 1.0. We compare our approach with a broad range of baselines, including (1) the standard MLE training (MLE); (2) MLE+reward, where we use the reward function to filter examples; (3) joint MLE and PG training with MLE initialization (MLE+PG), where we initialize the model with MLE training, then train it with combined MLE and PG losses; previous text-generation RL algorithms including (4) MIXER (Ranzato et al., 2016), (5) Self-critic (Rennie et al., 2017), and (6) one of the latest methods GOLD-s (Pang & He, 2021) which is a pure off-policy method based on importancesampling PG. To ablate the effect of multi-step training (§3.2), we additionally compare with a simplified variant of our approach that uses only vanilla single-step PCL training (SQL(single)). In the appendix (§A.1.1) we compare and discuss more baselines such as MLE weighted by rewards. We evaluate generation results in terms of entailment rate, language quality (perplexity), and diversity which is measured by the Shannon entropy over unigrams and bigrams (H1, H2) (Gehrmann et al., 2021). 
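The diversity measure mentioned above (Shannon entropy over unigrams and bigrams, H1/H2) can be computed as follows; this is a generic sketch of the metric rather than the GEM-metrics implementation, and the log base and tokenization are assumptions.

import math
from collections import Counter

def shannon_entropy_ngrams(texts, n=1):
    """H_n: entropy (in bits) of the n-gram distribution over a list of tokenized outputs."""
    counts = Counter()
    for tokens in texts:                                   # each item: a list of tokens
        counts.update(zip(*(tokens[i:] for i in range(n))))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values()) if total else 0.0

# H1 and H2 for a toy set of generations
samples = [["a", "dog", "runs"], ["a", "cat", "sleeps"]]
h1 = shannon_entropy_ngrams(samples, n=1)
h2 = shannon_entropy_ngrams(samples, n=2)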
Since text generation models intrinsically trade off diversity and quality (Caccia et al., 2019; Hashimoto et al., 2019), we vary the generation diversity by generating samples via top-p sampling (Holtzman et al., 2019) with different p values, and plot the entailment rate and perplexity against diversity, resp. We also evaluate the samples produced by beam-search decoding. Results. Figure 3 (left) shows the results. First, notice that MLE performs poorly, while MLE+reward improves upon it. This is not surprising as the training data contain noisy/negative examples. Similarly, since the pure off-policy algorithm GOLD-s relies heavily on the data distribution, we observed that it achieves sub-optimal performance. The on-policy MLE+PG with MLE initialization gives better entailment rate. In comparison, our full SQL framework achieves the best entailment-diversity trade-off. The comparison between SQL and SQL(single) highlights the importance of having the multi-step objective which directly uses the end reward rather than bootstrapping intermediate Q-values for supervision. 4.2 Universal ADVERSARIAL ATTACKS We next study the application in text adversarial attacks, where again no supervised data is available. Adversarial attacks is an increasingly important research topic as they reveal models’ vulnerabilities and flaws. This is especially true for universal attacks (Wallace et al., 2019; Atanasova et al., 2020), where we want to generate universal examples that trick the model on all possible inputs. For instance, consider the context of entailment classification. Our goal is to find universal humanreadable hypotheses that are going to be classified as “entailment” with as high probability as possible, regardless of the input premises. This is a more challenging setting compared to previous instance-specific attack (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017) where the attack model conditions on a premise and generates an adversarial hypothesis specific to the premise. Setup (more in the appendix §A.2.2). We aim to attack one of the most popular MultiNLI (Williams et al., 2018) entailment classifiers on HuggingFaceHub.2 The attack generation model generates adversarial text without conditioning on any inputs so that the generated attacks are universal to all premises. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and the off-policy updates for our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the challenging universal attack setting (Morris et al., 2020; 2https://github.com/pytorch/fairseq/tree/master/examples/roberta Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). We use similar settings as in §4.1 to explore the diversity-quality trade-off by plotting the entailment rate and perplexity against diversity, respectively. The entailment classifier to be attacked is used as entailment score reward functions. We additionally include a token-level repetition penalty reward for readability. Results. Figure 3 (right) shows the results, and Table 4 (appendix) shows samples. We can see that SQL outperforms MLE+PG consistently across different diversity values. The outputs from MLE+PG are not diverse even with high p’s, indicating the model collapses and can only generate a small set of unique adversarial examples. 
The model by SQL discovers the pattern “saint-pierre-et-saint-paul” (an entity name), and exploits this to generate samples with high universal entailment rate. 4.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS A reward function does not just have to be a metric like the BLEU score, but also a complicated pipeline that eventually returns a score. To demonstrate this, we consider the emerging task of prompting a large pretrained LM for controllable generation (Hu et al., 2017; Radford et al., 2019; Brown et al., 2020). The goal is to learn to generate text prompts that steer the LM to generate sentences of certain desired attributes (e.g., topics). The problem of controlling the generation of pretrained LMs was previously approached through specialized algorithms such as modifying the LM hidden states during decoding (Dathathri et al., 2020; Krause et al., 2020; Qin et al., 2020). Here we show that prompts offer an easier, faster, more effective way for controlled generation. Learning to generate/tune prompts is gaining increasing attention recently. It side-steps the needs for expensive LM fine-tuning, and adapts LMs to new scenarios with prompt as the (computefriendly) interface. Most existing approaches (Wallace et al., 2019; Li & Liang, 2021; Lester et al., 2021) rely on gradient backpropagation and are applicable only when the whole training pipeline is differentiable. This does not hold for the text generation setting, as illustrated in Figure 4. In contrast, the RL framework is generally applicable to any differentiable or discrete pipelines. Setup (more in the appendix §A.2.3). Following (Dathathri et al., 2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences for the pretrained LM to generate continuation sentences. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt length 5, 10, and 15. As ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without on-policy exploration), denoted as SQL(off), and compare it with vanilla MLE training. Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model3 as the pretrained LM to be controlled. For rewards, we use the topic accuracy of the continuation sentences measured by a zero-shot classifier, plus the the log-likelihood of continuation sentences as the language quality reward measured by a distilled GPT-2. Results Figure 5 shows the topic accuracy of the controlled LM outputs averaged across the 7 topics, and Table 1 shows the respective language quality results. More detailed topic accuracy results and samples are provided in the appendix (§A.1.3) (where GeDi obtained low accuracy on 2 of the 7 topics, possibly because the topic tokens are tokenized into two subwords for which the model released by the authors was not specifically trained). We can see that the prompts generated by our SQL cause the LM to generate sentences with high topic accuracy while maintaining low perplexity in most settings. 
Increasing the prompt length positively impacts the topic accuracy, which makes sense because longer prompts give more flexible for steering the LM. The comparison between MLE and SQL(off) shows that the off-policy component of SQL is better than standard MLE training, as it incorporates reward signals instead of just blindly following the (noisy) data. Next, comparing with the previous steered decoding such as PPLM and GeDi, we can see the prompt-based control trained with RL achieves better trade-off between topic accuracy and language quality. Moreover, once a prompt is produced, we can use the pretrained LM to generate text of desired topics efficiently, with the same time cost as standard non-controlled decoding. In comparison, the dedicated steered decoding is often orders-of-magnitude slower, as shown in Table 2. 5 RELATED WORK Standard RL algorithms maximizing the external rewards can sometimes be over-sensitive to the randomness in the environment. Recent works have considered maximum-entropy RL (MaxEnt RL) extensions, such as the soft Q-learning (SQL) (Haarnoja et al., 2017; Nachum et al., 2017; Schulman et al., 2017), that maximize the entropy of policy besides the rewards, and have demonstrated substantial improvement in robotic and game control (Ziebart et al., 2008; O’Donoghue et al., 2017; Nachum et al., 2018; Eysenbach & Levine, 2021). Our work is the first to adapt SQL and its advanced variants (in particular the path consistency learning (Nachum et al., 2017)) to the challenging text generation problem and show significant results on diverse applications. Applying RL for text generation has been discussed in alleviating the exposure bias problem and optimizing task metrics (Li et al., 2016; Wu et al., 2016; Rennie et al., 2017; Paulus et al., 2018; Chen & Bansal, 2018). For example, Ranzato et al. (2016) used the REINFORCE algorithm (Williams, 1992), and Bahdanau et al. (2016) used the actor-critic algorithm; Guo et al. (2018) and Shi et al. (2018) tried to relieve the sparsity problem via hierarchical and inverse RL methods, resp. They are all on-policy RL algorithms with the need of pretraining their models using MLE. Another line of work focused mostly on using only off-policy data, often for offline training of chatbots (Jaques et al., 2020; Kandasamy et al., 2017; Zhou et al., 2017; Pang & He, 2021). As a result, the opportunity of directly improving the reward (as in on-policy updates) for other rich tasks is missed. Our proposed framework combines on- and off-policy training, and further offers solutions for efficient training from scratch in the presence of large action space and sparse sequence-level reward in text generation. 6 CONCLUSION We develop a new RL formulation for text generation based on softQ-learning and path consistency learning. We conduct experiments on learning with noisy and negative data, black box adversarial attack, prompting a pretrained language model for controllable generation, and finally, on standard supervised tasks. The RL formulation opens up enormous new opportunities to integrate more advances made in the fertile RL literature to improve text and other sequence generation problems. 3https://huggingface.co/distilgpt2 7 ETHICS STATEMENT This work develops a new RL formulation for text generation. While we demonstrate the framework in four applications, it could be adapted to other (emerging) applications. 
One major component in these applications is the design of the reward function, which influences the behavior of the trained agent. While we believe the MaxEnt RL framework is more robust against reward misspecification (Eysenbach & Levine, 2021), the potential failures of sub-optimal reward functions are widely known and discussed.4 To this end, deploying this model to the wild requires careful and extensive examination, using tools such as Ribeiro et al. (2020). Further, we highlight the application for (black-box) adversarial attacks in the paper, with the intention of using adversarial attacks to understand the model’s inner workings. That being said, this could potentially be misused to conduct malicious attacks against systems. Hence, users of this framework might want to conduct adversarial attacks against their own models to avoid being attacked by other people with bad intentions. 8 REPRODUCIBILITY STATEMENT We provide code in the supplementary materials, and additional experiment details in the appendix. A APPENDIX A.1 APPLICATIONS AND EXPERIMENTS A.1.1 LEARNING FROM NOISY (NEGATIVE) TEXT Please see Table 5 for beam search results, Figure 6 for additional results for MLE+reward, and Table 7 for examples. A.1.2 Universal ADVERSARIAL ATTACKS Please see Table 4 for examples. A.1.3 PROMPT GENERATION FOR CONTROLLING PRETRAINED LANGUAGE MODELS Please see Table 6 for detailed results breakdown, and Table 8-11 for examples. Examples are in the format: topic: [prompt] input sentence generated text. A.1.4 SUPERVISED TEXT GENERATION TASKS Finally, we conduct experiment on standard generation tasks where clean supervised data is available. The study is to examine the capabilities of the proposed RL method to train a text generation model from scratch, which has been considered as exceedingly challenging for previous RL algorithms. Setup. We study on two tasks, E2E (Novikova et al., 2017) and CommonGEN (Lin et al., 2020), and use the respective datasets pre-processed by (Gehrmann et al., 2021) which allow sequenceto-sequence modeling with standard transformers. We run four sets of methods: the standard MLE training (MLE); PG training from scratch (PG); joint MLE and PG training, with MLE ini- tialization (MLE+PG); and our SQL training from scratch with both off-policy and on-policy updates (SQL). We use the standard BLEU as reward. We additionally investigate the training stability and sensitivity w.r.t hyperparameters, in particular the scale of reward. To this end, for MLE+PG and SQL, we vary the reward scale in {1, 10, 50, 100, 500, 1000} and evaluate the respective performance under different scales. Results. Table 3 shows the performance on E2E of different models whose hyperparameters are picked using the validation set. We can see the proposed SQL that trains models from scratch achieves competitive results with the common MLE and MLE+PG. In contrast, the PG algorithm alone without MLE fails the training. Figure 7 (left) shows the respective training curves (on the validation set), demonstrating that SQL converges in an efficient and stable way as MLE. We further demonstrate the sensitive of MLE+PG and SQL w.r.t the reward scale as a key hyperparameter. Figure 7 (middle and right) shows the training curves of the two methods with varying reward scales. We can see SQL is significantly more robust as reward scale changes, while MLE+PG tends to collapse with improper reward scale configurations. 
A.2 SETUP DETAILS Our evaluation follows the GEM Benchmark (Gehrmann et al., 2021) when applicable,5 and otherwise same with the reward function used in training. We use a transformer model (Vaswani et al., 2017) based on Texar-Pytorch (Hu et al., 2019) by default, with 64 hidden dimension, 3 blocks, and 4 heads. For experiments that involve policy gradient training, we initialize the model with maximum likelihood training by default unless specified otherwise. We train soft Q-learning model from scratch with both off-policy (using data) and on-policy (using samples) by default except in §4.1 and §4.3, in which we find it beneficial to warm-up the model with just off-policy training. We apply similar tuning budgets to both soft Q-learning model, and policy-gradient (mostly the reward scale and top-k), based on performance on the validation dataset and sample qualities. Reward Functions We use the robust entailment classifier (Nie et al., 2020) in §4.1,6 one of the most used entailment classifiers on HuggingFaceHub in §4.2.7 and a zero-shot classifier based on BART (Lewis et al., 2020) to compute the topic score in §4.3.8 To compute perplexities, we use a GPT-2 model (Radford et al., 2019) fine-tuned on the corresponding datasets for computing perplexity in §4.1 and 4.2, and a distilled GPT-2 model in §4.3 without fine-tuning.9 We simply set reward weights to 1.0, except in §4.2, where we changed the entailment weight to 0.5, log-likelihood and repetition penalty weight to 5.0. A.2.1 SETUP DETAILS: §4.1 We study using the SNLI dataset (Bowman et al., 2015), a dataset commonly used in training an entailment classifier. The original dataset contains (premise, hypothesis) sentence pairs, where the hypothesis may or may not entail the premise. We sub-sampled 50, 000 training examples from the corpus such that the hypotheses have an average entailment probability of only 50% in terms of the premises, and over 2/5 examples have entailment probabilities less than 20%, which can be seen as negative (contradictive) examples. The resulting training set poses a significant challenge for the models to learn from the noises. The RL algorithms (including PG and ours) permit us to plug in arbitrary reward functions to drive learning. Based on the goal of the task, we use the following intuitive rewards to ensure entailment accuracy and language quality: (1) a robust entailment classifier (Nie et al., 2020) that measures the entailment score of a generation in terms of the input premise, (2) a GPT-2 language model (Radford et al., 2019) that measures the log-likelihood of the generation as an indicator of language quality, and (3) BLEU score w.r.t the input premises as another language quality reward that avoids trivial outputs. We sum together all rewards with weights 1.0. A.2.2 SETUP DETAILS: §4.2 We study the task of attacking an entailment classifier. In particular, we aim to attack one of the most popular entailment classifiers on HuggingFaceHub.10 The attack generation model generates adversarial text without conditioning on any inputs so that the generated attacks are universal to all premises. The generation model is trained with mostly the same setting as in §4.1, where the entailment classifier to be 5https://github.com/GEM-benchmark/GEM-metrics 6https://huggingface.co/ynie/roberta-large-snli_mnli_fever_anli_R1_R2_ R3-nli 7https://github.com/pytorch/fairseq/tree/master/examples/roberta. This classifier is ranked #1 (as of May 20, 2021) based on https://huggingface.co/models?search=nli. 
8https://huggingface.co/facebook/bart-large-mnli 9https://huggingface.co/distilgpt2 10https://github.com/pytorch/fairseq/tree/master/examples/roberta, which is ranked #1 as of May 20, 2021 based on https://huggingface.co/models?search=nli. attacked is used as entailment score reward functions. Besides, we additionally include a tokenlevel repetition penalty reward, which empirically benefits readability. Finally, we use the MultiNLI dataset (Williams et al., 2018) which includes more diverse examples than the SNLI used above. We compare our SQL with MLE+PG. We use all hypotheses in the MultiNLI dataset as the training data for the MLE training in MLE+PG and the off-policy updates for our SQL. We do not compare with previous specialized adversarial text attack methods, because they either are not applicable to the universal attack setting (Morris et al., 2020; Jin et al., 2020; Ebrahimi et al., 2017), or were not designed to generate human-readable sentences (Wallace et al., 2019). Besides, it is worth noting that the general RL algorithms have an additional advantage of doing black-box attacks. That is, the algorithms only require the ability to query the entailment classifier for entailment probability, without need of knowing the internal structure of the classifier (e.g., for computing gradients) as in previous attack algorithms (Ebrahimi et al., 2017; Wallace et al., 2019). For top-p sampling results, we sample a hypothesis for each premise and measure the average attack rate across the dataset. This is because sampling multiple hypotheses, each for all premises, and measure performance are expensive. Since the hypotheses are sampled input-independently, this should be a good approximation. A.2.3 SETUP DETAILS: §4.3 Following (Dathathri et al., 2019), we aim to control the generation to have one of 7 topics (e.g., “science”); the generated prompt is prepended to one of 20 input sentences (Figure 4) for the pretrained LM to generate continuation sentences. There is no direct supervision data available for training the prompt generator. We randomly create some noisy text as the training data for MLE baselines below and for off-policy updates for our algorithm. Specifically, the noisy text is created by sampling keywords and topics from the list used in (Dathathri et al., 2020) and a paraphrase generation model. Figure 4 shows the architecture of prompt-based controllable generation. We compare our SQL method with MLE+PG as before. At training time, for each generated prompt sample, the pretrained LM generates 2 continuation sentences for evaluating average reward. We use a zero-shot classifier to evaluate the topic accuracy of the continuation sentences. That is, we do not assume access to classifiers pretrained on topic-specific sentences, because generating such topic-specific sentences is the goal of the task in the first place. We additionally use an LM to evaluate the log-likelihood of continuation sentences for measuring language quality. Since the prompt length could impact the generated sentences, we conducted experiments with maximum prompt length 5, 10, and 15. As ablation study, we also evaluate the SQL algorithm with only off-policy updates (i.e., without onpolicy exploration), denoted as SQL(off), and compare it with vanilla MLE training. At test time, given a topic, the trained prompt generator produces one prompt using beam search decoding. For each generated prompt, the pretrained LM generates 100 sentences using top-k decoding (with k = 50) for evaluation. 
Finally, we also compare with two specialized controllable generation techniques based on pretrained LMs, namely PPLM (Dathathri et al., 2019) and GeDi (Krause et al., 2020), following similar procedures using their open-sourced code. We use a distilled GPT-2 model11 as the pretrained LM to be controlled. We use the paraphrase generation model based on Zhang et al. (2019).12 During decoding, we include no repeat ngram size= 2, which improves readability.13 A.3 THE SOFT Q-LEARNING FRAMEWORK A.3.1 COMPARISON WITH MLE OBJECTIVE It is interesting to take a closer look at the above objective and compare with the common MLE training. Specifically, we notice the relations between the optimal Q∗, V ∗, and A∗ functions: A∗ (st, at) = Q ∗(st, at) − V ∗(st) = rt + γV ∗(st+1) − V ∗ (st), where the first equation is the definition of A∗ (see Eq.7) and the second equation is due to Eqs.(12) and (7). We thus can see the regression target in the above objective as an approximation to the advantage function: 11https://huggingface.co/distilgpt2 12https://huggingface.co/tuner007/pegasus_paraphrase 13https://huggingface.co/blog/how-to-generate Ãθ̄ (st, at) := −Vθ̄ (st) + γVθ̄ (st+1) + rt. Therefore, by optimizing the regression objective, log πθ(at|st), which is the log probability of generating token at given preceding tokens st, is encouraged to match the approximate advantage value Ãθ̄ (st, at), no more and no less. This is different from the objective of MLE where the model is trained to (blindly) increase the probability of the observed token at given st and decrease the probability of the rest. A.3.2 VANILLA TRAINING WITH TEMPORAL CONSISTENCY Much like the Bellman temporal consistency in standard Q-learning (Eq.3), in SQL, the optimal action-value function follows the softmax form of the temporal consistency (Ziebart et al., 2008; Ziebart, 2010; Fox et al., 2016; Nachum et al., 2017): Q∗ (st, at) = rt + γ log ∑ at+1 expQ∗ (st+1, at+1) . (12) We thus can derive a regression objective similar to the standard Q-learning (Eq.4): LSQL, vanilla(θ) = Eπ′ [ 0.5 · ( rt + γ log ∑ at+1 expQθ̄ (st+1, at+1)−Qθ (st, at) )2] . (13) Recall that π′ is an arbitrary behavior policy (e.g., data distribution), and Qθ̄ is the target Q-network which is a slow copy of the Qθ to be learned and is held fixed during the gradient updates. However, the above objective is inefficient due to exact the same reasons as in standard Q-learning discussed earlier, namely the unstable per-step bootstrapping-style training with sparse reward signals, plus the slow updates w.r.t only one token at out of the large vocabulary (action space).
1. What is the main contribution of the paper in the field of text generation? 2. What are the strengths and weaknesses of the proposed method compared to previous works? 3. How significant is the sparsity problem in the chosen three text generation tasks? 4. Why did the authors choose to use soft Q-learning and path consistency learning, and how do these techniques relieve the sparsity reward and large action space problems? 5. What is the novelty of the paper, and how does it differ from other reinforcement learning-based text generation methods? 6. Are there any limitations or areas for improvement in the paper's approach or experiments?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a new text generation framework based on the existing soft Q-learning of reinforcement learning. The experiments demonstrate that the proposed text generation method achieves superior performance to baselines. Review This paper models text generation as a reinforcement learning problem, and uses the existing reinforcement-learning techniques (soft Q-learning and path consistency learning) to relieve the sparsity reward and large action space problems in the existing reinforcement learning based text generation methods. Strong Points: The paper is well-written. The illustration of this paper is clear and good. The proposed methods achieve moderate improvement over previous methods. Weak Points: The novelty of this paper seems limited as the techniques used in this paper are directly from the reinforcement learning area (soft Q-learning and path consistency learning). Considering that the reinforcement learning methods had been widely used in the text generation methods, e.g, MIXER [1] SeqGAN [2], the idea of using reinforcement learning methods for the text generation seems less novel. One of the motivations of this paper is to relieve the sparsity problem in text generation. However, this problem had been clearly pointed by the previous text generation methods, e.g. LeakGAN [3] and IRLGAN [4]. Specifically, IRLGAN tried to relieve this problem using the inverse reinforcement learning method, which is highly related to the submission. However, the paper does not compare with them and even does not mention them. This is limiting. How significant is the sparsity problem in the chosen three text generation tasks? If the authors emphasise the sparsity reward problem is significant in the text generation, the related experiments, e.g. generating different lengths of texts, should be performed. However, the related analysis is absent in the paper. One of the baselines in this paper is MLE+PG. There are many MLE+PG text generation methods, but authors do not clearly state which one they use in the paper, which leads to confusion. The paper shows the generation results of the prompt generation which is an interesting application of text generation. However, there are many meaningless and less fluent generated prompts in the shown examples, e.g. macintoshintoshintoshintosh in Table 7. What causes this phenomenon? In addition to the prompts results, It would be better to show generated examples of other tasks. [1] Ranzato, M. A., Chopra, S., Auli, M., & Zaremba, W. (2015). Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732. [2] Yu, L., Zhang, W., Wang, J., & Yu, Y. (2017, February). Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI conference on artificial intelligence (Vol. 31, No. 1). [3] Guo, J., Lu, S., Cai, H., Zhang, W., Yu, Y., & Wang, J. (2018, April). Long text generation via adversarial training with leaked information. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 32, No. 1). [4] Shi, Z., Chen, X., Qiu, X., & Huang, X. (2018, July). Toward diverse text generation with inverse reinforcement learning. In Proceedings of the 27th International Joint Conference on Artificial Intelligence (pp. 4361-4367).
ICLR
Title DeepFIB: Self-Imputation for Time Series Anomaly Detection Abstract Time series (TS) anomaly detection (AD) plays an essential role in various applications, e.g., fraud detection in finance and healthcare monitoring. Due to the inherently unpredictable and highly varied nature of anomalies and the lack of anomaly labels in historical data, the AD problem is typically formulated as an unsupervised learning problem. The performance of existing solutions is often not satisfactory, especially in data-scarce scenarios. To tackle this problem, we propose a novel self-supervised learning technique for AD in time series, namely DeepFIB. We model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with the rest. Considering the two common anomaly shapes (pointor sequence-outliers) in TS data, we implement two masking strategies with many self-generated training samples. The corresponding self-imputation networks can extract more robust temporal relations than existing AD solutions and effectively facilitate identifying the two types of anomalies. For continuous outliers, we also propose an anomaly localization algorithm that dramatically reduces AD errors. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art methods by a large margin, achieving up to 65.2% relative improvement in F1-score. 1 INTRODUCTION Anomaly detection (AD) in time series (TS) data has numerous applications across various domains. Examples include fault and damage detection in industry (Hundman et al., 2018), intrusion detection in cybersecurity (Feng & Tian, 2021), and fraud detection in finance (Zheng et al., 2018) or healthcare (Zhou et al., 2019), to name a few. Generally speaking, an anomaly/outlier is an observation that deviates considerably from some concept of normality (Ruff et al., 2021). The somewhat “vague” definition itself tells the challenges of the AD problem arising from the rare and unpredictable nature of anomalies. With the lack of anomaly labels in historical data, most AD approaches try to learn the expected values of time-series data in an unsupervised manner (Bl’azquez-Garc’ia et al., 2021). Various techniques use different means (e.g., distance-based methods (Angiulli & Pizzuti, 2002), predictive methods (Holt, 2004; Yu et al., 2016; Deng & Hooi, 2021) or reconstruction-based methods (Shyu et al., 2003; Malhotra et al., 2016; Zhang et al., 2019; Shen et al., 2021)) to obtain this expected value, and then compute how far it is from the actual observation to decide whether or not it is an anomaly. While existing solutions have shown superior performance on some time series AD tasks, they are still far from satisfactory. For example, for the six ECG datasets in (Keogh et al., 2005), the average F1-score of state-of-the-art solutions (Kieu et al., 2019; Shen et al., 2021) with model ensembles are barely over 40%. Other than the TS data’ complexity issues, one primary reason is that the available data is often scarce while deep learning algorithms are notoriously data-hungry. Recently, self-supervised learning (SSL) that enlarges the training dataset without manual labels has attracted lots of attention, and it has achieved great success in representation learning in computer vision (Zhang et al., 2016; Pathak et al., 2016; Chen et al., 2020), natural language processing (Devlin et al., 2019), and graph learning (Hu et al., 2020) areas. 
There are also a few SSL techniques for time series analysis proposed in the literature. Most of them (Falck et al., 2020; Saeed et al., 2021; Fan et al., 2020) craft contrastive TS examples for classification tasks. (Deldari et al., 2021) also leverages contrastive learning for change point detection in time series. While interesting, the above SSL techniques do not apply to the AD task because detecting anomalies in time series requires fine-grained models at the element level. In this work, inspired by the context encoder for visual feature learning (Pathak et al., 2016) and the BERT model for language representation learning (Devlin et al., 2019), we propose a novel self-supervised learning technique for time series anomaly detection, namely DeepFIB. To be specific, we model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with other elements. This is achieved by revising the TS forecasting model SCINet (Liu et al., 2021) for the TS imputation task, in which the masked elements are regarded as missing values for imputation. Such self-imputation strategies facilitate generating a large amount of training samples for temporal relation extraction. As anomalies in time series manifest themselves as either discrete points or subsequences (see Fig. 1), correspondingly, we propose two kinds of masking strategies and use them to generate two pre-trained models. They are biased towards recovering from point-wise anomalies (DeepFIB-p model for point outliers) and sequence-wise anomalies (DeepFIB-s model for continuous outliers), respectively. To the best of our knowledge, this is the first SSL work for time series anomaly detection. Generally speaking, AD solutions have difficulty detecting sequence-wise anomalies because it is hard to tell the real outliers from their neighboring normal elements due to their interplay. To tackle this problem, we propose a novel anomaly localization algorithm to locate the precise start and end positions of continuous outliers. As a post-processing step, we conduct a local search after determining the existence of sequence-wise anomalies within a timing window with our DeepFIB-s model. By doing so, the detection accuracy for continuous outliers is significantly improved. We conduct experiments on several commonly-used time series benchmarks, and results show that DeepFIB consistently outperforms state-of-the-art solutions. In particular, the average F1-score of DeepFIB for the six ECG datasets is more than 62%, achieving nearly 50% relative improvement. 2 RELATED WORK In this section, we mainly discuss recent deep learning-based time series AD approaches. A comprehensive survey on the traditional techniques can be found in (Gupta et al., 2014). Existing anomaly detection approaches can be broadly categorized into three types (see Fig. 2): (i) Density-based methods consider the normal instances compact in the latent space and identify anomalies with one-class classifiers or likelihood measurements (Su et al., 2019; Shen & Kwok, 2020; Feng & Tian, 2021). (ii) Reconstruction-based methods use recurrent auto-encoders (RAE) (Malhotra et al., 2016; Yoo et al., 2021; Kieu et al., 2019; Shen et al., 2021; Zhang et al., 2019) or deep generative models such as recurrent VAEs (Park et al., 2018) or GANs (Li et al., 2019; Zhou et al., 2019) for reconstruction. The reconstruction errors are used as anomaly scores. 
(iii) Prediction-based methods rely on predictive models (Bontemps et al., 2016; Deng & Hooi, 2021; Chen et al., 2021) and use the prediction errors as anomaly scores. While the above methods have been successfully used in many real-world applications, practical AD tasks still have lots of room for improvement, especially in data-scarce scenarios. Unlike existing AD approaches, the proposed mask-and-impute method in DeepFIB exploits the unique property of TS data that missing values can be effectively imputed (Fang & Wang, 2020). By constructing many training samples via self-imputation, DeepFIB extracts robust temporal relations of TS data and improves AD accuracy dramatically. Moreover, for the more challenging sequence-wise anomalies, most prior work assumes a user-defined fixed length for anomaly subsequences (Cook et al., 2020) or simplifies the problem by stating that all the continuous outliers have been correctly detected as long as one of the points is detected (Su et al., 2019; Shen & Kwok, 2020). In DeepFIB, we lift these assumptions and try to locate the exact position of sequence-wise anomalies. 3 METHOD In this section, we first introduce the overall self-imputation framework in DeepFIB and then discuss the separate AD models for detecting point- and sequence-wise anomalies with different mask-and-impute strategies, namely DeepFIB-p and DeepFIB-s, respectively. Next, we describe the TS imputation method used in DeepFIB, based on an existing TS forecasting approach, SCINet (Liu et al., 2021). Finally, we present our anomaly localization algorithm for continuous outliers. 3.1 SELF-IMPUTATION FOR ANOMALY DETECTION Given a set of multivariate time series wherein $X_s = \{x_1, x_2, \ldots, x_{T_s}\} \in \mathbb{R}^{d \times T_s}$ ($T_s$ is the length of the $s$-th time series $X_s$), the objective of the AD task is to find all anomalous points $x_t \in \mathbb{R}^d$ ($d$ is the number of variates) and anomalous subsequences $X_{t,\tau} = \{x_{t-\tau+1}, \ldots, x_t\}$. The critical issue in solving the above problem is obtaining an expected value for each element in the TS, which requires a large amount of training data to learn from, especially for deep learning-based solutions. However, time-series data are often scarce, significantly restricting the effectiveness of learning-based AD solutions. DeepFIB is a simple yet effective SSL technique to tackle the above problem. We model this problem as a Fill In the Blank game by randomly masking some elements in the TS and imputing them with the rest. Such self-imputation strategies generate many training samples from every time series and hence dramatically improve temporal learning capabilities. In particular, we propose to train two self-imputation models (Fig. 3), biased towards point- and sequence-wise anomalies in the TS data, respectively. • DeepFIB-p model targets point outliers, as shown in Fig. 3(a), in which we mask discrete elements and rely on the local temporal relations extracted from neighboring elements for reconstruction. For each time series $X_s$, we generate $M$ training samples by masking it $M$ times with randomly-selected yet non-overlapping $\frac{d \times T_s}{M}$ elements. • DeepFIB-s model targets continuous outliers, as shown in Fig. 3(b), in which we mask continuous elements and rely on predictive models for reconstruction. For each time series $X_s$, we evenly divide it into $N$ non-overlapping sub-sequences $\{X_{s,i} \in \mathbb{R}^{d \times \frac{T_s}{N}}, i \in [0, N-1]\}$ and generate $N$ training samples by masking one of them each time (see the code sketch after this list). 
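To make the two masking schemes concrete, here is a minimal numpy sketch; it is an illustration under stated assumptions, not the authors' implementation. The function names, the boolean-mask representation (False marks a position to be imputed), and the use of a random permutation for the point masks are our own choices; the sketch only produces the masks, and feeding the masked series to the imputation network is left out.

import numpy as np

def point_masks(x, M):
    # DeepFIB-p style: split all d*Ts positions into M random, non-overlapping
    # groups; each training sample masks one group of discrete elements.
    d, Ts = x.shape
    idx = np.random.permutation(d * Ts)
    groups = np.array_split(idx, M)
    masks = []
    for g in groups:
        mask = np.ones(d * Ts, dtype=bool)
        mask[g] = False                      # False = masked, to be imputed
        masks.append(mask.reshape(d, Ts))
    return masks                             # M boolean masks over x

def sequence_masks(x, N):
    # DeepFIB-s style: divide the series into N non-overlapping sub-sequences
    # along the time axis; each training sample masks one whole sub-sequence.
    d, Ts = x.shape
    bounds = np.linspace(0, Ts, N + 1, dtype=int)
    masks = []
    for i in range(N):
        mask = np.ones((d, Ts), dtype=bool)
        mask[:, bounds[i]:bounds[i + 1]] = False
        masks.append(mask)
    return masks                             # N boolean masks over x

For a series of shape (d, Ts), point_masks(x, M) returns M masks that jointly cover every position exactly once, matching the non-overlapping requirement above, and sequence_masks(x, N) does the same at the sub-sequence level.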
During training, for each time series $X_s$, we obtain a set of non-overlapping imputed segments with the above models and integrate them into a reconstructed time series $\hat{X}_s$ (i.e., $\hat{X}_s^p$ for the DeepFIB-p model and $\hat{X}_s^s$ for the DeepFIB-s model). The training loss for both models is defined as the reconstruction error between the input time series and the reconstructed one: $\mathcal{L} = \frac{1}{T_s} \sum_{t=1}^{T_s} \| x_t - \hat{x}_t \|$ (1), where $x_t$ is the original input value at time step $t$, $\hat{x}_t$ denotes the reconstructed value from the corresponding model, and $\|\cdot\|$ is the L1-norm of a vector. During testing, to detect point outliers with the DeepFIB-p model, we simply use the residual error as the anomaly score, defined as $e_t = \sum_{i=1}^{d} |\hat{x}_t^i - x_t^i|$, and when $e_t$ is larger than a threshold value $\lambda_p$, time step $t$ is regarded as an outlier. In contrast, for continuous outliers, we use the dynamic time warping (DTW) (Sakoe & Chiba, 1978) distance as our anomaly scoring mechanism, which measures the similarity between the input time series $X$ and the reconstructed sequence $\hat{X}$. If $DTW(X, \hat{X})$ is above a threshold value $\lambda_s$, a sequence-wise anomaly is detected. 
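As a concrete reference for the scores just defined, the following is a small numpy sketch, intended as an illustration rather than the authors' code: Eq. (1) as a training loss, the per-step residual score e_t for point outliers, and a plain dynamic-programming DTW used as the sequence-level score. The threshold comparisons against λp and λs are left to the caller, and the same dtw_distance helper is what the localization procedure of Section 3.3 would evaluate repeatedly.

import numpy as np

def reconstruction_loss(x, x_hat):
    # Eq. (1): mean over time of the L1 norm of the per-step residual.
    # x and x_hat have shape (d, Ts).
    return np.mean(np.sum(np.abs(x - x_hat), axis=0))

def point_scores(x, x_hat):
    # Per-step residual e_t = sum_i |x_hat[i, t] - x[i, t]|; a step t is flagged
    # as a point outlier when its score exceeds lambda_p.
    return np.sum(np.abs(x_hat - x), axis=0)

def dtw_distance(a, b):
    # Plain O(n*m) dynamic-programming DTW between two multivariate series of
    # shapes (d, n) and (d, m), with the L1 distance as the local cost; a
    # sequence-wise anomaly is flagged when this distance exceeds lambda_s.
    n, m = a.shape[1], b.shape[1]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum(np.abs(a[:, i - 1] - b[:, j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]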
3.2 TIME SERIES IMPUTATION IN DEEPFIB While the time-series data imputation problem has been investigated for decades (Fang & Wang, 2020), there is still plenty of room for improvement, and various deep learning models have been proposed recently (Cao et al., 2018; Liu et al., 2019; Luo et al., 2019). SCINet (Liu et al., 2021) is an encoder-decoder architecture motivated by the unique characteristics of time series data. It incorporates a series of SCI-Blocks that conduct down-sampled convolutions and interactive learning to capture temporal features at various resolutions and effectively blend them in a hierarchical manner. Considering the highly effective temporal relation extraction capability of SCINet compared to other sequence models, we propose to revise it for the TS imputation task. More details about SCINet can be found in (Liu et al., 2021). To impute the missing elements from the two masking strategies with the DeepFIB-p and DeepFIB-s models, we simply change the supervision for the decoder part accordingly. For point imputation, we use the original input sequence as the supervision of our DeepFIB-p model, making it a reconstruction structure. By doing so, the model concentrates more on the local temporal relations inside the timing window for imputing discrete missing data, as shown in Fig. 5(a). As for continuous imputation, we propose to change SCINet into a bidirectional forecasting structure in our DeepFIB-s model, with the masked sub-sequence as supervision. As shown in Fig. 5(b), the two sub-models, namely F-SCINet and B-SCINet, conduct forecasting in the forward and backward directions, respectively. By doing so, the model can aggregate the temporal features from both directions and learn robust long-term temporal relations for imputing continuous missing data. 3.3 ANOMALY LOCALIZATION ALGORITHM During inference, we use a sliding window with stride $\mu$ to walk through the time series and find anomalies in each window. For sequence-wise anomalies, without knowing their positions a priori, we could mask some normal elements in the window and use the unmasked outliers for prediction (see Fig. 5(b)), thereby leading to mispredictions. To tackle this problem, we propose to conduct a local search for the precise locations of the sequence-wise anomalies. As shown in Fig. 6, the Active window is the current input sequence to the DeepFIB-s model with length $\omega$ ($\omega > \mu$), i.e., $X_t = \{x_t, x_{t+1}, \ldots, x_{t+\omega-1}\}$ at time step $t$. When the DTW distance between the original time series in the Active window and the imputed sequence is above the threshold $\lambda_s$, a sequence-wise anomaly is detected in the current window, and the localization mechanism is triggered. As the sliding window moves along the data stream with stride $\mu$, if no outliers were detected in the previous window, the start position of the sequence-wise anomaly can only lie at the end of $X_t$, i.e., in the window $\{x_{t+\omega-\mu}, \ldots, x_{t+\omega-2}, x_{t+\omega-1}\}$ of length $\mu$. Consequently, by gradually shifting the Active window backward to include one more element of the Buffer window (see Fig. 6) at a time and calculating the corresponding DTW distances $\{e_1, \ldots, e_i, \ldots, e_\mu\}$, we can find the maximum $i$ with $e_i < \lambda_s$, indicating that the element following the Active window shifted by $i$ is the start of the anomaly subsequence. The Anomaly flag is then activated from this position. Similarly, to determine the ending position of the anomaly subsequence, we keep sliding the Active window until we find a window whose DTW distance is smaller than $\lambda_s$, indicating that the ending position lies within $\{x_{t-\mu}, \ldots, x_{t-2}, x_{t-1}\}$. Again, we shift the Active window backward one element at a time and calculate the corresponding DTW distance, until we find the ending position, i.e., the first shift whose DTW distance is larger than $\lambda_s$. 4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following two questions: Does DeepFIB outperform state-of-the-art AD methods (Q1)? How does each component of DeepFIB affect its performance (Q2)? Experiments are conducted on a number of commonly-used benchmark TS datasets, namely 2d-gesture, Power demand, ECG and Credit Card, covering human abnormal behavior detection, power monitoring, healthcare and fraud detection in finance (see Table 1). As the anomalies in 2d-gesture, Power demand, and ECG are mainly sequence outliers, we apply the DeepFIB-s model to these datasets. In contrast, the Credit Card dataset only contains point outliers, and hence we use the DeepFIB-p model on it. To make a fair comparison with existing models, we use the standard evaluation metrics on the corresponding datasets. For 2d-gesture, Power demand and Credit Card, we use precision, recall, and F1-score following (Shen & Kwok, 2020). For the ECG datasets, we use the AUROC (area under the ROC curve), AUPRC (area under the precision-recall curve) and F1-score, following (Shen et al., 2021). To detect anomalies, we use the maximum anomaly score of each sub-model over the validation dataset to set the threshold. More details on experimental settings, additional experimental results and discussions (e.g., hyperparameter analysis) are presented in the supplementary materials. 4.1 Q1: COMPARISON WITH STATE-OF-THE-ART METHODS 2d-gesture and Power demand: The results in Table 2 show that the proposed DeepFIB-s achieves 16.55% and 50.18% F1-score improvements on 2d-gesture and Power demand, respectively, compared with the second-best methods. For 2d-gesture, the available training data is limited and the temporal relations contained in the data are complex (body jitter), making it difficult to obtain a discriminative representation in AD models. DAGMM (Zong et al., 2018) shows low performance since it does not consider the temporal information of the time-series data at all. 
As for the AD solutions based on generative models (EncDecAD (Malhotra et al., 2016), LSTM-VAE (Park et al., 2018), MAD-GAN (Li et al., 2019), AnoGAN (Schlegl et al., 2017), BeatGAN (Zhou et al., 2019), OmniAnomaly (Su et al., 2019)), they usually require a large amount of training data, which limits their performance in data-scarce scenarios. Compared to the above methods, the encoder-decoder architecture MSCRED (Zhang et al., 2019) is relatively easier to train and its AD performance is considerably higher. Moreover, the recent THOC (Shen & Kwok, 2020) work further improves AD performance by fusing multi-scale temporal information to capture the complex temporal dynamics. The proposed DeepFIB-s model outperforms all the above baseline methods since the proposed self-imputation technique allows the model to learn more robust temporal relations from many more self-generated training samples. Notably, we also observe that the precision of DeepFIB-s dominates the other baselines. We attribute it to the anomaly localization algorithm, which can locate the anomaly's precise start and end positions, significantly reducing the false positive rate. For Power demand, the data contains many contextual anomaly¹ subsequences (see Fig. 7). It is quite challenging for existing AD approaches to learn such context information by extracting temporal features from the entire time series as a whole. In contrast, the proposed sequence-wise masking strategy facilitates learning different kinds of temporal patterns, which is much more effective in detecting such contextual anomalies. As shown in Table 2, the recall of our DeepFIB-s model almost reaches 100%, indicating that all anomalies have been detected. The precision is not the best, and we argue that some of the false positives in fact result from the poorly labeled test set (see our supplementary material). ECG(A-F): Compared with the (A), (B), (C) datasets, (D), (E), (F) are clearly noisy, which significantly affects the performance of the anomaly detectors. Nevertheless, Table 3 shows that DeepFIB-s achieves an average 46.3% F1-score improvement across all datasets and an impressive 65.2% improvement on the ECG(F) dataset. There are mainly two reasons: (1) the data is scarce (see Table 1), and existing AD methods are unable to learn robust temporal relations under such circumstances, whereas the self-imputation training strategy together with the bidirectional forecasting mechanism used in our DeepFIB-s model can address this issue well; (2) the proposed DTW anomaly score is more effective in detecting anomalous sequences than the previous point-wise residual scoring (see Section 4.2.1). Notably, the AUPRC of DeepFIB on ECG(E) is slightly lower than that of RAMED (Shen et al., 2021), and we attribute it to the fact that some unlabeled sub-sequences are too similar to labeled anomalies in the raw data. Credit Card: Due to the nature of this application, this dataset is stochastic and the temporal relation is not significant. Therefore, as shown in Table 4, traditional AD solutions without modeling the underlying temporal dependency achieve fair performance, e.g., OCSVM (Ma & Perkins, 2003) and Isolation Forest (Liu et al., 2008). ¹Contextual anomalies are observations or sequences that deviate from the expected patterns within the time series; however, if taken in isolation, they are within the range of values expected for that signal (Cook et al., 2020). 
Besides, AR (Rousseeuw & Leroy, 1987) with a small window size (e.g., 3 or 5) can also identify local change points without considering longer temporal relations. However, the large recall and small precision values show its high false positive rate. The prediction-based method LSTM-RNN (Bontemps et al., 2016) tries to learn robust temporal relations from the data, which is infeasible for this dataset. In contrast, the reconstruction-based method RAE (recurrent auto-encoder) (Malhotra et al., 2016) performs better since it can estimate the outliers based on local contextual information. The proposed DeepFIB-p model outperforms all baseline methods, because it can better extract local correlations with the proposed self-imputation strategy. At the same time, compared to our results on other datasets, the relative 26.3% improvement over the second-best solution (AR) is less impressive and the F1-score of our DeepFIB-p model is still less than 25%. We attribute this to both the dataset complexity and the lack of temporal relations in this dataset. 4.2 Q2: ABLATION STUDY In this section, we first evaluate the impact of various components in our DeepFIB-s and DeepFIB-p models. Next, we replace SCINet with other sequence models to evaluate its impact. 4.2.1 COMPONENT ANALYSIS DeepFIB-p: To demonstrate the impact of the proposed mask-and-impute mechanism in point outlier detection, we add two baseline methods: (1) DeepFIB-p†, wherein we remove the self-imputation strategy; (2) RAE∗, wherein we apply the same mask-and-impute strategy to the baseline method RAE. In Table 4, the performance improvement and degradation of the corresponding variants compared to DeepFIB-p and RAE clearly demonstrate the effectiveness of the proposed self-imputation strategy for point outlier detection. DeepFIB-s: To investigate the impact of different modules of DeepFIB-s, we compare two variants of DeepFIB-s on five datasets. The details of the variants are described below: For w/o. localization, we remove the anomaly localization algorithm from our DeepFIB-s model. The w/o. localization & DTW variant further removes the DTW scoring mechanism, and the anomalies are determined based on point-wise residual errors. As shown in Fig. 8, all these components are essential for achieving high anomaly detection accuracy. At the same time, the proposed self-imputation training strategy is still the main contributor to the performance of our DeepFIB-s model, as the results of w/o. localization & DTW are still much better than those of the 2nd-best solution. Besides, the performance gain from the DTW anomaly scoring indicates that point-wise outlier estimation is not suitable for evaluating sequence-wise anomalies. 4.2.2 IMPACT OF SCINET In our DeepFIB framework, we revise SCINet for time series imputation. To show its impact, we replace it with other sequence models in DeepFIB-s. As we can see in Table 5, compared with TCN (Bai et al., 2018) and LSTM (Hochreiter & Schmidhuber, 1997), using SCINet indeed brings significant improvements, which clearly shows its strong temporal relation extraction capability and the effectiveness of the revised architecture for TS imputation. At the same time, compared to the previous SOTA methods (2nd best) for the corresponding datasets, with the same mask-and-impute strategy, we can still achieve remarkable performance without using SCINet, indicating the effectiveness of the proposed self-imputation concept itself. 
5 CONCLUSION In this paper, we propose a novel self-imputation framework, DeepFIB, for time series anomaly detection. Considering the two types of common anomalies in TS data, we implement two mask-and-impute models biased towards them, which facilitate extracting more robust temporal relations than existing AD solutions. Moreover, for sequence-wise anomalies, we propose a novel anomaly localization algorithm that dramatically improves detection accuracy. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art AD approaches by a large margin, achieving up to 65.2% relative improvement in F1-score.
1. What is the focus and contribution of the paper regarding time series anomaly detection? 2. What are the strengths of the proposed approach, particularly in its self-supervised learning technique and mask strategies? 3. What are the weaknesses of the paper, especially regarding the integration process and the usage of DTW? 4. Do you have any concerns about the inconsistency in performance results between the main paper and the appendix? 5. Are there any other minor issues or questions you have regarding the paper?
Summary Of The Paper Review
Summary Of The Paper The paper proposed a self-supervised learning technique to deal with the time series anomaly detection problem. Different masking strategies are implemented to deal with point or continuous outliers. Review Strengths: The paper proposed a self-supervised learning technique to deal with the time series anomaly detection problem. Different masking strategies are implemented to deal with point or continuous outliers. Efforts are made to detect the location of the anomaly. The paper is overall well written and easy to follow. In addition to the performance comparison with baselines, several ablation studies have also been carried out. Weaknesses: The proposed method is very straightforward, yet some parts lack explanation. E.g., after obtaining a set of non-overlapping imputed data, how does the integration work? The motivation for using DTW to calculate the difference between the original data and the imputed time series is questionable. Shouldn't the unmasked part of the input be close to the imputed data in the corresponding position? The input and imputed time series are not out of sync. The performances in the main paper are not consistent with those in the Appendix. E.g., the F1 score of DeepFIB on the ECG (A) dataset as shown in Table 3 of the main paper is much higher than those plotted in Figure 2(b) of the Appendix. I could understand there could be a small difference considering multiple rounds of experiments running under different settings, but 80.90% vs 60%+ doesn't make sense to me. Some small issues: Figure 4 is not referred to in the paper. In Table 2, the second-best recall result for 2d-gesture is wrongly marked.
ICLR
1. What is the focus of the paper regarding time series outlier detection? 2. What are the strengths of the proposed approach, particularly in terms of novelty and organization? 3. What are the weaknesses of the paper, especially regarding the adaptation of the time series forecasting model and the lack of certain baselines? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper · The author proposes a self-imputation framework for time series outlier detection. · The author adapts the time series forecasting model SCINet to point-wise and sequence-wise imputation toward detecting outliers from time series data. · The author proposes a novel anomaly localization algorithm to identify the precise locations of sequence anomalies. Review · Strength: o The paper is well-organized and easy to follow. o The proposed anomaly localization algorithm with DTW as the anomaly score is novel and fits the nature of sequence anomalies very well. o The empirical results seem promising on the adopted datasets. · Weakness: o There are many existing time series imputation algorithms [1]. However, the author adopts a forecasting model (i.e., SCINet) without further justification, which makes the whole framework seem arbitrary. More justification on this may clarify the motivation behind the adoption. o Although the author includes state-of-the-art anomaly detection algorithms as baselines, one of the most important classical discord analysis baselines (i.e., MatrixProfile) is missing. Comparing with MatrixProfile will certainly increase the credibility of the model performance. In addition, it is also interesting to see a comparison between the proposed model and existing imputation models for detecting outliers. [1] Fang, Chenguang, and Chen Wang. "Time Series Data Imputation: A Survey on Deep Learning Approaches." arXiv preprint arXiv:2011.11347 (2020).
ICLR
Title DeepFIB: Self-Imputation for Time Series Anomaly Detection Abstract Time series (TS) anomaly detection (AD) plays an essential role in various applications, e.g., fraud detection in finance and healthcare monitoring. Due to the inherently unpredictable and highly varied nature of anomalies and the lack of anomaly labels in historical data, the AD problem is typically formulated as an unsupervised learning problem. The performance of existing solutions is often not satisfactory, especially in data-scarce scenarios. To tackle this problem, we propose a novel self-supervised learning technique for AD in time series, namely DeepFIB. We model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with the rest. Considering the two common anomaly shapes (pointor sequence-outliers) in TS data, we implement two masking strategies with many self-generated training samples. The corresponding self-imputation networks can extract more robust temporal relations than existing AD solutions and effectively facilitate identifying the two types of anomalies. For continuous outliers, we also propose an anomaly localization algorithm that dramatically reduces AD errors. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art methods by a large margin, achieving up to 65.2% relative improvement in F1-score. 1 INTRODUCTION Anomaly detection (AD) in time series (TS) data has numerous applications across various domains. Examples include fault and damage detection in industry (Hundman et al., 2018), intrusion detection in cybersecurity (Feng & Tian, 2021), and fraud detection in finance (Zheng et al., 2018) or healthcare (Zhou et al., 2019), to name a few. Generally speaking, an anomaly/outlier is an observation that deviates considerably from some concept of normality (Ruff et al., 2021). The somewhat “vague” definition itself tells the challenges of the AD problem arising from the rare and unpredictable nature of anomalies. With the lack of anomaly labels in historical data, most AD approaches try to learn the expected values of time-series data in an unsupervised manner (Bl’azquez-Garc’ia et al., 2021). Various techniques use different means (e.g., distance-based methods (Angiulli & Pizzuti, 2002), predictive methods (Holt, 2004; Yu et al., 2016; Deng & Hooi, 2021) or reconstruction-based methods (Shyu et al., 2003; Malhotra et al., 2016; Zhang et al., 2019; Shen et al., 2021)) to obtain this expected value, and then compute how far it is from the actual observation to decide whether or not it is an anomaly. While existing solutions have shown superior performance on some time series AD tasks, they are still far from satisfactory. For example, for the six ECG datasets in (Keogh et al., 2005), the average F1-score of state-of-the-art solutions (Kieu et al., 2019; Shen et al., 2021) with model ensembles are barely over 40%. Other than the TS data’ complexity issues, one primary reason is that the available data is often scarce while deep learning algorithms are notoriously data-hungry. Recently, self-supervised learning (SSL) that enlarges the training dataset without manual labels has attracted lots of attention, and it has achieved great success in representation learning in computer vision (Zhang et al., 2016; Pathak et al., 2016; Chen et al., 2020), natural language processing (Devlin et al., 2019), and graph learning (Hu et al., 2020) areas. 
There are also a few SSL techniques for time series analysis proposed in the literature. Most of them (Falck et al., 2020; Saeed et al., 2021; Fan et al., 2020) craft contrastive TS examples for classification tasks. (Deldari et al., 2021) also leverages contrastive learning for change point detection in time series. While interesting, the above SSL techniques do not apply to the AD task because detecting anomalies in time series requires fine-grained models at the element level. In this work, inspired by the context encoder for visual feature learning (Pathak et al., 2016) and the BERT model for language representation learning (Devlin et al., 2019), we propose a novel self-supervised learning technique for time series anomaly detection, namely DeepFIB. To be specific, we model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with other elements. This is achieved by revising the TS forecasting model SCINet (Liu et al., 2021) for the TS imputation task, in which the masked elements are regarded as missing values for imputation. Such self-imputation strategies facilitate generating a large amount of training samples for temporal relation extraction. As anomalies in time series manifest themselves as either discrete points or subsequences (see Fig. 1), correspondingly, we propose two kinds of masking strategies and use them to generate two pre-trained models. They are biased towards recovering from point-wise anomalies (DeepFIB-p model for point outliers) and sequence-wise anomalies (DeepFIB-s model for continuous outliers), respectively. To the best of our knowledge, this is the first SSL work for time series anomaly detection. Generally speaking, AD solutions have difficulty detecting sequence-wise anomalies because it is hard to tell the real outliers from their neighboring normal elements due to their interplay. To tackle this problem, we propose a novel anomaly localization algorithm to locate the precise start and end positions of continuous outliers. As a post-processing step, we conduct a local search after determining the existence of sequence-wise anomalies within a timing window with our DeepFIB-s model. By doing so, the detection accuracy for continuous outliers is significantly improved. We conduct experiments on several commonly-used time series benchmarks, and results show that DeepFIB consistently outperforms state-of-the-art solutions. In particular, the average F1-score of DeepFIB for the six ECG datasets is more than 62%, achieving nearly 50% relative improvement. 2 RELATED WORK In this section, we mainly discuss recent deep learning-based time series AD approaches. A comprehensive survey on the traditional techniques can be found in (Gupta et al., 2014). Existing anomaly detection approaches can be broadly categorized into three types (see Fig. 2): (i) Density-based methods consider the normal instances compact in the latent space and identify anomalies with one-class classifiers or likelihood measurements (Su et al., 2019; Shen & Kwok, 2020; Feng & Tian, 2021). (ii) Reconstruction-based methods use recurrent auto-encoders (RAE) (Malhotra et al., 2016; Yoo et al., 2021; Kieu et al., 2019; Shen et al., 2021; Zhang et al., 2019) or deep generative models such as recurrent VAEs (Park et al., 2018) or GANs (Li et al., 2019; Zhou et al., 2019) for reconstruction. The reconstruction errors are used as anomaly scores. 
(iii) Prediction-based methods rely on predictive models (Bontemps et al., 2016; Deng & Hooi, 2021; Chen et al., 2021) and use the prediction errors as anomaly scores. While the above methods have been successfully used in many real-world applications, practical AD tasks still have lots of room for improvement, especially in data-scarce scenarios. Unlike existing AD approaches, the proposed mask-and-impute method in DeepFIB exploits the unique property of TS data that missing values can be effectively imputed (Fang & Wang, 2020). By constructing many training samples via self-imputation, DeepFIB extracts robust temporal relations of TS data and improves AD accuracy dramatically. Moreover, for the more challenging sequence-wise anomalies, most prior work assumes a user-defined fixed length for anomaly subsequences (Cook et al., 2020) or simplifies the problem by declaring all continuous outliers correctly detected as long as one of their points is detected (Su et al., 2019; Shen & Kwok, 2020). In DeepFIB, we lift these assumptions and try to determine the exact location of sequence-wise anomalies. 3 METHOD In this section, we first introduce the overall self-imputation framework in DeepFIB and then discuss the separate AD models for detecting point- and sequence-wise anomalies with different mask-and-impute strategies, namely DeepFIB-p and DeepFIB-s, respectively. Next, we describe the TS imputation method used in DeepFIB, based on an existing TS forecasting approach, SCINet (Liu et al., 2021). Finally, we present our anomaly localization algorithm for continuous outliers. 3.1 SELF-IMPUTATION FOR ANOMALY DETECTION Given a set of multivariate time series, where Xs = {x1, x2, ..., xTs} ∈ R^(d×Ts) (Ts is the length of the s-th time series Xs), the objective of the AD task is to find all anomalous points xt ∈ R^d (d is the number of variates) and anomalous subsequences Xt,τ = {xt−τ+1, ..., xt}. The critical issue in solving the above problem is obtaining an expected value for each element in the TS, which requires a large amount of training data to learn from, especially for deep learning-based solutions. However, time-series data are often scarce, significantly restricting the effectiveness of learning-based AD solutions. DeepFIB is a simple yet effective SSL technique to tackle the above problem. We model this problem as a Fill In the Blank game by randomly masking some elements in the TS and imputing them with the rest. Such self-imputation strategies generate many training samples from every time series and hence dramatically improve temporal learning capabilities. In particular, we propose to train two self-imputation models (Fig. 3), biased towards point- and sequence-wise anomalies in the TS data, respectively (a minimal code sketch of both masking strategies is given after this list). • The DeepFIB-p model targets point outliers, as shown in Fig. 3(a), in which we mask discrete elements and rely on the local temporal relations extracted from neighboring elements for reconstruction. For each time series Xs, we generate M training samples by masking it M times with randomly-selected yet non-overlapping sets of (d×Ts)/M elements. • The DeepFIB-s model targets continuous outliers, as shown in Fig. 3(b), in which we mask continuous elements and rely on predictive models for reconstruction. For each time series Xs, we evenly divide it into N non-overlapping sub-sequences {Xs,i, i ∈ [0, N−1]}, each containing (d×Ts)/N elements, and generate N training samples by masking one of them at a time.
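The following is a minimal NumPy sketch of the two masking strategies described above. It is an illustration under our own assumptions: the function names and the choice of zero as the fill value for masked entries are not specified by the paper.

```python
import numpy as np

def point_masks(X, M, seed=None):
    """DeepFIB-p-style masking: split all d*T positions into M disjoint random
    subsets and mask one subset per training sample.
    X: array of shape (d, T). Returns M (masked_X, mask) pairs, where mask is
    True at the positions the model must impute."""
    rng = np.random.default_rng(seed)
    d, T = X.shape
    order = rng.permutation(d * T)              # random, non-overlapping split
    samples = []
    for idx in np.array_split(order, M):        # roughly (d*T)/M positions per sample
        mask = np.zeros(d * T, dtype=bool)
        mask[idx] = True
        mask = mask.reshape(d, T)
        masked = X.copy()
        masked[mask] = 0.0                      # assumption: masked entries filled with 0
        samples.append((masked, mask))
    return samples

def sequence_masks(X, N):
    """DeepFIB-s-style masking: divide the series into N non-overlapping
    sub-sequences along time and mask one whole sub-sequence per sample."""
    d, T = X.shape
    bounds = np.linspace(0, T, N + 1, dtype=int)
    samples = []
    for i in range(N):
        lo, hi = bounds[i], bounds[i + 1]
        mask = np.zeros((d, T), dtype=bool)
        mask[:, lo:hi] = True
        masked = X.copy()
        masked[:, lo:hi] = 0.0                  # assumption: masked entries filled with 0
        samples.append((masked, mask))
    return samples

# toy usage: 3 variates, 120 time steps
X = np.random.randn(3, 120)
p_samples = point_masks(X, M=10)                # 10 self-generated samples for DeepFIB-p
s_samples = sequence_masks(X, N=6)              # 6 self-generated samples for DeepFIB-s
```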
During training, for each time series Xs, we obtain a set of non-overlapping imputed segments with the above models and integrate them into a reconstructed time series X̂s (i.e., X̂s-p for the DeepFIB-p model and X̂s-s for the DeepFIB-s model). The training loss for both models is defined as the reconstruction error between the input time series and the reconstructed one: L = (1/Ts) ∑_{t=1}^{Ts} ‖xt − x̂t‖ (1) where xt is the original input value at time step t, x̂t denotes the reconstructed value from the corresponding model, and ‖·‖ is the L1-norm of a vector. During testing, to detect point outliers with the DeepFIB-p model, we simply use the residual error as the anomaly score, defined as et = ∑_i |x̂t^i − xt^i| summed over the d variates, and when et is larger than a threshold value λp, time step t is regarded as an outlier. In contrast, for continuous outliers, we use the dynamic time warping (DTW) (Sakoe & Chiba, 1978) distance as our anomaly scoring mechanism, which measures the similarity between the input time series X and the reconstructed sequence X̂. If DTW(X, X̂) is above a threshold value λs, a sequence-wise anomaly is detected. 3.2 TIME SERIES IMPUTATION IN DEEPFIB While the time-series data imputation problem has been investigated for decades (Fang & Wang, 2020), there is still much room for improvement, and various deep learning models have been proposed recently (Cao et al., 2018; Liu et al., 2019; Luo et al., 2019). SCINet (Liu et al., 2021) is an encoder-decoder architecture motivated by the unique characteristics of time series data. It incorporates a series of SCI-Blocks that conduct down-sampled convolutions and interactive learning to capture temporal features at various resolutions and effectively blend them in a hierarchical manner. Considering the highly effective temporal relation extraction capability of SCINet when compared to other sequence models, we propose to revise it for the TS imputation task. More details about SCINet can be found in (Liu et al., 2021). To impute the missing elements from the two masking strategies with the DeepFIB-p and DeepFIB-s models, we simply change the supervision for the decoder part accordingly. For point imputation, we use the original input sequence as the supervision of our DeepFIB-p model, making it a reconstruction structure. By doing so, the model concentrates more on the local temporal relations inside the timing window for imputing discrete missing data, as shown in Fig. 5(a). As for continuous imputation, we propose to change SCINet to a bidirectional forecasting structure in our DeepFIB-s model, with the masked sub-sequence as supervision. As shown in Fig. 5(b), the two sub-models, namely F-SCINet and B-SCINet, are used to conduct forecasting in the forward and backward directions, respectively. By doing so, the model can aggregate the temporal features from both directions and learn robust long-term temporal relations for imputing continuous missing data.
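For concreteness, the loss in Eq. (1) and the two anomaly scores described in Section 3.1 can be sketched as follows. The DTW routine is a plain dynamic-programming implementation written only for illustration (the paper cites Sakoe & Chiba (1978) but does not prescribe an implementation), and the threshold values are placeholders that would be set from validation data.

```python
import numpy as np

def reconstruction_loss(X, X_hat):
    """Eq. (1): mean over time steps of the L1 norm of the per-step error.
    X, X_hat: arrays of shape (d, T)."""
    return np.abs(X - X_hat).sum(axis=0).mean()

def point_anomaly_scores(X, X_hat):
    """DeepFIB-p score: residual error e_t, summed over the d variates."""
    return np.abs(X_hat - X).sum(axis=0)                            # shape (T,)

def dtw_distance(X, X_hat):
    """DeepFIB-s score: DTW distance between the input window and its
    reconstruction, with the per-step L1 distance as the local cost."""
    T = X.shape[1]
    cost = np.abs(X[:, :, None] - X_hat[:, None, :]).sum(axis=0)    # (T, T) local costs
    D = np.full((T + 1, T + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, T + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, T]

# toy usage with placeholder thresholds (in practice set from validation data)
X = np.random.randn(3, 64)
X_hat = X + 0.05 * np.random.randn(3, 64)
lambda_p, lambda_s = 1.0, 10.0
point_outliers = np.where(point_anomaly_scores(X, X_hat) > lambda_p)[0]
has_sequence_anomaly = dtw_distance(X, X_hat) > lambda_s
```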
3.3 ANOMALY LOCALIZATION ALGORITHM During inference, we use a sliding window with stride µ to walk through the time series and find anomalies in each window. For sequence-wise anomalies, without knowing their positions a priori, we could mask some normal elements in the window and use the unmasked outliers for prediction (see Fig. 5(b)), thereby leading to mispredictions. To tackle this problem, we propose to conduct a local search for the precise locations of the sequence-wise anomalies. As shown in Fig. 6, the Active window is the current input sequence to the DeepFIB-s model with length ω (ω > µ), i.e., Xt = {xt, xt+1, ..., xt+ω−1} at time step t. When the DTW distance between the original time series in the Active window and the imputed sequence is above the threshold λs, a sequence-wise anomaly is detected in the current window, and the localization mechanism is triggered. As the sliding window moves along the data stream with stride µ, if no outliers were detected in the previous window, the start position of the sequence-wise anomaly can only lie in the last µ elements of Xt, i.e., in {xt+ω−µ, ..., xt+ω−2, xt+ω−1}. Consequently, by gradually shifting the Active window backward to include one more element of the Buffer window (see Fig. 6) at a time and calculating the corresponding DTW distances {e1, ..., ei, ..., eµ}, we can find the maximum i with ei < λs, indicating that the element immediately after the Active window shifted by i is the start of the anomaly subsequence. The Anomaly flag is then activated from this position. Similarly, to determine the ending position of the anomaly subsequence, we keep sliding the Active window until we find a window with DTW distance smaller than λs, indicating that the ending position is within {xt−µ, ..., xt−2, xt−1}. Again, we shift the Active window backward one element at a time, including one element of the above window each time, and calculate the corresponding DTW distance until we find the ending position, i.e., the point at which the DTW distance becomes larger than λs. A code sketch of the start-position search is given below.
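Below is one possible reading of the start-position search, sketched in Python. The helper impute_window (standing in for a forward pass of the trained DeepFIB-s model on a masked window) and dtw (e.g., the dtw_distance sketch above) are assumptions, as is the exact indexing convention; this is not the authors' reference implementation.

```python
def find_anomaly_start(X, t, omega, mu, lambda_s, impute_window, dtw):
    """Search for the first element of a sequence-wise anomaly flagged in the
    Active window X[:, t:t+omega], given that the previous window (stride mu
    earlier) was clean.
    impute_window(window) -> imputed window (stand-in for the DeepFIB-s model).
    dtw(a, b)             -> DTW distance (e.g., dtw_distance above).
    Returns the assumed start index of the anomaly subsequence."""
    for i in range(1, mu + 1):                  # shift the Active window back by i steps
        lo = t - i
        if lo < 0:
            break
        window = X[:, lo:lo + omega]
        if dtw(window, impute_window(window)) < lambda_s:
            return lo + omega                   # first element after the clean shifted window
    return t + omega - mu                       # fallback: earliest possible start position
```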
4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following two questions: Does DeepFIB outperform state-of-the-art AD methods (Q1)? How does each component of DeepFIB affect its performance (Q2)? Experiments are conducted on a number of commonly-used benchmark TS datasets, namely 2d-gesture, Power demand, ECG and Credit Card, covering human abnormal behavior detection, power monitoring, healthcare, and fraud detection in finance (see Table 1). As the anomalies in 2d-gesture, Power demand, and ECG are mainly sequence outliers, we apply the DeepFIB-s model on these datasets. In contrast, the Credit Card dataset only contains point outliers, and hence we use the DeepFIB-p model on it. To make a fair comparison with existing models, we use the standard evaluation metrics on the corresponding datasets. For 2d-gesture, Power demand and Credit Card, we use precision, recall, and F1-score following (Shen & Kwok, 2020). For the ECG datasets, we use the AUROC (area under the ROC curve), AUPRC (area under the precision-recall curve) and F1-score, following (Shen et al., 2021). To detect anomalies, we use the maximum anomaly score of each sub-model over the validation dataset to set the threshold. More details on experimental settings, additional experimental results and discussions (e.g., hyperparameter analysis) are presented in the supplementary materials. 4.1 Q1: COMPARISON WITH STATE-OF-THE-ART METHODS 2d-gesture and Power demand: The results in Table 2 show that the proposed DeepFIB-s achieves 16.55% and 50.18% F1-score improvements on 2d-gesture and Power demand, respectively, compared with the second-best methods. For 2d-gesture, the available training data is limited and the temporal relations contained in the data are complex (body jitter), making it difficult to obtain a discriminative representation in AD models. DAGMM (Zong et al., 2018) shows low performance since it does not consider the temporal information of the time-series data at all. As for the AD solutions based on generative models (EncDecAD (Malhotra et al., 2016), LSTM-VAE (Park et al., 2018), MAD-GAN (Li et al., 2019), AnoGAN (Schlegl et al., 2017), BeatGAN (Zhou et al., 2019), OmniAnomaly (Su et al., 2019)), they usually require a large amount of training data, limiting their performance in data-scarce scenarios. Compared to the above methods, the encoder-decoder architecture MSCRED (Zhang et al., 2019) is relatively easier to train and its AD performance is considerably higher. Moreover, the recent THOC (Shen & Kwok, 2020) work further improves AD performance by fusing multi-scale temporal information to capture complex temporal dynamics. The proposed DeepFIB-s model outperforms all the above baseline methods since the proposed self-imputation technique allows the model to learn more robust temporal relations from many more self-generated training samples. Notably, we also observe that the precision of DeepFIB-s dominates the other baselines. We attribute this to the anomaly localization algorithm, which can locate the anomaly's precise start and end positions, significantly reducing the false positive rate. For Power demand, the data contains many contextual anomaly subsequences (see Fig. 7); contextual anomalies are observations or sequences that deviate from the expected patterns within the time series but, taken in isolation, are within the range of values expected for that signal (Cook et al., 2020). It is quite challenging for existing AD approaches to learn such context information by extracting temporal features from the entire time series as a whole. In contrast, the proposed sequence-wise masking strategy facilitates learning different kinds of temporal patterns, which is much more effective in detecting such contextual anomalies. As shown in Table 2, the recall of our DeepFIB-s model almost reaches 100%, indicating that all anomalies have been detected. The precision is not the best, and we argue that some of the false positives in fact result from the poorly labeled test set (see our supplementary material). ECG(A-F): Compared with the (A), (B), (C) datasets, (D), (E), (F) are clearly noisy, which affects the performance of the anomaly detectors significantly. Nevertheless, Table 3 shows that DeepFIB-s achieves an average 46.3% F1-score improvement across all datasets and an impressive 65.2% improvement on the ECG(F) dataset. There are mainly two reasons: (1) the data is scarce (see Table 1), and existing AD methods are unable to learn robust temporal relations under such circumstances, whereas the self-imputation training strategy together with the bidirectional forecasting mechanism used in our DeepFIB-s model can address this issue well; (2) the proposed DTW anomaly score is more effective in detecting anomaly sequences than the previous point-wise residual scoring (see Section 4.2.1). Notably, the AUPRC of DeepFIB on ECG(E) is slightly lower than that of RAMED (Shen et al., 2021), which we attribute to the fact that some unlabeled sub-sequences are too similar to labeled anomalies in the raw data. Credit Card: Due to the nature of this application, this dataset is stochastic and the temporal relation is not significant. Therefore, as shown in Table 4, traditional AD solutions without modeling the underlying temporal dependency achieve fair performance, e.g., OCSVM (Ma & Perkins, 2003) and Isolation Forest (Liu et al., 2008).
Besides, the AR model (Rousseeuw & Leroy, 1987) with a small window size (e.g., 3 or 5) can also identify local change points without considering longer temporal relations. However, its large recall and small precision values show a high false positive rate. The prediction-based method LSTM-RNN (Bontemps et al., 2016) tries to learn robust temporal relations from the data, which is infeasible for this dataset. In contrast, the reconstruction-based method RAE (recurrent auto-encoder) (Malhotra et al., 2016) performs better since it can estimate outliers based on local contextual information. The proposed DeepFIB-p model outperforms all baseline methods because it can better extract local correlations with the proposed self-imputation strategy. At the same time, compared to our results on other datasets, the relative 26.3% improvement over the second-best solution (AR) is less impressive, and the F1-score of our DeepFIB-p model is still less than 25%. We attribute this to both the dataset complexity and the lack of temporal relations in this dataset. 4.2 Q2: ABLATION STUDY In this section, we first evaluate the impact of various components in our DeepFIB-s and DeepFIB-p models. Then, we replace SCINet with other sequence models to evaluate its impact. 4.2.1 COMPONENT ANALYSIS DeepFIB-p: To demonstrate the impact of the proposed mask-and-impute mechanism in point outlier detection, we add two baseline methods: (1) DeepFIB-p†, wherein we remove the self-imputation strategy; (2) RAE∗, in which we implement the same mask-and-impute strategy and apply it to the baseline method RAE. In Table 4, the performance improvement and degradation of the corresponding variants compared to DeepFIB-p and RAE clearly demonstrate the effectiveness of the proposed self-imputation strategy for point outlier detection. DeepFIB-s: To investigate the impact of different modules of DeepFIB-s, we compare two variants of DeepFIB-s on five datasets. The variants are described as follows: for w/o. localization, we remove the anomaly localization algorithm from our DeepFIB-s model; w/o. localization & DTW further removes the DTW scoring mechanism, and the anomalies are determined based on point-wise residual errors. As shown in Fig. 8, all these components are essential for achieving high anomaly detection accuracy. At the same time, the proposed self-imputation training strategy is still the main contributor to the performance of our DeepFIB-s model, as the results of w/o. localization & DTW are still much better than those of the 2nd best solution. Besides, the performance gain of the DTW anomaly scoring indicates that point-wise outlier estimation is not suitable for evaluating sequence-wise anomalies. 4.2.2 IMPACT OF SCINET In our DeepFIB framework, we revise SCINet for time series imputation. To show its impact, we replace it with other sequence models in DeepFIB-s. As we can see in Table 5, compared with TCN (Bai et al., 2018) and LSTM (Hochreiter & Schmidhuber, 1997), using SCINet indeed brings significant improvements, which clearly shows its strong temporal relation extraction capability and the effectiveness of the revised architecture for TS imputation. At the same time, compared to the previous SOTA methods (2nd best) for the corresponding datasets, with the same mask-and-impute strategy, we can still achieve remarkable performance without using SCINet, indicating the effectiveness of the proposed self-imputation concept itself.
5 CONCLUSION In this paper, we propose a novel self-imputation framework, DeepFIB, for time series anomaly detection. Considering the two types of common anomalies in TS data, we implement two mask-and-impute models biased towards them, which facilitate extracting more robust temporal relations than existing AD solutions. Moreover, for sequence-wise anomalies, we propose a novel anomaly localization algorithm that dramatically improves AD accuracy. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art AD approaches by a large margin, achieving up to 65.2% relative improvement in F1-score.
1. What is the main contribution of the paper regarding time series anomaly detection? 2. What are the strengths and weaknesses of the proposed framework in addressing the scarcity of training data? 3. Do you have any concerns about the technical details of the paper, such as the selection of M or the bidirectional prediction in Fig 5(b)? 4. How does the reviewer assess the effectiveness of the proposed method in terms of its dependence on hyperparameters and domain knowledge? 5. What are the limitations of the experimental results, and how could they be improved? 6. Are there any other suggestions or recommendations for future work related to this research?
Summary Of The Paper Review
Summary Of The Paper This paper investigates the anomaly detection problem in time series and introduces a masking-and-reconstruction-based framework for model training. The method has two masking strategies for dealing with two types of anomaly detection problems. Together with some specific technical designs, the experimental results demonstrate the effectiveness of the proposed method. Review In this paper, the authors developed a masking-and-reconstruction-based framework for training models on time series anomaly detection. The key challenge is to address the scarcity of the training data. The proposed framework uses masking methods to augment the training data with corrupted time series, where the masked values are reconstructed during training so that the model learns the temporal structure in the augmented datasets. To accommodate two types of anomaly detection, i.e., point-wise anomalies and subsequence-wise anomalies, random masking and sliding-window-based masking are respectively used as the strategies to augment the data. Accordingly, two methods of reconstruction were proposed: one is regular prediction, the other is bidirectional prediction. The framework was built upon an existing method that encodes time series. Several other encoders were also tested in the ablation analysis. An anomaly localization approach was proposed to facilitate locating the window of the subsequence anomaly. The experiments were performed on several datasets involving the two targeted types of anomalies. The results demonstrate that the proposed method is effective at detecting those anomalies in time series. It is good to see many figures that help illustrate some concepts. The following are concerns about the paper. The technical novelty of the paper is limited. Using masking to enable training models with reconstruction capability is sort of straightforward in time series anomaly detection. For the two types of anomaly detection problems, the corresponding strategies, random masking and subsequence masking, are straightforward as well. The time series encoding was mostly built upon existing methods. The anomaly localization method for detecting subsequence anomalies seems to be a kind of trial-and-error method that iteratively checks the threshold, which is an engineering approach with limited novelty. Some technical details were not well justified. First, in random masking, (d * T / M) values were masked, but it is unclear why this number is set this way. It seems to be an even division of the total number of values for M augmented samples, but this may relate the challenge of reconstruction to the choice of M. If M is small, the reconstruction may be challenging. The paper doesn't discuss the impact of M or how to select M. Second, the subsequence masking method only considers non-overlapping subsequences; it is unclear whether it neglects some temporal structures that could be learned from overlapping subsequences. Third, in Fig 5(b), the reconstruction of the masked subsequences is performed in a bidirectional manner. Since time series have a temporal trend that is unidirectional, it is unclear from an intuitive perspective why a backward prediction is useful. An ablation analysis on this may help better understanding. The proposed method has many hyperparameters, such as mask size, window size, and window stride, and its effectiveness may depend on domain knowledge such as how to set the hyperparameters and the threshold of anomaly detection.
In particular, the correct detection of the subsequence window requires iterative comparison between the prediction error and the threshold, so the results may be sensitive to the choice of the threshold. In the experiments, to understand how precise the located subsequences are, it would be better to visualize some comparisons between the detected windows and the ground-truth subsequence anomalies. Since the compared methods for point-wise and subsequence-wise anomaly detection in the experiments are two different sets, an elaborate discussion on the different choices of the compared methods is expected. Also, since the choices of datasets for the different ablation analyses are different, it would be better to provide some justification that the choices are not arbitrary.
ICLR
1. What are the strengths and weaknesses of the proposed approach in dealing with limited training data for time series anomaly detection? 2. How does the proposed method compare to other approaches that use self-supervised learning or general data augmentation for time series anomaly detection? 3. What are some non-deep baselines that could be used for comparison with the proposed approach on commonly used time series anomaly detection datasets? 4. How would the performance of DeepFIB degrade as the available data is reduced, and how would this be reflected in a plot showing F1 score versus percent of training set used? 5. Could the training procedure be modified to propose a different and more accurate localization method, such as using different sizes of masking to find out when the error may start arising? 6. How would the number of hyperparameters be chosen, and what would be the discussion on picking the number of masked points and the length of the masked sequence? 7. What are the implications of combining the masking methods and applying them to the time series anomaly detection setting, and how novel is this approach? 8. How would labeled examples be incorporated into the training process in practical settings where they are expensive to obtain but noisy? 9. Could a single anomaly detection model be trained using both sequence masking and point masking, and how would the models be picked without knowing in advance the kind of anomalies that one is looking for? 10. Are there any minor comments or suggestions for improving the paper's presentation?
Summary Of The Paper Review
Summary Of The Paper This paper presents a new method to tackle the time series anomaly detection task in a setting where the training data is limited. They propose to use the commonly used reconstruction approach to time series anomaly detection by training an autoencoder and detecting anomalies by putting a threshold on the reconstruction error. The novelty of this paper lies in proposing to mask part of the input time series at training time, as this should allow the model to learn more from the limited data. The experiments present the overall performance of the approach but do not highlight how it handles the low-data regime. The method is not evaluated on a number of common datasets, without justification. Review Strengths: The proposed approach is simple, interesting, and can be applied to any anomaly detection model using the reconstruction approach to time series anomaly detection. The proposed approach seems to allow better anomaly detection results on the datasets that were tested on. Concerns: It would have been good to contrast and compare this approach to other works that use self-supervised learning or general data augmentation for time series anomaly detection, see [1,2,3], as well as the many data augmentation techniques proposed for anomaly detection in computer vision. The experimental results are insufficient to demonstrate the effectiveness of the proposed approach. The experiments would be more convincing if the following items were addressed: It would be helpful to evaluate on the more commonly used time series anomaly detection datasets, like SMD, SMAP and MSL, potentially also Numenta, KPI and Yahoo. See [4,5]. It would be important to compare the approach to non-deep baselines on these datasets. The main argument of the paper is that it can deal with the low-data regime; this is a regime in which non-deep baselines should also perform well. There are some non-deep learning baselines shown for the credit card dataset, but I would also want to see the comparison on the other datasets. The main proposed use case of the paper is the low-data regime; while this may or may not be a common scenario, it is an important one to nail. This is a scenario where non-deep methods should perform better, making it even more important to compare against them. I would like to see a plot showing that the performance of DeepFIB does not degrade significantly as the available data is reduced. It could be done as follows: pick a reasonably sized dataset, and measure the performance of the different algorithms as you reduce the size of the training set. The plot could be structured as follows: on the y-axis the F1 score, on the x-axis the percent of the training set used, starting at 100% and going close to 0%. There we would want to see that the performance of DeepFIB stays good as the size of the training set decreases, whereas the performance of other models would drop more quickly. If this plot were to show that the F1 score of DeepFIB decreases less as the size of the training set is reduced, it would prove that it is indeed a solution to the problem presented. The proposed anomaly localization method is fairly common among window-based approaches. Would your training procedure not allow you to propose a different and more accurate localization method? Maybe using different sizes of masking to find out where the error may start arising?
While the number of hyperparameters is limited, it would be good to have a discussion on how one would pick the number of masked points and the length of the masked sequence. It is interesting to see the results of the model trained without the masking method in table 4, it would be helpful to see this for all datasets to better understand the impact of the masking on the performance. Finally, while it would not be a problem if the evaluation was thorough enough, one has to admit that the proposed method is not very novel, as both the architecture used and the masking methods have been proposed and the novelty lies in combining them and applying it to the time series anomaly detection setting. Questions: It is not clear to me that data in time series anomaly detection would generally be scarce. Labeled examples are very expensive to obtain and often noisy, but in most practical settings one can have access to a larger set of unlabeled data. In the case where you would have access to labeled data, how would you incorporate it in the training? The two masking methods do not seem that different from the other, could you train a single anomaly detection model using both the sequence masking and the point masking? How can you decide which model to pick without knowing in advance the kind of anomalies that you are looking for? Could you report the results of both models on the different datasets? Minor comments: Some minor form comments: section 4: “Whether DeepFIB outperforms state-of-the-art AD methods?” → “Does DeepFIB outperforms state-of-the-art AD methods?” You could point to [6] for the F1 score computation method since it is proposed there. 4.1: It is very good to present the mean and standard deviation of your runs, it would be good to show over how many runs this was. 4.2: “Next, we replace...” → “Then, we replace...” Table 5, I can guess that this is the F1 score but could you please mention it in the caption. [1] : Opprentice: Towards practical and automatic anomaly detection through machine learning. Liu et al. 2015 [2] : RobustTAD: Robust Time Series Anomaly Detection via Decomposition and Convolutional Neural Networks. Gao et al. 2020 [3] : Neural Contextual Anomaly Detection for Time Series. Carmona et al. 2020 [4] : Robust anomaly detection for multivariate time series through stochastic recurrent neural network. Su et al. 2019 [5] : Timeseries Anomaly Detection using Temporal Hierarchical One-Class Network. Shen et al. 2020 [6] : Unsupervised anomaly detection via variational auto-encoder for seasonal kpis in web applications. Xu et al. 2018
ICLR
Title DeepFIB: Self-Imputation for Time Series Anomaly Detection Abstract Time series (TS) anomaly detection (AD) plays an essential role in various applications, e.g., fraud detection in finance and healthcare monitoring. Due to the inherently unpredictable and highly varied nature of anomalies and the lack of anomaly labels in historical data, the AD problem is typically formulated as an unsupervised learning problem. The performance of existing solutions is often not satisfactory, especially in data-scarce scenarios. To tackle this problem, we propose a novel self-supervised learning technique for AD in time series, namely DeepFIB. We model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with the rest. Considering the two common anomaly shapes (pointor sequence-outliers) in TS data, we implement two masking strategies with many self-generated training samples. The corresponding self-imputation networks can extract more robust temporal relations than existing AD solutions and effectively facilitate identifying the two types of anomalies. For continuous outliers, we also propose an anomaly localization algorithm that dramatically reduces AD errors. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art methods by a large margin, achieving up to 65.2% relative improvement in F1-score. 1 INTRODUCTION Anomaly detection (AD) in time series (TS) data has numerous applications across various domains. Examples include fault and damage detection in industry (Hundman et al., 2018), intrusion detection in cybersecurity (Feng & Tian, 2021), and fraud detection in finance (Zheng et al., 2018) or healthcare (Zhou et al., 2019), to name a few. Generally speaking, an anomaly/outlier is an observation that deviates considerably from some concept of normality (Ruff et al., 2021). The somewhat “vague” definition itself tells the challenges of the AD problem arising from the rare and unpredictable nature of anomalies. With the lack of anomaly labels in historical data, most AD approaches try to learn the expected values of time-series data in an unsupervised manner (Bl’azquez-Garc’ia et al., 2021). Various techniques use different means (e.g., distance-based methods (Angiulli & Pizzuti, 2002), predictive methods (Holt, 2004; Yu et al., 2016; Deng & Hooi, 2021) or reconstruction-based methods (Shyu et al., 2003; Malhotra et al., 2016; Zhang et al., 2019; Shen et al., 2021)) to obtain this expected value, and then compute how far it is from the actual observation to decide whether or not it is an anomaly. While existing solutions have shown superior performance on some time series AD tasks, they are still far from satisfactory. For example, for the six ECG datasets in (Keogh et al., 2005), the average F1-score of state-of-the-art solutions (Kieu et al., 2019; Shen et al., 2021) with model ensembles are barely over 40%. Other than the TS data’ complexity issues, one primary reason is that the available data is often scarce while deep learning algorithms are notoriously data-hungry. Recently, self-supervised learning (SSL) that enlarges the training dataset without manual labels has attracted lots of attention, and it has achieved great success in representation learning in computer vision (Zhang et al., 2016; Pathak et al., 2016; Chen et al., 2020), natural language processing (Devlin et al., 2019), and graph learning (Hu et al., 2020) areas. 
There are also a few SSL techniques for time series analysis proposed in the literature. Most of them (Falck et al., 2020; Saeed et al., 2021; Fan et al., 2020) craft contrastive TS examples for classification tasks. (Deldari et al., 2021) also leverages contrastive learning for change point detection in time series. While interesting, the above SSL techniques do not apply to the AD task because detecting anomalies in time series requires fine-grained models at the element level. In this work, inspired by the context encoder for visual feature learning (Pathak et al., 2016) and the BERT model for language representation learning (Devlin et al., 2019), we propose a novel self-supervised learning technique for time series anomaly detection, namely DeepFIB. To be specific, we model the problem as a Fill In the Blank game by masking some elements in the TS and imputing them with other elements. This is achieved by revising the TS forecasting model SCINet (Liu et al., 2021) for the TS imputation task, in which the masked elements are regarded as missing values for imputation. Such self-imputation strategies facilitate generating a large amount of training samples for temporal relation extraction. As anomalies in time series manifest themselves as either discrete points or subsequences (see Fig. 1), correspondingly, we propose two kinds of masking strategies and use them to generate two pre-trained models. They are biased towards recovering from point-wise anomalies (DeepFIB-p model for point outliers) and sequence-wise anomalies (DeepFIB-s model for continuous outliers), respectively. To the best of our knowledge, this is the first SSL work for time series anomaly detection. Generally speaking, AD solutions have difficulty detecting sequence-wise anomalies because it is hard to tell the real outliers from their neighboring normal elements due to their interplay. To tackle this problem, we propose a novel anomaly localization algorithm to locate the precise start and end positions of continuous outliers. As a post-processing step, we conduct a local search after determining the existence of sequence-wise anomalies within a timing window with our DeepFIB-s model. By doing so, the detection accuracy for continuous outliers is significantly improved. We conduct experiments on several commonly-used time series benchmarks, and results show that DeepFIB consistently outperforms state-of-the-art solutions. In particular, the average F1-score of DeepFIB for the six ECG datasets is more than 62%, achieving nearly 50% relative improvement. 2 RELATED WORK In this section, we mainly discuss recent deep learning-based time series AD approaches. A comprehensive survey on the traditional techniques can be found in (Gupta et al., 2014). Existing anomaly detection approaches can be broadly categorized into three types (see Fig. 2): (i) Density-based methods consider the normal instances compact in the latent space and identify anomalies with one-class classifiers or likelihood measurements (Su et al., 2019; Shen & Kwok, 2020; Feng & Tian, 2021). (ii) Reconstruction-based methods use recurrent auto-encoders (RAE) (Malhotra et al., 2016; Yoo et al., 2021; Kieu et al., 2019; Shen et al., 2021; Zhang et al., 2019) or deep generative models such as recurrent VAEs (Park et al., 2018) or GANs (Li et al., 2019; Zhou et al., 2019) for reconstruction. The reconstruction errors are used as anomaly scores. 
(iii) Prediction-based methods rely on predictive models (Bontemps et al., 2016; Deng & Hooi, 2021; Chen et al., 2021) and use the prediction errors as anomaly scores. While the above methods have been successfully used in many real-world applications, practical AD tasks still have lots of room for improvement, especially in data-scarce scenarios. Unlike existing AD approaches, the proposed mask-and-impute method in DeepFIB exploits the unique property of TS data that missing values can be effectively imputed (Fang & Wang, 2020). By constructing many training samples via self-imputation, DeepFIB extracts robust temporal relations of TS data and improves AD accuracy dramatically. Moreover, for the more challenging sequence-wise anomalies, most prior work assumes a user-defined fixed length for anomaly subsequences (Cook et al., 2020) or simplifies the problem by stating that all the continuous outliers have been correctly detected as long as one of the points is detected (Su et al., 2019; Shen & Kwok, 2020). In DeepFIB, we lift these assumptions and try to locate the exact location of sequence-wise anomalies. 3 METHOD In this section, we first introduce the overall self-imputation framework in DeepFIB and then discuss the separate AD models for detecting point- and sequence-wise anomalies with different mask-and-impute strategies, namely DeepFIB-p and DeepFIB-s, respectively. Next, we describe the TS imputation method used in DeepFIB, based on an existing TS forecasting approach, SCINet (Liu et al., 2021). Finally, we present our anomaly localization algorithm for continuous outliers. 3.1 SELF-IMPUTATION FOR ANOMALY DETECTION Given a set of multivariate time series wherein X_s = {x_1, x_2, ..., x_{T_s}} ∈ R^{d×T_s} (T_s is the length of the s-th time series X_s), the objective of the AD task is to find all anomalous points x_t ∈ R^d (d is the number of variates) and anomalous subsequences X_{t,τ} = {x_{t−τ+1}, ..., x_t}. The critical issue in solving the above problem is obtaining an expected value for each element in the TS, which requires a large amount of training data to learn from, especially for deep learning-based solutions. However, time-series data are often scarce, significantly restricting the effectiveness of learning-based AD solutions. DeepFIB is a simple yet effective SSL technique to tackle the above problem. We model this problem as a Fill In the Blank game by randomly masking some elements in the TS and imputing them with the rest. Such self-imputation strategies generate many training samples from every time series and hence dramatically improve temporal learning capabilities. In particular, we propose to train two self-imputation models (Fig. 3), biased towards point- and sequence-wise anomalies in the TS data, respectively (a code sketch of both masking schemes is given right after this list). • The DeepFIB-p model targets point outliers, as shown in Fig. 3(a), in which we mask discrete elements and rely on the local temporal relations extracted from neighboring elements for reconstruction. For each time series X_s, we generate M training samples by masking it M times with randomly selected yet non-overlapping sets of (d × T_s)/M elements. • The DeepFIB-s model targets continuous outliers, as shown in Fig. 3(b), in which we mask continuous elements and rely on predictive models for reconstruction. For each time series X_s, we evenly divide it into N non-overlapping sub-sequences {X_{s,i} ∈ R^{d×(T_s/N)}, i ∈ [0, N−1]} and generate N training samples by masking one of them each time.
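To make the two masking schemes above concrete, the following is a minimal sketch in plain Python/NumPy of how the self-generated training samples could be laid out. Only the mask layouts are taken from the description above; the array shapes, function names, and the exact way of splitting indices are illustrative assumptions rather than the authors' implementation.

import numpy as np

def point_masks(d, T, M, rng):
  """DeepFIB-p style: M masks of randomly selected, non-overlapping elements.

  Each of the d*T elements is hidden in exactly one of the M masks,
  so every mask hides roughly (d*T)/M elements.
  """
  perm = rng.permutation(d * T)                  # shuffle all element indices
  masks = []
  for chunk in np.array_split(perm, M):          # M disjoint index chunks
    m = np.zeros(d * T, dtype=bool)
    m[chunk] = True
    masks.append(m.reshape(d, T))
  return masks                                   # list of M boolean (d, T) masks

def sequence_masks(d, T, N):
  """DeepFIB-s style: N masks, each hiding one of N contiguous sub-sequences."""
  masks = []
  bounds = np.linspace(0, T, N + 1, dtype=int)   # N roughly equal segments
  for i in range(N):
    m = np.zeros((d, T), dtype=bool)
    m[:, bounds[i]:bounds[i + 1]] = True         # hide the i-th sub-sequence
    masks.append(m)
  return masks

# Usage: for a series X of shape (d, T), a training sample pairs the masked
# series (e.g. X with masked entries zeroed out) with the original X as the
# imputation target. Zeroing is an illustrative choice, not stated in the paper.
rng = np.random.default_rng(0)
masks_p = point_masks(d=5, T=100, M=10, rng=rng)
masks_s = sequence_masks(d=5, T=100, N=10)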
During training, for each time series X_s, we obtain a set of non-overlapping imputed segments with the above models; integrating them together yields a reconstructed time series X̂_s (i.e., X̂_s-p for the DeepFIB-p model and X̂_s-s for the DeepFIB-s model). The training loss for both models is defined as the reconstruction error between the input time series and the reconstructed one:

L = \frac{1}{T_s} \sum_{t=1}^{T_s} \| x_t - \hat{x}_t \|    (1)

where x_t is the original input value at time step t, x̂_t denotes the reconstructed value from the corresponding model, and ‖·‖ is the L1-norm of a vector. During testing, to detect point outliers with the DeepFIB-p model, we simply use the residual error as the anomaly score, defined as e_t = \sum_{i=0}^{d} | \hat{x}^i_t - x^i_t |, and when e_t is larger than a threshold value λ_p, time step t is regarded as an outlier. In contrast, for continuous outliers, we use the dynamic time warping (DTW) (Sakoe & Chiba, 1978) distance as our anomaly scoring mechanism, which measures the similarity between the input time series X and the reconstructed sequence X̂. If DTW(X, X̂) is above a threshold value λ_s, a sequence-wise anomaly is detected. 3.2 TIME SERIES IMPUTATION IN DEEPFIB While the time-series data imputation problem has been investigated for decades (Fang & Wang, 2020), there is still lots of room for improvement, and various deep learning models have been proposed recently (Cao et al., 2018; Liu et al., 2019; Luo et al., 2019). SCINet (Liu et al., 2021) is an encoder-decoder architecture motivated by the unique characteristics of time series data. It incorporates a series of SCI-Blocks that conduct down-sampled convolutions and interactive learning to capture temporal features at various resolutions and effectively blend them in a hierarchical manner. Considering the highly effective temporal relation extraction capability of SCINet compared to other sequence models, we propose to revise it for the TS imputation task. More details about SCINet can be found in (Liu et al., 2021). To impute the missing elements from the two masking strategies with the DeepFIB-p and DeepFIB-s models, we simply change the supervision for the decoder part accordingly. For point imputation, we use the original input sequence as the supervision of our DeepFIB-p model, making it a reconstruction structure. By doing so, the model concentrates more on the local temporal relations inside the timing window for imputing discrete missing data, as shown in Fig. 5(a). As for continuous imputation, we propose to change SCINet into a bidirectional forecasting structure in our DeepFIB-s model, with the masked sub-sequence as supervision. As shown in Fig. 5(b), the two sub-models, namely F-SCINet and B-SCINet, are used to conduct forecasting in the forward and backward directions, respectively. By doing so, the model can aggregate the temporal features from both directions and learn robust long-term temporal relations for imputing continuous missing data. 3.3 ANOMALY LOCALIZATION ALGORITHM During inference, we use a sliding window with stride µ to walk through the time series and find anomalies in each window. For sequence-wise anomalies, without knowing their positions a priori, we could mask some normal elements in the window and use the unmasked outliers for prediction (see Fig. 5(b)), thereby leading to mispredictions. To tackle this problem, we propose to conduct a local search for the precise locations of the sequence-wise anomalies.
As shown in Fig. 6, the Active window is the current input sequence to the DeepFIB-s model with length ω (ω > µ), i.e., X_t = {x_t, x_{t+1}, ..., x_{t+ω−1}} at time step t. When the DTW distance between the original time series in the Active window and the imputed sequence is above the threshold λ_s, a sequence-wise anomaly is detected in the current window, and the localization mechanism is triggered. As the sliding window moves along the data stream with stride µ, if no outliers were detected in the previous window, the start position of the sequence-wise anomaly can only lie at the end of X_t, in the window {x_{t+ω−µ}, ..., x_{t+ω−2}, x_{t+ω−1}} of length µ. Consequently, by gradually shifting the Active window backward to include one more element of the Buffer window (see Fig. 6) at a time and calculating the corresponding DTW distances {e_1, ..., e_i, ..., e_µ}, we can find the maximum i with e_i < λ_s, indicating that the element immediately following this shifted Active window is the start of the anomaly subsequence. The Anomaly flag is then activated from this position. Similarly, to determine the ending position of the anomaly subsequence, we keep sliding the Active window until we find a window with DTW distance smaller than λ_s, indicating that the ending position is within {x_{t−µ}, ..., x_{t−2}, x_{t−1}}. Again, we shift the Active window backward one element at a time and calculate the corresponding DTW distance, until we find the ending position, whose DTW distance is larger than λ_s. 4 EXPERIMENTS In this section, we conduct extensive experiments to answer the following two questions: Does DeepFIB outperform state-of-the-art AD methods (Q1)? How does each component of DeepFIB affect its performance (Q2)? Experiments are conducted on a number of commonly used benchmark TS datasets, namely 2d-gesture, Power demand, ECG and Credit Card, ranging from abnormal human behavior detection and power monitoring to healthcare and fraud detection in finance (see Table 1). As the anomalies in 2d-gesture, Power demand, and ECG are mainly sequence outliers, we apply the DeepFIB-s model on these datasets. In contrast, the Credit Card dataset only contains point outliers, and hence we use the DeepFIB-p model on it. To make a fair comparison with existing models, we use the standard evaluation metrics on the corresponding datasets. For 2d-gesture, Power demand and Credit Card, we use precision, recall, and F1-score, following (Shen & Kwok, 2020). For the ECG datasets, we use the AUROC (area under the ROC curve), AUPRC (area under the precision-recall curve) and F1-score, following (Shen et al., 2021). To detect anomalies, we use the maximum anomaly score of each sub-model over the validation dataset to set the threshold. More details on the experimental settings, additional experimental results and discussions (e.g., hyperparameter analysis) are presented in the supplementary materials. 4.1 Q1: COMPARISON WITH STATE-OF-THE-ART METHODS 2d-gesture and Power demand: The results in Table 2 show that the proposed DeepFIB-s achieves 16.55% and 50.18% F1-score improvements on 2d-gesture and Power demand, respectively, compared with the second-best methods. For 2d-gesture, the available training data is limited and the temporal relations contained in the data are complex (body jitter), making it difficult to obtain a discriminative representation in AD models. DAGMM (Zong et al., 2018) shows low performance since it does not consider the temporal information of the time-series data at all.
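As a concrete companion to the DTW-based scoring of Section 3.1 and the backward local search of Section 3.3 above, here is a minimal, univariate sketch in plain Python/NumPy. It uses a textbook dynamic-programming DTW, assumes the reconstruction x_hat is already available for the whole stream (whereas the model would re-impute each shifted window), and follows one reading of the window-shifting description; it is an illustration, not the authors' implementation.

import numpy as np

def dtw_distance(a, b):
  """Plain O(len(a)*len(b)) dynamic-programming DTW between two 1-D sequences."""
  n, m = len(a), len(b)
  D = np.full((n + 1, m + 1), np.inf)
  D[0, 0] = 0.0
  for i in range(1, n + 1):
    for j in range(1, m + 1):
      cost = abs(a[i - 1] - b[j - 1])
      D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
  return D[n, m]

def find_anomaly_start(x, x_hat, t, omega, mu, lambda_s):
  """Backward local search for the start of a sequence-wise anomaly.

  The Active window x[t:t+omega] has just triggered (DTW >= lambda_s) and no
  outlier was flagged in the previous window, so the anomaly can only begin in
  the last mu positions. Shift the window back by i = 1..mu and keep the
  largest shift whose DTW distance is still below the threshold, per the
  description in Section 3.3.
  """
  best_i = 0
  for i in range(1, mu + 1):
    s = t - i                                        # Active window shifted i steps back
    if dtw_distance(x[s:s + omega], x_hat[s:s + omega]) < lambda_s:
      best_i = i                                     # this shifted window still looks normal
  # The element right after the largest "normal" shifted window is the start.
  # If no shifted window looks normal (best_i == 0), the search is inconclusive.
  return t + omega - best_i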
As for the AD solutions based on generative models (EncDecAD (Malhotra et al., 2016), LSTM-VAE (Park et al., 2018), MAD-GAN (Li et al., 2019), AnoGAN (Schlegl et al., 2017), BeatGAN (Zhou et al., 2019), OmniAnomaly (Su et al., 2019)), they usually require a large amount of training data, limiting their performance in data-scarce scenarios. Compared to the above methods, the encoder-decoder architecture MSCRED (Zhang et al., 2019) is relatively easier to train and its AD performance is considerably higher. Moreover, the recent THOC (Shen & Kwok, 2020) work further improves AD performance by fusing multi-scale temporal information to capture the complex temporal dynamics. The proposed DeepFIB-s model outperforms all the above baseline methods, since the proposed self-imputation technique allows the model to learn more robust temporal relations from many more self-generated training samples. Notably, we also observe that the precision of DeepFIB-s dominates the other baselines. We attribute this to the anomaly localization algorithm, which can locate the anomaly's precise start and end positions, significantly reducing the false positive rate. For Power demand, the data contains many contextual anomaly¹ subsequences (see Fig. 7). It is quite challenging for existing AD approaches to learn such context information by extracting temporal features from the entire time series as a whole. In contrast, the proposed sequence-wise masking strategy facilitates learning different kinds of temporal patterns, which is much more effective in detecting such contextual anomalies. As shown in Table 2, the recall of our DeepFIB-s model almost reaches 100%, indicating that all anomalies have been detected. The precision is not the best, and we argue that some of the false positives in fact result from the poorly labeled test set (see our supplementary material). ECG(A-F): Compared with the (A), (B), (C) datasets, (D), (E), (F) are clearly noisy, which affects the performance of the anomaly detectors significantly. Nevertheless, Table 3 shows that DeepFIB-s achieves an average 46.3% F1-score improvement among all datasets and an impressive 65.2% improvement on the ECG(F) dataset. There are mainly two reasons: (1) the data is scarce (see Table 1), and existing AD methods are unable to learn robust temporal relations under such circumstances; in contrast, the self-imputation training strategy together with the bidirectional forecasting mechanism used in our DeepFIB-s model can well address this issue; (2) the proposed DTW anomaly score is more effective in detecting anomaly sequences than the previous point-wise residual scoring (see Section 4.2.1). Notably, the AUPRC of DeepFIB on ECG(E) is slightly lower than that of RAMED (Shen et al., 2021), and we attribute this to the fact that some unlabeled sub-sequences are too similar to labeled anomalies in the raw data. Credit Card: Due to the nature of this application, this dataset is stochastic and the temporal relation is not significant. Therefore, as shown in Table 4, traditional AD solutions that do not model the underlying temporal dependency achieve fair performance, e.g., OCSVM (Ma & Perkins, 2003) and ISO Forest (Liu et al., 2008).
¹ Contextual anomalies are observations or sequences that deviate from the expected patterns within the time series; however, if taken in isolation, they are within the range of values expected for that signal (Cook et al., 2020).
Besides, the AR model (Rousseeuw & Leroy, 1987) with a small window size (e.g., 3, 5) can also identify the local change points without considering longer temporal relations. However, its large recall and small precision values show its high false positive rate. The prediction-based method LSTM-RNN (Bontemps et al., 2016) tries to learn a robust temporal relation from the data, which is infeasible for this dataset. In contrast, the reconstruction-based method RAE (recurrent auto-encoder) (Malhotra et al., 2016) performs better, since it can estimate the outliers based on local contextual information. The proposed DeepFIB-p model outperforms all baseline methods, because it can better extract local correlations with the proposed self-imputation strategy. At the same time, compared to our results on other datasets, the relative 26.3% improvement over the second-best solution (AR) is less impressive, and the F1-score of our DeepFIB-p model is still less than 25%. We attribute this to both the dataset complexity and the lack of temporal relations in this dataset. 4.2 Q2: ABLATION STUDY In this section, we first evaluate the impact of various components in our DeepFIB-s and DeepFIB-p models. Then, we replace SCINet with other sequence models to evaluate its impact. 4.2.1 COMPONENT ANALYSIS DeepFIB-p: To demonstrate the impact of the proposed mask-and-impute mechanism on point outlier detection, we add two baseline methods: (1) DeepFIB-p†, wherein we remove the self-imputation strategy; and (2) RAE∗, wherein we apply the same mask-and-impute strategy to the baseline method RAE. In Table 4, the performance improvement and degradation of the corresponding variants compared to DeepFIB-p and RAE clearly demonstrate the effectiveness of the proposed self-imputation strategy for point outlier detection. DeepFIB-s: To investigate the impact of the different modules of DeepFIB-s, we compare two variants of DeepFIB-s on five datasets. The details of the variants are as follows: for w/o. localization, we remove the anomaly localization algorithm from our DeepFIB-s model; w/o. localization & DTW further removes the DTW scoring mechanism, and the anomalies are determined based on point-wise residual errors. As shown in Fig. 8, all these components are essential for achieving high anomaly detection accuracy. At the same time, the proposed self-imputation training strategy is still the main contributor to the performance of our DeepFIB-s model, as the results of w/o. localization & DTW are still much better than those of the 2nd-best solution. Besides, the performance gain of the DTW anomaly scoring indicates that point-wise outlier estimation is not suitable for evaluating sequence-wise anomalies. 4.2.2 IMPACT OF SCINET In our DeepFIB framework, we revise SCINet for time series imputation. To show its impact, we replace it with other sequence models in DeepFIB-s. As we can see in Table 5, compared with TCN (Bai et al., 2018) and LSTM (Hochreiter & Schmidhuber, 1997), using SCINet indeed brings significant improvements, which clearly shows its strong temporal relation extraction capability and the effectiveness of the revised architecture for TS imputation. At the same time, compared to the previous SOTA methods (2nd best) for the corresponding datasets, with the same mask-and-impute strategy we can still achieve remarkable performance without using SCINet, indicating the effectiveness of the proposed self-imputation concept itself.
5 CONCLUSION In this paper, we propose a novel self-imputation framework, DeepFIB, for time series anomaly detection. Considering the two types of common anomalies in TS data, we implement two mask-and-impute models biased towards them, which facilitate extracting more robust temporal relations than existing AD solutions. Moreover, for sequence-wise anomalies, we propose a novel anomaly localization algorithm that dramatically improves AD accuracy. Experiments on various real-world TS datasets demonstrate that DeepFIB outperforms state-of-the-art AD approaches by a large margin, achieving up to 65.2% relative improvement in F1-score.
1. What is the focus of the paper regarding deep anomaly detection?
2. What are the strengths and weaknesses of the proposed data augmentation method using masking?
3. How does the reviewer assess the relationship between the proposed approach and other imputation methods?
4. What are the concerns regarding the modification of SCINet for the task and its lack of clear explanation?
5. How does the reviewer evaluate the experimental results and comparison with other methods?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a deep anomaly detection method for multivariate time series. The anomaly detection task does not seem to be clearly defined in the paper, but it would be either in-sample outlier detection (aka data cleansing) or out-of-sample outlier/sequential anomaly detection. Assuming that the latter is the case, the key idea seems to be a data augmentation method using masking. The authors propose two masking schemes, namely random point-wise and sequence-wise. The underlying neural network architecture is the same as the one called SCINet, which is described in an arXiv paper that has not yet been accepted at any peer-reviewed venue. Training is performed by minimizing the reconstruction error of the masked entries. The authors report on a few experimental results to claim superior performance.
Review
The proposed data augmentation/training approach might be useful as a general method to boost the performance of neural sequential autoencoders. If that is the main claim, the authors will need to apply the same/similar approach to other models to compare the performance. If the capability in data imputation is a selling point, the authors will need to clarify the relationship with other imputation methods theoretically and empirically. The authors claim that they modified SCINet by changing the supervision signals. But it is unclear how SCINet, which is designed for time-ahead prediction, was used for their task. For example, the input is obviously different from the original setting. There does not seem to be a clear explanation in the paper. Overall, the description of the paper is high-level and qualitative. The model lacks a solid justification. The cited paper (SCINet) and the supplemental material do not help in understanding the details. The experimental evaluation does not look comprehensive. Table 4 shows that the AR model gave competitive performance on the Credit Card dataset. This means that the time series has an extremely simple spectral structure (or a very simple periodic nature). In that case, many classical time-series analysis methods should work as well as the AR model does. There does not seem to be a clear explanation of why those classical methods were not compared on the other datasets.
ICLR
Title Planning in Stochastic Environments with a Learned Model Abstract Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined valueequivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021). Most tree search methods assume that the agent has access to a perfect simulator of the environment, whereas real-world environments are typically unknown. Model-based reinforcement learning algorithms combine a model-learning component, which estimates the dynamics of the environment, with a planning component, using the learned model as a simulator. However, learning a model in isolation from its use during planning has proven to be problematic in complex environments (van Hasselt et al., 2019). Instead, value-equivalent modellearning methods (Silver et al., 2017; Farahmand et al., 2017; Oh et al., 2017; Grimm et al., 2020) identify a model that reconstructs only those quantities required for planning. The most successful method, MuZero (Schrittwieser et al., 2020) learns a model that reconstructs reward, value and policy, and uses this model to perform a powerful Monte Carlo tree search. MuZero achieved superhuman results in Go, chess, shogi and Atari without any prior knowledge of the rules, and has also achieved state-of-the-art performance in large and continuous action spaces (Hubert et al., 2021) and offline reinforcement learning (Schrittwieser et al., 2021). However, value equivalent methods such as MuZero have in practice been limited to a deterministic class of models, which severely limits their applicability. Many environments are inherently stochastic and may be poorly approximated by a deterministic model. Partially observed environments may also be perceived by the agent as stochastic, whenever aliased states cannot be disambiguated. 
Similarly, large and complex environments may appear stochastic to a small agent with finite capacity. In this paper we introduce the first empirically effective approach for handling stochasticity in value equivalent model-learning and planning. The model is factored to first transition deterministically from state to an afterstate, and then to branch stochastically from the afterstate to the next state. This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively, and a stochastic planning method is applied to the model. We apply these ideas to MuZero, using a discrete generative network to represent the model, and modifying the Monte Carlo tree search to effectively use the factored model. We apply our method, Stochastic MuZero, to several environments in which handling stochasticity is important. First, we consider the popular stochastic puzzle game 2048, in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge. In our experiments, Stochastic MuZero achieved better results without any domain knowledge. Second, we consider the classic stochastic two-player game of backgammon, in which near-optimal play has been achieved using a perfect simulator. Stochastic MuZero matches this performance without any prior knowledge of the game rules. Finally, we evaluated our method in the deterministic board game of Go. There our method matched the performance of MuZero, demonstrating that Stochastic MuZero extends MuZero without sacrificing performance. 2 RELATED WORK Observation models (Oh et al., 2015; Chiappa et al., 2017; Łukasz Kaiser et al., 2020) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions. Subsequently, these models can be combined with a model-free learning rule in a Dyna fashion (Sutton, 1991). However, modeling high dimensional image observations can be computationally prohibitive, prone to high error accumulation as the model is unrolled for multiple steps, and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand. These issues make such models unconducive for planning. Finally, van Hasselt et al. (2019) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer. Latent models (Schrittwieser et al., 2020; Oh et al., 2017; Hafner et al., 2021; Henaff et al., 2017) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states. In this framework, the model is conditioned on the current observation and future actions and is unrolled for k steps. Subsequently, it is trained to make predictions about rewards, values, policies or observations at each timestep based on the current latent state. The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories. Recently, MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains (Hubert et al., 2021) while using less data (Schrittwieser et al., 2021). However, most approaches, including MuZero, use a deterministic function to model the environment dynamics, which limits their applicability to deterministic or weakly stochastic environments. Stochastic latent models are stochastic models of the environment dynamics that operate on latent states. 
In (Hafner et al., 2021) the authors propose a recurrent state-space model which consists of three main modules, a recurrent module which generates the deterministic recurrent state ht, a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior, and a transition predictor which depends only on ht and acts as the prior of the model. By combining the deterministic and stochastic states ht and st the model is trained to predict the current observation ot, the transition reward rt and the discount dt. The next deterministic recurrent state is generated using ht, st and action at. The stochastic states st are modeled as multidimensional multinomial variables. The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent. The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero which combines its learned model with planning. In (Ozair et al., 2021) the authors learn a stochastic transition model using a VQ-VAE generative network (van den Oord et al., 2017) and subsequently combine it with MCTS. They show that their method can match the performance of MuZero in chess, while viewing the problem as a singleplayer task and implicitly learning to model the behaviour of the opponent. Despite its promise their approach was only applied in a supervised setting using expert data, and did not address the challenges of learning a stochastic model in the reinforcement learning setting. Moreover, the learned model was trained to explicitly predict the observation at every step, which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations. Finally, the authors used a two stage training process: first, a model learns latent representations of the observations, then these representations are used to learn a transition model. This makes it hard to apply this approach in the reinforcement learning setting. 3 BACKGROUND 3.1 MuZero MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm. The model is conditioned on the history of observations o≤t at timestep t and a sequence of future actions at:t+K , and it is trained to predict the search policies πt:t+K , values vπt:t+K and intermediate rewards rt:t+K at each future timestep. MuZero uses deterministic functions for its model, and thus it implicitly assumes that the underlying environment dynamics are also deterministic. MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator. Model MuZero’s learned model consists of 3 functions: a representation function h, a dynamics function g and a prediction function f . The representation function maps the current history of observations o≤t into a latent state s0t . The dynamics function g receives the previous latent state s k t and combines it with an action at+k to produce the next latent state sk+1t and the reward r k t . Finally, the prediction function f receives each latent state skt as an input and computes the policy p k t and value vkt . Given a sequence of policy πt:T , value zt:T , and reward ut:T targets, the model is trained to minimize the loss shown in 1. 
L^{MuZero} = \sum_{k=0}^{K} l^p(\pi_{t+k}, p^k_t) + \sum_{k=0}^{K} l^v(z_{t+k}, v^k_t) + \sum_{k=1}^{K} l^r(u_{t+k}, r^k_t)    (1)

The policy targets π_{t+k} correspond to the MCTS policy that was generated when searching from observation o_{≤t+k}. The value targets z_{t+k} are computed using n-step returns (Sutton & Barto, 2018). Finally, the reward targets u_{t+k} correspond to the real instantaneous rewards observed when this sequence was generated. Search MuZero uses a variant of the tree-based MCTS algorithm first proposed in (Silver et al., 2018). The tree is constructed recursively through a number of simulations. Each simulation consists of 3 phases: selection, expansion and backpropagation. During the selection phase, the tree is traversed starting from the root node until a leaf edge is reached. At each internal node s, the algorithm selects the action a which maximizes the upper confidence bound proposed in (Silver et al., 2016) and shown in Equation 2.

a = \arg\max_a \left[ Q(s, a) + P(a \mid s) \cdot \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)} \left( \alpha_1 + \log\left( \frac{\sum_b N(s, b) + \alpha_2 + 1}{\alpha_2} \right) \right) \right]    (2)

Here, Q(s, a) is the value estimate for action a, N(s, a) the visit count, P(a | s) the prior probability of selecting action a, and α_1, α_2 are constants which control the relative importance of the Q(s, ·) estimates and the prior probabilities P(· | s). In the next phase, expansion, the leaf edge is expanded by querying the MuZero model and a new node is added to the tree. Finally, during the backpropagation phase, the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate. 3.2 VECTOR QUANTISED VARIATIONAL AUTOENCODER The Vector Quantised Variational AutoEncoder (VQ-VAE, van den Oord et al. (2017)) is a generative modeling technique which uses four key components: an encoder neural network e, a decoder neural network d, a vector quantisation layer vq, and an autoregressive model m. Given an input x_t, the encoder produces an embedding c^e_t = e(x_t). The quantisation layer comprises a set of M codes {c_i}_{i=0}^{M}, called the codebook, and quantises the encoder's output embedding c^e_t by returning the nearest code c_t = c_{k_t} along with its index k_t = \arg\min_i \| c_i − c^e_t \|. Additionally, in the backward pass, this quantisation is treated as an identity function, referred to as straight-through gradient estimation (Bengio et al., 2013). The decoder produces a reconstruction of the input, x̂_t = d(c_t). The autoregressive model predicts a distribution p(k_t | c_{<t}) = m(c_{<t}) over the code index at time t using the quantised embeddings c_{<t} of the previous timesteps. The VQ-VAE equations are shown in Equations 3.

Encoder: c^e_t = e(x_t)    Quantisation: c_t, k_t = vq(c^e_t)    Decoder: x̂_t = d(c_t)    Model: p(k_t | c_{<t}) = m(c_{<t})    (3)

Typically, the encoder, decoder, and codebook are trained first and then frozen to train the autoregressive model in an additional second stage. The total loss for the VQ-VAE is

L^{vqvae}_φ = \sum_{t=0}^{N-1} \Big[ \underbrace{\| \hat{x}_t − x_t \|}_{\text{reconstruction}} + β \underbrace{\| c_t − c^e_t \|^2}_{\text{commitment}} − γ \underbrace{\log p(k_t | c_{<t})}_{\text{second stage}} \Big]    (4)

4 Stochastic MuZero In this section we present our novel algorithm, Stochastic MuZero. Our approach combines a learned stochastic transition model of the environment dynamics with a variant of Monte Carlo tree search (MCTS). First, we describe the new model and subsequently how it is combined with MCTS for planning.
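To make the selection rule of Equation 2 concrete before moving on to the stochastic model, here is a minimal sketch in plain Python. The node statistics are assumed to be stored in dictionaries, and α_1, α_2 correspond to the pb_c_init and pb_c_base constants that appear in the pseudocode of Appendix I; this is an illustrative reading of the formula, not the authors' implementation.

import math

def select_action(Q, P, N, alpha1=1.25, alpha2=19652):
  """pUCT action selection at an internal node (cf. Equation 2).

  Q, P, N map each legal action to its value estimate, prior probability
  and visit count, respectively.
  """
  total_visits = sum(N.values())

  def score(a):
    exploration = (math.sqrt(total_visits) / (1 + N[a])) * (
        alpha1 + math.log((total_visits + alpha2 + 1) / alpha2))
    return Q[a] + P[a] * exploration

  return max(Q, key=score)

# Usage with three hypothetical actions: the rarely visited action 1 gets a
# large exploration bonus relative to its prior and value.
Q = {0: 0.1, 1: 0.3, 2: 0.2}
P = {0: 0.5, 1: 0.2, 2: 0.3}
N = {0: 10, 1: 2, 2: 5}
print(select_action(Q, P, N))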
4.1 STOCHASTIC MODEL Afterstates We consider the problem of modeling the dynamics of a stochastic environment. Similarly to MuZero, the model receives an initial observation o_{≤t} at time step t and a sequence of actions a_{t:t+K}, and needs to make predictions about the future values, policies and rewards. In contrast to MuZero, which only considers latent states that correspond to real states of the environment, Stochastic MuZero makes use of the notion of afterstates (Sutton & Barto, 2018) to capture the stochastic dynamics. An afterstate as_t is the hypothetical state of the environment after an action is applied but before the environment has transitioned to a true state:

s_t --a_t--> as_t ~~> s_{t+1}

By using afterstates we can separate the effect of applying an action to the environment from that of the chance transition given an action. For example, in backgammon the afterstate corresponds to the board state after one player has played its action but before the other player has had the chance to roll the dice. It is also possible to define the value of an afterstate as V(as_t) = Q(s_t, a_t) and the transition probabilities of the environment dynamics as Pr(s_{t+1} | as_t) = Pr(s_{t+1} | s_t, a_t). An afterstate can lead to multiple states based on a chance event. In our work we assume that there is a finite number M of possible states that the environment can transition to given an afterstate, and this way we can associate each transition with a chance outcome c^i_t. An example of a chance outcome could be the result of the dice in a game of backgammon. By defining afterstates as_t and chance outcomes c_t, we can model a chance transition using a deterministic model s_{t+1}, r_{t+1} = M(as_t, c_t) and a distribution Pr(s_{t+1} | as_t) = Pr(c_t | as_t). The task of learning a stochastic model is then reduced to the problem of learning afterstates as and chance outcomes c. Model The stochastic model of Stochastic MuZero consists of 5 functions: a representation function h which maps the current observation o_{≤t} to a latent state s^0_t; an afterstate dynamics function φ which, given a state s^k_t and an action a_{t+k}, produces the next latent afterstate as^k_t; a dynamics function g which, given an afterstate as^k_t and a chance outcome c_{t+k+1}, produces the next latent state s^{k+1}_t and a reward prediction r^{k+1}_t; a prediction function f which, given a state s^k_t, generates the value v^k_t and policy p^k_t predictions; and an afterstate prediction function ψ which, given an afterstate as^k_t, generates a value prediction Q^k_t and a distribution σ^k_t = Pr(c_{t+k+1} | as^k_t) over possible future chance outcomes c_{t+k+1}. The model equations are shown in (5).

Representation: s^0_t = h(o_{≤t})
Prediction: p^k_t, v^k_t = f(s^k_t)
Afterstate Dynamics: as^k_t = φ(s^k_t, a_{t+k})
Afterstate Prediction: σ^k_t, Q^k_t = ψ(as^k_t)
Dynamics: s^{k+1}_t, r^{k+1}_t = g(as^k_t, c_{t+k+1})    (5)

During inference, given an initial observation o_{≤t} and a sequence of actions a_{t:t+K}, we can generate trajectories from the above model by recurrently unrolling it and sampling chance outcomes from the distributions c_{t+k+1} ∼ σ^k_t. Chance outcomes Stochastic MuZero models the chance outcomes by using a novel variant of the VQ-VAE method. Specifically, we consider a VQ-VAE with a constant codebook of size M. Each entry in the codebook is a fixed one-hot vector of size M. By using a fixed codebook of one-hot vectors, we can simplify the VQ-VAE equations (3). In this case, we model the encoder embedding c^e_t as a categorical variable, and selecting the closest code c_t is equivalent to computing the expression onehot(argmax_i(c^{e,i}_t)).
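A minimal sketch of this fixed one-hot quantisation is shown below, written with JAX (which the implementation section states was used for the networks). The function name is illustrative; the straight-through behaviour comes from routing the gradient around the non-differentiable argmax with stop_gradient.

import jax
import jax.numpy as jnp

def quantise_one_hot(c_e):
  """Snaps an encoder embedding to the nearest fixed one-hot code.

  c_e: encoder embedding of shape (M,), treated as a categorical variable.
  Forward pass: the hard code onehot(argmax_i c_e^i).
  Backward pass: gradients flow to c_e as if the op were the identity
  (straight-through estimation).
  """
  m = c_e.shape[-1]
  hard = jax.nn.one_hot(jnp.argmax(c_e, axis=-1), m)
  # Straight-through: forward value equals `hard`, gradient equals that of `c_e`.
  return c_e + jax.lax.stop_gradient(hard - c_e)

# Example: an M=4 embedding is mapped to (approximately) the one-hot code [0, 1, 0, 0].
c_e = jnp.array([0.1, 0.7, 0.15, 0.05])
print(quantise_one_hot(c_e))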
The resulting encoder can also be viewed as a stochastic function of the observation which makes use of the Gumbel-softmax reparameterization trick (Jang et al., 2016) with zero temperature during the forward pass and a straight-through estimator during the backward pass. There is no explicit decoder in our model, and contrary to previous work (Ozair et al., 2021) we do not make use of a reconstruction loss. Instead, the network is trained end-to-end in a fashion similar to MuZero. In the following section we explain the training procedure in more detail. Model training The stochastic model is unrolled and trained in an end-to-end fashion similar to MuZero. Specifically, given a trajectory of length K with observations o_{≤t:t+K}, actions a_{t:t+K}, value targets z_{t:t+K}, policy targets π_{t:t+K} and rewards u_{t+1:t+K}, the model is unrolled for K steps as shown in Figure 1 and is trained to optimize the sum of two losses as shown in Equation 6: a MuZero loss and a chance loss for learning the stochastic dynamics of the model.

L^{total} = L^{MuZero} + L^{chance}    (6)

The MuZero loss is the same as the one described in Section 3.1 (Equation 1). The chance loss is applied to the predictions Q^k_t and σ^k_t, which correspond to the latent afterstates as^k. The Q^k_t value is trained to match the value target z_{t+k}, and σ^k is trained towards the one-hot chance code c_{t+k+1} = onehot(argmax_i e(o_{≤t+k+1})^i) produced by the encoder. Finally, following standard VQ-VAE practice, we use a VQ-VAE commitment cost to ensure that the output of the encoder, c^e_{t+k+1} = e(o_{≤t+k+1}), is close to the code c_{t+k+1}. Equation 7 shows the chance loss used to train the model.

L^{chance}_w = \sum_{k=0}^{K-1} l^Q(z_{t+k}, Q^k_t) + \sum_{k=0}^{K-1} l^σ(c_{t+k+1}, σ^k_t) + β \sum_{k=0}^{K-1} \underbrace{\| c_{t+k+1} − c^e_{t+k+1} \|^2}_{\text{VQ-VAE commitment cost}}    (7)

4.2 STOCHASTIC SEARCH Stochastic MuZero extends the MCTS algorithm used in MuZero by introducing chance nodes and chance values to the search. In the stochastic instantiation of MCTS, there are two types of nodes: decision and chance (Couetoux, 2013). The chance and decision nodes are interleaved along the depth of the tree, so that the parent of each decision node is a chance node. The root node of the tree is always a decision node. In our approach, each chance node corresponds to a latent afterstate (Section 4.1) and it is expanded by querying the stochastic model, where the parent state and an action are provided as input and the model returns a value for the node and a prior distribution over future codes Pr(c | as). After a chance node is expanded, its value is backpropagated up the tree. Finally, when the node is traversed during the selection phase, a code is selected by sampling the prior distribution.¹ In Stochastic MuZero each internal decision node is again expanded by querying the learned model, where the state of the chance parent node and a sampled code c are provided as input, and the model returns a reward, a value and a policy. Similarly to MuZero, the value of the newly added node is backpropagated up the tree, and the pUCT formula (2) is used to select an edge. The stochastic search used by Stochastic MuZero is shown schematically in Figure 1. 5 EXPERIMENTS We applied our algorithm to a variety of challenging stochastic and deterministic environments. First, we evaluated our approach in the classic game of 2048, a stochastic single-player game.
Subsequently, we considered a two-player zero-sum stochastic game, Backgammon, which belongs to the same class of board games as Go, chess or Shogi, where MuZero excels, but with stochasticity induced by the use of dice. Finally, we evaluated our method in the deterministic game of Go, to measure any performance loss caused by the use of a stochastic model and search in deterministic environments in comparison to MuZero. In each environment we assess our algorithm's ability to learn a transition model and effectively use it during search. To this end, we compare Stochastic MuZero (using a stochastic learned model) to MuZero (using a deterministic learned model), AlphaZero (using a perfect simulator), and a strong baseline method (also using a perfect simulator). In the following sections we present our results for each environment separately. 5.1 2048 The game of 2048 (inspired by the game of Threes!) is a stochastic, single-player, perfect-information puzzle game played on a 4x4 board. The objective of the game is to slide numbered tiles on a grid to combine them and create a tile with the number 2048; one can continue to play the game after reaching the goal, creating tiles with larger numbers. The episode reward is the sum of all created tile numbers. There is a plethora of previous work (Szubert & Jaśkowski, 2014; Yeh et al., 2017; Oka & Matsuzaki, 2016; Rodgers & Levine, 2014; Neller, 2015) on combining reinforcement learning and tree search methods for tackling 2048. Despite its simplicity, model-free approaches have traditionally struggled to achieve high performance, while planning-based approaches have exploited perfect knowledge of the simulator. To date, the best performing agent used the planning-based approach proposed in (Jaśkowski, 2016). This method used an expectimax tree search over a perfect simulator, combined with domain-specific knowledge and a number of novel algorithmic ideas that exploited the structure of this specific problem. In contrast, our method uses a learned model and no prior knowledge about the environment. Figure 2 compares the performance of Stochastic MuZero in 2048 to AlphaZero, MuZero and the state-of-the-art agent of Jaśkowski (2016). Our method outperformed the Jaśkowski (2016) agent while using only a quarter of the training data. Stochastic MuZero also achieved the same performance as AlphaZero (using a perfect simulator), despite learning the model, and performed far better than MuZero (using a deterministic model). 5.2 BACKGAMMON Backgammon is a classic two-player, zero-sum, stochastic board game; it was popularized as a standard testbed for reinforcement learning and artificial intelligence by TD-Gammon (Tesauro, 1995). Here we focus on the single-game setting, where the final score takes the values ±1 for a simple win or loss, ±2 for a gammon and ±3 for a backgammon. In all experiments we compared to GNUbg Grandmaster (Free Software Foundation, 2004), a superhuman-level open-source backgammon player. GNUbg combines a learned value function based on handcrafted features with a specialized min-max tree search using a perfect stochastic simulator. GNUbg Grandmaster uses a 3-ply look-ahead search over a branching factor of 20 legal moves on average and 21 chance transitions.
¹ In practice we follow the same quasi-random sampling approach as in Ozair et al. (2021) (A.3), where the code is selected using arg max_c Pr(c | as) / (N(c) + 1).
Stochastic MuZero, using a learned stochastic model of the environment and only 1600 simulations per move, achieved the same playing strength as GNUbg, as shown in Figure 5b. The model learned by Stochastic MuZero is of high quality: it reached the same playing strength as AlphaZero (using a perfect stochastic simulator), and much higher strength than MuZero (using a deterministic learned model). The model also robustly scaled to larger planning budgets (Figure 5c): the performance of Stochastic MuZero improved with increasing number of simulations per move, and ultimately exceeded the playing strength of GNUbg Grandmaster. Given the high dimensionality of the action space in Backgammon (see appendix for details), our Backgammon experiments used the sample-based search introduced by Hubert et al. (2021). 5.3 GO Go is a classic, two player, perfect information, zero-sum board game, that has been studied heavily in the field of artificial intelligence. AlphaZero and subsequently, MuZero have been the only algorithms which have managed to achieve super-human performance, purely through selfplay, in this challenging domain. Since the goal of Stochastic MuZero is to extend the applicability of MuZero to stochastic environments while maintaining the latter’s performance in deterministic environments, we compared the performance of the two algorithms in the game of Go. Figure 4 shows the Elo (Coulom, 2008) achieved by Stochastic MuZero and MuZero during training. Although, Stochastic MuZero requires twice the number of network expansions in comparison to MuZero to achieve the same performance, due to the use of a stochastic MCTS instead of a deterministic one, we ensure that the methods are computationally equivalent by halving the network depth for the chance and dynamic parts of the Stochastic MuZero’s network. 5.4 REPRODUCIBILITY In order to evaluate the robustness of our method in all different environments, we replicated our experiments using nine different initial random seeds (see figure 5.4). We observe that our method is robust to the random initialization and there is minimal variation in its performance between multiple runs. Due to the computational cost of each experiment we used a smaller number of training steps for each experiment. 6 CONCLUSIONS In this work, we proposed a new method for learning a stochastic model of the environment, in a fully online reinforcement learning setting, and showed that the learned model can be effectively combined with planning. Our approach builds on top of MuZero, a model-based reinforcement learning agent that has been widely successful in a range of environments and settings, but its applicability is limited to deterministic or weakly stochastic environments. We have shown that our algorithm, Stochastic MuZero, can overcome the limitations of MuZero, significantly outperforming it in stochastic environments, and it can achieve the same or better performance than AlphaZero which makes use of a perfect simulator for the environment. Finally, we have demonstrated that Stochastic MuZero matches or exceeds the performance of previous methods that use a perfect stochastic simulator, in a pure reinforcement learning setting without using any prior knowledge about the environment. 7 REPRODUCIBILITY STATEMENT In order to ensure the reproducability of our results by the research community, we have included detailed pseudocode, references to all environments and datasets used as well as a detailed description of the hyperparameters used (see Appendix). 
We did not release the full code as it relies on a lot of proprietary internal infrastructure, limiting its usefulness. We also provide a study of the robustness of our method under different random initialization conditions (see 5.4). D BACKGAMMON EXPERIMENTS Backgammon is an ancient two player, zero-sum, perfect information, stochastic board game. The board consists of 24 squares (or points) and each player controls 15 checkers, which can move based on the outcome of a dice roll. The two players move their checkers in opposite directions and their goal is to move all their checkers off the board first. In addition to a simple winning, a player can also score a double ("gammon") or a triple ("backgammon") winning. A "gammon" is achieved when a player bears off all their checkers before their opponent manages to bear off any, while a "backgammon" when the opponent also has checkers left in the player’s home quadrant (farthermost quadrant from the opponent’s perspective). Each player can impede the progress of their opponent through "hitting" the opponent’s checkers or blocking their advancement. A "hit" is achieved when a player’s checker advances to a position with a single opponent’s checker. Then the opponent’s checker needs to reenter the board in the player’s home quadrant and no further moves are allowed to the opponent until that happens. A position is blocked to the opponent when it is occupied by at least two of the player’s checkers. Each player makes moves based on the values yielded by rolling two dice. In the case of "doubles", aka the two dice have the same value, the player can play up to 4 moves. One of the challenges of computer Backgammon is the high branching ratio, since at each ply there are 21 chance outcomes, which yield positions with an average of 20 legal moves each, resulting in a branching ratio of several hundred per ply. In our backgammon experiments, the board was represented using a vector of size 28, with the first 24 positions representing the number of chips for each player in the 24 possible points on the board, and the last four representing the number of hit chips and born off chips for each of the two players. We used positive numbers for the current player’s chips and negative ones for her opponent. An action in our implementation consists of 4 micro-actions, the same as the maximum number of dice a player can play at each turn. Each micro-action encodes the source position of a chip along with the value of the die used. We consider 26 possible source positions, with the 0th position corresponding to a no-op, the 1st to retrieving a chip from the hit pile, and the remaining to selecting a chip in one of the 24 possible points. Each micro-action is encoded as a single integer with micro-action = src · 6 + die. Similarly to the 2048 experiments, the representation, afterstate dynamics, dynamics and encoder functions were implemented using a 10 block ResNet v2 style pre-activation residual tower (He et al., 2016) coupled with Layer Normalisation (Ba et al., 2016) and Rectified Linear Unit (ReLU) activations. Each linear layer has an output size of 256. The action was provided to the afterstate dynamics network as a vector which was the result of the concatenation of the one-hot representation of each micro-action. We used a codebook of size 32 to model the stochasticity in the environment. 
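As a small illustration of the micro-action encoding described in this appendix, the sketch below packs and unpacks a micro-action as a single integer. It assumes, purely for illustration, that the die value is stored 0-indexed (0-5 for faces 1-6) so that micro-action = src · 6 + die stays within 26 · 6 consecutive codes; the paper does not spell out this indexing.

NUM_SOURCES = 26  # 0 = no-op, 1 = the hit pile, 2..25 = the 24 board points

def encode_micro_action(src, die_face):
  """Packs (source position, die face in 1..6) into a single integer code."""
  assert 0 <= src < NUM_SOURCES and 1 <= die_face <= 6
  return src * 6 + (die_face - 1)

def decode_micro_action(code):
  """Inverse of encode_micro_action."""
  src, die_index = divmod(code, 6)
  return src, die_index + 1

# A full action consists of 4 micro-actions (the no-op source can fill slots
# when fewer moves are available), matching the 4-step autoregressive policy head.
action = tuple(encode_micro_action(s, d) for s, d in [(5, 3), (12, 1), (0, 1), (0, 1)])
print(action, [decode_micro_action(c) for c in action])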
Following the work of (Hubert et al., 2021), we used an autoregressive prediction head to model the network policy, with each step corresponding to a single micro-action. To generate a full action, the network was unrolled for 4 steps. In contrast to the 2048 experiments, the value was represented as a scalar. Similarly to MuZero when applied to board games, we used Monte Carlo returns to compute the value targets z_t, and we assumed a discount of 1. We trained the model using an Adam optimizer with weight decay (Loshchilov & Hutter, 2017), with a learning rate of 0.0003, a weight decay of 0.0001, and a batch size of 1024, for a total of 8M steps. In all our experiments we used a replay buffer of 100000 games, and the training trajectories were sampled uniformly. For exploration, we injected Dirichlet noise into the prior policy at the root node. However, since the number of legal moves at each position can change dramatically in backgammon, we dynamically adapted the alpha parameter of the Dirichlet noise based on the number of legal moves, with α = 1/√(num_legal_moves). We used a budget of 1600 simulations for each MCTS search. E GO EXPERIMENTS In our Go experiments, we used the same approach as the one proposed in Hubert et al. (2021). The main difference between this setup and the one proposed in the original MuZero (Schrittwieser et al., 2020) is the use of n-step bootstrapping with a target network to improve the data efficiency of the algorithm. The MuZero and Stochastic MuZero players were evaluated during training by playing 100 matches with a search budget of 800 simulations for MuZero and 1600 for Stochastic MuZero. In order to ensure that the two methods are computationally equivalent, each of the chance and dynamics networks of Stochastic MuZero has half the depth of the dynamics network used by MuZero. The Elo scale was anchored so that the performance of the final MuZero baseline corresponded to an Elo of 2000. F CHANCE ANALYSIS We investigated the distribution of chance outcomes at each chance node for Stochastic MuZero. We collected a dataset for each game by storing the probability distribution over chance nodes, σ^k_t = Pr(c_{t+k+1} | as^k_t), for all afterstate prediction network evaluations invoked throughout all searches in 5 episodes. Subsequently, we sorted each chance node distribution and, finally, we computed the average distribution, as shown in Figure 6. We observed that in the case of a deterministic environment like Go, the chance distribution collapsed to a single code, while in stochastic environments the model used multiple codes. Furthermore, in Backgammon, the chance distribution had a support of 21 codes with non-negligible probability, which corresponds to the number of distinct rolls of two dice. G COMPUTATIONAL RESOURCES All experiments were run using second-generation Google Cloud TPUs (Google, 2018). For Backgammon, we used 1 TPU for training and 16 TPUs for acting, for approximately 27 hours, equivalent to 10 days on a single V100 GPU. In 2048 we used 1 TPU for training and 4 TPUs for acting, for 80 hours per experiment, equivalent to roughly 8 days on a V100. Finally, in Go we used the same setup as in MuZero (Schrittwieser et al., 2020). H IMPLEMENTATION Stochastic MuZero was implemented as an extension to the standard MuZero algorithm, as described in (Schrittwieser et al., 2020).
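Before the implementation details below, here is a small sketch in plain Python/NumPy of the adaptive root exploration noise described in the backgammon appendix above. The mixing fraction is passed in explicitly; 0.1 is the root_dirichlet_fraction value that appears in the 2048 configuration of Appendix I, and the value used for backgammon is not stated in this paragraph.

import numpy as np

def add_adaptive_root_noise(prior, rng, fraction):
  """Mixes Dirichlet noise into the root prior over legal actions.

  prior: dict mapping each legal action to its prior probability.
  The Dirichlet concentration adapts to the branching factor,
  alpha = 1 / sqrt(num_legal_moves), as described in Appendix D.
  """
  actions = list(prior.keys())
  alpha = 1.0 / np.sqrt(len(actions))
  noise = rng.dirichlet([alpha] * len(actions))
  return {a: (1 - fraction) * prior[a] + fraction * n
          for a, n in zip(actions, noise)}

# Usage with a hypothetical root prior over three legal moves; fraction=0.1 is
# the value from the 2048 config in Appendix I.
rng = np.random.default_rng(0)
print(add_adaptive_root_noise({0: 0.6, 1: 0.3, 2: 0.1}, rng, fraction=0.1))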
We used the JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020) libraries to implement the neural networks and optimization methods described in this paper. Along with this work, we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used for each environment. I PSEUDOCODE For completeness we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used by the agent. """Pseudocode description of the Stochastic MuZero algorithm. This pseudocode was adapted from the original MuZero pseudocode. """ # pylint: disable=unused-argument # pylint: disable=missing-docstring # pylint: disable=g-explicit-length-test import abc import math from typing import Any, Dict, Callable, List, NamedTuple, Tuple, Union, Optional, Sequence import dataclasses import numpy as np MAXIMUM_FLOAT_VALUE = float(’inf’) ######################################## ####### Environment interface ########## # An action to apply to the environment. # It can a single integer or a list of micro-actions for backgammon. Action = Any # The current player to play. Player = int class Environment: """Implements the rules of the environment.""" def apply(self, action: Action): """Applies an action or a chance outcome to the environment.""" def observation(self): """Returns the observation of the environment to feed to the network. """ def is_terminal(self) -> bool: """Returns true if the environment is in a terminal state.""" return False def legal_actions(self) -> Sequence[Action]: """Returns the legal actions for the current state.""" return [] def reward(self, player: Player) -> float: """Returns the last reward for the player.""" return 0.0 def to_play(self) -> Player: """Returns the current player to play.""" return 0 ########################## ####### Helpers ########## class KnownBounds(NamedTuple): min: float max: float class MinMaxStats(object): """A class that holds the min-max values of the tree.""" def __init__(self, known_bounds: Optional[KnownBounds]): self.maximum = known_bounds.max if known_bounds else - MAXIMUM_FLOAT_VALUE self.minimum = known_bounds.min if known_bounds else MAXIMUM_FLOAT_VALUE def update(self, value: float): self.maximum = max(self.maximum, value) self.minimum = min(self.minimum, value) def normalize(self, value: float) -> float: if self.maximum > self.minimum: # We normalize only when we have set the maximum and minimum values . return (value - self.minimum) / (self.maximum - self.minimum) return value # A chance outcome. Outcome = Any # An object that holds an action or a chance outcome. ActionOrOutcome = Union[Action, Outcome] LatentState = List[float] AfterState = List[float] class NetworkOutput(NamedTuple): value: float probabilities: Dict[ActionOrOutcome, float] reward: Optional[float] = 0.0 class Network: """An instance of the network used by stochastic MuZero.""" def representation(self, observation) -> LatentState: """Representation function maps from observation to latent state.""" return [] def predictions(self, state: LatentState) -> NetworkOutput: """Returns the network predictions for a latent state.""" return NetworkOutput(0, {}, 0) def afterstate_dynamics(self, state: LatentState, action: Action) -> AfterState: """Implements the dynamics from latent state and action to afterstate .""" return [] def afterstate_predictions(self, state: AfterState) -> NetworkOutput: """Returns the network predictions for an afterstate.""" # No reward for afterstate transitions. 
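# For an afterstate, `probabilities` holds the distribution sigma over chance
# outcomes (codebook entries) and `value` holds the Q-value prediction.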
return NetworkOutput(0, {}) def dynamics(self, state: AfterState, action: Outcome) -> LatentState: """Implements the dynamics from afterstate and chance outcome to state.""" return [] def encoder(self, observation) -> Outcome: """An encoder maps an observation to an outcome.""" class NetworkCacher: """An object to share the network between the self-play and training jobs.""" def __init__(self): self._networks = {} def save_network(self, step: int, network: Network): self._networks[step] = network def load_network(self) -> Tuple[int, Network]: training_step = max(self._networks.keys()) return training_step, self._networks[training_step] # Takes the training step and returns the temperature of the softmax policy. VisitSoftmaxTemperatureFn = Callable[[int], float] # Returns an instance of the environment. EnvironmentFactory = Callable[[], Environment] # The factory for the network. NetworkFactory = Callable[[], Network] @dataclasses.dataclass class StochasticMuZeroConfig: # A factory for the environment. environment_factory: EnvironmentFactory network_factory: NetworkFactory # Self-Play num_actors: int visit_softmax_temperature_fn: VisitSoftmaxTemperatureFn num_simulations: int discount: float # Root prior exploration noise. root_dirichlet_alpha: float root_dirichlet_fraction: float root_dirichlet_adaptive: bool # UCB formula pb_c_base: float = 19652 pb_c_init: float = 1.25 # If we already have some information about which values occur in the # environment, we can use them to initialize the rescaling. # This is not strictly necessary, but establishes identical behaviour to # AlphaZero in board games. known_bounds: Optional[KnownBounds] = None # Replay buffer. num_trajectories_in_buffer: int = int(1e6) batch_size: int = int(128) num_unroll_steps: int = 5 td_steps: int = 6 td_lambda: float = 1.0 # Alpha and beta parameters for prioritization. # By default they are set to 0 which means uniform sampling. priority_alpha: float = 0.0 priority_beta: float = 0.0 # Training training_steps: int = int(1e6) export_network_every: int = int(1e3) learning_rate: float = 3e-4 weight_decay: float = 1e-4 # The number of chance codes (codebook size). # We use a codebook of size 32 for all our experiments. codebook_size: int = 32 ################################## ## Environment specific configs ## def twentyfortyeight_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an implementation of 2048. return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: if train_steps < 1e5: return 1.0 elif train_steps < 2e5: return 0.5 elif train_steps < 3e5: return 0.1 else: # Greedy selection. return 0.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature=visit_softmax_temperature, num_simulations=100, discount=0.999, root_dirichlet_alpha=0.3, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=False, num_trajectories_in_buffer=int(125e3), td_steps=10, td_lambda=0.5, priority_alpha=1.0, priority_beta=1.0, training_steps=int(20e6), batch_size=1024, weight_decay=0.0) def backgammon_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an backgammon. We consider single games without a doubling cube. 
return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: return 1.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature_fn=visit_softmax_temperature, num_simulations=1600, discount=1.0, # Unused, we use adaptive dirichlet for backgammon. root_dirichlet_alpha=-1.0, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=True, # Max value is 3 for backgammon. known_bounds=KnownBounds(min=-3, max=3), # 1e5 full episodes stored. num_trajectories_in_buffer=int(1e5), # We use monte carlo returns. td_steps=int(1e3), training_steps=int(8e6), batch_size=1024, learning_rate=3e-4, weight_decay=1e-4) ################################## ############ Replay ############## class SearchStats(NamedTuple): search_policy: Dict[Action, int] search_value: float class State(NamedTuple): """Data for a single state.""" observation: List[float] reward: float discount: float player: Player action: Action search_stats: SearchStats Trajectory = Sequence[State] class ReplayBuffer: """A replay buffer to hold the experience generated by the selfplay.""" def __init__(self, config: StochasticMuZeroConfig): self.config = config self.data = [] def save(self, seq: Trajectory): if len(self.data) > self.config.num_trajectories_in_buffer: # Remove the oldest sequence from the buffer. self.data.pop(0) self.data.append(seq) def sample_trajectory(self) -> Trajectory: """Samples a trajectory uniformly or using prioritization.""" return self.data[0] def sample_index(self, seq: Trajectory) -> int: """Samples an index in the trajectory uniformly or using prioritization.""" return 0 def sample_element(self) -> Trajectory: """Samples a single element from the buffer.""" # Sample a trajectory. trajectory = self.sample_trajectory() state_idx = self.sample_index(trajectory) limit = max([self.config.num_unroll_steps, self.config.td_steps]) # Returns a trajectory of experiment. return trajectory[state_idx:state_idx + limit] def sample(self) -> Sequence[Trajectory]: """Samples a training batch.""" return [self.sample_element() for _ in range(self.config.batch_size)] ################################## ############ Search ############## class ActionOutcomeHistory: """Simple history container used inside the search. Only used to keep track of the actions and chance outcomes executed. """ def __init__(self, player: Player, history: Optional[List[ActionOrOutcome]] = None): self.initial_player = player self.history = list(history or []) def clone(self): return ActionOutcomeHistory(self.initial_player, self.history) def add_action_or_outcome(self, action_or_outcome: ActionOrOutcome): self.history.append(action_or_outcome) def last_action_or_outcome(self) -> ActionOrOutcome: return self.history[-1] def to_play(self) -> Player: # Returns the next player to play based on the initial player and the # history of actions and outcomes. For example for backgammon the two # players alternate, while for 2048 it is always the same player. 
return 0 class Node(object): """A Node in the MCTS search tree.""" def __init__(self, prior: float, is_chance: bool = False): self.visit_count = 0 self.to_play = -1 self.prior = prior self.value_sum = 0 self.children = {} self.state = None self.is_chance = is_chance self.reward = 0 def expanded(self) -> bool: return len(self.children) > 0 def value(self) -> float: if self.visit_count == 0: return 0 return self.value_sum / self.visit_count # Core Monte Carlo Tree Search algorithm. # To decide on an action, we run N simulations, always starting at the root of # the search tree and traversing the tree according to the UCB formula until we # reach a leaf node. def run_mcts(config: StochasticMuZeroConfig, root: Node, action_outcome_history: ActionOutcomeHistory, network: Network, min_max_stats: MinMaxStats): for _ in range(config.num_simulations): history = action_outcome_history.clone() node = root search_path = [node] while node.expanded(): action_or_outcome, node = select_child(config, node, min_max_stats) history.add_action(action_or_outcome) search_path.append(node) # Inside the search tree we use the dynamics function to obtain the next # hidden state given an action and the previous hidden state. parent = search_path[-2] if parent.is_chance: # The parent is a chance node, afterstate to latent state transition. # The last action or outcome is a chance outcome. child_state = network_output.dynamics(parent.state, history. last_action_or_outcome()) network_output = network_output.predictions(child_state) # This child is a decision node. is_child_chance = False else: # The parent is a decision node, latent state to afterstate transition. # The last action or outcome is an action. child_state = network_output.afterstate_dynamics( parent.state, history.last_action_or_outcome()) network_output = network_output.afterstate_predictions(child_state) # The child is a chance node. is_child_chance = True # Expand the node. expand_node(node, child_state, network_output, history.to_play(), is_child_chance) # Backpropagate the value up the tree. backpropagate(search_path, network_output.value, history.to_play(), config.discount, min_max_stats) # Select the child with the highest UCB score. def select_child(config: StochasticMuZeroConfig, node: Node, min_max_stats: MinMaxStats): if node.is_chance: # If the node is chance we sample from the prior. outcomes, probs = zip(*[(o, n.prob) for o, n in node.children.items() ]) outcome = np.random.choice(outcomes, p=probs) return outcome, node.children[outcome] # For decision nodes we use the pUCT formula. _, action, child = max( (ucb_score(config, node, child, min_max_stats), action, child) for action, child in node.children.items()) return action, child # The score for a node is based on its value, plus an exploration bonus based on # the prior. def ucb_score(config: StochasticMuZeroConfig, parent: Node, child: Node, min_max_stats: MinMaxStats) -> float: pb_c = math.log((parent.visit_count + config.pb_c_base + 1) / config.pb_c_base) + config.pb_c_init pb_c *= math.sqrt(parent.visit_count) / (child.visit_count + 1) prior_score = pb_c * child.prior if child.visit_count > 0: value_score = min_max_stats.normalize(child.reward + config.discount * child.value() ) else: value_score = 0 return prior_score + value_score # We expand a node using the value, reward and policy prediction obtained from # the neural network. 
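# For decision nodes the children are keyed by actions and their priors come
# from the policy head; for chance nodes the children are keyed by chance
# outcomes (codebook entries) and their priors come from the sigma
# distribution predicted for the afterstate.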
def expand_node(node: Node, state: Union[LatentState, AfterState], network_output: NetworkOutput, player: Player, is_chance: bool): node.to_play = player node.state = state node.is_chance = is_chance node.reward = network_output.reward for action, prob in network_output.probabilities.items(): node.children[action] = Node(prob) # At the end of a simulation, we propagate the evaluation all the way up the # tree to the root. def backpropagate(search_path: List[Node], value: float, to_play: Player, discount: float, min_max_stats: MinMaxStats): for node in reversed(search_path): node.value_sum += value if node.to_play == to_play else -value node.visit_count += 1 min_max_stats.update(node.value()) value = node.reward + discount * value # At the start of each search, we add dirichlet noise to the prior of the root # to encourage the search to explore new actions. def add_exploration_noise(config: StochasticMuZeroConfig, node: Node): actions = list(node.children.keys()) dir_alpha = config.root_dirichlet_alpha if config.root_dirichlet_adaptive: dir_alpha = 1.0 / np.sqrt(len(actions)) noise = np.random.dirichlet([dir_alpha] * len(actions)) frac = config.root_exploration_fraction for a, n in zip(actions, noise): node.children[a].prior = node.children[a].prior * (1 - frac) + n * frac ################################## ############ Self-play ########### class Actor(metaclass=abc.ABCMeta): """An actor to interact with the environment.""" @abc.abstractmethod def reset(self): """Resets the player for a new episode.""" @abc.abstractmethod def select_action(self, env: Environment) -> Action: """Selects an action for the current state of the environment.""" @abc.abstractmethod def stats(self) -> SearchStats: """Returns the stats for the player after it has selected an action. """ class StochasticMuZeroActor(Actor): def __init__(self, config: StochasticMuZeroConfig, cacher: NetworkCacher): self.config = config self.cacher = cacher self.training_step = -1 self.network = None def reset(self): # Read a network from the cacher for the new episode. self.training_step, self.network = self.cacher.load_network() self.root = None def _mask_illegal_actions(self, env: Environment, outputs: NetworkOutput) -> NetworkOutput: """Masks any actions which are illegal at the root.""" # We mask out and keep only the legal actions. masked_policy = {} network_policy = outputs.probabilities norm = 0 for action in env.legal_actions(): if action in network_policy: masked_policy[action] = network_policy[action] else: masked_policy[action] = 0.0 norm += masked_policy[action] # Renormalize the masked policy. masked_policy = {a: v / norm for a, v in masked_policy.items()} return NetworkOutput(value=outputs.value, probabilities=masked_policy ) def _select_action(self, root: Node): """Selects an action given the root node.""" # Get the visit count distribution. actions, visit_counts = zip(*[ (action, node.visit_counts) for action, node in node.children.items() ]) # Temperature temperature = self.config.visit_softmax_temperature_fn(self. training_step) # Compute the search policy. search_policy = [v ** (1. / temperature) for v in visit_counts] norm = sum(search_policy) search_policy = [v / norm for v in search_policy] return np.random.choice(actions, p=search_policy) def select_action(self, env: Environment) -> Action: """Selects an action.""" # New min max stats for the search tree. 
min_max_stats = MinMaxStats(self.config.known_bounds) # At the root of the search tree we use the representation function to # obtain a hidden state given the current observation. root = Node(0) # Provide the history of observations to the representation network to # get the initial latent state. latent_state = self.network.representation(env.observation()) # Compute the predictions. outputs = self.network.predictions(latent_state) # Keep only the legal actions. outputs = self._mask_illegal_actions(env, outputs) # Expand the root node. expand_node(root, latent_state, outputs, env.to_play(), is_chance= False) # Backpropagate the value. backpropagate([root], outputs.value, env.to_play(), self.config.discount, min_max_stats) # We add exploration noise to the root node. add_exploration_noise(self.config, root) # We then run a Monte Carlo Tree Search using only action sequences and the # model learned by the network. run_mcts(self.config, root, ActionOutcomeHistory(env.to_play()), self.network, min_max_stats) # Keep track of the root to return the stats. self.root = root # Return an action. return self._select_action(root) def stats(self) -> SearchStats: """Returns the stats of the latest search.""" if self.root is None: raise ValueError(’No search was executed.’) return SearchStats( search_policy={ action: node.visit_counts for action, node in self.root.children.items() }, search_value=self.root.value()) # Self-play. # Each self-play job is independent of all others; it takes the latest network # snapshot, produces an episode and makes it available to the training job by # writing it to a shared replay buffer. def run_selfplay(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): actor = StochasticMuZeroActor(config, cacher) while True: # Create a new instance of the environment. env = config.environment_factory() # Reset the actor. actor.reset() episode = [] while not env.is_terminal(): action = actor.select_action(env) state = State( observation=env.observation(), reward=env.reward(env.to_play()), discount=config.discount, player=env.to_play(), action=action, search_stats=actor.stats()) episode.append(state) env.apply(action) # Send the episode to the replay. replay_buffer.save(episode) ################################## ############ Training ############ class Learner(metaclass=abc.ABCMeta): """An learner to update the network weights based.""" @abc.abstractmethod def learn(self): """Single training step of the learner.""" @abc.abstractmethod def export(self) -> Network: """Exports the network.""" def policy_loss(predictions, labels): """Minimizes the KL-divergence of the predictions and labels.""" return 0.0 def value_or_reward_loss(prediction, target): """Implements the value or reward loss for Stochastic MuZero. For backgammon this is implemented as an MSE loss of scalars. For 2048, we use the two hot representation proposed in MuZero, and this loss is implemented as a KL divergence between the value and value target representations. For 2048 we also apply a hyperbolic transformation to the target (see paper for more information). Args: prediction: The reward or value output of the network. target: The reward or value target. Returns: The loss to minimize. """ return 0.0 class StochasticMuZeroLearner(Learner): """Implements the learning for Stochastic MuZero.""" def __init__(self, config: StochasticMuZeroConfig, replay_buffer: ReplayBuffer): self.config = config self.replay_buffer = replay_buffer # Instantiate the network. 
self.network = config.network_factory() def transpose_to_time(self, batch): """Transposes the data so the leading dimension is time instead of batch.""" return batch def learn(self): """Applies a single training step.""" batch = self.replay_buffer.sample() # Transpose batch to make time the leading dimension. batch = self.transpose_to_time(batch) # Compute the initial step loss. latent_state = self.network.representation(batch[0].observation) predictions = self.network.predictions(latent_state) # Computes the td target for the 0th position. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch) # Train the network value towards the td target. total_loss = value_or_reward_loss(predictions.value, value_target) # Train the network policy towards the MCTS policy. total_loss += policy_loss(predictions.probabilities, batch[0].search_stats.search_policy) # Unroll the model for k steps. for t in range(1, self.config.num_unroll_steps + 1): # Condition the afterstate on the previous action. afterstate = self.network.afterstate_dynamics( latent_state, batch[t - 1].action) afterstate_predictions = self.network.afterstate_predictions( afterstate) # Call the encoder on the next observation. # The encoder returns the chance code which is a discrete one hot code. # The gradients flow to the encoder using a straight through estimator. chance_code = self.network.encoder(batch[t].observation) # The afterstate value is trained towards the previous value target # but conditioned on the selected action to obtain a Q-estimate. total_loss += value_or_reward_loss( afterstate_predictions.value, value_target) # The afterstate distribution is trained to predict the chance code # generated by the encoder. total_loss += policy_loss(afterstate_predictions.probabilities, chance_code) # Get the dynamic predictions. latent_state = self.network.dynamics(afterstate, chance_code) predictions = self.network.predictions(latent_state) # Compute the new value target. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch[t:]) # The reward loss for the dynamics network. total_loss += value_or_reward_loss(predictions.reward, batch[t]. reward) total_loss += value_or_reward_loss(predictions.value, value_target) total_loss += policy_loss(predictions.probabilities, batch[t].search_stats.search_policy) minimize_with_adam_and_weight_decay(total_loss, learning_rate=self.config. learning_rate, weight_decay=self.config. weight_decay) def export(self) -> Network: return self.network def train_stochastic_muzero(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): learner = StochasticMuZeroLearner(config, replay_buffer) # Export the network so the actors can start generating experience. cacher.save_network(0, learner.export()) for step in range(config.training_steps): # Single learning step. learner.learn() if step > 0 and step % config.export_network_every == 0: cacher.save_network(step, learner.export()) ################################## ############ RL loop ############# def launch_stochastic_muzero(config: StochasticMuZeroConfig): """Full RL loop for stochastic MuZero.""" replay_buffer = ReplayBuffer(config) cacher = NetworkCacher() # Launch a learner job. launch_job(lambda: train_stochastic_muzero(config, cacher, replay_buffer)) # Launch the actors. for _ in range(config.num_actors): launch_job(lambda: run_selfplay(config, cacher, replay_buffer)) # Stubs to make the typechecker happy. 
def softmax_sample(distribution, temperature: float): return 0, 0 def compute_td_target(td_steps, td_lambda, trajectory): """Computes the TD lambda targets given a trajectory for the 0th element. Args: td_steps: The number n of the n-step returns. td_lambda: The lambda in TD(lambda). trajectory: A sequence of states. Returns: The n-step return. """ return 0.0 def minimize_with_sgd(loss, learning_rate): """Minimizes the loss using SGD.""" def minimize_with_adam_and_weight_decay(loss, learning_rate, weight_decay ): """Minimizes the loss using Adam with weight decay.""" def launch_job(f): """Launches a job to run remotely.""" return f()
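As a usage illustration only (this snippet is not part of the pseudocode above), the pieces defined above would be combined as follows to start a run:

# Illustrative entry point: build one of the configs defined above and start
# the learner and self-play actor jobs.
if __name__ == "__main__":
    config = backgammon_config()   # or twentyfortyeight_config()
    launch_stochastic_muzero(config)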
1. What is the main contribution of the paper, and how does it extend previous work in MDPs with stochasticity? 2. How effective is the proposed approach in solving stochastic environments, and what advantages does it offer over other methods? 3. Are there any potential areas for improvement or questions regarding the model's training details, notations, or explanations? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The authors extend MuZero to MDPs with stochasticity by adding afterstates to the tree and using a VQ-VAE model. They show that this enables them to solve stochastic environments where MuZero fails. Review Potential Areas of Improvement / Questions: One thing I would have liked to see, possibly in the appendix, is confirmation that this model trained on Go learned to become fully, or almost fully, deterministic. It's not essential since it clearly works, but I think it'd be interesting! I suspect it doesn't completely do this, as this is the only explanation I am able to come up with for why you need a larger sampling budget for Stochastic MuZero. To the former point, why do you need a larger sampling budget for Stochastic MuZero in the Go games? Would this model also then be applicable to multi-player, imperfect-information games? This also does not affect my score, nor am I asking for additional experiments on this; I just think such an experiment would be interesting. Is the notion of afterstates strictly needed? Why is it advantageous to explicitly split the deterministic and stochastic components instead of having each node be stochastic? It would be good to include the hyperparameter grids that were searched over to get the results in the appendix, or if no search was done, to say so. The robustness section is nice, but it's valuable to know how much tuning was involved. Clarity: In the "Model" section, I might have missed it, but it looks like l^p is not defined? Same for l^v. Notational question: why is c_{t+k+1} the only thing where the k index is a subscript rather than a superscript? In the section "chance outcomes" there seems to be something wrong with the sentence "By using a fixed codebook of one hot vectors, we can simplify the equations of the VQ-VAE 3". I may have missed it, but in case I didn't, the appendix should include an exact description of the training details of the VQ-VAE, including when different components are frozen, the size of the codebook, etc. Unless I missed something, the paper is not reproducible as is. After equation (5), I am not totally clear what is meant by the expression σ_t^k, c_{t+k+1} ∼ σ_t^k. I assume you mean to say that chance outcomes are drawn from sigma, but the construction of the sentence makes it read like you are drawing σ_t^k from σ_t^k. I think it would be worthwhile, at least in the appendix, to be clearer about the process by which the chance variables are actually sampled. The description in the "chance outcomes" section, "The resulting encoder can also be viewed as a stochastic it function of the observation which makes use of the Gumbel softmax reparameterization trick (Jang et al., 2016) with zero temperature during the forward pass and a straight through estimator during the backward", is a little unclear. Given that the VQ-VAE model is a key contribution, I really do think a clearer explanation of how the "novel" variant of VQ-VAE works could be given. I don't have a substantive solution; I just want the authors to be aware that this section was slightly confusing to read. In particular, it might be worth expanding on how the deterministic codes let you still acquire stochasticity. Is l^σ defined anywhere? I think in Fig. 1B you want the sampling symbol rather than the approximately-equal symbol?
ICLR
Title Planning in Stochastic Environments with a Learned Model Abstract Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined valueequivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021). Most tree search methods assume that the agent has access to a perfect simulator of the environment, whereas real-world environments are typically unknown. Model-based reinforcement learning algorithms combine a model-learning component, which estimates the dynamics of the environment, with a planning component, using the learned model as a simulator. However, learning a model in isolation from its use during planning has proven to be problematic in complex environments (van Hasselt et al., 2019). Instead, value-equivalent modellearning methods (Silver et al., 2017; Farahmand et al., 2017; Oh et al., 2017; Grimm et al., 2020) identify a model that reconstructs only those quantities required for planning. The most successful method, MuZero (Schrittwieser et al., 2020) learns a model that reconstructs reward, value and policy, and uses this model to perform a powerful Monte Carlo tree search. MuZero achieved superhuman results in Go, chess, shogi and Atari without any prior knowledge of the rules, and has also achieved state-of-the-art performance in large and continuous action spaces (Hubert et al., 2021) and offline reinforcement learning (Schrittwieser et al., 2021). However, value equivalent methods such as MuZero have in practice been limited to a deterministic class of models, which severely limits their applicability. Many environments are inherently stochastic and may be poorly approximated by a deterministic model. Partially observed environments may also be perceived by the agent as stochastic, whenever aliased states cannot be disambiguated. 
Similarly, large and complex environments may appear stochastic to a small agent with finite capacity. In this paper we introduce the first empirically effective approach for handling stochasticity in value equivalent model-learning and planning. The model is factored to first transition deterministically from state to an afterstate, and then to branch stochastically from the afterstate to the next state. This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively, and a stochastic planning method is applied to the model. We apply these ideas to MuZero, using a discrete generative network to represent the model, and modifying the Monte Carlo tree search to effectively use the factored model. We apply our method, Stochastic MuZero, to several environments in which handling stochasticity is important. First, we consider the popular stochastic puzzle game 2048, in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge. In our experiments, Stochastic MuZero achieved better results without any domain knowledge. Second, we consider the classic stochastic two-player game of backgammon, in which near-optimal play has been achieved using a perfect simulator. Stochastic MuZero matches this performance without any prior knowledge of the game rules. Finally, we evaluated our method in the deterministic board game of Go. There our method matched the performance of MuZero, demonstrating that Stochastic MuZero extends MuZero without sacrificing performance. 2 RELATED WORK Observation models (Oh et al., 2015; Chiappa et al., 2017; Łukasz Kaiser et al., 2020) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions. Subsequently, these models can be combined with a model-free learning rule in a Dyna fashion (Sutton, 1991). However, modeling high dimensional image observations can be computationally prohibitive, prone to high error accumulation as the model is unrolled for multiple steps, and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand. These issues make such models unconducive for planning. Finally, van Hasselt et al. (2019) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer. Latent models (Schrittwieser et al., 2020; Oh et al., 2017; Hafner et al., 2021; Henaff et al., 2017) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states. In this framework, the model is conditioned on the current observation and future actions and is unrolled for k steps. Subsequently, it is trained to make predictions about rewards, values, policies or observations at each timestep based on the current latent state. The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories. Recently, MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains (Hubert et al., 2021) while using less data (Schrittwieser et al., 2021). However, most approaches, including MuZero, use a deterministic function to model the environment dynamics, which limits their applicability to deterministic or weakly stochastic environments. Stochastic latent models are stochastic models of the environment dynamics that operate on latent states. 
In (Hafner et al., 2021) the authors propose a recurrent state-space model which consists of three main modules, a recurrent module which generates the deterministic recurrent state ht, a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior, and a transition predictor which depends only on ht and acts as the prior of the model. By combining the deterministic and stochastic states ht and st the model is trained to predict the current observation ot, the transition reward rt and the discount dt. The next deterministic recurrent state is generated using ht, st and action at. The stochastic states st are modeled as multidimensional multinomial variables. The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent. The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero which combines its learned model with planning. In (Ozair et al., 2021) the authors learn a stochastic transition model using a VQ-VAE generative network (van den Oord et al., 2017) and subsequently combine it with MCTS. They show that their method can match the performance of MuZero in chess, while viewing the problem as a singleplayer task and implicitly learning to model the behaviour of the opponent. Despite its promise their approach was only applied in a supervised setting using expert data, and did not address the challenges of learning a stochastic model in the reinforcement learning setting. Moreover, the learned model was trained to explicitly predict the observation at every step, which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations. Finally, the authors used a two stage training process: first, a model learns latent representations of the observations, then these representations are used to learn a transition model. This makes it hard to apply this approach in the reinforcement learning setting. 3 BACKGROUND 3.1 MuZero MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm. The model is conditioned on the history of observations o≤t at timestep t and a sequence of future actions at:t+K , and it is trained to predict the search policies πt:t+K , values vπt:t+K and intermediate rewards rt:t+K at each future timestep. MuZero uses deterministic functions for its model, and thus it implicitly assumes that the underlying environment dynamics are also deterministic. MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator. Model MuZero’s learned model consists of 3 functions: a representation function h, a dynamics function g and a prediction function f . The representation function maps the current history of observations o≤t into a latent state s0t . The dynamics function g receives the previous latent state s k t and combines it with an action at+k to produce the next latent state sk+1t and the reward r k t . Finally, the prediction function f receives each latent state skt as an input and computes the policy p k t and value vkt . Given a sequence of policy πt:T , value zt:T , and reward ut:T targets, the model is trained to minimize the loss shown in 1. 
$\mathcal{L}^{\text{MuZero}} = \sum_{k=0}^{K} l^p(\pi_{t+k}, p^k_t) + \sum_{k=0}^{K} l^v(z_{t+k}, v^k_t) + \sum_{k=1}^{K} l^r(u_{t+k}, r^k_t)$ (1) The policy targets $\pi_{t+k}$ correspond to the MCTS policy that was generated when searching from observation $o_{\leq t+k}$. The value targets $z_{t+k}$ are computed using n-step returns (Sutton & Barto, 2018). Finally, the reward targets $u_{t+k}$ correspond to the real instantaneous rewards observed when this sequence was generated. Search MuZero uses a variant of the tree-based MCTS algorithm first proposed in (Silver et al., 2018). The tree is constructed recursively through a number of simulations. Each simulation consists of 3 phases: selection, expansion and backpropagation. During the selection phase the tree is traversed starting from the root node until a leaf edge is reached. At each internal node $s$ the algorithm selects the action $a$ which maximizes the upper confidence bound proposed in (Silver et al., 2016) and shown in equation 2. $a = \arg\max_a \left[ Q(s, a) + P(a \mid s) \cdot \frac{\sqrt{1 + \sum_b N(s, b)}}{1 + N(s, a)} \left( \alpha_1 + \log\left(\frac{\sum_b N(s, b) + \alpha_2 + 1}{\alpha_2}\right) \right) \right]$ (2) Here, $Q(s, a)$ is the value estimate for action $a$, $N(s, a)$ the visit count, $P(a \mid s)$ the prior probability of selecting action $a$, and $\alpha_1, \alpha_2$ are constants which control the relative importance of the $Q(s, \cdot)$ estimates and prior probabilities $P(\cdot \mid s)$. In the next phase, expansion, the leaf edge is expanded by querying the MuZero model and a new node is added to the tree. Finally, during the backpropagation phase the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate. 3.2 VECTOR QUANTISED VARIATIONAL AUTOENCODER The Vector Quantised Variational AutoEncoder (VQ-VAE, van den Oord et al. (2017)) is a generative modeling technique which uses four key components: an encoder neural network $e$, a decoder neural network $d$, a vector quantisation layer $vq$, and an autoregressive model $m$. Given an input $x_t$, the encoder produces an embedding $c^e_t = e(x_t)$. The quantisation layer comprises a set of $M$ codes $\{c^i\}_{i=0}^{M}$, called the codebook, and quantises the encoder's output embedding $c^e_t$ by returning the nearest code $c_t = c^{k_t}$ along with its index $k_t = \arg\min_i \|c^i - c^e_t\|$. Additionally, in the backwards pass, this quantisation is treated as an identity function, referred to as straight-through gradient estimation (Bengio et al., 2013). The decoder produces a reconstruction of the input $\hat{x}_t = d(c_t)$. The autoregressive model predicts a distribution $p(k_t \mid c_{<t}) = m(c_{<t})$ over the code index at time $t$ using the quantised embeddings $c_{<t}$ of the previous timesteps. The VQ-VAE equations are shown in Equation 3. Encoder $c^e_t = e(x_t)$, Quantisation $c_t, k_t = vq(c^e_t)$, Decoder $\hat{x}_t = d(c_t)$, Model $p(k_t \mid c_{<t}) = m(c_{<t})$ (3) Typically, the encoder, decoder, and codebook are trained first and then frozen to train the autoregressive model in an additional second stage. The total loss for the VQ-VAE is $\mathcal{L}^{\text{vqvae}}_{\phi} = \sum_{t=0}^{N-1} \big[ \underbrace{\|\hat{x}_t - x_t\|}_{\text{reconstruction}} + \beta \underbrace{\|c_t - c^e_t\|^2}_{\text{commitment}} - \gamma \underbrace{\log p(k_t \mid c_{<t})}_{\text{second stage}} \big]$ (4) 4 Stochastic MuZero In this section we present our novel algorithm, Stochastic MuZero. Our approach combines a learned stochastic transition model of the environment dynamics with a variant of Monte Carlo tree search (MCTS). First, we describe the new model and subsequently how it is combined with MCTS for planning. 4.1 STOCHASTIC MODEL Afterstates We consider the problem of modeling the dynamics of a stochastic environment.
Similarly to MuZero, the model receives an initial observation $o_{\leq t}$ at time step $t$ and a sequence of actions $a_{t:t+K}$, and needs to make predictions about the future values, policies and rewards. In contrast to MuZero, which only considers latent states that correspond to real states of the environment, Stochastic MuZero makes use of the notion of afterstates (Sutton & Barto, 2018) to capture the stochastic dynamics. An afterstate $as_t$ is the hypothetical state of the environment after an action is applied but before the environment has transitioned to a true state: $s_t \xrightarrow{a_t} as_t \rightsquigarrow s_{t+1}$, where the wavy arrow denotes the stochastic chance transition. By using afterstates we can separate the effect of applying an action to the environment from that of the chance transition given an action. For example, in backgammon the afterstate corresponds to the board state after one player has played their action but before the other player has had the chance to roll the dice. It is also possible to define the value of an afterstate as $V(as_t) = Q(s_t, a_t)$ and the transition probabilities of the environment dynamics as $\Pr(s_{t+1} \mid as_t) = \Pr(s_{t+1} \mid s_t, a_t)$. An afterstate can lead to multiple states based on a chance event. In our work we assume that there is a finite number $M$ of possible states that the environment can transition to, given an afterstate, and this way we can associate each transition with a chance outcome $c^i_t$. An example of a chance outcome could be the result of the dice in a game of backgammon. By defining afterstates $as_t$ and chance outcomes $c_t$, we can model a chance transition using a deterministic model $s_{t+1}, r_{t+1} = M(as_t, c_t)$ and a distribution $\Pr(s_{t+1} \mid as_t) = \Pr(c_t \mid as_t)$. The task of learning a stochastic model is then reduced to the problem of learning afterstates $as$ and chance outcomes $c$. Model The stochastic model of Stochastic MuZero consists of 5 functions: a representation function $h$ which maps the current observation $o_{\leq t}$ to a latent state $s^0_t$; an afterstate dynamics function $\phi$ which, given a state $s^k_t$ and an action $a_{t+k}$, produces the next latent afterstate $as^k_t$; a dynamics function $g$ which, given an afterstate $as^k_t$ and a chance outcome $c_{t+k+1}$, produces the next latent state $s^{k+1}_t$ and a reward prediction $r^{k+1}_t$; a prediction function $f$ which, given a state $s^k_t$, generates the value $v^k_t$ and policy $p^k_t$ predictions; and an afterstate prediction function $\psi$ which, given an afterstate $as^k_t$, generates a value prediction $Q^k_t$ and a distribution $\sigma^k_t = \Pr(c_{t+k+1} \mid as^k_t)$ over possible future chance outcomes $c_{t+k+1}$. The model equations are shown in Equation 5. Representation $s^0_t = h(o_{\leq t})$, Prediction $p^k_t, v^k_t = f(s^k_t)$, Afterstate Dynamics $as^k_t = \phi(s^k_t, a_{t+k})$, Afterstate Prediction $\sigma^k_t, Q^k_t = \psi(as^k_t)$, Dynamics $s^{k+1}_t, r^{k+1}_t = g(as^k_t, c_{t+k+1})$ (5) During inference, given an initial observation $o_{\leq t}$ and a sequence of actions $a_{t:t+K}$, we can generate trajectories from the above model by recurrently unrolling it and by sampling chance outcomes from the distributions $c_{t+k+1} \sim \sigma^k_t$. Chance outcomes Stochastic MuZero models the chance outcomes by using a novel variant of the VQ-VAE method. Specifically, we consider a VQ-VAE with a constant codebook of size $M$. Each entry in the codebook is a fixed one-hot vector of size $M$. By using a fixed codebook of one-hot vectors, we can simplify the VQ-VAE equations shown in Equation 3. In this case, we model the encoder embedding $c^e_t$ as a categorical variable, and selecting the closest code $c_t$ is equivalent to computing the expression $\text{one\_hot}(\arg\max_i(c^{e,i}_t))$.
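As an illustration, a minimal JAX sketch of this quantisation step is given below; it assumes the encoder output is a real-valued vector of size M and is not the exact network code used in our experiments.

import jax
import jax.numpy as jnp

def quantise_one_hot(encoder_output: jnp.ndarray) -> jnp.ndarray:
    """Maps the encoder output c^e (shape [M]) to a one-hot chance code c.

    Forward pass: one_hot(argmax_i c^{e,i}).  Backward pass: identity on the
    encoder output (straight-through), so gradients flow back to the encoder.
    """
    num_codes = encoder_output.shape[-1]
    hard_code = jax.nn.one_hot(jnp.argmax(encoder_output, axis=-1), num_codes)
    # Straight-through estimator: use the hard one-hot code in the forward
    # pass while behaving like the identity in the backward pass.
    return encoder_output + jax.lax.stop_gradient(hard_code - encoder_output)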
The resulting encoder can also be viewed as a stochastic function of the observation which makes use of the Gumbel-softmax reparameterization trick (Jang et al., 2016) with zero temperature during the forward pass and a straight-through estimator during the backward pass. There is no explicit decoder in our model, and contrary to previous work (Ozair et al., 2021) we do not make use of a reconstruction loss. Instead, the network is trained end-to-end in a fashion similar to MuZero. In the following section we explain the training procedure in more detail. Model training The stochastic model is unrolled and trained in an end-to-end fashion similar to MuZero. Specifically, given a trajectory of length $K$ with observations $o_{\leq t:t+K}$, actions $a_{t:t+K}$, value targets $z_{t:t+K}$, policy targets $\pi_{t:t+K}$ and rewards $u_{t+1:t+K}$, the model is unrolled for $K$ steps as shown in Figure 1 and is trained to optimize the sum of two losses as shown in Equation 6: a MuZero loss and a chance loss for learning the stochastic dynamics of the model. $\mathcal{L}^{\text{total}} = \mathcal{L}^{\text{MuZero}} + \mathcal{L}^{\text{chance}}$ (6) The MuZero loss is the same as the one described in MuZero (see Equation 1 in Section 3.1). The chance loss is applied to the predictions $Q^k_t$ and $\sigma^k_t$ which correspond to the latent afterstates $as^k_t$. The value $Q^k_t$ is trained to match the value target $z_{t+k}$, and $\sigma^k_t$ is trained towards the one-hot chance code $c_{t+k+1} = \text{one\_hot}(\arg\max_i e^i(o_{\leq t+k+1}))$ produced by the encoder. Finally, following standard VQ-VAE practice, we use a VQ-VAE commitment cost to ensure that the output of the encoder $c^e_{t+k+1} = e(o_{\leq t+k+1})$ is close to the code $c_{t+k+1}$. Equation 7 shows the chance loss used to train the model. $\mathcal{L}^{\text{chance}}_{w} = \sum_{k=0}^{K-1} l^Q(z_{t+k}, Q^k_t) + \sum_{k=0}^{K-1} l^{\sigma}(c_{t+k+1}, \sigma^k_t) + \beta \sum_{k=0}^{K-1} \underbrace{\|c_{t+k+1} - c^e_{t+k+1}\|^2}_{\text{VQ-VAE commitment cost}}$ (7) 4.2 STOCHASTIC SEARCH Stochastic MuZero extends the MCTS algorithm used in MuZero by introducing chance nodes and chance values to the search. In the stochastic instantiation of MCTS, there are two types of nodes: decision and chance (Couetoux, 2013). The chance and decision nodes are interleaved along the depth of the tree, so that the parent of each decision node is a chance node. The root node of the tree is always a decision node. In our approach, each chance node corresponds to a latent afterstate (Section 4.1) and it is expanded by querying the stochastic model, where the parent state and an action are provided as input and the model returns a value for the node and a prior distribution over future codes $\Pr(c \mid as)$. After a chance node is expanded, its value is backpropagated up the tree. Finally, when the node is traversed during the selection phase, a code is selected by sampling this prior distribution; in practice we follow the same quasi-random sampling approach as in Ozair et al. (2021, A.3), where the code is selected using the formula $\arg\max_c \frac{\Pr(c \mid as)}{N(c) + 1}$. In Stochastic MuZero each internal decision node is again expanded by querying the learned model, where the state of the chance parent node and a sampled code $c$ are provided as input, and the model returns a reward, a value and a policy. Similarly to MuZero, the value of the newly added node is backpropagated up the tree, and the pUCT formula (Equation 2) is used to select an edge. The stochastic search used by Stochastic MuZero is shown schematically in Figure 1. 5 EXPERIMENTS We applied our algorithm to a variety of challenging stochastic and deterministic environments. First, we evaluated our approach in the classic game of 2048, a stochastic single-player game.
Subsequently, we considered a two-player, zero-sum stochastic game, Backgammon, which belongs to the same class of board games, such as Go, chess or Shogi, where MuZero excels, but with stochasticity induced by the use of dice. Finally, we evaluated our method in the deterministic game of Go, to measure any performance loss caused by the use of a stochastic model and search in deterministic environments in comparison to MuZero. In each environment we assess our algorithm's ability to learn a transition model and effectively use it during search. To this end, we compare Stochastic MuZero (using a stochastic learned model) to MuZero (using a deterministic learned model), AlphaZero (using a perfect simulator), and a strong baseline method (also using a perfect simulator). In the following sections we present our results for each environment separately. 5.1 2048 The game of 2048 (inspired by the game of Threes!) is a stochastic, single-player, perfect-information puzzle game played on a 4x4 board. The objective of the game is to slide numbered tiles on a grid to combine them and create a tile with the number 2048; one can continue to play the game after reaching the goal, creating tiles with larger numbers. The episode reward is the sum of all created tile numbers. There is a plethora of previous work (Szubert & Jaśkowski, 2014; Yeh et al., 2017; Oka & Matsuzaki, 2016; Rodgers & Levine, 2014; Neller, 2015) on combining reinforcement learning and tree search methods for tackling 2048. Despite its simplicity, model-free approaches have traditionally struggled to achieve high performance, while planning-based approaches have exploited perfect knowledge of the simulator. To date, the best performing agent used the planning-based approach proposed in (Jaśkowski, 2016). This method used an expectimax tree search over a perfect simulator, combined with domain-specific knowledge and a number of novel algorithmic ideas that exploited the structure of this specific problem. In contrast, our method uses a learned model and no prior knowledge about the environment. Figure 2 compares the performance of Stochastic MuZero in 2048 to AlphaZero, MuZero and the state-of-the-art Jaśkowski (2016) agent. Our method outperformed Jaśkowski (2016), while using only a quarter of the training data. Stochastic MuZero also achieved the same performance as AlphaZero (using a perfect simulator), despite learning the model, and performed far better than MuZero (using a deterministic model). 5.2 BACKGAMMON Backgammon is a classic two-player, zero-sum, stochastic board game; it was popularized as a standard testbed for reinforcement learning and artificial intelligence by TD-Gammon (Tesauro, 1995). Here we focus on the single-game setting, where the final score takes the values ±1 for a simple win or loss, ±2 for a gammon and ±3 for a backgammon. In all experiments we compared to GNUbg Grandmaster (Free Software Foundation, 2004), a superhuman-level open-source backgammon player. GNUbg combines a learned value function based on handcrafted features with a specialized min-max tree search using a perfect stochastic simulator. GNUbg Grandmaster uses a 3-ply look-ahead search over a branching factor of 20 legal moves on average and 21 chance transitions.
Stochastic MuZero, using a learned stochastic model of the environment and only 1600 simulations per move, achieved the same playing strength as GNUbg, as shown in Figure 5b. The model learned by Stochastic MuZero is of high quality: it reached the same playing strength as AlphaZero (using a perfect stochastic simulator), and much higher strength than MuZero (using a deterministic learned model). The model also robustly scaled to larger planning budgets (Figure 5c): the performance of Stochastic MuZero improved with increasing number of simulations per move, and ultimately exceeded the playing strength of GNUbg Grandmaster. Given the high dimensionality of the action space in Backgammon (see appendix for details), our Backgammon experiments used the sample-based search introduced by Hubert et al. (2021). 5.3 GO Go is a classic, two player, perfect information, zero-sum board game, that has been studied heavily in the field of artificial intelligence. AlphaZero and subsequently, MuZero have been the only algorithms which have managed to achieve super-human performance, purely through selfplay, in this challenging domain. Since the goal of Stochastic MuZero is to extend the applicability of MuZero to stochastic environments while maintaining the latter’s performance in deterministic environments, we compared the performance of the two algorithms in the game of Go. Figure 4 shows the Elo (Coulom, 2008) achieved by Stochastic MuZero and MuZero during training. Although, Stochastic MuZero requires twice the number of network expansions in comparison to MuZero to achieve the same performance, due to the use of a stochastic MCTS instead of a deterministic one, we ensure that the methods are computationally equivalent by halving the network depth for the chance and dynamic parts of the Stochastic MuZero’s network. 5.4 REPRODUCIBILITY In order to evaluate the robustness of our method in all different environments, we replicated our experiments using nine different initial random seeds (see figure 5.4). We observe that our method is robust to the random initialization and there is minimal variation in its performance between multiple runs. Due to the computational cost of each experiment we used a smaller number of training steps for each experiment. 6 CONCLUSIONS In this work, we proposed a new method for learning a stochastic model of the environment, in a fully online reinforcement learning setting, and showed that the learned model can be effectively combined with planning. Our approach builds on top of MuZero, a model-based reinforcement learning agent that has been widely successful in a range of environments and settings, but its applicability is limited to deterministic or weakly stochastic environments. We have shown that our algorithm, Stochastic MuZero, can overcome the limitations of MuZero, significantly outperforming it in stochastic environments, and it can achieve the same or better performance than AlphaZero which makes use of a perfect simulator for the environment. Finally, we have demonstrated that Stochastic MuZero matches or exceeds the performance of previous methods that use a perfect stochastic simulator, in a pure reinforcement learning setting without using any prior knowledge about the environment. 7 REPRODUCIBILITY STATEMENT In order to ensure the reproducability of our results by the research community, we have included detailed pseudocode, references to all environments and datasets used as well as a detailed description of the hyperparameters used (see Appendix). 
We did not release the full code as it relies on a lot of proprietary internal infrastructure, limiting its usefulness. We also provide a study of the robustness of our method under different random initialization conditions (see 5.4). D BACKGAMMON EXPERIMENTS Backgammon is an ancient two player, zero-sum, perfect information, stochastic board game. The board consists of 24 squares (or points) and each player controls 15 checkers, which can move based on the outcome of a dice roll. The two players move their checkers in opposite directions and their goal is to move all their checkers off the board first. In addition to a simple winning, a player can also score a double ("gammon") or a triple ("backgammon") winning. A "gammon" is achieved when a player bears off all their checkers before their opponent manages to bear off any, while a "backgammon" when the opponent also has checkers left in the player’s home quadrant (farthermost quadrant from the opponent’s perspective). Each player can impede the progress of their opponent through "hitting" the opponent’s checkers or blocking their advancement. A "hit" is achieved when a player’s checker advances to a position with a single opponent’s checker. Then the opponent’s checker needs to reenter the board in the player’s home quadrant and no further moves are allowed to the opponent until that happens. A position is blocked to the opponent when it is occupied by at least two of the player’s checkers. Each player makes moves based on the values yielded by rolling two dice. In the case of "doubles", aka the two dice have the same value, the player can play up to 4 moves. One of the challenges of computer Backgammon is the high branching ratio, since at each ply there are 21 chance outcomes, which yield positions with an average of 20 legal moves each, resulting in a branching ratio of several hundred per ply. In our backgammon experiments, the board was represented using a vector of size 28, with the first 24 positions representing the number of chips for each player in the 24 possible points on the board, and the last four representing the number of hit chips and born off chips for each of the two players. We used positive numbers for the current player’s chips and negative ones for her opponent. An action in our implementation consists of 4 micro-actions, the same as the maximum number of dice a player can play at each turn. Each micro-action encodes the source position of a chip along with the value of the die used. We consider 26 possible source positions, with the 0th position corresponding to a no-op, the 1st to retrieving a chip from the hit pile, and the remaining to selecting a chip in one of the 24 possible points. Each micro-action is encoded as a single integer with micro-action = src · 6 + die. Similarly to the 2048 experiments, the representation, afterstate dynamics, dynamics and encoder functions were implemented using a 10 block ResNet v2 style pre-activation residual tower (He et al., 2016) coupled with Layer Normalisation (Ba et al., 2016) and Rectified Linear Unit (ReLU) activations. Each linear layer has an output size of 256. The action was provided to the afterstate dynamics network as a vector which was the result of the concatenation of the one-hot representation of each micro-action. We used a codebook of size 32 to model the stochasticity in the environment. 
Following the work of (Hubert et al., 2021), we used an autoregressive prediction head to model the network policy, with each step corresponding to a single micro-action. To generate a full action, the network was unrolled for 4 steps. In contrast to the 2048 experiments, the value was represented as a scalar. Similarly to MuZero when applied to board games, we used Monte Carlo returns to compute the value targets zt, and we assumed a discount of 1. We trained the model using an Adam optimizer with weight decay (Loshchilov & Hutter, 2017), with learning rate of 0.0003 and a weight decay of 0.0001, with a batch size of 1024 for a total of 8M steps. In all our experiments we used a replay buffer of 100000 games, and the training trajectories were sampled uniformly. For exploration, we injected dirichlet noise to the prior policy at the root node. However, since the number of legal moves at each position can dramatically change in backgammon, we dynamically adapted the alpha parameter of the dirichlet noise based on the number of legal moves, with alpha = 1/ √ num_legal_moves. We used a budget of 1600 simulations for each MCTS search. E GO EXPERIMENTS In our Go experiments, we used the same approach as the one proposed in Hubert et al. (2021). The main differences between this setup and the one proposed in the original MuZero Schrittwieser et al. (2020) is the use of n-step bootstrapping with a target network to improve the data efficiency of the algorithm. The MuZero and Stochastic MuZero players were evaluated during training by playing 100 matches with a search budget of 800 simulations for MuZero and 1600 for Stochastic MuZero. In order to ensure that the two methods are computationally equivalent, each of the chance and dynamics networks of Stochastic MuZero has half the depth of the dynamics network used by MuZero. The Elo scale was anchored so that the performance of the final MuZero baseline corresponded to an Elo of 2000. F CHANCE ANALYSIS We investigated the distribution of chance outcomes at each chance node for Stochastic MuZero. We collected a dataset for each game by storing the probability distribution over chance nodes, σkt = Pr(ct+k+1|askt ), for all afterstate prediction network evaluations invoked throughout all searches in 5 episodes. Subsequently, we sorted each chance node distribution and finally, we computed the average distribution, as shown in figure 6 6. We observed that in the case of deterministic environment like Go, the chance distribution collapsed to a single code, while in stochastic environments the model used multiple codes. Furthermore, in Backgammon, the chance distribution had a support of 21 codes with non-negligible probability, which corresponds to the number of distinct rolls of two dice. G COMPUTATIONAL RESOURCES All experiments were run using second generation Google Cloud TPUs (Google, 2018). For Backgammon, we used 1 TPU for training and 16 TPUs for acting, for approximately 27 hours - equivalent to 10 days on a single V100 GPU. In 2048 we used 1 TPU for training and 4 TPUs for acting, for 80 hours per experiment; equivalent to roughly 8 days on a V100. Finally, in Go we used the same setup as in MuZero (Schrittwieser et al., 2020). H IMPLEMENTATION Stochastic MuZero was implemented as an extension to the standard MuZero algorithm, as it was described in (Schrittwieser et al., 2020). 
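For instance, the fully connected pre-activation residual blocks used for the representation, dynamics and encoder functions can be written with the JAX and Haiku libraries (introduced below) roughly as follows. This is a sketch under our own assumptions about the block internals, not the released implementation.

import haiku as hk
import jax
import jax.numpy as jnp


class PreActResBlock(hk.Module):
    """Fully connected pre-activation residual block (ResNet v2 style)."""

    def __init__(self, hidden_size: int = 256, name=None):
        super().__init__(name=name)
        self._hidden_size = hidden_size

    def __call__(self, x):
        h = hk.LayerNorm(axis=-1, create_scale=True, create_offset=True)(x)
        h = jax.nn.relu(h)
        h = hk.Linear(self._hidden_size)(h)
        h = hk.LayerNorm(axis=-1, create_scale=True, create_offset=True)(h)
        h = jax.nn.relu(h)
        h = hk.Linear(self._hidden_size)(h)
        return x + h


def residual_tower(x, num_blocks: int = 10):
    """A 10-block tower, as described in the appendix."""
    for i in range(num_blocks):
        x = PreActResBlock(name=f"block_{i}")(x)
    return x


# Haiku turns the pure function into an (init, apply) pair.
forward = hk.without_apply_rng(hk.transform(residual_tower))
params = forward.init(jax.random.PRNGKey(0), jnp.zeros((1, 256)))
out = forward.apply(params, jnp.zeros((1, 256)))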
We used the JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020) libraries to implement the neural networks and optimization methods described in this paper. Along with this work, we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used for each environment. I PSEUDOCODE For completeness we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used by the agent. """Pseudocode description of the Stochastic MuZero algorithm. This pseudocode was adapted from the original MuZero pseudocode. """ # pylint: disable=unused-argument # pylint: disable=missing-docstring # pylint: disable=g-explicit-length-test import abc import math from typing import Any, Dict, Callable, List, NamedTuple, Tuple, Union, Optional, Sequence import dataclasses import numpy as np MAXIMUM_FLOAT_VALUE = float(’inf’) ######################################## ####### Environment interface ########## # An action to apply to the environment. # It can a single integer or a list of micro-actions for backgammon. Action = Any # The current player to play. Player = int class Environment: """Implements the rules of the environment.""" def apply(self, action: Action): """Applies an action or a chance outcome to the environment.""" def observation(self): """Returns the observation of the environment to feed to the network. """ def is_terminal(self) -> bool: """Returns true if the environment is in a terminal state.""" return False def legal_actions(self) -> Sequence[Action]: """Returns the legal actions for the current state.""" return [] def reward(self, player: Player) -> float: """Returns the last reward for the player.""" return 0.0 def to_play(self) -> Player: """Returns the current player to play.""" return 0 ########################## ####### Helpers ########## class KnownBounds(NamedTuple): min: float max: float class MinMaxStats(object): """A class that holds the min-max values of the tree.""" def __init__(self, known_bounds: Optional[KnownBounds]): self.maximum = known_bounds.max if known_bounds else - MAXIMUM_FLOAT_VALUE self.minimum = known_bounds.min if known_bounds else MAXIMUM_FLOAT_VALUE def update(self, value: float): self.maximum = max(self.maximum, value) self.minimum = min(self.minimum, value) def normalize(self, value: float) -> float: if self.maximum > self.minimum: # We normalize only when we have set the maximum and minimum values . return (value - self.minimum) / (self.maximum - self.minimum) return value # A chance outcome. Outcome = Any # An object that holds an action or a chance outcome. ActionOrOutcome = Union[Action, Outcome] LatentState = List[float] AfterState = List[float] class NetworkOutput(NamedTuple): value: float probabilities: Dict[ActionOrOutcome, float] reward: Optional[float] = 0.0 class Network: """An instance of the network used by stochastic MuZero.""" def representation(self, observation) -> LatentState: """Representation function maps from observation to latent state.""" return [] def predictions(self, state: LatentState) -> NetworkOutput: """Returns the network predictions for a latent state.""" return NetworkOutput(0, {}, 0) def afterstate_dynamics(self, state: LatentState, action: Action) -> AfterState: """Implements the dynamics from latent state and action to afterstate .""" return [] def afterstate_predictions(self, state: AfterState) -> NetworkOutput: """Returns the network predictions for an afterstate.""" # No reward for afterstate transitions. 
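    # For an afterstate, `probabilities` is the distribution sigma over the
    # chance codebook, i.e. Pr(chance outcome | afterstate), and `value` plays
    # the role of a Q-estimate Q(s, a) for the action that produced this
    # afterstate (Section 4.1).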
return NetworkOutput(0, {}) def dynamics(self, state: AfterState, action: Outcome) -> LatentState: """Implements the dynamics from afterstate and chance outcome to state.""" return [] def encoder(self, observation) -> Outcome: """An encoder maps an observation to an outcome.""" class NetworkCacher: """An object to share the network between the self-play and training jobs.""" def __init__(self): self._networks = {} def save_network(self, step: int, network: Network): self._networks[step] = network def load_network(self) -> Tuple[int, Network]: training_step = max(self._networks.keys()) return training_step, self._networks[training_step] # Takes the training step and returns the temperature of the softmax policy. VisitSoftmaxTemperatureFn = Callable[[int], float] # Returns an instance of the environment. EnvironmentFactory = Callable[[], Environment] # The factory for the network. NetworkFactory = Callable[[], Network] @dataclasses.dataclass class StochasticMuZeroConfig: # A factory for the environment. environment_factory: EnvironmentFactory network_factory: NetworkFactory # Self-Play num_actors: int visit_softmax_temperature_fn: VisitSoftmaxTemperatureFn num_simulations: int discount: float # Root prior exploration noise. root_dirichlet_alpha: float root_dirichlet_fraction: float root_dirichlet_adaptive: bool # UCB formula pb_c_base: float = 19652 pb_c_init: float = 1.25 # If we already have some information about which values occur in the # environment, we can use them to initialize the rescaling. # This is not strictly necessary, but establishes identical behaviour to # AlphaZero in board games. known_bounds: Optional[KnownBounds] = None # Replay buffer. num_trajectories_in_buffer: int = int(1e6) batch_size: int = int(128) num_unroll_steps: int = 5 td_steps: int = 6 td_lambda: float = 1.0 # Alpha and beta parameters for prioritization. # By default they are set to 0 which means uniform sampling. priority_alpha: float = 0.0 priority_beta: float = 0.0 # Training training_steps: int = int(1e6) export_network_every: int = int(1e3) learning_rate: float = 3e-4 weight_decay: float = 1e-4 # The number of chance codes (codebook size). # We use a codebook of size 32 for all our experiments. codebook_size: int = 32 ################################## ## Environment specific configs ## def twentyfortyeight_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an implementation of 2048. return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: if train_steps < 1e5: return 1.0 elif train_steps < 2e5: return 0.5 elif train_steps < 3e5: return 0.1 else: # Greedy selection. return 0.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature=visit_softmax_temperature, num_simulations=100, discount=0.999, root_dirichlet_alpha=0.3, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=False, num_trajectories_in_buffer=int(125e3), td_steps=10, td_lambda=0.5, priority_alpha=1.0, priority_beta=1.0, training_steps=int(20e6), batch_size=1024, weight_decay=0.0) def backgammon_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an backgammon. We consider single games without a doubling cube. 
return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: return 1.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature_fn=visit_softmax_temperature, num_simulations=1600, discount=1.0, # Unused, we use adaptive dirichlet for backgammon. root_dirichlet_alpha=-1.0, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=True, # Max value is 3 for backgammon. known_bounds=KnownBounds(min=-3, max=3), # 1e5 full episodes stored. num_trajectories_in_buffer=int(1e5), # We use monte carlo returns. td_steps=int(1e3), training_steps=int(8e6), batch_size=1024, learning_rate=3e-4, weight_decay=1e-4) ################################## ############ Replay ############## class SearchStats(NamedTuple): search_policy: Dict[Action, int] search_value: float class State(NamedTuple): """Data for a single state.""" observation: List[float] reward: float discount: float player: Player action: Action search_stats: SearchStats Trajectory = Sequence[State] class ReplayBuffer: """A replay buffer to hold the experience generated by the selfplay.""" def __init__(self, config: StochasticMuZeroConfig): self.config = config self.data = [] def save(self, seq: Trajectory): if len(self.data) > self.config.num_trajectories_in_buffer: # Remove the oldest sequence from the buffer. self.data.pop(0) self.data.append(seq) def sample_trajectory(self) -> Trajectory: """Samples a trajectory uniformly or using prioritization.""" return self.data[0] def sample_index(self, seq: Trajectory) -> int: """Samples an index in the trajectory uniformly or using prioritization.""" return 0 def sample_element(self) -> Trajectory: """Samples a single element from the buffer.""" # Sample a trajectory. trajectory = self.sample_trajectory() state_idx = self.sample_index(trajectory) limit = max([self.config.num_unroll_steps, self.config.td_steps]) # Returns a trajectory of experiment. return trajectory[state_idx:state_idx + limit] def sample(self) -> Sequence[Trajectory]: """Samples a training batch.""" return [self.sample_element() for _ in range(self.config.batch_size)] ################################## ############ Search ############## class ActionOutcomeHistory: """Simple history container used inside the search. Only used to keep track of the actions and chance outcomes executed. """ def __init__(self, player: Player, history: Optional[List[ActionOrOutcome]] = None): self.initial_player = player self.history = list(history or []) def clone(self): return ActionOutcomeHistory(self.initial_player, self.history) def add_action_or_outcome(self, action_or_outcome: ActionOrOutcome): self.history.append(action_or_outcome) def last_action_or_outcome(self) -> ActionOrOutcome: return self.history[-1] def to_play(self) -> Player: # Returns the next player to play based on the initial player and the # history of actions and outcomes. For example for backgammon the two # players alternate, while for 2048 it is always the same player. 
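    # Note: this stub always returns player 0; a full implementation would
    # alternate players after each complete action in two-player games such
    # as backgammon.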
return 0


class Node(object):
  """A Node in the MCTS search tree."""

  def __init__(self, prior: float, is_chance: bool = False):
    self.visit_count = 0
    self.to_play = -1
    self.prior = prior
    self.value_sum = 0
    self.children = {}
    self.state = None
    self.is_chance = is_chance
    self.reward = 0

  def expanded(self) -> bool:
    return len(self.children) > 0

  def value(self) -> float:
    if self.visit_count == 0:
      return 0
    return self.value_sum / self.visit_count


# Core Monte Carlo Tree Search algorithm.
# To decide on an action, we run N simulations, always starting at the root of
# the search tree and traversing the tree according to the UCB formula until we
# reach a leaf node.
def run_mcts(config: StochasticMuZeroConfig,
             root: Node,
             action_outcome_history: ActionOutcomeHistory,
             network: Network,
             min_max_stats: MinMaxStats):
  for _ in range(config.num_simulations):
    history = action_outcome_history.clone()
    node = root
    search_path = [node]

    while node.expanded():
      action_or_outcome, node = select_child(config, node, min_max_stats)
      history.add_action_or_outcome(action_or_outcome)
      search_path.append(node)

    # Inside the search tree we use the dynamics function to obtain the next
    # hidden state given an action and the previous hidden state.
    parent = search_path[-2]
    if parent.is_chance:
      # The parent is a chance node, afterstate to latent state transition.
      # The last action or outcome is a chance outcome.
      child_state = network.dynamics(parent.state,
                                     history.last_action_or_outcome())
      network_output = network.predictions(child_state)
      # This child is a decision node.
      is_child_chance = False
    else:
      # The parent is a decision node, latent state to afterstate transition.
      # The last action or outcome is an action.
      child_state = network.afterstate_dynamics(
          parent.state, history.last_action_or_outcome())
      network_output = network.afterstate_predictions(child_state)
      # The child is a chance node.
      is_child_chance = True

    # Expand the node.
    expand_node(node, child_state, network_output, history.to_play(),
                is_child_chance)

    # Backpropagate the value up the tree.
    backpropagate(search_path, network_output.value, history.to_play(),
                  config.discount, min_max_stats)


# Select the child with the highest UCB score.
def select_child(config: StochasticMuZeroConfig, node: Node,
                 min_max_stats: MinMaxStats):
  if node.is_chance:
    # If the node is chance we sample from the prior.
    outcomes, probs = zip(*[(o, n.prior) for o, n in node.children.items()])
    outcome = np.random.choice(outcomes, p=probs)
    return outcome, node.children[outcome]

  # For decision nodes we use the pUCT formula.
  _, action, child = max(
      (ucb_score(config, node, child, min_max_stats), action, child)
      for action, child in node.children.items())
  return action, child


# The score for a node is based on its value, plus an exploration bonus based
# on the prior.
def ucb_score(config: StochasticMuZeroConfig, parent: Node, child: Node,
              min_max_stats: MinMaxStats) -> float:
  pb_c = math.log((parent.visit_count + config.pb_c_base + 1) /
                  config.pb_c_base) + config.pb_c_init
  pb_c *= math.sqrt(parent.visit_count) / (child.visit_count + 1)

  prior_score = pb_c * child.prior
  if child.visit_count > 0:
    value_score = min_max_stats.normalize(child.reward +
                                          config.discount * child.value())
  else:
    value_score = 0
  return prior_score + value_score


# We expand a node using the value, reward and policy prediction obtained from
# the neural network.
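# For a decision node the children are actions and their priors come from the
# policy head; for a chance node the children are codebook entries and their
# priors come from the sigma distribution of the afterstate prediction head
# (Section 4.1).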
def expand_node(node: Node, state: Union[LatentState, AfterState], network_output: NetworkOutput, player: Player, is_chance: bool): node.to_play = player node.state = state node.is_chance = is_chance node.reward = network_output.reward for action, prob in network_output.probabilities.items(): node.children[action] = Node(prob) # At the end of a simulation, we propagate the evaluation all the way up the # tree to the root. def backpropagate(search_path: List[Node], value: float, to_play: Player, discount: float, min_max_stats: MinMaxStats): for node in reversed(search_path): node.value_sum += value if node.to_play == to_play else -value node.visit_count += 1 min_max_stats.update(node.value()) value = node.reward + discount * value # At the start of each search, we add dirichlet noise to the prior of the root # to encourage the search to explore new actions. def add_exploration_noise(config: StochasticMuZeroConfig, node: Node): actions = list(node.children.keys()) dir_alpha = config.root_dirichlet_alpha if config.root_dirichlet_adaptive: dir_alpha = 1.0 / np.sqrt(len(actions)) noise = np.random.dirichlet([dir_alpha] * len(actions)) frac = config.root_exploration_fraction for a, n in zip(actions, noise): node.children[a].prior = node.children[a].prior * (1 - frac) + n * frac ################################## ############ Self-play ########### class Actor(metaclass=abc.ABCMeta): """An actor to interact with the environment.""" @abc.abstractmethod def reset(self): """Resets the player for a new episode.""" @abc.abstractmethod def select_action(self, env: Environment) -> Action: """Selects an action for the current state of the environment.""" @abc.abstractmethod def stats(self) -> SearchStats: """Returns the stats for the player after it has selected an action. """ class StochasticMuZeroActor(Actor): def __init__(self, config: StochasticMuZeroConfig, cacher: NetworkCacher): self.config = config self.cacher = cacher self.training_step = -1 self.network = None def reset(self): # Read a network from the cacher for the new episode. self.training_step, self.network = self.cacher.load_network() self.root = None def _mask_illegal_actions(self, env: Environment, outputs: NetworkOutput) -> NetworkOutput: """Masks any actions which are illegal at the root.""" # We mask out and keep only the legal actions. masked_policy = {} network_policy = outputs.probabilities norm = 0 for action in env.legal_actions(): if action in network_policy: masked_policy[action] = network_policy[action] else: masked_policy[action] = 0.0 norm += masked_policy[action] # Renormalize the masked policy. masked_policy = {a: v / norm for a, v in masked_policy.items()} return NetworkOutput(value=outputs.value, probabilities=masked_policy ) def _select_action(self, root: Node): """Selects an action given the root node.""" # Get the visit count distribution. actions, visit_counts = zip(*[ (action, node.visit_counts) for action, node in node.children.items() ]) # Temperature temperature = self.config.visit_softmax_temperature_fn(self. training_step) # Compute the search policy. search_policy = [v ** (1. / temperature) for v in visit_counts] norm = sum(search_policy) search_policy = [v / norm for v in search_policy] return np.random.choice(actions, p=search_policy) def select_action(self, env: Environment) -> Action: """Selects an action.""" # New min max stats for the search tree. 
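    # known_bounds is only provided when the value range is known in advance
    # (e.g. [-3, 3] for single games of backgammon); otherwise the bounds are
    # estimated online from the values encountered inside the tree.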
min_max_stats = MinMaxStats(self.config.known_bounds) # At the root of the search tree we use the representation function to # obtain a hidden state given the current observation. root = Node(0) # Provide the history of observations to the representation network to # get the initial latent state. latent_state = self.network.representation(env.observation()) # Compute the predictions. outputs = self.network.predictions(latent_state) # Keep only the legal actions. outputs = self._mask_illegal_actions(env, outputs) # Expand the root node. expand_node(root, latent_state, outputs, env.to_play(), is_chance= False) # Backpropagate the value. backpropagate([root], outputs.value, env.to_play(), self.config.discount, min_max_stats) # We add exploration noise to the root node. add_exploration_noise(self.config, root) # We then run a Monte Carlo Tree Search using only action sequences and the # model learned by the network. run_mcts(self.config, root, ActionOutcomeHistory(env.to_play()), self.network, min_max_stats) # Keep track of the root to return the stats. self.root = root # Return an action. return self._select_action(root) def stats(self) -> SearchStats: """Returns the stats of the latest search.""" if self.root is None: raise ValueError(’No search was executed.’) return SearchStats( search_policy={ action: node.visit_counts for action, node in self.root.children.items() }, search_value=self.root.value()) # Self-play. # Each self-play job is independent of all others; it takes the latest network # snapshot, produces an episode and makes it available to the training job by # writing it to a shared replay buffer. def run_selfplay(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): actor = StochasticMuZeroActor(config, cacher) while True: # Create a new instance of the environment. env = config.environment_factory() # Reset the actor. actor.reset() episode = [] while not env.is_terminal(): action = actor.select_action(env) state = State( observation=env.observation(), reward=env.reward(env.to_play()), discount=config.discount, player=env.to_play(), action=action, search_stats=actor.stats()) episode.append(state) env.apply(action) # Send the episode to the replay. replay_buffer.save(episode) ################################## ############ Training ############ class Learner(metaclass=abc.ABCMeta): """An learner to update the network weights based.""" @abc.abstractmethod def learn(self): """Single training step of the learner.""" @abc.abstractmethod def export(self) -> Network: """Exports the network.""" def policy_loss(predictions, labels): """Minimizes the KL-divergence of the predictions and labels.""" return 0.0 def value_or_reward_loss(prediction, target): """Implements the value or reward loss for Stochastic MuZero. For backgammon this is implemented as an MSE loss of scalars. For 2048, we use the two hot representation proposed in MuZero, and this loss is implemented as a KL divergence between the value and value target representations. For 2048 we also apply a hyperbolic transformation to the target (see paper for more information). Args: prediction: The reward or value output of the network. target: The reward or value target. Returns: The loss to minimize. """ return 0.0 class StochasticMuZeroLearner(Learner): """Implements the learning for Stochastic MuZero.""" def __init__(self, config: StochasticMuZeroConfig, replay_buffer: ReplayBuffer): self.config = config self.replay_buffer = replay_buffer # Instantiate the network. 
self.network = config.network_factory() def transpose_to_time(self, batch): """Transposes the data so the leading dimension is time instead of batch.""" return batch def learn(self): """Applies a single training step.""" batch = self.replay_buffer.sample() # Transpose batch to make time the leading dimension. batch = self.transpose_to_time(batch) # Compute the initial step loss. latent_state = self.network.representation(batch[0].observation) predictions = self.network.predictions(latent_state) # Computes the td target for the 0th position. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch) # Train the network value towards the td target. total_loss = value_or_reward_loss(predictions.value, value_target) # Train the network policy towards the MCTS policy. total_loss += policy_loss(predictions.probabilities, batch[0].search_stats.search_policy) # Unroll the model for k steps. for t in range(1, self.config.num_unroll_steps + 1): # Condition the afterstate on the previous action. afterstate = self.network.afterstate_dynamics( latent_state, batch[t - 1].action) afterstate_predictions = self.network.afterstate_predictions( afterstate) # Call the encoder on the next observation. # The encoder returns the chance code which is a discrete one hot code. # The gradients flow to the encoder using a straight through estimator. chance_code = self.network.encoder(batch[t].observation) # The afterstate value is trained towards the previous value target # but conditioned on the selected action to obtain a Q-estimate. total_loss += value_or_reward_loss( afterstate_predictions.value, value_target) # The afterstate distribution is trained to predict the chance code # generated by the encoder. total_loss += policy_loss(afterstate_predictions.probabilities, chance_code) # Get the dynamic predictions. latent_state = self.network.dynamics(afterstate, chance_code) predictions = self.network.predictions(latent_state) # Compute the new value target. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch[t:]) # The reward loss for the dynamics network. total_loss += value_or_reward_loss(predictions.reward, batch[t]. reward) total_loss += value_or_reward_loss(predictions.value, value_target) total_loss += policy_loss(predictions.probabilities, batch[t].search_stats.search_policy) minimize_with_adam_and_weight_decay(total_loss, learning_rate=self.config. learning_rate, weight_decay=self.config. weight_decay) def export(self) -> Network: return self.network def train_stochastic_muzero(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): learner = StochasticMuZeroLearner(config, replay_buffer) # Export the network so the actors can start generating experience. cacher.save_network(0, learner.export()) for step in range(config.training_steps): # Single learning step. learner.learn() if step > 0 and step % config.export_network_every == 0: cacher.save_network(step, learner.export()) ################################## ############ RL loop ############# def launch_stochastic_muzero(config: StochasticMuZeroConfig): """Full RL loop for stochastic MuZero.""" replay_buffer = ReplayBuffer(config) cacher = NetworkCacher() # Launch a learner job. launch_job(lambda: train_stochastic_muzero(config, cacher, replay_buffer)) # Launch the actors. for _ in range(config.num_actors): launch_job(lambda: run_selfplay(config, cacher, replay_buffer)) # Stubs to make the typechecker happy. 
def softmax_sample(distribution, temperature: float): return 0, 0 def compute_td_target(td_steps, td_lambda, trajectory): """Computes the TD lambda targets given a trajectory for the 0th element. Args: td_steps: The number n of the n-step returns. td_lambda: The lambda in TD(lambda). trajectory: A sequence of states. Returns: The n-step return. """ return 0.0 def minimize_with_sgd(loss, learning_rate): """Minimizes the loss using SGD.""" def minimize_with_adam_and_weight_decay(loss, learning_rate, weight_decay ): """Minimizes the loss using Adam with weight decay.""" def launch_job(f): """Launches a job to run remotely.""" return f()
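The pseudocode above leaves compute_td_target as a stub. For completeness, the following is one possible self-contained sketch of a TD(λ) value target consistent with the hyperparameters used above (td_steps, td_lambda); it operates on plain lists of rewards and bootstrap values rather than the Trajectory type, and all names other than the stub's are ours.

def compute_td_lambda_target(rewards, bootstrap_values, discount, td_steps,
                             td_lambda):
    """TD(lambda) value target for the state at position 0 of a trajectory.

    Args:
      rewards: rewards[k] is the reward observed after k+1 steps from the
        target state.
      bootstrap_values: bootstrap_values[k] is the value estimate (e.g. the
        stored MCTS search value) of the state reached after k+1 steps; for
        terminal states no entry is provided.
      discount: the discount factor gamma.
      td_steps: maximum number n of steps used for the n-step returns.
      td_lambda: the lambda mixing coefficient; lambda = 1 with a large
        td_steps recovers Monte Carlo returns.

    Returns:
      The scalar TD(lambda) target.
    """
    horizon = min(td_steps, len(rewards))
    if horizon == 0:
        return 0.0

    # n-step returns G^(n) for n = 1..horizon.
    n_step_returns = []
    discounted_reward_sum = 0.0
    for n in range(1, horizon + 1):
        discounted_reward_sum += (discount ** (n - 1)) * rewards[n - 1]
        g_n = discounted_reward_sum
        if n - 1 < len(bootstrap_values):
            g_n += (discount ** n) * bootstrap_values[n - 1]
        n_step_returns.append(g_n)

    # Lambda-weighted mixture: (1 - lambda) * sum_n lambda^(n-1) * G^(n), with
    # the remaining probability mass lambda^(horizon-1) on the last return.
    target = 0.0
    for n in range(1, horizon):
        target += (1 - td_lambda) * (td_lambda ** (n - 1)) * n_step_returns[n - 1]
    target += (td_lambda ** (horizon - 1)) * n_step_returns[-1]
    return target


# Example: a 3-step trajectory with gamma = 1 (board games) and lambda = 0.5.
# G^(1) = 0.2, G^(2) = 0.6, G^(3) = 1.0, so the target is
# 0.5 * 0.2 + 0.25 * 0.6 + 0.25 * 1.0 = 0.5.
target = compute_td_lambda_target(
    rewards=[0.0, 0.0, 1.0], bootstrap_values=[0.2, 0.6],
    discount=1.0, td_steps=10, td_lambda=0.5)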
1. What is the focus of the paper regarding value-equivalent MBRL? 2. What are the strengths and weaknesses of the proposed Stochastic MuZero algorithm? 3. How does the reviewer assess the related work and empirical results presented in the paper? 4. Do you have any concerns or suggestions regarding the experimental methodology and result presentation? 5. Are there any minor issues or suggestions you have regarding the paper's content or format?
Summary Of The Paper Review
Summary Of The Paper This paper aims to extend previous work on value-equivalent MBRL, such as MuZero, to stochastic environments. In contrast to conventional work in MBRL that fit transition models to be consistent with environmental observations, this line of work fits transition models to improve the accuracy / utility of a downstream value / policy. To this end authors consider the MuZero algorithm and advocate for learning a stochastic model with a VQ-VAE, and they modify MCTS so that it can be used with their stochastic model. The authors propose Stochastic MuZero. Their algorithm makes use of a stochastic model to predict future values, policies and rewards. The authors suggest utilizing "afterstates": an imaginary state that is the result of taking an action but it is also before the environment responds with an actual state. In 2048, for example, an afterstate could be the state reached after applying a tile moving action but before a number "2" tile appears in a random place. As illustrated in Figure 1, the stochastic model consists of 5 functions in contrast to 3 functions in MuZero. The notable addition is in incorporating afterstates in these functions which allows for incorporating chance outcomes. There is an afterstate dynamics function that predicts a latent after-state given a state and action. The typical dynamics function would then still predict a next actual state and reward but its input will be a latent afterstate and a chance outcome. There is also an afterstate prediction function for value and a distribution prediction, where the distribution is that of a chance outcome given an afterstate. That distribution could then be used for sampling chance outcomes in inference. To adapt MCTS to this model, search starts from a state and then proceeds to alternate at every level between afterstates and states by using the corresponding dynamics function to reach each type of state. Review I generally liked the ideas in this paper. Though, I believe there is room for improving the related work and empirical results. Related Work There are two references that were not mentioned as part of the related work, but which are important in the line of value-equivalent MBRL: Model-Based Reinforcement Learning with Value-Targeted Regression Value Iteration Networks Experiments To support their claims, the authors chose two environments that exhibit stochasticity: 2048 and Backgammon, and compared the performance of their algorithm against AlphaZero (where a perfect stochastic simulator is used) and MuZero (where a learned deterministic model is used). A similar methodology was used in an additional experiment, using the deterministic, perfect-information game of Go, and here the question was whether their algorithm's learned stochastic model could match the performance of a learned deterministic model. In general, I see no issue with the choice of domains and baselines. My concerns are mostly with the way results are evaluated and reported. For instance, Figures 2, 3, and 4 all seem to represent a single trial. This is unacceptable for experiments with random outcomes, and it considerably weakens support for the paper's main claim of high performance in stochastic settings. I would imagine the authors expect their results to hold on average, given the randomness of a stochastic model. Some attempt was made to characterize the variability of results in Section 5.4. 
Each experiment was repeated three times, under unknown random conditions, for a small fraction of the data shown in figures 2, 3, and 4. There are two issues with this methodology: three samples are insufficient to characterize a distribution's dispersion, and the trials should be run with the same amount of data as the reported results. To approximate the distribution of performance to a reasonable degree of accuracy, I suggest that a minimum of thirty trials be run for the full length of learning. In addition, the authors should describe exactly the sources of randomness their results represent. It is also unclear whether a hyperparameter search took place for the proposed algorithm. While the authors do reference relevant work that has used their domains, a concise description of the games and their sources of stochasticity is missing. This is important to communicate explicitly to the reader, since it is the primary environmental feature that the experiments depend on. This information could be included in the appendix if the paper is tight on space.
Minor points
I didn't find the pseudocode in the appendix very informative. A simple table with all the considered parameters would be sufficient.
ICLR
Title Planning in Stochastic Environments with a Learned Model Abstract Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined valueequivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021). Most tree search methods assume that the agent has access to a perfect simulator of the environment, whereas real-world environments are typically unknown. Model-based reinforcement learning algorithms combine a model-learning component, which estimates the dynamics of the environment, with a planning component, using the learned model as a simulator. However, learning a model in isolation from its use during planning has proven to be problematic in complex environments (van Hasselt et al., 2019). Instead, value-equivalent modellearning methods (Silver et al., 2017; Farahmand et al., 2017; Oh et al., 2017; Grimm et al., 2020) identify a model that reconstructs only those quantities required for planning. The most successful method, MuZero (Schrittwieser et al., 2020) learns a model that reconstructs reward, value and policy, and uses this model to perform a powerful Monte Carlo tree search. MuZero achieved superhuman results in Go, chess, shogi and Atari without any prior knowledge of the rules, and has also achieved state-of-the-art performance in large and continuous action spaces (Hubert et al., 2021) and offline reinforcement learning (Schrittwieser et al., 2021). However, value equivalent methods such as MuZero have in practice been limited to a deterministic class of models, which severely limits their applicability. Many environments are inherently stochastic and may be poorly approximated by a deterministic model. Partially observed environments may also be perceived by the agent as stochastic, whenever aliased states cannot be disambiguated. 
Similarly, large and complex environments may appear stochastic to a small agent with finite capacity. In this paper we introduce the first empirically effective approach for handling stochasticity in value equivalent model-learning and planning. The model is factored to first transition deterministically from state to an afterstate, and then to branch stochastically from the afterstate to the next state. This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively, and a stochastic planning method is applied to the model. We apply these ideas to MuZero, using a discrete generative network to represent the model, and modifying the Monte Carlo tree search to effectively use the factored model. We apply our method, Stochastic MuZero, to several environments in which handling stochasticity is important. First, we consider the popular stochastic puzzle game 2048, in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge. In our experiments, Stochastic MuZero achieved better results without any domain knowledge. Second, we consider the classic stochastic two-player game of backgammon, in which near-optimal play has been achieved using a perfect simulator. Stochastic MuZero matches this performance without any prior knowledge of the game rules. Finally, we evaluated our method in the deterministic board game of Go. There our method matched the performance of MuZero, demonstrating that Stochastic MuZero extends MuZero without sacrificing performance. 2 RELATED WORK Observation models (Oh et al., 2015; Chiappa et al., 2017; Łukasz Kaiser et al., 2020) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions. Subsequently, these models can be combined with a model-free learning rule in a Dyna fashion (Sutton, 1991). However, modeling high dimensional image observations can be computationally prohibitive, prone to high error accumulation as the model is unrolled for multiple steps, and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand. These issues make such models unconducive for planning. Finally, van Hasselt et al. (2019) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer. Latent models (Schrittwieser et al., 2020; Oh et al., 2017; Hafner et al., 2021; Henaff et al., 2017) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states. In this framework, the model is conditioned on the current observation and future actions and is unrolled for k steps. Subsequently, it is trained to make predictions about rewards, values, policies or observations at each timestep based on the current latent state. The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories. Recently, MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains (Hubert et al., 2021) while using less data (Schrittwieser et al., 2021). However, most approaches, including MuZero, use a deterministic function to model the environment dynamics, which limits their applicability to deterministic or weakly stochastic environments. Stochastic latent models are stochastic models of the environment dynamics that operate on latent states. 
In (Hafner et al., 2021) the authors propose a recurrent state-space model which consists of three main modules, a recurrent module which generates the deterministic recurrent state ht, a representation model which combines ht with the current observation xt to generate a distribution over stochastic states st and plays the role of the posterior, and a transition predictor which depends only on ht and acts as the prior of the model. By combining the deterministic and stochastic states ht and st the model is trained to predict the current observation ot, the transition reward rt and the discount dt. The next deterministic recurrent state is generated using ht, st and action at. The stochastic states st are modeled as multidimensional multinomial variables. The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent. The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero which combines its learned model with planning. In (Ozair et al., 2021) the authors learn a stochastic transition model using a VQ-VAE generative network (van den Oord et al., 2017) and subsequently combine it with MCTS. They show that their method can match the performance of MuZero in chess, while viewing the problem as a singleplayer task and implicitly learning to model the behaviour of the opponent. Despite its promise their approach was only applied in a supervised setting using expert data, and did not address the challenges of learning a stochastic model in the reinforcement learning setting. Moreover, the learned model was trained to explicitly predict the observation at every step, which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations. Finally, the authors used a two stage training process: first, a model learns latent representations of the observations, then these representations are used to learn a transition model. This makes it hard to apply this approach in the reinforcement learning setting. 3 BACKGROUND 3.1 MuZero MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm. The model is conditioned on the history of observations o≤t at timestep t and a sequence of future actions at:t+K , and it is trained to predict the search policies πt:t+K , values vπt:t+K and intermediate rewards rt:t+K at each future timestep. MuZero uses deterministic functions for its model, and thus it implicitly assumes that the underlying environment dynamics are also deterministic. MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator. Model MuZero’s learned model consists of 3 functions: a representation function h, a dynamics function g and a prediction function f . The representation function maps the current history of observations o≤t into a latent state s0t . The dynamics function g receives the previous latent state s k t and combines it with an action at+k to produce the next latent state sk+1t and the reward r k t . Finally, the prediction function f receives each latent state skt as an input and computes the policy p k t and value vkt . Given a sequence of policy πt:T , value zt:T , and reward ut:T targets, the model is trained to minimize the loss shown in 1. 
$$\mathcal{L}^{\text{MuZero}} = \sum_{k=0}^{K} l^p(\pi_{t+k}, p^k_t) + \sum_{k=0}^{K} l^v(z_{t+k}, v^k_t) + \sum_{k=1}^{K} l^r(u_{t+k}, r^k_t) \qquad (1)$$

The policy targets $\pi_{t+k}$ correspond to the MCTS policy that was generated when searching from observation $o_{\leq t+k}$. The value targets $z_{t+k}$ are computed using n-step returns (Sutton & Barto, 2018). Finally, the reward targets $u_{t+k}$ correspond to the real instantaneous rewards observed when this sequence was generated.

Search MuZero uses a variant of the MCTS tree-based algorithm first proposed in (Silver et al., 2018). The tree is constructed recursively through a number of simulations. Each simulation consists of 3 phases: selection, expansion and backpropagation. During the selection phase the tree is traversed starting from the root node until a leaf edge is reached. At each internal node $s$ the algorithm selects the action $a$ which maximizes the upper confidence bound proposed in (Silver et al., 2016) and shown in equation 2.

$$a = \arg\max_a \left[ Q(s,a) + P(a \mid s) \cdot \frac{\sqrt{1 + \sum_b N(s,b)}}{1 + N(s,a)} \left( \alpha_1 + \log\left( \frac{\sum_b N(s,b) + \alpha_2 + 1}{\alpha_2} \right) \right) \right] \qquad (2)$$

Here, $Q(s,a)$ is the value estimate for action $a$, $N(s,a)$ the visit count, $P(a \mid s)$ the prior probability of selecting action $a$, and $\alpha_1, \alpha_2$ are constants which control the relative importance of the $Q(s,\cdot)$ estimates and prior probabilities $P(\cdot \mid s)$. In the next phase, expansion, the leaf edge is expanded by querying the MuZero model and a new node is added to the tree. Finally, during the backpropagation phase the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate.

3.2 VECTOR QUANTISED VARIATIONAL AUTOENCODER

Vector Quantised Variational AutoEncoder (VQ-VAE, van den Oord et al. (2017)) is a generative modeling technique which uses four key components: an encoder neural network $e$, a decoder neural network $d$, a vector quantisation layer $vq$, and an autoregressive model $m$. Given an input $x_t$, the encoder produces an embedding $c^e_t = e(x_t)$. The quantisation layer comprises a set of $M$ codes $\{c^i\}_{i=0}^{M}$, called the codebook, and quantises the encoder's output embedding $c^e_t$ by returning the nearest code $c_t = c^{k_t}$ along with its index $k_t = \arg\min_i \| c^i - c^e_t \|$. Additionally, in the backwards pass, this quantisation is treated as an identity function, referred to as straight-through gradient estimation (Bengio et al., 2013). The decoder produces a reconstruction of the input $\hat{x}_t = d(c_t)$. The autoregressive model predicts a distribution $p(k_t \mid c_{<t}) = m(c_{<t})$ over the code index at time $t$ using the quantised embeddings $c_{<t}$ of the previous timesteps. The VQ-VAE equations are shown in Equations 3.

$$\text{Encoder: } c^e_t = e(x_t) \quad \text{Quantisation: } c_t, k_t = vq(c^e_t) \quad \text{Decoder: } \hat{x}_t = d(c_t) \quad \text{Model: } p(k_t \mid c_{<t}) = m(c_{<t}) \qquad (3)$$

Typically, the encoder, decoder, and codebook are trained first and then frozen to train the autoregressive model in an additional second stage. The total loss for the VQ-VAE is

$$\mathcal{L}^{\text{vqvae}}_{\phi} = \sum_{t=0}^{N-1} \Big[ \underbrace{\| \hat{x}_t - x_t \|}_{\text{reconstruction}} + \beta \underbrace{\| c_t - c^e_t \|^2}_{\text{commitment}} - \gamma \underbrace{\log p(k_t \mid c_{<t})}_{\text{second stage}} \Big] \qquad (4)$$

4 Stochastic MuZero

In this section we present our novel algorithm Stochastic MuZero. Our approach combines a learned stochastic transition model of the environment dynamics with a variant of Monte Carlo tree search (MCTS). First, we describe the new model and subsequently how it is combined with MCTS for planning.

4.1 STOCHASTIC MODEL

Afterstates We consider the problem of modeling the dynamics of a stochastic environment.
Similarly to MuZero, the model receives an initial observation $o_{\leq t}$ at time step $t$ and a sequence of actions $a_{t:t+K}$, and needs to make predictions about the future values, policies and rewards. In contrast to MuZero which only considers latent states which correspond to real states of the environment, Stochastic MuZero makes use of the notion of afterstates (Sutton & Barto, 2018) to capture the stochastic dynamics. An afterstate $as_t$ is the hypothetical state of the environment after an action is applied but before the environment has transitioned to a true state:

$$s_t \xrightarrow{\;a_t\;} as_t \rightsquigarrow s_{t+1}$$

By using afterstates we can separate the effect of applying an action to the environment and of the chance transition given an action. For example in backgammon, the afterstate corresponds to the board state after one player has played its action but before the other player had the chance to roll the dice. It is also possible to define the value of an afterstate as $V(as_t) = Q(s_t, a_t)$ and the transition probabilities of the environment dynamics $\Pr(s_{t+1} \mid as_t) = \Pr(s_{t+1} \mid s_t, a_t)$. An afterstate can lead to multiple states based on a chance event. In our work we assume that there is a finite number of possible states $M$ that the environment can transition to, given an afterstate, and this way we can associate each transition with a chance outcome $c^i_t$. An example of a chance outcome could be the result of the dice in a game of backgammon. By defining afterstates $as_t$ and chance outcomes $c_t$, we can model a chance transition using a deterministic model $s_{t+1}, r_{t+1} = M(as_t, c_t)$ and a distribution $\Pr(s_{t+1} \mid as_t) = \Pr(c_t \mid as_t)$. The task of learning a stochastic model is then reduced to the problem of learning afterstates $as$ and chance outcomes $c$.

Model The stochastic model of Stochastic MuZero consists of 5 functions: a representation function $h$ which maps the current observation $o_{\leq t}$ to a latent state $s^0_t$, an afterstate dynamics function $\phi$ which given a state $s^k_t$ and an action $a_{t+k}$ produces the next latent afterstate $as^k_t$, a dynamics function $g$ which given an afterstate $as^k_t$ and a chance outcome $c_{t+k+1}$ produces the next latent state $s^{k+1}_t$ and a reward prediction $r^{k+1}_t$, a prediction function $f$ which given a state $s^k_t$ generates the value $v^k_t$ and policy $p^k_t$ predictions, and an afterstate prediction function $\psi$ which given an afterstate $as^k_t$ generates a value prediction $Q^k_t$ and a distribution $\sigma^k_t = \Pr(c_{t+k+1} \mid as^k_t)$ over possible future chance outcomes $c_{t+k+1}$. The model equations are shown in 5.

$$\begin{aligned}
\text{Representation} \quad & s^0_t = h(o_{\leq t}) & \text{Prediction} \quad & p^k_t, v^k_t = f(s^k_t) \\
\text{Afterstate Dynamics} \quad & as^k_t = \phi(s^k_t, a_{t+k}) & \text{Afterstate Prediction} \quad & \sigma^k_t, Q^k_t = \psi(as^k_t) \\
\text{Dynamics} \quad & s^{k+1}_t, r^{k+1}_t = g(as^k_t, c_{t+k+1}) & &
\end{aligned} \qquad (5)$$

During inference, given an initial observation $o_{\leq t}$ and a sequence of actions $a_{t:t+K}$, we can generate trajectories from the above model by recurrently unrolling it and by sampling chance outcomes from the distributions $c_{t+k+1} \sim \sigma^k_t$.

Chance outcomes Stochastic MuZero models the chance outcomes by using a novel variant of the VQ-VAE method. Specifically, we consider a VQ-VAE with a constant codebook of size $M$. Each entry in the codebook is a fixed one-hot vector of size $M$. By using a fixed codebook of one-hot vectors, we can simplify the VQ-VAE equations (3). In this case, we model the encoder embedding $c^e_t$ as a categorical variable, and selecting the closest code $c_t$ is equivalent to computing the expression $\text{one\_hot}(\arg\max_i(c^{e,i}_t))$.
The resulting encoder can also be viewed as a stochastic function of the observation which makes use of the Gumbel softmax reparameterization trick (Jang et al., 2016) with zero temperature during the forward pass and a straight through estimator during the backward. There is no explicit decoder in our model, and contrary to previous work (Ozair et al., 2021) we do not make use of a reconstruction loss. Instead the network is trained end-to-end in a fashion similar to MuZero. In the following section we explain the training procedure in more detail.

Model training The stochastic model is unrolled and trained in an end-to-end fashion similar to MuZero. Specifically, given a trajectory of length $K$ with observations $o_{\leq t:t+K}$, actions $a_{t:t+K}$, value targets $z_{t:t+K}$, policy targets $\pi_{t:t+K}$ and rewards $u_{t+1:t+K}$, the model is unrolled for $K$ steps as shown in figure 1 and is trained to optimize the sum of two losses as shown in equation 6: a MuZero loss and a chance loss for learning the stochastic dynamics of the model.

$$\mathcal{L}^{\text{total}} = \mathcal{L}^{\text{MuZero}} + \mathcal{L}^{\text{chance}} \qquad (6)$$

The MuZero loss is the same as the one described in MuZero (see equation 3.1). The chance loss is applied to the predictions $Q^k_t$ and $\sigma^k_t$ which correspond to the latent afterstates $as^k_t$. The $Q^k_t$ value is trained to match the value target $z_{t+k}$ and the $\sigma^k_t$ is trained towards the one-hot chance code $c_{t+k+1} = \text{one\_hot}(\arg\max_i e^i(o_{\leq t+k+1}))$ produced by the encoder. Finally, following the standard VQ-VAE practice, we use a VQ-VAE commitment cost to ensure that the output of the encoder $c^e_{t+k+1} = e(o_{\leq t+k+1})$ is close to the code $c_{t+k+1}$. Equation 7 shows the chance loss used to train the model.

$$\mathcal{L}^{\text{chance}}_w = \sum_{k=0}^{K-1} l^Q(z_{t+k}, Q^k_t) + \sum_{k=0}^{K-1} l^\sigma(c_{t+k+1}, \sigma^k_t) + \beta \sum_{k=0}^{K-1} \underbrace{\left\| c_{t+k+1} - c^e_{t+k+1} \right\|^2}_{\text{VQ-VAE commitment cost}} \qquad (7)$$

4.2 STOCHASTIC SEARCH

Stochastic MuZero extends the MCTS algorithm used in MuZero by introducing chance nodes and chance values to the search. In the stochastic instantiation of MCTS, there are two types of nodes: decision and chance (Couetoux, 2013). The chance and decision nodes are interleaved along the depth of the tree, so that the parent of each decision node is a chance node. The root node of the tree is always a decision node. In our approach, each chance node corresponds to a latent afterstate (4.1) and it is expanded by querying the stochastic model, where the parent state and an action are provided as an input and the model returns a value for the node and a prior distribution over future codes $\Pr(c \mid as)$. After a chance node is expanded its value is backpropagated up the tree. Finally, when the node is traversed during the selection phase, a code is selected by sampling the prior distribution 1. In Stochastic MuZero each internal decision node is again expanded by querying the learned model, where the state of the chance parent node and a sampled code $c$ are provided as an input, and the model returns a reward, a value and a policy. Similarly to MuZero the value of the newly added node is backpropagated up the tree, and the pUCT (2) formula is used to select an edge. The stochastic search used by Stochastic MuZero is shown schematically in figure 1.

5 EXPERIMENTS

We applied our algorithm to a variety of challenging stochastic and deterministic environments. First, we evaluated our approach in the classic game of 2048, a stochastic single player game.
Subsequently, we considered a two player zero-sum stochastic game, Backgammon, which belongs to the same 1In practice we follow the same quasi-random sampling approach as in Ozair et al. (2021) (A.3), where the code is selected using the formula argmaxc Pr(c|as) N(c)+1 . class of board games such as Go, chess or Shogi where MuZero excels, but with stochasticity induced by the use of a die. Finally, we evaluated our method in the deterministic game of Go, to measure any performance loss caused by the use of a stochastic model and search in deterministic environments in comparison to MuZero. In each environment we assess our algorithm’s ability to learn a transition model and effectively use it during search. To this end, we compare Stochastic MuZero (using a stochastic learned model) to MuZero (using a deterministic learned model), AlphaZero (using a perfect simulator), and a strong baseline method (also using a perfect simulator). In the following sections we present our results for each environment separately. 5.1 2048 The game of 2048 (inspired by the game of Threes!) is a stochastic, single player, perfect information puzzle game played on a 4x4 board. The objective of the game is to slide numbered tiles on a grid to combine them to create a tile with the number 2048; one can continue to play the game after reaching the goal, creating tiles with larger numbers. The episode reward is the sum of all created tile numbers. There is a plethora of previous work (Szubert & Jaśkowski, 2014; Yeh et al., 2017; Oka & Matsuzaki, 2016; Rodgers & Levine, 2014; Neller, 2015) on combining reinforcement learning and tree search methods for tackling 2048. Despite its simplicity, model-free approaches have traditionally struggled to achieve high performance, while planning-based approaches have exploited perfect knowledge of the simulator. To date, the best performing agent used the planning-based approach proposed in (Jaśkowski, 2016). This method used an expectimax tree search over a perfect simulator, combined with domain-specific knowledge and a number of novel algorithmic ideas that exploited the structure of this specific problem. In contrast our method uses a learned model and no prior knowledge about the environment. Figure 2 compares the performance of Stochastic MuZero in 2048 to AlphaZero, MuZero and the state-ofthe-art Jaskowski 2016 agent. Our method outperformed Jaskowski 2016, while using only a quarter of the training data. Stochastic MuZero also achieved the same performance as AlphaZero (using a perfect simulator), despite learning the model, and performed far better than MuZero (using a deterministic model). 5.2 BACKGAMMON Backgammon is a classic two player, zero-sum, stochastic board game; it was popularized as a standard testbed for reinforcement learning and artificial intelligence by TD-gammon (Tesauro, 1995). Here we focus on the single game setting, where the final score takes the values ±1 for a simple win or loss, ±2 for a gammon and ±3 for a backgammon. In all experiments we compared to GNUbg Grandmaster (Free Software Foundation, 2004), a superhuman-level open-source backgammon player. GNUbg combines a learned value function based on handcrafted features with a specialized min-max tree search using a perfect stochastic simulator. GNUbg Grandmaster uses a 3-ply look-ahead search over a branching factor of 20 legal moves on average and 21 chance transitions. 
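Since each backgammon search must spread its simulations over 21 possible chance outcomes at every chance node, the footnote earlier in this section describes a quasi-random selection rule rather than i.i.d. sampling. A minimal sketch of that rule is shown below (the function name is ours; the pseudocode in Appendix I shows the simpler variant that samples directly from the prior).

import numpy as np


def select_chance_outcome(prior, visit_counts):
    """Quasi-random selection of a chance outcome during search.

    Implements argmax_c prior(c) / (N(c) + 1), so that chance outcomes are
    visited roughly in proportion to their prior probability instead of being
    sampled independently at each simulation.

    Args:
      prior: array of shape (codebook_size,) holding Pr(c | afterstate).
      visit_counts: array of the same shape with the visit count N(c) of each
        child of the chance node.

    Returns:
      The index of the selected chance outcome.
    """
    scores = np.asarray(prior) / (np.asarray(visit_counts) + 1.0)
    return int(np.argmax(scores))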
Stochastic MuZero, using a learned stochastic model of the environment and only 1600 simulations per move, achieved the same playing strength as GNUbg, as shown in Figure 5b. The model learned by Stochastic MuZero is of high quality: it reached the same playing strength as AlphaZero (using a perfect stochastic simulator), and much higher strength than MuZero (using a deterministic learned model). The model also robustly scaled to larger planning budgets (Figure 5c): the performance of Stochastic MuZero improved with increasing number of simulations per move, and ultimately exceeded the playing strength of GNUbg Grandmaster. Given the high dimensionality of the action space in Backgammon (see appendix for details), our Backgammon experiments used the sample-based search introduced by Hubert et al. (2021).

5.3 GO

Go is a classic two-player, perfect information, zero-sum board game that has been studied heavily in the field of artificial intelligence. AlphaZero and, subsequently, MuZero have been the only algorithms to achieve superhuman performance, purely through self-play, in this challenging domain. Since the goal of Stochastic MuZero is to extend the applicability of MuZero to stochastic environments while maintaining the latter's performance in deterministic environments, we compared the performance of the two algorithms in the game of Go. Figure 4 shows the Elo (Coulom, 2008) achieved by Stochastic MuZero and MuZero during training. Although Stochastic MuZero requires twice the number of network expansions compared to MuZero to achieve the same performance, due to the use of a stochastic MCTS instead of a deterministic one, we ensure that the methods are computationally equivalent by halving the network depth for the chance and dynamics parts of Stochastic MuZero's network.

5.4 REPRODUCIBILITY

In order to evaluate the robustness of our method in all different environments, we replicated our experiments using nine different initial random seeds (see figure 5.4). We observe that our method is robust to the random initialization and there is minimal variation in its performance between multiple runs. Due to the computational cost of each experiment, we used a smaller number of training steps per experiment.

6 CONCLUSIONS

In this work, we proposed a new method for learning a stochastic model of the environment, in a fully online reinforcement learning setting, and showed that the learned model can be effectively combined with planning. Our approach builds on top of MuZero, a model-based reinforcement learning agent that has been widely successful in a range of environments and settings, but whose applicability is limited to deterministic or weakly stochastic environments. We have shown that our algorithm, Stochastic MuZero, can overcome the limitations of MuZero, significantly outperforming it in stochastic environments, and that it can achieve the same or better performance than AlphaZero, which makes use of a perfect simulator for the environment. Finally, we have demonstrated that Stochastic MuZero matches or exceeds the performance of previous methods that use a perfect stochastic simulator, in a pure reinforcement learning setting without using any prior knowledge about the environment.

7 REPRODUCIBILITY STATEMENT

In order to ensure the reproducibility of our results by the research community, we have included detailed pseudocode, references to all environments and datasets used, as well as a detailed description of the hyperparameters used (see Appendix).
We did not release the full code as it relies on a lot of proprietary internal infrastructure, limiting its usefulness. We also provide a study of the robustness of our method under different random initialization conditions (see Section 5.4).

D BACKGAMMON EXPERIMENTS

Backgammon is an ancient two-player, zero-sum, perfect information, stochastic board game. The board consists of 24 squares (or points) and each player controls 15 checkers, which can move based on the outcome of a dice roll. The two players move their checkers in opposite directions and their goal is to move all their checkers off the board first. In addition to a simple win, a player can also score a double ("gammon") or a triple ("backgammon") win. A "gammon" is achieved when a player bears off all their checkers before their opponent manages to bear off any, while a "backgammon" is achieved when, in addition, the opponent has checkers left in the player's home quadrant (the farthest quadrant from the opponent's perspective). Each player can impede the progress of their opponent by "hitting" the opponent's checkers or blocking their advancement. A "hit" occurs when a player's checker advances to a point occupied by a single opposing checker. The hit checker must then re-enter the board in the player's home quadrant, and no further moves are allowed to the opponent until that happens. A point is blocked to the opponent when it is occupied by at least two of the player's checkers. Each player makes moves based on the values yielded by rolling two dice. In the case of "doubles", i.e. when the two dice show the same value, the player can play up to 4 moves. One of the challenges of computer Backgammon is the high branching factor, since at each ply there are 21 chance outcomes, which yield positions with an average of 20 legal moves each, resulting in a branching factor of several hundred per ply.

In our backgammon experiments, the board was represented using a vector of size 28, with the first 24 positions representing the number of checkers for each player on the 24 possible points of the board, and the last four representing the number of hit and borne-off checkers for each of the two players. We used positive numbers for the current player's checkers and negative ones for the opponent's. An action in our implementation consists of 4 micro-actions, the same as the maximum number of dice a player can play at each turn. Each micro-action encodes the source position of a checker along with the value of the die used. We consider 26 possible source positions, with the 0th position corresponding to a no-op, the 1st to retrieving a checker from the hit pile, and the remaining to selecting a checker on one of the 24 possible points. Each micro-action is encoded as a single integer with micro-action = src · 6 + die (a concrete sketch of this encoding is given below). Similarly to the 2048 experiments, the representation, afterstate dynamics, dynamics and encoder functions were implemented using a 10-block ResNet v2 style pre-activation residual tower (He et al., 2016) coupled with Layer Normalisation (Ba et al., 2016) and Rectified Linear Unit (ReLU) activations. Each linear layer has an output size of 256. The action was provided to the afterstate dynamics network as a vector formed by concatenating the one-hot representations of the four micro-actions. We used a codebook of size 32 to model the stochasticity in the environment.
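To make the micro-action encoding above concrete, the following is a minimal illustrative sketch. The helper names, the assumption that die values are zero-indexed (0-5 for faces 1-6), and the padding of unused slots with no-ops are our own illustrative choices and are not specified in the paper.

import numpy as np

NUM_SRC = 26   # 0 = no-op, 1 = hit pile, 2-25 = the 24 points on the board
NUM_DIE = 6    # die faces, assumed zero-indexed here (0 -> face 1, ..., 5 -> face 6)

def encode_micro_action(src: int, die: int) -> int:
  """Encodes a (source position, die) pair as a single integer, micro-action = src * 6 + die."""
  assert 0 <= src < NUM_SRC and 0 <= die < NUM_DIE
  return src * NUM_DIE + die

def decode_micro_action(micro_action: int):
  """Inverse of encode_micro_action, returning (src, die)."""
  return micro_action // NUM_DIE, micro_action % NUM_DIE

def action_to_network_input(micro_actions) -> np.ndarray:
  """Concatenates the one-hot representation of each of the 4 micro-actions,
  producing the action vector fed to the afterstate dynamics network."""
  assert len(micro_actions) == 4
  one_hots = []
  for m in micro_actions:
    v = np.zeros(NUM_SRC * NUM_DIE, dtype=np.float32)
    v[m] = 1.0
    one_hots.append(v)
  return np.concatenate(one_hots)  # shape: (4 * 156,) = (624,)

# Example: a turn using source position 5 with die face 3 and source position 11
# with die face 1, padding the two remaining slots with no-ops.
turn = [encode_micro_action(5, 2), encode_micro_action(11, 0),
        encode_micro_action(0, 0), encode_micro_action(0, 0)]
action_vector = action_to_network_input(turn)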
Following the work of Hubert et al. (2021), we used an autoregressive prediction head to model the network policy, with each step corresponding to a single micro-action. To generate a full action, the network was unrolled for 4 steps. In contrast to the 2048 experiments, the value was represented as a scalar. Similarly to MuZero when applied to board games, we used Monte Carlo returns to compute the value targets $z_t$, and we assumed a discount of 1. We trained the model using an Adam optimizer with weight decay (Loshchilov & Hutter, 2017), with a learning rate of 0.0003, a weight decay of 0.0001, and a batch size of 1024 for a total of 8M steps. In all our experiments we used a replay buffer of 100000 games, and the training trajectories were sampled uniformly. For exploration, we injected Dirichlet noise into the prior policy at the root node. However, since the number of legal moves at each position can change dramatically in backgammon, we dynamically adapted the alpha parameter of the Dirichlet noise based on the number of legal moves, with $\alpha = 1/\sqrt{\text{num\_legal\_moves}}$. We used a budget of 1600 simulations for each MCTS search.

E GO EXPERIMENTS

In our Go experiments, we used the same approach as the one proposed in Hubert et al. (2021). The main difference between this setup and the one proposed in the original MuZero (Schrittwieser et al., 2020) is the use of n-step bootstrapping with a target network to improve the data efficiency of the algorithm. The MuZero and Stochastic MuZero players were evaluated during training by playing 100 matches with a search budget of 800 simulations for MuZero and 1600 for Stochastic MuZero. In order to ensure that the two methods are computationally equivalent, each of the chance and dynamics networks of Stochastic MuZero has half the depth of the dynamics network used by MuZero. The Elo scale was anchored so that the performance of the final MuZero baseline corresponded to an Elo of 2000.

F CHANCE ANALYSIS

We investigated the distribution of chance outcomes at each chance node for Stochastic MuZero. We collected a dataset for each game by storing the probability distribution over chance nodes, $\sigma^k_t = \Pr(c_{t+k+1} \mid as^k_t)$, for all afterstate prediction network evaluations invoked throughout all searches in 5 episodes. Subsequently, we sorted each chance node distribution and, finally, we computed the average distribution, as shown in Figure 6. We observed that in the case of a deterministic environment like Go, the chance distribution collapsed to a single code, while in stochastic environments the model used multiple codes. Furthermore, in Backgammon, the chance distribution had a support of 21 codes with non-negligible probability, which corresponds to the number of distinct rolls of two dice.

G COMPUTATIONAL RESOURCES

All experiments were run using second-generation Google Cloud TPUs (Google, 2018). For Backgammon, we used 1 TPU for training and 16 TPUs for acting, for approximately 27 hours - equivalent to 10 days on a single V100 GPU. In 2048 we used 1 TPU for training and 4 TPUs for acting, for 80 hours per experiment; equivalent to roughly 8 days on a V100. Finally, in Go we used the same setup as in MuZero (Schrittwieser et al., 2020).

H IMPLEMENTATION

Stochastic MuZero was implemented as an extension to the standard MuZero algorithm, as described in Schrittwieser et al. (2020).
We used the JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020) libraries to implement the neural networks and optimization methods described in this paper. Along with this work, we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used for each environment. I PSEUDOCODE For completeness we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used by the agent. """Pseudocode description of the Stochastic MuZero algorithm. This pseudocode was adapted from the original MuZero pseudocode. """ # pylint: disable=unused-argument # pylint: disable=missing-docstring # pylint: disable=g-explicit-length-test import abc import math from typing import Any, Dict, Callable, List, NamedTuple, Tuple, Union, Optional, Sequence import dataclasses import numpy as np MAXIMUM_FLOAT_VALUE = float(’inf’) ######################################## ####### Environment interface ########## # An action to apply to the environment. # It can a single integer or a list of micro-actions for backgammon. Action = Any # The current player to play. Player = int class Environment: """Implements the rules of the environment.""" def apply(self, action: Action): """Applies an action or a chance outcome to the environment.""" def observation(self): """Returns the observation of the environment to feed to the network. """ def is_terminal(self) -> bool: """Returns true if the environment is in a terminal state.""" return False def legal_actions(self) -> Sequence[Action]: """Returns the legal actions for the current state.""" return [] def reward(self, player: Player) -> float: """Returns the last reward for the player.""" return 0.0 def to_play(self) -> Player: """Returns the current player to play.""" return 0 ########################## ####### Helpers ########## class KnownBounds(NamedTuple): min: float max: float class MinMaxStats(object): """A class that holds the min-max values of the tree.""" def __init__(self, known_bounds: Optional[KnownBounds]): self.maximum = known_bounds.max if known_bounds else - MAXIMUM_FLOAT_VALUE self.minimum = known_bounds.min if known_bounds else MAXIMUM_FLOAT_VALUE def update(self, value: float): self.maximum = max(self.maximum, value) self.minimum = min(self.minimum, value) def normalize(self, value: float) -> float: if self.maximum > self.minimum: # We normalize only when we have set the maximum and minimum values . return (value - self.minimum) / (self.maximum - self.minimum) return value # A chance outcome. Outcome = Any # An object that holds an action or a chance outcome. ActionOrOutcome = Union[Action, Outcome] LatentState = List[float] AfterState = List[float] class NetworkOutput(NamedTuple): value: float probabilities: Dict[ActionOrOutcome, float] reward: Optional[float] = 0.0 class Network: """An instance of the network used by stochastic MuZero.""" def representation(self, observation) -> LatentState: """Representation function maps from observation to latent state.""" return [] def predictions(self, state: LatentState) -> NetworkOutput: """Returns the network predictions for a latent state.""" return NetworkOutput(0, {}, 0) def afterstate_dynamics(self, state: LatentState, action: Action) -> AfterState: """Implements the dynamics from latent state and action to afterstate .""" return [] def afterstate_predictions(self, state: AfterState) -> NetworkOutput: """Returns the network predictions for an afterstate.""" # No reward for afterstate transitions. 
return NetworkOutput(0, {}) def dynamics(self, state: AfterState, action: Outcome) -> LatentState: """Implements the dynamics from afterstate and chance outcome to state.""" return [] def encoder(self, observation) -> Outcome: """An encoder maps an observation to an outcome.""" class NetworkCacher: """An object to share the network between the self-play and training jobs.""" def __init__(self): self._networks = {} def save_network(self, step: int, network: Network): self._networks[step] = network def load_network(self) -> Tuple[int, Network]: training_step = max(self._networks.keys()) return training_step, self._networks[training_step] # Takes the training step and returns the temperature of the softmax policy. VisitSoftmaxTemperatureFn = Callable[[int], float] # Returns an instance of the environment. EnvironmentFactory = Callable[[], Environment] # The factory for the network. NetworkFactory = Callable[[], Network] @dataclasses.dataclass class StochasticMuZeroConfig: # A factory for the environment. environment_factory: EnvironmentFactory network_factory: NetworkFactory # Self-Play num_actors: int visit_softmax_temperature_fn: VisitSoftmaxTemperatureFn num_simulations: int discount: float # Root prior exploration noise. root_dirichlet_alpha: float root_dirichlet_fraction: float root_dirichlet_adaptive: bool # UCB formula pb_c_base: float = 19652 pb_c_init: float = 1.25 # If we already have some information about which values occur in the # environment, we can use them to initialize the rescaling. # This is not strictly necessary, but establishes identical behaviour to # AlphaZero in board games. known_bounds: Optional[KnownBounds] = None # Replay buffer. num_trajectories_in_buffer: int = int(1e6) batch_size: int = int(128) num_unroll_steps: int = 5 td_steps: int = 6 td_lambda: float = 1.0 # Alpha and beta parameters for prioritization. # By default they are set to 0 which means uniform sampling. priority_alpha: float = 0.0 priority_beta: float = 0.0 # Training training_steps: int = int(1e6) export_network_every: int = int(1e3) learning_rate: float = 3e-4 weight_decay: float = 1e-4 # The number of chance codes (codebook size). # We use a codebook of size 32 for all our experiments. codebook_size: int = 32 ################################## ## Environment specific configs ## def twentyfortyeight_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an implementation of 2048. return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: if train_steps < 1e5: return 1.0 elif train_steps < 2e5: return 0.5 elif train_steps < 3e5: return 0.1 else: # Greedy selection. return 0.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature=visit_softmax_temperature, num_simulations=100, discount=0.999, root_dirichlet_alpha=0.3, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=False, num_trajectories_in_buffer=int(125e3), td_steps=10, td_lambda=0.5, priority_alpha=1.0, priority_beta=1.0, training_steps=int(20e6), batch_size=1024, weight_decay=0.0) def backgammon_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an backgammon. We consider single games without a doubling cube. 
return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: return 1.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature_fn=visit_softmax_temperature, num_simulations=1600, discount=1.0, # Unused, we use adaptive dirichlet for backgammon. root_dirichlet_alpha=-1.0, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=True, # Max value is 3 for backgammon. known_bounds=KnownBounds(min=-3, max=3), # 1e5 full episodes stored. num_trajectories_in_buffer=int(1e5), # We use monte carlo returns. td_steps=int(1e3), training_steps=int(8e6), batch_size=1024, learning_rate=3e-4, weight_decay=1e-4) ################################## ############ Replay ############## class SearchStats(NamedTuple): search_policy: Dict[Action, int] search_value: float class State(NamedTuple): """Data for a single state.""" observation: List[float] reward: float discount: float player: Player action: Action search_stats: SearchStats Trajectory = Sequence[State] class ReplayBuffer: """A replay buffer to hold the experience generated by the selfplay.""" def __init__(self, config: StochasticMuZeroConfig): self.config = config self.data = [] def save(self, seq: Trajectory): if len(self.data) > self.config.num_trajectories_in_buffer: # Remove the oldest sequence from the buffer. self.data.pop(0) self.data.append(seq) def sample_trajectory(self) -> Trajectory: """Samples a trajectory uniformly or using prioritization.""" return self.data[0] def sample_index(self, seq: Trajectory) -> int: """Samples an index in the trajectory uniformly or using prioritization.""" return 0 def sample_element(self) -> Trajectory: """Samples a single element from the buffer.""" # Sample a trajectory. trajectory = self.sample_trajectory() state_idx = self.sample_index(trajectory) limit = max([self.config.num_unroll_steps, self.config.td_steps]) # Returns a trajectory of experiment. return trajectory[state_idx:state_idx + limit] def sample(self) -> Sequence[Trajectory]: """Samples a training batch.""" return [self.sample_element() for _ in range(self.config.batch_size)] ################################## ############ Search ############## class ActionOutcomeHistory: """Simple history container used inside the search. Only used to keep track of the actions and chance outcomes executed. """ def __init__(self, player: Player, history: Optional[List[ActionOrOutcome]] = None): self.initial_player = player self.history = list(history or []) def clone(self): return ActionOutcomeHistory(self.initial_player, self.history) def add_action_or_outcome(self, action_or_outcome: ActionOrOutcome): self.history.append(action_or_outcome) def last_action_or_outcome(self) -> ActionOrOutcome: return self.history[-1] def to_play(self) -> Player: # Returns the next player to play based on the initial player and the # history of actions and outcomes. For example for backgammon the two # players alternate, while for 2048 it is always the same player. 
    return 0


class Node(object):
  """A Node in the MCTS search tree."""

  def __init__(self, prior: float, is_chance: bool = False):
    self.visit_count = 0
    self.to_play = -1
    self.prior = prior
    self.value_sum = 0
    self.children = {}
    self.state = None
    self.is_chance = is_chance
    self.reward = 0

  def expanded(self) -> bool:
    return len(self.children) > 0

  def value(self) -> float:
    if self.visit_count == 0:
      return 0
    return self.value_sum / self.visit_count


# Core Monte Carlo Tree Search algorithm.
# To decide on an action, we run N simulations, always starting at the root of
# the search tree and traversing the tree according to the UCB formula until we
# reach a leaf node.
def run_mcts(config: StochasticMuZeroConfig,
             root: Node,
             action_outcome_history: ActionOutcomeHistory,
             network: Network,
             min_max_stats: MinMaxStats):
  for _ in range(config.num_simulations):
    history = action_outcome_history.clone()
    node = root
    search_path = [node]

    while node.expanded():
      action_or_outcome, node = select_child(config, node, min_max_stats)
      history.add_action_or_outcome(action_or_outcome)
      search_path.append(node)

    # Inside the search tree we use the dynamics function to obtain the next
    # hidden state given an action and the previous hidden state.
    parent = search_path[-2]
    if parent.is_chance:
      # The parent is a chance node, afterstate to latent state transition.
      # The last action or outcome is a chance outcome.
      child_state = network.dynamics(parent.state,
                                     history.last_action_or_outcome())
      network_output = network.predictions(child_state)
      # This child is a decision node.
      is_child_chance = False
    else:
      # The parent is a decision node, latent state to afterstate transition.
      # The last action or outcome is an action.
      child_state = network.afterstate_dynamics(
          parent.state, history.last_action_or_outcome())
      network_output = network.afterstate_predictions(child_state)
      # The child is a chance node.
      is_child_chance = True

    # Expand the node.
    expand_node(node, child_state, network_output, history.to_play(),
                is_child_chance)
    # Backpropagate the value up the tree.
    backpropagate(search_path, network_output.value, history.to_play(),
                  config.discount, min_max_stats)


# Select the child with the highest UCB score.
def select_child(config: StochasticMuZeroConfig, node: Node,
                 min_max_stats: MinMaxStats):
  if node.is_chance:
    # If the node is chance we sample from the prior.
    outcomes, probs = zip(*[(o, n.prior) for o, n in node.children.items()])
    outcome = np.random.choice(outcomes, p=probs)
    return outcome, node.children[outcome]

  # For decision nodes we use the pUCT formula.
  _, action, child = max(
      (ucb_score(config, node, child, min_max_stats), action, child)
      for action, child in node.children.items())
  return action, child


# The score for a node is based on its value, plus an exploration bonus based
# on the prior.
def ucb_score(config: StochasticMuZeroConfig, parent: Node, child: Node,
              min_max_stats: MinMaxStats) -> float:
  pb_c = math.log((parent.visit_count + config.pb_c_base + 1) /
                  config.pb_c_base) + config.pb_c_init
  pb_c *= math.sqrt(parent.visit_count) / (child.visit_count + 1)

  prior_score = pb_c * child.prior
  if child.visit_count > 0:
    value_score = min_max_stats.normalize(child.reward +
                                          config.discount * child.value())
  else:
    value_score = 0
  return prior_score + value_score


# We expand a node using the value, reward and policy prediction obtained from
# the neural network.
def expand_node(node: Node, state: Union[LatentState, AfterState],
                network_output: NetworkOutput, player: Player,
                is_chance: bool):
  node.to_play = player
  node.state = state
  node.is_chance = is_chance
  node.reward = network_output.reward
  for action, prob in network_output.probabilities.items():
    node.children[action] = Node(prob)


# At the end of a simulation, we propagate the evaluation all the way up the
# tree to the root.
def backpropagate(search_path: List[Node], value: float, to_play: Player,
                  discount: float, min_max_stats: MinMaxStats):
  for node in reversed(search_path):
    node.value_sum += value if node.to_play == to_play else -value
    node.visit_count += 1
    min_max_stats.update(node.value())
    value = node.reward + discount * value


# At the start of each search, we add dirichlet noise to the prior of the root
# to encourage the search to explore new actions.
def add_exploration_noise(config: StochasticMuZeroConfig, node: Node):
  actions = list(node.children.keys())
  dir_alpha = config.root_dirichlet_alpha
  if config.root_dirichlet_adaptive:
    dir_alpha = 1.0 / np.sqrt(len(actions))
  noise = np.random.dirichlet([dir_alpha] * len(actions))
  frac = config.root_dirichlet_fraction
  for a, n in zip(actions, noise):
    node.children[a].prior = node.children[a].prior * (1 - frac) + n * frac


##################################
############ Self-play ###########

class Actor(metaclass=abc.ABCMeta):
  """An actor to interact with the environment."""

  @abc.abstractmethod
  def reset(self):
    """Resets the player for a new episode."""

  @abc.abstractmethod
  def select_action(self, env: Environment) -> Action:
    """Selects an action for the current state of the environment."""

  @abc.abstractmethod
  def stats(self) -> SearchStats:
    """Returns the stats for the player after it has selected an action."""


class StochasticMuZeroActor(Actor):

  def __init__(self, config: StochasticMuZeroConfig, cacher: NetworkCacher):
    self.config = config
    self.cacher = cacher
    self.training_step = -1
    self.network = None

  def reset(self):
    # Read a network from the cacher for the new episode.
    self.training_step, self.network = self.cacher.load_network()
    self.root = None

  def _mask_illegal_actions(self, env: Environment,
                            outputs: NetworkOutput) -> NetworkOutput:
    """Masks any actions which are illegal at the root."""
    # We mask out and keep only the legal actions.
    masked_policy = {}
    network_policy = outputs.probabilities
    norm = 0
    for action in env.legal_actions():
      if action in network_policy:
        masked_policy[action] = network_policy[action]
      else:
        masked_policy[action] = 0.0
      norm += masked_policy[action]
    # Renormalize the masked policy.
    masked_policy = {a: v / norm for a, v in masked_policy.items()}
    return NetworkOutput(value=outputs.value, probabilities=masked_policy)

  def _select_action(self, root: Node):
    """Selects an action given the root node."""
    # Get the visit count distribution.
    actions, visit_counts = zip(*[
        (action, node.visit_count)
        for action, node in root.children.items()
    ])
    # Temperature
    temperature = self.config.visit_softmax_temperature_fn(self.training_step)
    # Compute the search policy.
    search_policy = [v ** (1. / temperature) for v in visit_counts]
    norm = sum(search_policy)
    search_policy = [v / norm for v in search_policy]
    return np.random.choice(actions, p=search_policy)

  def select_action(self, env: Environment) -> Action:
    """Selects an action."""
    # New min max stats for the search tree.
min_max_stats = MinMaxStats(self.config.known_bounds) # At the root of the search tree we use the representation function to # obtain a hidden state given the current observation. root = Node(0) # Provide the history of observations to the representation network to # get the initial latent state. latent_state = self.network.representation(env.observation()) # Compute the predictions. outputs = self.network.predictions(latent_state) # Keep only the legal actions. outputs = self._mask_illegal_actions(env, outputs) # Expand the root node. expand_node(root, latent_state, outputs, env.to_play(), is_chance= False) # Backpropagate the value. backpropagate([root], outputs.value, env.to_play(), self.config.discount, min_max_stats) # We add exploration noise to the root node. add_exploration_noise(self.config, root) # We then run a Monte Carlo Tree Search using only action sequences and the # model learned by the network. run_mcts(self.config, root, ActionOutcomeHistory(env.to_play()), self.network, min_max_stats) # Keep track of the root to return the stats. self.root = root # Return an action. return self._select_action(root) def stats(self) -> SearchStats: """Returns the stats of the latest search.""" if self.root is None: raise ValueError(’No search was executed.’) return SearchStats( search_policy={ action: node.visit_counts for action, node in self.root.children.items() }, search_value=self.root.value()) # Self-play. # Each self-play job is independent of all others; it takes the latest network # snapshot, produces an episode and makes it available to the training job by # writing it to a shared replay buffer. def run_selfplay(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): actor = StochasticMuZeroActor(config, cacher) while True: # Create a new instance of the environment. env = config.environment_factory() # Reset the actor. actor.reset() episode = [] while not env.is_terminal(): action = actor.select_action(env) state = State( observation=env.observation(), reward=env.reward(env.to_play()), discount=config.discount, player=env.to_play(), action=action, search_stats=actor.stats()) episode.append(state) env.apply(action) # Send the episode to the replay. replay_buffer.save(episode) ################################## ############ Training ############ class Learner(metaclass=abc.ABCMeta): """An learner to update the network weights based.""" @abc.abstractmethod def learn(self): """Single training step of the learner.""" @abc.abstractmethod def export(self) -> Network: """Exports the network.""" def policy_loss(predictions, labels): """Minimizes the KL-divergence of the predictions and labels.""" return 0.0 def value_or_reward_loss(prediction, target): """Implements the value or reward loss for Stochastic MuZero. For backgammon this is implemented as an MSE loss of scalars. For 2048, we use the two hot representation proposed in MuZero, and this loss is implemented as a KL divergence between the value and value target representations. For 2048 we also apply a hyperbolic transformation to the target (see paper for more information). Args: prediction: The reward or value output of the network. target: The reward or value target. Returns: The loss to minimize. """ return 0.0 class StochasticMuZeroLearner(Learner): """Implements the learning for Stochastic MuZero.""" def __init__(self, config: StochasticMuZeroConfig, replay_buffer: ReplayBuffer): self.config = config self.replay_buffer = replay_buffer # Instantiate the network. 
self.network = config.network_factory() def transpose_to_time(self, batch): """Transposes the data so the leading dimension is time instead of batch.""" return batch def learn(self): """Applies a single training step.""" batch = self.replay_buffer.sample() # Transpose batch to make time the leading dimension. batch = self.transpose_to_time(batch) # Compute the initial step loss. latent_state = self.network.representation(batch[0].observation) predictions = self.network.predictions(latent_state) # Computes the td target for the 0th position. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch) # Train the network value towards the td target. total_loss = value_or_reward_loss(predictions.value, value_target) # Train the network policy towards the MCTS policy. total_loss += policy_loss(predictions.probabilities, batch[0].search_stats.search_policy) # Unroll the model for k steps. for t in range(1, self.config.num_unroll_steps + 1): # Condition the afterstate on the previous action. afterstate = self.network.afterstate_dynamics( latent_state, batch[t - 1].action) afterstate_predictions = self.network.afterstate_predictions( afterstate) # Call the encoder on the next observation. # The encoder returns the chance code which is a discrete one hot code. # The gradients flow to the encoder using a straight through estimator. chance_code = self.network.encoder(batch[t].observation) # The afterstate value is trained towards the previous value target # but conditioned on the selected action to obtain a Q-estimate. total_loss += value_or_reward_loss( afterstate_predictions.value, value_target) # The afterstate distribution is trained to predict the chance code # generated by the encoder. total_loss += policy_loss(afterstate_predictions.probabilities, chance_code) # Get the dynamic predictions. latent_state = self.network.dynamics(afterstate, chance_code) predictions = self.network.predictions(latent_state) # Compute the new value target. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch[t:]) # The reward loss for the dynamics network. total_loss += value_or_reward_loss(predictions.reward, batch[t]. reward) total_loss += value_or_reward_loss(predictions.value, value_target) total_loss += policy_loss(predictions.probabilities, batch[t].search_stats.search_policy) minimize_with_adam_and_weight_decay(total_loss, learning_rate=self.config. learning_rate, weight_decay=self.config. weight_decay) def export(self) -> Network: return self.network def train_stochastic_muzero(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): learner = StochasticMuZeroLearner(config, replay_buffer) # Export the network so the actors can start generating experience. cacher.save_network(0, learner.export()) for step in range(config.training_steps): # Single learning step. learner.learn() if step > 0 and step % config.export_network_every == 0: cacher.save_network(step, learner.export()) ################################## ############ RL loop ############# def launch_stochastic_muzero(config: StochasticMuZeroConfig): """Full RL loop for stochastic MuZero.""" replay_buffer = ReplayBuffer(config) cacher = NetworkCacher() # Launch a learner job. launch_job(lambda: train_stochastic_muzero(config, cacher, replay_buffer)) # Launch the actors. for _ in range(config.num_actors): launch_job(lambda: run_selfplay(config, cacher, replay_buffer)) # Stubs to make the typechecker happy. 
def softmax_sample(distribution, temperature: float): return 0, 0 def compute_td_target(td_steps, td_lambda, trajectory): """Computes the TD lambda targets given a trajectory for the 0th element. Args: td_steps: The number n of the n-step returns. td_lambda: The lambda in TD(lambda). trajectory: A sequence of states. Returns: The n-step return. """ return 0.0 def minimize_with_sgd(loss, learning_rate): """Minimizes the loss using SGD.""" def minimize_with_adam_and_weight_decay(loss, learning_rate, weight_decay ): """Minimizes the loss using Adam with weight decay.""" def launch_job(f): """Launches a job to run remotely.""" return f()
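As a brief usage illustration (not part of the original pseudocode), training for a given environment could be launched by passing one of the environment-specific configs defined above to the top-level RL loop:

# Illustrative entry point: pick an environment-specific config and start
# the full reinforcement learning loop defined above.
if __name__ == "__main__":
  config = backgammon_config()  # or twentyfortyeight_config()
  launch_stochastic_muzero(config)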
1. What is the main contribution of the paper, and how does it address a major shortcoming of MuZero?
2. How does Stochastic MuZero perform in comparison to AlphaZero and MuZero in various settings?
3. What are some limitations of the paper's experimental results, and what further investigations could be conducted to address these limitations?
4. How does Stochastic MuZero handle stochasticity in its environment, and how does this compare to other approaches such as Monte Carlo returns?
5. What can be learned from analyzing the model learned by Stochastic MuZero, and how does it differ from the true transition model or MuZero's model?
6. How do the search trees of Stochastic MuZero look compared to those of AlphaZero and MuZero?
Summary Of The Paper Review
Summary Of The Paper The submission proposes an algorithm (called Stochastic MuZero) that combines VQ-VAEs with MuZero. Unlike MuZero, Stochastic MuZero can handle settings with stochasticity in a principled way (in terms of value equivalence). The submission shows that Stochastic MuZero can perform comparably to AlphaZero in 2048 and Backgammon. Additionally, it shows that Stochastic MuZero can perform comparably to MuZero in Go in a setting in which its computation budget is twice as large. Review This work can largely be summarized as plugging the ideas from Ozair 2021 into a MuZero training pipeline. Nevertheless, I think it is a valuable contribution to the community, as it addresses a major shortcoming of MuZero and, unlike Ozair 2021, does so in a self-play setting without a reconstruction loss. The paper is nicely written (though see small comments below). My main comments are on the experimental side: Throughout the paper, the submission claims that Stochastic MuZero retains MuZero's performance in deterministic settings. However, the paper only demonstrates this in a case in which Stochastic MuZero is given twice the simulation budget of MuZero. I think that making this claim would require showing that Stochastic MuZero matches the performance of MuZero for equal computation budgets. Maybe Stochastic MuZero's network architecture could be tuned such that each simulation is only half as costly? Seems like this merits further investigation. One weakness of the paper is that the results on games with stochasticity are not on active (at least in a relative sense) research benchmarks. I think it would be interesting to see Stochastic MuZero benchmarked in Hanabi. While Hanabi is an N-player game, it can easily be turned into a POMDP by fixing N-1 players. This was the approach taken by single-agent SPARTA, for which code has been open sourced: https://github.com/facebookresearch/Hanabi_SPARTA. I think showing Stochastic MuZero's performance against single-agent SPARTA would be a valuable contribution to the community. This experiment would also move toward addressing reviewer 4qYA's interest (which I also share) in seeing Stochastic MuZero in settings with a larger number of possible outcomes. A third, smaller point regards the baseline AlphaZero implementation. The appendix gives a number of details about the Stochastic MuZero implementation, but few regarding the AlphaZero implementation. I think it would be beneficial to devote some space to this. For example, did the submission's AlphaZero implementation use Monte Carlo returns, as was done in the AlphaZero paper, or did the submission find it necessary to modify this aspect of AlphaZero to achieve good performance in stochastic settings? A fourth, again smaller point concerns the model learned by Stochastic MuZero. In particular, I would be interested to see some analysis on the extent to which Stochastic MuZero's model performs state abstractions. I.e., does Stochastic MuZero's model look more like the true transition model or more like MuZero's (transition determinization) model? Does Stochastic MuZero end up learning a deterministic model in a deterministic environment? How do the shapes of Stochastic MuZero's search trees look compared to those of AlphaZero and MuZero? Small Comments It would be more generous to also cite Libratus (not just DeepStack) for tree-based planning algorithms for card games.
“However, learning a model in isolation from its use during planning has proven to be problematic in complex environments” What about Dreamer v2? VQ-VAE 3 — make clear this is referring to an equation code ct+k+1 = one hot (arg maxi (e(o i ≤t+k+1))) produced The = part of the <= on this line is hard to see because of its proximity to the capital V on the line below. It took me a minute to figure out how the encoder was being used. Might be worth discussing it at a greater length in the training section. In Figure 1, it took me a minute to figure out that the pictures of the 2048 game were not pictures of a VQ-VAE embedding. Would be better to use backgammon or something else less misinterpretable.
ICLR
Title Planning in Stochastic Environments with a Learned Model Abstract Model-based reinforcement learning has proven highly successful. However, learning a model in isolation from its use during planning is problematic in complex environments. To date, the most effective techniques have instead combined valueequivalent model learning with powerful tree-search methods. This approach is exemplified by MuZero, which has achieved state-of-the-art performance in a wide range of domains, from board games to visually rich environments, with discrete and continuous action spaces, in online and offline settings. However, previous instantiations of this approach were limited to the use of deterministic models. This limits their performance in environments that are inherently stochastic, partially observed, or so large and complex that they appear stochastic to a finite agent. In this paper we extend this approach to learn and plan with stochastic models. Specifically, we introduce a new algorithm, Stochastic MuZero, that learns a stochastic model incorporating afterstates, and uses this model to perform a stochastic tree search. Stochastic MuZero matched or exceeded the state of the art in a set of canonical single and multi-agent environments, including 2048 and backgammon, while maintaining the superhuman performance of standard MuZero in the game of Go. 1 INTRODUCTION Constructing plans and executing them is an important feature of human and animal behaviour. In the field of artificial intelligence there has been a great amount of research into adding planning capabilities to intelligent agents. Tree-based planning algorithms have shown a lot of success in a wide variety of environments such as card games (Moravčík et al., 2017), board games (Campbell et al., 2002; Silver et al., 2016) and more recently video games (Schrittwieser et al., 2020) and continuous control tasks (Hubert et al., 2021). Most tree search methods assume that the agent has access to a perfect simulator of the environment, whereas real-world environments are typically unknown. Model-based reinforcement learning algorithms combine a model-learning component, which estimates the dynamics of the environment, with a planning component, using the learned model as a simulator. However, learning a model in isolation from its use during planning has proven to be problematic in complex environments (van Hasselt et al., 2019). Instead, value-equivalent modellearning methods (Silver et al., 2017; Farahmand et al., 2017; Oh et al., 2017; Grimm et al., 2020) identify a model that reconstructs only those quantities required for planning. The most successful method, MuZero (Schrittwieser et al., 2020) learns a model that reconstructs reward, value and policy, and uses this model to perform a powerful Monte Carlo tree search. MuZero achieved superhuman results in Go, chess, shogi and Atari without any prior knowledge of the rules, and has also achieved state-of-the-art performance in large and continuous action spaces (Hubert et al., 2021) and offline reinforcement learning (Schrittwieser et al., 2021). However, value equivalent methods such as MuZero have in practice been limited to a deterministic class of models, which severely limits their applicability. Many environments are inherently stochastic and may be poorly approximated by a deterministic model. Partially observed environments may also be perceived by the agent as stochastic, whenever aliased states cannot be disambiguated. 
Similarly, large and complex environments may appear stochastic to a small agent with finite capacity. In this paper we introduce the first empirically effective approach for handling stochasticity in value equivalent model-learning and planning. The model is factored to first transition deterministically from state to an afterstate, and then to branch stochastically from the afterstate to the next state. This factored model is trained end-to-end so as to maintain value equivalence for both state value function and action value function respectively, and a stochastic planning method is applied to the model. We apply these ideas to MuZero, using a discrete generative network to represent the model, and modifying the Monte Carlo tree search to effectively use the factored model. We apply our method, Stochastic MuZero, to several environments in which handling stochasticity is important. First, we consider the popular stochastic puzzle game 2048, in which the prior state of the art exploits a perfect simulator and significant handcrafted domain knowledge. In our experiments, Stochastic MuZero achieved better results without any domain knowledge. Second, we consider the classic stochastic two-player game of backgammon, in which near-optimal play has been achieved using a perfect simulator. Stochastic MuZero matches this performance without any prior knowledge of the game rules. Finally, we evaluated our method in the deterministic board game of Go. There our method matched the performance of MuZero, demonstrating that Stochastic MuZero extends MuZero without sacrificing performance. 2 RELATED WORK Observation models (Oh et al., 2015; Chiappa et al., 2017; Łukasz Kaiser et al., 2020) explicitly learn the dynamics of an environment by fitting a model of observations and rewards to observed transitions. Subsequently, these models can be combined with a model-free learning rule in a Dyna fashion (Sutton, 1991). However, modeling high dimensional image observations can be computationally prohibitive, prone to high error accumulation as the model is unrolled for multiple steps, and limiting since the capacity of the model could be spent on background features which are not helpful for the problem at hand. These issues make such models unconducive for planning. Finally, van Hasselt et al. (2019) argues that Dyna-based methods are unlikely to outperform model-free approaches that use a replay buffer. Latent models (Schrittwieser et al., 2020; Oh et al., 2017; Hafner et al., 2021; Henaff et al., 2017) attempt to overcome the limitations of observation models by learning recurrent networks that operate on latent states. In this framework, the model is conditioned on the current observation and future actions and is unrolled for k steps. Subsequently, it is trained to make predictions about rewards, values, policies or observations at each timestep based on the current latent state. The model can then be combined with a tree-based planning algorithm or used to generate synthetic trajectories. Recently, MuZero has shown that it is possible to use this approach to achieve state-of-the-art performance in many challenging domains (Hubert et al., 2021) while using less data (Schrittwieser et al., 2021). However, most approaches, including MuZero, use a deterministic function to model the environment dynamics, which limits their applicability to deterministic or weakly stochastic environments. Stochastic latent models are stochastic models of the environment dynamics that operate on latent states. 
In (Hafner et al., 2021) the authors propose a recurrent state-space model which consists of three main modules: a recurrent module which generates the deterministic recurrent state $h_t$, a representation model which combines $h_t$ with the current observation $x_t$ to generate a distribution over stochastic states $s_t$ and plays the role of the posterior, and a transition predictor which depends only on $h_t$ and acts as the prior of the model. By combining the deterministic and stochastic states $h_t$ and $s_t$ the model is trained to predict the current observation $o_t$, the transition reward $r_t$ and the discount $d_t$. The next deterministic recurrent state is generated using $h_t$, $s_t$ and action $a_t$. The stochastic states $s_t$ are modeled as multidimensional multinomial variables. The learned model is then used to generate synthetic data which are used to train an actor-critic model-free agent. The authors show that their approach outperforms pure model-free methods but it fails to achieve the performance of MuZero, which combines its learned model with planning. In (Ozair et al., 2021) the authors learn a stochastic transition model using a VQ-VAE generative network (van den Oord et al., 2017) and subsequently combine it with MCTS. They show that their method can match the performance of MuZero in chess, while viewing the problem as a single-player task and implicitly learning to model the behaviour of the opponent. Despite its promise, their approach was only applied in a supervised setting using expert data, and did not address the challenges of learning a stochastic model in the reinforcement learning setting. Moreover, the learned model was trained to explicitly predict the observation at every step, which can be a limiting factor in terms of computation and model efficiency when dealing with high dimensional observations. Finally, the authors used a two-stage training process: first, a model learns latent representations of the observations, then these representations are used to learn a transition model. This makes it hard to apply this approach in the reinforcement learning setting.

3 BACKGROUND

3.1 MuZero

MuZero is a model-based general reinforcement learning agent which combines a learned model of the environment dynamics with a Monte Carlo tree search planning algorithm. The model is conditioned on the history of observations $o_{\leq t}$ at timestep $t$ and a sequence of future actions $a_{t:t+K}$, and it is trained to predict the search policies $\pi_{t:t+K}$, values $v^{\pi}_{t:t+K}$ and intermediate rewards $r_{t:t+K}$ at each future timestep. MuZero uses deterministic functions for its model, and thus it implicitly assumes that the underlying environment dynamics are also deterministic. MuZero uses its dynamics model to plan ahead at each time step and the outcome of its MCTS search to select an action and as targets for its policy improvement operator.

Model MuZero's learned model consists of 3 functions: a representation function $h$, a dynamics function $g$ and a prediction function $f$. The representation function maps the current history of observations $o_{\leq t}$ into a latent state $s^0_t$. The dynamics function $g$ receives the previous latent state $s^k_t$ and combines it with an action $a_{t+k}$ to produce the next latent state $s^{k+1}_t$ and the reward $r^k_t$. Finally, the prediction function $f$ receives each latent state $s^k_t$ as an input and computes the policy $p^k_t$ and value $v^k_t$. Given a sequence of policy $\pi_{t:T}$, value $z_{t:T}$, and reward $u_{t:T}$ targets, the model is trained to minimize the loss shown in Equation 1.
$$L^{MuZero} = \sum_{k=0}^{K} l^p(\pi_{t+k}, p^k_t) + \sum_{k=0}^{K} l^v(z_{t+k}, v^k_t) + \sum_{k=1}^{K} l^r(u_{t+k}, r^k_t) \quad (1)$$

The policy targets $\pi_{t+k}$ correspond to the MCTS policy that was generated when searching from observation $o_{\leq t+k}$. The value targets $z_{t+k}$ are computed using n-step returns (Sutton & Barto, 2018). Finally, the reward targets $u_{t+k}$ correspond to the real instantaneous rewards observed when this sequence was generated.

Search MuZero uses a variant of the MCTS tree-based algorithm first proposed in (Silver et al., 2018). The tree is constructed recursively through a number of simulations. Each simulation consists of 3 phases: selection, expansion and backpropagation. During the selection phase the tree is traversed starting from the root node until a leaf edge is reached. At each internal node $s$ the algorithm selects the action $a$ which maximizes the upper confidence bound proposed in (Silver et al., 2016) and shown in Equation 2.

$$a = \arg\max_a \left[ Q(s,a) + P(a \mid s) \cdot \frac{\sqrt{1 + \sum_b N(s,b)}}{1 + N(s,a)} \left( \alpha_1 + \log\left(\frac{\sum_b N(s,b) + \alpha_2 + 1}{\alpha_2}\right) \right) \right] \quad (2)$$

Here, $Q(s,a)$ is the value estimate for action $a$, $N(s,a)$ the visit count, $P(a \mid s)$ the prior probability of selecting action $a$, and $\alpha_1, \alpha_2$ are constants which control the relative importance of the $Q(s,\cdot)$ estimates and prior probabilities $P(\cdot \mid s)$. In the next phase, expansion, the leaf edge is expanded by querying the MuZero model and a new node is added to the tree. Finally, during the backpropagation phase the value estimate of the newly added edge is backpropagated up the tree using the n-step return estimate.

3.2 VECTOR QUANTISED VARIATIONAL AUTOENCODER

Vector Quantised Variational AutoEncoder (VQ-VAE, van den Oord et al. (2017)) is a generative modeling technique which uses four key components: an encoder neural network $e$, a decoder neural network $d$, a vector quantisation layer $vq$, and an autoregressive model $m$. Given an input $x_t$, the encoder produces an embedding $c^e_t = e(x_t)$. The quantisation layer comprises a set of $M$ codes $\{c^i\}_{i=0}^{M}$, called the codebook, and quantises the encoder's output embedding $c^e_t$ by returning the nearest code $c_t = c^{k_t}$ along with its index $k_t = \arg\min_i \|c^i - c^e_t\|$. Additionally, in the backward pass, this quantisation is treated as an identity function, referred to as straight-through gradient estimation (Bengio et al., 2013). The decoder produces a reconstruction of the input $\hat{x}_t = d(c_t)$. The autoregressive model predicts a distribution $p(k_t \mid c_{<t}) = m(c_{<t})$ over the code index at time $t$ using the quantised embeddings $c_{<t}$ of the previous timesteps. The VQ-VAE equations are shown in Equation 3.

Encoder: $c^e_t = e(x_t)$   Quantisation: $c_t, k_t = vq(c^e_t)$   Decoder: $\hat{x}_t = d(c_t)$   Model: $p(k_t \mid c_{<t}) = m(c_{<t})$ (3)

Typically, the encoder, decoder, and codebook are trained first and then frozen to train the autoregressive model in an additional second stage. The total loss for the VQ-VAE is

$$L^{vqvae}_{\phi} = \sum_{t=0}^{N-1} \Big[ \underbrace{\|\hat{x}_t - x_t\|}_{\text{reconstruction}} + \beta \underbrace{\|c_t - c^e_t\|^2}_{\text{commitment}} - \gamma \underbrace{\log p(k_t \mid c_{<t})}_{\text{second stage}} \Big] \quad (4)$$

4 Stochastic MuZero

In this section we present our novel algorithm, Stochastic MuZero. Our approach combines a learned stochastic transition model of the environment dynamics with a variant of Monte Carlo tree search (MCTS). First, we describe the new model and subsequently how it is combined with MCTS for planning.

4.1 STOCHASTIC MODEL

Afterstates We consider the problem of modeling the dynamics of a stochastic environment.
Similarly to MuZero, the model receives an initial observation $o_{\leq t}$ at time step $t$ and a sequence of actions $a_{t:t+K}$, and needs to make predictions about the future values, policies and rewards. In contrast to MuZero, which only considers latent states which correspond to real states of the environment, Stochastic MuZero makes use of the notion of afterstates (Sutton & Barto, 2018) to capture the stochastic dynamics. An afterstate $as_t$ is the hypothetical state of the environment after an action is applied but before the environment has transitioned to a true state:

$s_t \xrightarrow{\;a_t\;} as_t \dashrightarrow s_{t+1}$

By using afterstates we can separate the effect of applying an action to the environment and of the chance transition given an action. For example, in backgammon, the afterstate corresponds to the board state after one player has played its action but before the other player had the chance to roll the dice. It is also possible to define the value of an afterstate as $V(as_t) = Q(s_t, a_t)$ and the transition probabilities of the environment dynamics $\Pr(s_{t+1} \mid as_t) = \Pr(s_{t+1} \mid s_t, a_t)$. An afterstate can lead to multiple states based on a chance event. In our work we assume that there is a finite number of possible states $M$ that the environment can transition to, given an afterstate, and this way we can associate each transition with a chance outcome $c^i_t$. An example of a chance outcome could be the result of the dice in a game of backgammon. By defining afterstates $as_t$ and chance outcomes $c_t$, we can model a chance transition using a deterministic model $s_{t+1}, r_{t+1} = M(as_t, c_t)$ and a distribution $\Pr(s_{t+1} \mid as_t) = \Pr(c_t \mid as_t)$. The task of learning a stochastic model is then reduced to the problem of learning afterstates $as$ and chance outcomes $c$.

Model The stochastic model of Stochastic MuZero consists of 5 functions: a representation function $h$ which maps the current observation $o_{\leq t}$ to a latent state $s^0_t$; an afterstate dynamics function $\phi$ which, given a state $s^k_t$ and an action $a_{t+k}$, produces the next latent afterstate $as^k_t$; a dynamics function $g$ which, given an afterstate $as^k_t$ and a chance outcome $c_{t+k+1}$, produces the next latent state $s^{k+1}_t$ and a reward prediction $r^{k+1}_t$; a prediction function $f$ which, given a state $s^k_t$, generates the value $v^k_t$ and policy $p^k_t$ predictions; and an afterstate prediction function $\psi$ which, given an afterstate $as^k_t$, generates a value prediction $Q^k_t$ and a distribution $\sigma^k_t = \Pr(c_{t+k+1} \mid as^k_t)$ over possible future chance outcomes $c_{t+k+1}$. The model equations are shown in Equation 5.

Representation: $s^0_t = h(o_{\leq t})$   Prediction: $p^k_t, v^k_t = f(s^k_t)$   Afterstate Dynamics: $as^k_t = \phi(s^k_t, a_{t+k})$   Afterstate Prediction: $\sigma^k_t, Q^k_t = \psi(as^k_t)$   Dynamics: $s^{k+1}_t, r^{k+1}_t = g(as^k_t, c_{t+k+1})$ (5)

During inference, given an initial observation $o_{\leq t}$ and a sequence of actions $a_{t:t+K}$, we can generate trajectories from the above model by recurrently unrolling it and by sampling chance outcomes from the distributions $c_{t+k+1} \sim \sigma^k_t$.

Chance outcomes Stochastic MuZero models the chance outcomes by using a novel variant of the VQ-VAE method. Specifically, we consider a VQ-VAE with a constant codebook of size $M$. Each entry in the codebook is a fixed one-hot vector of size $M$. By using a fixed codebook of one-hot vectors, we can simplify the VQ-VAE equations (3). In this case, we model the encoder embedding $c^e_t$ as a categorical variable, and selecting the closest code $c_t$ is equivalent to computing the expression $\mathrm{one\_hot}(\mathrm{argmax}_i(c^{e,i}_t))$.
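As an illustration of this simplification, the following is a minimal sketch of selecting the closest code from the fixed one-hot codebook with a straight-through gradient. The sketch and its helper names are our own; the paper does not prescribe a particular implementation.

import numpy as np

def quantize_one_hot(encoder_output: np.ndarray) -> np.ndarray:
  """Maps the encoder output c^e (a categorical vector of size M) to the
  closest code of the fixed one-hot codebook, i.e. one_hot(argmax_i c^{e,i})."""
  code = np.zeros_like(encoder_output)
  code[np.argmax(encoder_output)] = 1.0
  return code

# Straight-through estimator: in an autodiff framework, the forward pass uses
# the hard one-hot code while gradients flow to the encoder as if the
# quantization were the identity, e.g.
#   c = c_e + stop_gradient(quantize_one_hot(c_e) - c_e)

c_e = np.array([0.1, 2.3, -0.5, 0.7], dtype=np.float32)  # encoder output for a codebook of size M = 4
c = quantize_one_hot(c_e)                                 # -> [0., 1., 0., 0.]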
Model training  The stochastic model is unrolled and trained in an end-to-end fashion similar to MuZero. Specifically, given a trajectory of length K with observations o_{\leq t:t+K}, actions a_{t:t+K}, value targets z_{t:t+K}, policy targets \pi_{t:t+K} and rewards u_{t+1:t+K}, the model is unrolled for K steps as shown in Figure 1 and is trained to optimize the sum of two losses, as shown in Equation (6): a MuZero loss and a chance loss for learning the stochastic dynamics of the model.

L^{total} = L^{MuZero} + L^{chance}   (6)

The MuZero loss is the same as the one described in Section 3.1 (Equation (1)). The chance loss is applied to the predictions Q^k_t and \sigma^k_t, which correspond to the latent afterstates as^k. The value Q^k_t is trained to match the value target z_{t+k}, and \sigma^k_t is trained towards the one-hot chance code c_{t+k+1} = one_hot(\argmax_i(e^i(o_{\leq t+k+1}))) produced by the encoder. Finally, following standard VQ-VAE practice, we use a VQ-VAE commitment cost to ensure that the output of the encoder c^e_{t+k+1} = e(o_{\leq t+k+1}) is close to the code c_{t+k+1}. Equation (7) shows the chance loss used to train the model, where the last term is the VQ-VAE commitment cost:

L^{chance}_w = \sum_{k=0}^{K-1} l^Q(z_{t+k}, Q^k_t) + \sum_{k=0}^{K-1} l^\sigma(c_{t+k+1}, \sigma^k_t) + \beta \sum_{k=0}^{K-1} \|c_{t+k+1} - c^e_{t+k+1}\|^2   (7)

4.2 STOCHASTIC SEARCH

Stochastic MuZero extends the MCTS algorithm used in MuZero by introducing chance nodes and chance values to the search. In the stochastic instantiation of MCTS, there are two types of nodes: decision and chance (Couetoux, 2013). The chance and decision nodes are interleaved along the depth of the tree, so that the parent of each decision node is a chance node. The root node of the tree is always a decision node. In our approach, each chance node corresponds to a latent afterstate (Section 4.1) and is expanded by querying the stochastic model, where the parent state and an action are provided as input and the model returns a value for the node and a prior distribution over future codes Pr(c | as). After a chance node is expanded, its value is backpropagated up the tree. Finally, when the node is traversed during the selection phase, a code is selected by sampling the prior distribution.¹ In Stochastic MuZero each internal decision node is again expanded by querying the learned model, where the state of the chance parent node and a sampled code c are provided as input, and the model returns a reward, a value and a policy. Similarly to MuZero, the value of the newly added node is backpropagated up the tree, and the pUCT formula (2) is used to select an edge. The stochastic search used by Stochastic MuZero is shown schematically in Figure 1.

¹In practice we follow the same quasi-random sampling approach as in Ozair et al. (2021) (A.3), where the code is selected using the formula \argmax_c Pr(c | as) / (N(c) + 1).

5 EXPERIMENTS

We applied our algorithm to a variety of challenging stochastic and deterministic environments. First, we evaluated our approach in the classic game of 2048, a stochastic single player game.
Subsequently, we considered a two player zero-sum stochastic game, Backgammon, which belongs to the same class of board games as Go, chess or Shogi, where MuZero excels, but with stochasticity induced by the use of a die. Finally, we evaluated our method in the deterministic game of Go, to measure any performance loss caused by the use of a stochastic model and search in deterministic environments in comparison to MuZero. In each environment we assess our algorithm's ability to learn a transition model and effectively use it during search. To this end, we compare Stochastic MuZero (using a stochastic learned model) to MuZero (using a deterministic learned model), AlphaZero (using a perfect simulator), and a strong baseline method (also using a perfect simulator). In the following sections we present our results for each environment separately.

5.1 2048

The game of 2048 (inspired by the game of Threes!) is a stochastic, single player, perfect information puzzle game played on a 4x4 board. The objective of the game is to slide numbered tiles on a grid to combine them to create a tile with the number 2048; one can continue to play the game after reaching the goal, creating tiles with larger numbers. The episode reward is the sum of all created tile numbers. There is a plethora of previous work (Szubert & Jaśkowski, 2014; Yeh et al., 2017; Oka & Matsuzaki, 2016; Rodgers & Levine, 2014; Neller, 2015) on combining reinforcement learning and tree search methods for tackling 2048. Despite its simplicity, model-free approaches have traditionally struggled to achieve high performance, while planning-based approaches have exploited perfect knowledge of the simulator. To date, the best performing agent used the planning-based approach proposed in (Jaśkowski, 2016). This method used an expectimax tree search over a perfect simulator, combined with domain-specific knowledge and a number of novel algorithmic ideas that exploited the structure of this specific problem. In contrast, our method uses a learned model and no prior knowledge about the environment. Figure 2 compares the performance of Stochastic MuZero in 2048 to AlphaZero, MuZero and the state-of-the-art agent of Jaśkowski (2016). Our method outperformed Jaśkowski (2016), while using only a quarter of the training data. Stochastic MuZero also achieved the same performance as AlphaZero (using a perfect simulator), despite learning the model, and performed far better than MuZero (using a deterministic model).

5.2 BACKGAMMON

Backgammon is a classic two player, zero-sum, stochastic board game; it was popularized as a standard testbed for reinforcement learning and artificial intelligence by TD-gammon (Tesauro, 1995). Here we focus on the single game setting, where the final score takes the values ±1 for a simple win or loss, ±2 for a gammon and ±3 for a backgammon. In all experiments we compared to GNUbg Grandmaster (Free Software Foundation, 2004), a superhuman-level open-source backgammon player. GNUbg combines a learned value function based on handcrafted features with a specialized min-max tree search using a perfect stochastic simulator. GNUbg Grandmaster uses a 3-ply look-ahead search over a branching factor of 20 legal moves on average and 21 chance transitions.
Stochastic MuZero, using a learned stochastic model of the environment and only 1600 simulations per move, achieved the same playing strength as GNUbg, as shown in Figure 5b. The model learned by Stochastic MuZero is of high quality: it reached the same playing strength as AlphaZero (using a perfect stochastic simulator), and much higher strength than MuZero (using a deterministic learned model). The model also robustly scaled to larger planning budgets (Figure 5c): the performance of Stochastic MuZero improved with increasing number of simulations per move, and ultimately exceeded the playing strength of GNUbg Grandmaster. Given the high dimensionality of the action space in Backgammon (see appendix for details), our Backgammon experiments used the sample-based search introduced by Hubert et al. (2021). 5.3 GO Go is a classic, two player, perfect information, zero-sum board game, that has been studied heavily in the field of artificial intelligence. AlphaZero and subsequently, MuZero have been the only algorithms which have managed to achieve super-human performance, purely through selfplay, in this challenging domain. Since the goal of Stochastic MuZero is to extend the applicability of MuZero to stochastic environments while maintaining the latter’s performance in deterministic environments, we compared the performance of the two algorithms in the game of Go. Figure 4 shows the Elo (Coulom, 2008) achieved by Stochastic MuZero and MuZero during training. Although, Stochastic MuZero requires twice the number of network expansions in comparison to MuZero to achieve the same performance, due to the use of a stochastic MCTS instead of a deterministic one, we ensure that the methods are computationally equivalent by halving the network depth for the chance and dynamic parts of the Stochastic MuZero’s network. 5.4 REPRODUCIBILITY In order to evaluate the robustness of our method in all different environments, we replicated our experiments using nine different initial random seeds (see figure 5.4). We observe that our method is robust to the random initialization and there is minimal variation in its performance between multiple runs. Due to the computational cost of each experiment we used a smaller number of training steps for each experiment. 6 CONCLUSIONS In this work, we proposed a new method for learning a stochastic model of the environment, in a fully online reinforcement learning setting, and showed that the learned model can be effectively combined with planning. Our approach builds on top of MuZero, a model-based reinforcement learning agent that has been widely successful in a range of environments and settings, but its applicability is limited to deterministic or weakly stochastic environments. We have shown that our algorithm, Stochastic MuZero, can overcome the limitations of MuZero, significantly outperforming it in stochastic environments, and it can achieve the same or better performance than AlphaZero which makes use of a perfect simulator for the environment. Finally, we have demonstrated that Stochastic MuZero matches or exceeds the performance of previous methods that use a perfect stochastic simulator, in a pure reinforcement learning setting without using any prior knowledge about the environment. 7 REPRODUCIBILITY STATEMENT In order to ensure the reproducability of our results by the research community, we have included detailed pseudocode, references to all environments and datasets used as well as a detailed description of the hyperparameters used (see Appendix). 
We did not release the full code as it relies on a lot of proprietary internal infrastructure, limiting its usefulness. We also provide a study of the robustness of our method under different random initialization conditions (see 5.4). D BACKGAMMON EXPERIMENTS Backgammon is an ancient two player, zero-sum, perfect information, stochastic board game. The board consists of 24 squares (or points) and each player controls 15 checkers, which can move based on the outcome of a dice roll. The two players move their checkers in opposite directions and their goal is to move all their checkers off the board first. In addition to a simple winning, a player can also score a double ("gammon") or a triple ("backgammon") winning. A "gammon" is achieved when a player bears off all their checkers before their opponent manages to bear off any, while a "backgammon" when the opponent also has checkers left in the player’s home quadrant (farthermost quadrant from the opponent’s perspective). Each player can impede the progress of their opponent through "hitting" the opponent’s checkers or blocking their advancement. A "hit" is achieved when a player’s checker advances to a position with a single opponent’s checker. Then the opponent’s checker needs to reenter the board in the player’s home quadrant and no further moves are allowed to the opponent until that happens. A position is blocked to the opponent when it is occupied by at least two of the player’s checkers. Each player makes moves based on the values yielded by rolling two dice. In the case of "doubles", aka the two dice have the same value, the player can play up to 4 moves. One of the challenges of computer Backgammon is the high branching ratio, since at each ply there are 21 chance outcomes, which yield positions with an average of 20 legal moves each, resulting in a branching ratio of several hundred per ply. In our backgammon experiments, the board was represented using a vector of size 28, with the first 24 positions representing the number of chips for each player in the 24 possible points on the board, and the last four representing the number of hit chips and born off chips for each of the two players. We used positive numbers for the current player’s chips and negative ones for her opponent. An action in our implementation consists of 4 micro-actions, the same as the maximum number of dice a player can play at each turn. Each micro-action encodes the source position of a chip along with the value of the die used. We consider 26 possible source positions, with the 0th position corresponding to a no-op, the 1st to retrieving a chip from the hit pile, and the remaining to selecting a chip in one of the 24 possible points. Each micro-action is encoded as a single integer with micro-action = src · 6 + die. Similarly to the 2048 experiments, the representation, afterstate dynamics, dynamics and encoder functions were implemented using a 10 block ResNet v2 style pre-activation residual tower (He et al., 2016) coupled with Layer Normalisation (Ba et al., 2016) and Rectified Linear Unit (ReLU) activations. Each linear layer has an output size of 256. The action was provided to the afterstate dynamics network as a vector which was the result of the concatenation of the one-hot representation of each micro-action. We used a codebook of size 32 to model the stochasticity in the environment. 
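To illustrate the action encoding described above, here is a small NumPy sketch: each micro-action is mapped to the integer src * 6 + die and one-hot encoded, and the four one-hot vectors of a turn are concatenated. The one-hot dimensionality and the padding convention for unused micro-actions are assumptions made for the example, not details taken from our implementation.

import numpy as np

NUM_SOURCES = 26       # 24 board points + hit pile + no-op, as described above
NUM_DIE_VALUES = 6
MICRO_ACTION_DIM = NUM_SOURCES * NUM_DIE_VALUES + NUM_DIE_VALUES + 1  # assumed; large enough for any src * 6 + die

def encode_micro_action(src, die):
    # Encode one micro-action as the integer src * 6 + die, then one-hot it.
    index = src * NUM_DIE_VALUES + die
    one_hot = np.zeros(MICRO_ACTION_DIM, dtype=np.float32)
    one_hot[index] = 1.0
    return one_hot

def encode_action(micro_actions):
    # Concatenate the one-hot encodings of the (up to) 4 micro-actions in a turn.
    return np.concatenate([encode_micro_action(s, d) for s, d in micro_actions])

# Example: a turn consisting of two moves followed by two assumed no-op paddings.
action_vector = encode_action([(5, 3), (12, 3), (0, 0), (0, 0)])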
Following the work of (Hubert et al., 2021), we used an autoregressive prediction head to model the network policy, with each step corresponding to a single micro-action. To generate a full action, the network was unrolled for 4 steps. In contrast to the 2048 experiments, the value was represented as a scalar. Similarly to MuZero when applied to board games, we used Monte Carlo returns to compute the value targets zt, and we assumed a discount of 1. We trained the model using an Adam optimizer with weight decay (Loshchilov & Hutter, 2017), with learning rate of 0.0003 and a weight decay of 0.0001, with a batch size of 1024 for a total of 8M steps. In all our experiments we used a replay buffer of 100000 games, and the training trajectories were sampled uniformly. For exploration, we injected dirichlet noise to the prior policy at the root node. However, since the number of legal moves at each position can dramatically change in backgammon, we dynamically adapted the alpha parameter of the dirichlet noise based on the number of legal moves, with alpha = 1/ √ num_legal_moves. We used a budget of 1600 simulations for each MCTS search. E GO EXPERIMENTS In our Go experiments, we used the same approach as the one proposed in Hubert et al. (2021). The main differences between this setup and the one proposed in the original MuZero Schrittwieser et al. (2020) is the use of n-step bootstrapping with a target network to improve the data efficiency of the algorithm. The MuZero and Stochastic MuZero players were evaluated during training by playing 100 matches with a search budget of 800 simulations for MuZero and 1600 for Stochastic MuZero. In order to ensure that the two methods are computationally equivalent, each of the chance and dynamics networks of Stochastic MuZero has half the depth of the dynamics network used by MuZero. The Elo scale was anchored so that the performance of the final MuZero baseline corresponded to an Elo of 2000. F CHANCE ANALYSIS We investigated the distribution of chance outcomes at each chance node for Stochastic MuZero. We collected a dataset for each game by storing the probability distribution over chance nodes, σkt = Pr(ct+k+1|askt ), for all afterstate prediction network evaluations invoked throughout all searches in 5 episodes. Subsequently, we sorted each chance node distribution and finally, we computed the average distribution, as shown in figure 6 6. We observed that in the case of deterministic environment like Go, the chance distribution collapsed to a single code, while in stochastic environments the model used multiple codes. Furthermore, in Backgammon, the chance distribution had a support of 21 codes with non-negligible probability, which corresponds to the number of distinct rolls of two dice. G COMPUTATIONAL RESOURCES All experiments were run using second generation Google Cloud TPUs (Google, 2018). For Backgammon, we used 1 TPU for training and 16 TPUs for acting, for approximately 27 hours - equivalent to 10 days on a single V100 GPU. In 2048 we used 1 TPU for training and 4 TPUs for acting, for 80 hours per experiment; equivalent to roughly 8 days on a V100. Finally, in Go we used the same setup as in MuZero (Schrittwieser et al., 2020). H IMPLEMENTATION Stochastic MuZero was implemented as an extension to the standard MuZero algorithm, as it was described in (Schrittwieser et al., 2020). 
We used the JAX (Bradbury et al., 2018) and Haiku (Hennigan et al., 2020) libraries to implement the neural networks and optimization methods described in this paper. Along with this work, we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used for each environment. I PSEUDOCODE For completeness we provide a detailed pseudocode of the Stochastic MuZero algorithm, along with all the hyperparameters used by the agent. """Pseudocode description of the Stochastic MuZero algorithm. This pseudocode was adapted from the original MuZero pseudocode. """ # pylint: disable=unused-argument # pylint: disable=missing-docstring # pylint: disable=g-explicit-length-test import abc import math from typing import Any, Dict, Callable, List, NamedTuple, Tuple, Union, Optional, Sequence import dataclasses import numpy as np MAXIMUM_FLOAT_VALUE = float(’inf’) ######################################## ####### Environment interface ########## # An action to apply to the environment. # It can a single integer or a list of micro-actions for backgammon. Action = Any # The current player to play. Player = int class Environment: """Implements the rules of the environment.""" def apply(self, action: Action): """Applies an action or a chance outcome to the environment.""" def observation(self): """Returns the observation of the environment to feed to the network. """ def is_terminal(self) -> bool: """Returns true if the environment is in a terminal state.""" return False def legal_actions(self) -> Sequence[Action]: """Returns the legal actions for the current state.""" return [] def reward(self, player: Player) -> float: """Returns the last reward for the player.""" return 0.0 def to_play(self) -> Player: """Returns the current player to play.""" return 0 ########################## ####### Helpers ########## class KnownBounds(NamedTuple): min: float max: float class MinMaxStats(object): """A class that holds the min-max values of the tree.""" def __init__(self, known_bounds: Optional[KnownBounds]): self.maximum = known_bounds.max if known_bounds else - MAXIMUM_FLOAT_VALUE self.minimum = known_bounds.min if known_bounds else MAXIMUM_FLOAT_VALUE def update(self, value: float): self.maximum = max(self.maximum, value) self.minimum = min(self.minimum, value) def normalize(self, value: float) -> float: if self.maximum > self.minimum: # We normalize only when we have set the maximum and minimum values . return (value - self.minimum) / (self.maximum - self.minimum) return value # A chance outcome. Outcome = Any # An object that holds an action or a chance outcome. ActionOrOutcome = Union[Action, Outcome] LatentState = List[float] AfterState = List[float] class NetworkOutput(NamedTuple): value: float probabilities: Dict[ActionOrOutcome, float] reward: Optional[float] = 0.0 class Network: """An instance of the network used by stochastic MuZero.""" def representation(self, observation) -> LatentState: """Representation function maps from observation to latent state.""" return [] def predictions(self, state: LatentState) -> NetworkOutput: """Returns the network predictions for a latent state.""" return NetworkOutput(0, {}, 0) def afterstate_dynamics(self, state: LatentState, action: Action) -> AfterState: """Implements the dynamics from latent state and action to afterstate .""" return [] def afterstate_predictions(self, state: AfterState) -> NetworkOutput: """Returns the network predictions for an afterstate.""" # No reward for afterstate transitions. 
return NetworkOutput(0, {}) def dynamics(self, state: AfterState, action: Outcome) -> LatentState: """Implements the dynamics from afterstate and chance outcome to state.""" return [] def encoder(self, observation) -> Outcome: """An encoder maps an observation to an outcome.""" class NetworkCacher: """An object to share the network between the self-play and training jobs.""" def __init__(self): self._networks = {} def save_network(self, step: int, network: Network): self._networks[step] = network def load_network(self) -> Tuple[int, Network]: training_step = max(self._networks.keys()) return training_step, self._networks[training_step] # Takes the training step and returns the temperature of the softmax policy. VisitSoftmaxTemperatureFn = Callable[[int], float] # Returns an instance of the environment. EnvironmentFactory = Callable[[], Environment] # The factory for the network. NetworkFactory = Callable[[], Network] @dataclasses.dataclass class StochasticMuZeroConfig: # A factory for the environment. environment_factory: EnvironmentFactory network_factory: NetworkFactory # Self-Play num_actors: int visit_softmax_temperature_fn: VisitSoftmaxTemperatureFn num_simulations: int discount: float # Root prior exploration noise. root_dirichlet_alpha: float root_dirichlet_fraction: float root_dirichlet_adaptive: bool # UCB formula pb_c_base: float = 19652 pb_c_init: float = 1.25 # If we already have some information about which values occur in the # environment, we can use them to initialize the rescaling. # This is not strictly necessary, but establishes identical behaviour to # AlphaZero in board games. known_bounds: Optional[KnownBounds] = None # Replay buffer. num_trajectories_in_buffer: int = int(1e6) batch_size: int = int(128) num_unroll_steps: int = 5 td_steps: int = 6 td_lambda: float = 1.0 # Alpha and beta parameters for prioritization. # By default they are set to 0 which means uniform sampling. priority_alpha: float = 0.0 priority_beta: float = 0.0 # Training training_steps: int = int(1e6) export_network_every: int = int(1e3) learning_rate: float = 3e-4 weight_decay: float = 1e-4 # The number of chance codes (codebook size). # We use a codebook of size 32 for all our experiments. codebook_size: int = 32 ################################## ## Environment specific configs ## def twentyfortyeight_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an implementation of 2048. return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: if train_steps < 1e5: return 1.0 elif train_steps < 2e5: return 0.5 elif train_steps < 3e5: return 0.1 else: # Greedy selection. return 0.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature=visit_softmax_temperature, num_simulations=100, discount=0.999, root_dirichlet_alpha=0.3, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=False, num_trajectories_in_buffer=int(125e3), td_steps=10, td_lambda=0.5, priority_alpha=1.0, priority_beta=1.0, training_steps=int(20e6), batch_size=1024, weight_decay=0.0) def backgammon_config() -> StochasticMuZeroConfig: """Returns the config for the game of 2048.""" def environment_factory(): # Returns an backgammon. We consider single games without a doubling cube. 
return Environment() def network_factory(): # 10 layer fully connected Res V2 network with Layer normalization and size # 256. return Network() def visit_softmax_temperature(train_steps: int) -> float: return 1.0 return StochasticMuZeroConfig( environment_factory=environment_factory, network_factory=network_factory, num_actors=1000, visit_softmax_temperature_fn=visit_softmax_temperature, num_simulations=1600, discount=1.0, # Unused, we use adaptive dirichlet for backgammon. root_dirichlet_alpha=-1.0, root_dirichlet_fraction=0.1, root_dirichlet_adaptive=True, # Max value is 3 for backgammon. known_bounds=KnownBounds(min=-3, max=3), # 1e5 full episodes stored. num_trajectories_in_buffer=int(1e5), # We use monte carlo returns. td_steps=int(1e3), training_steps=int(8e6), batch_size=1024, learning_rate=3e-4, weight_decay=1e-4) ################################## ############ Replay ############## class SearchStats(NamedTuple): search_policy: Dict[Action, int] search_value: float class State(NamedTuple): """Data for a single state.""" observation: List[float] reward: float discount: float player: Player action: Action search_stats: SearchStats Trajectory = Sequence[State] class ReplayBuffer: """A replay buffer to hold the experience generated by the selfplay.""" def __init__(self, config: StochasticMuZeroConfig): self.config = config self.data = [] def save(self, seq: Trajectory): if len(self.data) > self.config.num_trajectories_in_buffer: # Remove the oldest sequence from the buffer. self.data.pop(0) self.data.append(seq) def sample_trajectory(self) -> Trajectory: """Samples a trajectory uniformly or using prioritization.""" return self.data[0] def sample_index(self, seq: Trajectory) -> int: """Samples an index in the trajectory uniformly or using prioritization.""" return 0 def sample_element(self) -> Trajectory: """Samples a single element from the buffer.""" # Sample a trajectory. trajectory = self.sample_trajectory() state_idx = self.sample_index(trajectory) limit = max([self.config.num_unroll_steps, self.config.td_steps]) # Returns a trajectory of experiment. return trajectory[state_idx:state_idx + limit] def sample(self) -> Sequence[Trajectory]: """Samples a training batch.""" return [self.sample_element() for _ in range(self.config.batch_size)] ################################## ############ Search ############## class ActionOutcomeHistory: """Simple history container used inside the search. Only used to keep track of the actions and chance outcomes executed. """ def __init__(self, player: Player, history: Optional[List[ActionOrOutcome]] = None): self.initial_player = player self.history = list(history or []) def clone(self): return ActionOutcomeHistory(self.initial_player, self.history) def add_action_or_outcome(self, action_or_outcome: ActionOrOutcome): self.history.append(action_or_outcome) def last_action_or_outcome(self) -> ActionOrOutcome: return self.history[-1] def to_play(self) -> Player: # Returns the next player to play based on the initial player and the # history of actions and outcomes. For example for backgammon the two # players alternate, while for 2048 it is always the same player. 
return 0 class Node(object): """A Node in the MCTS search tree.""" def __init__(self, prior: float, is_chance: bool = False): self.visit_count = 0 self.to_play = -1 self.prior = prior self.value_sum = 0 self.children = {} self.state = None self.is_chance = is_chance self.reward = 0 def expanded(self) -> bool: return len(self.children) > 0 def value(self) -> float: if self.visit_count == 0: return 0 return self.value_sum / self.visit_count # Core Monte Carlo Tree Search algorithm. # To decide on an action, we run N simulations, always starting at the root of # the search tree and traversing the tree according to the UCB formula until we # reach a leaf node. def run_mcts(config: StochasticMuZeroConfig, root: Node, action_outcome_history: ActionOutcomeHistory, network: Network, min_max_stats: MinMaxStats): for _ in range(config.num_simulations): history = action_outcome_history.clone() node = root search_path = [node] while node.expanded(): action_or_outcome, node = select_child(config, node, min_max_stats) history.add_action(action_or_outcome) search_path.append(node) # Inside the search tree we use the dynamics function to obtain the next # hidden state given an action and the previous hidden state. parent = search_path[-2] if parent.is_chance: # The parent is a chance node, afterstate to latent state transition. # The last action or outcome is a chance outcome. child_state = network_output.dynamics(parent.state, history. last_action_or_outcome()) network_output = network_output.predictions(child_state) # This child is a decision node. is_child_chance = False else: # The parent is a decision node, latent state to afterstate transition. # The last action or outcome is an action. child_state = network_output.afterstate_dynamics( parent.state, history.last_action_or_outcome()) network_output = network_output.afterstate_predictions(child_state) # The child is a chance node. is_child_chance = True # Expand the node. expand_node(node, child_state, network_output, history.to_play(), is_child_chance) # Backpropagate the value up the tree. backpropagate(search_path, network_output.value, history.to_play(), config.discount, min_max_stats) # Select the child with the highest UCB score. def select_child(config: StochasticMuZeroConfig, node: Node, min_max_stats: MinMaxStats): if node.is_chance: # If the node is chance we sample from the prior. outcomes, probs = zip(*[(o, n.prob) for o, n in node.children.items() ]) outcome = np.random.choice(outcomes, p=probs) return outcome, node.children[outcome] # For decision nodes we use the pUCT formula. _, action, child = max( (ucb_score(config, node, child, min_max_stats), action, child) for action, child in node.children.items()) return action, child # The score for a node is based on its value, plus an exploration bonus based on # the prior. def ucb_score(config: StochasticMuZeroConfig, parent: Node, child: Node, min_max_stats: MinMaxStats) -> float: pb_c = math.log((parent.visit_count + config.pb_c_base + 1) / config.pb_c_base) + config.pb_c_init pb_c *= math.sqrt(parent.visit_count) / (child.visit_count + 1) prior_score = pb_c * child.prior if child.visit_count > 0: value_score = min_max_stats.normalize(child.reward + config.discount * child.value() ) else: value_score = 0 return prior_score + value_score # We expand a node using the value, reward and policy prediction obtained from # the neural network. 
def expand_node(node: Node, state: Union[LatentState, AfterState], network_output: NetworkOutput, player: Player, is_chance: bool): node.to_play = player node.state = state node.is_chance = is_chance node.reward = network_output.reward for action, prob in network_output.probabilities.items(): node.children[action] = Node(prob) # At the end of a simulation, we propagate the evaluation all the way up the # tree to the root. def backpropagate(search_path: List[Node], value: float, to_play: Player, discount: float, min_max_stats: MinMaxStats): for node in reversed(search_path): node.value_sum += value if node.to_play == to_play else -value node.visit_count += 1 min_max_stats.update(node.value()) value = node.reward + discount * value # At the start of each search, we add dirichlet noise to the prior of the root # to encourage the search to explore new actions. def add_exploration_noise(config: StochasticMuZeroConfig, node: Node): actions = list(node.children.keys()) dir_alpha = config.root_dirichlet_alpha if config.root_dirichlet_adaptive: dir_alpha = 1.0 / np.sqrt(len(actions)) noise = np.random.dirichlet([dir_alpha] * len(actions)) frac = config.root_exploration_fraction for a, n in zip(actions, noise): node.children[a].prior = node.children[a].prior * (1 - frac) + n * frac ################################## ############ Self-play ########### class Actor(metaclass=abc.ABCMeta): """An actor to interact with the environment.""" @abc.abstractmethod def reset(self): """Resets the player for a new episode.""" @abc.abstractmethod def select_action(self, env: Environment) -> Action: """Selects an action for the current state of the environment.""" @abc.abstractmethod def stats(self) -> SearchStats: """Returns the stats for the player after it has selected an action. """ class StochasticMuZeroActor(Actor): def __init__(self, config: StochasticMuZeroConfig, cacher: NetworkCacher): self.config = config self.cacher = cacher self.training_step = -1 self.network = None def reset(self): # Read a network from the cacher for the new episode. self.training_step, self.network = self.cacher.load_network() self.root = None def _mask_illegal_actions(self, env: Environment, outputs: NetworkOutput) -> NetworkOutput: """Masks any actions which are illegal at the root.""" # We mask out and keep only the legal actions. masked_policy = {} network_policy = outputs.probabilities norm = 0 for action in env.legal_actions(): if action in network_policy: masked_policy[action] = network_policy[action] else: masked_policy[action] = 0.0 norm += masked_policy[action] # Renormalize the masked policy. masked_policy = {a: v / norm for a, v in masked_policy.items()} return NetworkOutput(value=outputs.value, probabilities=masked_policy ) def _select_action(self, root: Node): """Selects an action given the root node.""" # Get the visit count distribution. actions, visit_counts = zip(*[ (action, node.visit_counts) for action, node in node.children.items() ]) # Temperature temperature = self.config.visit_softmax_temperature_fn(self. training_step) # Compute the search policy. search_policy = [v ** (1. / temperature) for v in visit_counts] norm = sum(search_policy) search_policy = [v / norm for v in search_policy] return np.random.choice(actions, p=search_policy) def select_action(self, env: Environment) -> Action: """Selects an action.""" # New min max stats for the search tree. 
min_max_stats = MinMaxStats(self.config.known_bounds) # At the root of the search tree we use the representation function to # obtain a hidden state given the current observation. root = Node(0) # Provide the history of observations to the representation network to # get the initial latent state. latent_state = self.network.representation(env.observation()) # Compute the predictions. outputs = self.network.predictions(latent_state) # Keep only the legal actions. outputs = self._mask_illegal_actions(env, outputs) # Expand the root node. expand_node(root, latent_state, outputs, env.to_play(), is_chance= False) # Backpropagate the value. backpropagate([root], outputs.value, env.to_play(), self.config.discount, min_max_stats) # We add exploration noise to the root node. add_exploration_noise(self.config, root) # We then run a Monte Carlo Tree Search using only action sequences and the # model learned by the network. run_mcts(self.config, root, ActionOutcomeHistory(env.to_play()), self.network, min_max_stats) # Keep track of the root to return the stats. self.root = root # Return an action. return self._select_action(root) def stats(self) -> SearchStats: """Returns the stats of the latest search.""" if self.root is None: raise ValueError(’No search was executed.’) return SearchStats( search_policy={ action: node.visit_counts for action, node in self.root.children.items() }, search_value=self.root.value()) # Self-play. # Each self-play job is independent of all others; it takes the latest network # snapshot, produces an episode and makes it available to the training job by # writing it to a shared replay buffer. def run_selfplay(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): actor = StochasticMuZeroActor(config, cacher) while True: # Create a new instance of the environment. env = config.environment_factory() # Reset the actor. actor.reset() episode = [] while not env.is_terminal(): action = actor.select_action(env) state = State( observation=env.observation(), reward=env.reward(env.to_play()), discount=config.discount, player=env.to_play(), action=action, search_stats=actor.stats()) episode.append(state) env.apply(action) # Send the episode to the replay. replay_buffer.save(episode) ################################## ############ Training ############ class Learner(metaclass=abc.ABCMeta): """An learner to update the network weights based.""" @abc.abstractmethod def learn(self): """Single training step of the learner.""" @abc.abstractmethod def export(self) -> Network: """Exports the network.""" def policy_loss(predictions, labels): """Minimizes the KL-divergence of the predictions and labels.""" return 0.0 def value_or_reward_loss(prediction, target): """Implements the value or reward loss for Stochastic MuZero. For backgammon this is implemented as an MSE loss of scalars. For 2048, we use the two hot representation proposed in MuZero, and this loss is implemented as a KL divergence between the value and value target representations. For 2048 we also apply a hyperbolic transformation to the target (see paper for more information). Args: prediction: The reward or value output of the network. target: The reward or value target. Returns: The loss to minimize. """ return 0.0 class StochasticMuZeroLearner(Learner): """Implements the learning for Stochastic MuZero.""" def __init__(self, config: StochasticMuZeroConfig, replay_buffer: ReplayBuffer): self.config = config self.replay_buffer = replay_buffer # Instantiate the network. 
self.network = config.network_factory() def transpose_to_time(self, batch): """Transposes the data so the leading dimension is time instead of batch.""" return batch def learn(self): """Applies a single training step.""" batch = self.replay_buffer.sample() # Transpose batch to make time the leading dimension. batch = self.transpose_to_time(batch) # Compute the initial step loss. latent_state = self.network.representation(batch[0].observation) predictions = self.network.predictions(latent_state) # Computes the td target for the 0th position. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch) # Train the network value towards the td target. total_loss = value_or_reward_loss(predictions.value, value_target) # Train the network policy towards the MCTS policy. total_loss += policy_loss(predictions.probabilities, batch[0].search_stats.search_policy) # Unroll the model for k steps. for t in range(1, self.config.num_unroll_steps + 1): # Condition the afterstate on the previous action. afterstate = self.network.afterstate_dynamics( latent_state, batch[t - 1].action) afterstate_predictions = self.network.afterstate_predictions( afterstate) # Call the encoder on the next observation. # The encoder returns the chance code which is a discrete one hot code. # The gradients flow to the encoder using a straight through estimator. chance_code = self.network.encoder(batch[t].observation) # The afterstate value is trained towards the previous value target # but conditioned on the selected action to obtain a Q-estimate. total_loss += value_or_reward_loss( afterstate_predictions.value, value_target) # The afterstate distribution is trained to predict the chance code # generated by the encoder. total_loss += policy_loss(afterstate_predictions.probabilities, chance_code) # Get the dynamic predictions. latent_state = self.network.dynamics(afterstate, chance_code) predictions = self.network.predictions(latent_state) # Compute the new value target. value_target = compute_td_target(self.config.td_steps, self.config.td_lambda, batch[t:]) # The reward loss for the dynamics network. total_loss += value_or_reward_loss(predictions.reward, batch[t]. reward) total_loss += value_or_reward_loss(predictions.value, value_target) total_loss += policy_loss(predictions.probabilities, batch[t].search_stats.search_policy) minimize_with_adam_and_weight_decay(total_loss, learning_rate=self.config. learning_rate, weight_decay=self.config. weight_decay) def export(self) -> Network: return self.network def train_stochastic_muzero(config: StochasticMuZeroConfig, cacher: NetworkCacher, replay_buffer: ReplayBuffer): learner = StochasticMuZeroLearner(config, replay_buffer) # Export the network so the actors can start generating experience. cacher.save_network(0, learner.export()) for step in range(config.training_steps): # Single learning step. learner.learn() if step > 0 and step % config.export_network_every == 0: cacher.save_network(step, learner.export()) ################################## ############ RL loop ############# def launch_stochastic_muzero(config: StochasticMuZeroConfig): """Full RL loop for stochastic MuZero.""" replay_buffer = ReplayBuffer(config) cacher = NetworkCacher() # Launch a learner job. launch_job(lambda: train_stochastic_muzero(config, cacher, replay_buffer)) # Launch the actors. for _ in range(config.num_actors): launch_job(lambda: run_selfplay(config, cacher, replay_buffer)) # Stubs to make the typechecker happy. 
def softmax_sample(distribution, temperature: float): return 0, 0 def compute_td_target(td_steps, td_lambda, trajectory): """Computes the TD lambda targets given a trajectory for the 0th element. Args: td_steps: The number n of the n-step returns. td_lambda: The lambda in TD(lambda). trajectory: A sequence of states. Returns: The n-step return. """ return 0.0 def minimize_with_sgd(loss, learning_rate): """Minimizes the loss using SGD.""" def minimize_with_adam_and_weight_decay(loss, learning_rate, weight_decay ): """Minimizes the loss using Adam with weight decay.""" def launch_job(f): """Launches a job to run remotely.""" return f()
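As a usage sketch, the pieces above are tied together by constructing an environment-specific config and passing it to the top-level loop; the snippet below only uses functions defined in the pseudocode and, like the pseudocode itself, requires the stubbed losses, targets and environment to be filled in before it would train anything.

# Launch a full Stochastic MuZero run for 2048 using the config defined above.
config = twentyfortyeight_config()
launch_stochastic_muzero(config)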
1. What is the focus and contribution of the paper regarding reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its application to stochastic environments?
3. What are the weaknesses of the paper, especially in terms of optimality and limitations?
4. Do you have any concerns regarding the extension of MuZero to stochastic environments?
5. What are the potential challenges in scaling the proposed method to handle larger random outcome spaces or partially observed environments?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes an extension of MuZero to stochastic environments. The stochasticity of the environment is handled by using afterstates, as_t, and chance outcomes, c_t. This decomposes the modelling of the stochastic environment dynamics into a deterministic model s_{t+1}, r_{t+1} = M(as_t, c_t) and modelling the chance outcomes p(c_t | as_t). The chance outcomes are modeled as a discrete categorical variable (1 of M) and learned using a VQ-VAE-like setup. The paper shows how the proposed model achieves ~SOTA on two stochastic environments, 2048 and backgammon, and retains SOTA performance on a single non-stochastic environment, Go, although using twice the computational budget.

Review
Strengths:
- This paper represents a significant contribution to reinforcement learning, by extending the state-of-the-art MuZero to stochastic or partially observed environments.
- The paper is clearly written and clearly explains the methods it builds on.
- The paper is evaluated on both stochastic and non-stochastic environments.

Weaknesses:
- I would have liked a discussion of optimality. Expectimax is known to be optimal, but computationally infeasible in many applications. How does this work compare to expectimax and what are the approximations?
- I would also have liked a discussion of limitations. What are the limitations of the proposed method? For instance, modeling discrete chance outcomes seems to limit this to environments with discrete randomness, e.g. dice rolls, cards, etc. What about environments with continuous stochasticity?
- Also, how large are the random outcome spaces of the environments tried, and how well do the one-hot chance outcomes scale with larger random outcome spaces? How well does the method scale to partially observed environments, like e.g. poker?
ICLR
Title Gaussian Process Behaviour in Wide Deep Neural Networks Abstract Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. 1 N/A Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. 1 1 INTRODUCTION Deep feedforward neural networks have emerged as an essential component of modern machine learning. As such there has been significant research effort in trying to understand the theoretical properties of such models. One important branch of such research is the study of random networks. By assuming a probability distribution on the network parameters, a distribution is induced on the input to output function that such networks encode. This has proved important in the study of initialisation and learning dynamics (Schoenholz et al., 2017) and expressivity (Poole et al., 2016). It is, of course, essential in the study of Bayesian priors on networks (Neal, 1996). The Bayesian approach makes little sense if prior assumptions are not understood, and distributional knowledge can be essential in finding good posterior approximations. Since we typically want our networks to have high modelling capacity, it is natural to consider limit distributions of networks as they become large. Whilst distributions on deep networks are generally challenging to work with exactly, the limiting behaviour can lead to more insight. Further, as we shall see, networks used in the literature may be very close to this behaviour. The seminal work in this area is that of Neal (1996), which showed that under certain conditions random neural networks with one hidden layer converge to a Gaussian process. The question of the type of convergence is non-trivial and part of our discussion. 
Historically this result was a significant one because it provided a connection between flexible Bayesian neural networks and Gaussian processes (Williams, 1998; Rasmussen & Williams, 2006) 1Code for the experiments in the paper can be found at https://github.com/ widedeepnetworks/widedeepnetworks 1.1 OUR CONTRIBUTIONS We extend the theoretical understanding of random fully connected networks and their relationship to Gaussian processes. In particular, we prove a rigorous result (Theorem 1) on the convergence of certain finite networks with more than one hidden layer to Gaussian processes. Further, we empirically study the distance between finite networks and their Gaussian process analogues by using maximum mean discrepancy (Gretton et al., 2012) as a distance measure. We find that Bayesian deep networks from the literature can exhibit predictions that are close to Gaussian processes. To demonstrate this, we systematically compare exact Gaussian process inference with ‘gold standard’ MCMC inference for Bayesian neural networks. Our work is of relevance to the theoretical understanding of neural network initialisation and dynamics. It is also important in the area of Bayesian deep networks because it demonstrates that Gaussian process behaviour can arise in more situations of practical interest than previously thought. If this behaviour is desired then Gaussian process inference (exact and approximate) should also be considered. In some scenarios, the behaviour may not be desired because it implies a lack of a hierarchical representation. We therefore highlight promising ideas from the literature to prevent such behaviour. 1.2 RELATED WORK The case of random neural networks with one hidden layer was studied by Neal (1996). Cho & Saul (2009) provided analytic expressions for single layer kernels including those corresponding to a rectified linear unit (ReLU). They also studied recursive kernels designed to ‘mimic computation in large, multilayer neural nets’. As discussed in Section 3 they arrived at the correct kernel recursion through an erroneous argument. Such recursive kernels were later used with empirical success in the Gaussian process literature (Krauth et al., 2017), with a similar justification to that of Cho and Saul. The first case we are aware of using a Gaussian process construction with more than one hidden layer is the work of Hazan & Jaakkola (2015). Their contribution is similar in content to Lemma 1 discussed here, and the work has had increasing interest from the kernel community (Mitrovic et al., 2017). Recent work from Daniely et al. (2016) uses the concept of ‘computational skeletons’ to give concentration bounds on the difference in the second order moments of large finite networks and their kernel analogue, with strong assumptions on the inputs. The Gaussian process view given here, without strong input assumptions, is related but concerns not just the first two moments of a random network but the full distribution. As such the theorems we obtain are distinct. A less obvious connection is to the recent series of papers studying deep networks using a mean field approximation (Poole et al., 2016; Schoenholz et al., 2017). In those papers a second order approximation gives equivalent behaviour to the kernel recursion. By contrast, in this paper the claim is that the behaviour emerges as a consequence of increasing width and is therefore something that needs to be proved. 
Another surprising connection is to the analysis of self-normalizing neural networks (Klambauer et al., 2017). In their analysis the authors assume that the hidden layers are wide in order to invoke the central limit theorem. The premise of the central limit theorem will only hold approximately in layers after the first one and this theoretical barrier is something we discuss here. An area that is less related than might be expected is that of ‘Deep Gaussian Processes’ (DGPs) (Damianou & Lawrence, 2013). As will be discussed in Section 6, narrow intermediate representations mean that the marginal behaviour is not close to that of a Gaussian process. Duvenaud et al. (2014) offer an analysis that largely applies to DGPs though they also study the Cho and Saul recursion with the motivating argument from the original paper. 2 THE DEEP WIDE LIMIT We consider a fully connected network as shown in Figure 1. The inputs and outputs will be real valued vectors of dimension M and L respectively. The network is fully connected. The initial step and recursion are standard. The initial step is: f (1) i (x) = M∑ j=1 w (1) i,j xj + b (1) i . (1) Activations 1 Activities 1 Activations 2 Activities 2 Inputs Output We make the functional dependence on x explicit in our notation as it will help clarify what follows. For a network with D hidden layers the recursion is, for each µ = 1, . . . , D, g (µ) i (x) = φ(f (µ) i (x)) (2) f (µ+1) i (x) = Hµ∑ j=1 w (µ+1) i,j g (µ) j (x) + b (µ+1) i , (3) so that f (D+1)(x) is the output of the network given input x. φ denotes the non-linearity. In all cases the equations hold for each value of i; i ranges between 1 and Hµ in Equation (2), and between 1 and Hµ+1 in Equation (3) except in the case of the final activation where the top value is L. The network could of course be modified to be probability simplex-valued by adding a softmax at the end. A distribution on the parameters of the network will be assumed. Conditional on the inputs, this induces a distribution on the activations and activities. In particular we will assume independent normal distributions on the weights and biases w (µ) i,j ∼ N (0, C (µ) w ) indep (4) b (µ) i ∼ N (0, C (µ) b ) indep. (5) We will be interested in the behaviour of this network as the widths Hµ becomes large. The weight variances for µ ≥ 2 will be scaled according to the width of the network to avoid a divergence in the variance of the activities in this limit. As will become apparent, the appropriate scaling is C(µ)w = Ĉ (µ) w Hµ µ ≥ 2 . (6) The assumption is that Ĉ(µ)w will remain fixed as we take the limit. Neal (1996) analysed this problem for D = 1, showing that as H1 → ∞, the values of f (2)i (x), the output of the network in this case, converge to a certain multi-output Gaussian process if the activities have bounded variance. Since our approach relies on the multivariate central limit theorem we will arrange the relevant terms into (column) vectors to make the linear algebra clearer. Consider any two inputs x and x′ and all output functions ranging over the index i. We define the vector f (2)(x) of length L whose elements are the numbers f (2)i (x). We define f (2)(x′) similarly. For the weight matrices defined by w(µ)i,j for fixed µ we use a ‘placeholder’ index • to return column and row vectors from the weight matrices. In particular w(1)j,• denotes row j of the weight matrix at depth 1. Similarly, w (2) •,j denotes column j at depth 2. The biases are given as column vectors b(1) and b(2). 
Finally we concatenate the two vectors f^{(2)}(x) and f^{(2)}(x') into a single column vector F^{(2)} of size 2L. The vector in question takes the form

F^{(2)} = \begin{pmatrix} f^{(2)}(x) \\ f^{(2)}(x') \end{pmatrix} = \begin{pmatrix} b^{(2)} \\ b^{(2)} \end{pmatrix} + \sum_{j=1}^{H_1} \begin{pmatrix} w^{(2)}_{\bullet,j} \, \phi(w^{(1)}_{j,\bullet} x + b^{(1)}_j) \\ w^{(2)}_{\bullet,j} \, \phi(w^{(1)}_{j,\bullet} x' + b^{(1)}_j) \end{pmatrix}   (7)

The benefit of writing the relation in this form is that the applicability of the multivariate central limit theorem is immediately apparent. Each of the vector terms on this right hand side is independent and identically distributed conditional on the inputs x and x'. By assumption, the activities have bounded variance. The scaling we have chosen on the variances is precisely that required to ensure the applicability of the theorem. Therefore as H becomes large, F^{(2)} converges in distribution to a multivariate normal distribution. The limiting normal distribution is fully specified by its first two moments. Defining \gamma \sim N(0, C^{(1)}_b) and \epsilon \sim N(0, C^{(1)}_w I_M), the moments in question are:

E[f^{(2)}_i(x)] = 0   (8)
E[f^{(2)}_i(x) f^{(2)}_j(x')] = \delta_{i,j} \left[ \hat{C}^{(2)}_w \, E_{\epsilon,\gamma}[\phi(\epsilon^T x + \gamma)\, \phi(\epsilon^T x' + \gamma)] + C^{(2)}_b \right]   (9)

Note that we could have taken a larger set of input points to give a larger vector F and again we would conclude that this vector converged in distribution to a multivariate normal distribution. More formally, we can consider the set of possible inputs as an index set. A set of consistent finite dimensional Gaussian distributions on an index set corresponds to a Gaussian process by the Kolmogorov extension theorem. The Gaussian process in question is a distribution over functions defined on the product \sigma-algebra, which has the relevant finite dimensional distributions as its marginals.

In the case of a multivariate normal distribution, a set of variables having a covariance of zero implies that the variables are mutually independent. Looking at Equation (9), we see that the limiting distribution has independence between different components i, j of the output. Combining this with the recursion (2), we might intuitively suggest that the next layer also converges to a multivariate normal distribution in the limit of large H_\mu. Indeed we state the following lemma, which we attribute to Hazan & Jaakkola (2015):

Lemma 1 (Normal recursion). If the activations of a previous layer are normally distributed with moments:

E[f^{(\mu-1)}_i(x)] = 0   (10)
E[f^{(\mu-1)}_i(x) f^{(\mu-1)}_j(x')] = \delta_{i,j} K(x, x'),   (11)

then under the recursion (2) and as H \to \infty, the activations of the next layer converge in distribution to a normal distribution with moments

E[f^{(\mu)}_i(x)] = 0   (12)
E[f^{(\mu)}_i(x) f^{(\mu)}_j(x')] = \delta_{i,j} \left[ \hat{C}^{(\mu)}_w \, E_{(\epsilon_1,\epsilon_2) \sim N(0,K)}[\phi(\epsilon_1)\phi(\epsilon_2)] + C^{(\mu)}_b \right]   (13)

where K is a 2 x 2 matrix containing the input covariances.

Unfortunately the lemma is not sufficient to show that the joint distributions of the activations of higher layers converge to multivariate normal distributions. This is because for finite H the input activations do not have a multivariate normal distribution - this is only attained (weakly or in distribution) in the limit. It could be the case that the rate at which the limit distribution is attained affects the distribution in subsequent layers. We are able to offer the following theorem rigorously:

Theorem 1. Consider a Bayesian deep neural network of the form in Equations (1) and (2) using ReLU activation functions. Then there exist strictly increasing width functions h_\mu : N \to N such that H1 = h1(n), . . .
We conjecture that a more general theorem will hold. In particular we expect that the width functions $h_\mu$ can be taken to be the identity and that the non-linearity can be extended to monotone functions with well-behaved tails. Our conjecture is based on the intuition from Lemma 1 and from our experiments, in which we always take the width function to be the identity.

3 SPECIFIC KERNELS UNDER RECURSION

Cho & Saul (2009) suggest a family of kernels based on a recurrence designed to ‘mimic computation in large, multilayer neural nets’. It is therefore of interest to see how this relates to deep wide Gaussian processes. A kernel may be associated with a feature mapping $\Phi(x)$ such that $K(x, x') = \Phi(x) \cdot \Phi(x')$. Cho and Saul define a recursive kernel through a new feature mapping by compositions such as $\Phi(\Phi(x))$. However this cannot be a legitimate way to create a kernel, because such a composition represents a type error: there is no reason to think the output dimension of the function $\Phi$ matches the input dimension, and indeed the output dimension may well be infinite. Nevertheless, the paper provides an elegant solution to a different task: it derives a closed form solution to the recursion from Lemma 1 (Hazan & Jaakkola, 2015) for the special case

$\phi(u) = \Theta(u)\, u^{r}$ for $r = 0, 1, 2, 3$,   (14)

where $\Theta$ is the Heaviside step function. Specifically, the recursive approach of Cho & Saul (2009) can be adapted by using the fact that $u^{\top}z$ for $z \sim \mathcal{N}(0, LL^{\top})$ is equivalent in distribution to $(L^{\top}u)^{\top}\varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$, and by optionally augmenting u to incorporate the bias. Since r = 1 corresponds to rectified linear units we apply this analytic kernel recursion in all of our experiments.

4 MEASURING CONVERGENCE USING MAXIMUM MEAN DISCREPANCY

In this section we use the kernel based two sample tests of Gretton et al. (2012) to empirically measure the similarity of finite random neural networks to their Gaussian process analogues. The maximum mean discrepancy (MMD) between two distributions P and Q is defined as:

$\mathrm{MMD}(P, Q, \mathcal{H}) := \sup_{\|h\|_{\mathcal{H}} \leq 1} \big[\,\mathbb{E}_{P}[h] - \mathbb{E}_{Q}[h]\,\big]$   (15)

where $\mathcal{H}$ denotes a reproducing kernel Hilbert space and $\|\cdot\|_{\mathcal{H}}$ denotes the corresponding norm. It gives the biggest possible difference between expectations of a function under the two distributions, under the constraint that the function has Hilbert space norm less than or equal to one. We used the unbiased estimator of squared MMD given in Equation (3) of Gretton et al. (2012).

In this experiment and all those that follow we take weight variance parameters $\hat{C}^{(\mu)}_w = 0.8$ and bias variance $C_b = 0.2$. We took 10 standard normal input points in 4 dimensions and passed them through 2000 independent random neural networks drawn from the distribution discussed in this paper. This was then compared to 2000 samples drawn from the corresponding Gaussian process distribution. The experiment was performed with different numbers of hidden layers and numbers of units per hidden layer. We repeated each experiment 20 times, which allows us to reduce variance in our results and give a simple estimate of measurement error. The experiments use an RBF kernel for the MMD estimate with lengthscale 1/2. In order to help give an intuitive sense of the distances involved we also include a comparison between two Gaussian processes with isotropic RBF kernels using the same MMD distance measure.
The kernel length scales for this pair of ‘calibration’ Gaussian processes are taken to be $l$ and $2l$, where the characteristic length scale $l = \sqrt{8}$ is chosen to be sensible for the standard Normal input distribution on the four dimensional space.

The results of the experiment are shown in Figure 2. We see that for each fixed depth the network converges towards the corresponding Gaussian process as the width increases. For the same number of hidden units per layer, the MMD distance between the networks and their Gaussian process analogue becomes higher as depth increases. The rate of convergence to the Gaussian process is slower as the number of hidden layers is increased.
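For completeness, a small sketch of the unbiased squared-MMD estimator used for comparisons of this kind (Gretton et al., 2012), with an RBF kernel applied to the stacked output vectors. The helper names and the exact RBF lengthscale convention are our assumptions rather than a description of the authors' released code; in the experiment above, X would hold the 2000 network draws at the 10 fixed inputs and Y the 2000 Gaussian process draws.

import numpy as np

def rbf_gram(A, B, lengthscale):
    # Squared Euclidean distances between rows, then the RBF kernel.
    d2 = np.sum(A ** 2, axis=1)[:, None] + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def mmd2_unbiased(X, Y, lengthscale=0.5):
    # Unbiased estimator of squared MMD between two samples (rows are draws).
    m, n = len(X), len(Y)
    Kxx = rbf_gram(X, X, lengthscale)
    Kyy = rbf_gram(Y, Y, lengthscale)
    Kxy = rbf_gram(X, Y, lengthscale)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2.0 * Kxy.sum() / (m * n))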
5 COMPARING BAYESIAN DEEP NETWORKS TO GAUSSIAN PROCESSES

In this section we compare the behaviour of finite Bayesian deep networks of the form considered in this paper with their Gaussian process analogues. If we make the networks wide enough the agreement will be very close. It is also of interest, however, to consider the behaviour of networks actually used in the literature, so we use 3 hidden layers and 50 hidden units, which is typical of the networks used by Hernández-Lobato & Adams (2015). Fully connected Bayesian deep networks with finite variance priors on the weights have also been considered in other works (Graves, 2011; Hernández-Lobato et al., 2016; Blundell et al., 2015), though the specific details vary. We use rectified linear units and correct the variances to avoid a loss of prior variance as depth is increased.

Our general strategy was to compare exact Gaussian process inference against expensive ‘gold standard’ Markov Chain Monte Carlo (MCMC) methods. We choose the latter because, used correctly, it works well enough to largely remove questions of posterior approximation quality from the calculus of comparison. It does mean however that our empirical study does not extend to datasets which are large in terms of number of data points or dimensionality, where such inference is challenging. We therefore sound a note of caution about extrapolating our empirical finite network conclusions too confidently to this domain. On the other hand, lower dimensional, prior dominated problems are generally regarded as an area of strength for Bayesian approaches, and in this context our results are directly relevant.

We computed the posterior moments by the two different methods on some example datasets. For the MCMC we used Hamiltonian Monte Carlo (HMC) (Neal, 2010) updates interleaved with elliptical slice sampling (Murray et al., 2010). We considered a simple one dimensional problem and a two dimensional real valued embedding of the four data point XOR problem. We see in Figures 3 and 4 (left) that the agreement in the posterior moments between the Gaussian process and the Bayesian deep network is very close.

A key quantity of interest in Bayesian machine learning is the marginal likelihood. It is the normalising constant of the posterior distribution and gives a measure of the model fit to the data. For a Bayesian neural network, it is generally very difficult to compute, but with care and computational time it can be approximated using Hamiltonian annealed importance sampling (Sohl-Dickstein & Culpepper, 2012). The log-importance weights attained in this way constitute a stochastic lower bound on the marginal likelihood (Grosse et al., 2015). Figure 4 (right) shows the result of such an experiment compared against the (extremely cheap) Gaussian process marginal likelihood computation on the XOR problem. The value of the log-marginal likelihood computed in the two different ways agrees to within a single nat, which is negligible from a model selection perspective (Grosse et al., 2015).

Predictive log-likelihood is a measure of the quality of probabilistic predictions given by a Bayesian regression method on a test point. To compare the two models we sampled 10 standard normal train and test points in 4 dimensions and passed them through a random network of the type under study to get regression targets. We then discarded the true network parameters and compared the predictions of posterior inference between the two methods. We also compared the marginal predictive distributions of a latent function value. Figure 5 shows the results. We see that the correspondence in predictive log-likelihood is close but not exact. Similarly the marginal function values are close to those of a Gaussian process but are slightly more concentrated.
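The Gaussian process side of these comparisons only requires standard exact inference. A minimal Cholesky-based sketch of the posterior moments and log marginal likelihood (following Rasmussen & Williams, 2006), assuming a Gaussian observation noise variance and kernel blocks built, for instance, with the deep_relu_kernel sketch above; the MCMC side of the comparison is deliberately not sketched here.

import numpy as np

def gp_posterior_and_lml(K_train, K_cross, K_test, y, noise_var):
    # K_train: (n, n) train covariance; K_cross: (n, m) train-test covariance;
    # K_test: (m, m) test covariance; y: (n,) targets; noise_var: observation noise.
    n = len(y)
    L = np.linalg.cholesky(K_train + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = K_cross.T @ alpha                      # posterior predictive mean
    v = np.linalg.solve(L, K_cross)
    cov = K_test - v.T @ v                        # posterior predictive covariance
    lml = -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2.0 * np.pi)
    return mean, cov, lml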
6 AVOIDING GAUSSIAN PROCESS BEHAVIOUR

When using deep Bayesian neural networks as priors, the emergence of Gaussian process priors raises important questions in the cases where it is applicable, even if one sets aside questions of computational tractability. It has been argued in the literature that there are important cases where kernel machines with local kernels will perform badly (Bengio et al., 2005). The analysis applies to the posterior mean of a Gaussian process. The emergent kernels in our case are hyperparameter free. Although they do not meet the strict definition of what could be considered ‘local’, the fact remains that any Gaussian process with a fixed kernel does not use a learnt hierarchical representation. Such representations are widely regarded as essential to the success of deep learning. There is relevant literature here on learning the representation of a standard, usually structured, network composed with a Gaussian process (Wilson et al., 2016a;b; Al-Shedivat et al., 2017). This differs from the assumed paradigm of this paper, where all model complexity is specified probabilistically and we do not assume convolutional, recurrent or other problem specific structure.

Within this paradigm, the question therefore arises as to what can be done to avoid marginal Gaussian process behaviour if it is not desired. Speaking loosely, to stop the onset of the central limit theorem and the approximate analogues discussed in this paper one needs to make sure that one or more of its conditions is far from being met. Since the chief conditions on the summands are independence, bounded variance and many terms, violating these assumptions will remove Gaussian process behaviour. Deep Gaussian processes (Damianou & Lawrence, 2013) are not close to standard Gaussian processes marginally because they are typically used with narrow intermediate layers. It can be challenging to choose the precise nature of these narrow layers a priori. Neal (1996) suggests using networks with infinite variance in the activities. With a single hidden layer and correctly scaled, these networks become alpha stable processes in the wide limit. Neal also discusses variants that destroy independence by coupling weights. Our results about the emergence of Gaussian processes even with more than one hidden layer mean these ideas are of considerable interest going forward.

7 CONCLUSIONS

Studying the limiting behaviour of distributions on feedforward networks has been a fruitful avenue for understanding these models historically. In this paper we have extended the state of knowledge about the wide limit, including for networks with more than one hidden layer. In particular, we have exhibited limit sequences of networks that converge in distribution to Gaussian processes with a certain recursively defined kernel. Our empirical study using MMD suggests that this behaviour is exhibited in a variety of models of size comparable to networks used in the literature. This led us to juxtapose finite Bayesian neural networks with their Gaussian process analogues, finding that the agreement in terms of the key quantities of interest is close empirically. If this Gaussian process behaviour is desired then exact and approximate inference using the analytic properties of Gaussian processes should be considered as an alternative to neural network inference. Since any Gaussian process has an equivalent flat representation, in the context of deep learning the behaviour may well not be desired, and steps should then be taken to avoid it. We view these results as a new opportunity to further the understanding of neural networks in the work that follows. Initialisation and learning dynamics are crucial topics of study in modern deep learning which require that we understand random networks. Bayesian neural networks should offer a principled approach to generalisation, but this relies on successfully approximating a clearly understood prior. In illustrating the continued importance of Gaussian processes as limit distributions, we hope that our results will further research in these broader areas.

8 ACKNOWLEDGEMENTS

We wish to thank Neil Lawrence for helpful conversations. We also thank the anonymous reviewers for their insights. Alexander Matthews and Zoubin Ghahramani acknowledge the support of EPSRC Grant EP/N014162/1 and EPSRC Grant EP/N510129/1 (The Alan Turing Institute). Jiri Hron holds a Nokia CASE Studentship. Mark Rowland acknowledges support by EPSRC grant EP/L016516/1 for the Cambridge Centre for Analysis. Richard E. Turner is supported by Google as well as EPSRC grants EP/M0269571 and EP/L000776/1.

A PROOF OF MAIN THEOREM

A.1 STATEMENT OF THEOREM AND NOTATION

In this section, we provide a proof of the main theorem of the paper, which we begin by recalling.

Theorem 1. Consider a Bayesian deep neural network of the form in Equations (1) and (2) using ReLU activation functions. Then there exist strictly increasing width functions $h_\mu : \mathbb{N} \to \mathbb{N}$ such that $H_1 = h_1(n), \ldots, H_D = h_D(n)$, and for any countable input set $(x[i])_{i=1}^{\infty}$, the distribution of the output of the network converges in distribution to a Gaussian process as $n \to \infty$.

The theorem is proven via use of the propositions that follow below. The broad structure of the proof is to use a particular variant of the Berry-Esseen inequality to upper bound how far each layer is from a multivariate normal distribution, and then to inductively propagate these inequalities through the network, leading to a bound on the distance between the output of the network for a collection of input points and a multivariate Gaussian distribution. These notions will be made precise below. We begin in Section A.2 by stating the propositions that will be used in the proof of Theorem 1, but first establish notation that will be used in the remainder of the appendix.
Given a finite set of inputs $x[1], \ldots, x[n] \in \mathbb{R}^M$, we will write:

• $f^{(\mu)}(x)$ for the random variables $(f^{(\mu)}(x[i]))_{i=1}^{n}$ collectively taking values in $\mathbb{R}^{nH_\mu}$;
• $f^{(\mu)}_j(x)$ for the random variables $(f^{(\mu)}_j(x[i]))_{i=1}^{n}$ collectively taking values in $\mathbb{R}^{n}$;
• $g^{(\mu)}(x)$ for the random variables $(g^{(\mu)}(x[i]))_{i=1}^{n}$ collectively taking values in $\mathbb{R}^{nH_\mu}$;
• $g^{(\mu)}_j(x)$ for the random variables $(g^{(\mu)}_j(x[i]))_{i=1}^{n}$ collectively taking values in $\mathbb{R}^{n}$.

Throughout, if $U_1, U_2$ are random variables taking values in some Euclidean space $\mathbb{R}^d$, we will define

$d(U_1, U_2) = \sup_{A \subseteq \mathbb{R}^d,\ A\ \mathrm{convex}} \big|\mathbb{P}(U_1 \in A) - \mathbb{P}(U_2 \in A)\big| .$

Note that convergence of a sequence of random variables in this metric implies convergence in distribution. We will also consider multivariate normal distributions $(Z^{(\mu)}_j(x[i]) \mid j = 1, \ldots, H_\mu,\ i = 1, \ldots, n)$ with covariance matrices of block diagonal form, such that $\mathrm{Cov}(Z^{(\mu)}_k(x[a]), Z^{(\mu)}_l(x[b])) = 0$ for distinct $k, l \in \{1, \ldots, H_\mu\}$, for all $x[a], x[b]$. To avoid writing this in full every time it is required, we will refer to this condition as blockwise independence with respect to the index j. We will avoid specification of all covariance values, deferring to the expression (13) given in the main paper. Finally, to simplify notation, we will assume that the network output is one-dimensional. Our proof trivially extends to arbitrary finite output dimension, where the limiting distribution is a coordinate-wise independent multivariate GP.

A.2 SUPPORTING RESULTS

Proposition 1. Let $\varepsilon > 0$, and $x[1], \ldots, x[n] \in \mathbb{R}^M$. Let $\mu \in \{2, \ldots, D+1\}$, and let $H_k = 2^{H_{k+1}^2}$ for $k = 1, \ldots, D-1$. Then for $H_D$ sufficiently large, suppose the condition
$d\big(f^{(\mu-1)}(x), Z^{(\mu-1)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu-1}^{D} H_k}\,\varepsilon$
holds, where $Z^{(\mu-1)}(x) = (Z^{(\mu-1)}_j(x[i]) \mid j = 1, \ldots, H_{\mu-1},\ i = 1, \ldots, n)$ is mean-zero multivariate normal, with blockwise independence with respect to the index j. Then we have
$d\big(f^{(\mu)}(x), Z^{(\mu)}(x)\big) \leq 2^{-((D+1)-\mu) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
where $Z^{(\mu)}(x) = (Z^{(\mu)}_j(x[i]) \mid j = 1, \ldots, H_\mu,\ i = 1, \ldots, n)$ is mean-zero multivariate normal, with blockwise independence with respect to the index j.

Proposition 2. Let $\varepsilon > 0$, and $x[1], \ldots, x[n] \in \mathbb{R}^M$. If $H_k = 2^{H_{k+1}^2}$ for $k = 1, \ldots, D-1$, then for $H_D$ sufficiently large, we have
$d\big(f^{(D+1)}(x), Z(x)\big) \leq \varepsilon ,$
where $Z(x)$ is a mean-zero multivariate normal random variable.

In establishing the two propositions above, the following three lemmas will be useful.

Lemma 2. Let $\varepsilon > 0$, and let $Z^{(\mu-1)}(x) = (Z^{(\mu-1)}_j(x[i]) \mid j = 1, \ldots, H_{\mu-1},\ i = 1, \ldots, n)$ be mean-zero multivariate normal, with blockwise independence with respect to the index j. Let $\tilde{g}^{(\mu-1)}(x) = \phi(Z^{(\mu-1)}(x))$, and let $\tilde{f}^{(\mu)}(x)$ be given by
$\tilde{f}^{(\mu)}(x[i]) = \sum_{j=1}^{H_{\mu-1}} w^{(\mu)}_{\bullet,j}\, \tilde{g}^{(\mu-1)}_j(x[i]) + b^{(\mu)} ,$
for $i = 1, \ldots, n$. Then given $\varepsilon > 0$, if $H_k = 2^{H_{k+1}^2}$ for $k = 1, \ldots, D-1$, then for all sufficiently large $H_D$ we have
$d\big(\tilde{f}^{(\mu)}(x), Z^{(\mu)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
where $Z^{(\mu)}(x) = (Z^{(\mu)}_j(x[i]) \mid j = 1, \ldots, H_\mu,\ i = 1, \ldots, n)$ is mean-zero multivariate normal, with blockwise independence with respect to the index j.

Lemma 3. Let $Z^{(\mu-1)}(x) = (Z^{(\mu-1)}_j(x[i]) \mid j = 1, \ldots, H_{\mu-1},\ i = 1, \ldots, n)$ be mean-zero multivariate normal, with blockwise independence with respect to the index j, such that for some $\varepsilon > 0$,
$d\big(Z^{(\mu-1)}(x), f^{(\mu-1)}(x)\big) \leq \varepsilon .$
Then, defining $\tilde{f}^{(\mu)}(x)$ by
$\tilde{f}^{(\mu)}(x[i]) = \sum_{j=1}^{H_{\mu-1}} w^{(\mu)}_{\bullet,j}\, \phi\big(Z^{(\mu-1)}_j(x[i])\big) + b^{(\mu)} ,$
in the particular case where $\phi$ is the elementwise ReLU function, we have
$d\big(\tilde{f}^{(\mu)}(x), f^{(\mu)}(x)\big) \leq 2^{nH_{\mu-1}}\,\varepsilon .$
Lemma 4. Let $X_1, \ldots, X_{H_{\mu-1}}$ be iid random variables of the form $X_j = \tilde{g}^{(\mu-1)}_j(x) \otimes \tilde{w}^{(\mu)}_{\bullet,j}$, where $\otimes$ denotes the Kronecker product, $\tilde{g}^{(\mu-1)}_j(x)$ is defined as in Lemma 2, and $\tilde{w}^{(\mu)}_{\bullet,j}$ is a multivariate normal variable taking values in $\mathbb{R}^{H_\mu}$ with mean vector 0 and covariance $\hat{C}^{(\mu)}_w I$. We denote the variance of $X_j$ by $\Sigma_\otimes$ and its Schur decomposition as $\Sigma_\otimes = Q_\otimes \Lambda_\otimes Q_\otimes^{\top}$. Then $\beta = \mathbb{E}\big[\|Q_\otimes \Lambda_\otimes^{-1/2} Q_\otimes^{\top} X_j\|^3\big] \leq C_{H_\mu,n}$, where $C_{H_\mu,n} \in \mathbb{R}$ depends on $H_\mu$ and n, but is independent of $H_{\mu-1}$. Further, we have $C_{H_\mu,n} = O(H_\mu^2 n^2)$.

A.3 PROOFS

Proof of Lemma 2. We use a straightforward variant of a particular Berry-Esseen inequality described in Bentkus (2003). We first state this result from the literature, and then derive a straightforward variation that we will use in the sequel.

Theorem 2 (From Bentkus (2003)). Let $X_1, \ldots, X_n$ be iid random variables taking values in $\mathbb{R}^d$, with mean vector 0, identity covariance matrix, and $\beta = \mathbb{E}\big[\|X_i\|^3\big] < \infty$. Let $S_n = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} X_i$, and let Y be a standard d-dimensional multivariate normal random vector. Then we have
$\sup_{A \subseteq \mathbb{R}^d,\ A\ \mathrm{convex}} \big|\mathbb{P}(S_n \in A) - \mathbb{P}(Y \in A)\big| \leq \frac{400\, d^{1/4}\, \beta}{\sqrt{n}} .$

We need a mildly modified version of this theorem to deal with iid random vectors $X_1, \ldots, X_n$ with non-identity covariance matrices. To this end, suppose that $\Sigma$ is the (full-rank) covariance matrix of each $X_i$, with decomposition $\Sigma = RR^{\top}$ for some invertible matrix R; R can be obtained, for example, by using a Cholesky or Schur decomposition. The random variables $R^{-1}X_1, \ldots, R^{-1}X_n$ are then iid, mean zero and with identity covariance matrices, so we may apply Theorem 2 to obtain
$\sup_{A \subseteq \mathbb{R}^d,\ A\ \mathrm{convex}} \big|\mathbb{P}(R^{-1}S_n \in A) - \mathbb{P}(Y \in A)\big| \leq \frac{400\, d^{1/4}\, \beta}{\sqrt{n}} ,$
where $\beta = \mathbb{E}\big[\|R^{-1}X_i\|^3\big]$. Now note that this is equivalent to
$\sup_{A \subseteq \mathbb{R}^d,\ A\ \mathrm{convex}} \big|\mathbb{P}(S_n \in RA) - \mathbb{P}(RY \in RA)\big| \leq \frac{400\, d^{1/4}\, \beta}{\sqrt{n}} ,$
noting that $RY \sim \mathcal{N}(0, \Sigma)$. Since R is invertible, and recalling the definition of the distance d above, this is exactly equivalent to
$d(S_n, RY) \leq \frac{400\, d^{1/4}\, \beta}{\sqrt{n}} ,$   (16)
which is the variant of Bentkus' result we will require in the sequel.

We apply this bound to the sum
$\sum_{j=1}^{H_{\mu-1}} \tilde{g}^{(\mu-1)}_j(x) \otimes w^{(\mu)}_{\bullet,j} ,$
noting that the summands indexed by j are iid by assumption, with the expected third moment norm featuring in the Berry-Esseen inequality upper-bounded by $\beta \leq C_{H_\mu,n}$, for some constant $C_{H_\mu,n}$ depending on $H_\mu$ and n but independent of $H_{\mu-1}$ (finiteness of $C_{H_\mu,n}$ follows from Lemma 4). As a consequence, we have the following bound:
$d\Big(\textstyle\sum_{j=1}^{H_{\mu-1}} \tilde{g}^{(\mu-1)}_j(x) \otimes w^{(\mu)}_{\bullet,j},\ Z'(x)\Big) \leq 400\, C_{H_\mu,n}\, (nH_\mu)^{1/4} \big/ \sqrt{H_{\mu-1}} ,$
where $Z'(x) = (Z'_j(x[i]) \mid j = 1, \ldots, H_\mu,\ i = 1, \ldots, n)$ is mean-zero multivariate normal, with blockwise independence with respect to the index j. We wish to demonstrate that this is less than or equal to $2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon$ when $H_D$ is sufficiently large. This is equivalent to showing that
$400\, C_{H_\mu,n}\, (nH_\mu)^{1/4}\, 2^{((D+1)-(\mu-1)) + n\sum_{k=\mu}^{D} H_k} \big/ \sqrt{H_{\mu-1}} \leq \varepsilon$
for all sufficiently large $H_D$. But note that with $H_{k-1} = 2^{H_k^2}$ for $k = \mu, \ldots, D-1$, the left-hand side converges to 0 as $H_D$ increases (using the bound obtained for $C_{H_\mu,n}$ in Lemma 4), so for all $H_D$ sufficiently large, we obtain
$d\Big(\textstyle\sum_{j=1}^{H_{\mu-1}} \tilde{g}^{(\mu-1)}_j(x) \otimes w^{(\mu)}_{\bullet,j},\ Z'(x)\Big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
as required. Adding the independent bias vector $b^{(\mu)}$ immediately yields
$d\Big(\mathbf{1}_n \otimes b^{(\mu)} + \textstyle\sum_{j=1}^{H_{\mu-1}} \tilde{g}^{(\mu-1)}_j(x) \otimes w^{(\mu)}_{\bullet,j},\ Z(x)\Big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
where $Z(x)$ is mean-zero multivariate normal, with the same block-diagonal covariance structure as described for $Z'(x)$ above, and $\mathbf{1}_n \in \mathbb{R}^n$ is a vector of 1's.
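A small numerical illustration of the whitening step in the proof above (the passage from $X_i$ to $R^{-1}X_i$ before applying Theorem 2), with an arbitrary 2 × 2 covariance chosen purely for concreteness:

import numpy as np

# If X has covariance Sigma = R R^T, then R^{-1} X has identity covariance.
rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
R = np.linalg.cholesky(Sigma)
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200000)
whitened = X @ np.linalg.inv(R).T      # each row is R^{-1} x
print(np.cov(whitened.T))              # close to the 2 x 2 identity matrix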
Proof of Lemma 3. Let $A \subseteq \mathbb{R}^{nH_\mu}$ be an arbitrary convex set. First, observe that we have
$\mathbb{P}\Big(\mathbf{1}_n \otimes b^{(\mu)} + \textstyle\sum_{j=1}^{H_{\mu-1}} \phi\big(\tilde{f}^{(\mu-1)}_j(x)\big) \otimes w^{(\mu)}_{\bullet,j} \in A\Big)$
$= \mathbb{E}\Big[\mathbb{P}\Big(\mathbf{1}_n \otimes b^{(\mu)} + \textstyle\sum_{j=1}^{H_{\mu-1}} \phi\big(\tilde{f}^{(\mu-1)}_j(x)\big) \otimes w^{(\mu)}_{\bullet,j} \in A \,\Big|\, w^{(\mu)}, b^{(\mu)}\Big)\Big]$
$= \mathbb{E}\Big[\mathbb{P}\Big(\textstyle\sum_{j=1}^{H_{\mu-1}} \phi\big(\tilde{f}^{(\mu-1)}_j(x)\big) \otimes w^{(\mu)}_{\bullet,j} \in A - \mathbf{1}_n \otimes b^{(\mu)} \,\Big|\, w^{(\mu)}, b^{(\mu)}\Big)\Big]$   (17)
Now, note that for fixed $w^{(\mu)}$ and $b^{(\mu)}$, the event
$\Big\{\textstyle\sum_{j=1}^{H_{\mu-1}} \phi\big(\tilde{f}^{(\mu-1)}_j(x)\big) \otimes w^{(\mu)}_{\bullet,j} \in A - \mathbf{1}_n \otimes b^{(\mu)} \,\Big|\, w^{(\mu)}, b^{(\mu)}\Big\}$
is exactly that the vector $\phi(\tilde{f}^{(\mu-1)}(x))$ lies in the preimage of the convex set $A - \mathbf{1}_n \otimes b^{(\mu)}$ under the linear map $w^{(\mu)}$, which is again a convex set. Secondly, observe that for the specific ReLU nonlinearity $\phi$, if C is an arbitrary convex set, then $\{f^{(\mu-1)}(x) \mid \phi(f^{(\mu-1)}(x)) \in C\}$ may be written as the disjoint union of at most $2^{nH_{\mu-1}}$ convex sets:
$\{f^{(\mu-1)}(x) \mid \phi(f^{(\mu-1)}(x)) \in C\} = \big(\phi^{-1}(C) \cap \{t \in \mathbb{R}^{nH_{\mu-1}} \mid t_i \geq 0\ \forall i\}\big) \ \cup \bigcup_{I \subseteq \{1,\ldots,nH_{\mu-1}\},\ I \neq \emptyset} \{t \in \mathbb{R}^{nH_{\mu-1}} \mid t_I < 0,\ \exists y \in C \text{ s.t. } y_{I^c} = t_{I^c},\ y_I = 0\} .$
Applying the assumed bound in the statement of the lemma to each of these sets, we obtain
$\big|\mathbb{P}\big(\phi(f^{(\mu-1)}(x)) \in C\big) - \mathbb{P}\big(\phi(\tilde{f}^{(\mu-1)}(x)) \in C\big)\big| \leq 2^{nH_{\mu-1}}\,\varepsilon .$
Substituting this bound into the conditional probability (17) yields
$\big|\mathbb{P}\big(f^{(\mu)}(x) \in A\big) - \mathbb{P}\big(\tilde{f}^{(\mu)}(x) \in A\big)\big| \leq 2^{nH_{\mu-1}}\,\varepsilon .$
Since A was an arbitrary convex set, the proof is complete.

Proof of Lemma 4. Note that by independence of $\tilde{g}^{(\mu-1)}_j(x)$ from $\tilde{w}^{(\mu)}_{\bullet,j}$ we have that each $X_j$ has mean zero and covariance $\Sigma_\otimes = \Sigma \otimes \hat{C}^{(\mu)}_w I$, where $\Sigma$ is the covariance matrix of $\tilde{g}^{(\mu-1)}_j(x)$. By standard properties of the Kronecker product, the Schur decomposition of $\Sigma_\otimes$ is $(Q\Lambda Q^{\top}) \otimes (\hat{C}^{(\mu)}_w I)$, where $Q\Lambda Q^{\top}$ is the Schur decomposition of $\Sigma$. Simple algebraic manipulation yields:
$\mathbb{E}\big[\|Q_\otimes \Lambda_\otimes^{-1/2} Q_\otimes^{\top}\big(\tilde{g}^{(\mu-1)}_j(x) \otimes \tilde{w}^{(\mu)}_{\bullet,j}\big)\|^3\big] = \mathbb{E}\big[\|\big(Q\Lambda^{-1/2}Q^{\top}\tilde{g}^{(\mu-1)}_j(x)\big) \otimes \big((\hat{C}^{(\mu)}_w)^{-1/2}\tilde{w}^{(\mu)}_{\bullet,j}\big)\|^3\big] = \mathbb{E}\big[\|Q\Lambda^{-1/2}Q^{\top}\tilde{g}^{(\mu-1)}_j(x)\|^3\big]\ \mathbb{E}\big[\|(\hat{C}^{(\mu)}_w)^{-1/2}\tilde{w}^{(\mu)}_{\bullet,j}\|^3\big] .$
Notice that the random variable $(\hat{C}^{(\mu)}_w)^{-1/2}\tilde{w}^{(\mu)}_{\bullet,j}$ follows the $H_\mu$-dimensional standard normal distribution, and thus its squared norm follows the chi-squared distribution with $H_\mu$ degrees of freedom, which is also known as the $\mathrm{Gamma}(H_\mu/2,\ 1/2)$ distribution. Exponentiating to the power of 3/2 and taking the expectation, we obtain:
$\mathbb{E}\big[\|(\hat{C}^{(\mu)}_w)^{-1/2}\tilde{w}^{(\mu)}_{\bullet,j}\|^3\big] = 2^{3/2}\, \frac{\Gamma((H_\mu+3)/2)}{\Gamma(H_\mu/2)} .$
Finally, $\|Q\Lambda^{-1/2}Q^{\top}\tilde{g}^{(\mu-1)}_j(x)\|^3 \leq \|\tilde{g}^{(\mu-1)}_j(x)\|^3 / \lambda_{\min}^{3/2}$, where $\lambda_{\min}$ is the smallest value on the diagonal of $\Lambda$. If the activation $\phi$ does not increase the norm of the input vector (as is the case for rectified linear), we have $\|\tilde{g}^{(\mu-1)}_j(x)\|^3 \leq \|Z^{(\mu-1)}_j(x)\|^3$ almost surely, where $Z^{(\mu-1)}_j(x)$ follows the known n-dimensional normal distribution with mean zero and covariance matrix whose Schur decomposition will be denoted as $U\Psi U^{\top}$. Using standard Gaussian identities, we can write
$\mathbb{E}\big[\|Z^{(\mu-1)}_j(x)\|^3\big] = \mathbb{E}\big[\|U\Psi^{1/2}\varepsilon\|^3\big] = \mathbb{E}\big[\|\Psi^{1/2}\varepsilon\|^3\big] \leq 2^{3/2}\, \psi_{\max}^{3/2}\, \frac{\Gamma((n+3)/2)}{\Gamma(n/2)} ,$
where $\varepsilon \sim \mathcal{N}(0, I_n)$ and $\psi_{\max}$ is the highest entry on the diagonal of $\Psi$. Putting it all together, we arrive at the desired upper bound $C_{H_\mu,n}$:
$\mathbb{E}\big[\|Q_\otimes \Lambda_\otimes^{-1/2} Q_\otimes^{\top} X_j\|^3\big] \leq \Big(4\, \frac{\psi_{\max}}{\lambda_{\min}}\Big)^{3/2}\, \frac{\Gamma((H_\mu+3)/2)}{\Gamma(H_\mu/2)}\, \frac{\Gamma((n+3)/2)}{\Gamma(n/2)} .$
Because $\psi_{\max}$ and $\lambda_{\min}$ are derived from the distribution of the limiting variable $\tilde{g}^{(\mu-1)}_j(x) = \phi(Z^{(\mu-1)}_j(x))$, which only depends on $\mu$, the bound only depends on $H_\mu$ and n as desired. Further, noting that $\Gamma((x+3)/2)/\Gamma(x/2) = O(x^2)$, we have that $C_{H_\mu,n} = O(H_\mu^2 n^2)$, as required.
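A quick Monte Carlo check of the chi-squared moment identity used just above, $\mathbb{E}[\|\varepsilon\|^3] = 2^{3/2}\,\Gamma((d+3)/2)/\Gamma(d/2)$ for $\varepsilon \sim \mathcal{N}(0, I_d)$; the dimension and sample size are arbitrary illustrative choices:

import numpy as np
from scipy.special import gammaln

d = 5
rng = np.random.default_rng(0)
eps = rng.normal(size=(1000000, d))
mc = np.mean(np.linalg.norm(eps, axis=1) ** 3)                     # Monte Carlo estimate
exact = 2 ** 1.5 * np.exp(gammaln((d + 3) / 2) - gammaln(d / 2))   # closed form
print(mc, exact)   # the two values should agree to a few decimal places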
Proof of Proposition 1. We first apply Lemma 3 to the assumed inequality
$d\big(f^{(\mu-1)}(x), Z^{(\mu-1)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu-1}^{D} H_k}\,\varepsilon$
to obtain
$d\big(f^{(\mu)}(x), \tilde{f}^{(\mu)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon .$
We apply Lemma 2 so that for $H_D$ sufficiently large, we have
$d\big(\tilde{f}^{(\mu)}(x), Z^{(\mu)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon .$
Applying the triangle inequality then yields
$d\big(f^{(\mu)}(x), Z^{(\mu)}(x)\big) \leq 2^{-((D+1)-\mu) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
as required.

Proof of Proposition 2. The idea of the proof is to chain Proposition 1 together across the layers of the network. We fix $\varepsilon > 0$, and apply Proposition 1 to each layer of the network, yielding the following set of implications for $H_D$ sufficiently large:
$d\big(f^{(\mu-1)}(x), Z^{(\mu-1)}(x)\big) \leq 2^{-((D+1)-(\mu-1)) - n\sum_{k=\mu-1}^{D} H_k}\,\varepsilon \ \Longrightarrow\ d\big(f^{(\mu)}(x), Z^{(\mu)}(x)\big) \leq 2^{-((D+1)-\mu) - n\sum_{k=\mu}^{D} H_k}\,\varepsilon ,$
for $\mu \in \{2, \ldots, D+1\}$. Finally, note that from the definition of the network, the distribution of $f^{(1)}(x)$ is exactly multivariate normal with the required covariance structure, so that $d(f^{(1)}(x), Z^{(1)}(x)) = 0$, completing the proof.

Proof of Theorem 1. To prove that $(f^{(D+1)}(x[i]))_{i=1}^{\infty}$ converges weakly to a Gaussian process with respect to the metric $\rho$ on $\mathbb{R}^{\mathbb{N}}$ given by
$\rho(v, v') = \sum_{i=1}^{\infty} 2^{-i} \min\big(1, |v_i - v'_i|\big) \quad \forall v, v' \in \mathbb{R}^{\mathbb{N}} ,$
it is sufficient (Billingsley, 1999, p. 19) to prove weak convergence of the finite-dimensional marginals of the process to multivariate Gaussian random variables, with covariance matrix matching that specified by the kernel of the proposed Gaussian process. To this end, let I be a finite subset of $\mathbb{N}$, and consider the inputs $(x[i])_{i \in I}$. We may now apply Proposition 2 to obtain weak convergence of the joint distribution of the output variables of the network, $(f^{(D+1)}(x[i]))_{i \in I}$, to a multivariate Gaussian with the correct covariance matrix. As the finite subset of inputs was arbitrary, we are done.
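As a closing numerical aside, the width schedule $H_k = 2^{H_{k+1}^2}$ used in Propositions 1 and 2 is a sufficient condition for the argument rather than a necessary one, and it forces the lower layers to be astronomically wide. A tiny illustration (the choice D = 3, H_D = 3 is arbitrary):

# Width schedule H_k = 2 ** (H_{k+1} ** 2) for D = 3 hidden layers with H_D = 3.
H = {3: 3}
for k in (2, 1):
    H[k] = 2 ** (H[k + 1] ** 2)
# H == {3: 3, 2: 512, 1: 2 ** 262144}, i.e. H_1 already has about 79,000 decimal digits.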
1. What is the focus of the paper regarding Bayesian neural networks? 2. What are the strengths and weaknesses of the paper, particularly in its significance and practical value? 3. How does the reviewer assess the paper's contribution to the existing body of work, including Neal (1994) and recent studies on neural networks? 4. What are the limitations of the paper, especially in its attribution to Gaussian processes? 5. Are there any recent works that the reviewer suggests the authors should discuss in their paper, specifically related to "deep kernel learning"?
Review
Review The authors study the limiting behaviour for wide Bayesian neural networks, comparing to Gaussian processes. The paper is well written, and the experiments are enlightening. This work is a nice follow up to Neal (1994), and recent work considering similar results for neural networks with more than one hidden layer. It does add to our understanding of this body of work. The weakness of this paper is in its significance and practical value. This infinite limit loses much of the interesting representation in neural networks because the variance of the weights goes to zero. Thus it’s unclear whether these formulations will have many of the benefits of standard neural networks, and whether they’re particularly related to standard neural networks at all. There also don’t seem to be many practical takeaways from the experiments, and the experiments themselves do not consider any predictive tasks at all. It would be nice to see some practical benefit for a predictive task actually demonstrated in the paper. I am not sure what exactly I would do differently in training large neural networks based on the results of this paper, and the possible takeaways are not tested here on real applications. This paper also seems to erroneously attribute this limitation of the Neal (1994) limit, and its multilayer extensions, to Gaussian processes in the section “avoiding Gaussian process behaviour”. The problems with that construction are not a profound limitation of Gaussian processes in general. If we can learn the kernel function, then we can learn an interesting representation that does not have these limitations and still use a GP. We could alternatively treat the kernel parameters probabilistically, but the fact that in this case we would not marginally have a GP any longer is mostly incidental. The discussed limitations are more about specific kernel choices, and lack of kernel learning, than about “GP behaviour”. Indeed, while the discussion of related work is mostly commendable, the authors should also discuss the recent papers on “deep kernel learning”: i) http://proceedings.mlr.press/v51/wilson16.pdf ii) https://papers.nips.cc/paper/6426-stochastic-variational-deep-kernel-learning.pdf iii) http://www.jmlr.org/papers/volume18/16-498/16-498.pdf In particular, these papers do indeed learn flexible representations with Gaussian processes by using kernels constructed with neural networks. They avoid the behaviour discussed in the last section of your paper, but still use a Gaussian process. The network structures themselves are trained through the marginal likelihood of the Gaussian process. This approach effectively learns an infinite number of adaptive basis functions, parametrized through the structural properties of a neural network. Computations are made scalable and practical through exploiting algebraic structure. Overall I enjoyed reading your paper.
ICLR
Title Gaussian Process Behaviour in Wide Deep Neural Networks Abstract Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. 1 INTRODUCTION Deep feedforward neural networks have emerged as an essential component of modern machine learning. As such there has been significant research effort in trying to understand the theoretical properties of such models. One important branch of such research is the study of random networks. By assuming a probability distribution on the network parameters, a distribution is induced on the input to output function that such networks encode. This has proved important in the study of initialisation and learning dynamics (Schoenholz et al., 2017) and expressivity (Poole et al., 2016). It is, of course, essential in the study of Bayesian priors on networks (Neal, 1996). The Bayesian approach makes little sense if prior assumptions are not understood, and distributional knowledge can be essential in finding good posterior approximations. Since we typically want our networks to have high modelling capacity, it is natural to consider limit distributions of networks as they become large. Whilst distributions on deep networks are generally challenging to work with exactly, the limiting behaviour can lead to more insight. Further, as we shall see, networks used in the literature may be very close to this behaviour. The seminal work in this area is that of Neal (1996), which showed that under certain conditions random neural networks with one hidden layer converge to a Gaussian process. The question of the type of convergence is non-trivial and part of our discussion.
Historically this result was a significant one because it provided a connection between flexible Bayesian neural networks and Gaussian processes (Williams, 1998; Rasmussen & Williams, 2006) 1Code for the experiments in the paper can be found at https://github.com/ widedeepnetworks/widedeepnetworks 1.1 OUR CONTRIBUTIONS We extend the theoretical understanding of random fully connected networks and their relationship to Gaussian processes. In particular, we prove a rigorous result (Theorem 1) on the convergence of certain finite networks with more than one hidden layer to Gaussian processes. Further, we empirically study the distance between finite networks and their Gaussian process analogues by using maximum mean discrepancy (Gretton et al., 2012) as a distance measure. We find that Bayesian deep networks from the literature can exhibit predictions that are close to Gaussian processes. To demonstrate this, we systematically compare exact Gaussian process inference with ‘gold standard’ MCMC inference for Bayesian neural networks. Our work is of relevance to the theoretical understanding of neural network initialisation and dynamics. It is also important in the area of Bayesian deep networks because it demonstrates that Gaussian process behaviour can arise in more situations of practical interest than previously thought. If this behaviour is desired then Gaussian process inference (exact and approximate) should also be considered. In some scenarios, the behaviour may not be desired because it implies a lack of a hierarchical representation. We therefore highlight promising ideas from the literature to prevent such behaviour. 1.2 RELATED WORK The case of random neural networks with one hidden layer was studied by Neal (1996). Cho & Saul (2009) provided analytic expressions for single layer kernels including those corresponding to a rectified linear unit (ReLU). They also studied recursive kernels designed to ‘mimic computation in large, multilayer neural nets’. As discussed in Section 3 they arrived at the correct kernel recursion through an erroneous argument. Such recursive kernels were later used with empirical success in the Gaussian process literature (Krauth et al., 2017), with a similar justification to that of Cho and Saul. The first case we are aware of using a Gaussian process construction with more than one hidden layer is the work of Hazan & Jaakkola (2015). Their contribution is similar in content to Lemma 1 discussed here, and the work has had increasing interest from the kernel community (Mitrovic et al., 2017). Recent work from Daniely et al. (2016) uses the concept of ‘computational skeletons’ to give concentration bounds on the difference in the second order moments of large finite networks and their kernel analogue, with strong assumptions on the inputs. The Gaussian process view given here, without strong input assumptions, is related but concerns not just the first two moments of a random network but the full distribution. As such the theorems we obtain are distinct. A less obvious connection is to the recent series of papers studying deep networks using a mean field approximation (Poole et al., 2016; Schoenholz et al., 2017). In those papers a second order approximation gives equivalent behaviour to the kernel recursion. By contrast, in this paper the claim is that the behaviour emerges as a consequence of increasing width and is therefore something that needs to be proved. 
First, observe that we have P 1n ⊗ b(µ) + Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A =E P 1n ⊗ b(µ) + Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A∣∣∣∣w(µ), b(µ) =E P Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A− 1n ⊗ b(µ)∣∣∣∣w(µ), b(µ) (17) Now, note that for fixed w(µ) and b(µ), the event Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A− 1n ⊗ b(µ)∣∣∣∣w(µ), b(µ) is exactly that the vector φ(f̃ (µ−1)(x)) lies in the preimage of the convex set A − 1n ⊗ b(µ) under the linear map w(µ), which is again a convex set. Secondly, observe that for the specific ReLU nonlinearity φ, if C is an arbitrary convex set, then {(f (µ−1)(x)|φ(f (µ−1)(x)) ∈ C} may be written as the disjoint union of at most 2nHµ−1 convex sets: {f (µ−1)(x)|φ(f (µ−1)(x)) ∈ C} = (φ−1(C) ∩ {t ∈ RnHµ−1 |ti ≥ 0 ∀i})∪⋃ I⊆{1,...,nHµ−1} I 6=∅ {t ∈ RnHµ−1 |tI < 0,∃y ∈ C s.t. yIc = tIc , yI = 0} . Applying the assumed bound in the statement of the lemma to each of these sets, we obtain |P(φ(f (µ−1)(x)) ∈ C)− P(φ(f̃ (µ−1)(x)) ∈ C)| ≤ 2nHµ−1ε . Substituting this bound into the conditional probability (17) yields |P(f (µ)(x) ∈ A)− P(f̃ (µ)(x) ∈ A)| ≤ 2nHµ−1ε . Since A was an arbitrary convex set, the proof is complete. Proof of Lemma 4. Note that by independence of g̃(µ−1)j (x) from w̃ (µ) •,j we have that each Xj has mean zero and covariance Σ⊗ = Σ ⊗ Ĉ(µ)w I where Σ is the covariance matrix of g̃(µ−1)j (x). By standard properties of the Kronecker product, the Schur decomposition of Σ⊗ is (QΛQT )⊗(Ĉ(µ)w I) where QΛQT is the Schur decomposition of Σ. Simple algebraic manipulation yields: E [ ‖Q⊗Λ−1/2⊗ QT⊗(g̃ (µ−1) j (x)⊗ w̃ (µ) •,j )‖ 3 ] = E [ ‖(QΛ−1/2QT g̃(µ−1)j (x))⊗ ((Ĉ (µ) w ) −1/2w̃ (µ) •,j )‖ 3 ] = E [ ‖QΛ−1/2QT g̃(µ−1)j (x)‖ 3 ] E [ ‖(Ĉ(µ)w )−1/2w̃ (µ) •,j ‖ 3 ] . Notice that the random variable (Ĉ(µ)w )−1/2w̃ (µ) •,j follows the RHµ -dimensional standard normal distribution, and thus its squared norm follows the chi-squared distribution withHµ degrees of freedom, which is also known as the Gamma(Hµ/2, 1/2) distribution. Exponentiating to the power of 3/2 and taking the expectation, we obtain: E [ ‖(Ĉ(µ)w )−1/2w̃ (µ) •,j ‖ 3 ] = 23/2 Γ((Hµ + 3)/2) Γ(Hµ/2) . Finally, ‖QΛ−1/2QT g̃(µ−1)j (x)‖3 ≤ ‖g̃ (µ−1) j (x)‖3/λ 3/2 min where λmin is the smallest value on the diagonal of Λ. If the activation φ does not increase the norm of the input vector (as is the case for rectified linear), we have ‖g̃(µ−1)j (x)‖3 ≤ ‖Z (µ−1) j (x)‖3 almost surely, where Z (µ−1) j (x) follows the known n-dimensional normal distribution with mean zero and covariance matrix whose Schur decomposition will be denoted as UΨUT . Using standard Gaussian identities, we can write E [ ‖Z(µ−1)j (x)‖ 3 ] = E [ ‖UΨ1/2ε‖3 ] = E [ ‖Ψ1/2ε‖3 ] ≤ 23/2ψ3/2max Γ((n+ 3)/2) Γ(n/2) , where ε ∼ N (0, In) and ψmax is the highest entry on the diagonal of Ψ. Putting it all together, we arrive at the desired upper bound CHµ,n E [ ‖Q⊗Λ−1/2⊗ QT⊗Xj‖3 ] ≤ ( 4 ψmax λmin )3/2 Γ((Hµ + 3)/2) Γ(Hµ/2) Γ((n+ 3)/2) Γ(n/2) . Because ψmax and λmin are derived from the distribution of the limiting variable g̃ (µ−1) j (x) = φ(Z (µ−1) j (x)), which only depends on µ, the bound only depends on Hµ and n as desired. Further, noting that Γ((x+3)/2)/Γ(x/2) = O(x2), we have that CHµ,n = O(H2µn2), as required. Proof of Proposition 1. We first apply Lemma 3 to the assumed inequality d(f (µ−1)(x), Z(µ−1)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µ−1Hkε , to obtain d(f (µ)(x), f̃ (µ)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µHkε . We apply Lemma 2 so that for HD sufficiently large, we have d(f̃ (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µHkε . 
Applying the triangle inequality then yields d(f (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−µ)−n ∑D k=µHkε , as required. Proof of Proposition 2. The idea of the proof is to chain Proposition 1 together across the layers of the network. We fix ε > 0, and apply Proposition 1 to each layer of the network, yielding the following set of implications for HD sufficiently large: d(f (µ−1)(x), Z(µ−1)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µ−1Hkε =⇒ d(f (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−µ)−n ∑D k=µHkε , for µ ∈ {2, . . . , D + 1}. Finally, note that from the definition of the network, the distribution of f (1)(x) is exactly multivariate normal with the required covariance structure, so that d(f (1)(x), Z(1)(x)) = 0, completing the proof. Proof of Theorem 1. To prove that (f (D+1)(x[i]))∞i=1 converges weakly to a Gaussian process with respect to the metric ρ on RN given by: ρ(v, v′) = ∞∑ i=1 2−i min(1, |vi − v′i|) ∀v, v′ ∈ RN , it is sufficient (Billingsley, 1999, p. 19) to prove weak convergence of the finite-dimensional marginals of the process to multivariate Gaussian random variables, with covariance matrix matching that specified by the kernel of the proposed Gaussian process. To this end, let I be a finite subset of N, and consider the inputs (x[i])i∈I . We may now apply Proposition 2 to obtain weak convergence of the joint distribution of the output variables of the network, (f (D+1)(x[i]))i∈I to a multivariate Gaussian with the correct covariance matrix. As the finite subset of inputs was arbitrary, we are done.
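The limiting kernel referred to above as expression (13) is built from the Gaussian expectation E[φ(e1)φ(e2)]; for the ReLU nonlinearity used throughout, this expectation has the closed form discussed in Section 3. The following self-contained sketch (NumPy; the function names are ours and the test covariance values are arbitrary) checks that closed form against a Monte-Carlo estimate, as a quick sanity check of the quantity that drives the kernel recursion.

```python
import numpy as np

def relu_moment_mc(k11, k22, k12, n_samples=200_000, seed=0):
    """Monte-Carlo estimate of E[ReLU(e1) * ReLU(e2)], (e1, e2) ~ N(0, K)."""
    rng = np.random.default_rng(seed)
    K = np.array([[k11, k12], [k12, k22]])
    e = rng.multivariate_normal(np.zeros(2), K, size=n_samples)
    g = np.maximum(e, 0.0)                       # elementwise ReLU
    return float(np.mean(g[:, 0] * g[:, 1]))

def relu_moment_closed_form(k11, k22, k12):
    """Closed form of the same expectation (the r = 1 case of Eq. (14))."""
    s = np.sqrt(k11 * k22)
    cos_t = np.clip(k12 / s, -1.0, 1.0)
    theta = np.arccos(cos_t)
    return s / (2.0 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos_t)

# The two values should agree up to Monte-Carlo error.
print(relu_moment_mc(1.0, 1.3, 0.4))
print(relu_moment_closed_form(1.0, 1.3, 0.4))
```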
1. What is the main contribution of the paper regarding deep neural networks and Gaussian processes?
2. What are the strengths and weaknesses of the paper in terms of its relevance and novelty?
3. Do you have any concerns or questions regarding the experimental verification of the convergence to a GP?
4. How does the reviewer assess the practical relevance of the result, particularly in comparison to other applications of deep learning?
5. Are there any specific points in the paper that the reviewer would like clarification or further discussion on?
Review
Review
- Summary
The paper is well written and proves how deep, wide, fully connected NNs are equivalent to GPs in the limit. This result, which was well known for single-layer NNs, is now extended to the multilayer case. Although there was already previous work suggesting this GP behavior, there was no formal proof under the specific conditions presented here. The convergence to a GP is also verified experimentally on some toy examples.
- Relevance
The result itself does not feel very novel because variants of it were already available. Unfortunately, although making other researchers aware of this is worthy, the application of this result seems limited, since in fact it describes and lets us know more about a regime that we would rather avoid, rather than one we want to exploit. Most of the applications of deep learning benefit from strong structured priors that cannot be represented as a GP. This is properly acknowledged in the paper. The lack of practical relevance combined with the not-groundbreaking novelty of the result makes this paper less appealing.
- Other comments
Page 6: "It does mean however that our empirical study does not extend to larger datasets where such inference is prohibitively expensive (...) prior dominated problems are generally regarded as an area of strength for Bayesian approaches and in this context our results are directly relevant." Although that argument can hold for datasets that are large in terms of number of data points, it does not hold for datasets that are large in terms of number of dimensions. The empirical study could have used very high-dimensional datasets with comparatively low amounts of training data. That would maintain a regime where the prior does matter and better show the generality of the results.
Page 6: "We use rectified linear units and correct the variances to avoid a loss of prior variance as depth is increased as discussed in Section 3" Are you sure this is discussed in Section 3?
Page 4: "This is because for finite H the input activations do not have a multivariate normal distribution". Can you elaborate on this? Since we are interested in the infinite limit, why is this a problem?
ICLR
Title Gaussian Process Behaviour in Wide Deep Neural Networks Abstract Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. 1 N/A Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks will converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations we discuss ways in which it might be prevented. 1 1 INTRODUCTION Deep feedforward neural networks have emerged as an essential component of modern machine learning. As such there has been significant research effort in trying to understand the theoretical properties of such models. One important branch of such research is the study of random networks. By assuming a probability distribution on the network parameters, a distribution is induced on the input to output function that such networks encode. This has proved important in the study of initialisation and learning dynamics (Schoenholz et al., 2017) and expressivity (Poole et al., 2016). It is, of course, essential in the study of Bayesian priors on networks (Neal, 1996). The Bayesian approach makes little sense if prior assumptions are not understood, and distributional knowledge can be essential in finding good posterior approximations. Since we typically want our networks to have high modelling capacity, it is natural to consider limit distributions of networks as they become large. Whilst distributions on deep networks are generally challenging to work with exactly, the limiting behaviour can lead to more insight. Further, as we shall see, networks used in the literature may be very close to this behaviour. The seminal work in this area is that of Neal (1996), which showed that under certain conditions random neural networks with one hidden layer converge to a Gaussian process. The question of the type of convergence is non-trivial and part of our discussion. 
Historically this result was a significant one because it provided a connection between flexible Bayesian neural networks and Gaussian processes (Williams, 1998; Rasmussen & Williams, 2006) 1Code for the experiments in the paper can be found at https://github.com/ widedeepnetworks/widedeepnetworks 1.1 OUR CONTRIBUTIONS We extend the theoretical understanding of random fully connected networks and their relationship to Gaussian processes. In particular, we prove a rigorous result (Theorem 1) on the convergence of certain finite networks with more than one hidden layer to Gaussian processes. Further, we empirically study the distance between finite networks and their Gaussian process analogues by using maximum mean discrepancy (Gretton et al., 2012) as a distance measure. We find that Bayesian deep networks from the literature can exhibit predictions that are close to Gaussian processes. To demonstrate this, we systematically compare exact Gaussian process inference with ‘gold standard’ MCMC inference for Bayesian neural networks. Our work is of relevance to the theoretical understanding of neural network initialisation and dynamics. It is also important in the area of Bayesian deep networks because it demonstrates that Gaussian process behaviour can arise in more situations of practical interest than previously thought. If this behaviour is desired then Gaussian process inference (exact and approximate) should also be considered. In some scenarios, the behaviour may not be desired because it implies a lack of a hierarchical representation. We therefore highlight promising ideas from the literature to prevent such behaviour. 1.2 RELATED WORK The case of random neural networks with one hidden layer was studied by Neal (1996). Cho & Saul (2009) provided analytic expressions for single layer kernels including those corresponding to a rectified linear unit (ReLU). They also studied recursive kernels designed to ‘mimic computation in large, multilayer neural nets’. As discussed in Section 3 they arrived at the correct kernel recursion through an erroneous argument. Such recursive kernels were later used with empirical success in the Gaussian process literature (Krauth et al., 2017), with a similar justification to that of Cho and Saul. The first case we are aware of using a Gaussian process construction with more than one hidden layer is the work of Hazan & Jaakkola (2015). Their contribution is similar in content to Lemma 1 discussed here, and the work has had increasing interest from the kernel community (Mitrovic et al., 2017). Recent work from Daniely et al. (2016) uses the concept of ‘computational skeletons’ to give concentration bounds on the difference in the second order moments of large finite networks and their kernel analogue, with strong assumptions on the inputs. The Gaussian process view given here, without strong input assumptions, is related but concerns not just the first two moments of a random network but the full distribution. As such the theorems we obtain are distinct. A less obvious connection is to the recent series of papers studying deep networks using a mean field approximation (Poole et al., 2016; Schoenholz et al., 2017). In those papers a second order approximation gives equivalent behaviour to the kernel recursion. By contrast, in this paper the claim is that the behaviour emerges as a consequence of increasing width and is therefore something that needs to be proved. 
Another surprising connection is to the analysis of self-normalizing neural networks (Klambauer et al., 2017). In their analysis the authors assume that the hidden layers are wide in order to invoke the central limit theorem. The premise of the central limit theorem will only hold approximately in layers after the first one and this theoretical barrier is something we discuss here. An area that is less related than might be expected is that of ‘Deep Gaussian Processes’ (DGPs) (Damianou & Lawrence, 2013). As will be discussed in Section 6, narrow intermediate representations mean that the marginal behaviour is not close to that of a Gaussian process. Duvenaud et al. (2014) offer an analysis that largely applies to DGPs though they also study the Cho and Saul recursion with the motivating argument from the original paper. 2 THE DEEP WIDE LIMIT We consider a fully connected network as shown in Figure 1. The inputs and outputs will be real valued vectors of dimension M and L respectively. The network is fully connected. The initial step and recursion are standard. The initial step is: f (1) i (x) = M∑ j=1 w (1) i,j xj + b (1) i . (1) Activations 1 Activities 1 Activations 2 Activities 2 Inputs Output We make the functional dependence on x explicit in our notation as it will help clarify what follows. For a network with D hidden layers the recursion is, for each µ = 1, . . . , D, g (µ) i (x) = φ(f (µ) i (x)) (2) f (µ+1) i (x) = Hµ∑ j=1 w (µ+1) i,j g (µ) j (x) + b (µ+1) i , (3) so that f (D+1)(x) is the output of the network given input x. φ denotes the non-linearity. In all cases the equations hold for each value of i; i ranges between 1 and Hµ in Equation (2), and between 1 and Hµ+1 in Equation (3) except in the case of the final activation where the top value is L. The network could of course be modified to be probability simplex-valued by adding a softmax at the end. A distribution on the parameters of the network will be assumed. Conditional on the inputs, this induces a distribution on the activations and activities. In particular we will assume independent normal distributions on the weights and biases w (µ) i,j ∼ N (0, C (µ) w ) indep (4) b (µ) i ∼ N (0, C (µ) b ) indep. (5) We will be interested in the behaviour of this network as the widths Hµ becomes large. The weight variances for µ ≥ 2 will be scaled according to the width of the network to avoid a divergence in the variance of the activities in this limit. As will become apparent, the appropriate scaling is C(µ)w = Ĉ (µ) w Hµ µ ≥ 2 . (6) The assumption is that Ĉ(µ)w will remain fixed as we take the limit. Neal (1996) analysed this problem for D = 1, showing that as H1 → ∞, the values of f (2)i (x), the output of the network in this case, converge to a certain multi-output Gaussian process if the activities have bounded variance. Since our approach relies on the multivariate central limit theorem we will arrange the relevant terms into (column) vectors to make the linear algebra clearer. Consider any two inputs x and x′ and all output functions ranging over the index i. We define the vector f (2)(x) of length L whose elements are the numbers f (2)i (x). We define f (2)(x′) similarly. For the weight matrices defined by w(µ)i,j for fixed µ we use a ‘placeholder’ index • to return column and row vectors from the weight matrices. In particular w(1)j,• denotes row j of the weight matrix at depth 1. Similarly, w (2) •,j denotes column j at depth 2. The biases are given as column vectors b(1) and b(2). 
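The prior just specified is straightforward to sample from, which is useful for reproducing the finite-width comparisons later in the paper. The sketch below (NumPy; the helper name is ours, and the default variances 0.8 and 0.2 simply mirror the values reported in the experimental sections) draws a single random ReLU network and evaluates it at a batch of inputs, applying the 1/Hµ scaling of Equation (6) to every layer after the first.

```python
import numpy as np

def sample_network_output(x, widths, c_w_hat=0.8, c_b=0.2, rng=None):
    """Evaluate one random ReLU network drawn from the prior of Eqs. (1)-(6).

    x      : (n_points, M) array of inputs.
    widths : [H_1, ..., H_D], the hidden-layer widths.
    The first-layer weight variance is c_w_hat; every later layer uses
    variance c_w_hat / H_mu, i.e. the scaling of Eq. (6).
    """
    rng = np.random.default_rng() if rng is None else rng
    g = x                                               # current "activities"
    fan_in, w_std = x.shape[1], np.sqrt(c_w_hat)        # layer 1 is unscaled
    for H in widths:
        W = rng.normal(0.0, w_std, size=(fan_in, H))
        b = rng.normal(0.0, np.sqrt(c_b), size=H)
        g = np.maximum(g @ W + b, 0.0)                  # f^{(mu)}, then ReLU
        fan_in, w_std = H, np.sqrt(c_w_hat / H)         # scaled variance for mu >= 2
    W = rng.normal(0.0, w_std, size=(fan_in, 1))        # linear read-out layer
    b = rng.normal(0.0, np.sqrt(c_b), size=1)
    return g @ W + b                                    # f^{(D+1)}(x), shape (n_points, 1)

# e.g. many independent draws at the same inputs, to study the output distribution:
# outs = np.stack([sample_network_output(X, [50, 50, 50]) for _ in range(2000)])
```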
Finally we concatenate the two vectors f (2)(x) and f (2)(x′) into a single column vector F (2) of size 2L. The vector in question takes the form F (2) = ( f (2)(x) f (2)(x′) ) = ( b(2) b(2) ) + H1∑ j=1 ( w (2) •,j φ(w (1) j,• x+ b (1) j ) w (2) •,j φ(w (1) j,• x ′ + b (1) j ) ) (7) The benefit of writing the relation in this form is that the applicability of the multivariate central limit theorem is immediately apparent. Each of the vector terms on this right hand side is independent and identically distributed conditional on the inputs x and x′. By assumption, the activities have bounded variance. The scaling we have chosen on the variances is precisely that required to ensure the applicability of the theorem. Therefore as H becomes large F (2) converges in distribution to a multivariate normal distribution. The limiting normal distribution is fully specified by its first two moments. Defining γ ∼ N (0, C(1)b ), ∼ N (0, C (1) w IM ), the moments in question are: E [ f (2) i (x) ] = 0 (8) E [ f (2) i (x)f (2) j (x ′) ] = δi,j [ Ĉ(2)w E ,γ [ φ( Tx+ γ)φ( Tx′ + γ) ] + C (2) b ] (9) Note that we could have taken a larger set of input points to give a larger vector F and again we would conclude that this vector converged in distribution to a multivariate normal distribution. More formally, we can consider the set of possible inputs as an index set. A set of consistent finite dimensional Gaussian distributions on an index set corresponds to a Gaussian process by the Kolmogorov extension theorem. The Gaussian process in question is a distribution over functions defined on the product σ-algebra, which has the relevant finite dimensional distributions as its marginals. In the case of a multivariate normal distribution a set of variables having a covariance of zero implies that the variables are mutually independent. Looking at Equation (9), we see that the limiting distribution has independence between different components i, j of the output. Combining this with the recursion (2), we might intuitively suggest that the next layer also converges to a multivariate normal distribution in the limit of large Hµ. Indeed we state the following lemma, which we attribute to Hazan & Jaakkola (2015): Lemma 1 (Normal recursion). If the activations of a previous layer are normally distributed with moments: E [ f (µ−1) i (x) ] = 0 (10) E [ f (µ−1) i (x)f (µ−1) j (x ′) ] = δi,jK(x, x ′), (11) Then under the recursion (2) and asH →∞ the activations of the next layer converge in distribution to a normal distribution with moments E [ f (µ) i (x) ] = 0 (12) E [ f (µ) i (x)f (µ) j (x ′) ] = δi,j [ Ĉ(µ)w E( 1, 2)∼N (0,K)[φ( 1)φ( 2)] + C (µ) b ] (13) where K is a 2× 2 matrix containing the input covariances. Unfortunately the lemma is not sufficient to show that the joint distribution of the activations of higher layers converge in distribution to a multivariate normals. This is because for finite H the input activations do not have a multivariate normal distribution - this is only attained (weakly or in distribution) in the limit. It could be the case that the rate at which the limit distribution is attained affects the distribution in subsequent layers. We are able to offer the following theorem rigorously: Theorem 1. Consider a Bayesian deep neural network of the form in Equations (1) and (2) using ReLU activation functions. Then there exist strictly increasing width functions hµ : N 7→ N such that H1 = h1(n), . . . 
,HD = hD(n), and for any countable input set (x[i])∞i=1, the distribution of the output of the network converges in distribution to a Gaussian process as n→∞. A proof is included in the appendix. We conjecture that a more general theorem will hold. In particular we expect that the width functions hµ can be taken to be the identity and that the nonlinearity can be extended to monotone functions with well behaved tails. Our conjecture is based on the intuition from Lemma 1 and from our experiments, in which we always take the width function to be the identity. 3 SPECIFIC KERNELS UNDER RECURSION Cho & Saul (2009) suggest a family of kernels based on a recurrence designed to ‘mimic computation in large, multilayer neural nets’. It is therefore of interest to see how this relates to deep wide Gaussian processes. A kernel may be associated with a feature mapping Φ(x) such that K(x, x′) = Φ(x) • Φ(x′). Cho and Saul define a recursive kernel through a new feature mapping by compositions such as Φ(Φ(x)). However this cannot be a legitimate way to create a kernel because such a composition represents a type error. There is no reason to think the output dimension of the function Φ matches the input dimension and indeed the output dimension may well be infinite. Nevertheless, the paper provides an elegant solution to a different task: it derives closed form solution to the recursion from Lemma 1 (Hazan & Jaakkola, 2015) for the special case φ(u) = Θ(u)ur for r = 0, 1, 2, 3 , (14) where Θ is the Heaviside step function. Specifically, the recursive approach of Cho & Saul (2009) can be adapted by using the fact that u>z for z ∼ N (0, LL>) is equivalent in distribution to (L>u)>ε with ε ∼ N (0, I), and by optionally augmenting u to incorporate the bias. Since r = 1 corresponds to rectified linear units we apply this analytic kernel recursion in all of our experiments. 4 MEASURING CONVERGENCE USING MAXIMUM MEAN DISCREPANCY In this section we use the kernel based two sample tests of Gretton et al. (2012) to empirically measure the similarity of finite random neural networks to their Gaussian process analogues. The maximum mean discrepancy (MMD) between two distributions P and Q is defined as: MMD(P,Q,H) := sup ||h||H≤1 [ EP [h]− EQ[h] ] (15) where H denotes a reproducing kernel Hilbert space and || • ||H denotes the corresponding norm. It gives the biggest possible difference between expectations of a function under the two distributions under the constraint that the function has Hilbert space norm less than or equal to one. We used the unbiased estimator of squared MMD given in Equation (3) of Gretton et al. (2012). In this experiment and all those that follow we take weight variance parameters Ĉ(µ)w = 0.8 and bias variance Cb = 0.2. We took 10 standard normal input points in 4 dimensions and pass them through 2000 independent random neural networks drawn from the distribution discussed in this paper. This was then compared to 2000 samples drawn from the corresponding Gaussian process distribution. The experiment was performed with different numbers of hidden layers and numbers of units per hidden layer. We repeated each experiment 20 times which allows us to reduce variance in our results and give a simple estimate of measurement error. The experiments use an RBF kernel for the MMD estimate with lengthscale 1/2. In order to help give an intuitive sense of the distances involved we also include a comparison between two Gaussian processes with isotropic RBF kernels using the same MMD distance measure. 
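As a reference point for the estimator just described, the following sketch computes a standard form of the unbiased estimate of squared MMD with an RBF kernel (NumPy; the function names are ours, it may differ cosmetically from the exact Equation (3) of Gretton et al. (2012), and the lengthscale parameterisation k(x, y) = exp(-||x - y||^2 / (2 l^2)) is an assumption about the convention meant by "lengthscale 1/2"). Applied to a matrix of finite-network outputs and a matrix of Gaussian process samples at the same input points, it yields the quantity whose behaviour over width and depth is reported in Figure 2.

```python
import numpy as np

def rbf_gram(X, Y, lengthscale=0.5):
    """k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)) for all pairs."""
    d2 = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * lengthscale**2))

def mmd2_unbiased(X, Y, lengthscale=0.5):
    """Unbiased estimate of squared MMD between samples X ~ P and Y ~ Q."""
    m, n = len(X), len(Y)
    Kxx = rbf_gram(X, X, lengthscale)
    Kyy = rbf_gram(Y, Y, lengthscale)
    Kxy = rbf_gram(X, Y, lengthscale)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))   # drop i = j terms
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()
```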
The kernel length scales for this pair of ‘calibration’ Gaussian processes are taken to be l and 2l, where the characteristic length scale l = √ 8 is chosen to be sensible for the standard Normal input distribution on the four dimensional space. The results of the experiment are shown in Figure 2. We see that for each fixed depth the network converges towards the corresponding Gaussian process as the width increases. For the same number of hidden units per layer, the MMD distance between the networks and their Gaussian process analogue becomes higher as depth increases. The rate of convergence to the Gaussian process is slower as the number of hidden layers is increased. 5 COMPARING BAYESIAN DEEP NETWORKS TO GAUSSIAN PROCESSES In this section we compare the behaviour of finite Bayesian deep networks of the form considered in this paper with their Gaussian process analogues. If we make the networks wide enough the agreement will be very close. It is also of interest, however, to consider the behaviour of networks actually used in the literature, so we use 3 hidden layers and 50 hidden units which is typical of the networks used by Hernández-Lobato & Adams (2015). Fully connected Bayesian deep networks with finite variance priors on the weights have also been considered in other works (Graves, 2011; Hernández-Lobato et al., 2016; Blundell et al., 2015), though the specific details vary. We use rectified linear units and correct the variances to avoid a loss of prior variance as depth is increased. Our general strategy was to compare exact Gaussian process inference against expensive ‘gold standard’ Markov Chain Monte Carlo (MCMC) methods. We choose the latter because used correctly it works well enough to largely remove questions of posterior approximation quality from the calculus of comparison. It does mean however that our empirical study does not extend to datasets which are large in terms of number of data points or dimensionality, where such inference is challenging. We therefore sound a note of caution about extrapolating our empirical finite network conclusions too confidently to this domain. On the other hand, lower dimensional, prior dominated problems are generally regarded as an area of strength for Bayesian approaches and in this context our results are directly relevant. We computed the posterior moments by the two different methods on some example datasets. For the MCMC we used Hamiltonian Monte Carlo (HMC) (Neal, 2010) updates interleaved with elliptical slice sampling (Murray et al., 2010). We considered a simple one dimensional problem and a two dimensional real valued embedding of the four data point XOR problem. We see in Figures 3 and 4 (left) that the agreement in the posterior moments between the Gaussian process and the Bayesian deep network is very close. A key quantity of interest in Bayesian machine learning is the marginal likelihood. It is the normalising constant of the posterior distribution and gives a measure of the model fit to the data. For a Bayesian neural network, it is generally very difficult to compute, but with care and computational time it can be approximated using Hamiltonian annealed importance sampling (Sohl-Dickstein & Culpepper, 2012). The log-importance weights attained in this way constitute a stochastic lower bound on the marginal likelihood (Grosse et al., 2015). Figure 4 (right) shows the result of such an experiment compared against the (extremely cheap) Gaussian process marginal likelihood computation on the XOR problem. 
The value of the log-marginal likelihood computed in the two different ways agree to within a single nat which is negligible from a model selection perspective (Grosse et al., 2015). Predictive log-likelihood is a measure of the quality of probabilistic predictions given by a Bayesian regression method on a test point. To compare the two models we sampled 10 standard normal train and test points in 4 dimensions and passed them through a random network of the type under study to get regression targets. We then discarded the true network parameters and compared the predictions of posterior inference between the two methods. We also compared the marginal predictive distributions of a latent function value. Figure 5 shows the results. We see that the correspondence in predictive log-likelihood is close but not exact. Similarly the marginal function values are close to those of a Gaussian process but are slightly more concentrated. 6 AVOIDING GAUSSIAN PROCESS BEHAVIOUR When using deep Bayesian neural networks as priors, the emergence of Gaussian priors raises important questions in the cases where it is applicable, even if one sets aside questions of computational tractability. It has been argued in the literature that there are important cases where kernel machines with local kernels will perform badly (Bengio et al., 2005). The analysis applies to the posterior mean of a Gaussian process. The emergent kernels in our case are hyperparameter free. Although they do not meet the strict definition of what could be considered ‘local’ the fact remains that any Gaussian process with a fixed kernel does not use a learnt hierarchical representation. Such representations are widely regarded to be essential to the success of deep learning. There is relevant literature here on learning the representation of a standard, usually structured, network composed with a Gaussian process (Wilson et al., 2016a;b; Al-Shedivat et al., 2017). This differs from the assumed paradigm of this paper, where all model complexity is specified probabilistically and we do not assume convolutional, recurrent or other problem specific structure. Within this paradigm, the question therefore arises as to what can be done to avoid marginal Gaussian process behaviour if it is not desired. Speaking loosely, to stop the onset of the central limit theorem and the approximate analogues discussed in this paper one needs to make sure that one or more of its conditions is far from being met. Since the chief conditions on the summands are independence, bounded variance and many terms, violating these assumptions will remove Gaussian process behaviour. Deep Gaussian processes (Damianou & Lawrence, 2013) are not close to standard Gaussian processes marginally because they are typically used with narrow intermediate layers. It can be challenging to choose the precise nature of these narrow layers a priori. Neal (1996) suggests using networks with infinite variance in the activities. With a single hidden layer and correctly scaled, these networks become alpha stable processes in the wide limit. Neal also discusses variants that destroy independence by coupling weights. Our results about the emergence of Gaussian processes even with more than one hidden layer mean these ideas are of considerable interest going forward. 7 CONCLUSIONS Studying the limiting behaviour of distributions on feedforward networks has been a fruitful avenue for understanding these models historically. 
In this paper we have extended the state of knowledge about the wide limit, including for networks with more than one hidden layer. In particular, we have exhibited limit sequences of networks that converge in distribution to Gaussian processes with a certain recursively defined kernel. Our empirical study using MMD suggests that this behaviour is exhibited in a variety of models of size comparable to networks used in the literature. This led us to juxtapose finite Bayesian neural networks with their Gaussian process analogues, finding that the agreement in terms of key predictors is close empirically. If this Gaussian process behaviour is desired then exact and approximate inference using the analytic properties of Gaussian processes should be considered as an alternative to neural network inference. Since Gaussian processes have an equivalent flat representation then in the context of deep learning the behaviour may well not be desired and steps should be taken to avoid it. We view these results as a new opportunity to further the understanding of neural networks in the work that follows. Initialisation and learning dynamics are crucial topics of study in modern deep learning which require that we understand random networks. Bayesian neural networks should offer a principled approach to generalisation but this relies on successfully approximating a clearly understood prior. In illustrating the continued importance of Gaussian processes as limit distributions, we hope that our results will further research in these broader areas. 8 ACKNOWLEDGEMENTS We wish to thank Neil Lawrence for helpful conversations. We also thank the anonymous reviewers for their insights. Alexander Matthews and Zoubin Ghahramani acknowledge the support of EPSRC Grant EP/N014162/1 and EPSRC Grant EP/N510129/1 (The Alan Turing Institute). Jiri Hron holds a Nokia CASE Studentship. Mark Rowland acknowledges support by EPSRC grant EP/L016516/1 for the Cambridge Centre for Analysis. Richard E. Turner is supported by Google as well as EPSRC grants EP/M0269571 and EP/L000776/1. A PROOF OF MAIN THEOREM A.1 STATEMENT OF THEOREM AND NOTATION In this section, we provide a proof of the main theorem of the paper, which we begin by recalling. Theorem 1. Consider a Bayesian deep neural network of the form in Equations (1) and (2) using ReLU activation functions. Then there exist strictly increasing width functions hµ : N 7→ N such that H1 = h1(n), . . . ,HD = hD(n), and for any countable input set (x[i])∞i=1, the distribution of the output of the network converges in distribution to a Gaussian process as n→∞. The theorem is proven via use of the propositions that follow below. The broad structure of the proof is to use a particular variant of the Berry-Esseen inequality to upper bound how far each layer is from a multivariate normal distribution, and then to inductively propagate these inequalities through the network, leading to a bound on the distance between the output of the network for a collection of input points, and a multivariate Gaussian distribution. These notions will be made precise below. We begin in Section A.2 by stating the propositions that will be used in the proof of Theorem 1, but first establish notation that will be used in the remainder of the appendix. Given a finite set of inputs x[1], . . . 
, x[n] ∈ RM , we will write: • f (µ)(x) for the random variables (f (µ)(x[i]))ni=1 collectively taking values in RnHµ ; • f (µ)j (x) for the random variables (f (µ) j (x[i])) n i=1 collectively taking values in Rn; • g(µ)(x) for the random variables (g(µ)(x[i]))ni=1 collectively taking values in RnHµ ; • g(µ)j (x) for the random variables (g (µ) j (x[i])) n i=1 collectively taking values in Rn; Throughout, if U1, U2 are random variables taking in values in some Euclidean space Rd, we will define d(U1, U2) = sup A⊆Rd A convex |P(U1 ∈ A)− P(U2 ∈ A)| . Note that convergence of a sequence of random variables in this metric implies convergence in distribution. We will also consider multivariate normal distributions (Z(µ)j (x[i])|j = 1, . . . ,Hµ , i = 1, . . . , n) with covariance matrices of block diagonal form, such that Cov(Z (µ) k (x[a]), Z (µ) l (x[b])) = 0 for distinct k, l ∈ {1, . . . ,Hµ} , for all x[a], x[b] . To avoid writing this in full every time it is required, we will refer to this condition as blockwise independence with respect to the index j. We will avoid specification of all covariance values, deferring to the expression (13) given in the main paper. Finally, to simplify notation, we will assume that the network output is one-dimensional. Our proof trivially extends to arbitrary finite output dimension where the limiting distribution is a coordinate-wise independent multivariate GP. A.2 SUPPORTING RESULTS Proposition 1. Let ε > 0, and x[1], . . . , x[n] ∈ RM . Let µ ∈ {2, . . . , D + 1}, and let Hk = 2H 2 k+1 for k = 1, . . . , D − 1. Then for HD sufficiently large, suppose the condition d(f (µ−1)(x), Z(µ−1)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µ−1Hkε , holds, where Z(µ−1)(x) = (Z(µ−1)j (x[i])|j = 1, . . . ,Hµ−1 , i = 1, . . . , n) is mean-zero multivariate normal, with blockwise independence with respect to the index j. Then we have d(f (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−µ)−n ∑D k=µHkε , where Z(µ)(x) = (Z(µ)j (x[i])|j = 1, . . . ,Hµ , i = 1, . . . , n) is mean-zero multivariate normal, with blockwise independence with respect to the index j. Proposition 2. Let ε > 0, and x[1], . . . , x[n] ∈ RM . If Hk = 2H 2 k+1 for k = 1, . . . , D− 1, then for HD sufficiently large, we have d(f (D+1)(x), Z(x)) ≤ ε , where Z(x) is a mean-zero multivariate normal random variable. In establishing the two propositions above, the following three lemmas will be useful. Lemma 2. Let ε > 0, and let Z(µ−1)(x) = (Z(µ−1)j (x[i])|j = 1, . . . ,Hµ−1 , i = 1, . . . , n) be mean-zero multivariate normal, with blockwise independence with respect to the index j. Let g̃(µ−1)(x) = φ(Z(µ−1)(x)), and let f̃ (µ)(x) be given by f̃ (µ)(x[i]) = Hµ−1∑ j=1 w (µ) •,j g̃ (µ−1) j (x[i]) + b (µ) , for i = 1, . . . , n. Then given ε > 0, if Hk = 2H 2 k+1 for k = 1, . . . , D − 1, then for all sufficiently large HD we have: d(f̃ (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µHkε , where Z(µ)(x) = (Z(µ)j (x[i])|j = 1, . . . ,Hµ , i = 1, . . . , n) is mean-zero multivariate normal, with blockwise independence with respect to the index j. Lemma 3. Let Z(µ−1)(x) = (Z(µ−1)j (x[i])|j = 1, . . . ,Hµ−1 , 1, . . . , n) be mean-zero multivariate normal, with blockwise independence with respect to the index j, such that for some ε > 0, d(Z(µ−1)(x), f (µ−1)(x)) ≤ ε . Then, defining f̃ (µ)(x) by f̃ (µ)(x[i]) = Hµ∑ j=1 w•,jφ(Z (µ−1) j (x[i])) + b (µ) , in the particular case where φ is the elementwise ReLU function, we have d(f̃ (µ)(x), f (µ)(x)) ≤ 2nHµ−1ε . Lemma 4. Let X1, . . . 
, XHµ−1 be iid random variables of the form Xj = g̃ (µ−1) j (x) ⊗ w̃ (µ) •,j , where ⊗ denotes the Kronecker product, g̃(µ−1)j (x) is defined as in Lemma 2, and w̃ (µ) •,j is a multivariate normal variable taking values in RHµ with mean vector 0, and covariance Ĉ(µ)w I . We denote the variance of Xj by Σ⊗ and its Schur decomposition as Σ⊗ = Q⊗Λ⊗QT⊗. Then β = E [ ‖Q⊗Λ−1/2⊗ QT⊗Xj‖3 ] ≤ CHµ,n, where CHµ,n ∈ R depends on Hµ and n, but is indepen- dent of Hµ−1. Further, we have CHµ,n = O(H2µn2). A.3 PROOFS Proof of Lemma 2. We use a straightforward variant of a particular Berry-Esseen inequality described in Bentkus (2003). We first state this result from the literature, and then derive a straightforward variation that we will use in the sequel. Theorem 2 (From Bentkus (2003)). Let X1, . . . , Xn be iid random variables taking values in Rd, with mean vector 0, identity covariance matrix, and β = E [ ‖Xi‖3 ] <∞. Let Sn = 1√n ∑n i=1Xi, and let Y be a standard d-dimensional multivariate normal random vector. Then we have sup A⊆Rd A convex |P(Sn ∈ A)− P(Y ∈ A)| ≤ 400d1/4β√ n We need a mildly modified version of this theorem to deal with iid random vectors X1, . . . , Xn with non-identity covariance matrices. To this end, suppose that Σ is the (full-rank) covariance matrix of each Xi, with decomposition Σ = RR>, for some invertible matrix R; R can be obtained, for example, by using Cholesky or Schur decomposition. The random variables R−1X1, . . . , R−1Xn are then iid, mean zero and with identity covariance matrices, so we may apply Theorem 2 to obtain: sup A⊆Rd A convex |P(R−1Sn ∈ A)− P(Y ∈ A)| ≤ 400d1/4β√ n , where β = E [ ‖R−1Xi‖3 ] . Now note that this is equivalent to sup A⊆Rd A convex |P(Sn ∈ RA)− P(RY ∈ RA)| ≤ 400d1/4β√ n , noting that RY ∼ N(0,Σ). Since R is invertible, and recalling the definition of the distance d above, this is exactly equivalent to: d(Sn, RY ) ≤ 400d1/4β√ n , (16) which is the variant of Bentkus’ result we will require in the sequel. We apply this bound to the sum Hµ−1∑ j=1 g̃ (µ−1) j (x)⊗ w (µ) •,j Noting that the summands indexed by j are iid by assumption, with the expected third moment norm featuring in the Berry-Esseen inequality upper-bounded by β ≤ CHµ,n, for some constant CHµ,n depending on Hµ and n, but independent of Hµ−1 (finiteness of CHµ,n follows from Lemma 4). As a consequence, we have the following bound: d Hµ−1∑ j=1 g̃ (µ−1) j (x)⊗ w (µ) •,j , Z ′(x) ≤ 400CHµ,n(nHµ)1/4/√Hµ−1 , where Z ′(x) = (Z ′j(x[i])|j = 1, . . . ,Hµ , i = 1, . . . , n) is mean-zero multivariate normal, with blockwise independence with respect to the index j. We wish to demonstrate that this is less than or equal to 2−(D−(µ−1))−n ∑D k=µHkε when HD is sufficiently large. This is equivalent to showing that 400CHµ,n(nHµ) 1/42((D+1)−(µ−1))+n ∑D k=µHk/ √ Hµ−1 ≤ ε for all sufficiently large HD. But note that with Hk−1 = 2H 2 k for k = µ, . . . ,D − 1, the left-hand side converges to 0 as HD increases (using the bound obtained for CHµ,n in Lemma 4), so for all HD sufficiently large, we obtain d Hµ−1∑ j=1 g̃ (µ−1) j (x)⊗ w (µ) •,j , Z ′(x) ≤ 2−((D+1)−(µ−1))−n∑Dk=µHkε , as required. Adding the independent bias vector b(µ) immediately yields d 1n ⊗ b(µ) + Hµ−1∑ j=1 g̃ (µ−1) j (x)⊗ w (µ) •,j , Z(x) ≤ 2−((D+1)−(µ−1))−n∑Dk=µHkε , where Z(x) is mean-zero multivariate normal, with the same block-diagonal covariance structure as described for Z ′(x) above, and 1n ∈ Rn is a vector of 1’s. Proof of Lemma 3. Let A ⊆ RnHµ be an arbitrary convex set. 
First, observe that we have P 1n ⊗ b(µ) + Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A =E P 1n ⊗ b(µ) + Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A∣∣∣∣w(µ), b(µ) =E P Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A− 1n ⊗ b(µ)∣∣∣∣w(µ), b(µ) (17) Now, note that for fixed w(µ) and b(µ), the event Hµ−1∑ j=1 φ(f̃ (µ−1) j (x))⊗ w (µ) •,j ∈ A− 1n ⊗ b(µ)∣∣∣∣w(µ), b(µ) is exactly that the vector φ(f̃ (µ−1)(x)) lies in the preimage of the convex set A − 1n ⊗ b(µ) under the linear map w(µ), which is again a convex set. Secondly, observe that for the specific ReLU nonlinearity φ, if C is an arbitrary convex set, then {(f (µ−1)(x)|φ(f (µ−1)(x)) ∈ C} may be written as the disjoint union of at most 2nHµ−1 convex sets: {f (µ−1)(x)|φ(f (µ−1)(x)) ∈ C} = (φ−1(C) ∩ {t ∈ RnHµ−1 |ti ≥ 0 ∀i})∪⋃ I⊆{1,...,nHµ−1} I 6=∅ {t ∈ RnHµ−1 |tI < 0,∃y ∈ C s.t. yIc = tIc , yI = 0} . Applying the assumed bound in the statement of the lemma to each of these sets, we obtain |P(φ(f (µ−1)(x)) ∈ C)− P(φ(f̃ (µ−1)(x)) ∈ C)| ≤ 2nHµ−1ε . Substituting this bound into the conditional probability (17) yields |P(f (µ)(x) ∈ A)− P(f̃ (µ)(x) ∈ A)| ≤ 2nHµ−1ε . Since A was an arbitrary convex set, the proof is complete. Proof of Lemma 4. Note that by independence of g̃(µ−1)j (x) from w̃ (µ) •,j we have that each Xj has mean zero and covariance Σ⊗ = Σ ⊗ Ĉ(µ)w I where Σ is the covariance matrix of g̃(µ−1)j (x). By standard properties of the Kronecker product, the Schur decomposition of Σ⊗ is (QΛQT )⊗(Ĉ(µ)w I) where QΛQT is the Schur decomposition of Σ. Simple algebraic manipulation yields: E [ ‖Q⊗Λ−1/2⊗ QT⊗(g̃ (µ−1) j (x)⊗ w̃ (µ) •,j )‖ 3 ] = E [ ‖(QΛ−1/2QT g̃(µ−1)j (x))⊗ ((Ĉ (µ) w ) −1/2w̃ (µ) •,j )‖ 3 ] = E [ ‖QΛ−1/2QT g̃(µ−1)j (x)‖ 3 ] E [ ‖(Ĉ(µ)w )−1/2w̃ (µ) •,j ‖ 3 ] . Notice that the random variable (Ĉ(µ)w )−1/2w̃ (µ) •,j follows the RHµ -dimensional standard normal distribution, and thus its squared norm follows the chi-squared distribution withHµ degrees of freedom, which is also known as the Gamma(Hµ/2, 1/2) distribution. Exponentiating to the power of 3/2 and taking the expectation, we obtain: E [ ‖(Ĉ(µ)w )−1/2w̃ (µ) •,j ‖ 3 ] = 23/2 Γ((Hµ + 3)/2) Γ(Hµ/2) . Finally, ‖QΛ−1/2QT g̃(µ−1)j (x)‖3 ≤ ‖g̃ (µ−1) j (x)‖3/λ 3/2 min where λmin is the smallest value on the diagonal of Λ. If the activation φ does not increase the norm of the input vector (as is the case for rectified linear), we have ‖g̃(µ−1)j (x)‖3 ≤ ‖Z (µ−1) j (x)‖3 almost surely, where Z (µ−1) j (x) follows the known n-dimensional normal distribution with mean zero and covariance matrix whose Schur decomposition will be denoted as UΨUT . Using standard Gaussian identities, we can write E [ ‖Z(µ−1)j (x)‖ 3 ] = E [ ‖UΨ1/2ε‖3 ] = E [ ‖Ψ1/2ε‖3 ] ≤ 23/2ψ3/2max Γ((n+ 3)/2) Γ(n/2) , where ε ∼ N (0, In) and ψmax is the highest entry on the diagonal of Ψ. Putting it all together, we arrive at the desired upper bound CHµ,n E [ ‖Q⊗Λ−1/2⊗ QT⊗Xj‖3 ] ≤ ( 4 ψmax λmin )3/2 Γ((Hµ + 3)/2) Γ(Hµ/2) Γ((n+ 3)/2) Γ(n/2) . Because ψmax and λmin are derived from the distribution of the limiting variable g̃ (µ−1) j (x) = φ(Z (µ−1) j (x)), which only depends on µ, the bound only depends on Hµ and n as desired. Further, noting that Γ((x+3)/2)/Γ(x/2) = O(x2), we have that CHµ,n = O(H2µn2), as required. Proof of Proposition 1. We first apply Lemma 3 to the assumed inequality d(f (µ−1)(x), Z(µ−1)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µ−1Hkε , to obtain d(f (µ)(x), f̃ (µ)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µHkε . We apply Lemma 2 so that for HD sufficiently large, we have d(f̃ (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µHkε . 
Applying the triangle inequality then yields d(f (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−µ)−n ∑D k=µHkε , as required. Proof of Proposition 2. The idea of the proof is to chain Proposition 1 together across the layers of the network. We fix ε > 0, and apply Proposition 1 to each layer of the network, yielding the following set of implications for HD sufficiently large: d(f (µ−1)(x), Z(µ−1)(x)) ≤ 2−((D+1)−(µ−1))−n ∑D k=µ−1Hkε =⇒ d(f (µ)(x), Z(µ)(x)) ≤ 2−((D+1)−µ)−n ∑D k=µHkε , for µ ∈ {2, . . . , D + 1}. Finally, note that from the definition of the network, the distribution of f (1)(x) is exactly multivariate normal with the required covariance structure, so that d(f (1)(x), Z(1)(x)) = 0, completing the proof. Proof of Theorem 1. To prove that (f (D+1)(x[i]))∞i=1 converges weakly to a Gaussian process with respect to the metric ρ on RN given by: ρ(v, v′) = ∞∑ i=1 2−i min(1, |vi − v′i|) ∀v, v′ ∈ RN , it is sufficient (Billingsley, 1999, p. 19) to prove weak convergence of the finite-dimensional marginals of the process to multivariate Gaussian random variables, with covariance matrix matching that specified by the kernel of the proposed Gaussian process. To this end, let I be a finite subset of N, and consider the inputs (x[i])i∈I . We may now apply Proposition 2 to obtain weak convergence of the joint distribution of the output variables of the network, (f (D+1)(x[i]))i∈I to a multivariate Gaussian with the correct covariance matrix. As the finite subset of inputs was arbitrary, we are done.
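For completeness, the recursion of Equation (13), specialised to ReLU via the closed form of Section 3, can be iterated directly to obtain the Gram matrix of the limiting Gaussian process used as the comparison point in Sections 4 and 5. A minimal sketch follows (NumPy; the function names are ours, and the 0.8/0.2 variance settings mirror those reported for the experiments rather than being prescribed by the theory).

```python
import numpy as np

def relu_expectation(k11, k22, k12):
    """E[ReLU(e1) ReLU(e2)] for (e1, e2) ~ N(0, [[k11, k12], [k12, k22]])."""
    s = np.sqrt(k11 * k22)
    cos_t = np.clip(k12 / s, -1.0, 1.0)
    theta = np.arccos(cos_t)
    return s / (2.0 * np.pi) * (np.sin(theta) + (np.pi - theta) * cos_t)

def deep_relu_gp_kernel(X, n_hidden_layers, c_w_hat=0.8, c_b=0.2):
    """Gram matrix of the limiting GP for a ReLU network of the given depth."""
    K = c_w_hat * (X @ X.T) + c_b                # covariance of the first pre-activations
    for _ in range(n_hidden_layers):
        diag = np.diag(K)
        E = relu_expectation(diag[:, None], diag[None, :], K)
        K = c_w_hat * E + c_b                    # one step of the recursion in Eq. (13)
    return K
```

With this Gram matrix in hand, drawing Gaussian process samples for the MMD comparison, or performing exact GP regression for the posterior comparisons of Section 5, reduces to standard multivariate normal computations.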
1. What is the main contribution of the paper regarding the convergence of wide neural networks?
2. What are the strengths and weaknesses of the paper's theoretical analysis, particularly in regards to Theorem 1?
3. Are there any concerns or questions regarding the proof of Lemma 2 and Lemma 4 in the paper?
4. Are there any unclear statements or notations in the paper, such as the notation x^(i), the distribution of epsilon and gamma, and the statement of Theorem 1?
5. Is there a minor mistake in the paper, such as the incorrect usage of f_i^{(1)}(x) instead of f_i^{(2)}(x)?
Review
Review
In part 1, the authors introduce motivation for studying wide neural networks and summarize related work. In part 2, they present a theorem (main theoretical result) stating that under conditions on the weight priors, the output function of a multi-layer neural network (conditioned on a given input) weakly converges to a Gaussian process as the sizes of the hidden layers go to infinity. Remark on Theorem 1: this result generalizes a result proven in 2015 stating that the normality of a layer propagates to the next as the size of the first layer goes to infinity. The result stated in this paper is proven by bounding the gap between the output distribution and the corresponding Gaussian process, and by propagating this bound across layers (appendix). In part 3, the authors discuss the choice of a nonlinearity function that enables easy computation of the kernels introduced in the covariance matrix of the limit normal distribution. Their choice lands on ReLU. In part 4, the focus is on the speed of the convergence presented in Theorem 1. Experiments are conducted to show how the distance (maximum mean discrepancy) between the output distribution and its theoretical Gaussian process limit varies when the sizes of the hidden layers increase. The results show that the convergence (in MMD) happens consistently, although it is slower when the number of hidden layers gets bigger. In part 5, the authors compare the distributions (finite Bayesian deep networks and their Gaussian process analogues) in yet another way: by studying their agreement in terms of inference. For this purpose, the authors chose several criteria: the first two moments of the posterior, the log marginal likelihood and the predictive log-likelihood. The authors judge that the distributions agree on those criteria, but do not provide further analysis. In part 6, now that it has been shown that the output distributions of Bayesian neural nets not only weakly converge to Gaussian processes but also behave similarly in terms of inference, the authors discuss ways to avoid the Gaussian process behaviour. Indeed, it seems that Gaussian processes with a fixed kernel cannot learn hierarchical representations, which are essential in deep learning. The idea to avoid the Gaussian process behaviour is to contradict one of the hypotheses of the CLT (so that it does not hold anymore), either by controlling the size of intermediate layers, by using networks with infinite variance in the activities, or by choosing non-independent weights. In part 7, it is concluded that the result that has been proven for layer sizes going to infinity (Theorem 1) seems to be empirically verified on finite networks similar to those used in the literature. This can be used to simplify inference in cases where the Gaussian process behaviour is desired, and opens questions on how to avoid this behaviour the rest of the time.
Pros: The authors' line of thought is overall quite easy to follow. The main theoretical convergence result is stated early on, and the remainder of the article is dedicated to observing this result empirically from different angles (MMD, inference, predictive capability...). The last part contains a discussion concerning the extent to which it is actually a desired or an undesired result in classical deep learning use-cases, and the authors provide intuitive conditions under which the convergence would not hold.
The stated theorem is a clear improvement on the past literature and is promising in a context where multi-layer neural networks are increasingly studied. Finally, the work is well documented. Cons: I have some concerns with the main result (Theorem 1) and found that some of the notations / formulas were not very clear. Concerns with Theorem 1: * at the end of the proof of Lemma 2, H_\mu is to be chosen large enough in order to get the \epsilon bound of the statement. However, I think that H_\mu is constrained by the statement of Proposition 2, not to be larger than a constant times 2^(H_{\mu+1}). Isn't that a problem? * In the proof of Lemma 4, it looks like matrix \Psi, from the Schur decomposition of \tilde f, actually depends on H_{\mu-2}, thus making \psi_max depend on it too, as well as the final \beta bound, which would contradict the statement that it depends only on n and H_{\mu}. Could you please double check? Unclear statements/notations: * end of page 3, notations are not entirely consistent with previous notations * I do not understand which distribution is assumed on epsilon and gamma when taking the expectation in equation (9). * the notation x^(i) (in the theorem and the proof notably) could be changed, since the ^(i) index refers to the depth of the layer in the rest of the notations, and is here surprisingly referring to a set of observations. * the statement of Theorem 1: * I would change "for a countable input set" to "for any countable input set", if this holds true. * does not say that the width has to go to infinity for the convergence to happen, which somewhat contradicts the adjective "wide". However, the authors say that in practice, they use the identity as width function. * I understood that the conclusion of part 3 was that the expectation of eq (9) was elegantly computable for certain non-linearities (including ReLU). However I don't see the link with the "recursive kernel" idea (maybe it's just the way to do the computation described in Cho & Saul (2009)?) Some places where it appears that there are minor mistakes: * 7th line from the bottom of page 3, the vector f^{(2)}(x) contains f_i^{(1)}(x) but should contain f_i^{(2)}(x) * last display of page 3: change x and x', and indicate the upper limit of the sum * please double check variances C_w and/or \hat{C}_w appearing in equations in (9) and (13). * line 2 of second paragraph after equations (8) and (9). The authors refer to equation (8) concerning the independence of the components of the output. I think they meant to refer to (9). Same for the first sentence before eq (14). * middle of page 12: matrix LY should be RY.
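For context on the "recursive kernel" point raised above: the computation described in Cho & Saul (2009) amounts to propagating a covariance layer by layer. The following is a minimal numpy sketch of that recursion for the ReLU case; the sigma_w2/sigma_b2 prior-variance names and the depth-loop structure are assumptions made for illustration, not the paper's notation or code.

import numpy as np

def relu_gp_kernel(x, xp, depth, sigma_w2=1.0, sigma_b2=0.0):
    # Covariance K^(depth)(x, xp) of the limiting Gaussian process for a deep
    # fully connected ReLU network; sigma_w2 / sigma_b2 are the prior weight /
    # bias variances (assumed names).
    d = x.shape[0]
    kxx = sigma_b2 + sigma_w2 * np.dot(x, x) / d    # first-layer covariances
    kpp = sigma_b2 + sigma_w2 * np.dot(xp, xp) / d
    kxp = sigma_b2 + sigma_w2 * np.dot(x, xp) / d
    for _ in range(depth - 1):
        # Angle between the two pre-activations under the current covariance.
        theta = np.arccos(np.clip(kxp / np.sqrt(kxx * kpp), -1.0, 1.0))
        # E[relu(u) relu(v)] for jointly Gaussian (u, v) -- Cho & Saul (2009).
        ev = np.sqrt(kxx * kpp) / (2.0 * np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
        kxp = sigma_b2 + sigma_w2 * ev
        kxx = sigma_b2 + sigma_w2 * kxx / 2.0       # E[relu(u)^2] = k / 2
        kpp = sigma_b2 + sigma_w2 * kpp / 2.0
    return kxp

# Usage: relu_gp_kernel(np.random.randn(10), np.random.randn(10), depth=3)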
ICLR
Title How and Why We Detect Distribution Shift: Critical Analysis of Methods and Benchmarks Abstract Detecting test-time distribution shift has emerged as a key capability for safely deployed machine learning models, with the question being tackled under various guises in recent years. In this paper, we aim to provide a consolidated view of the two largest sub-fields within the community: open-set recognition (OSR) and out-of-distribution detection (OOD). In particular, we aim to provide rigorous empirical analysis of different methods across settings and provide actionable takeaways for practitioners and researchers. Concretely, we make the following contributions: (i) For the first time, we perform rigorous cross-evaluation between state-of-the-art methods in the OOD and OSR settings and identify a strong correlation between the performances of methods for them; (ii) We propose a new, large-scale benchmark setting which we suggest better disentangles the problem tackled by OOD and OSR; (iii) We thoroughly examine SOTA methods for OOD and OSR on our large-scale benchmark; and (iv) Finally, we find that the best performing method on previous benchmarks struggles on our large-scale benchmark, while magnitude-aware scoring rules consistently show promise. 1 INTRODUCTION Any practical machine learning model is likely to encounter test-time samples which differ substantially from its training set; i.e. models are likely to encounter test-time distribution shift. As such, detecting distribution shift has emerged as a key research problem in the community (Scheirer et al., 2013; Hendrycks & Gimpel, 2017; Liu et al., 2020). Specifically, out-of-distribution detection (OOD) and open-set recognition (OSR) have emerged as two rich sub-fields to tackle this task. In fact, both tasks explicitly tackle the setting in which multi-way classifiers must detect if test samples are unfamiliar with respect to their training set, with a variety of methods proposed within each field. OSR methods are developed for detecting test images which come from different semantic categories to the training set, while OOD methods are developed for detecting images which come from a different data distribution to the training images. Research efforts in both directions largely occur independently (with little cross-pollination of ideas). Though prior work has recognized the similarity of the two sub-fields, OOD and OSR (Vaze et al., 2022; Tran et al., 2022; Yang et al., 2021; Salehi et al., 2021), there has been no rigorous benchmarking to understand the underlying principles of methods for both. In this work, for the first time, we perform rigorous cross-evaluation between methods developed for OOD and OSR on current standard benchmarks, suggesting that methods which perform well for one are likely to perform well for the other (Sec. 3). We experiment both with methods which require alternate training strategies (e.g., Outlier Exposure (Hendrycks et al., 2019) (OE) and ARPL (Chen et al., 2021)) as well as different post-hoc scoring rules (e.g., Maximum Softmax Probability (MSP) (Hendrycks & Gimpel, 2017), Maximum Logit Score (MLS) (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020)). We thoroughly evaluate all methods on both standard OOD and OSR benchmarks, after which we find that OE achieves almost saturating performance on the OOD task and also obtains the state-of-the-art (SOTA) results on the OSR task. 
Meanwhile, we also find that the magnitude-aware scoring rules like MLS (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020) show consistently good performance across different methods and datasets. Next, we propose a reconciling perspective on the tasks tackled by the two fields, and propose a new benchmark to assess this in Sec. 4. Specifically, we propose a new, large-scale benchmark setting, in which we disentangle different distribution shifts, namely, semantic shift and covariate shift, which occur in OOD and OSR. For example, to isolate semantic shift, we leverage the recently introduced Semantic Shift Benchmark (SSB) (Vaze et al., 2022) containing ImageNet-scale datasets, in which the original ImageNet-1K (Russakovsky et al., 2015b) is regarded as 'seen' closed-set data while 'unseen' data is carefully drawn from the disjoint set of ImageNet-21K-P (Ridnik et al., 2021b). For covariate shift, we leverage ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020) to demonstrate distribution shift with respect to the standard ImageNet dataset. Finally, we examine SOTA methods developed for OOD and OSR on this large-scale benchmark to validate whether the findings from the standard (small-scale) datasets still hold on our consolidated large-scale benchmark. Through the large-scale analysis, we surprisingly find that OE struggles to scale to larger benchmarks, while the magnitude-aware scoring rules, MLS (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020), still show promise. We further provide empirical insights by analysing the representations extracted by different models on data under different distribution shifts, which suggests that the strong performance of OE on the standard benchmark is partially attributable to the fact that the auxiliary OOD data used for training has sufficient distribution overlap with the OOD testing data, while it is not straightforward to come up with auxiliary OOD data that reflects the actual distribution shift on large-scale datasets. We believe there are still many more open questions to be answered in the shared space of OOD and OSR, and hope the findings in our work can serve as a starting point to have a deeper look into them. 2 PRELIMINARIES AND RELATED WORK Open-set recognition. Previous work (Scheirer et al., 2012) coined "open-set recognition", the objective of which is to identify unknown classes while classifying the known ones. OpenMax resorts to Activation Vectors (AVs) and models the distribution of AVs based on Extreme Value Theory (EVT). Recent works (Ge et al., 2017; Neal et al., 2018b; Kong & Ramanan, 2021) show that generated data from a synthetic distribution can be helpful for improving OSR. OSRCI (Neal et al., 2018b) generates images belonging to the unknown classes but similar to the training data to train an open-set classifier. (Kong & Ramanan, 2021) adversarially trained a discriminator to distinguish closed-set from open-set images and introduced real open-set samples for model selection. Prototype-based methods (Chen et al., 2020; 2021) adjust the boundaries of different classes and identify open-set images based on distances to the learned prototypes of known classes. Out of Distribution Detection. (Hendrycks & Gimpel, 2017) formalized the task of out-of-distribution detection and provided a paradigm to evaluate deep learning out-of-distribution detectors using the maximum softmax probability (MSP). 
A test sample with a large MSP score is detected as an in-distribution (ID) example rather than an out-of-distribution (OOD) example. ODIN (Liang et al., 2018) and its learnable variant G-ODIN (Hsu et al., 2020) added adversarial perturbations to both ID and OOD samples and employed a temperature scaling strategy on the softmax output to separate them. (Liu et al., 2020) proposes the energy score derived from the logit outputs for OOD uncertainty estimation. (Sun et al., 2021) rectified the distribution of per-unit activations in the penultimate layer for ID and OOD data. Outlier Exposure (OE) (Hendrycks et al., 2019) and (Huang et al., 2021) both designed a loss based on the KL divergence between the softmax output and a uniform probability distribution to encourage models to output a uniform softmax distribution on outliers. The former leveraged real OOD data for training while the latter directly employed the vector norm of gradients to perform uncertainty estimation. 3 ANALYSIS OF SOTA BASELINES ON STANDARD BENCHMARKS In this section, we perform cross-evaluation of methods from the OOD and OSR literature. 3.1 EXPERIMENTAL SETUP Methods. We distinguish two categories of shift detection methods: scoring rules (which operate post-hoc on top of pre-trained networks); and specialised training (which changes the optimisation procedure of the networks themselves). For scoring rules, we compare the maximum softmax probability (MSP, (Hendrycks & Gimpel, 2017)), the Maximum Logit Score (MLS, (Vaze et al., 2022)), ODIN (Liang et al., 2018), G-ODIN (Hsu et al., 2020), Energy scoring (Liu et al., 2020), GradNorm (Huang et al., 2021) and SEM (Yang et al., 2022). We further experiment with ReAct (Sun et al., 2021), an activation pruning technique which can be employed in tandem with any scoring rule. While MLS was developed for OSR (Vaze et al., 2022), other scoring rules were developed for OOD detection. We provide descriptions of each scoring rule in the appendix. For specialised training, we first experiment with the standard cross-entropy (CE) loss. We also use ARPL + CS (Chen et al., 2021) from the OSR literature. This method learns a set of 'reciprocal points' which are trained to be far away from all training category embeddings. We note that the reciprocal points can be treated as a linear classification layer, allowing us to use any of the scoring rules mentioned above on top of this representation. Finally, we train models with Outlier Exposure (OE) (Hendrycks et al., 2019) from the OOD literature, where real outlier examples are used during training as examples of OOD. In this case, the model is encouraged to predict a uniform softmax output. Datasets. For the OOD setting, we train models on CIFAR10 (Krizhevsky et al., 2009). As OOD data, we use six common datasets: SVHN (Cimpoi et al., 2014), Textures (Ovadia et al., 2019), LSUN-Crop (Yu et al., 2015), LSUN-Resize (Yu et al., 2015), iSUN (Xu et al., 2015) and Places365 (Zhou et al., 2017). We also perform OOD experiments training with CIFAR100 as ID in Appendix E. For the OSR benchmark, following the standard protocols in (Neal et al., 2018a), we set up four sub-tasks containing CIFAR10, CIFAR+10, CIFAR+50 and TinyImageNet (Le & Yang, 2015). In all cases, models are trained on a subset of categories with the remaining ones used as 'unseen' at test time. The CIFAR+N settings involve training on four classes from CIFAR10 and evaluating on N classes from CIFAR100. 
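For concreteness, the three post-hoc scoring rules that recur throughout (MSP, MLS and the energy score) can be computed from a classifier's logits as in the following sketch; this is an illustrative reading of the cited methods rather than the exact implementation used here, and the temperature parameter follows the formulation in Liu et al. (2020).

import torch
import torch.nn.functional as F

def shift_scores(logits, temperature=1.0):
    # Post-hoc scores from classifier logits of shape (batch, num_classes);
    # by the usual convention, a higher score means "more in-distribution".
    msp = F.softmax(logits, dim=-1).max(dim=-1).values   # Maximum Softmax Probability
    mls = logits.max(dim=-1).values                      # Maximum Logit Score
    # Negative energy, i.e. T * logsumexp(logits / T), so that higher = more ID.
    energy = temperature * torch.logsumexp(logits / temperature, dim=-1)
    return {"MSP": msp, "MLS": mls, "Energy": energy}

# Usage: scores = shift_scores(model(images)); thresholding any score gives a detector.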
Note that, for a given method, benchmarking on OOD involves training a single model and evaluating on multiple downstream datasets. In contrast, OSR benchmarks involve training a different model for each evaluation. Training configurations. Due to limited space, we give a detailed description and experimental results of each configuration in Appendix A. Broadly speaking, we train a ResNet-18 on the ID data, with an SGD optimizer and a cosine annealing schedule. We train ARPL + CS and OE largely based on the official public implementations. For the auxiliary outlier dataset in the OE loss, we follow (Hendrycks et al., 2019) and use a subset of 80 Million Tiny Images (Torralba et al., 2008) with 300K images, removing all examples that appear in CIFAR10/100, Places or LSUN classes. Metrics. Following standard practice in both OOD and OSR tasks, we use the Area Under the Receiver Operating Characteristic curve (AUROC) as an evaluation metric throughout this paper. We found that other metrics, such as the FPR95 (Hendrycks & Gimpel, 2017) (also known as the false alarm rate), were correlated strongly with the AUROC. 3.2 QUANTITATIVE RESULTS We present results from our benchmarking in Table 1. Although there is not always one clear winner when it comes to methodology, we observe two main takeaways. Firstly, MLS and Energy tend to perform best across OOD and OSR datasets (Fig. 1(a)). We hypothesize this is because both are sensitive to the magnitude of the feature vector before the networks' linear layer. This phenomenon was observed in (Vaze et al., 2022), as unfamiliar examples tend to have lower feature norms than ID samples, providing a strong signal for the distribution shift decision. Interestingly, we also find that ReAct, which has been shown to be effective in the literature, does not seem to bring a performance gain for well-trained models with high in-distribution accuracy. Here, we follow (Vaze et al., 2022) to obtain as high an in-distribution accuracy as possible for all models. It appears that when the classifier is strong enough, it is difficult for ReAct to bring extra improvement. Secondly, we observe that Outlier Exposure (Hendrycks et al., 2019) provides excellent performance on the OOD benchmarks (Fig. 1(b)), often nearly saturating performance. It also often boosts OSR performance, though to a lesser degree, a phenomenon which we explore in the next section. 3.3 QUALITATIVE ANALYSIS In this section, we qualitatively interrogate the learned representations of Cross-Entropy and Outlier Exposure networks in order to explain the stark performance boost of OE on existing OOD benchmarks. Specifically, we use the value of the maximally activated neuron at various layers to analyze how the networks respond to distribution shift. We pass every related sample through the network, and plot the histogram of maximum activations at every layer in Fig. 2. This is inspired by (Vaze et al., 2022), who show the 'maximum logit score' (MLS, the maximum activation at a network's output layer) can achieve SOTA for OSR. Furthermore, (Dietterich & Guyer, 2022) propose that networks respond to a 'lack of familiarity' under distribution shift by failing to light up in-distribution activation pathways. We investigate how activations at various stages of a deep network vary under different 'unseen' datasets. Fig. 2 shows histograms of the maximum activations at the outputs from layer 1 to layer 4 of a ResNet-18 (He et al., 2016) trained on CIFAR10 when evaluated on in-distribution data and data under different shifts. 
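The layer-wise analysis just described (histograms of the maximally activated neuron at each ResNet block) can be reproduced with standard forward hooks; the sketch below is an assumed minimal implementation for illustration, not the authors' code, and the block names follow torchvision's ResNet-18.

import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10).eval()
max_acts = {name: [] for name in ["layer1", "layer2", "layer3", "layer4"]}

def make_hook(name):
    def hook(module, inputs, output):
        # Per-sample maximum over channels and spatial positions of this block's output.
        max_acts[name].append(output.flatten(start_dim=1).max(dim=1).values.detach())
    return hook

for name in max_acts:
    getattr(model, name).register_forward_hook(make_hook(name))

with torch.no_grad():
    _ = model(torch.randn(32, 3, 32, 32))  # substitute ID / OSR / OOD batches here

# Histograms as in Fig. 2, e.g. torch.cat(max_acts["layer4"]).numpy() per data source.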
Note that here we use 'layer' to refer to a ResNet block. For OSR data, we find that early layer activations are largely the same as for the ID test data. It is only later in the network that the activation patterns begin to differ. This is intuitive as the low-level textures and statistics of the open-set data do not vary too much from the training images. Furthermore, it has long been known that early filters in CNNs tend to focus on textural details such as edges (Krizhevsky et al., 2012). In contrast, we discover that some OOD datasets, such as SVHN, induce very different activations in the early layers. Our explanation for this phenomenon is analogous: SVHN contains very different image statistics and low-level features to the training dataset of CIFAR10, and hence induces different activations in early layers. Interestingly, some datasets (like SVHN) which showed markedly different early layer activations actually display more similar activations at later layers. Meanwhile, models trained with OE show substantially different intermediate activations. Interestingly, their maximum activations in early layers look very similar to those on the ID testing data, but tend to be less so later in the network. It is clear that activations in later layers are more discriminative after using the OE loss when compared with the CE loss. 4 A CONSOLIDATED BENCHMARKING OF DISTRIBUTION SHIFT Having analyzed methodologies for detecting distribution shift across the OOD and OSR settings, we turn our attention to the benchmarks. While it is clear that OSR specifically aims to detect unseen categories, there is no specification of the type of distribution shift which OOD benchmarks aim to capture, or how they would relate to a real-world scenario. In this section, we propose a lens through which to consolidate types of distribution shift. Specifically, we propose that 'distribution shift' can be parameterised along two broad, orthogonal axes: semantic shift and covariate shift. Pure semantic shift is when new categories are encountered, and is the explicit focus of OSR, while covariate shift refers to the setting when the semantics of test images remain constant, but other features change. Formally, similarly to (Wiles et al., 2022), we consider a latent variable model of the data generation process, with latent z: z ∼ p(z), yi ∼ p(yi|z) for i ∈ {1...K}, x ∼ p(x|z). (1) Here, x is an image and yi represents an image attribute. The set of attributes could include traditional features such as 'color' or 'shape', or refer to more abstract features such as 'beak shape' of a bird. We define a set of semantic attributes, YS, such that the category label of an image is a function of these attributes. Furthermore, we define covariate attributes, YC, which can be freely varied without the category label changing. In this framing, given marginal training distributions ptrain(YS) and ptrain(YC), detecting semantic shift is the task of flagging when ptest(YS) ≠ ptrain(YS). Analogously, we wish to flag covariate shift if ptest(YC) ≠ ptrain(YC). To motivate this setting, consider the perceptual system in an autonomous car, which has been trained to recognize 'cars' and 'pedestrians' during the day. A semantic shift detector is necessary for when the system encounters a new category, e.g. to flag that 'bicycle' is an unknown concept. Meanwhile, a covariate shift detector is necessary for when the system is deployed at night-time, where the categories may be familiar, but the performance of the system could be expected to degrade. 
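For readability, Eq. (1) and the two shift-detection criteria can be written out as follows (a LaTeX rendering of the statements above, using the same notation; the calligraphic Y for the attribute sets is only a typesetting choice):

\begin{align*}
  & z \sim p(z), \qquad y_i \sim p(y_i \mid z) \;\; (i = 1, \dots, K), \qquad x \sim p(x \mid z), \tag{1} \\
  & \text{semantic shift:}\;\; p_{\text{test}}(\mathcal{Y}_S) \neq p_{\text{train}}(\mathcal{Y}_S),
    \qquad \text{covariate shift:}\;\; p_{\text{test}}(\mathcal{Y}_C) \neq p_{\text{train}}(\mathcal{Y}_C).
\end{align*}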
4.1 DATASETS As a starting point, we note that (Vaze et al., 2022) introduced the Semantic Shift Benchmark (SSB), a distribution shift benchmark which isolates semantic shift. We mainly focus on the ImageNet-SSB (Russakovsky et al., 2015a) and CUB-SSB (Wah et al., 2011) datasets. 'Seen' classes in ImageNet-SSB are the original ImageNet-1K classes, while 'unseen' classes are selected from the disjoint set of ImageNet-21K-P (Ridnik et al., 2021a). Meanwhile, CUB-SSB splits the 200 bird classes in CUB into 'seen' and 'unseen' categories. Furthermore, the unseen categories are split into Easy and Hard classes by their attributes, and the splitting rule depends on the semantic similarity of every pair of visual attributes between the unknown classes and the training classes. For all the above datasets, categories appearing in the training set are not included in the evaluation set. We further report figures on SCars-SSB (Krause et al., 2013) and FGVC-Aircraft-SSB (Maji et al., 2013) in the appendix. For covariate shift, we propose ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020) to demonstrate distribution shift with respect to the standard ImageNet dataset. Both datasets contain images from a subset of the ImageNet-1K categories, but with different low-level image statistics. ImageNet-C applies four main corruptions (e.g. noise, blur, weather and digital) with varying intensities to the validation images of ImageNet-1K, while ImageNet-R collects various artistic renditions of foreground classes from the ImageNet-1K dataset. We also choose Waterbirds (Sagawa et al., 2019) to test the model trained on the CUB-SSB 'Seen' classes. Waterbirds inserts bird photographs from the CUB dataset into backgrounds picked from the Places dataset (Zhou et al., 2017), meaning it has the same semantic categories as CUB but in different scenery. Discussion. We note that there is no uniquely optimal framing for discussing distribution shift, and here briefly discuss alternate proposals. For instance, (Zhao et al., 2022) propose a fine-grained analysis of the shifts, where the test-time distribution is controlled for specific attributes such as shape and pose. Also related, (Tran et al., 2022) discuss that indications of 'unfamiliarity' in a neural network could refer to many things, including confusing classes and sub-population shift. We propose our simple framing as a way to fill the 'negative space' left by the semantic shift detection task of OSR. Furthermore, we suggest it is important to study distribution shift in this way as classifiers are optimized to differentiate between one set of features (YS) while in fact being invariant to others (YC). As such, we would expect models to react differently to changes in their distributions. 4.2 QUANTITATIVE ANALYSIS In Tables 2 and 3, we evaluate a selection of previously discussed methods on our large-scale benchmark for both OOD and OSR. Through this large-scale evaluation, we find that in terms of training methods, among CE, ARPL (+CS), and OE, there are no clear winners across the board. It is surprising that the best performer on the previous small-scale benchmarks (see Table 1), OE, appears to be struggling (last two rows in Table 2) on the large-scale benchmark. This contradicts the finding on the small-scale benchmarks, which we analyse in the next subsection. 
In terms of scoring rules, the magnitude-aware scoring rules, MLS and Energy, consistently produce the best performance regardless of the methods and benchmarks (both standard small-scale ones and our large-scale ones). 4.3 WHY DOES OE UNDERPERFORM ON LARGE-SCALE DATASETS? Here, we investigate why OE underperforms other methods on the large-scale benchmark. One critical difference between OE and other methods is that OE requires auxiliary OOD data for training. Intuitively, if the distribution of the auxiliary OOD training data can reflect the distribution of the actual OOD testing data, we would expect better detection performance, while incomplete or biased outlier data may hurt learning. In Fig. 3, we visualize the t-SNE projection of the representations for in-distribution (ID) data (i.e., CIFAR10), auxiliary training OOD data (300K (Hendrycks et al., 2019) vs YFCC15M), and different test-time OOD datasets. As can be seen, using the 300K images generally shows a better overlap with the test-time OOD data. Hence, OE trained with 300K as the auxiliary OOD data achieves better performance than the counterpart trained with YFCC15M (Table 1 vs Table 7). For the experiments on the standard (small-scale) benchmark, we experiment using 300K images vs. YFCC15M as the auxiliary training data. For the experiments on our large-scale benchmark, we use YFCC15M because the 300K images are not adequate for the large-scale setting. In Fig. 4 (first macro row), as a more fine-grained analysis on the standard benchmark, we identify the most confident true positive (TP, i.e., correctly predicted OOD sample) and the most confident true negative (TN, correctly predicted ID sample) with MLS scoring, and then find their top-k nearest samples from the auxiliary training OOD data according to feature similarities. As can be seen, for a TP sample, the NNs retrieved from 300K are more similar to the OOD testing sample than those retrieved from YFCC15M. This is consistent with the finding in Fig. 3. Further, we carry out a similar experiment on our large-scale benchmark (see Fig. 4, second macro row), by retrieving the NNs from the union of the ID training set and YFCC15M. We observe that the retrieved NNs for TPs are less similar to the TPs, suggesting lower similarity between the test-time OOD data and the auxiliary training OOD data. Finally, in Fig. 5, for CE and ARPL models, in which no auxiliary OOD training data are used, we also identify the most confident predictions (TP and TN) and most confusing predictions (FN and FP) on our benchmarking datasets, ImageNet-R and Waterbirds, and retrieve their NNs from the training ID datasets (i.e., ImageNet and CUB). We observe that the current decision paradigm is susceptible to posture, viewpoint, background color/object and even semantic similarity. FNs and FPs are often misled by posture (e.g., FN-1 from the ARPL model in Waterbirds), viewpoint (e.g., FP-5 and FP-4 in ImageNet-1K), background (e.g., FN-1 from the CE model in Waterbirds) and semantic similarity (e.g., cartoon style in FN-1 from the CE model in ImageNet-1K and the shark from the ARPL one). More results and analysis can be found in the appendix. 5 CONCLUSION In this paper, we have provided a consolidated exploration of Out-of-Distribution detection (OOD) and Open-set Recognition (OSR). We performed rigorous cross-evaluation between methods developed for OOD and OSR and identified a strong correlation between their performances. 
We also proposed a new, large-scale benchmark setting to disentangle OOD and OSR, by breaking the distribution shift problem down into covariate shift and semantic shift, suggesting large-scale evaluation protocols for the task. We also showed that the best performing method on both OSR and OOD does not generalize well to our challenging large-scale benchmark, and found that magnitude-aware scoring rules are generally more reliable than the others. We believe our new benchmark can serve as a better testbed to measure progress in OSR and OOD, and hope our findings in this work can shed light on further exploration of the shared space of OSR and OOD and foster new development for this field. A DETAILED TRAINING CONFIGURATION FOR CIFAR BENCHMARKS For training ResNet-18 on CIFAR10, we set the initial learning rate to 0.1 and apply a cosine annealing schedule, using SGD with 0.9 momentum. The weight decay factor is set to 5e-4. The total number of training epochs is 200 and the batch size is 128. For CIFAR100, we also set the initial learning rate to 0.1, but divide it by 5 at the 60th, 120th, and 160th epochs, and train for 200 epochs with a batch size of 128, weight decay 5e-4, and Nesterov momentum of 0.9, following (DeVries & Taylor, 2017). For the ReAct config, the models are trained with a batch size of 128 for 100 epochs. The starting learning rate is 0.1 and decays by a factor of 10 at epochs 50, 75, and 90. For the MLS config, we train the models with a batch size of 128 for 600 epochs with a cosine annealed learning rate, restarting the learning rate to the initial value at epochs 200 and 400. In addition, we linearly warm up the learning rate from 0 to the initial value at the beginning of training. The initial learning rate is 0.1 for CIFAR but 0.01 for TinyImageNet. B EXPERIMENTAL RESULTS ON LARGE-SCALE BENCHMARKS C ACTIVATIONS OF OOD AND OSR AT DIFFERENT LAYERS D INFLUENCE OF TRAINING CONFIGURATIONS FOR OOD PERFORMANCE E EXPERIMENTS OF OOD ON CIFAR-100 F QUALITATIVE SAMPLES FROM THE ID AND OOD DATASETS CIFAR10 G CORRESPONDENCE POINTS FOR IMAGENET-SSB
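The CIFAR10 configuration in Appendix A (SGD with momentum 0.9, weight decay 5e-4, initial learning rate 0.1, cosine annealing, 200 epochs, batch size 128) corresponds to a standard PyTorch setup; the sketch below is an assumed reconstruction for illustration (data augmentation omitted), not the authors' released code.

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.models import resnet18

train_loader = DataLoader(
    datasets.CIFAR10("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=128, shuffle=True)                     # augmentation omitted for brevity

model = resnet18(num_classes=10)
criterion = torch.nn.CrossEntropyLoss()
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=200)   # cosine annealing over 200 epochs

for epoch in range(200):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()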
1. What is the focus and contribution of the paper regarding out-of-distribution detection and open-set recognition? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its empirical analysis and benchmark setting? 3. Do you have any concerns or suggestions regarding the descriptions of OOD and OSR, as well as the differences between the compared approaches? 4. How do the results of the paper impact the understanding of OOD and OSR tasks, and what are the implications for future research? 5. What are the limitations of the paper's analysis, and what areas could benefit from further exploration?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors provide an in-depth empirical analysis of different approaches to out-of-distribution detection (OOD) and open-set recognition (OSR). They also present a novel large-scale benchmark setting that better disentangles the problems tackled by OOD and OSR. They reveal that the previous SOTA approach on existing standard benchmarks is not the best approach on the newly presented benchmark. Strengths And Weaknesses The paper is well motivated and the information is presented clearly. It is generally well written. The authors present a good analysis of existing approaches to OSR and OOD on existing benchmark datasets as well as a newly presented benchmark. A better description, or an example, of OOD would be helpful. The OSR description is very clear but the OOD description could be improved for readers that are not as familiar with the area. This also applies to the Datasets section of 3.1 where an example distribution difference between test and train is not described while the OSR train/test difference is clear. A short description of the differences in the compared approaches would help clarify the significance of the results and make the paper stronger as a stand-alone read. "We found that other metrics, such as the FPR95 (Hendrycks & Gimpel, 2017) (also known as the false alarm rate), were correlated strongly with the AUROC." It would be helpful if this was shown at least in an appendix. Some of the results are close. What are the uncertainty/error bounds? What is the effect of different classes being used for the open set on the OSR task or different distributions used for training/test in the OSR task? Since this is mostly an analysis paper I would prefer a deeper dive here. It would also be interesting to see the effect of these changes on the same dataset task for the results shown in 3.3. Clarity, Quality, Novelty And Reproducibility The paper is clearly written and generally high quality. The novelty is limited and iterative, but the paper presents a good analysis and overview of existing approaches.
ICLR
1. What are the main contributions of the paper regarding OOD detection and OSR? 2. How does the proposed approach relate to prior works in the field, particularly those that discuss the relations between OOD detection and OSR? 3. What are the strengths and weaknesses of the proposed method compared to existing methods in the literature? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper that the reviewer would like to raise?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The submission discusses the related problems of Out-Of-Distribution (OOD) detection and Open-Set Recognition (OSR), with an empirical evaluation of different methods on both problems. The submission also proposes a large-scale benchmark, suggesting that such evaluation better disentangles OOD detection and OSR capabilities. The results of the empirical evaluation are mixed — there does not appear to be a winning method across all datasets and tasks. Strengths And Weaknesses Strengths: The community at large tends to treat OOD detection and OSR as separate problems (with notable exceptions mentioned below), and it is useful to have discussions relating them. The paper is well-written overall, which makes for pleasant reading. Weaknesses: There is significant prior work that also discusses the relations between OOD detection and OSR problems. Among them, — Ahmed et al. [1] discussed that OOD detection is a problem that is “too broad to be meaningful”, suggesting that rather than reward sensitivity to all manner of “non-semantic” distributional shift, some of which one wishes to be robust to, we ought to be encouraging and evaluating sensitivity to semantic-shift only. They suggested holding classes out from within a particular dataset (such as CIFAR-10) over multiple trials, and also proposed a set of fine-grained ImageNet-based datasets. — Hsu et al. [2] used carefully separated subsets of DomainNet (in section 4.3) to also evaluate semantic and non-semantic distributional shifts. — Hendrycks et al. [3] curated two large datasets, Imagenet-A (with examples of in-distribution classes that popular classifiers fail on) and Imagenet-O (consisting of held-out classes from Imagenet-21K). These two datasets serve as examples of data that one wishes to be robust to, vs. examples that we wish to be sensitive to. Imagenet-A can be used as examples of covariate-shift we should detect, since popular classifiers are known to fail on this data. — Tian et al. [4] also discuss the separation between covariate and semantic distributional shift (although they call it “concept” shift). They also curated a version of near and far semantic distributional shift with CIFAR-10/100. — Ahmed et al. [5] discuss the twin goals of encouraging robustness to non-semantic (covariate) distributional shift while being sensitive to semantic-shift. They curate a set of artificial datasets to disentangle the evaluation. The consolidated view proposed in Section 4 of the submission bears strong similarity to the view in [5]’s Section 2. Deecke et al. [6] also perform such an evaluation, with different combinations denoting covariate and semantic distributional shift. — A number of review works [8, 9] have also discussed the relations between OOD detection and OSR, and gone further into the specific types of problems. Based on such discussions in the literature, the goal of being sensitive to covariate shift (and rewarding models/methods for it) seems somewhat questionable. As described implicitly and explicitly in [1,3,5], the goal when building classifiers is really to be robust to non-semantic distributional shift. [1] takes the extreme point of view of suggesting that one should solely focus on evaluating semantic-shifts, and that trivial methods already fare reasonably well for detecting major covariate shifts (as in dataset-shift benchmarks), if required in safety-sensitive applications. 
Based on the discussion in [5, 6], one cannot consider sensitivity without taking a concurrent look at robustness, since increased sensitivity to covariate-shift while inadvertently losing robustness is not really a useful goal. These perspectives have also been used in [7] to develop a suite of CIFAR-based datasets. For the purposes of a thorough empirical analysis evaluating methods under different forms of shifts (which the submission sets out to perform), one must also take into account the strong influence of the underlying model. It is clear from the literature that different choices in things like architecture (ResNet vs. DenseNet) significantly change AUROC scores. The submission seems to be somewhat restricted in its evaluation, using only a ResNet-18 for one set of experiments, and a ResNet-50 for the rest. [1] Detecting semantic anomalies, Ahmed et al., AAAI 2020 [2] Generalized ODIN, Hsu et al., CVPR 2020 [3] Natural adversarial examples, Hendrycks et al., CVPR 2021 [4] Exploring covariate and concept shift for OOD detection, Tian et al., NeurIPS (Workshop), 2021 [5] Systematic generalisation with group invariant predictions, Ahmed et al., ICLR 2021 [6] Transfer-based semantic anomaly detection, Deecke et al., ICML 2021 [7] Semantically coherent out-of-distribution detection, Yang et al., CVPR 2021 [8] Generalized OOD detection: A survey, Yang et al., arXiv 2021 [9] A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges, arXiv 2021 Clarity, Quality, Novelty And Reproducibility Clarity: The paper is well-written and easy to follow. Quality: The quality is somewhat lacking due to insufficient discussion of how the contributions fit into the existing literature. Novelty: NA Reproducibility: There seem to be sufficient details for meaningful reproducibility.
ICLR
Title How and Why We Detect Distribution Shift: Critical Analysis of Methods and Benchmarks Abstract Detecting test-time distribution shift has emerged as a key capability for safely deployed machine learning models, with the question being tackled under various guises in recent years. In this paper, we aim to provide a consolidated view of the two largest sub-fields within the community: open-set recognition (OSR) and out-of-distribution detection (OOD). In particular, we aim to provide rigorous empirical analysis of different methods across settings and provide actionable takeaways for practitioners and researchers. Concretely, we make the following contributions: (i) For the first time, we perform rigorous cross-evaluation between state-of-the-art methods in the OOD and OSR settings and identify a strong correlation between the performances of methods for them; (ii) We propose a new, large-scale benchmark setting which we suggest better disentangles the problem tackled by OOD and OSR; (iii) We thoroughly examine SOTA methods for OOD and OSR on our large-scale benchmark; and (iv) Finally, we find that the best-performing method on previous benchmarks struggles on our large-scale benchmark, while magnitude-aware scoring rules consistently show promise. 1 INTRODUCTION Any practical machine learning model is likely to encounter test-time samples which differ substantially from its training set; i.e. models are likely to encounter test-time distribution shift. As such, detecting distribution shift has emerged as a key research problem in the community (Scheirer et al., 2013; Hendrycks & Gimpel, 2017; Liu et al., 2020). Specifically, out-of-distribution detection (OOD) and open-set recognition (OSR) have emerged as two rich sub-fields to tackle this task. In fact, both tasks explicitly tackle the setting in which multi-way classifiers must detect if test samples are unfamiliar with respect to their training set, with a variety of methods proposed within each field. OSR methods are developed for detecting test images which come from different semantic categories to the training set, while OOD methods are developed for detecting images which come from a different data distribution to the training images. Research efforts in both directions largely occur independently (with little cross-pollination of ideas). Though prior work has recognized the similarity of the two sub-fields, OOD and OSR (Vaze et al., 2022; Tran et al., 2022; Yang et al., 2021; Salehi et al., 2021), there has been no rigorous benchmarking to understand the underlying principles of methods for both. In this work, for the first time, we perform rigorous cross-evaluation between methods developed for OOD and OSR on current standard benchmarks, suggesting that methods which perform well for one are likely to perform well for the other in Sec. 3. We experiment both with methods which require alternate training strategies (e.g., Outlier Exposure (Hendrycks et al., 2019) (OE) and ARPL (Chen et al., 2021)) as well as different post-hoc scoring rules (e.g., Maximum Softmax Probability (MSP) (Hendrycks & Gimpel, 2017), Maximum Logit Score (MLS) (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020)). We thoroughly evaluate all methods on both standard OOD and OSR benchmarks, after which we find that OE achieves almost saturating performance on the OOD task and also obtains state-of-the-art (SOTA) results on the OSR task.
Meanwhile, we also find that the magnitude-aware scoring rules like MLS (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020) show consistently good performance across different methods and datasets. Next, we propose a reconciling perspective on the tasks tackled by the two fields, and propose a new benchmark to assess this in Sec. 4. Specifically, we propose a new, large-scale benchmark setting, in which we disentangle different distribution shifts, namely, semantic shift and covariate shift, that occur in OOD and OSR. For example, to isolate semantic shift, we leverage the recently introduced Semantic Shift Benchmark (SSB) (Vaze et al., 2022) containing ImageNet-scale datasets, in which the original ImageNet-1K (Russakovsky et al., 2015b) is regarded as ‘seen’ closed-set data while ‘unseen’ data is carefully drawn from the disjoint set of ImageNet-21K-P (Ridnik et al., 2021b). For covariate shift, we leverage ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020) to demonstrate distribution shift with respect to the standard ImageNet dataset. Finally, we examine SOTA methods developed for OOD and OSR on this large-scale benchmark to validate whether the findings from the standard (small-scale) datasets still hold on our consolidated large-scale benchmark. Through the large-scale analysis, we surprisingly find that OE struggles to scale to larger benchmarks, while the magnitude-aware scoring rules, MLS (Vaze et al., 2022) and Energy Scoring (Liu et al., 2020), still show promise. We further provide empirical insights by analysing the representations extracted by different models on data under different distribution shifts, which suggests that the strong performance of OE on the standard benchmark is partially attributed to the fact that the auxiliary OOD data used for training has sufficient distribution overlap with the OOD testing data, while it is not straightforward to come up with auxiliary OOD data that reflects the actual distribution shift on large-scale datasets. We believe there are still many more open questions to be answered in the shared space of OOD and OSR, and hope the findings in our work can serve as a starting point for a deeper look into them. 2 PRELIMINARIES AND RELATED WORK Open-set recognition. Previous work (Scheirer et al., 2012) coined “open-set recognition”, the objective of which is to identify unknown classes while classifying the known ones. OpenMax resorts to the Activation Vector (AV) and models the distribution of AVs based on Extreme Value Theory (EVT). Recent works (Ge et al., 2017; Neal et al., 2018b; Kong & Ramanan, 2021) show that generated data from a synthetic distribution can be helpful for improving OSR. OSRCI (Neal et al., 2018b) generates images belonging to the unknown classes but similar to the training data to train an open-set classifier. (Kong & Ramanan, 2021) adversarially trained a discriminator to distinguish closed- from open-set images and introduced real open-set samples for model selection. Prototype-based methods (Chen et al., 2020; 2021) adjust the boundaries of different classes and identify open-set images based on distances to the learned prototypes of known classes. Out of Distribution Detection. (Hendrycks & Gimpel, 2017) formalized the task of out-of-distribution detection and provided a paradigm to evaluate deep learning out-of-distribution detectors using the maximum softmax probability (MSP).
A test sample with a large MSP score is detected as an in-distribution (ID) example rather than an out-of-distribution (OOD) example. ODIN (Liang et al., 2018) and its learnable variant G-ODIN (Hsu et al., 2020) added adversarial perturbations to both ID and OOD samples and employed a temperature scaling strategy on the softmax output to separate them. (Liu et al., 2020) propose the energy score derived from the logit outputs for OOD uncertainty estimation. (Sun et al., 2021) rectified the distribution of per-unit activations in the penultimate layer for ID and OOD data. Outlier Exposure (OE) (Hendrycks et al., 2019) and (Huang et al., 2021) both designed a loss based on the KL divergence between the softmax output and a uniform probability distribution to encourage models to output a uniform softmax distribution on outliers. The former leveraged real OOD data for training while the latter directly employed the vector norm of gradients to perform uncertainty estimation. 3 ANALYSIS OF SOTA BASELINES ON STANDARD BENCHMARKS In this section, we perform cross-evaluation of methods from the OOD and OSR literature. 3.1 EXPERIMENTAL SETUP Methods. We distinguish two categories of shift detection methods: scoring rules (which operate post-hoc on top of pre-trained networks); and specialised training (which changes the optimisation procedure of the networks themselves). For scoring rules, we compare the maximum softmax probability (MSP, (Hendrycks & Gimpel, 2017)), the Maximum Logit Score (MLS, (Vaze et al., 2022)), ODIN (Liang et al., 2018), GODIN (Hsu et al., 2020), Energy scoring (Liu et al., 2020), GradNorm (Huang et al., 2021) and SEM (Yang et al., 2022). We further experiment with ReAct (Sun et al., 2021), an activation pruning technique which can be employed in tandem with any scoring rule. While MLS was developed for OSR (Vaze et al., 2022), other scoring rules were developed for OOD detection. We provide descriptions of each scoring rule in the appendix. For specialised training, we first experiment with the standard cross-entropy (CE) loss. We also use ARPL + CS (Chen et al., 2021) from the OSR literature. This method learns a set of ‘reciprocal points’ which are trained to be far away from all training category embeddings. We note that the reciprocal points can be treated as a linear classification layer, allowing us to use any of the scoring rules mentioned above on top of this representation. Finally, we train models with Outlier Exposure (OE) (Hendrycks et al., 2019) from the OOD literature, where real outlier examples are used during training as examples of OOD. In this case, the model is encouraged to predict a uniform softmax output. Datasets. For the OOD setting, we train models on CIFAR10 (Krizhevsky et al., 2009). As OOD data, we use six common datasets: SVHN (Cimpoi et al., 2014), Textures (Ovadia et al., 2019), LSUN-Crop (Yu et al., 2015), LSUN-Resize (Yu et al., 2015), iSUN (Xu et al., 2015) and Places365 (Zhou et al., 2017). We also perform OOD experiments training with CIFAR100 as ID in Appendix E. For the OSR benchmark, following the standard protocols in (Neal et al., 2018a), we set up four sub-tasks containing CIFAR10, CIFAR+10, CIFAR+50 and TinyImageNet (Le & Yang, 2015). In all cases, models are trained on a subset of categories, with the remaining categories used as ‘unseen’ at test time. The CIFAR+N settings involve training on four classes from CIFAR10 and evaluating on N classes from CIFAR100.
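For reference, the main post-hoc scoring rules compared above operate directly on a classifier's logits; the following is a minimal sketch based on their standard definitions, not the authors' exact implementation:

import torch

def msp_score(logits):
    # Maximum Softmax Probability (Hendrycks & Gimpel, 2017)
    return torch.softmax(logits, dim=-1).max(dim=-1).values

def mls_score(logits):
    # Maximum Logit Score (Vaze et al., 2022)
    return logits.max(dim=-1).values

def energy_score(logits, temperature=1.0):
    # Negative free energy (Liu et al., 2020); larger values indicate in-distribution
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

In all three cases a higher score is interpreted as "more in-distribution", so the same thresholding and AUROC machinery can be reused across scoring rules.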
Note that, for a given method, benchmarking on OOD involves training a single model and evaluating on multiple downstream datasets. In contrast, OSR benchmarks involve training a different model for each evaluation. Training configurations. Due to limited space, we give a detailed description and experimental results of each configuration in Appendix A. Broadly speaking, we train a ResNet-18 on the ID data, with an SGD optimizer and a cosine annealing schedule. We train ARPL + CS and OE largely based on the official public implementation. For the auxiliary outlier dataset in the OE loss, we follow (Hendrycks et al., 2019) and use a subset of 80 Million Tiny Images (Torralba et al., 2008) with 300K images, removing all examples that appear in CIFAR10/100, Places or LSUN classes. Metrics. Following standard practice in both OOD and OSR tasks, we use the Area Under the Receiver Operating characteristic Curve (AUROC) as an evaluation metric throughout this paper. We found that other metrics, such as the FPR95 (Hendrycks & Gimpel, 2017) (also known as the false alarm rate), were correlated strongly with the AUROC. 3.2 QUANTITATIVE RESULTS We present results from our benchmarking in Table 1. Although there is not always one clear winner when it comes to methodology, we observe two main takeaways. Firstly, MLS and Energy tend to perform best across OOD and OSR datasets (Fig. 1(a)). We hypothesize this is because both are sensitive to the magnitude of the feature vector before the networks’ linear layer. This phenomenon was observed in (Vaze et al., 2022), as unfamiliar examples tend to have lower feature norms than ID samples, providing a strong signal for the distribution shift decision. Interestingly, we also find that ReAct, which has been shown to be effective in the literature, does not seem to bring a performance gain for well-trained models with high in-distribution accuracy. Here, we follow (Vaze et al., 2022) to obtain as high in-distribution accuracy as possible for all models. It appears that when the classifier is strong enough, it is difficult for ReAct to bring extra improvement. Secondly, we observe that Outlier Exposure (Hendrycks et al., 2019) provides excellent performance on the OOD benchmarks (Fig. 1(b)), often nearly saturating performance. It also often boosts OSR performance, though to a lesser degree, a phenomenon which we explore in the next section. 3.3 QUALITATIVE ANALYSIS In this section, we qualitatively interrogate the learned representations of Cross-Entropy and Outlier Exposure networks in order to explain the stark performance boost of OE on existing OOD benchmarks. Specifically, we use the value of the maximally activated neuron at various layers to analyze how the networks respond to distribution shift. We pass every related sample through the network, and plot the histogram of maximum activations at every layer in Fig. 2. This is inspired by (Vaze et al., 2022), who show the ‘maximum logit score’ (MLS, the maximum activation at a network’s output layer) can achieve SOTA for OSR. Furthermore, (Dietterich & Guyer, 2022) propose that networks respond to a ‘lack of familiarity’ under distribution shift by failing to light up in-distribution activation pathways. We investigate how activations at various stages of a deep network vary under different ‘unseen’ datasets. Fig. 2 shows histograms of the maximum activations at the outputs from layer 1 to layer 4 of a ResNet-18 (He et al., 2016) trained on CIFAR10 when evaluated on ID test data and data with different shifts.
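Per-layer maximum activations of this kind can be collected with forward hooks; a minimal sketch follows, where the layer names assume a torchvision-style ResNet and the helper name is illustrative, not the authors' code:

import torch

@torch.no_grad()
def collect_max_activations(model, loader, layer_names=("layer1", "layer2", "layer3", "layer4")):
    model.eval()
    records = {name: [] for name in layer_names}
    hooks = []
    modules = dict(model.named_modules())
    for name in layer_names:
        # store the per-sample maximum activation of this block's output
        hooks.append(modules[name].register_forward_hook(
            lambda mod, inp, out, name=name: records[name].append(out.flatten(1).max(dim=1).values.cpu())))
    for images, _ in loader:
        model(images)
    for h in hooks:
        h.remove()
    return {name: torch.cat(vals) for name, vals in records.items()}  # one value per sample and layer

Histogramming the returned values for ID test data and for each shifted dataset reproduces the kind of comparison shown in Fig. 2.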
Note that here we use ‘layer’ to refer to a ResNet block. For OSR data, we find that early layer activations are largely the same as for the ID test data. It is only later in the network that the activation patterns begin to differ. This is intuitive as the low-level textures and statistics of the open-set data do not vary too much from the training images. Furthermore, it has long been known that early filters in CNNs tend to focus on textural details such as edges (Krizhevsky et al., 2012). In contrast, we discover that some OOD datasets, such as SVHN, induce very different activations in the early layers. Our explanation for this phenomenon is analogous: SVHN contains very different image statistics and low-level features to the training dataset of CIFAR10, and hence induces different activations in early layers. Interestingly, some datasets (like SVHN) which showed markedly different early layer activations actually display more similar activations at later layers. Meanwhile, OE models display substantially different intermediate activations. Interestingly, the maximum activations in early layers look very similar to the ID testing data, but tend to be less so later on in the network. It is clear that activations in later layers are more discriminative after using the OE loss when compared with using the CE loss. 4 A CONSOLIDATED BENCHMARKING OF DISTRIBUTION SHIFT Having analyzed methodologies for detecting distribution shift across the OOD and OSR settings, we turn our attention to the benchmarks. While it is clear that OSR specifically aims to detect unseen categories, there is no specification of the type of distribution shift which OOD benchmarks aim to capture, or how they would relate to a real-world scenario. In this section, we propose a lens through which to consolidate types of distribution shift. Specifically, we propose that ‘distribution shift’ can be parameterised along two broad, orthogonal axes: semantic shift and covariate shift. Pure semantic shift is when new categories are encountered, and is the explicit focus of OSR, while covariate shift refers to the setting where the semantics of test images remain constant, but other features change. Formally, similarly to (Wiles et al., 2022), we consider a latent variable model of the data generation process, with latent z: z ∼ p(z), yi ∼ p(yi | z) for i ∈ {1, ..., K}, x ∼ p(x | z). (1) Here, x is an image and yi represents an image attribute. The set of attributes could include traditional features such as ‘color’ or ‘shape’, or refer to more abstract features such as the ‘beak shape’ of a bird. We define a set of semantic attributes, YS, such that the category label of an image is a function of these attributes. Furthermore, we define covariate attributes, YC, which can be freely varied without the category label changing. In this framing, given marginal training distributions ptrain(YS) and ptrain(YC), detecting semantic shift is the task of flagging when ptest(YS) ≠ ptrain(YS). Analogously, we wish to flag covariate shift if ptest(YC) ≠ ptrain(YC). To motivate this setting, consider the perceptual system in an autonomous car, which has been trained to recognize ‘cars’ and ‘pedestrians’ during the day. A semantic shift detector is necessary for when the system encounters a new category, e.g. to flag that ‘bicycle’ is an unknown concept. Meanwhile, a covariate shift detector is necessary for when the system is deployed at night-time, where the categories may be familiar, but the performance of the system could be expected to degrade.
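The distinction can be illustrated with a toy instance of this latent-variable model; the attribute names and distributions below are purely illustrative and are not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)

def sample(split):
    # Semantic attribute Y_S ("shape") determines the label; covariate attribute Y_C ("colour") does not.
    shapes = ["circle", "square"] if split == "train" else ["circle", "square", "triangle"]
    colours = ["red", "blue"] if split == "train" else ["red", "blue", "green"]
    y_s, y_c = rng.choice(shapes), rng.choice(colours)
    return y_s, y_c, y_s  # (semantic attribute, covariate attribute, label)

# A semantic-shift detector should fire because p_test(Y_S) != p_train(Y_S) ("triangle" is new),
# while a covariate-shift detector should fire because p_test(Y_C) != p_train(Y_C) ("green" is new),
# even though green circles still belong to a known category.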
4.1 DATASETS As a starting point, we note that (Vaze et al., 2022) introduced the Semantic Shift Benchmark (SSB), a distribution shift benchmark which isolates semantic shift. We mainly focus on the ImageNet-SSB (Russakovsky et al., 2015a) and CUB-SSB (Wah et al., 2011) datasets. ‘Seen’ classes in ImageNet-SSB are the original ImageNet-1K classes, while ‘unseen’ classes are selected from the disjoint set of ImageNet-21K-P (Ridnik et al., 2021a). Meanwhile, CUB-SSB splits the 200 bird classes in CUB into ‘seen’ and ‘unseen’ categories. Furthermore, the unseen categories are split into Easy and Hard classes by their attributes, and the splitting rule depends on the semantic similarity of every pair of visual attributes in the unknown classes and the training classes. For all the above datasets, categories appearing in the training set are not included in the evaluation set. We further report figures on SCars-SSB (Krause et al., 2013) and FGVC-Aircraft-SSB (Maji et al., 2013) in the appendix. For covariate shift, we propose ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2020) to demonstrate distribution shift with respect to the standard ImageNet dataset. Both datasets contain images from a subset of the ImageNet-1K categories, but with different low-level image statistics. ImageNet-C applies four main corruptions (e.g. noise, blur, weather and digital) with varying intensities to the validation images of ImageNet-1K, while ImageNet-R collects various artistic renditions of foreground classes from the ImageNet-1K dataset. We also choose Waterbirds (Sagawa et al., 2019) to test the model trained on the CUB-SSB ‘Seen’ classes. Waterbirds inserts bird photographs from the CUB dataset into backgrounds picked from the Places dataset (Zhou et al., 2017), meaning it has the same semantic categories as CUB but in different scenery. Discussion. We note that there is no uniquely optimal framing for discussing distribution shift, and here briefly discuss alternate proposals. For instance, (Zhao et al., 2022) propose a fine-grained analysis of the shifts, where the test-time distribution is controlled for specific attributes such as shape and pose. Also related, (Tran et al., 2022) discuss that indications of ‘unfamiliarity’ in a neural network could refer to many things, including confusing classes and sub-population shift. We propose our simple framing as a way to fill the ‘negative space’ left by the semantic shift detection task of OSR. Furthermore, we suggest it is important to study distribution shift in this way as classifiers are optimized to differentiate between one set of features (YS) while in fact being invariant to others (YC). As such, we would expect models to react differently to changes in their distributions. 4.2 QUANTITATIVE ANALYSIS In Tables 2 and 3, we evaluate a selection of previously discussed methods on our large-scale benchmark for both OOD and OSR. Through this large-scale evaluation, we find that in terms of training methods, among CE, ARPL (+CS), and OE, there is no clear winner across the board. It is surprising that the best performer on the previous small-scale benchmarks (see Table 1), OE, appears to be struggling (last two rows in Table 2) on the large-scale benchmark. This contradicts the findings on the small-scale benchmarks, which we analyse in the next subsection.
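All of the numbers in Tables 1–3 are AUROC values obtained by ranking ID test samples against shifted test samples with a scalar score (higher meaning more in-distribution); a minimal sketch of this evaluation, using scikit-learn rather than the authors' code, is:

import numpy as np
from sklearn.metrics import roc_auc_score

def shift_detection_auroc(id_scores, shifted_scores):
    # Label 1 = in-distribution test sample, 0 = shifted (semantic or covariate) test sample.
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(shifted_scores))])
    scores = np.concatenate([id_scores, shifted_scores])
    return roc_auc_score(labels, scores)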
In terms of scoring rules, the magnitude-aware scoring rules, MLS and Energy, consistently produce the best performance regardless of the methods and benchmarks (both standard small-scale ones and our large-scale ones). 4.3 WHY DOES OE UNDERPERFORM ON LARGE-SCALE DATASETS? Here, we investigate why OE underperforms other methods on the large-scale benchmark. One critical difference between OE and other methods is that OE requires auxiliary OOD data for training. Intuitively, if the distribution of the auxiliary OOD training data reflects the distribution of the actual OOD testing data, we would expect better detection performance, while incomplete or biased outlier data may hurt learning. In fig. 3, we visualize the t-SNE projection of the representations for in-distribution (ID) data (i.e., CIFAR10), auxiliary training OOD data (300K (Hendrycks et al., 2019) vs YFCC15M), and different test-time OOD datasets. As can be seen, using the 300K images generally shows a better overlap with the test-time OOD data. Hence, OE trained with 300K as the auxiliary OOD data achieves better performance than the counterpart trained with YFCC15M (table 1 vs table 7). For the experiments on the standard (small-scale) benchmark, we experiment with both the 300K images and YFCC15M as the auxiliary training data, while for the experiments on our large-scale benchmark, we use YFCC15M because the 300K images are not adequate in the large-scale setting. In fig. 4 (first macro row), as a more fine-grained analysis on the standard benchmark, we identify the most confident true positive (TP, i.e., correctly predicted OOD sample) and the most confident true negative (TN, correctly predicted ID sample) with MLS scoring, and then find their top-k nearest samples from the auxiliary training OOD data according to feature similarities. As can be seen, for a TP sample, the NNs retrieved from 300K are more similar to the OOD testing sample than those retrieved from YFCC15M. This is consistent with the finding in fig. 3. Further, we carry out a similar experiment on our large-scale benchmark (see fig. 4, second macro row), by retrieving the NNs from the union of the ID training set and YFCC15M. We observe that the retrieved NNs for TPs are less similar to the TPs, suggesting less similarity between the test-time OOD data and the auxiliary training OOD data. Finally, in fig. 5, for CE and ARPL models, in which no auxiliary OOD training data are used, we also identify the most confident predictions (TP and TN) and the most confusing predictions (FN and FP) on our benchmarking datasets, ImageNet-R and Waterbirds, and retrieve their NNs from the training ID datasets (i.e., ImageNet and CUB). We observe that the current decision paradigm is susceptible to posture, viewpoint, background color/object and even semantic similarity. FNs and FPs are often misled by posture (e.g., FN-1 from the ARPL model in Waterbirds), viewpoint (e.g., FP-5 and FP-4 in ImageNet-1K), background (e.g., FN-1 from the CE model in Waterbirds) and semantic similarity (e.g., cartoon style in FN-1 from the CE model in ImageNet-1K and the shark from the ARPL one). More results and analysis can be found in the appendix.
5 CONCLUSION In this paper, we have provided a consolidated exploration of Out-of-Distribution detection (OOD) and Open-set Recognition (OSR). We performed rigorous cross-evaluation between methods developed for OOD and OSR and identified a strong correlation between their performances. We also proposed a new, large-scale benchmark setting to disentangle OOD and OSR by breaking the distribution shift problem down into covariate shift and semantic shift, suggesting large-scale evaluation protocols for the task. We also showed that the best-performing method on both OSR and OOD does not generalize well to our challenging large-scale benchmark, and found that magnitude-aware scoring rules are generally more reliable than the others. We believe our new benchmark can serve as a better testbed to measure progress in OSR and OOD, and hope our findings in this work can shed light on the shared space of OSR and OOD and foster new developments in this field. A DETAILED TRAINING CONFIGURATION FOR CIFAR BENCHMARKS For training ResNet-18 on CIFAR10, we set the initial learning rate to 0.1 and apply a cosine annealing schedule, using SGD with 0.9 momentum. The weight decay factor is set to 5e-4. The total number of training epochs is 200 and the batch size is 128. For CIFAR100, we also set the initial learning rate to 0.1, but divide it by 5 at the 60th, 120th, and 160th epochs, and train for 200 epochs with a batch size of 128, weight decay 5e-4, and Nesterov momentum of 0.9, following (DeVries & Taylor, 2017). For the ReAct config, the models are trained with a batch size of 128 for 100 epochs. The starting learning rate is 0.1 and decays by a factor of 10 at epochs 50, 75, and 90. For the MLS config, we train the models with a batch size of 128 for 600 epochs with a cosine annealed learning rate, restarting the learning rate to the initial value at epochs 200 and 400. Besides, we linearly increase the learning rate from 0 to the initial value at the beginning. The initial learning rate is 0.1 for CIFAR but 0.01 for TinyImageNet. B EXPERIMENTAL RESULTS ON LARGE-SCALE BENCHMARKS C ACTIVATIONS OF OOD AND OSR AT DIFFERENT LAYERS D INFLUENCE OF TRAINING CONFIGURATIONS FOR OOD PERFORMANCE E EXPERIMENTS OF OOD ON CIFAR-100 F QUALITATIVE SAMPLES FROM THE ID AND OOD DATASETS CIFAR10 G CORRESPONDENCE POINTS FOR IMAGENET-SSB
1. What is the focus of the paper regarding distribution shift detection in open-set recognition (OSR) and out-of-distribution detection (OOD)? 2. What are the strengths and weaknesses of the paper, particularly in terms of its experimental approach and analysis? 3. Do you have any concerns regarding the comparative analysis of existing detection methods? 4. How does the paper categorize and measure distribution shifts in test datasets? 5. How do the proposed methods handle semantic and covariate shifts differently? 6. What are the implications of the study, and how can the results guide future research?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors studied the detection of test-time distribution shift for two applications experimentally: open-set recognition (OSR) and out-of-distribution detection (OOD). A large number of cross-evaluation experiments have been implemented for several state-of-the-art detection algorithms and a new, large-scale benchmark has been proposed. Strengths And Weaknesses Strength: The paper focused on the experimental study of the state-of-the-art works in the OSR and OOD fields. A large number of cross-evaluation experiments have been implemented. Based on the experimental results, observations have been made regarding the performance of different detection methods. ##################### Weaknesses: . Several existing detection methods are introduced for comparison. However, the systematic description and discussion of these algorithms are missing. a) For example, what are the similarities and main differences among these methods? This kind of discussion would definitely help readers better understand the experimental results. b) As these methods are generally motivated differently, how to ensure a fair comparison is a big issue. c) The introduced detection methods were tested on both OSR and OOD. As some methods were developed particularly for one task, e.g., MLS for OSR, how did the authors adapt the algorithm for the other task? Any particular changes? d) For the OOD detection in the experiment, did the authors perform point detection or group detection? Are the OOD detection methods developed for the same purpose? . In Section 4, the authors proposed to disentangle the distribution shift into semantic shift and covariate shift. a) For a distribution-shifted test dataset, it generally includes both semantic and covariate shifts. However, in the experiment, the authors seemed to categorize the test set into either semantic or covariate shift, which may be inaccurate. b) Moreover, as the semantic and covariate shifts can be parameterized, the shift level of each test set can be measured analytically. With an analytical measure of the distribution shift, for each test dataset we would have a sense of how difficult the test dataset is for each detection method. This way, we can better compare the detection methods on different levels of the distribution shift. The current categorization of the test datasets into purely semantic or covariate shift without any measurement of the distribution shift provides a lot less information regarding the comparison. . Were the experiments implemented in a supervised or unsupervised way? Are the introduced detection algorithms all developed in the same way? If not, the comparison may not be fair. . It is observed through the experiments that the magnitude-aware scoring rules MLS and Energy consistently produce the best performance. Any explanation for that? . Based on the experimental results, what is the indication of the study? How can the current results be used to guide future research? Clarity, Quality, Novelty And Reproducibility Clarity and Quality: The presentation of this paper is good. Novelty: The authors compared several state-of-the-art detection methods for OSR and OOD experimentally, which has not been tried before as claimed in the paper. However, as the reviewer mentioned above in the weaknesses part, further discussion of these methods is required. Also, the benchmark introduced in Section 4 needs a lot of refinement.
Reproducibility: The authors have provided details about the training parameters and network architectures in the appendix. These would help to reproduce the experimental results.
ICLR
Title miniF2F: a cross-system benchmark for formal Olympiad-level mathematics Abstract We present miniF2F, a dataset of formal Olympiad-level mathematics problem statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as material from high-school and undergraduate mathematics courses. We report baseline results using GPT-f (Polu & Sutskever, 2020), a neural theorem prover based on GPT-3 (Brown et al., 2020) and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving. 1 INTRODUCTION Shared benchmarks and datasets have historically played a crucial role in driving advances in large-scale applications of deep learning, e.g. in computer vision (Deng et al., 2009) and natural language processing (Wang et al., 2019; Rajpurkar et al., 2016; Paperno et al., 2016). Neural theorem proving is a rapidly developing area which aims to apply techniques from deep learning to interactive theorem proving. To date, most contributions in this area have focused on individual theorem proving systems, each with a separately-implemented mathematics library and with results reported on a dataset-specific test split; examples include the HOList (Bansal et al., 2019a), CoqGym (Yang & Deng, 2019) and LeanStep (Han et al., 2021) theorem proving environments and benchmarks. However, benchmarks from this paradigm are not ideal for measuring the mathematical reasoning ability of neural theorem provers for several reasons. Library-specific train/test splits are siloed by construction, dependent on how theorems and lemmas are split in these libraries, and as such are not directly comparable across systems. Moreover, formal mathematics libraries are closer to software repositories than informal mathematical exposition, and many lemmas are implementation-specific artifacts without precise informal mathematical or cross-system translations. To date, the neural theorem proving community has not organized its efforts around a cross-system benchmark. To address this need and to provide a common resource to research groups working on formal theorem proving, we present miniF2F, a unified cross-system benchmark of formal mathematics of progressively increasing difficulty, centering around Olympiad-level problem statements (AMC, AIME, IMO) as well as high-school and undergraduate maths classes. Both the content and name of miniF2F are inspired by the IMO Grand Challenge (Selsam et al., 2019): to build an AI that can win a gold medal in the International Mathematical Olympiad in a formal-to-formal (F2F) format. More precisely, the agent must receive IMO problems written in a formal mathematical format, and must produce a formal (i.e. machine-checkable) proof for that problem. We intend for miniF2F to serve as a stepping stone for different formal systems towards the IMO Grand Challenge (Selsam et al., 2019), as it is end-to-end verifiable, cross-platform and spans a wide range of difficulty.
While we report baseline results on miniF2F using GPT-f, a language model based on GPT-3 which has been finetuned for theorem proving, language models are not a mandatory approach for Olympiad problems and this assumption is not reflected in miniF2F, preserving the generality and widespread applicability of the benchmark to systems similar to DeepHOL (Bansal et al., 2019a) or Holophrasm (Whalen, 2016). 2 BACKGROUND AND RELATED WORK BENCHMARKS In the closely related field of (first-order) automated theorem proving (ATP), the TPTP (Sutcliffe, 2017) benchmark is a library of test problems in a unified format for ATP systems. In interactive theorem proving, the “Freek 100” (Wiedijk, 2008) tracks progress across various interactive theorem provers on a list of 100 mathematical theorems. Wu et al. (2021) built a simplified formal proof environment INT with an associated synthetic inequality benchmark. Competitions and communal challenges have also spurred development in formal theorem proving. The CADE ATP System Competition (CASC) (Sutcliffe, 2016) is a competition that evaluates the performance of first-order automated theorem proving systems. Proof Ground (Haslbeck et al., 2019), part of the ITP conference, is an interactive proving contest (for humans) that supports Coq, Isabelle, and Lean, and focuses on evaluating the effort of formalizing proofs of given problems within a limited time. Finally, the IMO Grand Challenge (Selsam et al., 2019), a proposal from researchers working on the interactive proof assistant Lean, aims to build a system capable of solving IMO problems in the formal-to-formal format. Due to its convenient framing as a natural language processing task, the domain of informal mathematical reasoning has received more attention than the formal one. MATH (Hendrycks et al., 2021) is a mathematics benchmark comprising 12,500 statements in natural language where exercises are classified into 5 levels of difficulty across various domains. Each exercise is combined with a detailed step-by-step proof in natural language. Scaling state-of-the-art models shows little amelioration on MATH, which requires advanced mathematical reasoning capabilities. miniF2F includes a number of formalized statements from MATH. NaturalProofs (Welleck et al., 2021) is another benchmark of natural proofs in mathematics, containing 32k theorem statements and proofs. It essentially contains the proofs in ProofWiki and other resources. While MATH is more oriented towards mathematics exercises, NaturalProofs is focused on proofs of general mathematics theorems. Saxton et al. (2019) built a mathematics dataset with 2 × 10^6 training examples and 10^4 test examples, presented in a question-answering format where each statement is paired with a question written in natural language and a direct answer without proof. NEURAL THEOREM PROVING HOList (Bansal et al., 2019a;b; Paliwal et al., 2020) provides an environment as well as a benchmark for HOL Light. They also propose various deep reinforcement learning approaches for theorem proving and report a pass rate of 59.91% on their benchmark. Yang & Deng (2019) built CoqGym, a large-scale dataset, which also comes with a learning environment, of 71k human-written proofs in the Coq proof assistant. They report a 30.0% pass rate on the held-out test theorems in CoqGym. Polu & Sutskever (2020) applied a decoder-only transformer similar to GPT-3 (Brown et al., 2020) to proof step prediction in Metamath combined with a log-probability based proof search.
They also proposed a methodology to train a value function to further guide proof search, achieving a 56.22% pass rate on the held-out test set. Large language models were applied to Lean by Han et al. (2021). They created an environment around the Lean prover targeted at machine learning and proposed a dataset extracted from low-level proof artifacts that is shown to boost performance when used as a self-supervised co-training objective. They report a 48.4% pass rate on held-out test statements from mathlib, Lean’s mathematical library (mathlib Community, 2020). 3 MINIF2F BENCHMARK miniF2F is a dataset of manually formalized statements of Olympiad-type problems, aligned in Lean, Metamath, and Isabelle (partial at the time of writing), providing a cross-platform benchmark for formal mathematical reasoning. Olympiad-type problems are of particular interest for comparing automated provers across different formal systems as the theories required to solve them are well identified and they generally do not require the definition of new mathematical concepts (a capability that remains beyond the current neural theorem proving state of the art). The formalized statements in miniF2F are drawn from multiple sources, ranging from high school and undergraduate level exercises to Olympiad problems. miniF2F also covers different sub-subjects in mathematics as well as proof strategies, focusing on the types of exercises whose statements are expressible in most formal systems. This leads to a systematic focus on algebra, number theory and inequalities because, for example, geometry and combinatorial problems are generally challenging to formalize due to only nascent efforts in these areas in most formal systems. The statements in miniF2F are all manually formalized and selected to cover a variety of difficulty levels for both humans and machines. Formal proofs for these statements are optionally attached. miniF2F draws from AIME, AMC, and IMO problems as well as problems from the MATH (Hendrycks et al., 2021) informal dataset. Formalizing problems from the MATH dataset serves two purposes. First, problems in MATH are segmented by difficulty level (from 1 to 5); randomly selecting a subset from each of these difficulty levels allows miniF2F to cover a wider range of difficulty. Second, it provides the community an opportunity to compare the capabilities of formal automated provers to their informal counterparts, as discussed in later sections. miniF2F comprises a test set and a validation set, which are a stratified random split of the statements we formalized such that each set equally covers each problem type and difficulty (when available). Table 1 shows a detailed distribution of these statements. Versioning miniF2F is an evolving effort and new statements will continuously be added. Periodically, we will freeze versions of the benchmark. The current version of the benchmark is v1 (https://github.com/openai/miniF2F/tree/v1) and results in this paper are reported using this version. v1 comprises 244 test and 244 valid statements. The set of statements of each version is guaranteed to remain stable, only allowing fixes in case errors are later discovered. Rules of engagement and License miniF2F is meant to serve as a shared resource for research groups working on applying deep learning to formal theorem proving. There is no formal process to submit evaluation results and researchers are simply invited to cite miniF2F indicating the version used in their evaluations. We also encourage them to contribute proofs found by their approaches back to the benchmark.
The parts of the benchmark associated with each theorem prover (Metamath, Lean, Isabelle) are meant to be licensed in a way that is aligned with the licensing usage associated with the theorem prover’s main library. As a result, the Metamath version of the benchmark is released under the MIT License, while the Lean and Isabelle versions are released under the Apache License. Formalization effort and challenges We found that, for trained practitioners (but not necessarily experts, including students recently introduced to formal systems), formalizing a statement takes about 15 minutes on average, and reviewing a formalized statement about half of that on average. Note that not all exercises are directly or naturally formalizable. In particular, multi-choice questions, word problems, and exercises that require exhibiting a witness or a set as part of the answer present interesting challenges: multi-choice questions (e.g., amc12a_2020_p10 in https://github.com/openai/miniF2F/blob/main/lean/src/test.lean): these problems are generally straightforwardly formalizable by reformulating the statement using the right answer only, and could be made “fair” in a competitive setup by formalizing all possible choices and running automated provers on all of them, attributing points only if a proof of the correct answer is provided. word problems (e.g., mathd_algebra_398 in the same file), where significant information is presented in natural language, generally require non-trivial efforts to be formalized. We generally formalized them by explicitly modeling the mathematical concepts and expressions presented in natural language while attempting as best as possible to preserve the mathematical difficulty of the original problem. Sometimes the formalization work is most of the difficulty associated with the original question; in such cases we would discard the problem entirely. problems that require exhibiting a set or witness (e.g. find all ... such that ...; an example is imo_1997_p5 in the same file) are not directly formalizable. The best approximation we relied on for these was to formalize the statement with the witness or answer provided, turning such exercises into the generation of a proof that the answer is correct, and if needed, that it is the unique one, which is, at times, a much easier exercise. A non-negligible portion of IMO problems are of this kind, which we foresee could become a challenge in the future for fairly comparing humans to automated proving systems in a competitive setup. Porting effort In addition to Metamath, Lean, Isabelle (work in progress) and HOL Light (work in progress), we are eager to extend the coverage of miniF2F to Coq, and will welcome any effort in that direction or to extend miniF2F to further systems. 4 EXPERIMENTS In this section, in order to study baseline performances associated with existing systems, we report pass rates achieved by GPT-f (Polu & Sutskever, 2020) applied to Metamath, GPT-f/PACT (Polu & Sutskever, 2020; Han et al., 2021) applied to Lean, as well as a baseline prover implemented in Lean denoted as the tidy baseline. Pass rates are reported as Pass@N where N is the number of proof search attempts per statement. Pass@N is computed by running more attempts per statement and averaging, to obtain an unbiased, low-variance estimate. 4.1 METAMATH Metamath is powered by a meta-logic system based on a single substitution rule. It is characterized by its simplicity, which makes it a convenient system for studying machine learning approaches. Proofs in Metamath are, as a consequence of the low-level proofsteps, much longer than in other systems as there is no assistance from high-level tactics.
Proofs which are trivial in other systems (e.g. n-digit addition or simple ring arithmetic transformations) can be quite tedious in Metamath. The absence of tactics is both a benefit, as the model sees and learns from everything, and a challenge, as proofs of even simple exercises require hundreds of proofsteps. 4.1.1 GPT-F We report the pass rate of GPT-f applied to Metamath as described in Polu & Sutskever (2020). We use a model with 700M learnable parameters. The model is trained on an updated dump of the set.mm library (but similar synthetic datasets), using the log-probability based search as reported in Table 8 of the GPT-f paper (Polu & Sutskever, 2020). The model achieves a Pass@1 of 1.3% and a Pass@8 of 1.6% on miniF2F-test. As expected, these numbers are quite low due to the length of typical proofs for even simple math exercises. The average proof length is also reported in Table 3. 4.2 LEAN In comparison to Metamath, Lean benefits from a large number of powerful tactics to assist formalization efforts. Typical Lean proofs are much shorter than Metamath’s. This is also a formal system of interest as it has received a lot of attention from the mathematical community: recent theories have successfully been formalized in Lean (Perfectoid Spaces (Buzzard et al., 2019), Liquid Tensor experiment (Scholze, 2020)). Lean is also associated with the IMO Grand Challenge (Selsam et al., 2019), which aims to organize a formal-to-formal challenge during the upcoming IMO competitions. 4.2.1 TIDY BASELINE We use the generic best-first search algorithm presented in PACT (Han et al., 2021). The algorithm works as follows: given a list of tactics L with priorities, we maintain a priority queue Q of tactic states whose priority is given by the priority of the last applied tactic in L that led to it. While Q is not empty, we pop the top tactic state t from Q. We iterate through L and apply each tactic to t. If no error is raised, we capture the returned tactic states from Lean and insert them back into Q. We use the same terminology as in PACT (Han et al., 2021): maximum queue size ω_max and depth limit d_max. We also enforce a budget of i_max iterations of the outer loop. When Q’s size reaches ω_max, all the tactic states to be inserted are discarded. We do not expand the next tactic state when the depth is beyond d_max. This loop is run until a proof is found or the iteration budget is exhausted. For consistency checking, we run the tidy baseline under the same settings and on the same test set as in PACT (Han et al., 2021), except that we don’t set a global timeout. Our implementation achieved a 10.5% pass rate on mathlib’s test split. This result is comparable to the reported 9.9% in PACT given the waived global timeout. In addition to the curated list of tactics L used in PACT (Han et al., 2021), we added 4 high-level tactics HL = [nlinarith, linarith, ring_nf, norm_num] to L with higher priorities than the others. We report our pass rate on miniF2F in Table 2.
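The tidy baseline's best-first loop can be summarised with the following sketch; this is illustrative Python pseudocode, where apply_tactic and the tactic-state interface (including is_solved) stand in for the actual Lean interaction and are not part of the released code:

import heapq
from itertools import count

def tidy_search(initial_state, tactics, apply_tactic, omega_max=128, d_max=8, i_max=1000):
    """Best-first search over tactic states, prioritised by the last applied tactic in L."""
    tiebreak = count()
    queue = [(0, next(tiebreak), 0, initial_state)]   # (priority, tiebreak, depth, tactic state)
    for _ in range(i_max):                            # budget of i_max outer iterations
        if not queue:
            break
        _, _, depth, state = heapq.heappop(queue)
        for prio, tactic in enumerate(tactics):       # index 0 = highest-priority tactic
            try:
                new_states = apply_tactic(state, tactic)   # returns child tactic states
            except Exception:
                continue                              # the tactic raised an error on this state
            for new_state in new_states:
                if new_state.is_solved():
                    return new_state                  # proof found
                if depth + 1 < d_max and len(queue) < omega_max:
                    heapq.heappush(queue, (prio, next(tiebreak), depth + 1, new_state))
    return None                                       # budget exhausted without a proof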
4.2.2 GPT-F/PACT We report the pass rate of GPT-f/PACT as described in Han et al. (2021). We use a model with 700M learnable parameters. The model is trained on an updated dump of the mathlib library (see https://github.com/jasonrute/lean_proof_recording/commit/8499f10c2e10dd533152070ed933c4f0b21ecdc0 and https://github.com/jesse-michael-han/lean-step-public/commit/a2b83c237bfe4d6f1c48bb48bc0769b5940e614a) using the PACT methodology denoted in the paper as mix2 > mix1 + tactic in Figure 6. The model achieves a Pass@1 of 24.6% and a Pass@8 of 29.2% on miniF2F-test. The average proof length is also reported in Table 3. 4.3 DISCUSSION 4.3.1 ACCESS TO HIGH-LEVEL TACTICS One goal of miniF2F is to study the comparison of performance across formal systems. In this section we reported the performance of the same methodology (GPT-f (Polu & Sutskever, 2020)) applied to both Lean and Metamath. Both models are pre-trained on WebMath (Polu & Sutskever, 2020) and respectively trained on datasets extracted from Lean (Han et al., 2021) and Metamath (Polu & Sutskever, 2020). The overall compute deployed at training is comparable in both setups and exactly equivalent at test time, yet the achieved performance appears drastically superior when applied to Lean. We hypothesize that this is mainly explained by the model’s access to high-level tactics when applied to Lean, enabling the model to learn how to guide Lean’s automation in an effective way. This high-level guidance behavior is well exemplified by the following proof of the statement algebra_sqineq_2unitcircatblt1, where the model heavily relies on Lean’s nlinarith solver but provides it with essential premises to successfully guide the search. theorem algebra_sqineq_2unitcircatblt1 (a b : ℝ) (h0 : a^2 + b^2 = 2) : a * b ≤ 1 := begin nlinarith [sq_nonneg a, sq_nonneg b, sq_nonneg (a - b)] end (The statement above (algebra_sqineq_2unitcircatblt1) requires proving the assertion ∀ a, b ∈ ℝ, a^2 + b^2 = 2 → a · b ≤ 1.) In Metamath, GPT-f fails to find a proof as it requires a very large number of steps to appropriately rewrite the goal in a way that is amenable to the use of set.mm’s existing theorems. The tidy baseline also fails to find a proof of that statement as nlinarith is not capable of solving the goal without being passed these extra premises. These results motivate the use of neural theorem proving with formal systems that expose powerful high-level tactics and also suggest the potential of a closer collaboration between formal systems and machine learning practitioners. It also motivates the use of generative models in that setup as the arguments required by high-level tactics to succeed on non-trivial problems generally do not exist in the context of the statement and therefore have to be generated ex nihilo. 4.3.2 COMPARISON OF INFORMAL AND FORMAL SETUPS The use of formal systems for neural theorem proving is often motivated by the role of the formal system as a verifier, enabling more advanced neural search strategies than possible in a fully informal setup where the generation of a model can’t be verified automatically, as well as the access to powerful tactics. Our formalization of a subset of the MATH (Hendrycks et al., 2021) informal dataset provides an interesting approximate quantification of the benefit of having access to a formal system in the context of neural theorem proving. Approximate, because we only formalized a small subset of the MATH statements, but nonetheless useful since we drew uniformly from the 5 difficulty levels.
These results motivate the use of neural theorem proving with formal systems that expose powerful high-level tactics, and they also suggest the potential of a closer collaboration between formal systems and machine learning practitioners. They motivate the use of generative models in that setup as well, since the arguments required by high-level tactics to succeed on non-trivial problems generally do not exist in the context of the statement and therefore have to be generated ex nihilo. 4.3.2 COMPARISON OF INFORMAL AND FORMAL SETUPS The use of formal systems for neural theorem proving is often motivated by the role of the formal system as a verifier, which enables more advanced neural search strategies than are possible in a fully informal setup where the generations of a model cannot be verified automatically, as well as by the access to powerful tactics. Our formalization of a subset of the MATH (Hendrycks et al., 2021) informal dataset provides an interesting approximate quantification of the benefit of having access to a formal system in the context of neural theorem proving. Approximate, because we only formalized a small subset of the MATH statements, but nonetheless useful since we drew uniformly from the 5 difficulty levels. In Hendrycks et al. (2021), the performance of GPT-3 (a larger model than the GPT-f model studied here) is reported to be 6.0% in the algebra category and 3.9% in the number theory category. GPT-f applied to Lean, by comparison, achieves 51.4% in the algebra category and 41.7% in the number theory category. It is also worth noting that the tidy baseline (31.4% in algebra and 30.0% in number theory) likewise far outperforms GPT-3 in an informal setup, demonstrating the benefit of proof automation alone. 4.3.3 LIMITATION Because miniF2F aims to be cross-system, types of problems that are less expressible in certain systems, such as geometry and combinatorial problems, are less covered. This shift in the distribution of problem types may skew the research direction of models benchmarked on miniF2F. We aim to address this and extend the coverage of miniF2F as the benchmark grows, but this also requires work on the corresponding libraries of the other systems. 5 CONCLUSION We presented miniF2F, a dataset of formal Olympiad-level mathematics problem statements, meant to serve as an initial effort towards cross-system benchmarking of neural mathematical reasoning capabilities in formal environments. We reported the performance of the neural theorem prover GPT-f (Polu & Sutskever, 2020) on both the Lean and Metamath parts of miniF2F, as well as the performance of our non-neural tidy baseline applied to Lean. We then discussed these baselines and put them in perspective with previously reported comparable results in informal environments (Hendrycks et al., 2021). Finally, we hope that miniF2F will prove useful to the scientific community working on neural theorem proving and spur advances in this domain. ACKNOWLEDGMENTS We are grateful to Wenda Li and Xavier Martinet for contributing the Isabelle and HOL Light statements currently available in miniF2F, paving the way towards full support of Isabelle and HOL Light, as well as for their feedback and encouragement in the process. We thank Harri Edwards for his comments that greatly improved the manuscript. A EXAMPLE OF STATEMENT IN MINIF2F B PERFORMANCE BY DIFFICULTY ON STATEMENTS FORMALIZED FROM MATH DATASET The MATH dataset assigns a difficulty ranging from 1 to 5 to each of its problems. Tables 5 and 6 report the number of proved statements split by difficulty level for the algebra and number theory categories. More broadly, Lean GPT-f is capable of solving any problem that the tidy baseline or Metamath GPT-f can solve in miniF2F. Qualitatively, the problems on which it fails either require multiple non-trivial reasoning steps (outside a few exceptions, problems requiring more than 2 non-trivial steps of mathematical reasoning are generally out of reach of these baselines) or require a cut introduction that is hard to generate, such as producing a non-trivial witness.
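To illustrate the kind of witness generation mentioned above, the following toy statement (not part of miniF2F) can only be closed once a concrete witness is supplied; inventing such witnesses on non-trivial problems is exactly what these baselines find hard.

-- Illustrative only: the prover must supply the witness 6 before routine
-- automation (norm_num) can discharge the remaining numeric goal.
theorem toy_exists_witness : ∃ n : ℕ, n * n = 36 :=
begin
  use 6,
  norm_num
end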
1. What is the focus and contribution of the paper regarding mathematical problem formalization? 2. What are the strengths of the proposed approach, particularly in terms of using deep learning techniques? 3. What are the weaknesses of the paper, especially regarding the experiments and comparisons with other works? 4. How does the reviewer assess the significance and impact of the paper's content? 5. Are there any suggestions or recommendations for future research directions related to the paper's topic?
Summary Of The Paper Review
Summary Of The Paper The authors present miniF2F, a dataset of formalized mathematical problems drawn from diverse sources including IMO, AIME, AMC, undergraduate, and high school problems. The focus is on algebra, inequalities, and number theory, as those problems are easier to formalize than, for example, geometry or combinatorial problems. The formalization is done in Metamath and Lean, with efforts for Isabelle ongoing. The authors run GPT-f on Metamath and Lean, and the tidy baseline (from the PACT paper) on the dataset and present results. They find that proving in Lean yields vastly better performance than Metamath, which they conjecture is due to access to higher-level tactics in Lean compared to Metamath. Review Theorem proving is, I think, one of the most exciting applications of deep learning. The multiple different frameworks and datasets are a barrier to making progress in this area as a community, and to that end this dataset is a significant step. The methods the authors apply to the dataset are fairly state of the art and serve as a good baseline for someone wanting to make further progress. I do, however, think that some more analysis would be worthwhile. In particular, I think the authors should add the following: (1) a breakdown of the performance on the problems sourced from the MATH dataset by level of difficulty; (2) a qualitative analysis of what kinds of problems the baseline models fail on and whether they fail on similar problems.
ICLR
Title miniF2F: a cross-system benchmark for formal Olympiad-level mathematics Abstract We present miniF2F, a dataset of formal Olympiad-level mathematics problems statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as material from high-school and undergraduate mathematics courses. We report baseline results using GPT-f (Polu & Sutskever, 2020), a neural theorem prover based on GPT-3 (Brown et al., 2020) and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving. 1 INTRODUCTION Shared benchmarks and datasets have historically played a crucial role in driving advances in largescale applications of deep learning, e.g. in computer vision (Deng et al., 2009) and natural language processing (Wang et al., 2019; Rajpurkar et al., 2016; Paperno et al., 2016). Neural theorem proving is a rapidly developing area which aims to apply techniques from deep learning to interactive theorem proving. To date, most contributions in this area have focused on individual theorem proving systems, each with a separately-implemented mathematics library and with results reported on a dataset-specific test split; examples include the HOList (Bansal et al., 2019a), CoqGym (Yang & Deng, 2019) and LeanStep (Han et al., 2021) theorem proving environments and benchmarks. However, benchmarks from this paradigm are not ideal for measuring the mathematical reasoning ability of neural theorem provers for several reasons. Library-specific train/test splits are siloed by construction, dependent on how theorems and lemmas are split in these libraries, and as such are not directly comparable across systems. Moreover, formal mathematics libraries are closer to software repositories than informal mathematical exposition, and many lemmas are implementation-specific artifacts without precise informal mathematical or cross-system translations. To date, the neural theorem proving community has not organized its efforts around a cross-system benchmark. To address this need and to provide a common resource to research groups working on formal theorem proving, we present miniF2F, a unified cross-system benchmark of formal mathematics of progressively increasing difficulty, centering around Olympiad-level problem statements (AMC, AIME, IMO) as well as high-school and undergraduate maths classes. Both the content and name of miniF2F are inspired by the IMO Grand Challenge (Selsam et al., 2019): to build an AI that can win a gold medal in the International Mathematical Olympiad in a formal-to-formal (F2F) format. More precisely, the agent must receive IMO problems written in a formal mathematical format, and must produce a formal (i.e. machine-checkable) proof for that problem. We intend for miniF2F to serve as a stepping stone for different formal systems towards the IMO Grand Challenge (Selsam et al., 2019), as it is end-to-end verifiable, cross-platform and spans a wide range of difficulty. 
While we report baseline results on miniF2F using GPT-f, a language model based on GPT-3 which has been fine-tuned for theorem proving, language models are not a mandatory approach for Olympiad problems and this assumption is not reflected in miniF2F, preserving the generality and widespread applicability of the benchmark to systems similar to DeepHOL (Bansal et al., 2019a) or Holophrasm (Whalen, 2016). 2 BACKGROUND AND RELATED WORK BENCHMARKS In the closely related field of (first-order) automated theorem proving (ATP), the TPTP (Sutcliffe, 2017) benchmark is a library of test problems in a unified format for ATP systems. In interactive theorem proving, the "Freek 100" (Wiedijk, 2008) tracks progress across various interactive theorem provers on a list of 100 mathematical theorems. Wu et al. (2021) built a simplified formal proof environment, INT, with an associated synthetic inequality benchmark. Competitions and communal challenges have also spurred development in formal theorem proving. The CADE ATP System Competition (CASC) (Sutcliffe, 2016) evaluates the performance of first-order automated theorem proving systems. Proof Ground (Haslbeck et al., 2019), part of the ITP conference, is an interactive proving contest (for humans) that supports Coq, Isabelle, and Lean, and focuses on evaluating the effort of formalizing proofs of given problems within a limited time. Finally, the IMO Grand Challenge (Selsam et al., 2019), a proposal from researchers working on the interactive proof assistant Lean, aims to build a system capable of solving IMO problems in the formal-to-formal format. Due to its convenient framing as a natural language processing task, the domain of informal mathematical reasoning has received more attention than the formal one. MATH (Hendrycks et al., 2021) is a mathematics benchmark comprising 12,500 statements in natural language, where exercises are classified into 5 levels of difficulty across various domains. Each exercise is combined with a detailed step-by-step proof in natural language. Scaling state-of-the-art models yields little improvement on MATH, which requires advanced mathematical reasoning capabilities. miniF2F includes a number of formalized statements from MATH. NaturalProofs (Welleck et al., 2021) is another benchmark of natural-language proofs in mathematics, containing 32k theorem statements and proofs. It essentially contains the proofs in ProofWiki and other resources. While MATH is more oriented towards mathematics exercises, NaturalProofs is focused on proofs of general mathematics theorems. Saxton et al. (2019) built a mathematics dataset with 2 × 10^6 training examples and 10^4 test examples, presented in a question-answering format where each statement is paired with a question written in natural language and a direct answer without proof. NEURAL THEOREM PROVING HOList (Bansal et al., 2019a;b; Paliwal et al., 2020) provides an environment as well as a benchmark for HOL Light. They also propose various deep reinforcement learning approaches for theorem proving and report a pass rate of 59.91% on their benchmark. Yang & Deng (2019) built CoqGym, a large-scale dataset of 71k human-written proofs in the Coq proof assistant, which also comes with a learning environment. They report a 30.0% pass rate on the held-out test theorems in CoqGym. Polu & Sutskever (2020) applied a decoder-only transformer similar to GPT-3 (Brown et al., 2020) to proof-step prediction in Metamath, combined with a log-probability based proof search.
They also proposed a methodology to train a value function to further guide proof search, achieving a 56.22% pass rate on the held-out test set. Large language models were applied to Lean by Han et al. (2021). They created an environment around the Lean prover targeted at machine learning and proposed a dataset extracted from low-level proof artifacts that is shown to boost performance when used as a self-supervised co-training objective. They report a 48.4% pass rate on held-out test statements from mathlib, Lean's mathematical library (mathlib Community, 2020). 3 MINIF2F BENCHMARK miniF2F is a dataset of manually formalized statements of Olympiad-type problems, aligned in Lean, Metamath, and Isabelle (partial at the time of writing), providing a cross-platform benchmark for formal mathematical reasoning. Olympiad-type problems are of particular interest for comparing automated provers across different formal systems, as the theories required to solve them are well identified and they generally do not require the definition of new mathematical concepts (a capability that remains beyond the current neural theorem proving state of the art). The formalized statements in miniF2F are drawn from multiple sources, ranging from high-school and undergraduate-level exercises to Olympiad problems. miniF2F also covers different sub-subjects in mathematics as well as proof strategies, focusing on the types of exercises whose statements are expressible in most formal systems. This leads to a systemic focus on algebra, number theory and inequalities because, for example, geometry and combinatorial problems are generally challenging to formalize due to only nascent efforts in these areas in most formal systems. The statements in miniF2F are all manually formalized and selected to cover a variety of difficulty levels for both humans and machines. Formal proofs for these statements are optionally attached. miniF2F draws from AIME, AMC, and IMO problems, as well as problems from the MATH (Hendrycks et al., 2021) informal dataset. Formalizing problems from the MATH dataset serves two purposes. First, problems in MATH are segmented by difficulty level (from 1 to 5); randomly selecting a subset from each of these levels allows miniF2F to cover a wider range of difficulty. Second, it provides the community with an opportunity to compare the capabilities of formal automated provers to their informal counterparts, as discussed in later sections. miniF2F comprises a test set and a validation set, obtained as a stratified random split of the statements we formalized such that each set equally covers each problem type and difficulty (when available). Table 1 shows a detailed distribution of these statements. Versioning miniF2F is an evolving effort and new statements will continuously be added. Periodically, we will freeze versions of the benchmark. The current version of the benchmark is v1 and results in this paper are reported using this version. 1https://github.com/openai/miniF2F/tree/v1 v1 comprises 244 test and 244 valid statements. The set of statements of each version is guaranteed to remain stable, only allowing fixes in case errors are later discovered. Rules of engagement and License miniF2F is meant to serve as a shared resource for research groups working on applying deep learning to formal theorem proving. There is no formal process to submit evaluation results; researchers are simply invited to cite miniF2F, indicating the version used in their evaluations. We also encourage them to contribute proofs found by their approaches back to the benchmark. The parts of the benchmark associated with each theorem prover (Metamath, Lean, Isabelle) are meant to be licensed in a way that is aligned with the licensing usage associated with the theorem prover's main library. As a result, the Metamath version of the benchmark is released under the MIT License, while the Lean and Isabelle versions are released under the Apache License. Formalization effort and challenges We found that, for trained practitioners (but not necessarily experts, including students recently introduced to formal systems), formalizing a statement takes about 15 minutes on average, and reviewing a formalized statement about half of that. Note that not all exercises are directly or naturally formalizable. In particular, multiple-choice questions, word problems, and exercises whose answer requires exhibiting a witness or a set present interesting challenges: multiple-choice questions2 these problems are generally straightforwardly formalizable by reformulating the statement using the right answer only, and could be made "fair" in a competitive setup by formalizing all possible choices and running automated provers on all of them, attributing points only if a proof of the correct answer is provided. word problems3 where significant information is presented in natural language generally require non-trivial effort to formalize. We generally formalized them by explicitly modeling the mathematical concepts and expressions presented in natural language, while attempting as best as possible to preserve the mathematical difficulty of the original problem. Sometimes the formalization work is most of the difficulty associated with the original question; in such cases we would discard the problem entirely. problems that require making a set or witness explicit4 (e.g. find all ... such that ...) are not directly formalizable. The best approximation we relied on for these was to formalize the statement with the witness or answer provided, turning such exercises into the generation of a proof that the answer is correct and, if needed, that it is the unique one, which is at times a much easier exercise. A non-negligible portion of IMO problems are of this kind, which we foresee could become a challenge in the future for fairly comparing humans to automated proving systems in a competitive setup.
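As a hypothetical illustration of this last convention (the statement below is deliberately trivial and not part of miniF2F), an informal exercise of the form "find all real x such that x + 3 = 5" would be formalized with the answer already supplied, so the remaining task is to prove that the answer is correct and unique:

-- Illustrative only: the informal "find all x such that ..." task is turned into
-- proving that the provided answer (x = 2) is exactly the solution set.
theorem toy_find_all_example (x : ℝ) : x + 3 = 5 ↔ x = 2 :=
begin
  split; intro h; linarith
end

Real witness problems in miniF2F, such as imo_1997_p5 mentioned in footnote 4, are of course far harder, but they follow the same statement shape.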
Porting effort In addition to Metamath, Lean, Isabelle (work in progress) and HOL Light (work in progress), we are eager to extend the coverage of miniF2F to Coq, and we will welcome any effort in that direction or to extend miniF2F to further systems. 4 EXPERIMENTS In this section, in order to study baseline performance associated with existing systems, we report pass rates achieved by GPT-f (Polu & Sutskever, 2020) applied to Metamath, by GPT-f/PACT (Polu & Sutskever, 2020; Han et al., 2021) applied to Lean, as well as by a baseline prover implemented in Lean, denoted the tidy baseline. Pass rates are reported as Pass@N, where N is the number of proof search attempts per statement. Pass@N is computed by running more than N attempts per statement and averaging, in order to obtain an unbiased, low-variance estimate. 4.1 METAMATH Metamath is powered by a meta-logic system based on a single substitution rule. It is characterized by its simplicity, which makes it a convenient system for studying machine learning approaches to theorem proving. Proofs in Metamath are, as a consequence of the low-level proofsteps, much longer than in other systems, as there is no assistance from high-level tactics.
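The Pass@N metric described at the start of Section 4 can be estimated as sketched below. This is a minimal sketch of one standard unbiased estimator (the probability that a uniformly chosen size-N subset of the recorded attempts contains at least one successful proof); the authors' exact evaluation procedure may differ.

from math import comb

def pass_at_n(num_attempts, num_successes, n):
    # Unbiased Pass@N estimate for a single statement, given num_attempts >= n
    # independent proof-search attempts of which num_successes found a proof.
    # Equals 1 - P(a uniformly chosen size-n subset of attempts contains no success).
    return 1.0 - comb(num_attempts - num_successes, n) / comb(num_attempts, n)

def benchmark_pass_at_n(results, n):
    # results: list of (num_attempts, num_successes) pairs, one per statement.
    return sum(pass_at_n(a, s, n) for a, s in results) / len(results)

# Hypothetical example: three statements, 32 attempts each,
# with 0, 1 and 5 successful proof searches respectively.
print(benchmark_pass_at_n([(32, 0), (32, 1), (32, 5)], n=8))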
1. What is the main contribution of the paper regarding theorem proving? 2. What are the strengths of the proposed test suite, particularly in its cross-system design and significance in the grand-IMO challenge? 3. Do you have any questions regarding the experiment results and the importance of expert knowledge for theorem proving? 4. What are the meanings of "CUSTOM" and "Induction" in Table 1? 5. How is the distribution of the number of theorems proved across different difficulty levels? 6. What is the average time spent on one problem, and how large a portion of problems could be formalized except for geometry and combinatorial problems? 7. What is the expected ultimate size of miniF2F?
Summary Of The Paper Review
Summary Of The Paper This paper presents miniF2F, a test suite of Olympiad-level theorem-proving problems implemented in Metamath, Lean and Isabelle. MiniF2F contains 488 individual theorem statements formalized from Olympiad math contests. GPT-f models trained on Metamath and Lean are evaluated on this test suite. Review Strengths: (1) Since previous benchmarks of ATP mainly focus on basic math theorems, miniF2F fills the gap of a contest-level test suite for verifying the performance of theorem provers. I think this is an important step towards the goal of the IMO Grand Challenge. (2) The cross-system design of miniF2F provides a good way to compare different formalizations and proving systems. (3) The experiment results demonstrate the importance of expert knowledge for theorem proving. Built with high-level tactics, GPT-f/Lean achieves better results than GPT-f/MM. The formal theorem provers also work better than the natural language-based problem solver. Questions: (1) What are the meanings of "CUSTOM" and "Induction" in Table 1? (2) What is the distribution of the number of theorems proved across different difficulty levels? (3) Personally, I am quite curious about your experience of formalizing these problems. What is the average time spent on one problem? Except for geometry and combinatorial problems, how large a portion of problems could be formalized, and what would be the ultimate size of miniF2F, in your expectation?
ICLR
1. What is the focus and contribution of the paper regarding formal mathematics benchmarks? 2. What are the strengths of the proposed benchmark, particularly in terms of its cross-system and topic coverage? 3. What are the weaknesses and questions raised by the reviewer regarding the benchmark's claims and formalizations? 4. How does the reviewer assess the clarity and readability of the paper's content, particularly in terms of natural language annotations?
Summary Of The Paper Review
Summary Of The Paper This paper presents a new formal mathematics benchmark consisting of 488 statements expressed in three prominent theorem-proving/verification systems. Baseline ATP systems, notably GPT-f/PACT in Lean, are evaluated on this benchmark. Review Strengths: The advantages of this benchmark are that it is cross-system and that it covers a variety of mathematical topics at the Olympiad level. The motivation for the particular assemblage of mathematical topics is solid: miniF2F is intended as an intermediate step toward the IMO as an ATP task, which is out of reach for current systems. This is the first effort to unify various Olympiad topics in one dataset, and the problems cover a wide scope of tactics and difficulties. Weaknesses / questions: The benchmark is not really as cross-system as claimed in the abstract. Only 12% of the training statements are available in Isabelle. How were the Olympiad and Custom problems chosen? The way in which multiple-choice problems are formalized gives additional information to the solver: In Table 4, the AMC problem asks which value of an expression is possible (quantifier on a and b), but the formalization drops the quantifiers and asks to prove equality. This would be incorrect under minor changes in the conditions. It would seem more appropriate to formalize this with "x = -2 or x = 1/2 or ..." as a hypothesis. Problem 22 of AMC 12B 2020 asked to find the maximum value of a certain function, out of five choices. Yet, the theorem amc12b_2020_p22 (Lean) asks to prove that for all values of the argument the function is smaller than the correct maximum. This is clearly insufficient without the prior knowledge of the correct answer (we can imagine that the solver could prove a weaker bound, but exceeded its timeout trying to prove correct bound). (AIME problems are multiple-choice as well, but it is perhaps forgivable not to formalize them as such.) Suggestion to the authors: Code such as Table 4 and the theorem on the circle and hyperbola (p.6) would be more readable with simple natural language annotations describing the meaning of each line, for the benefit of readers who are not familiar with all three systems or do not see the solution.
ICLR
Title miniF2F: a cross-system benchmark for formal Olympiad-level mathematics Abstract We present miniF2F, a dataset of formal Olympiad-level mathematics problems statements intended to provide a unified cross-system benchmark for neural theorem proving. The miniF2F benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem statements drawn from the AIME, AMC, and the International Mathematical Olympiad (IMO), as well as material from high-school and undergraduate mathematics courses. We report baseline results using GPT-f (Polu & Sutskever, 2020), a neural theorem prover based on GPT-3 (Brown et al., 2020) and provide an analysis of its performance. We intend for miniF2F to be a community-driven effort and hope that our benchmark will help spur advances in neural theorem proving. 1 INTRODUCTION Shared benchmarks and datasets have historically played a crucial role in driving advances in largescale applications of deep learning, e.g. in computer vision (Deng et al., 2009) and natural language processing (Wang et al., 2019; Rajpurkar et al., 2016; Paperno et al., 2016). Neural theorem proving is a rapidly developing area which aims to apply techniques from deep learning to interactive theorem proving. To date, most contributions in this area have focused on individual theorem proving systems, each with a separately-implemented mathematics library and with results reported on a dataset-specific test split; examples include the HOList (Bansal et al., 2019a), CoqGym (Yang & Deng, 2019) and LeanStep (Han et al., 2021) theorem proving environments and benchmarks. However, benchmarks from this paradigm are not ideal for measuring the mathematical reasoning ability of neural theorem provers for several reasons. Library-specific train/test splits are siloed by construction, dependent on how theorems and lemmas are split in these libraries, and as such are not directly comparable across systems. Moreover, formal mathematics libraries are closer to software repositories than informal mathematical exposition, and many lemmas are implementation-specific artifacts without precise informal mathematical or cross-system translations. To date, the neural theorem proving community has not organized its efforts around a cross-system benchmark. To address this need and to provide a common resource to research groups working on formal theorem proving, we present miniF2F, a unified cross-system benchmark of formal mathematics of progressively increasing difficulty, centering around Olympiad-level problem statements (AMC, AIME, IMO) as well as high-school and undergraduate maths classes. Both the content and name of miniF2F are inspired by the IMO Grand Challenge (Selsam et al., 2019): to build an AI that can win a gold medal in the International Mathematical Olympiad in a formal-to-formal (F2F) format. More precisely, the agent must receive IMO problems written in a formal mathematical format, and must produce a formal (i.e. machine-checkable) proof for that problem. We intend for miniF2F to serve as a stepping stone for different formal systems towards the IMO Grand Challenge (Selsam et al., 2019), as it is end-to-end verifiable, cross-platform and spans a wide range of difficulty. 
While we report baseline results on miniF2F using GPT-f, a language model based on GPT-3 which has been fine-tuned for theorem proving, language models are not a mandatory approach for Olympiad problems, and no such assumption is built into miniF2F, preserving the generality and widespread applicability of the benchmark to systems similar to DeepHOL (Bansal et al., 2019a) or Holophrasm (Whalen, 2016). 2 BACKGROUND AND RELATED WORK BENCHMARKS In the closely related field of (first-order) automated theorem proving (ATP), the TPTP (Sutcliffe, 2017) benchmark is a library of test problems in a unified format for ATP systems. In interactive theorem proving, the "Freek 100" (Wiedijk, 2008) tracks progress across various interactive theorem provers on a list of 100 mathematical theorems. Wu et al. (2021) built a simplified formal proof environment INT with an associated synthetic inequality benchmark. Competitions and communal challenges have also spurred development in formal theorem proving. The CADE ATP System Competition (CASC) (Sutcliffe, 2016) is a competition that evaluates the performance of first-order automated theorem proving systems. Proof Ground (Haslbeck et al., 2019), part of the ITP conference, is an interactive proving contest (for humans) that supports Coq, Isabelle, and Lean, and focuses on evaluating the effort of formalizing proofs of given problems within a limited time. Finally, the IMO Grand Challenge (Selsam et al., 2019), a proposal from researchers working on the interactive proof assistant Lean, aims to build a system capable of solving IMO problems in the formal-to-formal format. Due to its convenient framing as a natural language processing task, the domain of informal mathematical reasoning has received more attention than the formal one. MATH (Hendrycks et al., 2021) is a mathematics benchmark comprising 12,500 statements in natural language where exercises are classified into 5 levels of difficulty across various domains. Each exercise is combined with a detailed step-by-step proof in natural language. Scaling state-of-the-art models shows little improvement on MATH, which requires advanced mathematical reasoning capabilities. miniF2F includes a number of formalized statements from MATH. NaturalProofs (Welleck et al., 2021) is another benchmark of natural-language mathematical proofs, containing 32k theorem statements and proofs. It essentially contains the proofs in ProofWiki and other resources. While MATH is more oriented towards mathematics exercises, NaturalProofs is focused on proofs of general mathematics theorems. Saxton et al. (2019) built a mathematics dataset with 2 × 10^6 training examples and 10^4 test examples, presented in a question-answering format where each statement is paired with a question written in natural language and a direct answer without proof. NEURAL THEOREM PROVING HOList (Bansal et al., 2019a;b; Paliwal et al., 2020) provides an environment as well as a benchmark for HOL Light. They also propose various deep reinforcement learning approaches for theorem proving and report a pass rate of 59.91% on their benchmark. Yang & Deng (2019) built CoqGym, a large-scale dataset of 71k human-written proofs in the Coq proof assistant, which also comes with a learning environment. They report a 30.0% pass rate on the held-out test theorems in CoqGym. Polu & Sutskever (2020) applied a decoder-only transformer similar to GPT-3 (Brown et al., 2020) to proof step prediction in Metamath, combined with a log-probability based proof search.
They also proposed a methodology to train a value function to further guide proof search, achieving a 56.22% pass rate on the held-out test set. Large language models were applied to Lean by Han et al. (2021). They created an environment around the Lean prover targeted at machine learning and proposed a dataset extracted from low-level proof artifacts that is shown to boost performance when used as a self-supervised co-training objective. They report a 48.4% pass rate on held-out test statements from mathlib, Lean's mathematical library (mathlib Community, 2020). 3 MINIF2F BENCHMARK miniF2F is a dataset of manually formalized statements of Olympiad-type problems, aligned in Lean, Metamath, and Isabelle (partial at the time of writing), providing a cross-platform benchmark for formal mathematical reasoning. Olympiad-type problems are of particular interest for comparing automated provers across different formal systems, as the theories required to solve them are well identified and they generally do not require the definition of new mathematical concepts (a capability that remains beyond the current neural theorem proving state of the art). The formalized statements in miniF2F are drawn from multiple sources, ranging from high school and undergraduate level exercises to Olympiad problems. miniF2F also covers different subfields of mathematics as well as proof strategies, focusing on the types of exercises whose statements are expressible in most formal systems. This leads to a systemic focus on algebra, number theory and inequalities because, for example, geometry and combinatorial problems are generally challenging to formalize due to only nascent efforts in these areas in most formal systems. The statements in miniF2F are all manually formalized and selected to cover a variety of difficulty levels for both humans and machines. Formal proofs for these statements are optionally attached. miniF2F draws from AIME, AMC and IMO problems as well as problems from the MATH (Hendrycks et al., 2021) informal dataset. Formalizing problems from the MATH dataset serves two purposes. First, problems in MATH are segmented by difficulty level (from 1 to 5); randomly selecting a subset from each of these difficulty levels allows miniF2F to cover a wider range of difficulty. Second, it provides the community an opportunity to compare the capabilities of formal automated provers to their informal counterparts, as discussed in later sections. miniF2F comprises a test set and a validation set, which are a stratified random split from the statements we formalized such that each set equally covers each problem type and difficulty (when available). Table 1 shows a detailed distribution of these statements. Versioning miniF2F is an evolving effort and new statements will continuously be added. Periodically, we will freeze versions of the benchmark. The current version of the benchmark is v1 (https://github.com/openai/miniF2F/tree/v1) and results in this paper are reported using this version. v1 comprises 244 test and 244 valid statements. The set of statements of each version is guaranteed to remain stable, only allowing fixes in case errors are later discovered. Rules of engagement and License miniF2F is meant to serve as a shared resource for research groups working on applying deep learning to formal theorem proving. There is no formal process to submit evaluation results and researchers are simply invited to cite miniF2F, indicating the version used in their evaluations. We also encourage them to contribute proofs found by their approaches back to the benchmark.
The parts of the benchmark associated with each theorem prover (Metamath, Lean, Isabelle) are meant to be licensed in a way that is aligned with the licensing usage associated with the theorem prover's main library. As a result, the Metamath version of the benchmark is released under the MIT License, while the Lean and Isabelle versions are released under the Apache License. Formalization effort and challenges We found that, for trained practitioners (but not necessarily experts, including students recently introduced to formal systems), formalizing a statement takes about 15 minutes on average, and reviewing a formalized statement, about half of that on average. Note that not all exercises are directly or naturally formalizable. In particular, multi-choice questions, word problems, and exercises that require making a witness or a set explicit as part of the answer present interesting challenges (examples of each: amc12a_2020_p10, mathd_algebra_398 and imo_1997_p5 in https://github.com/openai/miniF2F/blob/main/lean/src/test.lean): multi-choice questions: these problems are generally straightforwardly formalizable by reformulating the statement using the right answer only, and could be made "fair" in a competitive setup by formalizing all possible choices and running automated provers on all of them, attributing points only if a proof of the correct answer is provided. word problems: where significant information is presented in natural language, these generally require non-trivial effort to formalize. We generally formalized them by explicitly modeling the mathematical concepts and expressions presented in natural language while attempting, as best as possible, to preserve the mathematical difficulty of the original problem. Sometimes the formalization work is most of the difficulty associated with the original question; in such cases we would discard the problem entirely. problems that require exhibiting a set or witness (e.g. find all ... such that ...): these are not directly formalizable. The best approximation we relied on for these was to formalize the statement with the witness or answer provided, turning such exercises into the generation of a proof that the answer is correct and, if needed, that it is the unique one, which is, at times, a much easier exercise. A non-negligible portion of IMO problems are of this kind, which we foresee could become a challenge in the future when fairly comparing humans to automated proving systems in a competitive setup. Porting effort In addition to Metamath, Lean, Isabelle (work in progress) and HOL Light (work in progress), we are eager to extend the coverage of miniF2F to Coq, and will welcome any effort in that direction or to extend miniF2F to further systems. 4 EXPERIMENTS In this section, in order to study baseline performances associated with existing systems, we report pass rates achieved by GPT-f (Polu & Sutskever, 2020) applied to Metamath, GPT-f/PACT (Polu & Sutskever, 2020; Han et al., 2021) applied to Lean, as well as a baseline prover implemented in Lean, denoted as the tidy baseline. Pass rates are reported as Pass@N where N is the number of proof search attempts per statement. Pass@N is computed by running more attempts per statement, averaged to get an unbiased, low-variance estimate. 4.1 METAMATH Metamath is powered by a meta-logic system based on a single substitution rule. It is characterized by its simplicity, which makes it a convenient system to study for machine learning. Proofs in Metamath are, as a consequence of the low-level proof steps, much longer than in other systems, as there is no assistance from high-level tactics.
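As an aside on the Pass@N metric defined above, the following is a minimal Python sketch of one standard unbiased estimator; the exact estimator used by the authors is not specified in this section, so the combinatorial form below and the attempt counts in the example are assumptions rather than a description of their implementation.

import math

def pass_at_n(attempts: int, successes: int, n: int) -> float:
    # Unbiased estimate of the probability that at least one of n attempts succeeds,
    # given `attempts` >= n total proof searches of which `successes` found a proof:
    # 1 - C(attempts - successes, n) / C(attempts, n).
    if attempts - successes < n:
        return 1.0
    return 1.0 - math.comb(attempts - successes, n) / math.comb(attempts, n)

# Per-statement estimate, e.g. 32 searches of which 3 closed the goal; the benchmark-level
# Pass@8 would be the average of this quantity over all statements in miniF2F-test.
print(pass_at_n(32, 3, 8))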
Proofs which are trivial in other systems (e.g. n-digit addition or simple ring arithmetic transformations) can be quite tedious in Metamath. The absence of tactics is both a benefit, as the model sees and learns from everything, and a challenge, as proofs of even simple exercises require hundreds of proof steps. 4.1.1 GPT-F We report the pass rate of GPT-f applied to Metamath as described in Polu & Sutskever (2020). We use a model with 700M learnable parameters. The model is trained on an updated dump of the set.mm library (but with similar synthetic datasets), using the log-probability based search as reported in Table 8 of the GPT-f paper (Polu & Sutskever, 2020). The model achieves a Pass@1 of 1.3% and a Pass@8 of 1.6% on miniF2F-test. As expected, these numbers are quite low due to the length of typical proofs for even simple math exercises. The average proof length is also reported in Table 3. 4.2 LEAN In comparison to Metamath, Lean benefits from a large number of powerful tactics to assist formalization efforts. Typical Lean proofs are much shorter than Metamath's. This is also a formal system of interest as it has received a lot of attention from the mathematical community, as recent theories have successfully been formalized in Lean (Perfectoid Spaces (Buzzard et al., 2019), the Liquid Tensor Experiment (Scholze, 2020)). Lean is also associated with the IMO Grand Challenge (Selsam et al., 2019), which aims to organize a formal-to-formal challenge during the upcoming IMO competitions. 4.2.1 TIDY BASELINE We use the generic best-first search algorithm presented in PACT (Han et al., 2021). The algorithm works as follows: given a list of tactics L with priorities, we maintain a priority queue Q of tactic states whose priority is given by the priority of the last applied tactic in L that led to it. While Q is not empty, we pop the top tactic state t from Q. We iterate through L and apply each tactic to t. If no error is raised, we capture the returned tactic states from Lean and insert them back into Q. We use the same terminology as in PACT (Han et al., 2021): maximum queue size ωmax, depth limit dmax. We also enforce a budget of imax iterations of the outer loop. When Q's size reaches ωmax, all the tactic states to be inserted are discarded. We do not expand the next tactic state when the depth is beyond dmax. This loop is run until a proof is found or the iteration budget is exhausted. For consistency checking, we run the tidy baseline under the same settings and on the same test set as in PACT (Han et al., 2021) except that we do not set a global timeout. Our implementation achieved a 10.5% pass rate on mathlib's test split. This result is comparable to the reported 9.9% in PACT given the waived global timeout. In addition to the curated list of tactics L used in PACT (Han et al., 2021), we added 4 high-level tactics HL = [nlinarith, linarith, ring_nf, norm_num] to L with higher priorities than the others. We report our pass rate on miniF2F in Table 2. 4.2.2 GPT-F/PACT We report the pass rate of GPT-f/PACT as described in Han et al. (2021). We use a model with 700M learnable parameters.
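To make the tidy baseline of Section 4.2.1 concrete before returning to GPT-f/PACT, here is a minimal sketch of the best-first loop it describes. The apply_tactic and is_solved callbacks are hypothetical stand-ins for the actual Lean interaction, and CURATED_PACT abbreviates the full curated tactic list L; only the four added high-level tactics are taken from the text.

import heapq

HIGH_LEVEL = ["nlinarith", "linarith", "ring_nf", "norm_num"]
CURATED_PACT = ["intro", "simp", "norm_cast"]        # placeholder for the full curated list L
TACTICS = HIGH_LEVEL + CURATED_PACT                   # priority = position in this list

def tidy_search(root, apply_tactic, is_solved, omega_max=128, d_max=8, i_max=1000):
    queue = [(0, 0, 0, root)]    # (priority of last tactic, depth, tie-breaker, tactic state)
    tie = 0
    for _ in range(i_max):       # budget of i_max outer iterations
        if not queue:
            break
        _, depth, _, state = heapq.heappop(queue)
        if is_solved(state):
            return state
        if depth >= d_max:       # do not expand beyond the depth limit
            continue
        for priority, tactic in enumerate(TACTICS):
            try:
                children = apply_tactic(state, tactic)   # assumed to raise if Lean reports an error
            except Exception:
                continue
            for child in children:
                if len(queue) < omega_max:               # discard insertions once the queue is full
                    tie += 1
                    heapq.heappush(queue, (priority, depth + 1, tie, child))
    return None                   # iteration budget exhausted without a proof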
The model is trained on an updated dump of the mathlib library (see the lean_proof_recording commit https://github.com/jasonrute/lean_proof_recording/commit/8499f10c2e10dd533152070ed933c4f0b21ecdc0 and the lean-step-public commit https://github.com/jesse-michael-han/lean-step-public/commit/a2b83c237bfe4d6f1c48bb48bc0769b5940e614a) using the PACT methodology denoted in the paper as mix2 > mix1 + tactic in Figure 6. The model achieves a Pass@1 of 24.6% and a Pass@8 of 29.2% on miniF2F-test. The average proof length is also reported in Table 3. 4.3 DISCUSSION 4.3.1 ACCESS TO HIGH-LEVEL TACTICS One goal of miniF2F is to study the comparison of performance across formal systems. In this section we reported the performance of the same methodology (GPT-f (Polu & Sutskever, 2020)) applied to both Lean and Metamath. Both models are pre-trained on WebMath (Polu & Sutskever, 2020) and respectively trained on datasets extracted from Lean (Han et al., 2021) and Metamath (Polu & Sutskever, 2020). The overall compute deployed at training is comparable in both setups and exactly equivalent at test time, yet the achieved performance appears drastically superior when applied to Lean. We hypothesize that this is mainly explained by the model's access to high-level tactics when applied to Lean, enabling the model to learn how to guide Lean's automation in an effective way. This high-level guidance behavior is well exemplified by the following proof of the statement algebra_sqineq_2unitcircatblt1, where the model heavily relies on Lean's nlinarith solver but provides it with essential premises to successfully guide the search. theorem algebra_sqineq_2unitcircatblt1 (a b : ℝ) (h0 : a^2 + b^2 = 2) : a * b ≤ 1 := begin nlinarith [sq_nonneg a, sq_nonneg b, sq_nonneg (a - b)] end (The statement above (algebra_sqineq_2unitcircatblt1) requires proving the assertion ∀ a, b ∈ ℝ, a² + b² = 2 → a · b ≤ 1.) In Metamath, GPT-f fails to find a proof as it requires a very large number of steps to appropriately rewrite the goal in a way that is amenable to the use of set.mm's existing theorems. The tidy baseline also fails to find a proof of that statement, as nlinarith is not capable of solving the goal without being passed the additional premises. These results motivate the use of neural theorem proving with formal systems that expose powerful high-level tactics and also suggest the potential of a closer collaboration between formal systems and machine learning practitioners. It also motivates the use of generative models in that setup, as the arguments required by high-level tactics to succeed on non-trivial problems generally do not exist in the context of the statement and therefore have to be generated ex nihilo. 4.3.2 COMPARISON OF INFORMAL AND FORMAL SETUPS The use of formal systems for neural theorem proving is often motivated by the role of the formal system as a verifier, enabling more advanced neural search strategies than are possible in a fully informal setup, where the generation of a model cannot be verified automatically, as well as by the access to powerful tactics. Our formalization of a subset of the MATH (Hendrycks et al., 2021) informal dataset provides an interesting approximate quantification of the benefit of having access to a formal system in the context of neural theorem proving. Approximate, because we only formalized a small subset of the MATH statements, but nonetheless useful since we drew uniformly from the 5 difficulty levels. In Hendrycks et al.
(2021), the performance of GPT-3 (which is a larger model than the GPT-f model studied here) is reported to be 6.0% in the algebra category and 3.9% in the number theory category. GPT-f applied to Lean by comparison achieves 51.4% in the algebra category and 41.7% in the number theory category. It is also worthwhile to note that the tidy baseline (31.4% in algebra and 30.0% in number theory) also substantially outperforms GPT-3 in the informal setup, demonstrating the benefit of proof automation alone. 4.3.3 LIMITATION Since being cross-system is a goal of miniF2F, types of problems that are less expressible in certain systems, such as geometry and combinatorial problems, are less covered. The shift in the distribution of problem types may skew the research direction of models when benchmarking on miniF2F. Directionally, we aim to fix this and extend the coverage of miniF2F as we grow the benchmark. However, work on the corresponding libraries of other systems is required as well. 5 CONCLUSION We presented miniF2F, a dataset of formal Olympiad-level mathematics problem statements, meant to serve as an initial effort towards cross-system benchmarking of neural mathematical reasoning capabilities in formal environments. We reported the performance of the neural theorem prover GPT-f (Polu & Sutskever, 2020) on both the Lean and Metamath parts of miniF2F as well as the performance of our non-neural tidy baseline applied to Lean. Then, we discussed these baselines and put them in perspective with previously reported comparable results in informal environments (Hendrycks et al., 2021). Finally, we hope that miniF2F will prove to be useful to the scientific community working on neural theorem proving and spur advances in this domain. ACKNOWLEDGMENTS We are grateful to Wenda Li and Xavier Martinet for contributing the Isabelle and HOL Light statements currently available in miniF2F, paving the way towards full support of Isabelle and HOL Light, as well as for their feedback and encouragement in the process. We thank Harri Edwards for his comments that greatly improved the manuscript. A EXAMPLE OF STATEMENT IN MINIF2F B PERFORMANCE BY DIFFICULTY ON STATEMENTS FORMALIZED FROM MATH DATASET The MATH dataset assigns a difficulty ranging from 1 to 5 to each of its problems. Tables 5 and 6 report the number of proved statements, split by difficulty level, for the algebra and number theory categories. More broadly, Lean GPT-f is capable of solving any problem that the tidy baseline or Metamath GPT-f can solve in miniF2F. Qualitatively, the problems on which it fails either require multiple non-trivial reasoning steps (outside a few exceptions, problems requiring more than 2 non-trivial steps of mathematical reasoning are generally out of reach of these baselines) or require a cut introduction that is hard to generate, such as generating a non-trivial witness.
1. What is the main contribution of the paper regarding formal Olympiad-level math problems? 2. What are the strengths of the proposed benchmark, particularly in terms of cross-system support and problem selection? 3. What are the weaknesses of the paper, especially regarding the potential impact on research directions and the gap between Metamath and Lean pass rates? 4. Do you have any questions about the paper's content, such as the formalization process or the inclusion of a subset of MATH benchmark? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a benchmark of formal Olympiad-level math problems focusing on algebra, number theory and inequalities, with cross-system support for Metamath, Lean and Isabelle (in development). The paper evaluates the performance of GPT-f on Metamath and Lean, as well as a custom tidy baseline on Lean. Review strengths The paper provides a decent-sized cross-system Olympiad-level benchmark of 488 formalized problems. Cross-system support for Metamath, Lean and Isabelle makes it possible to compare the automation and tactics of the systems. Olympiad-level problems are also interesting to both researchers and the general public. The inclusion of a formalized subset of the MATH benchmark also enables comparing provers in formal and informal formats. The paper is well-written with a good literature review on theorem proving benchmarks. weaknesses The paper could better justify what types of problems were selected for miniF2F. The benchmark mostly focuses on algebra, number theory and inequalities. Will this benchmark in some way skew the research direction of the community towards only developing algorithms particularly suitable for solving these types of problems, which may or may not generalize well to other types of problems such as geometry problems? It would be interesting to see if the gap in pass rates between Metamath and Lean could be reduced when the models are trained or fine-tuned on a subset of miniF2F in addition to pre-training. This could provide more evidence on whether the gap is mostly due to access to high-level tactics.
ICLR
Title Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation Abstract The current approaches for machine translation usually require a large parallel corpus in order to achieve fluency, as in the case of neural machine translation (NMT), statistical machine translation (SMT) and example-based machine translation (EBMT). The context awareness of phrase-based machine translation (PBMT) approaches is also questionable. This research develops a system that translates English text to Amharic text using a combination of context-based machine translation (CBMT) and recurrent neural network machine translation (RNNMT). We built a bilingual dictionary for the CBMT system to use along with a large target corpus. The RNNMT model has then been provided with the output of the CBMT and a parallel corpus for training. Our combinational approach on the English-Amharic language pair yields a performance improvement over simple neural machine translation (NMT). 1 INTRODUCTION Context-based machine translation (CBMT) is a phrase-based machine translation (PBMT) approach proposed by Miller et al. (2006). Unlike most PBMT approaches, which rely on the statistical occurrence of phrases, CBMT works on the contextual occurrence of phrases. CBMT uses a bilingual dictionary as its main translator and produces phrases to be flooded into a large target corpus. The CBMT approach addresses the problem of parallel corpus scarcity between language pairs. The parallel corpus for the English-Amharic language pair, for instance, is composed of the Bible, the Ethiopian constitution and international documents. These sources use words specific to their domain and overlook phrases and words used in novels, news and similar literary documents. The CBMT uses synonyms of words in place of rare words and relies on a large target corpus and a bilingual dictionary to help with data scarcity (Miller et al., 2006). It is not dependent on a large parallel corpus, unlike most PBMT approaches such as statistical machine translation (SMT) (Brown et al., 1990) and example-based machine translation (EBMT) (Gangadharaiah, 2011). The CBMT, however, is less fluent in translating texts than neural machine translation (NMT). The NMT learns the patterns of human translation using a human-translated parallel corpus. Its translations are more fluent and accurate than all the rest so far when evaluated individually (Popovic, 2017). However, NMT struggles to properly translate rare words and words not commonly used (Wu et al., 2016). In addition, NMT requires a large parallel corpus for training. The aim of this research is to build a system by combining the CBMT with the NMT for English-to-Amharic translation. The combination of PBMT and NMT is more promising than the individual approaches themselves (Popovic, 2017). CBMT's ability to address rare words and the NMT's ability to produce fluent translation, along with their context awareness, make them a complementary pair. The combination is done by providing the NMT with two inputs, one from the source language and the other from the output of the CBMT, to produce the final target sentence. In this paper, we show that this approach utilizes the strength of each method to achieve a significant translation performance improvement over simple NMT. The improvement is mostly dependent on the performance of the CBMT, and in particular on its bilingual dictionary.
2 RELATED WORKS PBMT approaches are mostly used to translate English to Amharic, as in the case of Gasser (2012), Tadesse & Mekuria (2000), Teshome (2000), Besacier et al. (2000), Zewgneh (2017) and Taye et al. (2015). Below we summarize the research with the most significance to ours. The SMT approach takes a parallel corpus as input and selects the most frequent target phrase, based on statistical analysis, for each searched source phrase (Brown et al., 1990). The SMT approach applied to the English-Amharic pair has produced an 18.74% BLEU score (Tadesse & Mekuria, 2000). The SMT has good accuracy in translating all the words in a sentence but it is not fluent (Oladosu et al., 2016). A hybrid of SMT and rule-based machine translation (RBMT) translates and orders the source text based on the grammar rules of the target language and sends it to the SMT for final translation (Yulianti et al., 2011; Labaka et al., 2014). The hybrid approach for the English-Amharic pair has achieved a 15% improvement over SMT on simple sentences and a 20% improvement on complex sentences (Zewgneh, 2017). The hybrid of RBMT and SMT gets fluency from RBMT and accuracy from SMT, but for longer sentences the reordering fails (Oladosu et al., 2016). The CBMT approach has been implemented for the language pair Spanish-English. In CBMT, the source phrases are translated using a bilingual dictionary and flooded into the target corpus. It has achieved a 64.62% BLEU score on the researchers' dataset (Miller et al., 2006). The CBMT outperforms SMT in accuracy and fluency, but the translation of phrases with words not in the bilingual dictionary is weak (Miller et al., 2006). The NMT has been researched by different groups, and here the research by Google's researchers on the language pair English-French is presented. The NMT model is trained using a parallel corpus. The source sentence is encoded as a vector and then decoded with the help of an attention model. Google's NMT model has achieved a 38.95% BLEU score (Wu et al., 2016). The NMT has accuracy and fluency, but it fails to translate the whole sentence and also fails to perform well with rare words (Wu et al., 2016). To solve this, using sub-word units has been suggested (Sennrich et al., 2016), but Amharic has unique traits like "Tebko-lalto", one word with two different meanings, which can only be addressed by using context. The NMT has been modified to translate low-resource languages. One approach uses universal lexical representation (ULR), where a word is represented using universal word embeddings. This benefits low-resource languages which have semantic similarity with high-resource languages (Gu et al., 2018). This achieved a 5% BLEU score improvement over normal NMT. However, most southern Semitic languages like Amharic do not have a strong semantic relative with large resources. NMT has also been modified to work with a monolingual corpus instead of a parallel corpus, using cross-lingual word embeddings (Artetxe et al., 2017). Such an approach achieved a 15.56% BLEU score, which was less than the semi-supervised and supervised approaches, which achieved 21.81% and 20.48% BLEU scores respectively. A combination of NMT and PBMT which takes the output of SMT (a PBMT approach) and the source sentence to train the NMT model has been used for the language pair English-German. It achieved an improvement of 2 BLEU points over basic NMT and PBMT (Niehues et al., 2016).
A combination of NMT and PBMT which takes three inputs (the output of basic NMT, the output of SMT and the output of hierarchical PBMT (HPBMT)) has been implemented for the English-Chinese language pair. It achieved an improvement of 6 BLEU points over basic NMT and 5.3 BLEU points over HPBMT (Zhang et al., 2017). The combination of PBMT and NMT performs better (Popovic, 2017) in terms of accuracy and fluency, but it is dependent on the performance of the chosen PBMT approach. 3 METHODOLOGY In this research, we have selected CBMT and NMT to form a combinational system. This approach addresses the context unawareness of some PBMT approaches like SMT and the need of simple NMT for a large parallel corpus. In our approach, the source sentence in English and the translation output of the CBMT in Amharic have been fed to the NMT's encoder-decoder model, as shown in Figure 1. The NMT model then produces the final Amharic translation. The combination of the CBMT and the NMT follows the mixed approach proposed by Niehues et al. (2016). Their mixed approach feeds the NMT with the source sentence and the output of the PBMT. The research by Zhang et al. (2017) also supports this way of combining different systems. 3.1 CBMT SYSTEM The CBMT outperforms RBMT, SMT and EBMT when it comes to languages with fewer parallel corpora (Miller et al., 2006). It uses a bilingual dictionary, a large target corpus and a smaller source corpus, which is optional. In context-based machine translation, different components work together to produce the translation. Figure 2 shows the flow of data through the different components of the CBMT. The source sentence is converted into N-gram phrases and then translated using the bilingual dictionary. The CBMT's performance is mostly dependent on the efficiency of the dictionary. We have manually built a phrase-based dictionary aided by Google Translate. A synonym finder helps the dictionary's search using WordNet (Soergel, 1998). WordNet is a library with a large lexical database of English words. It provides synonyms of the English words whose Amharic translations are not in the dictionary. In this paper, a maximum N-gram length of four words has been used. Most English phrases that are translated to a single word in Amharic have a length of four or fewer words. For example, the English phrase "everyone who calls on the name of the lord will be saved" has the translations in Output 1 using our dictionary. Output 1: The translated output of the N-grams These translations have been combined into sentences in the same order as the source sentence. Then each sentence is converted into N-grams of variable length. The maximum flooded N-gram length is len(Ngram)/2 + 1 if len(Ngram) ≥ 4, else it is equal to len(Ngram). This provides a good variable range to capture neighboring words in a single N-gram. Output 2 shows sentences formed using the translations in Output 1. The N-grams to be flooded are formed by sliding one word from left to right through the combined sentence. Output 2: The translated output combined into sentences Output 3 shows the N-grams for the translated sentences shown in Output 2. Output 3: The N-grams for the translated sentences The flooder is then responsible for searching the translated phrases in the target corpus and finding the longest N-gram match. For each phrase to be flooded, it selects a phrase in the target corpus with the most translated words and the fewest in-between words amongst the words matched.
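A minimal Python sketch of the N-gram windowing and flooding step described above. The window-length rule is read as floor(len/2) + 1 (a reconstruction of the formula in the text), and the scoring weights in best_match are illustrative assumptions rather than the authors' exact heuristic.

def flood_ngrams(tokens):
    # Sliding window of width floor(len/2) + 1 for phrases of four or more words,
    # otherwise the whole phrase, shifted one word at a time from left to right.
    n = len(tokens)
    width = n // 2 + 1 if n >= 4 else n
    return [tokens[i:i + width] for i in range(n - width + 1)]

def best_match(ngram, corpus_tokens, max_gap=3):
    # Toy flooder: prefer corpus windows containing the most translated words
    # and the fewest in-between (unmatched) words.
    best_window, best_score = None, float("-inf")
    for start in range(len(corpus_tokens)):
        window = corpus_tokens[start:start + len(ngram) + max_gap]
        matched = sum(1 for w in ngram if w in window)
        gaps = len(window) - matched
        score = matched - 0.1 * gaps
        if score > best_score:
            best_window, best_score = window, score
    return best_window

print(flood_ngrams("everyone who calls on the name of the lord".split()))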
The flooder produces the result in Output 4 with the Book of Romans as the target corpus to be flooded. Output 4: Final output of the flooder for a single flooded file The N-gram connector combines the flooded text to find the longest overlap of the translated target text. The overlapping system favors those with the fewest unsearched words found in between the searched N-grams when calculating the overlap. Output 5 shows the final outcome of the N-gram connector. Output 5: Final output of the N-gram connector The system selects the maximum or longest overlapping phrases from the combiner and merges them to form the final target sentence. So finally, the translation for the example English phrase "everyone who calls on the name of the lord will be saved" is . 3.2 NMT SYSTEM In this paper, we have used a recurrent neural network (RNN) for the NMT. In an RNN, the output is fed back to the neuron so that it learns from both the fresh input and its previous output. This improves the RNN's performance because it learns from its errors while training. The neural cell used is the LSTM (long short-term memory), introduced by Hochreiter & Schmidhuber (1997). We have used LSTM cells for both the encoding and decoding of the sentences. For the decoding, a greedy algorithm has been used. The algorithm selects the first-fit word that has the highest probability of occurrence. Probability here refers to the probability of being the translation and of appearing next to the word before it. The system has an attention layer between the encoder layer and the decoder layer. We have used the Luong attention model (Luong et al., 2015). Equations 1 through 4 show the Luong attention model's computation. αts = exp(score(ht, hs)) / Σs'=1..S exp(score(ht, hs')) [Attention Weights] (1) ct = Σs αts hs [Context vector] (2) at = f(ct, ht) = tanh(Wc[ct; ht]) [Attention Vector] (3) score(ht, hs) = ht^T W hs [Luong's multiplicative style] (4) The score function, calculated using Equation 4, is used to compare the output of the decoder (ht) with the output of the encoder (hs) in order to find the attention weights calculated using Equation 1. The attention weights (αts) are then used for the context vector (ct) calculated by Equation 2. This context vector, as well as the output of the decoder, is then used to produce the final output of the decoder using Equation 3. 3.3 COMBINATION OF CBMT AND NMT To combine the two systems, we have made the NMT model accept two inputs. We have used the method proposed by Zoph & Knight (2016) for combining PBMT with NMT by accepting two source inputs. According to Zoph & Knight (2016), having two inputs, where one is the source sentence and the other a translation of the source to another language different from the target, helps the NMT produce a better result. The source sentence and the sentence translated using the CBMT are encoded separately and are given to the attention layer. The attention layer focuses on the two inputs at the same time rather than separately. There is a single decoder, which receives the output of the attention layer and provides the final translation. Following Zoph & Knight (2016), the final outputs of the encoders (ht) are concatenated and a linear transformation, activated by tanh, is applied to the concatenated output, as shown in Equation 5. On the other hand, the final states (ct) are simply added, as shown in Equation 6.
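Before continuing with the combination, here is a minimal NumPy sketch of the Luong attention computation in Equations 1-4 for a single decoder step. The dimensions and random inputs are illustrative, and the weight matrices W and Wc stand in for the learned parameters of the trained model.

import numpy as np

def luong_attention(h_t, H_s, W, W_c):
    # h_t: decoder output (d,), H_s: encoder outputs (S, d),
    # W: (d, d) score matrix, W_c: (d, 2d) output projection.
    scores = H_s @ (W.T @ h_t)                    # Eq. 4: score(ht, hs) = ht^T W hs for every s
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                          # Eq. 1: attention weights (softmax over s)
    c_t = alpha @ H_s                             # Eq. 2: context vector
    a_t = np.tanh(W_c @ np.concatenate([c_t, h_t]))   # Eq. 3: attention vector
    return a_t, alpha

rng = np.random.default_rng(0)
d, S = 8, 5
a_t, alpha = luong_attention(rng.normal(size=d), rng.normal(size=(S, d)),
                             rng.normal(size=(d, d)), rng.normal(size=(d, 2 * d)))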
In the attention, the different context vectors are calculated separately and concatenated to produce the final output of the attention vector, based on the Luong attention mechanism (Luong et al., 2015), using Equation 7. h = tanh(Wc[h1; h2]) (5) c = c1 + c2 (6) ht = tanh(Wc[ht; ct^1; ct^2]) (7) 4 EXPERIMENT SETUP We evaluate the proposed approach for the language pair English-Amharic using the same training parameters for both the basic NMT and the combinational system. The encoder-decoder setup has 1024 LSTM cells or hidden units with 1024-dimensional word embeddings, and the models have been trained for 100 epochs. 4.1 CORPUS USED This research uses the New International Version Bible because both the Amharic and English versions are translated from the same Dead Sea scrolls. This makes it more accurately parallel than other versions of the Bible translation. The whole New Testament of the Bible has been used as a corpus, providing a total of 8,603 phrases and sentences. We have used two books of the Bible, Paul's letter to the Romans (Romans) and the Gospel according to Mark (Mark), as a test set. Google Translate has been used as the main agent of translation for the manually built bilingual dictionary used by the CBMT. 77% of the total 6,793 vocabulary words have been translated using Google. In addition to Google Translate, we have done a manual translation of 1,426 vocabulary words in the book of Romans and another 150 vocabulary words using the Bible. Manual translation here refers to the translation of each word by a human, using every entry it has in the Bible. Figure 5 shows the outcome of such translation for the word Acknowledge. Manual translation helps to address the variations in Amharic translated words caused by gender (female or male), plural forms and the different persons (first, second and third person). English words also have different Amharic translations based on their context, as shown in Figure 3. Acknowledge has been translated into four main stem words. 4.1.1 DATASET We have fed the same dataset to all systems with minor variations. In the CBMT, we have used the book of Mark and the book of Romans as test sets. The flooded texts for the book of Romans were the book of Romans itself and the Pauline epistles without Romans. The flooded texts for the book of Mark were the book of Mark itself and the Gospels without Mark. The books have been flooded to themselves in order to evaluate the performance of the CBMT when the searched text is found in the flooded text and also to see the impact of the bilingual dictionary on the CBMT. The combinational system has two different test sets and different models. The first test set has the output of the CBMT and the source sentence as inputs for the NMT. The second test set gives the original target text and the source sentence as inputs to the NMT models, and we have called this the ideal approach. This was done to see the impact of the CBMT output and of the errors introduced by the CBMT on the combinational system. In the basic NMT and the combinational system, a similar dataset as in the CBMT is used. We have used the Pauline epistles without Romans to train a basic NMT model and the combinational model. Then we have tested the models using the book of Romans as the test or holdout set. We have used 10-fold cross-validation (Dietterich, 1998) to train and test the basic NMT model and the combinational model with Romans.
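Returning to the encoder combination of Section 3.3, Equations 5-7 above can be sketched as follows; shapes and weight matrices are illustrative assumptions, and the helper names are hypothetical.

import numpy as np

def combine_final_outputs(h1, h2, W_h):
    # Eq. 5: concatenate the two encoders' final outputs, project and squash with tanh.
    return np.tanh(W_h @ np.concatenate([h1, h2]))

def combine_final_states(c1, c2):
    # Eq. 6: the final cell states are simply added.
    return c1 + c2

def combined_attention_vector(h_t, c_t1, c_t2, W_c):
    # Eq. 7: attention vector built from the decoder output and both context vectors.
    return np.tanh(W_c @ np.concatenate([h_t, c_t1, c_t2]))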
In a similar manner, we have used 10-fold cross-validation (Dietterich, 1998) to train the basic NMT model and the combinational model with the book of Mark. We have also used holdout validation (Raschka, 2018) with a random 80% training and 20% test data split, alongside the 10-fold cross-validation, for both Mark and Romans to obtain a more general representation of the results. 5 EVALUATION METHOD We have measured the translation performance based on the fullness of the translation (whether the system translates all words), context awareness (whether the translation is true to the context) and coherence (whether the translated sentence has a smooth flow of words in terms of syntax). In this research, the BLEU score is the chosen method of evaluation. It covers all the above-listed criteria. The quality of translation is measured by the correspondence between a machine's output and that of a human (Kishore et al., 2002). The BLEU score is defined in the range between 0 and 1 (or, as a percentage, between 0 and 100), where 1 is a perfect match with the reference and 0 means no words matched. 6 RESULTS AND DISCUSSION This section provides the results obtained, along with a brief explanation of the factors and the components. 6.1 CBMT RESULT AND DISCUSSION The system has been tested using a custom-made dictionary built with Google Translate and manual translation. We have generated the vocabulary of the dictionary from the English version of the NIV Bible. Table 1 depicts the CBMT test results obtained using the BLEU score evaluation method. We have implemented manual translation for the book of Romans on about 80% of its total vocabulary. Hence, it has a better performance yield than the book of Mark, whose translation was solely dependent on Google Translate. This is so both when they are flooded to the text that contained them (by 48%) and when they are flooded to the text without them (by 6%). However, the translation of Romans does not produce 100%, as would be expected when it is part of the flooded document. This is mainly because the system selects the overlapping N-gram based on the number of words matched: two consecutive phrases that have a high overlap but are not the correct ones may be selected. 6.2 NMT RESULT AND DISCUSSION The NMT test results obtained using the BLEU score evaluation method are depicted in Table 2. The test result obtained from Mark was better than that from Romans by an average of 1.62%. Although the difference is insignificant, it is attributed to Mark's writing having similar words, unlike the diverse word selection in Romans (Clay, 2018). 6.3 CBMT AND NMT COMBINATION RESULT AND DISCUSSION There are two test cases for this section. In the first case, we have given the NMT the source sentence and the output of the CBMT as inputs, per the proposed methodology. Table 3 shows the results obtained from such a setup. In the second case, we have given the NMT the English source sentence and the original Amharic text as inputs, creating an ideal system. The test results obtained using the BLEU score evaluation method are depicted in Table 4 for the ideal combinational system. In the first case, when the CBMT output is used as an input to the NMT, the book of Romans performed better than the book of Mark by 1.71%. The CBMT output for Romans is better than that for the book of Mark, and its impact has propagated to the combinational system. In the ideal-case scenario the results are more or less the same. The result for the book of Romans was better than that for the book of Mark by only 0.63%.
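For reference, corpus-level BLEU of the kind reported in the tables of Section 6 can be computed with NLTK as sketched below; amharic_references and system_outputs are placeholder variables standing in for the actual test data, and whitespace tokenization is a simplification for Amharic.

from nltk.translate.bleu_score import corpus_bleu

amharic_references = ["reference translation of sentence one here", "reference translation of sentence two here"]   # placeholder data
system_outputs = ["system translation of sentence one here", "system translation of sentence two here"]             # placeholder data

# corpus_bleu expects, per sentence, a list of tokenized references and one tokenized hypothesis.
references = [[ref.split()] for ref in amharic_references]
hypotheses = [hyp.split() for hyp in system_outputs]
print(100 * corpus_bleu(references, hypotheses))   # percentage, as reported in the result tables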
6.4 DISCUSSION OF ALL RESULTS The averages of the results obtained from the systems have been calculated and are shown in Table 5 for comparison. The ideal combinational system, which takes the original target Amharic text as the second input, has performed better on average, with an 11.34 BLEU score gain over the NMT. The ideal system, however, did not outperform the CBMT on average but produced results in the same range. The combinational system with the CBMT output given as the second input for the NMT achieves a 2.805 BLEU score point gain over the simple NMT. The CBMT, without being provided the target flooded data, has performed better than the simple NMT by 14.23 BLEU points. 7 CONCLUSION The research set out to find a combinational system with components that complement each other when translating a document from English to Amharic. We have proposed the CBMT and the NMT as the complementary pair in terms of parallel data size, accuracy, context awareness and coherence of translation. The CBMT system performed better than the basic NMT and the combinational system given the same size of data. However, the ideal combination of the CBMT and NMT has a BLEU score in the range of the CBMT while outperforming the simple NMT by 11.34 BLEU points. For the book of Mark, whose writing resembles the other Gospels, the ideal combination outperformed all other systems. This entails that, with a small increase in the parallel corpus for the NMT, the ideal system will outperform both individual systems. The output from the CBMT has a great impact on the performance of the combinational systems, as seen from the performance of the proposed combinational system compared to the ideal system. The proposed combinational system has still outperformed the basic NMT by 2.805 BLEU score points in spite of the errors introduced by the CBMT. Therefore, a CBMT with a well-built bilingual dictionary that produces close-to-ideal output, along with a well-trained NMT trained on sufficient data, makes a fluent combinational system that outperforms a simple NMT and a basic CBMT system. In conclusion, the study suggests the combinational system for English-to-Amharic translation.
1. What are the strengths and weaknesses of the proposed method in the paper? 2. How does the reviewer assess the novelty and relevance of the paper's content regarding its claims and comparisons with other works? 3. Are there any concerns or questions regarding the methodology, experiments, or results presented in the paper? 4. How does the reviewer evaluate the significance and impact of the paper's findings in the field of machine translation?
Review
Review For the low-resource pair English-Amharic, the authors propose to combine context-based machine translation (CNMT), which is built by using a bilingual dictionary and then connecting the resulting n-grams, with a neural MT system. The source sentence, as well as the CNMT output, are used as inputs to the RNN-based translation system. I vote for rejection because the paper makes some unfounded claims, misses important related work, has some methodological issues and presents unconvincing results. The paper claims that CBMT is a new approach, but it dates from 2006. The authors say that it outperforms other MT approaches, but a more recent reference would be needed. While neural machine translation may sometimes struggle with rare words, using sub-word units may help alleviate the issue (Sennrich et al. Neural machine translation of rare words with subword units). The claim that RNNs learn from their previous mistakes is also unclear. It's true in the sense that backpropagation is learning from errors, but using previous reference outputs can cause exposure bias (Ranzato et al. Sequence Level Training with Recurrent Neural Networks). The paper fails to cite related work, in particular on low-resource NMT (e.g. Gu et al. Universal Neural Machine Translation for Extremely Low Resource Languages) and unsupervised translation (e.g. Artetxe et al. Unsupervised Neural Machine Translation). The CBMT system is built on top of the Google Translate English-Amharic system. However, that model may have seen the test data during training. By combining CBMT with NMT, the authors obtain better results than NMT alone, but worse than with CBMT only. As such, the usefulness of the approach in a very low-resource scenario is unclear. Minor points: Some typos could be corrected: BLUE -> BLEU, Loung -> Luong, weather -> whether
ICLR
Title Context Based Machine Translation With Recurrent Neural Network For English-Amharic Translation Abstract The current approaches for machine translation usually require large set of parallel corpus in order to achieve fluency like in the case of neural machine translation (NMT), statistical machine translation (SMT) and example-based machine translation (EBMT). The context awareness of phrase-based machine translation (PBMT) approaches is also questionable. This research develops a system that translates English text to Amharic text using a combination of context based machine translation (CBMT) and a recurrent neural network machine translation (RNNMT). We built a bilingual dictionary for the CBMT system to use along with a large target corpus. The RNNMT model has then been provided with the output of the CBMT and a parallel corpus for training. Our combinational approach on English-Amharic language pair yields a performance improvement over the simple neural machine translation (NMT). 1 INTRODUCTION Context based machine translation (CBMT) is a phrase-based machine translation (PBMT) approach proposed by Miller et al. (2006). Unlike most PBMT approaches that rely on statistical occurrence of the phrases, CBMT works on the contextual occurrence of the phrases. CBMT uses bilingual dictionary as its main translator and produces phrases to be flooded into a large target corpus. The CBMT approach addresses the problem of parallel corpus scarcity between language pairs. The parallel corpus set for English-Amharic language pair, for instance, composes of the Bible, the Ethiopian constitution and international documents. These sources use words specific to their domain and overlook phrases and words used by novels, news and similar literary documents. The CBMT uses synonyms of words in place of rare words and rely on large target corpus and a bilingual dictionary to help with data scarcity(Miller et al., 2006). It is not dependent on large parallel corpus like most PBMT such as the statistical machine translation (SMT)(Brown et al., 1990) and the example-based machine translation EBMT(Gangadharaiah, 2011). The CBMT, however, fails in fluently translating texts compared to the neural machine translation (NMT). The NMT learns the pattern of humans’ translation using human translated parallel corpus. Its translations are more fluent and accurate than all the rest so far when evaluated individually (Popovic, 2017). However, NMT struggles to translate properly rare words and words not commonly used(Wu et al., 2016). In addition, NMT requires large parallel corpus for training. The aim of this research is to build a system by combining the CBMT with the NMT for English to Amharic translation. The combination of PBMT and NMT is the future and most promising than the individual approaches themselves (Popovic, 2017). CBMT’s ability to address rare words and the NMT’s ability to produce fluent translation along with their context awareness makes them complementary couple. The combination is done by providing the NMT with two inputs, one from the source language and the other from the output of the CBMT to produces the final target sentence. In this paper, we show that this approach utilizes the strength of each method to achieve a significant translation performance improvement over simple NMT. The improvement is mostly dependent on the performance of the CBMT and mostly on the bilingual dictionary of the CBMT. 
2 RELATED WORKS PBMT approaches are mostly used to translate English to Amharic as in the case of Gasser (2012),Tadesse & Mekuria (2000), Teshome (2000), Besacier et al. (2000), Zewgneh (2017) and Taye et al. (2015) . Below we summarize the researches with most significance to ours. The SMT approach takes a parallel corpus as an input and it selects the most frequent target phrase based on statistical analysis for each searched source phrase (Brown et al., 1990). The SMT approach applied to the English-Amharic pair has produced 18.74 % BLEU score (Tadesse & Mekuria, 2000). The SMT has good accuracy in translating all the words in a sentence but it is not fluent (Oladosu et al., 2016). Hybrid of SMT and rule based machine translation (RBMT) translates and orders the source text based on the grammar rules of the target language and sends it to the SMT for final translation(Yulianti et al., 2011)(Labaka et al., 2014). The hybrid approach for English-Amharic pair has achieved a 15% improvement over SMT on simple sentence and 20% improvement for complex sentence(Zewgneh, 2017). Hybrid of RBMT and SMT gets fluency from RBMT and accuracy from SMT but for longer sentences, the reordering fails(Oladosu et al., 2016). The CBMT approach has been implemented for the language pair Spanish-English. In CBMT, the source phrases are translated using bilingual dictionary and flooded to target corpus. It has achieved 64.62% BLEU score for the researchers’ dataset(Miller et al., 2006). The CBMT outperforms SMT in accuracy and fluency but translation of phrases with words not in the bilingual dictionary is weak (Miller et al., 2006). The NMT has been researched by different groups and here the research by Googles’ researchers on the language pair English-French is presented. The NMT model is trained using parallel corpus. The source sentence is encoded as a vector and then decoded with the help of an attention model. Googles’ NMT model has achieved 38.95% BLEU score(Wu et al., 2016). The NMT has accuracy and fluency but it fails to translate the whole sentence and also fails to perform well with rare words(Wu et al., 2016). To solve this using sub-word units has been suggested (Sennrich et al., 2016) but Amharic has unique treats like ”Tebko-lalto”, one word with two different meanings, which can only be addressed by using context. The NMT has been modified to translate low resourced languages. One approach uses universal lexical representation (ULR) were a word is represented using universal word embeddings. This benefits low resource languages which have semantic similarity with high resourced languages (Gu et al., 2018). This achieved 5% BLEU score improvement over normal NMT. However, most southern Semitic languages like Amharic do not have a strong semantic relative with large resource. NMT has also been modified to work with monolingual corpus instead of parallel corpus using cross-lingual word embedding(Artetxe et al., 2017). Such an approach achieved a 15.56% BLEU score which was less that the semi-supervised and supervised which achieved 21.81% BLEU score and 20.48% BLEU score respectively. Combination of NMT and PBMT which takes the output of SMT (a PBMT) and the source sentence to train the NMT model has been used for the language pair English-German. It has achieved 2 BLEU points over basic NMT and PBMT(Niehues et al., 2016). 
A combination of NMT and PBMT which takes three inputs, namely the output of basic NMT, the output of SMT and the output of hierarchical PBMT (HPBMT), has been implemented for the English-Chinese language pair. It achieved 6 BLEU points over basic NMT and 5.3 BLEU points over HPBMT (Zhang et al., 2017). The combination of PBMT and NMT performs better (Popovic, 2017) in terms of accuracy and fluency, but it is dependent on the performance of the chosen PBMT approach. 3 METHODOLOGY In this research, we have selected CBMT and NMT to form a combinational system. This approach addresses the limited context awareness of some PBMT approaches like SMT and the need for a large parallel corpus of simple NMT. In our approach, the source sentence in English and the translation output of the CBMT in Amharic are fed to the NMT's encoder-decoder model, as shown in Figure 1. The NMT model then produces the final Amharic translation. The combination of the CBMT and the NMT follows the mixed approach proposed by Niehues et al. (2016). Their mixed approach feeds the NMT with the source sentence and the output of the PBMT. The research by Zhang et al. (2017) also supports this way of combining different systems. 3.1 CBMT SYSTEM The CBMT outperforms RBMT, SMT and EBMT when it comes to languages with little parallel corpora (Miller et al., 2006). It uses a bilingual dictionary, a large target corpus and, optionally, a smaller source corpus. In context based machine translation, different components work together to produce the translation. Figure 2 shows the flow of data through the components of the CBMT. The source sentence is converted into N-gram phrases, which are then translated using the bilingual dictionary. CBMT's performance depends mostly on the quality of the dictionary. We have manually built a phrase-based dictionary aided by Google Translate. A synonym finder based on WordNet (Soergel, 1998) supports the dictionary lookup. WordNet is a library with a large lexical database of English words. It provides synonyms for the English words whose Amharic translations are not in the dictionary. In this paper, a maximum N-gram length of four has been used, since most English phrases that are translated to a single Amharic word have a length of four words or fewer. For example, the English phrase "everyone who calls on the name of the lord will be saved" has the translations in Output 1 using our dictionary. Output 1: The translated output of the N-grams These translations have been combined into sentences in the same order as the source sentence. Then each sentence is converted into N-grams of variable length. The maximum flooded N-gram length is len(Ngram)/2 + 1 if len(Ngram) ≥ 4; otherwise it is equal to len(Ngram). This provides a good variable range for capturing neighboring words in a single N-gram. Output 2 shows sentences formed using the translations in Output 1. The N-grams to be flooded are formed by sliding one word from left to right through the combined sentence. Output 2: The translated output combined into sentences Output 3 shows the N-grams for the translated sentences shown in Output 2. Output 3: The N-grams for the translated sentences The flooder is then responsible for searching the translated phrases in the target corpus and finding the longest N-gram match. For each phrase to be flooded, it selects a phrase in the target corpus with the most translated words and the fewest in-between words amongst the words matched; a simplified sketch of this search is given below.
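To make this step concrete, the following is a minimal Python sketch (ours, not the authors' code) of the N-gram generation and flooding search just described. The function names, the plain-dictionary lookup and the scoring rule (matched words minus a penalty for in-between gaps) are simplifications chosen for illustration; the actual system also uses WordNet synonym lookup and a richer overlap computation.

def make_ngrams(tokens, max_len=4):
    """Slide windows of length 1..max_len over the token list (Subsec. 3.1)."""
    ngrams = []
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            ngrams.append(tokens[i:i + n])
    return ngrams

def flood(translated_ngram, corpus_tokens, window=10):
    """Find the corpus span containing the most words of the translated N-gram
    with the fewest unmatched words in between (simplified flooder)."""
    best_span, best_score = None, float("-inf")
    for start in range(len(corpus_tokens)):
        span = corpus_tokens[start:start + window]
        matched = [w for w in translated_ngram if w in span]
        if not matched:
            continue
        positions = [span.index(w) for w in matched]
        gaps = (max(positions) - min(positions) + 1) - len(matched)
        score = len(matched) - 0.5 * gaps
        if score > best_score:
            best_score = score
            best_span = span[min(positions):max(positions) + 1]
    return best_span

# Usage: translate each source N-gram word by word with the bilingual dictionary,
# then flood it into the (tokenized) Amharic target corpus.
bilingual_dict = {}      # hypothetical entries, built as described in Subsec. 4.1
target_corpus = []       # tokenized Amharic corpus, e.g. the Book of Romans
source = "everyone who calls on the name of the lord will be saved".split()
floods = [flood([bilingual_dict.get(w, [w])[0] for w in ng], target_corpus)
          for ng in make_ngrams(source)]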
The flooder produces the result in Output 4 with the Book of Romans as the target corpus to be flooded. Output 4: Final output of the flooder for a single flooded file The N-gram connector combines the flooded text to find the longest overlap of the translated target text. When calculating the overlap, it favors matches with the fewest unsearched words appearing in between the searched N-grams. Output 5 shows the final outcome of the N-gram connector. Output 5: Final output of the N-gram connector The system selects the maximum or longest overlapping phrases from the combiner and merges them to form the final target sentence. So finally, the translation for the example English phrase "everyone who calls on the name of the lord will be saved" is . 3.2 NMT SYSTEM In this paper, we have used a recurrent neural network (RNN) for the NMT. In an RNN, the output is fed back into the network so that it learns from both the current input and its previous output. This improves the RNN's performance because it learns from its errors while training. The neural cell used is the LSTM (long short-term memory), introduced by Hochreiter & Schmidhuber (1997). We have used LSTM cells for both the encoding and the decoding of the sentences. For the decoding, a greedy algorithm has been used: at each step it selects the word with the highest probability, where probability refers to the probability of being the translation and of appearing next to the previously generated word. The system has an attention layer between the encoder layer and the decoder layer. We have used the Luong attention model (Luong et al., 2015). Equations 1 through 4 show the Luong attention model's computation. α_ts = exp(score(h_t, h_s)) / ∑_{s'=1}^{S} exp(score(h_t, h_s'))   [Attention Weights] (1) c_t = ∑_s α_ts h_s   [Context vector] (2) a_t = f(c_t, h_t) = tanh(W_c [c_t; h_t])   [Attention Vector] (3) score(h_t, h_s) = h_t^T W h_s   [Luong's multiplicative style] (4) The score function, calculated using Equation 4, compares the output of the decoder (h_t) with the output of the encoder (h_s) in order to find the attention weights calculated using Equation 1. The attention weights (α_ts) are then used for the context vector (c_t) calculated by Equation 2. This context vector, together with the output of the decoder, is then used to produce the final output of the decoder using Equation 3. 3.3 COMBINATION OF CBMT AND NMT To combine the two systems, we have made the NMT model accept two inputs. We have used the method proposed by Zoph & Knight (2016) for combining PBMT with an NMT that accepts two source inputs. According to Zoph & Knight (2016), having two inputs, where one is the source sentence and the other a translation of the source into another language different from the target, helps the NMT produce a better result. The source sentence and the sentence translated using the CBMT are encoded separately and are given to the attention layer. The attention layer focuses on the two inputs at the same time rather than separately. There is a single decoder, which receives the output of the attention layer and provides the final translation. Following Zoph & Knight (2016), the final outputs of the encoders (h_t) are concatenated and a linear transformation is applied to the concatenated output, which is activated by tanh using Equation 5. The final states (c_t), on the other hand, are simply added, as shown by Equation 6. In the attention, the different context vectors are calculated separately and concatenated to produce the final output of the attention vector, based on the Luong attention mechanism (Luong et al., 2015), using Equation 7. h = tanh(W_c [h_1; h_2]) (5) c = c_1 + c_2 (6) h_t = tanh(W_c [h_t; c_t^1; c_t^2]) (7)
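The following is a minimal NumPy sketch (ours, not the authors' implementation) of the two-encoder combination in Equations 5 through 7, with the Luong context vectors of Equations 1, 2 and 4 computed for both encoders. Random matrices stand in for the trained LSTM encoders and weight matrices, purely to show the shapes and operations; in the paper the two W_c matrices of Equations 5 and 7 are learned parameters of different sizes.

import numpy as np

d = 1024                               # hidden size used in the experiments
S1, S2 = 12, 15                        # toy lengths of the two source sequences
rng = np.random.default_rng(0)

H1 = rng.normal(size=(S1, d))          # encoder 1 outputs (English source)
H2 = rng.normal(size=(S2, d))          # encoder 2 outputs (CBMT translation)
h1, h2 = H1[-1], H2[-1]                # final hidden states of the encoders
c1, c2 = rng.normal(size=d), rng.normal(size=d)   # final cell states
W5 = rng.normal(size=(d, 2 * d)) * 0.01           # weights for Equation 5
W7 = rng.normal(size=(d, 3 * d)) * 0.01           # weights for Equation 7
W = rng.normal(size=(d, d)) * 0.01                # Luong score weights (Equation 4)

h = np.tanh(W5 @ np.concatenate([h1, h2]))        # Equation 5: decoder initial state
c = c1 + c2                                       # Equation 6: decoder initial cell

def luong_context(h_t, H):
    """Equations 1, 2 and 4: multiplicative scores, softmax weights, context."""
    scores = H @ (W @ h_t)                         # one score per source position
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                           # Equation 1
    return alpha @ H                               # Equation 2

h_t = h                                            # first decoder state (toy)
c_t1, c_t2 = luong_context(h_t, H1), luong_context(h_t, H2)
attention_vector = np.tanh(W7 @ np.concatenate([h_t, c_t1, c_t2]))   # Equation 7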
4 EXPERIMENT SETUP We evaluate the proposed approach on the English-Amharic language pair using the same training parameters for both the basic NMT and the combinational system. The encoder-decoder setup has 1024 LSTM cells or hidden units with 1024-dimensional word embeddings, and the models have been trained for 100 epochs. 4.1 CORPUS USED This research uses the New International Version (NIV) Bible because both the Amharic and English versions are translated from the same Dead Sea scrolls. This makes it more accurately parallel than other versions of the Bible translation. The whole New Testament of the Bible has been used as a corpus, providing a total of 8,603 phrases and sentences. We have used two books of the Bible, Paul's letter to the Romans (Romans) and the Gospel according to Mark (Mark), as test sets. Google Translate has been used as the main agent of translation for the manually built bilingual dictionary used by the CBMT: 77% of the total 6,793 vocabulary words have been translated using Google. In addition to Google Translate, we have manually translated 1,426 vocabulary words in the book of Romans and another 150 vocabulary words using the Bible. Manual translation here means that a human translated each word using every entry it has in the Bible. Figure 5 shows the outcome of such a translation for the word "acknowledge". Manual translation helps to address the variations in Amharic translated words caused by gender (female or male), plural forms and the different persons (first, second and third person). English words also have different Amharic translations based on their context, as shown in Figure 3; "acknowledge" has been translated into four main stem words. 4.1.1 DATASET We have fed the same dataset to all systems with minor variations. In the CBMT, we have used the book of Mark and the book of Romans as test sets. The flooded texts for the book of Romans were the book of Romans itself and the Pauline epistles without Romans. The flooded texts for the book of Mark were the book of Mark itself and the Gospels without Mark. The books have been flooded to themselves in order to evaluate the performance of the CBMT when the searched text is found in the flooded text, and also to see the impact of the bilingual dictionary on the CBMT. The combinational system has two different test sets and different models. The first test set has the output of the CBMT and the source sentence as input for the NMT. The second test set gives the original target text and the source sentence as input to the NMT models; we call this the ideal approach. This was done to see the impact of the CBMT output and of the errors introduced by the CBMT on the combinational system. For the basic NMT and the combinational system, the same dataset as for the CBMT is used. We have used the Pauline epistles without Romans to train a basic NMT model and the combinational model. We have then tested the models using the book of Romans as the test or holdout set. We have used 10-fold cross validation (Dietterich, 1998) to train and test the basic NMT model and the combinational model with Romans.
In a similar manner, we have used 10-fold cross validation (Dietterich, 1998) to train the basic NMT model and the combinational model with the book of Mark. We have also used holdout validation (Raschka, 2018) with a random 80% training and 20% test split, alongside the 10-fold cross validation, for both Mark and Romans to obtain a more general representation of the results. 5 EVALUATION METHOD We have measured the translation performance based on fullness (whether the system translates all words), context awareness (whether the translation is true to the context) and coherence (whether the translated sentence has a smooth flow of words in terms of syntax). In this research, the BLEU score is the chosen method of evaluation, as it covers all the above-listed criteria. The quality of translation is measured by the correspondence between a machine's output and that of a human (Kishore et al., 2002). The BLEU score is defined in the range between 0 and 1 (or, as a percentage, between 0 and 100), where 1 is a perfect match with the reference and 0 means no words matched. 6 RESULTS AND DISCUSSION This section provides the results obtained, along with a brief explanation of the contributing factors and components. 6.1 CBMT RESULT AND DISCUSSION The system has been tested using a custom-made dictionary built with Google Translate and manual translation. We have generated the vocabulary of the dictionary from the English version of the NIV Bible. Table 1 depicts the CBMT test results obtained using the BLEU score. We have applied manual translation to the book of Romans for about 80% of its total vocabulary. Hence, it yields better performance than the book of Mark, whose translation was solely dependent on Google Translate. This holds both when the books are flooded to the text that contains them (by 48%) and when they are flooded to the text without them (by 6%). However, the translation of Romans does not reach 100%, as one might expect when it is part of the flooded document. This is mainly because the system selects the overlapping N-gram based on the number of words matched; two consecutive phrases that have a high overlap, but are not the correct ones, may be selected. 6.2 NMT RESULT AND DISCUSSION The NMT test results obtained using the BLEU score are depicted in Table 2. The test result obtained from Mark was better than that from Romans by an average of 1.62%. Although the difference is small, we attribute it to Mark's writing using similar words, unlike the diverse word selection in Romans (Clay, 2018). 6.3 CBMT AND NMT COMBINATION RESULT AND DISCUSSION There are two test cases for this section. In the first case, we have given the NMT the source sentence and the output of the CBMT as input, per the proposed methodology. Table 3 shows the results obtained from such a setup. In the second case, we have given the NMT the English source sentence and the original Amharic as input, creating an ideal system. The test results obtained using the BLEU score are depicted in Table 4 for the ideal combinational system. In the first case, when the CBMT output is used as an input to the NMT, the book of Romans performed better than the book of Mark by 1.71%. The CBMT output of Romans is better than that of the book of Mark, and its impact has propagated to the combinational system. In the ideal case scenario, the results are more or less the same; the result for the book of Romans was better than the book of Mark by only 0.63%.
6.4 DISCUSSION OF ALL RESULTS The averages of the results obtained from the systems have been calculated and are shown in Table 5 for comparison. The ideal combinational system, which takes the original target Amharic text as the second input, has performed better on average, with an 11.34 BLEU point gain over the NMT. The ideal system, however, did not outperform the CBMT on average, but produced results in the same range. The combinational system with the CBMT output given as the second input to the NMT achieves a 2.805 BLEU point gain over the simple NMT. The CBMT, even when the flooded corpus does not contain the target text, still performs 14.23 BLEU points better than the simple NMT. 7 CONCLUSION The research set out to find a combinational system with components that complement each other when translating a document from English to Amharic. We have proposed the CBMT and the NMT as the complementary pair in terms of parallel data size, accuracy, context awareness and coherence. The CBMT system performed better than the basic NMT and the combinational system given the same amount of data. However, the ideal combination of the CBMT and NMT has a BLEU score in the range of the CBMT while outperforming the simple NMT by 11.34 BLEU points. For the book of Mark, whose writing resembles the other Gospels, the ideal combination outperformed all other systems. This suggests that, with a small increase in the parallel corpus available to the NMT, the ideal system will outperform both individual systems. The output of the CBMT has a great impact on the performance of the combinational systems, as seen by comparing the performance of the proposed combinational system with that of the ideal system. The proposed combinational system has still outperformed the basic NMT by 2.805 BLEU points in spite of the errors introduced by the CBMT. Therefore, a CBMT with a well-built bilingual dictionary that produces a close-to-ideal output, along with a well-trained NMT with sufficient data, makes a fluent combinational system that outperforms a simple NMT and a basic CBMT system. In conclusion, the study suggests the combinational system for the translation of English to Amharic.
1. What is the main contribution of the paper regarding CBMT and NMT systems? 2. What are the strengths of the proposed approach, particularly in utilizing additional sources for NMT? 3. What are the weaknesses of the paper, especially concerning its relevance and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review This paper aims to combine a traditional CBMT system with an NMT system. The core idea of the paper is to use the output of the CBMT system as a second source to a multi-source NMT system. The first source of the system is the CBMT output, the second source is the original source, and the output is the translation in the target language. All the experiments are conducted for English-Amharic with a small amount of parallel data from the Bible. A lot of details are provided about the construction of the CBMT system using Google Translate and about the approach used to create such a system. Pros: - The idea of using additional outputs to the NMT system, in particular outputs from a context-aware system, is neat. This has been done by others who have included PBMT outputs in the past; however, this might be the first to include a CBMT result as an additional source. - Focusing on a low-resource language like Amharic is good for the community and will encourage more research in these underrepresented languages. Cons: - A lot of the techniques described for building the traditional CBMT system are obsolete these days and people prefer neural methods. I worry if this is relevant in the current day. - The authors could have compared against other ways of incorporating context as a strong baseline, like an n-to-n or n-to-1 NMT system. - Most experiments in the paper are conducted on a small data set and this is a big downside of the paper. - More detailed analyses of where the contextual phenomena were incorporated might have helped the paper.
ICLR
1. What is the focus of the paper regarding machine translation systems? 2. What are the strengths and weaknesses of the proposed method compared to prior works? 3. How does the reviewer assess the clarity, quality, and novelty of the paper's content? 4. What are the concerns regarding the empirical evidence supporting the claims made in the paper? 5. Does the reviewer think the paper is suitable for publication at ICLR or another venue?
Review
Review This paper presents a machine translation system based on a combination of a neural machine translation system (NMT) and a context-based machine translation (CBMT). The method is evaluated on a small parallel corpus application of English-Amharic translation. The idea is that in the small corpus setting, the CBMT can leverage a manually built bilingual dictionary to improve on the standard NMT. Clarity: The paper is reasonably clear, though there are numerous typos and minor language glitches (missing articles, etc.). Overall the quality of the writing is probably a bit below what's acceptable for publication at ICLR, but nothing that could not be fixed on subsequent revisions. Novelty: The method proposed appears to be a fairly straightforward variant of one proposed in a previous paper, where an NMT system was combined with a phrase-based MT system (Zoph & Knight, 2016). There seems to be no novel machine learning contribution (nor is it claimed). This paper seems more appropriate for a venue more focused on machine translation rather than a machine learning venue such as ICLR. Empirical evidence in support of the claims: The authors set out to demonstrate that by combining a CBMT output into an NMT approach, one can get the best of both approaches. Their results do not strongly support this claim. The results suggest that in the context of the small-scale experiments considered, the baseline CBMT model is actually overall the best performing model. It is therefore strange that, in their last sentence of the conclusion, the authors persist in claiming that their combination "outperforms a simple NMT and a basic CBMT system". That being said, the sub-claim that the NMT/CBMT hybrid improves on the baseline NMT system is well established. In light of the relatively low novelty and the lack of compelling empirical performance for the proposed combined MT system, I do not feel that this paper is appropriate for ICLR at this time.
ICLR
Title Tighter Sparse Approximation Bounds for ReLU Neural Networks Abstract A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball B_R(R^d) up to error ε, when the Fourier-based quantity C_f = ∫_{R^d} ‖ξ‖² |f̂(ξ)| dξ is finite. More recently Ongie et al. (2019) used the Radon transform as a tool for analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based R-norms and show that a function defined on R^d can be represented as an infinite-width two-layer neural network if and only if its R-norm is finite. In this work, we extend the framework of (Ongie et al., 2019) and define similar Radon-based semi-norms (R,U-norms) such that a function admits an infinite-width neural network representation on a bounded open set U ⊆ R^d when its R,U-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity. N/A A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball B_R(R^d) up to error ε, when the Fourier-based quantity C_f = ∫_{R^d} ‖ξ‖² |f̂(ξ)| dξ is finite. More recently Ongie et al. (2019) used the Radon transform as a tool for analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based R-norms and show that a function defined on R^d can be represented as an infinite-width two-layer neural network if and only if its R-norm is finite. In this work, we extend the framework of (Ongie et al., 2019) and define similar Radon-based semi-norms (R,U-norms) such that a function admits an infinite-width neural network representation on a bounded open set U ⊆ R^d when its R,U-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity. 1 INTRODUCTION Extensive work has shown that for a neural network to be able to generalize, the size or magnitude of the parameters is more important than the size of the network, when the latter is large enough (Bartlett, 1997; Neyshabur et al., 2015; Zhang et al., 2016). Under certain regimes, the size of the neural networks used in practice is so large that the training data is fit perfectly and an infinite-width approximation is appropriate. In this setting, what matters to obtain good generalization is to fit the data using the right inductive bias, which is specified by how network parameters are controlled (Wei et al., 2020) together with the training algorithm used (Lyu & Li, 2020). The infinite-width two-layer neural network model has been studied from several perspectives due to its simplicity. One can replace the finite-width ReLU network (1/n) ∑_{i=1}^{n} a_i (〈ω_i, x〉 − b_i)_+ by an integral over the parameter space with respect to a signed Radon measure: ∫ (〈ω, x〉 − b)_+ dα(ω, b).
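For concreteness (this remark is ours, not part of the original text), the finite-width network is exactly the special case of an atomic measure: taking α = (1/n) ∑_{i=1}^{n} a_i δ_{(ω_i, b_i)} gives ∫ (〈ω, x〉 − b)_+ dα(ω, b) = (1/n) ∑_{i=1}^{n} a_i (〈ω_i, x〉 − b_i)_+ and ‖α‖_TV = (1/n) ∑_{i=1}^{n} |a_i|, so norms of the measure α generalize norms of the output weights.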
Thus, controlling the magnitude of the neural network parameters is akin to controlling the measure α according to a certain norm. Bach (2017) introduced the F1-space, which is the infinite-width neural network space with norm inf{∫ |b| d|α|(ω, b)}, derived from the finite-width regularizer (1/n) ∑_{i=1}^{n} |a_i| ‖(ω_i, b_i)‖_2 (the infimum is over all the measures α which represent the function at hand). A different line of work (Savarese et al., 2019; Ongie et al., 2019) considers the infinite-width spaces with norm inf{‖α‖_TV = ∫ d|α|(ω, b)}, which is derived from the finite-width regularizer (1/n) ∑_{i=1}^{n} |a_i| ‖ω_i‖_2 (i.e. omitting the bias term). Both of these works seek to find expressions for this norm, leading to characterizations of the functions that are representable by infinite-width networks. Savarese et al. (2019) solve the problem in the one-dimensional case: they show that for a function f on R, this norm takes the value max{∫_R |f''(x)| dx, |f'(−∞) + f'(∞)|}. Ongie et al. (2019) give an expression for this norm (the R-norm) for functions on R^d, making use of Radon transforms (see Subsec. 2.3). Although we mentioned in the first paragraph that on many occasions the network size is large enough that the specific number of neurons is irrelevant, when the target function is hard to approximate it is interesting to have an idea of how many neurons one needs to approximate it. The first contribution in this direction was by Cybenko (1989); Hornik et al. (1989), who show that two-layer neural networks with enough neurons can approximate any reasonable function on bounded sets in the uniform convergence topology. Later on, Barron (1993); Breiman (1993) provided sparse approximation bounds stating that if a function f is such that a certain quantity C_f constructed from the Fourier transform f̂ is finite, then there exists a neural network of width n such that the L2 approximation error with respect to a distribution of bounded support is lower than O(C_f/n). More recently, Klusowski & Barron (2018) provided alternatives to the sparse approximation bounds of Breiman (1993) by restricting to networks with bounded weights, obtaining a slightly better dependency on n at the expense of a constant factor increasing with d (see Subsec. 2.2). Contributions. In our work, we seek to characterize the functions that coincide with an infinite-width two-layer neural network on a fixed bounded open set. This endeavor is interesting in itself because in practice, we want to learn target functions for which we know samples on a bounded set, and we are typically unconcerned with the values that the learned functions take at infinity. Moreover, the tools that we develop allow us to derive state-of-the-art sparse approximation bounds. Our main contributions are the following: • In the spirit of the R-norm introduced by Ongie et al. (2019), for any bounded open set U ⊆ R^d we define the R,U-norm of a function on R^d, and show that when the R,U-norm of f is finite, f admits a representation of the form ∫_{S^{d−1}×R} (〈ω, x〉 − b)_+ dα(ω, b) + 〈v, x〉 + c for x ∈ U, where v ∈ R^d, c ∈ R and α is an even signed Radon measure. • Using the R,U-norm, we derive function approximation bounds for neural networks with a fixed finite width. We compute the R,U-norm of a function in terms of its Fourier representation, and show that it admits an upper bound by the quantity C_f. This shows that our approximation bound is tighter than the previous bound by Breiman (1993), and meaningful in more instances (e.g. for finite-width neural networks). We also show R,U-norm-based bounds analogous to the ones of Klusowski & Barron (2018). • Setting U as the open unit ball of radius R, we show that neural network representations of f on U hold for multiple even Radon measures, which contrasts with the uniqueness result provided by Ongie et al. (2019) for the case of R^d. We study the structure of the sets of Radon measures which give rise to the same function on U. The non-uniqueness of the measure representing a function could be linked to the phenomenon of mode connectivity.
Additional related work. There have been other recent works which have used the Radon transform to study neural networks in settings different from ours (Parhi & Nowak, 2021a; Bartolucci et al., 2021). These two works consider the R-norm as a regularizer for an inverse problem, and proceed to prove representer theorems: there exists a solution of the regularized problem which is a two-layer neural network with a number of neurons equal to the number of datapoints. Regarding infinite-width network spaces, E & Wojtowytsch (2020) present several equivalent definitions and provide a review. A well-known line of work (Mei et al., 2018; Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018) studies the convergence of gradient descent for infinite-width two-layer neural networks. 2 FRAMEWORK 2.1 NOTATION S^{d−1} denotes the (d − 1)-dimensional hypersphere (as a submanifold of R^d) and B_R(R^d) is the Euclidean open ball of radius R. For U ⊆ R^d measurable, the space C_0(U) of functions vanishing at infinity contains the continuous functions f such that for any ε > 0, there exists a compact K ⊆ U depending on f such that |f(x)| < ε for x ∈ U \ K. P(U) is the set of Borel probability measures, M(U) is the space of finite signed Radon measures (which may be seen as the dual of C_0(U)). Throughout the paper, the term Radon measure refers to a finite signed Radon measure for shortness. If γ ∈ M(U), then ‖γ‖_TV is the total variation (TV) norm of γ. M_C(U) denotes the space of complex-valued finite signed Radon measures, defined as the dual space of C_0(U, C) (the space of complex-valued functions vanishing at infinity). We denote by S(R^d) the space of Schwartz functions, which contains the functions in C^∞(R^d) whose derivatives of any order decay faster than polynomials of all orders, i.e. for all k, p ∈ (N_0)^d, sup_{x∈R^d} |x^k ∂^{(p)} ϕ(x)| < +∞. For f ∈ L^1(R^d), we use f̂ to denote the unitary Fourier transform with angular frequency, defined as f̂(ξ) = (2π)^{−d/2} ∫_{R^d} f(x) e^{−i〈ξ,x〉} dx. If f̂ ∈ L^1(R^d) as well, we have the inversion formula f(x) = (2π)^{−d/2} ∫_{R^d} f̂(ξ) e^{i〈ξ,x〉} dx. The Fourier transform is a continuous automorphism on S(R^d). 2.2 EXISTING SPARSE APPROXIMATION BOUNDS One of the classical results of the theory of two-layer neural networks (Breiman (1993), building on Barron (1993)) states that given a probability measure p ∈ P(B_R(R^d)) and a function f : B_R(R^d) → R admitting a Fourier representation of the form f(x) = (2π)^{−d/2} ∫_{R^d} e^{i〈ξ,x〉} df̂(ξ), where f̂ ∈ M_C(R^d) is a complex-valued Radon measure such that C_f = (2π)^{−d/2} ∫_{R^d} ‖ξ‖_2² d|f̂|(ξ) < +∞, there exists a two-layer neural network f̃(x) = (1/n) ∑_{i=1}^{n} a_i (〈x, ω_i〉 − b_i)_+ such that ∫_{B_R(R^d)} (f(x) − f̃(x))² dx ≤ (2R)⁴ C_f² / n. (1) These classical results do not provide bounds on the magnitude of the neural network weights. More recently, Klusowski & Barron (2018) showed similar approximation bounds for two-layer ReLU networks under additional ℓ1 and ℓ0 bounds on the weights a_i, ω_i.
Namely, if C̃_f = (2π)^{−d/2} ∫_{R^d} ‖ξ‖_1² d|f̂|(ξ) < +∞, there exists a two-layer neural network f̃(x) = a_0 + 〈ω_0, x〉 + (κ/n) ∑_{i=1}^{n} a_i (〈ω_i, x〉 − b_i)_+ with |a_i| ≤ 1, ‖ω_i‖_1 ≤ 1, b_i ∈ [0, 1] and κ ≤ 2C̃_f, such that sup_{x∈[−1,1]^d} |f(x) − f̃(x)| ≤ c C̃_f √(d + log n) n^{−1/2−1/d}, (2) where c is a universal constant. 2.3 REPRESENTATION RESULTS ON R^d BASED ON THE RADON TRANSFORM P^d denotes the space of hyperplanes of R^d, whose elements may be represented by points in S^{d−1} × R by identifying {x | 〈ω, x〉 = b} with both (ω, b) and (−ω, −b). Thus, functions on P^d are even functions on S^{d−1} × R and we will use both notions interchangeably¹. The Radon transform and the dual Radon transform. If f : R^d → R is a function which is integrable over all the hyperplanes of R^d, we may define the Radon transform Rf : P^d → R as Rf(ω, b) = ∫_{{x | 〈ω,x〉=b}} f(x) dx, for all (ω, b) ∈ S^{d−1} × R. That is, one integrates the function f over the hyperplane (ω, b). If Φ : P^d → R is a continuous function, the dual Radon transform R*Φ : R^d → R is defined as R*Φ(x) = ∫_{S^{d−1}} Φ(ω, 〈ω, x〉) dω, for all x ∈ R^d, where the integral is with respect to the Hausdorff measure over S^{d−1}. R and R* are adjoint operators in the appropriate domains (see Lemma 13). The Radon inversion formula. When f ∈ C^∞(R^d), one has (Theorem 3.1, Helgason (2011)) f = c_d (−Δ)^{(d−1)/2} R*Rf, (3) where c_d = 1/(2(2π)^{d−1}) and (−Δ)^{s/2} denotes the (negative) fractional Laplacian, defined via the Fourier transform by ((−Δ)^{s/2} f)^(ξ) = ‖ξ‖^s f̂(ξ). ¹Similarly, the space M(P^d) of Radon measures over P^d contains the even measures in M(S^{d−1} × R). If α ∈ M(P^d), ∫_{S^{d−1}×R} ϕ(ω, b) dα(ω, b) is well defined for any measurable function ϕ on S^{d−1} × R, but ∫_{P^d} ϕ(ω, b) dα(ω, b) is only defined for even ϕ. The R-norm. Given a function f : R^d → R, Ongie et al. (2019) introduce the quantity ‖f‖_R = sup{−c_d 〈f, (−Δ)^{(d+1)/2} R*ψ〉 | ψ ∈ S(S^{d−1} × R), ψ even, ‖ψ‖_∞ ≤ 1} if f is Lipschitz, and ‖f‖_R = +∞ otherwise. (4) They call it the R-norm of f, although it is formally a semi-norm. Here, the space S(S^{d−1} × R) of Schwartz functions on S^{d−1} × R is defined, in analogy with S(R^d), as the space of C^∞ functions ψ on S^{d−1} × R which for any integers k, l ≥ 0 and any differential operator D on S^{d−1} satisfy sup_{(ω,b)∈S^{d−1}×R} |(1 + |b|^k) ∂_b^l (Dψ)(ω, b)| < +∞ (Helgason (2011), p. 5). Moreover, S(P^d) = {ψ ∈ S(S^{d−1} × R) | ψ even}, which means the conditions on ψ in (4) can be written as ψ ∈ S(P^d), ‖ψ‖_∞ ≤ 1. The finiteness of the R-norm indicates whether a function on R^d admits an exact representation as an infinitely wide neural network. Namely, Ongie et al. (2019) in their Lemma 10 show that ‖f‖_R is finite if and only if there exist a (unique) even measure α ∈ M(S^{d−1} × R) and (unique) v ∈ R^d, c ∈ R such that for any x ∈ R^d, f(x) = ∫_{S^{d−1}×R} (〈ω, x〉 − b)_+ dα(ω, b) + 〈v, x〉 + c, (5) in which case ‖f‖_R = ‖α‖_TV. Remark the following differences between this result and the bounds by Breiman (1993); Klusowski & Barron (2018) shown in equations (1) and (2): (i) in (5) we have an exact representation with infinite-width neural networks instead of an approximation result with finite width, (ii) in (5) the representation holds on R^d instead of a bounded domain. In our work, we derive representation results similar to the ones of Ongie et al. (2019) for functions defined on bounded open sets, which naturally give rise to sparse approximation results that refine those of Breiman (1993); Klusowski & Barron (2018).
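As an informal one-dimensional illustration of this representation (our addition, treating the degenerate case d = 1 loosely): take f(x) = |x| = (x − 0)_+ + (−x − 0)_+. Then (5) holds with v = 0, c = 0 and the even measure α = δ_{(1,0)} + δ_{(−1,0)} on {±1} × R, so ‖f‖_R = ‖α‖_TV = 2. This matches the one-dimensional formula of Savarese et al. (2019) quoted earlier: ∫_R |f''(x)| dx = 2 (the second derivative of |x| is 2δ_0), while |f'(−∞) + f'(∞)| = 0, so the maximum is 2.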
One property that makes the Radon transform and its dual useful to analyze neural networks can be understood at a very high level via the following argument: if f(x) = ∫_{Sd−1×R} (〈ω, x〉 − b)+ ρ(ω, b) d(ω, b) + 〈v, x〉 + c for some smooth rapidly decreasing function ρ, then ∆f(x) = ∫_{Sd−1×R} δ_{〈ω,x〉=b} ρ(ω, b) d(ω, b) = ∫_{Sd−1} ρ(ω, 〈ω, x〉) dω = (R∗ρ)(x). For a general function f of the form (5), one has similarly that 〈∆f, ϕ〉 = 〈α, Rϕ〉 for any ϕ ∈ S(Rd). This property relates the evaluations of the measure α to the function ∆f via the Radon transform, and is the main ingredient in the proof of Lemma 10 of Ongie et al. (2019). While we also rely on it, we need many additional tools to deal with the case of bounded open sets.
3 REPRESENTATION RESULTS ON BOUNDED OPEN SETS
Schwartz functions on open sets. Let U ⊆ Rd be an open subset. The space of Schwartz functions on U may be defined as S(U) = ⋂_{z∈Rd\U} ⋂_{k∈(N0)d} {f ∈ S(Rd) | ∂^(k) f(z) = 0}, i.e. they are those Schwartz functions on Rd such that the derivatives of all orders vanish outside of U (cf. Def. 3.2, Shaviv (2020)). The structure of S(U) is similar to S(Rd) in that its topology is given by a family of semi-norms indexed by ((N0)d)2: ‖f‖k,k′ = supx∈U |x^k · f^(k′)(x)|. Similarly, if V ⊆ Pd is open, we define S(V) = ⋂_{(ω,b)∈(Sd−1×R)\V} ⋂_{k∈(N0)2} {f ∈ S(Pd) | ∂_b^{k1} ∆̂^{k2} f(ω, b) = 0}, where ∆̂ is the spherical Laplacian.
The R,U-norm. Let U ⊆ Rd be a bounded open set, and let Ũ := {(ω, 〈ω, x〉) ∈ Sd−1 × R | x ∈ U}. For any function f : Rd → R, we define the R,U-norm of f as
‖f‖R,U = sup{−cd 〈f, (−∆)^{(d+1)/2} R∗ψ〉 | ψ ∈ S(Ũ), ψ even, ‖ψ‖∞ ≤ 1}. (6)
Note the similarity between this quantity and the R-norm defined in (4); the main differences are that the supremum here is taken over the even Schwartz functions on Ũ instead of Sd−1 × R, and that the non-Lipschitz case does not need a separate treatment. Remark that ‖f‖R,U ≤ ‖f‖R. If f has enough regularity, we may write ‖f‖R,U = ∫_{Ũ} |R(−∆)^{(d+1)/2} f|(ω, b) d(ω, b), using that the fractional Laplacian is self-adjoint and R∗ is the adjoint of R.
Define PdU to be the bounded open set of hyperplanes of Rd that intersect U, which, in analogy with Subsec. 2.3, is equal to Ũ up to the identification of (ω, b) with (−ω,−b). Similarly, note that S(PdU) = {ψ ∈ S(Ũ), ψ even}, which allows to rewrite the conditions in (6) as ψ ∈ S(PdU), ‖ψ‖∞ ≤ 1.
The following proposition, which is based on the Riesz-Markov-Kakutani representation theorem, shows that when the R,U-norm is finite, it can be associated to a unique Radon measure over PdU.
Proposition 1. If ‖f‖R,U < +∞, there exists a unique Radon measure α ∈ M(PdU) such that −cd 〈f, (−∆)^{(d+1)/2} R∗ψ〉 = ∫_{PdU} ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU). Moreover, ‖f‖R,U = ‖α‖TV.
Building on this, we see that a neural network representation for bounded U holds when the R,U-norm is finite:
Theorem 1. Let U be an open, bounded subset of Rd. Let f : Rd → R be such that ‖f‖R,U < +∞. Let α ∈ M(PdU) be given by Proposition 1. Then there exist unique v ∈ Rd and c ∈ R such that, for any ϕ ∈ S(U),
∫_U f(x)ϕ(x) dx = ∫_U ( ∫_{Ũ} (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉 + c ) ϕ(x) dx. (7)
That is, f(x) = ∫_{Ũ} (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉 + c for a.e. (almost every) x in U. If f is continuous, then the equality holds for all x ∈ U.
Remark that this theorem does not claim that the representation given by α, v, c is unique, unlike Lemma 10 by Ongie et al. (2019) concerning analogous representations on Rd. In Sec. 5 we see that such representations are in fact not unique, for particular choices of the set U.
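The key identity 〈∆f, ϕ〉 = 〈α, Rϕ〉 recalled above can be sanity-checked numerically in the simplest case of a single ReLU atom f(x) = (〈ω, x〉 − b)+ in d = 2, for which ∆f is the length measure on the line 〈ω, x〉 = b, so that ∫ f ∆ϕ dx should equal Rϕ(ω, b) for any rapidly decaying test function ϕ. The sketch below is our own illustration (with a Gaussian bump as ϕ and plain grid quadrature), not code or an experiment from the paper.

```python
import numpy as np

# Single ReLU atom f(x) = (<omega, x> - b)_+ in R^2.
theta, b = 0.7, 0.3
omega = np.array([np.cos(theta), np.sin(theta)])
omega_perp = np.array([-np.sin(theta), np.cos(theta)])

# Smooth, rapidly decaying test function (Gaussian bump) and its Laplacian in 2-D.
x0, s = np.array([0.2, -0.1]), 0.5
phi = lambda x: np.exp(-np.sum((x - x0) ** 2, axis=-1) / (2 * s**2))
lap_phi = lambda x: (np.sum((x - x0) ** 2, axis=-1) / s**4 - 2 / s**2) * phi(x)

# Left-hand side: <f, Delta phi> by 2-D grid quadrature on a box where phi is negligible at the boundary.
g = np.linspace(-5.0, 5.0, 1200)
h = g[1] - g[0]
X = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)
f_vals = np.maximum(X @ omega - b, 0.0)
lhs = np.sum(f_vals * lap_phi(X)) * h**2

# Right-hand side: R phi(omega, b) = integral of phi along the line <omega, x> = b.
t = np.linspace(-8.0, 8.0, 20000)
line_pts = b * omega[None, :] + t[:, None] * omega_perp[None, :]
rhs = np.sum(phi(line_pts)) * (t[1] - t[0])

print(f"<f, Delta phi>  = {lhs:.5f}")
print(f"R phi(omega, b) = {rhs:.5f}")   # the two values should closely agree
```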
We want to underline that the proof of Theorem 1 uses completely different tools from the ones of Lemma 10 by Ongie et al. (2019): their result relies critically on the fact that the only harmonic Lipschitz functions on Rd are affine functions, which is no longer true for functions on bounded subsets in our setting.
4 SPARSE APPROXIMATION FOR FUNCTIONS WITH BOUNDED R,U-NORM
In this section, we show how to obtain approximation bounds of a function f on a bounded open set U using a fixed-width neural network with bounded coefficients, in terms of the R,U-norm introduced in the previous section.
Theorem 2. Let U ⊆ BR(Rd) be a bounded open set. Suppose that f : Rd → R is such that ‖f‖R,U is finite, where ‖ · ‖R,U is defined in (6). Let v ∈ Rd, c ∈ R be as in Theorem 1. Then, there exist {(ωi, bi)}_{i=1}^n ⊆ Ũ and {ai}_{i=1}^n ⊆ {±1} such that the function f̃ : Rd → R defined as
f̃(x) = (‖f‖R,U / n) ∑_{i=1}^n ai(〈ωi, x〉 − bi)+ + 〈v, x〉 + c
fulfills, for a.e. x in U,
|f̃(x) − f(x)| ≤ R ‖f‖R,U / √n. (8)
The bound holds for all x ∈ U if f is continuous.
The proof of Theorem 2 (in App. B) uses the neural network representation (7) and a probabilistic argument. If one samples {(ωi, bi)}_{i=1}^n from a probability distribution proportional to |α|, a Rademacher complexity bound upper-bounds the expectation of the supremum norm between f̃ and f, which yields the result. Note the resemblance of (8) with the bound (1); the R,U-norm of f replaces the quantity Cf. We can also use the R,U-norm to obtain a bound analogous to (2), that is, with a slightly better dependency in the exponent of n at the expense of a constant factor growing with the dimension.
Proposition 2. Let f : Rd → R and U ⊆ B1(Rd) open such that ‖f‖R,U < +∞. Then, there exist {ai}_{i=1}^n ⊆ [−1, 1], {ωi}_{i=1}^n ⊆ {ω ∈ Rd | ‖ω‖1 = 1}, {bi}_{i=1}^n ⊆ [0, 1] and κ < √d ‖f‖R,U such that the function
f̃(x) = (κ/n) ∑_{i=1}^n ai(〈ωi, x〉 − bi)+
fulfills, for a.e. x in U and some universal constant c > 0, |f(x) − f̃(x)| ≤ c κ √(d + log n) n^{−1/2−1/d}.
The proof of this result (in App. B) follows readily from the representation (7) and Theorem 1 of Klusowski & Barron (2018).
4.1 LINKS WITH THE FOURIER SPARSE APPROXIMATION BOUNDS
The following result shows that, setting U = BR(Rd), the R,U-norm can be bounded by the Fourier-based quantities Cf, C̃f introduced in Subsec. 2.2.
Theorem 3. Assume that the function f : Rd → R admits a Fourier representation of the form f(x) = 1/(2π)^{d/2} ∫_{Rd} e^{i〈ξ,x〉} df̂(ξ) with f̂ ∈ MC(Rd) a complex-valued Radon measure. Let Cf be the quantity used in the sparse approximation bound by Breiman (1993) (see Subsec. 2.2). Then, one has that
‖f‖R,BR(Rd) ≤ 2RCf. (9)
As a direct consequence of Theorem 3, when U = BR(Rd) the right-hand side of (8) can be upper-bounded by 2R^2 Cf / √n. This allows to refine the bound (1) from Breiman (1993) to a bound in the supremum norm over BR(Rd), and where the approximating neural network f̃(x) = (1/n) ∑_{i=1}^n ai(〈x, ωi〉 − bi)+ + 〈v, x〉 + c fulfills |ai| ≤ ‖f‖R,BR(Rd), ‖ωi‖2 ≤ 1 and bi ∈ (−R,R).
While we are able to prove the bound (9), the Fourier representation of f does not allow for a manageable expression for the measure α described in Proposition 1. For that, the following theorem starts from a slightly modified Fourier representation of f, under which one can describe the measure α and provide a formula for the R,U-norm.
Theorem 4.
Let f : Rd → R admit the representation
f(x) = ∫_{Sd−1×R} e^{ib〈ω,x〉} dµ(ω, b), (10)
for some complex-valued Radon measure µ ∈ MC(Sd−1×R) such that dµ(ω, b) = dµ(−ω,−b) = dµ̄(−ω, b) = dµ̄(ω,−b), and ∫_{Sd−1×R} b^2 d|µ|(ω, b) < +∞. Choosing U = BR(Rd), the unique measure α ∈ M(PdR) specified by Proposition 1 takes the following form:
dα(ω, b) = − ( ∫_R t^2 e^{−itb} dµ(ω, t) ) db,
where K = 2π^{d/2}/Γ(d/2). Note that α is real-valued because ∫_R t^2 e^{−itb} dµ(ω, t) ∈ R, as t^2 dµ(ω, t) = (−t)^2 dµ(ω,−t). Consequently, the R,BR(Rd)-norm of f is
‖f‖R,BR(Rd) = ‖α‖TV = ∫_{−R}^{R} ∫_{Sd−1} | ∫_R t^2 e^{−itb} dµ(ω, t) | db. (11)
Remark that µ is the pushforward of the measure f̂ by the mappings ξ 7→ (±ξ/‖ξ‖, ±‖ξ‖). When the Fourier transform f̂ admits a density, one may obtain the density of µ via a change from Euclidean to spherical coordinates: dµ(ω, b) = (1/2) vol(Sd−1) f̂(bω) |b|^{d−1} d(ω, b). Hence, Theorem 4 provides an operative way to compute the R,U-norm of f if one has access to the Fourier transform f̂ of f. Note that equation (11) implies that the R,BR(Rd)-norm of f increases with R, and in particular is smaller than the R-norm of f, which corresponds to setting R = ∞. Theorems 3 and 4 are proven jointly in App. B.
Note that from the expression (11) one can easily see that ‖f‖R,BR(Rd) is upper-bounded by 2RCf:
∫_{−R}^{R} ∫_{Sd−1} | ∫_R t^2 e^{−itb} dµ(ω, t) | db ≤ ∫_{−R}^{R} ∫_{Sd−1} ∫_R t^2 d|µ|(ω, t) db = 2R ∫_{Rd} ‖ξ‖^2 d|f̂|(ξ), (12)
where the equality holds since µ is the pushforward of f̂. Equation (12) makes apparent that the norm ‖f‖R,BR(Rd) is sometimes much smaller than the quantities Cf, C̃f, as is showcased by the following one-dimensional example (see the proof in App. B). In these situations, the sparse approximation bounds that we provide in Theorem 2 and Proposition 2 are much better than the ones in (1)-(2).
Example 1. Take the function f : R → R defined as f(x) = cos(x) − cos((1 + ε)x), with ε > 0. f admits the Fourier representation f(x) = 1/(2π)^{1/2} ∫_R √(π/2) (δ_1(ξ) + δ_{−1}(ξ) − δ_{1+ε}(ξ) − δ_{−1−ε}(ξ)) e^{iξx} dξ. We have that Cf = 2 + 2ε + ε^2, and ‖f‖R,BR(Rd) ≤ Rε(R + 2 + 2ε). ‖f‖R,BR(Rd) goes to zero as ε → 0, while Cf converges to 2.
An interesting class of functions for which ‖f‖R,BR(Rd) is finite but Cf, C̃f are infinite are functions that can be written as a finite-width neural network on BR(Rd), as shown in the following proposition.
Proposition 3. Let f : Rd → R be defined as f(x) = (1/n) ∑_{i=1}^n ai(〈ωi, x〉 − bi)+ for all x ∈ Rd, with {ωi}_{i=1}^n ⊆ Sd−1 and {ai}_{i=1}^n, {bi}_{i=1}^n ⊆ R. Then, for any bounded open set U, we have ‖f‖R,U ≤ (1/n) ∑_{i=1}^n |ai|, while Cf, C̃f = +∞ if f is not an affine function.
Proposition 3 makes use of the fact that the R,U-norm is always upper-bounded by the R-norm, which also means that all the bounds developed in Ongie et al. (2019) apply for the R,U-norm. The fact that finite-width neural networks have infinite Cf was stated by E & Wojtowytsch (2020), who used it to show the gap between the functions with finite Cf and the functions representable by infinite-width neural networks (belonging to the Barron space, in their terminology). It remains to be seen whether the gap is closed when considering functions with finite R,U-norm, i.e., whether any function admitting an infinite-width representation (7) on U has a finite R,U-norm.
Moving to the non-linear Radon transform. In many applications the function of interest f may be better represented as ∫ (〈ω, ϕ(x)〉 − t)+ dα(ω, t) + 〈v, x〉 + c, where ϕ is a fixed finite-dimensional, non-linear and bounded feature map.
Our results trivially extend to this case, in which the hyperplanes in the Radon transform are replaced by hyperplanes in the feature space. This can be seen as the “kernel trick” applied to the Radon transform. The corresponding norm ‖f‖R,ϕ(U) measures the sparsity of the decomposition in the feature space, and we have better approximation when ‖f‖R,ϕ(U) < ‖f‖R,U. This gives a simple condition for when transfer learning is successful, and explains the success of using random Fourier features as a preprocessing in implicit neural representations of images and surfaces (Tancik et al., 2020). In order to go beyond fixed feature maps and tackle deeper ReLU networks, we think that the non-linear Radon transform (Ehrenpreis, 2003) is an interesting tool to explore. We note that Parhi & Nowak (2021b) recently introduced a representer theorem for deep ReLU networks using Radon transforms as a regularizer.
5 INFINITE-WIDTH REPRESENTATIONS ARE NOT UNIQUE ON BOUNDED SETS
Ongie et al. (2019) show that when the R-norm of f is finite, there is a unique even measure α ∈ M(Sd−1 × R) such that the representation (5) holds for x ∈ Rd. In this section we show that when we only ask the representation to hold for x in a bounded open set, there exist several measures that do the job; in fact, they span an infinite-dimensional space.
Let U = BR(Rd) be the open ball of radius R > 0 in Rd, which means that Ũ = Sd−1 × (−R,R) and PdU is the set of hyperplanes {x | 〈ω, x〉 = b} such that ‖ω‖ = 1 and b ∈ (−R,R), which we denote by PdR for simplicity. In the following we will construct a space of Radon measures α ∈ M(PdR) whose neural network representations (5) coincide for all x ∈ BR(Rd). Note that since any bounded subset of Rd is included in some open ball, our results imply that such representations are non-unique on any bounded set.
Remark 1. When one considers representations on BR(Rd) of the sort (5) with the measure α lying in the larger space M(Sd−1 × R), the non-uniqueness is apparent because there are two ‘trivial’ kinds of symmetry at play:
(i) Related to parity: when the measure α is odd, we have ∫_{Sd−1×R} (〈ω, x〉 − b)+ dα(ω, b) = (1/2) ∫_{Sd−1×R} (〈ω, x〉 − b)+ − (−〈ω, x〉 + b)+ dα(ω, b) = 〈(1/2) ∫_{Sd−1×R} ω dα(ω, b), x〉 − (1/2) ∫_{Sd−1×R} b dα(ω, b), which is an affine function of x.
(ii) Related to boundedness: if (ω, b) ∈ Sd−1 × (R\(−R,R)), the map x 7→ (〈ω, x〉 − b)+ restricted to BR(Rd) is an affine function of x. Hence, if α is supported on Sd−1 × (R\(−R,R)), x 7→ ∫_{Sd−1×R} (〈ω, x〉 − b)+ dα(ω, b) is an affine function when restricted to BR(Rd).
Since in Sec. 3 we restrict our scope to measures α lying in M(PdU), these two kinds of symmetries are already quotiented out in our analysis. The third kind of non-uniqueness that we discuss in this section is conceptually deeper, taking place within M(PdU).
Let {Yk,j | k ∈ Z+, 1 ≤ j ≤ Nk,d} be the orthonormal basis of spherical harmonics of the space L2(Sd−1) (Atkinson & Han, 2012). It is well known that for any k, the functions {Yk,j | 1 ≤ j ≤ Nk,d} are the restrictions to Sd−1 of homogeneous harmonic polynomials of degree k, and in fact Nk,d is the dimension of the space of homogeneous harmonic polynomials of degree k. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)):
A = {Yk,j ⊗ X^{k′} | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k − 2, 1 ≤ j ≤ Nk,d},
where X^{k′} denotes the monomial of degree k′ on (−R,R). We have the following result regarding the non-uniqueness of neural network representations:
Theorem 5.
If α ∈ M(PdR) is such that α ∈ clw(span(A)), then we have that 0 = ∫_{Sd−1×(−R,R)} (〈ω, x〉 − b)+ dα(ω, b) for any x ∈ BR(Rd). That is, α yields a neural network representation of the zero-function on BR(Rd).
Here, we consider span(A) as a subset of M(PdR) by the Riesz-Markov-Kakutani representation theorem via the action 〈g, ϕ〉 = ∫_{PdR} ϕ(ω, b) g(ω, b) d(ω, b) for any g ∈ span(A), ϕ ∈ C0(PdR), and clw denotes the closure in the topology of weak convergence of M(Sd−1 × R). In particular, any measure whose density is in the span of A will yield a function which is equal to zero when restricted to BR(Rd). As an example of this result, we show a simple measure in M(Pd1) which represents the zero function on B1(R2).
Example 2 (Non-zero measure representing the zero function on B1(R2)). We define the even Radon measure α ∈ M(S1×(−1, 1)) with density dα(ω, b) = (8ω_0^4 − 8ω_0^2 + 1) d(ω, b), where ω = (ω0, ω1). Then, for any x ∈ B1(R2), 0 = ∫_{S1×(−1,1)} (〈ω, x〉 − b)+ dα(ω, b).
On the one hand, Proposition 1 states that there exists a unique measure α ∈ M(PdU) such that −cd 〈f, (−∆)^{(d+1)/2} R∗ψ〉 = ∫_{PdU} ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU) if ‖f‖R,U is finite. On the other hand, Theorem 5 claims that functions admit distinct representations by measures in M(PdU). The following proposition clarifies these two seemingly contradictory statements. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)), which contains A:
B = {Yk,j ⊗ X^{k′} | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k, 1 ≤ j ≤ Nk,d}.
Proposition 4. Let 0 < R < R′. Let f : Rd → R be such that ‖f‖R,BR′(Rd) < +∞ and let α ∈ M(PdR) be the unique measure specified by Proposition 1. Then, α is the unique measure in M(PdR) such that
∀ϕ ∈ S(BR(Rd)), 〈α, Rϕ〉 = ∫_{BR(Rd)} f(x) ∆ϕ(x) dx, (13)
∀k, j, k′ ∈ Z+ s.t. k′ ≡ k (mod 2), k′ < k, 1 ≤ j ≤ Nk,d, 〈α, Yk,j ⊗ X^{k′}〉 = −cd 〈f, (−∆)^{(d+1)/2} R∗(Yk,j ⊗ 1_{|X|<R} X^{k′})〉. (14)
The condition (13) holds for any measure α′ ∈ M(PdR) for which f admits a representation of the form (7) on BR(Rd). Thus, α can be characterized as the unique measure in M(PdR) such that f admits a representation of the form (7) on BR(Rd) and the condition (14) holds. In (14), the quantity 〈f, (−∆)^{(d+1)/2} R∗(Yk,j ⊗ 1_{|X|<R} X^{k′})〉 is well defined despite 1_{|X|<R} X^{k′} not being continuous on R; we define it as 〈f, (−∆)^{(d+1)/2} R∗((Yk,j ⊗ 1_{|X|<R} X^{k′}) + g̃)〉, where g̃ is any function in S(PdR′) such that (Yk,j ⊗ 1_{|X|<R} X^{k′}) + g̃ ∈ S(PdR′) (such functions do exist, see App. C).
In short, Proposition 4 characterizes the measure α from Proposition 1 in terms of its evaluations on the spaces R(S(BR(Rd))) and span(B), and by Corollary 1 the direct sum of these two spaces is dense in C0(PdR), which by the Riesz-Markov-Kakutani representation theorem is the predual of M(PdR). Interestingly, the condition (13) holds for any measure α ∈ M(PdR) which represents the function f on BR(Rd), but it is easy to see that the condition (14) does not: by Theorem 5 we have that if ψ ∈ span(A) ⊆ span(B), the measure α′ defined as dα′(ω, b) = dα(ω, b) + ψ(ω, b) db represents the function f on BR(Rd), and 〈α′, ψ〉 = 〈α, ψ〉 + ‖ψ‖_2^2.
It remains an open question to see whether Theorem 5 captures all the measures which represent the zero function on BR(Rd), which we hypothesize. If that were the case, we would obtain a complete characterization of the Radon measures which represent a given function on BR(Rd).
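Example 2 can be checked by direct numerical quadrature: writing ω = (cos θ, sin θ), the density 8ω_0^4 − 8ω_0^2 + 1 is the Chebyshev identity cos(4θ), and integrating (〈ω, x〉 − b)+ against it over S1 × (−1, 1) returns (numerically) zero for every x in the open unit disk, while the total variation mass of the measure is clearly non-zero. The following sketch is our own sanity check, not code from the paper.

```python
import numpy as np

# Quadrature grid over S^1 x (-1, 1), parameterizing omega = (cos t, sin t).
n_theta, n_b = 2000, 2000
t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
b = np.linspace(-1.0, 1.0, n_b, endpoint=False) + 1.0 / n_b   # midpoints of the b-grid
dt, db = 2.0 * np.pi / n_theta, 2.0 / n_b
density = np.cos(4.0 * t)          # equals 8*cos(t)**4 - 8*cos(t)**2 + 1

def represented_function(x):
    """f(x) = int_{S^1 x (-1,1)} (<omega, x> - b)_+ * cos(4 theta) d(theta, b)."""
    c = x[0] * np.cos(t) + x[1] * np.sin(t)          # <omega, x> for each theta
    relu = np.maximum(c[:, None] - b[None, :], 0.0)  # shape (n_theta, n_b)
    return float(np.sum(relu * density[:, None]) * dt * db)

rng = np.random.default_rng(0)
inside = rng.uniform(-0.7, 0.7, size=(5, 2))          # points strictly inside B_1(R^2)
print("TV mass of the measure:", np.sum(np.abs(density)) * dt * 2.0)  # = 2 * int |cos(4t)| dt ~ 8
for x in inside:
    print(f"x = {x}, f(x) = {represented_function(x): .2e}")          # ~ 0 up to quadrature error
print("outside, x = (1.5, 0):", f"{represented_function(np.array([1.5, 0.0])): .2e}")  # clearly non-zero
```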
Mode connectivity. Mode connectivity is the phenomenon that optima of neural network losses (at least the ones found by gradient descent) turn out to be connected by paths where the loss value is almost constant, as observed empirically by Garipov et al. (2018); Draxler et al. (2018). Kuditipudi et al. (2019) provided an algorithmic explanation based on dropout, and an explanation based on the noise stability property. Theorem 5 suggests an explanation for mode connectivity from a functional perspective: one can construct finitely-supported measures which approximate a measure α ∈ clw(span(A)), yielding finite-width neural networks with non-zero weights which approximate the zero function on BR(Rd). Assuming that the data distribution is supported in BR(Rd), adding a multiple of one such network to an optimal network will produce little change in the loss value because the function being represented is essentially unchanged. More work is required to confirm or discard this intuition.
6 CONCLUSION
We provided in this paper tighter sparse approximation bounds for two-layer ReLU neural networks. Our results build on the introduction of Radon-based R,U-norms for functions defined on a bounded open set U. Our bounds refine the Fourier-based approximation bounds of Breiman (1993); Klusowski & Barron (2018). We also showed that the representations of infinite-width neural networks on bounded open sets are not unique, which can be seen as a functional view of the mode connectivity observed in training deep neural networks. We leave two open questions: whether any function admitting an infinite-width representation on U has a finite R,U-norm, and whether Theorem 5 captures all the measures which represent the zero function on BR(Rd). Finally, in order to extend our theory to deeper ReLU networks, we believe that non-linear Radon transforms (Ehrenpreis, 2003) are interesting tools to explore.
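As a toy instantiation of the mode-connectivity argument above (our own construction, not an experiment from the paper), one can discretize the measure of Example 2 into a finite-width ReLU network with manifestly non-zero weights: on B1(R2) its output is numerically negligible, so adding a multiple of it to a trained two-layer network would leave predictions on data supported in the unit disk essentially unchanged, even though the parameters move by a non-trivial amount. The grid sizes and evaluation points below are arbitrary choices of ours.

```python
import numpy as np

# Finite-width discretization of the measure with density cos(4*theta) on S^1 x (-1, 1).
n_theta, n_b = 150, 150
t = (np.arange(n_theta) + 0.5) * 2.0 * np.pi / n_theta
b = -1.0 + (np.arange(n_b) + 0.5) * 2.0 / n_b
TT, BB = np.meshgrid(t, b, indexing="ij")
omegas = np.stack([np.cos(TT).ravel(), np.sin(TT).ravel()], axis=1)   # unit-norm directions
biases = BB.ravel()
weights = np.cos(4.0 * TT).ravel() * (2.0 * np.pi / n_theta) * (2.0 / n_b)  # signed, non-zero

def null_net(X):
    """Width n_theta*n_b two-layer ReLU network; approximately 0 on the unit disk."""
    return np.maximum(X @ omegas.T - biases, 0.0) @ weights

rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
X_in = dirs * rng.uniform(0.0, 0.95, size=(200, 1))   # points inside B_1(R^2)
X_out = dirs * 1.5                                     # points outside B_1(R^2)
print("max |f| inside B_1   :", np.max(np.abs(null_net(X_in))))   # numerically close to 0
print("max |f| at radius 1.5:", np.max(np.abs(null_net(X_out))))  # clearly non-zero
print("l1 norm of weights   :", np.sum(np.abs(weights)))          # ~ 8, far from zero
```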
1. What are the main contributions of the paper regarding sparse approximation bounds for two-layer ReLU neural networks?
2. What are the strengths of the paper, particularly in the theoretical analysis and generalization to bounded open sets?
3. Do you have any concerns or questions regarding the paper, such as checking the finiteness of the R,U-norm for general functions?
4. Are there any minor comments or typos in the review, such as using n instead of m in the approximation bound?
Summary Of The Paper Review
Summary Of The Paper
The paper provides some interesting and tighter sparse approximation bounds for two-layer ReLU neural networks. The authors generalize the results for Rd to a bounded open set U ⊂ Rd by defining the Radon-based R,U-norms of functions. They also show that the representations of infinite-width neural networks on U are not unique.
Review
The presented tighter sparse approximation bounds for two-layer ReLU neural networks are of interest. Theorem 1 shows that there exists a neural network representation for a function defined on a bounded open set U if its R,U-norm is finite, which generalizes the results for functions on Rd in the literature and yields the sparse approximation bound in Theorem 2. Moreover, the authors discuss its tightness and links with Fourier sparse approximation bounds in Theorems 3 and 4. The non-uniqueness of neural network representations is studied in Theorem 5. One concern: how can the finiteness of the R,U-norm be checked for general functions? Other minor comments/typos: in the rows above (1) and (2), should n be used instead of m? In Proposition 2, n or m in the approximation bound?
ICLR
Title
Tighter Sparse Approximation Bounds for ReLU Neural Networks
Abstract
A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball BR(Rd) up to error ε, when the Fourier-based quantity Cf = ∫_{Rd} ‖ξ‖_2^2 |f̂(ξ)| dξ is finite. More recently, Ongie et al. (2019) used the Radon transform as a tool for the analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based R-norms and show that a function defined on Rd can be represented as an infinite-width two-layer neural network if and only if its R-norm is finite. In this work, we extend the framework of Ongie et al. (2019) and define similar Radon-based semi-norms (R,U-norms) such that a function admits an infinite-width neural network representation on a bounded open set U ⊆ Rd when its R,U-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity.
1 INTRODUCTION
Extensive work has shown that for a neural network to be able to generalize, the size or magnitude of the parameters is more important than the size of the network, when the latter is large enough (Bartlett, 1997; Neyshabur et al., 2015; Zhang et al., 2016). Under certain regimes, the size of the neural networks used in practice is so large that the training data is fit perfectly and an infinite-width approximation is appropriate. In this setting, what matters to obtain good generalization is to fit the data using the right inductive bias, which is specified by how network parameters are controlled (Wei et al., 2020) together with the training algorithm used (Lyu & Li, 2020). The infinite-width two-layer neural network model has been studied from several perspectives due to its simplicity. One can replace the finite-width ReLU network (1/n) ∑_{i=1}^n ai(〈ωi, x〉 − bi)+ by an integral over the parameter space with respect to a signed Radon measure: ∫ (〈ω, x〉 − b)+ dα(ω, b).
Thus, controlling the magnitude of the neural network parameters is akin to controlling the measure α according to a certain norm. Bach (2017) introduced the F1-space, which is the infinite-width neural network space with norm inf{∫ |b| d|α|(ω, b)}, derived from the finite-width regularizer (1/n) ∑_{i=1}^n |ai| ‖(ωi, bi)‖2 (the infimum is over all the measures α which represent the function at hand). A different line of work (Savarese et al., 2019; Ongie et al., 2019) considers the infinite-width spaces with norm inf{‖α‖TV = ∫ d|α|(ω, b)}, which is derived from the finite-width regularizer (1/n) ∑_{i=1}^n |ai| ‖ωi‖2 (i.e. omitting the bias term). Both of these works seek to find expressions for this norm, leading to characterizations of the functions that are representable by infinite-width networks. Savarese et al. (2019) solve the problem in the one-dimensional case: they show that for a function f on R, this norm takes the value max{∫_R |f′′(x)| dx, |f′(−∞) + f′(∞)|}. Ongie et al. (2019) give an expression for this norm (the R-norm) for functions on Rd, making use of Radon transforms (see Subsec. 2.3).
Although we mentioned in the first paragraph that on many occasions the network size is large enough that the specific number of neurons is irrelevant, when the target function is hard to approximate it is interesting to have an idea of how many neurons one needs to approximate it. The first contribution in this direction was by Cybenko (1989); Hornik et al. (1989), who show that two-layer neural networks with enough neurons can approximate any reasonable function on bounded sets in the uniform convergence topology. Later on, Barron (1993); Breiman (1993) provided sparse approximation bounds stating that if a function f is such that a certain quantity Cf constructed from the Fourier transform f̂ is finite, then there exists a neural network of width n such that the L2 approximation error with respect to a distribution of bounded support is lower than O(Cf/√n). More recently, Klusowski & Barron (2018) provided alternative sparse approximation bounds to those of Breiman (1993) by restricting to networks with bounded weights, with a slightly better dependency on n at the expense of a constant factor increasing with d (see Subsec. 2.2).
Contributions. In our work, we seek to characterize the functions that coincide with an infinite-width two-layer neural network on a fixed bounded open set. This endeavor is interesting in itself because in practice, we want to learn target functions for which we know samples on a bounded set, and we are typically unconcerned with the values that the learned functions take at infinity. Moreover, the tools that we develop allow us to derive state-of-the-art sparse approximation bounds. Our main contributions are the following:
• In the spirit of the R-norm introduced by Ongie et al. (2019), for any bounded open set U ⊆ Rd we define the R,U-norm of a function on Rd, and show that when the R,U-norm of f is finite, f admits a representation of the form ∫_{Sd−1×R} (〈ω, x〉 − b)+ dα(ω, b) + 〈v, x〉 + c for x ∈ U, where v ∈ Rd, c ∈ R and α is an even signed Radon measure.
• Using the R,U-norm, we derive function approximation bounds for neural networks with a fixed finite width. We compute the R,U-norm of a function in terms of its Fourier representation, and show that it admits an upper bound by the quantity Cf. This shows that our approximation bound is tighter than the previous bound by Breiman (1993), and meaningful in more instances (e.g. for finite-width neural networks).
1. What is the focus of the paper regarding neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its extension and refinement of previous works?
3. What are the weaknesses of the paper, especially regarding its clarity and technical aspects?
4. How does the reviewer assess the significance and novelty of the paper's contributions?
Summary Of The Paper Review
Summary Of The Paper
This paper studies the approximation bounds for the 2-layer ReLU network. The authors extend the analysis framework of Ongie et al. (2019) and show approximation bounds for the infinite-width network on a bounded open set, as well as bounds for sparse (i.e., finite-width) networks that refine similar results in the literature. At last, the authors show that the infinite-width neural network representations may not be unique. In particular, the authors refine the R-norm introduced by Ongie et al. (2019) to the R,U-norm to tackle the bounded open set case.
Review
Strength: This paper uses a novel analysis tool based on Ongie et al. (2019) to consider the bounded open set case. It is technically solid. I checked the detailed proofs and did not find major flaws. (I need to admit I am not very familiar with the Fourier-based approximation bounds, so it is possible my judgment on novelty is wrong. And this is why I rate my confidence as 2.)
Weakness: Clarity: There is still room to improve the technical part of the paper. 5 theorems and 4 propositions are shown in the 9-page main paper, and more discussion of the theorems/propositions may be needed to make the paper easier to follow. I also suggest the authors add some discussion to further highlight the technical difference/significance of each theorem/proposition.
Namely, if C̃f = 1 (2π)d/2 ∫ Rd ‖ξ‖ 2 1 d|f̂ |(ξ) < +∞ there exists a two-layer neural network f̃(x) = a0 + 〈ω0, x〉 + κ n ∑n i=1 ai(〈wi, x〉 − bi)+ with |ai| ≤ 1, ‖ωi‖ ≤ 1, bi ∈ [0, 1], and κ ≤ 2C̃f , and sup x∈[−1,1]d |f(x)− f̃(x)| ≤ c C̃f √ d+ log n n−1/2−1/d, (2) where c is a universal constant. 2.3 REPRESENTATION RESULTS ON Rd BASED ON THE RADON TRANSFORM One defines Pd denotes the space of hyperplanes on Rd, whose elements may be represented by points in Sd−1 × R by identifying {x|〈ω, x〉 = b} with both (ω, b) and (−ω,−b). Thus, functions on Pd are even functions on Sd−1 × R and we will use both notions interchangeably1. The Radon transform and the dual Radon transform. If f : Rd → R is a function which is integrable over all the hyperplanes of Rd, we may define the Radon transformRf : Pd → R as Rf(ω, b) = ∫ {x|〈ω,x〉=b} f(x) dx, ∀(ω, b) ∈ Sd−1 × R. That is, one integrates the function f over the hyperplane (ω, b). If Φ : Pd → R is a continuous function, the dual Radon transformR∗Φ : Rd → R is defined as R∗Φ(x) = ∫ Sd−1 Φ(ω, 〈ω, x〉) dω, ∀x ∈ Rd, where the integral is with respect to the Hausdorff measure over Sd−1. R and R∗ are adjoint operators in the appropriate domains (see Lemma 13). The Radon inversion formula. When f ∈ C∞(Rd), one has that (Theorem 3.1, Helgason (2011)) f = cd(−∆)(d−1)/2R∗Rf (3) where cd = 12(2π)d−1 and (−∆) s/2 denotes the (negative) fractional Laplacian, defined via its Fourier transform as ̂(−∆)s/2f(ξ) = ‖ξ‖sf̂(ξ). 1Similarly, the space M(Pd) of Radon measures over Pd contains the even measures in M(Sd−1 × R). If α ∈ M(Pd), ∫ Sd−1×R ϕ(ω, b) dα(ω, b) is well defined for any measurable function ϕ on S d−1 × R, but∫ Pd ϕ(ω, b) dα(ω, b) is only defined for even ϕ. TheR-norm. Given a function f : Rd → R, Ongie et al. (2019) introduce the quantity ‖f‖R = { sup{−cd〈f, (−∆)(d+1)/2R∗ψ〉 | ψ ∈ S(Sd−1 × R), ψ even, ‖ψ‖∞ ≤ 1} if f Lipschitz +∞ otherwise. (4) They call it the R-norm of f , although it is formally a semi-norm. Here, the space S(Sd−1 × R) of Schwartz functions on Sd−1 × R is defined, in analogy with S(Rd), as the space of C∞ functions ψ on Sd−1 × R which for any integers k, l ≥ 0 and any differential operator D on Sd−1 satisfy sup(ω,b)∈Sd−1×R |(1 + |b|k)∂kb (Dψ)(ω, b)| < +∞ (Helgason (2011), p. 5). Moreover, S(Pd) = {ψ ∈ S(Sd−1 × R) | ψ even}, which means the conditions on ψ in (4) can be written as ψ ∈ S(Pd), ‖ψ‖∞ ≤ 1. The finiteness of the R-norm indicates whether a function on Rd admits an exact representation as an infinitely wide neural network. Namely, Ongie et al. (2019) in their Lemma 10 show that ‖f‖R is finite if and only if there exists a (unique) even measure α ∈M(Sd−1×R) and (unique) v ∈ Rd, c ∈ R such that for any x ∈ Rd, f(x) = ∫ Sd−1×R ( 〈ω, x〉 − b ) + dα(ω, b) + 〈v, x〉+ c, (5) in which case, ‖f‖R = ‖α‖TV. Remark the following differences between this result and the bounds by (Breiman, 1993; Klusowski & Barron, 2018) shown in equations (1) and (2): (i) in (5) we have an exact representation with infinite-width neural networks instead of an approximation result with finite-width, (ii) in (5) the representation holds on Rd instead of a bounded domain. In our work, we derive representation results similar to the ones of Ongie et al. (2019) for functions defined on bounded open sets, which naturally give rise to sparse approximation results that refine those of (Breiman, 1993; Klusowski & Barron, 2018). 
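As a quick numerical illustration of the Radon transform defined above (a sketch of ours, not part of the paper), one can integrate the standard Gaussian density on R2 over the line {x | 〈ω, x〉 = b} and compare with the known closed form Rf(ω, b) = e−b²/2/√(2π); the grid sizes below are arbitrary assumptions.

import numpy as np

# Sketch (illustrative): Radon transform of the standard Gaussian density on R^2,
# Rf(w, b) = integral of f over the line {x : <w, x> = b}, parametrized as
# x(t) = b*w + t*w_perp, compared with the closed form exp(-b^2/2)/sqrt(2*pi).
def f(points):                                    # standard Gaussian density on R^2
    return np.exp(-0.5 * np.sum(points ** 2, axis=-1)) / (2.0 * np.pi)

def radon_line_integral(b, theta, t_max=8.0, n=20001):
    w = np.array([np.cos(theta), np.sin(theta)])
    w_perp = np.array([-np.sin(theta), np.cos(theta)])
    t = np.linspace(-t_max, t_max, n)
    points = b * w + t[:, None] * w_perp          # samples along the line
    return np.sum(f(points)) * (t[1] - t[0])      # simple Riemann sum

for b in (0.0, 0.5, 1.5):
    approx = radon_line_integral(b, theta=0.3)
    exact = np.exp(-0.5 * b ** 2) / np.sqrt(2.0 * np.pi)
    print(f"b={b}: numeric {approx:.6f}, closed form {exact:.6f}")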
One property that makes the Radon transform and its dual useful to analyze neural networks can be understood at a very high level via the following argument: if f(x) = ∫ Sd−1×R(〈ω, x〉 − b)+ρ(ω, b) d(ω, b) + 〈v, x〉 + c for some smooth rapidly decreasing function ρ, then ∆f(x) =∫ Sd−1×R δ〈ω,x〉=bρ(ω, b) d(ω, b) = ∫ Sd−1 ρ(ω, 〈ω, x〉) dω = (R ∗ρ)(x). For a general function f of the form (5), one has similarly that 〈∆f, ϕ〉 = 〈α,Rϕ〉 for any ϕ ∈ S(Rd). This property relates the evaluations of the measure α to the function ∆f via the Radon transform, and is the main ingredient in the proof of Lemma 10 of Ongie et al. (2019). While we also rely on it, we need many additional tools to deal with the case of bounded open sets. 3 REPRESENTATION RESULTS ON BOUNDED OPEN SETS Schwartz functions on open sets. Let U ⊆ Rd be an open subset. The space of Schwartz functions on U may be defined as S(U) = ⋂ z∈Rd\U ⋂ k∈(N0)d{f ∈ S(R d) | ∂(k)f(z) = 0}, i.e. they are those Schwartz functions on Rd such that the derivatives of all orders vanish outside of U (c.f. Def. 3.2, Shaviv (2020)). The structure of S(U) is similar to S(Rd) in that its topology is given by a family of semi-norms indexed by ((N0)d)2: ‖f‖k,k′ = supx∈U |xk · f (k ′)(x)|. Similarly, if V ⊆ Pd is open, we define S(V) = ⋂ (ω,b)∈(Sd−1×R)\V ⋂ k∈(N0)2{f ∈ S(P d) | ∂k1b ∆̂k2f(ω, b) = 0}, where ∆̂ is the spherical Laplacian. TheR,U-norm. Let U ⊆ Rd be a bounded open set, and let Ũ := {(ω, 〈ω, x〉) ∈ Sd−1 × R |x ∈ U}. For any function f : Rd → R, we define theR,U-norm of f as ‖f‖R,U = sup{−cd〈f, (−∆)(d+1)/2R∗ψ〉 | ψ ∈ S(Ũ), ψ even, ‖ψ‖∞ ≤ 1}. (6) Note the similarity between this quantity and the R-norm defined in (4); the main differences are that the supremum here is taken over the even Schwartz functions on Ũ instead of Sd−1 × R, and that the non-Lipschitz case does not need a separate treatment. Remark that ‖f‖R,U ≤ ‖f‖R. If f has enough regularity, we may write ‖f‖R,U = ∫ Ũ |R(−∆) (d+1)/2f |(ω, b) d(ω, b), using that the fractional Laplacian is self-adjoint andR∗ is the adjoint ofR. Define PdU to be the bounded open set of hyperplanes of Rd that intersect U , which in analogy with Subsec. 2.3, is equal to Ũ up to the identification of (ω, b) with (−ω,−b). Similarly, note that S(PdU ) = {ψ ∈ S(Ũ), ψ even}, which allows to rewrite the conditions in (6) as ψ ∈ S(PdU ), ‖ψ‖∞ ≤ 1. The following proposition, which is based on the Riesz-Markov-Kakutani representation theorem, shows that when theR,U-norm is finite, it can be associated to a unique Radon measure over PdU . Proposition 1. If ‖f‖R,U < +∞, there exists a unique Radon measure α ∈ M(PdU ) such that −cd〈f, (−∆)(d+1)/2R∗ψ〉 = ∫ PdU ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU ). Moreover, ‖f‖R,U = ‖α‖TV. Building on this, we see that a neural network representation for bounded U holds when the R,Unorm is finite: Theorem 1. Let U be a open, bounded subset of Rd. Let f : Rd → R such that ‖f‖R,U < +∞. Let α ∈ M(PdU ) be given by Proposition 1. For any ϕ ∈ S(U), there exist unique v ∈ Rd and c ∈ R such that ∫ U f(x)ϕ(x) dx = ∫ U (∫ Ũ (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉+ c ) ϕ(x) dx, (7) That is, f(x) = ∫ Ũ (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉 + c for x a.e. (almost everywhere) in U . If f is continuous, then the equality holds for all x ∈ U . Remark that this theorem does not claim that the representation given by α, v, c is unique, unlike Lemma 10 by Ongie et al. (2019) concerning analogous representations on Rd. In Sec. 5 we see that such representations are in fact not unique, for particular choices of the set U . 
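As a concrete illustration of a representation of the form (7) (worked out here for orientation; this particular computation is not stated in the paper), consider a single ReLU unit f(x) = (〈ω0, x〉 − b0)+ whose hyperplane {x | 〈ω0, x〉 = b0} intersects U, so that (ω0, b0) ∈ Ũ. Using (z)+ = ½|z| + ½z, one valid choice is
\[
\alpha = \tfrac{1}{2}\bigl(\delta_{(\omega_0, b_0)} + \delta_{(-\omega_0, -b_0)}\bigr), \qquad v = \tfrac{1}{2}\,\omega_0, \qquad c = -\tfrac{1}{2}\, b_0,
\]
since
\[
\int_{\tilde{\mathcal{U}}} (\langle \omega, x\rangle - t)_+ \, d\alpha(\omega, t)
= \tfrac{1}{2}\,\bigl|\langle \omega_0, x\rangle - b_0\bigr|
= f(x) - \tfrac{1}{2}\langle \omega_0, x\rangle + \tfrac{1}{2}\, b_0
\quad \text{for all } x,
\]
so that f(x) = ∫Ũ (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉 + c on U with ‖α‖TV = 1, in line with the bound ‖f‖R,U ≤ 1 that follows from Proposition 3 below. We do not claim this α is the canonical measure of Proposition 1; it simply exhibits one representation of the required form.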
We want to underline that the proof of Theorem 1 uses completely different tools from the ones of Lemma 10 by Ongie et al. (2019): their result relies critically on the fact that the only harmonic Lipschitz functions on Rd are affine functions, which is not true for functions on bounded subsets in our setting. 4 SPARSE APPROXIMATION FOR FUNCTIONS WITH BOUNDED R,U -NORM In this section, we show how to obtain approximation bounds of a function f on a bounded open set U using a fixed-width neural network with bounded coefficients, in terms of the R,U-norm introduced in the previous section. Theorem 2. Let U ⊆ BR(Rd) be a bounded open set. Suppose that f : Rd → R is such that ‖f‖R,U is finite, where ‖ · ‖R,U is defined in (6). Let v ∈ Rd, c ∈ R as in Theorem 1. Then, there exists {(ωi, bi)}ni=1 ⊆ Ũ and {ai}ni=1 ⊆ {±1} such that the function f̃ : Rd → R defined as f̃(x) = ‖f‖R,U n n∑ i=1 ai(〈ωi, x〉 − bi)+ + 〈v, x〉+ c fulfills, for x a.e. in U , ∣∣∣f̃(x)− f(x)∣∣∣ ≤ R‖f‖R,U√ n . (8) The equality holds for all x ∈ U if f is continuous. The proof of Theorem 2 (in App. B) uses the neural network representation (7) and a probabilistic argument. If one samples {(ωi, bi)}ni=1 from a probability distribution proportional to |α|, a Rademacher complexity bound upper-bounds the expectation of the supremum norm between f̃ and f , which yields the result. Note the resemblance of (8) with the bound (1); theR,U norm of f replaces the quantityCf . We can also use theR,U-norm to obtain a bound analogous to (2), that is, with a slightly better dependency in the exponent of n at the expense of a constant factor growing with the dimension. Proposition 2. Let f : Rd → R and U ⊆ B1(Rd) open such that ‖f‖R,U < +∞. Then, then there exist {ai}ni=1 ⊆ [−1, 1], {ωi}ni=1 ⊆ {ω ∈ Rd|‖ω‖1 = 1} and {bi}ni=1 ⊆ [0, 1] and κ < √ d‖f‖R,U such that the function f̃(x) = κ n n∑ i=1 ai(〈ωi, x〉 − bi)+ fulfills, for x a.e. in U and some universal constant c > 0, |f(x)− f̃(x)| ≤ cκ √ d+ log nn−1/2−1/d. The proof of this result (in App. B) follows readily from the representation (7) and Theorem 1 of Klusowski & Barron (2018). 4.1 LINKS WITH THE FOURIER SPARSE APPROXIMATION BOUNDS The following result shows that setting U = BR(Rd), theR,U-norm can be bounded by the Fourierbased quantities Cf , C̃f introduced in Subsec. 2.2. Theorem 3. Assume that the function f : Rd → R admits a Fourier representation of the form f(x) = 1 (2π)d/2 ∫ Rd e i〈ξ,x〉df̂(ξ) with f̂ ∈ MC(Rd) a complex-valued Radon measure. Let Cf be the quantity used in the sparse approximation bound by Breiman (1993) (see Subsec. 2.2). Then, one has that ‖f‖R,BR(Rd) ≤ 2RCf (9) As a direct consequence of Theorem 3, when U = BR(Rd) the right-hand side of (8) can be upper-bounded by R2Cf/ √ n. This allows to refine the bound (1) from Breiman (1993) to a bound in the supremum norm over BR(Rd), and where the approximating neural network f̃(x) = 1 n ∑n i=1 ai(〈x, ωi〉 − bi)+ + 〈v, x〉+ c fulfills |ai| ≤ ‖f‖R,BR(Rd), ‖ωi‖2 ≤ 1 and bi ∈ (−R,R). While we are able to prove the bound (9), the Fourier representation of f does not allow for a manageable expression for the measure α described in Proposition 1. For that, the following theorem starts from a slightly modified Fourier representation of f , under which one can describe the measure α and provide a formula for theR,U-norm. Theorem 4. 
Let f : Rd → R admitting the representation f(x) = ∫ Sd−1×R eib〈ω,x〉 dµ(ω, b), (10) for some complex-valued Radon measures µ ∈MC(Sd−1×R) such that dµ(ω, b) = dµ(−ω,−b) = dµ̄(−ω, b) = dµ̄(ω,−b), and ∫ Sd−1×R b 2d|µ|(ω, b) < +∞. Choosing U = BR(Rd), the unique measure α ∈M(PdR) specified by Proposition 1 takes the following form: dα(ω, b) = − ∫ R t2e−itb dµ(ω, t) db, where K = 2πd/2/Γ(d2 ). Note that α is real-valued because ∫ R t 2e−itb dµ(ω, t) ∈ R as t2 dµ(ω, t) = (−t)2 dµ(ω,−t). Consequently, theR,BR(Rd)-norm of f is ‖f‖R,BR(Rd) = ‖α‖TV = ∫ R −R ∫ Sd−1 ∣∣∣∣∫ R t2e−itb dµ(ω, t) ∣∣∣∣ db. (11) Remark that µ is the pushforward of the measure f̂ by the mappings ξ 7→ (±ξ/‖ξ‖,±ξ). When the Fourier transform f̂ admits a density, one may obtain the density of µ via a change from Euclidean to spherical coordinates: dµ(ω, b) = 12vol(S d−1)f̂(bω)|b|d−1 d(ω, b). Hence, Theorem 4 provides an operative way to compute the R,U-norm of f if one has access to the Fourier transform of f̂ . Note that equation (11) implies that theR,BR(Rd)-norm of f increases with R, and in particular is smaller than theR-norm of f , which corresponds to setting R =∞. Theorems 3 and 4 are proven jointly in App. B. Note that from the expression (11) one can easily see that ‖f‖R,BR(Rd) is upper-bounded by RCf :∫ R −R ∫ Sd−1 ∣∣∣∣∫ R t2e−itb dµ(ω, t) ∣∣∣∣ db ≤ ∫ R −R ∫ Sd−1 ∫ R t2 d|µ|(ω, t) db = 2R ∫ Rd ‖ξ‖2d|f̂ |(ξ),(12) where the equality holds since µ is the pushforward of f̂ . Equation (12) makes apparent the norm ‖f‖R,BR(Rd) is sometimes much smaller than the quantities Cf , C̃f , as is showcased by the following one-dimensional example (see the proof in App. B). In these situations, the sparse approximation bounds that we provide in Theorem 2 and Proposition 2 are much better than the ones in (1)-(2). Example 1. Take the function f : R → R defined as f(x) = cos(x) − cos((1 + )x), with > 0. f admits the Fourier representation f(x) = 1 (2π)1/2 ∫ R √ π 2 (δ1(ξ) + δ−1(ξ) − δ1+ (ξ) − δ−1− (ξ))e iξx dξ. We have that Cf = 2 + 2 + 2, and ‖f‖R,BR(Rd) ≤ R ( R + 2 + 2 ) . ‖f‖R,BR(Rd) goes to zero as → 0, while Cf converges to 2. An interesting class of functions for which ‖f‖R,BR(Rd) is finite but Cf , C̃f are infinite are functions that can be written as a finite-width neural network on BR(Rd), as shown in the following proposition. Proposition 3. Let f : Rd → R defined as f(x) = 1n ∑n i=1 ai(〈ωi, x〉 − bi)+ for all x ∈ Rd, with {ωi}ni=1 ⊆ Sd−1, {ai}ni=1, {bi}ni=1 ⊆ R. Then, for any bounded open set U , we have ‖f‖R,U ≤ 1 n ∑n i=1 |ai|, while Cf , C̃f = +∞ if f is not an affine function. Proposition 3 makes use of the fact that the R,U-norm is always upper-bounded by the R-norm, which also means that all the bounds developed in Ongie et al. (2019) apply for the R,U-norm. The fact that finite-width neural networks have infinite Cf was stated by E & Wojtowytsch (2020), that used them to show the gap between the functions with finite Cf and functions representable by infinite-width neural networks (belonging to the Barron space, in their terminology). It remains to be seen whether the gap is closed when considering functions with finite R,U-norm, i.e., whether any function admitting an infinite-width representation (7) on U has a finiteR,U-norm. Moving to the non-linear Radon transform. In many applications the function of interest f may be better represented as ∫ (〈ω, ϕ(x)〉 − t)+ dα(ω, t) + 〈v, x〉 + c, where ϕ is a fixed finite dimensional, non-linear and bounded feature map. 
Our results trivially extend to this case where in the Radon transform hyperplanes are replaced by hyperplanes in the feature space. This can be seen as the “kernel trick” applied to the Radon transform. The corresponding ‖f‖R,ϕ(U) corresponds to the sparsity of the decomposition in the feature space, and we have better approximation when ‖f‖R,ϕ(U) < ‖f‖R,U . This gives a simple condition for when transfer learning is successful, and explains the success of using random fourier features as a preprocessing in implicit neural representations of images and surfaces (Tancik et al., 2020). In order to go beyond the fixed feature maps and tackle deeper ReLU networks, we think that the non-linear Radon transform (Ehrenpreis, 2003) is an interesting tool to explore. We note that Parhi & Nowak (2021b) introduced recently a representer theorem for deep ReLU networks using Radon transforms as a regularizer. 5 INFINITE-WIDTH REPRESENTATIONS ARE NOT UNIQUE ON BOUNDED SETS Ongie et al. (2019) show that when theR-norm of f is finite, there is a unique measure α ∈M(Rd) such that the representation (5) holds for x ∈ Rd. In this section we show that when we only ask the representation to hold for x in a bounded open set, there exist several measures that do the job; in fact, they span an infinite-dimensional space. Let U = BR(Rd) be the open ball of radius R > 0 in Rd, which means that Ũ = Sd−1 × (−R,R) and PdU is the set of hyperplanes {x|〈ω, x〉 = b} such that ‖ω‖ = 1 and b ∈ (−R,R), which we denote by PdR for simplicity. In the following we will construct a space of Radon measures α ∈ M(PdR) whose neural network representation (5) coincide for all x ∈ BR(Rd). Note that since any bounded subset of Rd is included in some open ball, our results imply that such representations are non-unique on any bounded set. Remark 1. When one considers representations on BR(Rd) of the sort (5) with the measure α lying in the larger spaceM(Sd−1 × R), the non-uniqueness is apparent because there are two ‘trivial’ kinds of symmetry at play: (i) Related to parity: when the measure α is odd, we have ∫ Sd−1×R(〈ω, x〉 − b)+ dα(ω, b) = 1 2 ∫ Sd−1×R(〈ω, x〉 − b)+ − (−〈ω, x〉 + b)+ dα(ω, b) = 〈 1 2 ∫ Sd−1×R ω dα(ω, b), x〉 − 1 2 ∫ Sd−1×R b dα(ω, b), which is an affine function of x. (ii) Related to boundedness: if (ω, b) ∈ Sd−1×(R\(−R,R)), x 7→ (〈ω, x〉−b)+ restricted to BR(Rd) is an affine function of x. Hence, if α is supported on Sd−1×Sd−1×(R\(−R,R)), x 7→ ∫ Sd−1×R(〈ω, x〉 − b)+ dα(ω, b) is an affine function when restricted to BR(R d). Since in Sec. 3 we restrict our scope to measures α lying inM(PdU ), these two kinds of symmetries are already quotiented out in our analysis. The third kind of non-uniqueness that we discuss in this section is conceptually deeper, taking place withinM(PdU ). Let {Yk,j | k ∈ Z+, 1 ≤ j ≤ Nk,d} be the orthonormal basis of spherical harmonics of the space L2(Sd−1) (Atkinson & Han, 2012). It is well known that for any k, the functions {Yk,j | 1 ≤ j ≤ Nk,d} are the restrictions to Sd−1 of homogeneous polynomials of degree k, and in fact Nk,d is the dimension of the space of homogeneous harmonic polynomials of degree k. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)): A = {Yk,j ⊗Xk ′ | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k − 2, 1 ≤ j ≤ Nd,k}, where Xk ′ denotes the monomial of degree k′ on (−R,R). We have the following result regarding the non-uniqueness of neural network representations: Theorem 5. 
If α ∈ M(PdR) is such that α ∈ clw(span(A)), then we have that 0 =∫ Sd−1×(−R,R)(〈ω, x〉 − b)+ dα(ω, b) for any x ∈ BR(R d). That is, α yields a neural network representation of the zero-function on BR(Rd). Here, we consider span(A) as a subset ofM(PdR) by the Riesz-Markov-Kakutani representation theorem via the action 〈g, ϕ〉 = ∫ PdR ϕ(ω, b)g(ω, b) d(ω, b) for any g ∈ span(A), ϕ ∈ C0(PdR), and clw denotes the closure in the topology of weak convergence ofM(Sd−1 × R). In particular, any measure whose density is in the span of A will yield a function which is equal to zero when restricted to BR(Rd). As an example of this result, we show a simple measure inM(Pd1) which represents the zero function on B1(R2). Example 2 (Non-zero measure representing the zero function onB1(R2)). We define the even Radon measure α ∈M(S1×(−1, 1)) with density dα(ω, b) = (8ω40−8ω20+1) d(ω, b) where ω = (ω0, ω1). Then, for any x ∈ B1(R2), 0 = ∫ S1×(−1,1)(〈ω, x〉 − b)+ dα(ω, x). On the one hand, Proposition 1 states that there exists a unique measure α ∈ M(PdU ) such that −cd〈f, (−∆)(d+1)/2R∗ψ〉 = ∫ PdU ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU ) if ‖f‖R,U is finite. On the other hand, Theorem 5 claims that functions admit distinct representations by measures inM(PdU ). The following theorem clarifies these two seemingly contradictory statements. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)), which contains A: B = {Yk,j ⊗Xk ′ | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k, 1 ≤ j ≤ Nd,k}. Proposition 4. Let 0 < R < R′. Let f : Rd → R such that ‖f‖R,BR′ (Rd) < +∞ and let α ∈ M(PdR) be the unique measure specified by Proposition 1. Then, α is the unique measure in M(PdR) such that ∀ϕ ∈ S(BR(Rd)), 〈α,Rϕ〉 = ∫ BR(Rd)) f(x)∆ϕ(x) dx, (13) ∀k, j, k′ ∈ Z+ s.t. k′ ≡ k (mod 2), k′ < k, 1 ≤ j ≤ Nk,d, 〈α, Yk,j ⊗Xk ′ 〉 = −cd〈f, (−∆)(d+1)/2R∗(Yk,j ⊗ 1|X|<RXk ′ )〉. (14) The condition (13) holds for any measure α′ ∈ M(PdR) for which f admits a representation of the form (7) on BR(Rd). Thus, α can be characterized as the unique measure inM(PdR) such that f admits a representation of the form (7) on BR(Rd) and the condition (14) holds. In (14), the quantity 〈f, (−∆)(d+1)/2R∗(Yk,j ⊗ 1|X|<RXk ′ )〉 is well defined despite 1|X|<RXk ′ not being continuous on R; we define it as 〈f, (−∆)(d+1)/2R∗((Yk,j ⊗1|X|<RXk ′ ) + g̃)〉, where g̃ is any function in S(PdR′) such that (Yk,j⊗1|X|<RXk ′ )+ g̃ ∈ S(PdR′) (which do exist, see App. C). In short, Proposition 4 characterizes the measure α from Proposition 1 in terms of its evaluations on the spaces R(S(BR(Rd))) and span(B), and by Corollary 1 the direct sum of these two spaces dense in C0(PdR), which by the Riesz-Markov-Kakutani representation theorem is the predual of M(PdR). Interestingly, the condition (13) holds for any measure α ∈ M(PdR) which represents the function f on BR(Rd), but it is easy to see that the condition (14) does not: by Theorem 5 we have that if ψ ∈ span(A) ⊆ span(B), the measure α′ defined as dα′(ω, b) = dα(ω, b) + ψ(ω, b) db represents the function f on BR(Rd), and 〈α′, ψ〉 = 〈α,ψ〉+ ‖ψ‖22. It remains an open question to see whether Theorem 5 captures all the measures which represent the zero function on BR(Rd), which we hypothesize. If that was the case, we would obtain a complete characterization of the Radon measures which represent a given function on BR(Rd). Mode connectivity. 
Mode connectivity is the phenomenon that optima of neural network losses (at least the ones found by gradient descent) turn out to be connected by paths where the loss value is almost constant, and was observed empirically by Garipov et al. (2018); Draxler et al. (2018). Kuditipudi et al. (2019) provided an algorithmic explanation based on dropout, and an explanation based on the noise stability property. Theorem 5 suggests an explanation for mode connectivity from a functional perspective: one can construct finitely-supported measures which approximate a measure α ∈ clw(span(A)), yielding finite-width neural networks with non-zero weights which approximate the zero function on BR(Rd). Assuming that the data distribution is supported in BR(Rd), adding a multiple of one such network to an optimal network will produce little change in the loss value because the function being represented is essentially unchanged. More work is required to confirm or discard this intuition. 6 CONCLUSION We provided in this paper tighter sparse approximation bounds for two-layer ReLU neural networks. Our results build on the introduction of Radon-basedR,U-norms for functions defined on a bounded open set U . Our bounds refine Fourier-based approximation bounds of Breiman (1993); Klusowski & Barron (2018). We also showed that the representation of infinite width neural networks on bounded open sets are not unique, which can be seen as a functional view of mode connectivity observed in training deep neural networks. We leave two open questions: whether any function admitting an infinite-width representation on U has a finite R,U-norm, and whether Theorem 5 captures all the measures which represent the zero function on BR(Rd). Finally, in order to extend our theory to deeper ReLU networks we believe that non-linear Radon transforms (Ehrenpreis, 2003) are interesting tools to explore.
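To complement the mode-connectivity intuition above with a sanity check of Example 2, the following small numerical sketch (ours, not part of the paper) discretizes the density 8ω0⁴ − 8ω0² + 1 = cos(4θ) on S1 × (−1, 1) into a finite signed combination of ReLU ridge functions and verifies that it is numerically zero on B1(R2); the grid resolution and test points are arbitrary assumptions.

import numpy as np

# Sketch (illustrative): numerical check of Example 2. The even density
# 8*w0**4 - 8*w0**2 + 1 = cos(4*theta) on S^1 x (-1, 1), discretized into a
# finite signed combination of ReLU ridge functions, is ~0 on B_1(R^2).
n_theta, n_b = 4000, 800                          # quadrature resolution (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
b = (np.arange(n_b) + 0.5) * (2.0 / n_b) - 1.0    # midpoints of (-1, 1)
omega = np.stack([np.cos(theta), np.sin(theta)], axis=1)
density = 8.0 * omega[:, 0] ** 4 - 8.0 * omega[:, 0] ** 2 + 1.0   # = cos(4*theta)
dtheta, db = 2.0 * np.pi / n_theta, 2.0 / n_b

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.uniform(-0.6, 0.6, size=2)            # test points inside B_1(R^2)
    relu = np.maximum((omega @ x)[:, None] - b[None, :], 0.0)     # (n_theta, n_b)
    value = np.sum(relu * density[:, None]) * dtheta * db         # ~ 0
    mass = np.sum(relu * np.abs(density)[:, None]) * dtheta * db  # comparison scale
    print(f"x={x}, network value {value:.2e} (total-variation scale {mass:.2f})")

Truncating this quadrature to finitely many atoms gives a finite-width network with non-zero weights that is close to the zero function on the ball, which is the construction the mode-connectivity discussion above alludes to.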
1. What is the main contribution of the paper regarding representation theorems for functions on a bounded set? 2. How does the proposed semi-norm ||{R,U} based on the Radon transform compare to prior work on representation theorems? 3. What are the strengths of the paper, particularly in improving upon Barron's Theorem? 4. Are there any weaknesses or areas for improvement in the paper's analysis or conclusions? 5. Do you have any questions or concerns about the paper's methodology or results?
Summary Of The Paper Review
Summary Of The Paper The main result of this paper is a representation theorem for functions on a bounded set U by two-layer neural networks with bounded width and bounded weights, via a semi-norm ‖·‖R,U based on the Radon transform. Prior work [OWSS19] used the Radon transform to get a representation theorem for functions on R^d, but the bounded-domain setting is more realistic and allows for stronger results, since the semi-norm ‖·‖R,R^d will naturally be larger than ‖·‖R,U. Moreover, this paper shows that when U is a Euclidean ball, the semi-norm ‖·‖R,U can be upper bounded in terms of the Barron norm, so this paper's representation theorem strengthens Barron's Theorem. Review I recommend acceptance. STRENGTHS: The Radon semi-norm is a potentially useful way of understanding representation, and as the paper shows, there are examples where it can provide a much better bound than the Barron norm. WEAKNESSES: None that I am aware of. QUESTIONS: Are there examples where ‖·‖R,U is substantially smaller than ‖·‖R (e.g. is this true for Example 1)? Should we expect it to be much smaller for many reasonable functions? More discussion on when/why this bound is better than the prior bounds would be helpful.
ICLR
Title Tighter Sparse Approximation Bounds for ReLU Neural Networks Abstract A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball BR(R) up to error , when the Fourier based quantityCf = ∫ Rd ‖ξ‖ |f̂(ξ)| dξ is finite. More recently Ongie et al. (2019) used the Radon transform as a tool for analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based R-norms and show that a function defined on R can be represented as an infinite-width twolayer neural network if and only if its R-norm is finite. In this work, we extend the framework of (Ongie et al., 2019) and define similar Radon-based semi-norms (R,U-norms) such that a function admits an infinite-width neural network representation on a bounded open set U ⊆ R when itsR,U-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity. N/A A well-known line of work (Barron, 1993; Breiman, 1993; Klusowski & Barron, 2018) provides bounds on the width n of a ReLU two-layer neural network needed to approximate a function f over the ball BR(Rd) up to error , when the Fourier based quantityCf = ∫ Rd ‖ξ‖ 2|f̂(ξ)| dξ is finite. More recently Ongie et al. (2019) used the Radon transform as a tool for analysis of infinite-width ReLU two-layer networks. In particular, they introduce the concept of Radon-based R-norms and show that a function defined on Rd can be represented as an infinite-width twolayer neural network if and only if its R-norm is finite. In this work, we extend the framework of (Ongie et al., 2019) and define similar Radon-based semi-norms (R,U-norms) such that a function admits an infinite-width neural network representation on a bounded open set U ⊆ Rd when itsR,U-norm is finite. Building on this, we derive sparse (finite-width) neural network approximation bounds that refine those of Breiman (1993); Klusowski & Barron (2018). Finally, we show that infinite-width neural network representations on bounded open sets are not unique and study their structure, providing a functional view of mode connectivity. 1 INTRODUCTION Extensive work has shown that for a neural network to be able to generalize, the size or magnitude of the parameters is more important than the size of the network, when the latter is large enough (Bartlett, 1997; Neyshabur et al., 2015; Zhang et al., 2016). Under certain regimes, the size of the neural networks used in practice is so large that the training data is fit perfectly and an infinite-width approximation is appropriate. In this setting, what matters to obtain good generalization is to fit the data using the right inductive bias, which is specified by how network parameters are controlled (Wei et al., 2020) together with the training algorithm used (Lyu & Li, 2020). The infinite-width two-layer neural network model has been studied from several perspectives due to its simplicity. One can replace the finite-width ReLU network 1n ∑n i=1 ai(〈ωi, x〉 − bi)+ by an integral over the parameter space with respect to a signed Radon measure: ∫ (〈ω, x〉− b)+ dα(ω, b). 
Thus, controlling the magnitude of the neural network parameters is akin to controlling the measure α according to a certain norm. Bach (2017) introduced the F1-space, which is the infinitewidth neural network space with norm inf{ ∫ |b| d|α|(ω, b)}, derived from the finite-width regular- izer 1n ∑n i=1 |ai|‖(ωi, bi)‖2 (the infimum is over all the measures α which represent the function at hand). A different line of work (Savarese et al., 2019; Ongie et al., 2019) consider the infinite-width spaces with norm inf{‖α‖TV = ∫ d|α|(ω, b)}, which is derived from the finite-width regularizer 1 n ∑n i=1 |ai|‖ωi‖2 (i.e. omitting the bias term). Both of these works seek to find expressions for this norm, leading to characterizations of the functions that are representable by infinite-width networks. Savarese et al. (2019) solves the problem in the one-dimensional case: they show that for a function f on R, this norm takes value max{ ∫ R |f ′′(x)| dx, |f ′(−∞) + f ′(∞)|}. Ongie et al. (2019) give an expression for this norm (the R-norm) for functions on Rd, making use of Radon transforms (see Subsec. 2.3). Although we mentioned in the first paragraph that in many occasions the network size is large enough that the specific number of neurons is irrelevant, when the target function is hard to approx- imate it is interesting to have an idea of how many neurons one needs to approximate it. The first contribution in this direction was by Cybenko (1989); Hornik et al. (1989), which show that twolayer neural networks with enough neurons can approximate any reasonable function on bounded sets in the uniform convergence topology. Later on, Barron (1993); Breiman (1993) provided sparse approximation bounds stating that if a function f is such that a certain quantity Cf constructed from the Fourier transform f̂ is finite, then there exists a neural network of width n such that the L2 approximation error with respect to a distribution of bounded support is lower than O(Cf/n). More recently, Klusowski & Barron (2018) provided alternative sparse approximation bounds of Breiman (1993) by restricting to networks with bounded weights and a slightly better dependency on n at the expense of a constant factor increasing with d (see Subsec. 2.2). Contributions. In our work, we seek to characterize the functions that coincide with an infinitewidth two-layer neural network on a fixed bounded open set. This endeavor is interesting in itself because in practice, we want to learn target functions for which we know samples on a bounded set, and we are typically unconcerned with the values that the learned functions take at infinity. Moreover, the tools that we develop allow us to derive state-of-the-art sparse approximation bounds. Our main contributions are the following: • In the spirit of theR-norm introduced by Ongie et al. (2019), for any bounded open set U ⊆ Rd we define theR,U-norm of a function on Rd, and show that when theR,U-norm of f is finite, f(x) can admits a representation of the form ∫ Sd−1×R(〈ω, x〉 − b)+ dα(ω, b) + 〈v, x〉 + c for x ∈ U , where v ∈ Rd, c ∈ R and α is an even signed Radon measure. • Using theR,U-norm, we derive function approximation bounds for neural networks with a fixed finite width. We compute theR,U-norm of a function in terms of its Fourier representation, and show that it admits an upper bound by the quantity Cf . This shows that our approximation bound is tighter than the previous bound by Breiman (1993), and meaningful in more instances (e.g. for finite-width neural networks). 
We also showR,U-norm-based bounds analogous to the ones of Klusowski & Barron (2018). • Setting U as the open unit ball of radius R, we show that neural network representations of f on U hold for multiple even Radon measures, which contrasts with the uniqueness result provided by Ongie et al. (2019) for the case of Rd. We study the structure of the sets of Radon measures which give rise to the same function on U . The non-uniqueness of the measure representing a measure could be linked to the phenomenon of mode connectivity. Additional related work. There have been other recent works which have used the Radon transform to study neural networks in settings different from ours (Parhi & Nowak, 2021a; Bartolucci et al., 2021). These two works consider the R-norm as a regularizer for an inverse problem, and proceed to prove representer theorems: there exists a solution of the regularized problem which is a twolayer neural network equal to the number of datapoints. Regarding infinite-width network spaces, E & Wojtowytsch (2020) present several equivalent definitions and provides a review. A well-known line of work (Mei et al., 2018; Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018) studies the convergence of gradient descent for infinite-width two-layer neural networks. 2 FRAMEWORK 2.1 NOTATION Sd−1 denotes the (d − 1)-dimensional hypersphere (as a submanifold of Rd) and BR(Rd) is the Euclidean open ball of radius R. For U ⊆ Rd measurable, the space C0(U) of functions vanishing at infinity contains the continuous functions f such that for any > 0, there exists compact K ⊆ U depending on f such that |f(x)| < for x ∈ U \ K. P(U) is the set of Borel probability measures, M(U) is the space of finite signed Radon measures (which may be seen as the dual of C0(U)). Throughout the paper, the term Radon measure refers to a finite signed Radon measure for shortness. If γ ∈ M(U), then ‖γ‖TV is the total variation (TV) norm of γ. MC(U) denotes the space of complex-valued finite signed Radon measures, defined as the dual space of C0(U,C) (the space of complex-valued functions vanishing at infinity). We denote by S(Rd) the space of Schwartz functions, which contains the functions in C∞(Rd) whose derivatives of any order decay faster than polynomials of all orders, i.e. for all k, p ∈ (N0)d, supx∈Rd |xk∂(p)ϕ(x)| < +∞. For f ∈ L1(Rd), we use f̂ to denote the unitary Fourier transforms with angular frequency, defined as f̂(ξ) = 1 (2π)d/2 ∫ Rd f(x)e −i〈ξ,x〉dx. If f̂ ∈ L1(Rd) as well, we have the inversion formula f(x) = 1 (2π)d/2 ∫ Rd f̂(ξ)e i〈ξ,x〉dx. The Fourier transform is a continuous automorphism on S(Rd). 2.2 EXISTING SPARSE APPROXIMATION BOUNDS One of the classical results of the theory of two-layer neural networks (Breiman (1993), building on (Barron, 1993)) states that given a probability measure p ∈ P(BR(Rd)) and a function f : BR(Rd)→ R admitting a Fourier representation of the form f(x) = 1(2π)d/2 ∫ Rd e i〈ξ,x〉df̂(ξ), where f̂ ∈MC(Rd) is a complex-valued Radon measure such that Cf = 1(2π)d/2 ∫ Rd ‖ξ‖ 2 2 d|f̂ |(ξ) < +∞, there exists a two-layer neural network f̃(x) = 1n ∑n i=1 ai(〈x, ωi〉 − bi)+ such that∫ BR(Rd) (f(x)− f̃(x))2 dx ≤ (2R)4C2f n . (1) These classical results do not provide bounds on the magnitude of the neural network weights. More recently, Klusowski & Barron (2018) showed similar approximation bounds for two-layer ReLU networks under additional l1 and l0 bounds on the weights ai, ωi. 
Namely, if C̃f = 1 (2π)d/2 ∫ Rd ‖ξ‖ 2 1 d|f̂ |(ξ) < +∞ there exists a two-layer neural network f̃(x) = a0 + 〈ω0, x〉 + κ n ∑n i=1 ai(〈wi, x〉 − bi)+ with |ai| ≤ 1, ‖ωi‖ ≤ 1, bi ∈ [0, 1], and κ ≤ 2C̃f , and sup x∈[−1,1]d |f(x)− f̃(x)| ≤ c C̃f √ d+ log n n−1/2−1/d, (2) where c is a universal constant. 2.3 REPRESENTATION RESULTS ON Rd BASED ON THE RADON TRANSFORM One defines Pd denotes the space of hyperplanes on Rd, whose elements may be represented by points in Sd−1 × R by identifying {x|〈ω, x〉 = b} with both (ω, b) and (−ω,−b). Thus, functions on Pd are even functions on Sd−1 × R and we will use both notions interchangeably1. The Radon transform and the dual Radon transform. If f : Rd → R is a function which is integrable over all the hyperplanes of Rd, we may define the Radon transformRf : Pd → R as Rf(ω, b) = ∫ {x|〈ω,x〉=b} f(x) dx, ∀(ω, b) ∈ Sd−1 × R. That is, one integrates the function f over the hyperplane (ω, b). If Φ : Pd → R is a continuous function, the dual Radon transformR∗Φ : Rd → R is defined as R∗Φ(x) = ∫ Sd−1 Φ(ω, 〈ω, x〉) dω, ∀x ∈ Rd, where the integral is with respect to the Hausdorff measure over Sd−1. R and R∗ are adjoint operators in the appropriate domains (see Lemma 13). The Radon inversion formula. When f ∈ C∞(Rd), one has that (Theorem 3.1, Helgason (2011)) f = cd(−∆)(d−1)/2R∗Rf (3) where cd = 12(2π)d−1 and (−∆) s/2 denotes the (negative) fractional Laplacian, defined via its Fourier transform as ̂(−∆)s/2f(ξ) = ‖ξ‖sf̂(ξ). 1Similarly, the space M(Pd) of Radon measures over Pd contains the even measures in M(Sd−1 × R). If α ∈ M(Pd), ∫ Sd−1×R ϕ(ω, b) dα(ω, b) is well defined for any measurable function ϕ on S d−1 × R, but∫ Pd ϕ(ω, b) dα(ω, b) is only defined for even ϕ. TheR-norm. Given a function f : Rd → R, Ongie et al. (2019) introduce the quantity ‖f‖R = { sup{−cd〈f, (−∆)(d+1)/2R∗ψ〉 | ψ ∈ S(Sd−1 × R), ψ even, ‖ψ‖∞ ≤ 1} if f Lipschitz +∞ otherwise. (4) They call it the R-norm of f , although it is formally a semi-norm. Here, the space S(Sd−1 × R) of Schwartz functions on Sd−1 × R is defined, in analogy with S(Rd), as the space of C∞ functions ψ on Sd−1 × R which for any integers k, l ≥ 0 and any differential operator D on Sd−1 satisfy sup(ω,b)∈Sd−1×R |(1 + |b|k)∂kb (Dψ)(ω, b)| < +∞ (Helgason (2011), p. 5). Moreover, S(Pd) = {ψ ∈ S(Sd−1 × R) | ψ even}, which means the conditions on ψ in (4) can be written as ψ ∈ S(Pd), ‖ψ‖∞ ≤ 1. The finiteness of the R-norm indicates whether a function on Rd admits an exact representation as an infinitely wide neural network. Namely, Ongie et al. (2019) in their Lemma 10 show that ‖f‖R is finite if and only if there exists a (unique) even measure α ∈M(Sd−1×R) and (unique) v ∈ Rd, c ∈ R such that for any x ∈ Rd, f(x) = ∫ Sd−1×R ( 〈ω, x〉 − b ) + dα(ω, b) + 〈v, x〉+ c, (5) in which case, ‖f‖R = ‖α‖TV. Remark the following differences between this result and the bounds by (Breiman, 1993; Klusowski & Barron, 2018) shown in equations (1) and (2): (i) in (5) we have an exact representation with infinite-width neural networks instead of an approximation result with finite-width, (ii) in (5) the representation holds on Rd instead of a bounded domain. In our work, we derive representation results similar to the ones of Ongie et al. (2019) for functions defined on bounded open sets, which naturally give rise to sparse approximation results that refine those of (Breiman, 1993; Klusowski & Barron, 2018). 
One property that makes the Radon transform and its dual useful to analyze neural networks can be understood at a very high level via the following argument: if f(x) = ∫ Sd−1×R(〈ω, x〉 − b)+ρ(ω, b) d(ω, b) + 〈v, x〉 + c for some smooth rapidly decreasing function ρ, then ∆f(x) =∫ Sd−1×R δ〈ω,x〉=bρ(ω, b) d(ω, b) = ∫ Sd−1 ρ(ω, 〈ω, x〉) dω = (R ∗ρ)(x). For a general function f of the form (5), one has similarly that 〈∆f, ϕ〉 = 〈α,Rϕ〉 for any ϕ ∈ S(Rd). This property relates the evaluations of the measure α to the function ∆f via the Radon transform, and is the main ingredient in the proof of Lemma 10 of Ongie et al. (2019). While we also rely on it, we need many additional tools to deal with the case of bounded open sets. 3 REPRESENTATION RESULTS ON BOUNDED OPEN SETS Schwartz functions on open sets. Let U ⊆ Rd be an open subset. The space of Schwartz functions on U may be defined as S(U) = ⋂ z∈Rd\U ⋂ k∈(N0)d{f ∈ S(R d) | ∂(k)f(z) = 0}, i.e. they are those Schwartz functions on Rd such that the derivatives of all orders vanish outside of U (c.f. Def. 3.2, Shaviv (2020)). The structure of S(U) is similar to S(Rd) in that its topology is given by a family of semi-norms indexed by ((N0)d)2: ‖f‖k,k′ = supx∈U |xk · f (k ′)(x)|. Similarly, if V ⊆ Pd is open, we define S(V) = ⋂ (ω,b)∈(Sd−1×R)\V ⋂ k∈(N0)2{f ∈ S(P d) | ∂k1b ∆̂k2f(ω, b) = 0}, where ∆̂ is the spherical Laplacian. TheR,U-norm. Let U ⊆ Rd be a bounded open set, and let Ũ := {(ω, 〈ω, x〉) ∈ Sd−1 × R |x ∈ U}. For any function f : Rd → R, we define theR,U-norm of f as ‖f‖R,U = sup{−cd〈f, (−∆)(d+1)/2R∗ψ〉 | ψ ∈ S(Ũ), ψ even, ‖ψ‖∞ ≤ 1}. (6) Note the similarity between this quantity and the R-norm defined in (4); the main differences are that the supremum here is taken over the even Schwartz functions on Ũ instead of Sd−1 × R, and that the non-Lipschitz case does not need a separate treatment. Remark that ‖f‖R,U ≤ ‖f‖R. If f has enough regularity, we may write ‖f‖R,U = ∫ Ũ |R(−∆) (d+1)/2f |(ω, b) d(ω, b), using that the fractional Laplacian is self-adjoint andR∗ is the adjoint ofR. Define PdU to be the bounded open set of hyperplanes of Rd that intersect U , which in analogy with Subsec. 2.3, is equal to Ũ up to the identification of (ω, b) with (−ω,−b). Similarly, note that S(PdU ) = {ψ ∈ S(Ũ), ψ even}, which allows to rewrite the conditions in (6) as ψ ∈ S(PdU ), ‖ψ‖∞ ≤ 1. The following proposition, which is based on the Riesz-Markov-Kakutani representation theorem, shows that when theR,U-norm is finite, it can be associated to a unique Radon measure over PdU . Proposition 1. If ‖f‖R,U < +∞, there exists a unique Radon measure α ∈ M(PdU ) such that −cd〈f, (−∆)(d+1)/2R∗ψ〉 = ∫ PdU ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU ). Moreover, ‖f‖R,U = ‖α‖TV. Building on this, we see that a neural network representation for bounded U holds when the R,Unorm is finite: Theorem 1. Let U be a open, bounded subset of Rd. Let f : Rd → R such that ‖f‖R,U < +∞. Let α ∈ M(PdU ) be given by Proposition 1. For any ϕ ∈ S(U), there exist unique v ∈ Rd and c ∈ R such that ∫ U f(x)ϕ(x) dx = ∫ U (∫ Ũ (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉+ c ) ϕ(x) dx, (7) That is, f(x) = ∫ Ũ (〈ω, x〉 − t)+ dα(ω, t) + 〈v, x〉 + c for x a.e. (almost everywhere) in U . If f is continuous, then the equality holds for all x ∈ U . Remark that this theorem does not claim that the representation given by α, v, c is unique, unlike Lemma 10 by Ongie et al. (2019) concerning analogous representations on Rd. In Sec. 5 we see that such representations are in fact not unique, for particular choices of the set U . 
We want to underline that the proof of Theorem 1 uses completely different tools from the ones of Lemma 10 by Ongie et al. (2019): their result relies critically on the fact that the only harmonic Lipschitz functions on Rd are affine functions, which is not true for functions on bounded subsets in our setting. 4 SPARSE APPROXIMATION FOR FUNCTIONS WITH BOUNDED R,U -NORM In this section, we show how to obtain approximation bounds of a function f on a bounded open set U using a fixed-width neural network with bounded coefficients, in terms of the R,U-norm introduced in the previous section. Theorem 2. Let U ⊆ BR(Rd) be a bounded open set. Suppose that f : Rd → R is such that ‖f‖R,U is finite, where ‖ · ‖R,U is defined in (6). Let v ∈ Rd, c ∈ R as in Theorem 1. Then, there exists {(ωi, bi)}ni=1 ⊆ Ũ and {ai}ni=1 ⊆ {±1} such that the function f̃ : Rd → R defined as f̃(x) = ‖f‖R,U n n∑ i=1 ai(〈ωi, x〉 − bi)+ + 〈v, x〉+ c fulfills, for x a.e. in U , ∣∣∣f̃(x)− f(x)∣∣∣ ≤ R‖f‖R,U√ n . (8) The equality holds for all x ∈ U if f is continuous. The proof of Theorem 2 (in App. B) uses the neural network representation (7) and a probabilistic argument. If one samples {(ωi, bi)}ni=1 from a probability distribution proportional to |α|, a Rademacher complexity bound upper-bounds the expectation of the supremum norm between f̃ and f , which yields the result. Note the resemblance of (8) with the bound (1); theR,U norm of f replaces the quantityCf . We can also use theR,U-norm to obtain a bound analogous to (2), that is, with a slightly better dependency in the exponent of n at the expense of a constant factor growing with the dimension. Proposition 2. Let f : Rd → R and U ⊆ B1(Rd) open such that ‖f‖R,U < +∞. Then, then there exist {ai}ni=1 ⊆ [−1, 1], {ωi}ni=1 ⊆ {ω ∈ Rd|‖ω‖1 = 1} and {bi}ni=1 ⊆ [0, 1] and κ < √ d‖f‖R,U such that the function f̃(x) = κ n n∑ i=1 ai(〈ωi, x〉 − bi)+ fulfills, for x a.e. in U and some universal constant c > 0, |f(x)− f̃(x)| ≤ cκ √ d+ log nn−1/2−1/d. The proof of this result (in App. B) follows readily from the representation (7) and Theorem 1 of Klusowski & Barron (2018). 4.1 LINKS WITH THE FOURIER SPARSE APPROXIMATION BOUNDS The following result shows that setting U = BR(Rd), theR,U-norm can be bounded by the Fourierbased quantities Cf , C̃f introduced in Subsec. 2.2. Theorem 3. Assume that the function f : Rd → R admits a Fourier representation of the form f(x) = 1 (2π)d/2 ∫ Rd e i〈ξ,x〉df̂(ξ) with f̂ ∈ MC(Rd) a complex-valued Radon measure. Let Cf be the quantity used in the sparse approximation bound by Breiman (1993) (see Subsec. 2.2). Then, one has that ‖f‖R,BR(Rd) ≤ 2RCf (9) As a direct consequence of Theorem 3, when U = BR(Rd) the right-hand side of (8) can be upper-bounded by R2Cf/ √ n. This allows to refine the bound (1) from Breiman (1993) to a bound in the supremum norm over BR(Rd), and where the approximating neural network f̃(x) = 1 n ∑n i=1 ai(〈x, ωi〉 − bi)+ + 〈v, x〉+ c fulfills |ai| ≤ ‖f‖R,BR(Rd), ‖ωi‖2 ≤ 1 and bi ∈ (−R,R). While we are able to prove the bound (9), the Fourier representation of f does not allow for a manageable expression for the measure α described in Proposition 1. For that, the following theorem starts from a slightly modified Fourier representation of f , under which one can describe the measure α and provide a formula for theR,U-norm. Theorem 4. 
Let f : Rd → R admitting the representation f(x) = ∫ Sd−1×R eib〈ω,x〉 dµ(ω, b), (10) for some complex-valued Radon measures µ ∈MC(Sd−1×R) such that dµ(ω, b) = dµ(−ω,−b) = dµ̄(−ω, b) = dµ̄(ω,−b), and ∫ Sd−1×R b 2d|µ|(ω, b) < +∞. Choosing U = BR(Rd), the unique measure α ∈M(PdR) specified by Proposition 1 takes the following form: dα(ω, b) = − ∫ R t2e−itb dµ(ω, t) db, where K = 2πd/2/Γ(d2 ). Note that α is real-valued because ∫ R t 2e−itb dµ(ω, t) ∈ R as t2 dµ(ω, t) = (−t)2 dµ(ω,−t). Consequently, theR,BR(Rd)-norm of f is ‖f‖R,BR(Rd) = ‖α‖TV = ∫ R −R ∫ Sd−1 ∣∣∣∣∫ R t2e−itb dµ(ω, t) ∣∣∣∣ db. (11) Remark that µ is the pushforward of the measure f̂ by the mappings ξ 7→ (±ξ/‖ξ‖,±ξ). When the Fourier transform f̂ admits a density, one may obtain the density of µ via a change from Euclidean to spherical coordinates: dµ(ω, b) = 12vol(S d−1)f̂(bω)|b|d−1 d(ω, b). Hence, Theorem 4 provides an operative way to compute the R,U-norm of f if one has access to the Fourier transform of f̂ . Note that equation (11) implies that theR,BR(Rd)-norm of f increases with R, and in particular is smaller than theR-norm of f , which corresponds to setting R =∞. Theorems 3 and 4 are proven jointly in App. B. Note that from the expression (11) one can easily see that ‖f‖R,BR(Rd) is upper-bounded by RCf :∫ R −R ∫ Sd−1 ∣∣∣∣∫ R t2e−itb dµ(ω, t) ∣∣∣∣ db ≤ ∫ R −R ∫ Sd−1 ∫ R t2 d|µ|(ω, t) db = 2R ∫ Rd ‖ξ‖2d|f̂ |(ξ),(12) where the equality holds since µ is the pushforward of f̂ . Equation (12) makes apparent the norm ‖f‖R,BR(Rd) is sometimes much smaller than the quantities Cf , C̃f , as is showcased by the following one-dimensional example (see the proof in App. B). In these situations, the sparse approximation bounds that we provide in Theorem 2 and Proposition 2 are much better than the ones in (1)-(2). Example 1. Take the function f : R → R defined as f(x) = cos(x) − cos((1 + )x), with > 0. f admits the Fourier representation f(x) = 1 (2π)1/2 ∫ R √ π 2 (δ1(ξ) + δ−1(ξ) − δ1+ (ξ) − δ−1− (ξ))e iξx dξ. We have that Cf = 2 + 2 + 2, and ‖f‖R,BR(Rd) ≤ R ( R + 2 + 2 ) . ‖f‖R,BR(Rd) goes to zero as → 0, while Cf converges to 2. An interesting class of functions for which ‖f‖R,BR(Rd) is finite but Cf , C̃f are infinite are functions that can be written as a finite-width neural network on BR(Rd), as shown in the following proposition. Proposition 3. Let f : Rd → R defined as f(x) = 1n ∑n i=1 ai(〈ωi, x〉 − bi)+ for all x ∈ Rd, with {ωi}ni=1 ⊆ Sd−1, {ai}ni=1, {bi}ni=1 ⊆ R. Then, for any bounded open set U , we have ‖f‖R,U ≤ 1 n ∑n i=1 |ai|, while Cf , C̃f = +∞ if f is not an affine function. Proposition 3 makes use of the fact that the R,U-norm is always upper-bounded by the R-norm, which also means that all the bounds developed in Ongie et al. (2019) apply for the R,U-norm. The fact that finite-width neural networks have infinite Cf was stated by E & Wojtowytsch (2020), that used them to show the gap between the functions with finite Cf and functions representable by infinite-width neural networks (belonging to the Barron space, in their terminology). It remains to be seen whether the gap is closed when considering functions with finite R,U-norm, i.e., whether any function admitting an infinite-width representation (7) on U has a finiteR,U-norm. Moving to the non-linear Radon transform. In many applications the function of interest f may be better represented as ∫ (〈ω, ϕ(x)〉 − t)+ dα(ω, t) + 〈v, x〉 + c, where ϕ is a fixed finite dimensional, non-linear and bounded feature map. 
Our results trivially extend to this case where in the Radon transform hyperplanes are replaced by hyperplanes in the feature space. This can be seen as the “kernel trick” applied to the Radon transform. The corresponding ‖f‖R,ϕ(U) corresponds to the sparsity of the decomposition in the feature space, and we have better approximation when ‖f‖R,ϕ(U) < ‖f‖R,U . This gives a simple condition for when transfer learning is successful, and explains the success of using random fourier features as a preprocessing in implicit neural representations of images and surfaces (Tancik et al., 2020). In order to go beyond the fixed feature maps and tackle deeper ReLU networks, we think that the non-linear Radon transform (Ehrenpreis, 2003) is an interesting tool to explore. We note that Parhi & Nowak (2021b) introduced recently a representer theorem for deep ReLU networks using Radon transforms as a regularizer. 5 INFINITE-WIDTH REPRESENTATIONS ARE NOT UNIQUE ON BOUNDED SETS Ongie et al. (2019) show that when theR-norm of f is finite, there is a unique measure α ∈M(Rd) such that the representation (5) holds for x ∈ Rd. In this section we show that when we only ask the representation to hold for x in a bounded open set, there exist several measures that do the job; in fact, they span an infinite-dimensional space. Let U = BR(Rd) be the open ball of radius R > 0 in Rd, which means that Ũ = Sd−1 × (−R,R) and PdU is the set of hyperplanes {x|〈ω, x〉 = b} such that ‖ω‖ = 1 and b ∈ (−R,R), which we denote by PdR for simplicity. In the following we will construct a space of Radon measures α ∈ M(PdR) whose neural network representation (5) coincide for all x ∈ BR(Rd). Note that since any bounded subset of Rd is included in some open ball, our results imply that such representations are non-unique on any bounded set. Remark 1. When one considers representations on BR(Rd) of the sort (5) with the measure α lying in the larger spaceM(Sd−1 × R), the non-uniqueness is apparent because there are two ‘trivial’ kinds of symmetry at play: (i) Related to parity: when the measure α is odd, we have ∫ Sd−1×R(〈ω, x〉 − b)+ dα(ω, b) = 1 2 ∫ Sd−1×R(〈ω, x〉 − b)+ − (−〈ω, x〉 + b)+ dα(ω, b) = 〈 1 2 ∫ Sd−1×R ω dα(ω, b), x〉 − 1 2 ∫ Sd−1×R b dα(ω, b), which is an affine function of x. (ii) Related to boundedness: if (ω, b) ∈ Sd−1×(R\(−R,R)), x 7→ (〈ω, x〉−b)+ restricted to BR(Rd) is an affine function of x. Hence, if α is supported on Sd−1×Sd−1×(R\(−R,R)), x 7→ ∫ Sd−1×R(〈ω, x〉 − b)+ dα(ω, b) is an affine function when restricted to BR(R d). Since in Sec. 3 we restrict our scope to measures α lying inM(PdU ), these two kinds of symmetries are already quotiented out in our analysis. The third kind of non-uniqueness that we discuss in this section is conceptually deeper, taking place withinM(PdU ). Let {Yk,j | k ∈ Z+, 1 ≤ j ≤ Nk,d} be the orthonormal basis of spherical harmonics of the space L2(Sd−1) (Atkinson & Han, 2012). It is well known that for any k, the functions {Yk,j | 1 ≤ j ≤ Nk,d} are the restrictions to Sd−1 of homogeneous polynomials of degree k, and in fact Nk,d is the dimension of the space of homogeneous harmonic polynomials of degree k. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)): A = {Yk,j ⊗Xk ′ | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k − 2, 1 ≤ j ≤ Nd,k}, where Xk ′ denotes the monomial of degree k′ on (−R,R). We have the following result regarding the non-uniqueness of neural network representations: Theorem 5. 
If α ∈ M(PdR) is such that α ∈ clw(span(A)), then we have that 0 =∫ Sd−1×(−R,R)(〈ω, x〉 − b)+ dα(ω, b) for any x ∈ BR(R d). That is, α yields a neural network representation of the zero-function on BR(Rd). Here, we consider span(A) as a subset ofM(PdR) by the Riesz-Markov-Kakutani representation theorem via the action 〈g, ϕ〉 = ∫ PdR ϕ(ω, b)g(ω, b) d(ω, b) for any g ∈ span(A), ϕ ∈ C0(PdR), and clw denotes the closure in the topology of weak convergence ofM(Sd−1 × R). In particular, any measure whose density is in the span of A will yield a function which is equal to zero when restricted to BR(Rd). As an example of this result, we show a simple measure inM(Pd1) which represents the zero function on B1(R2). Example 2 (Non-zero measure representing the zero function onB1(R2)). We define the even Radon measure α ∈M(S1×(−1, 1)) with density dα(ω, b) = (8ω40−8ω20+1) d(ω, b) where ω = (ω0, ω1). Then, for any x ∈ B1(R2), 0 = ∫ S1×(−1,1)(〈ω, x〉 − b)+ dα(ω, x). On the one hand, Proposition 1 states that there exists a unique measure α ∈ M(PdU ) such that −cd〈f, (−∆)(d+1)/2R∗ψ〉 = ∫ PdU ψ(ω, b) dα(ω, b) for any ψ ∈ S(PdU ) if ‖f‖R,U is finite. On the other hand, Theorem 5 claims that functions admit distinct representations by measures inM(PdU ). The following theorem clarifies these two seemingly contradictory statements. Consider the following subset of even functions in C∞(Sd−1 × (−R,R)), which contains A: B = {Yk,j ⊗Xk ′ | k, j, k′ ∈ Z+, k ≡ k′ (mod 2), k′ < k, 1 ≤ j ≤ Nd,k}. Proposition 4. Let 0 < R < R′. Let f : Rd → R such that ‖f‖R,BR′ (Rd) < +∞ and let α ∈ M(PdR) be the unique measure specified by Proposition 1. Then, α is the unique measure in M(PdR) such that ∀ϕ ∈ S(BR(Rd)), 〈α,Rϕ〉 = ∫ BR(Rd)) f(x)∆ϕ(x) dx, (13) ∀k, j, k′ ∈ Z+ s.t. k′ ≡ k (mod 2), k′ < k, 1 ≤ j ≤ Nk,d, 〈α, Yk,j ⊗Xk ′ 〉 = −cd〈f, (−∆)(d+1)/2R∗(Yk,j ⊗ 1|X|<RXk ′ )〉. (14) The condition (13) holds for any measure α′ ∈ M(PdR) for which f admits a representation of the form (7) on BR(Rd). Thus, α can be characterized as the unique measure inM(PdR) such that f admits a representation of the form (7) on BR(Rd) and the condition (14) holds. In (14), the quantity 〈f, (−∆)(d+1)/2R∗(Yk,j ⊗ 1|X|<RXk ′ )〉 is well defined despite 1|X|<RXk ′ not being continuous on R; we define it as 〈f, (−∆)(d+1)/2R∗((Yk,j ⊗1|X|<RXk ′ ) + g̃)〉, where g̃ is any function in S(PdR′) such that (Yk,j⊗1|X|<RXk ′ )+ g̃ ∈ S(PdR′) (which do exist, see App. C). In short, Proposition 4 characterizes the measure α from Proposition 1 in terms of its evaluations on the spaces R(S(BR(Rd))) and span(B), and by Corollary 1 the direct sum of these two spaces dense in C0(PdR), which by the Riesz-Markov-Kakutani representation theorem is the predual of M(PdR). Interestingly, the condition (13) holds for any measure α ∈ M(PdR) which represents the function f on BR(Rd), but it is easy to see that the condition (14) does not: by Theorem 5 we have that if ψ ∈ span(A) ⊆ span(B), the measure α′ defined as dα′(ω, b) = dα(ω, b) + ψ(ω, b) db represents the function f on BR(Rd), and 〈α′, ψ〉 = 〈α,ψ〉+ ‖ψ‖22. It remains an open question to see whether Theorem 5 captures all the measures which represent the zero function on BR(Rd), which we hypothesize. If that was the case, we would obtain a complete characterization of the Radon measures which represent a given function on BR(Rd). Mode connectivity. 
Mode connectivity is the phenomenon that optima of neural network losses (at least the ones found by gradient descent) turn out to be connected by paths where the loss value is almost constant, and was observed empirically by Garipov et al. (2018); Draxler et al. (2018). Kuditipudi et al. (2019) provided an algorithmic explanation based on dropout, and an explanation based on the noise stability property. Theorem 5 suggests an explanation for mode connectivity from a functional perspective: one can construct finitely-supported measures which approximate a measure α ∈ clw(span(A)), yielding finite-width neural networks with non-zero weights which approximate the zero function on BR(Rd). Assuming that the data distribution is supported in BR(Rd), adding a multiple of one such network to an optimal network will produce little change in the loss value because the function being represented is essentially unchanged. More work is required to confirm or discard this intuition. 6 CONCLUSION We provided in this paper tighter sparse approximation bounds for two-layer ReLU neural networks. Our results build on the introduction of Radon-basedR,U-norms for functions defined on a bounded open set U . Our bounds refine Fourier-based approximation bounds of Breiman (1993); Klusowski & Barron (2018). We also showed that the representation of infinite width neural networks on bounded open sets are not unique, which can be seen as a functional view of mode connectivity observed in training deep neural networks. We leave two open questions: whether any function admitting an infinite-width representation on U has a finite R,U-norm, and whether Theorem 5 captures all the measures which represent the zero function on BR(Rd). Finally, in order to extend our theory to deeper ReLU networks we believe that non-linear Radon transforms (Ehrenpreis, 2003) are interesting tools to explore.
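As a small numerical sketch of Example 2 and of the mode-connectivity intuition above (assuming numpy; the function names and the discretization are illustrative, not part of any released code), discretizing the density 8ω0^4 − 8ω0^2 + 1 = cos(4θ) over S^1 × (−1, 1) gives a finite-width ReLU network with non-zero coefficients whose output is numerically zero on B_1(R^2):

```python
import numpy as np

# Example 2: the even density 8*w0^4 - 8*w0^2 + 1 (= cos(4*theta) for
# omega = (cos(theta), sin(theta)), i.e. the Chebyshev polynomial T_4(w0))
# should integrate the ReLU ridge functions to zero on the unit ball B_1(R^2).
def relu_ridge_integral(x, n_theta=2000, n_b=2000):
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    b = np.linspace(-1.0, 1.0, n_b, endpoint=False) + 1.0 / n_b      # midpoints of (-1, 1)
    omega = np.stack([np.cos(theta), np.sin(theta)], axis=1)          # samples of S^1
    density = np.cos(4.0 * theta)                                     # 8*w0^4 - 8*w0^2 + 1
    proj = omega @ x                                                  # <omega, x>
    relu = np.maximum(proj[:, None] - b[None, :], 0.0)                # (<omega, x> - b)_+
    inner = relu.mean(axis=1) * 2.0                                   # integral over b in (-1, 1)
    return (inner * density).mean() * 2.0 * np.pi                     # integral over S^1

rng = np.random.default_rng(0)
for _ in range(3):
    x = rng.normal(size=2)
    x *= rng.uniform() / np.linalg.norm(x)         # random point inside B_1(R^2)
    print(relu_ridge_integral(x))                  # ~ 0 up to discretization error
```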
1. What is the focus of the paper regarding neural networks? 2. What are the novel contributions introduced by the paper? 3. How do the authors improve upon prior works in terms of sparse approximation bounds? 4. What are the limitations of the paper's results regarding the scope of neural network architectures? 5. Are there any questions or concerns regarding the technical aspects and proof presented in the paper?
Summary Of The Paper Review
Summary Of The Paper This paper studies the class of functions that coincide with an infinite-width two-layer neural network on a fixed bounded open set. First, they introduce Radon-based R, U-norms for functions defined on a bounded open set U. Then they prove tighter sparse approximation bounds for two-layer ReLU neural networks. They also prove that the representations of infinite-width neural networks on bounded open sets are not unique. Review The novelty of this paper is high. By introducing the R, U-norms, the authors show that their approximation bound is tighter than the bounds in previous papers, and meaningful in more instances, such as finite-width neural networks. This result will certainly help us gain a better understanding of deep neural networks from a theoretical standpoint. I have checked the technical parts and find that the proofs are solid. I think this is a significant contribution to the deep learning community. The theoretical results in this paper are about two-layer ReLU neural networks. It will be interesting to see extended results on more widely used architectures in the future. Overall, I think the results in this paper are important, as explained above.
ICLR
Title Geometry aware convolutional filters for omnidirectional images representation Abstract Due to their wide field of view, omnidirectional cameras are frequently used by autonomous vehicles, drones and robots for navigation and other computer vision tasks. The images captured by such cameras, are often analyzed and classified with techniques designed for planar images that unfortunately fail to properly handle the native geometry of such images. That results in suboptimal performance, and lack of truly meaningful visual features. In this paper we aim at improving popular deep convolutional neural networks so that they can properly take into account the specific properties of omnidirectional data. In particular we propose an algorithm that adapts convolutional layers, which often serve as a core building block of a CNN, to the properties of omnidirectional images. Thus, our filters have a shape and size that adapts with the location on the omnidirectional image. We show that our method is not limited to spherical surfaces and is able to incorporate the knowledge about any kind of omnidirectional geometry inside the deep learning network. As depicted by our experiments, our method outperforms the existing deep neural network techniques for omnidirectional image classification and compression tasks. 1 INTRODUCTION Drone vision, autonomous cars and robot navigation systems often use omnidirectional cameras, as they allow recording the scene with wide field of view. Despite their obvious advantages, images obtained by such cameras have different statistics compared to planar images. Nevertheless, omnidirectional images are often processed with standard techniques, which are unfortunately poorly adapted to the specific geometry of such images. In this paper we improve one of the most popular frameworks for image processing, namely convolutional neural network (CNN) for omnidirectional images. CNNs prove to be effective, as they permit to achieve very good performance in many different tasks like image classification, segmentation, generation and compression. In the context of omnidirectional cameras, CNNs are typically applied directly to the unwrapped and distorted spherical images. This approach, however, is suboptimal: due to specific geometry of these images, and, in particular, the change in the image statistics with the position in the image. The latter forces the network to learn different filters for different locations in the omnidirectional images (see Fig. 1). To solve this issue, we replace ordinary convolutional filters with the graph-based ones that can adapt their size and shape depending on the position in the omnidirectional image thanks to the flexible graph structure. In order to overcome the common limitation of these graph-based filters being isotropic (i.e. invariant to the changes in the objects orientation in the image) we suggest to use multiple directed graphs instead of a single undirected one, as used in (Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017a). This permits our method to encode the orientation of the objects that appear in the images and, therefore, extract more meaningful features from them. This together with the inherent ability of our filters to adapt to the image projective geometry allows our method to reach state-of-the-art performance when working not just with spherical projections, as done by Khasanova & Frossard (2017b); Cohen et al. 
(2018), but virtually with any type of image projective geometry that can be encoded by a graph. In our experiments we show that apart from the state-of-the-art results on the regular omnidirectional images classification task, our approach outperforms the existing classification techniques when applied to the images projected on a randomly perturbed spherical surface or on a cube via the cube-map projection, which has recently become one of the popular ways to represent 360-images (Chen et al., 2018). Finally, we demonstrate that our method can be applied for orientation-dependent task such as compression and reduce artifacts comparing with a standard approaches. 2 RELATED WORK In this section we first briefly introduce the most recent approaches that combine the power of deep learning methods with graph signal processing techniques, as they are similar in spirit to our work. We then discuss in more details the recent trends in omnidirectional image processing. Geometric Deep learning. In the recent years a number of deep learning methods have been introduced tailored for processing irregular data that is represented as a graph. One way to solve this problem is suggested by Monti et al. (2017), where the authors define a local system of ddimensional pseudo-coordinates for every node of the graph and learn both the filters and patch operators that work on these coordinates. A different direction is taken by Wang et al. (2018), who propose using edge-based convolutional kernels and dynamically update the graph. While being flexible and effective for general tasks, in the specific context of the omnidirectional images these methods do not directly take the advantage of the knowledge about the projective geometry, which we model using a specifically designed graph representation. While most of the existing methods work with undirected graphs, the recent work of (Monti et al., 2018) propose an approach for processing data defined on a directed graph by exploiting local graph motifs that describe its connectivity patterns. The main differences of this method with our work are that first, this method assumes that the directed graph is already given. In our problem, building such a graph that is able to fully take advantage of the image projective geometry is one of the contributions. Second, the approach in (Monti et al., 2018) does not use the knowledge of the coordinate system associated with omnidirectional images, which we however use in our architecture, in order to define filter orientations. The list of the aforementioned works is by no means extensive, therefore we refer the interested reader to the survey (Bronstein et al., 2017), which summarizes many geometric deep learning methods in detail. Omnidirectional image processing. The most typical way of dealing with images taken by omnidirectional cameras is to apply standard image processing techniques directly on the equirectangular projection images, which is one of the most common representations for the omnidirectional images (De Simone et al., 2016). However, due to the strong distortion effects introduced by the process of unwrapping of the projection surface to a plane, standard techniques lose much of their efficiency, as the appearance of the same object may change depending on its location in the image. To overcome this problem the recent work of Khasanova & Frossard (2017b) suggests using graph-based special convolutional filers to adapt to the geometry of omnidirectional images. 
This method, however, relies on the convolutional operation defined in the spectral domain, which leads to isotropic filters and may reduce the complexity of trained filters. We, on the other hand, propose to build anisotropic graphs-based convolutional filters that do not have this limitation. A different direction is taken by the authors of (Su & Grauman, 2017) who suggest adapting the size of the convolutional kernel to the elevation of the equirectangular image. The main limitation of this technique, however, is that it requires a significantly larger number of parameters than the competing techniques, as it does not have the weight sharing property of CNNs. It rather requires learning different convolutional filters for different elevations of the equirectangular image. Further, Jeon & Kim (2017) propose to learn the shape of the convolutional filter, by learning the sampling locations (position offsets), where the elements of the filter are evaluated. The authors of (Dai et al., 2017) extend later this idea and suggest learning dynamic offsets of the filter elements depending on the image content, which allows the filters to adapt to different parts of the image. This method is quite flexible, however in the context of omnidirectional images requires an extensive training set, as the network needs to learn how it should react to various objects appearing at any possible elevation. In our work we rather take advantage of the knowledge of the image projective geometry and use this knowledge in the design of our architecture to adapt the size and shape of convolutional filters. A different approach is suggested by Cohen et al. (2018), who introduces a CNN that is designed for spherical shapes and define filters directly on its surface. This method, however, is specifically designed for processing spherical images, while our approach is easily adapted to different kind of shapes, which we show in our experiments. Further, the methods of (Coors et al., 2018; Tateno et al., 2018) suggest a different way of compensating for the distortion of omnidirectional image. They suggest adapting the sampling locations of the convolutional filters to the geometry of the lens by projecting kernels to the sphere and using interpolated pixel values on the projected locations for implementing the convolutional filters. While these works are the closest to in spirit to ours, we propose a more general architecture, which permits to adapt the shape and size of the convolutional kernel to the location of omnidirectional image, and therefore to use the information about all the pixels and not only of a subset of them. Then, the authors in (Monroy et al., 2018; Ruder et al., 2018) suggest a completely different approach to tackle the distortion of omnidirectional images. Instead of working with equirectangular images, they propose to project an omnidirectional image to a cube, where each of its faces represents an image that would have been seen through a regular perspective camera, with the optical center located in the center of the cube (Chen et al., 2018). Representing an omnidirectional image in this way allows having less noticeable distortion effects as compared to equirectangular images. This representation, however, suffers from another type of distortion that appears due to discontinuity effect on the borders between the faces of the cube. To mitigate this issue, Monroy et al. (2018) propose to apply a smoothing filter as a post-processing step and Ruder et al. 
(2018) suggest an algorithm that enforces consistency between the neighboring facets of the cube. Contrary to the mentioned approaches, as we model the cube surface as a graph, our algorithm can easily handle the discontinuity problem and adapt to the image distortions introduced by the cube-map projection. 3 GEOMETRY-AWARE CNN In this section we describe our algorithm, which adapts convolutional filters to the distortion of omnidirectional images. We start with the introduction of the equirectangular projection, as it is one of the common ways to represent images from omnidirectional cameras (De Simone et al., 2016; Coors et al., 2018). We then describe our graph-based representation learning framework. 3.1 EQUIRECTANGULAR PROJECTION Omnidirectional visual content can be represented as a sphere with radius r, where the user is assumed to be located at the center. Each 3D point can be projected to a point on the surface of this sphere, which is described by spherical coordinates, namely a longitude θ ∈ [−π, π] and a latitude φ ∈ [−π/2, π/2]. Omnidirectional images are generally not processed directly in their native geometry, but are first projected to the 2D plane, where classical image processing techniques can be applied. One of the popular projections involves sampling the data on the sphere with equal steps ∆θ and ∆φ, which results in an equirectangular image on the 2D plane. Thus, each point of the equirectangular image is defined by its spherical coordinates. To describe this projection let us introduce the tangent plane T, which is tangent to the sphere at the point (θ0, φ0). Each point (x, y) on the tangent plane is then projected to the sphere surface (θ, φ) as follows (Coors et al., 2018): φ(x, y) = sin−1(cos ν sin φ0 + (y sin ν cos φ0)/ρ), θ(x, y) = θ0 + tan−1( x sin ν / (ρ cos φ0 cos ν − y sin φ0 sin ν) ), (1) where ρ = √(x^2 + y^2) and ν = tan−1 ρ. In order to have a similar filter response regardless of the position of the object, we model the distortion of the applied filter. Thus, similarly to the works of (Khasanova & Frossard, 2017b) and (Coors et al., 2018), we define the filter kernel on this tangent plane T. Fig. 1 illustrates a sample equirectangular image with various kernels corresponding to tangent planes at different positions on the sphere. As we can see, the projected area is different for various tangent plane locations. This projected area defines the support of our geometry-aware features, as described in the next section. 3.2 GEOMETRY-ADAPTIVE FILTERS In the context of omnidirectional cameras, the main drawback of the classical convolutional approach is that it applies the same rectangular filters to different image positions. However, as we mentioned before, equirectangular images have different statistics at various elevations. Therefore, we propose to adapt the size and shape of the filter to the elevation on the spherical image. To do so we propose to use a graph-based approach, which has recently become popular. The nodes vi ∈ G of this graph and the signal y(vi) defined on the nodes represent pixels and their intensity values, respectively. Based on this graph G we can then use the Laplacian polynomial filters F as proposed in (Defferrard et al., 2016; Khasanova & Frossard, 2017a), where the normalized Laplacian operator is defined as follows: L = I − D^{−1/2} A D^{−1/2}, (2) where I is an identity matrix, D is a diagonal degree matrix and A is an adjacency matrix, which for each node vp of graph G defines its neighborhood Np.
Then, F has the following form: F = M∑ l=1 αlLl, (3) where the αl are the trainable parameters, L is a Laplacian matrix and M is the degree of F . Using these filters we can construct a Deep learning architecture and use it for various tasks, e.g., classification of omnidirectional images. The main advantage of our approach is that by appropriately constructing the graph G we make the Laplacian polynomial filters F react similarly to the same object seen at different elevation of the equirectangular image regardless of geometric distortion due to the spherical geometry. Here we call those filters geometry-aware (GA). In order to adapt the GA-filter F to the elevation we build a graph G in such a way that the neighbourhood of each node is different for different elevations of the omnidirectional image. In the following section we describe two approaches that rely on undirected and directed graphs respectively, which consequently define isotropic and anisotropic filters. Then we describe in more details the polynomial anisotropic filter F that is used for directed graphs.. Undirected graph construction for adaptive filtering. To adapt F to the elevation level we construct a graph G with nodes that have different neighborhoods depending on the elevation. To do so we define a circular area on a tangent plane T , centered in the tangency point. Then we move T such that it becomes tangent to the sphere S in different positions vp and project the circular area onto S. For every point of S this creates a neighborhood Np, which changes its shape and size together with the elevation, as can be seen in Fig. 1. Based on this geometry adaptive neighborhood, we then construct the graph G in the following way. We connect the node vp ∈ G, corresponding to a tangent point on the sphere, with the node vj ∈ Np. The corresponding edge epi has a weight wpi that is inversely proportional to the Euclidean distance between vp and vi, which are defined on the sphere: wpi = ||vp − vi||−1L2 , vi ∈ Np , wpi = 0, vi /∈ Np . (4) This allows us to vary the size of the neighbourhood according to the geometry of the omnidirectional image for each node in G, and weight the contribution of the nodes to the final filter response according to their distances to vp. Therefore, depending on the elevation the filter is changing its shape and size. While effective, filter F does not have a defined orientation in space as according to Eq. (3) the filter applies the same weights αl to all nodes in the l-hoop neighborhood, with the contribution of each node being weighted by the distance to vp. This results in F being isotropic, which leads to suboptimal representation, as the network is not able to encode the orientation of the object. Directed graphs construction. In order to overcome the limitations of isotropic filters, we propose to replace a single undirected graph G with multiple directed graphs Gk, where each Gk defines its own orientation. Let us consider the case of a 3x3 classical convolutional filter. In this case the filter has 9 distinct elements. To mimic the same structure with our graph convolutional filters we employ the following algorithm. First we define 9 non-overlapping areas sk, k = 1..9 on the tangent plane, which together form a rectangular region that is centered in the tangency point vp of T as defined in the Fig. 2. This rectangular region effectively defines the receptive field of the filter on the tangent plane. 
Then we build a set of nine directed graphs Gk, k = 1..9 in the similar way, as mentioned in the previous section for undirected graph. In particular in order to build graph Gk we do as follows. For the area sk and for every node vp we move the tangent plane at point vp and then project sk from T onto the sphere. This operation defines a specific neighborhood Nk(p) on the sphere that consists of the points that belong to the projection of the region sk from the plane T . We then connect vp with a directed edge to each of these points, where the weight of the edge is defined in Eq. (4). Note that the direction of the edge is very important, because connecting vp and vi with an undirected edge forces vp to be part of the neighborhood Nk(i). This, however, is not possible, as the neighborhood Nk(i) is computed by projecting the area sk from the plane T that is tangent to the sphere at point vi and does not include vp. This results in construction of the directed graph Gk, which corresponds to the kth region of the filter, illustrated in Fig. 2. We repeat this operation for all the areas sk, k = 1..9 of our filter, which leads to creation of 9 directed graphs Gk, k = 1..9. Given this set of graphs Gk we define the resulting convolutional operation F as follows: F = 9∑ k=1 Fk, (5) where Fk is the filtering operation defined on the graph Gk. Note that this filtering operation is slightly different from the operation that is used when working with undirected graphs and is discussed in more details in the following section. To sum up, the introduced graph construction process allows having anisotropic filters F , defined in Eq. (5) that are capable of capturing the orientation of the object and therefore learn more meaningful feature representation for an image compared to the isotropic graph-based filters. It is important to note that in this paper we use the set of 9 non-overlapping rectangular areas defined on the tangent plane, as shown by Fig. 2, due to their rough correspondence to the elements of a 3 × 3 convolutional filter. However, our method can be easily extended to an arbitrary number of such areas with arbitrary shapes. Geometry aware anisotropic filters. For directed graphs Gk Laplacian matrix is not defined, therefore, we use the polynomial filters proposed in (Sakiyama et al., 2017). Instead of the Laplacian matrix these filters rely on the normalized adjacency matrix Ak, which is defined as follows: Ak = D−1k Ak, (6) where Ak and Dk are the weighted adjacency and the diagonal degree matrices of graph Gk respectively. The elements of Dk are computed as Dk(m,m) = ∑ nAk(m,n). Then, we define filters in the following way: Fk = α(k)0 + α (k) 1 Ak, (7) where α(k)0 , α (k) 1 are the training parameters of our filter. Here, we use polynomial filters of degree 1, as they achieve good balance between speed and performance. Network architecture. The introduced approach focuses on the modification of the convolutional layer to incorporate the knowledge about the image projective geometry inside the neural network. Thus, it can be applied for a broad variety of tasks. In this chapter we focus on the image classification and compression problems. For the former one we use a relatively standard architecture that consists of a sequence of convolutional layers with the introduced graph-based filters, followed by a sequence of the fully connected layers. 
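As an illustration of Eqs. (5)-(7), the following is a minimal sketch of the degree-1 anisotropic GA-filter acting on a graph signal (assuming the nine directed weighted adjacency matrices A_k have already been built from the projected neighborhoods; all names and the toy data are illustrative):

```python
import numpy as np

def normalized_adjacency(A_k):
    """Row-normalize a directed weighted adjacency matrix: D_k^{-1} A_k (Eq. 6)."""
    deg = A_k.sum(axis=1)
    deg[deg == 0.0] = 1.0                          # isolated nodes: avoid division by zero
    return A_k / deg[:, None]

def anisotropic_filter(y, A_list, alpha0, alpha1):
    """Eqs. (5) and (7): F y = sum_k (alpha0_k * y + alpha1_k * A_bar_k @ y),
    with one trainable pair (alpha0_k, alpha1_k) per directional graph G_k."""
    out = np.zeros_like(y)
    for k, A_k in enumerate(A_list):
        A_bar = normalized_adjacency(A_k)
        out += alpha0[k] * y + alpha1[k] * (A_bar @ y)
    return out

# Toy usage: 6 pixels/nodes, 9 directional graphs with random sparse weights.
rng = np.random.default_rng(0)
n = 6
A_list = [rng.uniform(size=(n, n)) * (rng.uniform(size=(n, n)) < 0.3) for _ in range(9)]
y = rng.normal(size=n)                             # pixel intensities on the graph nodes
alpha0, alpha1 = rng.normal(size=9), rng.normal(size=9)
print(anisotropic_filter(y, A_list, alpha0, alpha1))
```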
For the compression task we use the architecture proposed in (Ballé et al., 2017) and replace its convolutional filters with the proposed graph-based ones. Discussion. Our method can be seen as a generalization of different approaches that have been developed for omnidirectional images. For example, if the node vp at elevation φi has only one neighbor in each direction and the weight of the edges between nodes is always equal to one, it reduces to the standard CNN method (LeCun et al., 2001). Further, if these neighbors correspond to the projected points, it becomes the recently proposed algorithm of (Coors et al., 2018). Finally, if we replace the directed graphs with a single undirected one, we get the same behavior of the polynomial filters as described in graph-based deep learning methods (Khasanova & Frossard, 2017a; Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017b). 4 RESULTS In this section we illustrate the performance of our approach. We start by evaluating our method with respect to competing approaches on the task of classifying images that are projected to different surfaces. Finally, to show the generality of our approach and illustrate the effectiveness of the anisotropic graph-based filters, we evaluate our method on the image compression task. 4.1 IMAGE CLASSIFICATION In this section we first introduce the datasets that we used for the evaluation of our method. We then discuss the baseline approaches and architectures, which we use in our experiments. Finally, we show the quantitative comparison of our method with the competing ones. Datasets. We evaluate our method on four different types of data, which feature different surface geometries onto which the images are projected: • Spherical dataset (S) consists of images projected on different locations on a sphere. The resulting spherical images are then unwrapped to form equirectangular images, as described in Section 3.1. • Mod-spherical dataset (MS) features image projections on more complicated surfaces that are depicted in Fig. 3 together with representative examples of projected images. This dataset itself consists of three different versions: MS1, MS2, MS3, which correspond to surfaces that deviate progressively further from the spherical one. A more detailed discussion about the type of projection and the surface geometry used in these datasets can be found in Appendix B. • Fish-eye dataset (F) consists of images projected on different locations on a sphere using the stereographic projection (Bettonvil, 2005), which is frequently used in fish-eye cameras. • Cube-map dataset (CM) features projections of the images onto a cube, as shown in Fig. 4. This type of projection has recently gained popularity for handling omnidirectional images, due to its ability to reduce the distortion artifacts that appear due to the spherical geometry. In all these datasets we use MNIST images from (LeCun & Cortes, 2010), which are divided into train, validation and test sets with 54k, 6k and 10k samples respectively. Architecture. We compare our approach with standard ConvNets, the algorithm proposed in (Cohen et al., 2018) and other graph-based methods. We present our results in Table 1 (see footnote 1).
For the graph-based methods we investigate three possible ways of constructing G: • Regular grid-graph with 8 neighbors and all equal weights wij = 1; • Regular grid-graph with 8 neighbors and weights that depend on the Euclidean distance dij between the nodes, as proposed in (Khasanova & Frossard, 2017b): wij = dij^−1; • Irregular GA-graph with wij = dij^−1 (isotropic filters from Section 3.2). For all of them we build a normalized Laplacian matrix (Khasanova & Frossard, 2017b) and use polynomial filters of degree 1, which is equivalent to using 3 × 3 convolutional kernels. Therefore, for the standard ConvNet we similarly rely on filters of size 3 × 3. [Footnote 1: We were unable to compare our method to the recent work of (Coors et al., 2018) as, to the best of our knowledge, there is no publicly available implementation.] All the competing approaches use networks of roughly the same complexity: for all the methods we use architectures of similar structure and roughly the same number of parameters. For all the graph-based approaches we use graph-based convolutions with stride two on each layer, which in turn requires building a graph for each new layer according to its respective sampling. The exact architecture of the classification network is illustrated in Appendix A. For the method of (Cohen et al., 2018) we used the architecture proposed in the paper with roughly the same number of parameters as in the competing approaches. Evaluation. We present the comparison of our approach with the baseline methods in Table 1. Our method significantly outperforms the standard ConvNets, as it is designed to use the geometry of the omnidirectional images. Further, it shows a much higher accuracy than the other graph-based techniques, which rely on isotropic filters. Our method achieves comparable accuracy to (Cohen et al., 2018) on the spherical image representation; however, our method is more general and therefore outperforms (Cohen et al., 2018) on the other datasets. Finally, we are able to run our approach on the cube-map projection, while the Spherical CNN is by design not applicable to such images. 4.2 IMAGE COMPRESSION In all our previous experiments we have focused on evaluating our approach on the task of classifying images that are projected to different surfaces. To show the generality of our method and better illustrate the effectiveness of anisotropic graph-based filters, we now evaluate their performance on an image compression problem. For this task, we choose to modify the architecture introduced in (Ballé et al., 2017) by replacing the ordinary convolutional layers with our own graph-based convolutions. In this section we first introduce our approach and then compare the performance of the two graph-based methods, which rely on isotropic and anisotropic graph-based filters respectively. Image compression framework. The method introduced in (Ballé et al., 2017) casts image compression as an optimization of the tradeoff between the distortion of the pixel intensity values and the number of bits that are required for storing the compressed representation of these values. As described in (Ballé et al., 2016; Ballé et al., 2017), this optimization can be represented as a variational autoencoder. For more details we refer to Appendix C. In the context of omnidirectional images, we propose to modify the method of (Ballé et al., 2017) by using our geometry-aware filters instead of standard convolutional ones. Evaluation. We now evaluate the performance of our approach.
For this experiment we have implemented two versions of the system. One with isotropic graph-based filters and the other one with the anisotropic ones. We further evaluate the original method (Ballé et al., 2017) for the sake of completeness. All three methods are trained and tested on the same splits of the modified version of the dataset (Xiao et al., 2012), which consists of omnidirectional images projected onto a cube. From this dataset we use 3900 images for training and 1000 for testing of our approaches. We compare the methods in terms of the Peak Signal to Noise Ratio (PSNR) with respect to the average number of bits per pixel (bpp). The results of the evaluation are presented in Fig. 5. As we can see, our method with anisotropic filters and (Ballé et al., 2017) show similar PSNR values and significantly outperform the architecture with isotropic filters. Further, due to the fact that PSNR depends on the average difference in pixel values between the compressed image and the original one it is not able to reliably detect small artifacts that appear in the cube-map images, which are noticeable for humans. These artifacts are clearly seen in Fig. 6, where we illustrate that due to the knowledge about the image projective geometry, our approach correctly reconstructs the areas along cube borders, while the method of Ballé et al. (2017) over-smooths these areas. 5 CONCLUSION In this paper we have presented generic way of graph construction that allows incorporating the information about the image geometry inside the neural network. Further, we have introduced the graph-based geometry aware convolutional filters that adapt their shape and size to the geometry of the projection surface. In contrast to many existing graph-based filters, our filters are anisotropic, which allows to better adjust to the specific properties of the problem. Our illustrative experiments show state-of-the-art performance of our approach applied to image classification and compression tasks in the presence of various types of image distortions. A ARCHITECTURE OF CLASSIFICATION FRAMEWORK Fig. 7 illustrates the architecture of our classification network, which we use in Section 4.1. B MODIFIED SPHERICAL SURFACE. In order to evaluate the performance of our method as a function of deformation of the spherical surface, we have created a set of datasets by projecting the MNIST images to random locations of the surfaces, which have shapes, shown in Fig. 3 (a,c) and unwrap these spherical images to equirectangular ones. The white color in Fig. 3 (a,c) denotes the areas of the generated surface that are the furthest from the spherical surface of the same radius. The Fig. 3 (b,d) illustrates sample images of digits projected onto the respective surfaces. Each of the aforementioned surfaces is the following modification of a spherical one from Eq. (1): x = cos(φi) sin(θi − θ0) y = (cos(φ0) sin(φi + p(φi, r, l))− sin(φ0 + p(φ0, r, l)) cos(φi) cos(θi − θ0))/c, c = sin(φi + p(φi, r, l)) sin(φ0 + p(φ0, r, l)) + cos(φi) cos(φ0) cos(θi − θ0), (8) where (x, y) are the coordinates on the tangent plane and p(φ, r, l) is the perturbation function that can be written as p(φ, r, l) = r sin−1(sin(lφ)), (9) where φ is the elevation level; r is the parameter that regulates the perturbation magnitude and l defines frequency of the perturbation signal. In our experiments we have set l = 10. Note that for a specific case of r = 0 we get the ordinary spherical surface. We then use Eq. 
8 to construct the graph G that allows our method to adapt to the surface geometry and evaluate our method on each of the generated datasets. C COMPRESSION In this section we briefly describe compression approach, which proposed by ?. An input image x is encoded using a function ga(x;α), which results in the respective latent representation y. Then, y is quantized into ŷ, which can be losslessly compressed using entropy coding algorithms. This ŷ is then passed to the decoder gs(ŷ;β) at the decompression step, which results in a decompressed image x̂. Here, we denote by α and β the parameters of the encoding and decoding algorithms respectively. While both encoder and decoder can be represented as a differentiable function, the process of quantization is non-differentiable. Therefore the authors of (Ballé et al., 2016) propose to replace quantization with an additive uniform noise at the training step as follows: ỹi = y + ∆y, (10) where ∆y denotes additive i.i.d uniform noise. This trick allows to perform the end-to-end optimization of both the encoder and decoder parameters using the following loss function: L(α, β) = Ex,∆y [ − ∑ i log2pỹi(ga(x;α) + ∆y) + λd(gs(ga(x;α) + ∆y;β), x) ] , (11) where gs, ga are convolutional deep neural networks, d represents the distance between the images and λ is a weighting parameter. Thus, during the training step, we add noise (according to Eq. (10)) to be able to back propagate the error and at the inference time we apply quantization to the latent representation y. The overall architecture that we use is similar to the one proposed in (Ballé et al., 2017) and is summarized in Fig. 8. Further, the method of (Ballé et al., 2017) relies on the standard convolutional layers, which are practical for ordinary images: they allow learning local image structures independently of their location in the image. Additional visual results. We run experiment with described compression architecture, where we compare three approaches: original and methods, where we replace convolutional filter from the architecture to graph-based isotropic and geometry-aware filters. Fig. 9 further illustrates some visual comparison of the methods and we can see isotropic filters produce over-smoothed decompressed images, which do not look realistic and result in very low PSNR values. On the other hand our method with anisotropic filters is able to produce sharp results.
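A minimal sketch of the training-time quantization proxy of Eq. (10) and the rate-distortion loss of Eq. (11) (the encoder, decoder and latent density below are simple placeholders, not the actual networks of Ballé et al. (2017); all names are illustrative):

```python
import numpy as np

def quantize(y, training):
    """Eq. (10): additive U(-0.5, 0.5) noise during training, rounding at inference."""
    if training:
        return y + np.random.uniform(-0.5, 0.5, size=y.shape)
    return np.round(y)

def rate_distortion_loss(x, x_hat, y_tilde, log2_density, lam):
    """Eq. (11): estimated bits for the (noisy) latents plus lambda-weighted distortion."""
    rate = -log2_density(y_tilde).sum()            # -sum_i log2 p(y_tilde_i)
    distortion = np.mean((x - x_hat) ** 2)         # d(., .) taken as MSE here
    return rate + lam * distortion

# Toy usage with placeholder encoder/decoder and a unit-Gaussian latent density.
encoder = lambda x: 0.5 * x                        # stand-in for g_a(x; alpha)
decoder = lambda y: 2.0 * y                        # stand-in for g_s(y; beta)
log2_density = lambda y: -0.5 * (y ** 2 + np.log(2 * np.pi)) / np.log(2.0)

x = np.random.default_rng(0).normal(size=(8, 8))
y_tilde = quantize(encoder(x), training=True)
loss = rate_distortion_loss(x, decoder(y_tilde), y_tilde, log2_density, lam=0.01)
print(loss)
```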
1. What is the main contribution of the paper regarding graph-based deep learning methods for omnidirectional cameras? 2. What are the strengths and weaknesses of the proposed approach, particularly in its application to realistic settings? 3. Do you have any concerns or suggestions regarding the experimental setup and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any minor issues or typos in the review that should be addressed?
Review
Review This paper proposed to use graph-based deep learning methods to apply deep learning techniques to images coming from omnidirectional cameras. It solves the problem of distorsions introduced by the projection of such images by replacing convolutions by graph-based convolutions, with in particular a combinaison of directed graphs which makes the network able to distinguish between orientations. The paper is fairly well written and easy to follow, and the need for treating omnidirectional images differently is well motivated. However, since the novelty is not so much in the graph convolution method, or in the use of graph methods for treating spherical signals, but in the combined application of the particular graph method proposed to the domain of omnidirectional images, I would expect a more thorough experimental study of the merits of the method and architectural choices. 1. The projected MNIST dataset looks very localized on the sphere and therefore does not seem to leverage that much of the global connectivity of the graph, although it can integrate deformations. Since the dataset is manually projected, why not cover more of the sphere and allow for a more realistic setting with respect to omnidirectional images? More generally, why not use a realistic high resolution classification dataset and project it on the sphere? While it wouldn't allow for all the characteristics of omnidirectional images such as the wrapping around at the borders, it would lead to a more challenging classification problem. Papers such as [Khasanova & Frossard, 2017a] have at least used two toy-like datasets to discuss the merits of their classification method (MNIST-012, ETH-80), and a direct comparison with these baselines is not offered in this work. 2. The method can be applied for a broad variety of tasks but by evaluating it in a classification setting only, it is difficult to have an estimate of its performance in a detection setting, where I would see more uses for the proposed methods in such settings (in particular with respect to rotationally invariant methods, which do not allow for localization). 3. I fail to see the relevance of the experiments in Section 4.2 for a realistic application. Supposing a good model for spherical deformations of a lens is known, what prevents one from computing a reasonable inverse mapping and mapping the images back to a sphere? If the mapping is non-invertible (overlaps), then at least using an approximate inverse mapping would yield a competitive baseline. I am surprised at the loss of accuracy in Table 2 with respect to the spherical baseline. Can you identify the source of this loss? Did you retrain the networks for the different deformations, or did you only change the projection of the network trained on a sphere? 4. While the papers describes what happens at the level of the first filters, I did not find a clear explanation of what happens in upper layers, and find this point open to interpretation. Are graph convolutions used again based on the previous polynomial filter responses, sampling a bigger region on the sphere? Could you clarify this? 5. I would also like to see a study of the choice of the different scales used (in particular, size of the neighborhood). Overall, I find that the paper introduces some interesting points but is too limited experimentally in its current form to allow for a fair evaluation of the merits of the method. 
Moreover, it leaves some important questions open as to how exactly it is applied (impact of sampling/neighborhood size, design of convolutions in upper layer...) which would need to be clarified and tested. Additional small details: - please do not use notation $\mathbb{N}_p$ for the neighborhood, it suggests integers - p. 4 "While effective, these filters ... as according to Eq. (2) filter..." -> article missing for the word "filter"
Title Geometry aware convolutional filters for omnidirectional images representation Abstract Due to their wide field of view, omnidirectional cameras are frequently used by autonomous vehicles, drones and robots for navigation and other computer vision tasks. The images captured by such cameras, are often analyzed and classified with techniques designed for planar images that unfortunately fail to properly handle the native geometry of such images. That results in suboptimal performance, and lack of truly meaningful visual features. In this paper we aim at improving popular deep convolutional neural networks so that they can properly take into account the specific properties of omnidirectional data. In particular we propose an algorithm that adapts convolutional layers, which often serve as a core building block of a CNN, to the properties of omnidirectional images. Thus, our filters have a shape and size that adapts with the location on the omnidirectional image. We show that our method is not limited to spherical surfaces and is able to incorporate the knowledge about any kind of omnidirectional geometry inside the deep learning network. As depicted by our experiments, our method outperforms the existing deep neural network techniques for omnidirectional image classification and compression tasks. 1 INTRODUCTION Drone vision, autonomous cars and robot navigation systems often use omnidirectional cameras, as they allow recording the scene with wide field of view. Despite their obvious advantages, images obtained by such cameras have different statistics compared to planar images. Nevertheless, omnidirectional images are often processed with standard techniques, which are unfortunately poorly adapted to the specific geometry of such images. In this paper we improve one of the most popular frameworks for image processing, namely convolutional neural network (CNN) for omnidirectional images. CNNs prove to be effective, as they permit to achieve very good performance in many different tasks like image classification, segmentation, generation and compression. In the context of omnidirectional cameras, CNNs are typically applied directly to the unwrapped and distorted spherical images. This approach, however, is suboptimal: due to specific geometry of these images, and, in particular, the change in the image statistics with the position in the image. The latter forces the network to learn different filters for different locations in the omnidirectional images (see Fig. 1). To solve this issue, we replace ordinary convolutional filters with the graph-based ones that can adapt their size and shape depending on the position in the omnidirectional image thanks to the flexible graph structure. In order to overcome the common limitation of these graph-based filters being isotropic (i.e. invariant to the changes in the objects orientation in the image) we suggest to use multiple directed graphs instead of a single undirected one, as used in (Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017a). This permits our method to encode the orientation of the objects that appear in the images and, therefore, extract more meaningful features from them. This together with the inherent ability of our filters to adapt to the image projective geometry allows our method to reach state-of-the-art performance when working not just with spherical projections, as done by Khasanova & Frossard (2017b); Cohen et al. 
(2018), but virtually with any type of image projective geometry that can be encoded by a graph. In our experiments we show that apart from the state-of-the-art results on the regular omnidirectional images classification task, our approach outperforms the existing classification techniques when applied to the images projected on a randomly perturbed spherical surface or on a cube via the cube-map projection, which has recently become one of the popular ways to represent 360-images (Chen et al., 2018). Finally, we demonstrate that our method can be applied for orientation-dependent task such as compression and reduce artifacts comparing with a standard approaches. 2 RELATED WORK In this section we first briefly introduce the most recent approaches that combine the power of deep learning methods with graph signal processing techniques, as they are similar in spirit to our work. We then discuss in more details the recent trends in omnidirectional image processing. Geometric Deep learning. In the recent years a number of deep learning methods have been introduced tailored for processing irregular data that is represented as a graph. One way to solve this problem is suggested by Monti et al. (2017), where the authors define a local system of ddimensional pseudo-coordinates for every node of the graph and learn both the filters and patch operators that work on these coordinates. A different direction is taken by Wang et al. (2018), who propose using edge-based convolutional kernels and dynamically update the graph. While being flexible and effective for general tasks, in the specific context of the omnidirectional images these methods do not directly take the advantage of the knowledge about the projective geometry, which we model using a specifically designed graph representation. While most of the existing methods work with undirected graphs, the recent work of (Monti et al., 2018) propose an approach for processing data defined on a directed graph by exploiting local graph motifs that describe its connectivity patterns. The main differences of this method with our work are that first, this method assumes that the directed graph is already given. In our problem, building such a graph that is able to fully take advantage of the image projective geometry is one of the contributions. Second, the approach in (Monti et al., 2018) does not use the knowledge of the coordinate system associated with omnidirectional images, which we however use in our architecture, in order to define filter orientations. The list of the aforementioned works is by no means extensive, therefore we refer the interested reader to the survey (Bronstein et al., 2017), which summarizes many geometric deep learning methods in detail. Omnidirectional image processing. The most typical way of dealing with images taken by omnidirectional cameras is to apply standard image processing techniques directly on the equirectangular projection images, which is one of the most common representations for the omnidirectional images (De Simone et al., 2016). However, due to the strong distortion effects introduced by the process of unwrapping of the projection surface to a plane, standard techniques lose much of their efficiency, as the appearance of the same object may change depending on its location in the image. To overcome this problem the recent work of Khasanova & Frossard (2017b) suggests using graph-based special convolutional filers to adapt to the geometry of omnidirectional images. 
This method, however, relies on the convolutional operation defined in the spectral domain, which leads to isotropic filters and may reduce the complexity of trained filters. We, on the other hand, propose to build anisotropic graphs-based convolutional filters that do not have this limitation. A different direction is taken by the authors of (Su & Grauman, 2017) who suggest adapting the size of the convolutional kernel to the elevation of the equirectangular image. The main limitation of this technique, however, is that it requires a significantly larger number of parameters than the competing techniques, as it does not have the weight sharing property of CNNs. It rather requires learning different convolutional filters for different elevations of the equirectangular image. Further, Jeon & Kim (2017) propose to learn the shape of the convolutional filter, by learning the sampling locations (position offsets), where the elements of the filter are evaluated. The authors of (Dai et al., 2017) extend later this idea and suggest learning dynamic offsets of the filter elements depending on the image content, which allows the filters to adapt to different parts of the image. This method is quite flexible, however in the context of omnidirectional images requires an extensive training set, as the network needs to learn how it should react to various objects appearing at any possible elevation. In our work we rather take advantage of the knowledge of the image projective geometry and use this knowledge in the design of our architecture to adapt the size and shape of convolutional filters. A different approach is suggested by Cohen et al. (2018), who introduces a CNN that is designed for spherical shapes and define filters directly on its surface. This method, however, is specifically designed for processing spherical images, while our approach is easily adapted to different kind of shapes, which we show in our experiments. Further, the methods of (Coors et al., 2018; Tateno et al., 2018) suggest a different way of compensating for the distortion of omnidirectional image. They suggest adapting the sampling locations of the convolutional filters to the geometry of the lens by projecting kernels to the sphere and using interpolated pixel values on the projected locations for implementing the convolutional filters. While these works are the closest to in spirit to ours, we propose a more general architecture, which permits to adapt the shape and size of the convolutional kernel to the location of omnidirectional image, and therefore to use the information about all the pixels and not only of a subset of them. Then, the authors in (Monroy et al., 2018; Ruder et al., 2018) suggest a completely different approach to tackle the distortion of omnidirectional images. Instead of working with equirectangular images, they propose to project an omnidirectional image to a cube, where each of its faces represents an image that would have been seen through a regular perspective camera, with the optical center located in the center of the cube (Chen et al., 2018). Representing an omnidirectional image in this way allows having less noticeable distortion effects as compared to equirectangular images. This representation, however, suffers from another type of distortion that appears due to discontinuity effect on the borders between the faces of the cube. To mitigate this issue, Monroy et al. (2018) propose to apply a smoothing filter as a post-processing step and Ruder et al. 
(2018) suggest an algorithm that enforces consistency between the neighboring facets of the cube. Contrary to the mentioned approaches, as we model cube surface as a graph, our algorithm can easily handle the discontinuity problem and adapt to image distortions introduced by the cube-map projection. 3 GEOMETRY-AWARE CNN In this section we describe our algorithm, which adapts convolutional filters to the distortion of omnidirectional images. We start with the introduction of the equirectangular projection, as it is one of common ways to represent images from the omnidirecitonal cameras (De Simone et al., 2016; Coors et al., 2018). We then describe our graph-based representation learning framework. 3.1 EQUIRECTANGULAR PROJECTION Omnidirectional visual content can be represented as a sphere with radius r, where the user is assumed to be located at the center. Each 3D point can be projected to a point on the surface of this sphere, which is described by spherical coordinates, namely a longitude θ ∈ [−π, π] and a latitude φ ∈ [−π2 , π 2 ]. Omnidirectional images are generally not processed directly in their native geometry, but they are first projected to the 2D plane where classical image processing techniques can be activated. One of the popular projections involves sampling the data on the sphere with equal steps ∆θ and ∆φ, which results in an equrectangular image on the 2D plane. Thus, each point of equrectangular image is defined by its spherical coordinates. To describe this projection let us introduce the tangent plane T , which is tangent to the sphere in the point (θ0, φ0). Thus, each point (x, y) on the tangent plane is projected to the sphere surface (θ, φ) as follows (Coors et al., 2018): φ(x, y) = sin−1(cos ν sinφ0 + y sin ν cosφ0 ρ ) θ(x, y) = θ0 + tan −1( x sin νρ cosφ0 cos ν−y sinφ0 sin ν ) . (1) where ρ = √ (x2 + y2) and ν = tan−1 ρ. In order to have similar filter response regardless of the position of the object we model distortion of the applied filter. Thus, similarly to the works of (Khasanova & Frossard, 2017b) and (Coors et al., 2018), we define the filter kernel on this tangent plane T . Fig. 1 illustrates a sample equirectangular image with various kernels corresponding to tangent plane at different positions on the sphere. As we can see the projected area is different for various tangent plane locations. This projected area defines the support of our geometry-aware features, as described in the next section. 3.2 GEOMETRY-ADAPTIVE FILTERS In the context of omnidirectional cameras, the main drawback of the classical convolutional approach is that it applies the same rectangular filters to different image positions. However, as we mentioned before, equirectangular images have different statistics at various elevations. Therefore, we propose to adapt the size and shape of the filter to the elevation on the spherical image. To do so we propose to use a graph-based approach, which has recently become popular. The nodes of this graph vi ∈ G and signal y(vi) defined on the nodes represent pixels and their intensity values respectively. Based on this graph G we can then use the Laplacian polynomial filters F as proposed in (Defferrard et al., 2016; Khasanova & Frossard, 2017a), where normalized Laplacian operator is defined as follows: L = I −D−1/2AD−1/2, (2) where I is an identity matrix, D is a diagonal degree matrix and A is an adjacency matrix, which for each node vp of graph G defines its neighborhood Np. 
3.2 GEOMETRY-ADAPTIVE FILTERS In the context of omnidirectional cameras, the main drawback of the classical convolutional approach is that it applies the same rectangular filters to different image positions. However, as we mentioned before, equirectangular images have different statistics at various elevations. Therefore, we propose to adapt the size and shape of the filter to the elevation on the spherical image. To do so we propose to use a graph-based approach, which has recently become popular. The nodes of this graph $v_i \in G$ and the signal $y(v_i)$ defined on the nodes represent pixels and their intensity values respectively. Based on this graph G we can then use the Laplacian polynomial filters F as proposed in (Defferrard et al., 2016; Khasanova & Frossard, 2017a), where the normalized Laplacian operator is defined as follows: $$L = I - D^{-1/2} A D^{-1/2}, \qquad (2)$$ where I is an identity matrix, D is a diagonal degree matrix and A is an adjacency matrix, which for each node $v_p$ of graph G defines its neighborhood $N_p$. Then, F has the following form: $$F = \sum_{l=1}^{M} \alpha_l L^l, \qquad (3)$$ where the $\alpha_l$ are the trainable parameters, L is a Laplacian matrix and M is the degree of F. Using these filters we can construct a deep learning architecture and use it for various tasks, e.g., classification of omnidirectional images. The main advantage of our approach is that by appropriately constructing the graph G we make the Laplacian polynomial filters F react similarly to the same object seen at different elevations of the equirectangular image, regardless of the geometric distortion due to the spherical geometry. Here we call those filters geometry-aware (GA). In order to adapt the GA-filter F to the elevation we build a graph G in such a way that the neighbourhood of each node is different for different elevations of the omnidirectional image. In the following section we describe two approaches that rely on undirected and directed graphs respectively, which consequently define isotropic and anisotropic filters. Then we describe in more detail the polynomial anisotropic filter F that is used for directed graphs. Undirected graph construction for adaptive filtering. To adapt F to the elevation level we construct a graph G with nodes that have different neighborhoods depending on the elevation. To do so we define a circular area on a tangent plane T, centered at the tangency point. Then we move T such that it becomes tangent to the sphere S in different positions $v_p$ and project the circular area onto S. For every point of S this creates a neighborhood $N_p$, which changes its shape and size together with the elevation, as can be seen in Fig. 1. Based on this geometry-adaptive neighborhood, we then construct the graph G in the following way. We connect the node $v_p \in G$, corresponding to a tangent point on the sphere, with each node $v_i \in N_p$. The corresponding edge $e_{pi}$ has a weight $w_{pi}$ that is inversely proportional to the Euclidean distance between $v_p$ and $v_i$, which are defined on the sphere: $$w_{pi} = \|v_p - v_i\|_{L_2}^{-1}, \; v_i \in N_p, \qquad w_{pi} = 0, \; v_i \notin N_p. \qquad (4)$$ This allows us to vary the size of the neighbourhood according to the geometry of the omnidirectional image for each node in G, and to weight the contribution of the nodes to the final filter response according to their distances to $v_p$. Therefore, depending on the elevation the filter changes its shape and size. While effective, the filter F does not have a defined orientation in space, as according to Eq. (3) the filter applies the same weights $\alpha_l$ to all nodes in the l-hop neighborhood, with the contribution of each node being weighted by the distance to $v_p$. This results in F being isotropic, which leads to a suboptimal representation, as the network is not able to encode the orientation of the object.
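Below is an illustrative numpy sketch of this isotropic geometry-aware filter (Eqs. (2)-(4)): inverse-distance edge weights over per-node neighborhoods, a normalized Laplacian, and a degree-M polynomial applied to a node signal. The toy node positions, the fully connected neighborhoods and the filter coefficients are placeholders of our own, not values from the paper.

```python
import numpy as np

def normalized_laplacian(W):
    """Eq. (2): L = I - D^{-1/2} W D^{-1/2} for a weighted adjacency matrix W."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return np.eye(W.shape[0]) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

def polynomial_filter(L, y, alphas):
    """Eq. (3): apply F = sum_{l=1..M} alpha_l L^l to the node signal y."""
    out = np.zeros_like(y)
    L_power = np.eye(L.shape[0])
    for alpha in alphas:
        L_power = L_power @ L
        out = out + alpha * (L_power @ y)
    return out

# Toy graph: node positions on the unit sphere and hand-picked neighborhoods.
pos = np.random.randn(6, 3)
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
neighborhoods = {p: [i for i in range(6) if i != p] for p in range(6)}

W = np.zeros((6, 6))
for p, Np in neighborhoods.items():
    for i in Np:
        W[p, i] = 1.0 / np.linalg.norm(pos[p] - pos[i])   # Eq. (4): inverse-distance weights

y = np.random.randn(6)                                    # node signal (pixel intensities)
print(polynomial_filter(normalized_laplacian(W), y, alphas=[0.5]))   # degree-1 GA filter
```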
Directed graphs construction. In order to overcome the limitations of isotropic filters, we propose to replace a single undirected graph G with multiple directed graphs $G_k$, where each $G_k$ defines its own orientation. Let us consider the case of a 3x3 classical convolutional filter. In this case the filter has 9 distinct elements. To mimic the same structure with our graph convolutional filters we employ the following algorithm. First we define 9 non-overlapping areas $s_k, k = 1..9$ on the tangent plane, which together form a rectangular region that is centered at the tangency point $v_p$ of T, as depicted in Fig. 2. This rectangular region effectively defines the receptive field of the filter on the tangent plane. Then we build a set of nine directed graphs $G_k, k = 1..9$ in a similar way as described in the previous section for the undirected graph. In particular, in order to build the graph $G_k$ we proceed as follows. For the area $s_k$ and for every node $v_p$ we move the tangent plane to the point $v_p$ and then project $s_k$ from T onto the sphere. This operation defines a specific neighborhood $N_k(p)$ on the sphere that consists of the points that belong to the projection of the region $s_k$ from the plane T. We then connect $v_p$ with a directed edge to each of these points, where the weight of the edge is defined as in Eq. (4). Note that the direction of the edge is very important, because connecting $v_p$ and $v_i$ with an undirected edge would force $v_p$ to be part of the neighborhood $N_k(i)$. This, however, is not possible, as the neighborhood $N_k(i)$ is computed by projecting the area $s_k$ from the plane T that is tangent to the sphere at point $v_i$ and does not include $v_p$. This results in the construction of the directed graph $G_k$, which corresponds to the kth region of the filter, illustrated in Fig. 2. We repeat this operation for all the areas $s_k, k = 1..9$ of our filter, which leads to the creation of 9 directed graphs $G_k, k = 1..9$. Given this set of graphs $G_k$ we define the resulting convolutional operation F as follows: $$F = \sum_{k=1}^{9} F_k, \qquad (5)$$ where $F_k$ is the filtering operation defined on the graph $G_k$. Note that this filtering operation is slightly different from the operation that is used when working with undirected graphs and is discussed in more detail in the following section. To sum up, the introduced graph construction process allows having anisotropic filters F, defined in Eq. (5), that are capable of capturing the orientation of the object and therefore learn a more meaningful feature representation for an image compared to the isotropic graph-based filters. It is important to note that in this paper we use the set of 9 non-overlapping rectangular areas defined on the tangent plane, as shown in Fig. 2, due to their rough correspondence to the elements of a 3 × 3 convolutional filter. However, our method can be easily extended to an arbitrary number of such areas with arbitrary shapes. Geometry aware anisotropic filters. For directed graphs $G_k$ the Laplacian matrix is not defined; therefore, we use the polynomial filters proposed in (Sakiyama et al., 2017). Instead of the Laplacian matrix these filters rely on the normalized adjacency matrix $\bar{A}_k$, which is defined as follows: $$\bar{A}_k = D_k^{-1} A_k, \qquad (6)$$ where $A_k$ and $D_k$ are the weighted adjacency and the diagonal degree matrices of graph $G_k$ respectively. The elements of $D_k$ are computed as $D_k(m,m) = \sum_n A_k(m,n)$. Then, we define the filters in the following way: $$F_k = \alpha_0^{(k)} + \alpha_1^{(k)} \bar{A}_k, \qquad (7)$$ where $\alpha_0^{(k)}, \alpha_1^{(k)}$ are the training parameters of our filter. Here, we use polynomial filters of degree 1, as they achieve a good balance between speed and performance. Network architecture. The introduced approach focuses on the modification of the convolutional layer to incorporate the knowledge about the image projective geometry inside the neural network. Thus, it can be applied to a broad variety of tasks. In this work we focus on the image classification and compression problems. For the former we use a relatively standard architecture that consists of a sequence of convolutional layers with the introduced graph-based filters, followed by a sequence of fully connected layers. For the compression task we use the architecture proposed in (Ballé et al., 2017) and replace its convolutional filters with the proposed graph-based ones.
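As a complement, here is a small numpy sketch of the anisotropic filtering of Eqs. (5)-(7): nine directed graphs, each row-normalized as in Eq. (6), combined with per-graph degree-1 polynomial coefficients. The random adjacency matrices and coefficients are placeholders; in the actual model each adjacency comes from projecting the region $s_k$ of the tangent plane onto the surface, and the coefficients are learned.

```python
import numpy as np

def row_normalize(A):
    """Eq. (6): divide each row of the weighted adjacency matrix by the node degree."""
    d = A.sum(axis=1, keepdims=True)
    d_safe = np.where(d > 0, d, 1.0)
    return A / d_safe

def anisotropic_filter(A_list, y, alpha0, alpha1):
    """Eqs. (5) and (7): F y = sum_k (alpha0_k * y + alpha1_k * A_bar_k @ y)."""
    out = np.zeros_like(y)
    for k, A in enumerate(A_list):
        out = out + alpha0[k] * y + alpha1[k] * (row_normalize(A) @ y)
    return out

n = 8
A_list = [np.random.rand(n, n) * (np.random.rand(n, n) < 0.3) for _ in range(9)]  # 9 directed graphs
y = np.random.randn(n)                                    # node signal
alpha0, alpha1 = np.random.randn(9), np.random.randn(9)   # would be learned in practice
print(anisotropic_filter(A_list, y, alpha0, alpha1))
```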
Discussion. Our method can be seen as a generalization of different approaches that have been developed for omnidirectional images. For example, if the node $v_p$ at elevation $\phi_i$ has only one neighbor in each direction and the weight of the edges between nodes is always equal to one, it becomes the standard CNN method (LeCun et al., 2001). Further, if these neighbors correspond to the projected points, it becomes the recently proposed algorithm of (Coors et al., 2018). Finally, if we replace the directed graphs with a single undirected one, we get the same behavior of the polynomial filters as described in graph-based deep learning methods (Khasanova & Frossard, 2017a; Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017b). 4 RESULTS In this section we illustrate the performance of our approach. We start by evaluating our method with respect to competing approaches on the task of classifying images that are projected onto different surfaces. Finally, to show the generality of our approach and illustrate the effectiveness of the anisotropic graph-based filters, we evaluate our method on the image compression task. 4.1 IMAGE CLASSIFICATION In this section we first introduce the datasets that we used for the evaluation of our method. We then discuss the baseline approaches and architectures, which we use in our experiments. Finally, we show the quantitative comparison of our method with the competing ones. Datasets. We evaluate our method on four different types of data, which feature different surface geometries onto which the images are projected: • Spherical dataset (S) consists of images projected on different locations on a sphere. The resulting spherical images are then unwrapped to form equirectangular images, as described in Section 3.1. • Mod-spherical dataset (MS) features image projections on more complicated surfaces that are depicted in Fig. 3 together with representative examples of projected images. This dataset itself consists of three different versions, MS1, MS2, MS3, which correspond to surfaces that move progressively further away from the spherical one. A more detailed discussion about the type of projection and the surface geometry used in these datasets can be found in Appendix B. • Fish-eye dataset (F) consists of images projected on different locations on a sphere using the stereographic projection (Bettonvil, 2005), which is frequently used in fish-eye cameras. • Cube-map dataset (CM) features projection of the images onto the cube as shown in Fig. 4. This type of projection has recently gained popularity for handling omnidirectional images, due to its ability to reduce distortion artifacts that appear due to the spherical geometry. In all these datasets we use MNIST images from (LeCun & Cortes, 2010), which are divided into train, validation and test sets with 54k, 6k and 10k samples respectively. Architecture. We compare our approach with standard ConvNets, the algorithm proposed in (Cohen et al., 2018) and other graph-based methods. We present our results in Table 1.
For the graph-based methods we investigate three possible ways of constructing G: • Regular grid-graph with 8 neighbors and all weights equal, $w_{ij} = 1$; • Regular grid-graph with 8 neighbors and weights that depend on the Euclidean distance $d_{ij}$ between the nodes, $w_{ij} = d_{ij}^{-1}$, as proposed in (Khasanova & Frossard, 2017b); • Irregular GA-graph with $w_{ij} = d_{ij}^{-1}$ (isotropic filters from Section 3.2). For all of them we build a normalized Laplacian matrix (Khasanova & Frossard, 2017b) and use polynomial filters of degree 1, which is equivalent to using 3 × 3 convolutional kernels. Therefore, for the standard ConvNet we similarly rely on filters of size 3 × 3. (We were unable to compare our method to the recent work of (Coors et al., 2018) as, to the best of our knowledge, there is no publicly available implementation.) All the competing approaches use networks of roughly the same complexity: for all the methods we use architectures of similar structure and roughly the same number of parameters. For all the graph-based approaches we use graph-based convolutions with stride two on each layer, which in turn requires building a graph for each new layer according to its respective sampling. The exact architecture of the classification network is illustrated in Appendix A. For the method of (Cohen et al., 2018) we used the architecture proposed in the paper with roughly the same number of parameters as in the competing approaches. Evaluation. We present the results of the comparison of our approach with the baseline methods in Table 1. Our method significantly outperforms the standard ConvNets, as it is designed to use the geometry of the omnidirectional images. Further, it shows a much higher accuracy than other graph-based techniques, which rely on isotropic filters. Further, our method achieves comparable accuracy to (Cohen et al., 2018) on the spherical image representation; however, our method is more general and therefore outperforms (Cohen et al., 2018) on the other datasets. Finally, we are able to run our approach on the cube-map projection, while the SphericalCNN by design is not applicable to such images. 4.2 IMAGE COMPRESSION In all our previous experiments we have focused on evaluating our approach on the image classification task. To show the generality of our method and better illustrate the effectiveness of anisotropic graph-based filters, we now evaluate their performance on an image compression problem. For this task, we choose to modify the architecture introduced in (Ballé et al., 2017) by replacing the ordinary convolutional layers with our own graph-based convolutions. In this section we first introduce our approach and then compare the performance of the two graph-based methods, which rely on isotropic and anisotropic graph-based filters respectively. Image compression framework. The method introduced in (Ballé et al., 2017) presents the process of image compression as an optimization of the tradeoff between having a small distortion of the pixel intensity values and a small number of bits required for storing the compressed representation of these values. As described in (Ballé et al., 2016; Ballé et al., 2017), this optimization can be represented as a variational autoencoder. For more details we refer to Appendix C. In the context of omnidirectional images, we propose to modify the method proposed in (Ballé et al., 2017) by using our geometry-aware filters instead of standard convolutional ones. Evaluation. We now evaluate the performance of our approach.
For this experiment we have implemented two versions of the system: one with isotropic graph-based filters and the other with anisotropic ones. We further evaluate the original method (Ballé et al., 2017) for the sake of completeness. All three methods are trained and tested on the same splits of a modified version of the dataset of (Xiao et al., 2012), which consists of omnidirectional images projected onto a cube. From this dataset we use 3900 images for training and 1000 for testing of our approaches. We compare the methods in terms of the Peak Signal to Noise Ratio (PSNR) with respect to the average number of bits per pixel (bpp). The results of the evaluation are presented in Fig. 5. As we can see, our method with anisotropic filters and (Ballé et al., 2017) show similar PSNR values and significantly outperform the architecture with isotropic filters. Further, due to the fact that PSNR depends on the average difference in pixel values between the compressed image and the original one, it is not able to reliably detect small artifacts that appear in the cube-map images, which are noticeable for humans. These artifacts are clearly seen in Fig. 6, where we illustrate that, due to the knowledge about the image projective geometry, our approach correctly reconstructs the areas along the cube borders, while the method of Ballé et al. (2017) over-smooths these areas. 5 CONCLUSION In this paper we have presented a generic way of graph construction that allows incorporating the information about the image geometry inside the neural network. Further, we have introduced graph-based geometry-aware convolutional filters that adapt their shape and size to the geometry of the projection surface. In contrast to many existing graph-based filters, our filters are anisotropic, which allows them to better adjust to the specific properties of the problem. Our illustrative experiments show state-of-the-art performance of our approach applied to image classification and compression tasks in the presence of various types of image distortions. A ARCHITECTURE OF CLASSIFICATION FRAMEWORK Fig. 7 illustrates the architecture of our classification network, which we use in Section 4.1. B MODIFIED SPHERICAL SURFACE. In order to evaluate the performance of our method as a function of the deformation of the spherical surface, we have created a set of datasets by projecting the MNIST images to random locations of the surfaces whose shapes are shown in Fig. 3 (a,c) and unwrapping these spherical images to equirectangular ones. The white color in Fig. 3 (a,c) denotes the areas of the generated surface that are the furthest from the spherical surface of the same radius. Fig. 3 (b,d) illustrates sample images of digits projected onto the respective surfaces. Each of the aforementioned surfaces is the following modification of a spherical one from Eq. (1): $$x = \cos(\phi_i)\sin(\theta_i - \theta_0), \qquad y = \big(\cos(\phi_0)\sin(\phi_i + p(\phi_i, r, l)) - \sin(\phi_0 + p(\phi_0, r, l))\cos(\phi_i)\cos(\theta_i - \theta_0)\big)/c, \qquad c = \sin(\phi_i + p(\phi_i, r, l))\sin(\phi_0 + p(\phi_0, r, l)) + \cos(\phi_i)\cos(\phi_0)\cos(\theta_i - \theta_0), \qquad (8)$$ where (x, y) are the coordinates on the tangent plane and $p(\phi, r, l)$ is the perturbation function that can be written as $$p(\phi, r, l) = r \sin^{-1}(\sin(l\phi)), \qquad (9)$$ where $\phi$ is the elevation level, r is the parameter that regulates the perturbation magnitude and l defines the frequency of the perturbation signal. In our experiments we have set l = 10. Note that for the specific case of r = 0 we get the ordinary spherical surface. We then use Eq. (8)
to construct the graph G that allows our method to adapt to the surface geometry, and evaluate our method on each of the generated datasets. C COMPRESSION In this section we briefly describe the compression approach proposed in (Ballé et al., 2016; Ballé et al., 2017). An input image x is encoded using a function $g_a(x; \alpha)$, which results in the respective latent representation y. Then, y is quantized into $\hat{y}$, which can be losslessly compressed using entropy coding algorithms. This $\hat{y}$ is then passed to the decoder $g_s(\hat{y}; \beta)$ at the decompression step, which results in a decompressed image $\hat{x}$. Here, we denote by $\alpha$ and $\beta$ the parameters of the encoding and decoding algorithms respectively. While both the encoder and decoder can be represented as differentiable functions, the process of quantization is non-differentiable. Therefore the authors of (Ballé et al., 2016) propose to replace quantization with additive uniform noise at the training step as follows: $$\tilde{y}_i = y_i + \Delta y_i, \qquad (10)$$ where $\Delta y_i$ denotes additive i.i.d. uniform noise. This trick allows performing the end-to-end optimization of both the encoder and decoder parameters using the following loss function: $$L(\alpha, \beta) = \mathbb{E}_{x, \Delta y}\Big[-\sum_i \log_2 p_{\tilde{y}_i}\big(g_a(x; \alpha) + \Delta y\big) + \lambda\, d\big(g_s(g_a(x; \alpha) + \Delta y; \beta), x\big)\Big], \qquad (11)$$ where $g_s, g_a$ are convolutional deep neural networks, d represents the distance between the images and $\lambda$ is a weighting parameter. Thus, during the training step, we add noise (according to Eq. (10)) to be able to backpropagate the error, and at inference time we apply quantization to the latent representation y. The overall architecture that we use is similar to the one proposed in (Ballé et al., 2017) and is summarized in Fig. 8. Further, the method of (Ballé et al., 2017) relies on standard convolutional layers, which are practical for ordinary images: they allow learning local image structures independently of their location in the image. Additional visual results. We run experiments with the described compression architecture, where we compare three approaches: the original one, and two methods in which we replace the convolutional filters of the architecture with graph-based isotropic and geometry-aware (anisotropic) filters, respectively. Fig. 9 further illustrates a visual comparison of the methods; we can see that the isotropic filters produce over-smoothed decompressed images, which do not look realistic and result in very low PSNR values. On the other hand, our method with anisotropic filters is able to produce sharp results.
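To make the training-time surrogate concrete, the following is a minimal numpy sketch of Eqs. (10)-(11), assuming a toy linear analysis/synthesis transform, a factorized Gaussian entropy model and MSE as the distortion d; all of these are illustrative stand-ins of our own, not the convolutional networks or the entropy model actually used by (Ballé et al., 2017).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 64, 16, 0.01
Wa = rng.normal(size=(m, n)) / np.sqrt(n)     # toy linear "encoder" g_a
Ws = rng.normal(size=(n, m)) / np.sqrt(m)     # toy linear "decoder" g_s
sigma = np.ones(m)                            # scales of the placeholder entropy model

def rate_distortion_loss(x):
    y = Wa @ x                                             # latent representation y = g_a(x)
    y_tilde = y + rng.uniform(-0.5, 0.5, size=y.shape)     # Eq. (10): noise instead of rounding
    x_hat = Ws @ y_tilde                                   # reconstruction g_s(y_tilde)
    # Rate term: bits under a factorized Gaussian density (illustrative stand-in).
    rate = np.sum(0.5 * np.log2(2 * np.pi * sigma ** 2)
                  + y_tilde ** 2 / (2 * sigma ** 2 * np.log(2)))
    distortion = np.mean((x - x_hat) ** 2)                 # d(x_hat, x) as MSE
    return rate + lam * distortion                         # Eq. (11)

x = rng.normal(size=n)
print(rate_distortion_loss(x))
# At inference time, actual quantization replaces the noise: y_hat = np.round(Wa @ x).
```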
1. How does the paper's approach differ from traditional CNNs regarding geometric awareness? 2. What are some potential issues or limitations with the proposed method, such as concerns about filter definitions or graph construction? 3. How might the approach be improved or expanded upon, such as by making the filters intrinsic or comparing them to other related methods like ACNN and moNet?
Review
Review The paper introduces geometry-aware filters based on constructed graphs into the standard CNN for omnidirectional image classification. Overall, the idea is interesting and the authors propose an extrinsic way to respect the underlying geometry by using tangent space projection. Understanding the graph construction and filter definition is not easy from the text description. It would be better to use a figure to illustrate them. 1) How to define the size of the circular area on the tangent plane? 2) Will the filter change greatly with the definition of the weight function in the neighborhood? Since the point locates on the sphere, why not using the geodesic distance instead of the Euclidean distance? 3) It would be better to directly define the filter on the sphere and make it be intrinsic. The same filter on the tangent space may cover different sizes of regions on the sphere; while we prefer the filter has consistent coverage on the sphere. 4) The paper misses the discussion and comparison to Anisotropic CNN (ACNN) and mixture model network (moNet).
ICLR
Title Geometry aware convolutional filters for omnidirectional images representation Abstract Due to their wide field of view, omnidirectional cameras are frequently used by autonomous vehicles, drones and robots for navigation and other computer vision tasks. The images captured by such cameras, are often analyzed and classified with techniques designed for planar images that unfortunately fail to properly handle the native geometry of such images. That results in suboptimal performance, and lack of truly meaningful visual features. In this paper we aim at improving popular deep convolutional neural networks so that they can properly take into account the specific properties of omnidirectional data. In particular we propose an algorithm that adapts convolutional layers, which often serve as a core building block of a CNN, to the properties of omnidirectional images. Thus, our filters have a shape and size that adapts with the location on the omnidirectional image. We show that our method is not limited to spherical surfaces and is able to incorporate the knowledge about any kind of omnidirectional geometry inside the deep learning network. As depicted by our experiments, our method outperforms the existing deep neural network techniques for omnidirectional image classification and compression tasks. 1 INTRODUCTION Drone vision, autonomous cars and robot navigation systems often use omnidirectional cameras, as they allow recording the scene with wide field of view. Despite their obvious advantages, images obtained by such cameras have different statistics compared to planar images. Nevertheless, omnidirectional images are often processed with standard techniques, which are unfortunately poorly adapted to the specific geometry of such images. In this paper we improve one of the most popular frameworks for image processing, namely convolutional neural network (CNN) for omnidirectional images. CNNs prove to be effective, as they permit to achieve very good performance in many different tasks like image classification, segmentation, generation and compression. In the context of omnidirectional cameras, CNNs are typically applied directly to the unwrapped and distorted spherical images. This approach, however, is suboptimal: due to specific geometry of these images, and, in particular, the change in the image statistics with the position in the image. The latter forces the network to learn different filters for different locations in the omnidirectional images (see Fig. 1). To solve this issue, we replace ordinary convolutional filters with the graph-based ones that can adapt their size and shape depending on the position in the omnidirectional image thanks to the flexible graph structure. In order to overcome the common limitation of these graph-based filters being isotropic (i.e. invariant to the changes in the objects orientation in the image) we suggest to use multiple directed graphs instead of a single undirected one, as used in (Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017a). This permits our method to encode the orientation of the objects that appear in the images and, therefore, extract more meaningful features from them. This together with the inherent ability of our filters to adapt to the image projective geometry allows our method to reach state-of-the-art performance when working not just with spherical projections, as done by Khasanova & Frossard (2017b); Cohen et al. 
(2018), but virtually with any type of image projective geometry that can be encoded by a graph. In our experiments we show that apart from the state-of-the-art results on the regular omnidirectional images classification task, our approach outperforms the existing classification techniques when applied to the images projected on a randomly perturbed spherical surface or on a cube via the cube-map projection, which has recently become one of the popular ways to represent 360-images (Chen et al., 2018). Finally, we demonstrate that our method can be applied for orientation-dependent task such as compression and reduce artifacts comparing with a standard approaches. 2 RELATED WORK In this section we first briefly introduce the most recent approaches that combine the power of deep learning methods with graph signal processing techniques, as they are similar in spirit to our work. We then discuss in more details the recent trends in omnidirectional image processing. Geometric Deep learning. In the recent years a number of deep learning methods have been introduced tailored for processing irregular data that is represented as a graph. One way to solve this problem is suggested by Monti et al. (2017), where the authors define a local system of ddimensional pseudo-coordinates for every node of the graph and learn both the filters and patch operators that work on these coordinates. A different direction is taken by Wang et al. (2018), who propose using edge-based convolutional kernels and dynamically update the graph. While being flexible and effective for general tasks, in the specific context of the omnidirectional images these methods do not directly take the advantage of the knowledge about the projective geometry, which we model using a specifically designed graph representation. While most of the existing methods work with undirected graphs, the recent work of (Monti et al., 2018) propose an approach for processing data defined on a directed graph by exploiting local graph motifs that describe its connectivity patterns. The main differences of this method with our work are that first, this method assumes that the directed graph is already given. In our problem, building such a graph that is able to fully take advantage of the image projective geometry is one of the contributions. Second, the approach in (Monti et al., 2018) does not use the knowledge of the coordinate system associated with omnidirectional images, which we however use in our architecture, in order to define filter orientations. The list of the aforementioned works is by no means extensive, therefore we refer the interested reader to the survey (Bronstein et al., 2017), which summarizes many geometric deep learning methods in detail. Omnidirectional image processing. The most typical way of dealing with images taken by omnidirectional cameras is to apply standard image processing techniques directly on the equirectangular projection images, which is one of the most common representations for the omnidirectional images (De Simone et al., 2016). However, due to the strong distortion effects introduced by the process of unwrapping of the projection surface to a plane, standard techniques lose much of their efficiency, as the appearance of the same object may change depending on its location in the image. To overcome this problem the recent work of Khasanova & Frossard (2017b) suggests using graph-based special convolutional filers to adapt to the geometry of omnidirectional images. 
This method, however, relies on the convolutional operation defined in the spectral domain, which leads to isotropic filters and may reduce the complexity of trained filters. We, on the other hand, propose to build anisotropic graphs-based convolutional filters that do not have this limitation. A different direction is taken by the authors of (Su & Grauman, 2017) who suggest adapting the size of the convolutional kernel to the elevation of the equirectangular image. The main limitation of this technique, however, is that it requires a significantly larger number of parameters than the competing techniques, as it does not have the weight sharing property of CNNs. It rather requires learning different convolutional filters for different elevations of the equirectangular image. Further, Jeon & Kim (2017) propose to learn the shape of the convolutional filter, by learning the sampling locations (position offsets), where the elements of the filter are evaluated. The authors of (Dai et al., 2017) extend later this idea and suggest learning dynamic offsets of the filter elements depending on the image content, which allows the filters to adapt to different parts of the image. This method is quite flexible, however in the context of omnidirectional images requires an extensive training set, as the network needs to learn how it should react to various objects appearing at any possible elevation. In our work we rather take advantage of the knowledge of the image projective geometry and use this knowledge in the design of our architecture to adapt the size and shape of convolutional filters. A different approach is suggested by Cohen et al. (2018), who introduces a CNN that is designed for spherical shapes and define filters directly on its surface. This method, however, is specifically designed for processing spherical images, while our approach is easily adapted to different kind of shapes, which we show in our experiments. Further, the methods of (Coors et al., 2018; Tateno et al., 2018) suggest a different way of compensating for the distortion of omnidirectional image. They suggest adapting the sampling locations of the convolutional filters to the geometry of the lens by projecting kernels to the sphere and using interpolated pixel values on the projected locations for implementing the convolutional filters. While these works are the closest to in spirit to ours, we propose a more general architecture, which permits to adapt the shape and size of the convolutional kernel to the location of omnidirectional image, and therefore to use the information about all the pixels and not only of a subset of them. Then, the authors in (Monroy et al., 2018; Ruder et al., 2018) suggest a completely different approach to tackle the distortion of omnidirectional images. Instead of working with equirectangular images, they propose to project an omnidirectional image to a cube, where each of its faces represents an image that would have been seen through a regular perspective camera, with the optical center located in the center of the cube (Chen et al., 2018). Representing an omnidirectional image in this way allows having less noticeable distortion effects as compared to equirectangular images. This representation, however, suffers from another type of distortion that appears due to discontinuity effect on the borders between the faces of the cube. To mitigate this issue, Monroy et al. (2018) propose to apply a smoothing filter as a post-processing step and Ruder et al. 
(2018) suggest an algorithm that enforces consistency between the neighboring facets of the cube. Contrary to the mentioned approaches, as we model cube surface as a graph, our algorithm can easily handle the discontinuity problem and adapt to image distortions introduced by the cube-map projection. 3 GEOMETRY-AWARE CNN In this section we describe our algorithm, which adapts convolutional filters to the distortion of omnidirectional images. We start with the introduction of the equirectangular projection, as it is one of common ways to represent images from the omnidirecitonal cameras (De Simone et al., 2016; Coors et al., 2018). We then describe our graph-based representation learning framework. 3.1 EQUIRECTANGULAR PROJECTION Omnidirectional visual content can be represented as a sphere with radius r, where the user is assumed to be located at the center. Each 3D point can be projected to a point on the surface of this sphere, which is described by spherical coordinates, namely a longitude θ ∈ [−π, π] and a latitude φ ∈ [−π2 , π 2 ]. Omnidirectional images are generally not processed directly in their native geometry, but they are first projected to the 2D plane where classical image processing techniques can be activated. One of the popular projections involves sampling the data on the sphere with equal steps ∆θ and ∆φ, which results in an equrectangular image on the 2D plane. Thus, each point of equrectangular image is defined by its spherical coordinates. To describe this projection let us introduce the tangent plane T , which is tangent to the sphere in the point (θ0, φ0). Thus, each point (x, y) on the tangent plane is projected to the sphere surface (θ, φ) as follows (Coors et al., 2018): φ(x, y) = sin−1(cos ν sinφ0 + y sin ν cosφ0 ρ ) θ(x, y) = θ0 + tan −1( x sin νρ cosφ0 cos ν−y sinφ0 sin ν ) . (1) where ρ = √ (x2 + y2) and ν = tan−1 ρ. In order to have similar filter response regardless of the position of the object we model distortion of the applied filter. Thus, similarly to the works of (Khasanova & Frossard, 2017b) and (Coors et al., 2018), we define the filter kernel on this tangent plane T . Fig. 1 illustrates a sample equirectangular image with various kernels corresponding to tangent plane at different positions on the sphere. As we can see the projected area is different for various tangent plane locations. This projected area defines the support of our geometry-aware features, as described in the next section. 3.2 GEOMETRY-ADAPTIVE FILTERS In the context of omnidirectional cameras, the main drawback of the classical convolutional approach is that it applies the same rectangular filters to different image positions. However, as we mentioned before, equirectangular images have different statistics at various elevations. Therefore, we propose to adapt the size and shape of the filter to the elevation on the spherical image. To do so we propose to use a graph-based approach, which has recently become popular. The nodes of this graph vi ∈ G and signal y(vi) defined on the nodes represent pixels and their intensity values respectively. Based on this graph G we can then use the Laplacian polynomial filters F as proposed in (Defferrard et al., 2016; Khasanova & Frossard, 2017a), where normalized Laplacian operator is defined as follows: L = I −D−1/2AD−1/2, (2) where I is an identity matrix, D is a diagonal degree matrix and A is an adjacency matrix, which for each node vp of graph G defines its neighborhood Np. 
Then, F has the following form: F = M∑ l=1 αlLl, (3) where the αl are the trainable parameters, L is a Laplacian matrix and M is the degree of F . Using these filters we can construct a Deep learning architecture and use it for various tasks, e.g., classification of omnidirectional images. The main advantage of our approach is that by appropriately constructing the graph G we make the Laplacian polynomial filters F react similarly to the same object seen at different elevation of the equirectangular image regardless of geometric distortion due to the spherical geometry. Here we call those filters geometry-aware (GA). In order to adapt the GA-filter F to the elevation we build a graph G in such a way that the neighbourhood of each node is different for different elevations of the omnidirectional image. In the following section we describe two approaches that rely on undirected and directed graphs respectively, which consequently define isotropic and anisotropic filters. Then we describe in more details the polynomial anisotropic filter F that is used for directed graphs.. Undirected graph construction for adaptive filtering. To adapt F to the elevation level we construct a graph G with nodes that have different neighborhoods depending on the elevation. To do so we define a circular area on a tangent plane T , centered in the tangency point. Then we move T such that it becomes tangent to the sphere S in different positions vp and project the circular area onto S. For every point of S this creates a neighborhood Np, which changes its shape and size together with the elevation, as can be seen in Fig. 1. Based on this geometry adaptive neighborhood, we then construct the graph G in the following way. We connect the node vp ∈ G, corresponding to a tangent point on the sphere, with the node vj ∈ Np. The corresponding edge epi has a weight wpi that is inversely proportional to the Euclidean distance between vp and vi, which are defined on the sphere: wpi = ||vp − vi||−1L2 , vi ∈ Np , wpi = 0, vi /∈ Np . (4) This allows us to vary the size of the neighbourhood according to the geometry of the omnidirectional image for each node in G, and weight the contribution of the nodes to the final filter response according to their distances to vp. Therefore, depending on the elevation the filter is changing its shape and size. While effective, filter F does not have a defined orientation in space as according to Eq. (3) the filter applies the same weights αl to all nodes in the l-hoop neighborhood, with the contribution of each node being weighted by the distance to vp. This results in F being isotropic, which leads to suboptimal representation, as the network is not able to encode the orientation of the object. Directed graphs construction. In order to overcome the limitations of isotropic filters, we propose to replace a single undirected graph G with multiple directed graphs Gk, where each Gk defines its own orientation. Let us consider the case of a 3x3 classical convolutional filter. In this case the filter has 9 distinct elements. To mimic the same structure with our graph convolutional filters we employ the following algorithm. First we define 9 non-overlapping areas sk, k = 1..9 on the tangent plane, which together form a rectangular region that is centered in the tangency point vp of T as defined in the Fig. 2. This rectangular region effectively defines the receptive field of the filter on the tangent plane. 
Then we build a set of nine directed graphs Gk, k = 1..9 in the similar way, as mentioned in the previous section for undirected graph. In particular in order to build graph Gk we do as follows. For the area sk and for every node vp we move the tangent plane at point vp and then project sk from T onto the sphere. This operation defines a specific neighborhood Nk(p) on the sphere that consists of the points that belong to the projection of the region sk from the plane T . We then connect vp with a directed edge to each of these points, where the weight of the edge is defined in Eq. (4). Note that the direction of the edge is very important, because connecting vp and vi with an undirected edge forces vp to be part of the neighborhood Nk(i). This, however, is not possible, as the neighborhood Nk(i) is computed by projecting the area sk from the plane T that is tangent to the sphere at point vi and does not include vp. This results in construction of the directed graph Gk, which corresponds to the kth region of the filter, illustrated in Fig. 2. We repeat this operation for all the areas sk, k = 1..9 of our filter, which leads to creation of 9 directed graphs Gk, k = 1..9. Given this set of graphs Gk we define the resulting convolutional operation F as follows: F = 9∑ k=1 Fk, (5) where Fk is the filtering operation defined on the graph Gk. Note that this filtering operation is slightly different from the operation that is used when working with undirected graphs and is discussed in more details in the following section. To sum up, the introduced graph construction process allows having anisotropic filters F , defined in Eq. (5) that are capable of capturing the orientation of the object and therefore learn more meaningful feature representation for an image compared to the isotropic graph-based filters. It is important to note that in this paper we use the set of 9 non-overlapping rectangular areas defined on the tangent plane, as shown by Fig. 2, due to their rough correspondence to the elements of a 3 × 3 convolutional filter. However, our method can be easily extended to an arbitrary number of such areas with arbitrary shapes. Geometry aware anisotropic filters. For directed graphs Gk Laplacian matrix is not defined, therefore, we use the polynomial filters proposed in (Sakiyama et al., 2017). Instead of the Laplacian matrix these filters rely on the normalized adjacency matrix Ak, which is defined as follows: Ak = D−1k Ak, (6) where Ak and Dk are the weighted adjacency and the diagonal degree matrices of graph Gk respectively. The elements of Dk are computed as Dk(m,m) = ∑ nAk(m,n). Then, we define filters in the following way: Fk = α(k)0 + α (k) 1 Ak, (7) where α(k)0 , α (k) 1 are the training parameters of our filter. Here, we use polynomial filters of degree 1, as they achieve good balance between speed and performance. Network architecture. The introduced approach focuses on the modification of the convolutional layer to incorporate the knowledge about the image projective geometry inside the neural network. Thus, it can be applied for a broad variety of tasks. In this chapter we focus on the image classification and compression problems. For the former one we use a relatively standard architecture that consists of a sequence of convolutional layers with the introduced graph-based filters, followed by a sequence of the fully connected layers. 
For the compression task we use the architecture proposed in (Ballé et al., 2017) and replace its convolutional filters with the proposed graph-based ones. Discussion. Our method can be seen as a generalization of different approaches that have been developed for omnidirectional images. For example, if the node vp at elevation φi has only one neighbor in each direction and the weight of the edges between nodes is always equal to one, it will be the standard CNN method (LeCun et al., 2001). Further, if these neighbors correspond to the projected points it becomes the recently proposed algorithm of (Coors et al., 2018). Finally, if we replace directed graphs with a single undirected one we get the same behavior of the polynomial filters as described in graph-based deep learning methods (Khasanova & Frossard, 2017a; Defferrard et al., 2016; Kipf & Welling, 2017; Khasanova & Frossard, 2017b). 4 RESULTS In this section we illustrate the performance of our approach. We start by evaluating our method with respect to competing approaches on the task of classification images that are projected to different surfaces. Finally, to show the generality of our approach and illustrate the effectiveness of the anisotropic graph-based filters, we evaluate our method on the image compression task. 4.1 IMAGE CLASSIFICATION In this section we first introduce the datasets that we used for the evaluation of our method. We then discuss the baseline approaches and architectures, which we use in our experiments. Finally, we show the quantitative comparison of our method with the competing ones. Datasets. We evaluate our method on three different types of data, which feature different surface geometries, where the images are projected: • Spherical dataset (S) consists of images projected on different locations on a sphere. The resulting spherical images are then unwrapped to for equirectangular images, as described in Section 3.1. • Mod-spherical dataset (MS) features image projections on more complicated surfaces that are depicted by Fig. 3 together with the representative examples of projected images. This dataset itself consists of three different versions: MS1, MS2, MS3 which correspond to the surfaces which are getting further away from the spherical one. A more detailed discussion about the type of projection and the surface geometry used in these datasets can be found in ApendixB. • Fish-eye dataset (F) consists of images projected on different locations on a sphere using stereographic projection Bettonvil (2005), which is frequently used in fish-eye cameras. • Cube-map dataset (CM) features projection of the images on the cube as shown by Fig. 4. This type of projection has recently gained popularity for handling omnidirectional images, due to its ability to reduce distortion artifacts that appear due to the spherical geometry. In all these dataset we use MNIST images from (LeCun & Cortes, 2010) , which are divided into train, validation and test sets with 54k, 6k and 10k samples respectively. Architecture. We compare our approach with standard ConvNets, the algorithm proposed in (Cohen et al., 2018) and other graph-based methods. We present our result in the Table 11. 
For the graph-based methods we investigate 3 possible way of constructing G: • Regular grid-graph with 8 neighbors and all equal weights wij = 1; • Regular grid-graph with 8 neighbors and weights that depend on the Euclidean distance dij between the nodes, as proposed in (Khasanova & Frossard, 2017b) wij = d−1ij ; • Irregular GA-graph wij = d−1ij (isotropic filters from Section 3.2). For all of them we build a normalized Laplacian matrix (Khasanova & Frossard, 2017b) and use polynomial filters of degree 1, which is equivalent to using 3× 3 convolutional kernels. Therefore, for the standard ConvNet we similarly rely on filters of size 3× 3. 1We were unable to compare our method to the recent work of (Coors et al., 2018) as to the best of our knowledge, there is no publicly available implementation. All the competing approaches use the networks of roughly the same complexity. For all the methods we use the architectures of similar structure and roughly the same number of parameters. For all the graph-based approaches we use the graph-based convolutions with stride two on each layer, which in turn requires building graph for each new layer according to its respective sampling. The exact architecture of the classification network is illustrated in Appendix A. For the method of (Cohen et al., 2018) we used the architecture proposed in the paper with roughly the same number of parameters as in competing approaches. Evaluation. We present the result of comparison of our approach with the baseline methods in Table 1. Our method significantly outperforms the standard ConvNets, as it is designed to use the geometry of the omnidirectional images. Further it shows a much higher accuracy then other graphbased techniques, which rely on isotropic filters. Further, our method achieves comparably accuracy with (Cohen et al., 2018) on spherical image representation, however, our method is more general, therefore, it outperforms (Cohen et al., 2018) on other datasets. Finally, we are able to run our approach on cub-map projection while the SphericalCNN by design is not applicable to such kind of images. 4.2 IMAGE COMPRESSION In all our previous experiment we have focused on evaluating our approach on the image classification task. To show the generality of our method and better illustrate the effectiveness of anisotropic graph-based filters, we now evaluate their performance on an image compression problem. For this task, we choose to modify the architecture introduced in (Ballé et al., 2017) by replacing the ordinary convolutional layers with our own graph-based convolutions. In this section we first introduce our approach and then compare the performance of the two graph-based methods, which rely on isotropic and anisotropic graph-based filters respectively. Image compression framework. The method introduced in (Ballé et al., 2017), presents the process of image compression is an optimization of the tradeoff between having small distortion of the pixel intensity values and the small number of bits that are required for storing the compressed representation of these values. As described in (Ballé et al., 2016; Ballé et al., 2017), this optimization can be represented as a variational autoencoder. For more details we refer to Appendix C. In the context of omnidirectional images, we propose to modify the method proposed in (Ballé et al., 2017) by using our geometry-aware filters instead of standard convolutional ones. Evaluation. We now evaluate the performance of our approach. 
For this experiment we have implemented two versions of the system. One with isotropic graph-based filters and the other one with the anisotropic ones. We further evaluate the original method (Ballé et al., 2017) for the sake of completeness. All three methods are trained and tested on the same splits of the modified version of the dataset (Xiao et al., 2012), which consists of omnidirectional images projected onto a cube. From this dataset we use 3900 images for training and 1000 for testing of our approaches. We compare the methods in terms of the Peak Signal to Noise Ratio (PSNR) with respect to the average number of bits per pixel (bpp). The results of the evaluation are presented in Fig. 5. As we can see, our method with anisotropic filters and (Ballé et al., 2017) show similar PSNR values and significantly outperform the architecture with isotropic filters. Further, due to the fact that PSNR depends on the average difference in pixel values between the compressed image and the original one it is not able to reliably detect small artifacts that appear in the cube-map images, which are noticeable for humans. These artifacts are clearly seen in Fig. 6, where we illustrate that due to the knowledge about the image projective geometry, our approach correctly reconstructs the areas along cube borders, while the method of Ballé et al. (2017) over-smooths these areas. 5 CONCLUSION In this paper we have presented generic way of graph construction that allows incorporating the information about the image geometry inside the neural network. Further, we have introduced the graph-based geometry aware convolutional filters that adapt their shape and size to the geometry of the projection surface. In contrast to many existing graph-based filters, our filters are anisotropic, which allows to better adjust to the specific properties of the problem. Our illustrative experiments show state-of-the-art performance of our approach applied to image classification and compression tasks in the presence of various types of image distortions. A ARCHITECTURE OF CLASSIFICATION FRAMEWORK Fig. 7 illustrates the architecture of our classification network, which we use in Section 4.1. B MODIFIED SPHERICAL SURFACE. In order to evaluate the performance of our method as a function of deformation of the spherical surface, we have created a set of datasets by projecting the MNIST images to random locations of the surfaces, which have shapes, shown in Fig. 3 (a,c) and unwrap these spherical images to equirectangular ones. The white color in Fig. 3 (a,c) denotes the areas of the generated surface that are the furthest from the spherical surface of the same radius. The Fig. 3 (b,d) illustrates sample images of digits projected onto the respective surfaces. Each of the aforementioned surfaces is the following modification of a spherical one from Eq. (1): x = cos(φi) sin(θi − θ0) y = (cos(φ0) sin(φi + p(φi, r, l))− sin(φ0 + p(φ0, r, l)) cos(φi) cos(θi − θ0))/c, c = sin(φi + p(φi, r, l)) sin(φ0 + p(φ0, r, l)) + cos(φi) cos(φ0) cos(θi − θ0), (8) where (x, y) are the coordinates on the tangent plane and p(φ, r, l) is the perturbation function that can be written as p(φ, r, l) = r sin−1(sin(lφ)), (9) where φ is the elevation level; r is the parameter that regulates the perturbation magnitude and l defines frequency of the perturbation signal. In our experiments we have set l = 10. Note that for a specific case of r = 0 we get the ordinary spherical surface. We then use Eq. 
8 to construct the graph G that allows our method to adapt to the surface geometry and evaluate our method on each of the generated datasets. C COMPRESSION In this section we briefly describe compression approach, which proposed by ?. An input image x is encoded using a function ga(x;α), which results in the respective latent representation y. Then, y is quantized into ŷ, which can be losslessly compressed using entropy coding algorithms. This ŷ is then passed to the decoder gs(ŷ;β) at the decompression step, which results in a decompressed image x̂. Here, we denote by α and β the parameters of the encoding and decoding algorithms respectively. While both encoder and decoder can be represented as a differentiable function, the process of quantization is non-differentiable. Therefore the authors of (Ballé et al., 2016) propose to replace quantization with an additive uniform noise at the training step as follows: ỹi = y + ∆y, (10) where ∆y denotes additive i.i.d uniform noise. This trick allows to perform the end-to-end optimization of both the encoder and decoder parameters using the following loss function: L(α, β) = Ex,∆y [ − ∑ i log2pỹi(ga(x;α) + ∆y) + λd(gs(ga(x;α) + ∆y;β), x) ] , (11) where gs, ga are convolutional deep neural networks, d represents the distance between the images and λ is a weighting parameter. Thus, during the training step, we add noise (according to Eq. (10)) to be able to back propagate the error and at the inference time we apply quantization to the latent representation y. The overall architecture that we use is similar to the one proposed in (Ballé et al., 2017) and is summarized in Fig. 8. Further, the method of (Ballé et al., 2017) relies on the standard convolutional layers, which are practical for ordinary images: they allow learning local image structures independently of their location in the image. Additional visual results. We run experiment with described compression architecture, where we compare three approaches: original and methods, where we replace convolutional filter from the architecture to graph-based isotropic and geometry-aware filters. Fig. 9 further illustrates some visual comparison of the methods and we can see isotropic filters produce over-smoothed decompressed images, which do not look realistic and result in very low PSNR values. On the other hand our method with anisotropic filters is able to produce sharp results.
1. What is the main contribution of the paper regarding CNNs for omnidirectional images? 2. What are the strengths and weaknesses of the proposed method, particularly in its implementation and numerical stability? 3. How does the reviewer assess the novelty and usefulness of the proposed layers and their stacking method? 4. What are the concerns regarding the experimental results and comparisons with other works? 5. Are there any questions or suggestions regarding the clarity and self-containment of certain sections in the paper?
Review
Review The paper proposes a new way of defining CNNs for omnidirectional images. The method is based on graph convolutional networks, and in contrast to previous work, is applicable to other geometries than spherical ones (e.g. fisheye cameras). Since standard graph CNNs are unable to tell left from right (and up from down, etc.), a key question is how to define anisotropic filters. This is achieved by introducing several directed graphs that have orientation built into the graph structure. The paper is fairly well written, and contains some new ideas. However, the method seems ad-hoc, somewhat difficult to implement, and numerically brittle. Moreover, the method is not equivariant to rotations, and no other justification is given for why it makes sense to stack the proposed layers to form a multi-layer network. The results are underwhelming. Only experiments with small networks on MNIST variants are presented. A very marginal improvement over SphericalCNNs is demonstrated on spherical MNIST. I'm confused by the dataset used: The authors write that they created their own spherical MNIST dataset, which will be made publicly available as a contribution of the paper. However, although the present paper fails to mention it, Cohen et al. also released such a dataset [1], which raises the question for why a new one is needed and whether this is really a useful contribution or only results in more difficulty comparing results. Also, it is not stated whether the 95.2 result for SphericalCNNs was obtained from the authors' dataset or from [1]. If the latter, the numbers are not comparable. The first part of section 3.2 is not very clear. For example, L^l is not defined. L is called the Laplacian matrix, but the Laplacian is not defined. It would be better to make this section more self contained. In the related work section, it is stated that Cohen et al. use isotropic filters, but this is not correct. In the first layer they use general oriented spherical filters, and in later layers they use SO(3) filters, which allows anisotropy in every layer. Estevez et al. [2] do use isotropic spherical filters. In principle, the method is applicable to different geometries than the spherical one. However, this ability is only demonstrated on artificial distortions of a sphere (fig 3), not practically relevant geometries like those found fisheye lenses. In summary, since the approach seems a bit un-principled, does not have nice theoretical properties, and the results are not convincing, I recommend against acceptance of this paper in its current form. [1] https://github.com/jonas-koehler/s2cnn/tree/master/examples/mnist [2] Estevez et al. Learning SO(3) Equivariant Representations with Spherical CNNs
ICLR
Title Counterfactual Graph Learning for Link Prediction

Abstract Learning to predict missing links is important for many graph-based applications. Existing methods were designed to learn the association between two sets of variables: (1) the observed graph structure (e.g., clustering effect) and (2) the existence of a link between a pair of nodes. However, the causal relationship between these variables was ignored. We investigate the possibility of learning it by asking a counterfactual question: “would the link exist or not if the observed graph structure became different?” To answer this question, we leverage causal models considering the information of the node pair (i.e., learned graph representations) as context, global graph structural properties as treatment, and link existence as outcome. In this work, we propose a novel link prediction method that enhances graph learning by counterfactual inference. It creates counterfactual links from the observed ones, and learns representations from both the observed and counterfactual links. Experiments on benchmark datasets show that this novel graph learning method achieves state-of-the-art performance on link prediction.

1 INTRODUCTION

Link prediction seeks to predict the likelihood of edge existence between node pairs based on the observed graph. Given the omnipresence of graph-structured data, link prediction has copious applications such as movie recommendation (Bennett et al., 2007), chemical interaction prediction (Stanfield et al., 2017), and knowledge graph completion (Kazemi & Poole, 2018). Graph machine learning methods have been widely applied to solve this problem. Their standard scheme is to first learn the representation vectors of nodes and then learn the association between the representations of a pair of nodes and the existence of the link between them. For example, graph neural networks (GNNs) use neighborhood aggregation to create the representation vectors: the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020). Then the vectors are fed into a binary classification model to learn the association. GNN methods have shown predominance in the task of link prediction (Kipf & Welling, 2016b; Zhang et al., 2020a). Unfortunately, the causal relationship between graph structure and link existence was largely ignored in previous work. Existing methods that learn from association only are not able to capture essential factors to accurately predict missing links in the test data. Take a social network as an example. Suppose Alice and Adam live in the same neighborhood and they are close friends. The association between neighborhood belonging and friend closeness could be too strong to discover the essential factors of the friendship, such as common interests or a family relationship, which could be the cause of living in the same neighborhood. So, our idea is to ask a counterfactual question: “would Alice and Adam still be close friends if they were not living in the same neighborhood?” If a graph learning model could learn the causal relationship by answering the counterfactual questions, it would improve the accuracy of link prediction with the novel knowledge it captured.
Generally, the questions can be described as “would the link exist or not if the graph structure became different?” As is well known, counterfactual questions are a key component of causal inference and have been well defined in the literature. A counterfactual question is usually framed with three factors: context (as a data point), manipulation (e.g., treatment, intervention, action, strategy), and outcome (van der Laan & Petersen, 2007; Johansson et al., 2016). (To simplify the language, we use “treatment” to refer to the manipulation in this paper, as readers might be more familiar with the word “treatment.”) Given a certain data context, it asks what the outcome would have been if the treatment had not been the observed value. In the scenario of link prediction, we consider the information of a pair of nodes as context, graph structural properties as treatment, and link existence as outcome. Recall the social network example. The context is the representations of Alice and Adam that are learned from their personal attributes and relationships with others on the network. The treatment is living in the same neighborhood, which can be identified by community detection. And the outcome is their friendship. In this work, we present a counterfactual graph learning method for link prediction (CFLP) that trains graph learning models to answer the counterfactual questions. Figure 1 illustrates this two-step method. Suppose the treatment variable is defined as one type of global graph structure, e.g., the neighborhood assignment discovered by spectral clustering or community detection algorithms. We wonder how likely it is that the neighborhood assignment makes a difference to the link (non)existence for each pair of nodes. So, given a pair of nodes (like Alice and Adam) and the treatment value on this pair (in the same neighborhood), we find a pair of nodes (like Helen and Bob) that satisfies two conditions: (1) it has a different treatment (in different neighborhoods) and (2) it is the most similar pair to the given pair of nodes. We call these matched pairs of nodes “counterfactual links.” Note that the outcome of the counterfactual link can be either 1 or 0, depending on whether there exists an edge between the matched pair of nodes (Helen and Bob). The counterfactual link provides an unobserved outcome for the given pair of nodes (Alice and Adam) under a counterfactual condition (in different neighborhoods). After counterfactual links are created for all (positive and negative) training examples, CFLP trains a link predictor (which can be GNN-based) to learn the representation vectors of nodes to predict both the observed factual links and the counterfactual links. In this Alice-Adam example, the link predictor is trained to estimate the individual treatment effect (ITE) of neighborhood assignment as 1 − 1 = 0, where ITE is a metric for the effect of treatment on the outcome and zero indicates the given treatment has no effect on the outcome. So, the learner will try to discover the essential factors behind the friendship between Alice and Adam. CFLP leverages causal models to find these factors for graph learning models to accurately predict missing links.

Contributions. Our main contributions can be summarized as follows. (1) This is the first work that proposes to improve link prediction by causal inference, specifically, learning to answer counterfactual questions about link existence.
(2) This work introduces CFLP, which trains GNN-based link predictors to predict both factual and counterfactual links. It learns the causal relationship between global graph structure and link existence. (3) CFLP outperforms competitive baseline methods on several benchmark datasets. We analyze the impact of counterfactual links as well as the choice of treatment variable. This work provides insights for improving graph machine learning with causal analysis, which has not been extensively studied yet, whereas the other direction (machine learning for causal inference) has been studied for a long time.

2 PROBLEM DEFINITION

Notations Let G = (V, E) be an undirected graph of N nodes, where V = {v1, v2, . . . , vN} is the set of nodes and E ⊆ V × V is the set of observed links. We denote the adjacency matrix as A ∈ {0, 1}^{N×N}, where A_{i,j} = 1 indicates that nodes vi and vj are connected, and vice versa. We denote the node feature matrix as X ∈ R^{N×F}, where F is the number of node features and xi (bolded) indicates the feature vector of node vi (the i-th row of X). In this work, we follow the commonly accepted problem definition of link prediction on graph data (Zhang & Chen, 2018; Zhang et al., 2020a; Cai et al., 2021): Given an observed graph G (with validation and testing links masked off), predict the link existence between every pair of nodes. More specifically, the GNN-based link prediction methods learn low-dimensional node representations Z ∈ R^{N×H}, where H is the dimensional size of the latent space such that H ≪ F, and then use Z for the prediction of link existence.

3 PROPOSED METHOD

3.1 IMPROVING GRAPH LEARNING WITH CAUSAL MODEL

Figure 2: Causal modeling (not the target of our work but related to the idea we propose): Given Z and observed outcomes, find the treatment effect of T on Y .

Figure 3: Graph learning with causal model (the proposed idea): leverage the estimated ATE(Y |T ) to improve the learning of Z.

Leveraging Causal Model(s) Counterfactual causal inference aims to find out the causal relationship between treatment and outcomes by asking counterfactual questions such as “would the outcome be different if the treatment was different?” (Morgan & Winship, 2015). Figure 2 is a typical example, in which we denote the context (confounder) as Z, the treatment as T , and the outcome as Y . Given the context, treatments, and their corresponding outcomes, counterfactual inference methods aim to find the effect of treatment on the outcome, which is usually measured by the individual treatment effect (ITE) and its expectation, the average treatment effect (ATE) (van der Laan & Petersen, 2007; Weiss et al., 2015). For a binary treatment variable T ∈ {0, 1}, denoting g(z, T ) as the outcome of z given the treatment T , we have ITE(z) = g(z, 1) − g(z, 0), and ATE = E_{z∼Z} ITE(z). Ideally, we need all possible outcomes of the contexts under all kinds of treatments to study the causal relationships (Morgan & Winship, 2015). However, in reality, the fact that we can only observe one potential outcome under one particular treatment prevents the ITE from being known (Johansson et al., 2016). Traditional causal inference methods use statistical learning approaches such as the Neyman–Rubin causal model (BCM) and propensity score matching (PSM) to predict the value of ATE (Rubin, 1974; 2005).
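To make these two quantities concrete, the following toy Python snippet computes the ITE and ATE for a hypothetical outcome model g_toy; the model and data are illustrative only and are not part of CFLP.

```python
import numpy as np

def ite(g, z):
    # Individual treatment effect of a context z: outcome under treatment 1
    # minus outcome under treatment 0.
    return g(z, 1) - g(z, 0)

def ate(g, contexts):
    # Average treatment effect: the expectation of the ITE over contexts.
    return float(np.mean([ite(g, z) for z in contexts]))

# Hypothetical toy outcome model: the treatment shifts the outcome by 0.3.
g_toy = lambda z, t: 0.5 * float(z.sum() > 0) + 0.3 * t
contexts = [np.random.randn(4) for _ in range(1000)]
print(ate(g_toy, contexts))  # close to 0.3
```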
In this work, we look at link prediction with graph learning, which is essentially learning the best node representations Z for the prediction of link existence. Therefore, as shown in Figure 3, where the outcome Y is the link existence, the objective is different from classic causal inference. In graph learning, we can estimate the effect of treatment on the outcome (ATE(Y |T )), and we want to improve the learning of Z with the estimation. More specifically, in graph learning for link prediction, for each pair of nodes (vi, vj), its ITE can be estimated with

ITE_{(vi,vj)} = g((zi, zj), 1) − g((zi, zj), 0)   (1)

and we use this information to improve the learning of Z, i.e., P (Z|Y ). We denote the observed adjacency matrix as the factual outcomes A and the unobserved adjacency matrix when the treatment is different as the counterfactual outcomes A^{CF}. We denote T ∈ {0, 1}^{N×N} as the binary factual treatment matrix, where T_{i,j} indicates the treatment of the node pair (vi, vj). We denote T^{CF} as the counterfactual treatment matrix, where T^{CF}_{i,j} = 1 − T_{i,j}. We are interested in (1) estimating the counterfactual outcomes A^{CF} via observed data, (2) learning with the counterfactual adjacency matrix A^{CF} to enhance link prediction, and (3) learning the causal relationship between graph structural information (treatment) and link existence (outcome).

Treatment Variable Previous work on graph machine learning (Velickovic et al., 2019; Park et al., 2020) showed that the graph’s global structural information could improve the quality of the representation vectors of nodes learned by GNNs. This is because the message passing-based GNNs aggregate local information when generating representation vectors, and the global structural information is complementary to the aggregated information. Therefore, for a pair of nodes, one option for defining the treatment variable is its global structural role in the graph. Without loss of generality, we use Louvain (Blondel et al., 2008), an unsupervised approach that has been widely used for community detection, as an example. Louvain discovers the community structure of a graph and assigns each node to one community. Then we can define the binary treatment variable as whether the two nodes in the pair belong to the same community. Let c : V → N be any graph mining/clustering method that outputs the index of the community/cluster/neighborhood that each node belongs to. The treatment matrix T is defined as T_{i,j} = 1 if c(vi) = c(vj), and T_{i,j} = 0 otherwise. For the choice of c, we suggest methods that group nodes based on global graph structural information, including but not limited to Louvain (Blondel et al., 2008), K-core (Bader & Hogue, 2003), and spectral clustering (Ng et al., 2001).

3.2 COUNTERFACTUAL LINKS

To implement the solution based on the above idea, we propose counterfactual links. As aforementioned, for each node pair, the observed data contains only the factual treatment and outcome, meaning that the link existence for the given node pair with an opposite treatment is unknown. Therefore, we use the outcome from the nearest observed context as a substitute. This type of matching on covariates is widely used to estimate treatment effects from observational data (Johansson et al., 2016; Alaa & Van Der Schaar, 2019). That is, we want to find the nearest neighbor with the opposite treatment for each observed node pair and use the nearest neighbor’s outcome as a counterfactual link.
Formally, ∀(vi, vj) ∈ V × V , we want to find its counterfactual link (va, vb) as below:

(va, vb) = arg min_{va,vb∈V} { h((vi, vj), (va, vb)) | T_{a,b} = 1 − T_{i,j} },   (2)

where h(·, ·) is a metric measuring the distance between a pair of node pairs (a pair of contexts). Nevertheless, finding the nearest neighbors by computing the distance between all pairs of node pairs is extremely inefficient and infeasible in application, as it takes O(N^4) comparisons (there are O(N^2) node pairs in total). Hence we implement Eq. (2) using node-level embeddings. Specifically, considering that we want to find the nearest node pair based on not only the raw node features but also structural features, we take the state-of-the-art unsupervised graph representation learning method MVGRL (Hassani & Khasahmadi, 2020) to learn the node embeddings X̃ ∈ R^{N×F̃} from the observed graph (with validation and testing links masked off). We use X̃ to find the nearest neighbors of node pairs. Therefore, ∀(vi, vj) ∈ V × V , we define its counterfactual link (va, vb) as

(va, vb) = arg min_{va,vb∈V} { d(x̃i, x̃a) + d(x̃j, x̃b) | T_{a,b} = 1 − T_{i,j}, d(x̃i, x̃a) + d(x̃j, x̃b) < 2γ },   (3)

where d(·, ·) is specified as the Euclidean distance on the embedding space of X̃, and γ is a hyperparameter that defines the maximum distance at which two nodes are considered similar. When no node pair satisfies the above equation (i.e., there does not exist any node pair with opposite treatment that is close enough to the target node pair), we do not assign any nearest neighbor to the given node pair, to ensure that all the neighbors are similar enough (as substitutes) in the feature space. Thus, the counterfactual treatment matrix T^{CF} and the counterfactual adjacency matrix A^{CF} are defined as

(T^{CF}_{i,j}, A^{CF}_{i,j}) = (1 − T_{i,j}, A_{a,b}) if ∃ (va, vb) ∈ V × V that satisfies Eq. (3), and (T_{i,j}, A_{i,j}) otherwise.   (4)

It is worth noting that the node embeddings X̃ and the nearest neighbors are computed only once and do not change during the learning process. X̃ is only used for finding the nearest neighbors. We also note that X̃ must be structural embeddings rather than positional embeddings (as defined in (Srinivasan & Ribeiro, 2020)).

Learning from Counterfactual Distributions Let P^F be the factual distribution of the observed contexts and treatments, and P^{CF} be the counterfactual distribution that is composed of the observed contexts and opposite treatments. We define the empirical factual distribution P̂^F ∼ P^F as P̂^F = {(vi, vj, T_{i,j})}^N_{i,j=1}, and define the empirical counterfactual distribution P̂^{CF} ∼ P^{CF} as P̂^{CF} = {(vi, vj, T^{CF}_{i,j})}^N_{i,j=1}. Unlike traditional link prediction methods that take only P̂^F as input and use the observed outcomes A as the training target, the idea of counterfactual graph learning is to take advantage of the counterfactual distribution by having P̂^{CF} as a complementary input and using the counterfactual outcomes A^{CF} as the training target for the counterfactual data samples.

3.3 THE COUNTERFACTUAL GRAPH LEARNING MODEL

In this subsection, we present the design of our model as well as the training method. The input of the model in CFLP includes (1) the observed graph data A and the raw feature matrix X, (2) the factual treatments T and counterfactual treatments T^{CF}, and (3) the counterfactual graph data A^{CF}. The output contains link prediction logits Â and Â^{CF} for the factual and counterfactual adjacency matrices A and A^{CF}, respectively.
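To make the construction above concrete, the following Python sketch first builds the treatment matrix T of Section 3.1 from per-node community labels and then derives T^CF and A^CF following Eqs. (3)–(4). It is a naive, illustrative implementation intended for small graphs; in practice the candidate search is pruned by γ and parallelized, as discussed in the Complexity paragraph below.

```python
import numpy as np
from scipy.spatial.distance import cdist

def treatment_matrix(labels):
    # T[i, j] = 1 iff nodes v_i and v_j fall into the same community/cluster
    # (labels: one community index per node, e.g. from Louvain or K-core).
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(int)

def counterfactual_links(X_tilde, A, T, gamma):
    # Naive construction of T^CF and A^CF following Eqs. (3)-(4): for every
    # pair (i, j), find the most similar pair (a, b) with the opposite
    # treatment such that d(i, a) + d(j, b) < 2 * gamma.
    N = X_tilde.shape[0]
    D = cdist(X_tilde, X_tilde)                      # d(x~_i, x~_a) for all nodes
    cand = [np.flatnonzero(D[i] < 2 * gamma) for i in range(N)]
    T_cf, A_cf = T.copy(), A.copy()
    for i in range(N):
        for j in range(N):
            best, best_cost = None, 2 * gamma
            for a in cand[i]:
                for b in cand[j]:
                    cost = D[i, a] + D[j, b]
                    if T[a, b] == 1 - T[i, j] and cost < best_cost:
                        best, best_cost = (a, b), cost
            if best is not None:                     # otherwise keep factual values
                T_cf[i, j] = 1 - T[i, j]
                A_cf[i, j] = A[best[0], best[1]]
    return T_cf, A_cf
```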
Graph Learning Model The model consists of two trainable components: a graph encoder f and a link decoder g. The graph encoder generates representation vectors of nodes from the graph data G, and the link decoder projects the representation vectors of node pairs into the link prediction logits. The choice of the graph encoder f can be any end-to-end GNN model. Without loss of generality, here we use the commonly used graph convolutional network (GCN) (Kipf & Welling, 2016a). Each layer of GCN is defined as

H^{(l)} = f^{(l)}(A, H^{(l−1)}; W^{(l)}) = σ(D̃^{−1/2} Ã D̃^{−1/2} H^{(l−1)} W^{(l)}),   (5)

where l is the layer index, Ã = A + I is the adjacency matrix with added self-loops, D̃ is the diagonal degree matrix with D̃_{ii} = ∑_j Ã_{ij}, H^{(0)} = X, W^{(l)} is the learnable weight matrix at the l-th layer, and σ(·) denotes a nonlinear activation such as ReLU. We denote Z = f(A, X) ∈ R^{N×H} as the output from the encoder’s last layer, i.e., the H-dimensional representation vectors of nodes. Following previous work (Zhang et al., 2020a), we compute the representation of a node pair as the Hadamard product of the vectors of the two nodes. That is, the representation for the node pair (vi, vj) is z_i ⊙ z_j ∈ R^H, where ⊙ stands for the Hadamard product. For the link decoder that predicts whether a link exists between a pair of nodes, we opt for simplicity and adopt a simple decoder based on a multi-layer perceptron (MLP), given the representations of node pairs and their treatments. That is, the decoder g is defined as

Â = g(Z, T), where Â_{i,j} = MLP([z_i ⊙ z_j, T_{i,j}]),   (6)

Â^{CF} = g(Z, T^{CF}), where Â^{CF}_{i,j} = MLP([z_i ⊙ z_j, T^{CF}_{i,j}]),   (7)

where [·, ·] stands for the concatenation of vectors, and Â and Â^{CF} can be used for estimating the observed ITE as aforementioned in Eq. (1). During the training process, data samples from the empirical factual distribution P̂^F and the empirical counterfactual distribution P̂^{CF} are fed into the decoder g and optimized towards A and A^{CF}, respectively. That is, for the two distributions, the loss functions are as follows:

L_F = −(1/N^2) ∑_{i=1}^{N} ∑_{j=1}^{N} [ A_{i,j} · log Â_{i,j} + (1 − A_{i,j}) · log(1 − Â_{i,j}) ],   (8)

L_{CF} = −(1/N^2) ∑_{i=1}^{N} ∑_{j=1}^{N} [ A^{CF}_{i,j} · log Â^{CF}_{i,j} + (1 − A^{CF}_{i,j}) · log(1 − Â^{CF}_{i,j}) ].   (9)

Balancing Counterfactual Learning In the training process, the above loss minimizations train the model on both the empirical factual distribution P̂^F ∼ P^F and the empirical counterfactual distribution P̂^{CF} ∼ P^{CF}, which are not necessarily equal – the training examples (node pairs) do not have to be aligned. However, at the stage of inference, the test data contains only observed (factual) samples. Such a gap between the training and testing data distributions exposes the model to the risk of covariate shift, which is a common issue in counterfactual learning (Johansson et al., 2016; Assaad et al., 2021). To force the distributions of the representations learned from the factual and counterfactual distributions to be similar, we use the discrepancy distance (Mansour et al., 2009; Johansson et al., 2016) as another objective to regularize the representation learning. That is, we use the following loss term to minimize the distance between the learned representations from P̂^F and P̂^{CF}:

L_{disc} = disc(P̂^F_f, P̂^{CF}_f), where disc(P, Q) = ||P − Q||_F,   (10)

where || · ||_F denotes the Frobenius norm, and P̂^F_f and P̂^{CF}_f denote the node pair representations learned by the graph encoder f from the factual distribution and the counterfactual distribution, respectively. That is, the learned representations for (vi, vj, T_{i,j}) and (vi, vj, T^{CF}_{i,j}) are [z_i ⊙ z_j, T_{i,j}] (Eq. (6)) and [z_i ⊙ z_j, T^{CF}_{i,j}] (Eq. (7)), respectively.
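A simplified PyTorch sketch of the decoder of Eqs. (6)–(7) and the discrepancy term of Eq. (10) is given below. Layer sizes follow the defaults reported in Appendix C, but the exact architecture and names are illustrative rather than a verbatim excerpt of our implementation.

```python
import torch
import torch.nn as nn

class PairDecoder(nn.Module):
    # MLP decoder g of Eqs. (6)-(7): scores a node pair from the Hadamard
    # product of its node embeddings concatenated with the treatment bit.
    def __init__(self, hidden_dim=256, mlp_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim + 1, mlp_dim), nn.ELU(),
            nn.Linear(mlp_dim, mlp_dim), nn.ELU(),
            nn.Linear(mlp_dim, 1),
        )

    def forward(self, Z, pairs, t):
        # Z: (N, H) node embeddings; pairs: (B, 2) long tensor of node indices;
        # t: (B,) float treatments. Returns link probabilities and the pair
        # representations used by the discrepancy loss.
        h = torch.cat([Z[pairs[:, 0]] * Z[pairs[:, 1]], t.unsqueeze(-1)], dim=-1)
        return torch.sigmoid(self.mlp(h)).squeeze(-1), h

def discrepancy(h_f, h_cf):
    # Eq. (10): Frobenius-norm discrepancy between the factual and
    # counterfactual pair representations.
    return torch.norm(h_f - h_cf, p="fro")
```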
Training During the training of CFLP, we want the model to be optimized towards three targets: (1) accurate link prediction on the observed outcomes (Eq. (8)), (2) accurate estimation of the counterfactual outcomes (Eq. (9)), and (3) regularization of the representation spaces learned from P̂^F and P̂^{CF} (Eq. (10)). Therefore, the overall training loss of our proposed CFLP is

L = L_F + α · L_{CF} + β · L_{disc},   (11)

where α and β are hyperparameters that control the weights of the counterfactual outcome estimation (link prediction) loss and the discrepancy loss.

Algorithm 1: CFLP: Counterfactual graph learning for link prediction
Input: f, g, A, X, n_epochs, n_epochs_ft
1 Compute T as presented in Section 3.1 ;
2 Compute T^{CF}, A^{CF} by Eqs. (3) and (4) ;
/* model training */
3 Initialize Θ_f in f and Θ_g in g ;
4 for epoch in range(n_epochs) do
5   Z = f(A, X) ;
6   Get Â and Â^{CF} via g with Eqs. (6) and (7) ;
7   Update Θ_f and Θ_g with L ; // Eq. (11)
8 end
/* decoder fine-tuning */
9 Freeze Θ_f and re-initialize Θ_g ;
10 Z = f(A, X) ;
11 for epoch in range(n_epochs_ft) do
12   Get Â via g with Eq. (6) ;
13   Update Θ_g with L_F ; // Eq. (8)
14 end
/* model inference */
15 Z = f(A, X) ;
16 Get Â and Â^{CF} via g with Eqs. (6) and (7) ;
Output: Â for link prediction, Â^{CF}

Summary Algorithm 1 summarizes the whole process of CFLP. The first step is to compute the factual and counterfactual treatments T, T^{CF} as well as the counterfactual outcomes A^{CF}. Then, the second step trains the graph learning model on both the observed factual data and the created counterfactual data with the integrated loss function (Eq. (11)). Note that the discrepancy loss (Eq. (10)) is computed on the representations of node pairs learned by the graph encoder f, so the decoder g is trained with data from both P̂^F and P̂^{CF} without the balancing constraint. Therefore, after the model is sufficiently trained, we freeze the graph encoder f and fine-tune g with only the factual data. Finally, after the decoder is sufficiently fine-tuned, we output the link prediction logits for both the factual and counterfactual adjacency matrices.

Complexity The complexity of the first step (finding counterfactual links with nearest neighbors) is proportional to the number of node pairs. When γ is set to a small value so that only genuinely similar node pairs qualify, this step (Eq. (3)) takes roughly constant time per node pair. Moreover, the computation in Eq. (3) can be parallelized. Therefore, the time complexity is O(N^2/C), where C is the number of processes. For the complexity of the second step (training the counterfactual learning model), the GNN encoder has time complexity of O(LH^2N + LH|E|) (Wu et al., 2020), where L is the number of GNN layers and H is the size of node representations. Given that we sample the same number of non-existing links as that of observed links during training, the complexity of a three-layer MLP decoder is O(((H + 1) · d_h + d_h · 1)|E|) = O(d_h(H + 2)|E|), where d_h is the number of neurons in the hidden layer. Therefore, the second step has linear time complexity w.r.t. the sum of node and edge counts.

Limitations First, as mentioned above, the computation of finding counterfactual links has a worst-case complexity of O(N^2). Second, CFLP performs counterfactual prediction with only a single treatment; however, there are quite a few kinds of graph structural information that can be considered as treatments. Future work can leverage this rich structural information via bundled treatments (Zou et al., 2020) in counterfactual graph learning.
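The following PyTorch-style sketch mirrors Algorithm 1, reusing the PairDecoder and discrepancy function sketched above. The encoder is assumed to be any GNN module mapping (A, X) to node embeddings Z, and the sampled training pairs with their labels and treatments are assumed to be given; the sketch is illustrative and omits details such as the cyclical learning rate schedule, keeping the decoder weights during fine-tuning instead of re-initializing them.

```python
import torch
import torch.nn.functional as F

def train_cflp(encoder, decoder, A, X, pairs, y, y_cf, t, t_cf,
               alpha=1.0, beta=1.0, n_epochs=200, n_epochs_ft=50, lr=0.01):
    # pairs: sampled training node pairs; y / y_cf: their entries of A and
    # A^CF as float tensors in {0, 1}; t / t_cf: factual / counterfactual
    # treatments as float tensors.
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(n_epochs):                       # joint training, Eq. (11)
        Z = encoder(A, X)
        p_f, h_f = decoder(Z, pairs, t)             # Eq. (6)
        p_cf, h_cf = decoder(Z, pairs, t_cf)        # Eq. (7)
        loss = (F.binary_cross_entropy(p_f, y)
                + alpha * F.binary_cross_entropy(p_cf, y_cf)
                + beta * torch.norm(h_f - h_cf, p="fro"))
        opt.zero_grad()
        loss.backward()
        opt.step()
    for p in encoder.parameters():                  # freeze the encoder
        p.requires_grad_(False)
    opt_ft = torch.optim.Adam(decoder.parameters(), lr=lr)
    Z = encoder(A, X).detach()
    for _ in range(n_epochs_ft):                    # decoder fine-tuning, factual data only
        p_f, _ = decoder(Z, pairs, t)
        loss_ft = F.binary_cross_entropy(p_f, y)
        opt_ft.zero_grad()
        loss_ft.backward()
        opt_ft.step()
    return encoder, decoder
```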
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

We conduct experiments on five benchmark datasets including citation networks (CORA, CITESEER, PUBMED (Yang et al., 2016)), a social network (FACEBOOK (McAuley & Leskovec, 2012)), and a drug-drug interaction network (OGB-DDI (Wishart et al., 2018)) from the Open Graph Benchmark (OGB) (Hu et al., 2020). For the first four datasets, we randomly select 10%/20% of the links and the same numbers of disconnected node pairs as validation/test samples. The links in the validation and test sets are masked off from the training graph. For OGB-DDI, we used the OGB official train/validation/test splits. Statistics and details for the datasets are given in the Appendix. We use K-core (Bader & Hogue, 2003) clusters as the default treatment variable. We evaluate CFLP on three commonly used GNN encoders: GCN (Kipf & Welling, 2016a), GSAGE (Hamilton et al., 2017), and JKNet (Xu et al., 2018). We compare the link prediction performance of CFLP against Node2Vec (Grover & Leskovec, 2016), MVGRL (Hassani & Khasahmadi, 2020), VGAE (Kipf & Welling, 2016b), SEAL (Zhang & Chen, 2018), LGLP (Cai et al., 2021), and GNNs with an MLP decoder. We report averaged test performance and standard deviations over 20 runs with different random parameter initializations. Other than the most commonly used metric, Area Under the ROC Curve (AUC), we report Hits@20 (one of the primary metrics on the OGB leaderboard) as a more challenging metric, as it expects models to rank positive edges higher than nearly all negative edges. Besides the performance comparison on link prediction, we will answer two questions to suggest a way of choosing a treatment variable for creating counterfactual links: (Q1) Does CFLP sufficiently learn the observed average treatment effect (ATE) derived from the counterfactual links? (Q2) What is the relationship between the estimated ATE learned by the method and the prediction performance? If the answer to Q1 is yes, then the answer to Q2 will indicate how to choose a treatment based on the observed ATE. To answer Q1, we calculate the observed ATE (ÂTE_obs) by comparing the observed links in A and the created counterfactual links A^{CF} that have opposite treatments. And we calculate the estimated ATE (ÂTE_est) by comparing the predicted links in Â and the predicted counterfactual links Â^{CF}. Formally, ÂTE_obs and ÂTE_est are defined as

ÂTE_obs = (1/N^2) ∑_{i=1}^{N} ∑_{j=1}^{N} { T ⊙ (A − A^{CF}) + (1_{N×N} − T) ⊙ (A^{CF} − A) }_{i,j},   (12)

ÂTE_est = (1/N^2) ∑_{i=1}^{N} ∑_{j=1}^{N} { T ⊙ (Â − Â^{CF}) + (1_{N×N} − T) ⊙ (Â^{CF} − Â) }_{i,j}.   (13)

The treatment variables we investigate are usually graph clustering or community detection methods, such as K-core (Bader & Hogue, 2003), stochastic block model (SBM) (Karrer & Newman, 2011), spectral clustering (SpecC) (Ng et al., 2001), propagation clustering (PropC) (Raghavan et al., 2007), Louvain (Blondel et al., 2008), common neighbors (CommN), Katz index, and hierarchical clustering (Ward) (Ward Jr, 1963). We use JKNet (Xu et al., 2018) as the default graph encoder. Implementation details and supplementary experimental results (e.g., sensitivity on γ, ablation study on L_{CF} and L_{disc}) can be found in the Appendix. Source code is available in the supplementary material.
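The observed ATE of Eq. (12) can be computed directly with a few lines of NumPy; Eq. (13) is the same computation applied to the predicted matrices. The sketch below is illustrative and assumes the matrices are NumPy arrays.

```python
import numpy as np

def observed_ate(A, A_cf, T):
    # Eq. (12): treated pairs (T = 1) contribute (A - A^CF), control pairs
    # (T = 0) contribute (A^CF - A); averaging over all N^2 pairs gives the
    # observed ATE.
    diff = T * (A - A_cf) + (1 - T) * (A_cf - A)
    return diff.mean()

# Eq. (13) is the same computation with the predicted matrices (A_hat, A_hat_cf)
# in place of the observed ones (A, A_cf).
```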
4.2 EXPERIMENTAL RESULTS

Link Prediction Tables 1 and 2 show the link prediction performance of Hits@20 and AUC for all methods. LGLP results on PUBMED and OGB-DDI are missing due to out-of-memory errors when running the code package from the authors. We observe that CFLP with different graph encoders achieves similar or better performance compared with the baselines. The only exception is the AUC on FACEBOOK, where most methods have close-to-perfect AUC. As AUC is a relatively easier metric compared with Hits@20, most methods achieved good performance on AUC. We observe that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@20. Specifically, compared with the best baseline, CFLP improves relatively by 16.4% and 0.8% on Hits@20 and AUC, respectively. Compared with the best-performing baselines, which are also GNN-based, CFLP benefits from learning with both the observed link existence (A) and our defined counterfactual links (A^{CF}).

ATE with Different Treatments Tables 3 and 4 show the link prediction performance, ÂTE_obs, and ÂTE_est of CFLP (with JKNet) when using different treatments. The treatments in Tables 3 and 4 are sorted by the Hits@20 performance. A bigger ATE indicates a stronger causal relationship between the treatment and the outcome, and vice versa. We observe: (1) the rankings of ÂTE_est and ÂTE_obs are positively correlated, with Kendall's rank correlation coefficients (Abdi, 2007) of 0.67 and 0.57 for CORA and CITESEER, respectively. Hence, CFLP was sufficiently trained to learn the causal relationship between graph structural information and link existence; (2) ÂTE_obs and ÂTE_est are both negatively correlated with the link prediction performance, showing that we can pick a proper treatment prior to training a model with CFLP. Using the treatment that has the weakest causal relationship with link existence is likely to train the model to capture more essential factors of the outcome, in a way similar to denoising the unrelated information from the representations.

5 RELATED WORK

Link Prediction With its wide applications, link prediction has drawn attention from many research communities including statistical machine learning and data mining. Stochastic generative methods based on stochastic block models (SBM) have been developed to generate links (Mehta et al., 2019). In data mining, matrix factorization (Menon & Elkan, 2011), heuristic methods (Philip et al., 2010; Martínez et al., 2016), and graph embedding methods (Cui et al., 2018) have been applied to predict links in the graph. Heuristic methods compute the similarity score of nodes based on their neighborhoods. These methods can be generally categorized into first-order, second-order, and high-order heuristics based on the maximum distance of the neighbors. Graph embedding methods learn latent node features via embedding lookup and use them for link prediction (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016). In the past few years, GNNs have shown promising results on various graph-based tasks with their ability to learn from features and custom aggregations on structures (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020; Cotta et al., 2021). With node pair representations and an attached MLP or inner-product decoder, GNNs can be used for link prediction (Zhang et al., 2020a; Davidson et al., 2018; Yang et al., 2018). For example, VGAE used GCN to learn node representations and reconstruct the graph structure (Kipf & Welling, 2016b). SEAL extracted a local subgraph around each target node pair and then learned a graph representation from the local subgraph for link prediction (Zhang & Chen, 2018). Following the scheme of SEAL, Cai & Ji (2020) proposed to improve local subgraph representation learning by multi-scale graph representation learning.
And LGLP converted the local subgraphs to line graphs before learning representations (Cai et al., 2021). However, very limited work has studied using causal inference to improve link prediction.

Counterfactual Prediction As a means of learning the causality between treatment and outcome, counterfactual prediction has been used for a variety of applications such as recommender systems (Wang et al., 2020; Xu et al., 2020), health care (Alaa & van der Schaar, 2017; Pawlowski et al., 2020), vision-language tasks (Zhang et al., 2020b; Parvaneh et al., 2020), and decision making (Coston et al., 2020; Pitis et al., 2020; Kusner et al., 2017). To infer the causal relationships, previous work usually estimated the ITE via function-fitting models (Gelman & Hill, 2006; Chipman et al., 2010; Wager & Athey, 2018; Assaad et al., 2021). Peysakhovich et al. (2019) and Zou et al. (2020) studied counterfactual prediction with multiple agents and bundled treatments, respectively.

Causal Inference Causal inference methods usually re-weighted samples based on the propensity score (Rosenbaum & Rubin, 1983; Austin, 2011) to remove confounding bias from binary treatments. Recently, several works have studied learning treatment-invariant representations to predict the counterfactual outcomes (Shalit et al., 2017; Li & Fu, 2017; Yao et al., 2018; Yoon et al., 2018; Hassanpour & Greiner, 2019a;b; Bica et al., 2020). A few recent works combined causal inference with graph learning (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). For example, Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, to study the effect of link creation on network structure changes. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations, which alleviates the train-test gap when learning from OOD data.

6 CONCLUSION AND FUTURE WORK

In this work, we presented a counterfactual graph learning method for link prediction (CFLP). We introduced the idea of counterfactual prediction to improve link prediction on graphs. CFLP accurately predicted the missing links by exploring the causal relationship between global graph structure and link existence. Extensive experiments demonstrated that CFLP achieved state-of-the-art performance on benchmark datasets. This work shows that a good use of causal models (even basic ones) can greatly improve the performance of (graph) machine learning tasks, which in our case is link prediction. We note that the use of more sophisticated causal models may lead to larger improvements for other machine learning tasks, which could be a valuable future research direction for the community. Other than our use of global graph structure as the treatment, other treatment choices (with both empirical and theoretical analyses) are also worth exploring. Moreover, as CFLP first generates counterfactual links and then learns from both observed and counterfactual link existence, the underlying philosophy of our methodology could be considered as graph data augmentation. Therefore, investigating the relationship between counterfactual graph learning and graph data augmentation is also a possible future research direction.

A ADDITIONAL DATASET DETAILS

In this section, we provide some additional dataset details. All the datasets used in this work are publicly available.
Statistics for the datasets are shown in Table 5.

Citation Networks CORA, CITESEER, and PUBMED are citation networks that were first used by Yang et al. (2016) and have since been commonly used as benchmarks in GNN-related literature (Kipf & Welling, 2016a; Veličković et al., 2017). In these citation networks, the nodes are published papers and the features are bag-of-words vectors extracted from the corresponding paper. Links represent the citation relation between papers. We loaded the datasets with the DGL1 package.

Social Network The FACEBOOK dataset2 is a social network constructed from friends lists from Facebook (McAuley & Leskovec, 2012). The nodes are Facebook users and links indicate the friendship relation on Facebook. The node features were constructed from the user profiles and anonymized by McAuley & Leskovec (2012).

Drug-Drug Interaction Network The OGB-DDI dataset was constructed from a public drug database (Wishart et al., 2018) and provided by the Open Graph Benchmark (OGB) (Hu et al., 2020). Each node in this graph represents an FDA-approved or experimental drug, and edges represent the existence of an unexpected effect when the two drugs are taken together. This dataset does not contain any node features, and it can be downloaded with the dataloader3 provided by OGB.

1https://github.com/dmlc/dgl
2https://snap.stanford.edu/data/ego-Facebook.html
3https://ogb.stanford.edu/docs/linkprop/#data-loader

B EXPANDED RELATED WORK

With the rapid development of graph machine learning in the past few years, researchers have been attempting to relate graph neural networks (GNNs) to causal models. Recently, several works have been proposed to improve graph learning with causal models (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, which is a type of structural intervention in network contexts. Sherman & Shpitser (2020) modeled social networks with a causal DAG and studied the effect of network interventions (link creation and removal) on network structure changes. Lin et al. (2021) formulated the problem of post-hoc explanation generation for GNNs as a causal learning task and proposed a causal explanation model with a loss designed based on Granger causality. Feng et al. (2021) formulated node classification with GNNs using a causal DAG, which estimated the causal effect of the local structure on the prediction and adaptively chose whether to aggregate from the neighbors. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations. They modeled OOD graph classification with a twin-network DAG causal model, which learned approximately environment-invariant graph representations that better extrapolate between train and test data. The last three works, i.e., Lin et al. (2021), Feng et al. (2021), and Bevilacqua et al. (2021), proposed to use causal models to improve the performance of three different types of graph machine learning tasks, namely GNN explanation (subgraph) generation, node-level classification, and graph-level classification. Compared with them, our work has three points of uniqueness. First, to the best of our knowledge, our work makes the first attempt to use a causal model to improve the performance of link prediction, which is also an important graph learning task.
Second, to make the attempt successful, our work presents a novel concept of “counterfactual link” and proposes a novel method, CFLP, that learns from both factual and counterfactual link existence. Third, the proposed method CFLP is flexible with the choice of treatment variables and is able to suggest good treatment choices prior to training via ÂTE_obs.

C DETAILS ON IMPLEMENTATION AND HYPERPARAMETERS

All the experiments in this work were conducted on a Linux server with an Intel Xeon Gold 6130 Processor (16 cores @ 2.1 GHz), 96 GB of RAM, and 4 RTX 2080Ti cards (11 GB of RAM each). Our method is implemented in Python 3.8.5 with PyTorch. Source code is available in the supplementary materials. A list of used packages can be found in requirements.txt.

Baseline Methods For baseline methods, we use the official code packages from the authors for MVGRL4 (Hassani & Khasahmadi, 2020), SEAL5 (Zhang & Chen, 2018), and LGLP6 (Cai et al., 2021). We use a public implementation for VGAE7 (Kipf & Welling, 2016b) and OGB implementations8 for Node2Vec and the baseline GNNs. For fair comparison, we set the size of node/link representations to 256 for all methods.

CFLP We use the Adam optimizer with a simple cyclical learning rate scheduler (Smith, 2017), in which the learning rate varies cyclically between the given learning rate (lr) and 1e-4 every 70 epochs (50 warmup steps and 20 annealing steps). We implement the GNN encoders with torch_geometric9 (Fey & Lenssen, 2019). As with the baselines, we set the size of all hidden layers and node/link representations of CFLP to 256. The graph encoders all have three layers, and JKNet uses mean pooling for the final aggregation layer. The decoder is a 3-layer MLP with a hidden layer of size 64 and ELU as the nonlinearity. As the Euclidean distance used in Eq. (3) has a range of [0,∞), the value of γ depends on the distribution of all-pair node embedding distances, which varies for different datasets. Therefore, we set the value of γ to the γ_pct-th percentile of all-pair node embedding distances. Commands for reproducing the experiments are included in README.md.

Hyperparameter Search Space We manually tune the following hyperparameters over the ranges: lr ∈ {0.005, 0.01, 0.05, 0.1, 0.2}, α ∈ {0.001, 0.01, 0.1, 1, 2}, β ∈ {0.001, 0.01, 0.1, 1, 2}, γ_pct ∈ {10, 20, 30}.

Treatments For the graph clustering or community detection methods used as treatments, we use the implementations from scikit-network10 for Louvain (Blondel et al., 2008), SpecC (Ng et al., 2001), PropC (Raghavan et al., 2007), and Ward (Ward Jr, 1963). We used the implementation of K-core (Bader & Hogue, 2003) from networkx.11 We used SBM (Karrer & Newman, 2011) from a public implementation by Funke & Becker (2019).12 For CommN and Katz, we set T_{i,j} = 1 if the number of common neighbors or the Katz index between vi and vj is greater than or equal to 2 or to 2 times the average of all Katz index values, respectively. For SpecC, we set the number of clusters to 16. For SBM, we set the number of communities to 16. These settings are fixed for all datasets.
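For illustration, the following NumPy sketch shows two of the implementation details above: choosing γ as a percentile of all-pair embedding distances, and building the CommN treatment matrix. Function names are ours and the code is a simplified sketch rather than an excerpt of the released implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def gamma_from_percentile(X_tilde, gamma_pct):
    # Set gamma to the gamma_pct-th percentile of all pairwise node-embedding
    # distances (gamma_pct is searched over {10, 20, 30} above).
    return np.percentile(pdist(X_tilde), gamma_pct)

def commn_treatment(A, threshold=2):
    # Common-neighbors treatment: T[i, j] = 1 iff v_i and v_j share at least
    # `threshold` neighbors; for a binary, symmetric adjacency matrix A (no
    # self-loops), (A @ A)[i, j] counts the common neighbors of i and j.
    return (A @ A >= threshold).astype(int)
```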
4https://github.com/kavehhassani/mvgrl
5https://github.com/facebookresearch/SEAL_OGB
6https://github.com/LeiCaiwsu/LGLP
7https://github.com/DaehanKim/vgae_pytorch
8https://github.com/snap-stanford/ogb/tree/master/examples/linkproppred/ddi
9https://pytorch-geometric.readthedocs.io/en/latest/
10https://scikit-network.readthedocs.io/
11https://networkx.org/documentation/
12https://github.com/funket/pysbm

D ADDITIONAL EXPERIMENTAL RESULTS AND DISCUSSIONS

Link Prediction Tables 6 and 7 show the link prediction performance of Hits@50 and Average Precision (AP) for all methods. LGLP results on PUBMED and OGB-DDI are missing due to out-of-memory errors when running the code package from the authors. Similar to the results in Tables 1 and 2, we observe that CFLP with different graph encoders achieves similar or better performance compared with the baselines, with the only exception of AP on FACEBOOK, where most methods have close-to-perfect AP. From Tables 1, 2, 6 and 7, we observe that CFLP achieves improvement over all GNN architectures (averaged across datasets). Specifically, CFLP improves 25.6% (GCN), 12.0% (GSAGE), and 36.3% (JKNet) on Hits@20, 9.6% (GCN), 5.0% (GSAGE), and 17.8% (JKNet) on Hits@50, 5.6% (GCN), 1.6% (GSAGE), and 1.9% (JKNet) on AUC, and 0.8% (GCN), 0.8% (GSAGE), and 1.8% (JKNet) on AP. We note that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@50. Specifically, compared with the best baseline, CFLP improves relatively by 6.8% and 0.9% on Hits@50 and AP, respectively.

Ablation Study on Losses For the ablative studies of L_{CF} (Eq. (9)) and L_{disc} (Eq. (10)), we show their effect by removing them from the integrated loss function (Eq. (11)). Table 8 shows the results of CFLP on CORA and CITESEER under different settings (α = 0, β = 0, α = β = 0, and the original setting). We observe that CFLP in the original setting achieves the best performance. The performance drops significantly when α = 0, i.e., when not using any counterfactual data during training. We note that setting β = 0, i.e., not using the discrepancy loss, also lowers the performance. Therefore, both L_{CF} and L_{disc} are essential for improving the link prediction performance.

Ablation Study on Node Embedding X̃ As the node embedding X̃ is used in the early step of CFLP for finding the counterfactual links, the quality of X̃ may affect the later learning process. Therefore, we also evaluate CFLP with different state-of-the-art unsupervised graph representation learning methods: MVGRL (Hassani & Khasahmadi, 2020), DGI (Velickovic et al., 2019), and GRACE (Zhu et al., 2020). Table 9 shows the link prediction performance of CFLP (w/ JKNet) on CORA and CITESEER with different node embeddings. We observe that the choice of the method for learning X̃ does have an impact on the later learning process as well as the link prediction performance. Nevertheless, Table 9 shows that CFLP's advantage can be consistently observed with different choices of methods for learning X̃, as CFLP with X̃ learned from all three methods showed promising link prediction performance.

Sensitivity Analysis of α and β Figure 4 shows the AUC performance of CFLP on CORA with different combinations of α and β. We observe that the performance is poorest when α = β = 0 and gradually improves and stabilizes as α and β increase, showing that CFLP is generally robust to the hyperparameters α and β, and the optimal values are easy to locate.
Sensitivity Analysis of γ Figure 5 shows the Hits@20 and AUC performance on link prediction of CFLP (with JKNet) on CORA and CITESEER with different treatments and γ_pct. We observe that the performance is generally good when 10 ≤ γ_pct ≤ 20 and gradually gets worse when the value of γ_pct is too small or too large, showing that CFLP is robust to γ and the optimal γ is easy to find.

Sensitivity to Noisy Data We note that robustness w.r.t. noisy data is not within our claim of technical contributions. Nevertheless, CFLP is not more vulnerable than other link prediction baselines. We conduct experiments with random attacks on the CORA dataset (randomly removing links and adding noisy links). Table 10 shows the AUC performance of our proposed CFLP (w/ JKNet) compared to the strongest baseline methods under different levels of random attacks (0%, 2%, 5%, and 10%). We can observe that as the attack strength goes up, the link prediction performance of all methods goes down. We also note that our proposed CFLP still outperforms the baseline methods.

Generalization to Graphs with Weighted Edges As our proposed CFLP uses a GNN as the graph encoder and GNNs are usually able to take weighted graphs as input (e.g., the adjacency matrix A for GCN can be weighted), the model should be able to handle weighted graphs as given. Note that the link prediction losses (Eqs. (8) and (9)) need to be slightly modified depending on the task. When the task is to predict link existence, the label adjacency matrix used in Eqs. (8) and (9) must contain binary values. When the task is to predict link weights, the BCE loss functions (Eqs. (8) and (9)) need to be changed to regression loss functions such as MSE.
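As a minimal sketch of this modification (assuming predicted and observed edge weights are given as tensors), the regression variant simply replaces the BCE terms:

```python
import torch.nn.functional as F

def weighted_link_loss(a_hat, a_weights):
    # For weighted-link regression, the BCE terms of Eqs. (8)-(9) are replaced
    # by a regression loss such as MSE against the observed edge weights.
    return F.mse_loss(a_hat, a_weights)
```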
1. What is the focus of the paper regarding link prediction?
2. What is the novel approach introduced by the authors in terms of causal inference?
3. What are the concerns regarding the causal estimand measured by the proposed method?
4. How does the reviewer assess the relevance of the paper's content, particularly its connection to prior works?
5. Are there any suggestions for improving the presentation or clarity of the paper's ideas?
Summary Of The Paper Review
Summary Of The Paper This work looks at the problem of link prediction from the lens of causal inference. In particular, the authors introduce counterfactual links, which are links which would have existed under a different treatment (with a running example being the neighborhood identifier). The authors use this definition to define a training procedure which builds on prior art in neural net based causal inference that uses an IPM penalty to ensure balance in representation between treatment and control outcomes. Experimental results are provided which show strong empirical performance for the proposed model.

Review Overall, I found this paper to be an interesting approach, but there are a few items that give me pause. Most importantly, it's not entirely clear what the causal estimand is that is being measured. The authors use membership to a community as the running treatment and consider the existence of a link between nodes as the outcome. However, it's not entirely clear to me that what the authors are ultimately doing is measuring a causal effect (the central task does not appear to be measuring a difference between observed and counterfactual networks). Instead, it seems that the proposed procedure is improving performance by enforcing a kind of invariance in representation between the observed network and the closest perturbed network. I find this to be a very interesting and compelling approach, but unfortunately it appears to be a bit obscured by the presentation.

A few more minor points:
- Sherman & Shpitser (Intervening on Network Ties, UAI 2019) should probably be cited given the relative similarity in task
- It would be helpful if the definition of ATE provided by equations 13 and 14 appeared earlier in the text. It would give a lot more clarity to the presentation to my eyes to have a more formal definition of the estimand provided during the problem setup.
- The authors define the ATE as the expectation of the ITE. The ITE is also an expectation, but conditional on X, whereas the ATE is an unconditional measure (after marginalizing over X).
ICLR
Title Counterfactual Graph Learning for Link Prediction Abstract Learning to predict missing links is important for many graph-based applications. Existing methods were designed to learn the association between two sets of variables: (1) the observed graph structure (e.g., clustering effect) and (2) the existence of link between a pair of nodes. However, the causal relationship between these variables was ignored. We visit the possibility of learning it by asking a counterfactual question: “would the link exist or not if the observed graph structure became different?” To answer this question, we leverage causal models considering the information of the node pair (i.e., learned graph representations) as context, global graph structural properties as treatment, and link existence as outcome. In this work, we propose a novel link prediction method that enhances graph learning by counterfactual inference. It creates counterfactual links from the observed ones, and learns representations from both the observed and counterfactual links. Experiments on benchmark datasets show that this novel graph learning method achieves state-of-the-art performance on link prediction. 1 INTRODUCTION Link prediction seeks to predict the likelihood of edge existence between node pairs based on the observed graph. Given the omnipresence of graph-structured data, link prediction has copious applications such as movie recommendation (Bennett et al., 2007), chemical interaction prediction (Stanfield et al., 2017), and knowledge graph completion (Kazemi & Poole, 2018). Graph machine learning methods have been widely applied to solve this problem. Their standard scheme is to first learn the representation vectors of nodes and then learn the association between the representations of a pair of nodes and the existence of the link between them. For example, graph neural networks (GNNs) use neighborhood aggregation to create the representation vectors: the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020). Then the vectors are fed into a binary classification model to learn the association. GNN methods have shown predominance in the task of link prediction (Kipf & Welling, 2016b; Zhang et al., 2020a). Unfortunately, the causal relationship between graph structure and link existence was largely ignored in previous work. Existing methods that learn from association only were are not able to capture essential factors to accurately predict missing links in the test data. Take social network as an example. Suppose Alice and Adam live in the same neighborhood and they are close friends. The association between neighborhood belonging and friend closeness could be too strong to discover the essential factors of the friendship such as common interests or family relationship which could be the cause of being living in the same neighborhood. So, our idea is asking a counterfactual question: “would Alice and Adam still be close friends if they were not living in the same neighborhood?” If a graph learning model could learn the causal relationship by answering the counterfactual questions, it would improve the accuracy of link prediction with the novel knowledge it captured. 
Generally, the questions can be described as “would the link exist or not if the graph structure became different?” As known to many, counterfactual question is a key component of causal inference and have been well defined in the literature. A counterfactual question is usually framed with three factors: context (as a data point), manipulation (e.g., treatment, intervention, action, strategy), and outcome (van der Laan & Petersen, 2007; Johansson et al., 2016). (To simplify the language, we use “treatment” to refer to the manipulation in this paper, as readers might be familiar more with the word “treatment.”) Given certain data context, it asks what the outcome would have been if the treatment had not been the observed value. In the scenario of link prediction, we consider the information of a pair of nodes as context, graph structural properties as treatment, and link existence as outcome. Recall the social network example. The context is the representations of Alice and Adam that are learned from their personal attributes and relationships with others on the network. The treatment is living in the same neighborhood, which can be identified by community detection. And the outcome is their friendship. In this work, we present a counterfactual graph learning method for link prediction (CFLP) that trains graph learning models to answer the counterfactual questions. Figure 1 illustrates this twostep method. Suppose the treatment variable is defined as one type of global graph structure, e.g., the neighborhood assignment discovered by spectral clustering or community detection algorithms. We are wondering how likely the neighborhood distribution makes a difference on the link (non)existence for each pair of nodes. So, given a pair of nodes (like Alice and Adam) and the treatment value on this pair (in the same neighborhood), we find a pair of nodes (like Helen and Bob) that satisfies two conditions: (1) it has a different treatment (in different neighborhoods) and (2) it is the most similar pair with the given pair of nodes. We call these matched pair of nodes as “counterfactual links.” Note that the outcome of the counterfactual link can be either 1 or 0, depending on whether there exists an edge between the matched pair of nodes (Helen and Bob). The counterfactual link provides unobserved outcome to the given pair of nodes (Alice and Adam) under a counterfactual condition (in different neighborhoods). After counterfactual links are created for all (positive and negative) training examples, CFLP trains a link predictor (which can be GNN-based) to learn the representation vectors of nodes to predict both the observed factual links and the counterfactual links. In this Alice-Adam example, the link predictor is trained to estimate the individual treatment effect (ITE) of neighborhood assignment as 1 − 1 = 0, where ITE is a metric for the effect of treatment on the outcome and zero indicates the given treatment has no effect on the outcome. So, the learner will try to discover the essential factors on the friendship between Alice and Adam. CFLP leverages causal models to find these factors for graph learning models to accurately predict missing links. Contributions. Our main contributions can be summarized as follows. (1) This is the first work that proposes to improve link prediction by causal inference, specifically, learning to answer counterfactual questions about link existence. 
(2) This work introduces CFLP that trains GNN-based link predictors to predict both factual and counterfactual links. It learns the causal relationship between global graph structure and link existence. (3) CFLP outperforms competitive baseline methods on several benchmark datasets. We analyze the impact of counterfactual links as well as the choice of treatment variable. This work sheds insights for improving graph machine learning with causal analysis, which has not been extensively studied yet, when the other direction (machine learning for causal inference) has been studied for a long time. 2 PROBLEM DEFINITION Notations Let G = (V, E) be an undirected graph of N nodes, where V = {v1, v2, . . . , vN} is the set of nodes and E ⊆ V × V is the set of observed links. We denote the adjacency matrix as A ∈ {0, 1}N×N , whereAi,j = 1 indicates nodes vi and vj are connected and vice versa. We denote the node feature matrix as X ∈ RN×F , where F is the number of node features and xi (bolded) indicates the feature vector of node vi (the i-th row of X). In this work, we follow the commonly accepted problem definition of link prediction on graph data (Zhang & Chen, 2018; Zhang et al., 2020a; Cai et al., 2021): Given an observed graph G (with validation and testing links masked off), predict the link existence between every pair of nodes. More specifically, for the GNN-based link prediction methods, they learn low-dimensional node representations Z ∈ RN×H , where H is the dimensional size of latent space such that H F , and then use Z for the prediction of link existence. 3 PROPOSED METHOD 3.1 IMPROVING GRAPH LEARNING WITH CAUSAL MODEL Z ?T Y Z? T Y Treatment Effect Estimation Graph Representation Learning Figure 2: Causal modeling (not the target of our work but related to the idea we propose): Given Z and observed outcomes, find treatment effect of T on Y . Z ?T Y Z? T Y Treatment Effect Estimation Graph Representation Learning Figure 3: Graph learning with causal model (the proposed idea): leverage the estimated ATE(Y |T ) to improve the learning of Z. Leveraging Causal Model(s) Counterfactual causal inference aims to find out the causal relationship between treatment and outcomes by asking the counterfactual questions such as ”would the outcome be different if the treatment was different?” (Morgan & Winship, 2015). Figure 2 is a typical example, in which we denote the context (confounder) as Z, treatment as T , and the outcome as Y . Given the context, treatments, and their corresponding outcomes, counterfactual inference methods aim to find the effect of treatment on the outcome, which is usually measured by individual treatment effect (ITE) and its expectation averaged treatment effect (ATE) (van der Laan & Petersen, 2007; Weiss et al., 2015). For a binary treatment variable T = {0, 1}, denoting g(z, T ) as the outcome of z given the treatment T , we have ITE(z) = g(z, 1)− g(z, 0), and ATE = Ez∼Z ITE(z). Ideally, we need all possible outcomes of the contexts under all kinds of treatments to study the causal relationships (Morgan & Winship, 2015). However, in reality, the fact that we can only observe one potential outcome under one particular treatment prevents the ITE from being known (Johansson et al., 2016). Traditional causal inference methods use statisti- cal learning approaches such as Neyman–Rubin casual model (BCM) and propensity score matching (PSM) to predict the value of ATE (Rubin, 1974; 2005). 
In this work, we look at link prediction with graph learning, which is essentially learning the best node representations Z for the prediction of link existence. Therefore, as shown in Figure 3, where the outcome Y is the link existence, the objective is different from that of classic causal inference. In graph learning, we can estimate the effect of the treatment on the outcome (ATE(Y | T)), and we want to improve the learning of Z with this estimation. More specifically, in graph learning for link prediction, for each pair of nodes (v_i, v_j), its ITE can be estimated with

$$ \mathrm{ITE}_{(v_i, v_j)} = g((\mathbf{z}_i, \mathbf{z}_j), 1) - g((\mathbf{z}_i, \mathbf{z}_j), 0), \qquad (1) $$

and we use this information to improve the learning of Z, i.e., P(Z | Y).

We denote the observed adjacency matrix as the factual outcomes A and the unobserved adjacency matrix when the treatment is different as the counterfactual outcomes A^{CF}. We denote T ∈ {0, 1}^{N×N} as the binary factual treatment matrix, where T_{i,j} indicates the treatment of the node pair (v_i, v_j). We denote T^{CF} as the counterfactual treatment matrix, where T^{CF}_{i,j} = 1 − T_{i,j}. We are interested in (1) estimating the counterfactual outcomes A^{CF} from observed data, (2) learning with the counterfactual adjacency matrix A^{CF} to enhance link prediction, and (3) learning the causal relationship between graph structural information (treatment) and link existence (outcome).

Treatment Variable Previous work on graph machine learning (Velickovic et al., 2019; Park et al., 2020) showed that the graph's global structural information can improve the quality of the node representation vectors learned by GNNs. This is because message passing-based GNNs aggregate local information when generating representation vectors, and the global structural information is complementary to the aggregated information. Therefore, for a pair of nodes, one option for defining the treatment variable is its global structural role in the graph. Without loss of generality, we use Louvain (Blondel et al., 2008), an unsupervised approach that has been widely used for community detection, as an example. Louvain discovers the community structure of a graph and assigns each node to one community. Then we can define the binary treatment variable as whether the two nodes in the pair belong to the same community. Let c : V → N be any graph mining/clustering method that outputs the index of the community/cluster/neighborhood that each node belongs to. The treatment matrix T is defined as T_{i,j} = 1 if c(v_i) = c(v_j), and T_{i,j} = 0 otherwise. For the choice of c, we suggest methods that group nodes based on global graph structural information, including but not limited to Louvain (Blondel et al., 2008), K-core (Bader & Hogue, 2003), and spectral clustering (Ng et al., 2001).

3.2 COUNTERFACTUAL LINKS

To implement the solution based on the above idea, we propose counterfactual links. As aforementioned, for each node pair, the observed data contains only the factual treatment and outcome, meaning that the link existence for the given node pair under the opposite treatment is unknown. Therefore, we use the outcome from the nearest observed context as a substitute. This type of matching on covariates is widely used to estimate treatment effects from observational data (Johansson et al., 2016; Alaa & Van Der Schaar, 2019). That is, we want to find the nearest neighbor with the opposite treatment for each observed node pair and use the nearest neighbor's outcome as a counterfactual link.
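Before the formal definitions below, the following brute-force sketch illustrates the matching idea just described; the function name, the NumPy interface, and the inputs (precomputed cluster labels, node embeddings, and a binary adjacency array) are illustrative assumptions rather than the released implementation, and the actual method restricts and parallelizes this search as discussed around Eq. (3) and in the complexity paragraph.

```python
import numpy as np

# A minimal sketch: treatment = same community, counterfactual = outcome of the most
# similar pair with the opposite treatment (subject to a 2*gamma distance budget).
def counterfactual_links(labels, emb, A, gamma):
    labels = np.asarray(labels)
    N = len(labels)
    T = (labels[:, None] == labels[None, :]).astype(int)               # treatment: same community?
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise node distances
    T_cf, A_cf = T.copy(), A.copy()
    for i in range(N):
        for j in range(N):
            best, best_d = None, 2.0 * gamma                  # enforce d(i,a) + d(j,b) < 2*gamma
            for a in range(N):
                for b in range(N):
                    d_ab = dist[i, a] + dist[j, b]
                    if T[a, b] != T[i, j] and d_ab < best_d:
                        best, best_d = (a, b), d_ab
            if best is not None:                              # nearest pair with the opposite treatment
                T_cf[i, j] = 1 - T[i, j]
                A_cf[i, j] = A[best]                          # borrow its observed link as the outcome
    return T, T_cf, A_cf
```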
Formally, ∀(v_i, v_j) ∈ V × V, we want to find its counterfactual link (v_a, v_b) as below:

$$ (v_a, v_b) = \arg\min_{v_a, v_b \in V} \big\{ h((v_i, v_j), (v_a, v_b)) \mid T_{a,b} = 1 - T_{i,j} \big\}, \qquad (2) $$

where h(·, ·) is a metric measuring the distance between a pair of node pairs (a pair of contexts). Nevertheless, finding the nearest neighbors by computing the distance between all pairs of node pairs is extremely inefficient and infeasible in application, as it takes O(N^4) comparisons (there are O(N^2) node pairs in total). Hence we implement Eq. (2) using node-level embeddings. Specifically, considering that we want to find the nearest node pair based on not only the raw node features but also structural features, we take the state-of-the-art unsupervised graph representation learning method MVGRL (Hassani & Khasahmadi, 2020) to learn the node embeddings X̃ ∈ R^{N×F̃} from the observed graph (with validation and testing links masked off). We use X̃ to find the nearest neighbors of node pairs. Therefore, ∀(v_i, v_j) ∈ V × V, we define its counterfactual link (v_a, v_b) as

$$ (v_a, v_b) = \arg\min_{v_a, v_b \in V} \big\{ d(\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_a) + d(\tilde{\mathbf{x}}_j, \tilde{\mathbf{x}}_b) \mid T_{a,b} = 1 - T_{i,j},\ d(\tilde{\mathbf{x}}_i, \tilde{\mathbf{x}}_a) + d(\tilde{\mathbf{x}}_j, \tilde{\mathbf{x}}_b) < 2\gamma \big\}, \qquad (3) $$

where d(·, ·) is specified as the Euclidean distance in the embedding space of X̃, and γ is a hyperparameter that defines the maximum distance at which two nodes are considered similar. When no node pair satisfies the above equation (i.e., there does not exist any node pair with the opposite treatment that is close enough to the target node pair), we do not assign any nearest neighbor to the given node pair, so as to ensure that all the neighbors are similar enough (as substitutes) in the feature space. Thus, the counterfactual treatment matrix T^{CF} and the counterfactual adjacency matrix A^{CF} are defined as

$$ T^{CF}_{i,j},\ A^{CF}_{i,j} = \begin{cases} 1 - T_{i,j},\ A_{a,b}, & \text{if } \exists\, (v_a, v_b) \in V \times V \text{ satisfying Eq. (3)}; \\ T_{i,j},\ A_{i,j}, & \text{otherwise}. \end{cases} \qquad (4) $$

In the second case, no sufficiently similar node pair with the opposite treatment exists, so the factual treatment and outcome are reused; the counterfactual entry then coincides with the factual one and introduces no (potentially misleading) counterfactual signal for that pair.

It is worth noting that the node embeddings X̃ and the nearest neighbors are computed only once and do not change during the learning process. X̃ is only used for finding the nearest neighbors. We also note that X̃ must be structural embeddings rather than positional embeddings (as defined by Srinivasan & Ribeiro (2020)).

Learning from Counterfactual Distributions Let P^F be the factual distribution of the observed contexts and treatments, and P^{CF} be the counterfactual distribution that is composed of the observed contexts and the opposite treatments. We define the empirical factual distribution P̂^F ∼ P^F as P̂^F = {(v_i, v_j, T_{i,j})}_{i,j=1}^{N}, and the empirical counterfactual distribution P̂^{CF} ∼ P^{CF} as P̂^{CF} = {(v_i, v_j, T^{CF}_{i,j})}_{i,j=1}^{N}. Unlike traditional link prediction methods that take only P̂^F as input and use the observed outcomes A as the training target, the idea of counterfactual graph learning is to take advantage of the counterfactual distribution by having P̂^{CF} as a complementary input and using the counterfactual outcomes A^{CF} as the training target for the counterfactual data samples.

3.3 THE COUNTERFACTUAL GRAPH LEARNING MODEL

In this subsection, we present the design of our model as well as the training method. The input of the CFLP model includes (1) the observed graph data A and the raw feature matrix X, (2) the factual treatments T and counterfactual treatments T^{CF}, and (3) the counterfactual graph data A^{CF}. The output contains the link prediction logits Â and Â^{CF} for the factual and counterfactual adjacency matrices A and A^{CF}, respectively.

Graph Learning Model The model consists of two trainable components: a graph encoder f and a link decoder g.
The graph encoder generates representation vectors of nodes from the graph data G, and the link decoder projects the representation vectors of node pairs into the link prediction logits. The choice of the graph encoder f can be any end-to-end GNN model. Without loss of generality, here we use the commonly used graph convolutional network (GCN) (Kipf & Welling, 2016a). Each layer of GCN is defined as

$$ \mathbf{H}^{(l)} = f^{(l)}(\mathbf{A}, \mathbf{H}^{(l-1)}; \mathbf{W}^{(l)}) = \sigma\big(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \mathbf{H}^{(l-1)} \mathbf{W}^{(l)}\big), \qquad (5) $$

where l is the layer index, Ã = A + I is the adjacency matrix with added self-loops, D̃ is the diagonal degree matrix with D̃_{ii} = Σ_j Ã_{ij}, H^{(0)} = X, W^{(l)} is the learnable weight matrix at the l-th layer, and σ(·) denotes a nonlinear activation such as ReLU. We denote Z = f(A, X) ∈ R^{N×H} as the output from the encoder's last layer, i.e., the H-dimensional representation vectors of the nodes. Following previous work (Zhang et al., 2020a), we compute the representation of a node pair as the Hadamard product of the vectors of the two nodes. That is, the representation of the node pair (v_i, v_j) is z_i ⊙ z_j ∈ R^H, where ⊙ stands for the Hadamard product.

For the link decoder that predicts whether a link exists between a pair of nodes, we opt for simplicity and adopt a simple decoder based on a multi-layer perceptron (MLP), given the representations of node pairs and their treatments. That is, the decoder g is defined as

$$ \hat{\mathbf{A}} = g(\mathbf{Z}, \mathbf{T}), \quad \text{where } \hat{A}_{i,j} = \mathrm{MLP}([\mathbf{z}_i \odot \mathbf{z}_j, T_{i,j}]), \qquad (6) $$
$$ \hat{\mathbf{A}}^{CF} = g(\mathbf{Z}, \mathbf{T}^{CF}), \quad \text{where } \hat{A}^{CF}_{i,j} = \mathrm{MLP}([\mathbf{z}_i \odot \mathbf{z}_j, T^{CF}_{i,j}]), \qquad (7) $$

where [·, ·] stands for the concatenation of vectors, and Â and Â^{CF} can be used for estimating the observed ITE as in Eq. (1). During the training process, data samples from the empirical factual distribution P̂^F and the empirical counterfactual distribution P̂^{CF} are fed into the decoder g and optimized towards A and A^{CF}, respectively. That is, for the two distributions, the loss functions are

$$ \mathcal{L}_F = -\frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \Big( A_{i,j} \cdot \log \hat{A}_{i,j} + (1 - A_{i,j}) \cdot \log(1 - \hat{A}_{i,j}) \Big), \qquad (8) $$
$$ \mathcal{L}_{CF} = -\frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \Big( A^{CF}_{i,j} \cdot \log \hat{A}^{CF}_{i,j} + (1 - A^{CF}_{i,j}) \cdot \log(1 - \hat{A}^{CF}_{i,j}) \Big). \qquad (9) $$

Balancing Counterfactual Learning In the training process, the above loss minimizations train the model on both the empirical factual distribution P̂^F ∼ P^F and the empirical counterfactual distribution P̂^{CF} ∼ P^{CF}, which are not necessarily equal – the training examples (node pairs) do not have to be aligned. However, at the stage of inference, the test data contains only observed (factual) samples. Such a gap between the training and testing data distributions exposes the model to the risk of covariate shift, which is a common issue in counterfactual learning (Johansson et al., 2016; Assaad et al., 2021). To force the distributions of the representations learned from the factual and counterfactual samples to be similar, we use the discrepancy distance (Mansour et al., 2009; Johansson et al., 2016) as another objective to regularize the representation learning. That is, we use the following loss term to minimize the distance between the learned representations from P̂^F and P̂^{CF}:

$$ \mathcal{L}_{disc} = \mathrm{disc}(\hat{P}^F_f, \hat{P}^{CF}_f), \quad \text{where } \mathrm{disc}(P, Q) = \lVert P - Q \rVert_F, \qquad (10) $$

where ∥·∥_F denotes the Frobenius norm, and P̂^F_f and P̂^{CF}_f denote the node pair representations learned by the graph encoder f from the factual and counterfactual distributions, respectively. That is, the learned representations for (v_i, v_j, T_{i,j}) and (v_i, v_j, T^{CF}_{i,j}) are [z_i ⊙ z_j, T_{i,j}] (Eq. (6)) and [z_i ⊙ z_j, T^{CF}_{i,j}] (Eq. (7)), respectively.
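A minimal PyTorch-style sketch of the decoder and the three loss terms in Eqs. (6)-(10); the hidden size, tensor names, and interfaces are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Decoder on [z_i ⊙ z_j, T_ij] (Eqs. 6-7) and the factual/counterfactual/discrepancy losses.
class PairDecoder(nn.Module):
    def __init__(self, h_dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(h_dim + 1, hidden), nn.ELU(), nn.Linear(hidden, 1)
        )

    def forward(self, Z, pairs, t):
        # pairs: (M, 2) node indices; t: (M,) binary treatments of those pairs
        rep = Z[pairs[:, 0]] * Z[pairs[:, 1]]                   # Hadamard product z_i ⊙ z_j
        rep = torch.cat([rep, t.float().unsqueeze(1)], dim=1)   # concatenate the treatment
        return rep, self.mlp(rep).squeeze(1)                    # pair representation and link logit


def cflp_losses(decoder, Z, pairs, t_f, t_cf, y_f, y_cf):
    """Factual loss (Eq. 8), counterfactual loss (Eq. 9), discrepancy loss (Eq. 10)."""
    rep_f, logit_f = decoder(Z, pairs, t_f)
    rep_cf, logit_cf = decoder(Z, pairs, t_cf)
    loss_f = F.binary_cross_entropy_with_logits(logit_f, y_f.float())
    loss_cf = F.binary_cross_entropy_with_logits(logit_cf, y_cf.float())
    loss_disc = torch.norm(rep_f - rep_cf, p="fro")
    return loss_f, loss_cf, loss_disc
```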
Training During the training of CFLP, we want the model to be optimized towards three targets: (1) accurate link prediction on the observed outcomes (Eq. (8)), (2) accurate estimation of the counterfactual outcomes (Eq. (9)), and (3) regularization of the representation spaces learned from P̂^F and P̂^{CF} (Eq. (10)). Therefore, the overall training loss of our proposed CFLP is

$$ \mathcal{L} = \mathcal{L}_F + \alpha \cdot \mathcal{L}_{CF} + \beta \cdot \mathcal{L}_{disc}, \qquad (11) $$

where α and β are hyperparameters that control the weights of the counterfactual outcome estimation (link prediction) loss and the discrepancy loss.

Algorithm 1: CFLP: Counterfactual graph learning for link prediction
Input: f, g, A, X, n_epochs, n_epochs_ft
1: Compute T as presented in Section 3.1
2: Compute T^{CF}, A^{CF} by Eqs. (3) and (4)
/* model training */
3: Initialize Θ_f in f and Θ_g in g
4: for epoch in range(n_epochs) do
5:     Z = f(A, X)
6:     Get Â and Â^{CF} via g with Eqs. (6) and (7)
7:     Update Θ_f and Θ_g with L    // Eq. (11)
8: end
/* decoder fine-tuning */
9: Freeze Θ_f and re-initialize Θ_g
10: Z = f(A, X)
11: for epoch in range(n_epochs_ft) do
12:     Get Â via g with Eq. (6)
13:     Update Θ_g with L_F    // Eq. (8)
14: end
/* model inference */
15: Z = f(A, X)
16: Get Â and Â^{CF} via g with Eqs. (6) and (7)
Output: Â for link prediction, Â^{CF}

Summary Algorithm 1 summarizes the whole process of CFLP. The first step is to compute the factual and counterfactual treatments T, T^{CF} as well as the counterfactual outcomes A^{CF}. Then, the second step trains the graph learning model on both the observed factual data and the created counterfactual data with the integrated loss function (Eq. (11)). Note that the discrepancy loss (Eq. (10)) is computed on the representations of node pairs learned by the graph encoder f, so the decoder g is trained with data from both P̂^F and P̂^{CF} without the balancing constraint. Therefore, after the model is sufficiently trained, we freeze the graph encoder f and fine-tune g with only the factual data. Finally, after the decoder is sufficiently fine-tuned, we output the link prediction logits for both the factual and counterfactual adjacency matrices.

Complexity The complexity of the first step (finding counterfactual links with nearest neighbors) is proportional to the number of node pairs. When γ is set to a small value so that only truly similar node pairs are retained, this step (Eq. (3)) takes constant time per node pair. Moreover, the computation in Eq. (3) can be parallelized. Therefore, the time complexity is O(N^2/C), where C is the number of processes. For the complexity of the second step (training the counterfactual learning model), the GNN encoder has a time complexity of O(LH^2N + LH|E|) (Wu et al., 2020), where L is the number of GNN layers and H is the size of the node representations. Given that we sample the same number of non-existing links as observed links during training, the complexity of a three-layer MLP decoder is O(((H + 1) · d_h + d_h · 1)|E|) = O(d_h(H + 2)|E|), where d_h is the number of neurons in the hidden layer. Therefore, the second step has linear time complexity w.r.t. the sum of the node and edge counts.

Limitations First, as mentioned above, the computation for finding counterfactual links has a worst-case complexity of O(N^2). Second, CFLP performs counterfactual prediction with only a single treatment; however, there are quite a few kinds of graph structural information that can be considered as treatments. Future work can leverage this rich structural information via bundled treatments (Zou et al., 2020) in counterfactual graph learning.
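To make Algorithm 1 concrete, the following compact sketch mirrors its two stages, reusing the `cflp_losses` helper from the previous sketch; argument names, default values, and the omission of the cyclical learning-rate schedule (Appendix C) are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

# Joint training on factual + counterfactual data (lines 4-8), then decoder fine-tuning
# on factual data with the encoder frozen (lines 9-14). Algorithm 1 also re-initializes
# the decoder parameters before fine-tuning, which is omitted here for brevity.
def train_cflp(encoder, decoder, A, X, pairs, t_f, t_cf, y_f, y_cf,
               alpha=1.0, beta=1.0, lr=0.01, n_epochs=200, n_epochs_ft=50):
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(n_epochs):
        Z = encoder(A, X)
        loss_f, loss_cf, loss_disc = cflp_losses(decoder, Z, pairs, t_f, t_cf, y_f, y_cf)
        loss = loss_f + alpha * loss_cf + beta * loss_disc      # Eq. (11)
        opt.zero_grad(); loss.backward(); opt.step()

    for p in encoder.parameters():                              # freeze the encoder
        p.requires_grad_(False)
    Z = encoder(A, X).detach()
    opt_ft = torch.optim.Adam(decoder.parameters(), lr=lr)
    for _ in range(n_epochs_ft):
        _, logit_f = decoder(Z, pairs, t_f)
        loss = F.binary_cross_entropy_with_logits(logit_f, y_f.float())  # Eq. (8)
        opt_ft.zero_grad(); loss.backward(); opt_ft.step()
    return encoder, decoder
```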
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

We conduct experiments on five benchmark datasets including the citation networks CORA, CITESEER, and PUBMED (Yang et al., 2016), the social network FACEBOOK (McAuley & Leskovec, 2012), and the drug-drug interaction network OGB-DDI (Wishart et al., 2018) from the Open Graph Benchmark (OGB) (Hu et al., 2020). For the first four datasets, we randomly select 10%/20% of the links and the same numbers of disconnected node pairs as validation/test samples. The links in the validation and test sets are masked off from the training graph. For OGB-DDI, we use the official OGB train/validation/test splits. Statistics and details for the datasets are given in the Appendix. We use K-core (Bader & Hogue, 2003) clusters as the default treatment variable. We evaluate CFLP with three commonly used GNN encoders: GCN (Kipf & Welling, 2016a), GSAGE (Hamilton et al., 2017), and JKNet (Xu et al., 2018). We compare the link prediction performance of CFLP against Node2Vec (Grover & Leskovec, 2016), MVGRL (Hassani & Khasahmadi, 2020), VGAE (Kipf & Welling, 2016b), SEAL (Zhang & Chen, 2018), LGLP (Cai et al., 2021), and GNNs with an MLP decoder. We report the averaged test performance and its standard deviation over 20 runs with different random parameter initializations. In addition to the most commonly used metric, the Area Under the ROC Curve (AUC), we report Hits@20 (one of the primary metrics on the OGB leaderboard) as a more challenging metric, as it expects models to rank positive edges higher than nearly all negative edges.

Besides the performance comparison on link prediction, we answer two questions to suggest a way of choosing a treatment variable for creating counterfactual links: (Q1) Does CFLP sufficiently learn the observed averaged treatment effect (ATE) derived from the counterfactual links? (Q2) What is the relationship between the estimated ATE learned by the method and the prediction performance? If the answer to Q1 is yes, then the answer to Q2 will indicate how to choose the treatment based on the observed ATE. To answer Q1, we calculate the observed ATE (ATE_obs) by comparing the observed links in A and the created counterfactual links A^{CF} that have the opposite treatments. And we calculate the estimated ATE (ATE_est) by comparing the predicted links in Â and the predicted counterfactual links Â^{CF}. Formally, ATE_obs and ATE_est are defined as

$$ \widehat{\mathrm{ATE}}_{obs} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \big\{ \mathbf{T} \odot (\mathbf{A} - \mathbf{A}^{CF}) + (\mathbf{1}_{N \times N} - \mathbf{T}) \odot (\mathbf{A}^{CF} - \mathbf{A}) \big\}_{i,j}, \qquad (12) $$
$$ \widehat{\mathrm{ATE}}_{est} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \big\{ \mathbf{T} \odot (\hat{\mathbf{A}} - \hat{\mathbf{A}}^{CF}) + (\mathbf{1}_{N \times N} - \mathbf{T}) \odot (\hat{\mathbf{A}}^{CF} - \hat{\mathbf{A}}) \big\}_{i,j}. \qquad (13) $$

The treatment variables we investigate are derived from graph clustering or community detection methods, such as K-core (Bader & Hogue, 2003), the stochastic block model (SBM) (Karrer & Newman, 2011), spectral clustering (SpecC) (Ng et al., 2001), propagation clustering (PropC) (Raghavan et al., 2007), Louvain (Blondel et al., 2008), common neighbors (CommN), the Katz index, and hierarchical clustering (Ward) (Ward Jr, 1963). We use JKNet (Xu et al., 2018) as the default graph encoder. Implementation details and supplementary experimental results (e.g., sensitivity to γ, ablation study on L_CF and L_disc) can be found in the Appendix. Source code is available in the supplementary material.

4.2 EXPERIMENTAL RESULTS

Link Prediction Tables 1 and 2 show the link prediction performance in terms of Hits@20 and AUC for all methods. Results for LGLP on PUBMED and OGB-DDI are missing due to an out-of-memory error when running the code package from the authors. We observe that CFLP with different graph encoders achieves similar or better performance compared with the baselines.
The only exception is the AUC on FACEBOOK, where most methods achieve close-to-perfect AUC. As AUC is an easier metric compared with Hits@20, most methods achieve good performance on AUC. We observe that CFLP with JKNet almost consistently achieves the best performance and outperforms the baselines significantly on Hits@20. Specifically, compared with the best baseline, CFLP improves relatively by 16.4% and 0.8% on Hits@20 and AUC, respectively. Compared with the best-performing baselines, which are also GNN-based, CFLP benefits from learning with both the observed link existence (A) and our defined counterfactual links (A^{CF}).

ATE with Different Treatments Tables 3 and 4 show the link prediction performance, ATE_obs, and ATE_est of CFLP (with JKNet) when using different treatments. The treatments in Tables 3 and 4 are sorted by the Hits@20 performance. A larger ATE indicates a stronger causal relationship between the treatment and the outcome, and vice versa. We observe: (1) the rankings of ATE_est and ATE_obs are positively correlated, with Kendall's rank correlation coefficients (Abdi, 2007) of 0.67 and 0.57 for CORA and CITESEER, respectively. Hence, CFLP was sufficiently trained to learn the causal relationship between graph structural information and link existence; (2) ATE_obs and ATE_est are both negatively correlated with the link prediction performance, showing that we can pick a proper treatment prior to training a model with CFLP. Using the treatment that has the weakest causal relationship with link existence is likely to train the model to capture the more essential factors for the outcome, in a way similar to denoising the unrelated information from the representations.

5 RELATED WORK

Link Prediction With its wide range of applications, link prediction has drawn attention from many research communities, including statistical machine learning and data mining. Stochastic generative methods based on stochastic block models (SBM) have been developed to generate links (Mehta et al., 2019). In data mining, matrix factorization (Menon & Elkan, 2011), heuristic methods (Philip et al., 2010; Martínez et al., 2016), and graph embedding methods (Cui et al., 2018) have been applied to predict links in the graph. Heuristic methods compute the similarity score of nodes based on their neighborhoods. These methods can be generally categorized into first-order, second-order, and high-order heuristics based on the maximum distance of the neighbors. Graph embedding methods learn latent node features via embedding lookup and use them for link prediction (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016). In the past few years, GNNs have shown promising results on various graph-based tasks thanks to their ability to learn from features and custom aggregations over structures (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020; Cotta et al., 2021). With node pair representations and an attached MLP or inner-product decoder, GNNs can be used for link prediction (Zhang et al., 2020a; Davidson et al., 2018; Yang et al., 2018). For example, VGAE used a GCN to learn node representations and reconstruct the graph structure (Kipf & Welling, 2016b). SEAL extracted a local subgraph around each target node pair and then learned a graph representation from the local subgraph for link prediction (Zhang & Chen, 2018). Following the scheme of SEAL, Cai & Ji (2020) proposed to improve local subgraph representation learning by multi-scale graph representation learning.
LGLP converted the local subgraphs to line graphs before learning representations (Cai et al., 2021). However, very limited work has studied the use of causal inference for improving link prediction.

Counterfactual Prediction As a means of learning the causality between treatment and outcome, counterfactual prediction has been used for a variety of applications such as recommender systems (Wang et al., 2020; Xu et al., 2020), health care (Alaa & van der Schaar, 2017; Pawlowski et al., 2020), vision-language tasks (Zhang et al., 2020b; Parvaneh et al., 2020), and decision making (Coston et al., 2020; Pitis et al., 2020; Kusner et al., 2017). To infer the causal relationships, previous work usually estimated the ITE via function-fitting models (Gelman & Hill, 2006; Chipman et al., 2010; Wager & Athey, 2018; Assaad et al., 2021). Peysakhovich et al. (2019) and Zou et al. (2020) studied counterfactual prediction with multiple agents and bundled treatments, respectively.

Causal Inference Causal inference methods usually re-weight samples based on the propensity score (Rosenbaum & Rubin, 1983; Austin, 2011) to remove the confounding bias from binary treatments. Recently, several works studied learning treatment-invariant representations to predict the counterfactual outcomes (Shalit et al., 2017; Li & Fu, 2017; Yao et al., 2018; Yoon et al., 2018; Hassanpour & Greiner, 2019a;b; Bica et al., 2020). A few recent works combined causal inference with graph learning (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). For example, Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, to study the effect of link creation on network structure changes. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations, which alleviates the train-test gap when learning from OOD data.

6 CONCLUSION AND FUTURE WORK

In this work, we presented a counterfactual graph learning method for link prediction (CFLP). We introduced the idea of counterfactual prediction to improve link prediction on graphs. CFLP accurately predicted the missing links by exploring the causal relationship between global graph structure and link existence. Extensive experiments demonstrated that CFLP achieved state-of-the-art performance on benchmark datasets. This work shows that a good use of causal models (even basic ones) can greatly improve the performance of (graph) machine learning tasks, which in our case is link prediction. We note that the use of more sophisticated causal models may lead to larger improvements for other machine learning tasks, which could be a valuable future research direction for the community. Other than our use of global graph structure as the treatment, other treatment choices (with both empirical and theoretical analyses) are also worth exploring. Moreover, as CFLP first generates counterfactual links and then learns from both observed and counterfactual link existence, the underlying philosophy of our methodology could be considered as graph data augmentation. Therefore, investigating the relationship between counterfactual graph learning and graph data augmentation is also a possible future research direction.

A ADDITIONAL DATASET DETAILS

In this section, we provide some additional dataset details. All the datasets used in this work are publicly available.
Statistics for the datasets are shown in Table 5.

Citation Networks CORA, CITESEER, and PUBMED are citation networks that were first used by Yang et al. (2016) and have since been commonly used as benchmarks in the GNN-related literature (Kipf & Welling, 2016a; Veličković et al., 2017). In these citation networks, the nodes are published papers and the features are bag-of-words vectors extracted from the corresponding paper. Links represent the citation relation between papers. We loaded the datasets with the DGL package (https://github.com/dmlc/dgl).

Social Network The FACEBOOK dataset (https://snap.stanford.edu/data/ego-Facebook.html) is a social network constructed from friends lists from Facebook (McAuley & Leskovec, 2012). The nodes are Facebook users and links indicate the friendship relation on Facebook. The node features were constructed from the user profiles and anonymized by McAuley & Leskovec (2012).

Drug-Drug Interaction Network The OGB-DDI dataset was constructed from a public drug database (Wishart et al., 2018) and provided by the Open Graph Benchmark (OGB) (Hu et al., 2020). Each node in this graph represents an FDA-approved or experimental drug, and edges represent the existence of an unexpected effect when the two drugs are taken together. This dataset does not contain any node features, and it can be downloaded with the dataloader provided by OGB (https://ogb.stanford.edu/docs/linkprop/#data-loader).

B EXPANDED RELATED WORK

With the rapid development of graph machine learning in the past few years, researchers have been attempting to relate graph neural networks (GNNs) to causal models. Recently, several works have been proposed to improve graph learning with causal models (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, which is a type of structural intervention in network contexts. They modeled a social network with a causal DAG and studied the effect of network interventions (link creation and removal) on network structure changes. Lin et al. (2021) formulated the problem of post-hoc explanation generation for GNNs as a causal learning task and proposed a causal explanation model with a loss designed based on Granger causality. Feng et al. (2021) formulated node classification with GNNs using a causal DAG, which estimated the causal effect of the local structure on the prediction and adaptively chose whether to aggregate from the neighbors. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations. They modeled OOD graph classification with a twin-network DAG causal model, which learned approximately environment-invariant graph representations that better extrapolate between train and test data. The last three works, i.e., Lin et al. (2021), Feng et al. (2021), and Bevilacqua et al. (2021), proposed to use causal models to improve the performance of three different types of graph machine learning tasks: GNN explanation (subgraph) generation, node-level classification, and graph-level classification. Compared with them, our work has three points of uniqueness. First, to the best of our knowledge, our work makes the first attempt to use a causal model to improve the performance of link prediction, which is also an important graph learning task.
Second, to make the attempt successful, our work presents a novel concept of “counterfactual link” and proposes a novel method CFLP that learns from both factual and counterfactual link existence. Third, the proposed method CFLP is flexible with the choice of treatment variables and is able to suggest good treatment choices prior to training via ÂTEobs. C DETAILS ON IMPLEMENTATION AND HYPERPARAMETERS All the experiments in this work were conducted on a Linux server with Intel Xeon Gold 6130 Processor (16 Cores @2.1Ghz), 96 GB of RAM, and 4 RTX 2080Ti cards (11 GB of RAM each). Our method are implemented with Python 3.8.5 with PyTorch. Source code is available in the supplementary materials. A list of used packages can be found in requirements.txt. Baseline Methods For baseline methods, we use official code packages from the authors for MVGRL4 (Hassani & Khasahmadi, 2020), SEAL5 (Zhang & Chen, 2018), and LGLP6 (Cai et al., 2021). We use a public implementation for VGAE7 (Kipf & Welling, 2016b) and OGB implementations8 for Node2Vec and baseline GNNs. For fair comparison, we set the size of node/link representations to be 256 of all methods. CFLP We use the Adam optimizer with a simple cyclical learning rate scheduler (Smith, 2017), in which the learning rate waves cyclically between the given learning rate (lr) and 1e-4 in every 70 epochs (50 warmup steps and 20 annealing steps). We implement the GNN encoders with torch_geometric9 (Fey & Lenssen, 2019). Same with the baselines, we set the size of all hidden layers and node/link representations of CFLP as 256. The graph encoders all have three layers and JKNet has mean pooling for the final aggregation layer. The decoder is a 3-layer MLP with a hidden layer of size 64 and ELU as the nonlinearity. As the Euclidean distance used in Eq. (3) has a range of [0,∞), the value of γ depends on the distribution of all-pair node embedding distances, which varies for different datasets. Therefore, we set the value of γ as the γpct-percentile of all-pair node embedding distances. Commands for reproducing the experiments are included in README.md. Hyperparameter Searching Space We manually tune the following hyperparameters over range: lr ∈ {0.005, 0.01, 0.05, 0.1, 0.2}, α ∈ {0.001, 0.01, 0.1, 1, 2}, β ∈ {0.001, 0.01, 0.1, 1, 2}, γpct ∈ {10, 20, 30}. Treatments For the graph clustering or community detection methods we used as treatments, we use the implementation from scikit-network10 for Louvain (Blondel et al., 2008), SpecC (Ng et al., 2001), PropC (Raghavan et al., 2007), and Ward (Ward Jr, 1963). We used implementation of K-core (Bader & Hogue, 2003) from networkx.11 We used SBM (Karrer & Newman, 2011) from a public implementation by Funke & Becker (2019).12 For CommN and Katz, we set Ti,j = 1 if the number of common neighbors or Katz index between vi and vj are greater or equal to 2 or 2 times the average of all Katz index values, respectively. For SpecC, we set the number of clusters as 16. For SBM, we set the number of communities as 16. These settings are fixed for all datasets. 
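As a concrete sketch of the percentile rule for γ described above (the function name and the NumPy interface are illustrative assumptions):

```python
import numpy as np

# gamma is set to the gamma_pct-percentile of all pairwise node-embedding distances;
# x_tilde is the (N, F) node-embedding matrix learned, e.g., by MVGRL.
def gamma_from_percentile(x_tilde, gamma_pct):
    dists = np.linalg.norm(x_tilde[:, None, :] - x_tilde[None, :, :], axis=-1)
    iu = np.triu_indices(len(x_tilde), k=1)      # all unordered node pairs, excluding self-pairs
    return np.percentile(dists[iu], gamma_pct)
```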
4https://github.com/kavehhassani/mvgrl 5https://github.com/facebookresearch/SEAL_OGB 6https://github.com/LeiCaiwsu/LGLP 7https://github.com/DaehanKim/vgae_pytorch 8https://github.com/snap-stanford/ogb/tree/master/examples/ linkproppred/ddi 9https://pytorch-geometric.readthedocs.io/en/latest/ 10https://scikit-network.readthedocs.io/ 11https://networkx.org/documentation/ 12https://github.com/funket/pysbm D ADDITIONAL EXPERIMENTAL RESULTS AND DISCUSSIONS Link Prediction Tables 6 and 7 show the link prediction performance of Hits@50 and Average Precision (AP) by all methods. LGLP on PUBMED and OGB-DDI are missing due to the out of memory error when running the code package from the authors. Similar to the results in Tables 1 and 2, we observe that our CFLP on different graph encoders achieve similar or better performances compared with baselines, with the only exception of AP on FACEBOOK where most methods have close-to-perfect AP. From Tables 1, 2, 6 and 7, we observe that CFLP achieves improvement over all GNN architectures (averaged across datasets). Specifically, CFLP improves 25.6% (GCN), 12.0% (GSAGE), and 36.3% (JKNet) on Hits@20, 9.6% (GCN), 5.0% (GSAGE), and 17.8% (JKNet) on Hits@50, 5.6% (GCN), 1.6% (GSAGE), and 1.9% (JKNet) on AUC, and 0.8% (GCN), 0.8% (GSAGE), and 1.8% (JKNet) on AP. We note that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@50. Specifically, compared with the best baseline, CFLP improves relatively by 6.8% and 0.9% on Hits@50 and AP, respectively. Ablation Study on Losses For the ablative studies of LCF (Eq. (9)) and Ldisc (Eq. (10)), we show their effect by removing them from the integrated loss function (Eq. (11)). Table 8 shows the results of CFLP on CORA and CITESEER under different settings (α = 0, β = 0, α = β = 0, and original setting). We observe that CFLP in the original setting achieves the best performance. The performance drops significantly when having α = 0, i.e., not using any counterfactual data during training. We note that having β = 0, i.e., not using the discrepancy loss, also lowers the performance. Therefore, both LCF and Ldisc are essential for improving the link prediction performance. Ablation Study on Node Embedding X̃ As the node embedding X̃ is used in the early step of CFLP for finding the counterfactual links, the quality of X̃ may affect the later learning process. Therefore, we also evaluate CFLP with different state-of-the-art unsupervised graph representation learning methods: MVGRL (Hassani & Khasahmadi, 2020), DGI (Velickovic et al., 2019), and GRACE (Zhu et al., 2020). Table 9 shows the link prediction performance of CFLP (w/ JKNet) on CORA and CITESEER with different node embeddings. We observe that the choice of the method for learning X̃ does have an impact on the later learning process as well as the link prediction performance. Nevertheless, Table 9 shows CFLP’s advantage can be consistently observed with different choices of methods for learning X̃, as CFLP with X̃ learned from all three methods showed promising link prediction performance. Sensitivity Analysis of α and β Figure 4 shows the AUC performance of CFLP on CORA with different combinations of α and β. We observe that the performance is the poorest when α = β = 0 and gradually improves and gets stable as α and β increase, showing that CFLP is generally robust to the hyperparameters α and β, and the optimal values are easy to locate. 
Sensitivity Analysis of γ Figure 5 shows the Hits@20 and AUC performance on link prediction of CFLP (with JKNet) on CORA and CITESEER with different treatments and γpct. We observe that the performance is generally good when 10 ≤ γpct ≤ 20 and gradually get worse when the value of γpct is too small or too large, showing that CFLP is robust to γ and the optimal γ is easy to find. Sensitivity to Noisy Data We note that robustness w.r.t. noisy data is not within our claim of technical contributions. Nevertheless, CFLP is not more vulnerable than other link prediction baselines. We conduct experiments with random attacks on the Cora dataset (randomly removing links and adding noisy links). Table 10 shows the AUC performances of our proposed CFLP (w/ JKNet) compared to the strongest baseline methods under different levels of random attacks (0%, 2%, 5%, and 10%). We can observe that as the attack strength goes up, the link prediction performance of all methods go down. We also note that our proposed CFLP still outperforms the baseline methods. Generalization to Graphs with Weighted Edges As our proposed CFLP uses GNN as the graph encoder and GNNs are usually able to take weighted graph as input (e.g., the adjacency matrix A for GCN can be weighted), the model should be able to handle weighted graphs as given. Note the link prediction losses (Eqs. (8) and (9)) need to be slightly modified considering the task. When the task is to predict the link existence, the label adjacency matrix used in Eqs. (8) and (9) must be of binary values. When the task is to predict the link weights, the BCE loss functions (Eqs. (8) and (9)) need to be changed to regression loss functions such as MSE.
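A small sketch of the loss swap just described for predicting link weights on weighted graphs, following the interfaces of the earlier sketches (all names are illustrative assumptions):

```python
import torch.nn.functional as F

# Replace the BCE terms of Eqs. (8)-(9) with a regression loss on edge weights; w_f and
# w_cf are the factual and counterfactual edge-weight targets for the sampled pairs.
def weighted_losses(decoder, Z, pairs, t_f, t_cf, w_f, w_cf):
    _, pred_f = decoder(Z, pairs, t_f)     # predicted weights under the factual treatments
    _, pred_cf = decoder(Z, pairs, t_cf)   # predicted weights under the counterfactual treatments
    return F.mse_loss(pred_f, w_f), F.mse_loss(pred_cf, w_cf)
```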
1. What is the focus of the paper regarding network analysis?
2. What are the strengths of the proposed approach, particularly in its novelty and technical soundness?
3. Do you have any questions or concerns regarding the method's equation?
4. Can the method be generalized to handle continuous-valued network data?
5. How does the method perform when dealing with missing or noisy link data?
6. Have the authors considered using PR scores to evaluate link prediction performance?
7. Are there any typos or minor errors in the paper that should be addressed?
Summary Of The Paper
This paper targets an interesting question in network analysis: what are the main causes leading to the creation of a link between a pair of nodes in a given network observation. Most previous related models assume a latent structure underlying observed network data, and suppose the creation of each link is driven by the latent structure behind the two nodes associated with that link. Hence, the main causal effect resulting in the link between two nodes may not be truly reflected by the underlying community structure. To address this concern, the paper resorts to a counterfactual learning framework by learning counterfactual links between the most similar node pairs with a different treatment. Experimental results demonstrate the improved link prediction of the counterfactual graph learning method compared with latent-community-based methods.

Review
Strengths:
- Novelty: to me, the new perspective and the main idea are novel in the context of link prediction.
- Correctness: to my knowledge, the idea and technical contribution look sound.

Weaknesses:
- No obvious weaknesses were found in this paper.
- It is not clear why T^CF, A^CF = T, A otherwise in Eq. (4).
- It is not easy to digest "...ITE of neighborhood assignment as 1-1=0..." for someone without a causality background!

Additional questions:
- Can you discuss how to generalize your method to continuous-valued network data in the supplement? Say, A_ij ∈ [0,1] represents the connection strength between nodes i and j.
- If possible, can you discuss or even conduct additional experiments to demonstrate the robustness of your method to missing (false zeros) and noisy link data, which are prevalent in large-scale networks?
- Besides the AUROC score, have you considered the PR score to evaluate the link prediction performance, as the PR score is only sensitive to non-zero links?

Typos:
- pp. 1: "only were" -> "are"
- Algo 1: "model inferencing" -> "inference"
- many places: "causal model" -> "causal model(s)"
Generally, the questions can be described as “would the link exist or not if the graph structure became different?” As known to many, counterfactual question is a key component of causal inference and have been well defined in the literature. A counterfactual question is usually framed with three factors: context (as a data point), manipulation (e.g., treatment, intervention, action, strategy), and outcome (van der Laan & Petersen, 2007; Johansson et al., 2016). (To simplify the language, we use “treatment” to refer to the manipulation in this paper, as readers might be familiar more with the word “treatment.”) Given certain data context, it asks what the outcome would have been if the treatment had not been the observed value. In the scenario of link prediction, we consider the information of a pair of nodes as context, graph structural properties as treatment, and link existence as outcome. Recall the social network example. The context is the representations of Alice and Adam that are learned from their personal attributes and relationships with others on the network. The treatment is living in the same neighborhood, which can be identified by community detection. And the outcome is their friendship. In this work, we present a counterfactual graph learning method for link prediction (CFLP) that trains graph learning models to answer the counterfactual questions. Figure 1 illustrates this twostep method. Suppose the treatment variable is defined as one type of global graph structure, e.g., the neighborhood assignment discovered by spectral clustering or community detection algorithms. We are wondering how likely the neighborhood distribution makes a difference on the link (non)existence for each pair of nodes. So, given a pair of nodes (like Alice and Adam) and the treatment value on this pair (in the same neighborhood), we find a pair of nodes (like Helen and Bob) that satisfies two conditions: (1) it has a different treatment (in different neighborhoods) and (2) it is the most similar pair with the given pair of nodes. We call these matched pair of nodes as “counterfactual links.” Note that the outcome of the counterfactual link can be either 1 or 0, depending on whether there exists an edge between the matched pair of nodes (Helen and Bob). The counterfactual link provides unobserved outcome to the given pair of nodes (Alice and Adam) under a counterfactual condition (in different neighborhoods). After counterfactual links are created for all (positive and negative) training examples, CFLP trains a link predictor (which can be GNN-based) to learn the representation vectors of nodes to predict both the observed factual links and the counterfactual links. In this Alice-Adam example, the link predictor is trained to estimate the individual treatment effect (ITE) of neighborhood assignment as 1 − 1 = 0, where ITE is a metric for the effect of treatment on the outcome and zero indicates the given treatment has no effect on the outcome. So, the learner will try to discover the essential factors on the friendship between Alice and Adam. CFLP leverages causal models to find these factors for graph learning models to accurately predict missing links. Contributions. Our main contributions can be summarized as follows. (1) This is the first work that proposes to improve link prediction by causal inference, specifically, learning to answer counterfactual questions about link existence. 
(2) This work introduces CFLP that trains GNN-based link predictors to predict both factual and counterfactual links. It learns the causal relationship between global graph structure and link existence. (3) CFLP outperforms competitive baseline methods on several benchmark datasets. We analyze the impact of counterfactual links as well as the choice of treatment variable. This work sheds insights for improving graph machine learning with causal analysis, which has not been extensively studied yet, when the other direction (machine learning for causal inference) has been studied for a long time. 2 PROBLEM DEFINITION Notations Let G = (V, E) be an undirected graph of N nodes, where V = {v1, v2, . . . , vN} is the set of nodes and E ⊆ V × V is the set of observed links. We denote the adjacency matrix as A ∈ {0, 1}N×N , whereAi,j = 1 indicates nodes vi and vj are connected and vice versa. We denote the node feature matrix as X ∈ RN×F , where F is the number of node features and xi (bolded) indicates the feature vector of node vi (the i-th row of X). In this work, we follow the commonly accepted problem definition of link prediction on graph data (Zhang & Chen, 2018; Zhang et al., 2020a; Cai et al., 2021): Given an observed graph G (with validation and testing links masked off), predict the link existence between every pair of nodes. More specifically, for the GNN-based link prediction methods, they learn low-dimensional node representations Z ∈ RN×H , where H is the dimensional size of latent space such that H F , and then use Z for the prediction of link existence. 3 PROPOSED METHOD 3.1 IMPROVING GRAPH LEARNING WITH CAUSAL MODEL Z ?T Y Z? T Y Treatment Effect Estimation Graph Representation Learning Figure 2: Causal modeling (not the target of our work but related to the idea we propose): Given Z and observed outcomes, find treatment effect of T on Y . Z ?T Y Z? T Y Treatment Effect Estimation Graph Representation Learning Figure 3: Graph learning with causal model (the proposed idea): leverage the estimated ATE(Y |T ) to improve the learning of Z. Leveraging Causal Model(s) Counterfactual causal inference aims to find out the causal relationship between treatment and outcomes by asking the counterfactual questions such as ”would the outcome be different if the treatment was different?” (Morgan & Winship, 2015). Figure 2 is a typical example, in which we denote the context (confounder) as Z, treatment as T , and the outcome as Y . Given the context, treatments, and their corresponding outcomes, counterfactual inference methods aim to find the effect of treatment on the outcome, which is usually measured by individual treatment effect (ITE) and its expectation averaged treatment effect (ATE) (van der Laan & Petersen, 2007; Weiss et al., 2015). For a binary treatment variable T = {0, 1}, denoting g(z, T ) as the outcome of z given the treatment T , we have ITE(z) = g(z, 1)− g(z, 0), and ATE = Ez∼Z ITE(z). Ideally, we need all possible outcomes of the contexts under all kinds of treatments to study the causal relationships (Morgan & Winship, 2015). However, in reality, the fact that we can only observe one potential outcome under one particular treatment prevents the ITE from being known (Johansson et al., 2016). Traditional causal inference methods use statisti- cal learning approaches such as Neyman–Rubin casual model (BCM) and propensity score matching (PSM) to predict the value of ATE (Rubin, 1974; 2005). 
In this work, we look at link prediction with graph learning, which is essentially learning the best node representations Z for the prediction of link existence. Therefore, as shown in Figure 3, where the outcome Y is the link existence, the objective is different from classic causal inference. In graph learning, we can estimate the effect of treatment on the outcome (ATE(Y |T )), and we want to improve the learning of Z with the estimation. More specifically, in graph learning for link prediction, for each pair of nodes (vi, vj), its ITE can be estimated with ITE(vi,vj) = g((zi, zj), 1)− g((zi, zj), 0) (1) and we use this information to improve the learning of Z, i.e., P (Z|Y ). We denote the observed adjacency matrix as the factual outcomes A and the unobserved adjacency matrix when the treatment is different as the counterfactual outcomes ACF . We denote T ∈ {0, 1}N×N as the binary factual treatment matrix, where Ti,j indicates the treatment of the node pair (vi, vj). We denote TCF as the counterfactual treatment matrix where TCFi,j = 1 − Ti,j . We are interested in (1) estimating the counterfactual outcomes ACF via observed data, (2) learning with the counterfactual adjacency matrix ACF to enhance link prediction, and (3) learning the causal relationship between graph structural information (treatment) and link existence (outcome). Treatment Variable Previous work on graph machine learning (Velickovic et al., 2019; Park et al., 2020) showed that the graph’s global structural information could improve the quality of representation vectors of nodes learned by GNNs. This is because the message passing-based GNNs aggregate local information in the algorithm of representation vector generation and the global structural information is complementary with the aggregated information. Therefore, for a pair of nodes, one option of defining the treatment variable is its global structural role in the graph. Without the loss of generality, we use Louvain (Blondel et al., 2008), an unsupervised approach that has been widely used for community detection, as an example. Louvain discovers community structure of a graph and assigns each node to one community. Then we can define the binary treatment variable as whether these two nodes in the pair belong to the same community. Let c : V → N be any graph mining/clustering method that outputs the index of community/cluster/neighborhood that each node belongs to. The treatment matrix T is defined as Ti,j = 1 if c(vi) = c(vj), and Ti,j = 0 otherwise. For the choice of c, we suggest methods that group nodes based on global graph structural information, including but not limited to Louvain (Blondel et al., 2008), K-core (Bader & Hogue, 2003), and spectral clustering (Ng et al., 2001). 3.2 COUNTERFACTUAL LINKS To implement the solution based on above idea, we propose counterfactual links. As aforementioned, for each node pair, the observed data contains only the factual treatment and outcome, meaning that the link existence for the given node pair with an opposite treatment is unknown. Therefore, we use the outcome from the nearest observed context as a substitute. This type of matching on covariates is widely used to estimate treatment effects from observational data (Johansson et al., 2016; Alaa & Van Der Schaar, 2019). That is, we want to find the nearest neighbor with the opposite treatment for each observed node pairs and use the nearest neighbor’s outcome as a counterfactual link. 
Formally, ∀(vi, vj) ∈ V × V , we want to find its counterfactual link (va, vb) as below: (va, vb) = arg min va,vb∈V {h((vi, vj), (va, vb)) | Ta,b = 1− Ti,j}, (2) where h(·, ·) is a metric of measuring the distance between a pair of node pairs (a pair of contexts). Nevertheless, finding the nearest neighbors by computing the distance between all pairs of node pairs is extremely inefficient and infeasible in application, which takes O(N4) comparisons (as there are totally O(N2) node pairs). Hence we implement Eq. (2) using node-level embeddings. Specifically, considering that we want to find the nearest node pair based on not only the raw node features but also structural features, we take the state-of-the-art unsupervised graph representation learning method MVGRL (Hassani & Khasahmadi, 2020) to learn the node embeddings X̃ ∈ RN×F̃ from the observed graph (with validation and testing links masked off). We use X̃ to find the nearest neighbors of node pairs. Therefore, ∀(vi, vj) ∈ V × V , we define its counterfactual link (va, vb) as (va, vb) = arg min va,vb∈V {d(x̃i, x̃a) + d(x̃j , x̃b) | Ta,b = 1− Ti,j , d(x̃i, x̃a) + d(x̃j , x̃b) < 2γ}, (3) where d(·, ·) is specified as the Euclidean distance on the embedding space of X̃, and γ is a hyperparameter that defines the maximum distance that two nodes are considered as similar. When no node pair satisfies the above equation (i.e., there does not exist any node pair with opposite treatment that is close enough to the target node pair), we do not assign any nearest neighbor for the given node pair to ensure all the neighbors are similar enough (as substitutes) in the feature space. Thus, the counterfactual treatment matrix TCF and the counterfactual adjacency matrix ACF are defined as TCFi,j , A CF i,j = { 1− Ti,j , Aa,b , if ∃ (va, vb) ∈ V × V satisfies Eq. (3); Ti,j , Ai,j , otherwise. (4) It is worth noting that the node embeddings X̃ and the nearest neighbors are computed only once and do not change during the learning process. X̃ is only used for finding the nearest neighbors. We also note that X̃ must be structural embeddings rather than positional embeddings (as defined in (Srinivasan & Ribeiro, 2020)). Learning from Counterfactual Distributions Let PF be the factual distribution of the observed contexts and treatments, and PCF be the counterfactual distribution that is composed of the observed contexts and opposite treatments. We define the empirical factual distribution P̂F ∼ PF as P̂F = {(vi, vj , Ti,j)}Ni,j=1, and define the empirical counterfactual distribution P̂CF ∼ PCF as P̂CF = {(vi, vj , TCFi,j )}Ni,j=1. Unlike traditional link prediction methods that take only P̂F as input and use the observed outcomes A as the training target, the idea of counterfactual graph learning is to take advantage of the counterfactual distribution by having P̂CF as a complementary input and use the counterfactual outcomes ACF as the training target for the counterfactual data samples. 3.3 THE COUNTERFACTUAL GRAPH LEARNING MODEL In this subsection, we present the design of our model as well as the training method. The input of the model in CFLP includes (1) the observed graph data A and raw feature matrix X, (2) the factual treatments TF and counterfactual treatments TCF , and (3) the counterfactual graph data ACF . The output contains link prediction logits in  and ÂCF for the factual and counterfactual adjacency matrices A and ACF , respectively. Graph Learning Model The model consist of two trainable components: a graph encoder f and a link decoder g. 
The graph encoder generates representation vectors of nodes from graph data G. And the link decoder projects the representation vectors of node pairs into the link prediction logits. The choice of the graph encoder f can be any end-to-end GNN model. Without the loss of generality, here we use the commonly used graph convolutional network (GCN) (Kipf & Welling, 2016a). Each layer of GCN is defined as H(l) = f (l)(A,H(l−1);W(l)) = σ(D̃− 1 2 ÃD̃− 1 2H(l−1)W(l)), (5) where l is the layer index, à = A + I is the adjacency matrix with added self-loops, D̃ is the diagonal degree matrix D̃ii = ∑ j Ãij , H (0) = X, W(l) is the learnable weight matrix at the l-th layer, and σ(·) denotes a nonlinear activation such as ReLU. We denote Z = f(A,X) ∈ RN×H as the output from the encoder’s last layer, i.e., the H-dimensional representation vectors of nodes. Following previous work (Zhang et al., 2020a), we compute the representation of a node pair as the Hadamard product of the vectors of the two nodes. That is, the representation for the node pair (vi, vj) is zi zj ∈ RH , where stands for the Hadamard product. For the link decoder that predicts whether a link exists between a pair of nodes, we opt for simplicity and adopt a simple decoder based on multi-layer perceptron (MLP), given the representations of node pairs and their treatments. That is, the decoder g is defined as  = g(Z,T), where Âi,j = MLP([zi zj , Ti,j ]), (6) ÂCF = g(Z,TCF ), where ÂCFi,j = MLP([zi zj , TCFi,j ]), (7) where [·, ·] stands for the concatenation of vectors, and  and ÂCF can be used for estimating the observed ITE as aforementioned in Eq. (1). During the training process, data samples from the empirical factual distribution P̂F and the empirical counterfactual distribution P̂CF are fed into decoder g and optimized towards A and ACF , respectively. That is, for the two distributions, the loss functions are as follows: LF = 1 N2 N∑ i=1 N∑ j=1 Ai,j · log Âi,j + (1−Ai,j) · log(1− Âi,j), (8) LCF = 1 N2 N∑ i=1 N∑ j=1 ACFi,j · log ÂCFi,j + (1−ACFi,j ) · log(1− ÂCFi,j ). (9) Balancing Counterfactual Learning In the training process, the above loss minimizations train the model on both the empirical factual distribution P̂F ∼ PF and empirical counterfactual distribution P̂CF ∼ PCF that are not necessarily equal – the training examples (node pairs) do not have to be aligned. However, at the stage of inference, the test data contains only observed (factual) samples. Such a gap between the training and testing data distributions exposes the model in the risk of covariant shift, which is a common issue in counterfactual learning (Johansson et al., 2016; Assaad et al., 2021). To force the distributions of representations of factual distributions and counterfactual distributions to be similar, we use the discrepancy distance (Mansour et al., 2009; Johansson et al., 2016) as another objective to regularize the representation learning. That is, we use the following loss term to minimize the distance between the learned representations from P̂F and P̂CF : Ldisc = disc(P̂Ff , P̂CFf ), where disc(P,Q) = ||P −Q||F , (10) where || · ||F denotes the Frobenius Norm, and P̂Ff and P̂CFf denote the node pair representations learned by graph encoder f from factual distribution and counterfactual distribution, respectively. That is, the learned representations for (vi, vj , Ti,j) and (vi, vj , TCFi,j ) are [zi zj , Ti,j ] (Eq. (6)) and [zi zj , TCFi,j ] (Eq. (7)), respectively. 
Training   During the training of CFLP, we want the model to be optimized towards three targets: (1) accurate link prediction on the observed outcomes (Eq. (8)), (2) accurate estimation of the counterfactual outcomes (Eq. (9)), and (3) regularization of the representation spaces learned from P̂^F and P̂^CF (Eq. (10)). Therefore, the overall training loss of our proposed CFLP is

L = L_F + α · L_{CF} + β · L_{disc},   (11)

where α and β are hyperparameters that control the weights of the counterfactual outcome estimation (link prediction) loss and the discrepancy loss.

Algorithm 1: CFLP: Counterfactual graph learning for link prediction
Input: f, g, A, X, n_epochs, n_epochs_ft
 1  Compute T as presented in Section 3.1;
 2  Compute T^CF, A^CF by Eqs. (3) and (4);
    /* model training */
 3  Initialize Θ_f in f and Θ_g in g;
 4  for epoch in range(n_epochs) do
 5      Z = f(A, X);
 6      Get Â and Â^CF via g with Eqs. (6) and (7);
 7      Update Θ_f and Θ_g with L;   // Eq. (11)
 8  end
    /* decoder fine-tuning */
 9  Freeze Θ_f and re-initialize Θ_g;
10  Z = f(A, X);
11  for epoch in range(n_epochs_ft) do
12      Get Â via g with Eq. (6);
13      Update Θ_g with L_F;   // Eq. (8)
14  end
    /* model inference */
15  Z = f(A, X);
16  Get Â and Â^CF via g with Eqs. (6) and (7);
Output: Â for link prediction, Â^CF

Summary   Algorithm 1 summarizes the whole process of CFLP. The first step is to compute the factual and counterfactual treatments T, T^CF as well as the counterfactual outcomes A^CF. Then, the second step trains the graph learning model on both the observed factual data and the created counterfactual data with the integrated loss function (Eq. (11)). Note that the discrepancy loss (Eq. (10)) is computed on the representations of node pairs learned by the graph encoder f, so the decoder g is trained with data from both P̂^F and P̂^CF without the balancing constraint. Therefore, after the model is sufficiently trained, we freeze the graph encoder f and fine-tune g with only the factual data. Finally, after the decoder is sufficiently fine-tuned, we output the link prediction logits for both the factual and counterfactual adjacency matrices.

Complexity   The complexity of the first step (finding counterfactual links with nearest neighbors) is proportional to the number of node pairs. When γ is set to a small value so that only truly similar node pairs are matched, this step (Eq. (3)) takes roughly constant time per node pair. Moreover, the computation in Eq. (3) can be parallelized. Therefore, the time complexity is O(N^2/C), where C is the number of processes. For the complexity of the second step (training the counterfactual learning model), the GNN encoder has a time complexity of O(LH^2N + LH|E|) (Wu et al., 2020), where L is the number of GNN layers, H is the size of the node representations, and |E| is the number of observed links. Given that we sample the same number of non-existing links as that of observed links during training, the complexity of a three-layer MLP decoder is O(((H + 1) · d_h + d_h · 1)|E|) = O(d_h(H + 2)|E|), where d_h is the number of neurons in the hidden layer. Therefore, the second step has linear time complexity w.r.t. the sum of node and edge counts.

Limitations   First, as mentioned above, the computation of finding counterfactual links has a worst-case complexity of O(N^2). Second, CFLP performs counterfactual prediction with only a single treatment; however, there are quite a few kinds of graph structural information that can be considered as treatments. Future work can leverage the rich structural information by bundled treatments (Zou et al., 2020) in counterfactual graph learning.
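A compact training-loop sketch following Algorithm 1 is shown below, reusing the DenseGCN, PairDecoder, and cflp_losses pieces from the previous sketch. The tensors pairs, y_f, y_cf, t_f, and t_cf (sampled node pairs with their factual/counterfactual labels and treatments) are assumed to be prepared in advance, and the hyperparameter defaults are placeholders rather than the values used in the paper.

import torch

def train_cflp(enc, dec, A, X, pairs, y_f, y_cf, t_f, t_cf,
               alpha=1.0, beta=1.0, lr=1e-2, n_epochs=200, n_epochs_ft=50):
    # Joint training on the factual and counterfactual samples (Algorithm 1, lines 3-8).
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(n_epochs):
        Z = enc(A, X)
        logit_f, rep_f = dec(Z, pairs, t_f)
        logit_cf, rep_cf = dec(Z, pairs, t_cf)
        l_f, l_cf, l_disc = cflp_losses(logit_f, y_f, logit_cf, y_cf, rep_f, rep_cf)
        loss = l_f + alpha * l_cf + beta * l_disc               # Eq. (11)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Decoder fine-tuning on factual data only (Algorithm 1, lines 9-14).
    for p in enc.parameters():
        p.requires_grad_(False)                                 # freeze the encoder
    for m in dec.modules():
        if isinstance(m, torch.nn.Linear):
            m.reset_parameters()                                # re-initialize the decoder
    opt_ft = torch.optim.Adam(dec.parameters(), lr=lr)
    Z = enc(A, X).detach()
    for _ in range(n_epochs_ft):
        logit_f, _ = dec(Z, pairs, t_f)
        loss_f = torch.nn.functional.binary_cross_entropy_with_logits(logit_f, y_f)  # Eq. (8)
        opt_ft.zero_grad()
        loss_f.backward()
        opt_ft.step()
    return enc, dec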
4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

We conduct experiments on five benchmark datasets including citation networks (CORA, CITESEER, PUBMED (Yang et al., 2016)), a social network (FACEBOOK (McAuley & Leskovec, 2012)), and a drug-drug interaction network (OGB-DDI (Wishart et al., 2018)) from the Open Graph Benchmark (OGB) (Hu et al., 2020). For the first four datasets, we randomly select 10%/20% of the links and the same numbers of disconnected node pairs as validation/test samples. The links in the validation and test sets are masked off from the training graph. For OGB-DDI, we used the OGB official train/validation/test splits. Statistics and details for the datasets are given in the Appendix. We use K-core (Bader & Hogue, 2003) clusters as the default treatment variable. We evaluate CFLP on three commonly used GNN encoders: GCN (Kipf & Welling, 2016a), GSAGE (Hamilton et al., 2017), and JKNet (Xu et al., 2018). We compare the link prediction performance of CFLP against Node2Vec (Grover & Leskovec, 2016), MVGRL (Hassani & Khasahmadi, 2020), VGAE (Kipf & Welling, 2016b), SEAL (Zhang & Chen, 2018), LGLP (Cai et al., 2021), and GNNs with an MLP decoder. We report averaged test performance and its standard deviation over 20 runs with different random parameter initializations. In addition to the most commonly used metric, Area Under the ROC Curve (AUC), we report Hits@20 (one of the primary metrics on the OGB leaderboard) as a more challenging metric, as it expects models to rank positive edges higher than nearly all negative edges.

Besides the performance comparison on link prediction, we will answer two questions to suggest a way of choosing a treatment variable for creating counterfactual links: (Q1) Does CFLP sufficiently learn the observed averaged treatment effect (ATE) derived from the counterfactual links? (Q2) What is the relationship between the estimated ATE learned by the method and the prediction performance? If the answer to Q1 is yes, then the answer to Q2 will indicate how to choose a treatment based on the observed ATE. To answer Q1, we calculate the observed ATE (ÂTE_obs) by comparing the observed links in A and the created counterfactual links A^CF that have opposite treatments. And we calculate the estimated ATE (ÂTE_est) by comparing the predicted links in Â and the predicted counterfactual links Â^CF. Formally, ÂTE_obs and ÂTE_est are defined as

\widehat{ATE}_{obs} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \big\{ T \odot (A - A^{CF}) + (1_{N\times N} - T) \odot (A^{CF} - A) \big\}_{i,j},   (12)

\widehat{ATE}_{est} = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \big\{ T \odot (\hat{A} - \hat{A}^{CF}) + (1_{N\times N} - T) \odot (\hat{A}^{CF} - \hat{A}) \big\}_{i,j}.   (13)

The treatment variables we investigate are derived from graph clustering or community detection methods, such as K-core (Bader & Hogue, 2003), the stochastic block model (SBM) (Karrer & Newman, 2011), spectral clustering (SpecC) (Ng et al., 2001), propagation clustering (PropC) (Raghavan et al., 2007), Louvain (Blondel et al., 2008), common neighbors (CommN), the Katz index, and hierarchical clustering (Ward) (Ward Jr, 1963). We use JKNet (Xu et al., 2018) as the default graph encoder. Implementation details and supplementary experimental results (e.g., sensitivity on γ, ablation study on L_CF and L_disc) can be found in the Appendix. Source code is available in the supplementary material.
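For clarity, the two ATE quantities in Eqs. (12)–(13) reduce to a single NumPy expression; the sketch below is illustrative, with A_hat and A_cf_hat standing for the model's predicted factual and counterfactual adjacency matrices.

import numpy as np

def average_treatment_effect(T, outcome, outcome_cf):
    """Mean over all node pairs of (outcome under treatment 1) - (outcome under treatment 0)."""
    ones = np.ones_like(T, dtype=float)
    ate = T * (outcome - outcome_cf) + (ones - T) * (outcome_cf - outcome)  # elementwise products
    return ate.mean()                                                       # the 1/N^2 double sum

# ATE_obs = average_treatment_effect(T, A, A_cf)          # Eq. (12): observed vs. counterfactual links
# ATE_est = average_treatment_effect(T, A_hat, A_cf_hat)  # Eq. (13): predicted factual vs. counterfactual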
4.2 EXPERIMENTAL RESULTS

Link Prediction   Tables 1 and 2 show the link prediction performance in terms of Hits@20 and AUC for all methods. LGLP results on PUBMED and OGB-DDI are missing due to out-of-memory errors when running the authors' code package. We observe that CFLP with different graph encoders achieves similar or better performance compared with the baselines. The only exception is the AUC on FACEBOOK, where most methods have close-to-perfect AUC. As AUC is a relatively easier metric compared with Hits@20, most methods achieved good performance on AUC. We observe that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@20. Specifically, compared with the best baseline, CFLP improves relatively by 16.4% and 0.8% on Hits@20 and AUC, respectively. Compared with the best-performing baselines, which are also GNN-based, CFLP benefits from learning with both the observed link existence (A) and our defined counterfactual links (A^CF).

ATE with Different Treatments   Tables 3 and 4 show the link prediction performance, ÂTE_obs, and ÂTE_est of CFLP (with JKNet) when using different treatments. The treatments in Tables 3 and 4 are sorted by the Hits@20 performance. A bigger ATE indicates a stronger causal relationship between the treatment and the outcome, and vice versa. We observe: (1) the rankings of ÂTE_est and ÂTE_obs are positively correlated, with Kendall's ranking coefficient (Abdi, 2007) of 0.67 and 0.57 for CORA and CITESEER, respectively. Hence, CFLP was sufficiently trained to learn the causal relationship between graph structure information and link existence; (2) ÂTE_obs and ÂTE_est are both negatively correlated with the link prediction performance, showing that we can pick a proper treatment prior to training a model with CFLP. Using the treatment that has the weakest causal relationship with link existence is likely to train the model to capture more essential factors of the outcome, in a way similar to denoising the unrelated information from the representations.

5 RELATED WORK

Link Prediction   With its wide applications, link prediction has drawn attention from many research communities, including statistical machine learning and data mining. Stochastic generative methods based on stochastic block models (SBM) have been developed to generate links (Mehta et al., 2019). In data mining, matrix factorization (Menon & Elkan, 2011), heuristic methods (Philip et al., 2010; Martínez et al., 2016), and graph embedding methods (Cui et al., 2018) have been applied to predict links in the graph. Heuristic methods compute the similarity score of nodes based on their neighborhoods. These methods can be generally categorized into first-order, second-order, and high-order heuristics based on the maximum distance of the neighbors. Graph embedding methods learn latent node features via embedding lookup and use them for link prediction (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016). In the past few years, GNNs have shown promising results on various graph-based tasks with their ability to learn from features and custom aggregations on structures (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020; Cotta et al., 2021). With node pair representations and an attached MLP or inner-product decoder, GNNs can be used for link prediction (Zhang et al., 2020a; Davidson et al., 2018; Yang et al., 2018). For example, VGAE used GCN to learn node representations and reconstruct the graph structure (Kipf & Welling, 2016b). SEAL extracted a local subgraph around each target node pair and then learned a graph representation from the local subgraph for link prediction (Zhang & Chen, 2018). Following the scheme of SEAL, Cai & Ji (2020) proposed to improve local subgraph representation learning by multi-scale graph representation learning.
LGLP converted the local subgraphs to line graphs before learning representations (Cai et al., 2021). However, very limited work has studied the use of causal inference for improving link prediction.

Counterfactual Prediction   As a means of learning the causality between treatment and outcome, counterfactual prediction has been used for a variety of applications such as recommender systems (Wang et al., 2020; Xu et al., 2020), health care (Alaa & van der Schaar, 2017; Pawlowski et al., 2020), vision-language tasks (Zhang et al., 2020b; Parvaneh et al., 2020), and decision making (Coston et al., 2020; Pitis et al., 2020; Kusner et al., 2017). To infer the causal relationships, previous work usually estimated the ITE via function-fitting models (Gelman & Hill, 2006; Chipman et al., 2010; Wager & Athey, 2018; Assaad et al., 2021). Peysakhovich et al. (2019) and Zou et al. (2020) studied counterfactual prediction with multiple agents and bundled treatments, respectively.

Causal Inference   Causal inference methods usually re-weighted samples based on the propensity score (Rosenbaum & Rubin, 1983; Austin, 2011) to remove confounding bias from binary treatments. Recently, several works have studied learning treatment-invariant representations to predict counterfactual outcomes (Shalit et al., 2017; Li & Fu, 2017; Yao et al., 2018; Yoon et al., 2018; Hassanpour & Greiner, 2019a;b; Bica et al., 2020). A few recent works have combined causal inference with graph learning (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). For example, Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, to study the effect of link creation on network structure changes. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations, which alleviates the train-test gap when learning from OOD data.

6 CONCLUSION AND FUTURE WORK

In this work, we presented a counterfactual graph learning method for link prediction (CFLP). We introduced the idea of counterfactual prediction to improve link prediction on graphs. CFLP accurately predicted the missing links by exploring the causal relationship between global graph structure and link existence. Extensive experiments demonstrated that CFLP achieved state-of-the-art performance on benchmark datasets. This work offers the insight that a good use of causal models (even basic ones) can greatly improve the performance of (graph) machine learning tasks, which in our case is link prediction. We note that the use of more sophisticated causal models may lead to larger improvements for other machine learning tasks, which could be a valuable future research direction for the community. Other than our use of global graph structure as the treatment, other treatment choices (with both empirical and theoretical analyses) are also worth exploring. Moreover, as CFLP first generates counterfactual links and then learns from both observed and counterfactual link existence, the underlying philosophy of our methodology could be considered as graph data augmentation. Therefore, investigating the relationship between counterfactual graph learning and graph data augmentation is also a possible future research direction.

A ADDITIONAL DATASET DETAILS

In this section, we provide some additional dataset details. All the datasets used in this work are publicly available.
Statistics for the datasets are shown in Table 5.

Citation Networks   CORA, CITESEER, and PUBMED are citation networks that were first used by Yang et al. (2016) and then commonly used as benchmarks in the GNN-related literature (Kipf & Welling, 2016a; Veličković et al., 2017). In these citation networks, the nodes are published papers and the features are bag-of-words vectors extracted from the corresponding paper. Links represent the citation relation between papers. We loaded the datasets with the DGL1 package.

Social Network   The FACEBOOK dataset2 is a social network constructed from friends lists from Facebook (McAuley & Leskovec, 2012). The nodes are Facebook users and links indicate the friendship relation on Facebook. The node features were constructed from the user profiles and anonymized by McAuley & Leskovec (2012).

Drug-Drug Interaction Network   The OGB-DDI dataset was constructed from a public drug database (Wishart et al., 2018) and provided by the Open Graph Benchmark (OGB) (Hu et al., 2020). Each node in this graph represents an FDA-approved or experimental drug, and edges represent the existence of an unexpected effect when the two drugs are taken together. This dataset does not contain any node features, and it can be downloaded with the dataloader3 provided by OGB.

1 https://github.com/dmlc/dgl
2 https://snap.stanford.edu/data/ego-Facebook.html
3 https://ogb.stanford.edu/docs/linkprop/#data-loader

B EXPANDED RELATED WORK

With the rapid development of graph machine learning in the past few years, researchers have been attempting to relate graph neural networks (GNNs) to causal models. Recently, several works have been proposed to improve graph learning with causal models (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, that is a type of structural intervention in network contexts. Sherman & Shpitser (2020) modeled social networks with a causal DAG and studied the effect of network interventions (link creation and removal) on network structure changes. Lin et al. (2021) formulated the problem of post-hoc explanation generation for GNNs as a causal learning task and proposed a causal explanation model with a loss designed based on Granger causality. Feng et al. (2021) formulated node classification with GNNs using a causal DAG, which estimated the causal effect of the local structure on the prediction and adaptively chose whether to aggregate from the neighbors. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations. They modeled OOD graph classification with a twin-network DAG causal model, which learned approximately environment-invariant graph representations that better extrapolate between train and test data. The last three works, i.e., Lin et al. (2021), Feng et al. (2021), and Bevilacqua et al. (2021), proposed to use causal models to improve the performance of three different types of graph machine learning tasks, namely GNN explanation (subgraph) generation, node-level classification, and graph-level classification. Compared with them, our work has three points of uniqueness. First, to the best of our knowledge, our work makes the first attempt to use a causal model to improve the performance of link prediction, which is also an important graph learning task.
Second, to make the attempt successful, our work presents a novel concept of “counterfactual link” and proposes a novel method, CFLP, that learns from both factual and counterfactual link existence. Third, the proposed method CFLP is flexible with the choice of treatment variables and is able to suggest good treatment choices prior to training via ÂTE_obs.

C DETAILS ON IMPLEMENTATION AND HYPERPARAMETERS

All the experiments in this work were conducted on a Linux server with an Intel Xeon Gold 6130 processor (16 cores @2.1GHz), 96 GB of RAM, and 4 RTX 2080Ti cards (11 GB of RAM each). Our method is implemented in Python 3.8.5 with PyTorch. Source code is available in the supplementary materials. A list of used packages can be found in requirements.txt.

Baseline Methods   For the baseline methods, we use the official code packages from the authors for MVGRL4 (Hassani & Khasahmadi, 2020), SEAL5 (Zhang & Chen, 2018), and LGLP6 (Cai et al., 2021). We use a public implementation for VGAE7 (Kipf & Welling, 2016b) and OGB implementations8 for Node2Vec and the baseline GNNs. For a fair comparison, we set the size of node/link representations to 256 for all methods.

CFLP   We use the Adam optimizer with a simple cyclical learning rate scheduler (Smith, 2017), in which the learning rate varies cyclically between the given learning rate (lr) and 1e-4 every 70 epochs (50 warmup steps and 20 annealing steps). We implement the GNN encoders with torch_geometric9 (Fey & Lenssen, 2019). As with the baselines, we set the size of all hidden layers and node/link representations of CFLP to 256. The graph encoders all have three layers, and JKNet uses mean pooling for the final aggregation layer. The decoder is a 3-layer MLP with a hidden layer of size 64 and ELU as the nonlinearity. As the Euclidean distance used in Eq. (3) has a range of [0, ∞), the value of γ depends on the distribution of all-pair node embedding distances, which varies across datasets. Therefore, we set the value of γ as the γ_pct-th percentile of all-pair node embedding distances. Commands for reproducing the experiments are included in README.md.

Hyperparameter Search Space   We manually tune the following hyperparameters over the ranges: lr ∈ {0.005, 0.01, 0.05, 0.1, 0.2}, α ∈ {0.001, 0.01, 0.1, 1, 2}, β ∈ {0.001, 0.01, 0.1, 1, 2}, γ_pct ∈ {10, 20, 30}.

Treatments   For the graph clustering or community detection methods used as treatments, we use the implementations from scikit-network10 for Louvain (Blondel et al., 2008), SpecC (Ng et al., 2001), PropC (Raghavan et al., 2007), and Ward (Ward Jr, 1963). We used the implementation of K-core (Bader & Hogue, 2003) from networkx.11 We used SBM (Karrer & Newman, 2011) from a public implementation by Funke & Becker (2019).12 For CommN and Katz, we set T_{i,j} = 1 if the number of common neighbors between v_i and v_j is greater than or equal to 2, or if the Katz index between v_i and v_j is greater than or equal to 2 times the average of all Katz index values, respectively. For SpecC, we set the number of clusters to 16. For SBM, we set the number of communities to 16. These settings are fixed for all datasets.
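As a small illustration of how a treatment matrix T is derived from a grouping function c(·), the sketch below builds T from networkx core numbers and from common-neighbor counts. It is only a sketch: the exact cluster assignment used in the paper (e.g., for K-core clusters) may differ from raw core numbers, and the thresholds simply mirror the description above.

import networkx as nx
import numpy as np

def treatment_from_grouping(A):
    """T[i, j] = 1 iff nodes i and j fall in the same group, here given by k-core numbers."""
    G = nx.from_numpy_array(A)
    G.remove_edges_from(nx.selfloop_edges(G))      # core_number requires a graph without self-loops
    core = nx.core_number(G)                       # node -> core index
    labels = np.array([core[i] for i in range(A.shape[0])])
    return (labels[:, None] == labels[None, :]).astype(int)

def treatment_from_common_neighbors(A, thresh=2):
    """CommN treatment: T[i, j] = 1 iff i and j share at least `thresh` common neighbors."""
    common = A @ A                                 # (A @ A)[i, j] counts common neighbors for i != j
    return (common >= thresh).astype(int)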
4 https://github.com/kavehhassani/mvgrl
5 https://github.com/facebookresearch/SEAL_OGB
6 https://github.com/LeiCaiwsu/LGLP
7 https://github.com/DaehanKim/vgae_pytorch
8 https://github.com/snap-stanford/ogb/tree/master/examples/linkproppred/ddi
9 https://pytorch-geometric.readthedocs.io/en/latest/
10 https://scikit-network.readthedocs.io/
11 https://networkx.org/documentation/
12 https://github.com/funket/pysbm

D ADDITIONAL EXPERIMENTAL RESULTS AND DISCUSSIONS

Link Prediction   Tables 6 and 7 show the link prediction performance in terms of Hits@50 and Average Precision (AP) for all methods. LGLP results on PUBMED and OGB-DDI are missing due to out-of-memory errors when running the authors' code package. Similar to the results in Tables 1 and 2, we observe that CFLP with different graph encoders achieves similar or better performance compared with the baselines, with the only exception of AP on FACEBOOK, where most methods have close-to-perfect AP. From Tables 1, 2, 6 and 7, we observe that CFLP achieves improvement over all GNN architectures (averaged across datasets). Specifically, CFLP improves 25.6% (GCN), 12.0% (GSAGE), and 36.3% (JKNet) on Hits@20, 9.6% (GCN), 5.0% (GSAGE), and 17.8% (JKNet) on Hits@50, 5.6% (GCN), 1.6% (GSAGE), and 1.9% (JKNet) on AUC, and 0.8% (GCN), 0.8% (GSAGE), and 1.8% (JKNet) on AP. We note that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@50. Specifically, compared with the best baseline, CFLP improves relatively by 6.8% and 0.9% on Hits@50 and AP, respectively.

Ablation Study on Losses   For the ablative studies of L_CF (Eq. (9)) and L_disc (Eq. (10)), we show their effect by removing them from the integrated loss function (Eq. (11)). Table 8 shows the results of CFLP on CORA and CITESEER under different settings (α = 0, β = 0, α = β = 0, and the original setting). We observe that CFLP in the original setting achieves the best performance. The performance drops significantly when α = 0, i.e., when no counterfactual data is used during training. We note that setting β = 0, i.e., not using the discrepancy loss, also lowers the performance. Therefore, both L_CF and L_disc are essential for improving the link prediction performance.

Ablation Study on Node Embedding X̃   As the node embedding X̃ is used in the early step of CFLP for finding the counterfactual links, the quality of X̃ may affect the later learning process. Therefore, we also evaluate CFLP with different state-of-the-art unsupervised graph representation learning methods: MVGRL (Hassani & Khasahmadi, 2020), DGI (Velickovic et al., 2019), and GRACE (Zhu et al., 2020). Table 9 shows the link prediction performance of CFLP (w/ JKNet) on CORA and CITESEER with different node embeddings. We observe that the choice of the method for learning X̃ does have an impact on the later learning process as well as the link prediction performance. Nevertheless, Table 9 shows that CFLP's advantage can be consistently observed with different choices of methods for learning X̃, as CFLP with X̃ learned from all three methods showed promising link prediction performance.

Sensitivity Analysis of α and β   Figure 4 shows the AUC performance of CFLP on CORA with different combinations of α and β. We observe that the performance is the poorest when α = β = 0 and gradually improves and stabilizes as α and β increase, showing that CFLP is generally robust to the hyperparameters α and β, and the optimal values are easy to locate.
Sensitivity Analysis of γ   Figure 5 shows the Hits@20 and AUC link prediction performance of CFLP (with JKNet) on CORA and CITESEER with different treatments and γ_pct. We observe that the performance is generally good when 10 ≤ γ_pct ≤ 20 and gradually gets worse when the value of γ_pct is too small or too large, showing that CFLP is robust to γ and the optimal γ is easy to find.

Sensitivity to Noisy Data   We note that robustness w.r.t. noisy data is not within our claim of technical contributions. Nevertheless, CFLP is not more vulnerable than other link prediction baselines. We conduct experiments with random attacks on the CORA dataset (randomly removing links and adding noisy links). Table 10 shows the AUC performance of our proposed CFLP (w/ JKNet) compared to the strongest baseline methods under different levels of random attacks (0%, 2%, 5%, and 10%). We can observe that as the attack strength goes up, the link prediction performance of all methods goes down. We also note that our proposed CFLP still outperforms the baseline methods.

Generalization to Graphs with Weighted Edges   As our proposed CFLP uses a GNN as the graph encoder and GNNs are usually able to take a weighted graph as input (e.g., the adjacency matrix A for GCN can be weighted), the model is able to handle weighted graphs as given. Note that the link prediction losses (Eqs. (8) and (9)) need to be slightly modified depending on the task. When the task is to predict link existence, the label adjacency matrix used in Eqs. (8) and (9) must contain binary values. When the task is to predict link weights, the BCE loss functions (Eqs. (8) and (9)) need to be changed to regression loss functions such as MSE.
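A hedged sketch of this loss swap for weighted graphs is shown below; pred are the decoder outputs and target are the observed edge weights (or binary labels), both over sampled node pairs. The helper name link_loss is illustrative, not from the released code.

import torch.nn.functional as F

def link_loss(pred, target, weighted=False):
    """Binary link existence uses BCE (Eqs. (8)-(9)); weighted links use a regression loss (MSE)."""
    if weighted:
        return F.mse_loss(pred, target)                       # predict edge weights
    return F.binary_cross_entropy_with_logits(pred, target)   # predict edge existence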
1. What is the focus and contribution of the paper on link prediction using counterfactual learning?
2. What are the strengths of the proposed approach, particularly its novelty and ease of understanding?
3. What are the weaknesses of the paper, especially regarding the treatment definition and the accuracy of estimated ATEs?
4. Do you have any questions or concerns regarding the paper's methodology or conclusions?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The authors propose an approach for link prediction using counterfactual learning. They define a "treatment" for a node pair as whether or not they belong in the same group (e.g. from running a graph clustering algorithm). They then find the most similar node pair with a different treatment and treat its outcome (whether there is a link) as the counterfactual of the outcome of the original node pair. To find the most similar node pair, they use a graph neural network (GNN) based encoder. They use two link decoders (both multi-layer Perceptrons) to predict both the actual and counterfactual outcomes. They demonstrate impressive improvements on link prediction accuracy on 5 real data sets.

Review
Strengths:
- Highly creative and novel approach, to the best of my knowledge. The closest related work I have seen is to add and remove edges to a graph to try to improve link prediction accuracy, but that is very different to the counterfactual learning approach proposed here.
- Proposed approach is conceptually easy to understand and could lead to many variants in the future.
- Strong empirical performance compared to other recent GNNs.

Weaknesses:
- The proposed counterfactual learning formulation feels like a hammer looking for a nail. The way the treatment is defined seems highly unusual, and there is nothing really principled about it.
- The authors claim that the estimates of average treatment effect (ATE) are generally close to the observed ATE, but I don't see this at all from Tables 3 and 4. The estimated ATEs are close only for a few specific cases. I do agree with the second conclusion about the ranking of estimated ATE being useful to select the treatment, however.

Question regarding a claim from Page 5 section on Complexity: "When γ is set as a small value to obtain indeed similar node pairs, this step (Eq. (3)) uses constant time." Why is this the case? Also, what is | E | ? Is this the total number of edges in the graph?

Typo: Page 3, 4th paragraph: "Traditional causal inference methods hence statistical learning approaches". "hence" should probably be replaced with "use"

After discussion period: I have lowered my score slightly following the discussions but continue to support the paper. Despite the unusual framing as counterfactual learning as compared to data augmentation, the empirical gains are highly impressive. I think it could inspire future research investigating the use of other structural properties of the graph as the "treatment".
Formally, ∀(vi, vj) ∈ V × V , we want to find its counterfactual link (va, vb) as below: (va, vb) = arg min va,vb∈V {h((vi, vj), (va, vb)) | Ta,b = 1− Ti,j}, (2) where h(·, ·) is a metric of measuring the distance between a pair of node pairs (a pair of contexts). Nevertheless, finding the nearest neighbors by computing the distance between all pairs of node pairs is extremely inefficient and infeasible in application, which takes O(N4) comparisons (as there are totally O(N2) node pairs). Hence we implement Eq. (2) using node-level embeddings. Specifically, considering that we want to find the nearest node pair based on not only the raw node features but also structural features, we take the state-of-the-art unsupervised graph representation learning method MVGRL (Hassani & Khasahmadi, 2020) to learn the node embeddings X̃ ∈ RN×F̃ from the observed graph (with validation and testing links masked off). We use X̃ to find the nearest neighbors of node pairs. Therefore, ∀(vi, vj) ∈ V × V , we define its counterfactual link (va, vb) as (va, vb) = arg min va,vb∈V {d(x̃i, x̃a) + d(x̃j , x̃b) | Ta,b = 1− Ti,j , d(x̃i, x̃a) + d(x̃j , x̃b) < 2γ}, (3) where d(·, ·) is specified as the Euclidean distance on the embedding space of X̃, and γ is a hyperparameter that defines the maximum distance that two nodes are considered as similar. When no node pair satisfies the above equation (i.e., there does not exist any node pair with opposite treatment that is close enough to the target node pair), we do not assign any nearest neighbor for the given node pair to ensure all the neighbors are similar enough (as substitutes) in the feature space. Thus, the counterfactual treatment matrix TCF and the counterfactual adjacency matrix ACF are defined as TCFi,j , A CF i,j = { 1− Ti,j , Aa,b , if ∃ (va, vb) ∈ V × V satisfies Eq. (3); Ti,j , Ai,j , otherwise. (4) It is worth noting that the node embeddings X̃ and the nearest neighbors are computed only once and do not change during the learning process. X̃ is only used for finding the nearest neighbors. We also note that X̃ must be structural embeddings rather than positional embeddings (as defined in (Srinivasan & Ribeiro, 2020)). Learning from Counterfactual Distributions Let PF be the factual distribution of the observed contexts and treatments, and PCF be the counterfactual distribution that is composed of the observed contexts and opposite treatments. We define the empirical factual distribution P̂F ∼ PF as P̂F = {(vi, vj , Ti,j)}Ni,j=1, and define the empirical counterfactual distribution P̂CF ∼ PCF as P̂CF = {(vi, vj , TCFi,j )}Ni,j=1. Unlike traditional link prediction methods that take only P̂F as input and use the observed outcomes A as the training target, the idea of counterfactual graph learning is to take advantage of the counterfactual distribution by having P̂CF as a complementary input and use the counterfactual outcomes ACF as the training target for the counterfactual data samples. 3.3 THE COUNTERFACTUAL GRAPH LEARNING MODEL In this subsection, we present the design of our model as well as the training method. The input of the model in CFLP includes (1) the observed graph data A and raw feature matrix X, (2) the factual treatments TF and counterfactual treatments TCF , and (3) the counterfactual graph data ACF . The output contains link prediction logits in  and ÂCF for the factual and counterfactual adjacency matrices A and ACF , respectively. Graph Learning Model The model consist of two trainable components: a graph encoder f and a link decoder g. 
The graph encoder generates representation vectors of nodes from graph data G. And the link decoder projects the representation vectors of node pairs into the link prediction logits. The choice of the graph encoder f can be any end-to-end GNN model. Without the loss of generality, here we use the commonly used graph convolutional network (GCN) (Kipf & Welling, 2016a). Each layer of GCN is defined as H(l) = f (l)(A,H(l−1);W(l)) = σ(D̃− 1 2 ÃD̃− 1 2H(l−1)W(l)), (5) where l is the layer index, à = A + I is the adjacency matrix with added self-loops, D̃ is the diagonal degree matrix D̃ii = ∑ j Ãij , H (0) = X, W(l) is the learnable weight matrix at the l-th layer, and σ(·) denotes a nonlinear activation such as ReLU. We denote Z = f(A,X) ∈ RN×H as the output from the encoder’s last layer, i.e., the H-dimensional representation vectors of nodes. Following previous work (Zhang et al., 2020a), we compute the representation of a node pair as the Hadamard product of the vectors of the two nodes. That is, the representation for the node pair (vi, vj) is zi zj ∈ RH , where stands for the Hadamard product. For the link decoder that predicts whether a link exists between a pair of nodes, we opt for simplicity and adopt a simple decoder based on multi-layer perceptron (MLP), given the representations of node pairs and their treatments. That is, the decoder g is defined as  = g(Z,T), where Âi,j = MLP([zi zj , Ti,j ]), (6) ÂCF = g(Z,TCF ), where ÂCFi,j = MLP([zi zj , TCFi,j ]), (7) where [·, ·] stands for the concatenation of vectors, and  and ÂCF can be used for estimating the observed ITE as aforementioned in Eq. (1). During the training process, data samples from the empirical factual distribution P̂F and the empirical counterfactual distribution P̂CF are fed into decoder g and optimized towards A and ACF , respectively. That is, for the two distributions, the loss functions are as follows: LF = 1 N2 N∑ i=1 N∑ j=1 Ai,j · log Âi,j + (1−Ai,j) · log(1− Âi,j), (8) LCF = 1 N2 N∑ i=1 N∑ j=1 ACFi,j · log ÂCFi,j + (1−ACFi,j ) · log(1− ÂCFi,j ). (9) Balancing Counterfactual Learning In the training process, the above loss minimizations train the model on both the empirical factual distribution P̂F ∼ PF and empirical counterfactual distribution P̂CF ∼ PCF that are not necessarily equal – the training examples (node pairs) do not have to be aligned. However, at the stage of inference, the test data contains only observed (factual) samples. Such a gap between the training and testing data distributions exposes the model in the risk of covariant shift, which is a common issue in counterfactual learning (Johansson et al., 2016; Assaad et al., 2021). To force the distributions of representations of factual distributions and counterfactual distributions to be similar, we use the discrepancy distance (Mansour et al., 2009; Johansson et al., 2016) as another objective to regularize the representation learning. That is, we use the following loss term to minimize the distance between the learned representations from P̂F and P̂CF : Ldisc = disc(P̂Ff , P̂CFf ), where disc(P,Q) = ||P −Q||F , (10) where || · ||F denotes the Frobenius Norm, and P̂Ff and P̂CFf denote the node pair representations learned by graph encoder f from factual distribution and counterfactual distribution, respectively. That is, the learned representations for (vi, vj , Ti,j) and (vi, vj , TCFi,j ) are [zi zj , Ti,j ] (Eq. (6)) and [zi zj , TCFi,j ] (Eq. (7)), respectively. 
Training During the training of CFLP, we want the model to be optimized towards three targets: (1) accurate link prediction on the observed outcomes (Eq. (8)), (2) accurate estimation on the counterfactual outcomes (Eq. (9)), and (3) regularization on the representation spaces learned from P̂F and P̂CF (Eq. (10)). Therefore, the overall training loss of our proposed CFLP is L = LF + α · LCF + β · Ldisc, (11) where α and β are hyperparameters to control the weights of counterfactual outcome estimation (link prediction) loss and discrepancy loss. Algorithm 1: CFLP: Counterfactual graph learning for link prediction Input : f , g, A, X, n epochs, n epoch ft 1 Compute T as presented in Section 3.1 ; 2 Compute TCF ,ACF by Eqs. (3) and (4) ; /* model training */ 3 Initialize Θf in f and Θg in g ; 4 for epoch in range(n epochs) do 5 Z = f(A,X) ; 6 Get  and ÂCF via g with Eqs. (6) and (7) ; 7 Update Θf and Θg with L ; // Eq. (11) 8 end /* decoder fine-tuning */ 9 Freeze Θf and re-initialize Θg ; 10 Z = f(A,X) ; 11 for epoch in range(n epochs ft) do 12 Get  via g with Eq. (6) ; 13 Update Θg with LF ; // Eq. (8) 14 end /* model inferencing inference */ 15 Z = f(A,X) ; 16 Get  and ÂCF via g with Eqs. (6) and (7) ; Output:  for link prediction, ÂCF Summary Algorithm 1 summarizes the whole process of CFLP. The first step is to compute the factual and counterfactual treatments T, TCF as well as the counterfactual outcomes ACF . Then, the second step trains the graph learning model on both the observed factual data and created counterfactual data with the integrated loss function (Eq. (11)). Note that the discrepancy loss (Eq. (10)) is computed on the representations of node pairs learned by the graph encoder f , so the decoder g is trained with data from both P̂F and P̂CF without balancing the constraints. Therefore, after the model is sufficiently trained, we freeze the graph encoder f and fine-tune g with only the factual data. Finally, after the decoder is sufficiently finetuned, we output the link prediction logits for both the factual and counterfactual adjacency matrices. Complexity The complexity of the first step (finding counterfactual links with nearest neighbors) is proportional to the number of node pairs. When γ is set as a small value to obtain indeed similar node pairs, this step (Eq. (3)) uses constant time. Moreover, the computation in Eq. (3) can be parallelized. Therefore, the time complexity is O(N2/C) where C is the number of processes. For the complexity of the second step (training counterfactual learning model), the GNN encoder has time complexity of O(LH2N +LH|E|) (Wu et al., 2020), where L is the number of GNN layers and H is the size of node representations. Given that we sample the same number of non-existing links as that of observed links during training, the complexity of a three-layer MLP decoder is O(((H + 1) · dh + dh · 1)|E|) = O(dh(H + 2)|E|), where dh is the number of neurons in the hidden layer. Therefore, the second step has linear time complexity w.r.t. the sum of node and edge counts. Limitations First, as mentioned above, the computation of finding counterfactual links has a worst-case complexity of O(N2). Second, CFLP performs counterfactual prediction with only a single treatment; however, there are quite a few kinds of graph structural information that can be considered as treatments. Future work can leverage the rich structural information by bundled treatments (Zou et al., 2020) in counterfactual graph learning. 
4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP We conduct experiments on five benchmark datasets including citation networks (CORA, CITESEER, PUBMED (Yang et al., 2016)), social network (FACEBOOK (McAuley & Leskovec, 2012)), and drug-drug interaction network (OGB-DDI (Wishart et al., 2018)) from the Open Graph Benchmark (OGB) (Hu et al., 2020). For the first four datasets, we randomly select 10%/20% of the links and the same numbers of disconnected node pairs as validation/test samples. The links in the validation and test sets are masked off from the training graph. For OGB-DDI, we used the OGB official train/validation/test splits. Statistics and details for the datasets are given in Appendix. We use K-core (Bader & Hogue, 2003) clusters as the default treatment variable. We evaluate CFLP on three commonly used GNN encoders: GCN (Kipf & Welling, 2016a), GSAGE (Hamilton et al., 2017), and JKNet (Xu et al., 2018). We compare the link prediction performance of CFLP against Node2Vec (Grover & Leskovec, 2016), MVGRL (Hassani & Khasahmadi, 2020), VGAE (Kipf & Welling, 2016b), SEAL (Zhang & Chen, 2018), LGLP (Cai et al., 2021), and GNNs with MLP decoder. We report averaged test performance and their standard deviation over 20 runs with different random parameter initializations. Other than the most commonly used of Area Under ROC Curve (AUC), we report Hits@20 (one of the primary metrics on OGB leaderboard) as a more challenging metric, as it expects models to rank positive edges higher than nearly all negative edges. Besides performance comparison on link prediction, we will answer two questions to suggest a way of choosing a treatment variable for creating counterfactual links: (Q1) Does CFLP sufficiently learn the observed averaged treatment effect (ATE) derived from the counterfactual links? (Q2) What is the relationship between the estimated ATE learned in the method and the prediction performance? If the answer to Q1 is yes, then the answer to Q2 will indicate how to choose treatment based on observed ATE. To answer the Q1, we calculate the observed ATE (ÂTEobs) by comparing the observed links in A and created counterfactual links ACF that have opposite treatments. And we calculate the estimated ATE (ÂTEest) by comparing the predicted links in  and predicted counterfactual links ÂCF . Formally, ÂTEobs and ÂTEest are defined as ÂTEobs = 1 N2 N∑ i=1 N∑ j=1 {T (A−ACF ) + (1N×N −T) (ACF −A)}i,j . (12) ÂTEest = 1 N2 N∑ i=1 N∑ j=1 {T (Â− ÂCF ) + (1N×N −T) (ÂCF − Â)}i,j . (13) The treatment variables we will investigate are usually graph clustering or community detection methods, such as K-core (Bader & Hogue, 2003), stochastic block model (SBM) (Karrer & Newman, 2011), spectral clustering (SpecC) (Ng et al., 2001), propagation clustering (PropC) (Raghavan et al., 2007), Louvain (Blondel et al., 2008), common neighbors (CommN), Katz index, and hierarchical clustering (Ward) (Ward Jr, 1963). We use JKNet (Xu et al., 2018) as default graph encoder. Implementation details and supplementary experimental results (e.g., sensitivity on γ, ablation study on LCF and Ldisc) can be found in Appendix. Source code is available in supplementary material. 4.2 EXPERIMENTAL RESULTS Link Prediction Tables 1 and 2 show the link prediction performance of Hits@20 and AUC by all methods. LGLP on PUBMED and OGB-DDI are missing due to the out of memory error when running the code package from the authors. We observe that our CFLP on different graph encoders achieve similar or better performances compared with baselines. 
The only exception is the AUC on FACEBOOK where most methods have close-to-perfect AUC. As AUC is a relatively easier metric comparing with Hits@20, most methods achieved good performance on AUC. We observe that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@20. Specifically, comparing with the best baseline, CFLP improves relatively by 16.4% and 0.8% on Hits@20 and AUC, respectively. Comparing with the best performing baselines, which are also GNN-based, CFLP benefits from learning with both observed link existence (A) and our defined counterfactual links (ACF ). ATE with Different Treatments Tables 3 and 4 show the link prediction performance, ÂTEobs, and ÂTEest of CFLP (with JKNet) when using different treatments. The treatments in Tables 3 and 4 are sorted by the Hits@20 performance. Bigger ATE indicates stronger causal relationship between the treatment and outcome, and vice versa. We observe: (1) the rankings of ÂTEest and ÂTEobs are positively correlated with Kendell’s ranking coefficient (Abdi, 2007) of 0.67 and 0.57 for CORA and CITESEER, respectively. Hence, CFLP was sufficiently trained to learn the causal relationship between graph structure information and link existence; (2) ÂTEobs and ÂTEest are both negatively correlated with the link prediction performance, showing that we can pick a proper treatment prior to training a model with CFLP. Using the treatment that has the weakest causal relationship with link existence is likely to train the model to capture more essential factors on the outcome, in a way similar to denoising the unrelated information from the representations. 5 RELATED WORK Link Prediction With its wide applications, link prediction has draw attention from many research communities including statistical machine learning and data mining. Stochastic generative methods based on stochastic block models (SBM) are developed to generate links (Mehta et al., 2019). In data mining, matrix factorization (Menon & Elkan, 2011), heuristic methods (Philip et al., 2010; Martı́nez et al., 2016), and graph embedding methods (Cui et al., 2018) have been applied to predict links in the graph. Heuristic methods compute the similarity score of nodes based on their neighborhoods. These methods can be generally categorized into first-order, second-order, and high-order heuristics based on the maximum distance of the neighbors. Graph embedding methods learn latent node features via embedding lookup and use them for link prediction (Perozzi et al., 2014; Tang et al., 2015; Grover & Leskovec, 2016; Wang et al., 2016). In the past few years, GNNs have showed promising results on various graph-based tasks with their ability of learning from features and custom aggregations on structures (Kipf & Welling, 2016a; Hamilton et al., 2017; Wu et al., 2020)(Cotta et al., 2021). With node pair representations and an attached MLP or inner-product decoder, GNNs can be used for link prediction (Zhang et al., 2020a; Davidson et al., 2018; Yang et al., 2018). For example, VGAE used GCN to learn node representations and reconstruct the graph structure (Kipf & Welling, 2016b). SEAL extracted a local subgraph around each target node pair and then learned graph representation from local subgraph for link prediction (Zhang & Chen, 2018). Following the scheme of SEAL, Cai & Ji (2020) proposed to improve local subgraph representation learning by multi-scale graph representation learning. 
LGLP converted the local subgraphs to line graphs before learning representations (Cai et al., 2021). However, very limited work has studied using causal inference for improving link prediction. Counterfactual Prediction As a means of learning the causality between treatment and outcome, counterfactual prediction has been used for a variety of applications such as recommender systems (Wang et al., 2020; Xu et al., 2020), health care (Alaa & van der Schaar, 2017; Pawlowski et al., 2020), vision-language tasks (Zhang et al., 2020b; Parvaneh et al., 2020), and decision making (Coston et al., 2020; Pitis et al., 2020; Kusner et al., 2017). To infer the causal relationships, previous work usually estimated the ITE via function-fitting models (Gelman & Hill, 2006; Chipman et al., 2010; Wager & Athey, 2018; Assaad et al., 2021). Peysakhovich et al. (2019) and Zou et al. (2020) studied counterfactual prediction with multiple agents and bundled treatments, respectively. Causal Inference Causal inference methods usually re-weight samples based on propensity scores (Rosenbaum & Rubin, 1983; Austin, 2011) to remove confounding bias from binary treatments. Recently, several works have studied learning treatment-invariant representations to predict counterfactual outcomes (Shalit et al., 2017; Li & Fu, 2017; Yao et al., 2018; Yoon et al., 2018; Hassanpour & Greiner, 2019a;b; Bica et al., 2020). A few recent works have combined causal inference with graph learning (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). For example, Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, to study the effect of link creation on network structure changes. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification and showed how subgraph densities can be used to build size-invariant graph representations, which alleviates the train-test gap when learning from OOD data. 6 CONCLUSION AND FUTURE WORK In this work, we presented a counterfactual graph learning method for link prediction (CFLP). We introduced the idea of counterfactual prediction to improve link prediction on graphs. CFLP accurately predicted the missing links by exploring the causal relationship between the global graph structure and link existence. Extensive experiments demonstrated that CFLP achieved state-of-the-art performance on benchmark datasets. This work provides the insight that a good use of causal models (even basic ones) can greatly improve the performance of (graph) machine learning tasks, which in our case is link prediction. We note that the use of more sophisticated causal models may lead to larger improvements for other machine learning tasks, which could be a valuable future research direction for the community. Other than our use of the global graph structure as the treatment, other treatment choices (with both empirical and theoretical analyses) are also worth exploring. Moreover, as CFLP first generates counterfactual links and then learns from both observed and counterfactual link existence, the underlying philosophy of our methodology could be considered a form of graph data augmentation. Therefore, investigating the relationship between counterfactual graph learning and graph data augmentation is also a possible future research direction. A ADDITIONAL DATASET DETAILS In this section, we provide some additional dataset details. All the datasets used in this work are publicly available.
Statistics for the datasets are shown in Table 5. Citation Networks CORA, CITESEER, and PUBMED are citation networks that were first used by Yang et al. (2016) and then commonly used as benchmarks in GNN-related literature (Kipf & Welling, 2016a; Veličković et al., 2017). In these citation networks, the nodes are published papers and features are bag-of-word vectors extracted from the corresponding paper. Links represent the citation relation between papers. We loaded the datasets with the DGL1 package. Social Network The FACEBOOK dataset2 is a social network constructed from friends lists from Facebook (McAuley & Leskovec, 2012). The nodes are Facebook users and links indicate the friendship relation on Facebook. The node features were constructed from the user profiles and anonymized by McAuley & Leskovec (2012). Drug-Drug Interaction Network The OGB-DDI dataset was constructed from a public Drug database (Wishart et al., 2018) and provided by the Open Graph Benchmark (OGB) (Hu et al., 2020). Each node in this graph represents an FDA-approved or experimental drug and edges represent the existence of unexpected effect when the two drugs are taken together. This dataset does not contain any node features, and it can be downloaded with the dataloader3 provided by OGB. B EXPANDED RELATED WORK With the rapid development of graph machine learning in the past few years, researchers have been attempting to relate graph neural networks (GNNs) with causal models. Recently, several works have been proposed to improve graph learning with causal models (Sherman & Shpitser, 2020; Bevilacqua et al., 2021; Lin et al., 2021; Feng et al., 2021). Sherman & Shpitser (2020) proposed a new concept in causal modeling, called “network intervention”, that is a type of structural intervention in network contexts. Sherman & Shpitser (2020) modeled social network with causal DAG and studied the effect of network intervention (link creation and removal) on network structure changes. Lin et al. (2021) formulated the problem of post-hoc explanation generation for GNNs as a causal learning task and proposed a causal explanation model with a loss designed based on Granger causality. Feng et al. (2021) formulated node classification of GNNs with a causal DAG, which estimated the causal effect of the local structure on the prediction and adaptively chose whether to aggregate from the neighbors. Bevilacqua et al. (2021) studied the task of out-of-distribution (OOD) graph classification, and showed how subgraph densities can be used to build size-invariant graph representations. They modeled OOD graph classification with a twin network DAG causal model, which learned approximately environment-invariant graph representations that better extrapolate between train and test data. The last three works, i.e., Lin et al. (2021), Feng et al. (2021), Bevilacqua et al. (2021), proposed to use causal models to improve the performance of three different types of graph machine learning tasks such as GNN explanation (subgraph) generation, node-level classification, 1https://github.com/dmlc/dgl 2https://snap.stanford.edu/data/ego-Facebook.html 3https://ogb.stanford.edu/docs/linkprop/#data-loader and graph-level classification. Compared with them, our work has three points of uniqueness. First, to the best of our knowledge, our work makes the first attempt to use causal model to improve the performance of link prediction which is also an important graph learning task. 
Second, to make the attempt successful, our work presents the novel concept of a “counterfactual link” and proposes a novel method, CFLP, that learns from both factual and counterfactual link existence. Third, the proposed method CFLP is flexible with the choice of treatment variables and is able to suggest good treatment choices prior to training via ÂTEobs. C DETAILS ON IMPLEMENTATION AND HYPERPARAMETERS All the experiments in this work were conducted on a Linux server with an Intel Xeon Gold 6130 processor (16 cores @ 2.1 GHz), 96 GB of RAM, and 4 RTX 2080Ti cards (11 GB of RAM each). Our method is implemented in Python 3.8.5 with PyTorch. Source code is available in the supplementary materials. A list of used packages can be found in requirements.txt. Baseline Methods For the baseline methods, we use the official code packages from the authors for MVGRL4 (Hassani & Khasahmadi, 2020), SEAL5 (Zhang & Chen, 2018), and LGLP6 (Cai et al., 2021). We use a public implementation for VGAE7 (Kipf & Welling, 2016b) and OGB implementations8 for Node2Vec and the baseline GNNs. For a fair comparison, we set the size of node/link representations to 256 for all methods. CFLP We use the Adam optimizer with a simple cyclical learning rate scheduler (Smith, 2017), in which the learning rate cycles between the given learning rate (lr) and 1e-4 every 70 epochs (50 warmup steps and 20 annealing steps). We implement the GNN encoders with torch_geometric9 (Fey & Lenssen, 2019). As with the baselines, we set the size of all hidden layers and node/link representations of CFLP to 256. The graph encoders all have three layers, and JKNet has mean pooling for the final aggregation layer. The decoder is a 3-layer MLP with a hidden layer of size 64 and ELU as the nonlinearity. As the Euclidean distance used in Eq. (3) has a range of [0,∞), the value of γ depends on the distribution of all-pair node embedding distances, which varies for different datasets. Therefore, we set the value of γ as the γpct-percentile of all-pair node embedding distances. Commands for reproducing the experiments are included in README.md. Hyperparameter Search Space We manually tune the following hyperparameters over the ranges: lr ∈ {0.005, 0.01, 0.05, 0.1, 0.2}, α ∈ {0.001, 0.01, 0.1, 1, 2}, β ∈ {0.001, 0.01, 0.1, 1, 2}, γpct ∈ {10, 20, 30}. Treatments For the graph clustering or community detection methods we used as treatments, we use the implementations from scikit-network10 for Louvain (Blondel et al., 2008), SpecC (Ng et al., 2001), PropC (Raghavan et al., 2007), and Ward (Ward Jr, 1963). We used the implementation of K-core (Bader & Hogue, 2003) from networkx.11 We used SBM (Karrer & Newman, 2011) from a public implementation by Funke & Becker (2019).12 For CommN and Katz, we set Ti,j = 1 if the number of common neighbors or the Katz index between vi and vj is greater than or equal to 2 or to 2 times the average of all Katz index values, respectively. For SpecC, we set the number of clusters to 16. For SBM, we set the number of communities to 16. These settings are fixed for all datasets.
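As an illustration of the treatment and γ settings described above, the sketch below applies the stated CommN rule (Ti,j = 1 when two nodes share at least two neighbors) and sets γ to the γpct-percentile of all-pair embedding distances. It is a simplified reading of the stated rules, not the authors' code.

```python
import networkx as nx
import numpy as np
import torch

def common_neighbor_treatment(G: nx.Graph) -> np.ndarray:
    """CommN treatment: T[i, j] = 1 if nodes i and j share at least 2 common neighbors."""
    nodes = list(G.nodes())
    N = len(nodes)
    T = np.zeros((N, N), dtype=np.int64)
    for a in range(N):
        for b in range(a + 1, N):
            if len(list(nx.common_neighbors(G, nodes[a], nodes[b]))) >= 2:
                T[a, b] = T[b, a] = 1
    return T

def gamma_from_percentile(X_tilde: torch.Tensor, gamma_pct: float = 20.0) -> float:
    """Set gamma to the gamma_pct-percentile of all-pair Euclidean embedding distances."""
    dists = torch.cdist(X_tilde, X_tilde)                  # (N, N) pairwise distances
    return float(np.percentile(dists.detach().cpu().numpy(), gamma_pct))
```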
4https://github.com/kavehhassani/mvgrl 5https://github.com/facebookresearch/SEAL_OGB 6https://github.com/LeiCaiwsu/LGLP 7https://github.com/DaehanKim/vgae_pytorch 8https://github.com/snap-stanford/ogb/tree/master/examples/ linkproppred/ddi 9https://pytorch-geometric.readthedocs.io/en/latest/ 10https://scikit-network.readthedocs.io/ 11https://networkx.org/documentation/ 12https://github.com/funket/pysbm D ADDITIONAL EXPERIMENTAL RESULTS AND DISCUSSIONS Link Prediction Tables 6 and 7 show the link prediction performance of Hits@50 and Average Precision (AP) by all methods. LGLP on PUBMED and OGB-DDI are missing due to the out of memory error when running the code package from the authors. Similar to the results in Tables 1 and 2, we observe that our CFLP on different graph encoders achieve similar or better performances compared with baselines, with the only exception of AP on FACEBOOK where most methods have close-to-perfect AP. From Tables 1, 2, 6 and 7, we observe that CFLP achieves improvement over all GNN architectures (averaged across datasets). Specifically, CFLP improves 25.6% (GCN), 12.0% (GSAGE), and 36.3% (JKNet) on Hits@20, 9.6% (GCN), 5.0% (GSAGE), and 17.8% (JKNet) on Hits@50, 5.6% (GCN), 1.6% (GSAGE), and 1.9% (JKNet) on AUC, and 0.8% (GCN), 0.8% (GSAGE), and 1.8% (JKNet) on AP. We note that CFLP with JKNet almost consistently achieves the best performance and outperforms baselines significantly on Hits@50. Specifically, compared with the best baseline, CFLP improves relatively by 6.8% and 0.9% on Hits@50 and AP, respectively. Ablation Study on Losses For the ablative studies of LCF (Eq. (9)) and Ldisc (Eq. (10)), we show their effect by removing them from the integrated loss function (Eq. (11)). Table 8 shows the results of CFLP on CORA and CITESEER under different settings (α = 0, β = 0, α = β = 0, and original setting). We observe that CFLP in the original setting achieves the best performance. The performance drops significantly when having α = 0, i.e., not using any counterfactual data during training. We note that having β = 0, i.e., not using the discrepancy loss, also lowers the performance. Therefore, both LCF and Ldisc are essential for improving the link prediction performance. Ablation Study on Node Embedding X̃ As the node embedding X̃ is used in the early step of CFLP for finding the counterfactual links, the quality of X̃ may affect the later learning process. Therefore, we also evaluate CFLP with different state-of-the-art unsupervised graph representation learning methods: MVGRL (Hassani & Khasahmadi, 2020), DGI (Velickovic et al., 2019), and GRACE (Zhu et al., 2020). Table 9 shows the link prediction performance of CFLP (w/ JKNet) on CORA and CITESEER with different node embeddings. We observe that the choice of the method for learning X̃ does have an impact on the later learning process as well as the link prediction performance. Nevertheless, Table 9 shows CFLP’s advantage can be consistently observed with different choices of methods for learning X̃, as CFLP with X̃ learned from all three methods showed promising link prediction performance. Sensitivity Analysis of α and β Figure 4 shows the AUC performance of CFLP on CORA with different combinations of α and β. We observe that the performance is the poorest when α = β = 0 and gradually improves and gets stable as α and β increase, showing that CFLP is generally robust to the hyperparameters α and β, and the optimal values are easy to locate. 
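The ablation above can be read as switching off terms of the integrated objective. A minimal sketch follows, assuming Eq. (11) combines a factual link prediction loss with α- and β-weighted counterfactual and discrepancy terms; the exact form of Eq. (11) is not shown in this excerpt, so this is an assumption.

```python
import torch

def integrated_loss(loss_factual: torch.Tensor,
                    loss_cf: torch.Tensor,
                    loss_disc: torch.Tensor,
                    alpha: float = 1.0,
                    beta: float = 1.0) -> torch.Tensor:
    """Assumed form of the integrated objective: a factual term plus weighted
    counterfactual (Eq. (9)) and discrepancy (Eq. (10)) terms.
    alpha = 0 drops the counterfactual data; beta = 0 drops the discrepancy loss."""
    return loss_factual + alpha * loss_cf + beta * loss_disc
```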
Sensitivity Analysis of γ Figure 5 shows the Hits@20 and AUC performance on link prediction of CFLP (with JKNet) on CORA and CITESEER with different treatments and γpct. We observe that the performance is generally good when 10 ≤ γpct ≤ 20 and gradually gets worse when the value of γpct is too small or too large, showing that CFLP is robust to γ and the optimal γ is easy to find. Sensitivity to Noisy Data We note that robustness w.r.t. noisy data is not within our claim of technical contributions. Nevertheless, CFLP is not more vulnerable than other link prediction baselines. We conduct experiments with random attacks on the Cora dataset (randomly removing links and adding noisy links). Table 10 shows the AUC performance of our proposed CFLP (w/ JKNet) compared to the strongest baseline methods under different levels of random attacks (0%, 2%, 5%, and 10%). We can observe that as the attack strength goes up, the link prediction performance of all methods goes down. We also note that our proposed CFLP still outperforms the baseline methods. Generalization to Graphs with Weighted Edges As our proposed CFLP uses a GNN as the graph encoder and GNNs are usually able to take a weighted graph as input (e.g., the adjacency matrix A for GCN can be weighted), the model should be able to handle weighted graphs as input. Note that the link prediction losses (Eqs. (8) and (9)) need to be slightly modified depending on the task. When the task is to predict link existence, the label adjacency matrix used in Eqs. (8) and (9) must be binary. When the task is to predict link weights, the BCE loss functions (Eqs. (8) and (9)) need to be changed to regression loss functions such as MSE.
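A small sketch of the loss swap described above for weighted graphs, assuming a decoder that produces a raw score per node pair; function and variable names are illustrative, not from the released code.

```python
import torch
import torch.nn.functional as F

def link_loss(scores: torch.Tensor, labels: torch.Tensor, task: str = "existence") -> torch.Tensor:
    """scores -- raw decoder outputs for sampled node pairs
    labels -- binary link labels (existence) or real-valued edge weights (weights)"""
    if task == "existence":
        # Binary labels: BCE with logits, as in the original Eqs. (8) and (9).
        return F.binary_cross_entropy_with_logits(scores, labels.float())
    if task == "weights":
        # Real-valued edge weights: replace BCE with a regression loss such as MSE.
        return F.mse_loss(scores, labels.float())
    raise ValueError(f"unknown task: {task}")
```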
1. What is the focus and contribution of the paper on counterfactual graph learning? 2. What are the strengths of the proposed approach, particularly in terms of its performance on link prediction tasks? 3. What are the weaknesses of the paper regarding experiment design and methodology? 4. Do you have any concerns about the choice of node representation and its impact on the learning process? 5. What are the limitations of the paper regarding its discussion of related work and the clarity of its explanations?
Summary Of The Paper Review
Summary Of The Paper In this work, the authors presented a counterfactual graph learning method for link prediction (CFLP), where they introduced the idea of counterfactual prediction to improve link prediction on graphs. The authors demonstrate the model's performance on link prediction tasks and achieve promising results. Such results provide the insight that a good use of causal models (even basic ones) can greatly improve the performance of (graph) machine learning tasks. Review For the experiments, I would recommend using more datasets from OGB. I am curious: as OGB has many link prediction datasets, why do the authors use only one of them? In addition, on OGB-DDI the authors' model should have ranked second best had the authors submitted the results; I wonder why the authors did not submit the results to the official leaderboard. I strongly recommend using the official leaderboard to illustrate the performance. CFLP w/ JKNet consistently achieves the best performance across all datasets, while JKNet itself is not especially outstanding. Is there any insight on the impact of CFLP for different architectures? For the node representation, the authors used MVGRL. Is there any ablation study on the impact of the node embedding? This is an early step, and if the embedding quality is low, the error can propagate through the learning process. Also, the authors mentioned the embedding is learnt from the observed graph, so my understanding is that the links in the validation/test sets are removed during embedding learning? On page 4, "That is, we want to find the nearest neighbor with the opposite treatment for each observed node pairs and use the nearest neighbor’s outcome as a counterfactual link." I would recommend adding references to some matching-based methods in causal inference. On page 4, the authors first used d for a distance function in (2) between two node pairs, while later using a similar d for a distance function between two nodes. This is confusing, and I would recommend using another symbol. For RELATED WORK, it is not clear how some of the cited literature (especially in the causal inference section) is relevant to this work.
ICLR
Title A StyleMap-Based Generator for Real-Time Image Projection and Local Editing Abstract Generative adversarial networks (GANs) have been successful in synthesizing and manipulating synthetic but realistic images from latent vectors. However, it is still challenging for GANs to manipulate real images, especially in real-time. State-ofthe-art GAN-based methods for editing real images suffer from time-consuming operations in projecting real images to latent vectors. Alternatively, an encoder can be trained to embed real images to the latent space instantly, but it loses details drastically. We propose StyleMapGAN, which adopts a novel representation of latent space, called stylemap, incorporating spatial dimension into embedding. Because each spatial location in the stylemap contributes to its corresponding region of the generated images, the real-time projection through the encoder becomes accurate as well as editing real images becomes spatially controllable. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Especially, detailed comparisons show that our local editing method successfully reflects not only the color and texture but also the shape of a reference image while preserving untargeted regions. 1 INTRODUCTION Generative adversarial networks (GANs) (Goodfellow et al., 2014) have evolved dramatically in recent years, enabling high-fidelity image synthesis with models which are learned directly from data (Brock et al., 2019; Karras et al., 2019; 2020). Recent studies have shown that GANs naturally learn to encode rich semantics within the latent space, thus changing the latent code leads to manipulating the corresponding attributes of the output images (Jahanian et al., 2020; Shen et al., 2020; Härkönen et al., 2020; Goetschalckx et al., 2019; Shen & Zhou, 2020; Alharbi & Wonka, 2020). However, it is still challenging to apply these manipulations to real images, since the GAN itself lacks an inverse mapping from an image back to its corresponding latent code. One promising approach for manipulating real images is image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Choi et al., 2018), where the model learns to directly synthesize an output image given a user’s input. However, these methods require pre-defined tasks and heavy supervision (e.g., input-output pairs, class labels) for training, and also limit the user controllability at inference time. Another approach is to utilize pretrained GAN models, by directly optimizing the latent code for an individual image (Abdal et al., 2019; Zhu et al., 2016; Ma et al., 2018; Noguchi & Harada, 2019). However, even on high-end GPUs, it requires minutes of computation for each target image, and it does not guarantee that the optimized code would be placed in the original latent space of GAN. A more practical approach is to train an extra encoder which learns to project an image into its corresponding latent code (Zhu et al., 2020a; Perarnau et al., 2016; Luo et al., 2017). Although this approach enables real-time projection in a single feed-forward manner, it suffers from the low fidelity of the projected image (i.e., losing details of the target image). We attribute this limitation to the absence of spatial dimensions in the latent space. 
Without the spatial dimensions, an encoder compresses the local semantics of an image into a vector in an entangled manner, making it difficult to reconstruct the image (e.g., vector-based or low-resolution bottleneck layer is not capable of producing high-frequency details (Lample et al., 2017; Chang et al., 2018)). As a solution to such problems, we propose StyleMapGAN which exploits stylemap, a novel representation of the latent space. Our key idea is simple. Instead of learning a vector-based latent repre- sentation, we utilize a tensor with explicit spatial dimensions. Our proposed representation benefits from its spatial dimensions, enabling GANs to easily encode the local semantics of images into the latent space. This property allows an encoder to effectively project an image into the latent space, thus providing high-fidelity and real-time projection. In addition, our method offers a new capability to edit specific regions of an image by manipulating the matching positions of the stylemap. We demonstrate, on multiple datasets, that our stylemap indeed substantially enhances the projection quality compared to the traditional vector-based latent representation (Section 3.2). Furthermore, we show the advantage of our method over state-of-the art methods on image projection, interpolation, and local editing (Section 3.3 & Section 3.4). Finally, we show that our method can transplant regions even when the regions are not aligned between one image and another (Section 3.5). We will make our code and pretrained models publicly available for research community. 2 STYLEMAPGAN Our goal is to accurately project images to a latent space with an encoder in real-time and to locally manipulate images on the latent space. We propose StyleMapGAN which adopts stylemap, a novel representation of the intermediate latent space with spatial dimensions. It allows accurate reconstruction with the encoder by alleviating the spatial discrepancy between images and the latent space which has been causing the encoder to lose details. Furthermore, local changes in the stylemap lead to local editing of images thanks to the explicit spatial correspondence between the stylemap and images. Section 2.1 explains how we design the mapping network and the synthesis network to incorporate the stylemap. Section 2.2 describes our procedure for the image-to-latent projection and the local editing. 2.1 STYLEMAP-BASED GENERATOR Figure 1 compares the traditional style-based generator (Karras et al., 2019) and our stylemap-based generator. We propose to incorporate a stylemap instead of a style vector and to replace AdaIN operations with spatially adaptive operations. The stylemap has spatial dimensions as opposed to the style vector, thus can represent different styles across spatial locations. Accordingly, we revise SPADE (Park et al., 2019b) which modulates feature maps with spatially varying values. Since the feature maps in the synthesis network grow larger as getting closer to the output image, we introduce a stylemap resizer, which consists of convolutions and upsampling, to match the resolutions of stylemaps with the feature maps. The stylemap resizer not only resizes the stylemap, but also transforms them with learned convolutions to convey more detailed and structured styles. Figure 1 shows examples of changes of resized stylemaps across layers. Then, the affine transform A produces parameters for the modulation regarding the resized stylemaps. 
The modulation operation of the i-th layer in the synthesis network is as follows:
h_{i+1} = ( γ_i ⊙ (h_i − µ_i) / σ_i ) ⊕ β_i ,  (1)
where µ_i, σ_i ∈ R are the mean and standard deviation of the activations h_i ∈ R^{Ci×Hi×Wi} of the layer, respectively, and γ_i, β_i ∈ R^{Ci×Hi×Wi} are modulation parameters. ⊙ and ⊕ are element-wise multiplication and addition with broadcasting, respectively. We use layer normalization (Ba et al., 2016) instead of instance normalization and find it helpful to resolve droplet artifacts. In addition, we remove per-pixel noise, which is an extra source of variation, because it makes the projection complicated. Instead, β plays a similar role. Note that weight modulation (Karras et al., 2020) cannot be applied to spatially varying modulation because weights are shared across all locations in a convolution. Other details about the networks, such as the design choice of the mapping network, are given in Appendices A and B. 2.2 REAL IMAGE PROJECTION AND LOCAL EDITING Since the stylemap eases the spatial discrepancy between images and the latent space, we train an encoder to project real images into their corresponding stylemaps, which accurately reconstructs the images through the generator. The encoder is jointly trained with the generator and the discriminator. More training details are described in Appendix A. Now we have access to the accurate projection of images to the style space, which is essential to latent-based editing. Furthermore, local changes in the stylemap lead to natural local editing of images on the learned semantic manifold. In particular, we design a procedure for local transplantation, which now becomes feasible. The goal of local editing is to transplant some part of a reference image to an original image with respect to a mask which indicates the region to be modified. We project the original image and the reference image through the encoder to obtain stylemaps w and w̃, respectively. Since, in general, the mask is finer than 8×8, we blend the stylemaps in the w+ space to achieve detailed manipulation. The edited i-th resized stylemap ẅ+ is an alpha blending of w+ and w̃+:
ẅ⁺_i = m_i ⊙ w̃⁺_i + (1 − m_i) ⊙ w⁺_i ,  (2)
where the i-th resized mask m_i is shrunk by max pooling. If the mask’s shape aligns with the 8 × 8 stylemap, we can do the same alpha blending in the w space instead of the w+ space. Note that the mask can be of any shape, allowing the use of semantic segmentation methods or user scribbles. In contrast to SPADE (Park et al., 2019b) or SEAN (Zhu et al., 2020b), even masks as coarse as 8 × 8 produce plausible images, so the burden on the user to provide detailed masks is lifted. This operation can be further revised for unidentical masks of the two images (Section 3.5). 3 EXPERIMENTS Our proposed method efficiently projects images into the style space in real time and effectively manipulates specific regions of real images. We first describe our experimental setup (Section 3.1) and show how the proposed spatial dimensions of the stylemap affect the image projection and generation quality (Section 3.2). We then compare our method with the state-of-the-art methods on real image projection (Section 3.3) and local editing (Section 3.4). We finally show a more flexible editing scenario and the usefulness of our proposed method (Section 3.5). Implementation details are described in Appendix A. 3.1 EXPERIMENTAL SETUP Datasets and protocols. For evaluation, we train our model on CelebA-HQ (Karras et al., 2018) and AFHQ (Choi et al., 2020), both at a resolution of 256 × 256.
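To make the stylemap resizer and the operations in Eqs. (1) and (2) above concrete, here is a minimal PyTorch sketch. The layer sizes, channel counts, and function names are illustrative assumptions and not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StylemapResizer(nn.Module):
    """Upsample and transform an 8x8 stylemap to a target feature-map resolution
    with learned convolutions (channel counts are illustrative)."""
    def __init__(self, channels: int = 64, num_upsamples: int = 2):
        super().__init__()
        layers = []
        for _ in range(num_upsamples):
            layers += [nn.Upsample(scale_factor=2, mode="nearest"),
                       nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return self.net(w)

def spatial_modulation(h, gamma, beta, eps=1e-8):
    """Eq. (1): normalize h with its layer statistics, then modulate with
    spatially varying gamma and beta."""
    mu = h.mean(dim=(1, 2, 3), keepdim=True)
    sigma = h.std(dim=(1, 2, 3), keepdim=True)
    return gamma * (h - mu) / (sigma + eps) + beta

def blend_stylemaps(w_plus, w_plus_ref, mask):
    """Eq. (2): alpha-blend original and reference stylemaps under a mask that is
    shrunk to the stylemap resolution with max pooling."""
    m = F.adaptive_max_pool2d(mask, w_plus.shape[-2:])
    return m * w_plus_ref + (1.0 - m) * w_plus
```

For the unaligned transplantation of Section 3.5, the same idea applies, except that a crop of the reference stylemap is pasted at a user-chosen location of the original stylemap rather than blended under an aligned mask.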
We use 500 images for validation, another 500 images for testing, and the rest for training. Because the optimization methods take an extremely long time, we limited the test set to 500 images. When we compute Fréchet inception distance (FID), the numbers of generated samples are matched to the training set. Reconstruction errors are measured with all test images. For FIDlerp, we choose random numbers between 0 and 1 for 500 random pairs of images from the test set to synthesize 500 interpolated images and compute FID between those and the test set. For local editing comparison, 250 pairs of test images in CelebA-HQ are composed with ten semantic masks (e.g., background, hair) (Lee et al., 2020) to produce 2500 images. For local editing on AFHQ, masks are randomly chosen between horizontal and vertical half-and-half masks to produce 250 images. Baselines. We compare our method against recent methods. For image projection, StyleGAN2 (Karras et al., 2020) and Image2StyleGAN (Abdal et al., 2019) infer the per-layer style vectors (analogous to our w+) via iterative optimization. In-DomainGAN (Zhu et al., 2020a) relies on optimization preceded by initialization using a domain-guided encoder. SEAN (Zhu et al., 2020b) also includes an encoder but it requires semantic segmentation masks for training. Structured noise (Alharbi & Wonka, 2020) adds input tensor with spatial dimensions to the synthesis network of StyleGAN but it does not enhance the rest of the network where the style vector still plays an important role. Editing in style (Collins et al., 2020) tries to find local semantics in the style vector. 3.2 ANALYSIS OF OUR METHOD To manipulate an image using a generative model, we first need to accurately project the image into its latent space. In Table 1, we vary the spatial resolution of stylemap and compare the performance of reconstruction and generation. As the spatial resolution increases, the reconstruction accuracy improves significantly. It demonstrates that our stylemap with spatial dimensions is highly effective for image projection. FID varies differently across datasets, possibly due to different contextual relationship between locations for generation. Note that our method with spatial resolution accurately preserves small details, e.g., the eyes are not blurred. We next evaluate the effect of the stylemap’s resolution in editing scenarios, mixing specific parts of one image and another. Figure 3 shows that the 8×8 stylemap synthesizes the most accurate and seamlessly blended images. We see that when the spatial resolution is higher than 8×8, the edited parts are easily detected. We suppose that too large stylemap harms contextual relationship across locations which is essential for realistic images. Considering the editing quality, we choose the 8×8 resolution as our best model and use it consistently for all subsequent experiments. 3.3 REAL IMAGE PROJECTION In Table 2, we compare our approach with the state-of-the art methods for real image projection. For both datasets, StyleMapGAN achieves better reconstruction quality (MSE & LPIPS) than all competitors. Also, it achieves the best FID, which implicitly shows that our manipulation on the style space leads to the most realistic images. Importantly, our method runs 100× faster than the optimization-based baselines since a single feedforward pass provides accurate projection thanks to the stylemap, which is measured in a single GPU. 
SEAN also runs with a single feedforward pass, but it requires ground-truth segmentation masks for training, which is a severe drawback for practical use. Image2StyleGAN fails to meet the requirements for editing in that it produces spurious artifacts in latent interpolation (FIDlerp and figures) and suffers from minutes of runtime. 3.4 LOCAL EDITING We evaluate local editing performance regarding three aspects: detectability, faithfulness to the reference image in the mask, and preservation of the source image outside the mask. Figure 4 and Figure 5 visually demonstrate that our method seamlessly composes the two images while others struggle. Since there are no metrics for evaluating the last two aspects, we propose two quantitative metrics: MSEsrc and MSEref. Table 3 shows that the results from our method are the hardest for the classifier to detect and that both source and reference images are best reflected. Note that the MSEs are not the sole measures, but AP should be considered together. 3.5 UNALIGNED TRANSPLANTATION Here, we demonstrate a more flexible use case, unaligned transplantation, showing that our local editing does not require the masks on the original and the reference images to be aligned. We project the images to the stylemaps and replace the designated region of the original stylemap with the crop of the reference stylemap even though they are at different locations. Users can specify what to replace. Figure 6 shows examples. 4 DISCUSSION AND CONCLUSION Invertibility of GANs is essential for editing real images with unconditional GAN models in a practical amount of time, and it has not been properly addressed yet. To achieve this goal, we propose StyleMapGAN, which introduces explicit spatial dimensions to the latent space, called a stylemap. We show, through extensive evaluation, that our method based on the stylemap has a number of advantages over prior approaches, in that it can accurately project real images in real time into the latent space and synthesize high-quality output images by both interpolation and local editing. The proposed latent representation is simple, general, and can be easily integrated into existing GAN models (e.g., StyleGAN) with a wide range of network designs and data modalities. We believe that improving fidelity by applying our latent representation to other methods such as conditional GANs (e.g., BigGAN) or variational autoencoders (Kingma & Welling, 2013) would be an exciting future work. B MAPPING NETWORK DESIGN FOR THE STYLEMAP There are several choices when designing a mapping network. We can easily think of convolutional layers due to the spatial dimensions of the stylemap. Alternatively, we can remove the mapping network so that our method does not generate images from the standard Gaussian distribution and uses only real images for training, like an autoencoder (Hinton & Salakhutdinov, 2006). As shown in Figure 3, the autoencoder fails to produce realistic images using the projected stylemap. It seems to copy and paste between two images in RGB space. We give continuous input to the generator from the standard Gaussian distribution using a mapping network, letting the network generate seamless images in image space. However, the autoencoder only gives discrete input, which is projected from the encoder. On the other hand, the mapping network with convolutional layers often struggles in reconstruction, so the edited result images are quite different from the original images.
1https://github.com/rosinality/stylegan2-pytorch 2http://publicurl.com
We assume that there is such a limit because the convolutional layer's mapping is bounded to the local area. In an MLP, each weight and input are fully connected, so it can produce a more plausible latent space. C RELATED WORK C.1 LATENT-BASED IMAGE EDITING There are active studies (Abdal et al., 2019; Collins et al., 2020; Zhu et al., 2020a) on image editing using latent vector arithmetic, where well-trained GANs (Karras et al., 2019; 2020) are adopted for real-world applications. These studies aim to find a latent vector to reconstruct an original image. In general, there are two approaches to embed images into latent vectors: learning-based and optimization-based ones. The learning-based approach (Zhu et al., 2020a; Perarnau et al., 2016; Zhu et al., 2016) trains an encoder that maps a given image to a latent vector. This method has the potential to project an image in real time. However, the existing methods suffer from the low quality of the reconstructed images, which indicates the difficulty of embedding real images. The optimization-based approach (Creswell & Bharath, 2018; Lipton & Tripathi, 2017; Ma et al., 2018; Abdal et al., 2019), given an input image, aims at optimizing the latent vector to minimize the pixel-wise reconstruction loss. Though it is not feasible to project images in real time due to its iterative nature, it exhibits high quality of the reconstructed images while enabling edits that include global changes in semantic attributes, e.g., smiling, beard, etc. Compared with these approaches, our StyleMapGAN can project images in real time while offering high-quality reconstructed images. C.2 LOCAL EDITING Several methods (Collins et al., 2020; Alharbi & Wonka, 2020; Zhu et al., 2020b) tackle locally editing specific parts (e.g., nose, background) as opposed to most GAN-based image editing methods, which modify global appearance. Editing in style (Collins et al., 2020) tries to identify components in the style vector which are responsible for specific parts and achieves local editing. It requires a preceding component analysis, and the correspondence between the components and regions is loose. Structured noise injection (Alharbi & Wonka, 2020) replaces the learned constant from StyleGAN with an input tensor which has spatial dimensions and is a combination of local and global codes. Though it learns some sense of spatial disentanglement, its applicability is limited due to the separate source of variation, the style vector. These two methods are limited to editing fake images, while editing real images with them requires projecting the images to the latent space. SEAN (Zhu et al., 2020b) facilitates editing real images by encoding images into per-region style codes and manipulating them. However, per-region style codes do not capture details, and it requires semantic segmentation masks for training. On the other hand, our StyleMapGAN captures and controls fine details of images with a stylemap which has explicit spatial correspondence with images. Our method does not require segmentation masks for training. C.3 CONDITIONAL IMAGE SYNTHESIS Conditional image synthesis models, such as image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Kim et al., 2020), learn to synthesize an output image given an original image.
Thanks to this framework, many applications have been successfully built, including colorization (Kim et al., 2019; Larsson et al., 2016; Zhang et al., 2016), image inpainting (Liu et al., 2018; Pathak et al., 2016; Yang et al., 2017), semantic image synthesis and editing (Wang et al., 2018; Chen & Koltun, 2017; Park et al., 2019a; Portenier et al., 2018). Recent models extend it to multi-domain and multimodal (Huang et al., 2018; Lee et al., 2018; Choi et al., 2020). Image-to-image translation and local edit have been separately studied since they target different objectives, i.e., regarding global and local levels of detail in image generation. However, our method can be applied to the both tasks by semantic manipulation of stylemap for image-to-image translation and local manipulation of stylemap. For example, our StyleMapGAN can make only the eyes laugh or the mouth laugh via local editing as well as change the domain of generated image via global semantic manipulation.
1. What is the focus of the paper regarding image editing, and how does it relate to image generation and reconstruction? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its deterministic nature and comparison to prior works? 3. How does the reviewer assess the novelty aspect of topological feature maps, and what are some relevant references that should be considered? 4. What are the limitations of the method, and how does the author address them? 5. Can the author provide more details about the parameters and losses used in the comparative models? 6. Why is the AP metric the right metric for the image editing task, and what alternative metrics could be used? 7. How does the removal of per-pixel noise affect the projection, and what role does beta play? 8. What are the effects of different dimensionalities of style maps, style vectors, and latent codes on FID scores? 9. Why was the 8x8 resolution chosen when it's not among the top-performing ones? 10. How can the presentation of the paper be improved, and what important references are missing?
Review
Review The paper deals with the problem of image editing. The editing is done by blending two images in the latent space by using segmentation masks outlining which parts to take from each image. The main contribution of the paper is StyleMaps - topological feature maps which are used to perform image blending. Results are reported on the CelebAHQ and AFHQ datasets, showing the benefits of the StyleMaps for image generation, image editing and image reconstruction. Pros: The paper addresses an interesting problem of image editing. The validation of the approach is performed on multiple datasets. Cons: The presentation of the paper could be improved. The methodological contributions are unclear. Detailed review and clarification questions. The connection between image generation and image editing is a bit unclear to me from the current paper presentation. For example, why do we need to validate the generative approach if the focus is on image editing and the image editing approach does not seem to have any stochasticity in the model - it seems to be a deterministic model that outputs a blended image given two images and a blending mask. It would be fine to just present the description of the image editing pipeline (fig 2.). Could the authors clarify the connections between the three tasks discussed in the paper: generation, image editing and reconstruction? Is the topological style map really a new thing? In deterministic models, the latent variables in the model oftentimes have topological structure, e.g., the UNet architecture. In the context of generative models, topological latents have been explored already in http://papers.nips.cc/paper/6141-an-architecture-for-deep-hierarchical-generative-models.pdf and https://arxiv.org/abs/1606.04934. Could the authors comment on the novelty aspect? Could the authors add the numbers of parameters next to each model under comparison? Does the proposed approach contain more parameters than the previous ones? Without such information, it is difficult to fully interpret the reported results. Could the authors comment on the importance of each loss component? Adding an ablation study would be beneficial. The paper lacks a discussion on the limitations of the method. Could the authors comment on the potential limitations of the suggested approach? Results: There is a large quantity of results and visualizations in the paper. However, I find the discussion of the results not detailed enough, especially for the image editing experiments. I would recommend moving some visualizations of the results to the appendix and expanding the discussion of the results in the main body of the paper. Could the authors comment on why AP (for real vs fake images) is the right metric for the image editing task? Could we use alternative metrics here, e.g. FID or classification accuracy score (https://arxiv.org/abs/1905.10887v1)? "...since the GAN itself lacks an inverse mapping from an image back to its corresponding latent code." There is some previous work aiming to learn inference models in GANs, see for example https://arxiv.org/pdf/1606.00704.pdf and https://arxiv.org/abs/1605.09782. Additional comments: "In addition, we remove per-pixel noise which is an extra source of variation because it makes the projection complicated. Instead, \beta plays the similar role." This is a bit unclear to me. Could the authors clarify this part? Fig 1.: the style map and style vectors have different dimensionalities. The same applies to the latent code z. What is the effect of these values on FID scores?
It would be nice to have appendix A in the main body of the paper. Some visualizations could be moved to the appendix. "we choose the 8 \times 8 resolution as our best model and use it consistently for all subsequent experiments": by looking at Table 1, 8 \times 8 maps are not among the top-performing ones. Could the authors comment on this? Overall, I acknowledge the importance of the problem tackled in this paper; however, the contributions are unclear, the presentation could be improved, and there are some important references missing (see above).
ICLR
Title A StyleMap-Based Generator for Real-Time Image Projection and Local Editing Abstract Generative adversarial networks (GANs) have been successful in synthesizing and manipulating synthetic but realistic images from latent vectors. However, it is still challenging for GANs to manipulate real images, especially in real-time. State-ofthe-art GAN-based methods for editing real images suffer from time-consuming operations in projecting real images to latent vectors. Alternatively, an encoder can be trained to embed real images to the latent space instantly, but it loses details drastically. We propose StyleMapGAN, which adopts a novel representation of latent space, called stylemap, incorporating spatial dimension into embedding. Because each spatial location in the stylemap contributes to its corresponding region of the generated images, the real-time projection through the encoder becomes accurate as well as editing real images becomes spatially controllable. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Especially, detailed comparisons show that our local editing method successfully reflects not only the color and texture but also the shape of a reference image while preserving untargeted regions. 1 INTRODUCTION Generative adversarial networks (GANs) (Goodfellow et al., 2014) have evolved dramatically in recent years, enabling high-fidelity image synthesis with models which are learned directly from data (Brock et al., 2019; Karras et al., 2019; 2020). Recent studies have shown that GANs naturally learn to encode rich semantics within the latent space, thus changing the latent code leads to manipulating the corresponding attributes of the output images (Jahanian et al., 2020; Shen et al., 2020; Härkönen et al., 2020; Goetschalckx et al., 2019; Shen & Zhou, 2020; Alharbi & Wonka, 2020). However, it is still challenging to apply these manipulations to real images, since the GAN itself lacks an inverse mapping from an image back to its corresponding latent code. One promising approach for manipulating real images is image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Choi et al., 2018), where the model learns to directly synthesize an output image given a user’s input. However, these methods require pre-defined tasks and heavy supervision (e.g., input-output pairs, class labels) for training, and also limit the user controllability at inference time. Another approach is to utilize pretrained GAN models, by directly optimizing the latent code for an individual image (Abdal et al., 2019; Zhu et al., 2016; Ma et al., 2018; Noguchi & Harada, 2019). However, even on high-end GPUs, it requires minutes of computation for each target image, and it does not guarantee that the optimized code would be placed in the original latent space of GAN. A more practical approach is to train an extra encoder which learns to project an image into its corresponding latent code (Zhu et al., 2020a; Perarnau et al., 2016; Luo et al., 2017). Although this approach enables real-time projection in a single feed-forward manner, it suffers from the low fidelity of the projected image (i.e., losing details of the target image). We attribute this limitation to the absence of spatial dimensions in the latent space. 
Without the spatial dimensions, an encoder compresses the local semantics of an image into a vector in an entangled manner, making it difficult to reconstruct the image (e.g., vector-based or low-resolution bottleneck layer is not capable of producing high-frequency details (Lample et al., 2017; Chang et al., 2018)). As a solution to such problems, we propose StyleMapGAN which exploits stylemap, a novel representation of the latent space. Our key idea is simple. Instead of learning a vector-based latent repre- sentation, we utilize a tensor with explicit spatial dimensions. Our proposed representation benefits from its spatial dimensions, enabling GANs to easily encode the local semantics of images into the latent space. This property allows an encoder to effectively project an image into the latent space, thus providing high-fidelity and real-time projection. In addition, our method offers a new capability to edit specific regions of an image by manipulating the matching positions of the stylemap. We demonstrate, on multiple datasets, that our stylemap indeed substantially enhances the projection quality compared to the traditional vector-based latent representation (Section 3.2). Furthermore, we show the advantage of our method over state-of-the art methods on image projection, interpolation, and local editing (Section 3.3 & Section 3.4). Finally, we show that our method can transplant regions even when the regions are not aligned between one image and another (Section 3.5). We will make our code and pretrained models publicly available for research community. 2 STYLEMAPGAN Our goal is to accurately project images to a latent space with an encoder in real-time and to locally manipulate images on the latent space. We propose StyleMapGAN which adopts stylemap, a novel representation of the intermediate latent space with spatial dimensions. It allows accurate reconstruction with the encoder by alleviating the spatial discrepancy between images and the latent space which has been causing the encoder to lose details. Furthermore, local changes in the stylemap lead to local editing of images thanks to the explicit spatial correspondence between the stylemap and images. Section 2.1 explains how we design the mapping network and the synthesis network to incorporate the stylemap. Section 2.2 describes our procedure for the image-to-latent projection and the local editing. 2.1 STYLEMAP-BASED GENERATOR Figure 1 compares the traditional style-based generator (Karras et al., 2019) and our stylemap-based generator. We propose to incorporate a stylemap instead of a style vector and to replace AdaIN operations with spatially adaptive operations. The stylemap has spatial dimensions as opposed to the style vector, thus can represent different styles across spatial locations. Accordingly, we revise SPADE (Park et al., 2019b) which modulates feature maps with spatially varying values. Since the feature maps in the synthesis network grow larger as getting closer to the output image, we introduce a stylemap resizer, which consists of convolutions and upsampling, to match the resolutions of stylemaps with the feature maps. The stylemap resizer not only resizes the stylemap, but also transforms them with learned convolutions to convey more detailed and structured styles. Figure 1 shows examples of changes of resized stylemaps across layers. Then, the affine transform A produces parameters for the modulation regarding the resized stylemaps. 
The modulation operation of the i-th layer in the synthesis network is as follows:
h_{i+1} = ( γ_i ⊙ (h_i − µ_i) / σ_i ) ⊕ β_i ,  (1)
where µ_i, σ_i ∈ R are the mean and standard deviation of the activations h_i ∈ R^{Ci×Hi×Wi} of the layer, respectively, and γ_i, β_i ∈ R^{Ci×Hi×Wi} are modulation parameters. ⊙ and ⊕ are element-wise multiplication and addition with broadcasting, respectively. We use layer normalization (Ba et al., 2016) instead of instance normalization and find it helpful to resolve droplet artifacts. In addition, we remove per-pixel noise, which is an extra source of variation, because it makes the projection complicated. Instead, β plays a similar role. Note that weight modulation (Karras et al., 2020) cannot be applied to spatially varying modulation because weights are shared across all locations in a convolution. Other details about the networks, such as the design choice of the mapping network, are given in Appendices A and B. 2.2 REAL IMAGE PROJECTION AND LOCAL EDITING Since the stylemap eases the spatial discrepancy between images and the latent space, we train an encoder to project real images into their corresponding stylemaps, which accurately reconstructs the images through the generator. The encoder is jointly trained with the generator and the discriminator. More training details are described in Appendix A. Now we have access to the accurate projection of images to the style space, which is essential to latent-based editing. Furthermore, local changes in the stylemap lead to natural local editing of images on the learned semantic manifold. In particular, we design a procedure for local transplantation, which now becomes feasible. The goal of local editing is to transplant some part of a reference image to an original image with respect to a mask which indicates the region to be modified. We project the original image and the reference image through the encoder to obtain stylemaps w and w̃, respectively. Since, in general, the mask is finer than 8×8, we blend the stylemaps in the w+ space to achieve detailed manipulation. The edited i-th resized stylemap ẅ+ is an alpha blending of w+ and w̃+:
ẅ⁺_i = m_i ⊙ w̃⁺_i + (1 − m_i) ⊙ w⁺_i ,  (2)
where the i-th resized mask m_i is shrunk by max pooling. If the mask’s shape aligns with the 8 × 8 stylemap, we can do the same alpha blending in the w space instead of the w+ space. Note that the mask can be of any shape, allowing the use of semantic segmentation methods or user scribbles. In contrast to SPADE (Park et al., 2019b) or SEAN (Zhu et al., 2020b), even masks as coarse as 8 × 8 produce plausible images, so the burden on the user to provide detailed masks is lifted. This operation can be further revised for unidentical masks of the two images (Section 3.5). 3 EXPERIMENTS Our proposed method efficiently projects images into the style space in real time and effectively manipulates specific regions of real images. We first describe our experimental setup (Section 3.1) and show how the proposed spatial dimensions of the stylemap affect the image projection and generation quality (Section 3.2). We then compare our method with the state-of-the-art methods on real image projection (Section 3.3) and local editing (Section 3.4). We finally show a more flexible editing scenario and the usefulness of our proposed method (Section 3.5). Implementation details are described in Appendix A. 3.1 EXPERIMENTAL SETUP Datasets and protocols. For evaluation, we train our model on CelebA-HQ (Karras et al., 2018) and AFHQ (Choi et al., 2020), both at a resolution of 256 × 256.
We use 500 images for validation, another 500 images for testing, and the rest for training. Because the optimization methods take an extremely long time, we limited the test set to 500 images. When we compute Fréchet inception distance (FID), the numbers of generated samples are matched to the training set. Reconstruction errors are measured with all test images. For FIDlerp, we choose random numbers between 0 and 1 for 500 random pairs of images from the test set to synthesize 500 interpolated images and compute FID between those and the test set. For local editing comparison, 250 pairs of test images in CelebA-HQ are composed with ten semantic masks (e.g., background, hair) (Lee et al., 2020) to produce 2500 images. For local editing on AFHQ, masks are randomly chosen between horizontal and vertical half-and-half masks to produce 250 images. Baselines. We compare our method against recent methods. For image projection, StyleGAN2 (Karras et al., 2020) and Image2StyleGAN (Abdal et al., 2019) infer the per-layer style vectors (analogous to our w+) via iterative optimization. In-DomainGAN (Zhu et al., 2020a) relies on optimization preceded by initialization using a domain-guided encoder. SEAN (Zhu et al., 2020b) also includes an encoder but it requires semantic segmentation masks for training. Structured noise (Alharbi & Wonka, 2020) adds input tensor with spatial dimensions to the synthesis network of StyleGAN but it does not enhance the rest of the network where the style vector still plays an important role. Editing in style (Collins et al., 2020) tries to find local semantics in the style vector. 3.2 ANALYSIS OF OUR METHOD To manipulate an image using a generative model, we first need to accurately project the image into its latent space. In Table 1, we vary the spatial resolution of stylemap and compare the performance of reconstruction and generation. As the spatial resolution increases, the reconstruction accuracy improves significantly. It demonstrates that our stylemap with spatial dimensions is highly effective for image projection. FID varies differently across datasets, possibly due to different contextual relationship between locations for generation. Note that our method with spatial resolution accurately preserves small details, e.g., the eyes are not blurred. We next evaluate the effect of the stylemap’s resolution in editing scenarios, mixing specific parts of one image and another. Figure 3 shows that the 8×8 stylemap synthesizes the most accurate and seamlessly blended images. We see that when the spatial resolution is higher than 8×8, the edited parts are easily detected. We suppose that too large stylemap harms contextual relationship across locations which is essential for realistic images. Considering the editing quality, we choose the 8×8 resolution as our best model and use it consistently for all subsequent experiments. 3.3 REAL IMAGE PROJECTION In Table 2, we compare our approach with the state-of-the art methods for real image projection. For both datasets, StyleMapGAN achieves better reconstruction quality (MSE & LPIPS) than all competitors. Also, it achieves the best FID, which implicitly shows that our manipulation on the style space leads to the most realistic images. Importantly, our method runs 100× faster than the optimization-based baselines since a single feedforward pass provides accurate projection thanks to the stylemap, which is measured in a single GPU. 
Among the baselines, SEAN also runs with a single feed-forward pass, but it requires ground-truth segmentation masks for training, which is a severe drawback for practical use. Image2StyleGAN fails to meet the requirements for editing in that it produces spurious artifacts in latent interpolation (FIDlerp and figures) and suffers from minutes of runtime.

3.4 LOCAL EDITING

We evaluate local editing performance regarding three aspects: detectability, faithfulness to the reference image inside the mask, and preservation of the source image outside the mask. Figure 4 and Figure 5 visually demonstrate that our method seamlessly composes the two images while others struggle. Since there are no metrics for evaluating the last two aspects, we propose two quantitative metrics: MSEsrc and MSEref. Table 3 shows that the results from our method are the hardest for the classifier to detect and that both source and reference images are best reflected. Note that the MSEs are not the sole measures; AP should be considered together.

3.5 UNALIGNED TRANSPLANTATION

Here, we demonstrate a more flexible use case, unaligned transplantation, showing that our local editing does not require the masks on the original and the reference images to be aligned. We project the images to their stylemaps and replace the designated region of the original stylemap with a crop of the reference stylemap, even though they are at different locations. Users can specify what to replace. Figure 6 shows examples.

4 DISCUSSION AND CONCLUSION

Invertibility of GANs is essential for editing real images with unconditional GAN models in practical time, yet it has not been properly addressed. To achieve this goal, we propose StyleMapGAN, which introduces explicit spatial dimensions to the latent space, called a stylemap. We show, through extensive evaluation, that our method based on the stylemap has a number of advantages over prior approaches, in that it can accurately project real images into the latent space in real time and synthesize high-quality output images by both interpolation and local editing. The proposed latent representation is simple, general, and can be easily integrated into existing GAN models (e.g., StyleGAN) with a wide range of network designs and data modalities. We believe that improving fidelity by applying our latent representation to other methods such as conditional GANs (e.g., BigGAN) or variational autoencoders (Kingma & Welling, 2013) would be an exciting direction for future work.

1 https://github.com/rosinality/stylegan2-pytorch
2 http://publicurl.com

B MAPPING NETWORK DESIGN FOR THE STYLEMAP

There are several choices when designing a mapping network. We can easily think of convolutional layers due to the spatial dimensions of the stylemap. Alternatively, we can remove the mapping network so that our method does not generate images from the standard Gaussian distribution and uses only real images for training, like an autoencoder (Hinton & Salakhutdinov, 2006). As shown in Figure 3, the autoencoder fails to produce realistic images using the projected stylemap. It seems to copy and paste between the two images in RGB space. We give continuous input to the generator from the standard Gaussian distribution using a mapping network, letting the network generate seamless images in image space, whereas the autoencoder only provides discrete inputs projected by the encoder. On the other hand, a mapping network with convolutional layers often struggles with reconstruction, so the edited result images are quite different from the original images.
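As an aside to the unaligned transplantation of Section 3.5 above, the following minimal PyTorch sketch shows one way to paste a crop of the reference stylemap into a designated region of the original stylemap at a different location. The function name, the box convention, and the assumption that the two regions share the same size are our own illustrative choices, not details from the paper.

```python
import torch

def transplant_stylemap(w_src, w_ref, src_box, ref_box):
    """Copy a rectangular crop of the reference stylemap into the original
    stylemap. Boxes are (top, left, height, width) on the stylemap grid and
    may sit at different locations; matching sizes are assumed here."""
    t_s, l_s, h, w = src_box
    t_r, l_r, h_r, w_r = ref_box
    assert (h, w) == (h_r, w_r), "regions must have matching sizes"
    w_out = w_src.clone()
    w_out[:, :, t_s:t_s + h, l_s:l_s + w] = w_ref[:, :, t_r:t_r + h, l_r:l_r + w]
    return w_out

# Toy usage on 8x8 stylemaps (channel count is an assumption).
w_orig = torch.randn(1, 64, 8, 8)
w_ref = torch.randn(1, 64, 8, 8)
edited = transplant_stylemap(w_orig, w_ref, src_box=(0, 4, 4, 4), ref_box=(2, 2, 4, 4))
```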
We attribute this limitation of the convolutional mapping network to the fact that a convolutional layer's mapping is bounded to a local area. In an MLP, every input is fully connected to every output, which can produce a more plausible latent space.

C RELATED WORK

C.1 LATENT-BASED IMAGE EDITING

There are active studies (Abdal et al., 2019; Collins et al., 2020; Zhu et al., 2020a) on image editing using latent vector arithmetic, in which well-trained GANs (Karras et al., 2019; 2020) are adopted for real-world applications. These studies aim to find a latent vector that reconstructs an original image. In general, there are two approaches to embedding images into latent vectors: learning-based and optimization-based ones. The learning-based approach (Zhu et al., 2020a; Perarnau et al., 2016; Zhu et al., 2016) trains an encoder that maps a given image to a latent vector. This approach has the potential to project an image in real time. However, the existing methods suffer from low quality of the reconstructed images, which indicates the difficulty of embedding real images. The optimization-based approach (Creswell & Bharath, 2018; Lipton & Tripathi, 2017; Ma et al., 2018; Abdal et al., 2019), given an input image, aims at optimizing the latent vector to minimize the pixel-wise reconstruction loss. Though it is not feasible to project images in real time due to its iterative nature, it exhibits high quality of the reconstructed images while enabling edits that include global changes in semantic attributes, e.g., smiling or beard. Compared with these approaches, our StyleMapGAN can project images in real time while offering high-quality reconstructions.

C.2 LOCAL EDITING

Several methods (Collins et al., 2020; Alharbi & Wonka, 2020; Zhu et al., 2020b) tackle locally editing specific parts (e.g., nose, background), as opposed to most GAN-based image editing methods, which modify global appearance. Editing in style (Collins et al., 2020) tries to identify components in the style vector that are responsible for specific parts and thereby achieves local editing. It requires preceding component analysis, and the correspondence between the components and regions is loose. Structured noise injection (Alharbi & Wonka, 2020) replaces the learned constant of StyleGAN with an input tensor that has spatial dimensions and is a combination of local and global codes. Though it learns some sense of spatial disentanglement, its applicability is limited due to the separate source of variation, the style vector. These two methods are limited to editing fake images, while editing real images with them requires projecting the images into the latent space. SEAN (Zhu et al., 2020b) facilitates editing real images by encoding images into per-region style codes and manipulating them. However, per-region style codes do not capture details, and SEAN requires semantic segmentation masks for training. On the other hand, our StyleMapGAN captures and controls fine details of images with a stylemap, which has explicit spatial correspondence with images. Our method does not require segmentation masks for training.

C.3 CONDITIONAL IMAGE SYNTHESIS

Conditional image synthesis models, such as image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Kim et al., 2020), learn to synthesize an output image given an original image.
Thanks to this framework, many applications have been successfully built, including colorization (Kim et al., 2019; Larsson et al., 2016; Zhang et al., 2016), image inpainting (Liu et al., 2018; Pathak et al., 2016; Yang et al., 2017), and semantic image synthesis and editing (Wang et al., 2018; Chen & Koltun, 2017; Park et al., 2019a; Portenier et al., 2018). Recent models extend this framework to multi-domain and multimodal settings (Huang et al., 2018; Lee et al., 2018; Choi et al., 2020). Image-to-image translation and local editing have been studied separately since they target different objectives, i.e., global and local levels of detail in image generation. However, our method can be applied to both tasks: semantic manipulation of the stylemap for image-to-image translation and local manipulation of the stylemap for local editing. For example, our StyleMapGAN can make only the eyes or only the mouth laugh via local editing, as well as change the domain of a generated image via global semantic manipulation.
1. How does the proposed approach differ from previous works, particularly SEAN?
2. How does the method preserve the actual style-mixing capability in the proposed architecture?
3. How reliable are the FID and FID_lerp results with only 500 samples, and how do they compare to results obtained with larger sample sizes?
4. Why do the results seem dependent on the size of the stylemap, and how do the numeric results in Table 1 compare to the bolded 32x32 case?
5. What is the number of parameters in comparison to StyleGAN, and how does this affect the comparison to vanilla StyleGAN?
6. Was the StyleGAN CelebA-HQ network trained on 1024p or 256x256, and how does this affect the comparison in Table 2?
7. How does the approach compare LPIPS on the full image versus the center region only, and what are the suitability and effects of comparing LPIPS on the full image?
8. What are the values of learning rates and the number of iterations per picture used in the early StyleGAN projection scripts?
9. Why did the authors not compare to any established GAN-with-encoder setups, such as VAE-GAN, and how would such comparisons affect the quantitative gains in LPIPS?
Review
Review Summary: In order to use GANs for image manipulation, one either needs a GAN-with-an-encoder variant or one must train an encoder afterwards, e.g. using an optimization process with SGD on the generator. This paper proposes to include spatial information in the latent representation of a StyleGAN, which helps to train an encoder simultaneously with the generator, in order to manipulate each region with more precision and fewer side effects.

##########################################################################

Reasons for score: The approach has merits and the method might be valuable, but I had multiple questions about whether the results are as strong as the authors suggest. If my concerns about the results are addressed appropriately, I am willing to improve my score.

##########################################################################

Pros: Real-time encoding of input images while retaining high output quality has not been conclusively solved yet, and the method seems to improve in this direction. The qualitative results seem promising. Some of the quantitative results indicate state-of-the-art performance. The approach appears novel and interesting, though not a major departure from previous works. The closest work seems to be SEAN, but it requires semantic masks during training, while the proposed approach does not. The presentation is mostly clear and there are multiple relevant experiments.

##########################################################################

Cons: Overall, the evaluation of generative models has to take into account a lot of factors, since there are various trade-offs that can take place. One is whether the actual style-mixing (i.e. combination of 2+ latent inputs) capability is conserved in your proposed architecture. I see no comparison of this aspect?

It is not clear to me whether the FID and FID_lerp results are reliable when one uses 500 samples only. Previous works usually use 10 000 or 50 000 samples. Dispersion is not given at all by the authors. I appreciate that projecting 3000 or 10000 samples takes a lot of time, but if it is not feasible, then you just cannot use this metric to begin with, right? A simple experiment you could do quickly to refute my point: now that you have your 500-sample set for each baseline, take subsamples of 400 and 450 separately for each. Do the FID results remain the same, relative to each other? If not, then your sample size of 500 is not informative. If they do, then I stand (probably) corrected.

Results seem rather dependent on the size of the stylemap. In Fig 3, 8x8 is said to be the only good choice. Yet, the numeric results in Table 1 are much worse for the 8x8 than for the bolded 32x32 case. I grant that many of the results at 8x8 are still better than vanilla StyleGAN, but to me it seems like the authors are using different variants of the model when measuring different things, while not explicitly justifying this in the text. In Tables 2-3 that follow, it is not clear which stylemap size is being used. I find no mention of this.

Prior work in this field is broader than covered here; there are recent encoder-based approaches for style-modulation-based decoders, e.g. [1], which also have very fast encoding (no iterations); some of these could be natural baselines here. At the very least, it would seem relevant to mention them unless the authors can justify otherwise.
##########################################################################

Questions during rebuttal period:

The architecture appears somewhat bigger than regular StyleGAN. What is the number of parameters in comparison to StyleGAN? If the difference is large, then this raises the question of whether the comparison to vanilla StyleGAN is fair in this regard. (I realize the major difference in the speed of encoding remains.)

You downgraded StyleGAN's maximum output resolution (1024p) to 256x256. Was the StyleGAN CelebA-HQ network trained on 1024p or 256x256? If it is trained at a higher resolution, it is not clear if the comparison is fully fair in Table 2.

Do you compare LPIPS on the full image instead of the center region only, and if you do it on the full image, can you comment on the suitability of that approach as opposed to comparing on the center region only? The focus on the center region is used e.g. in Perceptual Path Length calculations (at least in Karras et al. 2018) and elsewhere to remove the effects of the background.

The early StyleGAN projection scripts require specifying learning rates and the number of iterations per picture. What are the values of those parameters here? (A low LR plus many iterations considerably improve the StyleGAN projections.)

The authors do not really compare to any of the established GAN-with-encoder setups. Given that we now allow an encoder, a decoder and a discriminator, one would presume the first baseline to be one of the known GAN-with-encoder variants such as VAE-GAN [2], simply extended to use the StyleGAN decoder instead of the traditional decoder. Can you justify not doing this kind of comparison? The rationale is to determine whether the quantitative gains (e.g. in LPIPS) arise from stylemaps or simply from the setup of training the encoder in unison with the other networks.

[1] Stanislav Pidhorskyi, Donald Adjeroh, and Gianfranco Doretto. Adversarial Latent Autoencoders. arXiv preprint arXiv:2004.04467, 2020.
[2] A. Larsen, S. Kaae Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning (ICML), pages 1558–1566, 2016.
ICLR
Title A StyleMap-Based Generator for Real-Time Image Projection and Local Editing

Abstract Generative adversarial networks (GANs) have been successful in synthesizing and manipulating synthetic but realistic images from latent vectors. However, it is still challenging for GANs to manipulate real images, especially in real time. State-of-the-art GAN-based methods for editing real images suffer from time-consuming operations in projecting real images to latent vectors. Alternatively, an encoder can be trained to embed real images into the latent space instantly, but it loses details drastically. We propose StyleMapGAN, which adopts a novel representation of the latent space, called a stylemap, incorporating spatial dimensions into the embedding. Because each spatial location in the stylemap contributes to its corresponding region of the generated images, real-time projection through the encoder becomes accurate, and editing real images becomes spatially controllable. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. In particular, detailed comparisons show that our local editing method successfully reflects not only the color and texture but also the shape of a reference image while preserving untargeted regions.

1 INTRODUCTION

Generative adversarial networks (GANs) (Goodfellow et al., 2014) have evolved dramatically in recent years, enabling high-fidelity image synthesis with models that are learned directly from data (Brock et al., 2019; Karras et al., 2019; 2020). Recent studies have shown that GANs naturally learn to encode rich semantics within the latent space; thus, changing the latent code leads to manipulating the corresponding attributes of the output images (Jahanian et al., 2020; Shen et al., 2020; Härkönen et al., 2020; Goetschalckx et al., 2019; Shen & Zhou, 2020; Alharbi & Wonka, 2020). However, it is still challenging to apply these manipulations to real images, since the GAN itself lacks an inverse mapping from an image back to its corresponding latent code. One promising approach for manipulating real images is image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Choi et al., 2018), where the model learns to directly synthesize an output image given a user's input. However, these methods require pre-defined tasks and heavy supervision (e.g., input-output pairs, class labels) for training and also limit user controllability at inference time. Another approach is to utilize pretrained GAN models by directly optimizing the latent code for an individual image (Abdal et al., 2019; Zhu et al., 2016; Ma et al., 2018; Noguchi & Harada, 2019). However, even on high-end GPUs, it requires minutes of computation for each target image, and it does not guarantee that the optimized code lies in the original latent space of the GAN. A more practical approach is to train an extra encoder which learns to project an image into its corresponding latent code (Zhu et al., 2020a; Perarnau et al., 2016; Luo et al., 2017). Although this approach enables real-time projection in a single feed-forward manner, it suffers from low fidelity of the projected image (i.e., losing details of the target image). We attribute this limitation to the absence of spatial dimensions in the latent space.
Without spatial dimensions, an encoder compresses the local semantics of an image into a vector in an entangled manner, making it difficult to reconstruct the image (e.g., a vector-based or low-resolution bottleneck layer is not capable of producing high-frequency details (Lample et al., 2017; Chang et al., 2018)). As a solution to such problems, we propose StyleMapGAN, which exploits a stylemap, a novel representation of the latent space. Our key idea is simple. Instead of learning a vector-based latent representation, we utilize a tensor with explicit spatial dimensions. Our proposed representation benefits from its spatial dimensions, enabling GANs to easily encode the local semantics of images into the latent space. This property allows an encoder to effectively project an image into the latent space, thus providing high-fidelity and real-time projection. In addition, our method offers a new capability to edit specific regions of an image by manipulating the matching positions of the stylemap. We demonstrate, on multiple datasets, that our stylemap indeed substantially enhances the projection quality compared to the traditional vector-based latent representation (Section 3.2). Furthermore, we show the advantage of our method over state-of-the-art methods on image projection, interpolation, and local editing (Section 3.3 & Section 3.4). Finally, we show that our method can transplant regions even when the regions are not aligned between one image and another (Section 3.5). We will make our code and pretrained models publicly available for the research community.

2 STYLEMAPGAN

Our goal is to accurately project images to a latent space with an encoder in real time and to locally manipulate images in the latent space. We propose StyleMapGAN, which adopts a stylemap, a novel representation of the intermediate latent space with spatial dimensions. It allows accurate reconstruction with the encoder by alleviating the spatial discrepancy between images and the latent space, which has been causing the encoder to lose details. Furthermore, local changes in the stylemap lead to local editing of images thanks to the explicit spatial correspondence between the stylemap and images. Section 2.1 explains how we design the mapping network and the synthesis network to incorporate the stylemap. Section 2.2 describes our procedure for image-to-latent projection and local editing.

2.1 STYLEMAP-BASED GENERATOR

Figure 1 compares the traditional style-based generator (Karras et al., 2019) and our stylemap-based generator. We propose to incorporate a stylemap instead of a style vector and to replace AdaIN operations with spatially adaptive operations. The stylemap has spatial dimensions, as opposed to the style vector, and thus can represent different styles across spatial locations. Accordingly, we revise SPADE (Park et al., 2019b), which modulates feature maps with spatially varying values. Since the feature maps in the synthesis network grow larger as they get closer to the output image, we introduce a stylemap resizer, which consists of convolutions and upsampling, to match the resolutions of the stylemaps with the feature maps. The stylemap resizer not only resizes the stylemap but also transforms it with learned convolutions to convey more detailed and structured styles. Figure 1 shows examples of the changes of the resized stylemaps across layers. Then, the affine transform A produces parameters for the modulation from the resized stylemaps.
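To illustrate the generator components just described, here is a small PyTorch sketch of one possible stylemap resizer stage and the per-layer affine transform A that maps a resized stylemap to the spatially varying modulation parameters γ and β. This is a hedged sketch: the channel counts, kernel sizes, activation, and upsampling mode are our assumptions, not details from the paper or its released code.

```python
import torch
import torch.nn as nn

class StylemapResizerBlock(nn.Module):
    """One resizer stage: upsample the stylemap 2x and refine it with a
    learned convolution (channel counts and kernel size are assumptions)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, w):
        return self.act(self.conv(self.up(w)))

class AffineTransform(nn.Module):
    """The per-layer transform A: 1x1 convolutions predicting the spatially
    varying modulation parameters gamma and beta from the resized stylemap."""
    def __init__(self, style_ch, feat_ch):
        super().__init__()
        self.to_gamma = nn.Conv2d(style_ch, feat_ch, kernel_size=1)
        self.to_beta = nn.Conv2d(style_ch, feat_ch, kernel_size=1)

    def forward(self, w_resized):
        return self.to_gamma(w_resized), self.to_beta(w_resized)

# Toy usage: grow an 8x8 stylemap to 16x16 and predict modulation parameters.
w = torch.randn(1, 64, 8, 8)
resizer = StylemapResizerBlock(64, 64)
affine = AffineTransform(64, 128)
gamma, beta = affine(resizer(w))   # both of shape (1, 128, 16, 16)
```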
1. What is the focus of the paper, and what are its benefits?
2. What are the strengths of the proposed approach, particularly in terms of local editing results?
3. Do you have any concerns or questions regarding the effectiveness of the method, especially when dealing with imperfect masks or small parts of the face?
4. How does the reviewer assess the performance of the method compared to other state-of-the-art models, such as Image2StyleGAN and Poisson Blending?
5. Are there any limitations or restrictions in applying the proposed method, such as the need for a reference image or alignment issues?
6. Why did the authors choose not to compare their method with StyleGAN2 on the FFHQ dataset?
7. Are there any typos or minor errors in the paper that the reviewer noticed?
Review
Review Summary: This paper presents StyleMapGAN, which adopts a stylemap as the representation of the latent space. Benefiting from the spatial dimensions of this representation, the jointly trained encoder can provide high-fidelity real-time projection. In addition, their method shows high-quality local editing results superior to current state-of-the-art models. The paper is well written with clear diagrams for the overall architecture. The idea is novel and seems to be relatively effective in practice. Other than that, I have the following questions and remarks:

In Figure 3 (8x8), there is still a visible border near the nose. Does the method proposed in the paper only apply to a perfect mask? Can it perfectly edit only a small part of the face? From my point of view, only the blending application is really effective. The results of embedding and interpolation do not show sufficient superiority over Image2StyleGAN. On the blending task, the authors should add some comparisons with Poisson Blending and Image2StyleGAN++. Does the proposed method always need a reference image? Since two images may not be aligned, how can one edit facial components like the eyes and mouth? There should be some examples of unaligned transplantation on human faces. Why not compare with StyleGAN2 on the FFHQ dataset? Typo: "and and" on page 7.

Overall, my rating lies between borderline and weak accept. My rating might be higher if the above questions are resolved properly.
ICLR
Title A StyleMap-Based Generator for Real-Time Image Projection and Local Editing Abstract Generative adversarial networks (GANs) have been successful in synthesizing and manipulating synthetic but realistic images from latent vectors. However, it is still challenging for GANs to manipulate real images, especially in real-time. State-ofthe-art GAN-based methods for editing real images suffer from time-consuming operations in projecting real images to latent vectors. Alternatively, an encoder can be trained to embed real images to the latent space instantly, but it loses details drastically. We propose StyleMapGAN, which adopts a novel representation of latent space, called stylemap, incorporating spatial dimension into embedding. Because each spatial location in the stylemap contributes to its corresponding region of the generated images, the real-time projection through the encoder becomes accurate as well as editing real images becomes spatially controllable. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Especially, detailed comparisons show that our local editing method successfully reflects not only the color and texture but also the shape of a reference image while preserving untargeted regions. 1 INTRODUCTION Generative adversarial networks (GANs) (Goodfellow et al., 2014) have evolved dramatically in recent years, enabling high-fidelity image synthesis with models which are learned directly from data (Brock et al., 2019; Karras et al., 2019; 2020). Recent studies have shown that GANs naturally learn to encode rich semantics within the latent space, thus changing the latent code leads to manipulating the corresponding attributes of the output images (Jahanian et al., 2020; Shen et al., 2020; Härkönen et al., 2020; Goetschalckx et al., 2019; Shen & Zhou, 2020; Alharbi & Wonka, 2020). However, it is still challenging to apply these manipulations to real images, since the GAN itself lacks an inverse mapping from an image back to its corresponding latent code. One promising approach for manipulating real images is image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Choi et al., 2018), where the model learns to directly synthesize an output image given a user’s input. However, these methods require pre-defined tasks and heavy supervision (e.g., input-output pairs, class labels) for training, and also limit the user controllability at inference time. Another approach is to utilize pretrained GAN models, by directly optimizing the latent code for an individual image (Abdal et al., 2019; Zhu et al., 2016; Ma et al., 2018; Noguchi & Harada, 2019). However, even on high-end GPUs, it requires minutes of computation for each target image, and it does not guarantee that the optimized code would be placed in the original latent space of GAN. A more practical approach is to train an extra encoder which learns to project an image into its corresponding latent code (Zhu et al., 2020a; Perarnau et al., 2016; Luo et al., 2017). Although this approach enables real-time projection in a single feed-forward manner, it suffers from the low fidelity of the projected image (i.e., losing details of the target image). We attribute this limitation to the absence of spatial dimensions in the latent space. 
Without the spatial dimensions, an encoder compresses the local semantics of an image into a vector in an entangled manner, making it difficult to reconstruct the image (e.g., vector-based or low-resolution bottleneck layer is not capable of producing high-frequency details (Lample et al., 2017; Chang et al., 2018)). As a solution to such problems, we propose StyleMapGAN which exploits stylemap, a novel representation of the latent space. Our key idea is simple. Instead of learning a vector-based latent repre- sentation, we utilize a tensor with explicit spatial dimensions. Our proposed representation benefits from its spatial dimensions, enabling GANs to easily encode the local semantics of images into the latent space. This property allows an encoder to effectively project an image into the latent space, thus providing high-fidelity and real-time projection. In addition, our method offers a new capability to edit specific regions of an image by manipulating the matching positions of the stylemap. We demonstrate, on multiple datasets, that our stylemap indeed substantially enhances the projection quality compared to the traditional vector-based latent representation (Section 3.2). Furthermore, we show the advantage of our method over state-of-the art methods on image projection, interpolation, and local editing (Section 3.3 & Section 3.4). Finally, we show that our method can transplant regions even when the regions are not aligned between one image and another (Section 3.5). We will make our code and pretrained models publicly available for research community. 2 STYLEMAPGAN Our goal is to accurately project images to a latent space with an encoder in real-time and to locally manipulate images on the latent space. We propose StyleMapGAN which adopts stylemap, a novel representation of the intermediate latent space with spatial dimensions. It allows accurate reconstruction with the encoder by alleviating the spatial discrepancy between images and the latent space which has been causing the encoder to lose details. Furthermore, local changes in the stylemap lead to local editing of images thanks to the explicit spatial correspondence between the stylemap and images. Section 2.1 explains how we design the mapping network and the synthesis network to incorporate the stylemap. Section 2.2 describes our procedure for the image-to-latent projection and the local editing. 2.1 STYLEMAP-BASED GENERATOR Figure 1 compares the traditional style-based generator (Karras et al., 2019) and our stylemap-based generator. We propose to incorporate a stylemap instead of a style vector and to replace AdaIN operations with spatially adaptive operations. The stylemap has spatial dimensions as opposed to the style vector, thus can represent different styles across spatial locations. Accordingly, we revise SPADE (Park et al., 2019b) which modulates feature maps with spatially varying values. Since the feature maps in the synthesis network grow larger as getting closer to the output image, we introduce a stylemap resizer, which consists of convolutions and upsampling, to match the resolutions of stylemaps with the feature maps. The stylemap resizer not only resizes the stylemap, but also transforms them with learned convolutions to convey more detailed and structured styles. Figure 1 shows examples of changes of resized stylemaps across layers. Then, the affine transform A produces parameters for the modulation regarding the resized stylemaps. 
The modulation operation of the i-th layer in the synthesis network is as follows: hi+1 = ( γi hi − µi σi ) ⊕ βi (1) where µi, σi ∈ R are the mean and standard deviation of activations hi ∈ RCi×Hi×Wi of the layer, respectively. γi, βi ∈ RCi×Hi×Wi are modulation parameters. and ⊕ are element-wise multiplication and addition with broadcasting, respectively. We use layer normalization (Ba et al., 2016) instead of instance normalization and find it helpful to resolve droplet artifacts. In addition, we remove per-pixel noise which is an extra source of variation because it makes the projection complicated. Instead, β plays the similar role. Note that weight modulation (Karras et al., 2020) cannot be applied to spatially varying modulation because weights are shared across all locations in a convolution. Other details about the networks such as a design choice of mapping network are given in Appendix A and B. 2.2 REAL IMAGE PROJECTION AND LOCAL EDITING Since the stylemap eases the spatial discrepancy between images and the latent space, we train an encoder to project real images into its corresponding stylemaps, which accurately reconstructs the images through the generator. The encoder is jointly trained with the generator and the discriminator. More training details are described in Appendix A. Now we have access to the accurate projection of images to the style space which is essential to latent-based editing. Furthermore, local changes in the stylemap leads to natural local editing of images on the learned semantic manifold. Especially, we design a procedure for local transplantation which now becomes feasible. The goal of local editing is to transplant some part of a reference image to an original image with respect to a mask which indicates the region to be modified. We project the original image and the reference image through the encoder to obtain stylemaps w and w̃, respectively. In general, the mask is finer than 8×8, we blend the stylemaps on w+ space to achieve detailed manipulation. The edited i-th resized stylemap ẅ+ is an alpha blending of w+ and w̃+: ẅ+i = mi w̃ + i + (1−mi) wi + (2) where i-th resized mask mi is shrunk by max pooling. If the mask’s shape aligns with the 8 × 8 stylemap, we can do the same alpha blending on the w space instead of the w+ space. Note that the mask can be in any shape allowing usage of semantic segmentation methods or user scribbles. On the contrary to SPADE (Park et al., 2019b) or SEAN (Zhu et al., 2020b), even coarse masks as coarse as 8 × 8 produces plausible images so that the burden for user to provide detailed masks is lifted. This operation can be further revised for unidentical masks of the two images (Section 3.5). 3 EXPERIMENTS Our proposed method efficiently projects images into the style space in real-time and effectively manipulate specific regions of real images. We first describe our experimental setup (Section 3.1) and show how the proposed spatial dimensions of stylemap affect the image projection and generation quality (Section 3.2). We then compare our method with the state-of-the art methods on real image projection (Section 3.3) and local editing (Section 3.4). We finally show a more flexible editing scenario and usefulness of our proposed method (Section 3.5). Implementation details are described in Appendix A. 3.1 EXPERIMENTAL SETUP Datasets and protocols. For evaluation, we train our model on CelebA-HQ (Karras et al., 2018) and AFHQ (Choi et al., 2020), both at resolution of 256 × 256. 
We use 500 images for validation, another 500 images for testing, and the rest for training. Because the optimization methods take an extremely long time, we limited the test set to 500 images. When we compute Fréchet inception distance (FID), the numbers of generated samples are matched to the training set. Reconstruction errors are measured with all test images. For FIDlerp, we choose random numbers between 0 and 1 for 500 random pairs of images from the test set to synthesize 500 interpolated images and compute FID between those and the test set. For local editing comparison, 250 pairs of test images in CelebA-HQ are composed with ten semantic masks (e.g., background, hair) (Lee et al., 2020) to produce 2500 images. For local editing on AFHQ, masks are randomly chosen between horizontal and vertical half-and-half masks to produce 250 images. Baselines. We compare our method against recent methods. For image projection, StyleGAN2 (Karras et al., 2020) and Image2StyleGAN (Abdal et al., 2019) infer the per-layer style vectors (analogous to our w+) via iterative optimization. In-DomainGAN (Zhu et al., 2020a) relies on optimization preceded by initialization using a domain-guided encoder. SEAN (Zhu et al., 2020b) also includes an encoder but it requires semantic segmentation masks for training. Structured noise (Alharbi & Wonka, 2020) adds input tensor with spatial dimensions to the synthesis network of StyleGAN but it does not enhance the rest of the network where the style vector still plays an important role. Editing in style (Collins et al., 2020) tries to find local semantics in the style vector. 3.2 ANALYSIS OF OUR METHOD To manipulate an image using a generative model, we first need to accurately project the image into its latent space. In Table 1, we vary the spatial resolution of stylemap and compare the performance of reconstruction and generation. As the spatial resolution increases, the reconstruction accuracy improves significantly. It demonstrates that our stylemap with spatial dimensions is highly effective for image projection. FID varies differently across datasets, possibly due to different contextual relationship between locations for generation. Note that our method with spatial resolution accurately preserves small details, e.g., the eyes are not blurred. We next evaluate the effect of the stylemap’s resolution in editing scenarios, mixing specific parts of one image and another. Figure 3 shows that the 8×8 stylemap synthesizes the most accurate and seamlessly blended images. We see that when the spatial resolution is higher than 8×8, the edited parts are easily detected. We suppose that too large stylemap harms contextual relationship across locations which is essential for realistic images. Considering the editing quality, we choose the 8×8 resolution as our best model and use it consistently for all subsequent experiments. 3.3 REAL IMAGE PROJECTION In Table 2, we compare our approach with the state-of-the art methods for real image projection. For both datasets, StyleMapGAN achieves better reconstruction quality (MSE & LPIPS) than all competitors. Also, it achieves the best FID, which implicitly shows that our manipulation on the style space leads to the most realistic images. Importantly, our method runs 100× faster than the optimization-based baselines since a single feedforward pass provides accurate projection thanks to the stylemap, which is measured in a single GPU. 
SEAN also runs with a single feedforward pass, but it requires ground-truth segmentation masks for training which is a severe drawback for practical uses. Image2StyleGAN fails to meet requirements for editing in that it produces spurious artifacts in latent interpolation (FIDlerp and figures) and suffers from minutes of runtime. 3.4 LOCAL EDITING We evaluate local editing performance regarding three aspects: detectability, faithfulness to the reference image in the mask and preservation of the source image outside the mask. Figure 4 and Figure 5 visually demonstrate that our method seamlessly composes the two images while others struggle. Since there is no metrics for evaluating the last two aspects, we propose two quantitative metrics: MSEsrc and MSEref. Table 3 shows that the results from our method are the hardest for the classifier to detect and both source and reference images are best reflected. Note that MSEs are not the sole measures but AP should be considered together. 3.5 UNALIGNED TRANSPLANTATION Here, we demonstrate more flexible use case, unaligned transplantation, showing that our local editing does not require the masks on the original and the reference images to be aligned. We project the images to the stylemaps and and replace the designated region of the original stylemap with the crop of the reference stylemap even though they are on the different locations. Users can specify what to replace. Figure 6 shows examples. 4 DISCUSSION AND CONCLUSION Invertibility of GANs has been essential for editing real images with unconditional GAN models at a practical time and it has not been properly answered yet. To achieve this goal, we propose StyleMapGAN, which introduces explicit spatial dimensions to the latent space, called a stylemap. We show, through extensive evaluation, that our method based on the stylemap has a number of advantages over prior approaches, in that it can accurately project real images in real-time, into the latent space, and synthesize high-quality output images by both interpolation and local editing. The proposed latent representation is simple, general, and can be easily integrated into existing GAN models (e.g., StyleGAN) with wide range of network designs and data modality. We believe that improving fidelity by applying our latent representation to other methods such as conditional GANs (e.g., BigGAN) or variational autoencoders (Kingma & Welling, 2013) would be an exciting future work. B MAPPING NETWORK DESIGN FOR THE STYLEMAP There are several choices when designing a mapping network. We can easily think of convolutional layers due to the spatial dimensions of the stylemap. Alternatively, we can remove the mapping network so that our method does not generate images from the standard Gaussian distribution, and 1https://github.com/rosinality/stylegan2-pytorch 2http://publicurl.com uses only real images for training like autoencoder (Hinton & Salakhutdinov, 2006). As shown in Figure 3, autoencoder fails to produce realistic images using the projected stylemap. It seems to copy and paste between two images on RGB space. We give continuous input to the generator from the standard Gaussian distribution using a mapping network, letting the network generate seamless images in image space. However, the autoencoder only gives discrete input, which is projected from the encoder. On the other hand, the mapping network with convolutional layers often struggles in reconstruction so that the edited results images are quite different from the original images. 
We assume that there is such a limit because the convolutional layer’s mapping is bounded to the local area. In MLP, each weight and input are fully-connected so that it can make a more plausible latent space. C RELATED WORK C.1 LATENT-BASED IMAGE EDITING There are active studies (Abdal et al., 2019; Collins et al., 2020; Zhu et al., 2020a) on image editing using latent vector arithmetic where well-trained GANs (Karras et al., 2019; 2020) are adopted for real-world applications. These studies aim to find a latent vector to reconstruct an original image. In general, there are two approaches to embed images into latent vectors, learning and optimizationbased ones. The learning-based approach (Zhu et al., 2020a; Perarnau et al., 2016; Zhu et al., 2016) trains an encoder that maps a given image to a latent vector. This method has a potential of projecting an image in real time. However, the existing methods suffer from low quality of the reconstructed images, which indicates the difficulty of embedding real images. The optimization-based approach (Creswell & Bharath, 2018; Lipton & Tripathi, 2017; Ma et al., 2018; Abdal et al., 2019), given an input image, aims at optimizing the latent vector to minimize the pixel-wise reconstruction loss. Though it is not feasible to project images in real time due to its iterative nature, it exhibits high quality of the reconstructed images while enabling edits include global changes in semantic attribute, e.g. smiling, beard, etc. Compared with these approaches, our StyleMapGAN can project images in real time while offering high quality of reconstruction images. C.2 LOCAL EDITING Several methods (Collins et al., 2020; Alharbi & Wonka, 2020; Zhu et al., 2020b) tackle locally editing specific parts (e.g., nose, background) as opposed to the most GAN-based image editing methods modifying global appearance. Editing in style (Collins et al., 2020) tries to identify components in the style vector which are responsible for specific parts and achieves local editing. It requires preceding component analysis and the correspondence between the components and regions is lose. Structured noise injection (Alharbi & Wonka, 2020) replaces the learned constant from StyleGAN with an input tensor which has spatial dimensions and is a combination of local and global codes. Though it learns some sense of spatial disentanglement, its applicability is limited due to the separate source of variation, the style vector. These two methods are limited to editing fake images while editing real images with them requires projecting the images to the latent space. SEAN (Zhu et al., 2020b) facilitates editing real images by encoding images into the per-region style codes and manipulating them. However, per-region style codes do not capture details and it requires semantic segmentation masks for training. On the other hand, our StyleMapGAN captures and controls fine details of images with a stylemap which has explicit spatial correspondence with images. Our method does not require segmentation masks for training. C.3 CONDITIONAL IMAGE SYNTHESIS Conditional image synthesis models, such as image-to-image translation (Isola et al., 2017; Zhu et al., 2017; Kim et al., 2020), learn to synthesize an output image given an original image. 
Thanks to this framework, many applications have been successfully built, including colorization (Kim et al., 2019; Larsson et al., 2016; Zhang et al., 2016), image inpainting (Liu et al., 2018; Pathak et al., 2016; Yang et al., 2017), and semantic image synthesis and editing (Wang et al., 2018; Chen & Koltun, 2017; Park et al., 2019a; Portenier et al., 2018). Recent models extend it to multi-domain and multimodal settings (Huang et al., 2018; Lee et al., 2018; Choi et al., 2020). Image-to-image translation and local editing have been studied separately since they target different objectives, i.e., global and local levels of detail in image generation. However, our method can be applied to both tasks: semantic manipulation of the stylemap for image-to-image translation and local manipulation of the stylemap for local editing. For example, our StyleMapGAN can make only the eyes or only the mouth laugh via local editing, as well as change the domain of a generated image via global semantic manipulation.
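To make the stylemap operations used for local editing and unaligned transplantation (Sections 3.4 and 3.5) concrete, the following is a minimal PyTorch-style sketch. The `encoder` and `generator` callables, the stylemap resolution, and the box convention are illustrative assumptions, not the paper's actual interface.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def local_edit(encoder, generator, src_img, ref_img, pixel_mask):
    """Blend the reference stylemap into the source stylemap inside a mask.
    src_img, ref_img: (1, 3, H, W) images; pixel_mask: (1, 1, H, W) binary."""
    src_map = encoder(src_img)   # e.g. a (1, C, 8, 8) spatial stylemap
    ref_map = encoder(ref_img)
    # Downsample the pixel-level mask to the stylemap resolution.
    m = F.interpolate(pixel_mask.float(), size=src_map.shape[-2:], mode="nearest")
    # Inside the mask take the reference stylemap, outside keep the source.
    edited_map = m * ref_map + (1.0 - m) * src_map
    return generator(edited_map)

@torch.no_grad()
def unaligned_transplant(encoder, generator, src_img, ref_img, src_box, ref_box):
    """Copy a crop of the reference stylemap into a possibly different region
    of the source stylemap. Boxes are (top, left, height, width) in stylemap
    coordinates."""
    src_map, ref_map = encoder(src_img), encoder(ref_img)
    st, sl, h, w = src_box
    rt, rl, _, _ = ref_box
    edited_map = src_map.clone()
    edited_map[:, :, st:st + h, sl:sl + w] = ref_map[:, :, rt:rt + h, rl:rl + w]
    return generator(edited_map)
```

Both edits act only on the coarse stylemap; the generator is responsible for blending the composite into a seamless output, which is why no pixel-space feathering appears in this sketch.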
1. What is the focus of the paper regarding image generation and editing? 2. What are the strengths of the proposed approach, particularly in encoding real images into spatial feature maps? 3. What are the weaknesses of the paper, especially regarding the use of the term "StyleMap"? 4. Do you have any concerns about the difference between the proposed generator and a normal generator with two branches and skip connections? 5. Can the proposed method support style mixing like StyleGAN2? 6. Why did the paper not perform experiments on the datasets that StyleGAN2 was originally trained on? 7. Are there any minor questions or problems that do not affect your score?
Review
Review Summary: This paper proposes a new method for image generation and editing. A smaller network generates multi-scale feature maps, which are used to modulate (affine transform) the features of a larger network in corresponding scales. Besides, an encoder is trained to project real images into the lowest-resolution feature map of the smaller generative network. At test time, the model can perform efficient and faithful reconstruction and spatial editing similar to image composition. Strengths: The paper is overall well-written. The idea of encoding real images into spatial feature maps, instead of non-spatial feature vectors like many prior works do, is interesting and well-motivated. Experiments show that the method performs better than several baselines on image editing in both quality and speed. Major weaknesses: I do not think the proposed intermediate feature map should be called “StyleMap”. The image “style”, sometimes used interchangeably with the term ”texture”, is usually represented by the patch distribution/statistics within an image [2, 3, 4]. It is by definition spatially-invariant. It is therefore very inappropriate to call the intermediate feature map “StyleMap” because they are spatially-variant. In fact, I do not see any major difference between the “StyleMap” and the commonly used intermediate feature map. Is there any evidence showing the “StyleMap” encodes style but not the content? What is the difference between the proposed generator with a normal generator with two branches and skip connections? Is there anything more “style” in the proposed generator? The only difference I see is that the normal skip connection only has addition/concatenation but the skip connection here has multiplicative terms in the modulation layer. This is related to the gating mechanism but has nothing to do with style. In summary, I strongly discourage the use of the word “StyleMap”, which is a misleading and unnecessarily fancy term in my opinion, and advocate to change it to something plain and faithful such as "feature map". In StyleGAN, they show that different levels of styles can be mixed to create an output image from the high-level style of one image and the low-level style of another. Does the proposed method support such manipulation? Note that StyleGAN2 reports such mixing is no longer feasible if instance normalization is removed because the style injection becomes cumulative rather than scale-specific. This paper replaces instance normalization with layer normalization, but my intuition is that layer normalization will make the style injection cumulative as well. Also note that this is different from the spatial mixing experiments conducted in the paper. It is a natural and well-motivated idea to encode a real image into spatial feature maps rather than style vectors, in order to better reconstruct it. However, I doubt if it is necessary to change the StyleGAN architecture to achieve this. StyleGAN already has inputs that represent spatial details, which is the noise space. Image2StyleGAN++ [8], which is not cited here, already explores this idea by optimizing the noise space. The proposed method is arguably faster because it's feedforward rather than optimization-based. But the basic idea remains the same. I suspect that StyleGAN2 with feedforward encoder into both style space and noise space could perform similarly to the proposed approach. The paper only compares StyleMapGAN with StyleGAN2 on CelebA-HQ and AFHQ. Neither of them was used in the original StyleGAN2 paper. 
Is there a reason why the paper doesn’t perform experiments on the datasets StyleGAN2 was originally trained on (e.g., FFHQ, LSUN)? Plus, AFHQ is a limited benchmark for the unconditional generation due to the small sample size. Data augmentation [1] should be able to improve the results of both the baseline and the proposed method considerably. Minor questions/problems that do not affect my score: The localized editing task is very similar to Shocher et al. [5] but there is no comparison with them. Some related work [6, 7] is not discussed or compared with as well. StyleGAN2 suggests that, in modulation, only scaling is important (shifting is not needed). Do the authors find shifting useful for their architecture? state-of-the art -> state-of-the-art Overall, I vote for rejection because the terminology is misleading to me, and the experiments leave a lot of things unclear. [1] Karras et al. "Training generative adversarial networks with limited data." NeurIPS 2020 [2] Portill and Eero. "A parametric texture model based on joint statistics of complex wavelet coefficients." IJCV 2000 [3] Gatys et al. "Image style transfer using convolutional neural networks." CVPR 2016 [4] Huang and Belongie. "Arbitrary style transfer in real-time with adaptive instance normalization." ICCV 2017 [5] Shocher et al. "Semantic Pyramid for Image Generation." CVPR 2020. [6] Pidhorsky et al. "Adversarial Latent Autoencoders." CVPR 2020. [7] Park et al. "Swapping Autoencoder for Deep Image Manipulation." NeurIPS 2020 [8] Abdal et al. "Image2StyleGAN++: How to Edit the Embedded Images?." CVPR 2020
ICLR
Title Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks Abstract Neural Architecture Search (NAS) is one of the most rapidly growing research fields in machine learning due to its ability to discover high-performance architectures automatically. Although conventional NAS algorithms focus on improving search efficiency (e.g., high performance with less search time), they often incur a large memory footprint and high power consumption. To remedy this problem, we propose a new paradigm for NAS that effectively reduces the use of memory while maintaining high performance. The proposed algorithm is motivated by our observation that manually designed and NAS-based architectures share similar low-level representations, regardless of the difference in the network’s topology. Reflecting this, we propose a new architectural paradigm for NAS, called Transfer-NAS, that replaces the first several cells in the generated architecture with conventional (hand-crafted) pre-trained blocks. As the replaced pre-trained blocks are kept frozen during training, the memory footprint can be significantly reduced. We demonstrate the effectiveness of the proposed method by incorporating it into Regularized Evolution and Differentiable ARchiTecture Search with Perturbation-based architecture selection (DARTS+PT) on the NAS-Bench-201 and DARTS search spaces. Extensive experiments show that Transfer-NAS significantly decreases memory usage by up to 50% while achieving higher or comparable performance compared to the baselines. Furthermore, the proposed method is 1.98× faster in terms of search time when incorporated into DARTS+PT on NAS-Bench-201 compared to the conventional method. 1 INTRODUCTION Neural Architecture Search (NAS) has become an important domain in the machine learning field due to its superior performance. Many NAS algorithms have been developed (Zoph & Le, 2017; Zoph et al., 2018; Liu et al., 2019; Cai et al., 2019), and more continue to emerge. The major advantage of NAS is that it automatically discovers the best architecture from a large search space on a target dataset. Since the solution can be found without human involvement, NAS has a wide range of applications such as image classification (Wu et al., 2019; Tan et al., 2019), object detection (Chen et al., 2020; Wang et al., 2020), and pruning (Dong & Yang, 2019). Researchers have made many attempts to improve the performance of NAS and reduce the search time (Pham et al., 2018; Liu et al., 2019; Cai et al., 2019). Query-based NAS such as Regularized Evolution (RE) (Real et al., 2019) trains and evaluates thousands of small models before restoring the best model to the original size (enlarging the network’s depth and number of channels) for evaluation. Gradient-based NAS algorithms (Liu et al., 2019; Xu et al., 2020) train a supernet, which requires a large memory footprint. A natural question arises: Can we perform the search step in NAS by training only a few cells rather than the whole network? Technically, this paradigm shortens the training time and reduces the memory footprint, because the memory required for calculating the gradients in this case is smaller than in the conventional approaches. In this paper, we show that it is possible to perform efficient searching in NAS by replacing the first several cells of a child network with pre-trained layers and letting NAS search for the remaining cells.
By analyzing the feature maps of networks sampled from the NAS-Bench-201 (Dong & Yang, 2020) search space and of a hand-crafted one, namely ResNet (He et al., 2015), we observe that the representations are very similar for low-level features compared to high-level features, as shown in Figure 1. A similar observation was reported in Kornblith et al. (2019). However, they compared the similarity among simple and similar architectures (ResNet’s family and plain networks), which cannot directly lead us to our main motivation, whereas we compare the similarity between a hand-crafted architecture and generated ones in NAS-Bench-201 that have high diversity in both topology and operations (e.g., skip connection, conv3×3, max pooling, ...). Motivated by our preliminary experimental results in Figure 1, we propose to leverage the low-level features generated by a pre-trained network to speed up the search phase in NAS. This also helps to reduce the memory footprint significantly because we do not need to calculate the gradient for these pre-trained layers. To this end, the contributions of our paper are summarized as follows: • We find that the low-level features learned by a DARTS supernet and by networks sampled from the NAS-Bench-201 search space are similar to those of ResNet, regardless of their topology and operations. • We leverage the features generated by a pre-trained baseline to improve the efficiency of NAS. Specifically, we replace the first several layers of NAS-based networks with pre-trained layers and freeze them, while leaving the other layers trainable. This results in a reduction of the memory footprint and training time of the supernet and/or subnetworks. • We demonstrate the effectiveness of our method by incorporating it into two search algorithms: evolutionary-based REA (Real et al., 2019) and gradient-based DARTS+PT (Wang et al., 2021). On NAS-Bench-201, we reduce the memory footprint by up to 2.28× and run 1.32–1.98× faster than the conventional method, while achieving a higher test accuracy. On the DARTS search space, DARTS+PT using our proposed method can find the best cell in 0.53 GPU days and allocates 1.6× less memory, while remaining competitive with the conventional method. 2 RELATED WORK Similarity of neural network representations. Recently, studies on the similarity of neural network representations have been widely conducted (Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018; Kornblith et al., 2019; Nguyen et al., 2021). In particular, Kornblith et al. (2019) provide a powerful similarity index for comparing neural network representations, namely Centered Kernel Alignment (CKA). We adopt CKA to analyze the output feature similarity between hand-crafted and NAS-generated layers. Given n examples, let X ∈ R^{n×p1} and Y ∈ R^{n×p2} be the representations of p1 and p2 neurons for these n examples. Let K = XX^T and L = YY^T be the Gram matrices, which contain the similarities between pairs of examples in X and Y. Let H = I_n − (1/n)11^T denote the centering matrix. The Hilbert-Schmidt Independence Criterion (HSIC) is defined as HSIC(K,L) = vec(HKH) · vec(HLH) / (n − 1)^2. A normalization is applied to HSIC to make it invariant to isotropic scaling, which yields CKA, defined as: CKA(K,L) = HSIC(K,L) / sqrt(HSIC(K,K) HSIC(L,L)). (1) Neural Architecture Search. The goal of NAS is to automatically discover high-performance networks on a specific task.
Reinforcement Learning (RL)-based, evolutionary-based, and gradientbased search algorithms are widely used for NAS (Zoph & Le, 2017; Pham et al., 2018; Zoph et al., 2018; Real et al., 2019; Liu et al., 2019). In the work of Zoph & Le (2017), the authors use RL and train a controller to generate the network’s configurations (e.g., topology and operations). This method requires a lot of computational resources. Evolutionary-based NAS such as (Real et al., 2019) outperforms RL-based NAS in terms of accuracy and efficiency as it reaches higher accuracy given the same amount of time during searching. The major drawback of RL-based and evolutionary-based is the repetition process of training and evaluation of candidate networks, which demands a huge amount of GPU hours. On the other hand, Gradient-based NAS (Liu et al., 2019; Xu et al., 2020; Chen et al., 2019) utilizes the back-propagation process to find the optimal network where the optimal model parameters and operations are found during training in an alternative manner. For example, Differentiable ARchiTecture Search (DARTS) (Liu et al., 2019) introduces architectural weights α beside network’s weights w, forming a supernet consisting of large candidate networks. In order to find the optimal α and w, the process requires a bi-level optimization (Anandalingam & Friesz, 1992; Colson et al., 2007), which is computationally expensive. DARTS gets rid of this issue by approximating the architecture gradient by using only a single training step, which optimizes α and w alternately. Relation between similarity of representations and NAS. Even though studying the similarity of neural network representations and NAS are two different research areas, it is important to understand whether the findings from the previous works on the similarity of feature maps between networks still hold for NAS. Not surprisingly, NAS-based architectures and hand-crafted networks generate similar representations for some layers after training, as depicted in Figure 1. This motivates us to develop a simple yet effective method to reduce the memory footprint in NAS. In our work, we use CKA proposed by Kornblith et al. (2019) for calculating the similarity1. 3 METHOD In this part, we first study the similarity of neural network representations from NAS perspective. We first show that NAS-based networks and hand-crafted networks learn and generate similar representations for several shallow layers. We then propose a simple method that reuses the features from a pre-trained hand-crafted network, which can help to improve the efficiency of NAS in terms of memory footprint and training time. 3.1 KEY OBSERVATIONS We consider comparing the similarity for two cases: (i) query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019) which train a lot of subnetworks; (ii) gradient-based NAS methods (Liu et al., 2019; Chen et al., 2019) which train a supernet. Case 1. We train a ResNet-32 on CIFAR-10 dataset (Krizhevsky & Hinton, 2009) using the same hyper-parameters used in NASBench-201 (Dong & Yang, 2020). More details can be found in Appendix A.1. We randomly select 1000 architectures from NASBench-201 and measure the similarity between the representations2 generated by these architectures and the trained ResNet-32 via CKA (Kornblith et al., 2019) similarity metric. The weights of NASBench-201 architectures are obtained from their official website. We take the average of 1000 similarities and plot them in Figure 1. 
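A block-wise/cell-wise CKA comparison of this kind can be sketched in a few lines of NumPy. This is an illustrative reimplementation of linear CKA following the definitions above, not the authors' code; the flattening of (n, c, h, w) feature maps into an (n, p) matrix is an assumption about how the representations are arranged.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n x p1) and Y (n x p2),
    following Kornblith et al. (2019); rows are the same n examples."""
    n = X.shape[0]
    K = X @ X.T                              # Gram matrices
    L = Y @ Y.T
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    def hsic(A, B):
        # vec(HAH) . vec(HBH) / (n-1)^2, written as an elementwise sum
        return np.sum((H @ A @ H) * (H @ B @ H)) / (n - 1) ** 2
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

def to_matrix(feature_map):
    """Flatten a (n, c, h, w) feature map to (n, c*h*w) so CKA can be
    applied per ResNet block / per NAS cell."""
    return feature_map.reshape(feature_map.shape[0], -1)

# Example: similarity between the k-th ResNet block and the k-th NAS cell
# over a batch of held-out images (resnet_feats, nas_feats are arrays):
# score = linear_cka(to_matrix(resnet_feats[k]), to_matrix(nas_feats[k]))
```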
It can be seen that the feature maps generated by NAS-Bench-201 architectures share similar characteristics with those from ResNet-32. In particular, the similarity is significantly higher for low-level representations and lower for high-level representations. This suggests that some of the initial layers of candidate networks in NAS can be replaced with layers from pre-trained hand-crafted networks. Moreover, this approach reduces the memory footprint since it reduces the search space size (during backpropagation). 1 The current work uses this method due to its simplicity. However, we believe that a proper metric will tell us how many pre-trained layers should be used before performance collapse (i.e., the searched network achieves inferior accuracy). 2 The feature maps generated by a ResNet block or a cell from NAS-Bench-201 architectures. Case 2. We further investigate the similarity between a pre-trained ResNet-20 and a supernet, which is used in DARTS. We compute the similarity for the representations generated by supernet cells and ResNet blocks. Figure 2 demonstrates that the supernet and ResNet generate similar low-level and mid-level patterns. This behavior is consistent with the previous observation. 3.2 PROPOSED MEMORY-EFFICIENT NEURAL ARCHITECTURE SEARCH In this section, we introduce our method for reducing the memory footprint in NAS. Based on our observation about the similarity of representations between hand-crafted and NAS-based networks, we propose replacing the first few layers of NAS-based networks with pre-trained layers of hand-crafted networks. Figure 3 shows the differences between the conventional and the proposed method for NAS. Let A be an untrained network with L cells, expressed as A = {lA1, lA2, ..., lAL}. Similarly, let B be a pre-trained baseline with K blocks, expressed as B = {lB1, lB2, ..., lBK}. Since A and B share similar representations for some cells and blocks after training, a straightforward way3 to reuse the features from the pre-trained network B is to plug these features into the untrained network A. 3 There are other ways to reuse features. Although we directly plug in the features, we find that a simple learning rate warmup paradigm can make training stable, as it lets the untrained cells adapt to the pre-trained blocks gradually. Hence, the proposed network which replaces i cells with j blocks is defined as S = {lB1, ..., lBj, lA(i+1), ..., lAL}, where i and j lie between 1 and L−1. The choice of j controls how many pre-trained blocks from B are used, while i controls how many cells of A are replaced. If i and j are large, meaning that we only train a few cells, the pre-trained blocks dominate the NAS cells. Intuitively, this makes the proposed network S look like the pre-trained network B, thus degrading the performance of NAS. When training the network S, we only update the weights from lA(i+1) to lAL. The proposed method can be used for query-based NAS and gradient-based NAS. Here, we formally describe how to use the proposed method for NAS. Query-based NAS. In many query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019), a large number of child networks are sampled and trained for a limited number of epochs. Instead of training these networks, which requires computing the gradient for all cells, we replace them with the proposed paradigm and keep the other settings at their defaults. Let A be the search space and a ∈ A be an architecture.
Instead of training a, we train s = {lb1, ..., lbj, la(i+1), ..., laL}, where b is a pre-trained architecture shared across all networks in A. The performance of s on the validation set is used for updating the controller in Zoph & Le (2017) or selecting the parent for mutation in Real et al. (2019). We term s Transfer-Net-i-j (Tr-Net-i-j). Gradient-based NAS. Differentiable architecture search (Liu et al., 2019; Xu et al., 2020) trains a supernet from which subnetworks are generated. Although the supernet comprises a highly complex topology, the starting cells still produce representations similar to a hand-crafted network after training, as demonstrated in Figure 2. Thus, the proposed method is also applicable to gradient-based NAS. Let A be the original supernet. The proposed network S now acts as the supernet, which we denote as Transfer-Supernet-i-j (Tr-Supernet-i-j). The search phase is now conducted on Tr-Supernet-i-j, which requires a smaller memory footprint. Since Tr-Net and Tr-Supernet utilize pre-trained blocks, they belong to the Tr-NAS family. 4 EXPERIMENTS In this section, we first compare the rank correlation between the validation accuracy and the final test accuracy for the proposed method and the conventional one on NAS-Bench-201. Then, we evaluate the performance of REA and DARTS+PT using the proposed method. Unless stated otherwise, the training time and memory footprint are measured on a single Nvidia GeForce 1080 Ti with the PyTorch deep learning framework. 4.1 RANK CORRELATION COMPARISON Dataset and baseline. We use NAS-Bench-201 (Dong & Yang, 2020), a benchmark dataset for NAS. NAS-Bench-201 consists of 15,625 networks, with five operations in the search space, namely none, average pooling, conv1×1, conv3×3, and skip connection. These networks are trained on CIFAR-10, CIFAR-100, and ImageNet-16-120. The baseline chosen for comparison is the conventional one, which trains for 12 epochs. We term this 'Original'. Architectures. We randomly sample 1000 networks from NAS-Bench-201. We replace parts of these networks following the proposed paradigm. The pre-trained network is a ResNet-32 (see footnote 4). We set i and j to 5, meaning that we replace the first 5 cells of these networks with the first 5 pre-trained ResNet blocks. Performance criteria. Once the training is finished, we compute the Spearman Rank-Order Correlation Coefficient (SROCC) and the Kendall Tau Rank-Order Correlation Coefficient (KROCC) between the accuracy on the validation set and the test set for the Original and the proposed method. We obtain the validation and test accuracy of the original networks from the NAS-Bench-201 dataset. Training setup. We use the same training set and validation set in our experiments, following NAS-Bench-201, for a fair comparison. We use the same hyper-parameters as in NAS-Bench-201, except that we train our methods for 18 epochs (including 13 warmup epochs). It should be noted that even though we train our networks for more epochs, the total training time of our networks is still less than that of the conventional method. Please refer to Appendix A.2 for more information. 4 A good starting point for choosing the depth of the pre-trained baseline is to minimize the difference between the receptive fields of blocks and cells. We did not extensively tune the depth of the baseline because we want to utilize a pre-trained model that is already available to speed up NAS. Result. Table 1 summarizes the results on three datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet-16-120.
It can be seen that the proposed method achieves a higher correlation than the conventional method for all of the datasets. Notably, there is a huge improvement in rank correlation on CIFAR10 when using the proposed method. The results from Table 1 indicate that, despite sharing the same representations for several first layers, the proposed method is able to preserve the ranking (even better). We obtain the allocated memory on GPU during training and present in Table 2. The results suggest that the proposed method significantly reduces the memory footprint (2.11× reduction) compared to the conventional method. We further evaluate the rank correlation for top-k architectures, which is crucial for assessing the performance as good methods should have a strong correlation for top-performing networks. We set k to 1000 and summarize the results in Table 3. As shown in Table 3, the proposed method performs consistently well for all datasets. In general, the proposed method achieves a higher rank correlation than the conventional method. These sets of experiments indicate that the proposed method is well-suited for NAS. 4.2 RESULT ON QUERY-BASED NAS We now demonstrate the effectiveness of the proposed method by incorporating to NAS algorithms. For this purpose, we use Regularized Evolution REA (Real et al., 2019), a query-based NAS algorithm. We train ours for 18 epochs (including 13 warmup epochs), other hyper-parameters are identical to those used in NAS-Bench-201. The search phase is terminated when the number of evaluated networks exceeds 100. We conduct the experiment 10 times with different seeds and plot the average test accuracy in Figure 4. As shown in Figure 4, using the proposed method, REA is able to find the top-performing network. Compared to the original method, the proposed method achieves similar performance (Figure 4. Bottom) while using roughly 2× less memory footprint (Figure 4. Top). It is noticeable that REA outputs higher average test accuracy during the search phase using our method. This behavior is natural since our method achieves a good rank correlation than the conventional one. 4.3 RESULT ON DIFFERENTIABLE NAS We shift our evaluation to another type of NAS algorithms, which uses the gradient to guide the search. We incorporate the proposed method to DARTS+PT (Wang et al., 2021), a perturbationbased architecture selection that performs on top of DARTS. The authors of DARTS+PT show that the architectural weights α do not represent the strength of the operations. Thus, they introduce an alternative way to derive the final architecture, which relies on the contribution of the operations to the supernet’s accuracy. Specifically, after the supernet converged, the operation which has less impact on the supernet’s accuracy is removed from the supernet. Then, we tune the supernet for some epochs and repeat the process until the stopping criteria is met (e.g., becoming a network with a single-path). 4.3.1 NAS-BENCH-201 SEARCH SPACE In this section, we evaluate the performance of DARTS+PT using our proposed method on NASBench-201 search space. Supernet. The original supernet is a cell-based paradigm, which repeatedly stacks cell. Each cell has 6 edges and is stacked for 5 times for the first, second, and third stages. Our Tr-Supernet-5-5 is formed by replacing the first stage of the original supernet with those from a pre-trained ResNet-32 (i.e., the first 5 residual blocks). Training setup. 
We train Tr-Supernet-5-5 and the Original Supernet for 50 epochs (including 13 warmup epochs for ours) and then perform architecture selection following DARTS+PT. After an operation is removed from the supernet, we tune the supernet for 10 epochs. The search phase is finished when all edges are processed. Other hyper-parameters are the same as in previous work (Wang et al., 2021). Additionally, we enable Cutout (Devries & Taylor, 2017) when training and tuning both the proposed supernet and the original one. The results obtained without Cutout are presented in Section 4.4.2. Result. We run experiments 25 times on CIFAR-10/CIFAR-100 and 5 times on ImageNet-16-120 with different random seeds and report the mean test accuracy. Figure 5 illustrates our results on NAS-Bench-201. As we can see from Figure 5, using the proposed supernet, DARTS+PT finishes the search phase faster than with the conventional supernet (about a 1.98× reduction on CIFAR-10/CIFAR-100 and 1.32× on ImageNet-16-120). We summarize the average test accuracy in Table 4. Specifically, the proposed method achieves a performance gain of 0.45%, 2.79%, and 0.86% on CIFAR-10, CIFAR-100, and ImageNet-16-120, respectively. We plot the memory allocated to train the supernet in Figure 6. Notably, the proposed supernet reduces the memory footprint by 2.2× for all datasets while achieving higher test accuracy compared to the conventional supernet. 4.3.2 DARTS SEARCH SPACE We perform experiments on the DARTS search space, which is much larger than the NAS-Bench-201 search space. The supernet is formed by stacking the cells 8 times. The reduction cells are located at 1/3 and 2/3 of the depth of the supernet. We use a pre-trained ResNet-20 to provide intermediate features for the supernet. We replace the first stage of the supernet with that of ResNet-20 (i.e., the first two supernet cells are replaced with the first three ResNet-20 blocks). The configuration for the pre-trained ResNet can be found in Appendix A.1. We use the same hyper-parameters for training and tuning the supernet (Liu et al., 2019; Wang et al., 2021). Additionally, we apply warmup for 5 epochs and enable Cutout (Devries & Taylor, 2017) during the search phase. We run the experiment 4 times with different random seeds and report the average (best) test error in Table 5. The result of DARTS+PT is obtained by running the official code provided by the authors. From Table 5, we can see that the proposed method works well on a larger search space. The conventional DARTS takes 0.4 days to finish the search phase and 0.85 days when applying perturbation-based architecture selection. Using the proposed method, DARTS+PT finishes the search phase in 0.53 days and allocates 1.6× less memory, while maintaining the performance. 4.4 ABLATION STUDIES 4.4.1 STABILIZING THE TRAINING VIA LEARNING RATE WARMUP Since the proposed method utilizes some pre-trained blocks while the others are randomly initialized, the network may fail to converge using the conventional optimization technique. We suggest stabilizing the network via learning rate warmup for a few epochs. Note that the warmup phase does not introduce any extra cost. The warmup phase begins with a learning rate of 0 and gradually increases to the initial learning rate over n epochs. This step encourages the untrained cells to adapt slowly to the pre-trained blocks.
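A minimal sketch of such a warmup schedule is shown below, using PyTorch's LambdaLR; the post-warmup cosine decay and the concrete hyper-parameters are illustrative assumptions rather than the paper's exact recipe.

```python
import math
import torch

def make_warmup_scheduler(optimizer, warmup_epochs, total_epochs):
    """Linear warmup from lr = 0 to the initial lr over `warmup_epochs`,
    followed by cosine decay (the post-warmup schedule is an assumption)."""
    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return epoch / max(1, warmup_epochs)            # 0 -> 1 linearly
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))   # 1 -> 0 smoothly
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Usage sketch (13 warmup epochs out of 18, as in the NAS-Bench-201 runs):
# optimizer = torch.optim.SGD(trainable_params, lr=0.1, momentum=0.9)
# scheduler = make_warmup_scheduler(optimizer, warmup_epochs=13, total_epochs=18)
# for epoch in range(18):
#     train_one_epoch(...)
#     scheduler.step()
```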
To find the best number of warmup epochs for our setting, we randomly sample 250 networks from the NAS-Bench-201 search space, replace the first i cells with j pre-trained ResNet blocks, and train them on CIFAR-100 for 18 epochs with different numbers of warmup epochs n. We use KROCC as the criterion for comparison. We investigate two cases: (i) i and j are both set to 5, denoted as Tr-Net-5-5; (ii) i and j are set to 10, denoted as Tr-Net-10-10. We vary n from 1 to 17 with a step size of 4 and show the results in Figure 7. For the first case, we can see that without the warmup phase (0 warmup epochs), the rank correlation of the proposed method is below that of the conventional method. This indicates that training is unstable, causing the network to fail to converge. When using learning rate warmup, the rank correlation gradually increases and reaches its peak when n is 13, then starts decreasing. In addition, the proposed method always outperforms the conventional method when we perform warmup for more than 3 epochs. For the second case, one can observe that if we replace too many cells with pre-trained blocks, the rank correlation is lower than the baseline. This suggests that choosing the right i and j gives a good trade-off between performance and memory reduction. 4.4.2 THE EFFECTIVENESS OF CUTOUT AUGMENTATION We investigate how Cutout (Devries & Taylor, 2017) can help to improve the performance of the proposed method for gradient-based NAS algorithms. We study two cases: (i) i and j are set to 5, denoted as Tr-Supernet-5-5; (ii) i and j are set to 10, denoted as Tr-Supernet-10-10. We perform the experiment on NAS-Bench-201 using CIFAR-100 with different numbers of warmup epochs n. We compare the performance of DARTS+PT using our supernet and the conventional one, trained with and without Cutout. We show the results in Figure 8. For the first case, DARTS+PT using our supernet outperforms the original supernet when n is greater than 7. Tr-Supernet-5-5 achieves the best performance with 13 warmup epochs. For the second case, Tr-Supernet-10-10 achieves performance comparable to the original supernet trained with Cutout. However, compared to the conventional method trained without Cutout, there is a small drop in accuracy. To find out why Cutout helps to improve the performance of our method, we compute the full Hessian ∇²_α L_valid of the validation loss with respect to the architectural parameters α and show the dominant eigenvalue in Figure 9. Zela et al. (2020) reveal that a large dominant eigenvalue of the Hessian degrades performance and suggest that using regularization techniques may improve it. As demonstrated in Figure 9, applying Cutout to the original supernet does not help to reduce the dominant eigenvalue, as it continues to rise. By contrast, the proposed method achieves better results because the dominant eigenvalue fluctuates around some value when Cutout is enabled. 5 DISCUSSION & FUTURE WORK Since the proposed method fuses pre-trained blocks and untrained cells in a straightforward manner, there are several potential directions to further improve the performance of NAS while reducing the memory footprint and training time. First, a proper similarity index may help to determine how many untrained cells are suitable for replacement. Second, it may be possible to develop a single pre-trained baseline which can be used as a starting point for NAS on other target tasks that are similar to each other (Kornblith et al., 2019).
Also, the performance of NAS may depend on the performance of the pre-trained network. Thus, designing a good pre-trained baseline which can be applied for various NAS algorithms can be a promising work. In our work, we keep things as simple as possible to demonstrate the usefulness of the proposed method, and leave other improvements as future works. 6 CONCLUSION In this work, we propose a simple yet effective method to reduce the memory footprint and shorten the training time in NAS, by replacing several first NAS-cells with those from a pre-trained handcrafted network. Our work is motivated by the observation that once converged, both hand-crafted and NAS-based architectures learn similar representations especially at low-level layers. Our method outperforms the conventional method in terms of rank correlation of validation accuracy to test accuracy on NAS-Bench-201 while requiring less memory footprint. Additionally, our method can be incorporated into different types of NAS algorithms, such as query-based or gradient-based methods, without any modification. Overall, the proposed method uses less memory footprint and shortens the training time while achieving comparable or even higher performance than the conventional methods. A APPENDIX A.1 DETAIL OF ARCHITECTURES AND TRAINING SETUP FOR PRE-TRAINED RESNET Actually one can use a pre-trained ResNet available in PyTorchCV (Semery, 2020) database as we do not modify the ResNet architecture. We do train our ResNet for a fair comparison to the conventional method because on NAS-Bench-201, the original train and test sets of CIFAR-10, CIFAR-100 are split into new train, validation, test sets. ResNet-32. The network is created by stacking basic residual block for 15 times. The downsampled block is located at 6-th and 11-st block. The number of channel is 16, 32, 64 for the first, second, and third stages, respectively. ResNet-20. Similar to ResNet-32, we stack the basic residual block for 9 times, reduce the dimension by half at 4-th and 7-th block. The number of channel is set to 64, 128, and 256 for the first, second, and third stages, respectively. The hyper-parameters for training ResNet are summarized in Table 6. A.2 TRAINING TIME OF THE PROPOSED METHOD The training time of a random network sampled from NAS-Bench-201 is displayed in Figure 10. We can see that the proposed method require less time to finish one epoch. As a result, we can increase the number of epochs for training the proposed method as long as it does not exceed the training time of conventional method for a fair comparison. In addition, we observe a similar trend for other datasets (e.g.,CIFAR-10 and ImageNet-16-120) and networks. In our work, we set the number of epochs for training ours to 18.
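To make the Tr-Net construction of Section 3.2 concrete, the following is a minimal PyTorch sketch of how the first j pre-trained blocks could be frozen while the remaining NAS cells stay trainable. The module names (`pretrained_blocks`, `nas_cells`, `head`) and the handling of BatchNorm are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TrNet(nn.Module):
    """Sketch of Transfer-Net-i-j (Section 3.2): the first j pre-trained
    blocks (assumed to include the stem) are frozen, and the NAS cells from
    index i onwards stay trainable."""

    def __init__(self, pretrained_blocks, nas_cells, head, i, j):
        super().__init__()
        self.frozen = nn.Sequential(*list(pretrained_blocks)[:j])
        self.trainable = nn.Sequential(*list(nas_cells)[i:])
        self.head = head
        # Freeze the transferred blocks: no gradients or optimizer state are
        # kept for them, which is where the memory saving comes from.
        for p in self.frozen.parameters():
            p.requires_grad = False

    def train(self, mode=True):
        # Keep the frozen part in eval mode so BatchNorm statistics stay fixed.
        super().train(mode)
        self.frozen.eval()
        return self

    def forward(self, x):
        with torch.no_grad():   # no activations are stored for backprop here
            x = self.frozen(x)
        return self.head(self.trainable(x))

# Only the trainable parameters are handed to the optimizer, e.g.:
# params = [p for p in model.parameters() if p.requires_grad]
# optimizer = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=5e-4)
```

Because the frozen forward pass stores no activations for the backward pass and its parameters carry no optimizer state, both the activation memory and the optimizer footprint shrink, consistent with the memory reductions reported above.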
1. What is the main contribution of the paper, and what is the significance of the proposed method? 2. What are the strengths and weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are the limitations of the paper, and what are the potential directions for future research? 5. Are there any ethical or societal implications of the work that the authors should consider?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose to replace several first cells of a child network with pre-trained layers and let NAS search for the remaining cells. This method is based on the observation that the low-level feature maps between the networks from NAS-Bench-201 search space and a a hand-crafted one are very similar. By freezing the first few layers, the memory footprint and convergence time are both reduced. The analysis and the experiments are done only on CIFAR-10/100 and ImageNet-16-120, and the networks from NAS-Bench-201 and ResNet-32, which is quite limited. Review The biggest problem with this paper is that the authors only studied networks from NAS-Bench-201 and ResNet-32 on datasets CIFAR-10/100 and ImageNet-16-120. It is questionable that the same observation and conclusion can be made on other models and datasets: The observation low-level feature maps generated by NAS-Bench-201 architectures share similar characteristics to those from ResNet-32 are made on CIFAR-10. As we know, CIFAR-10 is a relatively easy datasets; there may be not much low level features or patterns for a network to capture and hence the low-level feature maps from different networks are similar. Would the same observation hold on more challenging dataset? In the paper, the authors only used ResNet-32 as the pre-trained network. This network isn't particularly popular in practice. Have the author tried other networks? What about networks for other tasks like segmentation, detection? With different pre-trained models and different datasets, is it still possible to replace and freeze as many as 5 layers without performance drop? If not, then the memory and computation gain may not be as big as shown in the paper. The ablation studies are off the topic. I don't think for this paper, readers are interested in how learning rate warmup and cutout augmentation are effective.
ICLR
Title Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks Abstract Neural Architecture Search (NAS) is one of the most rapidly growing research fields in machine learning due to its ability to discover high-performance architectures automatically. Although conventional NAS algorithms focus on improving search efficiency (e.g., high performance with less search time), they often require a lot of memory footprint and power consumption. To remedy this problem, we propose a new paradigm for NAS that effectively reduces the use of memory while maintaining high performance. The proposed algorithm is motivated by our observation that manually designed and NAS-based architectures share similar low-level representations, regardless of the difference in the network’s topology. Reflecting this, we propose a new architectural paradigm for NAS, called Transfer-NAS, that replaces several first cells in the generated architecture with conventional (hand-crafted) pre-trained blocks. As the replaced pre-trained blocks are kept frozen during training, the memory footprint can significantly be reduced. We demonstrate the effectiveness of the proposed method by incorporating it into Regularized Evolution and Differentiable ARchiTecture Search with Perturbationbased architecture selection (DARTS+PT) on NAS-Bench-201 and DARTS search spaces. Extensive experiments show that Transfer-NAS significantly decreases the memory usage up-to 50% while achieving higher/comparable performance compared to the baselines. Furthermore, the proposed method is 1.98× faster in terms of search time when incorporated to DARTS+PT on NAS-Bench-201 compared to the conventional method. 1 INTRODUCTION Neural Architecture Search (NAS) has become an important domain in the machine learning field due to its superior performance. Many NAS algorithms have been developed (Zoph & Le, 2017; Zoph et al., 2018; Liu et al., 2019; Cai et al., 2019), and continue to raise in the future. The major advantage of NAS is to automatically discover the best architecture from a large search space on a target dataset. Since the solution can be found without human involvement, NAS has a wide range of applications such as image classification (Wu et al., 2019; Tan et al., 2019), object detection (Chen et al., 2020; Wang et al., 2020), and pruning (Dong & Yang, 2019). Researchers have made a lot of attempts to improve the performance of NAS and reduce the searching time (Pham et al., 2018; Liu et al., 2019; Cai et al., 2019). Query-based NAS such as Regularized Evolution (RE) (Real et al., 2019) trains and evaluates thousands of small models before restoring the best model into the original size (enlarge the network’s depth and number of channels) for evaluation. Gradient-based NAS algorithms (Liu et al., 2019; Xu et al., 2020) train a supernet which requires a lot of memory footprint. A natural question arises: Can we perform the search step in NAS by training only a few cells rather than the whole network? Technically, this paradigm shortens the training time and reduces the memory footprint, because the memory required for calculating the gradients in this case is smaller than the conventional approaches. In this paper, we show that it is possible to perform efficient searching in NAS by replacing several first cells of a child network with pre-trained layers and let NAS search for the remaining cells. 
By analyzing the feature maps between the networks sampled from NAS-Bench-201 (Dong & Yang, 2020) search space and a hand-crafted one, namely ResNet (He et al., 2015), we observe that the representations are very similar among low-level features compared to their high-level features as shown in Figure 1. It is noted that similar observation was reported in Kornblith et al. (2019). However, they compared the similarity among simple and similar architectures (ResNet’s family and plain networks) which cannot directly lead us to our main motivation, while we compared similarity between a hand-crafted architecture with generated ones in NAS-Bench-201 that have high diversity in both topology and operations (e.g., skip connection, conv3× 3, maxpooling, ...). Motivated from our preliminary experimental results in Figure 1, we propose to leverage several low-level features generated by a pre-trained network to speedup the search phase in NAS. This also helps to reduce the memory footprint significantly because we do not need to calculate the gradient for these pre-trained layers. To this end, the contributions of our paper are summarized as follows: • We find out that the low-level features learned by a DARTS’s supernet and networks sampled from NAS-Bench-201 search space, are similar to that of ResNet, regardless of their topology and operations. • We leverage the features generated by a pre-trained baseline to improve the efficiency of NAS. Specifically, we replace several first layers of NAS-based networks with several pretrained layers and freeze them, while leaving the other layers as trainable. This results in reduction of memory footprint and training time of the supernet and/or subnetworks. • We demonstrate the effectiveness of our method by incorporating the proposed method into two search algorithms: evolutionary-based REA (Real et al., 2019) and gradient-based DARTS+PT (Wang et al., 2021). On NAS-Bench-201, we save up to 2.28× memory footprint and run 1.98-1.32× faster than the conventional method, while achieving a higher test accuracy. On DARTS search space, DARTS+PT using our proposed method can find the best cell in 0.53 GPU day and allocates 1.6× less memory, while maintaining competitive compared to the conventional method. 2 RELATED WORK Similarity of neural network representations. Recently, studies on learning the similarity of neural network representations have widely been conducted (Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018; Kornblith et al., 2019; Nguyen et al., 2021). Especially, Kornblith et al. (2019) provides a powerful similarity index for comparing neural network representations, namely Centered Kernel alignment (CKA). We adopt CKA to analyze the output feature similarity between hand-crafted and NAS-generated layers. Given n examples, let X ∈ Rn×p1 and Y ∈ Rn×p2 are the representations of p1 and p2 neuron for this n examples. Let K = XXT and L = Y Y T be the Gram matrices, which are the similarities between a pair of examples in X and Y . Let H = In− 1n11 T denotes the centering matrix. HilbertSchmidt Independence Criterion (HSIC) is defined as HSIC(K,L) = vec(HKH)·vec(HLH)/(n− 1)2. The normalization index is applied to HISC to make it invariant to isotropic scaling, which is denoted as CKA and is defined as: CKA(K,L) = HSIC(K,L)√ HSIC(K,K)HSIC(L,L) . (1) Neural Architecture Search. The goal of NAS is to automatically discover high-performance networks on a specific task. 
Reinforcement Learning (RL)-based, evolutionary-based, and gradientbased search algorithms are widely used for NAS (Zoph & Le, 2017; Pham et al., 2018; Zoph et al., 2018; Real et al., 2019; Liu et al., 2019). In the work of Zoph & Le (2017), the authors use RL and train a controller to generate the network’s configurations (e.g., topology and operations). This method requires a lot of computational resources. Evolutionary-based NAS such as (Real et al., 2019) outperforms RL-based NAS in terms of accuracy and efficiency as it reaches higher accuracy given the same amount of time during searching. The major drawback of RL-based and evolutionary-based is the repetition process of training and evaluation of candidate networks, which demands a huge amount of GPU hours. On the other hand, Gradient-based NAS (Liu et al., 2019; Xu et al., 2020; Chen et al., 2019) utilizes the back-propagation process to find the optimal network where the optimal model parameters and operations are found during training in an alternative manner. For example, Differentiable ARchiTecture Search (DARTS) (Liu et al., 2019) introduces architectural weights α beside network’s weights w, forming a supernet consisting of large candidate networks. In order to find the optimal α and w, the process requires a bi-level optimization (Anandalingam & Friesz, 1992; Colson et al., 2007), which is computationally expensive. DARTS gets rid of this issue by approximating the architecture gradient by using only a single training step, which optimizes α and w alternately. Relation between similarity of representations and NAS. Even though studying the similarity of neural network representations and NAS are two different research areas, it is important to understand whether the findings from the previous works on the similarity of feature maps between networks still hold for NAS. Not surprisingly, NAS-based architectures and hand-crafted networks generate similar representations for some layers after training, as depicted in Figure 1. This motivates us to develop a simple yet effective method to reduce the memory footprint in NAS. In our work, we use CKA proposed by Kornblith et al. (2019) for calculating the similarity1. 3 METHOD In this part, we first study the similarity of neural network representations from NAS perspective. We first show that NAS-based networks and hand-crafted networks learn and generate similar representations for several shallow layers. We then propose a simple method that reuses the features from a pre-trained hand-crafted network, which can help to improve the efficiency of NAS in terms of memory footprint and training time. 3.1 KEY OBSERVATIONS We consider comparing the similarity for two cases: (i) query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019) which train a lot of subnetworks; (ii) gradient-based NAS methods (Liu et al., 2019; Chen et al., 2019) which train a supernet. Case 1. We train a ResNet-32 on CIFAR-10 dataset (Krizhevsky & Hinton, 2009) using the same hyper-parameters used in NASBench-201 (Dong & Yang, 2020). More details can be found in Appendix A.1. We randomly select 1000 architectures from NASBench-201 and measure the similarity between the representations2 generated by these architectures and the trained ResNet-32 via CKA (Kornblith et al., 2019) similarity metric. The weights of NASBench-201 architectures are obtained from their official website. We take the average of 1000 similarities and plot them in Figure 1. 
It can be seen that the feature maps generated by NAS-Bench-201 architectures share similar characteristics to those from ResNet-32. Especially, the similarity is significantly higher for low-level representations while lower for high-level representations. This suggests that some of the initial 1Current work uses this method due to its simplicity. However, we believe that a proper metric will tell us how many pre-trained layers should be used before performance collapse (i.e., the searched network achieves inferior accuracy). 2The feature maps generated by a ResNet block or a cell from NAS-Bench-201 architectures. layers of candidate networks in NAS can be replaced with layers from pre-trained hand-crafted networks. Moreover, this approach reduces the memory footprint since it reduces the search space size (during backpropagation). Case 2. We further investigate the similarity between a pre-trained ResNet-20 and a supernet, which is used in DARTS. We compute the similarity for the representations generated by supernet cells and ResNet blocks. Figure 2 demonstrates that the supernet and ResNet generate similar low-level and mid-level patterns. This behavior is consistent with previous observation. 3.2 PROPOSED MEMORY-EFFICIENT NEURAL ARCHITECTURE SEARCH In this section, we introduce our method for reducing the memory footprint in NAS. Based on our observation about similarity of representations between hand-crafted and NAS based networks, we propose replacing a first few layers of NAS based networks with pre-trained layers of hand-crafted networks. Figure 3 shows the differences between the conventional and the proposed method for NAS. Let A be an untrained network with L cells and is expressed as A = {lA1, lA2, ..., lAL}. Similarly, let B be a pre-trained baseline with K blocks and is expressed as B = {lB1, lB2, ..., lBK}. Since A and B share similar representations for some cells and blocks after training, a straightforward way3 to reuse the features from pre-trained network B is to plug these features to untrained network A. Hence, the proposed network which replaces i cells with j blocks is defined as S = {lB1, ..., lBj , lA(i+1), ..., lAL}, where i and j are sandwiched between 1 and L− 1. The choice of j controls how many pre-trained blocks from B are used while i controls how many cells of A are replaced. If j and i are large meaning that we only train a few cells, the pre-trained blocks are dominating the NAS cells. Intuitively, this makes the proposed network S looks like the pre-trained network B, thus degrading the performance of NAS. During training the network S, we only update the weights from lA(i+1) to lAL. The proposed method can be used for query-based NAS and gradient-based NAS. Here, we formally describe how to use the proposed method for NAS. Query-based NAS. In many query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019), a large number of child networks are sampled and trained for limited epochs. Instead of training these networks, which requires computing the gradient for all cells, we replace them with the proposed paradigm and keep other settings as default. LetA be the search space and a ∈ A be an architecture. 3There are other ways to reuse features. Despite directly plug-in, we find that, with a simple learning rate, warmup paradigm can make training stable as it makes the untrained cells to adapt pre-trained blocks gradually. 
Instead of training a, we train s = {lb1, ..., lbj , la(i+1), ..., laL} where b is a pre-trained architecture and is shared for all networks in A. The performance of s on the validation set is used for updating the controller in Zoph & Le (2017) or selecting the parent for mutation in Real et al. (2019). We term s as Transfer-Net-i-j (Tr-Net-i-j). Gradient-based NAS. Differentiable architecture search (Liu et al., 2019; Xu et al., 2020) trains a supernet from which subnetworks are generated. Although the supernet comprises of a highly complex topology, the starting cells still produce similar representations to a hand-crafted one after training as demonstrated in Figure 2. Thus, the proposed method is also applicable for Gradientbased NAS. Let A be the orginal supernet. The proposed network S now acts as the supernet, which we denote as Transfer-Supernet-i-j (Tr-Supernet-i-j). The search phase is now conducted on Tr-Supernet-i-j, which requires less memory footprint. Since Tr-Net and Tr-Supernet utilize pre-trained blocks, they belong to Tr-NAS family. 4 EXPERIMENTS In this section, we first compare the rank correlation between the validation accuracy and final test accuracy of the proposed method and the conventional one on NAS-Bench-201. Then, we evaluate the performance of REA and DARTS+PT using the proposed method. Unless stated otherwise, the training time and memory footprint are measured on a single Nvidia Geforce 1080 Ti with PyTorch deep learning framework. 4.1 RANK CORRELATION COMPARISON Dataset and baseline. We use NAS-Bench-201 (Dong & Yang, 2020), a benchmark dataset for NAS. NAS-Bench-201 consists of 15,625 networks where five operations exist in the search space, namely none, average pooling, conv1× 1, conv3× 3, and skip connection. These networks are trained on CIFAR-10, CIFAR-100, and ImageNet-16-120. The baseline chosen for comparison is the conventional one, which trains for 12 epochs. We term this as ’Original’. Architectures. We randomly sample 1000 networks from NAS-Bench-201. We replace these networks with the proposed paradigm. The pre-trained network is a ResNet-324. We set i and j to 5, meaning that we replace the first 5 cells of these networks with the first 5 pre-trained ResNet blocks. Performance criteria. Once the training is finished, we compute Spearman Rank Order Correlation Coefficient (SROCC) and Kendall Tau Rank-Order Correlation Coefficient (KROCC) between the accuracy on the validation set and the test set for the Original and the proposed method. We obtain the accuracy on the validation set and test set for the original networks from NAS-Bench-201 dataset. Training setup. We use the same training set and validation set in our experiments following NASBench-201 for fair comparison. We use the same hyper-parameters as in NAS-Bench-201, except that we train our methods for 18 epochs (including 13 warmup epochs). It should be noted that even we train our networks with more epochs, the total training time of our networks is still less than the conventional method. Please refer to Appendix A.2 for more information. 4A good starting point for choosing the depth of the pre-trained baseline is to minimize the difference between the receptive field of blocks and cells. We did not conduct extensive tuning the depth of the baseline because we want to utilize a pre-trained model, which is already available to speedup NAS. Result. Table 1 summarizes the results on three datasets i.e., CIFAR-10, CIFAR-100, and ImageNet16-120. 
It can be seen that the proposed method achieves a higher correlation than the conventional method for all of the datasets. Notably, there is a huge improvement in rank correlation on CIFAR10 when using the proposed method. The results from Table 1 indicate that, despite sharing the same representations for several first layers, the proposed method is able to preserve the ranking (even better). We obtain the allocated memory on GPU during training and present in Table 2. The results suggest that the proposed method significantly reduces the memory footprint (2.11× reduction) compared to the conventional method. We further evaluate the rank correlation for top-k architectures, which is crucial for assessing the performance as good methods should have a strong correlation for top-performing networks. We set k to 1000 and summarize the results in Table 3. As shown in Table 3, the proposed method performs consistently well for all datasets. In general, the proposed method achieves a higher rank correlation than the conventional method. These sets of experiments indicate that the proposed method is well-suited for NAS. 4.2 RESULT ON QUERY-BASED NAS We now demonstrate the effectiveness of the proposed method by incorporating to NAS algorithms. For this purpose, we use Regularized Evolution REA (Real et al., 2019), a query-based NAS algorithm. We train ours for 18 epochs (including 13 warmup epochs), other hyper-parameters are identical to those used in NAS-Bench-201. The search phase is terminated when the number of evaluated networks exceeds 100. We conduct the experiment 10 times with different seeds and plot the average test accuracy in Figure 4. As shown in Figure 4, using the proposed method, REA is able to find the top-performing network. Compared to the original method, the proposed method achieves similar performance (Figure 4. Bottom) while using roughly 2× less memory footprint (Figure 4. Top). It is noticeable that REA outputs higher average test accuracy during the search phase using our method. This behavior is natural since our method achieves a good rank correlation than the conventional one. 4.3 RESULT ON DIFFERENTIABLE NAS We shift our evaluation to another type of NAS algorithms, which uses the gradient to guide the search. We incorporate the proposed method to DARTS+PT (Wang et al., 2021), a perturbationbased architecture selection that performs on top of DARTS. The authors of DARTS+PT show that the architectural weights α do not represent the strength of the operations. Thus, they introduce an alternative way to derive the final architecture, which relies on the contribution of the operations to the supernet’s accuracy. Specifically, after the supernet converged, the operation which has less impact on the supernet’s accuracy is removed from the supernet. Then, we tune the supernet for some epochs and repeat the process until the stopping criteria is met (e.g., becoming a network with a single-path). 4.3.1 NAS-BENCH-201 SEARCH SPACE In this section, we evaluate the performance of DARTS+PT using our proposed method on NASBench-201 search space. Supernet. The original supernet is a cell-based paradigm, which repeatedly stacks cell. Each cell has 6 edges and is stacked for 5 times for the first, second, and third stages. Our Tr-Supernet-5-5 is formed by replacing the first stage of the original supernet with those from a pre-trained ResNet-32 (i.e., the first 5 residual blocks). Training setup. 
We train Tr-Supernet-5-5 and Original Supernet for 50 epochs (including 13 warmup epochs for ours) and then perform architecture selection followed DARTS+PT. After the operation is removed from the supernet, we tune the supernet for 10 epochs. The search phase is finished when all edges are processed. Other hyper-parameters are the same as in previous work (Wang et al., 2021). Additionally, we enable Cutout (Devries & Taylor, 2017) during training and tuning the proposed supernet and the original one. The results obtained without Cutout is presented in Ablation 4.4.2. Result. We run experiments 25 times on CIFAR10/CIFAR-100 and 5 times on ImageNet-16-120 with different random seeds and report the mean test accuracy. Figure 5 illustrates our results on NAS-Bench-201. As we can see from Figure 5, using the proposed supernet, DARTS+PT finishes the search phase faster than the conventional supernet (about 1.98× reduction on CIFAR10/CIFAR-100 and 1.32× on ImageNet-16-120). We summarize the average test accuracy in Table 4. Specifically, the proposed method achieves a per- formance gain of 0.45%, 2.79%, and 0.86% on CIFAR-10, CIFAR-100, and ImageNet-16-120, respectively. We plot the allocated memory required to train the supernet in Figure 6. Notably, the proposed supernet reduces the memory footprint by 2.2× for all datasets while achieving higher test accuracy compared to the conventional supernet. 4.3.2 DARTS SEARCH SPACE We perform experiments on DARTS search space, which is much larger than NAS-Bench201 search space. The supernet is formed by stacking the cells for 8 times. The position of reduction cell is at 1/3 and 2/3 of the depth of the supernet. We use a pre-trained ResNet20 to provide intermediate features for the supernet. We replace the first stage of supernet with those from ResNet-20 (i.e., the first two supernet cells are replaced with the first three ResNet-20 blocks). The configuration for the pre-trained ResNet can be found in Appendix A.1. We use the same hyper-parameters for training and tuning the supernet (Liu et al., 2019; Wang et al., 2021). Additionally, we apply warmup for 5 epochs and enable Cutout (Devries & Taylor, 2017) during the search phase. We run the experiment 4 times with different random seeds and report the average (best) test error in Table 5. The result of DARTS+PT is obtained by running the official code provided by authors. From Table 5, we can see that the proposed method works well on a larger search space. The conventional DARTS takes 0.4 day to finish the search phase and 0.85 day when applying perturbation-based architecture selection. Using the proposed method, DARTS+PT finishes the search phase in 0.53 day and allocates 1.6× less memory, while maintaining the performance. 4.4 ABLATION STUDIES 4.4.1 STABILIZING THE TRAINING VIA LEARNING RATE WARMUP Since the proposed method utilizes some pre-trained blocks while others are randomly initialized, the network may fail to converge using the conventional optimization technique. We suggest stabilizing the network via learning rate warmup for a few epochs. It is noted that the warmup phase does not introduce any extra cost. The warmup phase begins with a learning rate of 0, and gradually increases to the initial learning rate for n epochs. This step encourages the untrained cells to adapt to the pre-trained blocks slowly. 
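A minimal sketch of such a schedule is given below; the cosine decay after warmup is only an assumption for illustration, while the actual post-warmup schedule follows the NAS-Bench-201 settings.

import math

def lr_at_epoch(epoch, n_warmup, base_lr, total_epochs):
    # Linear warmup from 0 towards base_lr over the first n_warmup epochs,
    # followed here (as an example) by a cosine decay for the remaining epochs.
    if epoch < n_warmup:
        return base_lr * (epoch + 1) / n_warmup
    t = (epoch - n_warmup) / max(1, total_epochs - n_warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))

# Typical usage inside the training loop (optimizer is a standard PyTorch optimizer):
#   for g in optimizer.param_groups:
#       g["lr"] = lr_at_epoch(epoch, n_warmup=13, base_lr=0.1, total_epochs=18)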
To find the best warmup epochs for our setting, we randomly sample 250 networks from the NAS-Bench-201 search space, replace the first i cells with j pre-trained ResNet blocks, and train them on CIFAR-100 for 18 epochs with different warmup epochs n. We use KROCC as the criterion for comparison. We investigate two cases: (i) i and j are equally set to 5, denoted as Tr-Net-5-5; (ii) i and j are set to 10, denoted as Tr-Net-10-10. We set n from 1 to 17 with a step size of 4 and show the results in Figure 7. For the first case, we can see that without the warmup phase (0 warmup epochs), the rank correlation of the proposed method is below that of the conventional method. This indicates that training is unstable, causing the network to fail to converge. When using learning rate warmup, the rank correlation gradually increases, reaches its peak when n is 13, and then starts decreasing. In addition, the proposed method always outperforms the conventional method when we perform warmup for more than 3 epochs. For the second case, one can observe that if we replace too many cells with pre-trained blocks, the rank correlation is lower than the baseline. This suggests that choosing the right i and j gives a good trade-off between performance and memory reduction. 4.4.2 THE EFFECTIVENESS OF CUTOUT AUGMENTATION We investigate how Cutout (Devries & Taylor, 2017) can help to improve the performance of the proposed method for gradient-based NAS algorithms. We study two cases: (i) i and j are set to 5, denoted as Tr-Supernet-5-5; (ii) i and j are set to 10, denoted as Tr-Supernet-10-10. We perform the experiment on NAS-Bench-201 using CIFAR-100 with different warmup epochs n. We compare the performance of DARTS+PT using our supernet and the conventional one, trained with and without Cutout. We show the results in Figure 8. For the first case, DARTS+PT using our supernet outperforms the original supernet when n is greater than 7. Tr-Supernet-5-5 achieves the best performance with 13 warmup epochs. For the second case, Tr-Supernet-10-10 achieves performance comparable to the original supernet trained with Cutout. However, compared to the conventional method trained without Cutout, there is a slight drop in accuracy. To understand why Cutout helps to improve the performance of our method, we compute the full Hessian ∇²αLvalid of the validation loss w.r.t. the architectural parameters α and show the dominant eigenvalue in Figure 9. Zela et al. (2020) reveal that a large dominant eigenvalue of the Hessian degrades performance and suggest that regularization techniques may improve it. As demonstrated in Figure 9, applying Cutout to the original supernet does not help to reduce the dominant eigenvalue, as it continues to rise. By contrast, the proposed method achieves better results because the dominant eigenvalue fluctuates around a fixed level when Cutout is enabled. 5 DISCUSSION & FUTURE WORK Since the proposed method fuses pre-trained blocks and untrained cells in a straightforward manner, there are several potential directions to further improve the performance of NAS while reducing the memory footprint and training time. First, a proper similarity index may help to determine how many untrained cells are suitable for replacement. Second, there is a possibility to develop a single pre-trained baseline, which can be used as a starting point for NAS on other target tasks that are similar to each other (Kornblith et al., 2019). 
Also, the performance of NAS may depend on the performance of the pre-trained network. Thus, designing a good pre-trained baseline that can be applied to various NAS algorithms is a promising direction. In our work, we keep things as simple as possible to demonstrate the usefulness of the proposed method, and leave other improvements for future work. 6 CONCLUSION In this work, we propose a simple yet effective method to reduce the memory footprint and shorten the training time in NAS, by replacing the first several NAS cells with those from a pre-trained hand-crafted network. Our work is motivated by the observation that, once converged, both hand-crafted and NAS-based architectures learn similar representations, especially at low-level layers. Our method outperforms the conventional method in terms of rank correlation between validation accuracy and test accuracy on NAS-Bench-201 while requiring a smaller memory footprint. Additionally, our method can be incorporated into different types of NAS algorithms, such as query-based or gradient-based methods, without any modification. Overall, the proposed method uses a smaller memory footprint and shortens the training time while achieving comparable or even higher performance than the conventional methods. A APPENDIX A.1 DETAIL OF ARCHITECTURES AND TRAINING SETUP FOR PRE-TRAINED RESNET In principle, one can use a pre-trained ResNet from the PyTorchCV (Semery, 2020) database, as we do not modify the ResNet architecture. We nevertheless train our own ResNet for a fair comparison to the conventional method, because on NAS-Bench-201 the original training and test sets of CIFAR-10 and CIFAR-100 are split into new training, validation, and test sets. ResNet-32. The network is created by stacking the basic residual block 15 times. The downsampling blocks are the 6th and 11th blocks. The numbers of channels are 16, 32, and 64 for the first, second, and third stages, respectively. ResNet-20. Similar to ResNet-32, we stack the basic residual block 9 times and halve the spatial dimensions at the 4th and 7th blocks. The numbers of channels are set to 64, 128, and 256 for the first, second, and third stages, respectively. The hyper-parameters for training the ResNets are summarized in Table 6. A.2 TRAINING TIME OF THE PROPOSED METHOD The training time of a random network sampled from NAS-Bench-201 is displayed in Figure 10. We can see that the proposed method requires less time to finish one epoch. As a result, we can increase the number of epochs for training the proposed method, as long as it does not exceed the training time of the conventional method, for a fair comparison. In addition, we observe a similar trend for other datasets (e.g., CIFAR-10 and ImageNet-16-120) and networks. In our work, we set the number of epochs for training ours to 18.
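To make the appendix description concrete, the sketch below reconstructs the block layout of the two baselines; BasicBlock(in_ch, out_ch, stride) is a hypothetical placeholder for a standard basic residual block, and the stem and classifier are omitted.

import torch.nn as nn

def make_cifar_resnet(num_blocks, downsample_at, channels, block):
    # block(in_ch, out_ch, stride) is assumed to construct one basic residual block.
    layers, stage, in_ch = [], 0, channels[0]
    for idx in range(1, num_blocks + 1):
        stride = 1
        if idx in downsample_at:  # halve the resolution and move to the next stage
            stage += 1
            stride = 2
        out_ch = channels[stage]
        layers.append(block(in_ch, out_ch, stride))
        in_ch = out_ch
    return nn.Sequential(*layers)

# ResNet-32 body: 15 blocks, downsampling at blocks 6 and 11, channels 16/32/64.
#   make_cifar_resnet(15, {6, 11}, [16, 32, 64], BasicBlock)
# ResNet-20 body: 9 blocks, downsampling at blocks 4 and 7, channels 64/128/256.
#   make_cifar_resnet(9, {4, 7}, [64, 128, 256], BasicBlock)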
1. What is the focus of the paper regarding neural architecture search and ResNet blocks?
2. What are the strengths of the proposed approach, particularly in its key observations and experiments?
3. What are the weaknesses of the paper, especially regarding the CKA similarity metric and the replacement of shallow blocks?
4. Do you have any concerns about the method used in the paper, such as manual design works and increased cost?
5. Are there any recent works related to memory-efficient NAS that the paper should have explored and analyzed?
Summary Of The Paper Review
Summary Of The Paper
This work uses pre-trained ResNet blocks to replace the shallow trainable blocks in DARTS-based neural architecture search frameworks and freezes those parameters to save memory during the training process.
Review
Advantages:
The authors attempt to explore the relations between automatically designed and hand-crafted neural networks, which is a useful attempt for the NAS community.
The key observations are interesting, and the experiments are convincing and show multiple views of the observations.
The paper is well-written and easy to read.
Weakness:
The authors claim that the low-level representations of NAS-bench-101 architectures and ResNet-201 are similar, based on the CKA similarity metric. However, in Figure 1, the representations of the layers in the same positions of NAS-bench-101 and ResNet-201 show high CKA similarity at the low, middle, and high levels. It seems the motivation for replacing the shallow blocks of NAS with pre-trained ResNet blocks is not sufficient.
The method of using some hand-crafted architectures to replace part of the NAS framework may require extra time and manual design work (e.g., calculating the similarity of representations in this paper, manually designing a proper block structure). This will probably increase the cost and run counter to the automatic-design intention of NAS.
The related work survey is not sufficient. Recent work on memory-efficient NAS should be fully explored and analyzed, e.g.:
MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning, NeurIPS 2021
MemNAS: Memory-Efficient Neural Architecture Search with Grow-Trim Learning, CVPR 2020
Memory-Efficient Hierarchical Neural Architecture Search for Image Denoising, CVPR 2020
Besides freezing part of the parameters during the training process, should any other contributions be noted?
ICLR
Title Tr-NAS: Memory-Efficient Neural Architecture Search with Transferred Blocks Abstract Neural Architecture Search (NAS) is one of the most rapidly growing research fields in machine learning due to its ability to discover high-performance architectures automatically. Although conventional NAS algorithms focus on improving search efficiency (e.g., high performance with less search time), they often require a lot of memory footprint and power consumption. To remedy this problem, we propose a new paradigm for NAS that effectively reduces the use of memory while maintaining high performance. The proposed algorithm is motivated by our observation that manually designed and NAS-based architectures share similar low-level representations, regardless of the difference in the network’s topology. Reflecting this, we propose a new architectural paradigm for NAS, called Transfer-NAS, that replaces several first cells in the generated architecture with conventional (hand-crafted) pre-trained blocks. As the replaced pre-trained blocks are kept frozen during training, the memory footprint can significantly be reduced. We demonstrate the effectiveness of the proposed method by incorporating it into Regularized Evolution and Differentiable ARchiTecture Search with Perturbationbased architecture selection (DARTS+PT) on NAS-Bench-201 and DARTS search spaces. Extensive experiments show that Transfer-NAS significantly decreases the memory usage up-to 50% while achieving higher/comparable performance compared to the baselines. Furthermore, the proposed method is 1.98× faster in terms of search time when incorporated to DARTS+PT on NAS-Bench-201 compared to the conventional method. 1 INTRODUCTION Neural Architecture Search (NAS) has become an important domain in the machine learning field due to its superior performance. Many NAS algorithms have been developed (Zoph & Le, 2017; Zoph et al., 2018; Liu et al., 2019; Cai et al., 2019), and continue to raise in the future. The major advantage of NAS is to automatically discover the best architecture from a large search space on a target dataset. Since the solution can be found without human involvement, NAS has a wide range of applications such as image classification (Wu et al., 2019; Tan et al., 2019), object detection (Chen et al., 2020; Wang et al., 2020), and pruning (Dong & Yang, 2019). Researchers have made a lot of attempts to improve the performance of NAS and reduce the searching time (Pham et al., 2018; Liu et al., 2019; Cai et al., 2019). Query-based NAS such as Regularized Evolution (RE) (Real et al., 2019) trains and evaluates thousands of small models before restoring the best model into the original size (enlarge the network’s depth and number of channels) for evaluation. Gradient-based NAS algorithms (Liu et al., 2019; Xu et al., 2020) train a supernet which requires a lot of memory footprint. A natural question arises: Can we perform the search step in NAS by training only a few cells rather than the whole network? Technically, this paradigm shortens the training time and reduces the memory footprint, because the memory required for calculating the gradients in this case is smaller than the conventional approaches. In this paper, we show that it is possible to perform efficient searching in NAS by replacing several first cells of a child network with pre-trained layers and let NAS search for the remaining cells. 
By analyzing the feature maps between the networks sampled from NAS-Bench-201 (Dong & Yang, 2020) search space and a hand-crafted one, namely ResNet (He et al., 2015), we observe that the representations are very similar among low-level features compared to their high-level features as shown in Figure 1. It is noted that similar observation was reported in Kornblith et al. (2019). However, they compared the similarity among simple and similar architectures (ResNet’s family and plain networks) which cannot directly lead us to our main motivation, while we compared similarity between a hand-crafted architecture with generated ones in NAS-Bench-201 that have high diversity in both topology and operations (e.g., skip connection, conv3× 3, maxpooling, ...). Motivated from our preliminary experimental results in Figure 1, we propose to leverage several low-level features generated by a pre-trained network to speedup the search phase in NAS. This also helps to reduce the memory footprint significantly because we do not need to calculate the gradient for these pre-trained layers. To this end, the contributions of our paper are summarized as follows: • We find out that the low-level features learned by a DARTS’s supernet and networks sampled from NAS-Bench-201 search space, are similar to that of ResNet, regardless of their topology and operations. • We leverage the features generated by a pre-trained baseline to improve the efficiency of NAS. Specifically, we replace several first layers of NAS-based networks with several pretrained layers and freeze them, while leaving the other layers as trainable. This results in reduction of memory footprint and training time of the supernet and/or subnetworks. • We demonstrate the effectiveness of our method by incorporating the proposed method into two search algorithms: evolutionary-based REA (Real et al., 2019) and gradient-based DARTS+PT (Wang et al., 2021). On NAS-Bench-201, we save up to 2.28× memory footprint and run 1.98-1.32× faster than the conventional method, while achieving a higher test accuracy. On DARTS search space, DARTS+PT using our proposed method can find the best cell in 0.53 GPU day and allocates 1.6× less memory, while maintaining competitive compared to the conventional method. 2 RELATED WORK Similarity of neural network representations. Recently, studies on learning the similarity of neural network representations have widely been conducted (Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018; Kornblith et al., 2019; Nguyen et al., 2021). Especially, Kornblith et al. (2019) provides a powerful similarity index for comparing neural network representations, namely Centered Kernel alignment (CKA). We adopt CKA to analyze the output feature similarity between hand-crafted and NAS-generated layers. Given n examples, let X ∈ Rn×p1 and Y ∈ Rn×p2 are the representations of p1 and p2 neuron for this n examples. Let K = XXT and L = Y Y T be the Gram matrices, which are the similarities between a pair of examples in X and Y . Let H = In− 1n11 T denotes the centering matrix. HilbertSchmidt Independence Criterion (HSIC) is defined as HSIC(K,L) = vec(HKH)·vec(HLH)/(n− 1)2. The normalization index is applied to HISC to make it invariant to isotropic scaling, which is denoted as CKA and is defined as: CKA(K,L) = HSIC(K,L)√ HSIC(K,K)HSIC(L,L) . (1) Neural Architecture Search. The goal of NAS is to automatically discover high-performance networks on a specific task. 
Reinforcement Learning (RL)-based, evolutionary-based, and gradientbased search algorithms are widely used for NAS (Zoph & Le, 2017; Pham et al., 2018; Zoph et al., 2018; Real et al., 2019; Liu et al., 2019). In the work of Zoph & Le (2017), the authors use RL and train a controller to generate the network’s configurations (e.g., topology and operations). This method requires a lot of computational resources. Evolutionary-based NAS such as (Real et al., 2019) outperforms RL-based NAS in terms of accuracy and efficiency as it reaches higher accuracy given the same amount of time during searching. The major drawback of RL-based and evolutionary-based is the repetition process of training and evaluation of candidate networks, which demands a huge amount of GPU hours. On the other hand, Gradient-based NAS (Liu et al., 2019; Xu et al., 2020; Chen et al., 2019) utilizes the back-propagation process to find the optimal network where the optimal model parameters and operations are found during training in an alternative manner. For example, Differentiable ARchiTecture Search (DARTS) (Liu et al., 2019) introduces architectural weights α beside network’s weights w, forming a supernet consisting of large candidate networks. In order to find the optimal α and w, the process requires a bi-level optimization (Anandalingam & Friesz, 1992; Colson et al., 2007), which is computationally expensive. DARTS gets rid of this issue by approximating the architecture gradient by using only a single training step, which optimizes α and w alternately. Relation between similarity of representations and NAS. Even though studying the similarity of neural network representations and NAS are two different research areas, it is important to understand whether the findings from the previous works on the similarity of feature maps between networks still hold for NAS. Not surprisingly, NAS-based architectures and hand-crafted networks generate similar representations for some layers after training, as depicted in Figure 1. This motivates us to develop a simple yet effective method to reduce the memory footprint in NAS. In our work, we use CKA proposed by Kornblith et al. (2019) for calculating the similarity1. 3 METHOD In this part, we first study the similarity of neural network representations from NAS perspective. We first show that NAS-based networks and hand-crafted networks learn and generate similar representations for several shallow layers. We then propose a simple method that reuses the features from a pre-trained hand-crafted network, which can help to improve the efficiency of NAS in terms of memory footprint and training time. 3.1 KEY OBSERVATIONS We consider comparing the similarity for two cases: (i) query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019) which train a lot of subnetworks; (ii) gradient-based NAS methods (Liu et al., 2019; Chen et al., 2019) which train a supernet. Case 1. We train a ResNet-32 on CIFAR-10 dataset (Krizhevsky & Hinton, 2009) using the same hyper-parameters used in NASBench-201 (Dong & Yang, 2020). More details can be found in Appendix A.1. We randomly select 1000 architectures from NASBench-201 and measure the similarity between the representations2 generated by these architectures and the trained ResNet-32 via CKA (Kornblith et al., 2019) similarity metric. The weights of NASBench-201 architectures are obtained from their official website. We take the average of 1000 similarities and plot them in Figure 1. 
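(The per-layer similarity itself takes only a few lines to compute; below is a minimal linear-CKA sketch of our own, where X and Y are matrices of flattened feature maps with one row per example.)

import numpy as np

def linear_cka(X, Y):
    # Linear CKA between representations X (n x p1) and Y (n x p2),
    # equivalent to the HSIC-based definition with K = XX^T and L = YY^T.
    X = X - X.mean(axis=0)  # column centering plays the role of H
    Y = Y - Y.mean(axis=0)
    hsic_xy = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    hsic_xx = np.linalg.norm(X.T @ X, ord="fro") ** 2
    hsic_yy = np.linalg.norm(Y.T @ Y, ord="fro") ** 2
    return hsic_xy / np.sqrt(hsic_xx * hsic_yy)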
It can be seen that the feature maps generated by NAS-Bench-201 architectures share similar characteristics to those from ResNet-32. Especially, the similarity is significantly higher for low-level representations while lower for high-level representations. This suggests that some of the initial 1Current work uses this method due to its simplicity. However, we believe that a proper metric will tell us how many pre-trained layers should be used before performance collapse (i.e., the searched network achieves inferior accuracy). 2The feature maps generated by a ResNet block or a cell from NAS-Bench-201 architectures. layers of candidate networks in NAS can be replaced with layers from pre-trained hand-crafted networks. Moreover, this approach reduces the memory footprint since it reduces the search space size (during backpropagation). Case 2. We further investigate the similarity between a pre-trained ResNet-20 and a supernet, which is used in DARTS. We compute the similarity for the representations generated by supernet cells and ResNet blocks. Figure 2 demonstrates that the supernet and ResNet generate similar low-level and mid-level patterns. This behavior is consistent with previous observation. 3.2 PROPOSED MEMORY-EFFICIENT NEURAL ARCHITECTURE SEARCH In this section, we introduce our method for reducing the memory footprint in NAS. Based on our observation about similarity of representations between hand-crafted and NAS based networks, we propose replacing a first few layers of NAS based networks with pre-trained layers of hand-crafted networks. Figure 3 shows the differences between the conventional and the proposed method for NAS. Let A be an untrained network with L cells and is expressed as A = {lA1, lA2, ..., lAL}. Similarly, let B be a pre-trained baseline with K blocks and is expressed as B = {lB1, lB2, ..., lBK}. Since A and B share similar representations for some cells and blocks after training, a straightforward way3 to reuse the features from pre-trained network B is to plug these features to untrained network A. Hence, the proposed network which replaces i cells with j blocks is defined as S = {lB1, ..., lBj , lA(i+1), ..., lAL}, where i and j are sandwiched between 1 and L− 1. The choice of j controls how many pre-trained blocks from B are used while i controls how many cells of A are replaced. If j and i are large meaning that we only train a few cells, the pre-trained blocks are dominating the NAS cells. Intuitively, this makes the proposed network S looks like the pre-trained network B, thus degrading the performance of NAS. During training the network S, we only update the weights from lA(i+1) to lAL. The proposed method can be used for query-based NAS and gradient-based NAS. Here, we formally describe how to use the proposed method for NAS. Query-based NAS. In many query-based NAS algorithms (Zoph & Le, 2017; Real et al., 2019), a large number of child networks are sampled and trained for limited epochs. Instead of training these networks, which requires computing the gradient for all cells, we replace them with the proposed paradigm and keep other settings as default. LetA be the search space and a ∈ A be an architecture. 3There are other ways to reuse features. Despite directly plug-in, we find that, with a simple learning rate, warmup paradigm can make training stable as it makes the untrained cells to adapt pre-trained blocks gradually. 
Instead of training a, we train s = {lb1, ..., lbj , la(i+1), ..., laL} where b is a pre-trained architecture and is shared for all networks in A. The performance of s on the validation set is used for updating the controller in Zoph & Le (2017) or selecting the parent for mutation in Real et al. (2019). We term s as Transfer-Net-i-j (Tr-Net-i-j). Gradient-based NAS. Differentiable architecture search (Liu et al., 2019; Xu et al., 2020) trains a supernet from which subnetworks are generated. Although the supernet comprises of a highly complex topology, the starting cells still produce similar representations to a hand-crafted one after training as demonstrated in Figure 2. Thus, the proposed method is also applicable for Gradientbased NAS. Let A be the orginal supernet. The proposed network S now acts as the supernet, which we denote as Transfer-Supernet-i-j (Tr-Supernet-i-j). The search phase is now conducted on Tr-Supernet-i-j, which requires less memory footprint. Since Tr-Net and Tr-Supernet utilize pre-trained blocks, they belong to Tr-NAS family. 4 EXPERIMENTS In this section, we first compare the rank correlation between the validation accuracy and final test accuracy of the proposed method and the conventional one on NAS-Bench-201. Then, we evaluate the performance of REA and DARTS+PT using the proposed method. Unless stated otherwise, the training time and memory footprint are measured on a single Nvidia Geforce 1080 Ti with PyTorch deep learning framework. 4.1 RANK CORRELATION COMPARISON Dataset and baseline. We use NAS-Bench-201 (Dong & Yang, 2020), a benchmark dataset for NAS. NAS-Bench-201 consists of 15,625 networks where five operations exist in the search space, namely none, average pooling, conv1× 1, conv3× 3, and skip connection. These networks are trained on CIFAR-10, CIFAR-100, and ImageNet-16-120. The baseline chosen for comparison is the conventional one, which trains for 12 epochs. We term this as ’Original’. Architectures. We randomly sample 1000 networks from NAS-Bench-201. We replace these networks with the proposed paradigm. The pre-trained network is a ResNet-324. We set i and j to 5, meaning that we replace the first 5 cells of these networks with the first 5 pre-trained ResNet blocks. Performance criteria. Once the training is finished, we compute Spearman Rank Order Correlation Coefficient (SROCC) and Kendall Tau Rank-Order Correlation Coefficient (KROCC) between the accuracy on the validation set and the test set for the Original and the proposed method. We obtain the accuracy on the validation set and test set for the original networks from NAS-Bench-201 dataset. Training setup. We use the same training set and validation set in our experiments following NASBench-201 for fair comparison. We use the same hyper-parameters as in NAS-Bench-201, except that we train our methods for 18 epochs (including 13 warmup epochs). It should be noted that even we train our networks with more epochs, the total training time of our networks is still less than the conventional method. Please refer to Appendix A.2 for more information. 4A good starting point for choosing the depth of the pre-trained baseline is to minimize the difference between the receptive field of blocks and cells. We did not conduct extensive tuning the depth of the baseline because we want to utilize a pre-trained model, which is already available to speedup NAS. Result. Table 1 summarizes the results on three datasets i.e., CIFAR-10, CIFAR-100, and ImageNet16-120. 
It can be seen that the proposed method achieves a higher correlation than the conventional method for all of the datasets. Notably, there is a huge improvement in rank correlation on CIFAR10 when using the proposed method. The results from Table 1 indicate that, despite sharing the same representations for several first layers, the proposed method is able to preserve the ranking (even better). We obtain the allocated memory on GPU during training and present in Table 2. The results suggest that the proposed method significantly reduces the memory footprint (2.11× reduction) compared to the conventional method. We further evaluate the rank correlation for top-k architectures, which is crucial for assessing the performance as good methods should have a strong correlation for top-performing networks. We set k to 1000 and summarize the results in Table 3. As shown in Table 3, the proposed method performs consistently well for all datasets. In general, the proposed method achieves a higher rank correlation than the conventional method. These sets of experiments indicate that the proposed method is well-suited for NAS. 4.2 RESULT ON QUERY-BASED NAS We now demonstrate the effectiveness of the proposed method by incorporating to NAS algorithms. For this purpose, we use Regularized Evolution REA (Real et al., 2019), a query-based NAS algorithm. We train ours for 18 epochs (including 13 warmup epochs), other hyper-parameters are identical to those used in NAS-Bench-201. The search phase is terminated when the number of evaluated networks exceeds 100. We conduct the experiment 10 times with different seeds and plot the average test accuracy in Figure 4. As shown in Figure 4, using the proposed method, REA is able to find the top-performing network. Compared to the original method, the proposed method achieves similar performance (Figure 4. Bottom) while using roughly 2× less memory footprint (Figure 4. Top). It is noticeable that REA outputs higher average test accuracy during the search phase using our method. This behavior is natural since our method achieves a good rank correlation than the conventional one. 4.3 RESULT ON DIFFERENTIABLE NAS We shift our evaluation to another type of NAS algorithms, which uses the gradient to guide the search. We incorporate the proposed method to DARTS+PT (Wang et al., 2021), a perturbationbased architecture selection that performs on top of DARTS. The authors of DARTS+PT show that the architectural weights α do not represent the strength of the operations. Thus, they introduce an alternative way to derive the final architecture, which relies on the contribution of the operations to the supernet’s accuracy. Specifically, after the supernet converged, the operation which has less impact on the supernet’s accuracy is removed from the supernet. Then, we tune the supernet for some epochs and repeat the process until the stopping criteria is met (e.g., becoming a network with a single-path). 4.3.1 NAS-BENCH-201 SEARCH SPACE In this section, we evaluate the performance of DARTS+PT using our proposed method on NASBench-201 search space. Supernet. The original supernet is a cell-based paradigm, which repeatedly stacks cell. Each cell has 6 edges and is stacked for 5 times for the first, second, and third stages. Our Tr-Supernet-5-5 is formed by replacing the first stage of the original supernet with those from a pre-trained ResNet-32 (i.e., the first 5 residual blocks). Training setup. 
We train Tr-Supernet-5-5 and Original Supernet for 50 epochs (including 13 warmup epochs for ours) and then perform architecture selection followed DARTS+PT. After the operation is removed from the supernet, we tune the supernet for 10 epochs. The search phase is finished when all edges are processed. Other hyper-parameters are the same as in previous work (Wang et al., 2021). Additionally, we enable Cutout (Devries & Taylor, 2017) during training and tuning the proposed supernet and the original one. The results obtained without Cutout is presented in Ablation 4.4.2. Result. We run experiments 25 times on CIFAR10/CIFAR-100 and 5 times on ImageNet-16-120 with different random seeds and report the mean test accuracy. Figure 5 illustrates our results on NAS-Bench-201. As we can see from Figure 5, using the proposed supernet, DARTS+PT finishes the search phase faster than the conventional supernet (about 1.98× reduction on CIFAR10/CIFAR-100 and 1.32× on ImageNet-16-120). We summarize the average test accuracy in Table 4. Specifically, the proposed method achieves a per- formance gain of 0.45%, 2.79%, and 0.86% on CIFAR-10, CIFAR-100, and ImageNet-16-120, respectively. We plot the allocated memory required to train the supernet in Figure 6. Notably, the proposed supernet reduces the memory footprint by 2.2× for all datasets while achieving higher test accuracy compared to the conventional supernet. 4.3.2 DARTS SEARCH SPACE We perform experiments on DARTS search space, which is much larger than NAS-Bench201 search space. The supernet is formed by stacking the cells for 8 times. The position of reduction cell is at 1/3 and 2/3 of the depth of the supernet. We use a pre-trained ResNet20 to provide intermediate features for the supernet. We replace the first stage of supernet with those from ResNet-20 (i.e., the first two supernet cells are replaced with the first three ResNet-20 blocks). The configuration for the pre-trained ResNet can be found in Appendix A.1. We use the same hyper-parameters for training and tuning the supernet (Liu et al., 2019; Wang et al., 2021). Additionally, we apply warmup for 5 epochs and enable Cutout (Devries & Taylor, 2017) during the search phase. We run the experiment 4 times with different random seeds and report the average (best) test error in Table 5. The result of DARTS+PT is obtained by running the official code provided by authors. From Table 5, we can see that the proposed method works well on a larger search space. The conventional DARTS takes 0.4 day to finish the search phase and 0.85 day when applying perturbation-based architecture selection. Using the proposed method, DARTS+PT finishes the search phase in 0.53 day and allocates 1.6× less memory, while maintaining the performance. 4.4 ABLATION STUDIES 4.4.1 STABILIZING THE TRAINING VIA LEARNING RATE WARMUP Since the proposed method utilizes some pre-trained blocks while others are randomly initialized, the network may fail to converge using the conventional optimization technique. We suggest stabilizing the network via learning rate warmup for a few epochs. It is noted that the warmup phase does not introduce any extra cost. The warmup phase begins with a learning rate of 0, and gradually increases to the initial learning rate for n epochs. This step encourages the untrained cells to adapt to the pre-trained blocks slowly. 
To find the best warmup epochs for our setting, we randomly sample 250 networks from NASBench-201 search space, replace the first i cells with j pretrained ResNet blocks, and train them on CIFAR-100 for 18 epochs with different warmup epochs n. We use KROCC as a criterion for comparison. We investigate two cases: (i) i and j are equally set to 5, denoted as Tr-Net-5-5; (ii) i and j are set to 10, denoted as TrNet-10-10. We set n from 1 to 17 with a step size of 4 and show the results in Figure 7. For the first case, we can see that without the warmup phase (0 warmup epochs), the rank correlation of the proposed method is below the conventional method. This indicates that the training is unstable that causes the network failed to converge. When using learning rate warmup, the rank correlation gradually increases and reaches its peak when n is 13, then starts decreasing. In addition, the proposed method always outperforms the conventional method when we perform warmup for more than 3 epochs. For the second case, one can observe that if we replace too many cells with pre-trained blocks, the rank correlation is lower than the baseline. This suggests that choosing the right i and j can have a good trade-off between performance and memory reduction. 4.4.2 THE EFFECTIVENESS OF CUTOUT AUGMENTATION We investigate how Cutout (Devries & Taylor, 2017) can help to improve the performance of the proposed method for gradient-based NAS algorithms. We study two cases: (i) i and j is set to 5 which is denoted as Tr-Supernet-5-5; (ii) i and j is set to 10 which is denoted as Tr-Supernet-1010. We perform the experiment on NAS-Bench-201 using CIFAR-100 with different warmup epochs n. We compare the performance of DARTS+PT using our supernet and the conventional one trained with and without Cutout. We show the results in Figure 8. For the first case, DARTS+PT using our supernet outperforms the original supernet when n is greater than 7. Tr-Supernet-5-5 achieves the best performance with 13 warmup epochs. For the second case, Tr-Supernet-10-10 achieves comparable performance compared to the original supernet trained with Cutout. However, when comparing to the conventional method trained without Cutout, there is a little drop in accuracy. To find out the reason that Cutout helps to improve the performance of our method, we compute the full Hessian ∇2αLvalid of validation loss w.r.t the architectural parameters α and show the dominant eigenvalue in Figure 9. Zela et al. (2020) reveals that large dominant eigenvalue of the Hessian degrades the performance and suggests that using regularization technique may improve the performance. As demonstrated in Figure 9, applying Cutout to the original supernet does not help to reduce the dominant eigenvalue as it continues to raise. By contrast, the proposed method achieves better results because the dominant eigenvalue fluctuates around some values when enabling Cutout. 5 DISCUSSION & FUTURE WORK Since, the proposed method fuses pre-trained blocks and untrained cells in a straight-forward manner, there are several potential directions to further improve the performance of NAS while reducing the memory footprint and training time. First, a proper similarity index may help to determine how many untrained cells are suitable for replacement. Second, there is a possibility to develop a single pre-trained baseline, which can be used as a starting point for NAS to perform on other target tasks that are similar to each other (Kornblith et al., 2019). 
Also, the performance of NAS may depend on the performance of the pre-trained network. Thus, designing a good pre-trained baseline which can be applied for various NAS algorithms can be a promising work. In our work, we keep things as simple as possible to demonstrate the usefulness of the proposed method, and leave other improvements as future works. 6 CONCLUSION In this work, we propose a simple yet effective method to reduce the memory footprint and shorten the training time in NAS, by replacing several first NAS-cells with those from a pre-trained handcrafted network. Our work is motivated by the observation that once converged, both hand-crafted and NAS-based architectures learn similar representations especially at low-level layers. Our method outperforms the conventional method in terms of rank correlation of validation accuracy to test accuracy on NAS-Bench-201 while requiring less memory footprint. Additionally, our method can be incorporated into different types of NAS algorithms, such as query-based or gradient-based methods, without any modification. Overall, the proposed method uses less memory footprint and shortens the training time while achieving comparable or even higher performance than the conventional methods. A APPENDIX A.1 DETAIL OF ARCHITECTURES AND TRAINING SETUP FOR PRE-TRAINED RESNET Actually one can use a pre-trained ResNet available in PyTorchCV (Semery, 2020) database as we do not modify the ResNet architecture. We do train our ResNet for a fair comparison to the conventional method because on NAS-Bench-201, the original train and test sets of CIFAR-10, CIFAR-100 are split into new train, validation, test sets. ResNet-32. The network is created by stacking basic residual block for 15 times. The downsampled block is located at 6-th and 11-st block. The number of channel is 16, 32, 64 for the first, second, and third stages, respectively. ResNet-20. Similar to ResNet-32, we stack the basic residual block for 9 times, reduce the dimension by half at 4-th and 7-th block. The number of channel is set to 64, 128, and 256 for the first, second, and third stages, respectively. The hyper-parameters for training ResNet are summarized in Table 6. A.2 TRAINING TIME OF THE PROPOSED METHOD The training time of a random network sampled from NAS-Bench-201 is displayed in Figure 10. We can see that the proposed method require less time to finish one epoch. As a result, we can increase the number of epochs for training the proposed method as long as it does not exceed the training time of conventional method for a fair comparison. In addition, we observe a similar trend for other datasets (e.g.,CIFAR-10 and ImageNet-16-120) and networks. In our work, we set the number of epochs for training ours to 18.
1. What is the main contribution of the paper regarding sample efficiency and memory efficiency in search?
2. How does the proposed method incorporate pretrained blocks within searched networks, and what are its strengths and weaknesses?
3. How does the paper evaluate its searching methodology, and what are some potential issues or limitations with this evaluation?
4. How does the paper position itself within the context of other relevant works in NAS, particularly those that focus on reducing memory cost or utilizing pretrained blocks?
5. Are there any minor issues or concerns with the paper's presentation or claims that could be addressed?
Summary Of The Paper Review
Summary Of The Paper The paper proposes to improve sample efficiency (searching time) and memory efficiency of a search by fixing the first few layers in a CNN to come from a pretrained network (e.g., Resnet), thus reducing search space size. This practice is motivated by empirical observations that output features generated by those early layers are often similar (as measured using Centered Kernel alignment). Results include evaluation on NAS-Bench-201 and DARTS CNN space, comparing to Ragularized Evolution (RE) and DARTS-PT. Review The paper raises some interesting questions regarding search space design and its effects on searching efficiency - although the authors seem to be focused mostly on presenting their work as a searching methodology (incorrectly?). In general, even though the paper takes an interesting angle on NAS, unfortunately motivation and evaluation both lack critical pieces for the work to be accepted, in my opinion. Please find the detailed commentary in points below. Strengths: Incorporating blocks from pretrained networks directly within searched networks seems to me like an interesting case-study that hasn't received much attention so far (although I might be missing some most recent works in that regard, the closest I'm aware of would be NAS based on blockwise distillation, but lack of distillation make the submitted paper somewhat different). The authors show that their method can improve correlation between validation and test accuracy of models, which is an often overlooked property of NAS that can highly affect outcome. The paper is generally well-written and easy to follow. Most (all?) of the experiments report average performance. Weaknesses: As already hinted, the proposed searching methodology is not really a searching (NAS) algorithm but rather an empirical study of designing a search space by utilizing pretrained models. Although a difference might not seem critical at first, I find it especially important since search space design constitutes a somewhat independent research direction within NAS [1-4]. However, the authors are either unaware of this line of work or chose not to position their work against it - in either way, I find this to me a major downside. In general, search space design works focus on slightly different objectives and should follow a different evaluation paradigm compared to works that focus purely on improving searching methodology. For example, I would expect the paper to address issues like: how does the best achievable model change after we swap some block in the NB2 search space? How does average accuracy of models in the new, reduced search space compare to the original search space? How does performance of different NAS algorithms change as we keep reducing the size by including more pretrained blocks? Instead, the paper tries to emphasize the fact that reducing the search space size can help reduce memory requirements behind running NAS. It's a correct statement but to present a full picture it would be necessary to comment on implications related to the search space. To put it briefly - the method achieves better memory efficiency due to smaller search space size (so, e.g., a supernet has to be smaller) but at the same time this comes at a price of limited discoverability of new models (we have fewer of them in the search space) - the second part is left completely unanswered. 
Following on evaluation, even disregarding missing parts related to search space design, what currently is presented seems a bit unusual to me and I'm not sure if the results can be considered conclusive because of that. Specifically, the authors seem to have trained most of the networks from NB2 for only 12 epochs whereas full training takes 200 epochs. What is more, the pretrained ResNet was actually trained for 200 epochs (per Appendix A1). Since training for 12 epochs might easily be non representative of full training, an obvious question that arises is how do findings presented in this paper change as we train all models until convergence? Also, it seems that there is a discrepancy between training scheme in section 3.1 (weights were taken from NB2, I'm guessing those were weights of fully-trained models?) and experimental results that was using proxy training, making the current narrative less convincing. Another thing related to evaluation is the choice to report correlation between validation and test as the main (only?) evaluation criteria. Although I can see why this aspect could be important for NAS, the authors actually never explain why this criteria was chosen. What is more, even if the explanation was there, in the end what we care about in NAS is performance of the obtained architectures - in that regard we can see that, e.g., for query-based NAS the proposed method does not improve in neither final accuracy nor sample efficiency (Figure 4) which raises a question: what is the point of presenting improvements in test-validation correlation in Table 1? In general, the entire section 3.1 - even though it touches upon a (potentially) relevant problem in NAS - seems to be be a wrong premise in the context of the presented work. How does better test-validation correlation imply that "the proposed method is well-suited for NAS"? High correlation is usually desired if we're using proxy metric A to optimize for the metric of interest B - this could be the case in the submitted paper, e.g., if the correlation was measured between accuracy of a network with and without a pretrained ResNet block. However, this does not seem to be the case. The paper is missing comparison to some relevant existing work (even disregarding those mentioned in point 1). There is actually quite a few papers that try to either: 1) reduce memory cost of NAS, especially differentiable NAS [5-7]; or 2) use, in some way, pretrained blocks when searching for more efficient networks [8-10]. I suggest the authors try to position their paper in the context of those works, if not taking the road of search space design. Some purely technical aspects of the evaluation are also not very convincing in my opinion. For example, the authors present performance of DARTS-PT on the DARTS CNN space as 2.78% error on average, however according to the original paper it is 2.61% - this is a big different in the context of DARTS and would demand some explanation from the authors. Also, the results achieved by the proposed method (2.81%) are actually quite bad and do not help convincing that the proposed method can robustly deliver good results. On another note, I don't understand why the authors say that "It is noted that the warmup phase does not introduce any extra cost"... the cost might not be high but as long as it require some additional compute the authors should not make claims like that... I would even go a step further and say that the cost of pretraining a ResNet should be included in the searching cost. 
Some minor issues: the authors never report the number of parameters or FLOPs of the new models - are the blocks taken from ResNet smaller/larger than those that get replaced? I would really rethink saying things like "(...) NAS-Bench-201 that have high diversity in both topology and operations". NAS-Bench-201 is the simplest NAS search space found in NAS research that I'm aware of. Claiming that it has high diversity of topology and operations is a bit of a stretch (especially when it comes to operations; I guess topology-wise it could be disputed when compared to some classical hand-crafted networks).
References:
[1] I. Radosavovic et al. "On Network Design Spaces for Visual Recognition". 2019
[2] I. Radosavovic et al. "Designing Network Design Spaces". 2020
[3] Y. Hu et al. "Improving One-Shot NAS with Shrinking-and-Expanding Supernet". 2021
[4] Y. Ci et al. "Evolving Search Space for Neural Architecture Search". 2021
[5] X. Chen et al. "Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation". 2019
[6] H. Cai et al. "ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware". 2019
[7] Z. Guo et al. "Single Path One-Shot Neural Architecture Search with Uniform Sampling". 2020
[8] C. Li et al. "Blockwisely Supervised Neural Architecture Search with Knowledge Distillation". 2020
[9] B. Moons et al. "Distilling Optimal Neural Networks: Rapid Search in Diverse Spaces". 2021
[10] P. Molchanov et al. "HANT: Hardware-Aware Network Transformation". 2021
ICLR
Title Keypoint Matching via Random Network Consensus Abstract Visual description, detection, and matching of keypoints in images are fundamental components of many computer vision problems, such as camera tracking and (re)localization. Recently, learning-based feature extractors on top of convolutional neural networks (CNNs) have achieved state-of-the-art performance. In this paper, we further explore the usage of CNNs and show that it’s possible to leverage randomly initialized CNNs without training. Our observation is that the CNN architecture inherently extracts features with certain extents of robustness to viewpoint/illumination changes and thus, it can be regarded as a descriptor extractor. Consequently, randomized CNNs serve as descriptor extractors and a subsequent consensus mechanism detects keypoints using them. Such description and detection pipeline can be used to match keypoints in images and achieves higher generalization ability than the state-of-the-art methods in our experiments. 1 INTRODUCTION Keypoint detection, description, and matching in images are fundamental building blocks in many computer vision tasks, such as visual localization (Sattler et al., 2016; Taira et al., 2018; Dusmanu et al., 2019; Revaud et al., 2019; Sarlin et al., 2019; 2020; Tang et al., 2021), Structure-from-Motion (SfM) (Snavely et al., 2006; Wu, 2013; Cui & Tan, 2015; Schönberger & Frahm, 2016; Lindenberger et al., 2021), Simultaneous Localization and Mapping (SLAM) (Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017; Dai et al., 2017), object detection (Csurka et al., 2004; Yang et al., 2019), and pose estimation (Suwajanakorn et al., 2018; Kundu et al., 2018). The keypoints, in general, refer to the salient pixels that are then matched across images forming point-to-point correspondences. They should be discriminative and robust to viewpoint/illumination changes to be accurately matched. Traditional approaches follow a detection-then-description pipeline that first detect salient pixels (Harris et al., 1988; Lowe, 2004; Mikolajczyk & Schmid, 2004) then compute local descriptors (Lowe, 1999; Bay et al., 2006; Calonder et al., 2011; Rublee et al., 2011) on top of those pixels. Typically, the detectors consider low-level 2D geometry information such as corners and blobs. To deal with large viewpoint distances, scaled pyramids are applied with Laplace of Gaussian (LOG), Difference of Gaussian (DOG), etc. For description, local statistics such as gradients and histograms are computed and used as visual descriptors. To name a few, SIFT (Lowe, 1999), and its variant RootSIFT (Arandjelović & Zisserman, 2012), are still popular nowadays due to their generality. In recent years, learning-based approaches on top of convolutional neural networks (CNNs) (Yi et al., 2016; Noh et al., 2017; Ono et al., 2018; Mishkin et al., 2018; DeTone et al., 2018; Dusmanu et al., 2019; Revaud et al., 2019) achieve promising results, especially in extreme appearance changes, such as images taken at day or night (Zhou et al., 2016), and across seasons (Sattler et al., 2018). Compared with traditional handcrafted approaches, the key advantage of introducing deep learning is the ability to learn robust keypoint representations from large-scale datasets. The aforementioned methods apply either supervised or self-supervised learning mechanisms to train their networks. After training, the off-the-shelf detectors and descriptors generalize well to several new datasets. 
Benefiting from their simplicity and effectiveness, the learned features such as SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019) are widely used nowadays. In this paper, we further explore CNNs in keypoint detection, description, and matching, without requiring the deep networks to be trained. Our observation is that the CNN architecture itself inherently extracts features with a certain degree of robustness to viewpoint/illumination changes. Therefore, the extracted features can be directly used as visual descriptors, as shown in Figure 1. Since no training is required, additional sets of visual feature descriptors can be obtained for free by changing the random seed that generates the network parameters. In order to minimize the number of incorrect matches (e.g., due to similar and ambiguous image regions), we propose a consensus mechanism that considers several randomly generated descriptors simultaneously to filter incorrect matches. This consensus design can be regarded as keypoint matching and, in our experiments, it successfully filters out a large number of wrong matches. The final set of matches, consistent with the epipolar geometry, is found by the widely-used RANSAC (Fischler & Bolles, 1981) algorithm. We summarize our contributions as follows:
• We show the possibility of using CNNs for keypoint description, detection, and matching without requiring the deep networks to be trained. This allows the algorithm to generalize well across multiple modalities (domains).
• Benefiting from our no-training design, we can freely generate multiple descriptors for each keypoint. This allows for introducing a consensus mechanism to detect robust keypoint matches among the candidates produced by randomized CNNs.
• The proposed pipeline achieves performance comparable to state-of-the-art detectors and descriptors while performing better on images of new modalities.
2 RELATED WORK Traditional keypoint detection and description. In general, a good keypoint should be easy to find and ideally its location should be suitable for computing a visual descriptor. Therefore, early works (Harris et al., 1988; Shi et al., 1994; Lowe, 2004; Mikolajczyk & Schmid, 2004) detect keypoints as various types of edges, corners, blobs, shapes, etc. In recent decades, detectors and descriptors like SIFT (Lowe, 2004), SURF (Bay et al., 2006), and RootSIFT (Arandjelović & Zisserman, 2012) are widely used due to their generality. Benefiting from time efficiency, binary descriptors such as BRIEF (Calonder et al., 2011), BRISK (Leutenegger et al., 2011), and ORB (Rublee et al., 2011) are also popular in many real-time applications, notably the ORB-SLAM series (Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017). The descriptors designed in the aforementioned traditional approaches are, in general, local statistics with a certain degree of invariance to scale and rotation. In this paper, we showcase that random statistics stemming from convolutional neural networks (CNNs) can also be used as visual descriptors. Learning-based keypoint detection and description. FAST (Rosten & Drummond, 2006) is the first approach that introduces machine learning for corner detection. Recent works (Savinov et al., 2017b; Zhang & Rusinkiewicz, 2018; Di Febbo et al., 2018; Laguna & Mikolajczyk, 2022) make use of deep learning with CNNs to boost the performance. 
Most of the learning-based methods focus on description (Simonyan et al., 2014; Simo-Serra et al., 2015; Balntas et al., 2016; Savinov et al., 2017a; Mishchuk et al., 2017; He et al., 2018; Luo et al., 2019). Based on the traditional detection-then-description pipeline, LIFT (Yi et al., 2016) takes both keypoint detection and description into account. SuperPoint (DeTone et al., 2018) is the first approach to perform both tasks in a single network. One problem with supervised learning of keypoint detectors is how to define saliency. SuperPoint first makes use of a synthetic dataset consisting of different shapes and regards the junctions as keypoints for pre-training. Then homographic adaptation is applied to other datasets (e.g., MS-COCO (Lin et al., 2014)) for self-supervised learning. D2-Net (Dusmanu et al., 2019) proposes to perform detection after description; an additional loss term is added to encourage repeatability. Meanwhile, the keypoints should be not only repeatable but also reliable, which motivates the approach of R2D2 (Revaud et al., 2019). Other recent works (Noh et al., 2017; Ono et al., 2018; Luo et al., 2020; Tyszkiewicz et al., 2020; Li et al., 2022) apply a similar pipeline and contribute to network designs and training mechanisms. In this paper, we focus on the approaches with simple yet effective network architectures and explore the impact of randomness and the consensus mechanism. Specifically, SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019) are chosen as representatives. There are also learning-based dense or semantic correspondence predictions such as UCN (Choy et al., 2016) and NBB (Aberman et al., 2018), which are beyond our scope. Consensus mechanism. Robust estimation is the problem of simultaneously estimating the parameters of an unknown mathematical model and finding the points consistent with it (i.e., inliers) in a set of noisy inliers and large-scale measurement errors (i.e., outliers). One of the most popular robust estimators is RANdom SAmple Consensus (RANSAC) (Fischler & Bolles, 1981), which iteratively selects minimal sets of data points, estimates the model parameters, and calculates the support (i.e., the number of inliers). There are many variants (Brachmann et al., 2017; Barath & Matas, 2018; Barath et al., 2020; Ivashechkin et al., 2021), and the ideas of voting and consensus are widely used in computer vision problems such as visual localization (Brachmann & Rother, 2019; Huang et al., 2021), object detection (Qi et al., 2019), and pose estimation (Peng et al., 2019). In this paper, we apply the idea of voting and perform a consensus mechanism to detect robust keypoint matches. 3 METHOD Problem statement. Given a pair of images containing overlapping scene regions, the task of keypoint matching is to find a set of pixel-wise matches that correspond to the same underlying 3D scene points. These matches enable downstream tasks, e.g., pose estimation and Structure-from-Motion (SfM). Note that camera pose estimation is, in principle, a minimal problem, i.e., it can be solved from only a few high-precision matches. However, in practice, due to low precision and the presence of outliers, a certain number of matches is required to run robust estimation. Method overview. Figure 2 illustrates our proposed method. The input is a pair of images, and the output is a set of pixel-wise matches. 
We make use of m VGG-style (Simonyan & Zisserman, 2014) convolutional neural networks (CNNs) with random parameters, i.e., there are m different visual descriptor extractors. Therefore, for each pixel in each image, we obtain m descriptors. Next, we apply a matcher (e.g., the nearest neighbor matcher) across images to select similar pixels based on the extracted descriptors. Note that the matching process is executed independently for each extractor. Consequently, we obtain m sets of match candidates. These candidates are then fed into a consensus mechanism to produce the final matches. Below, we first describe the randomized CNNs that generate match candidates in Section 3.1, and then introduce the consensus mechanism that produces the final matches in Section 3.2. 3.1 RANDOM DESCRIPTION The keypoint extraction process of a single CNN is illustrated in Figure 3. Following SuperPoint (DeTone et al., 2018), we apply a simplified network architecture as the descriptor extractor f that takes the full image $I_{H \times W}$ as input and produces the feature map $F_{H \times W \times N} = f(I_{H \times W})$ as pixel-wise descriptors, where $H, W \in \mathbb{N}$ refer to the image height and width, and $N \in \mathbb{N}$ refers to the dimension of the descriptors. Before applying the descriptor matcher, the feature map F is processed into a saliency map to filter out homogeneous regions. Descriptor. In our method, the CNNs are randomly initialized without any training. Our intuition is that a convolution kernel computes a certain type of local statistics inside its receptive field, just like traditional methods that count handcrafted gradients and histograms. Therefore, a CNN is a combination of kernels that counts statistics of statistics. In the literature, there are machine-learning-like algorithms that apply random statistics to solve computer vision problems, such as place recognition (Glocker et al., 2014) and visual localization (Cavallari et al., 2019), which demonstrate the effectiveness of randomized-then-fixed statistics. Note that, in our method, the parameters of the CNNs are also fixed after the random initialization, so that each CNN consistently computes the same type of statistics at inference time. Consequently, a descriptor can only be matched against descriptors of the same type, extracted by the same CNN. Since we employ multiple CNNs independently, the process of keypoint extraction and matching can be deployed in parallel. Saliency. Before matching, we leverage a saliency detection process to reduce the matching space. Directly matching the descriptors from a randomized CNN results in many candidates lying on homogeneous regions, such as textureless floors and walls. To filter out these meaningless candidates, we adopt the keypoint detection formulation proposed in D2-Net (Dusmanu et al., 2019). The key idea is to detect local maxima in the high-level visual descriptor space, rather than detecting local 2D patterns in the low-level image color space. Specifically, the detection formulation considers two aspects: a local softmax $\alpha$ among nearby pixels in each feature channel, and a ratio $\beta$ among the feature channels of each pixel. The $\alpha$ and $\beta$ scores are defined as follows:
$$\alpha_{i,j,k} = \frac{\exp(F_{i,j,k})}{\sum_{(i',j') \in \mathcal{N}(i,j)} \exp(F_{i',j',k})}, \qquad \beta_{i,j,k} = F_{i,j,k} \,/\, \max_t F_{i,j,t}, \qquad (1)$$
where $\mathcal{N}(i,j)$ refers to the locations of the neighboring pixels around the pixel at $(i,j)$, including itself. The saliency score is defined as $s_{i,j} = \max_k (\alpha_{i,j,k} \cdot \beta_{i,j,k})$ and is then normalized at the image level. Note that the aforementioned process is a forward computation without additional parameters. 
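To make Eq. (1) concrete, the following PyTorch sketch computes the soft-detection score on top of a small, randomly initialized convolutional stack. It is an illustration rather than the paper's released code: the layer widths, the 3×3 neighborhood, and the sum-to-one image-level normalization are assumptions (the actual 7-layer extractor is described in Section 4.1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as TF

def random_extractor(in_ch: int = 3, dim: int = 256) -> nn.Module:
    # Stand-in for the paper's VGG-style extractor: randomly initialized, never trained.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        nn.Conv2d(128, dim, 3, padding=1),
    )

def saliency(feat: torch.Tensor, radius: int = 1) -> torch.Tensor:
    """Soft-detection score of Eq. (1). feat is (N, H, W); returns (H, W)."""
    k = 2 * radius + 1
    exp_f = feat.exp()
    # Sum of exp over the k x k spatial neighborhood of each pixel (approximate at
    # image borders because zero padding is counted in the kernel area).
    neigh_sum = TF.avg_pool2d(exp_f.unsqueeze(0), k, stride=1, padding=radius).squeeze(0) * (k * k)
    alpha = exp_f / neigh_sum                      # local spatial softmax, per channel
    beta = feat / feat.amax(dim=0, keepdim=True)   # ratio to the per-pixel channel maximum
    s = (alpha * beta).amax(dim=0)                 # max over channels
    return s / s.sum()                             # one possible image-level normalization

# Usage: descriptors and saliency for one image from one randomized CNN.
torch.manual_seed(0)                               # the seed defines one "descriptor type"
net = random_extractor().eval()
with torch.no_grad():
    image = torch.rand(1, 3, 120, 160)             # placeholder input image
    raw = net(image)[0]                            # (256, H, W) feature map F
    desc = TF.normalize(raw, dim=0)                # unit-length pixel-wise descriptors
    scores = saliency(raw)                         # per-pixel saliency used to prune candidates
```

In the full pipeline, only the pixels whose score exceeds the image's median would be kept for matching, following the default in Section 4.1.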
In D2-Net (Dusmanu et al., 2019), this formulation is used to perform soft detection during training. In our experiments, we observe that the process effectively assigns high scores to salient pixels even if the CNN parameters are randomized. Matching. For keypoint matching, we make use of a classical nearest neighbor matcher: for each descriptor in one image, it retrieves the top-2 most similar descriptors in the other image and applies a ratio test (Lowe, 2004) to filter out ambiguous descriptors. Then, a mutual nearest neighbor check is applied to keep only those matches that are stable in both matching directions, i.e., from the left to the right image and vice versa. A match candidate is represented as $p = \{(i_1, j_1), (i_2, j_2)\}$, where $i, j \in \mathbb{N}$ refer to the 2D locations of the two associated keypoints. For each type of descriptor from the same CNN $f_i$, the aforementioned matching process is executed independently to generate a set of match candidates $M_i = \{p_{i_1}, p_{i_2}, \ldots, p_{i_k}\}$. As a result, we obtain m sets of match candidates as the input for the following consensus mechanism. 3.2 CONSENSUS MECHANISM Directly passing all the match candidates $M = \bigcup_{i=1}^{m} M_i$ to a downstream task such as pose estimation often fails due to a large proportion of wrong matches. In this section, we introduce a simple and effective consensus mechanism that rejects incorrect matches early. For each keypoint in M, our goal is to find a correct match or to discard it. The idea, inspired by RANSAC (Fischler & Bolles, 1981), is to first generate model hypotheses using random minimal samples and then vote for each hypothesis using the rest of the samples to select the most consistent one. In our problem, a randomly selected candidate $p \in M$ serves as the minimal sample (i.e., a model hypothesis) that gives a match between keypoints $(i_1, j_1)$ and $(i_2, j_2)$. Then the rest of the match candidates correlated with the two keypoints vote on whether they support the hypothesis. This process is achieved by keypoint clustering and consensus scoring, which are introduced in detail below. Keypoint clustering. Given a match hypothesis generated from $p_x = \{(i_{x_1}, j_{x_1}), (i_{x_2}, j_{x_2})\} \in M$, the objective is to check whether it is in consensus with the other correlated match candidates in M. The correlated candidates $M_x \subseteq M$ are obtained by collecting all the candidates that are associated with keypoint $(i_{x_1}, j_{x_1})$ or $(i_{x_2}, j_{x_2})$. With the keypoints in $M_x$, we apply 2D location clustering in each image separately. Based on the clustering results, we compute a consensus score as a measurement of the keypoint distribution. If the keypoints are well distributed, the hypothesis passes the consensus check and we update the keypoint locations with the center points of the most consistent clusters. Otherwise, the hypothesis is discarded. The hypotheses with optimized keypoint locations are output as the final robust keypoint matches. Consensus scoring. To quantitatively measure the consensus status (keypoint distribution) after clustering, we introduce a consensus score on top of the set of clusters Q. First, clusters containing only one keypoint are immediately discarded. For each remaining cluster $q \in Q$, we compute a density score defined as
$$d_q = |q| \,/\, \mathrm{std}(q), \qquad (2)$$
where $|q|$ refers to the number of keypoints and $\mathrm{std}(q)$ refers to the standard deviation of the 2D locations, which approximates the cluster radius. Finally, the consensus score is defined as
$$c = \begin{cases} d_q & \text{if } |Q| = 1 \\ \max_q(d_q) \,/\, \sum_{q \in Q} d_q & \text{otherwise.} \end{cases} \qquad (3)$$
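To make the clustering and scoring step concrete, the following Python sketch implements Eqs. (2)-(3) with scikit-learn's MeanShift, the clustering algorithm named in Section 4.1. Treating std(q) as the standard deviation pooled over both coordinates and returning the densest cluster's center are our assumptions, since the paper does not spell out these details.

```python
import numpy as np
from sklearn.cluster import MeanShift

def consensus_score(points: np.ndarray, bandwidth: float = 24.0):
    """Score the 2D keypoint locations (K, 2) that support one match hypothesis.

    Returns (score, center): the consensus score of Eqs. (2)-(3) and the mean of the
    densest cluster, or (0.0, None) if no multi-point cluster survives.
    """
    labels = MeanShift(bandwidth=bandwidth).fit(points).labels_
    clusters = [points[labels == l] for l in np.unique(labels)]
    clusters = [q for q in clusters if len(q) > 1]      # drop single-keypoint clusters
    if not clusters:
        return 0.0, None
    # Density score d_q = |q| / std(q); std over both coordinates as a radius proxy.
    densities = np.array([len(q) / (q.std() + 1e-8) for q in clusters])
    best = int(densities.argmax())
    score = densities[best] if len(clusters) == 1 else densities[best] / densities.sum()
    return float(score), clusters[best].mean(axis=0)

# The check is run on the correlated keypoints in each image separately; a hypothesis is
# kept when the score exceeds the threshold (1.0 by default in Section 4.1), and its
# keypoint locations are replaced by the returned cluster centers.
```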
Three examples of the consensus status are illustrated in Figure 4, and the set of keypoints in orange achieves the best score among the three. Generality. The proposed consensus mechanism is agnostic to keypoint descriptors, detectors, and matchers. Therefore, alternatives such as the trained SuperPoint and SuperGlue (Sarlin et al., 2020) can also be ensembled into the framework. 4 EXPERIMENTS In this section, we validate the effectiveness of our method. We first elaborate on the implementation details in Section 4.1. Then, we conduct comparisons with state-of-the-art representative methods on both matching performance in Section 4.2 and pose estimation in Section 4.3. Last, we perform analysis and ablation studies on our method in Section 4.4. 4.1 IMPLEMENTATION DETAILS In all the experiments, we make use of a 7-layer convolutional neural network (CNN) as the basic descriptor extractor. Each layer of the network is followed by a ReLU activation, except for the last layer, and each of the first 3 activations is followed by a pooling layer. The final feature maps, containing N = 256 dimensional descriptors, are normalized to unit vectors before output. We apply bilinear interpolation to resize the descriptors to match the input resolution. By default, we employ m = 8 randomized CNNs for visual feature extraction. Using the saliency map, we only keep the keypoints whose scores are higher than the median for the subsequent matching process. The ratio test threshold during the matching process is set to 0.95. For the consensus mechanism, we apply the MeanShift (Pedregosa et al., 2011) algorithm with a bandwidth of 24 pixels to perform keypoint clustering. The threshold of the consensus score is set to 1.0 by default, i.e., we output a match if there is only one valid and compact cluster. 4.2 KEYPOINT MATCHING Competitors. There are various related approaches on top of CNN-based keypoints, and we consider the following methods the most relevant and representative competitors: DELF (Noh et al., 2017), SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019). We also include RootSIFT (Arandjelović & Zisserman, 2012) as the representative of traditional handcrafted approaches. Datasets. We conduct experiments on the 7-Scenes (Shotton et al., 2013) and MegaDepth (Li & Snavely, 2018) datasets since they provide depth images that can be used to verify matches densely. The 7-Scenes dataset consists of 7 indoor scenes with RGB-D images and ground-truth camera poses. Several sequences are officially divided into training and test sets for each scene. Since noisy poses exist in the training set, we only use the test set for our evaluation. To avoid view selection biases, we uniformly sample each test sequence to form our test pairs. We use the original color images, calibrated depth images, and normal images (computed from the depth images) as our test input to evaluate the generalization ability to different modalities (domains). The MegaDepth dataset contains outdoor scenes with RGB-D images; we use a subset with ground-truth camera poses from (Tyszkiewicz et al., 2020; Sun et al., 2021) as the test set. Since the MegaDepth evaluation set is part of the training set of D2-Net, we report the results of D2-Net only on the depth and normal images in our evaluation. Metrics. To quantitatively evaluate the keypoint matching results, we apply matching accuracy (i.e., the proportion of correct matches) and the number of correct matches as our metrics. 
On the 7-Scenes dataset, a match is considered correct if the distance between the corresponding 3D points is lower than a threshold (1 cm and 5 cm). On the MegaDepth dataset, due to scale ambiguity, we apply thresholds (5 pixels and 20 pixels) on the reprojection error instead of the absolute 3D distance. Results. The results are shown in Table 1. On the 7-Scenes dataset, our method obtains more correct matches while the accuracy is comparable with the competitors. On depth and normal images, compared with color images, all methods suffer from performance drops, but ours is overall the best. This demonstrates the effectiveness of the randomized CNNs with the consensus mechanism. The MegaDepth dataset is much more challenging than 7-Scenes due to the large viewpoint changes, and we observe that our method obtains reasonable results. Besides the quantitative results, we visualize the keypoint matches of two samples in Figure 5.

7-Scenes (%Accuracy / #Matches):
Method     | Color 1 cm | Color 5 cm  | Depth 1 cm | Depth 5 cm | Normal 1 cm | Normal 5 cm
RootSIFT   | 13.12 / 11 | 87.50 / 84  | 0.00 / 0   | 12.97 / 3  | 0.00 / 0    | 31.25 / 6
DELF       | 2.86 / 1   | 64.71 / 16  | 0.46 / 1   | 11.96 / 20 | 0.0 / 0     | 13.82 / 11
SuperPoint | 11.19 / 22 | 85.02 / 176 | 2.00 / 2   | 30.71 / 22 | 3.03 / 2    | 36.16 / 31
D2-Net     | 10.66 / 6  | 90.05 / 60  | 0.00 / 0   | 27.27 / 7  | 0.00 / 0    | 18.75 / 1
R2D2       | 12.50 / 12 | 94.29 / 108 | 0.00 / 0   | 45.45 / 5  | 0.00 / 0    | 0.00 / 0
Ours       | 7.75 / 19  | 78.03 / 226 | 2.53 / 2   | 43.89 / 37 | 6.78 / 3    | 71.58 / 37

MegaDepth (%Accuracy / #Matches):
Method     | Color 5 px  | Color 20 px | Depth 5 px | Depth 20 px | Normal 5 px | Normal 20 px
RootSIFT   | 79.64 / 87  | 83.49 / 93  | 5.41 / 2   | 11.76 / 4   | 41.88 / 19  | 52.34 / 24
DELF       | 6.35 / 32   | 31.51 / 160 | 0.90 / 4   | 6.28 / 28   | 2.10 / 6    | 14.11 / 43
SuperPoint | 74.55 / 213 | 80.62 / 229 | 7.31 / 9   | 15.85 / 18  | 30.29 / 58  | 44.05 / 84
D2-Net     | -           | -           | 17.71 / 2  | 50.44 / 7   | 53.85 / 2   | 96.30 / 4
R2D2       | 98.29 / 63  | 100.00 / 64 | 30.00 / 1  | 66.67 / 3   | 92.86 / 14  | 100.00 / 15
Ours       | 7.69 / 2    | 21.95 / 6   | 4.80 / 9   | 19.80 / 39  | 19.00 / 19  | 42.15 / 46

Table 1: Results of keypoint matching on the 7-Scenes and MegaDepth datasets. The best and second-best numbers are labeled red and blue, respectively. 4.3 POSE ESTIMATION Solvers and metrics. On the 7-Scenes dataset, we apply the PnP (Lepetit et al., 2009) algorithm with RANSAC (Fischler & Bolles, 1981) to solve absolute poses, and we report median translation and rotation errors. On the MegaDepth dataset, due to scale ambiguity, we solve relative poses from essential matrix estimation (Stewenius et al., 2006) with RANSAC, and we report the area under the recall curve (AUC) as in (Sarlin et al., 2020). Results. The results are shown in Table 2. On the 7-Scenes dataset, our poses on depth and normal images are marginally worse than SuperPoint, although our matching results are better. On the MegaDepth dataset, our results on depth images are close to the state of the art, while the results on color and normal images lag behind. The evaluation above reveals limitations of our method, which are discussed in detail in Section 4.4. 4.4 ANALYSIS Stable random statistics. We first validate the sensitivity of the descriptors stemming from randomized CNNs. To do so, we test a single CNN (without the saliency computation) with 10 different random seeds on the 7-Scenes dataset. The mean matching accuracies for color, depth, and normal images are 66.49%, 23.06%, and 34.31%, with standard deviations of 2.43e-03, 1.59e-03, and 2.29e-03. The mean numbers of correct matches are 462.25, 75.00, and 96.75, with standard deviations of 8.66, 1.83, and 3.10. 
From these results, we observe that the performance of random descriptors is stable, with a mean coefficient of variation of 1.54e-02 (i.e., ~2%). Effectiveness of saliency. To evaluate the benefit of the saliency computed on top of the descriptors, we conduct experiments on the 7-Scenes dataset with a single randomized CNN. When we select only the descriptors with saliency scores higher than the median, the matching accuracies reach 70.07% (+3.58%), 34.43% (+11.37%), and 48.73% (+14.42%) for color, depth, and normal images, respectively. In Figure 6, we sample 5 images from the 7-Scenes and MegaDepth datasets and visualize the locations of the top 50 salient descriptors in each image. We observe that the visualized descriptor locations generally lie around salient regions. Ablation studies. In Table 3, we report ablation studies of our method using different numbers of CNNs and ensembling the trained SuperPoint and SuperGlue. The result of the single CNN is obtained without the consensus check. With the consensus check, when there are 3 CNNs, we obtain high accuracy but a low number of correct matches. As the number of CNNs increases, the accuracy drops slightly while we obtain more correct matches. The 7+SP configuration refers to a total of 8 CNNs, one of which is the trained SuperPoint. The SuperGlue matcher is only applied to SuperPoint. As they account for only 1/8 of the ensemble, SuperPoint and SuperGlue do not have much impact. Limitations. Below we analyze the limitations of our method. From Table 1 we observe that for depth and normal images on the 7-Scenes dataset, we are better than SuperPoint on both matching accuracy and number of matches under different thresholds. However, our camera pose estimates shown in Table 2 are less precise. We attribute this to the distribution of the matched keypoints, as shown in Figure 7. Therefore, improving the camera pose solver to deal with misleading, well-structured outliers is a promising future research direction. Another limitation of our method is that the descriptors are not guaranteed to be scale/rotation invariant. As shown in Figure 8, on the HPatches (Balntas et al., 2017) dataset, our method overall underperforms R2D2 and SuperPoint. The main reason is that our method fails on images with very large viewpoint changes. 5 CONCLUSION In this paper, we present a new approach that makes use of random statistics extracted by randomized convolutional neural networks (CNNs) as visual descriptors, followed by a consensus mechanism to perform keypoint matching among images. Incorporating scale/rotation invariance should further improve performance. Network architectures are also worth further exploration and research. A APPENDIX Timing. We run all the experiments on a GeForce GTX 1080 Ti. Our single CNN takes about 7 ms to process a pair of images at 480×640 resolution, while SuperPoint takes around 30 ms. The consensus processing for each match hypothesis takes about 0.3 ms. Note that both the CNN inference and the hypothesis consensus can be implemented in parallel for further acceleration.
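For completeness, the sketch below shows one way to feed a set of final matches into the pose solvers referred to in Section 4.3 using OpenCV. The intrinsics, thresholds, and the placeholder correspondences are assumptions for demonstration only (with random placeholder data the solvers may not return a valid pose); this is an illustration, not the authors' evaluation code.

```python
import numpy as np
import cv2

# Placeholder data: final 2D-2D matches and, for 7-Scenes-style absolute pose,
# the corresponding 3D points back-projected from the depth image.
pts1 = (np.random.rand(100, 2) * [640.0, 480.0]).astype(np.float64)  # keypoints in image 1
pts2 = pts1 + np.random.randn(100, 2)                                # keypoints in image 2
pts3d = np.random.rand(100, 3).astype(np.float64)                    # placeholder 3D points
K = np.array([[585.0, 0.0, 320.0], [0.0, 585.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics

# Absolute pose (7-Scenes): PnP inside a RANSAC loop.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2, K, None, reprojectionError=8.0)

# Relative pose (MegaDepth): essential matrix with RANSAC, then decomposition.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
if E is not None:
    E = E[:3]  # keep the first solution if several candidate matrices are stacked
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
```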
1. What is the main contribution of the paper regarding keypoint matching? 2. What are the strengths of the proposed method, particularly in its ability to compete with existing methods? 3. What are the weaknesses of the paper, such as typos, unclear sentences, and confusing descriptions? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the paper's ideas, experiments, or conclusions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a surprising new method for keypoint matching across image pairs. The idea is essentially to use randomly-initialized CNNs to generate the features. The random features are first preprocessed slightly to get local peaks, and then the features which do not have mutual neighbors are discarded. Finally matches are found with a consensus mechanism similar to RANSAC, which can be split into a clustering stage (generating hypotheses) followed by a scoring stage (selecting hypotheses). The experiments show that this method is competitive with existing methods. Strengths And Weaknesses I think this paper presents some surprising findings. I would not have expected that randomly-initialized CNNs could be made to compete so closely with models like SuperPoint and D2 and R2D2. I also appreciate the analysis on the number of CNNs, and the analysis over distance in the HPatches dataset. It's not the best method, but I learned something from reading the paper. Clarity, Quality, Novelty And Reproducibility "Our observation is that the CNN architecture ... can be regarded as visual descriptors" Some typo here I guess. You cannot regard an architecture as "descriptors". for keypoint description, description and matching, ? These matches enable us, e.g., I think this is a bad sentence. It almost sounds like "us" is the example. "the aforementioned camera pose estimation is a minimal problem" What is a "minimal" problem? "the feature map F is normalized to a saliency map to filter out homogeneous regions" How does this work? I suppose the normalization here is to make the vector magnitude 1, or maybe asks for some sum to be 1, but why does that turn the vectors into saliency cues, and how does it "filter out homogeneous regions"? "Consequently, a descriptor can only be used to match the same type of descriptors extracted by the same CNN. Therefore, the process of keypoint extraction and matching of multiple CNNs is independent and can be deployed in parallel" I don't really see how these two claims connect. The first claim is obvious (you can only compare features within a CNN), and the second is also obvious (you can run multiple CNNs in parallel), but the first claim does not imply the second. "a local softmax \alpha in each channel" What is meant by "local" here? Is this redundantly referring to the fact that the softmax is applied per-channel, or is something else meant? The whole description in the neighborhood of this phrase could be improved. In particular, it would be really great to clarify the i,j,k notation or maybe not use it, since I don't think it's very natural for space and channels to be indexed the same way.
ICLR
1. What is the focus and contribution of the paper on keypoint detection and description? 2. What are the strengths of the proposed approach, particularly regarding the use of untrained CNNs? 3. What are the weaknesses of the paper, especially regarding the consensus mechanism and computational time? 4. Do you have any concerns or questions about the method's application on MegaDepth scenes, specifically regarding the scale factor and absence of color images? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes to apply an ensemble of siamese CNNs to detect and describe keypoints. The parameters of these CNNs are not trained but set randomly. The keypoint descriptors produced by each siamese CNN are matched using a classical nearest neighbor matcher with a ratio test, followed by a mutual nearest neighbor test. All these match candidates are filtered using a novel consensus mechanism. The proposed approach is evaluated on 7-Scenes and MegaDepth against RootSIFT, SuperPoint, D2-Net and R2D2 on three different modalities: color images, depth images and normal images. Strengths And Weaknesses Strengths The paper is overall well written and easy to read, except for Section 3.2 (Consensus mechanism), which is difficult to understand. Employing untrained CNNs is an interesting idea Weaknesses The consensus mechanism in Section 3.2 is difficult to understand. The computational time is probably high compared to other methods as it requires 8 forward passes (8 siamese CNNs are employed in the experiments). Concerning MegaDepth, thresholds of 10 cm and 50 cm have been defined, but if I am not mistaken, for each MegaDepth scene the 3D point cloud is obtained using SfM up to a scale factor. How did you obtain this scale factor? Why did you not run experiments on color images for MegaDepth? (the column "color" is missing both in Table 1 and 2) Clarity, Quality, Novelty And Reproducibility The paper is well written, except for Section 3.2, which is very difficult to understand. I believe adding some illustrations of the output of the MeanShift algorithm would help. The originality of the paper is somewhat limited to me, as I recently reviewed several papers investigating the idea of employing neural networks with random parameters in the related field of shape matching.
ICLR
Title Keypoint Matching via Random Network Consensus Abstract Visual description, detection, and matching of keypoints in images are fundamental components of many computer vision problems, such as camera tracking and (re)localization. Recently, learning-based feature extractors on top of convolutional neural networks (CNNs) have achieved state-of-the-art performance. In this paper, we further explore the usage of CNNs and show that it’s possible to leverage randomly initialized CNNs without training. Our observation is that the CNN architecture inherently extracts features with certain extents of robustness to viewpoint/illumination changes and thus, it can be regarded as a descriptor extractor. Consequently, randomized CNNs serve as descriptor extractors and a subsequent consensus mechanism detects keypoints using them. Such description and detection pipeline can be used to match keypoints in images and achieves higher generalization ability than the state-of-the-art methods in our experiments. 1 INTRODUCTION Keypoint detection, description, and matching in images are fundamental building blocks in many computer vision tasks, such as visual localization (Sattler et al., 2016; Taira et al., 2018; Dusmanu et al., 2019; Revaud et al., 2019; Sarlin et al., 2019; 2020; Tang et al., 2021), Structure-from-Motion (SfM) (Snavely et al., 2006; Wu, 2013; Cui & Tan, 2015; Schönberger & Frahm, 2016; Lindenberger et al., 2021), Simultaneous Localization and Mapping (SLAM) (Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017; Dai et al., 2017), object detection (Csurka et al., 2004; Yang et al., 2019), and pose estimation (Suwajanakorn et al., 2018; Kundu et al., 2018). The keypoints, in general, refer to the salient pixels that are then matched across images forming point-to-point correspondences. They should be discriminative and robust to viewpoint/illumination changes to be accurately matched. Traditional approaches follow a detection-then-description pipeline that first detect salient pixels (Harris et al., 1988; Lowe, 2004; Mikolajczyk & Schmid, 2004) then compute local descriptors (Lowe, 1999; Bay et al., 2006; Calonder et al., 2011; Rublee et al., 2011) on top of those pixels. Typically, the detectors consider low-level 2D geometry information such as corners and blobs. To deal with large viewpoint distances, scaled pyramids are applied with Laplace of Gaussian (LOG), Difference of Gaussian (DOG), etc. For description, local statistics such as gradients and histograms are computed and used as visual descriptors. To name a few, SIFT (Lowe, 1999), and its variant RootSIFT (Arandjelović & Zisserman, 2012), are still popular nowadays due to their generality. In recent years, learning-based approaches on top of convolutional neural networks (CNNs) (Yi et al., 2016; Noh et al., 2017; Ono et al., 2018; Mishkin et al., 2018; DeTone et al., 2018; Dusmanu et al., 2019; Revaud et al., 2019) achieve promising results, especially in extreme appearance changes, such as images taken at day or night (Zhou et al., 2016), and across seasons (Sattler et al., 2018). Compared with traditional handcrafted approaches, the key advantage of introducing deep learning is the ability to learn robust keypoint representations from large-scale datasets. The aforementioned methods apply either supervised or self-supervised learning mechanisms to train their networks. After training, the off-the-shelf detectors and descriptors generalize well to several new datasets. 
Benefiting from their simplicity and effectiveness, the learned features such as SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019) are widely used nowadays. In this paper, we further explore CNNs in keypoint detection, description, and matching, without requiring the deep networks to be trained. Our observation is that the CNN architecture itself inher- ently extracts features with a certain extent of robustness to viewpoint/illumination changes. Therefore, the extracted features can be directly used as visual descriptors, as shown in Figure 1. Since no training is required, it is free to obtain a set of visual feature descriptors by changing the random seed that generates the network parameters. In order to minimize the number of incorrect matches (e.g., due to similar and ambiguous image regions), we propose a consensus mechanism considering several randomly generated descriptors simultaneously to filter incorrect matches. This consensus design can be regarded as keypoint matching and, to our experiments, it successfully filters out a large amount of wrong matches. The final set of matches, consistent with the epipolar geometry, is found by the widely-used RANSAC (Fischler & Bolles, 1981) algorithm. We summarize our contributions as follows: • We show the possibility that using CNNs for keypoint description, detection, and matching, without requiring the deep networks to be trained. This allows the algorithm to generalize well across multiple modalities (domains). • Benefiting from our no-training design, we can freely generate multiple descriptors for each keypoint. This allows for introducing a consensus mechanism to detect robust keypoint matches among the candidates produced by randomized CNNs. • The proposed pipeline achieves similar performance compared with state-of-the-art detectors and descriptors while performing better on the images of new modalities. 2 RELATED WORK Traditional keypoint detection and description. In general, a good keypoint should be easy to find and ideally the location of the keypoint is suitable for computing a visual descriptor. Therefore, early works (Harris et al., 1988; Shi et al., 1994; Lowe, 2004; Mikolajczyk & Schmid, 2004) detect keypoint as various types of edges, corners, blobs, shapes, etc. In recent decades, detectors and descriptors like SIFT (Lowe, 2004), SURF (Bay et al., 2006), and RootSIFT (Arandjelović & Zisserman, 2012) are widely used due to their generality. Benefiting from time efficiency, binary descriptors such as BRIEF (Calonder et al., 2011), BRISK (Leutenegger et al., 2011), and ORB (Rublee et al., 2011) are also popular in many real-time applications, namely the series works of ORB-SLAM (Mur-Artal et al., 2015; Mur-Artal & Tardós, 2017). The descriptors designed in the aforementioned traditional approaches, in general, are local statistics with certain extents of invari- ance to scale and rotation. In this paper, we showcase that random statistics stem from convolutional neural networks (CNNs) can also be used as visual descriptors. Learning-based keypoint detection and description. FAST (Rosten & Drummond, 2006) is the first approach that introduces machine learning for corner detection. Recent works (Savinov et al., 2017b; Zhang & Rusinkiewicz, 2018; Di Febbo et al., 2018; Laguna & Mikolajczyk, 2022) make use of deep learning with CNNs to boost the performance. 
Most of the learning-based methods focus on description (Simonyan et al., 2014; Simo-Serra et al., 2015; Balntas et al., 2016; Savinov et al., 2017a; Mishchuk et al., 2017; He et al., 2018; Luo et al., 2019). Based on the traditional detection-then-description pipeline, LIFT (Yi et al., 2016) takes both keypoint detection and description into account. SuperPoint (DeTone et al., 2018) is the first approach to perform both tasks in a single network. One problem with supervised learning of keypoint detectors is that how to define the saliency. SuperPoint first makes use of a synthetic dataset consisting of different shapes and regards the junctions as keypoints for pre-training. Then homographic adaptation is applied to other datasets (e.g., MS-COCO (Lin et al., 2014)) for self-supervised learning. D2-Net (Dusmanu et al., 2019) proposes to perform detection after description, an additional loss term is added to seek repeatability. Meanwhile, the keypoints should be not only repeatable but also reliable, which motivates the approach of R2D2 (Revaud et al., 2019). Other recent works (Noh et al., 2017; Ono et al., 2018; Luo et al., 2020; Tyszkiewicz et al., 2020; Li et al., 2022) apply a similar pipeline and contribute on network designs and training mechanisms. In this paper, we focus on the approaches with simple yet effective network architectures and explore the impact of randomness and consensus mechanism. Specifically, SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019) are chosen as representatives. There are also learning-based dense or semantic correspondence predictions such as UCN (Choy et al., 2016) and NBB (Aberman et al., 2018), which are beyond our scope. Consensus mechanism. Robust estimation is the problem of simultaneously estimating the parameters of an unknown mathematical model and finding the points consistent with it (i.e., inliers) in a set of noisy inliers and large-scale measurement errors (i.e., outliers). One of the most popular robust estimators is the RANdom SAmple Consensus (RANSAC) (Fischler & Bolles, 1981) that iteratively selects minimal sets of data points, estimates the model parameters, and calculates the support (i.e., number of inliers). There are many variants (Brachmann et al., 2017; Barath & Matas, 2018; Barath et al., 2020; Ivashechkin et al., 2021) and the idea of voting and consensus are widely used in computer vision problems such as visual localization (Brachmann & Rother, 2019; Huang et al., 2021), object detection (Qi et al., 2019), and pose estimation (Peng et al., 2019). In this paper, we apply the idea of voting and perform a consensus mechanism to detect robust keypoint matches. 3 METHOD Problem statement. Given a pair of images containing overlapping scene regions, the task of keypoint matching is to find a set of pixel-wise matches that correspond to the same underlying 3D scene points. These matches enable downstream tasks, e.g., pose estimation and Structure-fromMotion (SfM). Note that the aforementioned camera pose estimation is a minimal problem that requires only a few high-precision matches. However, in practice, due to low precision and outliers existing, a certain amount of matches are required to run robust estimation. Method overview. Figure 2 illustrates our proposed method. The input is a pair of images, and the output is a set of pixel-wise matches. 
We make use of m VGG-style (Simonyan & Zisserman, 2014) convolutional neural networks (CNNs) with random parameters, i.e., there are m different visual descriptor extractors. Therefore, for each pixel in each image, we obtain m descriptors. Next, we apply a matcher (e.g., the nearest neighbor matcher) across images to select similar pixels based on the extracted descriptors. Note that the matching process is executed independently for each extractor. Consequently, we obtain m sets of match candidates. These candidates are then fed into a consensus mechanism to produce the final matches. Below, we first describe the randomized CNNs that generate match candidates in Section 3.1, and then introduce the consensus mechanism that produces the final matches in Section 3.2. 3.1 RANDOM DESCRIPTION The keypoint extraction process of a single CNN is illustrated in Figure 3. Following SuperPoint (DeTone et al., 2018), we apply a simplified network architecture as the descriptor extractor f that takes the full image I_{H×W} as input and produces the feature map F_{H×W×N} = f(I_{H×W}) as pixel-wise descriptors, where H, W ∈ N refer to the image height and width, and N ∈ N refers to the dimension of the descriptors. Before applying the descriptor matcher, the feature map F is converted into a saliency map to filter out homogeneous regions. Descriptor. In our method, the CNNs are randomly initialized without any training. Our intuition is that a convolution kernel computes a certain type of local statistics inside its receptive field, just like traditional methods that accumulate handcrafted gradients and histograms; a CNN is therefore a composition of kernels that computes statistics of statistics. In the literature, there are machine-learning-like algorithms that apply random statistics to solve computer vision problems, such as place recognition (Glocker et al., 2014) and visual localization (Cavallari et al., 2019), which demonstrate the effectiveness of randomized-then-fixed statistics. Note that, in our method, the parameters of the CNNs are also fixed after random initialization, so that each CNN consistently computes the same type of statistics at inference time. Consequently, a descriptor can only be matched against descriptors of the same type, extracted by the same CNN. Since we employ multiple CNNs independently, the process of keypoint extraction and matching can be deployed in parallel. Saliency. Before matching, we leverage a saliency detection process to reduce the matching space. Directly matching the descriptors from a randomized CNN results in many candidates lying on homogeneous regions, such as textureless floors and walls. To filter out these meaningless candidates, we adopt the keypoint detection formulation proposed in D2-Net (Dusmanu et al., 2019). The key idea is to detect local maxima in the high-level visual descriptor space, rather than detecting local 2D patterns in the low-level image color space. Specifically, the detection formulation considers two aspects: a local softmax α among nearby pixels in each feature channel, and a ratio β among the feature channels of each pixel. The α and β scores are defined as follows:
\alpha_{i,j,k} = \frac{\exp(F_{i,j,k})}{\sum_{(i',j') \in \mathcal{N}(i,j)} \exp(F_{i',j',k})}, \qquad \beta_{i,j,k} = \frac{F_{i,j,k}}{\max_t F_{i,j,t}}, \qquad (1)
where \mathcal{N}(i,j) refers to the neighbor pixels' locations around the pixel at (i, j), including itself. The saliency score is defined as s_{i,j} = \max_k(\alpha_{i,j,k} \cdot \beta_{i,j,k}) and then normalized at the image level. Note that the above process is a forward computation without additional parameters.
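To make the random-description step concrete, here is a minimal PyTorch sketch of a frozen, randomly initialized descriptor extractor together with the saliency score of Equation 1. The layer widths, window size, and helper names are illustrative assumptions rather than the exact architecture used in the paper (whose details appear in Section 4.1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomDescriptor(nn.Module):
    """A small VGG-style CNN with randomly initialized, frozen weights.
    Channel widths here are illustrative, not the paper's exact architecture."""
    def __init__(self, out_dim=256, seed=0):
        super().__init__()
        torch.manual_seed(seed)          # a different seed yields a different extractor
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_dim, 3, padding=1),
        )
        for p in self.parameters():      # fixed after random initialization
            p.requires_grad_(False)

    def forward(self, img):              # img: (B, 1, H, W)
        feat = self.net(img)             # (B, N, H, W) pixel-wise descriptors
        return F.normalize(feat, dim=1)  # unit-length descriptors

def saliency(feat, window=3):
    """D2-Net-style soft detection score (Eq. 1): local softmax alpha per channel
    times the channel ratio beta, max over channels, then image-level normalization."""
    exp = feat.exp()
    local_sum = F.avg_pool2d(exp, window, stride=1, padding=window // 2) * window ** 2
    alpha = exp / local_sum                                  # alpha_{i,j,k}
    beta = feat / feat.amax(dim=1, keepdim=True)             # beta_{i,j,k}
    score = (alpha * beta).amax(dim=1)                       # max over channels k
    return score / score.amax(dim=(1, 2), keepdim=True)      # image-level normalize

img = torch.rand(1, 1, 480, 640)
desc = RandomDescriptor(seed=42)(img)
sal = saliency(desc)
keep = sal[0] > sal[0].median()          # keep only descriptors above the median saliency
```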
In D2-Net (Dusmanu et al., 2019), this formulation is used to perform soft detection during training. In our experiments, we observe that the process effectively assigns high scores to salient pixels even if the CNN parameters are randomized. Matching. For keypoint matching, we make use of a classical nearest neighbor matcher that, for each descriptor in one image, retrieves the two most similar descriptors in the other image and applies a ratio test (Lowe, 2004) to filter out ambiguous descriptors. Then, a mutual nearest neighbor check is applied to keep only those matches that are stable in both matching directions, i.e., from the left to the right image and vice versa. A match candidate is represented as p = {(i_1, j_1), (i_2, j_2)}, where i, j ∈ N refer to the 2D locations of the two associated keypoints. For each type of descriptor from the same CNN f_i, the matching process above is executed independently to generate a set of match candidates M_i = {p_{i_1}, p_{i_2}, ..., p_{i_k}}. As a result, we obtain m sets of match candidates as the input for the following consensus mechanism. 3.2 CONSENSUS MECHANISM Directly passing all the match candidates M = \bigcup_{i=1}^{m} M_i to a downstream task such as pose estimation often fails due to a large proportion of wrong matches. In this section, we introduce a simple and effective consensus mechanism that rejects incorrect matches early. For each keypoint in M, our goal is to find a correct match or to discard it. The idea, inspired by RANSAC (Fischler & Bolles, 1981), is to first generate model hypotheses from random minimal samples and then let the remaining samples vote for each hypothesis in order to select the one with the strongest consensus. In our problem, a randomly selected candidate p ∈ M serves as the minimal sample (i.e., a model hypothesis) that proposes a match between keypoints (i_1, j_1) and (i_2, j_2). The remaining match candidates correlated with these two keypoints then vote on whether they support the hypothesis. This process is achieved by keypoint clustering and consensus scoring, which are introduced in detail below. Keypoint clustering. Given a match hypothesis generated from p_x = {(i_{x_1}, j_{x_1}), (i_{x_2}, j_{x_2})} ∈ M, the objective is to check whether it is in consensus with the other correlated match candidates in M. The correlated candidates M_x ⊆ M are obtained by collecting all the candidates that are associated with keypoint (i_{x_1}, j_{x_1}) or (i_{x_2}, j_{x_2}). With the keypoints in M_x, we apply 2D location clustering in each image separately. Based on the clustering results, we compute a consensus score as a measure of the keypoint distribution. If the keypoints are well distributed, the hypothesis passes the consensus check and we update the keypoint locations with the center points of the most consensual clusters; otherwise, the hypothesis is discarded. The hypotheses with optimized keypoint locations are output as the final robust keypoint matches. Consensus scoring. To quantitatively measure the consensus status (keypoint distribution) after clustering, we introduce a consensus score on top of the clusters Q. First, clusters containing only one keypoint are immediately discarded. For each remaining cluster q ∈ Q, we compute a density score defined as
d_q = \frac{|q|}{\mathrm{std}(q)}, \qquad (2)
where |q| refers to the number of keypoints and \mathrm{std}(q) refers to the standard deviation of their 2D locations, which approximates the cluster radius. Finally, the consensus score is defined as
c = \begin{cases} d_q & \text{if } |Q| = 1 \\ \max_q(d_q) \,/\, \sum_{q \in Q} d_q & \text{otherwise.} \end{cases} \qquad (3)
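As a rough illustration of the scoring just defined, the following sketch clusters the keypoints voted by correlated candidates in one image with MeanShift and computes the density and consensus scores of Equations 2 and 3. It is a simplified, per-image version of the mechanism; the bandwidth and acceptance threshold are the default values reported later in the implementation details, and the acceptance logic is an assumption for illustration.

```python
import numpy as np
from sklearn.cluster import MeanShift

def density(cluster_pts):
    """Eq. 2: d_q = |q| / std(q), with std over the 2D locations approximating the radius."""
    return len(cluster_pts) / (np.std(cluster_pts) + 1e-8)

def consensus_score(points, bandwidth=24.0):
    """Cluster the correlated keypoints with MeanShift and compute the consensus score (Eq. 3).
    Returns (score, center of the most consensual cluster)."""
    points = np.asarray(points, dtype=float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(points)
    clusters = [points[labels == l] for l in np.unique(labels)]
    clusters = [c for c in clusters if len(c) > 1]     # drop single-point clusters
    if not clusters:
        return 0.0, None
    dens = np.array([density(c) for c in clusters])
    best = int(dens.argmax())
    score = dens[best] if len(clusters) == 1 else dens[best] / dens.sum()
    return score, clusters[best].mean(axis=0)

# toy usage: keypoints voted by correlated match candidates in one image
pts = [(100, 101), (102, 99), (101, 100), (240, 310)]
score, center = consensus_score(pts)
accept = score >= 1.0   # default consensus threshold from the paper
```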
Three examples of the consensus status are illustrated in Figure 4, and the set of keypoints in orange gains the best score among the three. Generality. The consensus mechanism above is agnostic to keypoint descriptors, detectors, and matchers. Therefore, alternatives such as the trained SuperPoint and SuperGlue (Sarlin et al., 2020) can also be ensembled into the framework. 4 EXPERIMENTS In this section, we validate the effectiveness of our method. We first elaborate on the implementation details in Section 4.1. Then, we compare against state-of-the-art representative methods on both matching performance in Section 4.2 and pose estimation in Section 4.3. Last, we perform analysis and ablation studies on our method in Section 4.4. 4.1 IMPLEMENTATION DETAILS In all experiments, we use a 7-layer convolutional neural network (CNN) as the basic descriptor extractor. Each layer of the network is followed by a ReLU activation, except for the last layer, and each of the first 3 activations is followed by a pooling layer. The final feature maps, containing N = 256-dimensional descriptors, are normalized to unit vectors before output. We apply bilinear interpolation to resize the descriptor maps to match the input resolution. By default, we employ m = 8 randomized CNNs for visual feature extraction. Using the saliency map, we select only the keypoints with scores above the median for the subsequent matching process. The ratio test threshold during matching is set to 0.95. For the consensus mechanism, we apply the MeanShift (Pedregosa et al., 2011) algorithm with a bandwidth of 24 pixels to perform keypoint clustering. The threshold of the consensus score is set to 1.0 by default, i.e., we output a match only if there is a single valid and compact cluster. 4.2 KEYPOINT MATCHING Competitors. There are various related approaches built on CNN-based keypoints; we select the following methods as the most relevant and representative competitors: DELF (Noh et al., 2017), SuperPoint (DeTone et al., 2018), D2-Net (Dusmanu et al., 2019), and R2D2 (Revaud et al., 2019). We also include RootSIFT (Arandjelović & Zisserman, 2012) as the representative of traditional handcrafted approaches. Datasets. We conduct experiments on the 7-Scenes (Shotton et al., 2013) and MegaDepth (Li & Snavely, 2018) datasets since they provide depth images that can be used to verify matches densely. The 7-Scenes dataset consists of 7 indoor scenes with RGB-D images and ground truth camera poses. Several sequences are officially divided into training and test sets for each scene. Since noisy poses exist in the training set, we only use the test set for our evaluation. To avoid view selection biases, we uniformly sample each test sequence to form our test pairs. We use the original color images, calibrated depth images, and normal images (computed from the depth images) as our test input to evaluate the generalization ability to different modalities (domains). The MegaDepth dataset contains outdoor scenes with RGB-D images; we use a subset with ground truth camera poses from Tyszkiewicz et al. (2020) and Sun et al. (2021) as the test set. Since the MegaDepth evaluation set is part of the training set of D2-Net, we report the results of D2-Net only on the depth and normal images in our evaluation. Metrics. To quantitatively evaluate the keypoint matching results, we use matching accuracy (i.e., the proportion of correct matches) and the number of correct matches as our metrics.
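A small sketch of the per-CNN matching step configured as above (ratio test at 0.95 followed by a mutual nearest-neighbor check) is given below; the use of cosine similarity on unit-norm descriptors is an assumption for illustration, and each randomized CNN would run this independently to produce one candidate set M_i.

```python
import torch

def match_descriptors(desc_a, desc_b, ratio=0.95):
    """Nearest-neighbor matching with Lowe's ratio test and a mutual check.
    desc_a: (Na, N) and desc_b: (Nb, N) unit-norm descriptors of the selected keypoints."""
    sim = desc_a @ desc_b.t()                      # cosine similarity
    val_ab, idx_ab = sim.topk(2, dim=1)            # top-2 neighbors in image B
    # ratio test on distances: d1 < ratio * d2, with d = sqrt(2 - 2*sim) for unit vectors
    d = torch.sqrt((2 - 2 * val_ab).clamp(min=0))
    pass_ratio = d[:, 0] < ratio * d[:, 1]
    # mutual nearest neighbor check
    idx_ba = sim.argmax(dim=0)                     # best match in A for each B descriptor
    mutual = idx_ba[idx_ab[:, 0]] == torch.arange(desc_a.shape[0])
    keep = pass_ratio & mutual
    return torch.stack([torch.nonzero(keep).squeeze(1),
                        idx_ab[keep, 0]], dim=1)   # (K, 2) index pairs (a_idx, b_idx)

# run once per randomized CNN; each call yields one set M_i of match candidates
```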
On the 7-Scenes dataset, a match is considered correct if the distance between the corresponding 3D points is lower than a threshold (1 cm and 5 cm). On the MegaDepth dataset, due to scale ambiguity, we instead apply thresholds (5 pixels and 20 pixels) on the reprojection error rather than the absolute 3D distance. Results. The results are shown in Table 1. On the 7-Scenes dataset, our method obtains more correct matches while the accuracy is comparable with the competitors. For depth and normal images, all methods suffer performance drops compared with color images, but ours is overall the best, which demonstrates the effectiveness of the randomized CNNs with the consensus mechanism. The MegaDepth dataset is much more challenging than 7-Scenes due to the large viewpoint changes, and we observe that our method obtains reasonable results. Besides the quantitative results, we visualize the keypoint matches of two samples in Figure 5.

Table 1: Results of keypoint matching (%Accuracy / #Matches) on the 7-Scenes and MegaDepth datasets. The best and second-best numbers are labeled red and blue, respectively, in the original paper.

7-Scenes (%Accuracy / #Matches)
| Method     | Color 1 cm | Color 5 cm  | Depth 1 cm | Depth 5 cm | Normal 1 cm | Normal 5 cm |
| RootSIFT   | 13.12 / 11 | 87.50 / 84  | 0.00 / 0   | 12.97 / 3  | 0.00 / 0    | 31.25 / 6   |
| DELF       | 2.86 / 1   | 64.71 / 16  | 0.46 / 1   | 11.96 / 20 | 0.0 / 0     | 13.82 / 11  |
| SuperPoint | 11.19 / 22 | 85.02 / 176 | 2.00 / 2   | 30.71 / 22 | 3.03 / 2    | 36.16 / 31  |
| D2-Net     | 10.66 / 6  | 90.05 / 60  | 0.00 / 0   | 27.27 / 7  | 0.00 / 0    | 18.75 / 1   |
| R2D2       | 12.50 / 12 | 94.29 / 108 | 0.00 / 0   | 45.45 / 5  | 0.00 / 0    | 0.00 / 0    |
| Ours       | 7.75 / 19  | 78.03 / 226 | 2.53 / 2   | 43.89 / 37 | 6.78 / 3    | 71.58 / 37  |

MegaDepth (%Accuracy / #Matches)
| Method     | Color 5 px  | Color 20 px | Depth 5 px | Depth 20 px | Normal 5 px | Normal 20 px |
| RootSIFT   | 79.64 / 87  | 83.49 / 93  | 5.41 / 2   | 11.76 / 4   | 41.88 / 19  | 52.34 / 24   |
| DELF       | 6.35 / 32   | 31.51 / 160 | 0.90 / 4   | 6.28 / 28   | 2.10 / 6    | 14.11 / 43   |
| SuperPoint | 74.55 / 213 | 80.62 / 229 | 7.31 / 9   | 15.85 / 18  | 30.29 / 58  | 44.05 / 84   |
| D2-Net     | -           | -           | 17.71 / 2  | 50.44 / 7   | 53.85 / 2   | 96.30 / 4    |
| R2D2       | 98.29 / 63  | 100.00 / 64 | 30.00 / 1  | 66.67 / 3   | 92.86 / 14  | 100.00 / 15  |
| Ours       | 7.69 / 2    | 21.95 / 6   | 4.80 / 9   | 19.80 / 39  | 19.00 / 19  | 42.15 / 46   |

4.3 POSE ESTIMATION Solvers and metrics. On the 7-Scenes dataset, we apply the PnP (Lepetit et al., 2009) algorithm with RANSAC (Fischler & Bolles, 1981) to solve absolute poses, and we report median translation and rotation errors. On the MegaDepth dataset, due to scale ambiguity, we solve relative poses from essential matrix estimation (Stewenius et al., 2006) with RANSAC, and we report the area under the recall curve (AUC) as in Sarlin et al. (2020). Results. The results are shown in Table 2. On the 7-Scenes dataset, our poses on depth and normal images are marginally worse than SuperPoint, although our matching results are better. For the MegaDepth dataset, our results on depth images are close to the state of the art, while the results on color and normal images lag behind. These evaluations reveal limitations of our method, which are discussed in detail in Section 4.4. 4.4 ANALYSIS Stable random statistics. We first examine the sensitivity of the descriptors stemming from randomized CNNs. To do so, we test a single CNN (without the saliency computation) with 10 different random seeds on the 7-Scenes dataset. The mean matching accuracies for color, depth, and normal images are 66.49%, 23.06%, and 34.31%, with standard deviations of 2.43e-03, 1.59e-03, and 2.29e-03. The mean numbers of correct matches are 462.25, 75.00, and 96.75, with standard deviations of 8.66, 1.83, and 3.10.
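For reference, one way the 3D-distance correctness criterion above could be implemented is sketched below; the availability of ground-truth relative poses, dense depth, and a shared pinhole intrinsic matrix is assumed, and the paper's exact verification procedure may differ.

```python
import numpy as np

def lift_to_3d(kps, depth, K):
    """Back-project 2D keypoints (u, v) to camera-frame 3D points using a depth map
    and intrinsics K (assumed pinhole model)."""
    u, v = kps[:, 0], kps[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def match_accuracy(kps_a, kps_b, depth_a, depth_b, K, T_a2b, thresh=0.05):
    """Fraction of matches whose corresponding 3D points are closer than `thresh`
    meters once keypoints from image A are transformed into image B's camera frame."""
    pa = lift_to_3d(kps_a, depth_a, K)
    pb = lift_to_3d(kps_b, depth_b, K)
    pa_in_b = (T_a2b[:3, :3] @ pa.T).T + T_a2b[:3, 3]
    dist = np.linalg.norm(pa_in_b - pb, axis=1)
    correct = dist < thresh
    return correct.mean(), correct.sum()
```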
1. What is the focus and contribution of the paper on image matching? 2. What are the strengths of the proposed approach, particularly in terms of its ability to handle appearance variations without requiring supervised or self-supervised learning? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or questions regarding the method's ability to handle viewpoint changes and its lack of comparison with important baselines such as DELF (multi-scale)?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The proposed method adopts an ensemble of randomized CNNs, using them as descriptor extractors and detecting keypoints from them. Unlike the state of the art, which requires dedicated supervised or self-supervised training, the method needs no training at all, and the authors claim high robustness against illumination and other appearance variations. Strengths And Weaknesses The idea is quite interesting. Basically, the method seeks matching generalization with an ensemble of purely random parameters. The idea could potentially be applied to data that requires matching but contains no training samples. However, the task the authors target, appearance matching, is a well-studied domain. The claim that competing methods fail on depth or normal images (Table 1, Table 2, Figure 5, ...) simply reflects that those methods (e.g., SuperPoint, D2-Net, R2D2) are not trained on such inputs. It would not be very hard to swap in those inputs and re-train them. Although I understand the comparison is more like unsupervised vs. supervised, there is simply not much extra work needed for the supervised methods. For example, R2D2 is essentially a self-supervised method; its training data can be generated from a set of normal or depth images. The authors need to motivate the idea beyond showing the results. Apart from that, the proposed method does not really handle viewpoint changes (Figure 8), which is essential for image matching. The paper also misses an important baseline such as DELF (multi-scale), which is also very robust to illumination changes. Clarity, Quality, Novelty And Reproducibility The idea is simple and clear. It should be fairly easy to reimplement.
ICLR
Title $\mathrm{SO}(2)$-Equivariant Reinforcement Learning Abstract Equivariant neural networks enforce symmetry within the structure of their convolutional layers, resulting in a substantial improvement in sample efficiency when learning an equivariant or invariant function. Such models are applicable to robotic manipulation learning which can often be formulated as a rotationally symmetric problem. This paper studies equivariant model architectures in the context ofQ-learning and actor-critic reinforcement learning. We identify equivariant and invariant characteristics of the optimal Q-function and the optimal policy and propose equivariant DQN and SAC algorithms that leverage this structure. We present experiments that demonstrate that our equivariant versions of DQN and SAC can be significantly more sample efficient than competing algorithms on an important class of robotic manipulation problems. 1 INTRODUCTION A key challenge in reinforcement learning is to improve sample efficiency – that is to reduce the amount of environmental interactions that an agent must take in order to learn a good policy. This is particularly important in robotics applications where gaining experience potentially means interacting with a physical environment. One way of improving sample efficiency is to create “artificial” experiences through data augmentation. This is typically done in visual state spaces where an affine transformation (e.g., translation or rotation of the image) is applied to the states experienced during a transition (Laskin et al., 2020a; Kostrikov et al., 2020). These approaches implicitly assume that the transition and reward dynamics of the environment are invariant to affine transformations of the visual state. In fact, some approaches explicitly use a contrastive loss term to induce the agent to learn translation-invariant feature representations (Laskin et al., 2020b; Zhan et al., 2020). Recent work in geometric deep learning suggests that it may be possible to learn transformationinvariant policies and value functions in a different way, using equivariant neural networks (Cohen & Welling, 2016a;b). The key idea is to structure the model architecture such that it is constrained only to represent functions with the desired invariance properties. In principle, this approach aim at exactly the same thing as the data augmentation approaches described above – both methods seek to improve sample efficiency by introducing an inductive bias. However, the equivariance approach achieves this more directly by modifying the model architecture rather than by modifying the training data. Since with data augmentation, the model must learn equivariance in addition to the task itself, more training time and greater model capacity are often required. Even then, data augmentation results only in approximate equivariance whereas equivariant neural networks guarantee it and often have stronger generalization as well (Wang et al., 2020b). While equivariant architectures have recently been applied to reinforcement learning (van der Pol et al., 2020a;b; Mondal et al., 2020), this has been done only in toy settings (grid worlds, etc.) where the model is equivariant over small finite groups, and the advantages of this approach over standard methods is less clear. This paper explores the application of equivariant methods to more realistic problems in robotics such as object manipulation. We make several contributions. First, we define and analyze an important class of MDPs that we call group-invariant MDPs. 
Second, we introduce a new variation of the Equivariant DQN (Mondal et al., 2020), and we further introduce equivariant variations of SAC (Haarnoja et al., 2018), and learning from demonstration (LfD). Finally, we show that our methods convincingly outperform recent competitive data augmentation approaches (Laskin et al., 2020a; Kostrikov et al., 2020; Laskin et al., 2020b; Zhan et al., 2020). Our Equivariant SAC method, in particular, outperforms these baselines so dramatically (Figure 7) that it could make reinforcement learning feasible for a much larger class of robotics problems than is currently the case. Supplementary video and code are available at https://pointw.github.io/equi_rl_page/. 2 RELATED WORK Equivariant Learning: Encoding symmetries in the structure of neural networks can improve both generalization and sample efficiency. The idea of equivariant learning is first introduced in GConvolution (Cohen & Welling, 2016a). The extension work proposes an alternative architecture, Steerable CNN (Cohen & Welling, 2016b). Weiler & Cesa (2019) proposes a general framework for implementing E(2)-Steerable CNNs. In the context of reinforcement learning, Mondal et al. (2020) investigates the use of Steerable CNNs in the context of two game environments. van der Pol et al. (2020b) proposes MDP homomorphic networks to encode rotational and reflectional equivariance of an MDP but only evaluates their method in a small set of tasks. In robotic manipulation, Wang et al. (2021) learns equivariant Q-functions but is limited in the spatial action space. In contrast to prior work, this paper proposes an Equivariant SAC algorithm, an equivariant LfD algorithm, and a novel variation of Equivariant DQN (Mondal et al., 2020) focusing on visual motor control problems. Data Augmentation: Another popular method for improving sample efficiency is data augmentation. Recent works demonstrate that the use of simple data augmentation methods like random crop or random translate can significantly improve the performance of reinforcement learning (Laskin et al., 2020a; Kostrikov et al., 2020). Data augmentation is often used for generating additional samples (Kalashnikov et al., 2018; Lin et al., 2020; Zeng et al., 2020) in robotic manipulation. However, data augmentation methods are often less sample efficient than equivariant networks because the latter injects an inductive bias to the network architecture. Contrastive Learning: Data augmentation is also applied with contrastive learning (Oord et al., 2018) to improve feature extraction. Laskin et al. (2020b) show significant sample-efficiency improvement by adding an auxiliary contrastive learning term using random crop augmentation. Zhan et al. (2020) use a similar method in the context of robotic manipulation. However, contrastive learning is limited to learning an invariant feature encoder and is not capable of learning equivariant functions. Close-Loop Robotic Control: There are two typical action space definitions when learning policies that control the end-effector of a robot arm: the spatial action space that controls the target pose of the end-effector (Zeng et al., 2018b;a; Satish et al., 2019; Wang et al., 2020a), or the close-loop action space that controls the displacement of the end-effector. The close-loop action space is widely used for learning grasping policies (Kalashnikov et al., 2018; Quillen et al., 2018; Breyer et al., 2019; James et al., 2019). 
Recently, some works also learn more complex policies than grasping (Viereck et al., 2020; Kilinc et al., 2019; Cabi et al., 2020; Zhan et al., 2020). This work extends prior works in the close-loop action space by using equivariant learning to improve the sample efficiency. 3 BACKGROUND SO(2) and Cn: We will reason about rotation in terms of the group SO(2) and its cyclic subgroup Cn ≤ SO(2). SO(2) is the group of continuous planar rotations {Rotθ : 0 ≤ θ < 2π}. Cn is the discrete subgroup Cn = {Rotθ : θ ∈ { 2πin |0 ≤ i < n}} of rotations by multiples 2π n . Cn actions: A groupGmay be equipped with an action on a setX by specifying a map · : G×X → X satisfying g1 · (g2 · x) = (g1g2) · x and 1 · x = x for all g1, g2 ∈ G, x ∈ X . Note that closure, gx ∈ X , and invertibility, g−1gx = x, follow immediately from the definition. We are interested in actions of Cn which formalize how vectors or feature maps transform under rotation. The group Cn acts in three ways that concern us (for a more comprehensive background, see Bronstein et al. (2021)): 1. R through the trivial representation ρ0. Let g ∈ Cn and x ∈ R. Then ρ0(g)x = x. For example, the trivial representation describes how pixel color/depth values change when an image is rotated, i.e. they do not change (Figure 1 left). 2. R2 through the standard representation ρ1. Let g ∈ Cn and v ∈ R2. Then ρ1(g)v =( cos g − sin g sin g cos g ) v. This describes how elements of a vector field change when rotated (Figure 1 middle). 3. Rn through the regular representation ρreg. Let g = rm ∈ Cn = {1, r, r2, . . . , rn−1} and (x1, x2, . . . , xn) ∈ Rn. Then ρreg(g)x = (xn−m+1, . . . , xn, x1, x2, . . . , xn−m) cyclically permutes the coordinates of Rn (Figure 1 right). Feature maps as functions: In deep learning, images and feature maps are typically expressed as tensors. However, it will be convenient here to sometimes express these as functions. Specifically, we may write an h× w one-channel image F ∈ R1×h×w as a function F : R2 → R where F(x, y) describes the intensity at pixel x, y. Similarly, an m-channel tensor F ∈ Rm×h×w may be written as F : R2 → Rm. We refer to the domain of this function as its “spatial dimensions”. Cn actions on vectors and feature maps: Cn acts on vectors and feature maps differently depending upon their semantics. We formalize these different ways of acting as follows. Let F : R2 → Rm be an m-channel feature map and let V ∈ Rm×1×1 = Rm be a vector represented as a special case of a feature map with 1× 1 spatial dimensions. Then g is defined to act on F by (gF)(x, y) = ρj(g)F(ρ1(g)−1(x, y)). (1) For a vector V (considered to be at (x, y) = (0, 0)), this becomes: gV = ρj(g)V. (2) In the above, ρ1(g) rotates pixel location and ρj(g) transforms the pixel feature vector using the trivial representation (ρj = ρ0), the standard representation (ρj = ρ1), the regular representation (ρj = ρreg), or some combination thereof. Equivariant convolutional layer: A Cn-equivariant layer is a function h whose output is constrained to transform in a defined way when the input feature map is transformed by a group action. Consider an equivariant layer h with an input Fin : R2 → R|ρin| and an output Fout : R2 → R|ρout| , where ρin and ρout denote the group representations associated with Fin and Fout, respectively. When the input is transformed, this layer is constrained to output a transformed version of the same output feature map: h(gFin) = g(h(Fin)) = gFout. 
(3) where g ∈ Cn acts on Fin or Fout through Equation 1 or Equation 2, i.e., this constraint equation can be applied to arbitrary feature maps F or vectors V . A linear convolutional layer h satisfies Equation 3 with respect to the group Cn if the convolutional kernel K : R2 → R|ρout|×|ρin| has the following form (Cohen et al., 2018): K(ρ1(g)v) = ρ −1 out(g)K(v)ρin(g). (4) Since the composition of equivariant maps is equivariant, a fully convolutional equivariant network can be constructed by stacking equivariant convolutional layers that satisfy the constraint of Equation 3 and together with equivariant non-linearities (Weiler & Cesa, 2019). 4 PROBLEM STATEMENT 4.1 GROUP-INVARIANT MDPS In a group-invariant MDP, the transition and reward functions are invariant to group elements g ∈ G acting on the state and action space. For state s ∈ S, action a ∈ A, and g ∈ G, let gs ∈ S denote the action of g on s and ga ∈ A denote the action of g on a. Definition 4.1 (G-invariant MDP). A G-invariant MDPMG = (S,A, T,R,G) is an MDPM = (S,A, T,R) that satisfies the following conditions: 1. Reward Invariance: The reward function is invariant to the action of the group element g ∈ G, R(s, a) = R(gs, ga). 2. Transition Invariance: The transition function is invariant to the action of the group element g ∈ G, T (s, a, s′) = T (gs, ga, gs′). A key feature of a G-invariant MDP is that its optimal solution is also G-invariant (proof in Appendix A): Proposition 4.1. LetMG be a group-invariant MDP. Then its optimal Q-function is group invariant, Q∗(s, a) = Q∗(gs, ga), and its optimal policy is group-equivariant, π∗(gs) = gπ∗(s), for any g ∈ G. It should be noted that the G-invariant MDP of Definition 4.1 is in fact a special case of an MDP homomorphism (Ravindran & Barto, 2001; 2004), a broad class of MDP abstractions. MDP homomorphisms are important because optimal solutions to the abstract problem can be “lifted” to produce optimal solutions to the original MDP (Ravindran & Barto, 2004). As such, Proposition 4.1 follows directly from those results. 4.2 SO(2)-INVARIANT MDPS IN VISUAL STATE SPACES In the remainder of this paper, we focus exclusively on an important class of SO(2)-invariant MDPs where the state is encoded as an image. We approximate SO(2) by its subgroup Cn. State space: State is expressed as an m-channel image, Fs : R2 → Rm. The group operator g ∈ Cn acts on this image as defined in Equation 1 where we set ρj = ρ0: gFs(x, y) = ρ0(g)Fs(ρ1(g)−1(x, y)), i.e., by rotating the pixels but leaving the pixel feature vector unchanged. Action space: We assume we are given a factored action space Ainv ×Aequiv = A ⊆ Rk embedded in a k-dimensional Euclidean space where Ainv ⊆ Rkinv and Aequiv ⊆ Rk−kinv . We require the variables in Ainv to be invariant with the rotation operator and the variables in Aequiv to rotate with the representation ρequiv = ρ1. Therefore, the rotation operator g ∈ Cn acts on a ∈ A via ga = (ρequiv(g)aequiv, ainv) where ainv ∈ Ainv and aequiv ∈ Aequiv. Application to robotic manipulation: We express the state as a depth image centered on the gripper position where depth is defined relative to the gripper. The orientation of this image is relative to the base reference frame – not the gripper frame. We require the fingers of the gripper and objects grasped by the gripper to be visible in the image. Figure 2 shows an illustration. 
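The representations and group actions above can be made concrete with a short NumPy sketch; the array conventions (e.g., using exact 90-degree array rotations for C4) and the shapes chosen are illustrative assumptions.

```python
import numpy as np

def rho1(theta):
    """Standard representation rho_1: the 2x2 planar rotation matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def rho_reg(m, n):
    """Regular representation of the m-th element of C_n as a cyclic permutation matrix."""
    return np.roll(np.eye(n), m, axis=0)

def act_on_state(img, k):
    """g acts on an image state by rotating pixel locations (rho_1 on the domain) while
    leaving pixel values unchanged (rho_0 on the channels). For g in C_4 with k quarter
    turns this is an exact array rotation."""
    return np.rot90(img, k)

def act_on_action(a_equiv, a_inv, theta):
    """g acts on the factored action: rotate the equivariant part with rho_1 and leave
    the invariant part unchanged, i.e., ga = (rho_1(g) a_equiv, a_inv)."""
    return rho1(theta) @ np.asarray(a_equiv), np.asarray(a_inv)

# example: transform a state/action pair by g = Rot_{pi/2} (an element of C_4)
s = np.random.rand(128, 128)
gs = act_on_state(s, 1)
ga_equiv, ga_inv = act_on_action([0.02, 0.0], [1.0, -0.02, 0.0], np.pi / 2)
assert np.allclose(rho_reg(2, 8) @ rho_reg(3, 8), rho_reg(5, 8))  # rho_reg is a homomorphism
```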
The action is a tuple, a = (aλ, axy, az, aθ) ∈ A ⊂ R5, where aλ ∈ Aλ denotes the commanded gripper aperture, axy ∈ Axy denotes the commanded change in gripper xy position, az ∈ Az denotes the commanded change in gripper height, and aθ ∈ Aθ denotes the commanded change in gripper orientation. Here, the xy action is equivariant with g ∈ Cn,Aequiv = Axy , and the rest of the action variables are invariant,Ainv = Aλ×Az×Aθ. Notice that the transition dynamics are Cn-invariant (i.e. T (s, a, s′) = T (gs, ga, gs′)) because the Newtonian physics of the interaction are invariant to the choice of reference frame. If we constrain the reward function to be Cn-invariant as well, then the resulting MDP is Cn-invariant. 5 APPROACH 5.1 EQUIVARIANT DQN In DQN, we assume we have a discrete action space, and we learn the parameters of a Q-network that maps from the state onto action values. Given a G-invariant MDP, Proposition 4.1 tells us that the optimal Q-function is G-invariant. Therefore, we encode the Q-function using an equivariant neural network that is constrained to represent only G-invariant Q-functions. First, in order to use DQN, we need to discretize the action space. Let Aequiv ⊂ Aequiv and Ainv ⊂ Ainv be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function Fa : Aequiv → RAinv from the equivariant action variables in Aequiv to the Q values of the invariant action variables in Ainv. For example, in the robotic manipulation domain described Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1, and we define Aequiv and Ainv accordingly. We now encode the Q network q as a stack of equivariant layers that each encode the equivariant constraint of Equation 3. Since the composition of equivariant layers is equivariant, q satisfies: q(gFs) = g(q(Fs)) = gFa, (5) where we have substituted Fin = Fs and Fout = Fa. In the above, the rotation operator g ∈ Cn is applied using Equation 1 as gFa(axy) = ρ0(g)Fa(ρ1(g)−1(axy)). Figure 3 illustrates this equivariance constraint for the robotic manipulation example with |Aequiv| = |Axy| = 9. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in Axy are also rotated similarly. The detailed network architecture is shown in Appendix D.1. Our architecture is different from that in Mondal et al. (2020) in that we associate the action of g on Aequiv and Ainv with the group action on the spatial dimension and the channel dimension of a feature map Fa, which is more efficient than learning such mapping using FC layers. 5.2 EQUIVARIANT SAC In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network Π (the actor) and an action-value network Q (the critic) (Haarnoja et al., 2018). The critic Q : S ×A→ R approximates Q values in the typical way. However, the actor Π : S → A×Aσ estimates both the mean and standard deviation of action for a given state. Here, we define Aσ = Rk to be the domain of the standard deviation variables over the k-dimensional action space defined in Section 4.2. Since Proposition 4.1 tells us that the optimal Q is invariant and the optimal policy is equivariant, we must model Q as an invariant network and Π as an equivariant network. Policy network: First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function Fs. However, we must now express the action as a vector over Ā = A×Aσ . 
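Before continuing with the SAC policy and critic, the DQN constraint of Equation 5 can be illustrated numerically: a toy Q-function made C4-equivariant by group averaging satisfies exactly the property that rotating the input state rotates the 3×3 Q-map over Axy while the invariant channel dimension stays fixed. The sizes and the group-averaging construction below are placeholder assumptions, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(18 * 3 * 3, 64 * 64))   # a toy linear map from states to Q-maps

def f(state):
    """An arbitrary (non-equivariant) toy Q-function: state (64, 64) -> Q-map (18, 3, 3)."""
    return (W @ state.ravel()).reshape(18, 3, 3)

def q_net(state):
    """C_4-equivariant Q-function obtained by group-averaging f:
    q(s) = 1/4 * sum_k rot^{-k}( f( rot^k(s) ) ), with rotations acting on the 3x3 A_xy map."""
    out = np.zeros((18, 3, 3))
    for k in range(4):
        out += np.rot90(f(np.rot90(state, k)), -k, axes=(1, 2))
    return out / 4

# Eq. 5: rotating the input state rotates the spatial Q-map over A_xy, while the
# channel dimension (the invariant action variables) is left untouched.
s = rng.normal(size=(64, 64))
lhs = q_net(np.rot90(s, 1))
rhs = np.rot90(q_net(s), 1, axes=(1, 2))
assert np.allclose(lhs, rhs)
```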
Factoring A into its equivariant and invariant components, we have Ā = Aequiv × Ainv × Aσ . In order to identify the equivariance relation for Ā, we must define how the group operator g ∈ G acts on aσ ∈ Aσ . Here, we make the simplifying assumption that aσ is invariant to the group operator. This choice makes sense in robotics domains where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element g ∈ G acts on ā ∈ Ā via: gā = g(aequiv, ainv, aσ) = (ρequiv(g)aequiv, ainv, aσ). (6) We can now define the actor network π to be a mapping Fs 7→ ā (Figure 4 top) that satisfies the following equivariance constraint (Equation 3): π(gFs) = g(π(Fs)) = gā. (7) Critic network: The critic network takes both state and action as input and maps onto a real value. We define two equivariant networks: a state encoder e and aQ network q. The equivariant state encoder, e, maps the input state Fs onto a regular representation s̄ ∈ (Rn)α where each of n group elements is associated with an α-vector. Since s̄ has a regular representation, we have gs̄ = ρreg(g)s̄. Writing the equivariance constraint of Equation 3 for e, we have that e must satisfy e(gFs) = ge(Fs) = gs̄. The output state representation s̄ is concatenated with the action a ∈ A, producing w = (s̄, a). The action of the group operator is now gw = (gs̄, ga) where ga = (ρequiv(g)aequiv, ainv). Finally, the q network maps from w onto R, a real-valued estimate of theQ value for w. Based on proposition 4.1, this network must be invariant to the group action: q(gw) = q(w). All together, the critic satisfies the following invariance equation: q(e(gFs), ga) = q(e(Fs), a). (8) This network is illustrated at the bottom of Figure 4. For a robotic manipulation domain in Section 4.2, we have Aequiv = Axy and Ainv = Aλ ×Az ×Aθ and ρequiv = ρ1. The detailed network architecture is in Appendix D.2. Preventing the critic from becoming overconstrained: In the model architecture above, the hidden layer of q is represented using a vector in the regular representation and the output of q is encoded using the trivial representation. However, Schur’s Lemma (see e.g. Dummit & Foote (1991)) implies there only exists a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., x = a ∑ i vi where x is a trivial representation, a is a constant, and v is a regular representation). This implies that a linear mapping f : Rn × Rn → R from two regular representations to a trivial representation that satisfies f(gv, gw) = f(v, w) for all g ∈ G will also satisfy f(g1v, w) = f(v, w) and f(v, g2w) = f(v, w) for all g1, g2 ∈ G. (See details in Appendix B.) In principle, this could overconstrain the last layer of q to encode additional undesired symmetries. To avoid this problem we use a non-linear equivariant mapping, maxpool, over the group space to transform the regular representation to the trivial representation. 5.3 EQUIVARIANT SACFD Many of the problems we want to address cannot be solved without guiding the agent’s exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner. 
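Returning to the critic readout discussed above, the following sketch shows the max-pool-over-group step that maps a regular representation to an invariant output without overconstraining the layer; the sizes are illustrative and the full equivariant critic is omitted.

```python
import torch
import torch.nn as nn

class InvariantCriticHead(nn.Module):
    """Sketch of the critic readout: features organized as alpha copies of the regular
    representation of C_n are reduced to an invariant value by taking a max over the
    n group channels (a nonlinear map from regular to trivial), then a linear layer.
    Sizes are illustrative."""
    def __init__(self, n=8, alpha=64):
        super().__init__()
        self.n, self.alpha = n, alpha
        self.out = nn.Linear(alpha, 1)

    def forward(self, w):                       # w: (B, alpha * n) regular-rep features
        w = w.view(-1, self.alpha, self.n)
        pooled = w.amax(dim=2)                  # max over the group dimension
        return self.out(pooled)                 # invariant Q-value estimate

# cyclically permuting the n group channels leaves the output unchanged:
head = InvariantCriticHead()
w = torch.randn(4, 64 * 8)
w_perm = torch.roll(w.view(4, 64, 8), shifts=3, dims=2).reshape(4, -1)
assert torch.allclose(head(w), head(w_perm), atol=1e-6)
```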
Second, we introduce the following L2 term into the SAC actor's loss function:
L_{\text{actor}} = L_{\text{SAC}} + \mathbb{1}_e \left[ \tfrac{1}{2} \left( (a \sim \pi(s)) - a_e \right)^2 \right], \qquad (9)
where L_{\text{SAC}} is the actor's loss term in standard SAC, \mathbb{1}_e = 1 if the sampled transition is an expert demonstration and 0 otherwise, a ∼ π(s) is an action sampled from the output Gaussian distribution of π(s), and a_e is the expert action. Since both the sampled action a ∼ π(s) and the expert action a_e transform equivalently, L_{\text{actor}} is compatible with the equivariance we introduce in Section 5.2. We refer to this method as SACfD (SAC from Demonstration). 6 EXPERIMENTS We evaluate Equivariant DQN and Equivariant SAC in the manipulation tasks shown in Figure 5. These tasks can be formulated as SO(2)-invariant MDPs. All environments have sparse rewards (+1 when reaching the goal and 0 otherwise). See environment details in Appendix C. 6.1 EQUIVARIANT DQN We evaluate Equivariant DQN in the Block Pulling, Object Picking, and Drawer Opening tasks for the group C4. The discrete action space is A_λ = {OPEN, CLOSE}; A_xy = {(x, y) | x, y ∈ {−0.02m, 0m, 0.02m}}; A_z = {−0.02m, 0m, 0.02m}; A_θ = {−π/16, 0, π/16}. Note that the definition of A_xy and g ∈ C4 satisfies the closure requirement of the action space in the sense that ∀ a_xy ∈ A_xy, ∀ g ∈ C4, ρ1(g) a_xy ∈ A_xy. We compare Equivariant DQN (Equi DQN) against the following baselines: 1) CNN DQN: DQN with a conventional CNN instead of an equivariant network, where the conventional CNN has a similar number of trainable parameters (3.9M) as the equivariant network (2.6M). 2) RAD Crop DQN (Laskin et al., 2020a): same network architecture as CNN DQN; at each training step, a random-crop data augmentation is applied to each transition in the minibatch. 3) DrQ Shift DQN (Kostrikov et al., 2020): same network architecture as CNN DQN; at each training step, both the Q-targets and the TD losses are calculated by averaging over two random-shift augmented transitions. 4) CURL DQN (Laskin et al., 2020b): similar architecture as CNN DQN with an extra contrastive loss term that learns an invariant encoder from random-crop augmentations. See the baseline details in Appendix E. At the beginning of each training run, we pre-populate the replay buffer with 100 episodes of expert demonstrations. Figure 6 compares the learning curves of the various methods. Equivariant DQN learns faster and converges to a higher discounted reward in all three environments.
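As a reference for the SACfD objective introduced above (Equation 9), a minimal sketch of the actor loss is given below; the tensor shapes and the masking convention for non-expert transitions are assumptions for illustration.

```python
import torch

def sacfd_actor_loss(sac_actor_loss, sampled_action, expert_action, is_expert):
    """Eq. 9: add a behavior-cloning L2 term on transitions that came from expert
    demonstrations. `sampled_action` is a ~ pi(s) (a reparameterized sample),
    `is_expert` is a 0/1 mask, and all tensors are batched."""
    bc = 0.5 * ((sampled_action - expert_action) ** 2).sum(dim=1)
    return (sac_actor_loss + is_expert * bc).mean()

# toy usage with placeholder tensors
loss = sacfd_actor_loss(
    sac_actor_loss=torch.zeros(32),
    sampled_action=torch.randn(32, 5),
    expert_action=torch.randn(32, 5),
    is_expert=torch.randint(0, 2, (32,)).float(),
)
```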
All methods use a SO(2) data augmentation buffer, where every time a new transition is added, we generate 4 more augmented transitions by applying random continuous rotations to both the image and the action (this data augmentation in the buffer is in addition to the data augmentation that is performed in the RAD DrQ, and FERM baselines). Prior to each training run, we pre-load the replay buffer with 20 episodes of expert demonstration. Figure 7 shows the comparison among the various methods. Notice that Equivariant SAC outperforms the other methods significantly. Without the equivariant approach, Object Picking and Drawer Opening appear to be infeasible for the baseline methods. In Block Pulling, FERM is the only other method able to solve the task. 6.3 EQUIVARIANT SACFD We want to explore our equivariant methods in the context of more challenging tasks such as those in the bottom row of Figure 5. However, since these tasks are too difficult to solve without some kind of guided exploration, we augment the Equivariant SAC as well as all the baselines in two ways: 1) we use SACfD as described in Section 5.3; 2) we use Prioritized Experience Replay (Schaul et al., 2015) rather than standard replay buffer. As in Section 6.2, we use the SO(2) data augmentation in the buffer that generates 4 extra SO(2)-augmented transitions whenever a new transition is added. Figure 8 shows the results. First, note that our Equivariant SACfD does best on all four tasks, followed by FERM, and other baselines. Second, notice that only the equivariant method can solve the last three (most challenging tasks). This suggests that equivariant models are important not only for unstructured reinforcement learning, but also for learning from demonstration. Additional results for Block Pulling and Object Picking environments are shown in Appendix G. 6.4 COMPARING WITH LEARNING EQUIVARIANCE USING AUGMENTATION In the previous experiments, we compare against the data augmentation baselines using the same data augmentation operators that the authors proposed (random crop in RAD (Laskin et al., 2020a) and random shift in DrQ (Kostrikov et al., 2020)). However, those two methods can also be modified to learn SO(2) equivariance using SO(2) data augmentation. Here, we explore this idea as an alternative to our equivariant model. Specifically, instead of augmenting on the state as in Laskin et al. (2020a) and Kostrikov et al. (2020) using only translation, we apply the SO(2) augmentation in both the state and the action. Since the RAD and DrQ baselines in this section are already running SO(2) augmentations themselves, we disable the SO(2) buffer augmentation for the online transitions in those baselines. (See the result of RAD and DrQ with the SO(2) data augmentation buffer in Appendix H.4). We compare the resulting version of RAD (RAD SO(2) SACfD) and DrQ (DrQ SO(2) SACfD) with our Equivariant SACfD in Figure 9. Our method outperforms both RAD and DrQ equipped with SO(2) data augmentation. Additional results for Block Pulling and Object Picking are shown in Appendix G. 6.5 GENERALIZATION EXPERIMENT This experiment evaluates the ability for the equivariant model to generalize over the equivariance group. We use a similar experimental setting as in Section 6.3. However, now the training environment is always initialized with a fixed orientation rather than a random orientation. 
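The SO(2) buffer augmentation described above can be sketched as follows; the interpolation routine and the sign convention linking image and action rotations are assumptions that must be made consistent with the environment, and the action layout mirrors the tuple defined in Section 4.2.

```python
import numpy as np
from scipy.ndimage import rotate

def so2_augment(obs, action, next_obs, n_aug=4):
    """Generate n_aug extra transitions by applying a random continuous rotation to the
    image observations and to the equivariant xy action component. Assumes obs/next_obs
    are (H, W) depth images and action = (lambda, dx, dy, dz, dtheta)."""
    out = []
    for _ in range(n_aug):
        theta = np.random.uniform(0, 2 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        dx, dy = action[1], action[2]
        a_rot = (action[0], c * dx - s * dy, s * dx + c * dy, action[3], action[4])
        # image rotation direction must match the action rotation convention
        o_rot = rotate(obs, np.degrees(theta), reshape=False, order=1)
        no_rot = rotate(next_obs, np.degrees(theta), reshape=False, order=1)
        out.append((o_rot, a_rot, no_rot))
    return out
```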
For example, in Block Pulling, the two blocks are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. In the evaluation environment, however, these objects are initialized with random orientations. To succeed, the agent needs to generalize over varied orientations while being trained with a fixed orientation. To prevent the agent from generalizing via augmentation, we disable the SO(2) augmentation in the buffer. As shown in Figure 10, Equivariant SACfD generalizes better than the baselines. Even though the equivariant network is presented with only one orientation during training, it successfully generalizes over random orientations whereas none of the baselines can. 7 DISCUSSION This paper defines a class of group-invariant MDPs and identifies the invariance and equivariance characteristics of their optimal solutions. This paper further proposes Equivariant SAC and a new variation of Equivariant DQN for continuous and discrete action spaces, respectively. We show experimentally in the robotic manipulation domains that our proposal substantially surpasses the performance of competitive baselines. A key limitation of this work is that our definition of G-invariant MDPs requires the MDP to have an invariant reward function and invariant transition function. Though such restrictions are often applicable in robotics, they limit the potential of the proposed methods in other domains like some ATARI games. Furthermore, if the observation is from a non-top-down perspective, or there are non-equivariant structures in the observation (e.g., the robot arm), the invariance assumptions of a G-invariant MDP will not be directly satisfied. ACKNOWLEDGMENTS This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, and NASA 80NSSC19K1474. R. Walters is supported by the Roux Institute and the Harold Alfond Foundation and NSF grants 2107256 and 2134178. A PROOF OF PROPOSITION 4.1 The proof in this section follows Wang et al. (2021). Note that the definition of the group action · : G × X → X implies that elements g ∈ G act by bijections on X, since the action of g−1 gives a two-sided inverse for the action of g. That is, g permutes the elements of X. Proof of Proposition 4.1. For g ∈ G, we will first show that the optimal Q-function is G-invariant, i.e., Q∗(s, a) = Q∗(gs, ga), then show that the optimal policy is G-equivariant, i.e., π∗(gs) = gπ∗(s). (1) Q∗(s, a) = Q∗(gs, ga): The Bellman optimality equations for Q∗(s, a) and Q∗(gs, ga) are, respectively:
$$Q^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(s', a'), \qquad (10)$$
and
$$Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, s')\, Q^*(s', a'). \qquad (11)$$
Since g ∈ G merely permutes the elements of S, we can re-index the integral using s̄′ = gs′:
$$Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{\bar{a}' \in gA} \int_{\bar{s}' \in gS} T(gs, ga, \bar{s}')\, Q^*(\bar{s}', \bar{a}') \qquad (12)$$
$$= R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, gs')\, Q^*(gs', ga'). \qquad (13)$$
Using the Reward Invariance and the Transition Invariance in Definition 4.1, this can be written:
$$Q^*(gs, ga) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(gs', ga'). \qquad (14)$$
Now, define a new function Q̄∗ such that ∀(s, a) ∈ S × A, Q̄∗(s, a) = Q∗(gs, ga), and substitute into Eq. 14, resulting in:
$$\bar{Q}^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, \bar{Q}^*(s', a'). \qquad (15)$$
Notice that Eq. 15 and Eq. 10 are the same Bellman equation. Since solutions to the Bellman equation are unique, we have that ∀(s, a) ∈ S × A, Q∗(s, a) = Q̄∗(s, a) = Q∗(gs, ga). 
(2) π∗(gs) = gπ∗(s): The optimal policies π∗(s) and π∗(gs) can be written in terms of the optimal Q-function, Q∗, as:
$$\pi^*(s) = \arg\max_{a \in A} Q^*(s, a) \qquad (16)$$
and
$$\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(gs, \bar{a}). \qquad (17)$$
Using the invariance property of Q∗, we can substitute Q∗(gs, ā) with Q∗(s, g−1ā) in Equation 17:
$$\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(s, g^{-1}\bar{a}). \qquad (18)$$
Letting ā = ga, Equation 18 can be written as:
$$\pi^*(gs) = g\left[\arg\max_{a \in A} Q^*(s, g^{-1}ga)\right]. \qquad (19)$$
Cancelling g−1 and g and substituting Equation 16, we have
$$\pi^*(gs) = g\pi^*(s). \qquad (20)$$
B EQUIVARIANCE OVERCONSTRAINT Proposition B.1. Let f : Vreg ⊕ Vreg → Vtriv be a linear Cn-equivariant function. Then $f(v, w) = a\sum_i v_i + b\sum_i w_i$. Proof. By Weyl decomposability (Hall, 2003), Vreg decomposes into irreducible representations for Cn, each with multiplicity determined by its dimension. Among these is the trivial representation with multiplicity 1. By Schur's lemma (Dummit & Foote, 1991), the mapping Vreg ⊕ Vreg → Vtriv must factor through the trivial representation embedded in Vreg. The projection onto the trivial representation is given by $v \mapsto a\sum_i v_i$. The result follows by linearity. As a corollary, we find that Cn-equivariant maps Vreg ⊕ Vreg → Vtriv are actually Cn × Cn-equivariant. Let (g1, g2) ∈ Cn × Cn; then, applying the Proposition, $f(g_1 v, g_2 w) = a\sum_i (g_1 v)_i + b\sum_i (g_2 w)_i = a\sum_i v_i + b\sum_i w_i = f(v, w)$. C ENVIRONMENT DETAILS In all environments, the environment reset is conducted by randomly initializing the objects with random positions and orientations inside the workspace. The arm is always initialized at the same configuration. The workspace has a size of 0.4m × 0.4m × 0.24m. All environments have a sparse reward, i.e., the agent acquires a +1 reward when reaching the goal state, and 0 otherwise. In the PyBullet simulator, the robot joints have enough compliance to allow the gripper to apply force on the block in the Corner Picking task. We augment the state image with an additional binary channel (i.e., either all pixels are 1 or all pixels are 0) indicating if the gripper is holding an object. Note that this additional channel is invariant to rotations (because all pixels have the same value), so it won't break the proposed equivariance properties. The Block Pulling task requires the robot to pull one block to make contact with the other block. The Object Picking task requires the robot to pick up an object randomly sampled from a set of 11 objects (Figure 11). The Drawer Opening task requires the robot to pull open a drawer. The Block Stacking task requires the robot to stack one block on top of another. The House Building task requires the robot to stack a triangle roof on top of a block. The Corner Picking task requires the robot to slide the block from the corner and then pick it up. D NETWORK ARCHITECTURE Our equivariant models are implemented using the E2CNN (Weiler & Cesa, 2019) library with PyTorch (Paszke et al., 2017). D.1 EQUIVARIANT DQN ARCHITECTURE In the Equivariant DQN, we use a 7-layer Steerable CNN defined over the group C4 (Figure 12a). The input Fs is encoded as a 2-channel ρ0 feature map, and the output is an 18-channel 3 × 3 ρ0 feature map where the channel dimension encodes the invariant actions Ainv and the spatial dimension encodes Axy. D.2 EQUIVARIANT SAC ARCHITECTURE In the Equivariant SAC, there are two separate networks, both of which are Steerable CNNs defined over the group C8. The actor π (Figure 12b, top) is an 8-layer network that takes in a 2-channel ρ0 feature map (Fs) and outputs a mixed-representation-type 1 × 1 feature map (ā) consisting of 1 ρ1 feature for axy and 8 ρ0 features for ainv and aσ. 
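To illustrate how such a mixed-representation actor output can be declared with the E2CNN library mentioned above, here is a minimal sketch. The tiny 9 × 9 input and the layer sizes are placeholders, not the paper's actual 8-layer architecture.

import torch
from e2cnn import gspaces, nn as enn

act = gspaces.Rot2dOnR2(N=8)  # the C8 group used by Equivariant SAC

# 2 trivial (rho_0) input channels: depth image + gripper-state channel
in_type  = enn.FieldType(act, 2 * [act.trivial_repr])
hid_type = enn.FieldType(act, 16 * [act.regular_repr])
# mixed 1x1 output: one rho_1 field for a_xy, eight rho_0 fields for a_inv and a_sigma
out_type = enn.FieldType(act, [act.irrep(1)] + 8 * [act.trivial_repr])

actor_head = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3),   # 9x9 -> 7x7
    enn.ReLU(hid_type),
    enn.R2Conv(hid_type, hid_type, kernel_size=3),  # 7x7 -> 5x5
    enn.ReLU(hid_type),
    enn.R2Conv(hid_type, out_type, kernel_size=5),  # 5x5 -> 1x1
)

x = enn.GeometricTensor(torch.randn(1, 2, 9, 9), in_type)
a_bar = actor_head(x).tensor.reshape(1, -1)  # 10 values: (dx, dy), a_inv, a_sigma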
The critic (Figure 12b, bottom) is a 9-layer network that takes in both Fs, as a 2-channel ρ0 feature map, and a, as a 1 × 1 mixed-representation feature map consisting of 1 ρ1 feature for axy and 3 ρ0 features for ainv. The upper path e encodes Fs into a 64-channel regular-representation feature map s̄ with 1 × 1 spatial dimensions, then concatenates it with a. Two separate Q-value paths q take in the concatenated feature map and generate two Q-estimates in the form of a 1 × 1 ρ0 feature. The non-linear maxpool layer is used for transforming regular representations into trivial representations to prevent the equivariance overconstraint (Section 5.2). Note that there are two Q outputs based on the requirement of the SAC algorithm. E BASELINE DETAILS Figure 13 shows the baseline network architectures for DQN and SAC. The RAD (Laskin et al., 2020a) Crop baselines, CURL (Laskin et al., 2020b) baselines, and FERM (Zhan et al., 2020) baselines use random crop for data augmentation. The random crop crops a 142 × 142 state image to the size of 128 × 128. The contrastive encoder of the CURL baselines has a size of 128 as in Laskin et al. (2020b), and that of the FERM baselines has a size of 50 as in Zhan et al. (2020). The FERM baseline's contrastive encoder is pretrained for 1.6k steps using the expert data as in Zhan et al. (2020). The DrQ (Kostrikov et al., 2020) Shift baselines use a random shift of ±4 pixels for data augmentation as in the original work. In all DrQ baselines, the number of augmentations for calculating the target K and the number of augmentations for calculating the loss M are both 2, as in Kostrikov et al. (2020). F TRAINING DETAILS We implement our experimental environments in the PyBullet simulator (Coumans & Bai, 2016). The workspace's size is 0.4m × 0.4m × 0.24m. The pixel size of the visual state I is 128 × 128 (except for the RAD Crop baselines, CURL baselines, and FERM baselines, where I's size is 142 × 142 and will be cropped to 128 × 128). I's FOV is 0.6m × 0.6m. During training, we use 5 parallel environments. We implement all training in PyTorch (Paszke et al., 2017). Both DQN and SAC use a soft target update with τ = 10^−2. In the DQN experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^−4. We use the Huber loss (Huber, 1964) for calculating the TD loss. We use a discount factor γ = 0.95. The batch size is 32. The buffer has a capacity of 100,000 transitions. In the SAC (and SACfD) experiments, we use the Adam optimizer with a learning rate of 10^−3. The entropy temperature α is initialized at 10^−2. The target entropy is -5. The discount factor is γ = 0.99. The batch size is 64. The buffer has a capacity of 100,000 transitions. SACfD uses the prioritized replay buffer (Schaul et al., 2015) with a prioritized replay exponent of 0.6 and a prioritized importance sampling exponent β0 = 0.4, as in Schaul et al. (2015). The expert transitions are given a priority bonus of d = 1. G ADDITIONAL EXPERIMENTAL RESULTS FOR EQUIVARIANT SACFD Figure 14 (a)-(b) shows the results for the experiment of Section 6.3 in the Block Pulling and Object Picking environments. Equivariant SACfD outperforms all baselines in those two environments. Figure 14 (c)-(d) shows the results for the experiment of Section 6.4 in the Block Pulling and Object Picking environments. Similar to the results in Figure 9, our Equivariant SACfD outperforms both RAD and DrQ equipped with SO(2) data augmentation. 
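Returning to the critic architecture in Appendix D.2 and the overconstraint discussed in Section 5.2 and Proposition B.1, the max-pooling over the group before the final Q-head could be sketched with E2CNN as below. The sizes are illustrative and the ordinary linear head is a simplification of the paper's equivariant q path; since the pooled features are already invariant, any standard head keeps the critic invariant.

import torch
from e2cnn import gspaces, nn as enn

act = gspaces.Rot2dOnR2(N=8)

# 64 regular-representation fields for the concatenated state-action feature (1x1 spatial)
feat_type = enn.FieldType(act, 64 * [act.regular_repr])

# GroupPooling takes the max over the 8 group channels of each regular field,
# producing 64 invariant (trivial, rho_0) features instead of a linear sum,
# which avoids the overconstraint of Proposition B.1.
group_pool = enn.GroupPooling(feat_type)

w = enn.GeometricTensor(torch.randn(1, 64 * 8, 1, 1), feat_type)
invariant_feat = group_pool(w).tensor.reshape(1, -1)   # shape (1, 64)

q_value = torch.nn.Linear(64, 1)(invariant_feat)        # invariant Q-estimate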
H ABLATION STUDIES H.1 USING EQUIVARIANT NETWORK ONLY IN ACTOR OR CRITIC In this experiment, we investigate the effectiveness of the equivariant network in SACfD by applying it only in the actor network or only in the critic network. We evaluate four variations: 1) Equi Actor + Equi Critic, which uses the equivariant network in both the actor and the critic; 2) Equi Actor + CNN Critic, which uses the equivariant network solely in the actor and a conventional CNN in the critic; 3) CNN Actor + Equi Critic, which uses a conventional CNN in the actor and the equivariant network in the critic; 4) CNN Actor + CNN Critic, which uses a conventional CNN in both the actor and the critic. The rest of the experimental setup mirrors Section 6.3. As shown in Figure 15, applying the equivariant network in the actor generally helps more than applying it in the critic (in 5 out of 6 experiments), and using the equivariant network in both the actor and the critic always demonstrates the best performance. H.2 DIFFERENT SYMMETRY GROUPS This experiment compares equivariant networks defined over three different symmetry groups: C8, C4, and C2. We run this experiment in SACfD with the same setup as in Section 6.3. As shown in Figure 16, the network defined over C8 generally outperforms the network defined over C4, followed by the network defined over C2. H.3 EQUIVARIANT SACFD IN NON-SYMMETRIC ENVIRONMENTS This experiment evaluates the performance of Equivariant SACfD in non-symmetric tasks where the initial orientation of the environment is fixed rather than random. (This is similar to Section 6.5, but here both the training and the evaluation environments have the fixed orientation.) Specifically, in Block Pulling, the two blocks in the training environment are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. As shown in Figure 17, when the environments do not contain SO(2) symmetries, the performance gain from using the equivariant network is less significant. H.4 ROTATIONAL AUGMENTATION + BUFFER AUGMENTATION Section 6.4 compares our Equivariant SACfD with rotational data augmentation baselines. This experiment shows the performance of those baselines (and an extra CNN SACfD baseline that uses a conventional CNN) equipped with the data augmentation buffer. As mentioned in Section 6.2, the data augmentation buffer creates 4 extra augmented transitions using random SO(2) rotations every time a new transition is added. Figure 18 shows the result: none of the baselines outperform our proposal in any of the tasks. Compared with Figure 9, the data augmentation buffer hurts RAD and DrQ because it redundantly repeats the same kind of data augmentation those methods already perform.
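For reference, one augmented copy of a transition in the SO(2) data augmentation buffer (Sections 6.2 and H.4) could be generated roughly as in the following sketch. This is illustrative only; the sign conventions relating the image rotation to the action rotation depend on the chosen image coordinate frame and must be kept consistent in a real implementation.

import numpy as np
from scipy.ndimage import rotate as nd_rotate

def augment_transition(obs, next_obs, a_xy, a_inv, reward, done, rng):
    """Create one SO(2)-augmented copy of a transition (illustrative sketch).

    obs, next_obs: (C, H, W) image observations, rotated about the image center.
    a_xy:          equivariant xy action component, rotated by the same angle (rho_1).
    a_inv:         invariant action variables (aperture, dz, dtheta), left unchanged.
    """
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rho1 = np.array([[c, -s], [s, c]])
    rot = lambda img: nd_rotate(img, np.degrees(theta), axes=(1, 2),
                                reshape=False, order=1, mode="nearest")
    return rot(obs), rot(next_obs), rho1 @ np.asarray(a_xy), a_inv, reward, done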
1. What is the focus of the paper in terms of integrating equivariance deep learning with robotic applications? 2. What are the strengths of the proposed approach, particularly in its theoretical foundation and practical applications? 3. Are there any concerns or suggestions regarding the limitations of the proposed method, such as the scope of the state space or the definition of the group element? 4. How does the reviewer assess the clarity and quality of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper explores to integrate equivariance deep learning to robotic applications. The authors propose two main contributions: 1) define and theoretically characterize an important class of group-equivariant MDPs, 2) and integrate equivariant variations to DQN, SAC, and LfD. The paper provides many different sets of experiments. The results are promising which shows benefits of the proposed approach. Review Strength: Sound theory of group-invariant MDP. The theory also characterizes both the invariance and equivariance cases. Applications on realistic robotic manipulation Showcasing on different RL (SAC and DQN) and robot learning (LfD) algorithms. Weakness: The description sometimes is hard to follow In general, the paper pursues an interesting research problem. it's well written. The proposed idea address the main well-understood challenge in robot learning. I would have only following comments. Regarding to the invariant-MDP, the proposed approach only deals with visual state spaces, so is it possible to extend to include proprioceptive information in the state space? Is the group element G pre-defined and does it contain only a single rotation operator g as defined in Experiment? If so would it be too simplistic? More ablation studies regarding this choice would be more helpful.
ICLR
Title $\mathrm{SO}(2)$-Equivariant Reinforcement Learning Abstract Equivariant neural networks enforce symmetry within the structure of their convolutional layers, resulting in a substantial improvement in sample efficiency when learning an equivariant or invariant function. Such models are applicable to robotic manipulation learning which can often be formulated as a rotationally symmetric problem. This paper studies equivariant model architectures in the context ofQ-learning and actor-critic reinforcement learning. We identify equivariant and invariant characteristics of the optimal Q-function and the optimal policy and propose equivariant DQN and SAC algorithms that leverage this structure. We present experiments that demonstrate that our equivariant versions of DQN and SAC can be significantly more sample efficient than competing algorithms on an important class of robotic manipulation problems. 1 INTRODUCTION A key challenge in reinforcement learning is to improve sample efficiency – that is to reduce the amount of environmental interactions that an agent must take in order to learn a good policy. This is particularly important in robotics applications where gaining experience potentially means interacting with a physical environment. One way of improving sample efficiency is to create “artificial” experiences through data augmentation. This is typically done in visual state spaces where an affine transformation (e.g., translation or rotation of the image) is applied to the states experienced during a transition (Laskin et al., 2020a; Kostrikov et al., 2020). These approaches implicitly assume that the transition and reward dynamics of the environment are invariant to affine transformations of the visual state. In fact, some approaches explicitly use a contrastive loss term to induce the agent to learn translation-invariant feature representations (Laskin et al., 2020b; Zhan et al., 2020). Recent work in geometric deep learning suggests that it may be possible to learn transformationinvariant policies and value functions in a different way, using equivariant neural networks (Cohen & Welling, 2016a;b). The key idea is to structure the model architecture such that it is constrained only to represent functions with the desired invariance properties. In principle, this approach aim at exactly the same thing as the data augmentation approaches described above – both methods seek to improve sample efficiency by introducing an inductive bias. However, the equivariance approach achieves this more directly by modifying the model architecture rather than by modifying the training data. Since with data augmentation, the model must learn equivariance in addition to the task itself, more training time and greater model capacity are often required. Even then, data augmentation results only in approximate equivariance whereas equivariant neural networks guarantee it and often have stronger generalization as well (Wang et al., 2020b). While equivariant architectures have recently been applied to reinforcement learning (van der Pol et al., 2020a;b; Mondal et al., 2020), this has been done only in toy settings (grid worlds, etc.) where the model is equivariant over small finite groups, and the advantages of this approach over standard methods is less clear. This paper explores the application of equivariant methods to more realistic problems in robotics such as object manipulation. We make several contributions. First, we define and analyze an important class of MDPs that we call group-invariant MDPs. 
Second, we introduce a new variation of the Equivariant DQN (Mondal et al., 2020), and we further introduce equivariant variations of SAC (Haarnoja et al., 2018), and learning from demonstration (LfD). Finally, we show that our methods convincingly outperform recent competitive data augmentation approaches (Laskin et al., 2020a; Kostrikov et al., 2020; Laskin et al., 2020b; Zhan et al., 2020). Our Equivariant SAC method, in particular, outperforms these baselines so dramatically (Figure 7) that it could make reinforcement learning feasible for a much larger class of robotics problems than is currently the case. Supplementary video and code are available at https://pointw.github.io/equi_rl_page/. 2 RELATED WORK Equivariant Learning: Encoding symmetries in the structure of neural networks can improve both generalization and sample efficiency. The idea of equivariant learning is first introduced in GConvolution (Cohen & Welling, 2016a). The extension work proposes an alternative architecture, Steerable CNN (Cohen & Welling, 2016b). Weiler & Cesa (2019) proposes a general framework for implementing E(2)-Steerable CNNs. In the context of reinforcement learning, Mondal et al. (2020) investigates the use of Steerable CNNs in the context of two game environments. van der Pol et al. (2020b) proposes MDP homomorphic networks to encode rotational and reflectional equivariance of an MDP but only evaluates their method in a small set of tasks. In robotic manipulation, Wang et al. (2021) learns equivariant Q-functions but is limited in the spatial action space. In contrast to prior work, this paper proposes an Equivariant SAC algorithm, an equivariant LfD algorithm, and a novel variation of Equivariant DQN (Mondal et al., 2020) focusing on visual motor control problems. Data Augmentation: Another popular method for improving sample efficiency is data augmentation. Recent works demonstrate that the use of simple data augmentation methods like random crop or random translate can significantly improve the performance of reinforcement learning (Laskin et al., 2020a; Kostrikov et al., 2020). Data augmentation is often used for generating additional samples (Kalashnikov et al., 2018; Lin et al., 2020; Zeng et al., 2020) in robotic manipulation. However, data augmentation methods are often less sample efficient than equivariant networks because the latter injects an inductive bias to the network architecture. Contrastive Learning: Data augmentation is also applied with contrastive learning (Oord et al., 2018) to improve feature extraction. Laskin et al. (2020b) show significant sample-efficiency improvement by adding an auxiliary contrastive learning term using random crop augmentation. Zhan et al. (2020) use a similar method in the context of robotic manipulation. However, contrastive learning is limited to learning an invariant feature encoder and is not capable of learning equivariant functions. Close-Loop Robotic Control: There are two typical action space definitions when learning policies that control the end-effector of a robot arm: the spatial action space that controls the target pose of the end-effector (Zeng et al., 2018b;a; Satish et al., 2019; Wang et al., 2020a), or the close-loop action space that controls the displacement of the end-effector. The close-loop action space is widely used for learning grasping policies (Kalashnikov et al., 2018; Quillen et al., 2018; Breyer et al., 2019; James et al., 2019). 
Recently, some works also learn more complex policies than grasping (Viereck et al., 2020; Kilinc et al., 2019; Cabi et al., 2020; Zhan et al., 2020). This work extends prior works in the close-loop action space by using equivariant learning to improve the sample efficiency. 3 BACKGROUND SO(2) and Cn: We will reason about rotation in terms of the group SO(2) and its cyclic subgroup Cn ≤ SO(2). SO(2) is the group of continuous planar rotations {Rotθ : 0 ≤ θ < 2π}. Cn is the discrete subgroup Cn = {Rotθ : θ ∈ { 2πin |0 ≤ i < n}} of rotations by multiples 2π n . Cn actions: A groupGmay be equipped with an action on a setX by specifying a map · : G×X → X satisfying g1 · (g2 · x) = (g1g2) · x and 1 · x = x for all g1, g2 ∈ G, x ∈ X . Note that closure, gx ∈ X , and invertibility, g−1gx = x, follow immediately from the definition. We are interested in actions of Cn which formalize how vectors or feature maps transform under rotation. The group Cn acts in three ways that concern us (for a more comprehensive background, see Bronstein et al. (2021)): 1. R through the trivial representation ρ0. Let g ∈ Cn and x ∈ R. Then ρ0(g)x = x. For example, the trivial representation describes how pixel color/depth values change when an image is rotated, i.e. they do not change (Figure 1 left). 2. R2 through the standard representation ρ1. Let g ∈ Cn and v ∈ R2. Then ρ1(g)v =( cos g − sin g sin g cos g ) v. This describes how elements of a vector field change when rotated (Figure 1 middle). 3. Rn through the regular representation ρreg. Let g = rm ∈ Cn = {1, r, r2, . . . , rn−1} and (x1, x2, . . . , xn) ∈ Rn. Then ρreg(g)x = (xn−m+1, . . . , xn, x1, x2, . . . , xn−m) cyclically permutes the coordinates of Rn (Figure 1 right). Feature maps as functions: In deep learning, images and feature maps are typically expressed as tensors. However, it will be convenient here to sometimes express these as functions. Specifically, we may write an h× w one-channel image F ∈ R1×h×w as a function F : R2 → R where F(x, y) describes the intensity at pixel x, y. Similarly, an m-channel tensor F ∈ Rm×h×w may be written as F : R2 → Rm. We refer to the domain of this function as its “spatial dimensions”. Cn actions on vectors and feature maps: Cn acts on vectors and feature maps differently depending upon their semantics. We formalize these different ways of acting as follows. Let F : R2 → Rm be an m-channel feature map and let V ∈ Rm×1×1 = Rm be a vector represented as a special case of a feature map with 1× 1 spatial dimensions. Then g is defined to act on F by (gF)(x, y) = ρj(g)F(ρ1(g)−1(x, y)). (1) For a vector V (considered to be at (x, y) = (0, 0)), this becomes: gV = ρj(g)V. (2) In the above, ρ1(g) rotates pixel location and ρj(g) transforms the pixel feature vector using the trivial representation (ρj = ρ0), the standard representation (ρj = ρ1), the regular representation (ρj = ρreg), or some combination thereof. Equivariant convolutional layer: A Cn-equivariant layer is a function h whose output is constrained to transform in a defined way when the input feature map is transformed by a group action. Consider an equivariant layer h with an input Fin : R2 → R|ρin| and an output Fout : R2 → R|ρout| , where ρin and ρout denote the group representations associated with Fin and Fout, respectively. When the input is transformed, this layer is constrained to output a transformed version of the same output feature map: h(gFin) = g(h(Fin)) = gFout. 
(3) where g ∈ Cn acts on Fin or Fout through Equation 1 or Equation 2, i.e., this constraint equation can be applied to arbitrary feature maps F or vectors V . A linear convolutional layer h satisfies Equation 3 with respect to the group Cn if the convolutional kernel K : R2 → R|ρout|×|ρin| has the following form (Cohen et al., 2018): K(ρ1(g)v) = ρ −1 out(g)K(v)ρin(g). (4) Since the composition of equivariant maps is equivariant, a fully convolutional equivariant network can be constructed by stacking equivariant convolutional layers that satisfy the constraint of Equation 3 and together with equivariant non-linearities (Weiler & Cesa, 2019). 4 PROBLEM STATEMENT 4.1 GROUP-INVARIANT MDPS In a group-invariant MDP, the transition and reward functions are invariant to group elements g ∈ G acting on the state and action space. For state s ∈ S, action a ∈ A, and g ∈ G, let gs ∈ S denote the action of g on s and ga ∈ A denote the action of g on a. Definition 4.1 (G-invariant MDP). A G-invariant MDPMG = (S,A, T,R,G) is an MDPM = (S,A, T,R) that satisfies the following conditions: 1. Reward Invariance: The reward function is invariant to the action of the group element g ∈ G, R(s, a) = R(gs, ga). 2. Transition Invariance: The transition function is invariant to the action of the group element g ∈ G, T (s, a, s′) = T (gs, ga, gs′). A key feature of a G-invariant MDP is that its optimal solution is also G-invariant (proof in Appendix A): Proposition 4.1. LetMG be a group-invariant MDP. Then its optimal Q-function is group invariant, Q∗(s, a) = Q∗(gs, ga), and its optimal policy is group-equivariant, π∗(gs) = gπ∗(s), for any g ∈ G. It should be noted that the G-invariant MDP of Definition 4.1 is in fact a special case of an MDP homomorphism (Ravindran & Barto, 2001; 2004), a broad class of MDP abstractions. MDP homomorphisms are important because optimal solutions to the abstract problem can be “lifted” to produce optimal solutions to the original MDP (Ravindran & Barto, 2004). As such, Proposition 4.1 follows directly from those results. 4.2 SO(2)-INVARIANT MDPS IN VISUAL STATE SPACES In the remainder of this paper, we focus exclusively on an important class of SO(2)-invariant MDPs where the state is encoded as an image. We approximate SO(2) by its subgroup Cn. State space: State is expressed as an m-channel image, Fs : R2 → Rm. The group operator g ∈ Cn acts on this image as defined in Equation 1 where we set ρj = ρ0: gFs(x, y) = ρ0(g)Fs(ρ1(g)−1(x, y)), i.e., by rotating the pixels but leaving the pixel feature vector unchanged. Action space: We assume we are given a factored action space Ainv ×Aequiv = A ⊆ Rk embedded in a k-dimensional Euclidean space where Ainv ⊆ Rkinv and Aequiv ⊆ Rk−kinv . We require the variables in Ainv to be invariant with the rotation operator and the variables in Aequiv to rotate with the representation ρequiv = ρ1. Therefore, the rotation operator g ∈ Cn acts on a ∈ A via ga = (ρequiv(g)aequiv, ainv) where ainv ∈ Ainv and aequiv ∈ Aequiv. Application to robotic manipulation: We express the state as a depth image centered on the gripper position where depth is defined relative to the gripper. The orientation of this image is relative to the base reference frame – not the gripper frame. We require the fingers of the gripper and objects grasped by the gripper to be visible in the image. Figure 2 shows an illustration. 
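To make the group action on this manipulation domain concrete, a rotation g acts on the depth image by rotating the pixel grid while leaving the pixel values unchanged (Eq. 1 with ρ0), and on the equivariant action component by the 2 × 2 rotation matrix ρ1(g) (Eq. 2), while the invariant action variables are untouched. The following is a minimal NumPy sketch for a C4 element; names are illustrative, and the sign convention depends on the image frame.

import numpy as np

def apply_group_element(depth_image, a_equiv, a_inv, k):
    """Apply g = Rot_{k*pi/2} in C4 to a (state, action) pair (illustrative sketch)."""
    g_image = np.rot90(depth_image, k=k)              # rotate the pixel grid (rho_0 values)
    theta = k * np.pi / 2.0
    rho1 = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
    g_a_equiv = rho1 @ np.asarray(a_equiv)            # standard representation on the xy part
    return g_image, g_a_equiv, a_inv                  # invariant variables unchanged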
The action is a tuple, a = (aλ, axy, az, aθ) ∈ A ⊂ R5, where aλ ∈ Aλ denotes the commanded gripper aperture, axy ∈ Axy denotes the commanded change in gripper xy position, az ∈ Az denotes the commanded change in gripper height, and aθ ∈ Aθ denotes the commanded change in gripper orientation. Here, the xy action is equivariant with g ∈ Cn,Aequiv = Axy , and the rest of the action variables are invariant,Ainv = Aλ×Az×Aθ. Notice that the transition dynamics are Cn-invariant (i.e. T (s, a, s′) = T (gs, ga, gs′)) because the Newtonian physics of the interaction are invariant to the choice of reference frame. If we constrain the reward function to be Cn-invariant as well, then the resulting MDP is Cn-invariant. 5 APPROACH 5.1 EQUIVARIANT DQN In DQN, we assume we have a discrete action space, and we learn the parameters of a Q-network that maps from the state onto action values. Given a G-invariant MDP, Proposition 4.1 tells us that the optimal Q-function is G-invariant. Therefore, we encode the Q-function using an equivariant neural network that is constrained to represent only G-invariant Q-functions. First, in order to use DQN, we need to discretize the action space. Let Aequiv ⊂ Aequiv and Ainv ⊂ Ainv be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function Fa : Aequiv → RAinv from the equivariant action variables in Aequiv to the Q values of the invariant action variables in Ainv. For example, in the robotic manipulation domain described Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1, and we define Aequiv and Ainv accordingly. We now encode the Q network q as a stack of equivariant layers that each encode the equivariant constraint of Equation 3. Since the composition of equivariant layers is equivariant, q satisfies: q(gFs) = g(q(Fs)) = gFa, (5) where we have substituted Fin = Fs and Fout = Fa. In the above, the rotation operator g ∈ Cn is applied using Equation 1 as gFa(axy) = ρ0(g)Fa(ρ1(g)−1(axy)). Figure 3 illustrates this equivariance constraint for the robotic manipulation example with |Aequiv| = |Axy| = 9. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in Axy are also rotated similarly. The detailed network architecture is shown in Appendix D.1. Our architecture is different from that in Mondal et al. (2020) in that we associate the action of g on Aequiv and Ainv with the group action on the spatial dimension and the channel dimension of a feature map Fa, which is more efficient than learning such mapping using FC layers. 5.2 EQUIVARIANT SAC In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network Π (the actor) and an action-value network Q (the critic) (Haarnoja et al., 2018). The critic Q : S ×A→ R approximates Q values in the typical way. However, the actor Π : S → A×Aσ estimates both the mean and standard deviation of action for a given state. Here, we define Aσ = Rk to be the domain of the standard deviation variables over the k-dimensional action space defined in Section 4.2. Since Proposition 4.1 tells us that the optimal Q is invariant and the optimal policy is equivariant, we must model Q as an invariant network and Π as an equivariant network. Policy network: First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function Fs. However, we must now express the action as a vector over Ā = A×Aσ . 
Factoring A into its equivariant and invariant components, we have Ā = Aequiv × Ainv × Aσ . In order to identify the equivariance relation for Ā, we must define how the group operator g ∈ G acts on aσ ∈ Aσ . Here, we make the simplifying assumption that aσ is invariant to the group operator. This choice makes sense in robotics domains where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element g ∈ G acts on ā ∈ Ā via: gā = g(aequiv, ainv, aσ) = (ρequiv(g)aequiv, ainv, aσ). (6) We can now define the actor network π to be a mapping Fs 7→ ā (Figure 4 top) that satisfies the following equivariance constraint (Equation 3): π(gFs) = g(π(Fs)) = gā. (7) Critic network: The critic network takes both state and action as input and maps onto a real value. We define two equivariant networks: a state encoder e and aQ network q. The equivariant state encoder, e, maps the input state Fs onto a regular representation s̄ ∈ (Rn)α where each of n group elements is associated with an α-vector. Since s̄ has a regular representation, we have gs̄ = ρreg(g)s̄. Writing the equivariance constraint of Equation 3 for e, we have that e must satisfy e(gFs) = ge(Fs) = gs̄. The output state representation s̄ is concatenated with the action a ∈ A, producing w = (s̄, a). The action of the group operator is now gw = (gs̄, ga) where ga = (ρequiv(g)aequiv, ainv). Finally, the q network maps from w onto R, a real-valued estimate of theQ value for w. Based on proposition 4.1, this network must be invariant to the group action: q(gw) = q(w). All together, the critic satisfies the following invariance equation: q(e(gFs), ga) = q(e(Fs), a). (8) This network is illustrated at the bottom of Figure 4. For a robotic manipulation domain in Section 4.2, we have Aequiv = Axy and Ainv = Aλ ×Az ×Aθ and ρequiv = ρ1. The detailed network architecture is in Appendix D.2. Preventing the critic from becoming overconstrained: In the model architecture above, the hidden layer of q is represented using a vector in the regular representation and the output of q is encoded using the trivial representation. However, Schur’s Lemma (see e.g. Dummit & Foote (1991)) implies there only exists a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., x = a ∑ i vi where x is a trivial representation, a is a constant, and v is a regular representation). This implies that a linear mapping f : Rn × Rn → R from two regular representations to a trivial representation that satisfies f(gv, gw) = f(v, w) for all g ∈ G will also satisfy f(g1v, w) = f(v, w) and f(v, g2w) = f(v, w) for all g1, g2 ∈ G. (See details in Appendix B.) In principle, this could overconstrain the last layer of q to encode additional undesired symmetries. To avoid this problem we use a non-linear equivariant mapping, maxpool, over the group space to transform the regular representation to the trivial representation. 5.3 EQUIVARIANT SACFD Many of the problems we want to address cannot be solved without guiding the agent’s exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner. 
Second, we introduce the following L2 term into the SAC actor’s loss function: Lactor = LSAC + 1e [ 1 2 ((a ∼ π(s))− ae)2 ] , (9) where LSAC is the actor’s loss term in standard SAC, 1e = 1 if the sampled transition is an expert demonstration and 0 otherwise, a ∼ π(s) is an action sampled from the output Gaussian distribution of π(s), and ae is the expert action. Since both the sampled action a ∼ π(s) and the expert action ae transform equivalently, Lactor is compatible with the equivariance we introduce in Section 5.2. We refer to this method as SACfD (SAC from Demonstration). 6 EXPERIMENTS We evaluate Equivariant DQN and Equivariant SAC in the manipulation tasks shown in Figure 5. These tasks can be formulated as SO(2)-invariant MDPs. All environments have sparse rewards (+1 when reaching the goal and 0 otherwise). See environment details in Appendix C. 6.1 EQUIVARIANT DQN We evaluate Equivariant DQN in the Block Pulling, Object Picking, and Drawer Opening tasks for the group C4. The discrete action space is Aλ = {OPEN, CLOSE}; Axy = {(x, y)|x, y ∈ {−0.02m, 0m, 0.02m}}; Az = {−0.02m, 0m, 0.02m}; Aθ = {− π16 , 0, π 16}. Note that the definition of Axy and g ∈ C4 satisfies the closure requirement of the action space in a way that ∀axy ∈ Axy,∀g ∈ C4, ρ1(g)axy ∈ Axy . We compare Equivariant DQN (Equi DQN) against the following baselines: 1) CNN DQN: DQN with conventional CNN instead of equivariant network, where the conventional CNN has a similar amount of trainable parameters (3.9M) as the equivariant network (2.6M). 2) RAD Crop DQN (Laskin et al., 2020a): same network architecture as CNN DQN. At each training step, each transition in the minibatch is applied with a random-crop data augmentation. 3) DrQ Shift DQN (Kostrikov et al., 2020): same network architecture as CNN DQN. At each training step, both the Q-targets and the TD losses are calculated by averaging over two random-shift augmented transitions. 4): CURL DQN (Laskin et al., 2020b): similar architecture as CNN DQN with an extra contrastive loss term that learns an invariant encoder from random crop augmentations. See the baselines detail in Appendix E. At the beginning of each training process, we pre-populate the replay buffer with 100 episodes of expert demonstrations. Figure 6 compares the learning curves of the various methods. Equivariant DQN learns faster and converges at a higher discounted reward in all three environments. 6.2 EQUIVARIANT SAC In this experiment, we evaluate the performance of Equivariant SAC (Equi SAC) for the group C8. The continuous action space is:Aλ = [0, 1]; Axy = {(x, y)|x, y ∈ [−0.05m, 0.05m]}; Az = [−0.05m, 0.05m]; Aθ = [−π8 , π 8 ]. We compare against the following baselines: 1) CNN SAC: SAC with conventional CNN rather than equivariant networks, where the conventional CNN has a similar amount of trainable parameters (2.6M) as the equivariant network (2.3M). 2) RAD Crop SAC (Laskin et al., 2020a): same model architecture as CNN SAC with random crop data augmentation when sampling transitions. 3) DrQ Shift SAC (Kostrikov et al., 2020): same model architecture as CNN SAC with random shift data augmentation when calculating the Q-target and the loss. 4) FERM (Zhan et al., 2020): a combination of SAC, contrastive learning, and random crop augmentation (baseline details in Appendix E). 
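As a small illustration of the closure property stated in Section 6.1 above (∀axy ∈ Axy, ∀g ∈ C4, ρ1(g)axy ∈ Axy), the following sketch checks it numerically for the 3 × 3 grid of xy displacements; it is only a sanity check, not part of the training code.

import numpy as np

# Discrete xy displacements from Section 6.1 and the rho_1 matrix of a 90-degree rotation.
axy_set = [np.array([x, y]) for x in (-0.02, 0.0, 0.02) for y in (-0.02, 0.0, 0.02)]
rho1_90 = np.array([[0.0, -1.0],
                    [1.0,  0.0]])

for a in axy_set:
    g_a = rho1_90 @ a
    # Closure: rotating any element of A_xy by g in C4 lands back in A_xy.
    assert any(np.allclose(g_a, b) for b in axy_set)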
All methods use a SO(2) data augmentation buffer, where every time a new transition is added, we generate 4 more augmented transitions by applying random continuous rotations to both the image and the action (this data augmentation in the buffer is in addition to the data augmentation that is performed in the RAD DrQ, and FERM baselines). Prior to each training run, we pre-load the replay buffer with 20 episodes of expert demonstration. Figure 7 shows the comparison among the various methods. Notice that Equivariant SAC outperforms the other methods significantly. Without the equivariant approach, Object Picking and Drawer Opening appear to be infeasible for the baseline methods. In Block Pulling, FERM is the only other method able to solve the task. 6.3 EQUIVARIANT SACFD We want to explore our equivariant methods in the context of more challenging tasks such as those in the bottom row of Figure 5. However, since these tasks are too difficult to solve without some kind of guided exploration, we augment the Equivariant SAC as well as all the baselines in two ways: 1) we use SACfD as described in Section 5.3; 2) we use Prioritized Experience Replay (Schaul et al., 2015) rather than standard replay buffer. As in Section 6.2, we use the SO(2) data augmentation in the buffer that generates 4 extra SO(2)-augmented transitions whenever a new transition is added. Figure 8 shows the results. First, note that our Equivariant SACfD does best on all four tasks, followed by FERM, and other baselines. Second, notice that only the equivariant method can solve the last three (most challenging tasks). This suggests that equivariant models are important not only for unstructured reinforcement learning, but also for learning from demonstration. Additional results for Block Pulling and Object Picking environments are shown in Appendix G. 6.4 COMPARING WITH LEARNING EQUIVARIANCE USING AUGMENTATION In the previous experiments, we compare against the data augmentation baselines using the same data augmentation operators that the authors proposed (random crop in RAD (Laskin et al., 2020a) and random shift in DrQ (Kostrikov et al., 2020)). However, those two methods can also be modified to learn SO(2) equivariance using SO(2) data augmentation. Here, we explore this idea as an alternative to our equivariant model. Specifically, instead of augmenting on the state as in Laskin et al. (2020a) and Kostrikov et al. (2020) using only translation, we apply the SO(2) augmentation in both the state and the action. Since the RAD and DrQ baselines in this section are already running SO(2) augmentations themselves, we disable the SO(2) buffer augmentation for the online transitions in those baselines. (See the result of RAD and DrQ with the SO(2) data augmentation buffer in Appendix H.4). We compare the resulting version of RAD (RAD SO(2) SACfD) and DrQ (DrQ SO(2) SACfD) with our Equivariant SACfD in Figure 9. Our method outperforms both RAD and DrQ equipped with SO(2) data augmentation. Additional results for Block Pulling and Object Picking are shown in Appendix G. 6.5 GENERALIZATION EXPERIMENT This experiment evaluates the ability for the equivariant model to generalize over the equivariance group. We use a similar experimental setting as in Section 6.3. However, now the training environment is always initialized with a fixed orientation rather than a random orientation. 
For example, in Block Pulling, the two blocks are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. In the evaluation environment, however, these objects are initialized with random orientations. To suc- ceed, the agent needs to generalize over varied orientations while being trained with a fixed orientation. To prevent the agent from generalizing via augmentation, we disable the SO(2) augmentation in the buffer. As shown in Figure 10, Equivariant SACfD generalizes better than the baselines. Even though the equivariant network is presented with only one orientation during training, it successfully generalizes over random orientation whereas none of the baselines can. 7 DISCUSSION This paper defines a class of group-invariant MDPs and identifies the invariance and equivariance characteristics of their optimal solutions. This paper further proposes Equivariant SAC and a new variation of Equivariant DQN for continuous action space and discrete action space, respectively. We show experimentally in the robotic manipulation domains that our proposal substantially surpasses the performance of competitive baselines. A key limitation of this work is that our definition of G-invariant MDPs requires the MDP to have an invariant reward function and invariant transition function. Though such restrictions are often applicable in robotics, they limit the potential of the proposed methods in other domains like some ATARI games. Furthermore, if the observation is from a non-top-down perspective, or there are non-equivariant structures in the observation (e.g., the robot arm), the invariant assumptions of a G-invariant MDP will not be directly satisfied. ACKNOWLEDGMENTS This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, and NASA 80NSSC19K1474. R. Walters is supported by the Roux Institute and the Harold Alfond Foundation and NSF grants 2107256 and 2134178. A PROOF OF PROPOSITION 4.1 The proof in this section follows Wang et al. (2021). Note that the definition of group action · : G× X → X implies that elements g ∈ G act by bijections on X since the action of g−1 gives a twosided inverse for the action of g. That is, g permutes the elements of X . Proof of Proposition 4.1. For g ∈ G, we will first show that the optimal Q-function is G-invariant, i.e., Q∗(s, a) = Q∗(gs, ga), then show that the optimal policy is G-equivariant, i.e., π∗(gs) = gπ∗(s). (1) Q∗(s, a) = Q∗(gs, ga): The Bellman optimality equations for Q∗(s, a) and Q∗(gs, ga) are, respectively: Q∗(s, a) = R(s, a) + γ sup a′∈A ∫ s′∈S T (s, a, s′)Q∗(s′, a′), (10) and Q∗(gs, ga) = R(gs, ga) + γ sup a′∈A ∫ s′∈S T (gs, ga, s′)Q∗(s′, a′). (11) Since g ∈ G merely permutes the elements of S, we can re-index the integral using s̄′ = gs′: Q∗(gs, ga) = R(gs, ga) + γ sup ā′∈gA ∫ s̄′∈gS T (gs, ga, s̄′)Q∗(s̄′, ā′) (12) = R(gs, ga) + γ sup a′∈A ∫ s′∈S T (gs, ga, gs′)Q∗(gs′, ga′). (13) Using the Reward Invariance and the Transition Invariance in Definition 4.1, this can be written: Q∗(gs, ga) = R(s, a) + γ sup a′∈A ∫ s′∈S T (s, a, s′)Q∗(gs′, ga′). (14) Now, define a new function Q̄ such that ∀s, a ∈ S × A, Q̄(s, a) = Q(gs, ga) and substitute into Eq. 14, resulting in: Q̄∗(s, a) = R(s, a) + γ sup a′∈A ∫ s′∈S T (s, a, s′)Q̄∗(s′, a′). (15) Notice that Eq. 15 and Eq. 10 are the same Bellman equation. Since solutions to the Bellman equation are unique, we have that ∀s, a ∈ S ×A, Q∗(s, a) = Q̄∗(s, a) = Q∗(gs, ga). 
(2) π∗(gs) = gπ∗(s): The optimal policy for π∗(s) and π∗(gs) can be written in terms of the optimal Q-function, Q∗, as: π∗(s) = arg max a∈A Q∗(s, a) (16) and π∗(gs) = arg max ā∈A Q∗(gs, ā) (17) Using the invariant property of Q∗ we can substitute Q∗(gs, ā) with Q∗(s, g−1ā) in Equation 17: π∗(gs) = arg max ā∈A Q∗(s, g−1ā) (18) Let ā = ga, Equation 18 can be written as: π∗(gs) = g[arg max a∈A Q∗(s, g−1ga)] (19) Cancelling g−1 and g and substituting Equation 16 we have, π∗(gs) = gπ∗(s). (20) B EQUIVARIANCE OVERCONSTRAIN Proposition B.1. Let f : Vreg⊕Vreg → Vtriv be a linear Cn-equivariant function. Then f(v, w) = a ∑ i vi + b ∑ i wi. Proof. By Weyl decomposibility (Hall, 2003), Vreg decomposes into irreducible representations for Cn each with multiplicity determined by its dimension. Among these is the trivial representation with multiplicity 1. By Schur’s lemma (Dummit & Foote, 1991), the mapping Vreg ⊕ Vreg → Vtriv must factor through the trivial representation embedded in Vreg . The projection onto the trivial representation is given v 7→ a ∑ i vi. The result follows by linearity. As a corollary, we find that Cn-equivariant maps Vreg ⊕ Vreg → Vtriv are actually Cn × Cnequivariant. Let (g1, g2) ∈ Cn × Cn, then applying the Proposition f(g1v, g2w) = a ∑ i(gv)i + b ∑ i(gw)i = a ∑ i vi + b ∑ i wi = f(v, w). C ENVIRONMENT DETAILS In all environments, the environment reset is conduced by randomly initializing the objects with random positions and orientations inside the workspace. The arm is always initialized at the same configuration. The workspace has a size of 0.4m × 0.4m × 0.24m. All environments have a sparse reward, i.e., the agent acquires a +1 reward when reaching the goal state, and 0 otherwise. In the PyBullet simulator, the robot joints have enough compliance to allow the gripper to apply force on the block in the Corner Picking task. We augment the state image with an additional binary channel (i.e., either all pixels are 1 or all pixels are 0) indicating if the gripper is holding an object. Note that this additional channel is invariant to rotations (because all pixels have the same value) so it won’t break the proposed equivariant properties. The Block Pulling requires the robot to pull one block to make contact with the other block. The Object Picking requires the robot the pick up an object randomly sampled from a set of 11 objects (Figure 11). The Drawer Opening requires the robot to pull open a drawer. The Block Stacking requires the robot to stack one block on top of another. The House Building requires the robot to stack a triangle roof on top of a block. The Corner Picking requires the robot to slide the block from the corner and then pick it up. D NETWORK ARCHITECTURE Our equivariant models are implemented using the E2CNN (Weiler & Cesa, 2019) library with PyTorch (Paszke et al., 2017). D.1 EQUIVARIANT DQN ARCHITECTURE In the Equivariant DQN, we use a 7-layer Steerable CNN defined in the group C4 (Figure 12a). The input Fs is encoded as a 2-channel ρ0 feature map, and the output is a 18-channel 3 × 3 ρ0 feature map where the channel encodes the invariant actions Ainv and the spatial dimension encodes Axy . D.2 EQUIVARIANT SAC ARCHITECTURE In the Equivariant SAC, there are two separate networks, both are Steerable CNN defined in the group C8. The actor π (Figure 12b top) is an 8-layer network that takes in a 2-channel ρ0 feature map (Fs) and outputs a mixed representation type 1 × 1 feature map (ā) consisting of 1 ρ1 feature for axy and 8 ρ0 features for ainv and aσ . 
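As a rough illustration of how the invariant Q-map described in Appendix D.1 above can be turned into a greedy action: the channel index selects one of the |Aλ| · |Az| · |Aθ| = 18 invariant action combinations and the spatial cell selects the xy displacement. The variable names below are illustrative.

import torch

# q_map: invariant Q-values from the Equivariant DQN head, shape (B, 18, 3, 3).
# Channels index A_inv = A_lambda x A_z x A_theta; the 3x3 grid indexes A_xy.
q_map = torch.randn(4, 18, 3, 3)

flat = q_map.reshape(q_map.shape[0], -1)
best = flat.argmax(dim=1)
a_inv_index = best // 9        # which invariant action combination (0..17)
a_xy_index = best % 9          # which (dx, dy) cell in the 3x3 grid (0..8)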
The critic (Figure 12b bottom) is a 9-layer network that takes in both Fs as a 2-channel ρ0 feature map and a as a 1 × 1 mixed representation feature map consisting of 1 ρ1 feature for axy and 3 ρ0 for ainv. The upper path e encodes Fs into a 64-channel regular representation feature map s̄ with 1 × 1 spatial dimensions, then concatenates it with a. Two separate Q-value paths q take in the concatenated feature map and generate two Q-estimates in the form of 1 × 1 ρ0 feature. The non-linear maxpool layer is used for transforming regular representations into trivial representations to prevent the equivariant overconstraint (Section 5.2). Note that there are two Q outputs based on the requirement of the SAC algorithm. E BASELINE DETAILS Figure 13 shows the baseline network architectures for DQN and SAC. The RAD (Laskin et al., 2020a) Crop baselines, CURL (Laskin et al., 2020b) baselines, and FERM (Zhan et al., 2020) baselines use random crop for data augmentation. The random crop crops a 142 × 142 state image to the size of 128 × 128. The contrastive encoder of CURL baselines has a size of 128 as in Laskin et al. (2020b), and that for the FERM baselines has a size of 50 as in Zhan et al. (2020). The FERM baseline’s contrastive encoder is pretrained for 1.6k steps using the expert data as in Zhan et al. (2020). The DrQ (Kostrikov et al., 2020) Shift baselines use random shift of ±4 pixels for data augmentation as in the original work. In all DrQ baselines, the number of augmentations for calculating the target K and the number of augmentations for calculating the loss M are both 2 as in Kostrikov et al. (2020). F TRAINING DETAILS We implement our experimental environments in the PyBullet simulator (Coumans & Bai, 2016). The workspace’s size is 0.4m×0.4m×0.24m. The pixel size of the visual state I is 128×128 (except for the RAD Crop baselines, CURL baselines, and FERM baselines, where I’s size is 142 × 142 and will be cropped to 128 × 128). I’s FOV is 0.6m × 0.6m. During training, we use 5 parallel environments. We implement all training in PyTorch (Paszke et al., 2017). Both DQN and SAC use soft target update with τ = 10−2. In the DQN experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10−4. We use Huber loss (Huber, 1964) for calculating the TD loss. We use a discount factor γ = 0.95. The batch size is 32. The buffer has a capacity of 100,000 transitions. In the SAC (and SACfD) experiments, we use the Adam optimizer with a learning rate of 10−3. The entropy temperature α is initialized at 10−2. The target entropy is -5. The discount factor γ = 0.99. The batch size is 64. The buffer has a capacity of 100,000 transitions. SACfD uses the prioritized replay buffer (Schaul et al., 2015) with prioritized replay exponent of 0.6 and prioritized importance sampling exponent β0 = 0.4 as in Schaul et al. (2015). The expert transitions are given a priority bonus of d = 1. G ADDITIONAL EXPERIMENTAL RESULTS FOR EQUIVARIANT SACFD Figure 14 (a)-(b) shows the results for the experiment of Section 6.3 in Block Pulling and Object Picking environments. The Equivariant SACfD outperforms all baselines in those two environments. Figure 14 (c)-(d) shows the results for the experiment of Section 6.4 in block Pulling and Object Picking environments. Similarly as the results in Figure 9, our Equivariant SACfD outperforms both RAD and DrQ equipped with SO(2) dat augmentation. 
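The optimization details in Appendix F (soft target updates with τ = 10^−2, Huber TD loss, γ = 0.95 for DQN) correspond to standard components; a minimal sketch with illustrative names is given below.

import torch
import torch.nn.functional as F

def soft_update(target_net, online_net, tau=1e-2):
    """Polyak-average the target parameters (Appendix F uses tau = 1e-2)."""
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), online_net.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)

def dqn_td_loss(q_net, target_net, batch, gamma=0.95):
    """Huber TD loss on a batch of (s, a, r, s', done) tensors (sketch)."""
    s, a, r, s_next, done = batch
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.smooth_l1_loss(pred, target)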
H ABLATION STUDIES H.1 USING EQUIVARIANT NETWORK ONLY IN ACTOR OR CRITIC In this experiment, we investigate the effectiveness of the equivariant network in SACfD by only applying it in the actor network or the critic network. We evaluate four variations: 1) Equi Actor + Equi Critic that uses equivariant network in both the actor and the critic; 2) Equi Actor + CNN Critic that uses equivariant network solely in the actor and uses conventional CNN in the critic; 3) CNN Actor + Equi Critic that uses conventional CNN in the actor and equivariant network in the Critic; 4) CNN Actor + CNN Critic that uses the conventional CNN in both the actor and the critic. Other experimental setup mirrors Section 6.3. As is shown in Figure 15, applying the equivariant network in the actor generally helps more than applying the equivariant network in the critic (in 5 out of 6 experiments), and using the equivariant network in both the actor and the critic always demonstrates the best performance. H.2 DIFFERENT SYMMETRY GROUPS This experiment compares the equivariant networks defined in three different symmetry groups: C8, C4, and C2. We run this experiment in SACfD with the same setup as in Section 6.3. As is shown in Figure 16, the network defined in C8 generally outperforms the network defined in C4, followed by the network defined in C2. H.3 EQUIVARIANT SACFD IN NON-SYMMETRIC ENVIRONMENTS This experiments evaluates the performance of Equivariant SACfD in non-symmetric tasks where the initial orientation of the environments are fixed rather than random. (Similarly as in Section 6.5 but both the training and the evaluation environments have the fix orientation.) Specifically, in Block Pulling, the two blocks in the training environment is initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. As is shown in Figure 17, when the environments do not contain SO(2) symmetries, the performance gain of using equivariant network is less significant. H.4 ROTATIONAL AUGMENTATION + BUFFER AUGMENTATION Section 6.4 compares our Equivariant SACfD with rotational data augmentation baselines. This experiment shows the performance of those baselines (and an extra CNN SACfD baseline that uses conventional CNN) equipped with the data augmentation buffer. As is mentioned in Section 6.2, the data augmentation baseline creates 4 extra augmented transitions using random SO(2) rotation every time a new transition is added. Figure 18 shows the result, where none of the baselines outperform our proposal in any tasks. Compared with Figure 9, the data augmentation buffer hurts RAD and DrQ because of the redundancy of the same data augmentation.
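For the symmetry-group ablation in H.2, only the group passed to the gspace constructor changes while the field types and layers are declared the same way; a minimal E2CNN sketch (layer sizes are illustrative):

from e2cnn import gspaces, nn as enn

# The C2 / C4 / C8 variants in H.2 differ only in the rotation group N.
for n in (2, 4, 8):
    act = gspaces.Rot2dOnR2(N=n)
    in_type = enn.FieldType(act, 2 * [act.trivial_repr])
    hid_type = enn.FieldType(act, 16 * [act.regular_repr])
    conv = enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1)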
1. What is the main contribution of the paper in the field of reinforcement learning? 2. What are the strengths and weaknesses of the paper regarding its experimental results and theoretical novelty? 3. Do you have any concerns or questions about the methodology used in the paper, particularly in preventing the critic from becoming overconstrained? 4. How does the reviewer assess the significance of the results, especially when compared to previous works in symmetric MDPs? 5. Are there any suggestions or requests for additional experiments or analyses to further support the findings of the paper?
Summary Of The Paper Review
Summary Of The Paper The paper uses rotation equivariant CNNs for model-free reinforcement learning. More specifically, it is argued that for robotic manipulation tasks the corresponding MDP is invariant under translation and rotation, and therefore one could more efficiently learn equivariant/invariant policy/value functions. This idea is applied to both DQN and SAC for control with finite and continuous action spaces respectively. Experimental results on several tasks demonstrate substantial improvement over the competing methods. Review I found the paper to be clearly written and generally well-executed. The main strength of the paper is the experimental results which are very supportive in a range of different tasks. Its main weakness is lack of novelty, on the theoretical side. This is because, as noted by the paper itself, equivariant networks have been used for learning equivariant policies and invariant Q functions in previous work. However, given the importance of this domain, I believe the focus on this group and proposed methodology still has interesting contributions (e.g., in learning from demonstrations). Questions/comments: Proposition 4.1 is not new, and proper references should give credit to early works on symmetric MDPs (e.g., “Symmetries and Model Minimization in Markov Decision Processes”) On “preventing critic from becoming overconstrained”: while I understand the proposed argument, it doesn’t seem valid to me, since you have the option of having many equivariant layers before the scaled mean pooling operation. While using max pooling instead of min pooling is fine, the argument seems tangential. Any comments? Since the results are surprisingly good compared to the model that replaces C8/C4 equivariant layer with ordinary convolution, could you please also provide the results for C2 equivariant layer? Such a layer would only have a factor of 2 improvement over the CNN (in data-efficiency) and therefore it would help make sense of the results.
ICLR
Title $\mathrm{SO}(2)$-Equivariant Reinforcement Learning Abstract Equivariant neural networks enforce symmetry within the structure of their convolutional layers, resulting in a substantial improvement in sample efficiency when learning an equivariant or invariant function. Such models are applicable to robotic manipulation learning, which can often be formulated as a rotationally symmetric problem. This paper studies equivariant model architectures in the context of Q-learning and actor-critic reinforcement learning. We identify equivariant and invariant characteristics of the optimal Q-function and the optimal policy and propose equivariant DQN and SAC algorithms that leverage this structure. We present experiments that demonstrate that our equivariant versions of DQN and SAC can be significantly more sample efficient than competing algorithms on an important class of robotic manipulation problems. 1 INTRODUCTION A key challenge in reinforcement learning is to improve sample efficiency – that is, to reduce the number of environmental interactions that an agent must take in order to learn a good policy. This is particularly important in robotics applications where gaining experience potentially means interacting with a physical environment. One way of improving sample efficiency is to create “artificial” experiences through data augmentation. This is typically done in visual state spaces where an affine transformation (e.g., translation or rotation of the image) is applied to the states experienced during a transition (Laskin et al., 2020a; Kostrikov et al., 2020). These approaches implicitly assume that the transition and reward dynamics of the environment are invariant to affine transformations of the visual state. In fact, some approaches explicitly use a contrastive loss term to induce the agent to learn translation-invariant feature representations (Laskin et al., 2020b; Zhan et al., 2020). Recent work in geometric deep learning suggests that it may be possible to learn transformation-invariant policies and value functions in a different way, using equivariant neural networks (Cohen & Welling, 2016a;b). The key idea is to structure the model architecture such that it is constrained only to represent functions with the desired invariance properties. In principle, this approach aims at exactly the same thing as the data augmentation approaches described above – both methods seek to improve sample efficiency by introducing an inductive bias. However, the equivariance approach achieves this more directly by modifying the model architecture rather than by modifying the training data. Since with data augmentation the model must learn equivariance in addition to the task itself, more training time and greater model capacity are often required. Even then, data augmentation results only in approximate equivariance, whereas equivariant neural networks guarantee it and often have stronger generalization as well (Wang et al., 2020b). While equivariant architectures have recently been applied to reinforcement learning (van der Pol et al., 2020a;b; Mondal et al., 2020), this has been done only in toy settings (grid worlds, etc.) where the model is equivariant over small finite groups, and the advantages of this approach over standard methods are less clear. This paper explores the application of equivariant methods to more realistic problems in robotics such as object manipulation. We make several contributions. First, we define and analyze an important class of MDPs that we call group-invariant MDPs.
Second, we introduce a new variation of the Equivariant DQN (Mondal et al., 2020), and we further introduce equivariant variations of SAC (Haarnoja et al., 2018), and learning from demonstration (LfD). Finally, we show that our methods convincingly outperform recent competitive data augmentation approaches (Laskin et al., 2020a; Kostrikov et al., 2020; Laskin et al., 2020b; Zhan et al., 2020). Our Equivariant SAC method, in particular, outperforms these baselines so dramatically (Figure 7) that it could make reinforcement learning feasible for a much larger class of robotics problems than is currently the case. Supplementary video and code are available at https://pointw.github.io/equi_rl_page/. 2 RELATED WORK Equivariant Learning: Encoding symmetries in the structure of neural networks can improve both generalization and sample efficiency. The idea of equivariant learning is first introduced in GConvolution (Cohen & Welling, 2016a). The extension work proposes an alternative architecture, Steerable CNN (Cohen & Welling, 2016b). Weiler & Cesa (2019) proposes a general framework for implementing E(2)-Steerable CNNs. In the context of reinforcement learning, Mondal et al. (2020) investigates the use of Steerable CNNs in the context of two game environments. van der Pol et al. (2020b) proposes MDP homomorphic networks to encode rotational and reflectional equivariance of an MDP but only evaluates their method in a small set of tasks. In robotic manipulation, Wang et al. (2021) learns equivariant Q-functions but is limited in the spatial action space. In contrast to prior work, this paper proposes an Equivariant SAC algorithm, an equivariant LfD algorithm, and a novel variation of Equivariant DQN (Mondal et al., 2020) focusing on visual motor control problems. Data Augmentation: Another popular method for improving sample efficiency is data augmentation. Recent works demonstrate that the use of simple data augmentation methods like random crop or random translate can significantly improve the performance of reinforcement learning (Laskin et al., 2020a; Kostrikov et al., 2020). Data augmentation is often used for generating additional samples (Kalashnikov et al., 2018; Lin et al., 2020; Zeng et al., 2020) in robotic manipulation. However, data augmentation methods are often less sample efficient than equivariant networks because the latter injects an inductive bias to the network architecture. Contrastive Learning: Data augmentation is also applied with contrastive learning (Oord et al., 2018) to improve feature extraction. Laskin et al. (2020b) show significant sample-efficiency improvement by adding an auxiliary contrastive learning term using random crop augmentation. Zhan et al. (2020) use a similar method in the context of robotic manipulation. However, contrastive learning is limited to learning an invariant feature encoder and is not capable of learning equivariant functions. Close-Loop Robotic Control: There are two typical action space definitions when learning policies that control the end-effector of a robot arm: the spatial action space that controls the target pose of the end-effector (Zeng et al., 2018b;a; Satish et al., 2019; Wang et al., 2020a), or the close-loop action space that controls the displacement of the end-effector. The close-loop action space is widely used for learning grasping policies (Kalashnikov et al., 2018; Quillen et al., 2018; Breyer et al., 2019; James et al., 2019). 
Recently, some works also learn more complex policies than grasping (Viereck et al., 2020; Kilinc et al., 2019; Cabi et al., 2020; Zhan et al., 2020). This work extends prior works in the close-loop action space by using equivariant learning to improve the sample efficiency. 3 BACKGROUND SO(2) and Cn: We will reason about rotation in terms of the group SO(2) and its cyclic subgroup Cn ≤ SO(2). SO(2) is the group of continuous planar rotations {Rotθ : 0 ≤ θ < 2π}. Cn is the discrete subgroup Cn = {Rotθ : θ ∈ {2πi/n | 0 ≤ i < n}} of rotations by multiples of 2π/n. Cn actions: A group G may be equipped with an action on a set X by specifying a map · : G × X → X satisfying g1 · (g2 · x) = (g1g2) · x and 1 · x = x for all g1, g2 ∈ G, x ∈ X. Note that closure, gx ∈ X, and invertibility, g−1gx = x, follow immediately from the definition. We are interested in actions of Cn which formalize how vectors or feature maps transform under rotation. The group Cn acts in three ways that concern us (for a more comprehensive background, see Bronstein et al. (2021)): 1. R through the trivial representation ρ0. Let g ∈ Cn and x ∈ R. Then ρ0(g)x = x. For example, the trivial representation describes how pixel color/depth values change when an image is rotated, i.e. they do not change (Figure 1 left). 2. R2 through the standard representation ρ1. Let g ∈ Cn and v ∈ R2. Then ρ1(g)v = $\begin{pmatrix} \cos g & -\sin g \\ \sin g & \cos g \end{pmatrix}$ v. This describes how elements of a vector field change when rotated (Figure 1 middle). 3. Rn through the regular representation ρreg. Let g = r^m ∈ Cn = {1, r, r^2, . . . , r^{n−1}} and (x1, x2, . . . , xn) ∈ Rn. Then ρreg(g)x = (xn−m+1, . . . , xn, x1, x2, . . . , xn−m) cyclically permutes the coordinates of Rn (Figure 1 right). Feature maps as functions: In deep learning, images and feature maps are typically expressed as tensors. However, it will be convenient here to sometimes express these as functions. Specifically, we may write an h × w one-channel image F ∈ R1×h×w as a function F : R2 → R where F(x, y) describes the intensity at pixel (x, y). Similarly, an m-channel tensor F ∈ Rm×h×w may be written as F : R2 → Rm. We refer to the domain of this function as its “spatial dimensions”. Cn actions on vectors and feature maps: Cn acts on vectors and feature maps differently depending upon their semantics. We formalize these different ways of acting as follows. Let F : R2 → Rm be an m-channel feature map and let V ∈ Rm×1×1 = Rm be a vector represented as a special case of a feature map with 1 × 1 spatial dimensions. Then g is defined to act on F by (gF)(x, y) = ρj(g)F(ρ1(g)−1(x, y)). (1) For a vector V (considered to be at (x, y) = (0, 0)), this becomes: gV = ρj(g)V. (2) In the above, ρ1(g) rotates pixel location and ρj(g) transforms the pixel feature vector using the trivial representation (ρj = ρ0), the standard representation (ρj = ρ1), the regular representation (ρj = ρreg), or some combination thereof. Equivariant convolutional layer: A Cn-equivariant layer is a function h whose output is constrained to transform in a defined way when the input feature map is transformed by a group action. Consider an equivariant layer h with an input Fin : R2 → R|ρin| and an output Fout : R2 → R|ρout|, where ρin and ρout denote the group representations associated with Fin and Fout, respectively. When the input is transformed, this layer is constrained to output a transformed version of the same output feature map: h(gFin) = g(h(Fin)) = gFout.
(3) where g ∈ Cn acts on Fin or Fout through Equation 1 or Equation 2, i.e., this constraint equation can be applied to arbitrary feature maps F or vectors V. A linear convolutional layer h satisfies Equation 3 with respect to the group Cn if the convolutional kernel K : R2 → R|ρout|×|ρin| has the following form (Cohen et al., 2018): K(ρ1(g)v) = ρout(g)−1 K(v) ρin(g). (4) Since the composition of equivariant maps is equivariant, a fully convolutional equivariant network can be constructed by stacking equivariant convolutional layers that satisfy the constraint of Equation 3, together with equivariant non-linearities (Weiler & Cesa, 2019). 4 PROBLEM STATEMENT 4.1 GROUP-INVARIANT MDPS In a group-invariant MDP, the transition and reward functions are invariant to group elements g ∈ G acting on the state and action space. For state s ∈ S, action a ∈ A, and g ∈ G, let gs ∈ S denote the action of g on s and ga ∈ A denote the action of g on a. Definition 4.1 (G-invariant MDP). A G-invariant MDP MG = (S, A, T, R, G) is an MDP M = (S, A, T, R) that satisfies the following conditions: 1. Reward Invariance: The reward function is invariant to the action of the group element g ∈ G, R(s, a) = R(gs, ga). 2. Transition Invariance: The transition function is invariant to the action of the group element g ∈ G, T(s, a, s′) = T(gs, ga, gs′). A key feature of a G-invariant MDP is that its optimal solution is also G-invariant (proof in Appendix A): Proposition 4.1. Let MG be a group-invariant MDP. Then its optimal Q-function is group-invariant, Q∗(s, a) = Q∗(gs, ga), and its optimal policy is group-equivariant, π∗(gs) = gπ∗(s), for any g ∈ G. It should be noted that the G-invariant MDP of Definition 4.1 is in fact a special case of an MDP homomorphism (Ravindran & Barto, 2001; 2004), a broad class of MDP abstractions. MDP homomorphisms are important because optimal solutions to the abstract problem can be “lifted” to produce optimal solutions to the original MDP (Ravindran & Barto, 2004). As such, Proposition 4.1 follows directly from those results. 4.2 SO(2)-INVARIANT MDPS IN VISUAL STATE SPACES In the remainder of this paper, we focus exclusively on an important class of SO(2)-invariant MDPs where the state is encoded as an image. We approximate SO(2) by its subgroup Cn. State space: The state is expressed as an m-channel image, Fs : R2 → Rm. The group operator g ∈ Cn acts on this image as defined in Equation 1 where we set ρj = ρ0: gFs(x, y) = ρ0(g)Fs(ρ1(g)−1(x, y)), i.e., by rotating the pixels but leaving the pixel feature vector unchanged. Action space: We assume we are given a factored action space Ainv × Aequiv = A ⊆ Rk embedded in a k-dimensional Euclidean space, where Ainv ⊆ Rkinv and Aequiv ⊆ Rk−kinv. We require the variables in Ainv to be invariant under the rotation operator and the variables in Aequiv to rotate with the representation ρequiv = ρ1. Therefore, the rotation operator g ∈ Cn acts on a ∈ A via ga = (ρequiv(g)aequiv, ainv), where ainv ∈ Ainv and aequiv ∈ Aequiv. Application to robotic manipulation: We express the state as a depth image centered on the gripper position, where depth is defined relative to the gripper. The orientation of this image is relative to the base reference frame – not the gripper frame. We require the fingers of the gripper and objects grasped by the gripper to be visible in the image. Figure 2 shows an illustration.
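As a concrete illustration of the group action just defined, here is a small NumPy sketch; the function names and array layout are ours, the action vector anticipates the manipulation action tuple spelled out in the next paragraph, and a 90-degree rotation is used because the image rotation is then exact.

```python
import numpy as np

def rho1(theta: float) -> np.ndarray:
    # Standard representation rho_1: a 2x2 planar rotation matrix.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def rotate_state(depth_image: np.ndarray, k: int) -> np.ndarray:
    # g in C_4 acts on the image by rotating pixel locations (rho_1 on coordinates)
    # while leaving each pixel's depth value unchanged (rho_0 on the channel), as in Eq. 1.
    return np.rot90(depth_image, k=k, axes=(-2, -1))

def rotate_action(a: np.ndarray, theta: float) -> np.ndarray:
    # a = (a_lambda, a_x, a_y, a_z, a_theta); only the (a_x, a_y) displacement
    # transforms by rho_1, the remaining components are invariant.
    a = a.copy()
    a[1:3] = rho1(theta) @ a[1:3]
    return a

state = np.random.rand(1, 128, 128)               # 1-channel depth image F_s
action = np.array([0.7, 0.02, 0.0, -0.01, 0.05])  # (a_lambda, a_x, a_y, a_z, a_theta)
g_state = rotate_state(state, k=1)                # rotate the scene by 90 degrees
g_action = rotate_action(action, np.pi / 2)       # the commanded xy displacement rotates with it
```

Transition invariance in Definition 4.1 then states that executing g_action in g_state leads to the rotated version of the next state that action would produce in state.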
The action is a tuple, a = (aλ, axy, az, aθ) ∈ A ⊂ R5, where aλ ∈ Aλ denotes the commanded gripper aperture, axy ∈ Axy denotes the commanded change in gripper xy position, az ∈ Az denotes the commanded change in gripper height, and aθ ∈ Aθ denotes the commanded change in gripper orientation. Here, the xy action is equivariant with g ∈ Cn,Aequiv = Axy , and the rest of the action variables are invariant,Ainv = Aλ×Az×Aθ. Notice that the transition dynamics are Cn-invariant (i.e. T (s, a, s′) = T (gs, ga, gs′)) because the Newtonian physics of the interaction are invariant to the choice of reference frame. If we constrain the reward function to be Cn-invariant as well, then the resulting MDP is Cn-invariant. 5 APPROACH 5.1 EQUIVARIANT DQN In DQN, we assume we have a discrete action space, and we learn the parameters of a Q-network that maps from the state onto action values. Given a G-invariant MDP, Proposition 4.1 tells us that the optimal Q-function is G-invariant. Therefore, we encode the Q-function using an equivariant neural network that is constrained to represent only G-invariant Q-functions. First, in order to use DQN, we need to discretize the action space. Let Aequiv ⊂ Aequiv and Ainv ⊂ Ainv be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function Fa : Aequiv → RAinv from the equivariant action variables in Aequiv to the Q values of the invariant action variables in Ainv. For example, in the robotic manipulation domain described Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1, and we define Aequiv and Ainv accordingly. We now encode the Q network q as a stack of equivariant layers that each encode the equivariant constraint of Equation 3. Since the composition of equivariant layers is equivariant, q satisfies: q(gFs) = g(q(Fs)) = gFa, (5) where we have substituted Fin = Fs and Fout = Fa. In the above, the rotation operator g ∈ Cn is applied using Equation 1 as gFa(axy) = ρ0(g)Fa(ρ1(g)−1(axy)). Figure 3 illustrates this equivariance constraint for the robotic manipulation example with |Aequiv| = |Axy| = 9. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in Axy are also rotated similarly. The detailed network architecture is shown in Appendix D.1. Our architecture is different from that in Mondal et al. (2020) in that we associate the action of g on Aequiv and Ainv with the group action on the spatial dimension and the channel dimension of a feature map Fa, which is more efficient than learning such mapping using FC layers. 5.2 EQUIVARIANT SAC In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network Π (the actor) and an action-value network Q (the critic) (Haarnoja et al., 2018). The critic Q : S ×A→ R approximates Q values in the typical way. However, the actor Π : S → A×Aσ estimates both the mean and standard deviation of action for a given state. Here, we define Aσ = Rk to be the domain of the standard deviation variables over the k-dimensional action space defined in Section 4.2. Since Proposition 4.1 tells us that the optimal Q is invariant and the optimal policy is equivariant, we must model Q as an invariant network and Π as an equivariant network. Policy network: First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function Fs. However, we must now express the action as a vector over Ā = A×Aσ . 
Factoring A into its equivariant and invariant components, we have Ā = Aequiv × Ainv × Aσ. In order to identify the equivariance relation for Ā, we must define how the group operator g ∈ G acts on aσ ∈ Aσ. Here, we make the simplifying assumption that aσ is invariant to the group operator. This choice makes sense in robotics domains where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element g ∈ G acts on ā ∈ Ā via: gā = g(aequiv, ainv, aσ) = (ρequiv(g)aequiv, ainv, aσ). (6) We can now define the actor network π to be a mapping Fs ↦ ā (Figure 4 top) that satisfies the following equivariance constraint (Equation 3): π(gFs) = g(π(Fs)) = gā. (7) Critic network: The critic network takes both state and action as input and maps onto a real value. We define two equivariant networks: a state encoder e and a Q network q. The equivariant state encoder, e, maps the input state Fs onto a regular representation s̄ ∈ (Rn)α, where each of the n group elements is associated with an α-vector. Since s̄ has a regular representation, we have gs̄ = ρreg(g)s̄. Writing the equivariance constraint of Equation 3 for e, we have that e must satisfy e(gFs) = ge(Fs) = gs̄. The output state representation s̄ is concatenated with the action a ∈ A, producing w = (s̄, a). The action of the group operator is now gw = (gs̄, ga), where ga = (ρequiv(g)aequiv, ainv). Finally, the q network maps from w onto R, a real-valued estimate of the Q value for w. Based on Proposition 4.1, this network must be invariant to the group action: q(gw) = q(w). Altogether, the critic satisfies the following invariance equation: q(e(gFs), ga) = q(e(Fs), a). (8) This network is illustrated at the bottom of Figure 4. For the robotic manipulation domain in Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1. The detailed network architecture is in Appendix D.2. Preventing the critic from becoming overconstrained: In the model architecture above, the hidden layer of q is represented using a vector in the regular representation and the output of q is encoded using the trivial representation. However, Schur’s Lemma (see e.g. Dummit & Foote (1991)) implies that there exists only a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., x = a ∑i vi, where x is a trivial representation, a is a constant, and v is a regular representation). This implies that a linear mapping f : Rn × Rn → R from two regular representations to a trivial representation that satisfies f(gv, gw) = f(v, w) for all g ∈ G will also satisfy f(g1v, w) = f(v, w) and f(v, g2w) = f(v, w) for all g1, g2 ∈ G. (See details in Appendix B.) In principle, this could overconstrain the last layer of q to encode additional undesired symmetries. To avoid this problem, we use a non-linear equivariant mapping, maxpool, over the group space to transform the regular representation into the trivial representation. 5.3 EQUIVARIANT SACFD Many of the problems we want to address cannot be solved without guiding the agent’s exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner.
Second, we introduce the following L2 term into the SAC actor’s loss function: Lactor = LSAC + 1e [ (1/2)((a ∼ π(s)) − ae)² ], (9) where LSAC is the actor’s loss term in standard SAC, 1e = 1 if the sampled transition is an expert demonstration and 0 otherwise, a ∼ π(s) is an action sampled from the output Gaussian distribution of π(s), and ae is the expert action. Since both the sampled action a ∼ π(s) and the expert action ae transform equivalently, Lactor is compatible with the equivariance we introduce in Section 5.2. We refer to this method as SACfD (SAC from Demonstration); a minimal code sketch of this objective is given below. 6 EXPERIMENTS We evaluate Equivariant DQN and Equivariant SAC in the manipulation tasks shown in Figure 5. These tasks can be formulated as SO(2)-invariant MDPs. All environments have sparse rewards (+1 when reaching the goal and 0 otherwise). See environment details in Appendix C. 6.1 EQUIVARIANT DQN We evaluate Equivariant DQN in the Block Pulling, Object Picking, and Drawer Opening tasks for the group C4. The discrete action space is Aλ = {OPEN, CLOSE}; Axy = {(x, y) | x, y ∈ {−0.02m, 0m, 0.02m}}; Az = {−0.02m, 0m, 0.02m}; Aθ = {−π/16, 0, π/16}. Note that the definition of Axy and g ∈ C4 satisfies the closure requirement of the action space in the sense that ∀axy ∈ Axy, ∀g ∈ C4, ρ1(g)axy ∈ Axy. We compare Equivariant DQN (Equi DQN) against the following baselines: 1) CNN DQN: DQN with a conventional CNN instead of the equivariant network, where the conventional CNN has a similar number of trainable parameters (3.9M) to the equivariant network (2.6M). 2) RAD Crop DQN (Laskin et al., 2020a): same network architecture as CNN DQN. At each training step, a random-crop data augmentation is applied to each transition in the minibatch. 3) DrQ Shift DQN (Kostrikov et al., 2020): same network architecture as CNN DQN. At each training step, both the Q-targets and the TD losses are calculated by averaging over two random-shift augmented transitions. 4) CURL DQN (Laskin et al., 2020b): similar architecture to CNN DQN with an extra contrastive loss term that learns an invariant encoder from random crop augmentations. See the baseline details in Appendix E. At the beginning of each training process, we pre-populate the replay buffer with 100 episodes of expert demonstrations. Figure 6 compares the learning curves of the various methods. Equivariant DQN learns faster and converges to a higher discounted reward in all three environments. 6.2 EQUIVARIANT SAC In this experiment, we evaluate the performance of Equivariant SAC (Equi SAC) for the group C8. The continuous action space is: Aλ = [0, 1]; Axy = {(x, y) | x, y ∈ [−0.05m, 0.05m]}; Az = [−0.05m, 0.05m]; Aθ = [−π/8, π/8]. We compare against the following baselines: 1) CNN SAC: SAC with conventional CNNs rather than equivariant networks, where the conventional CNN has a similar number of trainable parameters (2.6M) to the equivariant network (2.3M). 2) RAD Crop SAC (Laskin et al., 2020a): same model architecture as CNN SAC with random crop data augmentation when sampling transitions. 3) DrQ Shift SAC (Kostrikov et al., 2020): same model architecture as CNN SAC with random shift data augmentation when calculating the Q-target and the loss. 4) FERM (Zhan et al., 2020): a combination of SAC, contrastive learning, and random crop augmentation (baseline details in Appendix E).
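Before turning to the remaining experimental setup, here is a minimal PyTorch sketch of the SACfD actor objective in Equation 9; the batching, the per-sample expert mask, and the reduction over the batch are our own choices, since the paper does not spell them out.

```python
import torch

def sacfd_actor_loss(sac_actor_loss: torch.Tensor,
                     sampled_action: torch.Tensor,   # a ~ pi(s), reparameterized so gradients flow
                     expert_action: torch.Tensor,    # a_e (arbitrary values for non-expert samples)
                     is_expert: torch.Tensor) -> torch.Tensor:
    """Eq. 9: add an L2 term between the sampled and expert action, applied only
    on transitions drawn from the demonstrations (is_expert is a per-sample 0/1 mask)."""
    bc_term = 0.5 * ((sampled_action - expert_action) ** 2).sum(dim=-1)
    # sac_actor_loss is assumed to already be the batch-averaged standard SAC actor loss.
    return sac_actor_loss + (is_expert * bc_term).mean()
```

Because the sampled action and the expert action transform identically under g, this added term preserves the equivariance constraint from Section 5.2.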
All methods use a SO(2) data augmentation buffer, where every time a new transition is added, we generate 4 more augmented transitions by applying random continuous rotations to both the image and the action (this data augmentation in the buffer is in addition to the data augmentation that is performed in the RAD DrQ, and FERM baselines). Prior to each training run, we pre-load the replay buffer with 20 episodes of expert demonstration. Figure 7 shows the comparison among the various methods. Notice that Equivariant SAC outperforms the other methods significantly. Without the equivariant approach, Object Picking and Drawer Opening appear to be infeasible for the baseline methods. In Block Pulling, FERM is the only other method able to solve the task. 6.3 EQUIVARIANT SACFD We want to explore our equivariant methods in the context of more challenging tasks such as those in the bottom row of Figure 5. However, since these tasks are too difficult to solve without some kind of guided exploration, we augment the Equivariant SAC as well as all the baselines in two ways: 1) we use SACfD as described in Section 5.3; 2) we use Prioritized Experience Replay (Schaul et al., 2015) rather than standard replay buffer. As in Section 6.2, we use the SO(2) data augmentation in the buffer that generates 4 extra SO(2)-augmented transitions whenever a new transition is added. Figure 8 shows the results. First, note that our Equivariant SACfD does best on all four tasks, followed by FERM, and other baselines. Second, notice that only the equivariant method can solve the last three (most challenging tasks). This suggests that equivariant models are important not only for unstructured reinforcement learning, but also for learning from demonstration. Additional results for Block Pulling and Object Picking environments are shown in Appendix G. 6.4 COMPARING WITH LEARNING EQUIVARIANCE USING AUGMENTATION In the previous experiments, we compare against the data augmentation baselines using the same data augmentation operators that the authors proposed (random crop in RAD (Laskin et al., 2020a) and random shift in DrQ (Kostrikov et al., 2020)). However, those two methods can also be modified to learn SO(2) equivariance using SO(2) data augmentation. Here, we explore this idea as an alternative to our equivariant model. Specifically, instead of augmenting on the state as in Laskin et al. (2020a) and Kostrikov et al. (2020) using only translation, we apply the SO(2) augmentation in both the state and the action. Since the RAD and DrQ baselines in this section are already running SO(2) augmentations themselves, we disable the SO(2) buffer augmentation for the online transitions in those baselines. (See the result of RAD and DrQ with the SO(2) data augmentation buffer in Appendix H.4). We compare the resulting version of RAD (RAD SO(2) SACfD) and DrQ (DrQ SO(2) SACfD) with our Equivariant SACfD in Figure 9. Our method outperforms both RAD and DrQ equipped with SO(2) data augmentation. Additional results for Block Pulling and Object Picking are shown in Appendix G. 6.5 GENERALIZATION EXPERIMENT This experiment evaluates the ability for the equivariant model to generalize over the equivariance group. We use a similar experimental setting as in Section 6.3. However, now the training environment is always initialized with a fixed orientation rather than a random orientation. 
For example, in Block Pulling, the two blocks are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. In the evaluation environment, however, these objects are initialized with random orientations. To succeed, the agent needs to generalize over varied orientations while being trained with a fixed orientation. To prevent the agent from generalizing via augmentation, we disable the SO(2) augmentation in the buffer. As shown in Figure 10, Equivariant SACfD generalizes better than the baselines. Even though the equivariant network is presented with only one orientation during training, it successfully generalizes over random orientations whereas none of the baselines can. 7 DISCUSSION This paper defines a class of group-invariant MDPs and identifies the invariance and equivariance characteristics of their optimal solutions. This paper further proposes Equivariant SAC and a new variation of Equivariant DQN for continuous and discrete action spaces, respectively. We show experimentally in robotic manipulation domains that our proposal substantially surpasses the performance of competitive baselines. A key limitation of this work is that our definition of G-invariant MDPs requires the MDP to have an invariant reward function and an invariant transition function. Though such restrictions are often applicable in robotics, they limit the potential of the proposed methods in other domains like some ATARI games. Furthermore, if the observation is from a non-top-down perspective, or there are non-equivariant structures in the observation (e.g., the robot arm), the invariance assumptions of a G-invariant MDP will not be directly satisfied. ACKNOWLEDGMENTS This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, and NASA 80NSSC19K1474. R. Walters is supported by the Roux Institute and the Harold Alfond Foundation and NSF grants 2107256 and 2134178. A PROOF OF PROPOSITION 4.1 The proof in this section follows Wang et al. (2021). Note that the definition of a group action · : G × X → X implies that elements g ∈ G act by bijections on X, since the action of g−1 gives a two-sided inverse for the action of g. That is, g permutes the elements of X. Proof of Proposition 4.1. For g ∈ G, we will first show that the optimal Q-function is G-invariant, i.e., Q∗(s, a) = Q∗(gs, ga), then show that the optimal policy is G-equivariant, i.e., π∗(gs) = gπ∗(s). (1) Q∗(s, a) = Q∗(gs, ga): The Bellman optimality equations for Q∗(s, a) and Q∗(gs, ga) are, respectively: $Q^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(s', a')$, (10) and $Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, s')\, Q^*(s', a')$. (11) Since g ∈ G merely permutes the elements of S, we can re-index the integral using s̄′ = gs′: $Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{\bar{a}' \in gA} \int_{\bar{s}' \in gS} T(gs, ga, \bar{s}')\, Q^*(\bar{s}', \bar{a}')$ (12) $= R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, gs')\, Q^*(gs', ga')$. (13) Using the Reward Invariance and the Transition Invariance in Definition 4.1, this can be written: $Q^*(gs, ga) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(gs', ga')$. (14) Now, define a new function Q̄∗ such that ∀(s, a) ∈ S × A, Q̄∗(s, a) = Q∗(gs, ga), and substitute into Eq. 14, resulting in: $\bar{Q}^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, \bar{Q}^*(s', a')$. (15) Notice that Eq. 15 and Eq. 10 are the same Bellman equation. Since solutions to the Bellman equation are unique, we have that ∀(s, a) ∈ S × A, Q∗(s, a) = Q̄∗(s, a) = Q∗(gs, ga).
(2) π∗(gs) = gπ∗(s): The optimal policies π∗(s) and π∗(gs) can be written in terms of the optimal Q-function, Q∗, as: $\pi^*(s) = \arg\max_{a \in A} Q^*(s, a)$ (16) and $\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(gs, \bar{a})$. (17) Using the invariance property of Q∗, we can substitute Q∗(gs, ā) with Q∗(s, g−1ā) in Equation 17: $\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(s, g^{-1}\bar{a})$. (18) Letting ā = ga, Equation 18 can be written as: $\pi^*(gs) = g\left[\arg\max_{a \in A} Q^*(s, g^{-1}ga)\right]$. (19) Cancelling g−1 and g and substituting Equation 16, we have π∗(gs) = gπ∗(s). (20) B EQUIVARIANCE OVERCONSTRAINT Proposition B.1. Let f : Vreg ⊕ Vreg → Vtriv be a linear Cn-equivariant function. Then f(v, w) = a ∑i vi + b ∑i wi. Proof. By Weyl decomposability (Hall, 2003), Vreg decomposes into irreducible representations for Cn, each with multiplicity determined by its dimension. Among these is the trivial representation with multiplicity 1. By Schur’s lemma (Dummit & Foote, 1991), the mapping Vreg ⊕ Vreg → Vtriv must factor through the trivial representation embedded in Vreg. The projection onto the trivial representation is given by v ↦ a ∑i vi. The result follows by linearity. As a corollary, we find that Cn-equivariant maps Vreg ⊕ Vreg → Vtriv are actually Cn × Cn-equivariant. Let (g1, g2) ∈ Cn × Cn; then, applying the proposition, f(g1v, g2w) = a ∑i (g1v)i + b ∑i (g2w)i = a ∑i vi + b ∑i wi = f(v, w). C ENVIRONMENT DETAILS In all environments, the environment reset is conducted by randomly initializing the objects with random positions and orientations inside the workspace. The arm is always initialized at the same configuration. The workspace has a size of 0.4m × 0.4m × 0.24m. All environments have a sparse reward, i.e., the agent acquires a +1 reward when reaching the goal state, and 0 otherwise. In the PyBullet simulator, the robot joints have enough compliance to allow the gripper to apply force on the block in the Corner Picking task. We augment the state image with an additional binary channel (i.e., either all pixels are 1 or all pixels are 0) indicating if the gripper is holding an object. Note that this additional channel is invariant to rotations (because all pixels have the same value), so it won’t break the proposed equivariance properties. The Block Pulling requires the robot to pull one block to make contact with the other block. The Object Picking requires the robot to pick up an object randomly sampled from a set of 11 objects (Figure 11). The Drawer Opening requires the robot to pull open a drawer. The Block Stacking requires the robot to stack one block on top of another. The House Building requires the robot to stack a triangle roof on top of a block. The Corner Picking requires the robot to slide the block from the corner and then pick it up. D NETWORK ARCHITECTURE Our equivariant models are implemented using the E2CNN (Weiler & Cesa, 2019) library with PyTorch (Paszke et al., 2017). D.1 EQUIVARIANT DQN ARCHITECTURE In the Equivariant DQN, we use a 7-layer Steerable CNN defined over the group C4 (Figure 12a). The input Fs is encoded as a 2-channel ρ0 feature map, and the output is an 18-channel 3 × 3 ρ0 feature map where the channels encode the invariant actions Ainv and the spatial dimensions encode Axy. D.2 EQUIVARIANT SAC ARCHITECTURE In the Equivariant SAC, there are two separate networks, both Steerable CNNs defined over the group C8. The actor π (Figure 12b top) is an 8-layer network that takes in a 2-channel ρ0 feature map (Fs) and outputs a mixed-representation-type 1 × 1 feature map (ā) consisting of 1 ρ1 feature for axy and 8 ρ0 features for ainv and aσ.
The critic (Figure 12b bottom) is a 9-layer network that takes in both Fs, as a 2-channel ρ0 feature map, and a, as a 1 × 1 mixed-representation feature map consisting of 1 ρ1 feature for axy and 3 ρ0 features for ainv. The upper path e encodes Fs into a 64-channel regular-representation feature map s̄ with 1 × 1 spatial dimensions, then concatenates it with a. Two separate Q-value paths q take in the concatenated feature map and generate two Q-estimates, each in the form of a 1 × 1 ρ0 feature. The non-linear maxpool layer is used for transforming regular representations into trivial representations to prevent the equivariance overconstraint (Section 5.2). Note that there are two Q outputs, as required by the SAC algorithm. E BASELINE DETAILS Figure 13 shows the baseline network architectures for DQN and SAC. The RAD (Laskin et al., 2020a) Crop baselines, CURL (Laskin et al., 2020b) baselines, and FERM (Zhan et al., 2020) baselines use random crop for data augmentation. The random crop crops a 142 × 142 state image to the size of 128 × 128. The contrastive encoder of the CURL baselines has a size of 128 as in Laskin et al. (2020b), and that of the FERM baselines has a size of 50 as in Zhan et al. (2020). The FERM baseline’s contrastive encoder is pretrained for 1.6k steps using the expert data as in Zhan et al. (2020). The DrQ (Kostrikov et al., 2020) Shift baselines use a random shift of ±4 pixels for data augmentation as in the original work. In all DrQ baselines, the number of augmentations for calculating the target, K, and the number of augmentations for calculating the loss, M, are both 2 as in Kostrikov et al. (2020). F TRAINING DETAILS We implement our experimental environments in the PyBullet simulator (Coumans & Bai, 2016). The workspace’s size is 0.4m × 0.4m × 0.24m. The pixel size of the visual state I is 128 × 128 (except for the RAD Crop baselines, CURL baselines, and FERM baselines, where I’s size is 142 × 142 and will be cropped to 128 × 128). I’s FOV is 0.6m × 0.6m. During training, we use 5 parallel environments. We implement all training in PyTorch (Paszke et al., 2017). Both DQN and SAC use a soft target update with τ = 10−2. In the DQN experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10−4. We use the Huber loss (Huber, 1964) for calculating the TD loss. We use a discount factor γ = 0.95. The batch size is 32. The buffer has a capacity of 100,000 transitions. In the SAC (and SACfD) experiments, we use the Adam optimizer with a learning rate of 10−3. The entropy temperature α is initialized at 10−2. The target entropy is -5. The discount factor is γ = 0.99. The batch size is 64. The buffer has a capacity of 100,000 transitions. SACfD uses the prioritized replay buffer (Schaul et al., 2015) with a prioritized replay exponent of 0.6 and a prioritized importance sampling exponent β0 = 0.4 as in Schaul et al. (2015). The expert transitions are given a priority bonus of d = 1. G ADDITIONAL EXPERIMENTAL RESULTS FOR EQUIVARIANT SACFD Figure 14 (a)-(b) shows the results for the experiment of Section 6.3 in the Block Pulling and Object Picking environments. Equivariant SACfD outperforms all baselines in those two environments. Figure 14 (c)-(d) shows the results for the experiment of Section 6.4 in the Block Pulling and Object Picking environments. Similar to the results in Figure 9, our Equivariant SACfD outperforms both RAD and DrQ equipped with SO(2) data augmentation.
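To ground the description of the actor in Appendix D.2, here is a minimal e2cnn sketch of a mixed-representation actor head. It is far shallower than the paper's 8-layer model, and the layer widths, kernel sizes, 64 × 64 toy input, and the use of gspace.irrep(1) for the ρ1 output field are our illustrative assumptions rather than the authors' exact architecture.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

gspace = gspaces.Rot2dOnR2(N=8)  # C_8, as in Appendix D.2

# Input: 2-channel rho_0 (trivial) image; hidden features use the regular representation.
in_type = enn.FieldType(gspace, 2 * [gspace.trivial_repr])
hid_type = enn.FieldType(gspace, 32 * [gspace.regular_repr])
# Output: one rho_1 field for the equivariant xy mean, plus 8 rho_0 fields for the
# invariant action means (lambda, z, theta) and the 5 standard deviations.
out_type = enn.FieldType(gspace, [gspace.irrep(1)] + 8 * [gspace.trivial_repr])

actor = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=7, stride=7),   # 64x64 -> 9x9
    enn.ReLU(hid_type),
    enn.R2Conv(hid_type, hid_type, kernel_size=9),            # 9x9 -> 1x1
    enn.ReLU(hid_type),
    enn.R2Conv(hid_type, out_type, kernel_size=1),            # mixed-representation 1x1 output
)

obs = enn.GeometricTensor(torch.randn(1, 2, 64, 64), in_type)
a_bar = actor(obs).tensor.reshape(1, -1)  # (xy mean, invariant means, standard deviations)
```

Rotating the input by an element of C8 rotates the first two output components (the xy mean) by ρ1 and leaves the remaining eight components unchanged, which is the constraint in Equation 7.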
H ABLATION STUDIES H.1 USING EQUIVARIANT NETWORK ONLY IN ACTOR OR CRITIC In this experiment, we investigate the effectiveness of the equivariant network in SACfD by applying it only in the actor network or only in the critic network. We evaluate four variations: 1) Equi Actor + Equi Critic, which uses the equivariant network in both the actor and the critic; 2) Equi Actor + CNN Critic, which uses the equivariant network solely in the actor and a conventional CNN in the critic; 3) CNN Actor + Equi Critic, which uses a conventional CNN in the actor and the equivariant network in the critic; 4) CNN Actor + CNN Critic, which uses a conventional CNN in both the actor and the critic. The rest of the experimental setup mirrors Section 6.3. As shown in Figure 15, applying the equivariant network in the actor generally helps more than applying it in the critic (in 5 out of 6 experiments), and using the equivariant network in both the actor and the critic always yields the best performance. H.2 DIFFERENT SYMMETRY GROUPS This experiment compares equivariant networks defined over three different symmetry groups: C8, C4, and C2. We run this experiment in SACfD with the same setup as in Section 6.3. As shown in Figure 16, the network defined over C8 generally outperforms the network defined over C4, followed by the network defined over C2. H.3 EQUIVARIANT SACFD IN NON-SYMMETRIC ENVIRONMENTS This experiment evaluates the performance of Equivariant SACfD in non-symmetric tasks where the initial orientation of the environment is fixed rather than random (similar to Section 6.5, but here both the training and the evaluation environments have the fixed orientation). Specifically, in Block Pulling, the two blocks in the training environment are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. As shown in Figure 17, when the environments do not contain SO(2) symmetries, the performance gain from using the equivariant network is less significant. H.4 ROTATIONAL AUGMENTATION + BUFFER AUGMENTATION Section 6.4 compares our Equivariant SACfD with rotational data augmentation baselines. This experiment shows the performance of those baselines (and an extra CNN SACfD baseline that uses a conventional CNN) equipped with the data augmentation buffer. As mentioned in Section 6.2, the data augmentation buffer creates 4 extra augmented transitions using a random SO(2) rotation every time a new transition is added. Figure 18 shows the result: none of the baselines outperform our proposal in any task. Compared with Figure 9, the data augmentation buffer hurts RAD and DrQ because it duplicates the kind of data augmentation those methods already perform.
1. What is the main contribution of the paper regarding equivariant architecture for RL? 2. What are the strengths of the proposed approach, particularly in terms of sample efficiency and generalization? 3. What are the weaknesses or concerns regarding the paper's claims and experiments? 4. How does the reviewer assess the clarity and applicability of the paper's content? 5. Are there any questions or issues that the reviewer raises regarding the paper's methodology or results?
Summary Of The Paper Review
Summary Of The Paper This paper develops equivariant architectures for RL, specifically DQN and SAC. The theoretical exposition is general, and a specific SO(2) instantiation is shown to outperform baselines on image-based RL tasks. Review Strengths Very clearly motivates sample efficiency + generalization and articulates the inductive bias from the perspective of adding equivariance to the model vs doing data augmentation. Results do show better data efficiency compared to baselines. Weakness/Comments Results don't get into generalization, but the claims/motivation in the intro and related work leave the impression that this axis would be investigated. The exposition of the approach is a bit hard to parse; sprinkling in some intuition and grounding things in the application would be helpful. What is the input? Is it the kind of images shown in Fig 5 or the depth map in Fig 2b? Some of the assumptions for this depth map don't make sense from a real application perspective: (i) if a depth camera is mounted near the robot end-effector, it won't be possible to get the image in the base frame; if it is mounted overhead, then there will be occlusions, so the object may not always be visible; (ii) depending on the task, the gripper will often partially/fully occlude the object -- how much of a problem is this? (iii) both Fig 5 and 2b are very idealized, and in practice many other visual distractors would be present. The claim in Sec 7 about "transfers easily to real-world environments" is not well supported, and given the points above (and possibly others) this is in fact going to be quite challenging. A more nuanced coverage of limitations would help constrain where this method works and where it doesn't, rather than surface-level limitations of learning in simulation and delta state-based actions.
ICLR
Title $\mathrm{SO}(2)$-Equivariant Reinforcement Learning Abstract Equivariant neural networks enforce symmetry within the structure of their convolutional layers, resulting in a substantial improvement in sample efficiency when learning an equivariant or invariant function. Such models are applicable to robotic manipulation learning which can often be formulated as a rotationally symmetric problem. This paper studies equivariant model architectures in the context ofQ-learning and actor-critic reinforcement learning. We identify equivariant and invariant characteristics of the optimal Q-function and the optimal policy and propose equivariant DQN and SAC algorithms that leverage this structure. We present experiments that demonstrate that our equivariant versions of DQN and SAC can be significantly more sample efficient than competing algorithms on an important class of robotic manipulation problems. 1 INTRODUCTION A key challenge in reinforcement learning is to improve sample efficiency – that is to reduce the amount of environmental interactions that an agent must take in order to learn a good policy. This is particularly important in robotics applications where gaining experience potentially means interacting with a physical environment. One way of improving sample efficiency is to create “artificial” experiences through data augmentation. This is typically done in visual state spaces where an affine transformation (e.g., translation or rotation of the image) is applied to the states experienced during a transition (Laskin et al., 2020a; Kostrikov et al., 2020). These approaches implicitly assume that the transition and reward dynamics of the environment are invariant to affine transformations of the visual state. In fact, some approaches explicitly use a contrastive loss term to induce the agent to learn translation-invariant feature representations (Laskin et al., 2020b; Zhan et al., 2020). Recent work in geometric deep learning suggests that it may be possible to learn transformationinvariant policies and value functions in a different way, using equivariant neural networks (Cohen & Welling, 2016a;b). The key idea is to structure the model architecture such that it is constrained only to represent functions with the desired invariance properties. In principle, this approach aim at exactly the same thing as the data augmentation approaches described above – both methods seek to improve sample efficiency by introducing an inductive bias. However, the equivariance approach achieves this more directly by modifying the model architecture rather than by modifying the training data. Since with data augmentation, the model must learn equivariance in addition to the task itself, more training time and greater model capacity are often required. Even then, data augmentation results only in approximate equivariance whereas equivariant neural networks guarantee it and often have stronger generalization as well (Wang et al., 2020b). While equivariant architectures have recently been applied to reinforcement learning (van der Pol et al., 2020a;b; Mondal et al., 2020), this has been done only in toy settings (grid worlds, etc.) where the model is equivariant over small finite groups, and the advantages of this approach over standard methods is less clear. This paper explores the application of equivariant methods to more realistic problems in robotics such as object manipulation. We make several contributions. First, we define and analyze an important class of MDPs that we call group-invariant MDPs. 
Second, we introduce a new variation of the Equivariant DQN (Mondal et al., 2020), and we further introduce equivariant variations of SAC (Haarnoja et al., 2018), and learning from demonstration (LfD). Finally, we show that our methods convincingly outperform recent competitive data augmentation approaches (Laskin et al., 2020a; Kostrikov et al., 2020; Laskin et al., 2020b; Zhan et al., 2020). Our Equivariant SAC method, in particular, outperforms these baselines so dramatically (Figure 7) that it could make reinforcement learning feasible for a much larger class of robotics problems than is currently the case. Supplementary video and code are available at https://pointw.github.io/equi_rl_page/. 2 RELATED WORK Equivariant Learning: Encoding symmetries in the structure of neural networks can improve both generalization and sample efficiency. The idea of equivariant learning is first introduced in GConvolution (Cohen & Welling, 2016a). The extension work proposes an alternative architecture, Steerable CNN (Cohen & Welling, 2016b). Weiler & Cesa (2019) proposes a general framework for implementing E(2)-Steerable CNNs. In the context of reinforcement learning, Mondal et al. (2020) investigates the use of Steerable CNNs in the context of two game environments. van der Pol et al. (2020b) proposes MDP homomorphic networks to encode rotational and reflectional equivariance of an MDP but only evaluates their method in a small set of tasks. In robotic manipulation, Wang et al. (2021) learns equivariant Q-functions but is limited in the spatial action space. In contrast to prior work, this paper proposes an Equivariant SAC algorithm, an equivariant LfD algorithm, and a novel variation of Equivariant DQN (Mondal et al., 2020) focusing on visual motor control problems. Data Augmentation: Another popular method for improving sample efficiency is data augmentation. Recent works demonstrate that the use of simple data augmentation methods like random crop or random translate can significantly improve the performance of reinforcement learning (Laskin et al., 2020a; Kostrikov et al., 2020). Data augmentation is often used for generating additional samples (Kalashnikov et al., 2018; Lin et al., 2020; Zeng et al., 2020) in robotic manipulation. However, data augmentation methods are often less sample efficient than equivariant networks because the latter injects an inductive bias to the network architecture. Contrastive Learning: Data augmentation is also applied with contrastive learning (Oord et al., 2018) to improve feature extraction. Laskin et al. (2020b) show significant sample-efficiency improvement by adding an auxiliary contrastive learning term using random crop augmentation. Zhan et al. (2020) use a similar method in the context of robotic manipulation. However, contrastive learning is limited to learning an invariant feature encoder and is not capable of learning equivariant functions. Close-Loop Robotic Control: There are two typical action space definitions when learning policies that control the end-effector of a robot arm: the spatial action space that controls the target pose of the end-effector (Zeng et al., 2018b;a; Satish et al., 2019; Wang et al., 2020a), or the close-loop action space that controls the displacement of the end-effector. The close-loop action space is widely used for learning grasping policies (Kalashnikov et al., 2018; Quillen et al., 2018; Breyer et al., 2019; James et al., 2019). 
Recently, some works also learn more complex policies than grasping (Viereck et al., 2020; Kilinc et al., 2019; Cabi et al., 2020; Zhan et al., 2020). This work extends prior works in the close-loop action space by using equivariant learning to improve the sample efficiency. 3 BACKGROUND SO(2) and Cn: We will reason about rotation in terms of the group SO(2) and its cyclic subgroup Cn ≤ SO(2). SO(2) is the group of continuous planar rotations {Rotθ : 0 ≤ θ < 2π}. Cn is the discrete subgroup Cn = {Rotθ : θ ∈ { 2πin |0 ≤ i < n}} of rotations by multiples 2π n . Cn actions: A groupGmay be equipped with an action on a setX by specifying a map · : G×X → X satisfying g1 · (g2 · x) = (g1g2) · x and 1 · x = x for all g1, g2 ∈ G, x ∈ X . Note that closure, gx ∈ X , and invertibility, g−1gx = x, follow immediately from the definition. We are interested in actions of Cn which formalize how vectors or feature maps transform under rotation. The group Cn acts in three ways that concern us (for a more comprehensive background, see Bronstein et al. (2021)): 1. R through the trivial representation ρ0. Let g ∈ Cn and x ∈ R. Then ρ0(g)x = x. For example, the trivial representation describes how pixel color/depth values change when an image is rotated, i.e. they do not change (Figure 1 left). 2. R2 through the standard representation ρ1. Let g ∈ Cn and v ∈ R2. Then ρ1(g)v =( cos g − sin g sin g cos g ) v. This describes how elements of a vector field change when rotated (Figure 1 middle). 3. Rn through the regular representation ρreg. Let g = rm ∈ Cn = {1, r, r2, . . . , rn−1} and (x1, x2, . . . , xn) ∈ Rn. Then ρreg(g)x = (xn−m+1, . . . , xn, x1, x2, . . . , xn−m) cyclically permutes the coordinates of Rn (Figure 1 right). Feature maps as functions: In deep learning, images and feature maps are typically expressed as tensors. However, it will be convenient here to sometimes express these as functions. Specifically, we may write an h× w one-channel image F ∈ R1×h×w as a function F : R2 → R where F(x, y) describes the intensity at pixel x, y. Similarly, an m-channel tensor F ∈ Rm×h×w may be written as F : R2 → Rm. We refer to the domain of this function as its “spatial dimensions”. Cn actions on vectors and feature maps: Cn acts on vectors and feature maps differently depending upon their semantics. We formalize these different ways of acting as follows. Let F : R2 → Rm be an m-channel feature map and let V ∈ Rm×1×1 = Rm be a vector represented as a special case of a feature map with 1× 1 spatial dimensions. Then g is defined to act on F by (gF)(x, y) = ρj(g)F(ρ1(g)−1(x, y)). (1) For a vector V (considered to be at (x, y) = (0, 0)), this becomes: gV = ρj(g)V. (2) In the above, ρ1(g) rotates pixel location and ρj(g) transforms the pixel feature vector using the trivial representation (ρj = ρ0), the standard representation (ρj = ρ1), the regular representation (ρj = ρreg), or some combination thereof. Equivariant convolutional layer: A Cn-equivariant layer is a function h whose output is constrained to transform in a defined way when the input feature map is transformed by a group action. Consider an equivariant layer h with an input Fin : R2 → R|ρin| and an output Fout : R2 → R|ρout| , where ρin and ρout denote the group representations associated with Fin and Fout, respectively. When the input is transformed, this layer is constrained to output a transformed version of the same output feature map: h(gFin) = g(h(Fin)) = gFout. 
(3) where g ∈ Cn acts on Fin or Fout through Equation 1 or Equation 2, i.e., this constraint equation can be applied to arbitrary feature maps F or vectors V . A linear convolutional layer h satisfies Equation 3 with respect to the group Cn if the convolutional kernel K : R2 → R|ρout|×|ρin| has the following form (Cohen et al., 2018): K(ρ1(g)v) = ρ −1 out(g)K(v)ρin(g). (4) Since the composition of equivariant maps is equivariant, a fully convolutional equivariant network can be constructed by stacking equivariant convolutional layers that satisfy the constraint of Equation 3 and together with equivariant non-linearities (Weiler & Cesa, 2019). 4 PROBLEM STATEMENT 4.1 GROUP-INVARIANT MDPS In a group-invariant MDP, the transition and reward functions are invariant to group elements g ∈ G acting on the state and action space. For state s ∈ S, action a ∈ A, and g ∈ G, let gs ∈ S denote the action of g on s and ga ∈ A denote the action of g on a. Definition 4.1 (G-invariant MDP). A G-invariant MDPMG = (S,A, T,R,G) is an MDPM = (S,A, T,R) that satisfies the following conditions: 1. Reward Invariance: The reward function is invariant to the action of the group element g ∈ G, R(s, a) = R(gs, ga). 2. Transition Invariance: The transition function is invariant to the action of the group element g ∈ G, T (s, a, s′) = T (gs, ga, gs′). A key feature of a G-invariant MDP is that its optimal solution is also G-invariant (proof in Appendix A): Proposition 4.1. LetMG be a group-invariant MDP. Then its optimal Q-function is group invariant, Q∗(s, a) = Q∗(gs, ga), and its optimal policy is group-equivariant, π∗(gs) = gπ∗(s), for any g ∈ G. It should be noted that the G-invariant MDP of Definition 4.1 is in fact a special case of an MDP homomorphism (Ravindran & Barto, 2001; 2004), a broad class of MDP abstractions. MDP homomorphisms are important because optimal solutions to the abstract problem can be “lifted” to produce optimal solutions to the original MDP (Ravindran & Barto, 2004). As such, Proposition 4.1 follows directly from those results. 4.2 SO(2)-INVARIANT MDPS IN VISUAL STATE SPACES In the remainder of this paper, we focus exclusively on an important class of SO(2)-invariant MDPs where the state is encoded as an image. We approximate SO(2) by its subgroup Cn. State space: State is expressed as an m-channel image, Fs : R2 → Rm. The group operator g ∈ Cn acts on this image as defined in Equation 1 where we set ρj = ρ0: gFs(x, y) = ρ0(g)Fs(ρ1(g)−1(x, y)), i.e., by rotating the pixels but leaving the pixel feature vector unchanged. Action space: We assume we are given a factored action space Ainv ×Aequiv = A ⊆ Rk embedded in a k-dimensional Euclidean space where Ainv ⊆ Rkinv and Aequiv ⊆ Rk−kinv . We require the variables in Ainv to be invariant with the rotation operator and the variables in Aequiv to rotate with the representation ρequiv = ρ1. Therefore, the rotation operator g ∈ Cn acts on a ∈ A via ga = (ρequiv(g)aequiv, ainv) where ainv ∈ Ainv and aequiv ∈ Aequiv. Application to robotic manipulation: We express the state as a depth image centered on the gripper position where depth is defined relative to the gripper. The orientation of this image is relative to the base reference frame – not the gripper frame. We require the fingers of the gripper and objects grasped by the gripper to be visible in the image. Figure 2 shows an illustration. 
The action is a tuple, a = (aλ, axy, az, aθ) ∈ A ⊂ R5, where aλ ∈ Aλ denotes the commanded gripper aperture, axy ∈ Axy denotes the commanded change in gripper xy position, az ∈ Az denotes the commanded change in gripper height, and aθ ∈ Aθ denotes the commanded change in gripper orientation. Here, the xy action is equivariant with g ∈ Cn, Aequiv = Axy, and the rest of the action variables are invariant, Ainv = Aλ × Az × Aθ. Notice that the transition dynamics are Cn-invariant (i.e. T(s, a, s′) = T(gs, ga, gs′)) because the Newtonian physics of the interaction are invariant to the choice of reference frame. If we constrain the reward function to be Cn-invariant as well, then the resulting MDP is Cn-invariant.

5 APPROACH

5.1 EQUIVARIANT DQN

In DQN, we assume we have a discrete action space, and we learn the parameters of a Q-network that maps from the state onto action values. Given a G-invariant MDP, Proposition 4.1 tells us that the optimal Q-function is G-invariant. Therefore, we encode the Q-function using an equivariant neural network that is constrained to represent only G-invariant Q-functions.

First, in order to use DQN, we need to discretize the action space. Let Aequiv ⊂ Aequiv and Ainv ⊂ Ainv be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function Fa : Aequiv → RAinv from the equivariant action variables in Aequiv to the Q values of the invariant action variables in Ainv. For example, in the robotic manipulation domain described in Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1, and we define Aequiv and Ainv accordingly.

We now encode the Q network q as a stack of equivariant layers that each encode the equivariant constraint of Equation 3. Since the composition of equivariant layers is equivariant, q satisfies:

q(gFs) = g(q(Fs)) = gFa, (5)

where we have substituted Fin = Fs and Fout = Fa. In the above, the rotation operator g ∈ Cn is applied using Equation 1 as gFa(axy) = ρ0(g)Fa(ρ1(g)−1(axy)). Figure 3 illustrates this equivariance constraint for the robotic manipulation example with |Aequiv| = |Axy| = 9. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in Axy are also rotated similarly. The detailed network architecture is shown in Appendix D.1. Our architecture is different from that in Mondal et al. (2020) in that we associate the action of g on Aequiv and Ainv with the group action on the spatial dimension and the channel dimension of a feature map Fa, which is more efficient than learning such a mapping using FC layers.
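To make Equation 5 and Figure 3 concrete, the sketch below (our illustrative example; the array shapes and variable names are assumptions rather than the paper's code) applies g = Rot90 ∈ C4 to the manipulation state and action: the depth image and the 3 × 3 map of Q-values over Axy rotate spatially, the xy action component rotates by ρ1(g), and aλ, az, aθ are left unchanged.

```python
import numpy as np

# g = Rot_{90 deg}, an element of C4.
theta = np.pi / 2
rho1 = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])

# State: the gripper-centered depth image (assumed 2-channel, 128 x 128).
Fs = np.random.rand(2, 128, 128)
gFs = np.rot90(Fs, k=1, axes=(1, 2))        # rotate pixel locations, keep pixel values (rho_0)

# Action: a = (a_lambda, a_xy, a_z, a_theta); only a_xy transforms under g.
a_lambda, a_xy, a_z, a_theta = 1.0, np.array([0.02, 0.0]), 0.0, 0.0
ga_xy = rho1 @ a_xy                         # ga = (a_lambda, rho_1(g) a_xy, a_z, a_theta)

# Equivariant DQN output F_a: Q-values over the 3 x 3 grid of discrete xy actions,
# one channel per invariant action; Equation 5 requires q(gFs) = gFa.
Fa = np.random.rand(18, 3, 3)
gFa = np.rot90(Fa, k=1, axes=(1, 2))        # the output the constrained network must produce for gFs
print(gFs.shape, ga_xy, gFa.shape)
```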
5.2 EQUIVARIANT SAC

In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network Π (the actor) and an action-value network Q (the critic) (Haarnoja et al., 2018). The critic Q : S × A → R approximates Q values in the typical way. However, the actor Π : S → A × Aσ estimates both the mean and standard deviation of the action for a given state. Here, we define Aσ = Rk to be the domain of the standard deviation variables over the k-dimensional action space defined in Section 4.2. Since Proposition 4.1 tells us that the optimal Q is invariant and the optimal policy is equivariant, we must model Q as an invariant network and Π as an equivariant network.

Policy network: First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function Fs. However, we must now express the action as a vector over Ā = A × Aσ. Factoring A into its equivariant and invariant components, we have Ā = Aequiv × Ainv × Aσ. In order to identify the equivariance relation for Ā, we must define how the group operator g ∈ G acts on aσ ∈ Aσ. Here, we make the simplifying assumption that aσ is invariant to the group operator. This choice makes sense in robotics domains where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element g ∈ G acts on ā ∈ Ā via:

gā = g(aequiv, ainv, aσ) = (ρequiv(g)aequiv, ainv, aσ). (6)

We can now define the actor network π to be a mapping Fs ↦ ā (Figure 4 top) that satisfies the following equivariance constraint (Equation 3):

π(gFs) = g(π(Fs)) = gā. (7)

Critic network: The critic network takes both state and action as input and maps onto a real value. We define two equivariant networks: a state encoder e and a Q network q. The equivariant state encoder, e, maps the input state Fs onto a regular representation s̄ ∈ (Rn)^α, where each of the n group elements is associated with an α-vector. Since s̄ has a regular representation, we have gs̄ = ρreg(g)s̄. Writing the equivariance constraint of Equation 3 for e, we have that e must satisfy e(gFs) = ge(Fs) = gs̄. The output state representation s̄ is concatenated with the action a ∈ A, producing w = (s̄, a). The action of the group operator is now gw = (gs̄, ga), where ga = (ρequiv(g)aequiv, ainv). Finally, the q network maps from w onto R, a real-valued estimate of the Q value for w. Based on Proposition 4.1, this network must be invariant to the group action: q(gw) = q(w). All together, the critic satisfies the following invariance equation:

q(e(gFs), ga) = q(e(Fs), a). (8)

This network is illustrated at the bottom of Figure 4. For the robotic manipulation domain in Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1. The detailed network architecture is in Appendix D.2.

Preventing the critic from becoming overconstrained: In the model architecture above, the hidden layer of q is represented using a vector in the regular representation and the output of q is encoded using the trivial representation. However, Schur’s Lemma (see e.g. Dummit & Foote (1991)) implies there only exists a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., x = a ∑_i v_i, where x is a trivial representation, a is a constant, and v is a regular representation). This implies that a linear mapping f : Rn × Rn → R from two regular representations to a trivial representation that satisfies f(gv, gw) = f(v, w) for all g ∈ G will also satisfy f(g1v, w) = f(v, w) and f(v, g2w) = f(v, w) for all g1, g2 ∈ G. (See details in Appendix B.) In principle, this could overconstrain the last layer of q to encode additional undesired symmetries. To avoid this problem, we use a non-linear equivariant mapping, maxpool over the group space, to transform the regular representation to the trivial representation.

5.3 EQUIVARIANT SACFD

Many of the problems we want to address cannot be solved without guiding the agent’s exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner. Second, we introduce the following L2 term into the SAC actor’s loss function:

Lactor = LSAC + 1e · (1/2) ((a ∼ π(s)) − ae)^2, (9)

where LSAC is the actor’s loss term in standard SAC, 1e = 1 if the sampled transition is an expert demonstration and 0 otherwise, a ∼ π(s) is an action sampled from the output Gaussian distribution of π(s), and ae is the expert action. Since both the sampled action a ∼ π(s) and the expert action ae transform equivalently, Lactor is compatible with the equivariance we introduce in Section 5.2. We refer to this method as SACfD (SAC from Demonstration).
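A minimal PyTorch rendering of the behavior-cloning term in Equation 9 is sketched below; the tensor names, the demonstration mask, and the reduction over action dimensions and the batch are our assumptions rather than the exact implementation.

```python
import torch

def sacfd_actor_loss(loss_sac, sampled_action, expert_action, is_expert):
    """L_actor = L_SAC + 1_e * 0.5 * ||a ~ pi(s) - a_e||^2 (cf. Equation 9).

    loss_sac:        scalar SAC actor loss already computed for the minibatch
    sampled_action:  (B, k) actions sampled from the current policy pi(s)
    expert_action:   (B, k) expert actions (arbitrary where is_expert is 0)
    is_expert:       (B,)  1 for expert-demonstration transitions, else 0
    """
    bc = 0.5 * ((sampled_action - expert_action) ** 2).sum(dim=1)   # per-sample L2 term
    return loss_sac + (is_expert * bc).mean()                       # only demo transitions contribute

# Example usage with dummy tensors:
B, k = 64, 5
loss = sacfd_actor_loss(torch.tensor(0.3),
                        torch.randn(B, k), torch.randn(B, k),
                        torch.randint(0, 2, (B,)).float())
print(loss.item())
```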
6 EXPERIMENTS

We evaluate Equivariant DQN and Equivariant SAC in the manipulation tasks shown in Figure 5. These tasks can be formulated as SO(2)-invariant MDPs. All environments have sparse rewards (+1 when reaching the goal and 0 otherwise). See environment details in Appendix C.

6.1 EQUIVARIANT DQN

We evaluate Equivariant DQN in the Block Pulling, Object Picking, and Drawer Opening tasks for the group C4. The discrete action space is Aλ = {OPEN, CLOSE}; Axy = {(x, y) | x, y ∈ {−0.02m, 0m, 0.02m}}; Az = {−0.02m, 0m, 0.02m}; Aθ = {−π/16, 0, π/16}. Note that the definition of Axy and g ∈ C4 satisfies the closure requirement of the action space in the sense that ∀axy ∈ Axy, ∀g ∈ C4, ρ1(g)axy ∈ Axy. We compare Equivariant DQN (Equi DQN) against the following baselines: 1) CNN DQN: DQN with a conventional CNN instead of the equivariant network, where the conventional CNN has a similar number of trainable parameters (3.9M) to the equivariant network (2.6M). 2) RAD Crop DQN (Laskin et al., 2020a): same network architecture as CNN DQN; at each training step, a random-crop data augmentation is applied to each transition in the minibatch. 3) DrQ Shift DQN (Kostrikov et al., 2020): same network architecture as CNN DQN; at each training step, both the Q-targets and the TD losses are calculated by averaging over two random-shift augmented transitions. 4) CURL DQN (Laskin et al., 2020b): similar architecture to CNN DQN with an extra contrastive loss term that learns an invariant encoder from random-crop augmentations. See the baseline details in Appendix E. At the beginning of each training process, we pre-populate the replay buffer with 100 episodes of expert demonstrations. Figure 6 compares the learning curves of the various methods. Equivariant DQN learns faster and converges at a higher discounted reward in all three environments.

6.2 EQUIVARIANT SAC

In this experiment, we evaluate the performance of Equivariant SAC (Equi SAC) for the group C8. The continuous action space is: Aλ = [0, 1]; Axy = {(x, y) | x, y ∈ [−0.05m, 0.05m]}; Az = [−0.05m, 0.05m]; Aθ = [−π/8, π/8]. We compare against the following baselines: 1) CNN SAC: SAC with a conventional CNN rather than equivariant networks, where the conventional CNN has a similar number of trainable parameters (2.6M) to the equivariant network (2.3M). 2) RAD Crop SAC (Laskin et al., 2020a): same model architecture as CNN SAC with random-crop data augmentation when sampling transitions. 3) DrQ Shift SAC (Kostrikov et al., 2020): same model architecture as CNN SAC with random-shift data augmentation when calculating the Q-target and the loss. 4) FERM (Zhan et al., 2020): a combination of SAC, contrastive learning, and random-crop augmentation (baseline details in Appendix E).
All methods use an SO(2) data augmentation buffer: every time a new transition is added, we generate 4 more augmented transitions by applying random continuous rotations to both the image and the action (this data augmentation in the buffer is in addition to the data augmentation performed in the RAD, DrQ, and FERM baselines). Prior to each training run, we pre-load the replay buffer with 20 episodes of expert demonstrations. Figure 7 shows the comparison among the various methods. Notice that Equivariant SAC outperforms the other methods significantly. Without the equivariant approach, Object Picking and Drawer Opening appear to be infeasible for the baseline methods. In Block Pulling, FERM is the only other method able to solve the task.

6.3 EQUIVARIANT SACFD

We want to explore our equivariant methods in the context of more challenging tasks such as those in the bottom row of Figure 5. However, since these tasks are too difficult to solve without some kind of guided exploration, we augment the Equivariant SAC as well as all the baselines in two ways: 1) we use SACfD as described in Section 5.3; 2) we use Prioritized Experience Replay (Schaul et al., 2015) rather than a standard replay buffer. As in Section 6.2, we use the SO(2) data augmentation buffer that generates 4 extra SO(2)-augmented transitions whenever a new transition is added. Figure 8 shows the results. First, note that our Equivariant SACfD does best on all four tasks, followed by FERM and the other baselines. Second, notice that only the equivariant method can solve the last three (and most challenging) tasks. This suggests that equivariant models are important not only for unstructured reinforcement learning, but also for learning from demonstration. Additional results for the Block Pulling and Object Picking environments are shown in Appendix G.

6.4 COMPARING WITH LEARNING EQUIVARIANCE USING AUGMENTATION

In the previous experiments, we compare against the data augmentation baselines using the same data augmentation operators that the authors proposed (random crop in RAD (Laskin et al., 2020a) and random shift in DrQ (Kostrikov et al., 2020)). However, those two methods can also be modified to learn SO(2) equivariance using SO(2) data augmentation. Here, we explore this idea as an alternative to our equivariant model. Specifically, instead of augmenting only the state with translations as in Laskin et al. (2020a) and Kostrikov et al. (2020), we apply the SO(2) augmentation to both the state and the action. Since the RAD and DrQ baselines in this section are already running SO(2) augmentations themselves, we disable the SO(2) buffer augmentation for the online transitions in those baselines. (See the results of RAD and DrQ with the SO(2) data augmentation buffer in Appendix H.4.) We compare the resulting versions of RAD (RAD SO(2) SACfD) and DrQ (DrQ SO(2) SACfD) with our Equivariant SACfD in Figure 9. Our method outperforms both RAD and DrQ equipped with SO(2) data augmentation. Additional results for Block Pulling and Object Picking are shown in Appendix G.
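The SO(2) augmentation applied to stored transitions can be sketched as follows; the rotation utility, interpolation settings, and transition layout here are assumptions made for illustration, not the paper's implementation. A sampled angle rotates both depth images about the image center and rotates the xy action component by the same angle, while the invariant action components are copied unchanged.

```python
import numpy as np
from scipy.ndimage import rotate as rotate_image   # rotates an array about its center

def augment_transition(obs, action, reward, next_obs, done, n_aug=4):
    """Yield n_aug SO(2)-rotated copies of a transition (illustrative sketch)."""
    a_lambda, ax, ay, az, atheta = action           # (a_lambda, a_xy, a_z, a_theta), a_xy = (ax, ay)
    for _ in range(n_aug):
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        rot_xy = np.array([[c, -s], [s, c]]) @ np.array([ax, ay])      # rho_1(g) on a_xy
        obs_r = rotate_image(obs, np.degrees(theta), axes=(1, 2), reshape=False, order=1)
        next_obs_r = rotate_image(next_obs, np.degrees(theta), axes=(1, 2), reshape=False, order=1)
        aug_action = (a_lambda, rot_xy[0], rot_xy[1], az, atheta)      # invariant parts unchanged
        yield obs_r, aug_action, reward, next_obs_r, done

obs = np.random.rand(2, 128, 128)
for t in augment_transition(obs, (1.0, 0.02, 0.0, 0.0, 0.0), 0.0, obs, False):
    pass  # push t into the replay buffer alongside the original transition
```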
6.5 GENERALIZATION EXPERIMENT

This experiment evaluates the ability of the equivariant model to generalize over the equivariance group. We use a similar experimental setting to Section 6.3. However, now the training environment is always initialized with a fixed orientation rather than a random orientation. For example, in Block Pulling, the two blocks are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. In the evaluation environment, however, these objects are initialized with random orientations. To succeed, the agent needs to generalize over varied orientations while being trained with a fixed orientation. To prevent the agent from generalizing via augmentation, we disable the SO(2) augmentation in the buffer. As shown in Figure 10, Equivariant SACfD generalizes better than the baselines. Even though the equivariant network is presented with only one orientation during training, it successfully generalizes over random orientations whereas none of the baselines can.

7 DISCUSSION

This paper defines a class of group-invariant MDPs and identifies the invariance and equivariance characteristics of their optimal solutions. This paper further proposes Equivariant SAC and a new variation of Equivariant DQN for continuous action spaces and discrete action spaces, respectively. We show experimentally in robotic manipulation domains that our proposal substantially surpasses the performance of competitive baselines. A key limitation of this work is that our definition of G-invariant MDPs requires the MDP to have an invariant reward function and an invariant transition function. Though such restrictions are often applicable in robotics, they limit the potential of the proposed methods in other domains like some ATARI games. Furthermore, if the observation is from a non-top-down perspective, or there are non-equivariant structures in the observation (e.g., the robot arm), the invariance assumptions of a G-invariant MDP will not be directly satisfied.

ACKNOWLEDGMENTS

This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, and NASA 80NSSC19K1474. R. Walters is supported by the Roux Institute and the Harold Alfond Foundation and NSF grants 2107256 and 2134178.

A PROOF OF PROPOSITION 4.1

The proof in this section follows Wang et al. (2021). Note that the definition of group action · : G × X → X implies that elements g ∈ G act by bijections on X, since the action of g−1 gives a two-sided inverse for the action of g. That is, g permutes the elements of X.

Proof of Proposition 4.1. For g ∈ G, we will first show that the optimal Q-function is G-invariant, i.e., Q∗(s, a) = Q∗(gs, ga), then show that the optimal policy is G-equivariant, i.e., π∗(gs) = gπ∗(s).

(1) Q∗(s, a) = Q∗(gs, ga): The Bellman optimality equations for Q∗(s, a) and Q∗(gs, ga) are, respectively:

Q∗(s, a) = R(s, a) + γ sup_{a′∈A} ∫_{s′∈S} T(s, a, s′) Q∗(s′, a′), (10)

and

Q∗(gs, ga) = R(gs, ga) + γ sup_{a′∈A} ∫_{s′∈S} T(gs, ga, s′) Q∗(s′, a′). (11)

Since g ∈ G merely permutes the elements of S, we can re-index the integral using s̄′ = gs′:

Q∗(gs, ga) = R(gs, ga) + γ sup_{ā′∈gA} ∫_{s̄′∈gS} T(gs, ga, s̄′) Q∗(s̄′, ā′) (12)
= R(gs, ga) + γ sup_{a′∈A} ∫_{s′∈S} T(gs, ga, gs′) Q∗(gs′, ga′). (13)

Using the Reward Invariance and the Transition Invariance in Definition 4.1, this can be written:

Q∗(gs, ga) = R(s, a) + γ sup_{a′∈A} ∫_{s′∈S} T(s, a, s′) Q∗(gs′, ga′). (14)

Now, define a new function Q̄∗ such that ∀(s, a) ∈ S × A, Q̄∗(s, a) = Q∗(gs, ga), and substitute into Eq. 14, resulting in:

Q̄∗(s, a) = R(s, a) + γ sup_{a′∈A} ∫_{s′∈S} T(s, a, s′) Q̄∗(s′, a′). (15)

Notice that Eq. 15 and Eq. 10 are the same Bellman equation. Since solutions to the Bellman equation are unique, we have that ∀(s, a) ∈ S × A, Q∗(s, a) = Q̄∗(s, a) = Q∗(gs, ga).
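Before turning to part (2) of the proof, the invariance just derived can also be checked numerically on a toy example. The sketch below (our illustration, not part of the paper) builds a small C2-invariant MDP in which the group element negates both the position and the action, runs value iteration, and confirms that Q∗(s, a) = Q∗(gs, ga).

```python
import numpy as np

# Toy C2-invariant MDP: positions -2..2 on a line, actions move left/right,
# reward +1 for arriving at the origin.  The non-identity group element g
# negates both state and action, so R and T are invariant in the sense of Definition 4.1.
states = [-2, -1, 0, 1, 2]
actions = [-1, 1]
gamma = 0.9

def step(s, a):
    return int(np.clip(s + a, -2, 2))

def reward(s, a):
    return 1.0 if step(s, a) == 0 else 0.0

Q = {(s, a): 0.0 for s in states for a in actions}
for _ in range(200):   # value iteration on the optimal Q-function
    Q = {(s, a): reward(s, a) + gamma * max(Q[(step(s, a), b)] for b in actions)
         for s in states for a in actions}

# Proposition 4.1, part (1): Q*(s, a) = Q*(gs, ga).
for s in states:
    for a in actions:
        assert abs(Q[(s, a)] - Q[(-s, -a)]) < 1e-9
print("Q* is C2-invariant on this toy MDP")
```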
(2) π∗(gs) = gπ∗(s): The optimal policies π∗(s) and π∗(gs) can be written in terms of the optimal Q-function, Q∗, as:

π∗(s) = arg max_{a∈A} Q∗(s, a) (16)

and

π∗(gs) = arg max_{ā∈A} Q∗(gs, ā). (17)

Using the invariance property of Q∗, we can substitute Q∗(gs, ā) with Q∗(s, g−1ā) in Equation 17:

π∗(gs) = arg max_{ā∈A} Q∗(s, g−1ā). (18)

Letting ā = ga, Equation 18 can be written as:

π∗(gs) = g [arg max_{a∈A} Q∗(s, g−1ga)]. (19)

Cancelling g−1 and g and substituting Equation 16, we have

π∗(gs) = gπ∗(s). (20)

B EQUIVARIANCE OVERCONSTRAINT

Proposition B.1. Let f : Vreg ⊕ Vreg → Vtriv be a linear Cn-equivariant function. Then f(v, w) = a ∑_i v_i + b ∑_i w_i.

Proof. By Weyl decomposability (Hall, 2003), Vreg decomposes into irreducible representations for Cn, each with multiplicity determined by its dimension. Among these is the trivial representation with multiplicity 1. By Schur’s lemma (Dummit & Foote, 1991), the mapping Vreg ⊕ Vreg → Vtriv must factor through the trivial representation embedded in Vreg. The projection onto the trivial representation is given by v ↦ a ∑_i v_i. The result follows by linearity.

As a corollary, we find that Cn-equivariant maps Vreg ⊕ Vreg → Vtriv are actually Cn × Cn-equivariant. Let (g1, g2) ∈ Cn × Cn; then applying the Proposition, f(g1v, g2w) = a ∑_i (g1v)_i + b ∑_i (g2w)_i = a ∑_i v_i + b ∑_i w_i = f(v, w).
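The practical consequence of Proposition B.1 can be seen numerically. In the sketch below (an illustration under a simplified setup, not the paper's critic), the linear regular-to-trivial map forced by Schur's Lemma is a channel sum and therefore cannot distinguish a jointly rotated state–action pair from an independently rotated one, whereas a non-linear equivariant combination followed by a max over the group dimension is invariant only to the joint rotation, as desired.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # regular representation of C8
v, w = rng.normal(size=n), rng.normal(size=n)    # state-like and action-like regular features

rot = lambda x, m: np.roll(x, m)                 # rho_reg(g): cyclic shift by m

# Linear map V_reg + V_reg -> V_triv (Proposition B.1 forces a channel sum):
f_lin = lambda v, w: 2.0 * v.sum() + 3.0 * w.sum()
# Non-linear equivariant mixing followed by a maxpool over the group dimension:
f_max = lambda v, w: np.max(np.maximum(v + w, 0.0))

g, g1 = 3, 5
print(np.isclose(f_lin(rot(v, g), rot(w, g)), f_lin(v, w)))   # True: joint invariance
print(np.isclose(f_lin(rot(v, g1), w), f_lin(v, w)))          # True: undesired extra symmetry
print(np.isclose(f_max(rot(v, g), rot(w, g)), f_max(v, w)))   # True: joint invariance
print(np.isclose(f_max(rot(v, g1), w), f_max(v, w)))          # generally False
```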
C ENVIRONMENT DETAILS

In all environments, the environment reset is conducted by randomly initializing the objects with random positions and orientations inside the workspace. The arm is always initialized at the same configuration. The workspace has a size of 0.4m × 0.4m × 0.24m. All environments have a sparse reward, i.e., the agent acquires a +1 reward when reaching the goal state, and 0 otherwise. In the PyBullet simulator, the robot joints have enough compliance to allow the gripper to apply force on the block in the Corner Picking task. We augment the state image with an additional binary channel (i.e., either all pixels are 1 or all pixels are 0) indicating if the gripper is holding an object. Note that this additional channel is invariant to rotations (because all pixels have the same value), so it won’t break the proposed equivariance properties. The Block Pulling task requires the robot to pull one block to make contact with the other block. The Object Picking task requires the robot to pick up an object randomly sampled from a set of 11 objects (Figure 11). The Drawer Opening task requires the robot to pull open a drawer. The Block Stacking task requires the robot to stack one block on top of another. The House Building task requires the robot to stack a triangle roof on top of a block. The Corner Picking task requires the robot to slide the block from the corner and then pick it up.

D NETWORK ARCHITECTURE

Our equivariant models are implemented using the E2CNN (Weiler & Cesa, 2019) library with PyTorch (Paszke et al., 2017).

D.1 EQUIVARIANT DQN ARCHITECTURE

In the Equivariant DQN, we use a 7-layer Steerable CNN defined on the group C4 (Figure 12a). The input Fs is encoded as a 2-channel ρ0 feature map, and the output is an 18-channel 3 × 3 ρ0 feature map, where the channels encode the invariant actions Ainv and the spatial dimensions encode Axy.

D.2 EQUIVARIANT SAC ARCHITECTURE

In the Equivariant SAC, there are two separate networks, both Steerable CNNs defined on the group C8. The actor π (Figure 12b top) is an 8-layer network that takes in a 2-channel ρ0 feature map (Fs) and outputs a mixed-representation-type 1 × 1 feature map (ā) consisting of 1 ρ1 feature for axy and 8 ρ0 features for ainv and aσ. The critic (Figure 12b bottom) is a 9-layer network that takes in both Fs as a 2-channel ρ0 feature map and a as a 1 × 1 mixed-representation feature map consisting of 1 ρ1 feature for axy and 3 ρ0 features for ainv. The upper path e encodes Fs into a 64-channel regular representation feature map s̄ with 1 × 1 spatial dimensions, then concatenates it with a. Two separate Q-value paths q take in the concatenated feature map and generate two Q-estimates in the form of 1 × 1 ρ0 features. The non-linear maxpool layer is used for transforming regular representations into trivial representations to prevent the equivariance overconstraint (Section 5.2). Note that there are two Q outputs, as required by the SAC algorithm.
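As a complement to the architecture descriptions in Appendix D, the following is a minimal sketch of how a C8-equivariant stack can be assembled with the E2CNN library; the layer sizes and field types here are illustrative assumptions and not the exact networks of Figure 12.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# C8-equivariant feature extractor mapping a 2-channel rho_0 image to
# 1 rho_1 (xy) output plus 8 rho_0 (invariant) outputs, loosely mirroring the actor head.
gspace = gspaces.Rot2dOnR2(N=8)
in_type = enn.FieldType(gspace, 2 * [gspace.trivial_repr])
hid_type = enn.FieldType(gspace, 16 * [gspace.regular_repr])
out_type = enn.FieldType(gspace, [gspace.irrep(1)] + 8 * [gspace.trivial_repr])

model = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
    enn.ReLU(hid_type),
    enn.PointwiseMaxPool(hid_type, 2),
    enn.R2Conv(hid_type, out_type, kernel_size=3, padding=1),
)

x = enn.GeometricTensor(torch.randn(1, 2, 64, 64), in_type)
y = model(x)
print(y.tensor.shape)
```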
E BASELINE DETAILS

Figure 13 shows the baseline network architectures for DQN and SAC. The RAD Crop (Laskin et al., 2020a) baselines, CURL (Laskin et al., 2020b) baselines, and FERM (Zhan et al., 2020) baselines use random crop for data augmentation. The random crop reduces a 142 × 142 state image to a size of 128 × 128. The contrastive encoder of the CURL baselines has a size of 128 as in Laskin et al. (2020b), and that of the FERM baselines has a size of 50 as in Zhan et al. (2020). The FERM baseline’s contrastive encoder is pretrained for 1.6k steps using the expert data as in Zhan et al. (2020). The DrQ Shift (Kostrikov et al., 2020) baselines use a random shift of ±4 pixels for data augmentation as in the original work. In all DrQ baselines, the number of augmentations K for calculating the target and the number of augmentations M for calculating the loss are both 2, as in Kostrikov et al. (2020).

F TRAINING DETAILS

We implement our experimental environments in the PyBullet simulator (Coumans & Bai, 2016). The workspace’s size is 0.4m × 0.4m × 0.24m. The pixel size of the visual state I is 128 × 128 (except for the RAD Crop baselines, CURL baselines, and FERM baselines, where I’s size is 142 × 142 and is cropped to 128 × 128). I’s FOV is 0.6m × 0.6m. During training, we use 5 parallel environments. We implement all training in PyTorch (Paszke et al., 2017). Both DQN and SAC use soft target updates with τ = 10^−2. In the DQN experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^−4. We use the Huber loss (Huber, 1964) for calculating the TD loss. We use a discount factor γ = 0.95. The batch size is 32. The buffer has a capacity of 100,000 transitions. In the SAC (and SACfD) experiments, we use the Adam optimizer with a learning rate of 10^−3. The entropy temperature α is initialized at 10^−2. The target entropy is −5. The discount factor is γ = 0.99. The batch size is 64. The buffer has a capacity of 100,000 transitions. SACfD uses the prioritized replay buffer (Schaul et al., 2015) with a prioritized replay exponent of 0.6 and a prioritized importance sampling exponent β0 = 0.4, as in Schaul et al. (2015). The expert transitions are given a priority bonus of d = 1.

G ADDITIONAL EXPERIMENTAL RESULTS FOR EQUIVARIANT SACFD

Figure 14 (a)-(b) shows the results for the experiment of Section 6.3 in the Block Pulling and Object Picking environments. The Equivariant SACfD outperforms all baselines in those two environments. Figure 14 (c)-(d) shows the results for the experiment of Section 6.4 in the Block Pulling and Object Picking environments. Similar to the results in Figure 9, our Equivariant SACfD outperforms both RAD and DrQ equipped with SO(2) data augmentation.

H ABLATION STUDIES

H.1 USING EQUIVARIANT NETWORK ONLY IN ACTOR OR CRITIC

In this experiment, we investigate the effectiveness of the equivariant network in SACfD by applying it only in the actor network or only in the critic network. We evaluate four variations: 1) Equi Actor + Equi Critic, which uses the equivariant network in both the actor and the critic; 2) Equi Actor + CNN Critic, which uses the equivariant network solely in the actor and a conventional CNN in the critic; 3) CNN Actor + Equi Critic, which uses a conventional CNN in the actor and the equivariant network in the critic; 4) CNN Actor + CNN Critic, which uses a conventional CNN in both the actor and the critic. The other experimental setup mirrors Section 6.3. As shown in Figure 15, applying the equivariant network in the actor generally helps more than applying the equivariant network in the critic (in 5 out of 6 experiments), and using the equivariant network in both the actor and the critic always demonstrates the best performance.

H.2 DIFFERENT SYMMETRY GROUPS

This experiment compares equivariant networks defined on three different symmetry groups: C8, C4, and C2. We run this experiment in SACfD with the same setup as in Section 6.3. As shown in Figure 16, the network defined on C8 generally outperforms the network defined on C4, followed by the network defined on C2.

H.3 EQUIVARIANT SACFD IN NON-SYMMETRIC ENVIRONMENTS

This experiment evaluates the performance of Equivariant SACfD in non-symmetric tasks where the initial orientation of the environment is fixed rather than random (similar to Section 6.5, but here both the training and the evaluation environments have the fixed orientation). Specifically, in Block Pulling, the two blocks in the training environment are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. As shown in Figure 17, when the environments do not contain SO(2) symmetries, the performance gain of using the equivariant network is less significant.

H.4 ROTATIONAL AUGMENTATION + BUFFER AUGMENTATION

Section 6.4 compares our Equivariant SACfD with rotational data augmentation baselines. This experiment shows the performance of those baselines (and an extra CNN SACfD baseline that uses a conventional CNN) equipped with the data augmentation buffer. As mentioned in Section 6.2, the data augmentation buffer creates 4 extra augmented transitions using a random SO(2) rotation every time a new transition is added. Figure 18 shows the result, where none of the baselines outperform our proposal in any task. Compared with Figure 9, the data augmentation buffer hurts RAD and DrQ because of the redundancy of the same data augmentation.
1. What is the focus and contribution of the paper on group-equivariant MDPs?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and experimental validation?
3. Do you have any concerns regarding the novelty of the proposed method, specifically in relation to Mondal et al. (2020)?
4. How does the reviewer assess the effectiveness of the proposed method in various problem settings, including its ability to handle non-symmetric problems?
5. Are there any questions or suggestions regarding the experiment design, such as the choice of baselines, network architecture, and parameter count?
6. Are there any minor issues with the presentation of the paper, such as grammatical errors or omitted words?
Summary Of The Paper Review
Summary Of The Paper
This paper defines and theoretically characterizes a class of group-equivariant MDPs and studies its invariance and equivariance properties. It introduces two new models, one for discrete action spaces (equivariant DQN) and one for continuous action spaces (equivariant SAC). It compares the performance of the proposed methods against strong competitive baselines for multiple robotic manipulation tasks and establishes their superior performance.

Review
Strengths:
The proposed method and propositions are backed by solid theoretical proofs. The claims made about the proposed method are validated through extensive experimentation and comparison against strong baselines. The related works are well covered. This paper is novel in its identification of equivariant properties of optimal Q-functions and optimal policies and in its application to problems in robotics such as object manipulation (as opposed to previous applications to toy settings). It shows the improved performance of an equivariant DQN on these tasks and introduces an equivariant SAC model. It is closely related to Wang et al. (2021), which focuses on the translational and rotational invariance of Q-functions, whereas this paper focuses on group equivariance. The overall presentation of the paper is good.

Weaknesses:
Is the claim of proposing an equivariant version of DQN as a novel contribution correct? The authors reference Mondal et al. (2020), which claims to introduce an equivariant DQN. Is what the authors propose different in nature from Mondal et al. (2020)? The two works seem to use different symmetry groups (this paper talks about SO(2) and Mondal et al. talk about E(2)), but this and any other differences need to be better highlighted in the paper.
The improvement in the reward shows the ability of Equi DQN and Equi SAC to learn policies faster, but can it be ensured that this is because of the inductive bias of the model architecture, as claimed? One suggestion is to include a generalization experiment to test the robustness to equivariance where vanilla DQN fails. Additionally, the equivariant models are said to be applicable to rotationally symmetric problems. How would they perform in other problem settings without the symmetry assumption, or in the presence of other types of symmetry? Would they still beat the vanilla models?
In the experiment section, the baseline CNN DQN only says “DQN with conventional CNN instead of equivariant network”. Is there a reason why the Equi DQN and baseline DQN have different numbers of convolutional layers, and is this a fair comparison? What is the significance of the extra two fully-connected layers? Does this mean the Equi DQN needs more parameters to learn compared to the baseline DQN? If so, then there should be a baseline DQN that has the same number of parameters.
The paper has minor grammatical errors/omitted words. A careful proof-reading would be helpful.
Recently, some works also learn more complex policies than grasping (Viereck et al., 2020; Kilinc et al., 2019; Cabi et al., 2020; Zhan et al., 2020). This work extends prior works in the close-loop action space by using equivariant learning to improve the sample efficiency. 3 BACKGROUND SO(2) and Cn: We will reason about rotation in terms of the group SO(2) and its cyclic subgroup Cn ≤ SO(2). SO(2) is the group of continuous planar rotations {Rotθ : 0 ≤ θ < 2π}. Cn is the discrete subgroup Cn = {Rotθ : θ ∈ { 2πin |0 ≤ i < n}} of rotations by multiples 2π n . Cn actions: A groupGmay be equipped with an action on a setX by specifying a map · : G×X → X satisfying g1 · (g2 · x) = (g1g2) · x and 1 · x = x for all g1, g2 ∈ G, x ∈ X . Note that closure, gx ∈ X , and invertibility, g−1gx = x, follow immediately from the definition. We are interested in actions of Cn which formalize how vectors or feature maps transform under rotation. The group Cn acts in three ways that concern us (for a more comprehensive background, see Bronstein et al. (2021)): 1. R through the trivial representation ρ0. Let g ∈ Cn and x ∈ R. Then ρ0(g)x = x. For example, the trivial representation describes how pixel color/depth values change when an image is rotated, i.e. they do not change (Figure 1 left). 2. R2 through the standard representation ρ1. Let g ∈ Cn and v ∈ R2. Then ρ1(g)v =( cos g − sin g sin g cos g ) v. This describes how elements of a vector field change when rotated (Figure 1 middle). 3. Rn through the regular representation ρreg. Let g = rm ∈ Cn = {1, r, r2, . . . , rn−1} and (x1, x2, . . . , xn) ∈ Rn. Then ρreg(g)x = (xn−m+1, . . . , xn, x1, x2, . . . , xn−m) cyclically permutes the coordinates of Rn (Figure 1 right). Feature maps as functions: In deep learning, images and feature maps are typically expressed as tensors. However, it will be convenient here to sometimes express these as functions. Specifically, we may write an h× w one-channel image F ∈ R1×h×w as a function F : R2 → R where F(x, y) describes the intensity at pixel x, y. Similarly, an m-channel tensor F ∈ Rm×h×w may be written as F : R2 → Rm. We refer to the domain of this function as its “spatial dimensions”. Cn actions on vectors and feature maps: Cn acts on vectors and feature maps differently depending upon their semantics. We formalize these different ways of acting as follows. Let F : R2 → Rm be an m-channel feature map and let V ∈ Rm×1×1 = Rm be a vector represented as a special case of a feature map with 1× 1 spatial dimensions. Then g is defined to act on F by (gF)(x, y) = ρj(g)F(ρ1(g)−1(x, y)). (1) For a vector V (considered to be at (x, y) = (0, 0)), this becomes: gV = ρj(g)V. (2) In the above, ρ1(g) rotates pixel location and ρj(g) transforms the pixel feature vector using the trivial representation (ρj = ρ0), the standard representation (ρj = ρ1), the regular representation (ρj = ρreg), or some combination thereof. Equivariant convolutional layer: A Cn-equivariant layer is a function h whose output is constrained to transform in a defined way when the input feature map is transformed by a group action. Consider an equivariant layer h with an input Fin : R2 → R|ρin| and an output Fout : R2 → R|ρout| , where ρin and ρout denote the group representations associated with Fin and Fout, respectively. When the input is transformed, this layer is constrained to output a transformed version of the same output feature map: h(gFin) = g(h(Fin)) = gFout. 
(3) where g ∈ Cn acts on Fin or Fout through Equation 1 or Equation 2, i.e., this constraint equation can be applied to arbitrary feature maps F or vectors V . A linear convolutional layer h satisfies Equation 3 with respect to the group Cn if the convolutional kernel K : R2 → R|ρout|×|ρin| has the following form (Cohen et al., 2018): K(ρ1(g)v) = ρ −1 out(g)K(v)ρin(g). (4) Since the composition of equivariant maps is equivariant, a fully convolutional equivariant network can be constructed by stacking equivariant convolutional layers that satisfy the constraint of Equation 3 and together with equivariant non-linearities (Weiler & Cesa, 2019). 4 PROBLEM STATEMENT 4.1 GROUP-INVARIANT MDPS In a group-invariant MDP, the transition and reward functions are invariant to group elements g ∈ G acting on the state and action space. For state s ∈ S, action a ∈ A, and g ∈ G, let gs ∈ S denote the action of g on s and ga ∈ A denote the action of g on a. Definition 4.1 (G-invariant MDP). A G-invariant MDPMG = (S,A, T,R,G) is an MDPM = (S,A, T,R) that satisfies the following conditions: 1. Reward Invariance: The reward function is invariant to the action of the group element g ∈ G, R(s, a) = R(gs, ga). 2. Transition Invariance: The transition function is invariant to the action of the group element g ∈ G, T (s, a, s′) = T (gs, ga, gs′). A key feature of a G-invariant MDP is that its optimal solution is also G-invariant (proof in Appendix A): Proposition 4.1. LetMG be a group-invariant MDP. Then its optimal Q-function is group invariant, Q∗(s, a) = Q∗(gs, ga), and its optimal policy is group-equivariant, π∗(gs) = gπ∗(s), for any g ∈ G. It should be noted that the G-invariant MDP of Definition 4.1 is in fact a special case of an MDP homomorphism (Ravindran & Barto, 2001; 2004), a broad class of MDP abstractions. MDP homomorphisms are important because optimal solutions to the abstract problem can be “lifted” to produce optimal solutions to the original MDP (Ravindran & Barto, 2004). As such, Proposition 4.1 follows directly from those results. 4.2 SO(2)-INVARIANT MDPS IN VISUAL STATE SPACES In the remainder of this paper, we focus exclusively on an important class of SO(2)-invariant MDPs where the state is encoded as an image. We approximate SO(2) by its subgroup Cn. State space: State is expressed as an m-channel image, Fs : R2 → Rm. The group operator g ∈ Cn acts on this image as defined in Equation 1 where we set ρj = ρ0: gFs(x, y) = ρ0(g)Fs(ρ1(g)−1(x, y)), i.e., by rotating the pixels but leaving the pixel feature vector unchanged. Action space: We assume we are given a factored action space Ainv ×Aequiv = A ⊆ Rk embedded in a k-dimensional Euclidean space where Ainv ⊆ Rkinv and Aequiv ⊆ Rk−kinv . We require the variables in Ainv to be invariant with the rotation operator and the variables in Aequiv to rotate with the representation ρequiv = ρ1. Therefore, the rotation operator g ∈ Cn acts on a ∈ A via ga = (ρequiv(g)aequiv, ainv) where ainv ∈ Ainv and aequiv ∈ Aequiv. Application to robotic manipulation: We express the state as a depth image centered on the gripper position where depth is defined relative to the gripper. The orientation of this image is relative to the base reference frame – not the gripper frame. We require the fingers of the gripper and objects grasped by the gripper to be visible in the image. Figure 2 shows an illustration. 
The action is a tuple, a = (aλ, axy, az, aθ) ∈ A ⊂ R5, where aλ ∈ Aλ denotes the commanded gripper aperture, axy ∈ Axy denotes the commanded change in gripper xy position, az ∈ Az denotes the commanded change in gripper height, and aθ ∈ Aθ denotes the commanded change in gripper orientation. Here, the xy action is equivariant with g ∈ Cn,Aequiv = Axy , and the rest of the action variables are invariant,Ainv = Aλ×Az×Aθ. Notice that the transition dynamics are Cn-invariant (i.e. T (s, a, s′) = T (gs, ga, gs′)) because the Newtonian physics of the interaction are invariant to the choice of reference frame. If we constrain the reward function to be Cn-invariant as well, then the resulting MDP is Cn-invariant. 5 APPROACH 5.1 EQUIVARIANT DQN In DQN, we assume we have a discrete action space, and we learn the parameters of a Q-network that maps from the state onto action values. Given a G-invariant MDP, Proposition 4.1 tells us that the optimal Q-function is G-invariant. Therefore, we encode the Q-function using an equivariant neural network that is constrained to represent only G-invariant Q-functions. First, in order to use DQN, we need to discretize the action space. Let Aequiv ⊂ Aequiv and Ainv ⊂ Ainv be discrete subsets of the full equivariant and invariant action spaces, respectively. Next, we define a function Fa : Aequiv → RAinv from the equivariant action variables in Aequiv to the Q values of the invariant action variables in Ainv. For example, in the robotic manipulation domain described Section 4.2, we have Aequiv = Axy and Ainv = Aλ × Az × Aθ and ρequiv = ρ1, and we define Aequiv and Ainv accordingly. We now encode the Q network q as a stack of equivariant layers that each encode the equivariant constraint of Equation 3. Since the composition of equivariant layers is equivariant, q satisfies: q(gFs) = g(q(Fs)) = gFa, (5) where we have substituted Fin = Fs and Fout = Fa. In the above, the rotation operator g ∈ Cn is applied using Equation 1 as gFa(axy) = ρ0(g)Fa(ρ1(g)−1(axy)). Figure 3 illustrates this equivariance constraint for the robotic manipulation example with |Aequiv| = |Axy| = 9. When the state (represented as an image on the left) is rotated by 90 degrees, the values associated with the action variables in Axy are also rotated similarly. The detailed network architecture is shown in Appendix D.1. Our architecture is different from that in Mondal et al. (2020) in that we associate the action of g on Aequiv and Ainv with the group action on the spatial dimension and the channel dimension of a feature map Fa, which is more efficient than learning such mapping using FC layers. 5.2 EQUIVARIANT SAC In SAC, we assume the action space is continuous. We learn the parameters for two networks: a policy network Π (the actor) and an action-value network Q (the critic) (Haarnoja et al., 2018). The critic Q : S ×A→ R approximates Q values in the typical way. However, the actor Π : S → A×Aσ estimates both the mean and standard deviation of action for a given state. Here, we define Aσ = Rk to be the domain of the standard deviation variables over the k-dimensional action space defined in Section 4.2. Since Proposition 4.1 tells us that the optimal Q is invariant and the optimal policy is equivariant, we must model Q as an invariant network and Π as an equivariant network. Policy network: First, consider the equivariant constraint of the policy network. As before, the state is encoded by the function Fs. However, we must now express the action as a vector over Ā = A×Aσ . 
Factoring A into its equivariant and invariant components, we have Ā = Aequiv × Ainv × Aσ . In order to identify the equivariance relation for Ā, we must define how the group operator g ∈ G acts on aσ ∈ Aσ . Here, we make the simplifying assumption that aσ is invariant to the group operator. This choice makes sense in robotics domains where we would expect the variance of our policy to be invariant to the choice of reference frame. As a result, we have that the group element g ∈ G acts on ā ∈ Ā via: gā = g(aequiv, ainv, aσ) = (ρequiv(g)aequiv, ainv, aσ). (6) We can now define the actor network π to be a mapping Fs 7→ ā (Figure 4 top) that satisfies the following equivariance constraint (Equation 3): π(gFs) = g(π(Fs)) = gā. (7) Critic network: The critic network takes both state and action as input and maps onto a real value. We define two equivariant networks: a state encoder e and aQ network q. The equivariant state encoder, e, maps the input state Fs onto a regular representation s̄ ∈ (Rn)α where each of n group elements is associated with an α-vector. Since s̄ has a regular representation, we have gs̄ = ρreg(g)s̄. Writing the equivariance constraint of Equation 3 for e, we have that e must satisfy e(gFs) = ge(Fs) = gs̄. The output state representation s̄ is concatenated with the action a ∈ A, producing w = (s̄, a). The action of the group operator is now gw = (gs̄, ga) where ga = (ρequiv(g)aequiv, ainv). Finally, the q network maps from w onto R, a real-valued estimate of theQ value for w. Based on proposition 4.1, this network must be invariant to the group action: q(gw) = q(w). All together, the critic satisfies the following invariance equation: q(e(gFs), ga) = q(e(Fs), a). (8) This network is illustrated at the bottom of Figure 4. For a robotic manipulation domain in Section 4.2, we have Aequiv = Axy and Ainv = Aλ ×Az ×Aθ and ρequiv = ρ1. The detailed network architecture is in Appendix D.2. Preventing the critic from becoming overconstrained: In the model architecture above, the hidden layer of q is represented using a vector in the regular representation and the output of q is encoded using the trivial representation. However, Schur’s Lemma (see e.g. Dummit & Foote (1991)) implies there only exists a one-dimensional space of linear mappings from a regular representation to a trivial representation (i.e., x = a ∑ i vi where x is a trivial representation, a is a constant, and v is a regular representation). This implies that a linear mapping f : Rn × Rn → R from two regular representations to a trivial representation that satisfies f(gv, gw) = f(v, w) for all g ∈ G will also satisfy f(g1v, w) = f(v, w) and f(v, g2w) = f(v, w) for all g1, g2 ∈ G. (See details in Appendix B.) In principle, this could overconstrain the last layer of q to encode additional undesired symmetries. To avoid this problem we use a non-linear equivariant mapping, maxpool, over the group space to transform the regular representation to the trivial representation. 5.3 EQUIVARIANT SACFD Many of the problems we want to address cannot be solved without guiding the agent’s exploration somehow. In order to evaluate our algorithms in this context, we introduce the following simple strategy for learning from demonstration with SAC. First, prior to training, we pre-populate the replay buffer with a set of expert demonstrations generated using a hand-coded planner. 
Second, we introduce the following L2 term into the SAC actor’s loss function: Lactor = LSAC + 1e [ 1 2 ((a ∼ π(s))− ae)2 ] , (9) where LSAC is the actor’s loss term in standard SAC, 1e = 1 if the sampled transition is an expert demonstration and 0 otherwise, a ∼ π(s) is an action sampled from the output Gaussian distribution of π(s), and ae is the expert action. Since both the sampled action a ∼ π(s) and the expert action ae transform equivalently, Lactor is compatible with the equivariance we introduce in Section 5.2. We refer to this method as SACfD (SAC from Demonstration). 6 EXPERIMENTS We evaluate Equivariant DQN and Equivariant SAC in the manipulation tasks shown in Figure 5. These tasks can be formulated as SO(2)-invariant MDPs. All environments have sparse rewards (+1 when reaching the goal and 0 otherwise). See environment details in Appendix C. 6.1 EQUIVARIANT DQN We evaluate Equivariant DQN in the Block Pulling, Object Picking, and Drawer Opening tasks for the group C4. The discrete action space is Aλ = {OPEN, CLOSE}; Axy = {(x, y)|x, y ∈ {−0.02m, 0m, 0.02m}}; Az = {−0.02m, 0m, 0.02m}; Aθ = {− π16 , 0, π 16}. Note that the definition of Axy and g ∈ C4 satisfies the closure requirement of the action space in a way that ∀axy ∈ Axy,∀g ∈ C4, ρ1(g)axy ∈ Axy . We compare Equivariant DQN (Equi DQN) against the following baselines: 1) CNN DQN: DQN with conventional CNN instead of equivariant network, where the conventional CNN has a similar amount of trainable parameters (3.9M) as the equivariant network (2.6M). 2) RAD Crop DQN (Laskin et al., 2020a): same network architecture as CNN DQN. At each training step, each transition in the minibatch is applied with a random-crop data augmentation. 3) DrQ Shift DQN (Kostrikov et al., 2020): same network architecture as CNN DQN. At each training step, both the Q-targets and the TD losses are calculated by averaging over two random-shift augmented transitions. 4): CURL DQN (Laskin et al., 2020b): similar architecture as CNN DQN with an extra contrastive loss term that learns an invariant encoder from random crop augmentations. See the baselines detail in Appendix E. At the beginning of each training process, we pre-populate the replay buffer with 100 episodes of expert demonstrations. Figure 6 compares the learning curves of the various methods. Equivariant DQN learns faster and converges at a higher discounted reward in all three environments. 6.2 EQUIVARIANT SAC In this experiment, we evaluate the performance of Equivariant SAC (Equi SAC) for the group C8. The continuous action space is:Aλ = [0, 1]; Axy = {(x, y)|x, y ∈ [−0.05m, 0.05m]}; Az = [−0.05m, 0.05m]; Aθ = [−π8 , π 8 ]. We compare against the following baselines: 1) CNN SAC: SAC with conventional CNN rather than equivariant networks, where the conventional CNN has a similar amount of trainable parameters (2.6M) as the equivariant network (2.3M). 2) RAD Crop SAC (Laskin et al., 2020a): same model architecture as CNN SAC with random crop data augmentation when sampling transitions. 3) DrQ Shift SAC (Kostrikov et al., 2020): same model architecture as CNN SAC with random shift data augmentation when calculating the Q-target and the loss. 4) FERM (Zhan et al., 2020): a combination of SAC, contrastive learning, and random crop augmentation (baseline details in Appendix E). 
All methods use a SO(2) data augmentation buffer, where every time a new transition is added, we generate 4 more augmented transitions by applying random continuous rotations to both the image and the action (this data augmentation in the buffer is in addition to the data augmentation that is performed in the RAD DrQ, and FERM baselines). Prior to each training run, we pre-load the replay buffer with 20 episodes of expert demonstration. Figure 7 shows the comparison among the various methods. Notice that Equivariant SAC outperforms the other methods significantly. Without the equivariant approach, Object Picking and Drawer Opening appear to be infeasible for the baseline methods. In Block Pulling, FERM is the only other method able to solve the task. 6.3 EQUIVARIANT SACFD We want to explore our equivariant methods in the context of more challenging tasks such as those in the bottom row of Figure 5. However, since these tasks are too difficult to solve without some kind of guided exploration, we augment the Equivariant SAC as well as all the baselines in two ways: 1) we use SACfD as described in Section 5.3; 2) we use Prioritized Experience Replay (Schaul et al., 2015) rather than standard replay buffer. As in Section 6.2, we use the SO(2) data augmentation in the buffer that generates 4 extra SO(2)-augmented transitions whenever a new transition is added. Figure 8 shows the results. First, note that our Equivariant SACfD does best on all four tasks, followed by FERM, and other baselines. Second, notice that only the equivariant method can solve the last three (most challenging tasks). This suggests that equivariant models are important not only for unstructured reinforcement learning, but also for learning from demonstration. Additional results for Block Pulling and Object Picking environments are shown in Appendix G. 6.4 COMPARING WITH LEARNING EQUIVARIANCE USING AUGMENTATION In the previous experiments, we compare against the data augmentation baselines using the same data augmentation operators that the authors proposed (random crop in RAD (Laskin et al., 2020a) and random shift in DrQ (Kostrikov et al., 2020)). However, those two methods can also be modified to learn SO(2) equivariance using SO(2) data augmentation. Here, we explore this idea as an alternative to our equivariant model. Specifically, instead of augmenting on the state as in Laskin et al. (2020a) and Kostrikov et al. (2020) using only translation, we apply the SO(2) augmentation in both the state and the action. Since the RAD and DrQ baselines in this section are already running SO(2) augmentations themselves, we disable the SO(2) buffer augmentation for the online transitions in those baselines. (See the result of RAD and DrQ with the SO(2) data augmentation buffer in Appendix H.4). We compare the resulting version of RAD (RAD SO(2) SACfD) and DrQ (DrQ SO(2) SACfD) with our Equivariant SACfD in Figure 9. Our method outperforms both RAD and DrQ equipped with SO(2) data augmentation. Additional results for Block Pulling and Object Picking are shown in Appendix G. 6.5 GENERALIZATION EXPERIMENT This experiment evaluates the ability for the equivariant model to generalize over the equivariance group. We use a similar experimental setting as in Section 6.3. However, now the training environment is always initialized with a fixed orientation rather than a random orientation. 
For example, in Block Pulling, the two blocks are initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. In the evaluation environment, however, these objects are initialized with random orientations. To succeed, the agent needs to generalize over varied orientations while being trained with a fixed orientation. To prevent the agent from generalizing via augmentation, we disable the SO(2) augmentation in the buffer. As shown in Figure 10, Equivariant SACfD generalizes better than the baselines. Even though the equivariant network is presented with only one orientation during training, it successfully generalizes over random orientations, whereas none of the baselines can.

7 DISCUSSION

This paper defines a class of group-invariant MDPs and identifies the invariance and equivariance characteristics of their optimal solutions. It further proposes Equivariant SAC and a new variation of Equivariant DQN for continuous and discrete action spaces, respectively. We show experimentally in robotic manipulation domains that our proposal substantially surpasses the performance of competitive baselines. A key limitation of this work is that our definition of G-invariant MDPs requires the MDP to have an invariant reward function and an invariant transition function. Though such restrictions are often applicable in robotics, they limit the potential of the proposed methods in other domains like some ATARI games. Furthermore, if the observation is from a non-top-down perspective, or there are non-equivariant structures in the observation (e.g., the robot arm), the invariance assumptions of a G-invariant MDP will not be directly satisfied.

ACKNOWLEDGMENTS

This work is supported in part by NSF 1724257, NSF 1724191, NSF 1763878, NSF 1750649, and NASA 80NSSC19K1474. R. Walters is supported by the Roux Institute and the Harold Alfond Foundation and NSF grants 2107256 and 2134178.

A PROOF OF PROPOSITION 4.1

The proof in this section follows Wang et al. (2021). Note that the definition of the group action · : G × X → X implies that elements g ∈ G act by bijections on X, since the action of g−1 gives a two-sided inverse for the action of g. That is, g permutes the elements of X.

Proof of Proposition 4.1. For g ∈ G, we will first show that the optimal Q-function is G-invariant, i.e., Q∗(s, a) = Q∗(gs, ga), and then show that the optimal policy is G-equivariant, i.e., π∗(gs) = gπ∗(s).

(1) Q∗(s, a) = Q∗(gs, ga): The Bellman optimality equations for Q∗(s, a) and Q∗(gs, ga) are, respectively,
$$Q^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(s', a'), \qquad (10)$$
and
$$Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, s')\, Q^*(s', a'). \qquad (11)$$
Since g ∈ G merely permutes the elements of S, we can re-index the integral using s̄′ = gs′:
$$Q^*(gs, ga) = R(gs, ga) + \gamma \sup_{\bar{a}' \in gA} \int_{\bar{s}' \in gS} T(gs, ga, \bar{s}')\, Q^*(\bar{s}', \bar{a}') \qquad (12)$$
$$= R(gs, ga) + \gamma \sup_{a' \in A} \int_{s' \in S} T(gs, ga, gs')\, Q^*(gs', ga'). \qquad (13)$$
Using the Reward Invariance and the Transition Invariance in Definition 4.1, this can be written as
$$Q^*(gs, ga) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, Q^*(gs', ga'). \qquad (14)$$
Now, define a new function Q̄ such that ∀s, a ∈ S × A, Q̄(s, a) = Q(gs, ga), and substitute into Eq. 14, resulting in
$$\bar{Q}^*(s, a) = R(s, a) + \gamma \sup_{a' \in A} \int_{s' \in S} T(s, a, s')\, \bar{Q}^*(s', a'). \qquad (15)$$
Notice that Eq. 15 and Eq. 10 are the same Bellman equation. Since solutions to the Bellman equation are unique, we have that ∀s, a ∈ S × A, Q∗(s, a) = Q̄∗(s, a) = Q∗(gs, ga).
(2) π∗(gs) = gπ∗(s): The optimal policies π∗(s) and π∗(gs) can be written in terms of the optimal Q-function, Q∗, as
$$\pi^*(s) = \arg\max_{a \in A} Q^*(s, a) \qquad (16)$$
and
$$\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(gs, \bar{a}). \qquad (17)$$
Using the invariance property of Q∗, we can substitute Q∗(gs, ā) with Q∗(s, g−1ā) in Equation 17:
$$\pi^*(gs) = \arg\max_{\bar{a} \in A} Q^*(s, g^{-1}\bar{a}). \qquad (18)$$
Letting ā = ga, Equation 18 can be written as
$$\pi^*(gs) = g\Big[\arg\max_{a \in A} Q^*(s, g^{-1}ga)\Big]. \qquad (19)$$
Cancelling g−1 and g and substituting Equation 16, we have
$$\pi^*(gs) = g\pi^*(s). \qquad (20)$$

B EQUIVARIANCE OVERCONSTRAINT

Proposition B.1. Let $f : V_{\text{reg}} \oplus V_{\text{reg}} \to V_{\text{triv}}$ be a linear Cn-equivariant function. Then $f(v, w) = a\sum_i v_i + b\sum_i w_i$.

Proof. By Weyl decomposability (Hall, 2003), $V_{\text{reg}}$ decomposes into irreducible representations of Cn, each with multiplicity determined by its dimension. Among these is the trivial representation with multiplicity 1. By Schur's lemma (Dummit & Foote, 1991), the mapping $V_{\text{reg}} \oplus V_{\text{reg}} \to V_{\text{triv}}$ must factor through the trivial representation embedded in $V_{\text{reg}}$. The projection onto the trivial representation is given by $v \mapsto a\sum_i v_i$. The result follows by linearity.

As a corollary, we find that Cn-equivariant maps $V_{\text{reg}} \oplus V_{\text{reg}} \to V_{\text{triv}}$ are actually Cn × Cn-equivariant. Let $(g_1, g_2) \in C_n \times C_n$; then, applying the Proposition, $f(g_1 v, g_2 w) = a\sum_i (g_1 v)_i + b\sum_i (g_2 w)_i = a\sum_i v_i + b\sum_i w_i = f(v, w)$.

C ENVIRONMENT DETAILS

In all environments, the environment reset is conducted by randomly initializing the objects with random positions and orientations inside the workspace. The arm is always initialized at the same configuration. The workspace has a size of 0.4m × 0.4m × 0.24m. All environments have a sparse reward, i.e., the agent acquires a +1 reward when reaching the goal state, and 0 otherwise. In the PyBullet simulator, the robot joints have enough compliance to allow the gripper to apply force on the block in the Corner Picking task. We augment the state image with an additional binary channel (i.e., either all pixels are 1 or all pixels are 0) indicating if the gripper is holding an object. Note that this additional channel is invariant to rotations (because all pixels have the same value), so it won't break the proposed equivariance properties. The Block Pulling task requires the robot to pull one block to make contact with the other block. The Object Picking task requires the robot to pick up an object randomly sampled from a set of 11 objects (Figure 11). The Drawer Opening task requires the robot to pull open a drawer. The Block Stacking task requires the robot to stack one block on top of another. The House Building task requires the robot to stack a triangular roof on top of a block. The Corner Picking task requires the robot to slide the block from the corner and then pick it up.

D NETWORK ARCHITECTURE

Our equivariant models are implemented using the E2CNN (Weiler & Cesa, 2019) library with PyTorch (Paszke et al., 2017).

D.1 EQUIVARIANT DQN ARCHITECTURE

In the Equivariant DQN, we use a 7-layer Steerable CNN defined over the group C4 (Figure 12a). The input Fs is encoded as a 2-channel ρ0 feature map, and the output is an 18-channel 3 × 3 ρ0 feature map, where the channel dimension encodes the invariant actions Ainv and the spatial dimension encodes Axy.

D.2 EQUIVARIANT SAC ARCHITECTURE

In the Equivariant SAC, there are two separate networks, both Steerable CNNs defined over the group C8. The actor π (Figure 12b top) is an 8-layer network that takes in a 2-channel ρ0 feature map (Fs) and outputs a mixed-representation-type 1 × 1 feature map (ā) consisting of 1 ρ1 feature for axy and 8 ρ0 features for ainv and aσ.
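To make this concrete, below is a minimal sketch of how such a mixed-representation actor output can be declared with the e2cnn library. The layer count, widths, kernel sizes, input resolution, and the final global spatial average are illustrative simplifications and do not reproduce the exact 8-layer architecture of Figure 12b.

```python
import torch
from e2cnn import gspaces
from e2cnn import nn as enn

# C8 rotations acting on the image plane.
r2_act = gspaces.Rot2dOnR2(N=8)

# Input: 2 trivial (rho_0) channels (image + gripper-state channel).
in_type = enn.FieldType(r2_act, 2 * [r2_act.trivial_repr])
hid_type = enn.FieldType(r2_act, 32 * [r2_act.regular_repr])
# Output: 1 rho_1 field for a_xy plus 8 rho_0 fields for the invariant
# action components a_inv and the Gaussian standard deviations a_sigma.
out_type = enn.FieldType(r2_act, [r2_act.irrep(1)] + 8 * [r2_act.trivial_repr])

actor = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
    enn.ReLU(hid_type, inplace=True),
    enn.PointwiseMaxPool(hid_type, kernel_size=2),
    enn.R2Conv(hid_type, hid_type, kernel_size=3, padding=1),
    enn.ReLU(hid_type, inplace=True),
    enn.PointwiseMaxPool(hid_type, kernel_size=2),
    enn.R2Conv(hid_type, out_type, kernel_size=3, padding=1),
)

obs = torch.randn(4, 2, 64, 64)                  # placeholder batch (B, C, H, W)
feat = actor(enn.GeometricTensor(obs, in_type))
# A global spatial average about the image center keeps the rho_1 / rho_0
# transformation behaviour of the output channels (up to grid discretization).
a_bar = feat.tensor.mean(dim=(2, 3))             # shape (4, 2 + 8)
```

Declaring the output as one ρ1 field plus eight ρ0 fields is what constrains a_xy to rotate with the state while the invariant components and standard deviations stay fixed.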
The critic (Figure 12b bottom) is a 9-layer network that takes in both Fs as a 2-channel ρ0 feature map and a as a 1 × 1 mixed representation feature map consisting of 1 ρ1 feature for axy and 3 ρ0 for ainv. The upper path e encodes Fs into a 64-channel regular representation feature map s̄ with 1 × 1 spatial dimensions, then concatenates it with a. Two separate Q-value paths q take in the concatenated feature map and generate two Q-estimates in the form of 1 × 1 ρ0 feature. The non-linear maxpool layer is used for transforming regular representations into trivial representations to prevent the equivariant overconstraint (Section 5.2). Note that there are two Q outputs based on the requirement of the SAC algorithm. E BASELINE DETAILS Figure 13 shows the baseline network architectures for DQN and SAC. The RAD (Laskin et al., 2020a) Crop baselines, CURL (Laskin et al., 2020b) baselines, and FERM (Zhan et al., 2020) baselines use random crop for data augmentation. The random crop crops a 142 × 142 state image to the size of 128 × 128. The contrastive encoder of CURL baselines has a size of 128 as in Laskin et al. (2020b), and that for the FERM baselines has a size of 50 as in Zhan et al. (2020). The FERM baseline’s contrastive encoder is pretrained for 1.6k steps using the expert data as in Zhan et al. (2020). The DrQ (Kostrikov et al., 2020) Shift baselines use random shift of ±4 pixels for data augmentation as in the original work. In all DrQ baselines, the number of augmentations for calculating the target K and the number of augmentations for calculating the loss M are both 2 as in Kostrikov et al. (2020). F TRAINING DETAILS We implement our experimental environments in the PyBullet simulator (Coumans & Bai, 2016). The workspace’s size is 0.4m×0.4m×0.24m. The pixel size of the visual state I is 128×128 (except for the RAD Crop baselines, CURL baselines, and FERM baselines, where I’s size is 142 × 142 and will be cropped to 128 × 128). I’s FOV is 0.6m × 0.6m. During training, we use 5 parallel environments. We implement all training in PyTorch (Paszke et al., 2017). Both DQN and SAC use soft target update with τ = 10−2. In the DQN experiments, we use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10−4. We use Huber loss (Huber, 1964) for calculating the TD loss. We use a discount factor γ = 0.95. The batch size is 32. The buffer has a capacity of 100,000 transitions. In the SAC (and SACfD) experiments, we use the Adam optimizer with a learning rate of 10−3. The entropy temperature α is initialized at 10−2. The target entropy is -5. The discount factor γ = 0.99. The batch size is 64. The buffer has a capacity of 100,000 transitions. SACfD uses the prioritized replay buffer (Schaul et al., 2015) with prioritized replay exponent of 0.6 and prioritized importance sampling exponent β0 = 0.4 as in Schaul et al. (2015). The expert transitions are given a priority bonus of d = 1. G ADDITIONAL EXPERIMENTAL RESULTS FOR EQUIVARIANT SACFD Figure 14 (a)-(b) shows the results for the experiment of Section 6.3 in Block Pulling and Object Picking environments. The Equivariant SACfD outperforms all baselines in those two environments. Figure 14 (c)-(d) shows the results for the experiment of Section 6.4 in block Pulling and Object Picking environments. Similarly as the results in Figure 9, our Equivariant SACfD outperforms both RAD and DrQ equipped with SO(2) dat augmentation. 
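For reference, the training hyperparameters stated in Appendix F can be summarized in a small configuration snippet; the values are those listed above, while the dictionary layout and key names are only an illustrative convention.

```python
# Hyperparameters from Appendix F (dictionary layout is illustrative only).
DQN_CONFIG = {
    "optimizer": "adam",
    "lr": 1e-4,
    "loss": "huber",
    "gamma": 0.95,
    "batch_size": 32,
    "buffer_size": 100_000,
    "target_update_tau": 1e-2,
}

SAC_CONFIG = {
    "optimizer": "adam",
    "lr": 1e-3,
    "init_alpha": 1e-2,            # entropy temperature
    "target_entropy": -5,
    "gamma": 0.99,
    "batch_size": 64,
    "buffer_size": 100_000,
    "target_update_tau": 1e-2,
    "num_parallel_envs": 5,
    # SACfD additions: prioritized replay (Schaul et al., 2015)
    "per_alpha": 0.6,
    "per_beta0": 0.4,
    "expert_priority_bonus": 1.0,
}
```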
H ABLATION STUDIES H.1 USING EQUIVARIANT NETWORK ONLY IN ACTOR OR CRITIC In this experiment, we investigate the effectiveness of the equivariant network in SACfD by only applying it in the actor network or the critic network. We evaluate four variations: 1) Equi Actor + Equi Critic that uses equivariant network in both the actor and the critic; 2) Equi Actor + CNN Critic that uses equivariant network solely in the actor and uses conventional CNN in the critic; 3) CNN Actor + Equi Critic that uses conventional CNN in the actor and equivariant network in the Critic; 4) CNN Actor + CNN Critic that uses the conventional CNN in both the actor and the critic. Other experimental setup mirrors Section 6.3. As is shown in Figure 15, applying the equivariant network in the actor generally helps more than applying the equivariant network in the critic (in 5 out of 6 experiments), and using the equivariant network in both the actor and the critic always demonstrates the best performance. H.2 DIFFERENT SYMMETRY GROUPS This experiment compares the equivariant networks defined in three different symmetry groups: C8, C4, and C2. We run this experiment in SACfD with the same setup as in Section 6.3. As is shown in Figure 16, the network defined in C8 generally outperforms the network defined in C4, followed by the network defined in C2. H.3 EQUIVARIANT SACFD IN NON-SYMMETRIC ENVIRONMENTS This experiments evaluates the performance of Equivariant SACfD in non-symmetric tasks where the initial orientation of the environments are fixed rather than random. (Similarly as in Section 6.5 but both the training and the evaluation environments have the fix orientation.) Specifically, in Block Pulling, the two blocks in the training environment is initialized with a fixed relative orientation; in Drawer Opening, the drawer is initialized with a fixed orientation. As is shown in Figure 17, when the environments do not contain SO(2) symmetries, the performance gain of using equivariant network is less significant. H.4 ROTATIONAL AUGMENTATION + BUFFER AUGMENTATION Section 6.4 compares our Equivariant SACfD with rotational data augmentation baselines. This experiment shows the performance of those baselines (and an extra CNN SACfD baseline that uses conventional CNN) equipped with the data augmentation buffer. As is mentioned in Section 6.2, the data augmentation baseline creates 4 extra augmented transitions using random SO(2) rotation every time a new transition is added. Figure 18 shows the result, where none of the baselines outperform our proposal in any tasks. Compared with Figure 9, the data augmentation buffer hurts RAD and DrQ because of the redundancy of the same data augmentation.
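As a concrete illustration of the SO(2) data-augmentation buffer used in Sections 6.2–6.3 and revisited in Appendix H.4, the sketch below creates the extra transitions by applying one random continuous rotation to both the image observations and the (x, y) action component. The transition layout, function names, and rotation-sign convention are assumptions for illustration, not the authors' implementation.

```python
import math
import random
import torch
import torchvision.transforms.functional as TF

def augment_transition_so2(obs, action, next_obs, n_aug=4):
    """Create n_aug extra transitions with a shared random SO(2) rotation.

    Assumed layout: obs/next_obs are (C, H, W) tensors (top-down views),
    action is (lambda, x, y, z, theta) as in Section 6.2. Reward, done flag,
    and the invariant action components are copied unchanged.
    """
    augmented = []
    for _ in range(n_aug):
        angle_deg = random.uniform(0.0, 360.0)
        theta = math.radians(angle_deg)

        # Rotate the top-down observations around the image center.
        obs_rot = TF.rotate(obs, angle_deg)
        next_obs_rot = TF.rotate(next_obs, angle_deg)

        # Rotate the (x, y) displacement with the 2D rotation matrix rho_1(g).
        x, y = action[1].item(), action[2].item()
        action_rot = action.clone()
        action_rot[1] = math.cos(theta) * x - math.sin(theta) * y
        action_rot[2] = math.sin(theta) * x + math.cos(theta) * y

        augmented.append((obs_rot, action_rot, next_obs_rot))
    return augmented
```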
1. Can you provide examples and intuitions into when G-invariant MDPs are applicable beyond visual tasks?
2. What assumptions are made on the reward and transition functions, and how do they relate to Lipschitzness?
3. Can the method handle collision-avoidance scenarios?
4. Can you demonstrate the applicability of the proposed method in higher-dimensional action spaces or ATARI domains?
5. What are the limitations of the current approach in terms of the MDPs that can be considered?
6. What are the major contributions of this work beyond applying equivariant architectures to reinforcement learning?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors present an application of equivariant neural networks to reinforcement learning. They demonstrate new DQN and SAC architectures that make use of equivariant representations in order to improve sample efficiency. Results on manipulation tasks demonstrate a sample complexity reduction compared to other techniques.

Review
Although I liked the paper, I would like to kindly ask the authors the following questions: The authors have provided a definition of G-invariant MDPs and demonstrated their effectiveness in visual tasks. This definition rather introduces restrictions on the class of MDPs considered. My question is: apart from visual tasks like those solved in this paper, can the authors provide examples and intuitions into when those assumptions hold? Is it the case that G-invariant MDPs are exclusive to image state representations? A similar question applies to the action spaces. What assumptions are we really making on the reward and transition functions? Do these relate to Lipschitzness at all? Can we handle collision-avoidance scenarios, for example? In the experiments, the authors have considered relatively low-dimensional action spaces. Can we please see a demonstration in higher-dimensional action spaces? Would the method apply to ATARI domains as well? Or is it that the type of discontinuities present in ATARI prohibits the application of the proposed method? It would be great to have a section discussing the limitations in terms of the MDPs that could be considered when using the current approach. Although a minor point, I am a bit confused about the novelty of the current approach beyond applying equivariant architectures to reinforcement learning. Can the authors please help me understand the major contributions of this work?
ICLR
Title Few-Shot One-Class Classification via Meta-Learning Abstract Although few-shot learning and one-class classification (OCC), i.e. learning a binary classifier with data from only one class, have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot OCC problem and presents a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and learns a model initialization particularly suited for learning few-shot OCC tasks. This is done by explicitly optimizing for a parameter initialization which only requires a few gradient steps with one-class minibatches to yield a performance increase on class-balanced test data. We provide a theoretical analysis that explains why our approach works in the few-shot OCC scenario, while other meta-learning algorithms, including MAML, fail. Empirical results on six datasets from the image and time-series domains show that our method substantially outperforms both, classical OCC and few-shot classification approaches, and demonstrate the ability to quickly learn unseen tasks from only few normal class samples. Moreover, we successfully learn anomaly detectors for a real-world application on sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine using a few examples from the normal class. 1 INTRODUCTION The anomaly detection (AD) task (Chandola et al., 2009; Aggarwal, 2015) consists in differentiating between normal and abnormal data samples. AD applications are common in various domains that involve different data types, including medical diagnosis (Prastawa et al., 2004), cybersecurity (Garcia-Teodoro et al., 2009) and quality control in industrial manufacturing (Scime & Beuth, 2018). Due to the rarity of anomalies, the data underlying AD problems exhibits high class-imbalance. Therefore, AD problems are usually formulated as one-class classification (OCC) problems (Moya et al., 1993), where either only a few or no anomalous data samples are available for training the model (Khan & Madden, 2014). While most of the developed approaches (Khan & Madden, 2014) require a substantial amount of normal data to yield good generalization, in many real-world applications, e.g. in industrial manufacturing, only small datasets are available. Data scarcity can have many reasons: data collection itself might be expensive, e.g. in healthcare, or happens only gradually, such as in a cold-start situation. To enable learning from few examples, various viable meta-learning approaches (Lake et al., 2011; Ravi & Larochelle, 2016; Finn et al., 2017) have been developed. However, they rely on having examples from each of the classification task’s classes, which prevents their application to OCC tasks. To the best of our knowledge, the few-shot OCC (FS-OCC) problem has only been addressed by Kozerawski & Turk (2018) in the image domain. Our contribution is threefold: Firstly, we show that classical OCC approaches fail in the few-shot data regime. Secondly, we provide a theoretical analysis showing that classical gradient-based metalearning algorithms do not yield initializations suitable for OCC tasks and that second-order derivatives are needed to optimize for such initializations. 
Thirdly, we propose one-class model-agnostic meta-learning (OC-MAML), a data-domain-agnostic algorithm that quickly learns FS-OCC tasks, to serve as a first, simple and strong baseline for future research in the understudied FS-OCC problem. OC-MAML builds upon model-agnostic meta-learning (MAML) (Finn et al., 2017), which is a meta-learning method that explicitly optimizes for few-shot learning and yields a model initialization that enables quick adaptation to a new task using only few of its datapoints. Like MAML, OCMAML yields model parameters that are easily adaptable to unseen tasks. The difference is that the model initialization delivered by OC-MAML is particularly suited for adaptation to OCC tasks and hence requires few examples from only one class of the target task for good adaptation. We provide a theoretical analysis that shows that OC-MAML explicitly optimizes for parameter initializations which yield performance increase on class-balanced test data by taking only a few gradient steps with one-class minibatches. This is done by maximizing the inner product of gradients computed on different minibatches with different class-imbalance rates. While recent meta-learning approaches focused on the few-shot learning problem, i.e. learning to learn with few examples, we extend their use to the OCC problem, i.e. learning to learn with examples from only one class. We empirically validate our theoretical analysis on six datasets from the image and time-series domains, and demonstrate the robustness and maturity of our approach for real-world application by successfully testing it on a real-world dataset of sensor readings recorded during manufacturing of metal workpieces with a CNC milling machine. 2 APPROACH 2.1 PROBLEM STATEMENT Our goal is to learn a one-class classification (OCC) task using only a few examples from the normal class. In the following, we first discuss the unique challenges of the few-shot one-class classification (FS-OCC) problem. Subsequently, we formulate the FS-OCC problem as a meta-learning problem. In order to perform one-class classification, i.e. differentiate between in-class and out-of-class examples, approximating a generalized decision boundary for the normal class is necessary. Learning such a class decision boundary in the few-shot regime can be especially challenging for the following reasons. On the one hand, if the model overfits to the few available datapoints, the class decision boundary would be too restrictive, which would prevent generalization to unseen examples. As a result, some normal samples would be predicted as anomalies. On the other hand, if the model overfits to the majority class, e.g. predicting almost everything as normal, the class decision boundary would overgeneralize, and out-of-class (anomalous) examples would not be detected. In our meta-learning problem formulation, we assume access to data from classification tasks T traini sampled from a task distribution p(T ) related to our target OCC tasks. In the few-shot classification context, N -way K-shot learning tasks are usually used to test the learning procedure, in our case the model initialization, yielded by the meta-learning algorithm. An N -way K-shot classification task includesK examples from each of theN classes that are used for learning this task, after which the trained classifier is tested on a disjoint set of data (Vinyals et al., 2016). 
When the target task is an OCC task, only examples from one class are available for training, which can be viewed as a 1-way K-shot classification task. In order to align with the AD problem, the available examples have to belong to the normal (majority) class, which usually has a lower variance than the anomalous (minority) class. This problem formulation is a prototype for a practical use case where an application-specific anomaly detector is needed and only few normal class examples are available. 2.2 MODEL-AGNOSTIC META-LEARNING Model-agnostic meta-learning (MAML) (Finn et al., 2017) is an optimization-based meta-learning algorithm upon which we build in our present work. MAML learns a model initialization that enables quick adaptation to unseen tasks using only few data samples. For that, MAML trains a model explicitly for few-shot learning on tasks Ti coming from the same task distribution p(T ) as the unseen target task Ttest. In order to assess the model’s adaptation ability to unseen tasks, the available tasks are divided into mutually disjoint task sets: one for meta-training Str, one for metavalidation Sval and one for meta-testing Stest. Each task Ti is divided into two disjoint sets of data, each of which is used for a particular MAML operation: Dtr is used for adaptation and Dval is used for validation, i.e. evaluating the adaptation. The adaptation procedure of a model fθ to a particular task Ti consists in taking one (or more) gradient descent step(s) using few datapoints sampled from Dtr. We also refer to the adaptation updates as inner loop updates. A good measure for the suitability of the initialization parameters θ for few-shot adaptation to a considered task Ti is the loss LvalTi (fθ′i ), which is computed on the validation set D val using the task-specific adapted model fθ′i . In order to optimize for few-shot learning, the model parameters θ are updated by minimizing the aforementioned loss across all meta-training tasks. This update, called the outer loop update, can be expressed as: θ ← θ − β∇θ ∑ Ti∼p(T ) LvalTi (fθ′i ), (1) where β is the learning rate used for the outer loop. In order to avoid meta-overfitting, i.e. overfitting to the meta-training tasks, model selection can be done via conducting validation episodes using tasks from Sval throughout meta-training. At meta-test time, the few-shot adaptation to unseen tasks from Stest is evaluated. We note that, in the case of few-shot classification, K datapoints from each class are sampled from Dtr for the adaptation, during training, validation and testing. 2.3 ONE-CLASS MODEL-AGNOSTIC META-LEARNING 2.3.1 ALGORITHM The primary contribution of our work is to show that second-order gradient-based meta-learning is a viable approach to the underexplored few-shot one-class classification (FS-OCC) problem. We achieve this by adequately modifying the objective of the adaptation step, i.e. the inner loop updates, of the MAML algorithm. We choose to build upon gradient-based meta-learning algorithms, because these were shown to be universal learning algorithm approximators (Finn & Levine, 2017), which means that they could approximate a learning algorithm tailored for FS-OCC. As explained in Section 2.2, MAML optimizes explicitly for few-shot adaptation by creating and using auxiliary tasks that have the same characteristic as the target tasks, in this case tasks that include only few datapoints for training. 
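As a compact illustration of the adaptation and meta-update described in Section 2.2 (Eq. 1), the sketch below performs one second-order MAML outer step for a model given in functional form. The `model_fn`/`loss_fn` interfaces, the single inner step, and the plain-SGD outer update are illustrative simplifications (the experiments in this paper use Adam in the outer loop); OC-MAML changes only how the inner-loop batch is sampled.

```python
import torch

def maml_meta_step(params, tasks, alpha, beta, model_fn, loss_fn):
    """One MAML outer update (Eq. 1) for a model called as model_fn(x, params).

    params: list of tensors with requires_grad=True (the initialization theta).
    tasks:  iterable of (x_tr, y_tr, x_val, y_val) minibatches.
    """
    meta_loss = 0.0
    for x_tr, y_tr, x_val, y_val in tasks:
        # Inner loop: one adaptation step on D^tr; keep the graph so the
        # outer gradient can flow through the adaptation (second order).
        inner_loss = loss_fn(model_fn(x_tr, params), y_tr)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - alpha * g for p, g in zip(params, grads)]

        # Outer loop: evaluate the adapted parameters on D^val.
        meta_loss = meta_loss + loss_fn(model_fn(x_val, adapted), y_val)

    # Update the initialization theta with the summed validation losses.
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p -= beta * g

# Illustrative usage with a linear model f_theta(x) = x @ W + b on one dummy task.
W = torch.zeros(5, 2, requires_grad=True)
b = torch.zeros(2, requires_grad=True)
dummy_task = (torch.randn(8, 5), torch.randint(2, (8,)),
              torch.randn(8, 5), torch.randint(2, (8,)))
maml_meta_step([W, b], [dummy_task], alpha=0.01, beta=0.001,
               model_fn=lambda x, p: x @ p[0] + p[1],
               loss_fn=torch.nn.functional.cross_entropy)
```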
Analogously, OC-MAML trains explicitly for quick adaptation to OCC tasks by creating OCC auxiliary tasks for meta-training. Concretely, this is done by modifying the class-imbalance rate (CIR) of the inner loop data batches to match the one of the test task. The meta-training procedure of OC-MAML is described in Algorithm 1 in Appendix A. As described in Section 1, OCC problems are binary classification scenarios where only few or no minority class samples are available. In order to address both of theses cases, we introduce a hyperparameter (c) which sets the CIR of the batch sampled for the inner updates. Hereby, c gives the percentage of the samples belonging to the minority (anomalous) class w.r.t. the total number of samples, e.g. setting c = 0% means only majority class samples are contained in the data batch. We focus on this latter extreme case, where no anomalous samples are available for learning. The key difference between MAML and OC-MAML is in the sampling operation of the inner loop batch (operation 5 in Algorithm 1 in Appendix A). By reducing the size of the batch used for the adaptation (via the hyperparameter K), MAML trains for few-shot adaptation. OC-MAML extends this approach to train for few-shot one-class adaptation by reducing the CIR of the batch used for adaptation (via the hyperparameter c). In order to evaluate the performance of the adapted model on both classes, we use a class-balanced validation batch B ′ for the outer loop updates. This way, we maximize the performance of the model in recognizing both classes after having seen examples from only one class during adaptation. Using OCC tasks for adaptation during meta-training favors model initializations that enable a quick adaptation to OCC tasks over those that require classbalanced tasks. From a representation learning standpoint, OC-MAML learns representations that are not only broadly suitable for the data underlying p(T ), but also particularly suited for OCC tasks. In Section 2.3.2, we discuss the unique characteristics of the model initializations yielded by OC-MAML and explain why adapting first-order meta-learning algorithms to the OCC scenario does not yield the targeted results. 2.3.2 THEORETICAL ANALYSIS: WHY DOES OC-MAML WORK ? In this section we give a theoretical explanation of why OC-MAML works and why it is a more suitable approach than MAML for the few-shot one-class classification (FS-OCC) problem. To address the latter problem, we aim to find a model parameter initialization, from which adaptation using few data examples from only one class yields a good performance on both classes, i.e. good generalization to the class-balanced task. We additionally demonstrate that adapting first-order metalearning algorithms, e.g. First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018), to the OCC scenario as done in OC-MAML, does not yield initializations with the desired characteristics, as it is the case for OC-MAML. gMAML = g2 − αH2g1 − αH1g2 +O(α2) = g2 − α ∂(g1.g2) ∂φ1 +O(α2) (2) By using a Taylor series expansion, Nichol & Schulman (2018) approximate the gradient used in the MAML update. For simplicity of exposition, in Equation 2 we give their results for the case where only 2 gradient-based updates are performed, i.e. one adaptation update on a minibatch including K datapoints from Dtr and one meta-update on a minibatch including Q datapoints from Dval. 
We use the same notation as Nichol & Schulman (2018), where $g_i$ and $H_i$ denote the gradient and Hessian computed on the $i$-th minibatch at the initial parameter point $\phi_1$, and $\alpha$ gives the learning rate. Here it is assumed that the same learning rate is used for the adaptation and meta-updates. In Equation 2, Nichol & Schulman (2018) demonstrate that MAML partially optimizes for increasing the inner product of the gradients computed on different minibatches. In fact, when gradients from different minibatches have a positive inner product, taking a gradient step using one of them yields a performance increase on the other (Nichol & Schulman, 2018). Equation 2 also holds for OC-MAML. However, in OC-MAML the minibatches 1 and 2 have different class-imbalance rates (CIRs), since the first minibatch includes data from only one class and the second minibatch is class-balanced. Hence, OC-MAML optimizes for increasing the inner product of the gradients computed on different minibatches with different CIRs, while MAML does the same but for different minibatches with the same CIR, namely c = 50%. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or a few) gradient step(s) with one-class minibatch(es) results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect (Figure 1 in Appendix A). When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different CIRs for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks.

A natural question is whether applying our modification of MAML, i.e. using only data from the normal class for adaptation during meta-training, to other gradient-based meta-learning algorithms would yield the same desired effect. We investigate this for First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018). FOMAML is a first-order approximation of MAML which ignores the second-derivative terms. Reptile is also a first-order meta-learning algorithm that learns an initialization that enables fast adaptation to test tasks using only few examples from each class. In the following, we demonstrate that adapting the FOMAML and Reptile algorithms to the one-class classification scenario, to which we will refer as OC-FOMAML and OC-Reptile, does not result in optimizing for an initialization suitable for OCC tasks, as is the case for OC-MAML. We note that for OC-Reptile, the first (N − 1) batches contain examples from only one class and the last (N-th) batch is class-balanced. The approximated gradients used in the FOMAML and Reptile updates are given by Equations 3 and 4 (Nichol & Schulman, 2018), respectively:
$$g_{\text{FOMAML}} = g_2 - \alpha H_2 g_1 + O(\alpha^2) \qquad (3)$$
$$g_{\text{Reptile}} = g_1 + g_2 - \alpha H_2 g_1 + O(\alpha^2) \qquad (4)$$
We note that these equations also hold for OC-FOMAML and OC-Reptile. By taking the expectation over minibatch sampling $\mathbb{E}_{\tau,1,2}$ for a meta-training task $\tau$ and two class-balanced minibatches, Nichol & Schulman (2018) establish that $\mathbb{E}_{\tau,1,2}[H_1 g_2] = \mathbb{E}_{\tau,1,2}[H_2 g_1]$. Averaging the two sides of the latter equation results in the following:
$$\mathbb{E}_{\tau,1,2}[H_2 g_1] = \tfrac{1}{2}\,\mathbb{E}_{\tau,1,2}[H_1 g_2 + H_2 g_1] = \tfrac{1}{2}\,\mathbb{E}_{\tau,1,2}\!\left[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\right] \qquad (5)$$
Equation 5 shows that, in expectation, FOMAML and Reptile, like MAML, optimize for increasing the inner product of the gradients computed on different minibatches with the same CIR.
However, when the minibatches 1 and 2 have different CIRs, which is the case for OC-FOMAML and OC-Reptile, $\mathbb{E}_{\tau,1,2}[H_1 g_2] \neq \mathbb{E}_{\tau,1,2}[H_2 g_1]$ and therefore $\mathbb{E}_{\tau,1,2}[H_2 g_1] \neq \tfrac{1}{2}\,\mathbb{E}_{\tau,1,2}\!\left[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\right]$. Hence, even though OC-FOMAML and OC-Reptile, like OC-MAML, use minibatches with different CIRs for meta-training, in contrast to OC-MAML they do not optimize for increasing the inner product of the gradients computed on different minibatches with different CIRs. The second-derivative term $H_1 g_2$ is thus necessary to optimize for an initialization from which a performance increase on a class-balanced task is obtained by taking a few gradient steps using only data from one class.

3 RELATED WORKS

Our proposed method addresses the few-shot one-class classification (FS-OCC) problem, i.e. solving binary classification problems using only few datapoints from only one class. To the best of our knowledge, this problem was only addressed by Kozerawski & Turk (2018), and exclusively in the image data domain. Kozerawski & Turk (2018) train a feed-forward neural network (FFNN) to learn a transformation from feature vectors, extracted by a CNN pre-trained on ILSVRC 2014 (Russakovsky et al., 2015), to SVM decision boundaries. Hereby, the FFNN is trained on ILSVRC 2012. At test time, an SVM boundary is inferred by using one image of one class from the test task, which is then used to classify the test examples. This approach is specific to the image domain since it relies on the availability of very large, well-annotated datasets and uses data augmentation techniques specific to the image domain, e.g. mirroring. OC-MAML offers a more general approach to FS-OCC since it is data-domain-agnostic. In fact, it does not require a pre-trained feature extraction model, which might not be available for some data domains, e.g. sensor readings.

3.1 FEW-SHOT CLASSIFICATION

Recent few-shot classification approaches may be broadly categorized into optimization-based methods (Ravi & Larochelle, 2016; Finn et al., 2017; Nichol & Schulman, 2018) and metric-based methods (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). The optimization-based approaches aim to learn an optimization algorithm (Ravi & Larochelle, 2016) and/or a parameter initialization (Finn et al., 2017; Nichol & Schulman, 2018) that is tailored for few-shot learning. Metric-based techniques learn a metric space where samples belonging to the same class are close together, which facilitates few-shot classification (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). Rusu et al. (2018) develop a hybrid method that combines the advantages of both categories. Prior meta-learning approaches to few-shot classification addressed the N-way K-shot classification problem described in Section 2.1, i.e. they only consider neatly class-balanced test classification tasks. Optimization-based techniques require these per-class samples to finetune the learned initialization. In the metric-based methods, these samples are necessary to compute class prototypes (Snell et al., 2017), embeddings needed for verification (Koch, 2015), or relation scores (Sung et al., 2018). Our approach, however, requires only samples from one of the test task's classes for learning. Moreover, while the evaluation of the previous approaches in the classification context was limited to the image domain, we additionally validate OC-MAML on datasets from the time-series domain.
3.2 ONE-CLASS CLASSIFICATION

Classical OCC approaches rely on SVMs (Schölkopf et al., 2001; Tax & Duin, 2004) to distinguish between normal and abnormal samples. Pal & Foody (2010) show that the classification accuracy of SVMs decreases with an increasing number of input features, particularly when small datasets are available for training. Hybrid approaches combining SVM-based techniques with feature extractors were developed to compress the input samples into lower-dimensional representations (Xu et al., 2015; Erfani et al., 2016; Andrews et al., 2016). Fully deep methods that jointly perform the feature extraction step and the OCC step have also been developed (Ruff et al., 2018). Another category of approaches to OCC uses the reconstruction error of autoencoders (Hinton & Salakhutdinov, 2006) trained with only normal class examples as an anomaly score (Hawkins et al., 2002; An & Cho, 2015; Chen et al., 2017). Yet, determining a decision threshold for such an anomaly score requires labeled data from both classes. Further recent techniques rely on GANs (Goodfellow et al., 2014) to perform OCC (Schlegl et al., 2017; Ravanbakhsh et al., 2017; Sabokrou et al., 2018). The aforementioned hybrid and fully deep approaches require a considerable amount of data from the OCC task to train the typically highly parametrized models to learn features specific to the normal class. By leveraging auxiliary OCC tasks and explicitly optimizing for few-shot learning, OC-MAML learns a representation that can be adapted to unseen OCC tasks with only a few examples.

4 EXPERIMENTAL EVALUATION

The conducted experiments aim to address the following key questions: (a) How does OC-MAML perform compared to classical one-class classification (OCC) approaches in the few-shot (FS) data regime? (b) Does using OCC tasks for meta-training improve the adaptation to such tasks, as is the case for few-shot tasks (Finn et al., 2017), and do our theoretical findings (Section 2.3.2) about the differences between the MAML and OC-MAML initializations hold in practice? (c) How does OC-MAML compare to the first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Section 2.3.2)? (d) How does OC-MAML perform in FS-OCC problems from the time-series domain, which is understudied in the few-shot learning literature?

4.1 BASELINES AND DATASETS

This section provides information about the baselines and datasets we use in our experimental evaluation. We compare OC-MAML to the classical one-class classification (OCC) approaches One-Class SVM (OC-SVM) (Schölkopf et al., 2001) and Isolation Forest (IF) (Liu et al., 2008) (Question (a)), which we fit to the adaptation set of the test task. Here, we apply PCA to reduce the dimensionality of the data by choosing the minimum number of eigenvectors so that at least 95% of the variance is preserved, as done by Erfani et al. (2016). We additionally tune the inverse length scale γ by using 10% of the test set, as done by Ruff et al. (2018), which gives OC-SVM a supervised advantage compared to the other methods. For a fairer comparison to OC-MAML, where these latter methods also benefit from the meta-training and meta-validation tasks, we additionally train them on embeddings inferred by feature extractors learned on these tasks. Here, we train two types of feature extractors on the meta-training tasks: one is trained in a Multi-Task-Learning (MTL) setting and the other is trained using the "Finetune" baseline (FB) (Triantafillou et al., 2019).
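Before continuing with the baseline descriptions, a minimal scikit-learn sketch of the classical OCC baselines described above (PCA retaining at least 95% of the variance, followed by OC-SVM or Isolation Forest). The γ default and function names are placeholders, since γ is tuned on held-out data as explained above.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def fit_occ_baselines(x_normal, gamma=0.1):
    """Fit OC-SVM and IF on the few normal adaptation samples.

    PCA keeps the minimum number of components explaining >= 95% of the
    variance (Erfani et al., 2016); gamma would be tuned on held-out data.
    """
    pca = PCA(n_components=0.95).fit(x_normal)
    z = pca.transform(x_normal)

    oc_svm = OneClassSVM(kernel="rbf", gamma=gamma).fit(z)
    iforest = IsolationForest().fit(z)
    return pca, oc_svm, iforest

def predict_anomalies(pca, model, x_test):
    # scikit-learn convention: +1 = inlier (normal), -1 = outlier (anomalous).
    return model.predict(pca.transform(x_test))
```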
FB is a few-shot classification approach, where one multi-class classifier is trained with all the classes available in all meta-training tasks, after which, an output layer is finetuned with the few available examples of the target task on top of the learned feature extractor. Moreover, we compare OC-MAML to class-balanced meta-learning algorithms, namely MAML, FOMAML and Reptile, as well as firstorder meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Questions (b) and (c)). Experimental details are provided in Appendix B. We evaluate our approach on six datasets, including 3 from the image domain and 3 from the timeseries domain. In the image domain we use 2 few-shot learning benchmark datasets, namely MiniImageNet (Ravi & Larochelle, 2016) and Omniglot (Lake et al., 2015), and 1 OCC benchmark dataset, the Multi-Task MNIST (MT-MNIST) dataset. To adapt the datasets to the OCC scenario, we create binary classification tasks, where the normal class contains examples from one class of the initial dataset and the anomalous class contains examples from multiple other classes. We create 9 different datasets based on MNIST, where the meta-testing task of each dataset consists in differentiating between a certain digit and the others. We use the same (10th) task for meta-validation in all datasets. Since most of the time-series datasets for anomaly detection include data from only one domain and only one normal class, adapting them to the meta-learning problem formulation where several different tasks are required is not possible. Therefore, we create two synthetic timeseries (STS) datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks, to assess the suitability of OC-MAML to time-series data (Question (d)). The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). We propose the STS-datasets as benchmark datasets for the few-shot (one-class) classification problem in the time-series domain. Finally, we validate OC-MAML on a real-world anomaly detection dataset of sensor readings recorded during industrial manufacturing using a CNC milling machine. Various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) were performed on ca. 100 aluminium workpieces to record the CNC Milling Machine Data (CNC-MMD). In Appendix C, we give details about all 6 datasets, the task creation procedures adopted to adapt them to the OCC case, as well as the generation of the STS-datasets. 1Our OC-MAML implementation and experimental evaluation will be made public upon paper acceptance. 4.2 RESULTS AND DISCUSSION Our results of the comparison between OC-MAML and the classical OCC approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 1. OC-MAML consistently outperforms all baselines across all datasets and on both adaptation set sizes. While FB and MTL yield relatively good performance when adapting to class-balanced tasks (c = 50%), they completely fail in adapting to OCC tasks. On the MT-MNIST dataset and the STS-Sawtooth dataset, some of the baselines that combine a feature extractor and a shallow model yield high performance, when the adaptation set size is K = 10. Our results of the comparison between OC-MAML and the classical few-shot classification approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 2. 
The results on the other 8 MT-MNIST datasets and on the STS-Sine dataset are presented in Appendix D and are consistent with the results in Tables 1 and 2. We observe that OC-MAML consistently outperforms the other meta-learning algorithms with a substantial margin on all datasets and for both adaptation set sizes. This confirms our theoretical findings (Section 2.3.2) that the initializations yielded by class-balanced meta-learning algorithms as well as OC-FOMAML and OC-Reptile are not optimized for adaptation using data from only one class. These latter yield test accuracies close to 50% showing that they overfitted to the normal class (Table 2 (top)). In an attempt to increase the performance of the other meta-learning algorithms in the OCC scenario, we add a batch normalization (BN) (Ioffe & Szegedy, 2015) layer immediately before the output layer of the network. This BN operation standardizes the latent features using the mean and standard deviation of the K datapoints available for adaptation, which all belong to the normal class. As a result, this layer would output features with mean close to 0 and standard deviation close to 1 for normal class examples. In contrast, anomalous examples would yield features with other statistics, which simplifies their detection. We hypothesize that by enforcing a mapping of the data to a latent space standardized only by examples from the normal class, the anomalies would clearly fall out of the normal-class-distribution, making their detection easier. We note that the BN layer is used during meta-training as well. Hereby, we fix the learnable scaling (γ) and centering (β) parameters of the BN layer to 1 and 0, respectively, to prevent it from shifting the standard distribution. We find that this simple modification increases the performance of the other meta-learning algorithms on all image datasets. However, OC-MAML without BN still yields the highest results, with only one exception. The higher performance increase when a bigger adaptation set is available (K = 10) confirms our hypothesis that enforcing a mapping of the data to a latent space standardized only by examples from the normal class makes the detection of the anomalies easier. In fact, using more examples yields more accurate mean and standard deviation measures, which enables a better approximation of the distribution of the normal class, and hence leads to an improved detection of the anomalies. We also tested these algorithms on networks including a trainable BN layer after each convolutional layer. This yielded comparable results to just adding one non-trainable BN layer before the output layer. Even though some of the meta-learning algorithms and OCC approaches sometimes outperform OC-MAML (Tables 2, 5, 8, 9), they do not consistently yield high performance in learning FS-OCC tasks across several datasets, as it is the case for OC-MAML. We note that this happens only on few MT-MNIST datasets and explain that by the high overlap between the digit classes underlying the meta-training and meta-testing tasks in the MT-MNIST datasets. The results of OC-MAML experiments on the CNC-MMD dataset are presented in Table 3. We compute F1-scores for evaluation since the test sets are class-imbalanced. OC-MAML consistently achieves high F1-scores across the 6 different milling processes. This high model performance on the minority class, i.e. in detecting anomalous data samples, is reached by using only K = 10 non-anomalous data samples (c = 0%). 
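The batch-normalization modification described above can be sketched as follows; the module name and feature dimension are illustrative, and `affine=False` corresponds to fixing the scaling γ to 1 and the centering β to 0.

```python
import torch.nn as nn

class OneClassBNHead(nn.Module):
    """Classification head with a non-trainable batch-norm layer before the
    output layer, as in the modification described above.
    """

    def __init__(self, feat_dim, n_classes=2):
        super().__init__()
        # affine=False fixes gamma = 1 and beta = 0, so the layer only
        # standardizes the latent features with the batch statistics.
        self.bn = nn.BatchNorm1d(feat_dim, affine=False)
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, features):
        return self.out(self.bn(features))
```

In training mode, the layer standardizes the latent features with the statistics of the K normal-class adaptation samples, so anomalous examples tend to fall outside the standardized distribution and become easier to detect.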
These results show that OC-MAML yielded a parameter initialization suitable for learning OCC tasks in the time-series data domain. Moreover, the high performance reached shows the maturity of this method for industrial real-world applications.

5 CONCLUSION

This work addressed the novel and challenging problem of few-shot one-class classification (FS-OCC) and introduced OC-MAML, a robust meta-learning approach to FS-OCC problems that learns model parameters which are easily adaptable to unseen tasks using few examples from only one class. We demonstrated the viability of our method on six datasets from the image and time-series domains, including a real-world dataset of industrial sensor readings, where it significantly outperformed classical OCC and few-shot classification methods. Future works could investigate an unsupervised approach to FS-OCC, as done by Hsu et al. (2018) in the class-balanced scenario.

A OC-MAML: ALGORITHM AND PARAMETER INITIALIZATION

In this section we present the pseudo-code of OC-MAML in Algorithm 1 and a diagram visualizing the parameter initializations yielded by MAML and OC-MAML.

Algorithm 1: Few-shot one-class classification with OC-MAML
Require: S^tr: set of meta-training tasks
Require: α, β: learning rates
Require: K, Q: batch sizes for the inner and outer updates
Require: c: CIR for the inner updates
 1: Randomly initialize θ
 2: while not done do
 3:   Sample a batch of tasks Ti from S^tr; let {D^tr, D^val} = Ti
 4:   for all sampled Ti do
 5:     Sample K datapoints B = {x^(l), y^(l)} from D^tr such that CIR = c
 6:     Initialize θ'_i = θ
 7:     for the number of adaptation steps do
 8:       Compute the adaptation loss L^tr_Ti(f_θ'_i) using B
 9:       Compute adapted parameters with gradient descent: θ'_i = θ'_i − α ∇_θ'_i L^tr_Ti(f_θ'_i)
10:     end for
11:     Sample Q datapoints B' = {x'^(l), y'^(l)} from D^val
12:     Compute the outer-loop loss L^val_Ti(f_θ'_i) using B'
13:   end for
14:   Update θ: θ ← θ − β ∇_θ Σ_Ti L^val_Ti(f_θ'_i)
15: end while
16: return the meta-learned parameters θ

Figure 1 visualizes the adaptation to a binary classification test task Ts from the parameter initializations yielded by OC-MAML and MAML, denoted by θ_OC-MAML and θ_MAML, respectively. θ*_s,CB denotes the optimal parameters for Ts. Taking a gradient step using a one-class adaptation set D_s,OC (gradient direction denoted by ∇L_s,OC) yields a performance increase on Ts when starting from the OC-MAML parameter initialization. In contrast, when starting from the parameter initialization reached by MAML, a class-balanced adaptation set D_s,CB (gradient direction denoted by ∇L_s,CB) is required for a performance increase on Ts.

B EXPERIMENT DETAILS

For MT-MNIST, we use the same 4-block convolutional architecture as used by Hsu et al. (2018) for their multi-class MNIST experiments. However, we exclude the batch normalization (Ioffe & Szegedy, 2015) layers, as we want to assess their effect in the OCC case, as discussed in Section 4.2. Each convolutional block includes a 3 x 3 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The same model architecture is used for the MiniImageNet experiments, as done by Ravi & Larochelle (2016). For the Omniglot experiments, we use the same architecture used by Finn et al. (2017). We also do not include the batch normalization layers for the two latter datasets. On the STS datasets, the model architecture used is composed of 3 modules, each including a 5 x 5 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity.
The model architecture used for the CNC-MMD experiments is composed of 4 of these aforementioned modules, except that the convolutional layers in the last two modules include 64 filters. The last layer of all architectures is a linear layer followed by softmax. We note that in the experiments on the time-series datasets (STS and CNC-MMD) 1-D convolutional filters are used. Table 4 shows the hyperparameters used in the experiments of each model on the different datasets. We note that we did not fix the outer loop size Q in the experiments on the CNC-MMD dataset, because the sizes and CIRS of the validation sets Dval differ across the different tasks. For the meta-learning algorithms, including OC-MAML, we used vanilla SGD in the inner loop and the Adam optimizer (Kingma & Ba, 2014) in the outer loop, as done by Finn et al. (2017). The MTL and FB baselines are also trained with the Adam optimizer. In the following, we provide details about the meta-training procedure adopted in the meta-learning experiments. We use disjoint sets of data for adaptation (Dtr) and validation (Dval) on the metatraining tasks, as it was empirically found to yield better final performance (Nichol & Schulman, 2018). Hereby, the same sets of data are used in the OC-MAML and baseline experiments. In the MT-MNIST, Omniglot, MiniImageNet and STS experiments, the aforementioned sets of data are class-balanced. The sampling of the batch used for adaptation B ensures that this latter has the appropriate CIR (c = 50% for MAML, FOMAML and Reptile, and c = ctarget for OC-MAML, OC-FOMAML and OC-Reptile). For the one-class meta-learning algorithms, ctarget = 0%, i.e. no anomalous samples of the target task are available, sothat only normal examples are sampled from Dtr during meta-training. In order to ensure that class-balanced and one-class meta-learning algorithms are exposed to the same data during meta-training, we move the anomalous examples from the adaptation set of data (Dtr) to the validation set of data (Dval). We note that this is only done in the experiments using one-class meta-learning algorithms. During meta-training, meta-validation episodes are conducted to perform model selection. In order to mimic the adaptation to unseen FS-OCC tasks with CIR c = ctarget at test time, the CIR of the batches used for adaptation during meta-validation episodes is also set to c = ctarget. We note that the hyperparameter K denotes the total number of datapoints, i.e. batch size, used to perform the adaptation updates, and not the number of datapoints per class as done by Finn et al. (2017). Hence, a task with sizeK = 10 and CIR c = 50% is equivalent to a 2-way 5-shot classification task. In the following, we provide details about the adaptation to the target task(s) and the subsequent evaluation. In the MT-MNIST and MiniImageNet experiments, we randomly sample 20 adaptation sets from the target task(s)’ data, each including K examples with the CIR corresponding to the experiment considered. After each adaptation episode conducted using one of these sets, the adapted model is evaluated on a disjoint class-balanced test set that includes 4,000 images for MT-MNIST and 600 for MiniImageNet. We note that the samples included in the test sets of the test tasks are not used nor for meta-training neither for meta-validation. This results in 20 and 400 (20 adaptation sets created from each of the 20 test classes) different test tasks for MT-MNIST and MiniImageNet, respectively. 
All the results presented give the mean over all adaptation episodes. Likewise, in the STS experiments, we evaluate the model on 10 different adaptation sets from each of the 5 test tasks. In the CNC-MMD experiments, the 30 tasks created from the target operation are used for adaptation and subsequent evaluation. For each of these target tasks, we randomly sample K datapoints belonging to the normal class that we use for adaptation, and use the rest of the datapoints for testing. We do this 5 times for each target task, which results in 150 testing tasks. For MTL and FB baselines, as well as all the baseline combining these model with shallow models, i.e. IF and OC-SVM, we use the meta-validation task(s) for model choice, like in the meta-learning experiments. For the MTL baseline, for each validation task, we finetune a fully connected layer on top of the shared multi-task learned layers, as it is done at test time. C DATASETS AND TASK CREATION PROCEDURES In this Section we provide information about the datasets used and the task creation procedures. Multi-task MNIST (MT-MNIST): We derive 10 binary classification tasks from the MNIST dataset (LeCun et al., 2010), where every task consists in recognizing one of the digits. This is a classical one-class classification benchmark dataset. For a particular task Ti, images of the digit i are labeled as normal samples, while out-of-distribution samples, i.e. the other digits, are labeled as anomalous samples. We use 8 tasks for meta-training, 1 for meta-validation and 1 for meta-testing. Hereby, images of digits to be recognized in the validation and test tasks are not used as anomalies in the meta-training tasks. This ensures that the model is not exposed to normal samples from the test task during meta-training. Moreover, the sets of anomalous samples of the meta-training, meta-validation and meta-testing tasks are mutually disjoint. We conduct experiments on 9 MTMNIST datasets, each of which involves a different target task (T0 − T8). The task T9 is used as a meta-validation task across all experiments. MiniImageNet: This dataset was proposed by Ravi & Larochelle (2016) and includes 64 classes for training, 16 for validation and 20 for testing, and is a classical challenging benchmark dataset for few-shot learning. To adapt it to the few-shot one-class classification setting, we create 64 binary classification tasks for meta-training, each of which consists in differentiating one of the training classes from the others, i.e. the anomalous examples of a task Ti are randomly sampled from the 63 classes with labels different from i. We do the same to create 16 meta-validation and 20 meta-testing tasks using the corresponding classes. Omniglot: This dataset was proposed by Lake et al. (2015) and includes 20 instances of 1623 handwritten characters from 50 different alphabets. We generate our meta-training and meta-testing tasks based on the official data split (Lake et al., 2015), where 30 alphabets are reserved for training and 20 for evaluation. For each character class, we create a binary classification task, which consists in differentiating between this character and other characters from the same set (meta-training or meta-testing), i.e. the anomalous examples of a task Ti are randomly sampled from the remaining characters. By removing 80 randomly sampled tasks from the meta-training tasks, we create the meta-validation tasks set. 
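To summarize how the adaptation batches described in this appendix are formed, the helper below samples a K-sample inner-loop batch with a given class-imbalance rate c (operation 5 of Algorithm 1). The pool data structures (lists of labeled examples) are an illustrative assumption.

```python
import random

def sample_adaptation_batch(normal_pool, anomalous_pool, k, cir):
    """Sample the K-sample inner-loop batch B with class-imbalance rate c.

    cir = 0.0 reproduces the one-class setting (only normal samples),
    cir = 0.5 the class-balanced 2-way (K/2)-shot setting.
    """
    n_anomalous = int(round(cir * k))
    n_normal = k - n_anomalous
    batch = random.sample(normal_pool, n_normal) \
        + random.sample(anomalous_pool, n_anomalous)
    random.shuffle(batch)
    return batch

# Example: a task of size K = 10 with c = 50% is a 2-way 5-shot task,
# whereas c = 0% yields a purely normal-class adaptation batch.
```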
Synthetic time-series (STS): In order to investigate the applicability of OC-MAML to time-series (question (d)), we created two datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks. The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). Each time-series is generated with random frequencies, amplitudes, noise boundaries, as well as anomaly width and height boundaries. Additionally, the width of the rising ramp as a proportion of the total cycle is sampled randomly for the sawtooth dataset, which results in tasks having rising and falling ramps with different steepness values. The data samples of a particular task are generated by randomly cropping windows of length 128 from the corresponding time-series. We generate 200 normal and 200 anomalous data examples for each task. For each dataset, we randomly choose 20 tasks for meta-training, 5 for meta-validation and 5 for meta-testing. We propose the STS datasets as benchmark datasets for the few-shot one-class classification problem in the time-series domain, and will make them public upon paper acceptance.

In the following, we give details about the generation procedure adopted to create the STS-Sawtooth dataset. The same steps were conducted to generate the STS-Sine dataset. First, we generate the sawtooth waveforms underlying the different tasks by using the Signal package of the Scipy library (Jones et al., 2001–). Thereafter, randomly generated noise is applied to each signal. Subsequently, signal segments with window length l = 128 are randomly sampled from each noisy signal. These represent the normal, i.e. non-anomalous, examples of the corresponding task. Then, some of the normal examples are randomly chosen, and anomalies are added to them to produce the anomalous examples. Figure 2 shows exemplary normal and anomalous samples from the STS-Sawtooth and STS-Sine datasets.

In order to increase the variance between the aforementioned synthetic signals underlying the different tasks, we randomly sample the frequency, i.e. the number of periods within the window length l, with which each waveform is generated, as well as the amplitude and the vertical position (see Figure 2). For sawtooth waveforms, we also randomly sample the width of the rising ramp as a proportion of the total cycle between 0% and 100% for each task. Setting this value to 100% and to 0% produces sawtooth waveforms with rising and falling ramps, respectively, while setting it to 50% corresponds to triangle waveforms. We note that the noise applied to each task is randomly sampled from a task-specific interval, the boundaries of which are also randomly sampled. Likewise, the width and height of each anomaly are sampled from a random task-specific interval. Moreover, we generate the anomalies of each task such that half of them have a height between the signal's minimum and maximum (e.g. anomalies (a) and (d) in Figure 2), while the other half can surpass these boundaries, i.e. the anomaly is higher than the normal signal's maximum or lower than its minimum at least at one time step (e.g. anomalies (b) and (c) in Figure 2). We note that an anomalous sample can have more than one anomaly.

We preprocess the data by removing the mean and scaling to unit variance. Hereby, only the available normal examples are used for the computation of the mean and the variance. This means that in the experiments where the target task's size is K = 2 and only normal samples are available (c = 0%), only two examples are used for the mean and variance computation. We note that the time-series in Figure 2 are not preprocessed.
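The generation procedure can be sketched in a few lines with SciPy. The code below is our own illustration, not the authors' generator: all parameter ranges (frequencies, amplitudes, noise levels, anomaly widths and heights) are assumptions, and only the overall recipe (random sawtooth, noise, window cropping, anomaly injection, standardization from normal samples only) follows the description above.

```python
import numpy as np
from scipy import signal

def generate_sts_sawtooth_task(n_normal=200, n_anomalous=200, l=128, sig_len=4096, rng=None):
    """Generate one STS-Sawtooth-style OCC task (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    # Task-specific signal parameters, sampled once per task.
    periods_per_window = rng.uniform(2, 8)
    amp, offset = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    ramp_width = rng.uniform(0.0, 1.0)          # 1.0: rising ramp, 0.0: falling ramp, 0.5: triangle
    noise_scale = rng.uniform(0.02, 0.1)

    t = np.arange(sig_len) / l                  # one window of length l spans `periods_per_window` cycles
    ts = offset + amp * signal.sawtooth(2 * np.pi * periods_per_window * t, width=ramp_width)
    ts = ts + rng.normal(0.0, noise_scale, size=sig_len)

    def crop():
        start = rng.integers(0, sig_len - l)
        return ts[start:start + l].copy()

    normals = np.stack([crop() for _ in range(n_normal)])
    anomalies = np.stack([crop() for _ in range(n_anomalous)])
    for x in anomalies:                          # inject one rectangular bump per anomalous window
        w = rng.integers(5, 20)                  # anomaly width (task-specific interval in the paper)
        h = rng.uniform(0.2, 1.0) * amp          # anomaly height
        s = rng.integers(0, l - w)
        x[s:s + w] += h * rng.choice([-1.0, 1.0])

    # Standardize using statistics of the normal samples only, as described above.
    mean, std = normals.mean(), normals.std()
    return (normals - mean) / std, (anomalies - mean) / std
```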
CNC Milling Machine Data (CNC-MMD): This dataset consists of ca. 100 aluminum workpieces on which various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) were performed. The sensor readings, which were recorded at a rate of 500 Hz, measure various quantities that are important for process monitoring, including the torques of the various axes. Each run of machining a single workpiece can be seen as a multivariate time-series. We segmented the data of each run into the various operations performed on the workpieces, e.g. one segment describes the milling of a pocket while another describes a surface finish operation on the workpiece. Since most manufacturing processes are highly efficient, anomalies are quite rare but can be very costly if undetected. For this reason, anomalies were provoked for 6 operations during manufacturing to provide a better basis for the analysis. Anomalies were provoked by creating realistic scenarios for deficient manufacturing, for example using a workpiece that exhibits deficiencies, which leads to a drop in the torque signal, or using slightly decalibrated process parameters, which induced various irritations of the workpiece surface that harmed production quality. The data was labeled by domain experts from Siemens Digital Industries. It should be noted that this dataset reflects the data situation in many real application scenarios from industry more realistically, where anomalies are rare and data is scarce, so that training models on huge class-balanced datasets is not an option.

For our experiments, we created 30 tasks per operation by randomly cropping windows of length 2048 from the corresponding time-series of each operation. As a result, the data samples of a particular task Ti cropped from a milling operation Oj correspond to the same trajectory part of Oj, but to different workpieces. The task creation procedure ensures that at least two anomalous data samples are available for each task. The resulting tasks include between 15 and 55 normal samples, and between 2 and 4 (9 and 22) anomalous samples for finishing (roughing) operations. We validate our approach on all 6 milling operations in the case where only 10 samples belonging to the normal class (K = 10, c = 0%) are available. Given the type of the target milling operation, e.g. finishing, we use the tasks from the other operations of the same type for meta-training. We note that the model is not exposed to any sample belonging to any task of the target operation during training. We preprocess each of the three signals separately by removing the mean and scaling to unit variance, as done for the STS datasets. Likewise, only the available normal examples are used for the computation of the mean and the variance. Exemplary anomalous signals recorded from a finishing and a roughing operation are shown in Figure 3. These signals are not mean-centered and scaled to unit variance. We note that we do not use the labels per time-step; rather, the label "anomalous" is assigned to each time-series that contains at least one anomalous time-step.

D EXPERIMENTAL RESULTS

In this Section, we present the results of the experiments on the STS-Sine dataset and the 8 further MT-MNIST datasets.
1. What is the focus of the paper regarding the problem tackled?
2. What are the strengths of the proposed approach, particularly in adapting a state-of-the-art method?
3. What are the weaknesses of the paper, especially regarding its contribution, experimentation, and comparison to other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor comments or suggestions for improvement that do not affect the decision?
Review
This paper tackles an interesting problem, one-class classification or anomaly detection, using a meta-learning approach. The main contribution is to introduce a parameter such that the inner loop of the meta-learning algorithm better reflects the imbalance which occurs during meta-testing. Results are shown comparing a few simple baselines to both MAML and the modified variant, on a few datasets such as image-based ones (MNIST, miniImageNet), a synthetic dataset, and a real-world time-series example from CNC milling machines. Overall, the paper presents an interesting problem and awareness that meta-learning might be general enough to solve it well, but provides no real novelty in the approach. The datasets and the comparison to other state-of-the-art methods (including both other anomaly detection methods and out-of-distribution methods) are lacking. I suggest the authors perform more rigorous experimentation and focus the paper to be a paper about an understudied problem with rigorous experiments/findings, or improve their method beyond the small modification made. Due to these weaknesses, I vote for rejection at this time. Detailed comments are below.

Strengths
- The problem is interesting and under-studied in the context of deep learning and transferable methods from similar ML problems (e.g. few-shot learning).
- The method is simple and adapts a state-of-the-art approach in few-shot learning (meta-learning, and specifically MAML).

Weaknesses
- While I enjoyed reading the paper since it tackles an under-explored problem, it is hard to justify publishing the method/approach at a top machine learning conference. Changing the balance in meta-learning is a relatively obvious modification that one would do to better reflect the problem; I don't think it results in general scientific/ML principles that can be used elsewhere.
- The relationship to out-of-distribution detection (which some of the experiments, e.g. Multi-task MNIST and miniImageNet, essentially test) is not discussed or compared to. How are anomalies defined, and is it really different from just being out-of-distribution?
- The datasets are limited. The MNIST dataset seems to use a fixed pair of specific categories for meta-validation and meta-testing, as opposed to doing cross-validation. Results on just one meta-testing task seem limited in this case with just one class. In terms of time-series, anomaly detection has been studied for a long time; is there a reason that the authors create a new synthetic dataset? For the milling example, how were anomalies provoked?
- The baselines do not represent any state-of-the-art anomaly detection (e.g. density-based methods, isolation forests, etc.) nor out-of-distribution detection; the latter especially would likely do extremely well for the simple image examples.
- There is no analysis of what the difference is in representation (initialization) learning due to the differences between the OCC and FS setup. What are the characteristics of the improved initialization?

One minor comment not reflecting the decision:
- Exposition: Define the one-class classification problem; it's not common, so it would be good to define it in the abstract, or mention anomaly detection, which is a better-known term.
ICLR
Title
Few-Shot One-Class Classification via Meta-Learning

Abstract
Although few-shot learning and one-class classification (OCC), i.e. learning a binary classifier with data from only one class, have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot OCC problem and presents a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and learns a model initialization particularly suited for learning few-shot OCC tasks. This is done by explicitly optimizing for a parameter initialization which only requires a few gradient steps with one-class minibatches to yield a performance increase on class-balanced test data. We provide a theoretical analysis that explains why our approach works in the few-shot OCC scenario, while other meta-learning algorithms, including MAML, fail. Empirical results on six datasets from the image and time-series domains show that our method substantially outperforms both classical OCC and few-shot classification approaches, and demonstrate the ability to quickly learn unseen tasks from only few normal-class samples. Moreover, we successfully learn anomaly detectors for a real-world application on sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine using a few examples from the normal class.

1 INTRODUCTION
The anomaly detection (AD) task (Chandola et al., 2009; Aggarwal, 2015) consists in differentiating between normal and abnormal data samples. AD applications are common in various domains that involve different data types, including medical diagnosis (Prastawa et al., 2004), cybersecurity (Garcia-Teodoro et al., 2009) and quality control in industrial manufacturing (Scime & Beuth, 2018). Due to the rarity of anomalies, the data underlying AD problems exhibits high class-imbalance. Therefore, AD problems are usually formulated as one-class classification (OCC) problems (Moya et al., 1993), where either only a few or no anomalous data samples are available for training the model (Khan & Madden, 2014). While most of the developed approaches (Khan & Madden, 2014) require a substantial amount of normal data to yield good generalization, in many real-world applications, e.g. in industrial manufacturing, only small datasets are available. Data scarcity can have many reasons: data collection itself might be expensive, e.g. in healthcare, or happens only gradually, such as in a cold-start situation. To enable learning from few examples, various viable meta-learning approaches (Lake et al., 2011; Ravi & Larochelle, 2016; Finn et al., 2017) have been developed. However, they rely on having examples from each of the classification task's classes, which prevents their application to OCC tasks. To the best of our knowledge, the few-shot OCC (FS-OCC) problem has only been addressed by Kozerawski & Turk (2018) in the image domain.

Our contribution is threefold: Firstly, we show that classical OCC approaches fail in the few-shot data regime. Secondly, we provide a theoretical analysis showing that classical gradient-based meta-learning algorithms do not yield initializations suitable for OCC tasks and that second-order derivatives are needed to optimize for such initializations.
Thirdly, we propose one-class model-agnostic meta-learning (OC-MAML), a data-domain-agnostic algorithm that quickly learns FS-OCC tasks, to serve as a first, simple and strong baseline for future research on the understudied FS-OCC problem. OC-MAML builds upon model-agnostic meta-learning (MAML) (Finn et al., 2017), which is a meta-learning method that explicitly optimizes for few-shot learning and yields a model initialization that enables quick adaptation to a new task using only few of its datapoints. Like MAML, OC-MAML yields model parameters that are easily adaptable to unseen tasks. The difference is that the model initialization delivered by OC-MAML is particularly suited for adaptation to OCC tasks and hence requires few examples from only one class of the target task for good adaptation. We provide a theoretical analysis that shows that OC-MAML explicitly optimizes for parameter initializations which yield a performance increase on class-balanced test data by taking only a few gradient steps with one-class minibatches. This is done by maximizing the inner product of gradients computed on different minibatches with different class-imbalance rates. While recent meta-learning approaches focused on the few-shot learning problem, i.e. learning to learn with few examples, we extend their use to the OCC problem, i.e. learning to learn with examples from only one class. We empirically validate our theoretical analysis on six datasets from the image and time-series domains, and demonstrate the robustness and maturity of our approach for real-world application by successfully testing it on a real-world dataset of sensor readings recorded during manufacturing of metal workpieces with a CNC milling machine.

2 APPROACH
2.1 PROBLEM STATEMENT
Our goal is to learn a one-class classification (OCC) task using only a few examples from the normal class. In the following, we first discuss the unique challenges of the few-shot one-class classification (FS-OCC) problem. Subsequently, we formulate the FS-OCC problem as a meta-learning problem. In order to perform one-class classification, i.e. differentiate between in-class and out-of-class examples, approximating a generalized decision boundary for the normal class is necessary. Learning such a class decision boundary in the few-shot regime can be especially challenging for the following reasons. On the one hand, if the model overfits to the few available datapoints, the class decision boundary would be too restrictive, which would prevent generalization to unseen examples. As a result, some normal samples would be predicted as anomalies. On the other hand, if the model overfits to the majority class, e.g. predicting almost everything as normal, the class decision boundary would overgeneralize, and out-of-class (anomalous) examples would not be detected.

In our meta-learning problem formulation, we assume access to data from classification tasks T_i^train sampled from a task distribution p(T) related to our target OCC tasks. In the few-shot classification context, N-way K-shot learning tasks are usually used to test the learning procedure, in our case the model initialization, yielded by the meta-learning algorithm. An N-way K-shot classification task includes K examples from each of the N classes that are used for learning this task, after which the trained classifier is tested on a disjoint set of data (Vinyals et al., 2016).
When the target task is an OCC task, only examples from one class are available for training, which can be viewed as a 1-way K-shot classification task. In order to align with the AD problem, the available examples have to belong to the normal (majority) class, which usually has a lower variance than the anomalous (minority) class. This problem formulation is a prototype for a practical use case where an application-specific anomaly detector is needed and only few normal-class examples are available.

2.2 MODEL-AGNOSTIC META-LEARNING
Model-agnostic meta-learning (MAML) (Finn et al., 2017) is an optimization-based meta-learning algorithm upon which we build in our present work. MAML learns a model initialization that enables quick adaptation to unseen tasks using only few data samples. For that, MAML trains a model explicitly for few-shot learning on tasks Ti coming from the same task distribution p(T) as the unseen target task Ttest. In order to assess the model's adaptation ability to unseen tasks, the available tasks are divided into mutually disjoint task sets: one for meta-training S^tr, one for meta-validation S^val and one for meta-testing S^test. Each task Ti is divided into two disjoint sets of data, each of which is used for a particular MAML operation: D^tr is used for adaptation and D^val is used for validation, i.e. evaluating the adaptation. The adaptation procedure of a model f_θ to a particular task Ti consists in taking one (or more) gradient descent step(s) using few datapoints sampled from D^tr. We also refer to the adaptation updates as inner loop updates. A good measure for the suitability of the initialization parameters θ for few-shot adaptation to a considered task Ti is the loss L^val_{Ti}(f_{θ'_i}), which is computed on the validation set D^val using the task-specific adapted model f_{θ'_i}. In order to optimize for few-shot learning, the model parameters θ are updated by minimizing the aforementioned loss across all meta-training tasks. This update, called the outer loop update, can be expressed as

$$\theta \leftarrow \theta - \beta \nabla_\theta \sum_{\mathcal{T}_i \sim p(\mathcal{T})} \mathcal{L}^{val}_{\mathcal{T}_i}\left(f_{\theta'_i}\right), \tag{1}$$

where β is the learning rate used for the outer loop. In order to avoid meta-overfitting, i.e. overfitting to the meta-training tasks, model selection can be done by conducting validation episodes using tasks from S^val throughout meta-training. At meta-test time, the few-shot adaptation to unseen tasks from S^test is evaluated. We note that, in the case of few-shot classification, K datapoints from each class are sampled from D^tr for the adaptation, during training, validation and testing.

2.3 ONE-CLASS MODEL-AGNOSTIC META-LEARNING
2.3.1 ALGORITHM
The primary contribution of our work is to show that second-order gradient-based meta-learning is a viable approach to the underexplored few-shot one-class classification (FS-OCC) problem. We achieve this by adequately modifying the objective of the adaptation step, i.e. the inner loop updates, of the MAML algorithm. We choose to build upon gradient-based meta-learning algorithms, because these were shown to be universal learning algorithm approximators (Finn & Levine, 2017), which means that they could approximate a learning algorithm tailored for FS-OCC. As explained in Section 2.2, MAML optimizes explicitly for few-shot adaptation by creating and using auxiliary tasks that have the same characteristic as the target tasks, in this case tasks that include only few datapoints for training.
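As a concrete illustration of the adaptation (inner loop) and the meta-update of Equation 1 (outer loop), the following minimal functional sketch uses a linear model with explicit parameters w and b. It is our own illustrative code, not the authors' implementation; the task dictionary keys are hypothetical, and create_graph=True keeps the second-order terms that the meta-gradient requires.

```python
import torch
import torch.nn.functional as F

def adapt_and_meta_loss(w, b, task, alpha=0.01, adaptation_steps=1):
    """Adapt (w, b) on the task's adaptation batch D^tr and return the loss on D^val."""
    w_i, b_i = w, b                                    # start the inner loop from the meta-parameters
    for _ in range(adaptation_steps):
        inner_loss = F.cross_entropy(task["x_tr"] @ w_i + b_i, task["y_tr"])
        gw, gb = torch.autograd.grad(inner_loss, (w_i, b_i), create_graph=True)
        w_i, b_i = w_i - alpha * gw, b_i - alpha * gb  # task-specific parameters theta'_i
    return F.cross_entropy(task["x_val"] @ w_i + b_i, task["y_val"])

# Outer loop update of Equation 1: average the per-task meta-losses and take one optimizer
# step on the meta-parameters, e.g. with meta_opt = torch.optim.Adam([w, b], lr=beta),
# where w and b were created with requires_grad=True.
def meta_step(w, b, task_batch, meta_opt):
    meta_opt.zero_grad()
    torch.stack([adapt_and_meta_loss(w, b, t) for t in task_batch]).mean().backward()
    meta_opt.step()
```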
Analogously, OC-MAML trains explicitly for quick adaptation to OCC tasks by creating OCC auxiliary tasks for meta-training. Concretely, this is done by modifying the class-imbalance rate (CIR) of the inner loop data batches to match the one of the test task. The meta-training procedure of OC-MAML is described in Algorithm 1 in Appendix A. As described in Section 1, OCC problems are binary classification scenarios where only few or no minority-class samples are available. In order to address both of these cases, we introduce a hyperparameter c which sets the CIR of the batch sampled for the inner updates. Hereby, c gives the percentage of the samples belonging to the minority (anomalous) class w.r.t. the total number of samples, e.g. setting c = 0% means that only majority-class samples are contained in the data batch. We focus on this latter extreme case, where no anomalous samples are available for learning. The key difference between MAML and OC-MAML is in the sampling operation of the inner loop batch (operation 5 in Algorithm 1 in Appendix A). By reducing the size of the batch used for the adaptation (via the hyperparameter K), MAML trains for few-shot adaptation. OC-MAML extends this approach to train for few-shot one-class adaptation by reducing the CIR of the batch used for adaptation (via the hyperparameter c). In order to evaluate the performance of the adapted model on both classes, we use a class-balanced validation batch B′ for the outer loop updates. This way, we maximize the performance of the model in recognizing both classes after having seen examples from only one class during adaptation. Using OCC tasks for adaptation during meta-training favors model initializations that enable a quick adaptation to OCC tasks over those that require class-balanced tasks. From a representation learning standpoint, OC-MAML learns representations that are not only broadly suitable for the data underlying p(T), but also particularly suited for OCC tasks. In Section 2.3.2, we discuss the unique characteristics of the model initializations yielded by OC-MAML and explain why adapting first-order meta-learning algorithms to the OCC scenario does not yield the targeted results.

2.3.2 THEORETICAL ANALYSIS: WHY DOES OC-MAML WORK?
In this section we give a theoretical explanation of why OC-MAML works and why it is a more suitable approach than MAML for the few-shot one-class classification (FS-OCC) problem. To address the latter problem, we aim to find a model parameter initialization from which adaptation using few data examples from only one class yields a good performance on both classes, i.e. good generalization to the class-balanced task. We additionally demonstrate that adapting first-order meta-learning algorithms, e.g. First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018), to the OCC scenario as done in OC-MAML does not yield initializations with the desired characteristics, in contrast to OC-MAML.

By using a Taylor series expansion, Nichol & Schulman (2018) approximate the gradient used in the MAML update. For simplicity of exposition, in Equation 2 we give their result for the case where only 2 gradient-based updates are performed, i.e. one adaptation update on a minibatch including K datapoints from D^tr and one meta-update on a minibatch including Q datapoints from D^val. We use the same notation as Nichol & Schulman (2018), where g_i and H_i denote the gradient and Hessian computed on the i-th minibatch at the initial parameter point φ_1, and α gives the learning rate; it is assumed that the same learning rate is used for the adaptation and meta-updates.

$$g_{\text{MAML}} = g_2 - \alpha H_2 g_1 - \alpha H_1 g_2 + O(\alpha^2) = g_2 - \alpha \frac{\partial (g_1 \cdot g_2)}{\partial \phi_1} + O(\alpha^2) \tag{2}$$
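For completeness, the expansion behind Equation 2 can be made explicit. This is our own short restatement of the derivation by Nichol & Schulman (2018) for a single inner update, not an additional result of the paper:

$$\phi_2 = \phi_1 - \alpha g_1, \qquad g_{\text{MAML}} = \frac{\partial}{\partial \phi_1} L_2(\phi_2) = \left(I - \alpha H_1\right) g_2(\phi_2),$$
$$g_2(\phi_2) = g_2(\phi_1) - \alpha H_2 g_1 + O(\alpha^2) \quad\Longrightarrow\quad g_{\text{MAML}} = g_2 - \alpha H_2 g_1 - \alpha H_1 g_2 + O(\alpha^2).$$

Since $\partial (g_1 \cdot g_2)/\partial \phi_1 = H_1 g_2 + H_2 g_1$, the second equality in Equation 2 follows.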
In Equation 2, Nichol & Schulman (2018) demonstrate that MAML partially optimizes for increasing the inner product of the gradients computed on different minibatches. In fact, when gradients from different minibatches have a positive inner product, taking a gradient step using one of them yields a performance increase on the other (Nichol & Schulman, 2018). Equation 2 also holds for OC-MAML. However, in OC-MAML the minibatches 1 and 2 have different class-imbalance rates (CIRs), since the first minibatch includes data from only one class and the second minibatch is class-balanced. Hence, it optimizes for increasing the inner product of the gradients computed on different minibatches with different CIRs, while MAML does the same but for different minibatches with the same CIR, namely c = 50%. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or few) gradient step(s) with one-class minibatch(es) results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect (Figure 1 in Appendix A). When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different CIRs for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks.

A natural question is whether applying our modification of MAML, i.e. using only data from the normal class for adaptation during meta-training, to other gradient-based meta-learning algorithms would yield the same desired effect. We investigate this for First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018). FOMAML is a first-order approximation of MAML, which ignores the second derivative terms. Reptile is also a first-order meta-learning algorithm that learns an initialization that enables fast adaptation to test tasks using only few examples from each class. In the following we demonstrate that adapting the FOMAML and Reptile algorithms to the one-class classification scenario, which we refer to as OC-FOMAML and OC-Reptile, does not result in optimizing for an initialization suitable for OCC tasks, unlike OC-MAML. We note that for OC-Reptile, the first (N − 1) batches contain examples from only one class and the last (N-th) batch is class-balanced. The approximated gradients used in the FOMAML and Reptile updates are given by Equations 3 and 4 (Nichol & Schulman, 2018), respectively:

$$g_{\text{FOMAML}} = g_2 - \alpha H_2 g_1 + O(\alpha^2) \tag{3}$$
$$g_{\text{Reptile}} = g_1 + g_2 - \alpha H_2 g_1 + O(\alpha^2) \tag{4}$$

We note that these equations also hold for OC-FOMAML and OC-Reptile. By taking the expectation over minibatch sampling $\mathbb{E}_{\tau,1,2}$ for a meta-training task τ and two class-balanced minibatches, Nichol & Schulman (2018) establish that $\mathbb{E}_{\tau,1,2}[H_1 g_2] = \mathbb{E}_{\tau,1,2}[H_2 g_1]$. Averaging the two sides of the latter equation results in the following:

$$\mathbb{E}_{\tau,1,2}[H_2 g_1] = \frac{1}{2}\,\mathbb{E}_{\tau,1,2}[H_1 g_2 + H_2 g_1] = \frac{1}{2}\,\mathbb{E}_{\tau,1,2}\!\left[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\right] \tag{5}$$

Equation 5 shows that, in expectation, FOMAML and Reptile, like MAML, optimize for increasing the inner product of the gradients computed on different minibatches with the same CIR.
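As a quick, self-contained sanity check of the expansions above (our own illustration, not part of the paper), one can compare the exact MAML meta-gradient, obtained by differentiating through the inner update with automatic differentiation, against the first-order terms of Equation 2 on toy quadratic losses; the residual is the neglected O(α²) term.

```python
import torch

torch.manual_seed(0)
d = 5
A1, A2 = torch.randn(d, d), torch.randn(d, d)
A1, A2 = A1 @ A1.T, A2 @ A2.T                     # symmetric Hessians of two quadratic "minibatch" losses
b1, b2 = torch.randn(d), torch.randn(d)
L1 = lambda p: 0.5 * p @ A1 @ p + b1 @ p          # loss on minibatch 1 (inner/adaptation batch)
L2 = lambda p: 0.5 * p @ A2 @ p + b2 @ p          # loss on minibatch 2 (outer/validation batch)

alpha = 1e-3
phi1 = torch.randn(d, requires_grad=True)

# Exact MAML meta-gradient: differentiate L2 at the adapted parameters through the inner step.
g1 = torch.autograd.grad(L1(phi1), phi1, create_graph=True)[0]
g_maml = torch.autograd.grad(L2(phi1 - alpha * g1), phi1)[0]

# First-order terms of Equation 2, with g_i and H_i evaluated at phi1.
with torch.no_grad():
    g1_, g2_ = A1 @ phi1 + b1, A2 @ phi1 + b2
    expansion = g2_ - alpha * (A2 @ g1_) - alpha * (A1 @ g2_)

# The residual shrinks quadratically as alpha is decreased.
print((g_maml - expansion).norm().item())
```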
When the minibatches 1 and 2 have different CIRs, however, which is the case for OC-FOMAML and OC-Reptile, $\mathbb{E}_{\tau,1,2}[H_1 g_2] \neq \mathbb{E}_{\tau,1,2}[H_2 g_1]$ and therefore $\mathbb{E}_{\tau,1,2}[H_2 g_1] \neq \frac{1}{2}\mathbb{E}_{\tau,1,2}[\partial (g_1 \cdot g_2)/\partial \phi_1]$. Hence, even though, similarly to OC-MAML, OC-FOMAML and OC-Reptile use minibatches with different CIRs for meta-training, contrary to OC-MAML they do not optimize for increasing the inner product of the gradients computed on different minibatches with different CIRs. The second derivative term H_1 g_2 is thus necessary to optimize for an initialization from which a performance increase on a class-balanced task is yielded by taking few gradient steps using only data from one class.

3 RELATED WORKS
Our proposed method addresses the few-shot one-class classification (FS-OCC) problem, i.e. solving binary classification problems using only few datapoints from only one class. To the best of our knowledge, this problem was only addressed by Kozerawski & Turk (2018), and exclusively in the image data domain. Kozerawski & Turk (2018) train a feed-forward neural network (FFNN) to learn a transformation from feature vectors, extracted by a CNN pre-trained on ILSVRC 2014 (Russakovsky et al., 2015), to SVM decision boundaries. Hereby, the FFNN is trained on ILSVRC 2012. At test time, an SVM boundary is inferred by using one image of one class from the test task, which is then used to classify the test examples. This approach is specific to the image domain since it relies on the availability of very large, well annotated datasets and uses data augmentation techniques specific to the image domain, e.g. mirroring. OC-MAML offers a more general approach to FS-OCC since it is data-domain-agnostic. In fact, it does not require a pre-trained feature extraction model, which might not be available for some data domains, e.g. sensor readings.

3.1 FEW-SHOT CLASSIFICATION
Recent few-shot classification approaches may be broadly categorized into optimization-based methods (Ravi & Larochelle, 2016; Finn et al., 2017; Nichol & Schulman, 2018) and metric-based methods (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). The optimization-based approaches aim to learn an optimization algorithm (Ravi & Larochelle, 2016) and/or a parameter initialization (Finn et al., 2017; Nichol & Schulman, 2018) that is tailored for few-shot learning. Metric-based techniques learn a metric space where samples belonging to the same class are close together, which facilitates few-shot classification (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). Rusu et al. (2018) develop a hybrid method that combines the advantages of both categories. Prior meta-learning approaches to few-shot classification addressed the N-way K-shot classification problem described in Section 2.1, i.e. they only consider neatly class-balanced test classification tasks. Optimization-based techniques require these samples to finetune the learned initialization. In the metric-based methods, these samples are necessary to compute class prototypes (Snell et al., 2017), embeddings needed for verification (Koch, 2015) or relation scores (Sung et al., 2018). Our approach, however, requires only samples from one of the test task's classes for learning. Moreover, while the evaluation of the previous approaches in the classification context was limited to the image domain, we additionally validate OC-MAML on datasets from the time-series domain.
3.2 ONE-CLASS CLASSIFICATION
Classical OCC approaches rely on SVMs (Schölkopf et al., 2001; Tax & Duin, 2004) to distinguish between normal and abnormal samples. Pal & Foody (2010) show that the classification accuracy of SVMs decreases with an increasing number of input features, particularly when small datasets are available for training. Hybrid approaches combining SVM-based techniques with feature extractors were developed to compress the input samples into lower-dimensional representations (Xu et al., 2015; Erfani et al., 2016; Andrews et al., 2016). Fully deep methods that jointly perform the feature extraction step and the OCC step have also been developed (Ruff et al., 2018). Another category of approaches to OCC uses the reconstruction error of autoencoders (Hinton & Salakhutdinov, 2006) trained with only normal-class examples as an anomaly score (Hawkins et al., 2002; An & Cho, 2015; Chen et al., 2017). Yet, determining a decision threshold for such an anomaly score requires labeled data from both classes. Further recent techniques rely on GANs (Goodfellow et al., 2014) to perform OCC (Schlegl et al., 2017; Ravanbakhsh et al., 2017; Sabokrou et al., 2018). The aforementioned hybrid and fully deep approaches require a considerable amount of data from the OCC task to train the typically highly parametrized models to learn features specific to the normal class. By leveraging auxiliary OCC tasks and explicitly optimizing for few-shot learning, OC-MAML learns a representation that can be adapted to unseen OCC tasks with only few examples.

4 EXPERIMENTAL EVALUATION
The conducted experiments¹ aim to address the following key questions: (a) How does OC-MAML perform compared to classical one-class classification (OCC) approaches in the few-shot (FS) data regime? (b) Does using OCC tasks for meta-training improve the adaptation to such tasks, as is the case for few-shot tasks (Finn et al., 2017), and do our theoretical findings (Section 2.3.2) about the differences between the MAML and OC-MAML initializations hold in practice? (c) How does OC-MAML compare to the first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Section 2.3.2)? (d) How does OC-MAML perform in FS-OCC problems from the time-series domain, which is understudied in the few-shot learning literature?

4.1 BASELINES AND DATASETS
This section provides information about the baselines and datasets we use in our experimental evaluation. We compare OC-MAML to the classical one-class classification (OCC) approaches One-Class SVM (OC-SVM) (Schölkopf et al., 2001) and Isolation Forest (IF) (Liu et al., 2008) (Question (a)), which we fit to the adaptation set of the test task. Here, we apply PCA to reduce the dimensionality of the data by choosing the minimum number of eigenvectors so that at least 95% of the variance is preserved, as done by Erfani et al. (2016). We additionally tune the inverse length scale γ by using 10% of the test set, as done by Ruff et al. (2018), which gives OC-SVM a supervised advantage compared to the other methods. For a fairer comparison to OC-MAML, so that these latter methods also benefit from the meta-training and meta-validation tasks, we additionally train them on embeddings inferred by feature extractors learned on these tasks.
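The shallow baselines can be sketched with scikit-learn as follows. This is our own illustration of the described setup, not the authors' code: PCA keeps the smallest number of components explaining at least 95% of the variance, and the kernel width gamma is passed in here rather than tuned, whereas the paper tunes it on held-out data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

def fit_shallow_occ_baselines(x_adapt, gamma=1e-3, nu=0.5):
    """Fit OC-SVM and Isolation Forest on a task's adaptation set (normal samples only)."""
    x = x_adapt.reshape(len(x_adapt), -1)                # flatten images / windows to vectors
    pca = PCA(n_components=0.95).fit(x)                  # keep >= 95% of the variance
    z = pca.transform(x)
    ocsvm = OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(z)
    iforest = IsolationForest(random_state=0).fit(z)
    return pca, ocsvm, iforest

def predict_anomaly(pca, model, x_test):
    z = pca.transform(x_test.reshape(len(x_test), -1))
    return (model.predict(z) == -1).astype(int)          # sklearn: -1 = outlier -> label 1 (anomalous)
```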
Here, we train two types of feature extractors on the meta-training tasks: one is trained in a Multi-Task Learning (MTL) setting and the other using the "Finetune" baseline (FB) (Triantafillou et al., 2019). FB is a few-shot classification approach, where one multi-class classifier is trained with all the classes available in all meta-training tasks, after which an output layer is finetuned with the few available examples of the target task on top of the learned feature extractor. Moreover, we compare OC-MAML to class-balanced meta-learning algorithms, namely MAML, FOMAML and Reptile, as well as first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Questions (b) and (c)). Experimental details are provided in Appendix B.

We evaluate our approach on six datasets, including 3 from the image domain and 3 from the time-series domain. In the image domain we use 2 few-shot learning benchmark datasets, namely MiniImageNet (Ravi & Larochelle, 2016) and Omniglot (Lake et al., 2015), and 1 OCC benchmark dataset, the Multi-Task MNIST (MT-MNIST) dataset. To adapt the datasets to the OCC scenario, we create binary classification tasks, where the normal class contains examples from one class of the initial dataset and the anomalous class contains examples from multiple other classes. We create 9 different datasets based on MNIST, where the meta-testing task of each dataset consists in differentiating between a certain digit and the others. We use the same (10th) task for meta-validation in all datasets. Since most of the time-series datasets for anomaly detection include data from only one domain and only one normal class, adapting them to the meta-learning problem formulation, where several different tasks are required, is not possible. Therefore, we create two synthetic time-series (STS) datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks, to assess the suitability of OC-MAML to time-series data (Question (d)). The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). We propose the STS datasets as benchmark datasets for the few-shot (one-class) classification problem in the time-series domain. Finally, we validate OC-MAML on a real-world anomaly detection dataset of sensor readings recorded during industrial manufacturing using a CNC milling machine. Various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) were performed on ca. 100 aluminum workpieces to record the CNC Milling Machine Data (CNC-MMD). In Appendix C, we give details about all 6 datasets, the task creation procedures adopted to adapt them to the OCC case, as well as the generation of the STS datasets.

¹Our OC-MAML implementation and experimental evaluation will be made public upon paper acceptance.

4.2 RESULTS AND DISCUSSION
Our results of the comparison between OC-MAML and the classical OCC approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 1. OC-MAML consistently outperforms all baselines across all datasets and on both adaptation set sizes. While FB and MTL yield relatively good performance when adapting to class-balanced tasks (c = 50%), they completely fail in adapting to OCC tasks. On the MT-MNIST dataset and the STS-Sawtooth dataset, some of the baselines that combine a feature extractor and a shallow model yield high performance when the adaptation set size is K = 10. Our results of the comparison between OC-MAML and the classical few-shot classification approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 2.
The results on the other 8 MT-MNIST datasets and on the STS-Sine dataset are presented in Appendix D and are consistent with the results in Tables 1 and 2. We observe that OC-MAML consistently outperforms the other meta-learning algorithms by a substantial margin on all datasets and for both adaptation set sizes. This confirms our theoretical findings (Section 2.3.2) that the initializations yielded by class-balanced meta-learning algorithms, as well as OC-FOMAML and OC-Reptile, are not optimized for adaptation using data from only one class. The latter yield test accuracies close to 50%, showing that they overfitted to the normal class (Table 2 (top)).

In an attempt to increase the performance of the other meta-learning algorithms in the OCC scenario, we add a batch normalization (BN) (Ioffe & Szegedy, 2015) layer immediately before the output layer of the network. This BN operation standardizes the latent features using the mean and standard deviation of the K datapoints available for adaptation, which all belong to the normal class. As a result, this layer would output features with mean close to 0 and standard deviation close to 1 for normal-class examples. In contrast, anomalous examples would yield features with other statistics, which simplifies their detection. We hypothesize that by enforcing a mapping of the data to a latent space standardized only by examples from the normal class, the anomalies would clearly fall out of the normal-class distribution, making their detection easier. We note that the BN layer is used during meta-training as well. Hereby, we fix the learnable scaling (γ) and centering (β) parameters of the BN layer to 1 and 0, respectively, to prevent it from shifting the standardized distribution. We find that this simple modification increases the performance of the other meta-learning algorithms on all image datasets. However, OC-MAML without BN still yields the highest results, with only one exception. The higher performance increase when a bigger adaptation set is available (K = 10) confirms our hypothesis that enforcing a mapping of the data to a latent space standardized only by examples from the normal class makes the detection of the anomalies easier. In fact, using more examples yields more accurate mean and standard deviation estimates, which enables a better approximation of the distribution of the normal class and hence leads to an improved detection of the anomalies. We also tested these algorithms on networks including a trainable BN layer after each convolutional layer. This yielded comparable results to just adding one non-trainable BN layer before the output layer. Even though some of the meta-learning algorithms and OCC approaches sometimes outperform OC-MAML (Tables 2, 5, 8, 9), they do not consistently yield high performance in learning FS-OCC tasks across several datasets, as OC-MAML does. We note that this happens only on a few MT-MNIST datasets, and we attribute it to the high overlap between the digit classes underlying the meta-training and meta-testing tasks in the MT-MNIST datasets.

The results of the OC-MAML experiments on the CNC-MMD dataset are presented in Table 3. We compute F1-scores for evaluation since the test sets are class-imbalanced. OC-MAML consistently achieves high F1-scores across the 6 different milling processes. This high model performance on the minority class, i.e. in detecting anomalous data samples, is reached by using only K = 10 non-anomalous data samples (c = 0%). These results show that OC-MAML yielded a parameter initialization suitable for learning OCC tasks in the time-series data domain. Moreover, the high performance reached shows the maturity of this method for industrial real-world applications.
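A minimal sketch of the fixed batch-normalization variant described above, written in PyTorch as our own illustration (not the authors' code): affine=False removes the learnable scale and shift, which corresponds to fixing γ = 1 and β = 0, and track_running_stats=False makes the layer always normalize with the statistics of the current batch, e.g. the K normal adaptation samples during an inner-loop step.

```python
import torch.nn as nn

class HeadWithFixedBN(nn.Module):
    """Classifier head with a non-trainable BN layer before the output (illustrative sketch)."""

    def __init__(self, feat_dim, n_classes=2):
        super().__init__()
        # No learnable gamma/beta; normalization uses the current batch statistics only.
        self.bn = nn.BatchNorm1d(feat_dim, affine=False, track_running_stats=False)
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, features):
        return self.out(self.bn(features))
```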
5 CONCLUSION
This work addressed the novel and challenging problem of few-shot one-class classification (FS-OCC) and introduced OC-MAML, a robust meta-learning approach to FS-OCC problems that learns model parameters which are easily adaptable to unseen tasks using few examples from only one class. We demonstrated the viability of our method on six datasets from the image and time-series domains, including a real-world dataset of industrial sensor readings, where it significantly outperformed classical OCC and few-shot classification methods. Future work could investigate an unsupervised approach to FS-OCC, as done by Hsu et al. (2018) in the class-balanced scenario.

A OC-MAML: ALGORITHM AND PARAMETER INITIALIZATION
In this section we present the pseudo-code of OC-MAML in Algorithm 1 and a diagram visualizing the parameter initializations yielded by MAML and OC-MAML.

Algorithm 1 Few-shot one-class classification with OC-MAML
Require: S^tr: set of meta-training tasks
Require: α, β: learning rates
Require: K, Q: batch sizes for the inner and outer updates
Require: c: CIR for the inner updates
1: Randomly initialize θ
2: while not done do
3:   Sample a batch of tasks Ti from S^tr; let {D^tr, D^val} = Ti
4:   for all sampled Ti do
5:     Sample K datapoints B = {x(l), y(l)} from D^tr such that CIR = c
6:     Initialize θ'_i = θ
7:     for number of adaptation steps do
8:       Compute the adaptation loss L^tr_{Ti}(f_{θ'_i}) using B
9:       Compute adapted parameters with gradient descent: θ'_i = θ'_i − α ∇_{θ'_i} L^tr_{Ti}(f_{θ'_i})
10:    end for
11:    Sample Q datapoints B′ = {x′(l), y′(l)} from D^val
12:    Compute the outer loop loss L^val_{Ti}(f_{θ'_i}) using B′
13:   end for
14:   Update θ: θ ← θ − β ∇_θ Σ_{Ti} L^val_{Ti}(f_{θ'_i})
15: end while
16: return meta-learned parameters θ

Figure 1 visualizes the adaptation to a binary classification test task Ts from the parameter initializations yielded by OC-MAML and MAML, denoted by θ_OC-MAML and θ_MAML respectively. θ*_{s,CB} denotes the optimal parameters for Ts. Taking a gradient step using a one-class adaptation set D_{s,OC} (gradient direction denoted by ∇L_{s,OC}) yields a performance increase on Ts when starting from the OC-MAML parameter initialization. In contrast, when starting from the parameter initialization reached by MAML, a class-balanced adaptation set D_{s,CB} (gradient direction denoted by ∇L_{s,CB}) is required for a performance increase on Ts.

B EXPERIMENT DETAILS
For MT-MNIST, we use the same 4-block convolutional architecture as used by Hsu et al. (2018) for their multi-class MNIST experiments. However, we exclude the batch normalization (Ioffe & Szegedy, 2015) layers, as we want to assess their effect in the OCC case, as discussed in Section 4.2. Each convolutional block includes a 3 x 3 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The same model architecture is used for the MiniImageNet experiments, as done by Ravi & Larochelle (2016). For the Omniglot experiments, we use the same architecture used by Finn et al. (2017). We also do not include the batch normalization layers for the two latter datasets. On the STS datasets, the model architecture used is composed of 3 modules, each including a 5 x 5 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity.
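For readers who prefer code, the meta-training loop of Algorithm 1 can be sketched as follows for a functional linear model. This is our own illustration under simplifying assumptions, not the authors' implementation; the task helpers sample_inner (returning a one-class inner batch with CIR = c) and sample_outer (returning a class-balanced validation batch) are hypothetical.

```python
import torch
import torch.nn.functional as F

def oc_maml_meta_train(w, b, tasks, n_meta_iters=1000, alpha=0.01, beta=0.001,
                       K=10, Q=32, c=0.0, adaptation_steps=1, tasks_per_batch=4):
    """Sketch of Algorithm 1 for a functional linear model f(x) = x @ w + b (illustration only)."""
    meta_opt = torch.optim.Adam([w, b], lr=beta)            # w, b created with requires_grad=True
    for _ in range(n_meta_iters):
        meta_opt.zero_grad()
        task_batch = [tasks[i] for i in torch.randperm(len(tasks))[:tasks_per_batch]]  # line 3
        meta_loss = 0.0
        for task in task_batch:                              # lines 4-13
            x_in, y_in = task.sample_inner(K, c)             # line 5: one-class inner batch (CIR = c)
            w_i, b_i = w, b                                  # line 6
            for _ in range(adaptation_steps):                # lines 7-10
                loss = F.cross_entropy(x_in @ w_i + b_i, y_in)
                gw, gb = torch.autograd.grad(loss, (w_i, b_i), create_graph=True)
                w_i, b_i = w_i - alpha * gw, b_i - alpha * gb
            x_out, y_out = task.sample_outer(Q)              # line 11: class-balanced outer batch
            meta_loss = meta_loss + F.cross_entropy(x_out @ w_i + b_i, y_out)  # line 12
        (meta_loss / len(task_batch)).backward()             # line 14: outer update via Adam
        meta_opt.step()
    return w, b
```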
1. What are the unique challenges of the few-shot one-class classification problem? 2. How does the proposed method build upon the MAML algorithm? 3. What are the weaknesses of the paper regarding its claims and theoretical analysis? 4. How can the writing style of the paper be improved?
Review
Review In this paper, the authors investigate the few-shot one-class classification problem. They present a meta-learning approach that requires only a few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm. I think the topic itself is interesting, and I have the following concerns. (1) The first concern is about the real need for this learning scenario. Although the authors point out some real applications, these are introduced separately. In other words, since this setting is the combination of two previously studied areas, i.e., one-class classification and few-shot learning, I feel that the authors have introduced it merely as a combination. What are the unique challenges of this problem? I think these points should be clarified first. (2) The second concern is the algorithm itself. Although I have not checked all details, I feel that the paper is prepared in a rough way. The authors only describe the method, without deeper analyses answering the question of why it works. For example, the method seems heuristic, without theoretical analysis. In summary, I think this paper reads like a technical report rather than a research paper. (3) Although I can follow the main message of the paper, the writing is not fluent. I suggest the authors revise the presentation.
ICLR
Title Few-Shot One-Class Classification via Meta-Learning Abstract Although few-shot learning and one-class classification (OCC), i.e. learning a binary classifier with data from only one class, have been separately well studied, their intersection remains rather unexplored. Our work addresses the few-shot OCC problem and presents a meta-learning approach that requires only few data examples from only one class to adapt to unseen tasks. The proposed method builds upon the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) and learns a model initialization particularly suited for learning few-shot OCC tasks. This is done by explicitly optimizing for a parameter initialization which only requires a few gradient steps with one-class minibatches to yield a performance increase on class-balanced test data. We provide a theoretical analysis that explains why our approach works in the few-shot OCC scenario, while other meta-learning algorithms, including MAML, fail. Empirical results on six datasets from the image and time-series domains show that our method substantially outperforms both, classical OCC and few-shot classification approaches, and demonstrate the ability to quickly learn unseen tasks from only few normal class samples. Moreover, we successfully learn anomaly detectors for a real-world application on sensor readings recorded during industrial manufacturing of workpieces with a CNC milling machine using a few examples from the normal class. 1 INTRODUCTION The anomaly detection (AD) task (Chandola et al., 2009; Aggarwal, 2015) consists in differentiating between normal and abnormal data samples. AD applications are common in various domains that involve different data types, including medical diagnosis (Prastawa et al., 2004), cybersecurity (Garcia-Teodoro et al., 2009) and quality control in industrial manufacturing (Scime & Beuth, 2018). Due to the rarity of anomalies, the data underlying AD problems exhibits high class-imbalance. Therefore, AD problems are usually formulated as one-class classification (OCC) problems (Moya et al., 1993), where either only a few or no anomalous data samples are available for training the model (Khan & Madden, 2014). While most of the developed approaches (Khan & Madden, 2014) require a substantial amount of normal data to yield good generalization, in many real-world applications, e.g. in industrial manufacturing, only small datasets are available. Data scarcity can have many reasons: data collection itself might be expensive, e.g. in healthcare, or happens only gradually, such as in a cold-start situation. To enable learning from few examples, various viable meta-learning approaches (Lake et al., 2011; Ravi & Larochelle, 2016; Finn et al., 2017) have been developed. However, they rely on having examples from each of the classification task’s classes, which prevents their application to OCC tasks. To the best of our knowledge, the few-shot OCC (FS-OCC) problem has only been addressed by Kozerawski & Turk (2018) in the image domain. Our contribution is threefold: Firstly, we show that classical OCC approaches fail in the few-shot data regime. Secondly, we provide a theoretical analysis showing that classical gradient-based metalearning algorithms do not yield initializations suitable for OCC tasks and that second-order derivatives are needed to optimize for such initializations. 
Thirdly, we propose one-class model-agnostic meta-learning (OC-MAML), a data-domain-agnostic algorithm that quickly learns FS-OCC tasks, to serve as a first, simple and strong baseline for future research in the understudied FS-OCC problem. OC-MAML builds upon model-agnostic meta-learning (MAML) (Finn et al., 2017), which is a meta-learning method that explicitly optimizes for few-shot learning and yields a model initialization that enables quick adaptation to a new task using only few of its datapoints. Like MAML, OCMAML yields model parameters that are easily adaptable to unseen tasks. The difference is that the model initialization delivered by OC-MAML is particularly suited for adaptation to OCC tasks and hence requires few examples from only one class of the target task for good adaptation. We provide a theoretical analysis that shows that OC-MAML explicitly optimizes for parameter initializations which yield performance increase on class-balanced test data by taking only a few gradient steps with one-class minibatches. This is done by maximizing the inner product of gradients computed on different minibatches with different class-imbalance rates. While recent meta-learning approaches focused on the few-shot learning problem, i.e. learning to learn with few examples, we extend their use to the OCC problem, i.e. learning to learn with examples from only one class. We empirically validate our theoretical analysis on six datasets from the image and time-series domains, and demonstrate the robustness and maturity of our approach for real-world application by successfully testing it on a real-world dataset of sensor readings recorded during manufacturing of metal workpieces with a CNC milling machine. 2 APPROACH 2.1 PROBLEM STATEMENT Our goal is to learn a one-class classification (OCC) task using only a few examples from the normal class. In the following, we first discuss the unique challenges of the few-shot one-class classification (FS-OCC) problem. Subsequently, we formulate the FS-OCC problem as a meta-learning problem. In order to perform one-class classification, i.e. differentiate between in-class and out-of-class examples, approximating a generalized decision boundary for the normal class is necessary. Learning such a class decision boundary in the few-shot regime can be especially challenging for the following reasons. On the one hand, if the model overfits to the few available datapoints, the class decision boundary would be too restrictive, which would prevent generalization to unseen examples. As a result, some normal samples would be predicted as anomalies. On the other hand, if the model overfits to the majority class, e.g. predicting almost everything as normal, the class decision boundary would overgeneralize, and out-of-class (anomalous) examples would not be detected. In our meta-learning problem formulation, we assume access to data from classification tasks T traini sampled from a task distribution p(T ) related to our target OCC tasks. In the few-shot classification context, N -way K-shot learning tasks are usually used to test the learning procedure, in our case the model initialization, yielded by the meta-learning algorithm. An N -way K-shot classification task includesK examples from each of theN classes that are used for learning this task, after which the trained classifier is tested on a disjoint set of data (Vinyals et al., 2016). 
When the target task is an OCC task, only examples from one class are available for training, which can be viewed as a 1-way K-shot classification task. In order to align with the AD problem, the available examples have to belong to the normal (majority) class, which usually has a lower variance than the anomalous (minority) class. This problem formulation is a prototype for a practical use case where an application-specific anomaly detector is needed and only few normal class examples are available. 2.2 MODEL-AGNOSTIC META-LEARNING Model-agnostic meta-learning (MAML) (Finn et al., 2017) is an optimization-based meta-learning algorithm upon which we build in our present work. MAML learns a model initialization that enables quick adaptation to unseen tasks using only few data samples. For that, MAML trains a model explicitly for few-shot learning on tasks Ti coming from the same task distribution p(T ) as the unseen target task Ttest. In order to assess the model’s adaptation ability to unseen tasks, the available tasks are divided into mutually disjoint task sets: one for meta-training Str, one for metavalidation Sval and one for meta-testing Stest. Each task Ti is divided into two disjoint sets of data, each of which is used for a particular MAML operation: Dtr is used for adaptation and Dval is used for validation, i.e. evaluating the adaptation. The adaptation procedure of a model fθ to a particular task Ti consists in taking one (or more) gradient descent step(s) using few datapoints sampled from Dtr. We also refer to the adaptation updates as inner loop updates. A good measure for the suitability of the initialization parameters θ for few-shot adaptation to a considered task Ti is the loss LvalTi (fθ′i ), which is computed on the validation set D val using the task-specific adapted model fθ′i . In order to optimize for few-shot learning, the model parameters θ are updated by minimizing the aforementioned loss across all meta-training tasks. This update, called the outer loop update, can be expressed as: θ ← θ − β∇θ ∑ Ti∼p(T ) LvalTi (fθ′i ), (1) where β is the learning rate used for the outer loop. In order to avoid meta-overfitting, i.e. overfitting to the meta-training tasks, model selection can be done via conducting validation episodes using tasks from Sval throughout meta-training. At meta-test time, the few-shot adaptation to unseen tasks from Stest is evaluated. We note that, in the case of few-shot classification, K datapoints from each class are sampled from Dtr for the adaptation, during training, validation and testing. 2.3 ONE-CLASS MODEL-AGNOSTIC META-LEARNING 2.3.1 ALGORITHM The primary contribution of our work is to show that second-order gradient-based meta-learning is a viable approach to the underexplored few-shot one-class classification (FS-OCC) problem. We achieve this by adequately modifying the objective of the adaptation step, i.e. the inner loop updates, of the MAML algorithm. We choose to build upon gradient-based meta-learning algorithms, because these were shown to be universal learning algorithm approximators (Finn & Levine, 2017), which means that they could approximate a learning algorithm tailored for FS-OCC. As explained in Section 2.2, MAML optimizes explicitly for few-shot adaptation by creating and using auxiliary tasks that have the same characteristic as the target tasks, in this case tasks that include only few datapoints for training. 
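As a concrete illustration of the inner (adaptation) and outer (meta) updates of Equation 1, the following is a minimal single-task PyTorch sketch with one adaptation step. It is a simplified reimplementation for illustration only, not the authors' code: the linear placeholder model, the random data, and the learning rates are assumptions, and a real run would sum the validation losses over a batch of tasks before the outer step.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_classes = 16, 2
theta = [torch.randn(n_classes, d, requires_grad=True),   # weight of a placeholder linear model
         torch.zeros(n_classes, requires_grad=True)]      # bias
alpha, beta = 0.1, 1e-3
outer_opt = torch.optim.Adam(theta, lr=beta)

def forward(params, x):
    w, b = params
    return F.linear(x, w, b)              # logits of the placeholder classifier

def maml_step(x_tr, y_tr, x_val, y_val):
    # inner (adaptation) update on D^tr, keeping the graph so second derivatives can flow
    loss_tr = F.cross_entropy(forward(theta, x_tr), y_tr)
    grads = torch.autograd.grad(loss_tr, theta, create_graph=True)
    theta_prime = [p - alpha * g for p, g in zip(theta, grads)]
    # outer (meta) update on D^val, differentiating through the inner step (Equation 1)
    loss_val = F.cross_entropy(forward(theta_prime, x_val), y_val)
    outer_opt.zero_grad()
    loss_val.backward()
    outer_opt.step()
    return loss_val.item()

# one meta-training step on random placeholder data
x_tr, y_tr = torch.randn(10, d), torch.randint(0, 2, (10,))
x_val, y_val = torch.randn(10, d), torch.randint(0, 2, (10,))
maml_step(x_tr, y_tr, x_val, y_val)
```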
Analogously, OC-MAML trains explicitly for quick adaptation to OCC tasks by creating OCC auxiliary tasks for meta-training. Concretely, this is done by modifying the class-imbalance rate (CIR) of the inner loop data batches to match that of the test task. The meta-training procedure of OC-MAML is described in Algorithm 1 in Appendix A. As described in Section 1, OCC problems are binary classification scenarios where only few or no minority class samples are available. In order to address both of these cases, we introduce a hyperparameter (c) which sets the CIR of the batch sampled for the inner updates. Hereby, c gives the percentage of the samples belonging to the minority (anomalous) class w.r.t. the total number of samples, e.g. setting c = 0% means that only majority class samples are contained in the data batch. We focus on this latter extreme case, where no anomalous samples are available for learning. The key difference between MAML and OC-MAML is in the sampling operation of the inner loop batch (operation 5 in Algorithm 1 in Appendix A). By reducing the size of the batch used for the adaptation (via the hyperparameter K), MAML trains for few-shot adaptation. OC-MAML extends this approach to train for few-shot one-class adaptation by reducing the CIR of the batch used for adaptation (via the hyperparameter c). In order to evaluate the performance of the adapted model on both classes, we use a class-balanced validation batch B′ for the outer loop updates. This way, we maximize the performance of the model in recognizing both classes after having seen examples from only one class during adaptation. Using OCC tasks for adaptation during meta-training favors model initializations that enable a quick adaptation to OCC tasks over those that require class-balanced tasks. From a representation learning standpoint, OC-MAML learns representations that are not only broadly suitable for the data underlying p(T), but also particularly suited for OCC tasks. In Section 2.3.2, we discuss the unique characteristics of the model initializations yielded by OC-MAML and explain why adapting first-order meta-learning algorithms to the OCC scenario does not yield the targeted results. 2.3.2 THEORETICAL ANALYSIS: WHY DOES OC-MAML WORK? In this section we give a theoretical explanation of why OC-MAML works and why it is a more suitable approach than MAML for the few-shot one-class classification (FS-OCC) problem. To address the latter problem, we aim to find a model parameter initialization from which adaptation using few data examples from only one class yields a good performance on both classes, i.e. good generalization to the class-balanced task. We additionally demonstrate that adapting first-order meta-learning algorithms, e.g. First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018), to the OCC scenario as done in OC-MAML does not yield initializations with the desired characteristics, unlike OC-MAML. By using a Taylor series expansion, Nichol & Schulman (2018) approximate the gradient used in the MAML update: $g_{\text{MAML}} = g_2 - \alpha H_2 g_1 - \alpha H_1 g_2 + O(\alpha^2) = g_2 - \alpha \frac{\partial (g_1 \cdot g_2)}{\partial \phi_1} + O(\alpha^2)$. (2) For simplicity of exposition, Equation 2 gives their result for the case where only 2 gradient-based updates are performed, i.e. one adaptation update on a minibatch including K datapoints from $D^{tr}$ and one meta-update on a minibatch including Q datapoints from $D^{val}$.
We use the same notation as Nichol & Schulman (2018), where $g_i$ and $H_i$ denote the gradient and Hessian computed on the $i$-th minibatch at the initial parameter point $\phi_1$, and $\alpha$ is the learning rate. It is assumed that the same learning rate is used for the adaptation and meta-updates. With Equation 2, Nichol & Schulman (2018) demonstrate that MAML partially optimizes for increasing the inner product of the gradients computed on different minibatches. In fact, when gradients from different minibatches have a positive inner product, taking a gradient step using one of them yields a performance increase on the other (Nichol & Schulman, 2018). Equation 2 also holds for OC-MAML. However, in OC-MAML the minibatches 1 and 2 have different class-imbalance rates (CIRs), since the first minibatch includes data from only one class and the second minibatch is class-balanced. Hence, OC-MAML optimizes for increasing the inner product of the gradients computed on different minibatches with different CIRs, while MAML does the same for minibatches with the same CIR, namely c = 50%. Consequently, OC-MAML optimizes for a parameter initialization from which taking one (or a few) gradient step(s) with one-class minibatch(es) results in a performance increase on class-balanced data. In contrast, MAML optimizes for a parameter initialization that requires class-balanced minibatches to yield the same effect (Figure 1 in Appendix A). When adapting to OCC tasks, however, only examples from one class are available. We conclude, therefore, that using minibatches with different CIRs for meta-training, as done in OC-MAML, yields parameter initializations that are more suitable for adapting to OCC tasks. A natural question is whether applying our modification of MAML, i.e. using only data from the normal class for adaptation during meta-training, to other gradient-based meta-learning algorithms would yield the same desired effect. We investigate this for First-Order MAML (FOMAML) (Finn et al., 2017) and Reptile (Nichol & Schulman, 2018). FOMAML is a first-order approximation of MAML which ignores the second derivative terms. Reptile is also a first-order meta-learning algorithm that learns an initialization that enables fast adaptation to test tasks using only few examples from each class. In the following we demonstrate that adapting the FOMAML and Reptile algorithms to the one-class classification scenario, which we refer to as OC-FOMAML and OC-Reptile, does not result in optimizing for an initialization suitable for OCC tasks, unlike OC-MAML. We note that for OC-Reptile, the first $(N-1)$ batches contain examples from only one class and the last ($N$-th) batch is class-balanced. The approximated gradients used in the FOMAML and Reptile updates are given by Equations 3 and 4 (Nichol & Schulman, 2018), respectively: $g_{\text{FOMAML}} = g_2 - \alpha H_2 g_1 + O(\alpha^2)$ (3) and $g_{\text{Reptile}} = g_1 + g_2 - \alpha H_2 g_1 + O(\alpha^2)$. (4) We note that these equations also hold for OC-FOMAML and OC-Reptile. By taking the expectation over minibatch sampling $\mathbb{E}_{\tau,1,2}$ for a meta-training task $\tau$ and two class-balanced minibatches, Nichol & Schulman (2018) establish that $\mathbb{E}_{\tau,1,2}[H_1 g_2] = \mathbb{E}_{\tau,1,2}[H_2 g_1]$. Averaging the two sides of the latter equation results in the following: $\mathbb{E}_{\tau,1,2}[H_2 g_1] = \frac{1}{2}\mathbb{E}_{\tau,1,2}[H_1 g_2 + H_2 g_1] = \frac{1}{2}\mathbb{E}_{\tau,1,2}\left[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\right]$. (5) Equation 5 shows that, in expectation, FOMAML and Reptile, like MAML, optimize for increasing the inner product of the gradients computed on different minibatches with the same CIR.
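Before turning to the case where the two minibatches have different CIRs, the gradient-alignment term $\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1} = H_1 g_2 + H_2 g_1$ that appears in Equations 2 and 5 can be made concrete with automatic differentiation. The sketch below is purely illustrative: the linear placeholder model, the random minibatches, and the mean-squared-error losses are our assumptions and stand in for the losses on minibatches 1 and 2 of an arbitrary model.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
phi = torch.randn(8, requires_grad=True)             # parameters phi_1 (placeholder)
x1, x2 = torch.randn(32, 8), torch.randn(32, 8)      # minibatches 1 and 2 (placeholder data)
y1, y2 = torch.randn(32), torch.randn(32)

loss1 = F.mse_loss(x1 @ phi, y1)                     # loss on minibatch 1
loss2 = F.mse_loss(x2 @ phi, y2)                     # loss on minibatch 2

# g_1 and g_2, kept differentiable so that second derivatives are available
g1 = torch.autograd.grad(loss1, phi, create_graph=True)[0]
g2 = torch.autograd.grad(loss2, phi, create_graph=True)[0]

inner = (g1 * g2).sum()                              # g_1 . g_2
alignment_grad = torch.autograd.grad(inner, phi)[0]  # d(g_1 . g_2)/d(phi_1) = H_1 g_2 + H_2 g_1
print(inner.item(), alignment_grad.norm().item())
```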
However, when minibatches 1 and 2 have different CIRs, which is the case for OC-FOMAML and OC-Reptile, $\mathbb{E}_{\tau,1,2}[H_1 g_2] \neq \mathbb{E}_{\tau,1,2}[H_2 g_1]$ and therefore $\mathbb{E}_{\tau,1,2}[H_2 g_1] \neq \frac{1}{2}\mathbb{E}_{\tau,1,2}\left[\frac{\partial (g_1 \cdot g_2)}{\partial \phi_1}\right]$. Hence, even though OC-FOMAML and OC-Reptile, like OC-MAML, use minibatches with different CIRs for meta-training, they do not, in contrast to OC-MAML, optimize for increasing the inner product of the gradients computed on different minibatches with different CIRs. The second derivative term $H_1 g_2$ is thus necessary to optimize for an initialization from which a performance increase on a class-balanced task is obtained by taking a few gradient steps using only data from one class. 3 RELATED WORKS Our proposed method addresses the few-shot one-class classification (FS-OCC) problem, i.e. solving binary classification problems using only few datapoints from only one class. To the best of our knowledge, this problem was only addressed by Kozerawski & Turk (2018), and exclusively in the image data domain. Kozerawski & Turk (2018) train a feed-forward neural network (FFNN) to learn a transformation from feature vectors, extracted by a CNN pre-trained on ILSVRC 2014 (Russakovsky et al., 2015), to SVM decision boundaries. Hereby, the FFNN is trained on ILSVRC 2012. At test time, an SVM boundary is inferred by using one image of one class from the test task, which is then used to classify the test examples. This approach is specific to the image domain since it relies on the availability of very large, well-annotated datasets and uses data augmentation techniques specific to the image domain, e.g. mirroring. OC-MAML offers a more general approach to FS-OCC since it is data-domain-agnostic. In fact, it does not require a pre-trained feature extraction model, which might not be available for some data domains, e.g. sensor readings. 3.1 FEW-SHOT CLASSIFICATION Recent few-shot classification approaches may be broadly categorized into optimization-based methods (Ravi & Larochelle, 2016; Finn et al., 2017; Nichol & Schulman, 2018) and metric-based methods (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). The optimization-based approaches aim to learn an optimization algorithm (Ravi & Larochelle, 2016) and/or a parameter initialization (Finn et al., 2017; Nichol & Schulman, 2018) that is tailored for few-shot learning. Metric-based techniques learn a metric space where samples belonging to the same class are close together, which facilitates few-shot classification (Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018). Rusu et al. (2018) develop a hybrid method that combines the advantages of both categories. Prior meta-learning approaches to few-shot classification addressed the N-way K-shot classification problem described in Section 2.1, i.e. they only consider class-balanced test classification tasks and require a few examples from each class of the test task. Optimization-based techniques require these samples to finetune the learned initialization. In the metric-based methods, these samples are necessary to compute class prototypes (Snell et al., 2017), embeddings needed for verification (Koch, 2015) or relation scores (Sung et al., 2018). Our approach, however, requires only samples from one of the test task's classes for learning. Moreover, while the evaluation of the previous approaches in the classification context was limited to the image domain, we additionally validate OC-MAML on datasets from the time-series domain.
3.2 ONE-CLASS CLASSIFICATION Classical OCC approaches rely on SVMs (Schölkopf et al., 2001; Tax & Duin, 2004) to distinguish between normal and abnormal samples. Pal & Foody (2010) show that the classification accuracy of SVMs decreases with an increasing number of input features, particularly when small datasets are available for training. Hybrid approaches combining SVM-based techniques with feature extractors were developed to compress the input samples into lower-dimensional representations (Xu et al., 2015; Erfani et al., 2016; Andrews et al., 2016). Fully deep methods that jointly perform the feature extraction step and the OCC step have also been developed (Ruff et al., 2018). Another category of approaches to OCC uses the reconstruction error of autoencoders (Hinton & Salakhutdinov, 2006) trained with only normal class examples as an anomaly score (Hawkins et al., 2002; An & Cho, 2015; Chen et al., 2017). Yet, determining a decision threshold for such an anomaly score requires labeled data from both classes. Further, more recent techniques rely on GANs (Goodfellow et al., 2014) to perform OCC (Schlegl et al., 2017; Ravanbakhsh et al., 2017; Sabokrou et al., 2018). The aforementioned hybrid and fully deep approaches require a considerable amount of data from the OCC task to train the typically highly parametrized models to learn features specific to the normal class. By leveraging auxiliary OCC tasks and explicitly optimizing for few-shot learning, OC-MAML learns a representation that can be adapted to unseen OCC tasks with only a few examples. 4 EXPERIMENTAL EVALUATION The conducted experiments aim to address the following key questions: (a) How does OC-MAML perform compared to classical one-class classification (OCC) approaches in the few-shot (FS) data regime? (b) Does using OCC tasks for meta-training improve the adaptation to such tasks, as it is the case for few-shot tasks (Finn et al., 2017), and do our theoretical findings (Section 2.3.2) about the differences between the MAML and OC-MAML initializations hold in practice? (c) How does OC-MAML compare to the first-order meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Section 2.3.2)? (d) How does OC-MAML perform in FS-OCC problems from the time-series domain, which is understudied in the few-shot learning literature? 4.1 BASELINES AND DATASETS This section provides information about the baselines and datasets we use in our experimental evaluation. We compare OC-MAML to the classical one-class classification (OCC) approaches One-Class SVM (OC-SVM) (Schölkopf et al., 2001) and Isolation Forest (IF) (Liu et al., 2008) (Question (a)), which we fit to the adaptation set of the test task. Here, we apply PCA to reduce the dimensionality of the data by choosing the minimum number of eigenvectors so that at least 95% of the variance is preserved, as done by Erfani et al. (2016). We additionally tune the inverse length scale γ by using 10% of the test set, as done by Ruff et al. (2018), which gives OC-SVM a supervised advantage compared to the other methods. For a fairer comparison to OC-MAML, where these latter methods also benefit from the meta-training and meta-validation tasks, we additionally train them on embeddings inferred by feature extractors learned on these tasks. Here, we train two types of feature extractors on the meta-training tasks: one is trained in a Multi-Task-Learning (MTL) setting and the other using the "Finetune" baseline (FB) (Triantafillou et al., 2019).
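For reference, the classical OC-SVM and Isolation Forest baselines with the PCA preprocessing described above can be set up with scikit-learn in a few lines. This sketch is illustrative rather than the exact experimental pipeline: the data arrays are placeholders, and the γ value shown stands in for the value that, as stated above, is tuned on 10% of the test set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_adapt = rng.normal(size=(10, 128))     # K=10 normal samples of the target task (placeholder)
X_test = rng.normal(size=(200, 128))     # class-balanced test set (placeholder)

# keep the minimum number of components that explain at least 95% of the variance
pca = PCA(n_components=0.95).fit(X_adapt)
Z_adapt, Z_test = pca.transform(X_adapt), pca.transform(X_test)

ocsvm = OneClassSVM(kernel="rbf", gamma=0.1).fit(Z_adapt)   # gamma: placeholder for the tuned value
iforest = IsolationForest(random_state=0).fit(Z_adapt)

# +1 = predicted normal, -1 = predicted anomalous
pred_svm = ocsvm.predict(Z_test)
pred_if = iforest.predict(Z_test)
```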
FB is a few-shot classification approach, where one multi-class classifier is trained with all the classes available in all meta-training tasks, after which, an output layer is finetuned with the few available examples of the target task on top of the learned feature extractor. Moreover, we compare OC-MAML to class-balanced meta-learning algorithms, namely MAML, FOMAML and Reptile, as well as firstorder meta-learning algorithms adapted to the OCC scenario, i.e. OC-FOMAML and OC-Reptile (Questions (b) and (c)). Experimental details are provided in Appendix B. We evaluate our approach on six datasets, including 3 from the image domain and 3 from the timeseries domain. In the image domain we use 2 few-shot learning benchmark datasets, namely MiniImageNet (Ravi & Larochelle, 2016) and Omniglot (Lake et al., 2015), and 1 OCC benchmark dataset, the Multi-Task MNIST (MT-MNIST) dataset. To adapt the datasets to the OCC scenario, we create binary classification tasks, where the normal class contains examples from one class of the initial dataset and the anomalous class contains examples from multiple other classes. We create 9 different datasets based on MNIST, where the meta-testing task of each dataset consists in differentiating between a certain digit and the others. We use the same (10th) task for meta-validation in all datasets. Since most of the time-series datasets for anomaly detection include data from only one domain and only one normal class, adapting them to the meta-learning problem formulation where several different tasks are required is not possible. Therefore, we create two synthetic timeseries (STS) datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks, to assess the suitability of OC-MAML to time-series data (Question (d)). The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). We propose the STS-datasets as benchmark datasets for the few-shot (one-class) classification problem in the time-series domain. Finally, we validate OC-MAML on a real-world anomaly detection dataset of sensor readings recorded during industrial manufacturing using a CNC milling machine. Various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) were performed on ca. 100 aluminium workpieces to record the CNC Milling Machine Data (CNC-MMD). In Appendix C, we give details about all 6 datasets, the task creation procedures adopted to adapt them to the OCC case, as well as the generation of the STS-datasets. 1Our OC-MAML implementation and experimental evaluation will be made public upon paper acceptance. 4.2 RESULTS AND DISCUSSION Our results of the comparison between OC-MAML and the classical OCC approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 1. OC-MAML consistently outperforms all baselines across all datasets and on both adaptation set sizes. While FB and MTL yield relatively good performance when adapting to class-balanced tasks (c = 50%), they completely fail in adapting to OCC tasks. On the MT-MNIST dataset and the STS-Sawtooth dataset, some of the baselines that combine a feature extractor and a shallow model yield high performance, when the adaptation set size is K = 10. Our results of the comparison between OC-MAML and the classical few-shot classification approaches on the 3 image datasets and on the STS-Sawtooth dataset are summarized in Table 2. 
The results on the other 8 MT-MNIST datasets and on the STS-Sine dataset are presented in Appendix D and are consistent with the results in Tables 1 and 2. We observe that OC-MAML consistently outperforms the other meta-learning algorithms with a substantial margin on all datasets and for both adaptation set sizes. This confirms our theoretical findings (Section 2.3.2) that the initializations yielded by class-balanced meta-learning algorithms as well as OC-FOMAML and OC-Reptile are not optimized for adaptation using data from only one class. These latter yield test accuracies close to 50% showing that they overfitted to the normal class (Table 2 (top)). In an attempt to increase the performance of the other meta-learning algorithms in the OCC scenario, we add a batch normalization (BN) (Ioffe & Szegedy, 2015) layer immediately before the output layer of the network. This BN operation standardizes the latent features using the mean and standard deviation of the K datapoints available for adaptation, which all belong to the normal class. As a result, this layer would output features with mean close to 0 and standard deviation close to 1 for normal class examples. In contrast, anomalous examples would yield features with other statistics, which simplifies their detection. We hypothesize that by enforcing a mapping of the data to a latent space standardized only by examples from the normal class, the anomalies would clearly fall out of the normal-class-distribution, making their detection easier. We note that the BN layer is used during meta-training as well. Hereby, we fix the learnable scaling (γ) and centering (β) parameters of the BN layer to 1 and 0, respectively, to prevent it from shifting the standard distribution. We find that this simple modification increases the performance of the other meta-learning algorithms on all image datasets. However, OC-MAML without BN still yields the highest results, with only one exception. The higher performance increase when a bigger adaptation set is available (K = 10) confirms our hypothesis that enforcing a mapping of the data to a latent space standardized only by examples from the normal class makes the detection of the anomalies easier. In fact, using more examples yields more accurate mean and standard deviation measures, which enables a better approximation of the distribution of the normal class, and hence leads to an improved detection of the anomalies. We also tested these algorithms on networks including a trainable BN layer after each convolutional layer. This yielded comparable results to just adding one non-trainable BN layer before the output layer. Even though some of the meta-learning algorithms and OCC approaches sometimes outperform OC-MAML (Tables 2, 5, 8, 9), they do not consistently yield high performance in learning FS-OCC tasks across several datasets, as it is the case for OC-MAML. We note that this happens only on few MT-MNIST datasets and explain that by the high overlap between the digit classes underlying the meta-training and meta-testing tasks in the MT-MNIST datasets. The results of OC-MAML experiments on the CNC-MMD dataset are presented in Table 3. We compute F1-scores for evaluation since the test sets are class-imbalanced. OC-MAML consistently achieves high F1-scores across the 6 different milling processes. This high model performance on the minority class, i.e. in detecting anomalous data samples, is reached by using only K = 10 non-anomalous data samples (c = 0%). 
These results show that OC-MAML yielded a parameter initialization suitable for learning OCC tasks in the time-series data domain. Moreover, the high performance reached show the maturity of this method for industrial real-world applications. 5 CONCLUSION This work addressed the novel and challenging problem of few-shot one-class classification (FSOCC) and introduced OC-MAML, a robust meta-learning approach to FS-OCC problems that learns model parameters which are easily adaptable to unseen tasks using few examples from only one class. We demonstrated the viability of our method on six datasets from the image time-series domains, including a real-world dataset of industrial sensor readings, where it significantly outperformed classical OCC and few-shot classification methods. Future works could investigate an unsupervised approach to FS-OCC, as done by Hsu et al. (2018) in the class-balanced scenario. A OC-MAML: ALGORITHM AND PARAMETER INITIALIZATION In this section we present the pseudo-code of OC-MAML in Algorithm 1 and a diagram visualizing the parameter initializations yielded by MAML and OC-MAML. Algorithm 1 Few-shot one-class classification with OC-MAML Require: Str: Set of meta-training tasks Require: α, β: Learning rates Require: K,Q: Batch size for the inner and outer updates Require: c: CIR for the inner-updates 1: Randomly initialize θ 2: while not done do 3: Sample batch of tasks Ti from Str Let {Dtr, Dval} = Ti 4: for all sampled Ti do 5: Sample K datapoints B = {x(l), y(l)} from Dtr such that CIR= c 6: Initialize θ ′ i = θ 7: for number of adaptation steps do 8: Compute adaptation loss LtrTi(fθ′i ) using B 9: Compute adapted parameters with gradient descent: θ ′ i = θ ′ i − α∇θ′iL tr Ti (fθ′i ) 10: end for 11: Sample Q datapoints B ′ = {x′(l), y′(l)} from Dval 12: Compute outer loop loss LvalTi (fθ′i ) using B ′ 13: end for 14: Update θ: θ ← θ − β∇θ ∑ Ti LvalTi (fθ′i ) 15: end while 16: return meta-learned parameters θ Figure 1 visualizes the adaptation to a binary classification test task Ts from the parameter initializations yielded by OC-MAML and MAML, denoted by θOCMAML and θMAML respectively. θ∗s,CB denotes the optimal parameters for Ts. Taking a gradient step using a one-class adaptation setDs,OC (gradient direction denoted by ∇Ls,OC), yields a performance increase on Ts when starting from the OC-MAML parameter initialization. In contrast, when starting from the parameter initialization reached by MAML a class-balanced adaptation set Ds,CB (gradient direction denoted by ∇Ls,CB) is required for a performance increase in Ts. B EXPERIMENT DETAILS For MT-MNIST, we use the same 4-block convolutional architecture as used by Hsu et al. (2018) for their multi-class MNIST experiments. However, we exclude the batch normalization (Ioffe & Szegedy, 2015) layers, as we want to assess their effect in the OCC case, as discussed in Section 4.2. Each convolutional block includes a 3 x 3 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. The same model architecture is used for the MiniImageNet experiments as done by Ravi & Larochelle (2016). For the Omniglot experiments, we use the same architecture used by Finn et al. (2017). We also do not include the batch normalization layers for the two latter datasets. On the STS datasets, the model architecture used is composed of 3 modules, each including a 5 x 5 convolutional layer with 32 filters, a 2 x 2 pooling and a ReLU non-linearity. 
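As a concrete reference for the 3-module 1-D architecture just described, the following PyTorch sketch builds the STS encoder, with the optional non-trainable batch-normalization layer before the output that is discussed in Section 4.2 (fixed scaling and centering). It is a plausible reconstruction from the text rather than the released code; the padding, the flattening into the linear layer, and the input window length of 128 are our assumptions.

```python
import torch
import torch.nn as nn

class STSEncoder(nn.Module):
    """3 modules (5x1 conv with 32 filters, 2x1 max-pooling, ReLU) + linear output.
    If use_bn is True, a non-trainable BatchNorm layer (affine=False, i.e. gamma=1,
    beta=0 fixed) is inserted before the output layer, as discussed in Section 4.2."""
    def __init__(self, in_channels=1, n_classes=2, length=128, use_bn=False):
        super().__init__()
        blocks, c = [], in_channels
        for _ in range(3):
            blocks += [nn.Conv1d(c, 32, kernel_size=5, padding=2),
                       nn.MaxPool1d(2),
                       nn.ReLU()]
            c = 32
        self.features = nn.Sequential(*blocks)
        feat_dim = 32 * (length // 2 ** 3)           # length halved by each pooling step
        self.bn = nn.BatchNorm1d(feat_dim, affine=False) if use_bn else nn.Identity()
        self.out = nn.Linear(feat_dim, n_classes)    # softmax is applied by the loss

    def forward(self, x):                            # x: (batch, channels, length)
        h = self.features(x).flatten(1)
        return self.out(self.bn(h))

logits = STSEncoder(use_bn=True)(torch.randn(4, 1, 128))
```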
The model architecture used for the CNC-MMD experiments is composed of 4 of these aforementioned modules, except that the convolutional layers in the last two modules include 64 filters. The last layer of all architectures is a linear layer followed by softmax. We note that in the experiments on the time-series datasets (STS and CNC-MMD) 1-D convolutional filters are used. Table 4 shows the hyperparameters used in the experiments of each model on the different datasets. We note that we did not fix the outer loop size Q in the experiments on the CNC-MMD dataset, because the sizes and CIRS of the validation sets Dval differ across the different tasks. For the meta-learning algorithms, including OC-MAML, we used vanilla SGD in the inner loop and the Adam optimizer (Kingma & Ba, 2014) in the outer loop, as done by Finn et al. (2017). The MTL and FB baselines are also trained with the Adam optimizer. In the following, we provide details about the meta-training procedure adopted in the meta-learning experiments. We use disjoint sets of data for adaptation (Dtr) and validation (Dval) on the metatraining tasks, as it was empirically found to yield better final performance (Nichol & Schulman, 2018). Hereby, the same sets of data are used in the OC-MAML and baseline experiments. In the MT-MNIST, Omniglot, MiniImageNet and STS experiments, the aforementioned sets of data are class-balanced. The sampling of the batch used for adaptation B ensures that this latter has the appropriate CIR (c = 50% for MAML, FOMAML and Reptile, and c = ctarget for OC-MAML, OC-FOMAML and OC-Reptile). For the one-class meta-learning algorithms, ctarget = 0%, i.e. no anomalous samples of the target task are available, sothat only normal examples are sampled from Dtr during meta-training. In order to ensure that class-balanced and one-class meta-learning algorithms are exposed to the same data during meta-training, we move the anomalous examples from the adaptation set of data (Dtr) to the validation set of data (Dval). We note that this is only done in the experiments using one-class meta-learning algorithms. During meta-training, meta-validation episodes are conducted to perform model selection. In order to mimic the adaptation to unseen FS-OCC tasks with CIR c = ctarget at test time, the CIR of the batches used for adaptation during meta-validation episodes is also set to c = ctarget. We note that the hyperparameter K denotes the total number of datapoints, i.e. batch size, used to perform the adaptation updates, and not the number of datapoints per class as done by Finn et al. (2017). Hence, a task with sizeK = 10 and CIR c = 50% is equivalent to a 2-way 5-shot classification task. In the following, we provide details about the adaptation to the target task(s) and the subsequent evaluation. In the MT-MNIST and MiniImageNet experiments, we randomly sample 20 adaptation sets from the target task(s)’ data, each including K examples with the CIR corresponding to the experiment considered. After each adaptation episode conducted using one of these sets, the adapted model is evaluated on a disjoint class-balanced test set that includes 4,000 images for MT-MNIST and 600 for MiniImageNet. We note that the samples included in the test sets of the test tasks are not used nor for meta-training neither for meta-validation. This results in 20 and 400 (20 adaptation sets created from each of the 20 test classes) different test tasks for MT-MNIST and MiniImageNet, respectively. 
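The sampling rule that controls the class-imbalance rate c of the adaptation batch B (operation 5 of Algorithm 1, and the c = 50% versus c = c_target distinction described above) can be written as a small helper. This is an illustrative sketch; the function name, the 0/1 label convention (0 = normal, 1 = anomalous), and the placeholder task data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_adaptation_batch(X, y, K=10, c=0.0):
    """Sample K datapoints from (X, y) such that a fraction c of them is anomalous.
    c=0.5 reproduces the class-balanced MAML batches, c=0.0 the OC-MAML batches."""
    n_anom = int(round(c * K))
    n_norm = K - n_anom
    norm_idx = rng.choice(np.flatnonzero(y == 0), size=n_norm, replace=False)
    anom_idx = rng.choice(np.flatnonzero(y == 1), size=n_anom, replace=False)
    idx = rng.permutation(np.concatenate([norm_idx, anom_idx]))
    return X[idx], y[idx]

# example: a one-class adaptation batch (c = 0%) from placeholder task data
X = rng.normal(size=(400, 128))
y = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
B_x, B_y = sample_adaptation_batch(X, y, K=10, c=0.0)
```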
All the results presented give the mean over all adaptation episodes. Likewise, in the STS experiments, we evaluate the model on 10 different adaptation sets from each of the 5 test tasks. In the CNC-MMD experiments, the 30 tasks created from the target operation are used for adaptation and subsequent evaluation. For each of these target tasks, we randomly sample K datapoints belonging to the normal class that we use for adaptation, and use the rest of the datapoints for testing. We do this 5 times for each target task, which results in 150 testing tasks. For MTL and FB baselines, as well as all the baseline combining these model with shallow models, i.e. IF and OC-SVM, we use the meta-validation task(s) for model choice, like in the meta-learning experiments. For the MTL baseline, for each validation task, we finetune a fully connected layer on top of the shared multi-task learned layers, as it is done at test time. C DATASETS AND TASK CREATION PROCEDURES In this Section we provide information about the datasets used and the task creation procedures. Multi-task MNIST (MT-MNIST): We derive 10 binary classification tasks from the MNIST dataset (LeCun et al., 2010), where every task consists in recognizing one of the digits. This is a classical one-class classification benchmark dataset. For a particular task Ti, images of the digit i are labeled as normal samples, while out-of-distribution samples, i.e. the other digits, are labeled as anomalous samples. We use 8 tasks for meta-training, 1 for meta-validation and 1 for meta-testing. Hereby, images of digits to be recognized in the validation and test tasks are not used as anomalies in the meta-training tasks. This ensures that the model is not exposed to normal samples from the test task during meta-training. Moreover, the sets of anomalous samples of the meta-training, meta-validation and meta-testing tasks are mutually disjoint. We conduct experiments on 9 MTMNIST datasets, each of which involves a different target task (T0 − T8). The task T9 is used as a meta-validation task across all experiments. MiniImageNet: This dataset was proposed by Ravi & Larochelle (2016) and includes 64 classes for training, 16 for validation and 20 for testing, and is a classical challenging benchmark dataset for few-shot learning. To adapt it to the few-shot one-class classification setting, we create 64 binary classification tasks for meta-training, each of which consists in differentiating one of the training classes from the others, i.e. the anomalous examples of a task Ti are randomly sampled from the 63 classes with labels different from i. We do the same to create 16 meta-validation and 20 meta-testing tasks using the corresponding classes. Omniglot: This dataset was proposed by Lake et al. (2015) and includes 20 instances of 1623 handwritten characters from 50 different alphabets. We generate our meta-training and meta-testing tasks based on the official data split (Lake et al., 2015), where 30 alphabets are reserved for training and 20 for evaluation. For each character class, we create a binary classification task, which consists in differentiating between this character and other characters from the same set (meta-training or meta-testing), i.e. the anomalous examples of a task Ti are randomly sampled from the remaining characters. By removing 80 randomly sampled tasks from the meta-training tasks, we create the meta-validation tasks set. 
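The MT-MNIST task construction described above (one digit as the normal class, anomalies drawn from several other digits) can be sketched as follows. This is an illustrative helper, not the authors' code: the placeholder arrays stand in for the real MNIST images and labels, the sample counts are assumptions, and keeping the anomaly digit sets of meta-training, meta-validation and meta-testing tasks disjoint is left to the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mtmnist_task(images, labels, normal_digit, anomaly_digits,
                      n_normal=50, n_anomalous=50):
    """Build one binary MT-MNIST task: `normal_digit` (label 0) vs. `anomaly_digits` (label 1)."""
    normal_pool = np.flatnonzero(labels == normal_digit)
    anomaly_pool = np.flatnonzero(np.isin(labels, anomaly_digits))
    norm_idx = rng.choice(normal_pool, size=n_normal, replace=False)
    anom_idx = rng.choice(anomaly_pool, size=n_anomalous, replace=False)
    X = np.concatenate([images[norm_idx], images[anom_idx]])
    y = np.concatenate([np.zeros(n_normal, dtype=int), np.ones(n_anomalous, dtype=int)])
    return X, y

# placeholder arrays standing in for the real MNIST images and labels
images = rng.random((2000, 28, 28))
labels = rng.integers(0, 10, size=2000)
X3, y3 = make_mtmnist_task(images, labels, normal_digit=3, anomaly_digits=[0, 1, 2, 4, 5])
```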
Synthetic time-series (STS): In order to investigate the applicability of OC-MAML to time-series (question (c)), we created two datasets, each including 30 synthetically generated time-series that underlie 30 different anomaly detection tasks. The time-series underlying the datasets are sawtooth waveforms (STS-Sawtooth) and sine functions (STS-Sine). Each time-series is generated with random frequencies, amplitudes, noise boundaries, as well as anomaly width and height boundaries. Additionally, the width of the rising ramp as a proportion of the total cycle is sampled randomly for the sawtooth dataset, which results in tasks having rising and falling ramps with different steepness values. The data samples of a particular task are generated by randomly cropping windows of length 128 from the corresponding time-series. We generate 200 normal and 200 anomalous data examples for each task. For each dataset, we randomly choose 20 tasks for meta-training, 5 for meta-validation and 5 for meta-testing. We propose the STS-datasets as benchmark datasets for the few-shot one-class classification problem in the time-series domain, and will make them public upon paper acceptance. In the following, we give details about the generation procedure adopted to create the STS-Sawtooth dataset. The same steps were conducted to generate the STS-Sine dataset. First, we generate the sawtooth waveforms underlying the different tasks by using the Signal package of the Scipy library (Jones et al., 2001–). Thereafter, a randomly generated noise is applied to each signal. Subsequently, signal segments with window length l = 128 are randomly sampled from each noisy signal. These represent the normal, i.e. non-anomalous, examples of the corresponding task. Then, some of the normal examples are randomly chosen, and anomalies are added to them to produce the anomalous examples. Figure 2 shows exemplary normal and anomalous samples from the STS-Sawtooth and STS-Sine datasets. In order to increase the variance between the aforementioned synthetic signals underlying the different tasks, we randomly sample the frequency, i.e. the number of periods within the window length l, with which each waveform is generated, as well as the amplitude and the vertical position (see Figure 2). For sawtooth waveforms, we also randomly sample the width of the rising ramp as a proportion of the total cycle between 0% and 100%, for each task. Setting this value to 100% and to 0% produces sawtooth waveforms with rising and falling ramps, respectively. Setting it to 50% corresponds to triangle waveforms. We note that the noise applied to the tasks are randomly sampled from task-specific intervals, the boundaries of which are also randomly sampled. Likewise, the width and height of each anomaly is sampled from a random task specific-interval. Moreover, we generate the anomalies of each task, such that half of them have a height between the signal’s minimum and maximum (e.g. anomalies (a) and (d) in Figure 2), while the other half can surpass these boundaries, i.e. the anomaly is higher than the normal signal’s maximum or lower than its minimum at least at one time step (e.g. anomalies (b) and (c) in Figure 2). We note that an anomalous sample can have more than one anomaly. We preprocess the data by removing the mean and scaling to unit variance. Hereby, only the available normal examples are used for the computation of the mean and the variance. 
This means that in the experiments, where the target task’s size K = 2 and only normal samples are available c = 0%, only two examples are used for the mean and variance computation. We note that the time-series in Figure 2 are not preprocessed. CNC Milling Machine Data (CNC-MMD): This dataset consists of ca. 100 aluminum workpieces on which various consecutive roughing and finishing operations (pockets, edges, holes, surface finish) are performed. The sensor readings which were recorded at a rate of 500Hz measure various quantities that are important for the process monitoring including the torques of the various axes. Each run of machining a single workpiece can be seen as a multivariate time-series. We segmented the data of each run in the various operations performed on the workpieces. E.g. one segment would describe the milling of a pocket where another describes a surface finish operation on the workpiece. Since most manufacturing processes are highly efficient, anomalies are quite rare but can be very costly if undetected. For this reason, anomalies were provoked for 6 operations during manufacturing to provide a better basis for the analysis. Anomalies were provoked by creating realistic scenarios for deficient manufacturing. Examples are using a workpiece that exhibits deficiencies which leads to a drop in the torque signal or using rather slightly decalibrated process parameters which induced various irritations to the workpiece surface which harmed production quality. The data was labeled by domain experts from Siemens Digital Industries. It should be noted that this dataset more realistically reflects the data situation in many real application scenarios from industry where anomalies are rare and data is scarce and for this reason training models on huge class-balanced datasets is not an option. For our experiments, we created 30 tasks per operation by randomly cropping windows of length 2048 from the corresponding time-series of each operation. As a result, the data samples of a particular task Ti cropped from a milling operation Oj correspond to the same trajectory part of Oj , but to different workpieces. The task creation procedure ensures that at least two anomalous data samples are available for each task. The resulting tasks include between 15 and 55 normal samples, and between 2 and 4 (9 and 22) anomalous samples for finishing (roughing) operations. We validate our approach on all 6 milling operations in the case where only 10 samples belonging to the normal class (K = 10, c = 0%) are available. Given the type of the target milling operation,e.g. finishing, we use the tasks from the other operations of the same type for meta-training. We note that the model is not exposed to any sample belonging to any task of the target operation during training. We preprocess each of the three signals separately by removing the mean and scaling to unit variance, as done for the STS datasets. Likewise, only the available normal examples are used for the computation of the mean and the variance. Exemplary anomalous signals recorded from a finishing and a roughing operations are shown in Figure 3. These signals are not mean centered and scaled to unit variance. We note that we do not use the labels per time-step, but rather the label ”anomalous” is assigned to each time-series that contains at least an anomalous time-step. D EXPERIMENTAL RESULTS In this Section, we present the results of the experiments on the STS-Sine dataset and the 8 further MT-MNIST datasets.
1. What is the main contribution of the paper regarding few-shot one-class classification? 2. What are the strengths and weaknesses of the proposed approach in the MAML framework? 3. How does the reviewer assess the novelty and effectiveness of the method compared to prior works like CLEAR? 4. Are there any questions or concerns regarding the experimental setup, such as the class imbalance rate or the choice of baseline method?
Review
Review One promising approach to tackling few-shot problems is to use meta-learning so that the learner can quickly generalize to an unseen task. One-class classification requires only a set of positive examples to discriminate negative examples from positive ones. The current paper addresses a method for meta-training one-class classifiers in the MAML framework when only a handful of positive examples is available. ---Strength--- - Few-shot one-class classification is a timely subject, which has not been studied yet. - Meta-training one-class classifiers in the MAML framework seems to be sound. ---Weakness--- - MAML is quite a general meta-training framework, which can be used whenever parameterized base-learners are updated using gradient methods. Thus, when parameterized models for one-class classification are used, it is rather easy to meta-train one-class classifiers in the MAML framework. - Regarding episodic training, in contrast to few-shot classification problems, support sets in episodes contain similar positive examples. Thus, a fine-tuning baseline method could work well, even without using MAML. Please compare it with the fine-tuning method. ---Comments--- - I assume that the query set in each episode includes negative examples, while support sets have only positive examples. Right? What is the value of c (class-imbalance rate) in the query set? - Wouldn't it be better to focus on experiments with c = 0%, since one-class classification requires training with only positive examples? - What was the baseline one-class classifier? One-class SVM? - It was mentioned that CLEAR was an earlier work. Then, an empirical comparison with CLEAR should be included when image data is considered.
ICLR
Title Teleport Graph Convolutional Networks Abstract We consider the limitations in message-passing graph neural networks. In message-passing operations, each node aggregates information from its neighboring nodes. To enlarge the receptive field, graph neural networks need to stack multiple message-passing graph convolution layers, which leads to the over-fitting issue and over-smoothing issue. To address these limitations, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to enable each node to aggregate information from a much larger neighborhood. For each node, teleport functions select relevant nodes beyond the local neighborhood, thereby resulting in a larger receptive field. To apply our structure-aware teleport function, we propose a novel method to construct structural features for nodes in the graph. Based on our TeleGCL, we build a family of teleport graph convolutional networks. The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods. 1 INTRODUCTION Graph neural networks (GNNs) have shown great capability in solving challenging tasks on graph data such as node classification (Grover & Leskovec, 2016; Kipf & Welling, 2017; Veličković et al., 2017; Gao et al., 2018), graph classification (Xu et al., 2018; Gao & Ji, 2019; You et al., 2019), and link prediction (Zhang & Chen, 2018; Chen et al., 2019; Zhou et al., 2019). Most graph convolutional networks are based on message-passing operations, in which each node aggregates information from its neighboring nodes. To enable a larger receptive field (Chen et al., 2016), GNNs need to stack multiple layers, which is straightforward but can result in several issues. Firstly, stacking multiple layers involves massive trainable parameters, which consequently increases the risk of over-fitting. Secondly, message-passing operations mostly use averaging to combine the aggregated features, which significantly reduces the distinguishability of network embeddings. From this point, GNNs that are based on message-passing operations can not use deep network architecture due to these limitations. Some works such as Geom-GCN (Pei et al., 2020) try to solve these issues by involving more nodes in the feature aggregation process. However, Geom-GCN doesn’t consider the original graph topology information when generating the additional set of nodes for aggregation, which can neglect some relevant nodes from a structural perspective. To address the above limitations and increase the receptive field effectively, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to select highly-relevant nodes at the global scope. A teleport function computes relevances between the center node and other nodes beyond the local neighborhood. The nodes with particular relevances are teleported for the center node. Here, the selection of teleported nodes is not restricted by the graph topology. This enables the center node to gather information from a larger neighborhood without going deep, which helps to avoid over-fitting and over-smoothing issues. In particular, we propose two teleport functions; those are structure-aware and feature-aware teleport functions. They compute the nodes’ relevances from graph structural perspective and node features perspective, respectively. Based on our TeleGCL, we build a family of teleport graph convolutional networks. 
The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods. 2 BACKGROUND AND RELATED WORK In this section, we describe message-passing operations on graph data and geometric graph convolutional networks. Graph neural networks (Fan et al., 2019; Wu et al., 2019; Morris et al., 2019; Wu et al., 2020) have achieved state-of-the-art performances on various challenging tasks in the field of network embedding. The mainstream of graph deep learning operations follows a message-passing schema. In a message-passing operation, each node sends its features, known as message, to its neighboring nodes in the graph. Then each node aggregates messages from its neighborhood and uses them to update its features. When combing the aggregated features, different strategies can be applied. In the graph convolution layer (GCN) (Kipf & Welling, 2017), features from neighboring nodes are given equal weights in the aggregation process. To assign different weights to different neighboring nodes, the graph attention network (Veličković et al., 2017) employs an attention mechanism to compute aggregation weights. Based on these message-passing operations, graph neural networks stack multiple layers, which enables a larger receptive field. Recently, some research works try to perform message passing beyond the local neighborhood. Pei et al. (2020) proposed to construct a continuous latent space that enables graph neural networks to perform feature learning in the latent space. To be specific, it first projects nodes’ features to a 2-dimensional latent and continuous space. Based on the latent space, a structural neighborhood is constructed based on the Euclidean distance of each pair of nodes in the 2-dimensional space. In this process, the construction of structural features does not consider the graph connectivity information in the graph. Thus, the structural neighborhood in (Pei et al., 2020) is still built on node features without considering the graph topology. In this work, we propose a method to generate structure-aware features for each node. In particular, we use the graph connectivity and similarity information with the neighboring nodes and construct a feature vector for each node. By considering graph connectivity, our constructed structural features can reflect graph topology information. 3 TELEPORT GRAPH CONVOLUTIONAL NETWORKS In this work, we propose the teleport graph convolution layer (TeleGCL) that enables a center node to aggregate information beyond regular neighborhood structure by using some teleport functions. To enable effective node teleportation, we propose two teleport functions from structure-aware and feature-aware perspectives. Specifically, we propose a novel method to construct structural features for nodes, which can be used by structure-aware functions to select relevant nodes. Based on our TeleGCL, we propose the teleport graph convolutional networks for network embedding learning. 3.1 LIMITATIONS OF MESSAGE-PASSING OPERATIONS Currently, most graph convolution networks are based on message-passing operations. In a messagepassing operation, each node aggregates information from its neighboring nodes that usually are the one-hop neighborhood. Intuitively, it is beneficial to use information from a large neighborhood for network embedding learning. To enlarge the receptive field, a straight way is to stack multiple message-passing layers. 
A graph convolutional network with k layers enables nodes to receive information from a k-hop neighborhood. However, this method results in two issues. Firstly, it increases the risk of over-fitting by involving many more trainable parameters. The number of trainable parameters in the network increases when stacking multiple layers. Unlike regular convolutional neural networks, there is no effective graph pooling layer that can enlarge the receptive field without involving trainable parameters. Stacking many graph convolution layers will inevitably increase the risk of over-fitting. Secondly, stacking multiple layers will reduce the distinguishability of network embeddings, which is often referred to as the over-smoothing issue (Pei et al., 2020). Due to the invariance property of graph structures, message-passing operations cannot learn trainable weights in the aggregation process (Kipf & Welling, 2017; Gao et al., 2018). An averaging operation is usually used for information aggregation from the neighborhood. Consequently, information from relevant distant nodes will be diluted and each node carries similar information. In this work, we propose a teleport graph convolution layer to address this issue. This layer enables each node to aggregate information from a set of relevant nodes that are not directly connected to the center node in the original graph structure. Teleport functions are used to determine the relevant nodes from different perspectives.

3.2 TELEPORT GRAPH CONVOLUTION LAYER
To address the limitations in message-passing operations, we propose the teleport graph convolution layer (TeleGCL), which enables nodes to aggregate information beyond their local neighborhoods. In this layer, we employ multiple teleport functions to generate neighborhoods for each node. The teleport functions select some nodes that are relevant but not directly connected. Since these nodes are teleported from a global context, the receptive field of each node can be effectively enlarged. We require these functions to be permutation invariant so that this property is retained in the layer. Given a graph G = (V, E), where n = |V|, each node v ∈ V is associated with a feature vector x_v ∈ R^d, and each edge (u, v) ∈ E connects node u and node v in the graph. X = [x_1, x_2, ..., x_n] ∈ R^{d×n} and A ∈ R^{n×n} are the feature matrix and the adjacency matrix, respectively. A teleport function g(v, G) → N takes as input a node v and outputs a neighborhood N that includes a set of relevant nodes for node v's feature aggregation. In Section 3.3 and Section 3.4, we propose two teleport functions that construct structure-aware and feature-aware neighborhoods. Suppose we have m teleport functions; node v then aggregates information from a neighborhood N(v) = {N_l(v), N_1(v), N_2(v), ..., N_m(v)}, where N_l(v) = {u | u ∈ V, (v, u) ∈ E} is the local neighborhood and N_i(v) = g_i(v, G) is the neighborhood created by the i-th teleport function. Based on this neighborhood, the layer-wise propagation of TeleGCL at layer \ell for node v is formulated as

x_v^{(\ell)} = \sigma\left( \frac{1}{|N(v)|} \sum_{u \in N(v)} x_u^{(\ell-1)} \right), \quad (1)

where \sigma denotes an activation function such as ReLU (Nair & Hinton, 2010). By using teleport functions, a node can aggregate features beyond the local neighborhood, thereby leading to a larger receptive field. The teleport graph convolution layer thus enables feature aggregation regardless of the original graph structure, and it highly depends on the teleport functions that select distant nodes in the feature aggregation process.
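As a concrete illustration of the propagation rule in Eq. (1), the following minimal NumPy sketch updates the features of a single node from the union of its local and teleported neighborhoods. The function name and the representation of neighborhoods as index sets are our own illustrative choices and are not taken from the paper's implementation.

```python
import numpy as np

def telegcl_update(X, neighborhoods, v):
    """Propagation rule of Eq. (1) for a single node (illustrative sketch).

    X             : d x n feature matrix from the previous layer.
    neighborhoods : list of index sets [N_l(v), N_1(v), ..., N_m(v)] for node v.
    v             : index of the center node (unused here; kept for clarity).
    """
    # Union of the local neighborhood and all teleported neighborhoods.
    union = sorted(set().union(*neighborhoods))
    # Mean aggregation over the enlarged neighborhood, followed by ReLU.
    aggregated = X[:, union].mean(axis=1)
    return np.maximum(aggregated, 0.0)

# Example: node 0 with local neighbors {1, 2} and teleported nodes {7, 9}.
X = np.random.randn(16, 10)
x_new = telegcl_update(X, [{1, 2}, {7, 9}], v=0)
```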
In previous works (Pei et al., 2020), teleport functions are mainly based on node features while neglecting the graph structural information. The graph topology information should be considered in order to include nodes that share the same structural patterns, such as graph motifs (Ren & Jin, 2020). In this section, we propose two teleport functions: the structure-aware teleport function and the feature-aware teleport function. They select teleported nodes based on graph topology and node features, respectively.

3.3 STRUCTURE-AWARE TELEPORT FUNCTION
The structure-aware teleport function focuses on selecting nodes based on the graph topology. In a graph, nodes that share the same structural pattern contain related features, and it is desirable for a node to aggregate information from these relevant nodes. A structure-aware function can be used to capture relevant nodes from a graph structural perspective. With a structure-aware function, the teleported nodes for node v are selected as

N_s(v) = \{ u \mid u \in V, (v, u) \notin E, t_s(v, u) > \theta_1 \}, \quad (2)

where t_s(v, u) is a structure-aware function that computes the relevance of two nodes from a structural perspective, and \theta_1 is a threshold used to determine if node u is relevant. In this work, we propose a novel structure-aware teleport function, which computes the relevance of two nodes by checking if they share the same structural pattern. Our proposed method is based on the intuition that nodes with the same structural pattern have similar connections with their neighboring nodes. Based on this, we create a structural feature vector for each node, which can reflect its structural pattern such as graph motifs. For each node, we first compute its similarity scores with its neighboring nodes. Then we rank these similarity scores and use them as the structural feature of this node. To be specific, the structural feature vector y_v for node v is constructed as

w_v = X^\top x_v \in \mathbb{R}^n, \quad (3)
idx_v = \mathrm{rank}_k(w_v \circ A_{:,v}) \in \mathbb{R}^k, \quad (4)
y_v = w_v(idx_v) \in \mathbb{R}^k, \quad (5)

where \circ denotes element-wise vector multiplication and A_{:,v} is the v-th column of the adjacency matrix. The \mathrm{rank}_k operator ranks the similarity scores and outputs the indices of the top-k values in w_v, and w_v(idx_v) returns the subset of entries of w_v indexed by idx_v. We first compute the similarity scores between node v and its neighboring nodes in Eq. (3). Each element w_{u,v} of w_v measures the similarity between node u and node v. In Eq. (4), we rank these similarity scores and select the k largest values in w_v; the indices of the selected values are stored in idx_v. Using the indices idx_v, we extract a structural feature vector y_v from w_v. By repeating these operations on each node, we obtain a structural feature matrix Y = [y_1, y_2, ..., y_n] ∈ R^{k×n} for all nodes in the graph. In this way, the structural feature vector is constructed from similarity scores between the center node and its neighboring nodes. These similarity scores encode its connectivity pattern with surrounding nodes, thereby reflecting the structural information in the graph. Based on structural features, we use the dot product to compute the relevance of node u and node v: t_s(u, v) = softmax(y_u^\top y_v), which can measure relevance from the perspectives of both angle and magnitude in an efficient way. As illustrated in Eq. (2), the teleported nodes can be selected based on our constructed structural features.
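To make the construction in Eqs. (3)-(5) concrete, the following NumPy sketch computes the structural feature y_v of one node and the corresponding unnormalized structure-aware relevance. The function names and the dense-matrix representation are our own illustrative choices, and the softmax in t_s is assumed to normalize relevance scores over all candidate nodes, so only the raw dot product is shown here.

```python
import numpy as np

def structural_feature(X, A, v, k):
    """Structural feature y_v of node v, following Eqs. (3)-(5).

    X : d x n feature matrix, A : n x n adjacency matrix, k : feature length.
    """
    w_v = X.T @ X[:, v]                   # Eq. (3): similarity scores to all nodes
    masked = w_v * A[:, v]                # keep scores of neighboring nodes only
    idx_v = np.argsort(masked)[::-1][:k]  # Eq. (4): indices of the top-k scores
    return w_v[idx_v]                     # Eq. (5): k-dimensional structural feature

def structure_relevance(Y, u, v):
    """Unnormalized structure-aware relevance between nodes u and v (dot product)."""
    return Y[:, u] @ Y[:, v]
```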
3.4 FEATURE-AWARE TELEPORT FUNCTION
In a feature-aware teleport function, the teleported nodes are selected based on node features. A feature-aware teleport function can select highly relevant nodes based on their features. By using this function, the teleported nodes for node v are selected as

N_f(v) = \{ u \mid u \in V, (v, u) \notin E, t_f(v, u) > \theta_2 \}, \quad (6)

where t_f(v, u) is a feature-aware teleport function and \theta_2 is a threshold to determine if a node is teleported. Notably, Geom-GCN (Pei et al., 2020) uses a special case of this feature-aware teleport function. In Geom-GCN, node features are projected into a 2-dimensional space, and then the Euclidean distance between nodes is computed and used. The structural features in Geom-GCN are based on this latent space without considering graph topology information. The time complexity of this function is O(2d). In our feature-aware teleport function, we use the dot product to compute the relevance, which is effective and can slightly reduce the computational cost. To be specific, the feature-based relevance between node u and node v is computed as t_f(v, u) = softmax(x_u^\top x_v). By combining the structure-aware and feature-aware teleport functions, the neighborhood for node v is defined as N(v) = {N_l(v), N_f(v), N_s(v)}. In our proposed TeleGCL, each node aggregates information from the nodes in neighborhood N(v).
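Putting the pieces together, a hedged sketch of how the combined neighborhood N(v) could be assembled is shown below. The helper names (teleport_neighborhood, combined_neighborhood) are our own, and interpreting the softmax in t_s and t_f as a normalization over all candidate nodes is an assumption on our part.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def teleport_neighborhood(scores, A, v, theta):
    """Nodes whose normalized relevance to v exceeds theta and that are not adjacent to v."""
    rel = softmax(scores)
    keep = np.where((rel > theta) & (A[:, v] == 0))[0]
    return {int(u) for u in keep if u != v}

def combined_neighborhood(X, Y, A, v, theta1, theta2):
    """N(v) as the union of N_l(v), N_s(v), and N_f(v) used by the TeleGCL aggregation."""
    local = {int(u) for u in np.where(A[:, v] > 0)[0]}           # N_l(v)
    struct = teleport_neighborhood(Y.T @ Y[:, v], A, v, theta1)  # N_s(v), structure-aware
    feat = teleport_neighborhood(X.T @ X[:, v], A, v, theta2)    # N_f(v), feature-aware
    return local | struct | feat
```

In the experiments reported later, both thresholds are shared and set to α/|V| with α = 2 (see Section 4.5).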
3.5 TELEPORT GRAPH CONVOLUTIONAL NETWORKS
Based on our TeleGCL, we build a family of teleport graph convolutional networks (TeleGCNs). Given an input graph, we first use an embedding layer to learn low-dimensional continuous feature embeddings for nodes in the graph. Possible choices for this embedding layer include a fully-connected layer and a GCN layer. Here, we use a GCN layer to learn feature embeddings. Then several convolutional blocks are stacked to gradually learn network embeddings. In each convolutional block, we use our TeleGCL to learn high-level feature embeddings, and a pooling layer to reduce the graph size and involve more non-linearity. Here, we use a sampling-based pooling method to retain original graph structures and reduce the risk of over-fitting. Specifically, we use top-k pooling (Gao & Ji, 2019) layers in our model. Finally, we stack the outputs of all TeleGCLs and the output of the first GCN layer. To deal with variable graph sizes in terms of the number of nodes in a graph, we employ several global pooling layers such as averaging, maximization, and summation to reduce these outputs into vectors. These feature vectors are concatenated and fed into a multi-layer perceptron (MLP) for prediction.

4 EXPERIMENTS
In this section, we conduct experiments on graph classification tasks to evaluate our proposed methods. We compare our teleport graph convolutional networks (TeleGCNs) with previous state-of-the-art models. We conduct ablation studies to investigate the contributions of our proposed teleport functions. Some experiments are performed to study the impact of thresholds in teleport functions. Our code and detailed experimental setups are available in the supplementary material.

4.1 RESULTS ON GRAPH CLASSIFICATION TASKS
We evaluate our proposed TeleGCL and TeleGCNs on graph classification tasks. We compare our TeleGCNs with previous models on seven datasets including PROTEINS (Borgwardt et al., 2005), COLLAB, D&D (Dobson & Doig, 2003), IMDB-MULTI (Yanardag & Vishwanathan, 2015a), REDDIT-BINARY, REDDIT-MULTI-5K, and REDDIT-MULTI-12K (Yanardag & Vishwanathan, 2015b). These datasets are benchmarking graph datasets and are widely used for evaluation in this community. Notably, these datasets have no test dataset. The common practice (Xu et al., 2018; Ying et al., 2018; Gao & Ji, 2019; Lee et al., 2019) is to run 10-fold cross-validation on the training dataset and report the average accuracy (%) with standard deviation. We choose six previous state-of-the-art models as baseline models (Shervashidze et al., 2011; Niepert et al., 2016). We strictly follow the same practices as previous works. In bioinformatics datasets such as PROTEINS and D&D, we use the original node features in the datasets. In social network datasets like REDDIT-BINARY and IMDB-MULTI, we use node degrees as the initial features. The hyper-parameters in TeleGCNs are slightly tuned on the D&D dataset and are migrated to other datasets with slightly different selections. The experimental results on graph classification tasks are summarized in Table 1. Here, we report the graph classification accuracies with standard deviations. It can be seen from the results that our proposed TeleGCNs achieve significantly better performances than previous state-of-the-art models on six out of seven datasets. To be specific, our TeleGCNs outperform previous models by margins of 1.4%, 2.4%, 21.7%, 4.6%, 1.9%, and 5.6% on the PROTEINS, COLLAB, D&D, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. The results above show that our TeleGCNs consistently yield state-of-the-art performances on graph classification tasks, which demonstrates the effectiveness of our methods. By using teleport functions, TeleGCL can rapidly and effectively increase the receptive field without involving massive trainable parameters.

4.2 PERFORMANCE STUDY ON SMALL DATASETS
The experimental studies in Section 4.1 on relatively large datasets in terms of the number of graphs demonstrate the effectiveness of our proposed methods. In this section, we conduct experiments to study the performances of our TeleGCNs on three relatively small datasets; those are MUTAG (Wale et al., 2008), PTC (Toivonen et al., 2003), and IMDB-BINARY (Yanardag & Vishwanathan, 2015a). MUTAG and PTC are bioinformatics datasets, while IMDB-BINARY is a popular social network dataset. We follow the same experimental setups as previous works (Xu et al., 2018; Ying et al., 2018; Gao & Ji, 2019). Detailed experimental setups are provided in the supplementary material. The experimental results are summarized in Table 2. Due to the lack of testing datasets, we follow the common practice in previous works (Xu et al., 2018; Ying et al., 2018) and report the average accuracies obtained by running 10-fold cross-validation on the training dataset. It can be seen from the results that our TeleGCNs achieve promising results on these relatively small datasets. Our TeleGCNs outperform previous models by margins of 0.9%, 9.1%, and 4.4% on the MUTAG, PTC, and IMDB-BINARY datasets. The good performances on small datasets demonstrate that our proposed TeleGCL can effectively increase the receptive field without increasing the risk of over-fitting. To be specific, TeleGCL achieves a larger receptive field by using teleport functions, which teleport relevant nodes without using extra trainable parameters.

4.3 ABLATION STUDY OF TELEPORT FUNCTIONS
In this section, we conduct experiments to study the contributions of our proposed teleport functions to the overall performances. In our TeleGCL, we use both structure-aware and feature-aware teleport functions to enlarge the receptive field without stacking many graph convolution layers. The promising results in previous sections have demonstrated the effectiveness of our methods.
To investigate the individual contributions of each teleport function, we build multiple networks with the same network architecture as TeleGCN. To be specific, we build two networks that use only the structure-aware teleport function and only the feature-aware teleport function, denoted TeleGCN-ts and TeleGCN-tf, respectively. Also, we replace our TeleGCLs with GCN layers in the network, which results in GCNet. We evaluate these networks on the IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. To ensure fair comparisons, we use the same experimental setups for these networks. The experimental results are summarized in Table 3. It can be seen from the results that the best performances are achieved when both teleport functions are used. The networks with teleport functions significantly outperform GCNet by margins of 1.0%, 1.1%, and 1.0% on the IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. The results demonstrate the promising contributions of teleport functions.

4.4 COMPARISON WITH GEOM-GCN ON NODE CLASSIFICATION TASKS
In previous sections, we evaluated our methods using graph classification tasks under inductive settings. Here, we conduct experiments to evaluate our methods on node classification tasks. We compare our TeleGCN with GCN, GAT, and Geom-GCN on the Chameleon and Squirrel datasets (Rozemberczki et al., 2019). To ensure fair comparisons, we use the same experimental setups as Geom-GCN. The statistics of the datasets and the results are summarized in Table 4. From the results, we can see that our TeleGCN outperforms previous models by margins of 1.1% and 1.0% on the Chameleon and Squirrel datasets, respectively. This demonstrates the superior performance of our TeleGCN over previous state-of-the-art models.

4.5 PERFORMANCE STUDIES OF THRESHOLDS IN TELEPORT FUNCTIONS
In our proposed TeleGCL, teleport functions employ thresholds to determine if the relevance of nodes is significant. Thus, the threshold is a very important hyper-parameter in our proposed methods. It controls the number of nodes teleported in the feature aggregation process. In this section, we conduct experiments to investigate how different threshold values affect the performance of TeleGCN models. In our TeleGCNs, all teleport functions share the same threshold α/|V|. This helps to accommodate input graphs with variable sizes. In the experiments of previous sections, we set the hyper-parameter α to 2. Here, we vary the thresholds in TeleGCNs and evaluate the resulting networks on the D&D, PROTEINS, and IMDB-MULTI datasets. We report the graph classification accuracies (%) as illustrated in Figure 4. As demonstrated in the figure, the model achieves the best performance when α = 2. When the threshold is small, many nodes are teleported for feature aggregation, thereby leading to an over-smoothing problem. As the threshold increases, fewer nodes are selected and the receptive field is not effectively enlarged.

4.6 PERFORMANCE STUDIES OF k
In our structure-aware teleport function, we propose a novel method to construct structural features, which the teleport function uses to compute node similarities from a graph structural perspective. Essentially, each node uses its connections with k neighboring nodes to build a k-dimensional feature vector. From this point, k is an important hyper-parameter, especially for our TeleGCL. In this section, we conduct experiments to study the impacts of different k values on overall model performances.
To this end, we vary the values of k in TeleGCLs and evaluate the resulting models on three datasets; those are D&D, PROTEINS, and IMDB-BINARY. We report the graph classification performances on these datasets. The results are summarized in Table 5. We can observe from the results that the networks achieve the best performances when k = 8. As k increases further, there is no significant improvement in network performance, but the computational cost for computing relevances increases. Thus, we set k = 8 in our experiments as it is the best practice for both efficiency and performance.

4.7 VISUALIZATION OF TELEPORTED NODES
In our proposed structure-aware teleport function, the nodes that share the same structural patterns as the center node are teleported. In this part, we provide some visualization analysis of these teleported nodes. Here, we select three graphs from the PROTEINS dataset and visualize them in Figure 5. The green node in each graph is the center node and the orange nodes are teleported by the structure-aware teleport function. We can observe from these graphs that the teleported nodes share very similar graph topology with their corresponding center nodes. The teleported nodes in the first and third graphs are multiple hops away from the center nodes. The teleported nodes enable the center nodes to aggregate information from a larger receptive field. This demonstrates that our proposed structure-aware teleport function can select informative nodes for center nodes beyond the local neighborhood.

5 CONCLUSION
In this work, we address two major limitations of graph convolutional networks that are usually based on message-passing operations. To overcome the over-fitting and over-smoothing issues, we propose the teleport graph convolution layer, which utilizes teleport functions to select relevant nodes beyond the original graph structure. In particular, we propose two teleport functions; those are the structure-aware and feature-aware teleport functions. These two teleport functions can select relevant nodes from structural and feature perspectives. Based on our TeleGCL, we construct teleport graph convolutional networks for network embedding learning.

A APPENDIX
In this section, we introduce the experimental setup for our experiments. In this work, we mainly utilize graph classification tasks to demonstrate the effectiveness of our proposed methods. We conduct experiments on ten datasets; those are PROTEINS, COLLAB, D&D, IMDB-BINARY, IMDB-MULTI, MUTAG, PTC, REDDIT-BINARY, REDDIT-MULTI-5K, and REDDIT-MULTI-12K. In our proposed TeleGCNs, we use a GCN layer as the initial embedding layer. After that, three blocks as described in Section 3.5 are stacked before the final multi-layer perceptron. In each block, we use a TeleGCL and a gPool layer (Gao & Ji, 2019). The output features of the initial GCN and the TeleGCLs are reduced to three 1-dimensional feature vectors using global max, global averaging, and global summation operations. These feature vectors are concatenated and fed into a two-layer perceptron. The number of hidden neurons is 512. In each convolutional layer, we apply dropout (Srivastava et al., 2014) to the feature matrix. In the multi-layer perceptron, we also use dropout on the input feature vectors. The hyper-parameter tuning is performed on the D&D dataset with slight changes on other datasets. The details of the hyper-parameters are summarized in Table 6. On each dataset, we run experiments for 200 epochs using the Adam optimizer (Kingma & Ba, 2015).
The learning rate is 0.001 with a weight decay of 0.0008 to reduce the risk of over-fitting. The batch size is set to 64. We use an NVIDIA GeForce RTX 2080 Ti GPU to train our models.
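For reference, the optimization settings above can be collected into a short configuration sketch. The variable names and the use of PyTorch's Adam optimizer are our own illustrative choices, since the paper does not include its training script; only the numerical values are taken from the appendix.

```python
import torch

# Hyper-parameters reported in the appendix (remaining Table 6 values not shown here).
config = {
    "epochs": 200,
    "batch_size": 64,
    "learning_rate": 1e-3,
    "weight_decay": 8e-4,   # used to reduce the risk of over-fitting
    "hidden_units": 512,    # hidden neurons of the two-layer perceptron
}

def build_optimizer(model):
    """Adam optimizer with the learning rate and weight decay from the appendix."""
    return torch.optim.Adam(
        model.parameters(),
        lr=config["learning_rate"],
        weight_decay=config["weight_decay"],
    )
```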
1. What are the key issues with existing message-passing graph convolutional networks?
2. What are the motivations behind the proposed structure-aware and feature-aware teleport functions?
3. How does the author evaluate the performance improvement of TeleGCN compared to other models?
4. What are some potential limitations of the proposed approach regarding scalability and overfitting?
5. Are there any concerns about the choice of baseline models used in the comparison?
Review
This paper analyzes the key issues of existing message-passing graph convolutional networks, namely that stacking multiple layers may lead to over-fitting and over-smoothing. It therefore proposes to choose neighbors from the entire graph based on structure-aware and feature-aware relatedness rather than simply using the local neighborhood. However, the motivations of the proposed structure-aware and feature-aware teleport functions are not very convincing, and Table 3 suggests that the performance improvement of TeleGCN may largely be induced by the model architecture rather than the proposed TeleGCL.
Pros:
[1] It analyzes the potential issues of message-passing operations in conventional graph convolutional networks.
[2] It proposes structure-aware and feature-aware teleport functions to select neighbors from the entire graph.
[3] The experiments show that the proposed TeleGCN outperforms existing graph convolutional networks.
Cons:
[1] It is confusing why the proposed TeleGCN model could avoid the risk of over-fitting; from Figure 3, it also stacks multiple teleport graph convolution layers.
[2] Another concern is the scalability of the proposed graph convolutional networks. It might not be efficient on large-scale networks, since neighbors are selected from the entire graph for every TeleGCL. Besides, it would be more convincing to empirically compare the running time of the proposed method to the baselines.
[3] For the structure-aware teleport function, the top-k similarity scores with connected neighbors form the structural feature vector. Why not consider higher-order structural similarity or specific graph-motif similarity?
[4] Table 3 shows that the performance improvement from TeleGCL is very limited and that the GCNet model already obtains strong performance. This might indicate that the improvement of TeleGCN is largely induced by the model architecture rather than the proposed TeleGCL.
[5] It is not clear which papers the baselines, e.g., WL, PSCN, DIFFPOOL, etc., are from.
ICLR
1. What are the main contributions and novel aspects of the proposed Teleport graph convolutional layer?
2. How does the Teleport GCN solve the limitations of message passing paradigms and deep GNN models?
3. Are there any previous works that have addressed the same issues as the Teleport GCN? If so, how does the Teleport GCN differ from these works?
4. What are the weaknesses or limitations of the empirical evaluation of the proposed method?
5. How would you improve the experimental design and validation process for a fairer comparison of the Teleport GCN with other models?
Review
The manuscript proposes a novel convolutional layer that computes a node embedding by selectively summing the embeddings of neighbors at different hop distances (from 1 to m hops). The aim of the Teleport graph convolutional layer is to solve the major limitations introduced by using the message-passing paradigm. In the introduction, the authors highlight the main issues related to message-passing paradigms and deep GNN models. The authors state that stacking multiple layers involves massive trainable parameters, which consequently increases the risk of over-fitting. It is important to note that the issue of stacking several GNN layers was already addressed in earlier papers: “The Graph Neural Network Model” by Scarselli et al. (2009) and “Gated Graph Sequence Neural Networks” by Li et al. (2016). In these papers, the authors use recurrent models that exploit a weight-sharing mechanism in order to limit the number of trainable parameters. Moreover, several works that propose an alternative to the message-passing paradigm were published in the last few years. For instance, several convolutional operators are designed to consider a larger receptive field by exploiting power series of the diffusion operator or leveraging a multi-scale operator: Diffusion-convolutional neural networks, Atwood et al. (2016); LanczosNet: Multi-scale deep graph convolutional networks, Liao et al. (2019); Break the ceiling: Stronger multi-scale deep graph convolutional networks, Luan et al. (2019); SIGN: Scalable Inception Graph Neural Networks, Rossi et al. (2020). Since the Teleport GCN tries to solve the same issues, all these models should be discussed in the related works section and also considered in the comparison of the experimental results. In this regard, note that, even if the idea of using the feature-aware or the structure-aware teleport function is interesting, the teleport convolutional layer defined in Section 3.2 is not very novel in my opinion, since it seems very close to a multi-scale approach where several powers of a linear diffusion operator are considered in the same layer.
The main problem of this work is the empirical evaluation of the proposed method. The authors validate the models using an unfair method. Indeed, the authors state “The hyper-parameters in TeleGCNs are slightly tuned on D&D dataset and are migrated to other datasets with slightly different selections”. Using similar hyper-parameters tuned on D&D for the other datasets is, in my opinion, not correct. Note that the other datasets differ significantly from D&D (e.g. PROTEINS has 39.1 nodes vs 284.3; COLLAB has a significantly different number of classes and 5 times the number of graphs; etc.). It is also important to notice that all these datasets require a different type of classification task. Moreover, the method used by the authors differs from the “common practice” used in (Xu et al., 2018; Ying et al., 2018; Gao & Ji, 2019; Lee et al., 2019), since in Xu et al., 2018 the authors state: “The hyper-parameters we tune for each dataset are: (1) the number of hidden units ∈ {16, 32} for bioinformatics graphs and 64 for social graphs; (2) the batch size ∈ {32, 128}; (3) the dropout ratio ∈ {0, 0.5} after the dense layer (Srivastava et al., 2014); (4) the number of epochs, i.e., a single epoch with the best cross-validation accuracy averaged over the 10 folds was selected”. The validation phase has to be executed independently for each dataset, otherwise the obtained results will be clearly biased. Moreover, note that the impact of the validation phase in evaluating the performance of a model is discussed in “A Fair Comparison of Graph Neural Networks for Graph Classification” by Errica et al. (ICLR 2020). The results reported in that paper show that performing a fair validation procedure is crucial for evaluating a model's performance.
Minor comments: From Section 4.2 to Section 4.7 it is almost impossible to distinguish between table captions and section text.
ICLR
Title Teleport Graph Convolutional Networks Abstract We consider the limitations in message-passing graph neural networks. In message-passing operations, each node aggregates information from its neighboring nodes. To enlarge the receptive field, graph neural networks need to stack multiple message-passing graph convolution layers, which leads to the over-fitting issue and over-smoothing issue. To address these limitations, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to enable each node to aggregate information from a much larger neighborhood. For each node, teleport functions select relevant nodes beyond the local neighborhood, thereby resulting in a larger receptive field. To apply our structure-aware teleport function, we propose a novel method to construct structural features for nodes in the graph. Based on our TeleGCL, we build a family of teleport graph convolutional networks. The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods. 1 INTRODUCTION Graph neural networks (GNNs) have shown great capability in solving challenging tasks on graph data such as node classification (Grover & Leskovec, 2016; Kipf & Welling, 2017; Veličković et al., 2017; Gao et al., 2018), graph classification (Xu et al., 2018; Gao & Ji, 2019; You et al., 2019), and link prediction (Zhang & Chen, 2018; Chen et al., 2019; Zhou et al., 2019). Most graph convolutional networks are based on message-passing operations, in which each node aggregates information from its neighboring nodes. To enable a larger receptive field (Chen et al., 2016), GNNs need to stack multiple layers, which is straightforward but can result in several issues. Firstly, stacking multiple layers involves massive trainable parameters, which consequently increases the risk of over-fitting. Secondly, message-passing operations mostly use averaging to combine the aggregated features, which significantly reduces the distinguishability of network embeddings. From this point, GNNs that are based on message-passing operations can not use deep network architecture due to these limitations. Some works such as Geom-GCN (Pei et al., 2020) try to solve these issues by involving more nodes in the feature aggregation process. However, Geom-GCN doesn’t consider the original graph topology information when generating the additional set of nodes for aggregation, which can neglect some relevant nodes from a structural perspective. To address the above limitations and increase the receptive field effectively, we propose a teleport graph convolution layer (TeleGCL) that uses teleport functions to select highly-relevant nodes at the global scope. A teleport function computes relevances between the center node and other nodes beyond the local neighborhood. The nodes with particular relevances are teleported for the center node. Here, the selection of teleported nodes is not restricted by the graph topology. This enables the center node to gather information from a larger neighborhood without going deep, which helps to avoid over-fitting and over-smoothing issues. In particular, we propose two teleport functions; those are structure-aware and feature-aware teleport functions. They compute the nodes’ relevances from graph structural perspective and node features perspective, respectively. Based on our TeleGCL, we build a family of teleport graph convolutional networks. 
The empirical results on graph and node classification tasks demonstrate the effectiveness of our proposed methods. 2 BACKGROUND AND RELATED WORK In this section, we describe message-passing operations on graph data and geometric graph convolutional networks. Graph neural networks (Fan et al., 2019; Wu et al., 2019; Morris et al., 2019; Wu et al., 2020) have achieved state-of-the-art performances on various challenging tasks in the field of network embedding. The mainstream of graph deep learning operations follows a message-passing schema. In a message-passing operation, each node sends its features, known as a message, to its neighboring nodes in the graph. Then each node aggregates messages from its neighborhood and uses them to update its features. When combining the aggregated features, different strategies can be applied. In the graph convolution layer (GCN) (Kipf & Welling, 2017), features from neighboring nodes are given equal weights in the aggregation process. To assign different weights to different neighboring nodes, the graph attention network (Veličković et al., 2017) employs an attention mechanism to compute aggregation weights. Based on these message-passing operations, graph neural networks stack multiple layers, which enables a larger receptive field. Recently, some research works have tried to perform message passing beyond the local neighborhood. Pei et al. (2020) proposed to construct a continuous latent space that enables graph neural networks to perform feature learning in the latent space. To be specific, it first projects node features into a 2-dimensional continuous latent space. Based on this latent space, a structural neighborhood is constructed from the Euclidean distance between each pair of nodes in the 2-dimensional space. In this process, the construction of structural features does not consider the graph connectivity information. Thus, the structural neighborhood in (Pei et al., 2020) is still built on node features without considering the graph topology. In this work, we propose a method to generate structure-aware features for each node. In particular, we use the graph connectivity and the similarity with neighboring nodes to construct a feature vector for each node. By considering graph connectivity, our constructed structural features can reflect graph topology information. 3 TELEPORT GRAPH CONVOLUTIONAL NETWORKS In this work, we propose the teleport graph convolution layer (TeleGCL) that enables a center node to aggregate information beyond the regular neighborhood structure by using teleport functions. To enable effective node teleportation, we propose two teleport functions from structure-aware and feature-aware perspectives. Specifically, we propose a novel method to construct structural features for nodes, which can be used by structure-aware functions to select relevant nodes. Based on our TeleGCL, we propose teleport graph convolutional networks for network embedding learning. 3.1 LIMITATIONS OF MESSAGE-PASSING OPERATIONS Currently, most graph convolutional networks are based on message-passing operations. In a message-passing operation, each node aggregates information from its neighboring nodes, which usually form the one-hop neighborhood. Intuitively, it is beneficial to use information from a large neighborhood for network embedding learning. To enlarge the receptive field, a straightforward way is to stack multiple message-passing layers.
A graph convolutional network with k layers enables nodes to receive information from a k-hop neighborhood. However, this method results in two issues. Firstly, it increases the risk of over-fitting by involving many more trainable parameters. The number of trainable parameters in the network increases when stacking multiple layers. Unlike regular convolutional neural networks, there is no effective graph pooling layer that can enlarge the receptive field without involving trainable parameters. Stacking many graph convolution layers will inevitably increase the risk of over-fitting. Secondly, stacking multiple layers will reduce the distinguishability of network embeddings, which is often referred to as the over-smoothing issue (Pei et al., 2020). Due to the invariance property of graph structures, message-passing operations cannot learn trainable weights in the aggregation process (Kipf & Welling, 2017; Gao et al., 2018). An averaging operation is usually used for information aggregation from the neighborhood. Consequently, information from relevant distant nodes will be diluted and each node carries similar information. In this work, we propose a teleport graph convolution layer to address this issue. This layer enables each node to aggregate information from a set of relevant nodes that are not directly connected to the center node in the original graph structure. Teleport functions are used to determine the relevant nodes from different perspectives. 3.2 TELEPORT GRAPH CONVOLUTION LAYER To address the limitations of message-passing operations, we propose the teleport graph convolution layer (TeleGCL), which enables nodes to aggregate information beyond their local neighborhoods. In this layer, we employ multiple teleport functions to generate neighborhoods for each node. The teleport functions select nodes that are relevant but not directly connected. Since these nodes are teleported from a global context, the receptive field of each node can be effectively enlarged. We require these functions to be permutation invariant so that this property is retained in the layer. Given a graph $G = (V, E)$, where $n = |V|$, each node $v \in V$ is associated with a feature vector $x_v \in \mathbb{R}^d$, and each edge $(u, v) \in E$ connects node $u$ and node $v$ in the graph. $X = [x_1, x_2, \dots, x_n] \in \mathbb{R}^{d \times n}$ and $A \in \mathbb{R}^{n \times n}$ are the feature matrix and adjacency matrix, respectively. A teleport function $g(v, G) \rightarrow N$ takes as input a node $v$ and outputs a neighborhood $N$ that includes a set of relevant nodes for node $v$'s feature aggregation. In Section 3.3 and Section 3.4, we propose two teleport functions that construct structure-aware and feature-aware neighborhoods. Suppose we have $m$ teleport functions; then node $v$ aggregates information from a neighborhood $N(v) = \{N_l(v), N_1(v), N_2(v), \dots, N_m(v)\}$, where $N_l(v) = \{u \mid u \in V, (v, u) \in E\}$ is the local neighborhood and $N_i(v) = g_i(v, G)$ is the neighborhood created by the $i$th teleport function. Based on this neighborhood, the layer-wise propagation of TeleGCL at layer $\ell$ for node $v$ is formulated as

$$x_v^{(\ell)} = \sigma\left(\frac{1}{|N(v)|} \sum_{u \in N(v)} x_u^{(\ell-1)}\right), \qquad (1)$$

where $\sigma$ denotes an activation function such as ReLU (Nair & Hinton, 2010). By using teleport functions, a node can aggregate features beyond the local neighborhood, thereby leading to a larger receptive field. In Section 3.2, we propose the teleport graph convolution layer that enables feature aggregation regardless of the original graph structure. A TeleGCL highly depends on the teleport functions, which select distant nodes in the feature aggregation process.
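To make the propagation rule in Eq. (1) concrete, the following is a minimal NumPy sketch of a single TeleGCL forward pass. It is an illustration only: the function names, the callable interface g(v, X, A) for teleport functions, and the decision to exclude the center node from the average are assumptions, since the text does not pin down these details.

```python
import numpy as np

def telegcl_forward(X, A, teleport_fns, activation=lambda z: np.maximum(z, 0.0)):
    """Sketch of Eq. (1): mean aggregation over the combined neighborhood N(v).

    X : (d, n) feature matrix whose columns are node features x_v.
    A : (n, n) binary adjacency matrix.
    teleport_fns : list of callables g(v, X, A) -> iterable of node indices,
        standing in for the teleport functions g_i(v, G).
    """
    n = A.shape[0]
    X_out = np.zeros_like(X, dtype=float)
    for v in range(n):
        # Local one-hop neighborhood N_l(v).
        neighborhood = set(np.flatnonzero(A[:, v]))
        # Neighborhoods N_1(v), ..., N_m(v) produced by the teleport functions.
        for g in teleport_fns:
            neighborhood.update(g(v, X, A))
        nodes = sorted(neighborhood)
        if nodes:
            # Mean aggregation over N(v), followed by a nonlinearity (ReLU here).
            X_out[:, v] = activation(X[:, nodes].mean(axis=1))
    return X_out
```

With an empty list of teleport functions the layer reduces to plain one-hop mean aggregation, which makes the contrast with standard message passing explicit.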
In previous works (Pei et al., 2020), teleport functions are mainly based on node features while neglecting the graph structural information. The graph topology information should be considered to include nodes that share the same structural patterns, such as graph motifs (Ren & Jin, 2020). In this section, we propose two teleport functions: a structure-aware teleport function and a feature-aware teleport function. They select teleport nodes based on graph topology and node features, respectively. 3.3 STRUCTURE-AWARE TELEPORT FUNCTION The structure-aware teleport function focuses on selecting nodes based on the graph topology. In a graph, nodes that share the same structural pattern contain related features. It is desirable for a node to aggregate information from these relevant nodes. A structure-aware function can be used to capture relevant nodes from a graph-structural perspective. With a structure-aware function, the teleported nodes for node $v$ are selected as

$$N_s(v) = \{u \mid u \in V,\ (v, u) \notin E,\ t_s(v, u) > \theta_1\}, \qquad (2)$$

where $t_s(v, u)$ is a structure-aware function that computes the relevance of two nodes from a structural perspective. Here, $\theta_1$ is used to determine whether node $u$ is relevant. In this work, we propose a novel structure-aware teleport function, which computes the relevance of two nodes by checking whether they share the same structural pattern. Our proposed method is based on the intuition that nodes with the same structural pattern have similar connections with their neighboring nodes. From this point, we create a structural feature vector for each node, which can reflect its structural pattern such as graph motifs. For each node, we first compute its similarity scores with its neighboring nodes. Then we rank these similarity scores and use them as the structural feature of this node. To be specific, the structural feature vector $y_v$ for node $v$ is constructed as

$$w_v = X^\top x_v \in \mathbb{R}^n, \qquad (3)$$
$$\mathrm{idx}_v = \mathrm{rank}_k(w_v \circ A_{:,v}) \in \mathbb{R}^k, \qquad (4)$$
$$y_v = w_v(\mathrm{idx}_v) \in \mathbb{R}^k, \qquad (5)$$

where $\circ$ denotes element-wise vector multiplication, and $A_{:,v}$ is the $v$th column of the adjacency matrix. The $\mathrm{rank}_k$ operator ranks the similarity scores and outputs the indices of the top-$k$ values in $w_v$. $w_v(\mathrm{idx}_v)$ returns the subset of rows in $w_v$ indexed by $\mathrm{idx}_v$. We first compute the similarity scores between node $v$ and its neighboring nodes in Eq. (3). Each element $w_{u,v}$ in $w_v$ measures the similarity between node $u$ and node $v$. In Eq. (4), we rank these similarity scores and select the $k$ largest values in $w_v$. The indices of the selected values are stored in $\mathrm{idx}_v$. Using the indices $\mathrm{idx}_v$, we extract a structural feature vector $y_v$ from $w_v$. By repeating these operations on each node, we obtain a structural feature matrix $Y = [y_1, y_2, \dots, y_n] \in \mathbb{R}^{k \times n}$ for all nodes in the graph. In this way, the structural feature vector is constructed from similarity scores between the center node and its neighboring nodes. These similarity scores encode the node's connectivity pattern with surrounding nodes, thereby reflecting the structural information in the graph. Based on the structural features, we use the dot product to compute the relevance of node $u$ and node $v$: $t_s(u, v) = \mathrm{softmax}(y_u^\top y_v)$, which can measure relevance from the perspectives of both angle and magnitude in an efficient way. As illustrated in Eq. (2), the teleport nodes can then be selected based on our constructed structural features. 3.4 FEATURE-AWARE TELEPORT FUNCTION In a feature-aware teleport function, the teleported nodes are selected based on node features.
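Before moving on to the feature-aware variant, here is a small NumPy sketch of the structure-aware construction above (Eqs. (2)–(5)). The padding behavior for nodes with fewer than k neighbors and the normalization set of the softmax are assumptions, and the function names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def structural_features(X, A, k=8):
    """Eqs. (3)-(5): build the k-dimensional structural feature y_v for every node.

    If a node has fewer than k neighbors, the remaining slots are filled from
    non-neighbor entries (zero after masking); this padding rule is an assumption.
    """
    d, n = X.shape
    Y = np.zeros((k, n))
    for v in range(n):
        w_v = X.T @ X[:, v]                 # Eq. (3): similarity of v to every node
        masked = w_v * A[:, v]              # keep only scores of neighboring nodes
        idx = np.argsort(masked)[::-1][:k]  # Eq. (4): indices of the top-k scores
        Y[:, v] = w_v[idx]                  # Eq. (5): structural feature vector y_v
    return Y

def structure_aware_teleport(v, A, Y, theta):
    """Eq. (2): teleport non-adjacent nodes whose structural relevance exceeds theta.

    The softmax is taken over all nodes, which is an assumption; the paper only
    writes t_s(u, v) = softmax(y_u^T y_v).
    """
    scores = Y.T @ Y[:, v]
    rel = np.exp(scores - scores.max())
    rel = rel / rel.sum()
    return [u for u in range(A.shape[0]) if u != v and A[u, v] == 0 and rel[u] > theta]
```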
A feature-aware teleport function can select highly relevant nodes based on their features. By using this function, the teleported nodes for node $v$ are selected as

$$N_f(v) = \{u \mid u \in V,\ (v, u) \notin E,\ t_f(v, u) > \theta_2\}, \qquad (6)$$

where $t_f(v, u)$ is a teleport function and $\theta_2$ is a threshold that determines whether a node is teleported. Notably, Geom-GCN (Pei et al., 2020) uses a special case of this feature-aware teleport function. In Geom-GCN, node features are projected into a 2-dimensional space, and then the Euclidean distance is computed and used. The structural features in Geom-GCN are based on the latent space without considering graph topology information. The time complexity of this function is O(2d). In our feature-aware teleport function, we use the dot product to compute the relevance, which is effective and can slightly reduce the computational cost. To be specific, the feature-based relevance between node $u$ and node $v$ is computed as $t_f(v, u) = \mathrm{softmax}(x_u^\top x_v)$. By combining the structure-aware and feature-aware teleport functions, the neighborhood for node $v$ is defined as $N(v) = \{N_l(v), N_f(v), N_s(v)\}$. In our proposed TeleGCL, each node aggregates information from the nodes in neighborhood $N(v)$. 3.5 TELEPORT GRAPH CONVOLUTIONAL NETWORKS Based on our TeleGCL, we build a family of teleport graph convolutional networks (TeleGCNs). Given an input graph, we first use an embedding layer to learn low-dimensional continuous feature embeddings for nodes in the graph. Possible choices for this embedding layer include a fully-connected layer and a GCN layer. Here, we use a GCN layer to learn feature embeddings. Then several convolutional blocks are stacked to gradually learn network embeddings. In each convolutional block, we use our TeleGCL to learn high-level feature embeddings, and a pooling layer to reduce the graph size and involve more non-linearity. Here, we use a sampling-based pooling method to retain the original graph structures and reduce the risk of over-fitting. Specifically, we use top-k pooling (Gao & Ji, 2019) layers in our model. Finally, we stack the outputs of all TeleGCLs and the output of the first GCN layer. To deal with variable graph sizes in terms of the number of nodes in a graph, we employ several global pooling layers such as averaging, maximization, and summation to reduce these outputs into vectors. These feature vectors are concatenated and fed into a multi-layer perceptron (MLP) for prediction. 4 EXPERIMENTS In this section, we conduct experiments on graph classification tasks to evaluate our proposed methods. We compare our teleport graph convolutional networks (TeleGCNs) with previous state-of-the-art models. We conduct ablation studies to investigate the contributions of our proposed teleport functions. Some experiments are performed to study the impact of thresholds in teleport functions. Our code and detailed experimental setups are available in the supplementary material. 4.1 RESULTS ON GRAPH CLASSIFICATION TASKS We evaluate our proposed TeleGCL and TeleGCNs on graph classification tasks. We compare our TeleGCNs with previous models on seven datasets including PROTEINS (Borgwardt et al., 2005), COLLAB, D&D (Dobson & Doig, 2003), IMDB-MULTI (Yanardag & Vishwanathan, 2015a), REDDIT-BINARY, REDDIT-MULTI-5K, and REDDIT-MULTI-12K (Yanardag & Vishwanathan, 2015b). These are benchmark graph datasets that are widely used for evaluation in this community. Notably, these datasets have no separate test split.
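For completeness, the sketch below mirrors the feature-aware teleport function of Eq. (6) from Section 3.4, under the same softmax-normalization assumption as before; the function name is again illustrative.

```python
import numpy as np

def feature_aware_teleport(v, X, A, theta):
    """Eq. (6): teleport non-adjacent nodes whose feature relevance t_f(v, u) exceeds theta."""
    scores = X.T @ X[:, v]               # x_u^T x_v for every node u
    rel = np.exp(scores - scores.max())  # softmax over all candidate nodes (assumed set)
    rel = rel / rel.sum()
    return [u for u in range(A.shape[0]) if u != v and A[u, v] == 0 and rel[u] > theta]
```

Passing both teleport functions (with Y and the thresholds bound, e.g. via functools.partial or a small lambda) into the propagation sketch of Eq. (1) realizes the combined neighborhood N(v) = {N_l(v), N_f(v), N_s(v)} described in Section 3.4.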
The common practice (Xu et al., 2018; Ying et al., 2018; Gao & Ji, 2019; Lee et al., 2019) is to run 10-fold cross-validation on the training dataset and report the average accuracy (%) with standard deviation. We choose six previous state-of-the-art models as baseline models (Shervashidze et al., 2011; Niepert et al., 2016). We strictly follow the same practices as previous works. On bioinformatics datasets such as PROTEINS and D&D, we use the original node features provided in the datasets. On social network datasets like REDDIT-BINARY and IMDB-MULTI, we use node degrees as the initial features. The hyper-parameters of TeleGCNs are tuned on the D&D dataset and migrated to the other datasets with minor adjustments. The experimental results on graph classification tasks are summarized in Table 1. Here, we report the graph classification accuracies with standard deviations. It can be seen from the results that our proposed TeleGCNs achieve significantly better performances than previous state-of-the-art models on six out of seven datasets. To be specific, our TeleGCNs outperform previous models by margins of 1.4%, 2.4%, 21.7%, 4.6%, 1.9%, and 5.6% on the PROTEINS, COLLAB, D&D, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. These results show that our TeleGCNs consistently yield state-of-the-art performances on graph classification tasks, which demonstrates the effectiveness of our methods. By using teleport functions, TeleGCL can rapidly and effectively increase the receptive field without involving massive numbers of trainable parameters. 4.2 PERFORMANCE STUDY ON SMALL DATASETS The experimental studies in Section 4.1 on relatively large datasets, in terms of the number of graphs, demonstrate the effectiveness of our proposed methods. In this section, we conduct experiments to study the performances of our TeleGCNs on three relatively small datasets: MUTAG (Wale et al., 2008), PTC (Toivonen et al., 2003), and IMDB-BINARY (Yanardag & Vishwanathan, 2015a). MUTAG and PTC are bioinformatics datasets, while IMDB-BINARY is a popular social network dataset. We follow the same experimental setups as previous works (Xu et al., 2018; Ying et al., 2018; Gao & Ji, 2019). The experimental setups are provided in the supplementary material. The experimental results are summarized in Table 2. Due to the lack of test splits, we follow the common practice of previous works (Xu et al., 2018; Ying et al., 2018) and report the average accuracies obtained by running 10-fold cross-validation on the training dataset. It can be seen from the results that our TeleGCNs achieve promising results on these relatively small datasets. Our TeleGCNs outperform previous models by margins of 0.9%, 9.1%, and 4.4% on the MUTAG, PTC, and IMDB-BINARY datasets. The good performance on small datasets demonstrates that our proposed TeleGCL can effectively increase the receptive field without increasing the risk of over-fitting. To be specific, TeleGCL achieves a larger receptive field by using teleport functions, which teleport relevant nodes without using extra trainable parameters. 4.3 ABLATION STUDY OF TELEPORT FUNCTIONS In this section, we conduct experiments to study the contributions of our proposed teleport functions to the overall performances. In our TeleGCL, we use both structure-aware and feature-aware teleport functions to enlarge the receptive field without stacking many graph convolution layers. The promising results in the previous sections have demonstrated the effectiveness of our methods.
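The 10-fold cross-validation protocol used throughout Sections 4.1 and 4.2 can be summarized by the sketch below. Whether the folds are stratified by class label is not stated in the text, so that detail is an assumption, and train_and_eval is a placeholder for the actual TeleGCN training loop.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate_10fold(labels, train_and_eval, seed=0):
    """Run 10-fold cross-validation and report mean accuracy with standard deviation.

    train_and_eval(train_idx, test_idx) should train a model on the training split
    and return its test accuracy; it stands in for the TeleGCN training procedure.
    """
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    accs = [train_and_eval(tr, te) for tr, te in skf.split(np.zeros(len(labels)), labels)]
    return float(np.mean(accs)), float(np.std(accs))
```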
To investigate the individual contribution of each teleport function, we build multiple networks with the same network architecture as TeleGCN. To be specific, we build two networks that only use the structure-aware and the feature-aware teleport function, denoted TeleGCN-$t_s$ and TeleGCN-$t_f$, respectively. Also, we replace our TeleGCLs with GCN layers in the network, which results in GCNet. We evaluate these networks on the IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. To ensure fair comparisons, we use the same experimental setups for these networks. The experimental results are summarized in Table 3. It can be seen from the results that the best performances are achieved when both teleport functions are used. The networks with teleport functions significantly outperform GCNet by margins of 1.0%, 1.1%, and 1.0% on the IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-12K datasets. The results demonstrate the promising contributions of the teleport functions. 4.4 COMPARISON WITH GEOM-GCN ON NODE CLASSIFICATION TASKS In the previous sections, we evaluated our methods on graph classification tasks under inductive settings. Here, we conduct experiments to evaluate our methods on node classification tasks. We compare our TeleGCN with GCN, GAT, and Geom-GCN on the Chameleon and Squirrel datasets (Rozemberczki et al., 2019). To ensure fair comparisons, we use the same experimental setups as Geom-GCN. The statistics of the datasets and the results are summarized in Table 4. From the results, we can see that our TeleGCN outperforms previous models by margins of 1.1% and 1.0% on the Chameleon and Squirrel datasets, respectively. This demonstrates the superior performance of our TeleGCN over previous state-of-the-art models. 4.5 PERFORMANCE STUDIES OF THRESHOLDS IN TELEPORT FUNCTIONS In our proposed TeleGCL, the teleport functions employ thresholds to determine whether the relevance of a node is significant. Thus, the threshold is a very important hyper-parameter in our proposed methods. It controls the number of nodes teleported in the feature aggregation process. In this section, we conduct experiments to investigate how different threshold values affect the performance of TeleGCN models. In our TeleGCNs, all teleport functions share the same threshold, which is set to $\alpha/|V|$. This helps to accommodate input graphs of variable sizes. In the experiments of the previous sections, we set the hyper-parameter $\alpha$ to 2. Here, we vary the thresholds in TeleGCNs and evaluate the resulting networks on the D&D, PROTEINS, and IMDB-MULTI datasets. We report the graph classification accuracies (%) as illustrated in Figure 4. As demonstrated in the figure, the model achieves the best performance when $\alpha = 2$. When the threshold is small, many nodes are teleported for feature aggregation, thereby leading to an over-smoothing problem. As the threshold increases, fewer nodes are selected and the receptive field is not effectively enlarged. 4.6 PERFORMANCE STUDIES OF k In our structure-aware teleport function, we propose a novel method to construct structural features that the teleport function uses to compute node similarities from a graph-structural perspective. Essentially, each node uses its connections with k neighbors to build a k-dimensional feature vector. Hence, k is an important hyper-parameter, especially for our TeleGCL. In this section, we conduct experiments to study the impact of different k values on overall model performance.
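As a small illustration of the threshold rule in Section 4.5, the graph-size-dependent threshold can be computed as below; alpha = 2 mirrors the reported best setting, and the choice to share one threshold across both teleport functions follows the text.

```python
def teleport_threshold(num_nodes, alpha=2.0):
    """Shared threshold theta = alpha / |V| for both teleport functions (Section 4.5).

    The threshold shrinks for larger graphs, which helps accommodate inputs of
    variable size, as discussed in the text.
    """
    return alpha / num_nodes
```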
To this end, we vary the value of k in the TeleGCLs and evaluate the resulting models on three datasets: D&D, PROTEINS, and IMDB-BINARY. We report the graph classification performances on these datasets. The results are summarized in Table 5. We can observe from the results that the networks achieve the best performances when k = 8. As k increases further, there is no significant improvement in network performance, but the computational cost of computing relevances increases. Thus, we set k = 8 in our experiments, as it offers the best trade-off between efficiency and performance. 4.7 VISUALIZATION OF TELEPORTED NODES In our proposed structure-aware teleport function, the nodes that share the same structural patterns as the center node are teleported. In this part, we provide some visualization analysis of these teleported nodes. Here, we select three graphs from the PROTEINS dataset and visualize them in Figure 5. The green node in each graph is the center node, and the orange nodes are teleported by the structure-aware teleport function. We can observe from these graphs that the teleported nodes share very similar graph topology with their corresponding center nodes. The teleported nodes in the first and third graphs are multiple hops away from the center nodes. The teleported nodes enable the center nodes to aggregate information from a larger receptive field. This demonstrates that our proposed structure-aware teleport function can select informative nodes for center nodes beyond the local neighborhood. 5 CONCLUSION In this work, we address two major limitations of graph convolutional networks that are based on message-passing operations. To overcome the over-fitting and over-smoothing issues, we propose the teleport graph convolution layer, which utilizes teleport functions to select relevant nodes beyond the original graph structure. In particular, we propose two teleport functions: a structure-aware and a feature-aware teleport function. These two teleport functions select relevant nodes from the structural and feature perspectives, respectively. Based on our TeleGCL, we construct teleport graph convolutional networks for network embedding learning. A APPENDIX In this section, we introduce the experimental setup for our experiments. In this work, we mainly utilize graph classification tasks to demonstrate the effectiveness of our proposed methods. We conduct experiments on ten datasets: PROTEINS, COLLAB, D&D, IMDB-BINARY, IMDB-MULTI, MUTAG, PTC, REDDIT-BINARY, REDDIT-MULTI-5K, and REDDIT-MULTI-12K. In our proposed TeleGCNs, we use a GCN layer as the initial embedding layer. After that, three blocks as described in Section 3.5 are stacked before the final multi-layer perceptron. In each block, we use a TeleGCL and a gPool layer (Gao & Ji, 2019). The output features of the initial GCN layer and the TeleGCLs are reduced to three 1-dimensional feature vectors using global max, global averaging, and global summation operations. These feature vectors are concatenated and fed into a two-layer perceptron. The number of hidden neurons is 512. In each convolutional layer, we apply dropout (Srivastava et al., 2014) to the feature matrix. In the multi-layer perceptron, we also use dropout on the input feature vectors. The hyper-parameter tuning is performed on the D&D dataset, with slight changes for the other datasets. The details of the hyper-parameters are summarized in Table 6. On each dataset, we run experiments for 200 epochs using the Adam optimizer (Kingma & Ba, 2015).
The learning rate is 0.001 with a weight decay of 0.0008 to reduce the risk of over-fitting. The batch size is set to 64. We use an NVIDIA GeForce RTX 2080 Ti GPU to train our models.
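As a compact reference, the training setup stated in the appendix can be collected into a single configuration. The dictionary below is a sketch that only restates values given explicitly in the text; the per-dataset hyper-parameters of Table 6 are not reproduced here.

```python
# Hypothetical summary of the appendix training setup; values are quoted from the text.
TELEGCN_TRAINING_CONFIG = {
    "initial_embedding": "GCN layer",
    "num_blocks": 3,                    # each block: TeleGCL + gPool (top-k pooling)
    "global_pooling": ["max", "mean", "sum"],
    "mlp_hidden_units": 512,            # two-layer perceptron for prediction
    "dropout": True,                    # on feature matrices and on MLP inputs
    "optimizer": "Adam",
    "epochs": 200,
    "learning_rate": 1e-3,
    "weight_decay": 8e-4,
    "batch_size": 64,
    "hardware": "NVIDIA GeForce RTX 2080 Ti",
}
```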
1. What is the main contribution of the paper, and what are the strong points of the proposed method? 2. What are the weaknesses of the paper, particularly regarding the evaluation of the method? 3. How does the reviewer assess the complexity of the model, and how does it compare to other approaches? 4. Are there any misunderstandings or misinterpretations of previous works in the field, such as Geom-GCN? 5. How can the method be improved, and what further research directions can be explored?
Review
Review Summary: The paper proposes a method to increase the receptive field of GNNs, while avoiding oversmoothing. The idea is to create extra connections by linking distant nodes based on two criteria: node feature similarity and node structure similarity. Pairs of nodes that are more similar than a threshold are connected. For structure-aware linking (teleport) a descriptor of the local structure is constructed for each node, by stacking the similarity scores to the most similar k neighbours. Computing the dot product between descriptors, gives a similarity measure of the local structure of each node. For feature-aware linking the features dot product is directly used as a measure. Experiments show that both types of teleport functions are helpful. Strong Points: Clearly motivated and simple method. Carefully adding extra connections in the graph is a good idea, and the two proposed ways are sound. The paper highlights the importance of extra connections based on structure similarity and proposes an interesting mechanism to achieve this. The visualizations in Figure 5 are useful. Weak Points: There are some concerns regarding the evaluation of the method. a) The comparing methods from Table 1 have poorer performance than the GCNet baseline used in this work. The baseline GCNet represents a simple GNN formed by stacking GCN and gPool layers and Table 3 shows that it has better results than all the comparing methods from Table 1. Compared to the gap to other methods, the performance improvement brought by the proposed TeleGCL is relatively minor. b) The performance for GCNet baseline should be given for every dataset. Since on graph classification the only other results are from the Geom-GCN paper, it would really help to present the results of GCNet baseline on these datasets, with the same splits and settings as in the Geom-GCN paper. More details about the computational complexity of the model should be given. Due to the computation of the pairwise dot product between all the nodes, the feature-aware teleport function has complexity O ( n × n × d ) . Similarly, the structure-aware function has complexity O ( n × n × k + n × n × d ) (or | E | × d for the second term, depending on the implementation). For large graphs, the quadratic term in the number of nodes is really problematic. This should be compared to the approach of Geom-GCN. Some statements regarding Geom-GCN must be clarified: “ Geom-GCN doesn’t consider the original graph topology information when generating the additional set of nodes for aggregation” or “structural neighborhood in (Pei et al., 2020) is still built on node features without considering the graph topology”. Like the current method, Geom-GCN creates links between nodes that are similar in an embedding space. Their embeddings are given by Isomap (Tenenbaum et al., 2000), Poincare embedding (Nickel & Kiela, 2017), and struc2vec (Ribeiro et al., 2017) and it seems that they also take into account the structure of the graph. The proposed method given in Eq. 3-5 could be a good alternative to these methods, but more detailed explanations and comparisons should be given. Additional Comment: Fair evaluation of graph methods on small datasets is challenging. It would have been better to evaluate the method on bigger datasets like OGB [A]. [A] Hu, Weihua, et al. "Open graph benchmark: Datasets for machine learning on graphs." arXiv preprint arXiv:2005.00687 (2020). 
Conclusion: This paper presents a good and clear idea to add additional connections in a graph, but it has some issues with the evaluation and some comparisons with prior work. In this form I am inclined towards giving a 5: marginally below rating.
ICLR
1. What is the main contribution of the paper, and how does it address the limitations of traditional graph neural networks? 2. What are the strengths and weaknesses of the proposed Teleport Graph Convolutional Networks (TGL)? 3. How does TGL incorporate graph topology and overcome the over-smoothing problem? 4. What are the concerns regarding the novelty and clarity of the presented idea? 5. How does the reviewer assess the accuracy gains shown in the experiments, and what is the significance of these gains? 6. What are some recommended state-of-the-art baselines for comparison that the authors should have included?
Review
Review A new architecture for graph neural networks, which the authors name Teleport Graph Convolutional Networks (TGL), is proposed in this paper. A teleport graph convolution layer is proposed to address the limitations in message-passing operations of graph neural networks: 1. over-smoothing and 2. over-fitting. The architecture enables nodes to aggregate information beyond their local neighborhoods. TGL operates as follows: 1) it first aggregates neighbors as in normal graph neural networks, 2) it selects some nodes that are relevant but not directly connected using 2a) feature similarity and 2b) similarity of structure. Specifically, TGL builds upon Pei et al., 2020, which attacks the over-smoothing problem by mapping node features to latent spaces and performing aggregation based on latent space similarity. The authors motivate their work by noting that Pei et al., 2020 ignores the graph topology, but TGL can incorporate graph topology with a similar latent space approach. Experiments are conducted on graph and node classification datasets. Overall, the paper presents a useful attempt to address problems in message passing operations, and some accuracy gains are shown on the graph and node classification datasets. The trick to aggregate based on similarity of features and structure leads to some gains compared to those only based on graph topology. However, several concerns remain. I. The presented idea, though it boosts the accuracy of state-of-the-art graph neural networks such as GIN or Geom-GCN by 2%, is not quite novel. This paper is perhaps better suited for more practical venues, while ICLR is reserved for the most exciting and eye-opening research in AI. II. How does TeleGCN solve over-smoothing, which is supposed to be the main problem? This is not discussed in the paper. III. The motivation for how the structure-aware teleport is constructed is still unclear. Why is the yellow node similar to the green node in Figure 1? How is their similarity computed? The corresponding equations are missing from the paper. IV. How is the feature-aware teleport (pink node in Figure 1) different from Pei et al., 2020? V. I recommend the authors include more state-of-the-art baselines for comparison. The following are recommended. https://arxiv.org/abs/2002.05287 https://arxiv.org/abs/1905.13192 https://arxiv.org/pdf/2007.02133.pdf https://arxiv.org/abs/2006.07739 Grammar / typos: Geom-GCN doesn’t consider the -> does not those are structure-aware teleport function -> those that are These similarity scores encode its connectivity -> "its" is unclear
ICLR
Title Recasting Gradient-Based Meta-Learning as Hierarchical Bayes Abstract Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation. 1 INTRODUCTION A remarkable aspect of human intelligence is the ability to quickly solve a novel problem and to be able to do so even in the face of limited experience in a novel domain. Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning. This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources (Schmidhuber, 1987; Bengio et al., 1991; Naik & Mammone, 1992). In machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (Caruana, 1998; Thrun & Pratt, 1998). This inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters (Heskes, 1998), as a learned metric space in which to group neighbors (Bottou & Vapnik, 1992), as a trained recurrent neural network that allows encoding and retrieval of episodic information (Santoro et al., 2016), or as an optimization algorithm with learned parameters (Schmidhuber, 1987; Bengio et al., 1992). The model-agnostic meta-learning (MAML) of Finn et al. (2017) is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive bias has been evaluated only empirically in prior work (Finn et al., 2017). In this work, we present a novel derivation of and a novel extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. The reinterpretation as hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters. 
More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure. We experimentally demonstrate that this enables better performance on a few-shot learning benchmark. 2 META-LEARNING FORMULATION The goal of a meta-learner is to extract task-general knowledge through the experience of solving a number of related tasks. By using this learned prior knowledge, the learner has the potential to quickly adapt to novel tasks even in the face of limited data or limited computation time. Formally, we consider a dataset D that defines a distribution over a family of tasks T . These tasks share some common structure such that learning to solve a single task has the potential to aid in solving another. Each task T defines a distribution over data points x, which we assume in this work to consist of inputs and either regression targets or classification labels y in a supervised learning problem (although this assumption can be relaxed to include reinforcement learning problems; e.g., see Finn et al., 2017). The objective of the meta-learner is to be able to minimize a task-specific performance metric associated with any given unseen task from the dataset given even only a small amount of data from the task; i.e., to be capable of fast adaptation to a novel task. In the following subsections, we discuss two ways of formulating a solution to the meta-learning problem: gradient-based hyperparameter optimization and probabilistic inference in a hierarchical Bayesian model. These approaches were developed orthogonally, but, in Section 3.1, we draw a novel connection between the two. 2.1 META-LEARNING AS GRADIENT-BASED HYPERPARAMETER OPTIMIZATION A parametric meta-learner aims to find some shared parameters θ that make it easier to find the right task-specific parameters φ when faced with a novel task. A variety of meta-learners that employ gradient methods for task-specific fast adaptation have been proposed (e.g., Andrychowicz et al., 2016; Li & Malik, 2017a;b; Wichrowska et al., 2017). MAML (Finn et al., 2017) is distinct in that it provides a gradient-based meta-learning procedure that employs a single additional parameter (the meta-learning rate) and operates on the same parameter space for both meta-learning and fast adaptation. These are necessary features for the equivalence we show in Section 3.1. To address the meta-learning problem, MAML estimates the parameters θ of a set of models so that, when one or a few batch gradient descent steps are taken from the initialization at θ given a small sample of task data $\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N} \sim p_{\mathcal{T}_j}(\mathbf{x})$, each model has good generalization performance on another sample $\mathbf{x}_{j_{N+1}}, \ldots, \mathbf{x}_{j_{N+M}} \sim p_{\mathcal{T}_j}(\mathbf{x})$ from the same task. The MAML objective in a maximum likelihood setting is $\mathcal{L}(\theta) = \frac{1}{J} \sum_j \Big[ \frac{1}{M} \sum_m -\log p\big(\mathbf{x}_{j_{N+m}} \mid \underbrace{\theta - \alpha \nabla_\theta \tfrac{1}{N} \sum_n -\log p(\mathbf{x}_{j_n} \mid \theta)}_{\phi_j}\big) \Big]$ (1), where we use φj to denote the updated parameters after taking a single batch gradient descent step from the initialization at θ with step size α on the negative log-likelihood associated with the task Tj . Note that since φj is an iterate of a gradient descent procedure that starts from θ, each φj is of the same dimensionality as θ. We refer to the inner gradient descent procedure that computes φj as fast adaptation. The computational graph of MAML is given in Figure 1 (left).
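To make the one-step objective in (1) concrete, here is a minimal PyTorch sketch of the per-task MAML loss for a toy linear regressor; the helper nll, the toy task, and the step size are illustrative assumptions, not the paper's implementation.

```python
import torch

def nll(params, x, y):
    # Unit-variance Gaussian negative log-likelihood (up to a constant)
    # for a linear model y = x @ w + b.
    w, b = params
    return 0.5 * ((x @ w + b - y) ** 2).mean()

def maml_task_loss(theta, task_data, alpha=0.01):
    # One-step MAML objective (1) for a single task: adapt theta on the
    # support set with one gradient step (phi_j), then score the query set.
    (xs, ys), (xq, yq) = task_data
    inner = nll(theta, xs, ys)
    grads = torch.autograd.grad(inner, theta, create_graph=True)
    phi = [t - alpha * g for t, g in zip(theta, grads)]   # fast adaptation
    return nll(phi, xq, yq)                               # outer (meta) loss term

# Toy usage: theta are the shared meta-parameters of a 1-D linear regressor.
theta = [torch.zeros(1, 1, requires_grad=True), torch.zeros(1, requires_grad=True)]
xs, xq = torch.randn(10, 1), torch.randn(10, 1)
task = ((xs, 2 * xs + 1.0), (xq, 2 * xq + 1.0))
maml_task_loss(theta, task).backward()   # meta-gradient w.r.t. theta
```

The full meta-loss L(θ) in (1) averages this per-task quantity over the J sampled tasks before the outer update.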
2.2 META-LEARNING AS HIERARCHICAL BAYESIAN INFERENCE An alternative way to formulate meta-learning is as a problem of probabilistic inference in the hierarchical model depicted in Figure 1 (right). In particular, in the case of meta-learning, each task-specific parameter φj is distinct from but should influence the estimation of the parameters {φj′ | j′ ≠ j} from other tasks. We can capture this intuition by introducing a meta-level parameter θ on which each task-specific parameter is statistically dependent. With this formulation, the mutual dependence of the task-specific parameters φj is realized only through their individual dependence on the meta-level parameters θ. As such, estimating θ provides a way to constrain the estimation of each of the φj. Given some data in a multi-task setting, we may estimate θ by integrating out the task-specific parameters to form the marginal likelihood of the data. Formally, grouping all of the data from each of the tasks as X and again denoting by $\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N}$ a sample from task Tj, the marginal likelihood of the observed data is given by $p(\mathbf{X} \mid \theta) = \prod_j \left( \int p(\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N} \mid \phi_j)\, p(\phi_j \mid \theta)\, \mathrm{d}\phi_j \right)$ (2). Maximizing (2) as a function of θ gives a point estimate for θ, an instance of a method known as empirical Bayes (Bernardo & Smith, 2006; Gelman et al., 2014) due to its use of the data to estimate the parameters of the prior distribution. Hierarchical Bayesian models have a long history of use in both transfer learning and domain adaptation (e.g., Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). However, the formulation of meta-learning as hierarchical Bayes does not automatically provide an inference procedure, and furthermore, there is no guarantee that inference is tractable for expressive models with many parameters such as deep neural networks. 3 LINKING GRADIENT-BASED META-LEARNING & HIERARCHICAL BAYES In this section, we connect the two independent approaches of Section 2.1 and Section 2.2 by showing that MAML can be understood as empirical Bayes in a hierarchical probabilistic model. Furthermore, we build on this understanding by showing that a choice of update rule for the task-specific parameters φj (i.e., a choice of inner-loop optimizer) corresponds to a choice of prior over task-specific parameters, p(φj | θ). 3.1 MODEL-AGNOSTIC META-LEARNING AS EMPIRICAL BAYES In general, when performing empirical Bayes, the marginalization over task-specific parameters φj in (2) is not tractable to compute exactly. To avoid this issue, we can consider an approximation that makes use of a point estimate φ̂j instead of performing the integration over φ in (2). Using φ̂j as an estimator for each φj, we may write the negative logarithm of the marginal likelihood as $-\log p(\mathbf{X} \mid \theta) \approx \sum_j \big[ -\log p(\mathbf{x}_{j_{N+1}}, \ldots, \mathbf{x}_{j_{N+M}} \mid \hat\phi_j) \big]$ (3). Setting $\hat\phi_j = \theta + \alpha \nabla_\theta \log p(\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N} \mid \theta)$ for each j in (3) recovers the unscaled form of the one-step MAML objective in (1). This tells us that the MAML objective is equivalent to a maximization with respect to the meta-level parameters θ of the marginal likelihood p(X | θ), where a point estimate for each task-specific parameter φj is computed via one or a few steps of gradient descent. By taking only a few steps from the initialization at θ, the point estimate φ̂j trades off minimizing the fast adaptation objective $-\log p(\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N} \mid \theta)$ with staying close in value to the parameter initialization θ. The overall procedure and the point-estimate subroutine are as follows.
Algorithm MAML-HB(D):
  Initialize θ randomly
  while not converged do
    Draw J samples T1, . . . , TJ ∼ pD(T)
    Estimate E_{x∼pT1(x)}[−log p(x | θ)], . . . , E_{x∼pTJ(x)}[−log p(x | θ)] using ML-···
    Update θ ← θ − β ∇θ Σj E_{x∼pTj(x)}[−log p(x | θ)]
  end
Algorithm 2: Model-agnostic meta-learning as hierarchical Bayesian inference. The choices of the subroutine ML-··· that we consider are defined in Subroutine 3 and Subroutine 4.
Subroutine ML-POINT(θ, T):
  Draw N samples x1, . . . , xN ∼ pT(x)
  Initialize φ ← θ
  for k in 1, . . . , K do
    Update φ ← φ + α ∇φ log p(x1, . . . , xN | φ)
  end
  Draw M samples xN+1, . . . , xN+M ∼ pT(x)
  return −log p(xN+1, . . . , xN+M | φ)
Subroutine 3: Subroutine for computing a point estimate φ̂ using truncated gradient descent to approximate the marginal negative log likelihood (NLL).
We can formalize this trade-off by considering the linear regression case. Recall that the maximum a posteriori (MAP) estimate of φj corresponds to the global mode of the posterior $p(\phi_j \mid \mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N}, \theta) \propto p(\mathbf{x}_{j_1}, \ldots, \mathbf{x}_{j_N} \mid \phi_j)\, p(\phi_j \mid \theta)$. In the case of a linear model, early stopping of an iterative gradient descent procedure to estimate φj is exactly equivalent to MAP estimation of φj under the assumption of a prior that depends on the number of descent steps as well as the direction in which each step is taken. In particular, write the input examples as X and the vector of regression targets as y, omit the task index from φ, and consider the gradient descent update $\phi^{(k)} = \phi^{(k-1)} - \alpha \nabla_\phi \big[ \|\mathbf{y} - \mathbf{X}\phi\|_2^2 \big]_{\phi=\phi^{(k-1)}} = \phi^{(k-1)} - \alpha \mathbf{X}^T (\mathbf{X}\phi^{(k-1)} - \mathbf{y})$ (4) for iteration index k and learning rate α ∈ R+. Santos (1996) shows that, starting from φ(0) = θ, φ(k) in (4) solves the regularized linear least squares problem $\min \big( \|\mathbf{y} - \mathbf{X}\phi\|_2^2 + \|\theta - \phi\|_Q^2 \big)$ (5) with Q-norm defined by $\|\mathbf{z}\|_Q = \mathbf{z}^T Q^{-1} \mathbf{z}$ for a symmetric positive definite matrix Q that depends on the step size α and iteration index k as well as on the covariance structure of X. We describe the exact form of the dependence in Section 3.2. The minimization in (5) can be expressed as a posterior maximization problem given a conditional Gaussian likelihood over y and a Gaussian prior over φ. The posterior takes the form $p(\phi \mid \mathbf{X}, \mathbf{y}, \theta) \propto \mathcal{N}(\mathbf{y}\,;\, \mathbf{X}\phi, \mathbf{I})\; \mathcal{N}(\phi\,;\, \theta, Q)$ (6). Since φ(k) in (4) maximizes (6), we may conclude that k iterations of gradient descent in a linear regression model with squared error exactly computes the MAP estimate of φ, given a Gaussian-noised observation model and a Gaussian prior over φ with parameters µ0 = θ and Σ0 = Q. Therefore, in the case of linear regression with squared error, MAML is exactly empirical Bayes using the MAP estimate as the point estimate of φ. In the nonlinear case, MAML is again equivalent to an empirical Bayes procedure to maximize the marginal likelihood that uses a point estimate for φ computed by one or a few steps of gradient descent. However, this point estimate is not necessarily the global mode of a posterior. We can instead understand the point estimate given by truncated gradient descent as the value of the mode of an implicit posterior over φ resulting from an empirical loss interpreted as a negative log-likelihood, and regularization penalties and the early stopping procedure jointly acting as priors (for similar interpretations, see Sjöberg & Ljung, 1995; Bishop, 1995; Duvenaud et al., 2016).
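The early-stopping-as-MAP correspondence can be checked numerically. The following is a small numpy sketch under our own simplifying assumptions: a quadratic objective, plain (unpreconditioned) gradient descent with step size α, and a prior covariance Q built per eigendirection of the Hessian with contraction factors (1 − αλi); it is an illustrative check in the spirit of (5) and of the construction discussed in the next subsection, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 7

# Quadratic fast-adaptation objective 0.5 * (phi - phi_star)^T H (phi - phi_star).
A = rng.normal(size=(d, d))
H = A @ A.T + np.eye(d)                 # symmetric positive-definite Hessian
phi_star = rng.normal(size=d)           # task-specific optimum
theta = rng.normal(size=d)              # meta-learned initialization

lam, O = np.linalg.eigh(H)              # H = O diag(lam) O^T
alpha = 0.5 / lam.max()                 # stable step size

# k steps of truncated gradient descent from theta (early stopping).
phi = theta.copy()
for _ in range(k):
    phi = phi - alpha * H @ (phi - phi_star)

# Implied Gaussian prior N(phi; theta, Q): eigenvalues of Q per eigendirection.
q = ((1.0 - alpha * lam) ** (-k) - 1.0) / lam
Q_inv = O @ np.diag(1.0 / q) @ O.T

# MAP estimate of the regularized quadratic: likelihood precision H, prior precision Q^{-1}.
phi_map = np.linalg.solve(H + Q_inv, H @ phi_star + Q_inv @ theta)

print(np.max(np.abs(phi - phi_map)))    # ~1e-15: early stopping equals the MAP estimate
```

Up to numerical precision, the k-step iterate and the closed-form MAP estimate coincide, which is the sense in which truncated gradient descent imposes a Gaussian prior centered at θ.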
The exact equivalence between early stopping and a Gaussian prior on the weights in the linear case, as well as the implicit regularization to the parameter initialization in the nonlinear case, tells us that every iterate of truncated gradient descent is a mode of an implicit posterior. In particular, we are not required to take the gradient descent procedure of fast adaptation that computes φ̂ to convergence in order to establish a connection between MAML and hierarchical Bayes. MAML can therefore be understood to approximate an expectation of the marginal negative log likelihood (NLL) for each task Tj as $\mathbb{E}_{\mathbf{x} \sim p_{\mathcal{T}_j}(\mathbf{x})}[-\log p(\mathbf{x} \mid \theta)] \approx \frac{1}{M} \sum_m -\log p(\mathbf{x}_{j_{N+m}} \mid \hat\phi_j)$, using the point estimate $\hat\phi_j = \theta + \alpha \nabla_\theta \log p(\mathbf{x}_{j_n} \mid \theta)$ for single-step fast adaptation. The algorithm for MAML as probabilistic inference is given in Algorithm 2; Subroutine 3 computes each marginal NLL using the point estimate of φ̂ as just described. Formulating MAML in this way, as probabilistic inference in a hierarchical Bayesian model, motivates the interpretation in Section 3.2 of using various meta-optimization algorithms to induce a prior over task-specific parameters. 3.2 THE PRIOR OVER TASK-SPECIFIC PARAMETERS From Section 3.1, we may conclude that early stopping during fast adaptation is equivalent to a specific choice of a prior over task-specific parameters, p(φj | θ). We can better understand the role of early stopping in defining the task-specific parameter prior in the case of a quadratic objective. Omit the task index from φ and x, and consider a second-order approximation of the fast adaptation objective $\ell(\phi) = -\log p(\mathbf{x}_1, \ldots, \mathbf{x}_N \mid \phi)$ about a minimum φ*: $\ell(\phi) \approx \tilde\ell(\phi) := \tfrac{1}{2}\|\phi - \phi^*\|^2_{\mathbf{H}^{-1}} + \ell(\phi^*)$ (7), where the Hessian $\mathbf{H} = \nabla^2_\phi \ell(\phi^*)$ is assumed to be positive definite so that $\tilde\ell$ is bounded below. Furthermore, consider using a curvature matrix B to precondition the gradient in gradient descent, giving the update $\phi^{(k)} = \phi^{(k-1)} - \mathbf{B} \nabla_\phi \tilde\ell(\phi^{(k-1)})$ (8). If B is diagonal, we can identify (8) as a Newton method with a diagonal approximation to the inverse Hessian; using the inverse Hessian evaluated at the point φ(k−1) recovers Newton’s method itself. On the other hand, meta-learning the matrix B via gradient descent provides a method to incorporate task-general information into the covariance of the fast adaptation prior, p(φ | θ). For instance, the meta-learned matrix B may encode correlations between parameters that dictate how such parameters are updated relative to each other. Formally, taking k steps of gradient descent from φ(0) = θ using the update rule in (8) gives a φ(k) that solves $\min \big( \|\phi - \phi^*\|^2_{\mathbf{H}^{-1}} + \|\phi^{(0)} - \phi\|^2_Q \big)$ (9). The minimization in (9) corresponds to taking a Gaussian prior p(φ | θ) with mean θ and covariance Q for $Q = \mathbf{O}\Lambda^{-1}\big((\mathbf{I} - \mathbf{B}\Lambda)^{-k} - \mathbf{I}\big)\mathbf{O}^T$ (Santos, 1996), where B is a diagonal matrix that results from a simultaneous diagonalization of H and B as $\mathbf{O}^T\mathbf{H}\mathbf{O} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) = \Lambda$ and $\mathbf{O}^T\mathbf{B}^{-1}\mathbf{O} = \mathrm{diag}(b_1, \ldots, b_n) = \mathbf{B}$ with $b_i, \lambda_i \geq 0$ for i = 1, . . . , n (Theorem 8.7.1 in Golub & Van Loan, 1983). If the true objective is indeed quadratic, then, assuming the data is centered, H is the unscaled covariance matrix of features, $\mathbf{X}^T\mathbf{X}$. 4 IMPROVING MODEL-AGNOSTIC META-LEARNING Identifying MAML as a method for probabilistic inference in a hierarchical model allows us to develop novel improvements to the algorithm.
In Section 4.1, we consider an approach from Bayesian parameter estimation to improve the MAML algorithm, and in Section 4.2, we discuss how to make this procedure computationally tractable for high-dimensional models. 4.1 LAPLACE’S METHOD OF INTEGRATION We have shown that the MAML algorithm is an empirical Bayes procedure that employs a point estimate for the mid-level, task-specific parameters in a hierarchical Bayesian model. However, the use of this point estimate may lead to an inaccurate point approximation of the integral in (2) if the posterior over the task-specific parameters, $p(\phi_j \mid \mathbf{x}_{j_{N+1}}, \ldots, \mathbf{x}_{j_{N+M}}, \theta)$, is not sharply peaked at the value of the point estimate. The Laplace approximation (Laplace, 1986; MacKay, 1992b;a) is applicable in this case as it replaces a point estimate of an integral with the volume of a Gaussian centered at a mode of the integrand, thereby forming a local quadratic approximation. We can make use of this approximation to incorporate uncertainty about the task-specific parameters into the MAML algorithm at fast adaptation time. In particular, suppose that each integrand in (2) has a mode φ∗j at which it is locally well-approximated by a quadratic function. The Laplace approximation uses a second-order Taylor expansion of the negative log posterior in order to approximate each integral in the product in (2) as $\int p(\mathbf{X}_j \mid \phi_j)\, p(\phi_j \mid \theta)\, \mathrm{d}\phi_j \approx p(\mathbf{X}_j \mid \phi^*_j)\, p(\phi^*_j \mid \theta)\, \det(\mathbf{H}_j / 2\pi)^{-\frac{1}{2}}$ (10), where Hj is the Hessian matrix of second derivatives of the negative log posterior. Classically, the Laplace approximation uses the MAP estimate for φ∗j, although any mode can be used as an expansion site provided the integrand is well enough approximated there by a quadratic. We use the point estimate φ̂j uncovered by fast adaptation, in which case the MAML objective in (1) becomes an appropriately scaled version of the approximate marginal likelihood $-\log p(\mathbf{X} \mid \theta) \approx \sum_j \big[ -\log p(\mathbf{X}_j \mid \hat\phi_j) - \log p(\hat\phi_j \mid \theta) + \tfrac{1}{2}\log\det(\mathbf{H}_j) \big]$ (11). The term log p(φ̂j | θ) results from the implicit regularization imposed by early stopping during fast adaptation, as discussed in Section 3.1. The term ½ log det(Hj), on the other hand, results from the Laplace approximation and can be interpreted as a form of regularization that penalizes model complexity. 4.2 USING CURVATURE INFORMATION TO IMPROVE MAML Using (11) as a training criterion for a neural network model is difficult due to the required computation of the determinant of the Hessian of the log posterior Hj, which itself decomposes into a sum of the Hessian of the log likelihood and the Hessian of the log prior as $\mathbf{H}_j = \nabla^2_{\phi_j}\big[-\log p(\mathbf{X}_j \mid \phi_j)\big] + \nabla^2_{\phi_j}\big[-\log p(\phi_j \mid \theta)\big]$. In our case of early stopping as regularization, the prior over task-specific parameters p(φj | θ) is implicit and thus no closed form is available for a general model. Although we may use the quadratic approximation derived in Section 3.2 to obtain an approximate Gaussian prior, this prior is not diagonal and does not, to our knowledge, have a convenient factorization. Therefore, in our experiments, we instead use a simple approximation in which the prior is approximated as a diagonal Gaussian with precision τ. We keep τ fixed, although this parameter may be cross-validated for improved performance. Similarly, the Hessian of the log likelihood is intractable to form exactly for all but the smallest models, and furthermore, is not guaranteed to be positive definite at all points, possibly rendering the Laplace approximation undefined. To combat this, we instead seek a curvature matrix Ĥ that approximates the quadratic curvature of a neural network objective function. Since it is well-known that the curvature associated with neural network objective functions is highly non-diagonal (e.g., Martens, 2016), a further requirement is that the matrix have off-diagonal terms. Due to the difficulties listed above, we turn to second order gradient descent methods, which precondition the gradient with an inverse curvature matrix at each iteration of descent. The Fisher information matrix (Fisher, 1925) has been extensively used as an approximation of curvature, giving rise to a method known as natural gradient descent (Amari, 1998). A neural network with an appropriate choice of loss function is a probabilistic model and therefore defines a Fisher information matrix. Furthermore, the Fisher information matrix can be seen to define a convex quadratic approximation to the objective function of a probabilistic neural model (Pascanu & Bengio, 2014; Martens, 2014). Importantly for our use case, the Fisher information matrix is positive definite by definition as well as non-diagonal. However, the Fisher information matrix is still expensive to work with. Martens & Grosse (2015) developed Kronecker-factored approximate curvature (K-FAC), a scheme for approximating the curvature of the objective function of a neural network with a block-diagonal approximation to the Fisher information matrix. Each block corresponds to a unique layer in the network, and each block is further approximated as a Kronecker product (see Van Loan, 2000) of two much smaller matrices by assuming that the second-order statistics of the input activation and the back-propagated derivatives within a layer are independent. These two approximations ensure that the inverse of the Fisher information matrix can be computed efficiently for the natural gradient. For the Laplace approximation, we are interested in the determinant of a curvature matrix instead of its inverse. However, we may also make use of the approximations to the Fisher information matrix from K-FAC as well as properties of the Kronecker product. In particular, we use the fact that the determinant of a Kronecker product is the product of the exponentiated determinants of each of the factors, and that the determinant of a block diagonal matrix is the product of the determinants of the blocks (Van Loan, 2000). The determinants for each factor can be computed as efficiently as the inverses required by K-FAC, in $O(d^3)$ time for a d-dimensional Kronecker factor. We make use of the Laplace approximation and K-FAC to replace Subroutine 3, which computes the task-specific marginal NLLs using a point estimate for φ̂. We call this method the Lightweight Laplace Approximation for Meta-Adaptation (LLAMA), and give a replacement subroutine in Subroutine 4.
Subroutine ML-LAPLACE(θ, T):
  Draw N samples x1, . . . , xN ∼ pT(x)
  Initialize φ ← θ
  for k in 1, . . . , K do
    Update φ ← φ + α ∇φ log p(x1, . . . , xN | φ)
  end
  Draw M samples xN+1, . . . , xN+M ∼ pT(x)
  Estimate quadratic curvature Ĥ
  return −log p(xN+1, . . . , xN+M | φ) + η log det(Ĥ)
Subroutine 4: Subroutine for computing a Laplace approximation of the marginal likelihood.
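The two determinant identities used above can be exercised with a short numpy sketch; the function name and the toy factors are our own illustrative choices, not the K-FAC implementation used in the paper.

```python
import numpy as np

def kfac_logdet(kron_factors):
    """Log-determinant of a block-diagonal, Kronecker-factored curvature
    approximation: one (A, B) pair per layer, with the layer block ~ A kron B."""
    total = 0.0
    for A, B in kron_factors:
        n, m = A.shape[0], B.shape[0]
        # det(A kron B) = det(A)**m * det(B)**n, and the log det of a
        # block-diagonal matrix is the sum of the blocks' log dets.
        total += m * np.linalg.slogdet(A)[1] + n * np.linalg.slogdet(B)[1]
    return total

# Toy check against the dense Kronecker product for a single "layer".
rng = np.random.default_rng(1)
A = np.cov(rng.normal(size=(3, 50))) + np.eye(3)
B = np.cov(rng.normal(size=(2, 50))) + np.eye(2)
assert np.isclose(kfac_logdet([(A, B)]), np.linalg.slogdet(np.kron(A, B))[1])
```

Because each Kronecker factor is small, the log-determinant of the full block-diagonal approximation can be accumulated cheaply layer by layer, which is what makes the log det(Ĥ) term in Subroutine 4 affordable.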
5 EXPERIMENTAL EVALUATION The goal of our experiments is to evaluate whether we can use our probabilistic interpretation of MAML to generate samples from the distribution over adapted parameters, and furthermore, whether our method can be applied to large-scale meta-learning problems such as miniImageNet. 5.1 WARMUP: TOY NONLINEAR MODEL The connection between MAML and hierarchical Bayes suggests that we should expect MAML to behave like an algorithm that learns the mean of a Gaussian prior on model parameters, and uses the mean of this prior as an initialization during fast adaptation. Using the Laplace approximation to the integration over task-specific parameters as in (10) assumes a task-specific parameter posterior with mean at the adapted parameters φ̂ and covariance equal to the inverse Hessian of the log posterior evaluated at the adapted parameter value. Instead of simply using this density in the Laplace approximation as an additional regularization term as in (11), we may sample parameters φj from this density and use each set of sampled parameters to form a set of predictions for a given task. To illustrate this relationship between MAML and hierarchical Bayes, we present in Figure 5 a meta-dataset of sinusoid tasks in which each task involves regressing to the output of a sinusoid wave. Variation between tasks is obtained by sampling the amplitude uniformly from [0.1, 5.0] and the phase from [0, π]. During training and for each task, 10 input datapoints are sampled uniformly from [−10.0, 10.0] and the loss is the mean squared error between the prediction and the true value. We observe in Figure 5 that our method allows us to directly sample models from the task-specific parameter distribution after being presented with 10 datapoints from a new, previously unseen sinusoid curve. In particular, the column on the right of Figure 5 demonstrates that the sampled models display an appropriate level of uncertainty when the datapoints are ambiguous (as in the bottom right). 5.2 LARGE-SCALE EXPERIMENT: miniIMAGENET We evaluate LLAMA on the miniImageNet (Ravi & Larochelle, 2017) 1-shot, 5-way classification task, a standard benchmark in few-shot classification. miniImageNet comprises 64 training classes, 12 validation classes, and 24 test classes. Following the setup of Vinyals et al. (2016), we structure the N-shot, J-way classification task as follows: the model observes N instances of J unseen classes, and is evaluated on its ability to classify M new instances within the J classes. We use a neural network architecture standard to few-shot classification (e.g., Vinyals et al., 2016; Ravi & Larochelle, 2017), consisting of 4 layers with 3×3 convolutions and 64 filters, followed by batch normalization (BN) (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2×2 max-pooling. For the scaling variable β and centering variable γ of BN (see Ioffe & Szegedy, 2015), we ignore the fast adaptation update as well as the Fisher factors for K-FAC. We use Adam (Kingma & Ba, 2014) as the meta-optimizer, and standard batch gradient descent with a fixed learning rate to update the model during fast adaptation. LLAMA requires the prior precision term τ as well as an additional parameter η ∈ R+ that weights the regularization term log det Ĥ contributed by the Laplace approximation. We fix τ = 0.001 and select η = $10^{-6}$ via cross-validation; all other parameters are set to the values reported in Finn et al. (2017). We find that LLAMA is practical enough to be applied to this larger-scale problem.
In particular, our TensorFlow implementation of LLAMA trains for 60,000 iterations on one TITAN Xp GPU in 9 hours, compared to 5 hours to train MAML. As shown in Table 1, LLAMA achieves comparable performance to the state-of-the-art meta-learning method by Triantafillou et al. (2017). (Footnote 1: Improved performance on miniImageNet has been reported by several works (Mishra et al., 2017; Munkhdalai & Yu, 2017; Sung et al., 2017) by making use of a model architecture with significantly more parameters than the methods in Table 1. Since we do not explore variations in neural network architecture in this work, we omit such results from the table.) While the gap between MAML and LLAMA is small, the improvement from the Laplace approximation suggests that a more accurate approximation to the marginalization over task-specific parameters will lead to further improvements. 6 RELATED WORK Meta-learning and few-shot learning have a long history in hierarchical Bayesian modeling (e.g., Tenenbaum, 1999; Fei-Fei et al., 2003; Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). A related subfield is that of transfer learning, which has used hierarchical Bayes extensively (e.g., Raina et al., 2006). While some prior works on hierarchical Bayesian models have proposed to handle basic image recognition tasks, the complexity of these tasks does not yet approach the kinds of complex image recognition problems that can be solved by discriminatively trained deep networks, such as the miniImageNet experiment in our evaluation (Mansinghka et al., 2013). Recently, the Omniglot benchmark (Lake et al., 2016) has rekindled interest in the problem of learning from few examples. Modern methods accomplish few-shot learning either through the design of network architectures that ingest the few-shot training samples directly (e.g., Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Hariharan & Girshick, 2017; Triantafillou et al., 2017), or by formulating the problem as one of learning to learn, or meta-learning (e.g., Schmidhuber, 1987; Bengio et al., 1991; Schmidhuber, 1992; Bengio et al., 1992). A variety of inference methods have been used in Bayesian models, including exact inference (Lake et al., 2011), sampling methods (Salakhutdinov et al., 2013), and variational methods (Edwards & Storkey, 2017). Our work bridges the gap between gradient-based meta-learning methods and hierarchical Bayesian modeling. Our contribution is not to formulate the meta-learning problem as a hierarchical Bayesian model, but instead to formulate a gradient-based meta-learner as hierarchical Bayesian inference, thus providing a way to efficiently perform posterior inference in a model-agnostic manner. 7 CONCLUSION We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model. By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery. As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters. This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate.
We show how to estimate the quantity required by the Laplace approximation using Kronecker-factored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique. Our contribution illuminates the road to exploring further connections between gradient-based meta-learning methods and hierarchical Bayesian modeling. For instance, in this work we assume that the predictive distribution over new data-points is narrow and well-approximated by a point estimate. We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task. Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integrand is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode. This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well (Sorenson & Alspach, 1971; Alspach & Sorenson, 1972). The exploration of additional improvements such as this is an exciting line of future work.
1. What are the strengths and weaknesses of the proposed method compared to other meta-learning approaches such as MAML? 2. How does the Laplace approximation used in the proposed method impact its convergence and accuracy, especially when compared to more advanced probabilistic modeling techniques? 3. Can the authors provide more guidance on how to set the learning parameters in their method, and what considerations should be taken into account when choosing these values? 4. How does the proposed method compare to traditional Bayesian inference methods in terms of computational complexity and scalability, especially when applied to large-scale few-shot learning problems? 5. Could the authors elaborate further on the connection between their proposed method and MAP estimation in linear hierarchical Bayes models with explicit priors, and how this relationship informs the design of their algorithm? 6. In the mini ImageNet experiment, did the proposed method outperform MAML in terms of few-shot learning performance, and if not, what might explain the lack of improvement? 7. How does the proposed method handle non-linear relationships between input and output variables, and how might it be extended to accommodate complex structural relationships between features?
Review
Review Summary The paper presents an interesting view on the recently proposed MAML formulation of meta-learning (Finn et al). The main contribution is a) insight into the connection between the MAML procedure and MAP estimation in an equivalent linear hierarchical Bayes model with explicit priors, b) insight into the connection between MAML and MAP estimation in non-linear HB models with implicit priors, c) based on these insights, the paper proposes a variant of MAML using a Laplace approximation (with additional approximations for the covariance matrix). The paper finally provides an evaluation on the miniImageNet problem without significantly improving on the MAML results on the same task. Pro: - The topic is timely and of relevance to the ICLR community, continuing a current trend in building meta-learning systems for few-shot learning. - Provides valuable insight into the MAML objective and its relation to probabilistic models Con: - The paper is generally well-written but I find (as a non-expert in meta-learning) that certain fundamental aspects could have been explained better or in more detail (see below for details). - The toy example is quite difficult to interpret the first time around and does not provide any empirical insight into the convergence of the proposed method (compared to e.g. MAML) - I do not think the empirical results provide enough evidence that it is a useful/robust method. Especially, it does not provide insight into which types of problems (small/large, linear/non-linear) the method is applicable to. Detailed comments/questions: - The use of the Laplace approximation is (in the paper) motivated from a probabilistic/Bayes and uncertainty point of view. It would, however, seem that the truncated iterations do not result in the approximation being very accurate during optimization, as the truncation does not result in the approximation being created at a mode. Could the authors perhaps comment on: a) whether it is even meaningful to talk about the approximations as probabilistic distributions during the optimization (given the psd approximation to the Hessian), or does it only make sense after convergence? b) the consequence of the approximation errors on the general convergence of the proposed method (consistency and rate) - Sec 4.1, p5: Last equation: Perhaps useful to explain the term $log(\phi_j^* | \theta)$ and why it is not in Subroutine 4. Should $\phi^*$ be $\hat \phi$? - Sec 4.2: “A straightforward…”: I think it would improve readability to refer back to the previous equation (i.e. H) such that it is clear what is meant by “straightforward”. - Sec 4.2: Several ideas are being discussed in Sec 4.2 and it is not entirely clear to me what has actually been adopted here; perhaps consider formalizing the actual computations in Subroutine 4 – and provide a clearer argument (preferably a proof) that this leads to a consistent and robust estimator of \theta. - It is not clear from the text or experiment how the learning parameters are set. - Sec 5.1: It took some effort to understand exactly what was going on in the example and particularly figure 5.1; e.g., in the model definition in the body text there is no mention of the NN mentioned/used in figure 5, the blue points are not defined in the caption, and the terminology, e.g. “pre-update density”, is new at this point. I think it would benefit the readability to provide the reader with a bit more guidance.
- Sec 5.1: While the qualitative example is useful (with a bit more text), I believe it would have been more convincing with a quantitative example to demonstrate e.g. the convergence of the proposal compared to std MAML, and possibly to compare to a std Bayesian inference method from the HB formulation of the problem (in the linear case) - Sec 5.2: The abstract claims increased performance over MAML, but the empirical results do not seem to be significantly better than MAML? I find it quite difficult to support the specific claim in the abstract from the results without adding a comment about the significance. - Sec 5.2: The authors have left out “Mishra et al” from the comparison due to the model being significantly larger than others. Could the authors provide insight into why they did not use the ResNet structure from the tcml paper in their LLAMA scheme? - Sec 6+7: The paper clearly states that it is not the aim to (generally) formulate MAML as an HB model. Given the advancement in gradient-based inference for HB in the last couple of years (e.g. variational, nested Laplace, expectation propagation, etc.) for explicit models, could the authors perhaps indicate why they believe their approach of looking directly at the MAML objective is more scalable/useful than trying to formulate the same or similar objective in an explicit HB model and using established inference methods from that area? Minor: - Sec 4.1 “…each integral in the sum in (2)…” eq 2 is a product
ICLR
Title Recasting Gradient-Based Meta-Learning as Hierarchical Bayes Abstract Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation. 1 INTRODUCTION A remarkable aspect of human intelligence is the ability to quickly solve a novel problem and to be able to do so even in the face of limited experience in a novel domain. Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning. This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources (Schmidhuber, 1987; Bengio et al., 1991; Naik & Mammone, 1992). In machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (Caruana, 1998; Thrun & Pratt, 1998). This inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters (Heskes, 1998), as a learned metric space in which to group neighbors (Bottou & Vapnik, 1992), as a trained recurrent neural network that allows encoding and retrieval of episodic information (Santoro et al., 2016), or as an optimization algorithm with learned parameters (Schmidhuber, 1987; Bengio et al., 1992). The model-agnostic meta-learning (MAML) of Finn et al. (2017) is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive bias has been evaluated only empirically in prior work (Finn et al., 2017). In this work, we present a novel derivation of and a novel extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. The reinterpretation as hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters. 
More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure. We experimentally demonstrate that this enables better performance on a few-shot learning benchmark. 2 META-LEARNING FORMULATION The goal of a meta-learner is to extract task-general knowledge through the experience of solving a number of related tasks. By using this learned prior knowledge, the learner has the potential to quickly adapt to novel tasks even in the face of limited data or limited computation time. Formally, we consider a dataset D that defines a distribution over a family of tasks T . These tasks share some common structure such that learning to solve a single task has the potential to aid in solving another. Each task T defines a distribution over data points x, which we assume in this work to consist of inputs and either regression targets or classification labels y in a supervised learning problem (although this assumption can be relaxed to include reinforcement learning problems; e.g., see Finn et al., 2017). The objective of the meta-learner is to be able to minimize a task-specific performance metric associated with any given unseen task from the dataset given even only a small amount of data from the task; i.e., to be capable of fast adaptation to a novel task. In the following subsections, we discuss two ways of formulating a solution to the meta-learning problem: gradient-based hyperparameter optimization and probabilistic inference in a hierarchical Bayesian model. These approaches were developed orthogonally, but, in Section 3.1, we draw a novel connection between the two. 2.1 META-LEARNING AS GRADIENT-BASED HYPERPARAMETER OPTIMIZATION A parametric meta-learner aims to find some shared parameters θ that make it easier to find the right task-specific parameters φ when faced with a novel task. A variety of meta-learners that employ gradient methods for task-specific fast adaptation have been proposed (e.g., Andrychowicz et al., 2016; Li & Malik, 2017a;b; Wichrowska et al., 2017). MAML (Finn et al., 2017) is distinct in that it provides a gradient-based meta-learning procedure that employs a single additional parameter (the meta-learning rate) and operates on the same parameter space for both meta-learning and fast adaptation. These are necessary features for the equivalence we show in Section 3.1. To address the meta-learning problem, MAML estimates the parameters θ of a set of models so that when one or a few batch gradient descent steps are taken from the initialization at θ given a small sample of task data xj1 , . . . ,xjN ∼ pTj (x) each model has good generalization performance on another sample xjN+1 , . . . ,xjN+M ∼ pTj (x) from the same task. The MAML objective in a maximum likelihood setting is L(θ) = 1 J ∑ j [ 1 M ∑ m − log p ( xjN+m | θ − α∇θ 1 N ∑ n − log p ( xjn | θ ) ︸ ︷︷ ︸ φj )] (1) where we use φj to denote the updated parameters after taking a single batch gradient descent step from the initialization at θ with step size α on the negative log-likelihood associated with the task Tj . Note that since φj is an iterate of a gradient descent procedure that starts from θ, each φj is of the same dimensionality as θ. We refer to the inner gradient descent procedure that computes φj as fast adaptation. The computational graph of MAML is given in Figure 1 (left). 
2.2 META-LEARNING AS HIERARCHICAL BAYESIAN INFERENCE An alternative way to formulate meta-learning is as a problem of probabilistic inference in the hierarchical model depicted in Figure 1 (right). In particular, in the case of meta-learning, each task-specific parameter φj is distinct from but should influence the estimation of the parameters {φj′ | j ′ 6= j} from other tasks. We can capture this intuition by introducing a meta-level parameter θ on which each task-specific parameter is statistically dependent. With this formulation, the mutual dependence of the task-specific parameters φj is realized only through their individual dependence on the meta-level parameters θ. As such, estimating θ provides a way to constrain the estimation of each of the φj . Given some data in a multi-task setting, we may estimate θ by integrating out the task-specific parameters to form the marginal likelihood of the data. Formally, grouping all of the data from each of the tasks as X and again denoting by xj1 , . . . ,xjN a sample from task Tj , the marginal likelihood of the observed data is given by p (X | θ ) = ∏ j (∫ p ( xj1 , . . . ,xjN | φj ) p ( φj | θ ) dφj ) . (2) Maximizing (2) as a function of θ gives a point estimate for θ, an instance of a method known as empirical Bayes (Bernardo & Smith, 2006; Gelman et al., 2014) due to its use of the data to estimate the parameters of the prior distribution. Hierarchical Bayesian models have a long history of use in both transfer learning and domain adaptation (e.g., Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). However, the formulation of meta-learning as hierarchical Bayes does not automatically provide an inference procedure, and furthermore, there is no guarantee that inference is tractable for expressive models with many parameters such as deep neural networks. 3 LINKING GRADIENT-BASED META-LEARNING & HIERARCHICAL BAYES In this section, we connect the two independent approaches of Section 2.1 and Section 2.2 by showing that MAML can be understood as empirical Bayes in a hierarchical probabilistic model. Furthermore, we build on this understanding by showing that a choice of update rule for the taskspecific parameters φj (i.e., a choice of inner-loop optimizer) corresponds to a choice of prior over task-specific parameters, p(φj | θ ). 3.1 MODEL-AGNOSTIC META-LEARNING AS EMPIRICAL BAYES In general, when performing empirical Bayes, the marginalization over task-specific parameters φj in (2) is not tractable to compute exactly. To avoid this issue, we can consider an approximation that makes use of a point estimate φ̂j instead of performing the integration over φ in (2). Using φ̂j as an estimator for each φj , we may write the negative logarithm of the marginal likelihood as − log p (X | θ ) ≈ ∑ j [ − log p ( xjN+1 , . . .xjN+M | φ̂j )] . (3) Setting φ̂j = θ + α∇θ log p(xj1 , . . . ,xjN | θ ) for each j in (3) recovers the unscaled form of the one-step MAML objective in (1). This tells us that the MAML objective is equivalent to a maximization with respect to the meta-level parameters θ of the marginal likelihood p(X | θ ), where a point estimate for each task-specific parameter φj is computed via one or a few steps of gradient descent. By taking only a few steps from the initialization at θ, the point estimate φ̂j trades off Algorithm MAML-HB(D) Initialize θ randomly while not converged do Draw J samples T1, . . . , TJ ∼ pD(T ) Estimate Ex∼pT1 (x)[− log p(x | θ )], . . . 
,Ex∼pTJ (x)[− log p(x | θ )] using ML-· · · Update θ ← θ − β ∇θ ∑ j Ex∼pTj (x)[− log p(x | θ )] end Algorithm 2: Model-agnostic meta-learning as hierarchical Bayesian inference. The choices of the subroutine ML-· · · that we consider are defined in Subroutine 3 and Subroutine 4. Subroutine ML-POINT(θ, T ) Draw N samples x1, . . . ,xN ∼ pT (x) Initialize φ← θ for k in 1, . . . ,K do Update φ← φ+ α∇φ log p(x1, . . . ,xN | φ ) end Draw M samples xN+1, . . . ,xN+M ∼ pT (x) return − log p(xN+1, . . . ,xN+M | φ ) Subroutine 3: Subroutine for computing a point estimate φ̂ using truncated gradient descent to approximate the marginal negative log likelihood (NLL). minimizing the fast adaptation objective − log p(xj1 , . . . ,xjN | θ ) with staying close in value to the parameter initialization θ. We can formalize this trade-off by considering the linear regression case. Recall that the maximum a posteriori (MAP) estimate of φj corresponds to the global mode of the posterior p(φj | xj1 , . . .xjN ,θ ) ∝ p(xj1 , . . .xjN | φj )p(φj | θ ). In the case of a linear model, early stopping of an iterative gradient descent procedure to estimate φj is exactly equivalent to MAP estimation of φj under the assumption of a prior that depends on the number of descent steps as well as the direction in which each step is taken. In particular, write the input examples as X and the vector of regression targets as y, omit the task index from φ, and consider the gradient descent update φ(k) = φ(k−1) − α∇φ [ ‖y −Xφ‖22 ] φ=φ(k−1) = φ(k−1) − αX T (Xφ(k−1) − y) (4) for iteration index k and learning rate α ∈ R+. Santos (1996) shows that, starting from φ(0) = θ, φ(k) in (4) solves the regularized linear least squares problem min ( ‖y −Xφ‖22 + ‖θ − φ‖ 2 Q ) (5) with Q-norm defined by ‖z‖Q = z TQ−1z for a symmetric positive definite matrix Q that depends on the step size α and iteration index k as well as on the covariance structure of X. We describe the exact form of the dependence in Section 3.2. The minimization in (5) can be expressed as a posterior maximization problem given a conditional Gaussian likelihood over y and a Gaussian prior over φ. The posterior takes the form p (φ | X,y,θ ) ∝ N (y ; Xφ, I) N (φ ; θ,Q) . (6) Since φ(k) in (4) maximizes (6), we may conclude that k iterations of gradient descent in a linear regression model with squared error exactly computes the MAP estimate of φ, given a Gaussian-noised observation model and a Gaussian prior over φ with parameters µ0 = θ and Σ0 = Q. Therefore, in the case of linear regression with squared error, MAML is exactly empirical Bayes using the MAP estimate as the point estimate of φ. In the nonlinear case, MAML is again equivalent to an empirical Bayes procedure to maximize the marginal likelihood that uses a point estimate for φ computed by one or a few steps of gradient descent. However, this point estimate is not necessarily the global mode of a posterior. We can instead understand the point estimate given by truncated gradient descent as the value of the mode of an implicit posterior over φ resulting from an empirical loss interpreted as a negative log-likelihood, and regularization penalties and the early stopping procedure jointly acting as priors (for similar interpretations, see Sjöberg & Ljung, 1995; Bishop, 1995; Duvenaud et al., 2016). 
The exact equivalence between early stopping and a Gaussian prior on the weights in the linear case, as well as the implicit regularization to the parameter initialization the nonlinear case, tells us that every iterate of truncated gradient descent is a mode of an implicit posterior. In particular, we are not required to take the gradient descent procedure of fast adaptation that computes φ̂ to convergence in order to establish a connection between MAML and hierarchical Bayes. MAML can therefore be understood to approximate an expectation of the marginal negative log likelihood (NLL) for each task Tj as Ex∼pTj (x) [− log p (x | θ )] ≈ 1 M ∑ m − log p ( xjN+m | φ̂j ) using the point estimate φ̂j = θ + α∇θ log p(xjn | θ ) for single-step fast adaptation. The algorithm for MAML as probabilistic inference is given in Algorithm 2; Subroutine 3 computes each marginal NLL using the point estimate of φ̂ as just described. Formulating MAML in this way, as probabilistic inference in a hierarchical Bayesian model, motivates the interpretation in Section 3.2 of using various meta-optimization algorithms to induce a prior over task-specific parameters. 3.2 THE PRIOR OVER TASK-SPECIFIC PARAMETERS From Section 3.1, we may conclude that early stopping during fast adaptation is equivalent to a specific choice of a prior over task-specific parameters, p(φj | θ ). We can better understand the role of early stopping in defining the task-specific parameter prior in the case of a quadratic objective. Omit the task index from φ and x, and consider a second-order approximation of the fast adaptation objective `(φ) = − log p(x1 . . . ,xN | φ ) about a minimum φ ∗: `(φ) ≈ ˜̀(φ) := 12‖φ− φ ∗‖2 H −1 + `(φ∗) (7) where the Hessian H = ∇2φ `(φ ∗) is assumed to be positive definite so that ˜̀ is bounded below. Furthermore, consider using a curvature matrix B to precondition the gradient in gradient descent, giving the update φ(k) = φ(k−1) − B∇φ ˜̀(φ(k−1)) . (8) If B is diagonal, we can identify (8) as a Newton method with a diagonal approximation to the inverse Hessian; using the inverse Hessian evaluated at the point φ(k−1) recovers Newton’s method itself. On the other hand, meta-learning the matrix B matrix via gradient descent provides a method to incorporate task-general information into the covariance of the fast adaptation prior, p(φ | θ ). For instance, the meta-learned matrix B may encode correlations between parameters that dictates how such parameters are updated relative to each other. Formally, taking k steps of gradient descent from φ(0) = θ using the update rule in (8) gives a φ(k) that solves min ( ‖φ− φ∗‖2 H −1 + ‖φ(0) − φ‖ 2 Q ) . (9) The minimization in (9) corresponds to taking a Gaussian prior p(φ | θ ) with mean θ and covariance Q for Q = OΛ−1((I−BΛ)−k − I)OT (Santos, 1996) where B is a diagonal matrix that results from a simultaneous diagonalization of H and B as OTHO = diag(λ1, . . . , λn) = Λ and OTB−1O = diag(b1, . . . , bn) = B with bi, λi ≥ 0 for i = 1, . . . , n (Theorem 8.7.1 in Golub & Van Loan, 1983). If the true objective is indeed quadratic, then, assuming the data is centered, H is the unscaled covariance matrix of features, XTX. 4 IMPROVING MODEL-AGNOSTIC META-LEARNING Identifying MAML as a method for probabilistic inference in a hierarchical model allows us to develop novel improvements to the algorithm. 
In Section 4.1, we consider an approach from Bayesian parameter estimation to improve the MAML algorithm, and in Section 4.2, we discuss how to make this procedure computationally tractable for high-dimensional models. 4.1 LAPLACE’S METHOD OF INTEGRATION We have shown that the MAML algorithm is an empirical Bayes procedure that employs a point estimate for the mid-level, task-specific parameters in a hierarchical Bayesian model. However, the use of this point estimate may lead to an inaccurate point approximation of the integral in (2) if the posterior over the task-specific parameters, p(φj | xjN+1 , . . . ,xjN+M ,θ ), is not sharply peaked at the value of the point estimate. The Laplace approximation (Laplace, 1986; MacKay, 1992b;a) is applicable in this case as it replaces a point estimate of an integral with the volume of a Gaussian centered at a mode of the integrand, thereby forming a local quadratic approximation. We can make use of this approximation to incorporate uncertainty about the task-specific parameters into the MAML algorithm at fast adaptation time. In particular, suppose that each integrand in (2) has a mode φ∗j at which it is locally well-approximated by a quadratic function. The Laplace approximation uses a second-order Taylor expansion of the negative log posterior in order to approximate each integral in the product in (2) as∫ p ( Xj | φj ) p ( φj | θ ) dφj ≈ p ( Xj | φ ∗ j ) p ( φ∗j | θ ) det(Hj/2π) − 12 (10) where Hj is the Hessian matrix of second derivatives of the negative log posterior. Classically, the Laplace approximation uses the MAP estimate for φ∗j , although any mode can be used as an expansion site provided the integrand is well enough approximated there by a quadratic. We use the point estimate φ̂j uncovered by fast adaptation, in which case the MAML objective in (1) becomes an appropriately scaled version of the approximate marginal likelihood − log p (X | θ ) ≈ ∑ j [ − log p ( Xj | φ̂j ) − log p ( φ̂j | θ ) + 12 log det(Hj) ] . (11) The term log p( φ̂j | θ ) results from the implicit regularization imposed by early stopping during fast adaptation, as discussed in Section 3.1. The term 1/2 log det(Hj), on the other hand, results from the Laplace approximation and can be interpreted as a form of regularization that penalizes model complexity. 4.2 USING CURVATURE INFORMATION TO IMPROVE MAML Using (11) as a training criterion for a neural network model is difficult due to the required computation of the determinant of the Hessian of the log posterior Hj , which itself decomposes into a sum of the Hessian of the log likelihood and the Hessian of the log prior as Hj = ∇ 2 φj [ − log p ( Xj | φj )] +∇2φj [ − log p ( φj | θ )] . In our case of early stopping as regularization, the prior over task-specific parameters p(φj | θ ) is implicit and thus no closed form is available for a general model. Although we may use the quadratic approximation derived in Section 3.2 to obtain an approximate Gaussian prior, this prior is not diagonal and does not, to our knowledge, have a convenient factorization. Therefore, in our experiments, we instead use a simple approximation in which the prior is approximated as a diagonal Gaussian with precision τ . We keep τ fixed, although this parameter may be cross-validated for improved performance. Subroutine ML-LAPLACE(θ, T ) Draw N samples x1, . . . ,xN ∼ pT (x) Initialize φ← θ for k in 1, . . . ,K do Update φ← φ+ α∇φ log p(x1, . . . ,xN | φ ) end Draw M samples xN+1, . . . 
,xN+M ∼ pT (x) Estimate quadratic curvature Ĥ return − log p(xN+1, . . . ,xN+M | φ ) + η log det(Ĥ) Subroutine 4: Subroutine for computing a Laplace approximation of the marginal likelihood. Similarly, the Hessian of the log likelihood is intractable to form exactly for all but the smallest models, and furthermore, is not guaranteed to be positive definite at all points, possibly rendering the Laplace approximation undefined. To combat this, we instead seek a curvature matrix Ĥ that approximates the quadratic curvature of a neural network objective function. Since it is well-known that the curvature associated with neural network objective functions is highly non-diagonal (e.g., Martens, 2016), a further requirement is that the matrix have off-diagonal terms. Due to the difficulties listed above, we turn to second order gradient descent methods, which precondition the gradient with an inverse curvature matrix at each iteration of descent. The Fisher information matrix (Fisher, 1925) has been extensively used as an approximation of curvature, giving rise to a method known as natural gradient descent (Amari, 1998). A neural network with an appropriate choice of loss function is a probabilistic model and therefore defines a Fisher information matrix. Furthermore, the Fisher information matrix can be seen to define a convex quadratic approximation to the objective function of a probabilistic neural model (Pascanu & Bengio, 2014; Martens, 2014). Importantly for our use case, the Fisher information matrix is positive definite by definition as well as non-diagonal. However, the Fisher information matrix is still expensive to work with. Martens & Grosse (2015) developed Kronecker-factored approximate curvature (K-FAC), a scheme for approximating the curvature of the objective function of a neural network with a block-diagonal approximation to the Fisher information matrix. Each block corresponds to a unique layer in the network, and each block is further approximated as a Kronecker product (see Van Loan, 2000) of two much smaller matrices by assuming that the second-order statistics of the input activation and the back-propagated derivatives within a layer are independent. These two approximations ensure that the inverse of the Fisher information matrix can be computed efficiently for the natural gradient. For the Laplace approximation, we are interested in the determinant of a curvature matrix instead of its inverse. However, we may also make use of the approximations to the Fisher information matrix from K-FAC as well as properties of the Kronecker product. In particular, we use the fact that the determinant of a Kronecker product is the product of the exponentiated determinants of each of the factors, and that the determinant of a block diagonal matrix is the product of the determinants of the blocks (Van Loan, 2000). The determinants for each factor can be computed as efficiently as the inverses required by K-FAC, in O(d3) time for a d-dimensional Kronecker factor. We make use of the Laplace approximation and K-FAC to replace Subroutine 3, which computes the task-specific marginal NLLs using a point estimate for φ̂. We call this method the Lightweight Laplace Approximation for Meta-Adaptation (LLAMA), and give a replacement subroutine in Subroutine 4. 
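As a small check of the determinant identities that make this tractable, the sketch below uses arbitrary SPD matrices as stand-ins for the per-layer Kronecker factors (it is not the paper's implementation): it computes the log determinant of a block-diagonal matrix of Kronecker products factor-by-factor and compares the result with the dense computation.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(m):
    A = rng.normal(size=(m, m))
    return A @ A.T + m * np.eye(m)

# Stand-ins for K-FAC's per-layer Kronecker factors (second moments of input
# activations and of back-propagated derivatives); sizes are arbitrary.
factors = [(random_spd(3), random_spd(4)), (random_spd(2), random_spd(5))]

# log det of the block-diagonal curvature approximation, computed factor-wise:
# log det(A kron B) = q*logdet(A) + p*logdet(B) for A of size p x p and B of size q x q,
# and the log det of a block-diagonal matrix is the sum over blocks.
logdet_fast = sum(B.shape[0] * np.linalg.slogdet(A)[1] + A.shape[0] * np.linalg.slogdet(B)[1]
                  for A, B in factors)

# Dense check: build the full block-diagonal matrix of Kronecker products.
blocks = [np.kron(A, B) for A, B in factors]
dense = np.block([[blocks[0], np.zeros((blocks[0].shape[0], blocks[1].shape[1]))],
                  [np.zeros((blocks[1].shape[0], blocks[0].shape[1])), blocks[1]]])
print(logdet_fast, np.linalg.slogdet(dense)[1])    # the two values should agree
```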
5 EXPERIMENTAL EVALUATION The goal of our experiments is to evaluate if we can use our probabilistic interpretation of MAML to generate samples from the distribution over adapted parameters, and futhermore, if our method can be applied to large-scale meta-learning problems such as miniImageNet. 5.1 WARMUP: TOY NONLINEAR MODEL The connection between MAML and hierarchical Bayes suggests that we should expect MAML to behave like an algorithm that learns the mean of a Gaussian prior on model parameters, and uses the mean of this prior as an initialization during fast adaptation. Using the Laplace approximation to the integration over task-specific parameters as in (10) assumes a task-specific parameter posterior with mean at the adapted parameters φ̂ and covariance equal to the inverse Hessian of the log posterior evaluated at the adapted parameter value. Instead of simply using this density in the Laplace approximation as an additional regularization term as in (11), we may sample parameters φj from this density and use each set of sampled parameters to form a set of predictions for a given task. To illustrate this relationship between MAML and hierarchical Bayes, we present a meta-dataset of sinusoid tasks in which each task involves regressing to the output of a sinusoid wave in Figure 5. Variation between tasks is obtained by sampling the amplitude uniformly from [0.1, 5.0] and the phase from [0, π]. During training and for each task, 10 input datapoints are sampled uniformly from [−10.0, 10.0] and the loss is the mean squared error between the prediction and the true value. We observe in Figure 5 that our method allows us to directly sample models from the task-specific parameter distribution after being presented with 10 datapoints from a new, previously unseen sinusoid curve. In particular, the column on the right of Figure 5 demonstrates that the sampled models display an appropriate level of uncertainty when the datapoints are ambiguous (as in the bottom right). 5.2 LARGE-SCALE EXPERIMENT: miniIMAGENET We evaluate LLAMA on the miniImageNet Ravi & Larochelle (2017) 1-shot, 5-way classification task, a standard benchmark in few-shot classification. miniImageNet comprises 64 training classes, 12 validation classes, and 24 test classes. Following the setup of Vinyals et al. (2016), we structure the N -shot, J-way classification task as follows: The model observes N instances of J unseen classes, and is evaluated on its ability to classify M new instances within the J classes. We use a neural network architecture standard to few-shot classification (e.g., Vinyals et al., 2016; Ravi & Larochelle, 2017), consisting of 4 layers with 3× 3 convolutions and 64 filters, followed by batch normalization (BN) (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2× 2 max-pooling. For the scaling variable β and centering variable γ of BN (see Ioffe & Szegedy, 2015), we ignore the fast adaptation update as well as the Fisher factors for K-FAC. We use Adam (Kingma & Ba, 2014) as the meta-optimizer, and standard batch gradient descent with a fixed learning rate to update the model during fast adaptation. LLAMA requires the prior precision term τ as well as an additional parameter η ∈ R+ that weights the regularization term log det Ĥ contributed by the Laplace approximation. We fix τ = 0.001 and selected η = 10−6 via cross-validation; all other parameters are set to the values reported in Finn et al. (2017). We find that LLAMA is practical enough to be applied to this larger-scale problem. 
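The sinusoid meta-dataset of Section 5.1 above is simple enough to reproduce in a few lines. The sketch below follows the ranges quoted in the text (amplitude in [0.1, 5.0], phase in [0, π], inputs in [−10, 10]), while the exact parameterization of the wave and the helper names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_sinusoid_task():
    # Amplitude and phase ranges as in Section 5.1; y = A * sin(x + phase) is an
    # assumed parameterization of "the output of a sinusoid wave".
    amplitude = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)

    def sample_batch(n_points=10):
        x = rng.uniform(-10.0, 10.0, size=n_points)
        return x, amplitude * np.sin(x + phase)

    return sample_batch

task = sample_sinusoid_task()
x_support, y_support = task()     # 10 datapoints presented for fast adaptation
x_query, y_query = task()         # fresh points from the same task for evaluation
mse = np.mean((np.zeros_like(y_query) - y_query) ** 2)   # MSE of a placeholder predictor
```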
In particular, our TensorFlow implementation of LLAMA trains for 60,000 iterations on one TITAN Xp GPU in 9 hours, compared to 5 hours to train MAML. As shown in Table 1, LLAMA achieves comparable performance to the state-of-the-art meta-learning method by Triantafillou et al. (2017). While the gap between MAML and LLAMA is small, the improvement from the Laplace approximation suggests that a more accurate approximation to the marginalization over task-specific parameters will lead to further improvements. 6 RELATED WORK Meta-learning and few-shot learning have a long history in hierarchical Bayesian modeling (e.g., Tenenbaum, 1999; Fei-Fei et al., 2003; Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). A related subfield is that of transfer learning, which has used hierarchical Bayes extensively (e.g., Raina et al., 2006). A variety of inference methods have been used in Bayesian models, including exact inference (Lake et al., 2011), sampling methods (Salakhutdinov et al., 2012), and variational methods (Edwards & Storkey, 2017). While some prior works on hierarchical Bayesian models have proposed to handle basic image recognition tasks, the complexity of these tasks does not yet approach the kinds of complex image recognition problems that can be solved by discriminatively trained deep networks, such as the miniImageNet experiment in our evaluation (Mansinghka et al., 2013). Recently, the Omniglot benchmark Lake et al. (2016) has rekindled interest in the problem of learning from few examples. Modern methods accomplish few-shot learning either through the design of network architectures that ingest the few-shot training samples directly (e.g., Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Hariharan & Girshick, 2017; Triantafillou et al., 2017), or formulating the problem as one of learning to learn, or meta-learning (e.g., Schmidhuber, 1987; Bengio et al., 1991; Schmidhuber, 1992; Bengio et al., 1992). A variety of inference methods have been used in Bayesian models, including exact inference (Lake et al., 2011), sampling methods (Salakhutdinov et al., 2013), and variational methods (Edwards & Storkey, 2017). Our work bridges the gap between gradient-based meta-learning methods and hierarchical Bayesian modeling. Our contribution is not to formulate the meta-learning problem as a hierarchical Bayesian 1Improved performance on miniImageNet has been reported by several works (Mishra et al., 2017; Munkhdalai & Yu, 2017; Sung et al., 2017) by making use of a model architecture with significantly more parameters than the methods in Table 1. Since we do not explore variations in neural network architecture in this work, we omit such results from the table. model, but instead to formulate a gradient-based meta-learner as hierarchical Bayesian inference, thus providing a way to efficiently perform posterior inference in a model-agnostic manner. 7 CONCLUSION We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model. By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery. As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters. This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate. 
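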
We show how to estimate the quantity required by the Laplace approximation using Kronecker-factored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique. Our contribution illuminates the road to exploring further connections between gradient-based meta-learning methods and hierarchical Bayesian modeling. For instance, in this work we assume that the predictive distribution over new data-points is narrow and well-approximated by a point estimate. We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task. Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integral is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode. This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well (Sorenson & Alspach, 1971; Alspach & Sorenson, 1972). The exploration of additional improvements such as this is an exciting line of future work.
1. What is the focus of the paper, and how does it contribute to the field of machine learning?
2. What is the novel perspective offered by the paper on the MAML algorithm?
3. How does the paper improve upon the original MAML algorithm?
4. What are some potential limitations or drawbacks of the proposed approach?
5. How does the paper compare the proposed method with other recent alternatives in the literature?
Review
Review The paper reformulates the model-agnostic meta-learning algorithm (MAML) in terms of inference for parameters of a prior distribution in a hierarchical Bayesian model. This provides an interesting and, as far as I can tell, novel view on MAML. The paper uses this view to improve the MAML algorithm. The writing of the paper is excellent. Experimental evaluation is well done against a number of recently developed alternative methods in favor of the presented method, except for TCML, which has been excluded using a not-so-convincing argument. The overview of the literature is also very well done.
ICLR
Title Recasting Gradient-Based Meta-Learning as Hierarchical Bayes Abstract Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of Finn et al. (2017) as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm’s operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation. 1 INTRODUCTION A remarkable aspect of human intelligence is the ability to quickly solve a novel problem and to be able to do so even in the face of limited experience in a novel domain. Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning. This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources (Schmidhuber, 1987; Bengio et al., 1991; Naik & Mammone, 1992). In machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (Caruana, 1998; Thrun & Pratt, 1998). This inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters (Heskes, 1998), as a learned metric space in which to group neighbors (Bottou & Vapnik, 1992), as a trained recurrent neural network that allows encoding and retrieval of episodic information (Santoro et al., 2016), or as an optimization algorithm with learned parameters (Schmidhuber, 1987; Bengio et al., 1992). The model-agnostic meta-learning (MAML) of Finn et al. (2017) is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive bias has been evaluated only empirically in prior work (Finn et al., 2017). In this work, we present a novel derivation of and a novel extension to MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. The reinterpretation as hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters. 
More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure. We experimentally demonstrate that this enables better performance on a few-shot learning benchmark. 2 META-LEARNING FORMULATION The goal of a meta-learner is to extract task-general knowledge through the experience of solving a number of related tasks. By using this learned prior knowledge, the learner has the potential to quickly adapt to novel tasks even in the face of limited data or limited computation time. Formally, we consider a dataset D that defines a distribution over a family of tasks T . These tasks share some common structure such that learning to solve a single task has the potential to aid in solving another. Each task T defines a distribution over data points x, which we assume in this work to consist of inputs and either regression targets or classification labels y in a supervised learning problem (although this assumption can be relaxed to include reinforcement learning problems; e.g., see Finn et al., 2017). The objective of the meta-learner is to be able to minimize a task-specific performance metric associated with any given unseen task from the dataset given even only a small amount of data from the task; i.e., to be capable of fast adaptation to a novel task. In the following subsections, we discuss two ways of formulating a solution to the meta-learning problem: gradient-based hyperparameter optimization and probabilistic inference in a hierarchical Bayesian model. These approaches were developed orthogonally, but, in Section 3.1, we draw a novel connection between the two. 2.1 META-LEARNING AS GRADIENT-BASED HYPERPARAMETER OPTIMIZATION A parametric meta-learner aims to find some shared parameters θ that make it easier to find the right task-specific parameters φ when faced with a novel task. A variety of meta-learners that employ gradient methods for task-specific fast adaptation have been proposed (e.g., Andrychowicz et al., 2016; Li & Malik, 2017a;b; Wichrowska et al., 2017). MAML (Finn et al., 2017) is distinct in that it provides a gradient-based meta-learning procedure that employs a single additional parameter (the meta-learning rate) and operates on the same parameter space for both meta-learning and fast adaptation. These are necessary features for the equivalence we show in Section 3.1. To address the meta-learning problem, MAML estimates the parameters θ of a set of models so that when one or a few batch gradient descent steps are taken from the initialization at θ given a small sample of task data xj1 , . . . ,xjN ∼ pTj (x) each model has good generalization performance on another sample xjN+1 , . . . ,xjN+M ∼ pTj (x) from the same task. The MAML objective in a maximum likelihood setting is L(θ) = 1 J ∑ j [ 1 M ∑ m − log p ( xjN+m | θ − α∇θ 1 N ∑ n − log p ( xjn | θ ) ︸ ︷︷ ︸ φj )] (1) where we use φj to denote the updated parameters after taking a single batch gradient descent step from the initialization at θ with step size α on the negative log-likelihood associated with the task Tj . Note that since φj is an iterate of a gradient descent procedure that starts from θ, each φj is of the same dimensionality as θ. We refer to the inner gradient descent procedure that computes φj as fast adaptation. The computational graph of MAML is given in Figure 1 (left). 
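A minimal numerical illustration of the objective in (1), assuming a one-dimensional Gaussian likelihood with unit variance so that every gradient is available in closed form; the task distribution, step size, and sample sizes below are illustrative choices rather than anything used in the paper. The finite-difference check confirms that differentiating through one-step fast adaptation gives the meta-gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                                   # fast-adaptation step size (illustrative)

def nll(x, phi):
    # Negative log-likelihood of N(phi, 1), up to an additive constant.
    return 0.5 * np.mean((x - phi) ** 2)

def maml_loss(theta, x_train, x_val):
    # One-step fast adaptation as in (1): phi = theta - alpha * grad of the train NLL.
    phi = theta - alpha * (theta - x_train.mean())
    return nll(x_val, phi), phi

# One synthetic task: a true mean mu, N support samples and M query samples.
mu = rng.normal()
x_train = rng.normal(mu, 1.0, size=10)
x_val = rng.normal(mu, 1.0, size=10)

theta = 0.5
loss, phi = maml_loss(theta, x_train, x_val)

# Meta-gradient by the chain rule: dL/dtheta = (1 - alpha) * (phi - mean(x_val)).
meta_grad = (1.0 - alpha) * (phi - x_val.mean())

# Finite-difference check of the same quantity.
eps = 1e-5
num_grad = (maml_loss(theta + eps, x_train, x_val)[0]
            - maml_loss(theta - eps, x_train, x_val)[0]) / (2 * eps)
print(loss, meta_grad, num_grad)              # meta_grad and num_grad should agree
```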
2.2 META-LEARNING AS HIERARCHICAL BAYESIAN INFERENCE An alternative way to formulate meta-learning is as a problem of probabilistic inference in the hierarchical model depicted in Figure 1 (right). In particular, in the case of meta-learning, each task-specific parameter φj is distinct from but should influence the estimation of the parameters {φj′ | j ′ 6= j} from other tasks. We can capture this intuition by introducing a meta-level parameter θ on which each task-specific parameter is statistically dependent. With this formulation, the mutual dependence of the task-specific parameters φj is realized only through their individual dependence on the meta-level parameters θ. As such, estimating θ provides a way to constrain the estimation of each of the φj . Given some data in a multi-task setting, we may estimate θ by integrating out the task-specific parameters to form the marginal likelihood of the data. Formally, grouping all of the data from each of the tasks as X and again denoting by xj1 , . . . ,xjN a sample from task Tj , the marginal likelihood of the observed data is given by p (X | θ ) = ∏ j (∫ p ( xj1 , . . . ,xjN | φj ) p ( φj | θ ) dφj ) . (2) Maximizing (2) as a function of θ gives a point estimate for θ, an instance of a method known as empirical Bayes (Bernardo & Smith, 2006; Gelman et al., 2014) due to its use of the data to estimate the parameters of the prior distribution. Hierarchical Bayesian models have a long history of use in both transfer learning and domain adaptation (e.g., Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). However, the formulation of meta-learning as hierarchical Bayes does not automatically provide an inference procedure, and furthermore, there is no guarantee that inference is tractable for expressive models with many parameters such as deep neural networks. 3 LINKING GRADIENT-BASED META-LEARNING & HIERARCHICAL BAYES In this section, we connect the two independent approaches of Section 2.1 and Section 2.2 by showing that MAML can be understood as empirical Bayes in a hierarchical probabilistic model. Furthermore, we build on this understanding by showing that a choice of update rule for the taskspecific parameters φj (i.e., a choice of inner-loop optimizer) corresponds to a choice of prior over task-specific parameters, p(φj | θ ). 3.1 MODEL-AGNOSTIC META-LEARNING AS EMPIRICAL BAYES In general, when performing empirical Bayes, the marginalization over task-specific parameters φj in (2) is not tractable to compute exactly. To avoid this issue, we can consider an approximation that makes use of a point estimate φ̂j instead of performing the integration over φ in (2). Using φ̂j as an estimator for each φj , we may write the negative logarithm of the marginal likelihood as − log p (X | θ ) ≈ ∑ j [ − log p ( xjN+1 , . . .xjN+M | φ̂j )] . (3) Setting φ̂j = θ + α∇θ log p(xj1 , . . . ,xjN | θ ) for each j in (3) recovers the unscaled form of the one-step MAML objective in (1). This tells us that the MAML objective is equivalent to a maximization with respect to the meta-level parameters θ of the marginal likelihood p(X | θ ), where a point estimate for each task-specific parameter φj is computed via one or a few steps of gradient descent. By taking only a few steps from the initialization at θ, the point estimate φ̂j trades off Algorithm MAML-HB(D) Initialize θ randomly while not converged do Draw J samples T1, . . . , TJ ∼ pD(T ) Estimate Ex∼pT1 (x)[− log p(x | θ )], . . . 
,Ex∼pTJ (x)[− log p(x | θ )] using ML-· · · Update θ ← θ − β ∇θ ∑ j Ex∼pTj (x)[− log p(x | θ )] end Algorithm 2: Model-agnostic meta-learning as hierarchical Bayesian inference. The choices of the subroutine ML-· · · that we consider are defined in Subroutine 3 and Subroutine 4. Subroutine ML-POINT(θ, T ) Draw N samples x1, . . . ,xN ∼ pT (x) Initialize φ← θ for k in 1, . . . ,K do Update φ← φ+ α∇φ log p(x1, . . . ,xN | φ ) end Draw M samples xN+1, . . . ,xN+M ∼ pT (x) return − log p(xN+1, . . . ,xN+M | φ ) Subroutine 3: Subroutine for computing a point estimate φ̂ using truncated gradient descent to approximate the marginal negative log likelihood (NLL). minimizing the fast adaptation objective − log p(xj1 , . . . ,xjN | θ ) with staying close in value to the parameter initialization θ. We can formalize this trade-off by considering the linear regression case. Recall that the maximum a posteriori (MAP) estimate of φj corresponds to the global mode of the posterior p(φj | xj1 , . . .xjN ,θ ) ∝ p(xj1 , . . .xjN | φj )p(φj | θ ). In the case of a linear model, early stopping of an iterative gradient descent procedure to estimate φj is exactly equivalent to MAP estimation of φj under the assumption of a prior that depends on the number of descent steps as well as the direction in which each step is taken. In particular, write the input examples as X and the vector of regression targets as y, omit the task index from φ, and consider the gradient descent update φ(k) = φ(k−1) − α∇φ [ ‖y −Xφ‖22 ] φ=φ(k−1) = φ(k−1) − αX T (Xφ(k−1) − y) (4) for iteration index k and learning rate α ∈ R+. Santos (1996) shows that, starting from φ(0) = θ, φ(k) in (4) solves the regularized linear least squares problem min ( ‖y −Xφ‖22 + ‖θ − φ‖ 2 Q ) (5) with Q-norm defined by ‖z‖Q = z TQ−1z for a symmetric positive definite matrix Q that depends on the step size α and iteration index k as well as on the covariance structure of X. We describe the exact form of the dependence in Section 3.2. The minimization in (5) can be expressed as a posterior maximization problem given a conditional Gaussian likelihood over y and a Gaussian prior over φ. The posterior takes the form p (φ | X,y,θ ) ∝ N (y ; Xφ, I) N (φ ; θ,Q) . (6) Since φ(k) in (4) maximizes (6), we may conclude that k iterations of gradient descent in a linear regression model with squared error exactly computes the MAP estimate of φ, given a Gaussian-noised observation model and a Gaussian prior over φ with parameters µ0 = θ and Σ0 = Q. Therefore, in the case of linear regression with squared error, MAML is exactly empirical Bayes using the MAP estimate as the point estimate of φ. In the nonlinear case, MAML is again equivalent to an empirical Bayes procedure to maximize the marginal likelihood that uses a point estimate for φ computed by one or a few steps of gradient descent. However, this point estimate is not necessarily the global mode of a posterior. We can instead understand the point estimate given by truncated gradient descent as the value of the mode of an implicit posterior over φ resulting from an empirical loss interpreted as a negative log-likelihood, and regularization penalties and the early stopping procedure jointly acting as priors (for similar interpretations, see Sjöberg & Ljung, 1995; Bishop, 1995; Duvenaud et al., 2016). 
The exact equivalence between early stopping and a Gaussian prior on the weights in the linear case, as well as the implicit regularization to the parameter initialization the nonlinear case, tells us that every iterate of truncated gradient descent is a mode of an implicit posterior. In particular, we are not required to take the gradient descent procedure of fast adaptation that computes φ̂ to convergence in order to establish a connection between MAML and hierarchical Bayes. MAML can therefore be understood to approximate an expectation of the marginal negative log likelihood (NLL) for each task Tj as Ex∼pTj (x) [− log p (x | θ )] ≈ 1 M ∑ m − log p ( xjN+m | φ̂j ) using the point estimate φ̂j = θ + α∇θ log p(xjn | θ ) for single-step fast adaptation. The algorithm for MAML as probabilistic inference is given in Algorithm 2; Subroutine 3 computes each marginal NLL using the point estimate of φ̂ as just described. Formulating MAML in this way, as probabilistic inference in a hierarchical Bayesian model, motivates the interpretation in Section 3.2 of using various meta-optimization algorithms to induce a prior over task-specific parameters. 3.2 THE PRIOR OVER TASK-SPECIFIC PARAMETERS From Section 3.1, we may conclude that early stopping during fast adaptation is equivalent to a specific choice of a prior over task-specific parameters, p(φj | θ ). We can better understand the role of early stopping in defining the task-specific parameter prior in the case of a quadratic objective. Omit the task index from φ and x, and consider a second-order approximation of the fast adaptation objective `(φ) = − log p(x1 . . . ,xN | φ ) about a minimum φ ∗: `(φ) ≈ ˜̀(φ) := 12‖φ− φ ∗‖2 H −1 + `(φ∗) (7) where the Hessian H = ∇2φ `(φ ∗) is assumed to be positive definite so that ˜̀ is bounded below. Furthermore, consider using a curvature matrix B to precondition the gradient in gradient descent, giving the update φ(k) = φ(k−1) − B∇φ ˜̀(φ(k−1)) . (8) If B is diagonal, we can identify (8) as a Newton method with a diagonal approximation to the inverse Hessian; using the inverse Hessian evaluated at the point φ(k−1) recovers Newton’s method itself. On the other hand, meta-learning the matrix B matrix via gradient descent provides a method to incorporate task-general information into the covariance of the fast adaptation prior, p(φ | θ ). For instance, the meta-learned matrix B may encode correlations between parameters that dictates how such parameters are updated relative to each other. Formally, taking k steps of gradient descent from φ(0) = θ using the update rule in (8) gives a φ(k) that solves min ( ‖φ− φ∗‖2 H −1 + ‖φ(0) − φ‖ 2 Q ) . (9) The minimization in (9) corresponds to taking a Gaussian prior p(φ | θ ) with mean θ and covariance Q for Q = OΛ−1((I−BΛ)−k − I)OT (Santos, 1996) where B is a diagonal matrix that results from a simultaneous diagonalization of H and B as OTHO = diag(λ1, . . . , λn) = Λ and OTB−1O = diag(b1, . . . , bn) = B with bi, λi ≥ 0 for i = 1, . . . , n (Theorem 8.7.1 in Golub & Van Loan, 1983). If the true objective is indeed quadratic, then, assuming the data is centered, H is the unscaled covariance matrix of features, XTX. 4 IMPROVING MODEL-AGNOSTIC META-LEARNING Identifying MAML as a method for probabilistic inference in a hierarchical model allows us to develop novel improvements to the algorithm. 
In Section 4.1, we consider an approach from Bayesian parameter estimation to improve the MAML algorithm, and in Section 4.2, we discuss how to make this procedure computationally tractable for high-dimensional models. 4.1 LAPLACE’S METHOD OF INTEGRATION We have shown that the MAML algorithm is an empirical Bayes procedure that employs a point estimate for the mid-level, task-specific parameters in a hierarchical Bayesian model. However, the use of this point estimate may lead to an inaccurate point approximation of the integral in (2) if the posterior over the task-specific parameters, p(φj | xjN+1 , . . . ,xjN+M ,θ ), is not sharply peaked at the value of the point estimate. The Laplace approximation (Laplace, 1986; MacKay, 1992b;a) is applicable in this case as it replaces a point estimate of an integral with the volume of a Gaussian centered at a mode of the integrand, thereby forming a local quadratic approximation. We can make use of this approximation to incorporate uncertainty about the task-specific parameters into the MAML algorithm at fast adaptation time. In particular, suppose that each integrand in (2) has a mode φ∗j at which it is locally well-approximated by a quadratic function. The Laplace approximation uses a second-order Taylor expansion of the negative log posterior in order to approximate each integral in the product in (2) as∫ p ( Xj | φj ) p ( φj | θ ) dφj ≈ p ( Xj | φ ∗ j ) p ( φ∗j | θ ) det(Hj/2π) − 12 (10) where Hj is the Hessian matrix of second derivatives of the negative log posterior. Classically, the Laplace approximation uses the MAP estimate for φ∗j , although any mode can be used as an expansion site provided the integrand is well enough approximated there by a quadratic. We use the point estimate φ̂j uncovered by fast adaptation, in which case the MAML objective in (1) becomes an appropriately scaled version of the approximate marginal likelihood − log p (X | θ ) ≈ ∑ j [ − log p ( Xj | φ̂j ) − log p ( φ̂j | θ ) + 12 log det(Hj) ] . (11) The term log p( φ̂j | θ ) results from the implicit regularization imposed by early stopping during fast adaptation, as discussed in Section 3.1. The term 1/2 log det(Hj), on the other hand, results from the Laplace approximation and can be interpreted as a form of regularization that penalizes model complexity. 4.2 USING CURVATURE INFORMATION TO IMPROVE MAML Using (11) as a training criterion for a neural network model is difficult due to the required computation of the determinant of the Hessian of the log posterior Hj , which itself decomposes into a sum of the Hessian of the log likelihood and the Hessian of the log prior as Hj = ∇ 2 φj [ − log p ( Xj | φj )] +∇2φj [ − log p ( φj | θ )] . In our case of early stopping as regularization, the prior over task-specific parameters p(φj | θ ) is implicit and thus no closed form is available for a general model. Although we may use the quadratic approximation derived in Section 3.2 to obtain an approximate Gaussian prior, this prior is not diagonal and does not, to our knowledge, have a convenient factorization. Therefore, in our experiments, we instead use a simple approximation in which the prior is approximated as a diagonal Gaussian with precision τ . We keep τ fixed, although this parameter may be cross-validated for improved performance. Subroutine ML-LAPLACE(θ, T ) Draw N samples x1, . . . ,xN ∼ pT (x) Initialize φ← θ for k in 1, . . . ,K do Update φ← φ+ α∇φ log p(x1, . . . ,xN | φ ) end Draw M samples xN+1, . . . 
,xN+M ∼ pT (x) Estimate quadratic curvature Ĥ return − log p(xN+1, . . . ,xN+M | φ ) + η log det(Ĥ) Subroutine 4: Subroutine for computing a Laplace approximation of the marginal likelihood. Similarly, the Hessian of the log likelihood is intractable to form exactly for all but the smallest models, and furthermore, is not guaranteed to be positive definite at all points, possibly rendering the Laplace approximation undefined. To combat this, we instead seek a curvature matrix Ĥ that approximates the quadratic curvature of a neural network objective function. Since it is well-known that the curvature associated with neural network objective functions is highly non-diagonal (e.g., Martens, 2016), a further requirement is that the matrix have off-diagonal terms. Due to the difficulties listed above, we turn to second order gradient descent methods, which precondition the gradient with an inverse curvature matrix at each iteration of descent. The Fisher information matrix (Fisher, 1925) has been extensively used as an approximation of curvature, giving rise to a method known as natural gradient descent (Amari, 1998). A neural network with an appropriate choice of loss function is a probabilistic model and therefore defines a Fisher information matrix. Furthermore, the Fisher information matrix can be seen to define a convex quadratic approximation to the objective function of a probabilistic neural model (Pascanu & Bengio, 2014; Martens, 2014). Importantly for our use case, the Fisher information matrix is positive definite by definition as well as non-diagonal. However, the Fisher information matrix is still expensive to work with. Martens & Grosse (2015) developed Kronecker-factored approximate curvature (K-FAC), a scheme for approximating the curvature of the objective function of a neural network with a block-diagonal approximation to the Fisher information matrix. Each block corresponds to a unique layer in the network, and each block is further approximated as a Kronecker product (see Van Loan, 2000) of two much smaller matrices by assuming that the second-order statistics of the input activation and the back-propagated derivatives within a layer are independent. These two approximations ensure that the inverse of the Fisher information matrix can be computed efficiently for the natural gradient. For the Laplace approximation, we are interested in the determinant of a curvature matrix instead of its inverse. However, we may also make use of the approximations to the Fisher information matrix from K-FAC as well as properties of the Kronecker product. In particular, we use the fact that the determinant of a Kronecker product is the product of the exponentiated determinants of each of the factors, and that the determinant of a block diagonal matrix is the product of the determinants of the blocks (Van Loan, 2000). The determinants for each factor can be computed as efficiently as the inverses required by K-FAC, in O(d3) time for a d-dimensional Kronecker factor. We make use of the Laplace approximation and K-FAC to replace Subroutine 3, which computes the task-specific marginal NLLs using a point estimate for φ̂. We call this method the Lightweight Laplace Approximation for Meta-Adaptation (LLAMA), and give a replacement subroutine in Subroutine 4. 
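Independently of the curvature machinery above, the Laplace approximation of Section 4.1 can be sanity-checked in one dimension. The sketch below uses an illustrative Bernoulli likelihood with success probability sigmoid(φ) and a Gaussian prior (all data and prior parameters are arbitrary), finds the posterior mode with a few Newton steps, and compares the Laplace estimate of the marginal likelihood in (10) with numerical quadrature.

```python
import numpy as np

# Illustrative binary observations and prior parameters; not from the paper.
x = np.array([1.0, 1.0, 0.0, 1.0, 1.0])
theta, tau = 0.0, 1.0                         # prior mean and precision

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def joint(phi):
    # p(x | phi) * p(phi | theta) with a Bernoulli(sigmoid(phi)) likelihood
    # and a N(theta, 1/tau) prior, including all normalizing constants.
    lik = sigmoid(phi) ** x.sum() * (1.0 - sigmoid(phi)) ** (len(x) - x.sum())
    prior = np.sqrt(tau / (2.0 * np.pi)) * np.exp(-0.5 * tau * (phi - theta) ** 2)
    return lik * prior

# Mode of the log joint via a few Newton steps.
phi = 0.0
for _ in range(20):
    s = sigmoid(phi)
    grad = x.sum() - len(x) * s - tau * (phi - theta)    # d/dphi log joint
    hess = -len(x) * s * (1.0 - s) - tau                 # d^2/dphi^2 log joint
    phi -= grad / hess

H = len(x) * sigmoid(phi) * (1.0 - sigmoid(phi)) + tau   # curvature of the negative log joint
laplace = joint(phi) * np.sqrt(2.0 * np.pi / H)          # eq. (10) in one dimension

grid = np.linspace(-10.0, 10.0, 20001)
quadrature = np.trapz(joint(grid), grid)
print(laplace, quadrature)        # close but not identical: the posterior is non-Gaussian
```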
5 EXPERIMENTAL EVALUATION The goal of our experiments is to evaluate if we can use our probabilistic interpretation of MAML to generate samples from the distribution over adapted parameters, and futhermore, if our method can be applied to large-scale meta-learning problems such as miniImageNet. 5.1 WARMUP: TOY NONLINEAR MODEL The connection between MAML and hierarchical Bayes suggests that we should expect MAML to behave like an algorithm that learns the mean of a Gaussian prior on model parameters, and uses the mean of this prior as an initialization during fast adaptation. Using the Laplace approximation to the integration over task-specific parameters as in (10) assumes a task-specific parameter posterior with mean at the adapted parameters φ̂ and covariance equal to the inverse Hessian of the log posterior evaluated at the adapted parameter value. Instead of simply using this density in the Laplace approximation as an additional regularization term as in (11), we may sample parameters φj from this density and use each set of sampled parameters to form a set of predictions for a given task. To illustrate this relationship between MAML and hierarchical Bayes, we present a meta-dataset of sinusoid tasks in which each task involves regressing to the output of a sinusoid wave in Figure 5. Variation between tasks is obtained by sampling the amplitude uniformly from [0.1, 5.0] and the phase from [0, π]. During training and for each task, 10 input datapoints are sampled uniformly from [−10.0, 10.0] and the loss is the mean squared error between the prediction and the true value. We observe in Figure 5 that our method allows us to directly sample models from the task-specific parameter distribution after being presented with 10 datapoints from a new, previously unseen sinusoid curve. In particular, the column on the right of Figure 5 demonstrates that the sampled models display an appropriate level of uncertainty when the datapoints are ambiguous (as in the bottom right). 5.2 LARGE-SCALE EXPERIMENT: miniIMAGENET We evaluate LLAMA on the miniImageNet Ravi & Larochelle (2017) 1-shot, 5-way classification task, a standard benchmark in few-shot classification. miniImageNet comprises 64 training classes, 12 validation classes, and 24 test classes. Following the setup of Vinyals et al. (2016), we structure the N -shot, J-way classification task as follows: The model observes N instances of J unseen classes, and is evaluated on its ability to classify M new instances within the J classes. We use a neural network architecture standard to few-shot classification (e.g., Vinyals et al., 2016; Ravi & Larochelle, 2017), consisting of 4 layers with 3× 3 convolutions and 64 filters, followed by batch normalization (BN) (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and 2× 2 max-pooling. For the scaling variable β and centering variable γ of BN (see Ioffe & Szegedy, 2015), we ignore the fast adaptation update as well as the Fisher factors for K-FAC. We use Adam (Kingma & Ba, 2014) as the meta-optimizer, and standard batch gradient descent with a fixed learning rate to update the model during fast adaptation. LLAMA requires the prior precision term τ as well as an additional parameter η ∈ R+ that weights the regularization term log det Ĥ contributed by the Laplace approximation. We fix τ = 0.001 and selected η = 10−6 via cross-validation; all other parameters are set to the values reported in Finn et al. (2017). We find that LLAMA is practical enough to be applied to this larger-scale problem. 
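The N-shot, J-way episode construction described above can be sketched as follows; the toy stand-in dataset, class counts, and query size are hypothetical placeholders rather than the miniImageNet pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_episode(dataset, n_shot=1, j_way=5, m_query=15):
    # One N-shot, J-way episode in the style of Vinyals et al. (2016): J unseen
    # classes, N support examples per class, M query examples per class.
    classes = rng.choice(list(dataset.keys()), size=j_way, replace=False)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        idx = rng.permutation(len(dataset[cls]))
        support += [(dataset[cls][i], episode_label) for i in idx[:n_shot]]
        query += [(dataset[cls][i], episode_label) for i in idx[n_shot:n_shot + m_query]]
    return support, query

# Toy stand-in dataset: 20 "classes" with 30 feature vectors each.
toy_data = {c: rng.normal(size=(30, 8)) for c in range(20)}
support, query = sample_episode(toy_data)
print(len(support), len(query))               # 5 support and 75 query examples
```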
In particular, our TensorFlow implementation of LLAMA trains for 60,000 iterations on one TITAN Xp GPU in 9 hours, compared to 5 hours to train MAML. As shown in Table 1, LLAMA achieves comparable performance to the state-of-the-art meta-learning method by Triantafillou et al. (2017). While the gap between MAML and LLAMA is small, the improvement from the Laplace approximation suggests that a more accurate approximation to the marginalization over task-specific parameters will lead to further improvements. 6 RELATED WORK Meta-learning and few-shot learning have a long history in hierarchical Bayesian modeling (e.g., Tenenbaum, 1999; Fei-Fei et al., 2003; Lawrence & Platt, 2004; Yu et al., 2005; Gao et al., 2008; Daumé III, 2009; Wan et al., 2012). A related subfield is that of transfer learning, which has used hierarchical Bayes extensively (e.g., Raina et al., 2006). A variety of inference methods have been used in Bayesian models, including exact inference (Lake et al., 2011), sampling methods (Salakhutdinov et al., 2012), and variational methods (Edwards & Storkey, 2017). While some prior works on hierarchical Bayesian models have proposed to handle basic image recognition tasks, the complexity of these tasks does not yet approach the kinds of complex image recognition problems that can be solved by discriminatively trained deep networks, such as the miniImageNet experiment in our evaluation (Mansinghka et al., 2013). Recently, the Omniglot benchmark Lake et al. (2016) has rekindled interest in the problem of learning from few examples. Modern methods accomplish few-shot learning either through the design of network architectures that ingest the few-shot training samples directly (e.g., Koch, 2015; Vinyals et al., 2016; Snell et al., 2017; Hariharan & Girshick, 2017; Triantafillou et al., 2017), or formulating the problem as one of learning to learn, or meta-learning (e.g., Schmidhuber, 1987; Bengio et al., 1991; Schmidhuber, 1992; Bengio et al., 1992). A variety of inference methods have been used in Bayesian models, including exact inference (Lake et al., 2011), sampling methods (Salakhutdinov et al., 2013), and variational methods (Edwards & Storkey, 2017). Our work bridges the gap between gradient-based meta-learning methods and hierarchical Bayesian modeling. Our contribution is not to formulate the meta-learning problem as a hierarchical Bayesian 1Improved performance on miniImageNet has been reported by several works (Mishra et al., 2017; Munkhdalai & Yu, 2017; Sung et al., 2017) by making use of a model architecture with significantly more parameters than the methods in Table 1. Since we do not explore variations in neural network architecture in this work, we omit such results from the table. model, but instead to formulate a gradient-based meta-learner as hierarchical Bayesian inference, thus providing a way to efficiently perform posterior inference in a model-agnostic manner. 7 CONCLUSION We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model. By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery. As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters. This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate. 
We show how to estimate the quantity required by the Laplace approximation using Kronecker-factored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique. Our contribution illuminates the road to exploring further connections between gradient-based meta-learning methods and hierarchical Bayesian modeling. For instance, in this work we assume that the predictive distribution over new data-points is narrow and well-approximated by a point estimate. We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task. Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integral is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode. This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well (Sorenson & Alspach, 1971; Alspach & Sorenson, 1972). The exploration of additional improvements such as this is an exciting line of future work.
1. What is the main contribution of the paper, and how does it relate to MAML?
2. How does the proposed method differ from other forms of task-specific subroutine ML, such as L-MAML?
3. What are the strengths and weaknesses of the experimental performance reported in Table 1?
4. How does the reviewer assess the relevance of the reformulation of MAML, and the technical steps involved in the reformulation?
5. Are there any suggestions on improving the readability of the paper, such as changing the symbols used or reordering the structure of certain sections?
6. Are there any typos or grammatical errors in the review that should be corrected?
Review
Review MAML (Finn+ 2017) is recast as a hierarchical Bayesian learning procedure. In particular the inner (task) training is initially cast as point-wise max likelihood estimation, and then (sec4) improved upon by making use of the Laplace approximation. Experimental evidence of the relevance of the method is provided on a toy task involving a NIW prior of Gaussians, and the (benchmark) MiniImageNet task. Casting MAML as HB seems a good idea. The paper does a good job of explaining the connection, but I think the presentation could be clarified. The role of the task prior and how it emerges from early stopping (ie a finite number of gradient descent steps) (sec 3.2) is original and technically non-trivial, and is a contribution of this paper. The synthetic data experiment sec5.1 and fig5 is clearly explained and serves to additionally clarify the proposed method. Regarding the MiniImageNet experiments, I read the exchange on TCML and agree with the authors of the paper under review. However, I recommend including the references to Mukhdalai 2017 and Sung 2017 in the footnote on TCML to strengthen the point more generically, and show that not just TCML but other non-shallow architectures are not considered for comparison here. In addition, the point made by the TCML authors is fair ("nothing prevented you from...") and I would also recommend mentioning the reviewed paper's authors' decision (not to test deeper architectures) in the footnote. This decision is in order but needs to be stated in order for the reader to form a balanced view of methods at her disposal. The experimental performance reported Table 1 remains small and largely within one standard deviation of competitor methods. I am assessing this paper as "7" because despite the merit of the paper, the relevance of the reformulation of MAML, and the technical steps involved in the reformulation, the paper does not eg address other forms (than L-MAML) of the task-specific subroutine ML-..., and the benchmark improvements are quite small. I think the approach is good and fruitful. # Suggestions on readability * I have the feeling the paper inverts $\alpha, \beta$ from their use in Finn 2017 (step size for meta- vs task-training). This is unfortunate and will certainly confuse readers; I advise carefully changing this throughout the entire paper (eg Algo 2,3,4, eq 1, last eq in sec3.1, eq in text below eq3, etc) * I advise avoiding the use of the symbol f, which appears in only two places in Algo 2 and the end of sec 3.1. This is in part because f is given another meaning in Finn 2017, but also out of general parsimony in symbol use. (could leave the output of ML-... implicit by writing ML-...(\theta, T)_j in the $sum_j$; if absolutely needed, use another symbol than f) * Maybe sec3 can be clarified in its structure by re-ordering points on the quadratic error function and early stopping (eg avoiding to split them between end of 3.1 and 3.2). * sec6 "Machine learning and deep learning": I would definitely avoid this formulation, seems to tail in with all the media nonsense on "what's the difference between ML and DL ?". In addition the formulation seems to contrast ML with hierarchical Bayesian modeling, which does not make sense/ is wrong and confusing. # Typos * sec1 second parag: did you really mean "in the architecture or loss function"? unclear. 
* sec2: over a family * "common structure, so that" (not such that) * orthgonal * sec2.1 suggestion: clarify that \theta and \phi are in the same space * sec2.2 suggestion: task-specific parameter $\phi_j$ is distinct from ... parameters $\phi_{j'}, j' \neq j} * "unless an approximate ... is provided" (the use of the subjunctive here is definitely dated :-) ) * sec3.1 task-specific parameters $\phi_j$ (I would avoid writing just \phi altogether to distinguish in usage from \theta) * Gaussian-noised * approximation of the it objective * before eq9: "that solves": well, it doesn't really "solve" the minimisation, in that it is not a minimum; reformulate this? * sec4.1 innaccurate * well approximated * sec4.2 an curvature * (Amari 1989) * For the the Laplace * O(n^3) : what is n ? * sec5.2 (Ravi and L 2017) * for the the
ICLR
Title Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization Abstract Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks. 1 INTRODUCTION Reinforcement learning (RL) has been successfully applied to a variety of tasks, including board games (Silver et al., 2016), robotic manipulation tasks (Levine et al., 2016), and video games (Mnih et al., 2015). Hierarchical reinforcement learning (HRL) is a type of RL that leverages the hierarchical structure of a given task by learning a hierarchical policy (Sutton et al., 1999; Dietterich, 2000). Past studies in this field have shown that HRL can solve challenging tasks in the video game domain (Vezhnevets et al., 2017; Bacon et al., 2017) and robotic manipulation (Daniel et al., 2016; Osa et al., 2018b). In HRL, lower-level policies, which are often referred to as option policies, learn different behavior/control patterns, and the upper-level policy, which is often referred to as the gating policy, learns to select option policies. Recent studies have developed HRL methods using deep learning (Goodfellow et al., 2016) and have shown that HRL can yield impressive performance for complex tasks (Bacon et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017; Haarnoja et al., 2018a). However, identifying the hierarchical policy structure that yields efficient learning is not a trivial task, since the problem involves learning a sufficient variety of types of behavior to solve a given task. In this study, we present an HRL method via the mutual information (MI) maximization with advantage-weighted importance, which we refer to as adInfoHRL. We formulate the problem of learning a latent variable in a hierarchical policy as one of learning discrete and interpretable repre- sentations of states and actions. Ideally, each option policy should be located at separate modes of the advantage function. To estimate the latent variable that corresponds to modes of the advantage function, we introduce advantage-weighted importance weights. Our approach can be considered to divide the state-action space based on an information maximization criterion, and it learns option policies corresponding to each region of the state-action space. 
We derive adInfoHRL as an HRL method based on deterministic option policies that are trained based on an extension of the deterministic policy gradient (Silver et al., 2014; Fujimoto et al., 2018). The contributions of this paper are twofold: 1. We propose the learning of a latent variable of a hierarchical policy as a discrete and hidden representation of the state-action space. To learn option policies that correspond to the modes of the advantage function, we introduce advantage-weighted importance. 2. We propose an HRL method, where the option policies are optimized based on the deterministic policy gradient and the gating policy selects the option that maximizes the expected return. The experimental results show that our proposed method adInfoHRL can learn a diversity of options on continuous control tasks. Moreover, our approach can improve the performance of TD3 on such tasks as the Walker2d and Ant tasks in OpenAI Gym with MuJoco simulator. 2 BACKGROUND In this section, we formulate the problem of HRL in this paper and describe methods related to our proposal. 2.1 HIERARCHICAL REINFORCEMENT LEARNING We consider tasks that can be modeled as a Markov decision process (MDP), consisting of a state space S, an action space A, a reward function r : S × A 7→ R, an initial state distribution ρ(s0), and a transition probability p(st+1|st,at) that defines the probability of transitioning from state st and action at at time t to next state st+1. The return is defined as Rt = ∑T i=t γ i−tr(si,ai), where γ is a discount factor, and policy π(a|s) is defined as the density of action a given state s. Let dπ(s) = ∑T t=0 γ tp(st = s) denote the discounted visitation frequency induced by the policy π. The goal of reinforcement learning is to learn a policy that maximizes the expected return J(π) = Es0,a0,...[R0] where s0 ∼ ρ(s0),a ∼ π and st+1 ∼ p(st+1|st,at). By defining the Q-function as Qπ(s,a) = Es0,a0,...[Rt|st = s,at = a], the objective function of reinforcement learning can be rewritten as follows: J(π) = ∫∫ dπ(s)π(a|s)Qπ(s,a)dads. (1) Herein, we consider hierarchical policy π(a|s) = ∑ o∈O π(o|s)π(a|s, o), where o is the latent variable andO is the set of possible values of o. Many existing HRL methods employ a policy structure of this form (Frans et al., 2018; Vezhnevets et al., 2017; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016). In general, latent variable o can be discrete (Frans et al., 2018; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016; Osa & Sugiyama, 2018) or continuous (Vezhnevets et al., 2017). π(o|s) is often referred to as a gating policy (Daniel et al., 2016; Osa & Sugiyama, 2018), policy over options (Bacon et al., 2017), or manager (Vezhnevets et al., 2017). Likewise, π(a|s, o) is often referred to as an option policy (Osa & Sugiyama, 2018), sub-policy (Daniel et al., 2016), or worker (Vezhnevets et al., 2017). In HRL, the objective function is given by J(π) = ∫∫ dπ(s) ∑ o∈O π(o|s)π(a|s, o)Qπ(s,a)dads. (2) As discussed in the literature on inverse RL (Ziebart, 2010), multiple policies can yield equivalent expected returns. This indicates that there exist multiple solutions to latent variable o that maximizes the expected return. To obtain the preferable solution for o, we need to impose additional constraints in HRL. 
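As a toy instantiation of the hierarchical policy factorization π(a|s) = Σ_o π(o|s) π(a|s, o) used above, the sketch below evaluates the marginal action density for a one-dimensional action with a linear-softmax gating policy and Gaussian option policies; all shapes and parameters are illustrative and are not the architecture used by adInfoHRL.

```python
import numpy as np

rng = np.random.default_rng(5)
n_options, state_dim = 4, 3
W_gate = rng.normal(size=(n_options, state_dim))      # gating-policy weights
W_mean = rng.normal(size=(n_options, state_dim))      # option-policy mean weights

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def gaussian_pdf(a, mean, std):
    return np.exp(-0.5 * ((a - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def policy_density(a, s, std=0.2):
    # pi(a|s) = sum_o pi(o|s) * pi(a|s, o) for a scalar action a.
    gate = softmax(W_gate @ s)                # pi(o|s)
    means = W_mean @ s                        # per-option action means
    return float(np.sum(gate * gaussian_pdf(a, means, std)))

s = rng.normal(size=state_dim)
print(policy_density(0.3, s))
```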
Although prior work has employed regularizers (Bacon et al., 2017) and constraints (Daniel et al., 2016) to obtain various option policies, the method of learning a good latent variable o that improves the sample-efficiency of the learning process remains unclear. In this study, we propose learning the latent variable by maximizing MI between latent variables and state-action pairs. 2.2 DETERMINISTIC POLICY GRADIENT The deterministic policy gradient (DPG) algorithm was developed for learning a monolithic deterministic policy $\mu_{\theta}(s): \mathcal{S} \mapsto \mathcal{A}$ by Silver et al. (2014). In off-policy RL, the objective is to maximize the expectation of the return, averaged over the state distribution induced by a behavior policy $\beta(a|s)$: $J(\pi) = \iint d^{\beta}(s)\,\pi(a|s)\,Q^{\pi}(s,a)\,da\,ds$. (3) When the policy is deterministic, the objective becomes $J(\pi) = \int d^{\beta}(s)\,Q^{\pi}(s,\mu_{\theta}(s))\,ds$. Silver et al. (2014) have shown that the gradient of a deterministic policy is given by $\nabla_{\theta}\mathbb{E}_{s\sim d^{\beta}(s)}[Q^{\pi}(s,a)] = \mathbb{E}_{s\sim d^{\beta}(s)}\big[\nabla_{\theta}\mu_{\theta}(s)\,\nabla_{a}Q^{\pi}(s,a)\big|_{a=\mu_{\theta}(s)}\big]$. (4) The DPG algorithm has been extended to the deep deterministic policy gradient (DDPG) for continuous control problems that require neural network policies (Lillicrap et al., 2016). The Twin Delayed Deep Deterministic policy gradient algorithm (TD3) proposed by Fujimoto et al. (2018) is a variant of DDPG that outperforms state-of-the-art on-policy methods such as TRPO (Schulman et al., 2017a) and PPO (Schulman et al., 2017b) in certain domains. We extend this deterministic policy gradient to learn a hierarchical policy. 2.3 REPRESENTATION LEARNING VIA INFORMATION MAXIMIZATION Recent studies such as those by Chen et al. (2016); Hu et al. (2017); Li et al. (2017) have shown that an interpretable representation can be learned by maximizing MI. Given a dataset $X = (x_1, \ldots, x_n)$, regularized information maximization (RIM) proposed by Gomes et al. (2010) involves learning a conditional model $\hat{p}(y|x;\eta)$ with parameter vector $\eta$ that predicts a label y. The objective of RIM is to minimize $\ell(\eta) - \lambda I_{\eta}(x, y)$, (5) where $\ell(\eta)$ is a regularization term, $I_{\eta}(x, y)$ is MI, and $\lambda$ is a coefficient. MI can be decomposed as $I_{\eta}(x, y) = H(y) - H(y|x)$, where $H(y)$ is the entropy and $H(y|x)$ the conditional entropy. Increasing $H(y)$ encourages the labels to be uniformly distributed, and decreasing $H(y|x)$ encourages clear cluster assignments. Although RIM was originally developed for unsupervised clustering problems, the concept is applicable to various problems that require learning a hidden discrete representation. In this study, we formulate the problem of learning the latent variable o of a hierarchical policy as one of learning a latent representation of the state-action space. 3 LEARNING OPTIONS VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION In this section, we propose a novel HRL method based on advantage-weighted information maximization. We first introduce latent representation learning via advantage-weighted information maximization, and we then describe the HRL framework based on deterministic option policies. 3.1 LATENT REPRESENTATION LEARNING VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION Although prior work has often considered $H(o|s)$ or $I(s,o)$, which results in a division of the state space, we are interested in using $I((s,a),o)$ for dividing the state-action space instead. A schematic sketch of our approach is shown in Figure 1. As shown in the left side of Figure 1, the advantage function often has multiple modes. Ideally, each option policy should correspond to a separate mode of the advantage function.
However, it is non-trivial to find the modes of the advantage function in practice. For this purpose, we reduce the problem of finding the modes of the advantage function to that of finding the modes of the probability density of state-action pairs. We consider a policy based on the advantage function of the form $\pi_{\mathrm{Ad}}(a|s) = \frac{f(A^{\pi}(s,a))}{Z}$, (6) where $A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s)$ is the advantage function, $V^{\pi}(s)$ is the state-value function, and $Z$ is the partition function. Here $f(\cdot)$ is applied to the value of the advantage function; it is monotonically increasing in its input and always satisfies $f(\cdot) > 0$. In our implementation we used the exponential function, $f(\cdot) = \exp(\cdot)$. When following such a policy, an action with a larger advantage is drawn with a higher probability. Under this assumption, finding the modes of the advantage function is equivalent to finding the modes of the density induced by $\pi_{\mathrm{Ad}}$. Thus, finding the modes of the advantage function can be reduced to the problem of clustering samples induced by $\pi_{\mathrm{Ad}}$. Following the formulation of RIM introduced in Section 2.3, we formulate the problem of clustering samples induced by $\pi_{\mathrm{Ad}}$ as the learning of discrete representations via MI maximization. For this purpose, we consider a neural network that estimates $p(o|s,a;\eta)$, parameterized with vector $\eta$, which we refer to as the option network. We formulate the learning of the latent variable o as minimizing $L_{\mathrm{option}}(\eta) = \ell(\eta) - \lambda I(o,(s,a);\eta)$, (7) where $I(o,(s,a);\eta) = \hat{H}(o;\eta) - \hat{H}(o|s,a;\eta)$ and $\ell(\eta)$ is the regularization term. In practice, we need to approximate the advantage function, and we learn the discrete variable o that corresponds to the modes of the current estimate of the advantage function. For regularization, we used a simplified version of virtual adversarial training (VAT) proposed by Miyato et al. (2016). Namely, we set $\ell(\eta) = D_{\mathrm{KL}}\big(p(o|s_{\mathrm{noise}},a_{\mathrm{noise}};\eta)\,\|\,p(o|s,a;\eta)\big)$, where $s_{\mathrm{noise}} = s + \epsilon_s$, $a_{\mathrm{noise}} = a + \epsilon_a$, and $\epsilon_s$ and $\epsilon_a$ denote white noise. This regularization term penalizes dissimilarity between an original state-action pair and a perturbed one, and Hu et al. (2017) empirically show that this regularization improves the performance of learning latent discrete representations. When computing MI, we need to compute $p(o)$ and $H(o|s,a)$, given by $p(o) = \int p^{\pi_{\mathrm{Ad}}}(s,a)\,p(o|s,a;\eta)\,da\,ds = \mathbb{E}_{(s,a)\sim p^{\pi_{\mathrm{Ad}}}(s,a)}\big[p(o|s,a;\eta)\big]$ (8) and $H(o|s,a) = -\mathbb{E}_{(s,a)\sim p^{\pi_{\mathrm{Ad}}}(s,a)}\big[\textstyle\sum_{o\in\mathcal{O}} p(o|s,a;\eta)\log p(o|s,a;\eta)\big]$. (9) Thus, the probability density of $(s,a)$ induced by $\pi_{\mathrm{Ad}}$ is necessary for computing MI for our purpose. To estimate the probability density of $(s,a)$ induced by $\pi_{\mathrm{Ad}}$, we introduce the advantage-weighted importance in the next section.
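As a rough illustration of the option-network objective in Equation (7), the sketch below computes a plain (unweighted) empirical MI estimate together with a VAT-style KL regularizer. The linear-softmax option network and all hyperparameter values are hypothetical stand-ins, and the advantage weighting introduced in Section 3.2 is deliberately omitted here.

```python
# Minimal sketch (not the authors' code) of the option-network objective in Eq. (7):
# L_option = l(eta) - lambda * I(o, (s, a)), with a VAT-style KL regularizer.
import numpy as np

rng = np.random.default_rng(1)
state_dim, action_dim, num_options, batch = 4, 2, 3, 256
eta = rng.normal(scale=0.1, size=(num_options, state_dim + action_dim))

def option_probs(states, actions, params):
    """p(o|s,a;eta) as a softmax over linear scores (toy option network)."""
    x = np.concatenate([states, actions], axis=1)
    logits = x @ params.T
    logits -= logits.max(axis=1, keepdims=True)
    z = np.exp(logits)
    return z / z.sum(axis=1, keepdims=True)

def option_loss(states, actions, params, lam=1.0, noise=0.05):
    p = option_probs(states, actions, params)                 # (batch, O)
    # Plain empirical MI; Section 3.2 replaces these averages with
    # advantage-weighted ones.
    p_o = p.mean(axis=0)
    H_o = -np.sum(p_o * np.log(p_o + 1e-8))
    H_o_sa = -np.mean(np.sum(p * np.log(p + 1e-8), axis=1))
    mutual_info = H_o - H_o_sa
    # VAT-style regularizer: KL(p(o | s + eps, a + eps) || p(o | s, a)).
    p_pert = option_probs(states + noise * rng.normal(size=states.shape),
                          actions + noise * rng.normal(size=actions.shape), params)
    reg = np.mean(np.sum(p_pert * (np.log(p_pert + 1e-8) - np.log(p + 1e-8)), axis=1))
    return reg - lam * mutual_info

states = rng.normal(size=(batch, state_dim))
actions = rng.normal(size=(batch, action_dim))
print(option_loss(states, actions, eta))
```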
3.2 IMPORTANCE WEIGHTS FOR MUTUAL INFORMATION ESTIMATION Although we have shown that the problem of finding the modes of the advantage function can be reduced to MI maximization with respect to the samples induced by $\pi_{\mathrm{Ad}}$, samples induced by $\pi_{\mathrm{Ad}}$ are not available in practice. While samples induced during the learning process are available, a discrete representation obtained from such samples does not correspond to the modes of the advantage function. To estimate the density induced by $\pi_{\mathrm{Ad}}$, we employ an importance sampling approach. We assume that the change of the state distribution induced by the policy update is sufficiently small, namely, $d^{\pi_{\mathrm{Ad}}}(s) \approx d^{\beta}(s)$. Then, the importance weight can be approximated as $W(s,a) = \frac{p^{\pi_{\mathrm{Ad}}}(s,a)}{p^{\beta}(s,a)} = \frac{d^{\pi_{\mathrm{Ad}}}(s)\,\pi_{\mathrm{Ad}}(a|s)}{d^{\beta}(s)\,\beta(a|s)} \approx \frac{\pi_{\mathrm{Ad}}(a|s)}{\beta(a|s)} = \frac{f(A(s,a))}{Z\,\beta(a|s)}$, (10) and the normalized importance weight is given by $\tilde{W}(s,a) = \frac{W(s,a)}{\sum_{j=1}^{N} W(s_j,a_j)} = \frac{f(A(s,a)) / (Z\,\beta(a|s))}{\sum_{j=1}^{N} f(A(s_j,a_j)) / (Z\,\beta(a_j|s_j))} = \frac{f(A(s,a)) / \beta(a|s)}{\sum_{j=1}^{N} f(A(s_j,a_j)) / \beta(a_j|s_j)}$. (11) As the partition function $Z$ cancels, we do not need to compute $Z$ when computing the importance weight in practice. We call this importance weight $W$ the advantage-weighted importance and employ it to compute the objective function used to estimate the latent variable. This advantage-weighted importance is used to compute the entropy terms for computing MI in Equation (7). The empirical estimate of the entropy $H(o)$ is given by $\hat{H}(o;\eta) = -\sum_{o\in\mathcal{O}} \hat{p}(o;\eta)\log\hat{p}(o;\eta)$, where $\hat{p}(o;\eta) = \sum_{i=1}^{N} \tilde{W}(s_i,a_i)\,p(o|s_i,a_i;\eta)$, (12) and the samples $(s_i,a_i)$ are drawn from $p^{\beta}(s,a)$ induced by a behavior policy $\beta(a|s)$. Likewise, the empirical estimate of the conditional entropy $H(o|s,a)$ is given by $\hat{H}(o|s,a;\eta) = -\sum_{i=1}^{N} \tilde{W}(s_i,a_i)\sum_{o\in\mathcal{O}} p(o|s_i,a_i;\eta)\log p(o|s_i,a_i;\eta)$. (13) The derivations of Equations (12) and (13) are provided in Appendix A. To train the option network, we store the samples collected by the M most recent behavior policies, to which we refer as the on-policy buffer $D_{\mathrm{on}}$. Although the algorithm also works with all samples stored in the replay buffer, we observed that using the on-policy buffer for latent representation learning yields better performance. For this reason, we decided to use the on-policy buffer in our implementation. Therefore, while the algorithm is off-policy in the sense that the option is learned from samples collected by behavior policies, our implementation is “semi” on-policy in the sense that we use samples collected by the most recent behavior policies.
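The following sketch illustrates how the advantage-weighted importance of Equation (11) and the weighted entropy estimates of Equations (12)-(13) might be computed from a batch of samples. It assumes the exponential choice of $f$, uses log-space normalization for numerical stability, and all function and variable names are illustrative rather than taken from the released code.

```python
# Minimal sketch (assumptions, not the authors' code) of the advantage-weighted
# importance weights in Eq. (11) and the weighted entropies in Eqs. (12)-(13).
import numpy as np

def advantage_weighted_entropies(adv, behavior_logp, option_probs_batch):
    """adv: (N,) advantage estimates A(s_i, a_i);
    behavior_logp: (N,) log beta(a_i | s_i) under the behavior policy;
    option_probs_batch: (N, O) option-network outputs p(o | s_i, a_i; eta)."""
    # Unnormalized weights f(A)/beta with f = exp; the partition function Z
    # cancels after normalization, so it never has to be computed.
    log_w = adv - behavior_logp
    log_w -= log_w.max()                         # numerical stability
    w = np.exp(log_w)
    w_tilde = w / w.sum()                        # normalized weights, Eq. (11)
    # Weighted marginal p(o) and entropy H(o), Eq. (12).
    p_o = (w_tilde[:, None] * option_probs_batch).sum(axis=0)
    H_o = -np.sum(p_o * np.log(p_o + 1e-8))
    # Weighted conditional entropy H(o|s,a), Eq. (13).
    H_o_sa = -np.sum(w_tilde[:, None] * option_probs_batch
                     * np.log(option_probs_batch + 1e-8))
    return H_o, H_o_sa, H_o - H_o_sa             # last term is the MI estimate

rng = np.random.default_rng(2)
N, O = 512, 4
p = rng.dirichlet(np.ones(O), size=N)
print(advantage_weighted_entropies(rng.normal(size=N), rng.normal(size=N), p))
```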
4 HRL OBJECTIVE WITH DETERMINISTIC OPTION POLICIES Instead of stochastic option policies, we consider deterministic option policies and model them using separate neural networks. We denote by $\pi(a|s,o) = \mu^{o}_{\theta}(s)$ deterministic option policies parameterized by vector $\theta$. The objective function of off-policy HRL with deterministic option policies can then be obtained by replacing $\pi(a|s)$ with $\sum_{o\in\mathcal{O}} \pi(o|s)\,\pi(a|s,o)$ in Equation (3): $J(w,\theta) = \int d^{\beta}(s) \sum_{o\in\mathcal{O}} \pi(o|s)\,Q^{\pi}(s,\mu^{o}_{\theta}(s);w)\,ds$, (14) where $Q^{\pi}(s,a;w)$ is an approximated Q-function parameterized by vector $w$. This form of the objective function is analogous to Equation (3). Thus, we can extend standard RL techniques to the learning of the gating policy $\pi(o|s)$ in HRL with deterministic option policies. In HRL, the goal of the gating policy is to generate a value of o that maximizes the conditional expectation of the return: $Q^{\pi}_{\Omega}(s,o) = \mathbb{E}[R \,|\, s_t = s, o_t = o] = \int \pi(a|s,o)\,Q^{\pi}(s,a)\,da$, (15) which is often referred to as the option-value function (Sutton et al., 1999). When option policies are stochastic, it is often necessary to approximate the option-value function $Q^{\pi}_{\Omega}(s,o)$ in addition to the action-value function $Q^{\pi}(s,a)$. However, in our case, the option-value function for deterministic option policies is given by $Q^{\pi}_{\Omega}(s,o) = Q^{\pi}(s,\mu^{o}_{\theta}(s))$, (16) which we can estimate using the deterministic option policy $\mu^{o}_{\theta}(s)$ and the approximated action-value function $Q^{\pi}(s,a;w)$. In this work we employ the softmax gating policy of the form $\pi(o|s) = \frac{\exp\big(Q^{\pi}(s,\mu^{o}_{\theta}(s);w)\big)}{\sum_{o'\in\mathcal{O}} \exp\big(Q^{\pi}(s,\mu^{o'}_{\theta}(s);w)\big)}$, (17) which encodes the exploration in its form (Daniel et al., 2016). The state-value function is given as $V^{\pi}(s) = \sum_{o\in\mathcal{O}} \pi(o|s)\,Q^{\pi}(s,\mu^{o}_{\theta}(s);w)$, (18) which can be computed using Equation (17). We use this state-value function when computing the advantage-weighted importance as $A(s,a) = Q(s,a) - V(s)$. In this study, the Q-function is trained in the manner proposed by Fujimoto et al. (2018). Two neural networks $(Q^{\pi}_{w_1}, Q^{\pi}_{w_2})$ are trained to estimate the Q-function, and the target value of the Q-function is computed as $y_i = r_i + \gamma \min_{j=1,2} Q^{\pi}_{w_j}(s'_i, a'_i)$ for a sample $(s_i, a_i, s'_i, r_i)$ in a batch sampled from the replay buffer, where $r_i = r(s_i, a_i)$ and $a'_i$ is the action at the next state $s'_i$. In this study, the gating policy determines the option once every N time steps, i.e., at $t = 0, N, 2N, \ldots$ Neural networks that model $\mu^{o}_{\theta}(s)$ for $o = 1, \ldots, O$, which we refer to as option-policy networks, are trained separately for each option. In the learning phase, $p(o|s,a)$ is estimated by the option network. Then, samples are assigned to option $o^{*} = \arg\max_{o} p(o|s,a;\eta)$ and are used to update the option-policy network that corresponds to $o^{*}$. When performing a rollout, o is drawn by following the gating policy in Equation (17), and an action is generated by the selected option-policy network. Differentiating the objective function in Equation (14), we obtain the deterministic policy gradient of our option-policy $\mu^{o}_{\theta}(s)$, given by $\nabla_{\theta} J(w,\theta) = \mathbb{E}_{s\sim d^{\beta}(s),\, o\sim\pi(o|s)}\big[\nabla_{\theta}\mu^{o}_{\theta}(s)\,\nabla_{a} Q^{\pi}(s,a)\big|_{a=\mu^{o}_{\theta}(s)}\big]$. (19) The procedure of adInfoHRL is summarized in Algorithm 1. As in TD3 (Fujimoto et al., 2018), we employ a soft update with a target value network and a target policy network.

Algorithm 1 HRL via Advantage-Weighted Information Maximization (adInfoHRL)
Input: number of options O, size of the on-policy buffer
Initialize: replay buffer D_R, on-policy buffer D_on, network parameters η, θ, w, θ_target, w_target
repeat
  for t = 0 to T do
    Draw an option for the given s by following Equation (17): o ∼ π(o|s)
    Draw an action a ∼ β(a|s, o) = µ^o_θ(s) + ε
    Record the data sample (s, a, r, s′)
    Aggregate the data in D_R and D_on
    if the on-policy buffer is full then
      Update the option network by minimizing Equation (7) on the samples in D_on
      Clear the on-policy buffer D_on
    end if
    Sample a batch D_batch ⊂ D_R
    Update the Q-network parameter w
    if t mod d == 0 then
      Estimate p(o|s_i, a_i) for (s_i, a_i) ∈ D_batch using the option network
      Assign samples (s_i, a_i) ∈ D_batch to the option o* = argmax_o p(o|s_i, a_i)
      Update the option-policy networks µ^o_θ(s) for o = 1, ..., O with Equation (19)
      Update the target networks: w_target ← τ w + (1 − τ) w_target, θ_target ← τ θ + (1 − τ) θ_target
    end if
  end for
until convergence
return θ
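To make the quantities of this section concrete, here is a minimal sketch of the softmax gating policy of Equation (17), the state value of Equation (18), the sample-to-option assignment, and the soft target update from Algorithm 1. It is an illustrative fragment under simplified assumptions, not the authors' implementation.

```python
# Minimal sketch (illustrative, not the released implementation) of the gating
# policy, state value, option assignment, and soft target update.
import numpy as np

rng = np.random.default_rng(3)

def softmax_gating(option_values):
    """pi(o|s) from Eq. (17); option_values[o] = Q(s, mu_o(s); w)."""
    v = option_values - option_values.max()
    z = np.exp(v)
    return z / z.sum()

def state_value(option_values):
    """V(s) from Eq. (18): expectation of option values under the gating policy."""
    return np.dot(softmax_gating(option_values), option_values)

def assign_option(option_posteriors):
    """o* = argmax_o p(o|s,a; eta) for each sample in a batch."""
    return np.argmax(option_posteriors, axis=1)

def soft_update(target_params, params, tau=0.005):
    """theta_target <- tau * theta + (1 - tau) * theta_target."""
    return tau * params + (1.0 - tau) * target_params

q_values = rng.normal(size=4)                     # Q(s, mu_o(s)) for O = 4 options
print(softmax_gating(q_values), state_value(q_values))
print(assign_option(rng.dirichlet(np.ones(4), size=5)))
print(soft_update(np.zeros(3), np.ones(3)))
```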
5 EXPERIMENTS We evaluated the proposed algorithm adInfoHRL on the OpenAI Gym platform (Brockman et al., 2016) with the MuJoCo physics simulator (Todorov et al., 2012). We compared its performance with that of PPO implemented in OpenAI baselines (Dhariwal et al., 2017) and TD3. Henderson et al. (2018) have recently observed that algorithm performance varies across environments; there is thus no single best method for all benchmark environments, and off-policy and on-policy methods have advantages in different problem domains. To analyze the performance of adInfoHRL, we compared it with state-of-the-art algorithms for both on-policy and off-policy methods, although we focused on the comparison with TD3, as our implementation of adInfoHRL is based on it. To determine the effect of learning the latent variable via information maximization, we used the same network architectures for the actor and critic in adInfoHRL and TD3. In addition, to evaluate the benefit of the advantage-weighted importance, we evaluated a variant of adInfoHRL that does not use the advantage-weighted importance for computing mutual information. We refer to this variant of adInfoHRL as infoHRL. The gating policy updated variable o once every three time steps. We tested the performance of adInfoHRL with two and four options. The activation of options over time and snapshots of the learned option policies on the Walker2d task are shown in Figure 2, which visualizes the result from adInfoHRL with four options. One can see that the option policies are activated in different phases of locomotion. While the option indicated by yellow in Figure 2 corresponds to the phase of kicking the floor, the option indicated by blue corresponds to the phase when the agent is in the air. Visualizations of the options learned on the HalfCheetah and Ant tasks are shown in Appendix D. The averaged return of five trials is reported in Figure 3(a)-(d). AdInfoHRL yields the best performance on Ant [1] and Walker2d, whereas the performance of TD3 and adInfoHRL is comparable on HalfCheetah and Hopper, and PPO outperformed the other methods on Hopper. Henderson et al. (2018) claimed that on-policy methods show their superiority on tasks with unstable dynamics, and our experimental results are in line with such previous studies. AdInfoHRL outperformed infoHRL, the variant of adInfoHRL without the advantage-weighted importance, on all the tasks. This result shows that the advantage-weighted importance enhanced the performance of learning options. AdInfoHRL exhibited sample efficiency on Ant and Walker2d in the sense that it required fewer samples than TD3 to achieve comparable performance on those tasks. The concept underlying adInfoHRL is to divide the state-action space to deal with a multi-modal advantage function and to learn option policies corresponding to separate modes of the advantage function. Therefore, adInfoHRL shows its superiority on tasks with a multi-modal advantage function and not on tasks with a simple advantage function. Thus, it is natural that the benefit of adInfoHRL depends on the characteristics of the task. The outputs of the option network and the activation of options on Walker2d are shown in Figure 3(e)-(f), which visualize the result from adInfoHRL with four options. For visualization, the dimensionality was reduced using t-SNE (van der Maaten & Hinton, 2008). The state-action space is clearly divided into separate domains in Figure 3(e). As shown in Figure 3(f), the options are activated in different domains of the state space, which indicates that diverse options are learned by adInfoHRL. ([1] We report the result on the Ant task implemented in rllab (Duan et al., 2016) instead of Ant-v1 implemented in the OpenAI Gym, since the Ant task in rllab is known to be harder than Ant-v1 in the OpenAI Gym. Results on Ant-v1 in the OpenAI Gym are reported in Appendix D.) 6 RELATED WORK AND DISCUSSION Past studies have proposed several ways to deal with the latent variable in HRL. The recent work by Smith et al.
(2018) proposed inferred option policy gradients (IOPG), which is derived as an extension of policy gradient to the option framework. Nachum et al. (2018) recently proposed off-policy target correction for HRL on goal-oriented tasks, where a higher-level policy instructs a lower-level policy by generating the goal signal instead of an inferred latent variable. A popular approach for learning the latent variable in HRL is the variational approach. The recent work by Haarnoja et al. (2018a) is based on soft actor critic (Haarnoja et al., 2018b), and the latent variable is inferred using the variational approach. The work by Hausman et al. (2018) is also closely related to the variational approach, and they proposed a method for learning a latent variable of a hierarchical policy via a variational bound. On the contrary, our method learns the latent variable by maximizing MI with advantage-weighted importance. Recent studies by Gregor et al. (2016); Florensa et al. (2017); Eysenbach et al. (2018) also considered the MI in their formulation. In these methods, MI between the state and the latent variable is considered so as to obtain diverse behaviors. Our approach is different from the previous studies in the sense that we employ MI between the latent variable and the state-action pairs, which leads to the division of the state-action space instead of considering only the state space. We think that dividing the state-action space is an efficient approach when the advantage function is multi-modal, as depicted in Figure 1. InfoGAIL proposed by Li et al. (2017) learns the interpretable representation of the state-action space via MI maximization. InfoGAIL can be interpreted as a method that divides the state-action space based on the density induced by an expert’s policy by maximizing the regularized MI objective. In this sense, it is closely related to our method, although their problem setting is imitation learning (Osa et al., 2018a), which is different from our HRL problem setting. The use of the importance weight based on the value function has appeared in previous studies (Dayan & Hinton, 1997; Kober & Peters, 2011; Neumann & Peters, 2009; Osa & Sugiyama, 2018). For example, the method proposed by Neumann & Peters (2009) employs the importance weight based on the advantage function for learning a monolithic policy, while our method uses a similar importance weight for learning a latent variable of a hierarchical policy. Although Osa & Sugiyama (2018) proposed to learn a latent variable in HRL with importance sampling, their method is limited to episodic settings where only a single option is used in an episode. Our method can be interpreted as an approach that divides the state-action space based on the MI criterion. This concept is related to that of Divide and Conquer (DnC) proposed by Ghosh et al. (2018), although DnC clusters the initial states and does not consider switching between option policies during the execution of a single trajectory. In this study we developed adInfoHRL based on deterministic option policies. However, the concept of dividing the state-action space via advantage-weighted importance can be applied to stochastic policy gradients as well. Further investigation in this direction is necessary in future work. 7 CONCLUSIONS We proposed a novel HRL method, hierarchical reinforcement learning via advantage-weighted information maximization. 
In our framework, the latent variable of a hierarchical policy is learned as a discrete latent representation of the state-action space. Our HRL framework is derived by considering deterministic option policies and by leveraging the analogy between the gating policy for HRL and a monolithic policy for standard RL. The results of the experiments indicate that adInfoHRL can learn diverse options on continuous control tasks. Our results also suggest that our approach can improve the performance of TD3 in certain problem domains. ACKNOWLEDGMENTS MS was partially supported by KAKENHI 17H00757. A MUTUAL INFORMATION WITH ADVANTAGE-WEIGHTED IMPORTANCE The mutual information (MI) between the latent variable o and the state-action pair $(s,a)$ is defined as $I((s,a),o) = H(o) - H(o|s,a)$, (20) where $H(o) = -\sum_{o\in\mathcal{O}} p(o)\log p(o)$ and $H(o|s,a) = -\mathbb{E}_{(s,a)}\big[\sum_{o\in\mathcal{O}} p(o|s,a)\log p(o|s,a)\big]$. We adopt the empirical estimate of MI employed by Gomes et al. (2010) and Hu et al. (2017) and modify it to employ the importance weight. The empirical estimate of MI with respect to the density induced by a policy $\pi$ is given by $\hat{I}((s,a),o) = -\sum_{o\in\mathcal{O}} \hat{p}(o)\log\hat{p}(o) - \hat{H}(o|s,a)$. (21) We consider the case where we have samples collected by a behavior policy $\beta(a|s)$ and need to estimate MI with respect to the density induced by policy $\pi$. Given a model $p(o|s,a;\eta)$ parameterized by vector $\eta$, $p(o)$ can be rewritten as $p(o) = \int p^{\beta}(s,a)\,\frac{p^{\pi}(s,a)}{p^{\beta}(s,a)}\,p(o|s,a;\eta)\,da\,ds = \mathbb{E}\big[W(s,a)\,p(o|s,a;\eta)\big]$, (22) where $W(s,a) = \frac{p^{\pi}(s,a)}{p^{\beta}(s,a)}$ is the importance weight. Therefore, the empirical estimate of $p(o)$ with respect to the density induced by a policy $\pi$ is given by $\hat{p}(o) = \sum_{i=1}^{N} \tilde{W}(s_i,a_i)\,p(o|s_i,a_i;\eta)$, (23) where $\tilde{W}(s,a) = \frac{W(s,a)}{\sum_{j=1}^{N} W(s_j,a_j)}$ is the normalized importance weight. Likewise, the conditional entropy with respect to the density induced by a policy $\pi$ is given by $H(o|s,a) = -\int p^{\pi}(s,a)\sum_{o\in\mathcal{O}} p(o|s,a;\eta)\log p(o|s,a;\eta)\,ds\,da$ (24) $= -\int p^{\beta}(s,a)\,\frac{p^{\pi}(s,a)}{p^{\beta}(s,a)}\sum_{o\in\mathcal{O}} p(o|s,a;\eta)\log p(o|s,a;\eta)\,ds\,da$ (25) $= -\mathbb{E}\big[W(s,a)\sum_{o\in\mathcal{O}} p(o|s,a;\eta)\log p(o|s,a;\eta)\big]$. (26) Therefore, the empirical estimate of the conditional entropy with respect to the density induced by a policy $\pi$ is given by $\hat{H}(o|s,a) = -\sum_{i=1}^{N} \tilde{W}(s_i,a_i)\sum_{o\in\mathcal{O}} p(o|s_i,a_i;\eta)\log p(o|s_i,a_i;\eta)$. (27) Thus, the empirical estimates of MI can be computed by Equations (21), (23) and (27). B DERIVATION OF THE STATE-VALUE FUNCTION In HRL, the value function is given by $V(s) = \int \sum_{o\in\mathcal{O}} \pi(o|s)\,\pi(a|s,o)\,Q^{\pi}(s,a)\,da = \sum_{o\in\mathcal{O}} \pi(o|s) \int \pi(a|s,o)\,Q^{\pi}(s,a)\,da$. (28) Since the option policies are deterministic and given by $\mu^{o}_{\theta}(s)$, the state-value function is given by $V(s) = \sum_{o\in\mathcal{O}} \pi(o|s)\,Q^{\pi}(s,\mu^{o}_{\theta}(s))$. (29) C EXPERIMENTAL DETAILS We performed evaluations using benchmark tasks in the OpenAI Gym platform (Brockman et al., 2016) with the MuJoCo physics simulator (Todorov et al., 2012). Hyperparameters of the reinforcement learning methods used in the experiments are shown in Tables 1-3. For exploration, both adInfoHRL and TD3 used clipped noise drawn from a normal distribution, $\epsilon \sim \mathrm{clip}(\mathcal{N}(0,\sigma), -c, c)$, where $\sigma = 0.2$ and $c = 0.5$. For the hyperparameters of PPO, we used the default values in OpenAI baselines (Dhariwal et al., 2017). For the Walker2d, HalfCheetah, and Hopper tasks, we used Walker2d-v1, HalfCheetah-v1, and Hopper-v1 in the OpenAI Gym, respectively. For the Ant task, we used the AntEnv implemented in rllab (Duan et al., 2016). When training a policy with adInfoHRL, infoHRL, and TD3, critics are trained once per time step, and actors are trained once after every two updates of the critics.
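The exploration noise and update schedule just described can be sketched as follows; the helper names and the loop structure are assumptions for illustration, not an excerpt from the released code.

```python
# Minimal sketch (assumed details) of the clipped Gaussian exploration noise and
# the delayed actor/target updates described in Appendix C: critics are updated
# every step, actors once after every two critic updates.
import numpy as np

rng = np.random.default_rng(4)

def exploration_noise(action_dim, sigma=0.2, c=0.5):
    """epsilon ~ clip(N(0, sigma), -c, c), added to the deterministic option action."""
    return np.clip(sigma * rng.normal(size=action_dim), -c, c)

def run_updates(total_steps, actor_delay=2):
    """Critics update every step; the actor and targets every actor_delay steps."""
    log = []
    for step in range(1, total_steps + 1):
        log.append("critic")                      # critic update happens every step
        if step % actor_delay == 0:
            log.append("actor+targets")           # delayed policy and target update
    return log

print(run_updates(5))
print(exploration_noise(action_dim=2))
```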
The source code is available at https://github.com/TakaOsa/adInfoHRL. We performed the experiments five times with different seeds and report the averaged test return, where the test return was computed once every 5000 time steps by executing 10 episodes without exploration. When executing the learned policy without exploration, the option was selected as $o = \arg\max_{o'} Q^{\pi}(s, \mu^{o'}_{\theta}(s))$, (30) instead of following the stochastic gating policy in Equation (17). D ADDITIONAL INFORMATION ON EXPERIMENTAL RESULTS On the HalfCheetah task, adInfoHRL delivered the best performance with two options. The distribution of options on HalfCheetah-v1 after one million steps is shown in Figure 4. Although the state-action space is evenly divided, the options are not evenly activated. This behavior can occur because the state-action space is divided based on the density induced by the behavior policy, while the activation of options is determined based on the quality of the option policies in a given state. Moreover, an even division of the state-action space is not necessarily an even division of the state space. The activation of the options over time is shown in Figure 5. It is clear that one of the options corresponds to the stable running phase and the other to the phase of recovering from unstable states. The distribution of four options on the Ant-rllab task after one million steps is shown in Figure 6. The four options are activated in different domains of the state space. The activation of the options over time on the Ant-rllab task is shown in Figure 7. While all four options are actively used at the beginning of the episode, two options (blue and yellow) are mainly activated during stable locomotion. Since the Ant task implemented in rllab is known to be harder than Ant-v1 implemented in the OpenAI Gym, we reported the result of the Ant task in rllab in the main manuscript. Here, we report the result of the Ant-v1 task implemented in the OpenAI Gym. On the Ant-v1 task, adInfoHRL yielded the best performance with two options. The performance of adInfoHRL with two options is comparable to that of TD3 on Ant-v1. This result indicates that the Ant-v1 task does not require a hierarchical policy structure, while a hierarchical policy improves the performance of learning on Ant-rllab. The distribution of options on the Ant-v1 task after one million steps is shown in Figure 8. The activation of the options over time is shown in Figure 9. It is evident that the two option policies on the Ant-v1 task corresponded to different postures of the agent. A recent study on HRL by Smith et al. (2018) reported the performance of IOPG on Walker2d-v1, Hopper-v1, and HalfCheetah-v1. The study by Haarnoja et al. (2018a) reported the performance of SAC-LSP on Walker2d-v1, Hopper-v1, HalfCheetah-v1, and Ant-rllab. A comparison of performance between our method, IOPG, and SAC-LSP is summarized in Table 4. We report the performance after 1 million steps. It is worth noting that adInfoHRL outperformed IOPG on these tasks in terms of the achieved return, although we are aware that the qualitative performance is also important in HRL. AdInfoHRL outperformed SAC-LSP on Walker2d-v1 and Ant-rllab, while SAC-LSP shows its superiority on HalfCheetah-v1 and Hopper-v1. However, the results of SAC-LSP were obtained by using reward scaling, which was not used in the evaluation of adInfoHRL. Therefore, further experiments are necessary for a fair comparison under the same conditions.
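As a small illustration of the evaluation protocol in Equation (30), the sketch below selects the option greedily by its option value. The toy option policies and Q-function are placeholders, not the learned models.

```python
# Minimal sketch (illustrative) of evaluation-time option selection, Eq. (30):
# without exploration, the option with the highest option value is chosen greedily.
import numpy as np

def greedy_option(state, option_policies, q_function):
    """Pick o = argmax_o Q(s, mu_o(s)) and return the corresponding action."""
    candidate_actions = [mu(state) for mu in option_policies]
    values = [q_function(state, a) for a in candidate_actions]
    best = int(np.argmax(values))
    return best, candidate_actions[best]

# Toy stand-ins for the learned option policies and Q-function.
option_policies = [lambda s: +np.tanh(s[:2]), lambda s: -np.tanh(s[:2])]
q_function = lambda s, a: float(-np.sum((a - s[:2]) ** 2))
print(greedy_option(np.array([0.3, -0.1, 0.7]), option_policies, q_function))
```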
1. How does the proposed HRL algorithm differ from previous work on unsupervised option discovery?
2. What is the meaning of "optimal policy" in the context of this paper?
3. What is the purpose of Equation 10, and why is white noise added to states and actions?
4. How does the algorithm block fail to provide a clear understanding of the underlying algorithm's nesting structure?
5. Why are there no equations referenced in the option policy network update?
6. What are some potential issues with the experimental results shown in the paper?
7. How might the absence of other HRL approaches being evaluated against impact the paper's validity?
8. Would including a comparison table of results reported in other papers strengthen the paper?
9. What is the significance of TD3 with an action repeat adjustment as a baseline?
10. How might extensive rewrites and additional experiments improve the quality of the paper?
Review
Review The authors propose an HRL algorithm that attempts to learn options that maximize their mutual information with the state-action density under the optimal policy. Several key terms are used in ways that differ from the rest of the literature. The authors claim options are learned in an "unsupervised" manner, but it is unclear what this means. Previous work (none of which is cited) has dealt with unsupervised option discovery in the context of mutual information maximization (Variational intrinsic control, diversity is all you need, etc), but they do so in the absence of reward, unlike this paper. "Optimal policy" is similarly abused, with it appearing to mean optimal from the perspective of the current model parameters, rather than optimal in any global sense. Or at least I think that is what the authors intend. If they do mean the globally optimal policy, then its unclear how to interpret Equation 8, with its reference to a behavior policy and an advantage function, neither of which would be available if meant to represent the global optimum. Equation 10 comes out of nowhere. One must assume they meant "maximize mutual information" and not "minimize", but who knows. Why is white-noise being added to the states and actions? Is this some sort of noise-contrastive estimation approach to mutual information estimation? It doesn't appear to be, but it is unclear what else could motivate it. Even the appendices fail to shine light on this equation. The algorithm block isn't terribly helpful. The "t" variable is used outside of its for loop, which draws into question the exact nesting structure of the underlying algorithm (which isn't obvious for HRL methods). There aren't any equations referenced, with the option policy network's update not even referencing the loss nor data over which the loss would be evaluated. Some of the experimental results show promise, but the PPO Ant result raises some questions. Clearly the OpenAI implementation of PPO used would have tuned for the OpenAI gym Ant implementation, and the appendix shows it getting decent results. But it never takes off in the harder RlLab version -- were the hyper-parameters adjusted for this new environment? It is also odd that no other HRL approaches are evaluated against, given the number cited. Running these methods might be too costly, but surely a table comparing results reported in those papers should be included. A minor point: another good baseline would be TD3 with the action repeat adjusted to be inline with the gating policy. I apologise if this review came off as too harsh -- I believe a good paper can be made of this with extensive rewrites and additional experiments. But the complete lack of clarity makes it feel like it was rushed out prematurely. EDIT: Now this is a paper that makes sense! With the terminology cleared up and the algorithm fully unpacked, this approach seems quite interesting. The experimental results could always be stronger, but no longer have any holes in them. Score 3-->6
ICLR
Title Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization Abstract Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks. 1 INTRODUCTION Reinforcement learning (RL) has been successfully applied to a variety of tasks, including board games (Silver et al., 2016), robotic manipulation tasks (Levine et al., 2016), and video games (Mnih et al., 2015). Hierarchical reinforcement learning (HRL) is a type of RL that leverages the hierarchical structure of a given task by learning a hierarchical policy (Sutton et al., 1999; Dietterich, 2000). Past studies in this field have shown that HRL can solve challenging tasks in the video game domain (Vezhnevets et al., 2017; Bacon et al., 2017) and robotic manipulation (Daniel et al., 2016; Osa et al., 2018b). In HRL, lower-level policies, which are often referred to as option policies, learn different behavior/control patterns, and the upper-level policy, which is often referred to as the gating policy, learns to select option policies. Recent studies have developed HRL methods using deep learning (Goodfellow et al., 2016) and have shown that HRL can yield impressive performance for complex tasks (Bacon et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017; Haarnoja et al., 2018a). However, identifying the hierarchical policy structure that yields efficient learning is not a trivial task, since the problem involves learning a sufficient variety of types of behavior to solve a given task. In this study, we present an HRL method via the mutual information (MI) maximization with advantage-weighted importance, which we refer to as adInfoHRL. We formulate the problem of learning a latent variable in a hierarchical policy as one of learning discrete and interpretable repre- sentations of states and actions. Ideally, each option policy should be located at separate modes of the advantage function. To estimate the latent variable that corresponds to modes of the advantage function, we introduce advantage-weighted importance weights. Our approach can be considered to divide the state-action space based on an information maximization criterion, and it learns option policies corresponding to each region of the state-action space. 
We derive adInfoHRL as an HRL method based on deterministic option policies that are trained based on an extension of the deterministic policy gradient (Silver et al., 2014; Fujimoto et al., 2018). The contributions of this paper are twofold: 1. We propose the learning of a latent variable of a hierarchical policy as a discrete and hidden representation of the state-action space. To learn option policies that correspond to the modes of the advantage function, we introduce advantage-weighted importance. 2. We propose an HRL method, where the option policies are optimized based on the deterministic policy gradient and the gating policy selects the option that maximizes the expected return. The experimental results show that our proposed method adInfoHRL can learn a diversity of options on continuous control tasks. Moreover, our approach can improve the performance of TD3 on such tasks as the Walker2d and Ant tasks in OpenAI Gym with MuJoco simulator. 2 BACKGROUND In this section, we formulate the problem of HRL in this paper and describe methods related to our proposal. 2.1 HIERARCHICAL REINFORCEMENT LEARNING We consider tasks that can be modeled as a Markov decision process (MDP), consisting of a state space S, an action space A, a reward function r : S × A 7→ R, an initial state distribution ρ(s0), and a transition probability p(st+1|st,at) that defines the probability of transitioning from state st and action at at time t to next state st+1. The return is defined as Rt = ∑T i=t γ i−tr(si,ai), where γ is a discount factor, and policy π(a|s) is defined as the density of action a given state s. Let dπ(s) = ∑T t=0 γ tp(st = s) denote the discounted visitation frequency induced by the policy π. The goal of reinforcement learning is to learn a policy that maximizes the expected return J(π) = Es0,a0,...[R0] where s0 ∼ ρ(s0),a ∼ π and st+1 ∼ p(st+1|st,at). By defining the Q-function as Qπ(s,a) = Es0,a0,...[Rt|st = s,at = a], the objective function of reinforcement learning can be rewritten as follows: J(π) = ∫∫ dπ(s)π(a|s)Qπ(s,a)dads. (1) Herein, we consider hierarchical policy π(a|s) = ∑ o∈O π(o|s)π(a|s, o), where o is the latent variable andO is the set of possible values of o. Many existing HRL methods employ a policy structure of this form (Frans et al., 2018; Vezhnevets et al., 2017; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016). In general, latent variable o can be discrete (Frans et al., 2018; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016; Osa & Sugiyama, 2018) or continuous (Vezhnevets et al., 2017). π(o|s) is often referred to as a gating policy (Daniel et al., 2016; Osa & Sugiyama, 2018), policy over options (Bacon et al., 2017), or manager (Vezhnevets et al., 2017). Likewise, π(a|s, o) is often referred to as an option policy (Osa & Sugiyama, 2018), sub-policy (Daniel et al., 2016), or worker (Vezhnevets et al., 2017). In HRL, the objective function is given by J(π) = ∫∫ dπ(s) ∑ o∈O π(o|s)π(a|s, o)Qπ(s,a)dads. (2) As discussed in the literature on inverse RL (Ziebart, 2010), multiple policies can yield equivalent expected returns. This indicates that there exist multiple solutions to latent variable o that maximizes the expected return. To obtain the preferable solution for o, we need to impose additional constraints in HRL. 
Although prior work has employed regularizers (Bacon et al., 2017) and constraints (Daniel et al., 2016) to obtain various option policies, the method of learning a good latent variable o that improves sample-efficiency of the learning process remains unclear. In this study we propose the learning of the latent variable by maximizing MI between latent variables and state-action pairs. 2.2 DETERMINISTIC POLICY GRADIENT The deterministic policy gradient (DPG) algorithm was developed for learning a monolithic deterministic policy µθ(s) : S 7→ A by Silver et al. (2014). In off-policy RL, the objective is to maximize the expectation of the return, averaged over the state distribution induced by a behavior policy β(a|s): J(π) = ∫∫ dβ(s)π(a|s)Qπ ( s,a)dads. (3) When a policy is deterministic, the objective becomes J(π) = ∫ dβ(s)Qπ ( s,µθ(s) ) ds. Silver et al. (2014) have shown that the gradient of a deterministic policy is given by ∇θEs∼dβ(s)[Qπ(s,a)] = Es∼dβ(s) [ ∇θµθ(s)∇aQπ ( s,a ) |a=µθ(s) ] . (4) The DPG algorithm has been extended to the deep deterministic policy gradient (DDPG) for continuous control problems that require neural network policies (Lillicrap et al., 2016). Twin Delayed Deep Deterministic policy gradient algorithm (TD3) proposed by Fujimoto et al. (2018) is a variant of DDPG that outperforms the state-of-the-art on-policy methods such as TRPO (Schulman et al., 2017a) and PPO (Schulman et al., 2017b) in certain domains. We extend this deterministic policy gradient to learn a hierarchical policy. 2.3 REPRESENTATION LEARNING VIA INFORMATION MAXIMIZATION Recent studies such as those by Chen et al. (2016); Hu et al. (2017); Li et al. (2017) have shown that an interpretable representation can be learned by maximizing MI. Given a datasetX = (x1, ...,xn), regularized information maximization (RIM) proposed by Gomes et al. (2010) involves learning a conditional model p̂(y|x;η) with parameter vector η that predicts a label y. The objective of RIM is to minimize `(η)− λIη(x, y), (5) where `(η) is the regularization term, Iη(x, y) is MI, and λ is a coefficient. MI can be decomposed as Iη(x, y) = H(y)−H(y|x) where H(y) is entropy and H(y|x) the conditional entropy. Increasing H(y) conduces the label to be uniformly distributed, and decreasing H(y|x) conduces to clear cluster assignments. Although RIM was originally developed for unsupervised clustering problems, the concept is applicable to various problems that require learning a hidden discrete representation. In this study, we formulate the problem of learning the latent variable o of a hierarchical policy as one of learning a latent representation of the state-action space. 3 LEARNING OPTIONS VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION In this section, we propose a novel HRL method based on advantage-weighted information maximization. We first introduce the latent representation learning via advantage-weighted information maximization, and we then describe the HRL framework based on deterministic option policies. 3.1 LATENT REPRESENTATION LEARNING VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION Although prior work has often considered H(o|s) or I(s, o), which results in a division of the state space, we are interested in using I ( (s,a), o ) for dividing the state-action space instead. A schematic sketch of our approach is shown in Figure 1. As shown in the left side of Figure 1, the advantage function often has multiple modes. Ideally, each option policies should correspond to separate modes of the advantage function. 
However, it is non-trivial to find the modes of the advantage function in practice. For this purpose, we reduce the problem of finding modes of the advantage function to that of finding the modes of the probability density of state action pairs. We consider a policy based on the advantage function of the form πAd(a|s) = f ( Aπ(s,a) ) Z , (6) where Aπ(s,a) = Qπ(s,a) − V π(s) is the advantage function, V π(s) is the state value function, and Z is the partition function. f(·) is a functional, which is a function of a function. f(·) is a monotonically increasing function with respect to the input variable and always satisfies f(·) > 0. In our implementation we used the exponential function f(·) = exp(·). When following such a policy, an action with the larger advantage is drawn with a higher probability. Under this assumption, finding the modes of the advantage function is equivalent to finding modes of the density induced by πAd. Thus, finding the modes of the advantage function can be reduced to the problem of clustering samples induced by πAd. Following the formulation of RIM introduced in Section 2.3, we formulate the problem of clustering samples induced by πAd as the learning of discrete representations via MI maximization. For this purpose, we consider a neural network that estimates p(o|s,a;η) parameterized with vector η, which we refer to as the option network. We formulate the learning of the latent variable o as minimizing Loption(η) = `(η)− λI ( o, (s,a);η ) , (7) where I(o, (s,a)) = Ĥ(o|s,a;η) − Ĥ(o;η), and `(η) is the regularization term. In practice, we need to approximate the advantage function, and we learn the discrete variable o that corresponds to the modes of the current estimate of the advantage function. For regularization, we used a simplified version of virtual adversarial training (VAT) proposed by Miyato et al. (2016). Namely, we set `(η) = DKL ( p(o|snoise,anoise;η)||p(o|s,a;η) ) where snoise = s+ s, anoise = a+ a, s and a denote white noise. This regularization term penalizes dissimilarity between an original state-action pair and a perturbed one, and Hu et al. (2017) empirically show that this regularization improves the performance of learning latent discrete representations. When computing MI, we need to compute p(o) and H(o|s,a) given by p(o) = ∫ pπAd(s,a)p(o|s,a;η)dads = E(s,a)∼pπAd (s,a) [p(o|s,a;η)] (8) H(o|s,a) = E(s,a)∼pπAd (s,a) [p(o|s,a;η) log p(o|s,a;η)] . (9) Thus, the probability density of (s,a) induced by πAd is necessary for computing MI for our purpose. To estimate the probability density of (s,a) induced by πAd, we introduce the advantageweighted importance in the next section. 3.2 IMPORTANCE WEIGHTS FOR MUTUAL INFORMATION ESTIMATION Although we show that the problem of finding the modes of the advantage function can be reduced to MI maximization with respect to the samples induced by πAd, samples induced by πAd are not available in practice. While those induced during the learning process are available, a discrete representation obtained from such samples does not correspond to the modes of the advantage function. To estimate the density induced by πAd, we employ an importance sampling approach. We assume that the change of the state distribution induced by the policy update is sufficiently small, namely, dπAd(s) ≈ dβ(s). Then, the importance weight can be approximated as W (s,a) = pπAd(s,a) pβ(s,a) = dπAd(s)πAd(a|s) dβ(s)β(a|s) ≈ πAd(a|s) β(a|s) = f(A(s,a)) Zβ(a|s) . 
(10) and the normalized importance weight is given gy W̃ (s,a) = W (s,a)∑N j=1W (sj ,aj) = f(A(s,a)) Zβ(a|s)∑N j=1 f(A(sj ,aj)) Zβ(aj |sj) = f(A(s,a)) β(a|s)∑N j=1 f(A(sj ,aj)) β(aj |sj) . (11) As the partition function Z is canceled, we do not need to compute Z when computing the importance weight in practice. We call this importance weightW the advantage-weighted importance and employ it to compute the objective function used to estimate the latent variable. This advantage-weighted importance is used to compute the entropy terms for computing MI in Equation (7). The empirical estimate of the entropy H(o) is given by Ĥ(o;η) = − ∑ o∈O p̂(o;η) log p̂(o;η),where p̂(o;η) = 1 N N∑ i=1 W (si,ai)p(o|si,ai;η). (12) where the samples (si,ai) are drawn from pβ(s,a) induced by a behavior policy β(a|s). Likewise, the empirical estimate of the conditional entropy H(o|s,a) is given by Ĥ(o|s,a;η) = 1 N N∑ i W (si,ai)p(o|si,ai;η) log p(o|si,ai;η). (13) The derivations of Equations (12) and (13) are provided in Appendix A. To train the option network, we store the samples collected by the M most recent behavior policies, to which we refer as onpolicy buffer Don. Although the algorithm works with entire samples stored in the replay buffer, we observe that the use of the on-policy buffer for latent representation learning exhibits better performance. For this reason, we decided to use the on-policy buffer in our implementation. Therefore, while the algorithm is off-policy in the sense that the option is learned from samples collected by behavior policies, our implementation is “semi”on-policy in the sense that we use samples collected by the most recent behavior policies. 4 HRL OBJECTIVE WITH DETERMINISTIC OPTION POLICIES Instead of stochastic option policies, we consider deterministic option policies and model them using separate neural networks. We denote by π(a|s, o) = µoθ(s) deterministic option policies parameterized by vector θ. The objective function of off-policy HRL with deterministic option policies can then be obtained by replacing π(a|s) with ∑ o∈O π(o|s)π(a|s, o) in Equation (3): J(w,θ) = ∫ dβ(s) ∑ o∈O π(o|s)Qπ ( s,µoθ(s);w ) ds, (14) where Qπ(s,a;w) is an approximated Q-function parameterized using vector w. This form of the objective function is analogous to Equation (3). Thus, we can extend standard RL techniques to the learning of the gating policy π(o|s) in HRL with deterministic option policies. In HRL, the goal of the gating policy is to generate a value of o that maximizes the conditional expectation of the return: QπΩ(s, o) = E [R|st = s, ot = o] = ∫ π(a|s, o)Qπ(s,a)da, (15) which is often referred to as the option-value function (Sutton et al., 1999). When option policies are stochastic, it is often necessary to approximate the option-value function QπΩ(s, o) in addition to the action-value function Qπ(s,a). 
However, in our case, the option-value function for deterministic option policies is given by QπΩ(s, o) = Q π(s,µoθ(s)), (16) Algorithm 1 HRL via Advantage-Weighted Information Maximization (adInfoHRL) Input: Number of options O, size of on-policy buffer Initialize: Replay buffer DR, on-policy buffer Don, network parameters η, θ, w, θtarget, wtarget repeat for t = 0 to t = T do Draw an option for a given s by following Equation 17: o ∼ π(o|s) Draw an action a ∼ β(a|s, o) = µoθ(s) + Record a data sample (s,a, r, s′) Aggregate the data in DR and Don if the on-policy buffer is full then Update the option network by minimizing Equation (7) for samples in Don Clear the on-policy buffer Don end if Sample a batch Dbatch ∈ DR Update the Q network parameter w if t mod d then Estimate p(o|si,ai) for (si,ai) ∈ Dbatch using the option network Assign samples (si,ai) ∈ Dbatch to the option o∗ = argmax p(o|si,ai) Update the option policy networks µoθ(s) for o = 1, ..., O with Equation (19) Update the target networks: wtarget ← τw+(1−τ)wtarget, θtarget ← τθ+(1−τ)θtarget end if end for until the convergence return θ which we can estimate using the deterministic option policy µoθ(s) and the approximated actionvalue function Qπ(s,a;w). In this work we employ the softmax gating policy of the form π(o|s) = exp ( Qπ(s,µoθ(s);w) )∑ o∈O exp ( Qπ ( s,µoθ(s);w )) , (17) which encodes the exploration in its form (Daniel et al., 2016). The state value function is given as V π(s) = ∑ o∈O π(o|s)Qπ(s,µoθ(s);w), (18) which can be computed using Equation (17). We use this state-value function when computing the advantage-weighted importance as A(s,a) = Q(s,a) − V (s). In this study, the Q-function is trained in a manner proposed by Fujimoto et al. (2018). Two neural networks (Qπw1 , Q π w2) are trained to estimate the Q-function, and the target value of the Q-function is computed as yi = ri + γmin1,2Q(si,ai) for sample (si,ai,a′i, ri) in a batch sampled from a replay buffer, where ri = r(si,ai). In this study, the gating policy determines the option once every N time steps, i.e., t = 0, N, 2N, . . . Neural networks that model µoθ(a|s) for o = 1, ..., O, which we refer to as option-policy networks, are trained separately for each option. In the learning phase, p(o|s,a) is estimated by the option network. Then, samples are assigned to option o∗ = argmaxo p(o|s,a;η) and are used to update the option-policy network that corresponds to o∗. When performing a rollout, o is drawn by following the gating policy in Equation (17), and an action is generated by the selected option-policy network. Differentiating the objective function in Equation (14), we obtain the deterministic policy gradient of our option-policy µoθ(s) given by ∇θJ(w,θ) = Es∼dβ(s)π(o|s) [ ∇θµoθ(s)∇aQπ ( s,a ) |a=µoθ(s) ] . (19) The procedure of adInfoHRL is summarized by Algorithm 1. As in TD3 (Fujimoto et al., 2018), we employed the soft update using a target value network and a target policy network. 5 EXPERIMENTS We evaluated the proposed algorithm adInfoHRL on the OpenAI Gym platform (Brockman et al., 2016) with the MuJoCo Physics simulator (Todorov et al., 2012). We compared its performance with that of PPO implemented in OpenAI baselines (Dhariwal et al., 2017) and TD3. Henderson et al. (2018) have recently claimed that algorithm performance varies across environment, there is thus no clearly best method for all benchmark environments, and off-policy and on-policy methods have advantages in different problem domains. 
To analyze the performance of adInfoHRL, we compared it with state-of-the-art algorithms for both on-policy and off-policy methods, although we focused on the comparison with TD3, as our implementation of adInfoHRL is based on it. To determine the effect of learning the latent variable via information maximization, we used the same network architectures for the actor and critic in adInfoHRL and TD3. In addition, to evaluate the benefit of the advantage-weighted importance, we evaluated a variant of adInfoHRL, which does not use the advantage-weighted importance for computing mutual information.We refer to this variant of adInfoHRL as infoHRL. The gating policy updated variable o once every three time steps. We tested the performance of adInfoHRL with two and four options. The activation of options over time and snapshots of the learned option policies on the Walker2d task are shown in Figure 2, which visualizes the result from adInfoHRL with four options. One can see that the option policies are activated in different phases of locomotion. While the option indicated by yellow in Figure 2 corresponds to the phase for kicking the floor, the option indicated by blue corresponds to the phase when the agent was on the fly. Visualization of the options learned on the HalfCheetah and Ant tasks are shown in Appendix D. The averaged return of five trials is reported in Figure 3(a)-(d). AdIfoHRL yields the best performance on Ant1 and Walker2d, whereas the performance of TD3 and adInfoHRL was comparable on HalfCheetah and Hopper, and PPO outperformed the other methods on Hopper. Henderson et al. (2018) claimed that on-policy methods show their superiority on tasks with unstable dynamics, and our experimental results are in line with such previous studies. AdinfoHRL outperformed infoHRL, which isthe variant of adInfoHRL without the advantage-weighted importance on all the tasks. This result shows that the adavatage-weighted importance enhanced the performance of learning options. AdInfoHRL exhibited the sample efficiency on Ant and Walker2d in the sense that it required fewer samples than TD3 to achieve comparable performance on those tasks. The concept underlying adInfoHRL is to divide the state-action space to deal with the multi-modal advantage function and learn option policies corresponding to separate modes of the advantage function. Therefore, adInfoHRL shows its superiority on tasks with the multi-modal advantage function and not on tasks with a simple advantage function. Thus, it is natural that the benefit of adInfoHRL is dependent on the characteristics of the task. The outputs of the option network and the activation of options on Walker2d are shown in Figure 3(e)-(f), which visualize the result from adInfoHRL with four options. For visualization, the dimensionality was reduced using t-SNE (van der Maaten & Hinton, 2008). The state-action space 1We report the result on the Ant task implemented in rllab (Duan et al., 2016) instead of Ant-v1 implemented in the OpenAI gym, since the Ant task in the rllab is known to be harder than the Ant-v1 in the OpenAI gym. Results on Ant-v1 in the OpenAI gym is reported in Appendix D. is clearly divided into separate domains in Figure 3(e). As shown in Figure 3(f), the options are activated in different domains of the state space, which indicates that diverse options are learned by adInfoHRL. 6 RELATED WORK AND DISCUSSION Past studies have proposed several ways to deal with the latent variable in HRL. The recent work by Smith et al. 
(2018) proposed inferred option policy gradients (IOPG), which is derived as an extension of policy gradient to the option framework. Nachum et al. (2018) recently proposed off-policy target correction for HRL on goal-oriented tasks, where a higher-level policy instructs a lower-level policy by generating the goal signal instead of an inferred latent variable. A popular approach for learning the latent variable in HRL is the variational approach. The recent work by Haarnoja et al. (2018a) is based on soft actor critic (Haarnoja et al., 2018b), and the latent variable is inferred using the variational approach. The work by Hausman et al. (2018) is also closely related to the variational approach, and they proposed a method for learning a latent variable of a hierarchical policy via a variational bound. On the contrary, our method learns the latent variable by maximizing MI with advantage-weighted importance. Recent studies by Gregor et al. (2016); Florensa et al. (2017); Eysenbach et al. (2018) also considered the MI in their formulation. In these methods, MI between the state and the latent variable is considered so as to obtain diverse behaviors. Our approach is different from the previous studies in the sense that we employ MI between the latent variable and the state-action pairs, which leads to the division of the state-action space instead of considering only the state space. We think that dividing the state-action space is an efficient approach when the advantage function is multi-modal, as depicted in Figure 1. InfoGAIL proposed by Li et al. (2017) learns the interpretable representation of the state-action space via MI maximization. InfoGAIL can be interpreted as a method that divides the state-action space based on the density induced by an expert’s policy by maximizing the regularized MI objective. In this sense, it is closely related to our method, although their problem setting is imitation learning (Osa et al., 2018a), which is different from our HRL problem setting. The use of the importance weight based on the value function has appeared in previous studies (Dayan & Hinton, 1997; Kober & Peters, 2011; Neumann & Peters, 2009; Osa & Sugiyama, 2018). For example, the method proposed by Neumann & Peters (2009) employs the importance weight based on the advantage function for learning a monolithic policy, while our method uses a similar importance weight for learning a latent variable of a hierarchical policy. Although Osa & Sugiyama (2018) proposed to learn a latent variable in HRL with importance sampling, their method is limited to episodic settings where only a single option is used in an episode. Our method can be interpreted as an approach that divides the state-action space based on the MI criterion. This concept is related to that of Divide and Conquer (DnC) proposed by Ghosh et al. (2018), although DnC clusters the initial states and does not consider switching between option policies during the execution of a single trajectory. In this study we developed adInfoHRL based on deterministic option policies. However, the concept of dividing the state-action space via advantage-weighted importance can be applied to stochastic policy gradients as well. Further investigation in this direction is necessary in future work. 7 CONCLUSIONS We proposed a novel HRL method, hierarchical reinforcement learning via advantage-weighted information maximization. 
In our framework, the latent variable of a hierarchical policy is learned as a discrete latent representation of the state-action space. Our HRL framework is derived by considering deterministic option policies and by leveraging the analogy between the gating policy for HRL and a monolithic policy for standard RL. The results of the experiments indicate that adInfoHRL can learn diverse options on continuous control tasks. Our results also suggested that our approach can improve the performance of TD3 in certain problem domains. ACKNOWLEDGMENTS MS was partially supported by KAKENHI 17H00757. A MUTUAL INFORMATION WITH ADVANTAGE-WEIGHTED IMPORTANCE The mutual information (MI) between the latent variable o and the state-action pair (s,a) is defined as I ( (s,a), o ) = H(o)−H(o|s,a) (20) where H(o) = ∫ p(o) log p(o)do and H(o|s,a) = ∫ p(o|s,a) log p(o|s,a)do. We take the empirical estimate of MI employed by Gomes et al. (2010) and Hu et al. (2017) and modify it to employ the importance weight. The empirical estimate of MI with respect to the density induced by a policy π is given by Î(s,a; o) = ∑ o∈O p̂(o) log p̂(o)− Ĥ(o|s,a). (21) We consider the case where we have samples collected by a behavior policy β(a|s) and need to estimate MI with respect to the density induced by policy π. Given a model p(o|s,a;η) parameterized by vector η, p(o) can be rewritten as p(o) = ∫ pβ(s,a) pπ(s,a) pβ(s,a) p(o|s,a;η)dads = E [W (s,a)p(o|s,a;η)] , (22) where W (s,a) = p π(s,a) pβ(s,a) is the importance weight. Therefore, the empirical estimate of p(o) with respect to the density induced by a policy π is given by p̂(o) = 1 N N∑ i=1 W̃ (si,ai)p(o|si,ai;η), (23) where W̃ (s,a) = W (s,a)∑N j=1W (sj ,aj) is the normalized importance weight. Likewise, the conditional entropy with respect to the density induced by a policy π is given by H(o|s,a) = ∫ pπ(s,a)p(o|s,a;η) log p(o|s,a;η)dsda (24) = ∫ pβ(s,a) pπ(s,a) pβ(s,a) p(o|s,a;η) log p(o|s,a;η)dsda (25) = E [W (s,a)p(o|s,a;η) log p(o|s,a;η)] . (26) Therefore, the empirical estimate of the conditional entropy with respect to the density induced by a policy π is given by Ĥ(o|s,a) = 1 N N∑ i=1 W (si,ai)p(o|si,ai;η) log p(o|si,ai;η). (27) Thus, the empirical estimates of MI can be computed by Equations (21), (23) and (27). B DERIVATION OF THE STATE-VALUE FUNCTION In HRL, the value function is given by V (s) = ∫ ∑ o∈O π(o|s)π(a|s, o)Qπ(s,a)da = ∑ o∈O π(o|s) ∫ π(a|s, o)Qπ(s,a)da (28) Since the option policies are deterministic and given by µoθ(s), the state-value function is given by V (s) = ∑ o∈O π(o|s)Qπ(s,µoθ(s)). (29) C EXPERIMENTAL DETAILS We performed evaluations using benchmark tasks in the OpenAI Gym platform (Brockman et al., 2016) with the MuJoCo physics simulator (Todorov et al., 2012). Hyperparameters of the reinforcement learning methods used in the experiment are shown in Tables 1-3. For exploration, both adInfoHRL and TD3 used clipped noise drawn from the normal distribution as ε ∼ clip ( N (0, σ),−c, c ) , where σ = 0.2 and c = 0.5. For the hyperparameters of PPO, we used the default values in OpenAI baselines (Dhariwal et al., 2017). For the Walker2d, HalfCheetah, and Hopper tasks, we used Walker2d-v1, HalfCheetah-v1, and Hopper-v1 in the OpenAI Gym, respectively. For the Ant task, we used the AntEnv implemented in rllab (Duan et al., 2016). When training a policy with adInfoHRL, infoHRL, and TD3, critics are trained once per time step, and actors are trained once after every two updates of the critics. 
The source code is available at https://github.com/TakaOsa/adInfoHRL. We performed the experiments five times with different seeds, and reported the averaged test return, where the test return was computed once every 5000 time steps by executing 10 episodes without exploration. When executing the learned policy without exploration, the option was drawn as o = argmax o′ Qπ(s,µo′(s)), (30) instead of following the stochastic gating policy in Equation (17). D ADDITIONAL INFORMATION ON EXPERIMENTAL RESULTS On the HalfCheetah task, adInfoHRL delivered the best performance with two options. The distribution of options on HalfCheetah-v1 after one million steps is shown in Figure 4. Although the state-action space is evenly divided, the options are not evenly activated. This behavior can occur because the state-action space is divided based on the density induced by the behavior policy, while the activation of options is determined based on the quality of the option policies in a given state. Moreover, an even division of the state-action space is not necessarily an even division of the state space. The activation of the options over time is shown in Figure 5. It is clear that one of the options corresponds to the stable running phase and the other corresponds to the phase for recovering from unstable states. The distribution of four options on the Ant-rllab task after one million steps is shown in Figure 6. Four options are activated in different domains of the state space. The activation of the options over time on the Ant-rllab task is shown in Figure 7. While four options are actively used at the beginning of the episode, two (blue and yellow) options are mainly activated during stable locomotion. Since the Ant task implemented in rllab is known to be harder than Ant-v1 implemented in the OpenAI gym, we reported the result of the Ant task in rllab in the main manuscript. Here, we report the result of the Ant-v1 task implemented in the OpenAI gym. On the Ant-v1 task, adInfoHRL yielded the best performance with two options. The performance of adInfoHRL with two options is comparable to that of TD3 on Ant-v1. This result indicates that the Ant-v1 task does not require a hierarchical policy structure, while a hierarchical policy improves the performance of learning on Ant-rllab. The distribution of options on the Ant-v1 task after one million steps is shown in Figure 8. The activation of the options over time is shown in Figure 9. It is evident that the two option policies on the Ant-v1 task corresponded to different postures of the agent. A recent study on HRL by Smith et al. (2018) reported the performance of IOPG on Walker2d-v1, Hopper-v1, and HalfCheetah-v1. The study by Haarnoja et al. (2018a) reported the performance of SAC-LSP on Walker2d-v1, Hopper-v1, HalfCheetah-v1, and Ant-rllab. A comparison of performance between our method, IOPG, and SAC-LSP is summarized in Table 4. We report the performance after 1 million steps. It is worth noting that adInfoHRL outperformed IOPG on these tasks in terms of the achieved return, although we are aware that the qualitative performance is also important in HRL. AdInfoHRL outperformed SAC-LSP on Walker2d-v1 and Ant-rllab, and SAC-LSP shows its superiority on HalfCheetah-v1 and Hopper-v1. However, the results of SAC-LSP were obtained by using reward scaling, which was not used in the evaluation of adInfoHRL. Therefore, further experiments are necessary for fair comparison under the same condition.
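To make the estimators in Appendix A concrete, the following is a minimal sketch, not the authors' released code, of how the advantage-weighted importance weights and the empirical MI estimate of Equations (11)-(13) and (21), (23), (27) could be computed for a batch of samples. It assumes that per-sample advantage estimates, behavior-policy densities β(a|s), and option-network posteriors p(o|s,a;η) are already available as arrays; the function and variable names are illustrative, and f(·) = exp(·) as in the paper.

```python
# Illustrative sketch of the advantage-weighted empirical MI estimate.
import numpy as np

def advantage_weighted_mi(advantages, behavior_density, option_posterior, eps=1e-8):
    # advantages:       (N,)   estimated A(s_i, a_i)
    # behavior_density: (N,)   beta(a_i | s_i) evaluated on the batch samples
    # option_posterior: (N, O) p(o | s_i, a_i; eta) from the option network
    # Unnormalized weights W ∝ exp(A) / beta (Equation (10) with f = exp);
    # shifting A by its maximum only rescales W and cancels after normalization,
    # as does the partition function Z (Equation (11)).
    w = np.exp(advantages - advantages.max()) / (behavior_density + eps)
    w_tilde = w / w.sum()

    # p_hat(o) = sum_i w_tilde_i * p(o | s_i, a_i; eta)          (Equations (12)/(23))
    p_o = (w_tilde[:, None] * option_posterior).sum(axis=0)
    h_o = -np.sum(p_o * np.log(p_o + eps))                       # entropy H_hat(o)

    # Importance-weighted conditional entropy H_hat(o | s, a)    (Equations (13)/(27));
    # the sum over o and the signs follow the usual entropy convention.
    h_o_given_sa = -np.sum(w_tilde[:, None] * option_posterior
                           * np.log(option_posterior + eps))

    return h_o - h_o_given_sa                                    # I_hat = H(o) - H(o|s,a)
```

Because the self-normalized form of the weights is used, the partition function Z never needs to be evaluated explicitly.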
1. What is the focus of the paper, and what are the contributions of the proposed HRL system?
2. How does the method optimize mutual information, and what is the theoretical justification for this approach?
3. What are the strengths and weaknesses of the experimental results, and how do they relate to the paper's claims?
4. Are there any concerns regarding the paper's terminology, clarity, and precision?
5. What are some potential ways to improve the paper, such as additional ablation studies or exploring more challenging control problems?
Review
Review Revision: The authors addressed most of my concerns and clearly put in effort to improve the paper. The paper explains the central idea better, is more precise in terminology in general, and the additional ablation gives more insight into the relative importance of the advantage weighting. I still think that the results are a bit limited in scope but the idea is interesting and seems to work for the tasks in the paper. I adjusted my score to reflect this. Summary: The paper proposes an HRL system in which the mutual information of the latent (option) variable and the state-action pairs is approximately maximized. To approximate the mutual information term, samples are reweighted based on their estimated advantage. TD3 is used to optimize the modules of the system. The system is evaluated on continuous control task from OpenAI gym and rllab. For the most part, the paper is well-written and it provides a good overview of related work and relevant terminology. The experiments seem sound even though the results are not that impressive. The extra analysis of the option space and temporal distribution is interesting. Some parts of the theoretical justification for the method are not entirely clear to me and would benefit from some clarification. Most importantly, it is not clear to me why the policy in Equation 7 is considered to be optimal. Given some value or advantage function, the optimal policy would be the one that picks the action that maximizes it. The authors refer to earlier work in which similar equations are used, but in those papers this is typically in the context of some entropy maximizing penalty or KL constraint. A temperature parameter would also influence the exploration-exploitation trade-off in this ‘optimal’ policy. I understand that the rough intuition is to take actions with higher advantage more often while still being stochastic and exploring but the motivation could be more precise given that most of the subsequent arguments are built on top of it. However, this is not the policy that is used to generate behavior. In short, the paper is clear enough about how the method is constructed but it is not very clear to me *why* the mutual information should be optimized with respect to this 'optimal' policy instead of the actual policy one is generating trajectories from. HRL is an interesting area of research with the potential to learn complicated behaviors. However, it is currently not clear how to evaluate the importance/usefulness of hierarchical RL systems directly and the tasks in the paper are still solvable by standard systems. That said, the occasional increase in sample efficiency over plain TD3 looks promising. It is somewhat disappointing that the number of beneficial option is generally so low. To get more insight in the methods it would have been nice to see a more systematic ablation of related methods with different mutual information pairings (action or state only) and without the advantage weighting. Could it be that the number of options has to remain limited because there is no parameter sharing between them? It would be interesting to see results on more challenging control problems where the hypothesized multi-modal advantage structure is more likely to be present. All in all I think that this is an interesting paper but the foundations of the theoretical motivation need a bit more clarification. In addition, experiments on more challenging problems and a more systematic comparison with similar models would make this a much stronger paper. 
Minor issues/typos:
- Contributions 2 and 3 have a lot of overlap.
- The ‘o’ in Equation 2 should not be bold font.
- Appendix A. Shouldn’t there be summations over ‘o’ in the entropy definitions?
ICLR
Title Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization Abstract Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks. 1 INTRODUCTION Reinforcement learning (RL) has been successfully applied to a variety of tasks, including board games (Silver et al., 2016), robotic manipulation tasks (Levine et al., 2016), and video games (Mnih et al., 2015). Hierarchical reinforcement learning (HRL) is a type of RL that leverages the hierarchical structure of a given task by learning a hierarchical policy (Sutton et al., 1999; Dietterich, 2000). Past studies in this field have shown that HRL can solve challenging tasks in the video game domain (Vezhnevets et al., 2017; Bacon et al., 2017) and robotic manipulation (Daniel et al., 2016; Osa et al., 2018b). In HRL, lower-level policies, which are often referred to as option policies, learn different behavior/control patterns, and the upper-level policy, which is often referred to as the gating policy, learns to select option policies. Recent studies have developed HRL methods using deep learning (Goodfellow et al., 2016) and have shown that HRL can yield impressive performance for complex tasks (Bacon et al., 2017; Frans et al., 2018; Vezhnevets et al., 2017; Haarnoja et al., 2018a). However, identifying the hierarchical policy structure that yields efficient learning is not a trivial task, since the problem involves learning a sufficient variety of types of behavior to solve a given task. In this study, we present an HRL method via the mutual information (MI) maximization with advantage-weighted importance, which we refer to as adInfoHRL. We formulate the problem of learning a latent variable in a hierarchical policy as one of learning discrete and interpretable repre- sentations of states and actions. Ideally, each option policy should be located at separate modes of the advantage function. To estimate the latent variable that corresponds to modes of the advantage function, we introduce advantage-weighted importance weights. Our approach can be considered to divide the state-action space based on an information maximization criterion, and it learns option policies corresponding to each region of the state-action space. 
We derive adInfoHRL as an HRL method based on deterministic option policies that are trained based on an extension of the deterministic policy gradient (Silver et al., 2014; Fujimoto et al., 2018). The contributions of this paper are twofold: 1. We propose the learning of a latent variable of a hierarchical policy as a discrete and hidden representation of the state-action space. To learn option policies that correspond to the modes of the advantage function, we introduce advantage-weighted importance. 2. We propose an HRL method, where the option policies are optimized based on the deterministic policy gradient and the gating policy selects the option that maximizes the expected return. The experimental results show that our proposed method adInfoHRL can learn a diversity of options on continuous control tasks. Moreover, our approach can improve the performance of TD3 on such tasks as the Walker2d and Ant tasks in OpenAI Gym with MuJoco simulator. 2 BACKGROUND In this section, we formulate the problem of HRL in this paper and describe methods related to our proposal. 2.1 HIERARCHICAL REINFORCEMENT LEARNING We consider tasks that can be modeled as a Markov decision process (MDP), consisting of a state space S, an action space A, a reward function r : S × A 7→ R, an initial state distribution ρ(s0), and a transition probability p(st+1|st,at) that defines the probability of transitioning from state st and action at at time t to next state st+1. The return is defined as Rt = ∑T i=t γ i−tr(si,ai), where γ is a discount factor, and policy π(a|s) is defined as the density of action a given state s. Let dπ(s) = ∑T t=0 γ tp(st = s) denote the discounted visitation frequency induced by the policy π. The goal of reinforcement learning is to learn a policy that maximizes the expected return J(π) = Es0,a0,...[R0] where s0 ∼ ρ(s0),a ∼ π and st+1 ∼ p(st+1|st,at). By defining the Q-function as Qπ(s,a) = Es0,a0,...[Rt|st = s,at = a], the objective function of reinforcement learning can be rewritten as follows: J(π) = ∫∫ dπ(s)π(a|s)Qπ(s,a)dads. (1) Herein, we consider hierarchical policy π(a|s) = ∑ o∈O π(o|s)π(a|s, o), where o is the latent variable andO is the set of possible values of o. Many existing HRL methods employ a policy structure of this form (Frans et al., 2018; Vezhnevets et al., 2017; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016). In general, latent variable o can be discrete (Frans et al., 2018; Bacon et al., 2017; Florensa et al., 2017; Daniel et al., 2016; Osa & Sugiyama, 2018) or continuous (Vezhnevets et al., 2017). π(o|s) is often referred to as a gating policy (Daniel et al., 2016; Osa & Sugiyama, 2018), policy over options (Bacon et al., 2017), or manager (Vezhnevets et al., 2017). Likewise, π(a|s, o) is often referred to as an option policy (Osa & Sugiyama, 2018), sub-policy (Daniel et al., 2016), or worker (Vezhnevets et al., 2017). In HRL, the objective function is given by J(π) = ∫∫ dπ(s) ∑ o∈O π(o|s)π(a|s, o)Qπ(s,a)dads. (2) As discussed in the literature on inverse RL (Ziebart, 2010), multiple policies can yield equivalent expected returns. This indicates that there exist multiple solutions to latent variable o that maximizes the expected return. To obtain the preferable solution for o, we need to impose additional constraints in HRL. 
Although prior work has employed regularizers (Bacon et al., 2017) and constraints (Daniel et al., 2016) to obtain various option policies, the method of learning a good latent variable o that improves sample-efficiency of the learning process remains unclear. In this study we propose the learning of the latent variable by maximizing MI between latent variables and state-action pairs. 2.2 DETERMINISTIC POLICY GRADIENT The deterministic policy gradient (DPG) algorithm was developed for learning a monolithic deterministic policy µθ(s) : S 7→ A by Silver et al. (2014). In off-policy RL, the objective is to maximize the expectation of the return, averaged over the state distribution induced by a behavior policy β(a|s): J(π) = ∫∫ dβ(s)π(a|s)Qπ ( s,a)dads. (3) When a policy is deterministic, the objective becomes J(π) = ∫ dβ(s)Qπ ( s,µθ(s) ) ds. Silver et al. (2014) have shown that the gradient of a deterministic policy is given by ∇θEs∼dβ(s)[Qπ(s,a)] = Es∼dβ(s) [ ∇θµθ(s)∇aQπ ( s,a ) |a=µθ(s) ] . (4) The DPG algorithm has been extended to the deep deterministic policy gradient (DDPG) for continuous control problems that require neural network policies (Lillicrap et al., 2016). Twin Delayed Deep Deterministic policy gradient algorithm (TD3) proposed by Fujimoto et al. (2018) is a variant of DDPG that outperforms the state-of-the-art on-policy methods such as TRPO (Schulman et al., 2017a) and PPO (Schulman et al., 2017b) in certain domains. We extend this deterministic policy gradient to learn a hierarchical policy. 2.3 REPRESENTATION LEARNING VIA INFORMATION MAXIMIZATION Recent studies such as those by Chen et al. (2016); Hu et al. (2017); Li et al. (2017) have shown that an interpretable representation can be learned by maximizing MI. Given a datasetX = (x1, ...,xn), regularized information maximization (RIM) proposed by Gomes et al. (2010) involves learning a conditional model p̂(y|x;η) with parameter vector η that predicts a label y. The objective of RIM is to minimize `(η)− λIη(x, y), (5) where `(η) is the regularization term, Iη(x, y) is MI, and λ is a coefficient. MI can be decomposed as Iη(x, y) = H(y)−H(y|x) where H(y) is entropy and H(y|x) the conditional entropy. Increasing H(y) conduces the label to be uniformly distributed, and decreasing H(y|x) conduces to clear cluster assignments. Although RIM was originally developed for unsupervised clustering problems, the concept is applicable to various problems that require learning a hidden discrete representation. In this study, we formulate the problem of learning the latent variable o of a hierarchical policy as one of learning a latent representation of the state-action space. 3 LEARNING OPTIONS VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION In this section, we propose a novel HRL method based on advantage-weighted information maximization. We first introduce the latent representation learning via advantage-weighted information maximization, and we then describe the HRL framework based on deterministic option policies. 3.1 LATENT REPRESENTATION LEARNING VIA ADVANTAGE-WEIGHTED INFORMATION MAXIMIZATION Although prior work has often considered H(o|s) or I(s, o), which results in a division of the state space, we are interested in using I ( (s,a), o ) for dividing the state-action space instead. A schematic sketch of our approach is shown in Figure 1. As shown in the left side of Figure 1, the advantage function often has multiple modes. Ideally, each option policies should correspond to separate modes of the advantage function. 
However, it is non-trivial to find the modes of the advantage function in practice. For this purpose, we reduce the problem of finding modes of the advantage function to that of finding the modes of the probability density of state action pairs. We consider a policy based on the advantage function of the form πAd(a|s) = f ( Aπ(s,a) ) Z , (6) where Aπ(s,a) = Qπ(s,a) − V π(s) is the advantage function, V π(s) is the state value function, and Z is the partition function. f(·) is a functional, which is a function of a function. f(·) is a monotonically increasing function with respect to the input variable and always satisfies f(·) > 0. In our implementation we used the exponential function f(·) = exp(·). When following such a policy, an action with the larger advantage is drawn with a higher probability. Under this assumption, finding the modes of the advantage function is equivalent to finding modes of the density induced by πAd. Thus, finding the modes of the advantage function can be reduced to the problem of clustering samples induced by πAd. Following the formulation of RIM introduced in Section 2.3, we formulate the problem of clustering samples induced by πAd as the learning of discrete representations via MI maximization. For this purpose, we consider a neural network that estimates p(o|s,a;η) parameterized with vector η, which we refer to as the option network. We formulate the learning of the latent variable o as minimizing Loption(η) = `(η)− λI ( o, (s,a);η ) , (7) where I(o, (s,a)) = Ĥ(o|s,a;η) − Ĥ(o;η), and `(η) is the regularization term. In practice, we need to approximate the advantage function, and we learn the discrete variable o that corresponds to the modes of the current estimate of the advantage function. For regularization, we used a simplified version of virtual adversarial training (VAT) proposed by Miyato et al. (2016). Namely, we set `(η) = DKL ( p(o|snoise,anoise;η)||p(o|s,a;η) ) where snoise = s+ s, anoise = a+ a, s and a denote white noise. This regularization term penalizes dissimilarity between an original state-action pair and a perturbed one, and Hu et al. (2017) empirically show that this regularization improves the performance of learning latent discrete representations. When computing MI, we need to compute p(o) and H(o|s,a) given by p(o) = ∫ pπAd(s,a)p(o|s,a;η)dads = E(s,a)∼pπAd (s,a) [p(o|s,a;η)] (8) H(o|s,a) = E(s,a)∼pπAd (s,a) [p(o|s,a;η) log p(o|s,a;η)] . (9) Thus, the probability density of (s,a) induced by πAd is necessary for computing MI for our purpose. To estimate the probability density of (s,a) induced by πAd, we introduce the advantageweighted importance in the next section. 3.2 IMPORTANCE WEIGHTS FOR MUTUAL INFORMATION ESTIMATION Although we show that the problem of finding the modes of the advantage function can be reduced to MI maximization with respect to the samples induced by πAd, samples induced by πAd are not available in practice. While those induced during the learning process are available, a discrete representation obtained from such samples does not correspond to the modes of the advantage function. To estimate the density induced by πAd, we employ an importance sampling approach. We assume that the change of the state distribution induced by the policy update is sufficiently small, namely, dπAd(s) ≈ dβ(s). Then, the importance weight can be approximated as W (s,a) = pπAd(s,a) pβ(s,a) = dπAd(s)πAd(a|s) dβ(s)β(a|s) ≈ πAd(a|s) β(a|s) = f(A(s,a)) Zβ(a|s) . 
(10) and the normalized importance weight is given gy W̃ (s,a) = W (s,a)∑N j=1W (sj ,aj) = f(A(s,a)) Zβ(a|s)∑N j=1 f(A(sj ,aj)) Zβ(aj |sj) = f(A(s,a)) β(a|s)∑N j=1 f(A(sj ,aj)) β(aj |sj) . (11) As the partition function Z is canceled, we do not need to compute Z when computing the importance weight in practice. We call this importance weightW the advantage-weighted importance and employ it to compute the objective function used to estimate the latent variable. This advantage-weighted importance is used to compute the entropy terms for computing MI in Equation (7). The empirical estimate of the entropy H(o) is given by Ĥ(o;η) = − ∑ o∈O p̂(o;η) log p̂(o;η),where p̂(o;η) = 1 N N∑ i=1 W (si,ai)p(o|si,ai;η). (12) where the samples (si,ai) are drawn from pβ(s,a) induced by a behavior policy β(a|s). Likewise, the empirical estimate of the conditional entropy H(o|s,a) is given by Ĥ(o|s,a;η) = 1 N N∑ i W (si,ai)p(o|si,ai;η) log p(o|si,ai;η). (13) The derivations of Equations (12) and (13) are provided in Appendix A. To train the option network, we store the samples collected by the M most recent behavior policies, to which we refer as onpolicy buffer Don. Although the algorithm works with entire samples stored in the replay buffer, we observe that the use of the on-policy buffer for latent representation learning exhibits better performance. For this reason, we decided to use the on-policy buffer in our implementation. Therefore, while the algorithm is off-policy in the sense that the option is learned from samples collected by behavior policies, our implementation is “semi”on-policy in the sense that we use samples collected by the most recent behavior policies. 4 HRL OBJECTIVE WITH DETERMINISTIC OPTION POLICIES Instead of stochastic option policies, we consider deterministic option policies and model them using separate neural networks. We denote by π(a|s, o) = µoθ(s) deterministic option policies parameterized by vector θ. The objective function of off-policy HRL with deterministic option policies can then be obtained by replacing π(a|s) with ∑ o∈O π(o|s)π(a|s, o) in Equation (3): J(w,θ) = ∫ dβ(s) ∑ o∈O π(o|s)Qπ ( s,µoθ(s);w ) ds, (14) where Qπ(s,a;w) is an approximated Q-function parameterized using vector w. This form of the objective function is analogous to Equation (3). Thus, we can extend standard RL techniques to the learning of the gating policy π(o|s) in HRL with deterministic option policies. In HRL, the goal of the gating policy is to generate a value of o that maximizes the conditional expectation of the return: QπΩ(s, o) = E [R|st = s, ot = o] = ∫ π(a|s, o)Qπ(s,a)da, (15) which is often referred to as the option-value function (Sutton et al., 1999). When option policies are stochastic, it is often necessary to approximate the option-value function QπΩ(s, o) in addition to the action-value function Qπ(s,a). 
However, in our case, the option-value function for deterministic option policies is given by QπΩ(s, o) = Q π(s,µoθ(s)), (16) Algorithm 1 HRL via Advantage-Weighted Information Maximization (adInfoHRL) Input: Number of options O, size of on-policy buffer Initialize: Replay buffer DR, on-policy buffer Don, network parameters η, θ, w, θtarget, wtarget repeat for t = 0 to t = T do Draw an option for a given s by following Equation 17: o ∼ π(o|s) Draw an action a ∼ β(a|s, o) = µoθ(s) + Record a data sample (s,a, r, s′) Aggregate the data in DR and Don if the on-policy buffer is full then Update the option network by minimizing Equation (7) for samples in Don Clear the on-policy buffer Don end if Sample a batch Dbatch ∈ DR Update the Q network parameter w if t mod d then Estimate p(o|si,ai) for (si,ai) ∈ Dbatch using the option network Assign samples (si,ai) ∈ Dbatch to the option o∗ = argmax p(o|si,ai) Update the option policy networks µoθ(s) for o = 1, ..., O with Equation (19) Update the target networks: wtarget ← τw+(1−τ)wtarget, θtarget ← τθ+(1−τ)θtarget end if end for until the convergence return θ which we can estimate using the deterministic option policy µoθ(s) and the approximated actionvalue function Qπ(s,a;w). In this work we employ the softmax gating policy of the form π(o|s) = exp ( Qπ(s,µoθ(s);w) )∑ o∈O exp ( Qπ ( s,µoθ(s);w )) , (17) which encodes the exploration in its form (Daniel et al., 2016). The state value function is given as V π(s) = ∑ o∈O π(o|s)Qπ(s,µoθ(s);w), (18) which can be computed using Equation (17). We use this state-value function when computing the advantage-weighted importance as A(s,a) = Q(s,a) − V (s). In this study, the Q-function is trained in a manner proposed by Fujimoto et al. (2018). Two neural networks (Qπw1 , Q π w2) are trained to estimate the Q-function, and the target value of the Q-function is computed as yi = ri + γmin1,2Q(si,ai) for sample (si,ai,a′i, ri) in a batch sampled from a replay buffer, where ri = r(si,ai). In this study, the gating policy determines the option once every N time steps, i.e., t = 0, N, 2N, . . . Neural networks that model µoθ(a|s) for o = 1, ..., O, which we refer to as option-policy networks, are trained separately for each option. In the learning phase, p(o|s,a) is estimated by the option network. Then, samples are assigned to option o∗ = argmaxo p(o|s,a;η) and are used to update the option-policy network that corresponds to o∗. When performing a rollout, o is drawn by following the gating policy in Equation (17), and an action is generated by the selected option-policy network. Differentiating the objective function in Equation (14), we obtain the deterministic policy gradient of our option-policy µoθ(s) given by ∇θJ(w,θ) = Es∼dβ(s)π(o|s) [ ∇θµoθ(s)∇aQπ ( s,a ) |a=µoθ(s) ] . (19) The procedure of adInfoHRL is summarized by Algorithm 1. As in TD3 (Fujimoto et al., 2018), we employed the soft update using a target value network and a target policy network. 5 EXPERIMENTS We evaluated the proposed algorithm adInfoHRL on the OpenAI Gym platform (Brockman et al., 2016) with the MuJoCo Physics simulator (Todorov et al., 2012). We compared its performance with that of PPO implemented in OpenAI baselines (Dhariwal et al., 2017) and TD3. Henderson et al. (2018) have recently claimed that algorithm performance varies across environment, there is thus no clearly best method for all benchmark environments, and off-policy and on-policy methods have advantages in different problem domains. 
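As a reading aid for the softmax gating policy in Equation (17) and the greedy option choice of Equation (30), the following is a small illustrative sketch. It assumes that a learned critic Qπ(s,a;w) and one deterministic policy network per option are available as callables; the names are hypothetical and not part of the released implementation.

```python
# Illustrative sketch of option selection with deterministic option policies.
import numpy as np

def gating_probabilities(state, option_policies, q_value):
    # Q^pi(s, mu_o(s)) for every option; the softmax of these values is pi(o | s)
    # (Equation (17)).
    q_per_option = np.array([q_value(state, mu(state)) for mu in option_policies])
    logits = q_per_option - q_per_option.max()        # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs, q_per_option

def select_option(state, option_policies, q_value, greedy=False, rng=None):
    probs, q_per_option = gating_probabilities(state, option_policies, q_value)
    if greedy:                                        # evaluation time, Equation (30)
        return int(np.argmax(q_per_option))
    rng = rng if rng is not None else np.random.default_rng()
    return int(rng.choice(len(option_policies), p=probs))  # training time, Equation (17)

def act(state, option, option_policies, noise_scale=0.0, rng=None):
    # Behavior policy: deterministic option policy plus exploration noise
    # (the clipping of the noise used in the paper is omitted for brevity).
    rng = rng if rng is not None else np.random.default_rng()
    action = np.asarray(option_policies[option](state))
    return action + noise_scale * rng.standard_normal(action.shape)
```

The softmax over option values encodes exploration during training, while the argmax is used when executing the learned policy without exploration.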
1. What is the main contribution of the paper regarding hierarchical reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in its design choices and empirical results?
3. Do you have any concerns or questions regarding the exposition and explanation of the ideas in the paper?
4. How does the reviewer assess the novelty and significance of the proposed method compared to prior works in reinforcement learning?
5. Are there any suggestions for improving the paper's content or presentation?
Review
The paper considers the problem of hierarchical reinforcement learning, and proposes a criterion that aims to maximize the mutual information between options and state-action pairs. The idea of having options partition the state-action space is appealing, because this allows options to visit the same states, so long as they act differently, which is natural. The authors show empirically that the learned options do indeed decompose the state-action space, but not the state space.
There is a lot in the paper already, but the exposition could be much improved. Many of the design choices appear very ad hoc, and some are outright confusing. Some detailed comments:
* I got really confused in Section 3 re: advantage-weighted importance sampling. Why do this? If the option policies are trying to optimize reward, won’t they become optimal eventually (or so we usually hope in RL)? This section seems to assume that the advantage function is somehow given. It also doesn’t look like this gets used in the actual algorithm, and in fact on page 5 it is stated that "we decided to use the on-policy buffer in our implementation". Then why introduce the off-policy bit at all, and list it as a contribution?
* Please motivate the choices. The paper mentions that one of its contributions is options with deterministic policies. This isn’t a contribution unless it addresses some problem that stochastic policies fail at. For example, DPG allows one to address continuous control problems. Same with using information maximization. The paper literally states that "an interpretable representation can be learned by maximizing mutual information". Representation of what? MI between what?
* Although the qualitative results are nice (separation of the state-action space), empirical results are modest at best. This may be ok, because based on the partition of the state-action space it seems that the option policies learn diverse behaviors in the same states. Maybe videos visualizing different options from the same states would be informative.
* Please add more discussion on why the options are switched at every step
ICLR
Title Linking average- and worst-case perturbation robustness via class selectivity and dimensionality Abstract Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks’ representations: we found that the dimensionality of early-layer representations is inversely proportional to a network’s class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks compared to low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes compared to high-selectivity regimes, effectively creating a larger attack surface and hence vulnerability to worst-case perturbations. 1 INTRODUCTION Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron’s activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). 
Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier’s performance on low-quality or naturalistically-perturbed inputs—and thus is an "average-case" measure—and adversarial robustness, which measures a classifier’s performance on small, additive perturbations that are tailored to the classifier—and thus is a "worst-case" measure.1 Research on robustness has been predominantly focused on worst-case perturbations, which is affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and its common factors with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019), thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can be also be thought of a measure of the sparsity with which semantic information is represented.2 And because class selectivity regularization provides a method for controlling selectivity, and class selectivity regularization has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it. In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs’ robustness to worst-case and average-case perturbations. Our findings are as follows: • Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions. • In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks. • The variability of the input-unit gradient across samples and units is proportional to a network’s overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness. • The dimensionality of activation changes caused by corruption markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness. Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously. 
They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off. 2 RELATED WORK 2.1 PERTURBATION ROBUSTNESS The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network’s output while attempting to minimize, or maintain below some threshold, the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017).
1 We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input.
2 Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, thus the class information must be distributed across units; the semantic representation in this case is not sparse, it is distributed.
Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks is a "worst-case" measure of robustness. Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality. Multiple studies have linked activation and weight sparsity with robustness to worst-case perturbations. Adversarial training improves worst-case robustness (Goodfellow et al., 2015; Huang et al., 2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect. Researchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations. Comparatively less research has been conducted to understand the factors underlying average-case robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019). 
Examining the frequency domain has elucidated one mechanism: worst-case perturbations for "baseline" models tend to be in the high frequency domain, and improvements in average-case robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness.

2.2 CLASS SELECTIVITY

One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single unit ablation, and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However, Leavitt and Morcos (2020) examined the role of single unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on—and can even improve—test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy in unperturbed (clean) inputs. Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as or elucidate a link between worst-case and average-case robustness.

3 APPROACH

A detailed description of our approach is provided in Appendix A.1.

Models and training protocols. Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). We focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text for space, though results were qualitatively similar for ResNet50, and ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation set accuracy over the training epochs, and 20 replicate models (ResNet18 and ResNet20) or 5 replicate models (ResNet50) with different random seeds were run for each hyperparameter set.

Class selectivity index. Following Leavitt and Morcos (2020), a unit's class selectivity index is calculated as follows: at every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a "unit"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as

$$SI = \frac{\mu_{\max} - \mu_{-\max}}{\mu_{\max} + \mu_{-\max}} \tag{1}$$

where µmax is the largest class-conditional mean activation and µ−max is the mean response to the remaining (i.e. non-µmax) classes. The selectivity index ranges from 0 to 1. A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1. As Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a little bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters.
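To make Eq. 1 concrete, here is a minimal sketch of how the index could be computed for a single unit from its vector of class-conditional mean activations. It is written in PyTorch-style Python; the function and variable names are our own illustration rather than the authors' released code, and the small epsilon for numerical stability is our assumption.

```python
import torch

def class_selectivity_index(class_means: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Eq. 1 for one unit. class_means holds the unit's mean activation for each class."""
    mu_max, max_idx = class_means.max(dim=0)
    mask = torch.ones_like(class_means, dtype=torch.bool)
    mask[max_idx] = False
    mu_minus_max = class_means[mask].mean()   # mean response to the remaining classes
    return (mu_max - mu_minus_max) / (mu_max + mu_minus_max + eps)

# A unit that responds almost exclusively to class 2 has an index close to 1.
print(class_selectivity_index(torch.tensor([0.05, 0.02, 0.90, 0.03])))
```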
Class selectivity regularization. We used Leavitt and Morcos (2020)'s class selectivity regularizer to control the levels of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training:

$$\text{loss} = -\sum_{c}^{C} y_c \cdot \log(\hat{y}_c) - \alpha \mu_{SI} \tag{2}$$

The left-hand term in the loss function is the standard classification cross-entropy, where c is the class index, C is the number of classes, yc is the true class label, and ŷc is the predicted class probability. The right-hand component of the loss function, −αµSI, is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term,

$$\mu_{SI} = \frac{1}{L}\sum_{l}^{L}\frac{1}{U}\sum_{u}^{U} SI_{u,l} \tag{3}$$

where l is a convolutional layer, L is the number of layers, u is a unit, U is the number of units in a given layer, and SIu,l is the class selectivity index of unit u in layer l. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is α, the regularization scale, which determines whether class selectivity is promoted or discouraged. Negative values of α discourage class selectivity in individual units and positive values encourage it. The magnitude of α controls the contribution of the selectivity term to the overall loss. During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization or included in our analyses because, by definition, the logit layer must be class selective in a classification task.
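A hedged sketch of how the regularized loss in Eqs. 2 and 3 could be assembled, assuming the class-conditional mean activations for the current minibatch have already been accumulated into one (num_units, num_classes) tensor per convolutional layer; the bookkeeping that produces those tensors at every ReLU is omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def mean_selectivity(layer_class_means, eps=1e-7):
    """Eq. 3: mean selectivity, averaged over units within each layer, then over layers."""
    per_layer = []
    for cm in layer_class_means:                             # cm: (num_units, num_classes)
        mu_max, _ = cm.max(dim=1)
        mu_rest = (cm.sum(dim=1) - mu_max) / (cm.shape[1] - 1)
        si = (mu_max - mu_rest) / (mu_max + mu_rest + eps)   # Eq. 1, vectorized over units
        per_layer.append(si.mean())
    return torch.stack(per_layer).mean()

def selectivity_regularized_loss(logits, targets, layer_class_means, alpha):
    """Eq. 2: cross-entropy minus alpha * mu_SI (negative alpha discourages selectivity)."""
    return F.cross_entropy(logits, targets) - alpha * mean_selectivity(layer_class_means)
```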
Measuring average-case robustness. To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 or Tiny ImageNet data, respectively, to which a set of naturalistic corruptions have been applied (Hendrycks and Dietterich, 2019, examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy.

Measuring worst-case robustness. We tested our models' worst-case (i.e. adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image's pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM. We used a step size of 0.0001 and an l∞ norm perturbation budget (ε) of 16/255.

Computing the stability of units and layers. To quantify variation in networks' perturbability, we first computed the l2 norm of the input-unit gradient for each unit u in a network. We then computed the mean (µu) and standard deviation (σu) of the norm across samples for each unit. σu/µu yields the coefficient of variation (Everitt, 2002) for a unit (CVu), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of µu across units in a layer l, σ(µu) = σl, and dividing this by the corresponding mean across units, µ(µu) = µl, to yield the CV across units, σl/µl = CVl.

Figure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results.

4 RESULTS

4.1 AVERAGE-CASE ROBUSTNESS IS INVERSELY PROPORTIONAL TO CLASS SELECTIVITY

Certain kinds of sparsity—including reliance on single directions (Morcos et al., 2018), and the semantic sparsity measured by class selectivity (Leavitt and Morcos, 2020)—have been shown to impair network performance. We sought to extend this question to robustness: how does the sparsity of semantic representations affect robustness to average-case perturbations of the input data? We used a recently-introduced method (Leavitt and Morcos (2020); Approach 3) to modulate the amount of class selectivity learned by DNNs (Figure A2 demonstrates effects of selectivity regularization). We then examined how this affected performance on Tiny ImageNetC and CIFAR10C, two benchmark datasets for average-case corruptions (Approach 3; example images in Figure A1). Changing the level of class selectivity across neurons in a network could have one of the following effects on corruption robustness: if concentrating semantic representations into fewer neurons (i.e. promoting semantic sparsity) provides fewer potent dimensions on which perturbed inputs can act, then increasing class selectivity should confer networks with robustness to average-case perturbations, while reducing class selectivity should render networks more vulnerable.
Alternatively, if distributing semantic representations across more units (i.e. reducing sparsity) dilutes the changes induced by perturbed inputs, then reducing class selectivity should increase a network's robustness to average-case perturbations, while increasing class selectivity should reduce robustness.

We found that decreasing class selectivity leads to increased robustness to average-case perturbations for both ResNet18 tested on Tiny ImageNetC (Figure 1) and ResNet20 tested on CIFAR10C (Figure A6). In ResNet18, we found that mean test accuracy on corrupted inputs increases as class selectivity decreases (Figure 1), with test accuracy reaching a maximum at regularization scale α = −2.0 (mean test accuracy across corruptions and severities at α = −2.0: 17), representing a 3.5 percentage point (pp) increase relative to no selectivity regularization (i.e. α = 0; test accuracy at α = 0: 13.5). In contrast, regularizing to increase class selectivity has either no effect or a negative impact on corruption robustness. Corrupted test accuracy remains relatively stable until α = 1.0, after which point it declines. The results are qualitatively similar for ResNet50 tested on Tiny ImageNetC (Figure A9), and for ResNet20 tested on CIFAR10C (Figure A6), except the vulnerability to corruption caused by increasing selectivity is even more dramatic in ResNet20. We also found similar results when controlling for the difference in clean accuracy for models with different α (Appendix A.3).

We observed that regularizing to decrease class selectivity causes robustness to average-case perturbations. But it's possible that the causality is unidirectional, leading to the question of whether the converse is also true: does increasing robustness to average-case perturbations cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve average-case (corruption) robustness (Hendrycks et al., 2020a). We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Appendix A.4; Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity improve average-case perturbation robustness, but improving average-case perturbation robustness also causes class selectivity to decrease.

We also found that the effect of class selectivity on perturbed robustness is consistent across corruption types. Regularizing against selectivity improves perturbation robustness in all 15 Tiny ImageNetC corruption types for ResNet18 (Figure A4) and 14 of 15 Tiny ImageNetC corruption types in ResNet50 (Figure A10), and 14 of 19 corruption types in CIFAR10C for ResNet20 (Figure A7). Together these results demonstrate that reduced class selectivity confers robustness to average-case perturbations, implying that distributing semantic representations across neurons—i.e. low sparsity—may dilute the changes induced by average-case perturbations.

4.2 CLASS SELECTIVITY IMPARTS WORST-CASE PERTURBATION ROBUSTNESS

We showed that the sparsity of a network's semantic representations, as measured with class selectivity, is causally related to a network's robustness to average-case perturbations.
But how does the sparsity of semantic representations affect worst-case robustness? We addressed this question by testing our class selectivity-regularized networks on inputs that had been perturbed using one of two gradient-based methods (see Approach 3). If distributing semantic representations across units provides more dimensions upon which a worst-case perturbation is potent, then worst-case perturbation robustness should be proportional to class selectivity. However, if increasing the sparsity of semantic representations creates more responsive individual neurons, then worst-case robustness should be inversely proportional to class selectivity.

Unlike average-case perturbations, decreasing class selectivity decreases robustness to worst-case perturbations for ResNet18 (Figure 2) and ResNet50 (Figure A13) trained on Tiny ImageNet, and ResNet20 trained on CIFAR10 (Figure A12). For small perturbations (i.e. close to x = 0), the effects of class selectivity regularization on test accuracy (class selectivity is inversely correlated with unperturbed test accuracy) appear to overwhelm the effects of perturbations. But as the magnitude of perturbation increases, a stark ordering emerges: test accuracy monotonically decreases as a function of class selectivity in ResNet18 and ResNet50 for both FGSM and PGD attacks (ResNet18: Figures 2a and 2b; ResNet50: Figures A13a and A13b). The ordering is also present for ResNet20, though less consistent for the two networks with the highest class selectivity (α = 0.7 and α = 1.0). However, increasing class selectivity is much more damaging to test accuracy in ResNet20 trained on CIFAR10 compared to ResNet18 trained on Tiny ImageNet (Leavitt and Morcos, 2020, Figure A2), so the substantial performance deficits of extreme selectivity in ResNet20 likely mask its effect on perturbation robustness. This result demonstrates that networks with sparse semantic representations are less vulnerable to worst-case perturbation than networks with distributed semantic representations. We also verified that the worst-case robustness of high-selectivity networks is not fully explained by gradient-masking (Athalye et al., 2018, Appendix A.5).

Interestingly, class selectivity regularization does not appear to affect robustness to "natural" adversarial examples (Appendix A.6), which are "unmodified, real-world examples...selected to cause a model to make a mistake" (Hendrycks et al., 2020b). Performance on ImageNet-A, a benchmark of natural adversarial examples (Hendrycks et al., 2020b), was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving both worst-case and average-case robustness, many of which also fail to yield significant robustness improvements against ImageNet-A (Hendrycks et al., 2020b).

We found that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse true? Does increasing robustness to worst-case perturbations also cause class selectivity to increase? We investigated this by training networks with a commonly-used technique to improve worst-case perturbation robustness, PGD training.
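For reference, here is a minimal sketch of the PGD loop (an iterated FGSM with an l∞ projection) using the step size and budget quoted in the Approach. It is an illustrative re-implementation under the assumption of inputs scaled to [0, 1], not the authors' code; the same perturbation routine can be used both to evaluate worst-case robustness and to generate samples for PGD training.

```python
import torch

def pgd_perturb(model, x, y, eps=16 / 255, step=1e-4, n_steps=40):
    """Iterated FGSM, projected back into the l-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()         # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)              # assumes inputs in [0, 1]
    return x_adv.detach()
```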
We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Appendix A.7). This effect was present in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f), indicating that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional.

Networks whose outputs are more stable to small input perturbations are known to have improved generalization performance and worst-case perturbation robustness (Drucker and Le Cun, 1992; Novak et al., 2018; Sokolic et al., 2017; Rifai et al., 2011; Hoffman et al., 2019). To examine whether increasing class selectivity improves worst-case perturbation robustness by increasing network stability, we analyzed each network's input-output Jacobian, whose magnitude is inversely related to the network's stability—a large-magnitude Jacobian means that a small change to the network's input will cause a large change to its output. If class selectivity induces worst-case robustness by increasing network stability, then networks with higher class selectivity should have smaller Jacobians. But if increased class selectivity induces adversarial robustness through alternative mechanisms, then class selectivity should have no effect on the Jacobian. We found that the l2 norm of the input-output Jacobian is inversely proportional to class selectivity for ResNet18 (Figure 2c), ResNet50 (Figure A13c), and ResNet20 (Figure A12c), indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations.

4.3 VARIABILITY OF THE INPUT-UNIT GRADIENT ACROSS SAMPLES AND UNITS

We observed that the input-output Jacobian is proportional to worst-case vulnerability and inversely proportional to class selectivity, but focusing on input-output stability potentially overlooks phenomena present in hidden layers and units. If class selectivity imparts worst-case robustness by making individual units less reliably perturbable—because each unit is highly tuned to a particular subset of images—then we should expect to see more variation across input-unit gradients for units in high-selectivity networks compared to units in low-selectivity networks. Alternatively, worst-case robustness in high-selectivity networks could be achieved by reducing both the magnitude and variation of units' perturbability, in which case we would expect to observe lower variation across input-unit gradients for units in high-selectivity networks compared to low-selectivity networks.

We quantified variation in unit perturbability using the coefficient of variation of the input-unit gradient across samples for each unit (CVu; Approach 3). The CV is a measure of variability that normalizes the standard deviation of a quantity by the mean. A large CV indicates high variability, a small CV indicates low variability. To quantify variation in perturbability across units, we computed the CV across units in each layer (CVl; Approach 3).
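A small sketch of the coefficient-of-variation computation just described, assuming the l2 norms of the input-unit gradients have already been collected into a (num_samples, num_units) array for one layer; the array and function names are ours.

```python
import numpy as np

def perturbability_cv(grad_norms: np.ndarray):
    """grad_norms: (num_samples, num_units) l2 norms of input-unit gradients for one layer."""
    mu_u = grad_norms.mean(axis=0)       # mean perturbability of each unit across samples
    sigma_u = grad_norms.std(axis=0)
    cv_u = sigma_u / mu_u                # CV_u: variability within each unit
    cv_l = mu_u.std() / mu_u.mean()      # CV_l: variability across the units of the layer
    return cv_u, cv_l
```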
We found that units in high-selectivity networks exhibited greater variation in their perturbability than units in low-selectivity networks, both within individual units and across units in each layer. This effect was present in both ResNet18 trained on Tiny ImageNet (Figure 3) and ResNet20 trained on CIFAR10 (Figure A18), although the effect was less consistent for across-unit variability in later layers in ResNet18 (Figure 3b). Interestingly, class selectivity affects both the numerator (σ) and denominator (µ) of the CV calculation for both the CV across samples and the CV across units (Appendix A.8). These results indicate that high class selectivity imparts worst-case robustness by increasing the variation in perturbability within and across units, while the worst-case vulnerability associated with low class selectivity results from more consistently perturbable units. It is worth noting that the inverse can be stated with regards to average-case robustness: low variation in perturbability both within and across units in low-selectivity networks is associated with robustness to average-case perturbations, despite these units (and networks) being more perturbable on average.

4.4 DIMENSIONALITY IN EARLY LAYERS PREDICTS PERTURBATION VULNERABILITY

[Figure 3: input-unit gradient variability per unit (CVu, panel a) and per layer (CVl, panel b) as a function of layer and class selectivity regularization scale (α).]

[...] dimensionality would be unaffected by class selectivity. We found that the sparsity of a DNN's semantic representations corresponds directly to the dimensionality of those representations. Dimensionality is inversely proportional to class selectivity in early ResNet18 layers (≤ layer 9; Figure 4a), and across all of ResNet20 (Figure A21d). Networks with higher class selectivity tend to have lower dimensionality, and networks with lower class selectivity tend to have higher dimensionality. These results show that the sparsity of a network's semantic representations is indeed reflected in those representations' dimensionality.

We next examined the dimensionality of perturbation-induced changes in representations by subtracting the perturbed activation matrix from the clean activation matrix and computing the dimensionality of this "difference matrix" (see Appendix A.1.4). Intuitively, this metric quantifies the dimensionality of the change in the representation caused by perturbing the input. If it is small, the perturbation impacts fewer units, while if it is large, more units are impacted. Interestingly, we found that the dimensionality of the changes in activations induced by both average-case (Figure 4b) and worst-case perturbations (Figure 4c) was notably higher for networks with reduced class selectivity, suggesting that decreasing class selectivity causes changes in input to become more distributed.

We found that the activation changes caused by average-case perturbations are higher-dimensional than the representations of the clean data in both ResNet18 (compare Figures 4b and 4a) and ResNet20 (Figures A21e and A21d), and that this effect is inversely proportional to class selectivity (Figures 4b and A21e); the increase in dimensionality from average-case perturbations was more pronounced in low-selectivity networks than in high-selectivity networks. These results indicate that class selectivity not only predicts the dimensionality of a representation, but also the change in dimensionality induced by an average-case perturbation.
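The dimensionality measure used in these analyses is detailed in Appendix A.1.4; as a hedged sketch, the fraction of principal components needed to explain 95% of the variance can be computed as follows, applied either to a layer's clean activations or to the difference between perturbed and clean activations. The function names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

def dimensionality_fraction(acts: np.ndarray, var_threshold: float = 0.95) -> float:
    """acts: (num_samples, num_units) activation matrix for one layer."""
    explained = PCA().fit(acts).explained_variance_ratio_
    n_dims = int(np.searchsorted(np.cumsum(explained), var_threshold) + 1)
    return n_dims / acts.shape[1]

def perturbation_dimensionality(clean_acts, perturbed_acts, var_threshold=0.95):
    """Dimensionality of the change in representation caused by a perturbation."""
    return dimensionality_fraction(perturbed_acts - clean_acts, var_threshold)
```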
Notably, however, the increase in early-layer dimensionality was much larger for worst-case perturbations than average-case perturbations (Figure 4c; Figure A21f). These results indicate that, while the changes in dimensionality induced by both naturalistic and adversarial perturbations are proportional to the dimensionality of the network's representations, these changes do not consistently project onto coding-relevant dimensions of the representations. Indeed, the larger change in early-layer dimensionality caused by worst-case perturbations likely reflects targeted projection onto coding-relevant dimensions and provides intuition as to why low-selectivity networks are more susceptible to worst-case perturbations.

Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can provide misleading estimates of hidden layer dimensionality. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations (see Appendix A.1.4). Interestingly, the results were qualitatively similar to what we observed when examining linear dimensionality (Figure A22) in both ResNet18 trained on Tiny ImageNet (Figures A22a-A22c) and ResNet20 trained on CIFAR10 (Figures A22d-A22f). Thus both linear and non-linear measures of dimensionality imply that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.

5 DISCUSSION

Our results demonstrate that changes in the sparsity of semantic representations, as measured with class selectivity, induce a trade-off between robustness to average-case vs. worst-case perturbations: highly-distributed semantic representations confer robustness to average-case perturbations, but their increased dimensionality and consistent perturbability result in vulnerability to worst-case perturbations. In contrast, sparse semantic representations yield low-dimensional representations and inconsistently-perturbable units, imparting worst-case robustness. Furthermore, the dimensionality of the difference in early-layer activations between clean and perturbed samples is larger for worst-case perturbations than for average-case perturbations. More generally, our results link average-case and worst-case perturbation robustness through class selectivity and representational dimensionality.

We hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks. Scaling class selectivity regularization to datasets with large numbers of classes also remains an open problem (Leavitt and Morcos, 2020).

Our findings could be utilized for practical ends and to clarify findings in prior work. Relevant to both of these issues is the task of adversarial example detection. There is conflicting evidence that intrinsic dimensionality can be used to characterize or detect adversarial (worst-case) samples (Ma et al., 2018; Lu et al., 2018). The finding that worst-case perturbations cause a marked increase in both intrinsic and linear dimensionality indicates that there may be merit in continuing to study these quantities for use in worst-case perturbation detection.
And the observation that the causal relationship between class selectivity and worst- and average-case robustness is bidirectional helps clarify the known benefits of sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017) on worst-case robustness. It furthermore raises the question of whether enforcing low-dimensional representations also causes class selectivity to increase. Our work may also hold practical relevance to developing robust models: class selectivity could be used as both a metric for measuring model robustness and a method for achieving robustness (via regularization). We hope future work will more comprehensively assess the utility of class selectivity as part of the deep learning toolkit for these purposes.

A APPENDIX

A.1 DETAILED APPROACH

Unless otherwise noted: all experimental results were derived from the corrupted or adversarial test set with the parameters from the epoch that achieved the highest clean validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses.

A.1.1 MODELS

All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001. The maxpool layer after the first batchnorm layer in ResNet18 (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 and ResNet50 were trained for 90 epochs with a minibatch size of 4096 (ResNet18) or 1400 (ResNet50) samples with a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. ResNet20 (code modified from Idelbayev (2020)) was trained for 200 epochs using a minibatch size of 256 samples and a learning rate of 0.1, annealed by 0.1 at epochs 100 and 150.
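As a minimal sketch of the optimizer and learning-rate schedule just described for the ResNet18/ResNet50 runs (PyTorch): the milestones and hyperparameters are taken from the text above, while the model construction and the omitted training-loop body are stand-ins, not the authors' setup.

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=200)  # stand-in; the paper also removes the first maxpool for 64x64 inputs
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[35, 50, 65, 80], gamma=0.1)

for epoch in range(90):
    # ... one pass over the training minibatches would go here ...
    scheduler.step()
```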
A.1.2 DATASETS

Tiny ImageNet (Fei-Fei et al., 2015) consists of 500 training images and 50 images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each seed. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny ImageNet. All experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses.

CIFAR10C consists of a dataset in which 19 different naturalistic corruptions have been applied to the CIFAR10 test set at 5 different levels of severity. Tiny ImageNetC also has 5 levels of corruption severity, but consists of 15 corruptions. We would like to note that Tiny ImageNetC does not use the Tiny ImageNet test data. While the two datasets were created using the same data generation procedure—cropping and scaling images from the same 200 ImageNet classes—they differ in the specific ImageNet images they use. It is possible that the images used to create Tiny ImageNetC are out-of-distribution with regards to the Tiny ImageNet training data, in which case our results from testing on Tiny ImageNetC actually underestimate the corruption robustness of our networks. The creators of Tiny ImageNetC kindly provided the clean (uncorrupted) Tiny ImageNetC data necessary for the dimensionality analysis, which relies on matched corrupted and clean data samples.

A.1.3 SOFTWARE

Experiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017).

A.1.4 QUANTIFYING DIMENSIONALITY

We quantified the dimensionality of a layer's representations by applying PCA to the layer's activation matrix for the clean test data and counting the number of dimensions necessary to explain 95% of the variance, then dividing by the total number of dimensions (i.e. the fraction of total dimensionality; we also replicated our results using the fraction of total dimensionality necessary to explain 90% and 99% of the variance). The same procedure was applied to compute the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to applying PCA. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.

Figure A1: Example naturalistic corruptions from the Tiny ImageNetC dataset. (a) Clean (no corruption). (b) Brightness. (c) Contrast. (d) Elastic transform. (e) Shot noise. All corruptions are shown at severity level 5/5.

Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can fail to capture the "intrinsic" dimensionality of hidden layer representations. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations using the method of Facco et al. (2017). The method, based on that of Levina and Bickel (2005), estimates ID by computing the ratio between the distances to the second and first nearest neighbors of each data point. We used the implementation of Ansuini et al. (2019). Our procedure was otherwise identical to that used when computing the linear dimensionality: we computed the dimensionality across all test data for each layer, then divided by the number of units per layer. We then computed the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to computing ID. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.
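To illustrate the idea behind the TwoNN estimator of Facco et al. (2017), here is a hedged sketch using a simple maximum-likelihood form of the estimator. The authors used the implementation of Ansuini et al. (2019), which differs in detail (it fits the empirical distribution of neighbor-distance ratios), so this is illustrative only.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def two_nn_intrinsic_dim(acts: np.ndarray) -> float:
    """acts: (num_samples, num_units). Estimate ID from the ratio of 2nd to 1st NN distances."""
    dists, _ = NearestNeighbors(n_neighbors=3).fit(acts).kneighbors(acts)
    # Column 0 is the point itself (distance 0); columns 1 and 2 are the two nearest neighbors.
    mu = dists[:, 2] / dists[:, 1]
    mu = mu[np.isfinite(mu) & (mu > 1.0)]   # guard against duplicate points
    return len(mu) / np.sum(np.log(mu))     # maximum-likelihood estimate of the dimension
```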
A.2 EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY

Figure A2: Effects of class selectivity regularization on test accuracy. Replicated as in Leavitt and Morcos (2020). (a) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for ResNet18 trained on Tiny ImageNet. α denotes the sign and intensity of class selectivity regularization. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Each data point represents the mean class selectivity across all units in a single trained model. (b) Same as (a), but for ResNet20 trained on CIFAR10.

A.3 ADDITIONAL RESULTS FOR AVERAGE-CASE PERTURBATION ROBUSTNESS

Because modifying class selectivity can affect performance on clean (unperturbed) inputs (Leavitt and Morcos (2020); Figure A2), it is possible that the effects we observe of class selectivity on perturbed test accuracy are not caused by changes in perturbation robustness per se, but simply by changes in baseline model accuracy. We controlled for this by normalizing each model's perturbed test accuracy by its clean (unperturbed) test accuracy. The results are generally consistent even after controlling for clean test accuracy, although increasing class selectivity does not cause the same deficits as measured using non-normalized perturbed test accuracy in ResNet18 trained on Tiny ImageNet (Figure A3a). Interestingly, in ResNet20 trained on CIFAR10, normalizing perturbed test accuracy reveals a more dramatic improvement in perturbation robustness caused by reducing class selectivity (Figure A6c). The results for ResNet50 trained on Tiny ImageNet are entirely consistent between raw vs. normalized measures (Figure A9b vs. Figure A9c).

Figure A3: Controlling for clean test accuracy, and effect of corruption severity across corruptions. (a) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with high class selectivity (large α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (b) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A4: Mean test accuracy across corruption intensities for each corruption type for ResNet18 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against all 15/15 corruption types.
Figure A5: Trade-off between clean and perturbed test accuracy in ResNet18 tested on Tiny ImageNetC. Clean test accuracy (x-axis) vs. perturbed test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Error bars = 95% confidence intervals of the mean.

Figure A6: Reducing class selectivity confers robustness to average-case perturbations in ResNet20 tested on CIFAR10C. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Figure A2b and Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with higher class selectivity (larger α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A7: Mean test accuracy across corruption intensities for each corruption type for ResNet20 tested on CIFAR10C. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/19 corruption types. Error bars = 95% confidence intervals of the mean.
Figure A8: Trade-off between clean and corrupted test accuracy in ResNet20 tested on CIFAR10C. Clean test accuracy (x-axis) vs. corrupted test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types.

Figure A9: Reducing class selectivity confers robustness to average-case perturbations in ResNet50 tested on Tiny ImageNetC. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20.

Figure A10: Mean test accuracy across corruption intensities for each corruption type for ResNet50 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/15 corruption types. Error bars = 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20.

A.4 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND AVERAGE-CASE ROBUSTNESS IS BIDIRECTIONAL

We found that regularizing to decrease class selectivity causes robustness to average-case perturbations. But is the converse also true? Does increasing robustness to average-case perturbations also cause class selectivity to decrease?
We investigated this question by training with AugMix, a technique known to improve average-case (corruption) robustness (Hendrycks et al., 2020a). Briefly, AugMix stochastically applies a diverse set of image augmentations and uses a Jensen-Shannon Divergence consistency loss. Our AugMix parameters were as follows: mixture width: 3; mixture depth: stochastic; augmentation probability: 1; augmentation severity: 2. We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity cause average-case perturbation robustness to increase, but increasing average-case perturbation robustness also causes class selectivity to decrease.

A.5 WORST-CASE PERTURBATION ROBUSTNESS

We also confirmed that the worst-case robustness of high-selectivity ResNet18 and ResNet20 networks was not simply due to gradient-masking (Athalye et al., 2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14), indicating that gradient-masking does not fully account for the worst-case robustness of high-selectivity models.

A.6 CLASS SELECTIVITY REGULARIZATION DOES NOT AFFECT ROBUSTNESS TO NATURAL ADVERSARIAL EXAMPLES

We also examined whether class selectivity regularization affects robustness to "natural" adversarial examples, images that are "natural, unmodified, real-world examples...selected to cause a fixed model to make a mistake" (Hendrycks et al., 2020b). We tested robustness to natural adversarial examples using ImageNet-A, a dataset of natural adversarial examples that belong to ImageNet classes but consistently cause misclassification errors with high confidence (Hendrycks et al., 2020b). We adapted ImageNet-A to our models trained on Tiny ImageNet (ResNet18 and ResNet50) by only testing on the 74 image classes that overlap between ImageNet-A and Tiny ImageNet (yielding a total of 2957 samples), and downsampling the images to 64 x 64. Test accuracy was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving robustness, many of which also fail to yield significant robustness against ImageNet-A (Hendrycks et al., 2020b).

A.7 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND WORST-CASE ROBUSTNESS IS BIDIRECTIONAL

We observed that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse also true? Does increasing robustness to worst-case perturbations cause class selectivity to increase? We investigated this question using PGD training, a common technique for improving worst-case robustness. PGD training applies the PGD method of sample perturbation (see Approach 3) to samples during training.
We used the same parameters for PGD sample generation when training our models as when testing (Approach 3). The number of PGD iterations controls the intensity of the perturbation, and the degree of perturbation-robustness in the trained model (Madry et al., 2018). We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Figure A16). Interestingly, PGD training also appears to cause units to die (Lu et al., 2019), and the number of dead units is proportional to the intensity of PGD training (Figures A16b and A16e). Removing dead units, which have a class selectivity index of 0, from the calculation of mean class selectivity results in a clear, monotonic effect of PGD training intensity on class selectivity in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f). These results indicate that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional: increasing class selectivity not only causes increased worst-case perturbation robustness, but increasing worst-case perturbation robustness also causes increased class selectivity.

A.8 STABILITY TO INPUT PERTURBATIONS IN UNITS AND LAYERS

A.9 REPRESENTATIONAL DIMENSIONALITY

Figure A20: Dimensionality in early layers predicts worst-case vulnerability in ResNet18 trained on Tiny ImageNet. Identical to Figure 4, but dimensionality is computed as the number of principal components needed to explain 90% of variance in (a) - (c), and 99% of variance in (d) - (f). (a) Fraction of dimensionality (y-axis; see Appendix A.1.4) as a function of layer (x-axis). (b) Dimensionality of difference between clean and average-case perturbation activations (y-axis) as a function of layer (x-axis). (c) Dimensionality of difference between clean and worst-case perturbation activations (y-axis) as a function of layer (x-axis). (d) - (f), identical to (a) - (c), but for 99% explained variance threshold.
1. What is the main contribution of the paper regarding perturbation robustness in neural networks?
2. What are the strengths and weaknesses of the experimental analysis in the paper?
3. Do you have any concerns about the conclusions drawn from the experiments, particularly in Sections 4.1 and 4.2?
4. How does the reviewer assess the relationship between class selectivity and robustness in the paper, and what are the implications for future research?
5. What is the reviewer's opinion on the definition and distinction between average-case and worst-case perturbations in the paper?
6. Are there any suggestions for improving the paper, such as adding visualizations or reevaluating the conclusions drawn from the experiments?
Review
Review

This work investigates two classes of perturbation robustness: average-case perturbations, which are considered to be naturally occurring in image data, and worst-case perturbations, which are perturbations generated by an adversary. Neural network susceptibility to these perturbations is evaluated with respect to a class selectivity metric and a dimensionality measure. The authors find that decreasing class selectivity will increase robustness to average-case perturbations while reducing robustness to worst-case perturbations. Simultaneously, increasing class selectivity improves robustness to worst-case perturbations while reducing average-case perturbation robustness. In addition, the authors consider the correspondence between the dimensionality of the early layers and observe how this dimensionality corresponds to class selectivity and robustness. They find the dimensionality is inversely related to class selectivity while also positively correlated with reduced worst-case robustness.

While the experiments are thorough in showing a relationship between class selectivity and robustness, many of the observations do not map convincingly to the conclusions. For instance, in Section 4.1, the second paragraph appears to be conjecture which is not justified by the potential observation outcomes. Having higher class selectivity does not necessarily mean fewer potent neurons is the reason for increased average-case robustness. This conclusion is not sufficiently supported by measuring robustness as a function of class selectivity. It also seems odd to hypothesize two different conclusions for opposite outcomes. The same issue arises when drawing a conclusion regarding the worst-case perturbation analysis in Section 4.2. Why create potential conclusions for two opposite outcomes?

The relationship between the definition of an average-case perturbation and a worst-case perturbation is not well-defined. It appears to be a semantic association of what is expected to occur naturally (average-case) and what is caused by an adversary (worst-case). But without some topology or relationship between these two cases, the observations of class selectivity on each are just independent and unrelated observations. For instance, what is in between an average-case and worst-case perturbation? Where do images from datasets like ObjectNet (https://objectnet.dev/) or Natural Adversarial Examples (https://arxiv.org/abs/1907.07174) fall in the average-case to worst-case dimension?

Overall, this work would benefit from a unified view of how class selectivity relates to image perturbations. As it currently stands, this paper appears to be a collection of observations in need of a clear conclusion.

Other notes:
- It would be illustrative to have some visualization of the average-case perturbations.
- SI_u in Eq. 3 should also be a function of l.
ICLR
Title Linking average- and worst-case perturbation robustness via class selectivity and dimensionality Abstract Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks’ representations: we found that the dimensionality of early-layer representations is inversely proportional to a network’s class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks compared to low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes compared to high-selectivity regimes, effectively creating a larger attack surface and hence vulnerability to worst-case perturbations. 1 INTRODUCTION Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron’s activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). 
Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier’s performance on low-quality or naturalistically-perturbed inputs—and thus is an "average-case" measure—and adversarial robustness, which measures a classifier’s performance on small, additive perturbations that are tailored to the classifier—and thus is a "worst-case" measure.1 Research on robustness has been predominantly focused on worst-case perturbations, which is affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and its common factors with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019), thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can be also be thought of a measure of the sparsity with which semantic information is represented.2 And because class selectivity regularization provides a method for controlling selectivity, and class selectivity regularization has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it. In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs’ robustness to worst-case and average-case perturbations. Our findings are as follows: • Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions. • In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks. • The variability of the input-unit gradient across samples and units is proportional to a network’s overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness. • The dimensionality of activation changes caused by corruption markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness. Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously. 
They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off. 2 RELATED WORK 2.1 PERTURBATION ROBUSTNESS The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network’s output while 1We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input. 2Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, thus the class information must be distributed across units; the semantic representation in this case is not sparse, it is distributed. attempting to minimize or maintain below some threshold the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017) . Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks are a "worst-case" measure of robustness. Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality. Multiple studies have linked activation and weight sparsity with robustness to worst-case perturbations. Adversarial training improves worst-case robustness Goodfellow et al. (2015); Huang et al. (2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect. Researchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations. Comparatively less research has been conducted to understand the factors underlying averagecase robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019). 
Examining the frequency domain has elucidated one mechanism: worst-case perturbations for "baseline" models tend to be in the high frequency domain, and improvements in average-case robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness. 2.2 CLASS SELECTIVITY One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single unit ablation, and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However, Leavitt and Morcos (2020) examined the role of single unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on—and can even improve—test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy in unperturbed (clean) inputs. Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as or elucidate a link between worst-case and average-case robustness. 3 APPROACH A detailed description of our approach is provided in Appendix A.1. Models and training protocols Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). We focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text for space, though results were qualitatively similar for ResNet50, and ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation set accuracy over the training epochs, and 20 replicate models (ResNet18 and ResNet20) or 5 replicate models (ResNet50) with different random seeds were run for each hyperparameter set. Class selectivity index Following Leavitt and Morcos (2020), a unit’s class selectivity index is calculated as follows: at every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a "unit"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as follows: SI = (µmax − µ−max) / (µmax + µ−max) (1) where µmax is the largest class-conditional mean activation and µ−max is the mean response to the remaining (i.e. non-µmax) classes. The selectivity index ranges from 0 to 1.
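As a rough, illustrative sketch (not the authors' code; function and variable names are ours, and activations are assumed to have already been spatially averaged into a samples-by-units matrix per layer), the per-unit selectivity index of Eq. 1 and the layer-then-network mean that feeds the regularizer described below could be computed as:

import torch

def selectivity_index(acts, labels, num_classes, eps=1e-7):
    # acts: (num_samples, num_units) spatially-averaged post-ReLU activations.
    # labels: (num_samples,) integer class labels.
    class_means = torch.stack(
        [acts[labels == c].mean(dim=0) for c in range(num_classes)]
    )                                                      # (num_classes, num_units)
    mu_max, _ = class_means.max(dim=0)                     # largest class-conditional mean
    mu_rest = (class_means.sum(dim=0) - mu_max) / (num_classes - 1)  # mean of remaining classes
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)   # SI per unit, in [0, 1]

def mean_selectivity(layer_acts, labels, num_classes):
    # layer_acts: dict mapping layer name -> activation matrix for that layer.
    # Mean SI within each layer, then across layers, which mitigates layer-size bias.
    per_layer = [selectivity_index(a, labels, num_classes).mean()
                 for a in layer_acts.values()]
    return torch.stack(per_layer).mean()

Because every step above is differentiable, the resulting mean selectivity can be scaled by α and subtracted from the cross-entropy loss (as in Eq. 2 below) to regularize selectivity during training.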
A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1. As Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a little bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters. Class selectivity regularization We used Leavitt and Morcos (2020)’s class selectivity regularizer to control the levels of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training: loss = − Σ_c^C y_c · log(ŷ_c) − α·µSI (2) The left-hand term in the loss function is the standard classification cross-entropy, where c is the class index, C is the number of classes, y_c is the true class label, and ŷ_c is the predicted class probability. The right-hand component of the loss function, −α·µSI, is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term, µSI = (1/L) Σ_l^L (1/U) Σ_u^U SI_{u,l} (3) where l is a convolutional layer, L is the number of layers, u is a unit, U is the number of units in a given layer, and SI_{u,l} is the class selectivity index of unit u in layer l. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is α, the regularization scale, which determines whether class selectivity is promoted or discouraged. Negative values of α discourage class selectivity in individual units and positive values encourage it. The magnitude of α controls the contribution of the selectivity term to the overall loss. During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization or included in our analyses because by definition, the logit layer must be class selective in a classification task. Measuring average-case robustness To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 or Tiny ImageNet data, respectively, to which a set of naturalistic corruptions have been applied (Hendrycks and Dietterich, 2019, examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy. Measuring worst-case robustness We tested our models’ worst-case (i.e. adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image’s pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM.
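For illustration only (this is a sketch of the two standard attacks, not the authors' implementation, and it assumes inputs scaled to [0, 1]), an FGSM step and its iterated PGD variant might look like:

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single signed-gradient step within an l-infinity ball of radius eps.
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, step_size, n_steps):
    # Iterated FGSM with projection back onto the eps-ball around the clean input.
    x = x.clone().detach()
    x_adv = x.clone()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

The step size and perturbation budget used in the paper are given in the next paragraph.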
We used a step size of 0.0001 and an l∞ norm perturbation budget (ε) of 16/255. Computing the stability of units and layers To quantify variation in networks’ perturbability, we first computed the l2 norm of the input-unit gradient for each unit u in a network. We then computed the mean (µu) and standard deviation (σu) of the norm across samples for each unit. σu/µu yields the coefficient of variation (Everitt, 2002) for a unit (CVu), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of µu across units in a layer l, σ(µu) = σl, and dividing this by the corresponding mean across units, µ(µu) = µl, to yield the CV across units, σl/µl = CVl. Figure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results. 4 RESULTS 4.1 AVERAGE-CASE ROBUSTNESS IS INVERSELY PROPORTIONAL TO CLASS SELECTIVITY Certain kinds of sparsity—including reliance on single directions (Morcos et al., 2018), and the semantic sparsity measured by class selectivity (Leavitt and Morcos, 2020)—have been shown to impair network performance. We sought to extend this question to robustness: how does the sparsity of semantic representations affect robustness to average-case perturbations of the input data? We used a recently-introduced method (Leavitt and Morcos (2020); Approach 3) to modulate the amount of class selectivity learned by DNNs (Figure A2 demonstrates the effects of selectivity regularization). We then examined how this affected performance on Tiny ImageNetC and CIFAR10C, two benchmark datasets for average-case corruptions (Approach 3; example images in Figure A1). Changing the level of class selectivity across neurons in a network could have one of the following effects on corruption robustness: If concentrating semantic representations into fewer neurons (i.e. promoting semantic sparsity) provides fewer potent dimensions on which perturbed inputs can act, then increasing class selectivity should confer robustness to average-case perturbations, while reducing class selectivity should render networks more vulnerable. Alternatively, if distributing semantic representations across more units (i.e.
reducing sparsity) dilutes the changes induced by perturbed inputs, then reducing class selectivity should increase a network’s robustness to average-case perturbations, while increasing class selectivity should reduce robustness. We found that decreasing class selectivity leads to increased robustness to average-case perturbations for both ResNet18 tested on Tiny ImageNetC (Figure 1) and ResNet20 tested on CIFAR10C (Figure A6). In ResNet18, we found that mean test accuracy on corrupted inputs increases as class selectivity decreases (Figure 1), with test accuracy reaching a maximum at regularization scale α = −2.0 (mean test accuracy across corruptions and severities at α = −2.0 is 17), representing a 3.5 percentage point (pp) increase relative to no selectivity regularization (i.e. α = 0; test accuracy at α = 0 is 13.5). In contrast, regularizing to increase class selectivity has either no effect or a negative impact on corruption robustness. Corrupted test accuracy remains relatively stable until α = 1.0, after which point it declines. The results are qualitatively similar for ResNet50 tested on Tiny ImageNetC (Figure A9), and for ResNet20 tested on CIFAR10C (Figure A6), except the vulnerability to corruption caused by increasing selectivity is even more dramatic in ResNet20. We also found similar results when controlling for the difference in clean accuracy for models with different α (Appendix A.3). We observed that regularizing to decrease class selectivity causes robustness to average-case perturbations. But it’s possible that the causality is unidirectional, leading to the question of whether the converse is also true: does increasing robustness to average-case perturbations cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve average-case (corruption) robustness (Hendrycks et al., 2020a). We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Appendix A.4; Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity improve average-case perturbation robustness, but improving average-case perturbation robustness also causes class selectivity to decrease. We also found that the effect of class selectivity on perturbed robustness is consistent across corruption types. Regularizing against selectivity improves perturbation robustness in all 15 Tiny ImageNetC corruption types for ResNet18 (Figure A4) and 14 of 15 Tiny ImageNetC corruption types in ResNet50 (Figure A10), and 14 of 19 corruption types in CIFAR10C for ResNet20 (Figure A7). Together these results demonstrate that reduced class selectivity confers robustness to average-case perturbations, implying that distributing semantic representations across neurons—i.e. low sparsity—may dilute the changes induced by average-case perturbations. 4.2 CLASS SELECTIVITY IMPARTS WORST-CASE PERTURBATION ROBUSTNESS We showed that the sparsity of a network’s semantic representations, as measured with class selectivity, is causally related to a network’s robustness to average-case perturbations.
But how does the sparsity of semantic representations affect worst-case robustness? We addressed this question by testing our class selectivity-regularized networks on inputs that had been perturbed using one of two gradient-based methods (see Approach 3). If distributing semantic representations across units provides more dimensions upon which a worst-case perturbation is potent, then worst-case perturbation robustness should be proportional to class selectivity. However, if increasing the sparsity of semantic representations creates more responsive individual neurons, then worst-case robustness should be inversely proportional to class selectivity. Unlike for average-case perturbations, decreasing class selectivity decreases robustness to worst-case perturbations for ResNet18 (Figure 2) and ResNet50 (Figure A13) trained on Tiny ImageNet, and ResNet20 trained on CIFAR10 (Figure A12). For small perturbations (i.e. close to x=0), the effects of class selectivity regularization on test accuracy (class selectivity is inversely correlated with unperturbed test accuracy) appear to overwhelm the effects of perturbations. But as the magnitude of perturbation increases, a stark ordering emerges: test accuracy monotonically decreases as a function of class selectivity in ResNet18 and ResNet50 for both FGSM and PGD attacks (ResNet18: Figures 2a and 2b; ResNet50: Figures A13a and A13b). The ordering is also present for ResNet20, though less consistent for the two networks with the highest class selectivity (α = 0.7 and α = 1.0). However, increasing class selectivity is much more damaging to test accuracy in ResNet20 trained on CIFAR10 compared to ResNet18 trained on Tiny ImageNet (Leavitt and Morcos, 2020, Figure A2), so the substantial performance deficits of extreme selectivity in ResNet20 likely mask the perturbation-robustness effect. This result demonstrates that networks with sparse semantic representations are less vulnerable to worst-case perturbation than networks with distributed semantic representations. We also verified that the worst-case robustness of high-selectivity networks is not fully explained by gradient-masking (Athalye et al., 2018, Appendix A.5). Interestingly, class selectivity regularization does not appear to affect robustness to "natural" adversarial examples (Appendix A.6), which are "unmodified, real-world examples...selected to cause a model to make a mistake" (Hendrycks et al., 2020b). Performance on ImageNet-A, a benchmark of natural adversarial examples (Hendrycks et al., 2020b), was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving both worst-case and average-case robustness, many of which also fail to yield significant robustness improvements against ImageNet-A (Hendrycks et al., 2020b). We found that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse true? Does increasing robustness to worst-case perturbations also cause class selectivity to increase? We investigated this by training networks with a commonly-used technique to improve worst-case perturbation robustness, PGD training.
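As a schematic of what a single PGD-training update could look like (a sketch under our own assumptions, reusing a pgd attack routine like the one sketched in the Approach section; this is not the authors' training code):

import torch.nn.functional as F

def pgd_training_step(model, optimizer, x, y, eps, step_size, n_steps):
    # Replace the clean minibatch with worst-case perturbed inputs
    # (pgd here is an attack routine like the one sketched earlier).
    x_adv = pgd(model, x, y, eps, step_size, n_steps)
    # Ordinary supervised update on the perturbed minibatch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Increasing n_steps (or eps) strengthens the perturbations used during training, which is the knob referred to below as the intensity of PGD training.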
We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Appendix A.7). This effect was present in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f), indicating that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional. Networks whose outputs are more stable to small input perturbations are known to have improved generalization performance and worst-case perturbation robustness (Drucker and Le Cun, 1992; Novak et al., 2018; Sokolic et al., 2017; Rifai et al., 2011; Hoffman et al., 2019). To examine whether increasing class selectivity improves worst-case perturbation robustness by increasing network stability, we analyzed each network’s input-output Jacobian, which is proportional to its stability—a large-magnitude Jacobian means that a small change to the network’s input will cause a large change to its output. If class selectivity induces worst-case robustness by increasing network stability, then networks with higher class selectivity should have smaller Jacobians. But if increased class selectivity induces adversarial robustness through alternative mechanisms, then class selectivity should have no effect on the Jacobian. We found that the l2 norm of the input-output Jacobian is inversely proportional to class selectivity for ResNet18 (Figure 2c), ResNet50 (Figure A13c), and ResNet20 (Figure A12c), indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations. 4.3 VARIABILITY OF THE INPUT-UNIT GRADIENT ACROSS SAMPLES AND UNITS We observed that the input-output Jacobian is proportional to worst-case vulnerability and inversely proportional to class selectivity, but focusing on input-output stability potentially overlooks phenomena present in hidden layers and units. If class selectivity imparts worst-case robustness by making individual units less reliably perturbable—because each unit is highly tuned to a particular subset of images—then we should expect to see more variation across input-unit gradients for units in high-selectivity networks compared to units in low-selectivity networks. Alternatively, worst-case robustness in high-selectivity networks could be achieved by reducing both the magnitude and variation of units’ perturbability, in which case we would expect to observe lower variation across input-unit gradients for units in high-selectivity networks compared to low-selectivity networks. We quantified variation in unit perturbability using the coefficient of variation of the input-unit gradient across samples for each unit (CVu; Approach 3). The CV is a measure of variability that normalizes the standard deviation of a quantity by the mean. A large CV indicates high variability, a small CV indicates low variability. To quantify variation in perturbability across units, we computed the CV across units in each layer, (CVl; Approach 3). We found that units in high-selectivity networks exhibited greater variation in their perturbability than units in low-selectivity networks, both within individual units and across units in each layer. 
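To make the two variability measures concrete, here is a minimal sketch (our own notation, not the authors' code) of the per-unit and per-layer coefficients of variation defined in the Approach, assuming the input-unit gradient norms for one layer have already been collected:

import torch

def perturbability_cv(grad_norms, eps=1e-12):
    # grad_norms: (num_samples, num_units) l2 norms of the input-unit gradient
    # for one layer, one row per test sample.
    mu_u = grad_norms.mean(dim=0)             # mean perturbability of each unit
    sigma_u = grad_norms.std(dim=0)           # variation across samples for each unit
    cv_u = sigma_u / (mu_u + eps)             # CV across samples (one value per unit)
    cv_l = mu_u.std() / (mu_u.mean() + eps)   # CV across units within the layer
    return cv_u, cv_l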
This effect was present in both ResNet18 trained on Tiny ImageNet (Figure 3) and ResNet20 trained on CIFAR10 (Figure A18), although the effect was less consistent for across-unit variability in later layers in ResNet18 (Figure 3b). Interestingly, class selectivity affects both the numerator (σ) and denominator (µ) of the CV calculation for both the CV across samples and CV across units (Appendix A.8). These results indicate that high class selectivity imparts worst-case robustness by increasing the variation in perturbability within and across units, while the worst-case vulnerability associated with low class selectivity results from more consistently perturbable units. It is worth noting that the inverse can be stated with regards to average-case robustness: low variation in perturbability both within and across units in low-selectivity networks is associated with robustness to average-case perturbations, despite these units (and networks) being more perturbable on average. 4.4 DIMENSIONALITY IN EARLY LAYERS PREDICTS PERTURBATION VULNERABILITY [Figure 3: Input-unit gradient variability across samples (CVu, panel a) and across units within each layer (CVl, panel b), plotted by layer and class selectivity regularization scale α.] If the sparsity of semantic representations is unrelated to their dimensionality, then dimensionality would be unaffected by class selectivity. We found that the sparsity of a DNN’s semantic representations corresponds directly to the dimensionality of those representations. Dimensionality is inversely proportional to class selectivity in early ResNet18 layers (≤layer 9; Figure 4a), and across all of ResNet20 (Figure A21d). Networks with higher class selectivity tend to have lower dimensionality, and networks with lower class selectivity tend to have higher dimensionality. These results show that the sparsity of a network’s semantic representations is indeed reflected in those representations’ dimensionality. We next examined the dimensionality of perturbation-induced changes in representations by subtracting the perturbed activation matrix from the clean activation matrix and computing the dimensionality of this "difference matrix" (see Appendix A.1.4). Intuitively, this metric quantifies the dimensionality of the change in the representation caused by perturbing the input. If it is small, the perturbation impacts fewer units, while if it is large, more units are impacted. Interestingly, we found that the dimensionality of the changes in activations induced by both average-case (Figure 4b) and worst-case perturbations (Figure 4c) was notably higher for networks with reduced class selectivity, suggesting that decreasing class selectivity causes the changes induced by perturbed inputs to become more distributed across units. We found that the activation changes caused by average-case perturbations are higher-dimensional than the representations of the clean data in both ResNet18 (compare Figures 4b and 4a) and ResNet20 (Figures A21e and A21d), and that this effect is inversely proportional to class selectivity (Figures 4b and A21e); the increase in dimensionality from average-case perturbations was more pronounced in low-selectivity networks than in high-selectivity networks. These results indicate that class selectivity not only predicts the dimensionality of a representation, but also the change in dimensionality induced by an average-case perturbation.
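A rough sketch of this dimensionality measure (the fraction of principal components needed to reach a variance threshold, as detailed in Appendix A.1.4; the function names and the SVD-based shortcut are our own, not the authors' code):

import numpy as np

def fraction_of_dimensionality(acts, threshold=0.95):
    # acts: (num_samples, num_units) activation matrix for one layer.
    centered = acts - acts.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)    # singular values give the PCA spectrum
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    n_dims = int(np.searchsorted(explained, threshold) + 1)
    return n_dims / acts.shape[1]

def perturbation_dimensionality(clean_acts, perturbed_acts, threshold=0.95):
    # Dimensionality of the change in representation caused by the perturbation.
    return fraction_of_dimensionality(perturbed_acts - clean_acts, threshold)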
Notably, however, the increase in early-layer dimensionality was much larger for worst-case perturbations than average-case perturbations (Figure 4c; Figure A21f) . These results indicate that, while the changes in dimensionality induced by both naturalistic and adversarial perturbations are proportional to the dimensionality of the network’s representations, these changes do not consistently project onto coding-relevant dimensions of the representations. Indeed, the larger change in early-layer dimensionality caused by worst-case perturbations likely reflects targeted projection onto codingrelevant dimensions and provides intuition as to why low-selectivity networks are more susceptible to worst-case perturbations. Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they’re embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can provide misleading estimates of hidden layer dimensionality. Thus we also quantified the intrinsic dimensionality (ID) of each layer’s representations (see Appendix A.1.4). Interestingly, the results were qualitatively similar to what we observed when examining linear dimensionality (Figure A22) in both ResNet18 trained on Tiny ImageNet (Figure A22a-A22c) and ResNet20 trained on CIFAR10 (Figure A22d-A22f). Thus both linear and non-linear measures of dimensionality imply that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness. 5 DISCUSSION Our results demonstrate that changes in the sparsity of semantic representations, as measured with class selectivity, induce a trade-off between robustness to average-case vs. worst-case perturbations: highly-distributed semantic representations confer robustness to average-case perturbations, but their increased dimensionality and consistent perturbability result in vulnerability to worst-case perturbations. In contrast, sparse semantic representations yield low-dimensional representations and inconsistently-perturbable units, imparting worst-case robustness. Furthermore, the dimensionality of the difference in early-layer activations between clean and perturbed samples is larger for worst-case perturbations than for average-case perturbations. More generally, our results link average-case and worst-case perturbation robustness through class selectivity and representational dimensionality. We hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks. Scaling class selectivity regularization to datasets with large numbers of classes also remains an open problem (Leavitt and Morcos, 2020). Our findings could be utilized for practical ends and to clarify findings in prior work. Relevant to both of these issues is the task of adversarial example detection. There is conflicting evidence that intrinsic dimensionality can be used to characterize or detect adversarial (worst-case) samples (Ma et al., 2018; Lu et al., 2018). The finding that worst-case perturbations cause a marked increase in both intrinsic and linear dimensionality indicates that there may be merit in continuing to study these quantities for use in worst-case perturbation detection. 
And the observation that the causal relationship between class-selectivity and worst- and average-case robustness is bidirectional helps clarify the known benefits of sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017) on worst-case robustness. It furthermore raises the question of whether enforcing low-dimensional representations also causes class selectivity to increase. Our work may also hold practical relevance to developing robust models: class selectivity could be used as both a metric for measuring model robustness and a method for achieving robustness (via regularization). We hope future work will more comprehensively assess the utility of class selectivity as part of the deep learning toolkit for these purposes. A APPENDIX A.1 DETAILED APPROACH Unless otherwise noted: all experimental results were derived from the corrupted or adversarial test set with the parameters from the epoch that achieved the highest clean validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses. A.1.1 MODELS All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001. The maxpool layer after the first batchnorm layer in ResNet18 (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 and ResNet50 were trained for 90 epochs with a minibatch size of 4096 (ResNet18) or 1400 (ResNet50) samples with a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. ResNet20 (code modified from Idelbayev (2020)) were trained for 200 epochs using a minibatch size of 256 samples and a learning rate of 0.1, annealed by 0.1 at epochs 100 and 150. A.1.2 DATASETS Tiny Imagenet (Fei-Fei et al., 2015) consists of 500 training images and 50 images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each seed. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny Imagenet. All experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included any of our analyses. CIFAR10C consists of a dataset in which 19 different naturalistic corruptions have been applied to the CIFAR10 test set at 5 different levels of severity. Tiny ImageNetC also has 5 levels of corruption severity, but consists of 15 corruptions. We would like to note that Tiny ImageNetC does not use the Tiny ImageNet test data. While the two datasets were created using the same data generation procedure—cropping and scaling images from the same 200 ImageNet classes—they differ in the specific ImageNet images they use. 
It is possible that the images used to create Tiny ImageNetC are out-of-distribution with regard to the Tiny ImageNet training data, in which case our results from testing on Tiny ImageNetC actually underestimate the corruption robustness of our networks. The creators of Tiny ImageNetC kindly provided the clean (uncorrupted) Tiny ImageNetC data necessary for the dimensionality analysis, which relies on matched corrupted and clean data samples. A.1.3 SOFTWARE Experiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017). A.1.4 QUANTIFYING DIMENSIONALITY We quantified the dimensionality of a layer’s representations by applying PCA to the layer’s activation matrix for the clean test data and counting the number of dimensions necessary to explain 95% of the variance, then dividing by the total number of dimensions (i.e. the fraction of total dimensionality; we also replicated our results using the fraction of total dimensionality necessary to explain 90% and 99% of the variance). The same procedure was applied to compute the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to applying PCA. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps. Figure A1: Example naturalistic corruptions from the Tiny ImageNetC dataset. (a) Clean (no corruption). (b) Brightness. (c) Contrast. (d) Elastic transform. (e) Shot noise. All corruptions are shown at severity level 5/5. Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they’re embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can fail to capture the "intrinsic" dimensionality of hidden layer representations. Thus we also quantified the intrinsic dimensionality (ID) of each layer’s representations using the method of Facco et al. (2017). The method, based on that of Levina and Bickel (2005), estimates ID by computing the ratio between the distances to the second and first nearest neighbors of each data point. We used the implementation of Ansuini et al. (2019). Our procedure was otherwise identical to that used when computing the linear dimensionality: we computed the dimensionality across all test data for each layer, then divided by the number of units per layer. We then computed the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to computing ID. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps. A.2 EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY Figure A2: Effects of class selectivity regularization on test accuracy. Replicated as in Leavitt and Morcos (2020).
(a) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for ResNet18 trained on Tiny ImageNet. α denotes the sign and intensity of class selectivity regularization. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Each data point represents the mean class selectivity across all units in a single trained model. (b) Same as (a), but for ResNet20 trained on CIFAR10. A.3 ADDITIONAL RESULTS FOR AVERAGE-CASE PERTURBATION ROBUSTNESS Because modifying class selectivity can affect performance on clean (unperturbed) inputs (Leavitt and Morcos (2020); Figure A2), it is possible that the effects we observe of class selectivity on perturbed test accuracy are not caused by changes in perturbation robustness per se, but simply by changes in baseline model accuracy. We controlled for this by normalizing each model’s perturbed test accuracy by its clean (unperturbed) test accuracy. The results are generally consistent even after controlling for clean test accuracy, although increasing class selectivity does not cause the same deficits as those measured using non-normalized perturbed test accuracy in ResNet18 trained on Tiny ImageNet (Figure A3a). Interestingly, in ResNet20 trained on CIFAR10, normalizing perturbed test accuracy reveals a more dramatic improvement in perturbation robustness caused by reducing class selectivity (Figure A6c). The results for ResNet50 trained on Tiny ImageNet are entirely consistent between raw and normalized measures (Figure A9b vs. Figure A9c). Figure A3: Controlling for clean test accuracy, and effect of corruption severity across corruptions. (a) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with high class selectivity (large α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (b) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Figure A4: Mean test accuracy across corruption intensities for each corruption type for ResNet18 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against all 15/15 corruption types.
Figure A5: Trade-off between clean and perturbed test accuracy in ResNet18 tested on Tiny ImageNetC. Clean test accuracy (x-axis) vs. perturbed test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Error bars = 95% confidence intervals of the mean. Figure A6: Reducing class selectivity confers robustness to average-case perturbations in ResNet20 tested on CIFAR10C. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Figure A2b and Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with higher class selectivity (larger α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Figure A7: Mean test accuracy across corruption intensities for each corruption type for ResNet20 tested on CIFAR10C. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/19 corruption types. Error bars = 95% confidence intervals of the mean.
Figure A8: Trade-off between clean and corrupted test accuracy in ResNet20 tested on CIFAR10C. Clean test accuracy (x-axis) vs. corrupted test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Figure A9: Reducing class selectivity confers robustness to average-case perturbations in ResNet50 tested on Tiny ImageNetC. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20. Figure A10: Mean test accuracy across corruption intensities for each corruption type for ResNet50 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/15 corruption types. Error bars = 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20. A.4 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND AVERAGE-CASE ROBUSTNESS IS BIDIRECTIONAL We found that regularizing to decrease class selectivity causes robustness to average-case perturbations. But is the converse also true? Does increasing robustness to average-case perturbations also cause class selectivity to decrease?
We investigated this question by training with AugMix, a technique known to improve average-case (corruption) robustness (Hendrycks et al., 2020a). Briefly, AugMix stochastically applies a diverse set of image augmentations and uses a Jensen-Shannon Divergence consistency loss. Our AugMix parameters were as follows: mixture width: 3; mixture depth: stochastic; augmentation probability: 1; augmentation severity: 2. We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity cause average-case perturbation robustness to increase, but increasing average-case perturbation robustness also causes class selectivity to decrease. A.5 WORST-CASE PERTURBATION ROBUSTNESS We also confirmed that the worst-case robustness of high-selectivity ResNet18 and ResNet20 networks was not simply due to gradient-masking (Athalye et al., 2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14), indicating that gradient-masking does not fully account for the worst-case robustness of high-selectivity models. A.6 CLASS SELECTIVITY REGULARIZATION DOES NOT AFFECT ROBUSTNESS TO NATURAL ADVERSARIAL EXAMPLES We also examined whether class selectivity regularization affects robustness to "natural" adversarial examples, images that are "natural, unmodified, real-world examples...selected to cause a fixed model to make a mistake" (Hendrycks et al., 2020b). We tested robustness to natural adversarial examples using ImageNet-A, a dataset of natural adversarial examples that belong to ImageNet classes but consistently cause misclassification errors with high confidence (Hendrycks et al., 2020b). We adapted ImageNet-A to our models trained on Tiny ImageNet (ResNet18 and ResNet50) by only testing on the 74 image classes that overlap between ImageNet-A and Tiny ImageNet (yielding a total of 2957 samples), and downsampling the images to 64 x 64. Test accuracy was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving robustness, many of which also fail to yield significant robustness against ImageNet-A (Hendrycks et al., 2020b). A.7 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND WORST-CASE ROBUSTNESS IS BIDIRECTIONAL We observed that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse also true? Does increasing robustness to worst-case perturbations cause class selectivity to increase? We investigated this question using PGD training, a common technique for improving worst-case robustness. PGD training applies the PGD method of sample perturbation (see Approach 3) to samples during training.
We used the same parameters for PGD sample generation when training our models as when testing (Approach 3). The number of PGD iterations controls the intensity of the perturbation, and the degree of perturbation-robustness in the trained model (Madry et al., 2018). We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Figure A16). Interestingly, PGD training also appears to cause units to die (Lu et al., 2019), and the number of dead units is proportional to the intensity of PGD training (Figures A16b and A16e). Removing dead units, which have a class selectivity index of 0, from the calculation of mean class selectivity results in a clear, monotonic effect of PGD training intensity on class selectivity in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f). These results indicate that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional: increasing class selectivity not only causes increased worst-case perturbation robustness, but increasing worst-case perturbation robustness also causes increased class selectivity. A.8 STABILITY TO INPUT PERTURBATIONS IN UNITS AND LAYERS A.9 REPRESENTATIONAL DIMENSIONALITY Figure A20: Dimensionality in early layers predicts worst-case vulnerability in ResNet18 trained on Tiny ImageNet. Identical to Figure 4, but dimensionality is computed as the number of principal components needed to explain 90% of variance in (a) - (c), and 99% of variance in (d) - (f). (a) Fraction of dimensionality (y-axis; see Appendix A.1.4) as a function of layer (x-axis). (b) Dimensionality of difference between clean and average-case perturbation activations (y-axis) as a function of layer (x-axis). (c) Dimensionality of difference between clean and worst-case perturbation activations (y-axis) as a function of layer (x-axis). (d) - (f), identical to (a) - (c), but for 99% explained variance threshold.
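For completeness, here is a minimal re-implementation sketch of the TwoNN intrinsic-dimensionality estimator referenced in Appendix A.1.4 (the paper uses the implementation of Ansuini et al. (2019); the maximum-likelihood form below is a simplified variant of Facco et al.'s fitting procedure and is our own code, not the authors'):

import numpy as np
from scipy.spatial import cKDTree

def twonn_id(acts):
    # acts: (num_samples, num_units) activation matrix for one layer.
    # Distances to each point's first and second nearest neighbors
    # (k=3 because the query set includes the point itself).
    dists, _ = cKDTree(acts).query(acts, k=3)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1                              # ratios follow a Pareto law with exponent = ID
    mu = mu[np.isfinite(mu) & (mu > 1.0)]     # drop duplicates / degenerate points
    return len(mu) / np.sum(np.log(mu))       # maximum-likelihood estimate of the exponent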
1. What is the main contribution of the paper regarding class selectivity and its correlation with robustness to natural and adversarial attacks?
2. What are the strengths of the paper, particularly in explaining the phenomenon from two aspects: variability of input-unit gradient and dimensionality in early layers?
3. Do you have any concerns or questions regarding the paper's conclusions, experiments, or discussions?
4. How does the reviewer assess the paper's impact on the tradeoff between clean accuracy and adversarial accuracy, as well as the inherent tradeoff between average- and worst-case adversarial accuracy?
5. Are there any suggestions for additional experiments or analyses that could further support or explain the paper's findings?
Review
Review

This paper presents a new finding that class selectivity is negatively correlated with average-case corruptions (natural visual distortions) while positively correlated with worst-case corruptions (adversarial attacks). The authors then try to explain this phenomenon from two aspects: variability of input-unit gradient and dimensionality in early layers.

Pros:
Class selectivity is previously used as a metric to indicate model generalization or memorization. It is good to see the authors generalize its usage to measuring model robustness to natural corruptions and adversarial attacks. The finding that robustness to average-case corruptions is negatively correlated with class selectivity is no surprise to me, since it has been previously shown that low class selectivity indicates better generalization ability [1]. The intriguing finding is that robustness to adversarial images is positively correlated with class selectivity. This is counter-intuitive to me at first glance, but the explanations provided in Section 4.3 and 4.4 convince me. If the conclusions in this paper hold, I think this is one step further from the well-known accuracy-robustness tradeoff [5,6]: not only do we have a tradeoff between clean accuracy and adversarial accuracy [5,6], but also a (possibly inherent) tradeoff between average- and worst-case adversarial accuracy.

Cons:
The general idea to use class selectivity as an indicator for model robustness in this paper is very interesting. But I do think more experiments and discussions are needed to explain these phenomena, especially the different behaviors of average- and worst-case corruptions. Adversarial training (e.g., [2]) can greatly improve adversarial robustness. If the conclusions in this paper hold, we would expect adversarially trained models to have much higher class selectivity than normally trained models. I'm wondering whether the authors could kindly provide these results? Similarly, there are some methods (e.g., AugMix [4]) improving model robustness to average-case corruptions. Does the AugMix model have significantly lower class selectivity than normally trained models? In my point of view, one possible explanation for the different behaviors between average-case and worst-case corruptions could be that normal adversarial images (generated by PGD, FGSM as in your paper) are out-of-distribution samples [3], while average-case corruptions are likely to be on-distribution. So I'm a bit curious about the behaviors of the "on-manifold adversarial examples" defined in [3]: do they behave more like normal adversarial images (e.g., causing a larger increase in early-layer dimensionality) or more like average-case corruptions (causing a smaller increase in early-layer dimensionality)?

[1] On the importance of single directions for generalization. ICLR, 2018.
[2] Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR, 2018.
[3] Disentangling Adversarial Robustness and Generalization. CVPR, 2019.
[4] AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. ICLR, 2020.
[5] Robustness May Be at Odds with Accuracy. ICLR, 2019.
[6] Theoretically Principled Trade-off between Robustness and Accuracy. ICML, 2019.

Update: Thanks to the authors for their response. All my concerns are addressed and I have decided to increase my score from 6 to 7.
ICLR
Title Linking average- and worst-case perturbation robustness via class selectivity and dimensionality Abstract Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks’ representations: we found that the dimensionality of early-layer representations is inversely proportional to a network’s class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks compared to low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes compared to high-selectivity regimes, effectively creating a larger attack surface and hence vulnerability to worst-case perturbations. 1 INTRODUCTION Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron’s activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). 
Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier's performance on low-quality or naturalistically-perturbed inputs—and thus is an "average-case" measure—and adversarial robustness, which measures a classifier's performance on small, additive perturbations that are tailored to the classifier—and thus is a "worst-case" measure.1 Research on robustness has been predominantly focused on worst-case perturbations, robustness to which is affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and its common factors with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019), thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can also be thought of as a measure of the sparsity with which semantic information is represented.2 And because class selectivity regularization provides a method for controlling selectivity, and class selectivity regularization has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it.

In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs' robustness to worst-case and average-case perturbations. Our findings are as follows:
• Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions.
• In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks.
• The variability of the input-unit gradient across samples and units is proportional to a network's overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness.
• The dimensionality of activation changes caused by corruption markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.
Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously.
They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off.

2 RELATED WORK

2.1 PERTURBATION ROBUSTNESS

The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network's output while attempting to minimize or maintain below some threshold the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017). Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks is a "worst-case" measure of robustness.

1 We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input.
2 Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, thus the class information must be distributed across units; the semantic representation in this case is not sparse, it is distributed.

Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality. Multiple studies have linked activation and weight sparsity with robustness to worst-case perturbations. Adversarial training improves worst-case robustness (Goodfellow et al., 2015; Huang et al., 2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect. Researchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations. Comparatively less research has been conducted to understand the factors underlying average-case robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019).
Examining the frequency domain has elucidated one mechanism: worst-case perturbations for "baseline" models tend to be in the high frequency domain, and improvements in average-case robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness.

2.2 CLASS SELECTIVITY

One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single unit ablation, and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However, Leavitt and Morcos (2020) examined the role of single unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on—and can even improve—test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy in unperturbed (clean) inputs. Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as or elucidate a link between worst-case and average-case robustness.

3 APPROACH

A detailed description of our approach is provided in Appendix A.1.

Models and training protocols. Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). We focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text for space, though results were qualitatively similar for ResNet50, and ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation set accuracy over the training epochs, and 20 replicate models (ResNet18 and ResNet20) or 5 replicate models (ResNet50) with different random seeds were run for each hyperparameter set.

Class selectivity index. Following Leavitt and Morcos (2020), a unit's class selectivity index is calculated as follows: at every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a "unit"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as

$$\mathrm{SI} = \frac{\mu_{\max} - \mu_{-\max}}{\mu_{\max} + \mu_{-\max}} \qquad (1)$$

where µmax is the largest class-conditional mean activation and µ−max is the mean response to the remaining (i.e. non-µmax) classes. The selectivity index ranges from 0 to 1.
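To make the computation in Equation 1 concrete, the following is a minimal sketch of the class selectivity index for a single layer, assuming an (n_samples, n_units) matrix of spatially-averaged post-ReLU activations and integer class labels; the variable names and the small epsilon for numerical stability are illustrative additions, not taken from the authors' code.

```python
# Minimal sketch of the class selectivity index (Eq. 1) for one layer.
# `activations`: (n_samples, n_units) spatially-averaged post-ReLU activations.
# `labels`: (n_samples,) integer class labels. The epsilon term is an
# illustrative numerical-stability addition, not part of Eq. 1.
import torch

def class_selectivity_index(activations, labels, n_classes, eps=1e-7):
    # Class-conditional mean activation for each unit: (n_classes, n_units)
    class_means = torch.stack([activations[labels == c].mean(dim=0)
                               for c in range(n_classes)])
    mu_max = class_means.max(dim=0).values                             # largest class-conditional mean
    mu_neg_max = (class_means.sum(dim=0) - mu_max) / (n_classes - 1)   # mean of the remaining classes
    return (mu_max - mu_neg_max) / (mu_max + mu_neg_max + eps)         # SI per unit, in [0, 1]
```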
A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1. As Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a little bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters.

Class selectivity regularization. We used Leavitt and Morcos (2020)'s class selectivity regularizer to control the levels of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training:

$$\text{loss} = -\sum_{c}^{C} y_c \cdot \log(\hat{y}_c) - \alpha\,\mu_{SI} \qquad (2)$$

The left-hand term in the loss function is the standard classification cross-entropy, where c is the class index, C is the number of classes, yc is the true class label, and ŷc is the predicted class probability. The right-hand component of the loss function, −αµSI, is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term,

$$\mu_{SI} = \frac{1}{L}\sum_{l}^{L}\frac{1}{U}\sum_{u}^{U} SI_{u,l} \qquad (3)$$

where l is a convolutional layer, L is the number of layers, u is a unit, U is the number of units in a given layer, and SIu is the class selectivity index of unit u. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is α, the regularization scale, which determines whether class selectivity is promoted or discouraged. Negative values of α discourage class selectivity in individual units and positive values encourage it. The magnitude of α controls the contribution of the selectivity term to the overall loss. During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization or included in our analyses because by definition, the logit layer must be class selective in a classification task.

Measuring average-case robustness. To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 or Tiny ImageNet data, respectively, to which a set of naturalistic corruptions have been applied (Hendrycks and Dietterich, 2019, examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy.

Measuring worst-case robustness. We tested our models' worst-case (i.e. adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image's pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM.
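As an illustration of the two attacks, below is a minimal sketch of an l∞-bounded PGD attack, with FGSM as the single-step special case. It is a sketch rather than the implementation used for the experiments; `model` is assumed to map images in [0, 1] to logits, and the step size and budget used in our experiments are given in the next paragraph.

```python
# Minimal sketch of an l-infinity PGD attack; FGSM is the single-step special case
# (n_steps=1, step_size=epsilon). Illustrative only, not the paper's implementation.
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon, step_size, n_steps):
    x_adv = images.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()                      # ascend the loss
            x_adv = images + (x_adv - images).clamp(-epsilon, epsilon)   # project onto the l-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                                # keep pixels in a valid range
    return x_adv.detach()
```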
We used a step size of 0.0001 and an l∞ norm perturbation budget (ε) of 16/255.

Computing the stability of units and layers. To quantify variation in networks' perturbability, we first computed the l2 norm of the input-unit gradient for each unit u in a network. We then computed the mean (µu) and standard deviation (σu) of the norm across samples for each unit. σu/µu yields the coefficient of variation (Everitt, 2002) for a unit (CVu), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of µu across units in a layer l, σ(µu) = σl, and dividing this by the corresponding mean across units µ(µu) = µl, to yield the CV across units σl/µl = CVl.

Figure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results.

4 RESULTS

4.1 AVERAGE-CASE ROBUSTNESS IS INVERSELY PROPORTIONAL TO CLASS SELECTIVITY

Certain kinds of sparsity—including reliance on single directions (Morcos et al., 2018), and the semantic sparsity measured by class selectivity (Leavitt and Morcos, 2020)—have been shown to impair network performance. We sought to extend this question to robustness: how does the sparsity of semantic representations affect robustness to average-case perturbations of the input data? We used a recently-introduced method (Leavitt and Morcos (2020); Approach 3) to modulate the amount of class selectivity learned by DNNs (Figure A2 demonstrates effects of selectivity regularization). We then examined how this affected performance on Tiny ImageNetC and CIFAR10C, two benchmark datasets for average-case corruptions (Approach 3; example images in Figure A1). Changing the level of class selectivity across neurons in a network could have one of the following effects on corruption robustness: If concentrating semantic representations into fewer neurons (i.e. promoting semantic sparsity) provides fewer potent dimensions on which perturbed inputs can act, then increasing class selectivity should confer networks with robustness to average-case perturbations, while reducing class selectivity should render networks more vulnerable. Alternatively, if distributing semantic representations across more units (i.e.
reducing sparsity) dilutes the changes induced by perturbed inputs, then reducing class selectivity should increase a network's robustness to average-case perturbations, while increasing class selectivity should reduce robustness. We found that decreasing class selectivity leads to increased robustness to average-case perturbations for both ResNet18 tested on Tiny ImageNetC (Figure 1) and ResNet20 tested on CIFAR10C (Figure A6). In ResNet18, we found that mean test accuracy on corrupted inputs increases as class selectivity decreases (Figure 1), with test accuracy reaching a maximum at regularization scale α = −2.0 (mean test accuracy across corruptions and severities of 17 at α = −2.0), representing a 3.5 percentage point (pp) increase relative to no selectivity regularization (i.e. α = 0; test accuracy of 13.5 at α = 0). In contrast, regularizing to increase class selectivity has either no effect or a negative impact on corruption robustness. Corrupted test accuracy remains relatively stable until α = 1.0, after which point it declines. The results are qualitatively similar for ResNet50 tested on Tiny ImageNetC (Figure A9), and for ResNet20 tested on CIFAR10C (Figure A6), except the vulnerability to corruption caused by increasing selectivity is even more dramatic in ResNet20. We also found similar results when controlling for the difference in clean accuracy for models with different α (Appendix A.3).

We observed that regularizing to decrease class selectivity causes robustness to average-case perturbations. But it's possible that the causality is unidirectional, leading to the question of whether the converse is also true: does increasing robustness to average-case perturbations cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve average-case robustness (Hendrycks et al., 2020a). We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Appendix A.4; Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity improve average-case perturbation robustness, but improving average-case perturbation-robustness also causes class selectivity to decrease.

We also found that the effect of class selectivity on perturbed robustness is consistent across corruption types. Regularizing against selectivity improves perturbation robustness in all 15 Tiny ImageNetC corruption types for ResNet18 (Figure A4) and 14 of 15 Tiny ImageNetC corruption types in ResNet50 (Figure A10), and 14 of 19 corruption types in CIFAR10C for ResNet20 (Figure A7). Together these results demonstrate that reduced class selectivity confers robustness to average-case perturbations, implying that distributing semantic representations across neurons—i.e. low sparsity—may dilute the changes induced by average-case perturbations.

4.2 CLASS SELECTIVITY IMPARTS WORST-CASE PERTURBATION ROBUSTNESS

We showed that the sparsity of a network's semantic representations, as measured with class selectivity, is causally related to a network's robustness to average-case perturbations.
But how does the sparsity of semantic representations affect worst-case robustness? We addressed this question by testing our class selectivity-regularized networks on inputs that had been perturbed using one of two gradient-based methods (see Approach 3). If distributing semantic representations across units provides more dimensions upon which a worst-case perturbation is potent, then worst-case perturbation robustness should be proportional to class selectivity. However, if increasing the sparsity of semantic representations creates more responsive individual neurons, then worst-case robustness should be inversely proportional to class selectivity.

Unlike average-case perturbations, decreasing class selectivity decreases robustness to worst-case perturbations for ResNet18 (Figure 2) and ResNet50 (Figure A13) trained on Tiny ImageNet, and ResNet20 trained on CIFAR10 (Figure A12). For small perturbations (i.e. close to x=0), the effects of class selectivity regularization on test accuracy (class selectivity is inversely correlated with unperturbed test accuracy) appear to overwhelm the effects of perturbations. But as the magnitude of perturbation increases, a stark ordering emerges: test accuracy monotonically decreases as a function of class selectivity in ResNet18 and ResNet50 for both FGSM and PGD attacks (ResNet18: Figures 2a and 2b; ResNet50: Figures A13a and A13b). The ordering is also present for ResNet20, though less consistent for the two networks with the highest class selectivity (α = 0.7 and α = 1.0). However, increasing class selectivity is much more damaging to test accuracy in ResNet20 trained on CIFAR10 compared to ResNet18 trained on Tiny ImageNet (Leavitt and Morcos, 2020, Figure A2), so the substantial performance deficits of extreme selectivity in ResNet20 likely mask the perturbation-robustness. This result demonstrates that networks with sparse semantic representations are less vulnerable to worst-case perturbation than networks with distributed semantic representations. We also verified that the worst-case robustness of high-selectivity networks is not fully explained by gradient-masking (Athalye et al., 2018, Appendix A.5).

Interestingly, class selectivity regularization does not appear to affect robustness to "natural" adversarial examples (Appendix A.6), which are "unmodified, real-world examples...selected to cause a model to make a mistake" (Hendrycks et al., 2020b). Performance on ImageNet-A, a benchmark of natural adversarial examples (Hendrycks et al., 2020b), was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving both worst-case and average-case robustness, many of which also fail to yield significant robustness improvements against ImageNet-A (Hendrycks et al., 2020b).

We found that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse true? Does increasing robustness to worst-case perturbations also cause class selectivity to increase? We investigated this by training networks with a commonly-used technique to improve worst-case perturbation robustness, PGD training.
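A minimal sketch of PGD training is shown below: each minibatch is replaced by PGD-perturbed samples (reusing the illustrative pgd_attack sketch above) before the usual gradient step. The loader and optimizer plumbing is an assumption for illustration, not the training code used for the experiments.

```python
# Minimal sketch of PGD (adversarial) training: train on PGD-perturbed minibatches.
# Reuses the illustrative `pgd_attack` sketch above; the data/optimizer plumbing
# is an assumption, not the authors' implementation.
import torch.nn.functional as F

def pgd_training_epoch(model, loader, optimizer, epsilon, step_size, n_pgd_steps):
    model.train()
    for images, labels in loader:
        adv_images = pgd_attack(model, images, labels, epsilon, step_size, n_pgd_steps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)  # loss on perturbed samples only
        loss.backward()
        optimizer.step()
```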
We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Appendix A.7). This effect was present in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f), indicating that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional.

Networks whose outputs are more stable to small input perturbations are known to have improved generalization performance and worst-case perturbation robustness (Drucker and Le Cun, 1992; Novak et al., 2018; Sokolic et al., 2017; Rifai et al., 2011; Hoffman et al., 2019). To examine whether increasing class selectivity improves worst-case perturbation robustness by increasing network stability, we analyzed each network's input-output Jacobian, whose magnitude is inversely related to its stability—a large-magnitude Jacobian means that a small change to the network's input will cause a large change to its output. If class selectivity induces worst-case robustness by increasing network stability, then networks with higher class selectivity should have smaller Jacobians. But if increased class selectivity induces adversarial robustness through alternative mechanisms, then class selectivity should have no effect on the Jacobian. We found that the l2 norm of the input-output Jacobian is inversely proportional to class selectivity for ResNet18 (Figure 2c), ResNet50 (Figure A13c), and ResNet20 (Figure A12c), indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations.

4.3 VARIABILITY OF THE INPUT-UNIT GRADIENT ACROSS SAMPLES AND UNITS

We observed that the input-output Jacobian is proportional to worst-case vulnerability and inversely proportional to class selectivity, but focusing on input-output stability potentially overlooks phenomena present in hidden layers and units. If class selectivity imparts worst-case robustness by making individual units less reliably perturbable—because each unit is highly tuned to a particular subset of images—then we should expect to see more variation across input-unit gradients for units in high-selectivity networks compared to units in low-selectivity networks. Alternatively, worst-case robustness in high-selectivity networks could be achieved by reducing both the magnitude and variation of units' perturbability, in which case we would expect to observe lower variation across input-unit gradients for units in high-selectivity networks compared to low-selectivity networks.

We quantified variation in unit perturbability using the coefficient of variation of the input-unit gradient across samples for each unit (CVu; Approach 3). The CV is a measure of variability that normalizes the standard deviation of a quantity by the mean. A large CV indicates high variability, a small CV indicates low variability. To quantify variation in perturbability across units, we computed the CV across units in each layer (CVl; Approach 3). We found that units in high-selectivity networks exhibited greater variation in their perturbability than units in low-selectivity networks, both within individual units and across units in each layer.
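The sketch below illustrates one way the within-unit and across-unit coefficients of variation (CVu and CVl) from Approach 3 can be computed for a single layer; the `unit_activations` helper and the per-unit loop are illustrative simplifications rather than the analysis code used for these results.

```python
# Illustrative computation of CV_u (variability of a unit's input-unit gradient
# norm across samples) and CV_l (variability of mean gradient norms across units
# in a layer). `unit_activations(model, x)` is an assumed helper returning the
# spatially-averaged activations of one layer, shape (batch, n_units).
import torch

def gradient_cv(model, unit_activations, x):
    n_units = unit_activations(model, x).shape[1]
    grad_norms = torch.zeros(x.shape[0], n_units)
    for u in range(n_units):
        x_in = x.clone().requires_grad_(True)
        unit_sum = unit_activations(model, x_in)[:, u].sum()  # sum over batch -> per-sample gradients
        grad, = torch.autograd.grad(unit_sum, x_in)
        grad_norms[:, u] = grad.flatten(1).norm(dim=1)        # l2 norm of each input-unit gradient
    mu_u = grad_norms.mean(dim=0)                             # mean across samples, per unit
    sigma_u = grad_norms.std(dim=0)                           # std across samples, per unit
    cv_u = sigma_u / mu_u                                     # within-unit variability (CV_u)
    cv_l = mu_u.std() / mu_u.mean()                           # across-unit variability for the layer (CV_l)
    return cv_u, cv_l
```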
This effect was present in both ResNet18 trained on Tiny ImageNet (Figure 3) and ResNet20 trained on CIFAR10 (Figure A18), although the effect was less consistent for across-unit variability in later layers in ResNet18 (Figure 3b). Interestingly, class selectivity affects both the numerator (σ) and denominator (µ) of the CV calculation for both the CV across samples and CV across units (Appendix A.8). These results indicate that high class selectivity imparts worst-case robustness by increasing the variation in perturbability within and across units, while the worst-case vulnerability associated with low class selectivity results from more consistently perturbable units. It is worth noting that the inverse can be stated with regards to average-case robustness: low variation in perturbability both within and across units in low-selectivity networks is associated with robustness to average-case perturbations, despite these units (and networks) being more perturbable on average.

4.4 DIMENSIONALITY IN EARLY LAYERS PREDICTS PERTURBATION VULNERABILITY

[Figure 3: input-unit gradient variability within units (CVu; panel a) and across units in a layer (CVl; panel b) as a function of layer and class selectivity regularization scale (α), for ResNet18.]

…dimensionality would be unaffected by class selectivity. We found that the sparsity of a DNN's semantic representations corresponds directly to the dimensionality of those representations. Dimensionality is inversely proportional to class selectivity in early ResNet18 layers (≤ layer 9; Figure 4a), and across all of ResNet20 (Figure A21d). Networks with higher class selectivity tend to have lower dimensionality, and networks with lower class selectivity tend to have higher dimensionality. These results show that the sparsity of a network's semantic representations is indeed reflected in those representations' dimensionality.

We next examined the dimensionality of perturbation-induced changes in representations by subtracting the perturbed activation matrix from the clean activation matrix and computing the dimensionality of this "difference matrix" (see Appendix A.1.4). Intuitively, this metric quantifies the dimensionality of the change in the representation caused by perturbing the input. If it is small, the perturbation impacts fewer units, while if it is large, more units are impacted. Interestingly, we found that the dimensionality of the changes in activations induced by both average-case (Figure 4b) and worst-case perturbations (Figure 4c) was notably higher for networks with reduced class-selectivity, suggesting that decreasing class selectivity causes changes in input to become more distributed. We found that the activation changes caused by average-case perturbations are higher-dimensional than the representations of the clean data in both ResNet18 (compare Figures 4b and 4a) and ResNet20 (Figures A21e and A21d), and that this effect is inversely proportional to class selectivity (Figures 4b and A21e); the increase in dimensionality from average-case perturbations was more pronounced in low-selectivity networks than in high-selectivity networks. These results indicate that class selectivity not only predicts the dimensionality of a representation, but also the change in dimensionality induced by an average-case perturbation.
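A minimal sketch of the dimensionality measure used for these analyses (detailed in Appendix A.1.4) is shown below: the fraction of principal components needed to explain a target fraction of variance, applied either to clean activations or to the clean-minus-perturbed difference matrix. Function and variable names are illustrative.

```python
# Illustrative computation of the fraction-of-dimensionality measure (Appendix A.1.4):
# the number of principal components needed to explain a target fraction of variance,
# divided by the total number of units. Inputs are (n_samples, n_units) activation matrices.
import numpy as np
from sklearn.decomposition import PCA

def fraction_of_dimensionality(activations, explained_variance=0.95):
    pca = PCA().fit(activations)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    n_components = int(np.searchsorted(cumulative, explained_variance)) + 1
    return n_components / activations.shape[1]

def perturbation_dimensionality(clean_acts, perturbed_acts, explained_variance=0.95):
    # Dimensionality of the perturbation-induced change ("difference matrix")
    return fraction_of_dimensionality(clean_acts - perturbed_acts, explained_variance)
```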
Notably, however, the increase in early-layer dimensionality was much larger for worst-case perturbations than average-case perturbations (Figure 4c; Figure A21f). These results indicate that, while the changes in dimensionality induced by both naturalistic and adversarial perturbations are proportional to the dimensionality of the network's representations, these changes do not consistently project onto coding-relevant dimensions of the representations. Indeed, the larger change in early-layer dimensionality caused by worst-case perturbations likely reflects targeted projection onto coding-relevant dimensions and provides intuition as to why low-selectivity networks are more susceptible to worst-case perturbations. Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can provide misleading estimates of hidden layer dimensionality. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations (see Appendix A.1.4). Interestingly, the results were qualitatively similar to what we observed when examining linear dimensionality (Figure A22) in both ResNet18 trained on Tiny ImageNet (Figure A22a-A22c) and ResNet20 trained on CIFAR10 (Figure A22d-A22f). Thus both linear and non-linear measures of dimensionality imply that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.

5 DISCUSSION

Our results demonstrate that changes in the sparsity of semantic representations, as measured with class selectivity, induce a trade-off between robustness to average-case vs. worst-case perturbations: highly-distributed semantic representations confer robustness to average-case perturbations, but their increased dimensionality and consistent perturbability result in vulnerability to worst-case perturbations. In contrast, sparse semantic representations yield low-dimensional representations and inconsistently-perturbable units, imparting worst-case robustness. Furthermore, the dimensionality of the difference in early-layer activations between clean and perturbed samples is larger for worst-case perturbations than for average-case perturbations. More generally, our results link average-case and worst-case perturbation robustness through class selectivity and representational dimensionality.

We hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks. Scaling class selectivity regularization to datasets with large numbers of classes also remains an open problem (Leavitt and Morcos, 2020). Our findings could be utilized for practical ends and to clarify findings in prior work. Relevant to both of these issues is the task of adversarial example detection. There is conflicting evidence that intrinsic dimensionality can be used to characterize or detect adversarial (worst-case) samples (Ma et al., 2018; Lu et al., 2018). The finding that worst-case perturbations cause a marked increase in both intrinsic and linear dimensionality indicates that there may be merit in continuing to study these quantities for use in worst-case perturbation detection.
And the observation that the causal relationship between class-selectivity and worst- and average-case robustness is bidirectional helps clarify the known benefits of sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017) on worst-case robustness. It furthermore raises the question of whether enforcing low-dimensional representations also causes class selectivity to increase. Our work may also hold practical relevance to developing robust models: class selectivity could be used as both a metric for measuring model robustness and a method for achieving robustness (via regularization). We hope future work will more comprehensively assess the utility of class selectivity as part of the deep learning toolkit for these purposes.

A APPENDIX

A.1 DETAILED APPROACH

Unless otherwise noted: all experimental results were derived from the corrupted or adversarial test set with the parameters from the epoch that achieved the highest clean validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses.

A.1.1 MODELS

All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001. The maxpool layer after the first batchnorm layer in ResNet18 (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 and ResNet50 were trained for 90 epochs with a minibatch size of 4096 (ResNet18) or 1400 (ResNet50) samples with a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. ResNet20 (code modified from Idelbayev (2020)) was trained for 200 epochs using a minibatch size of 256 samples and a learning rate of 0.1, annealed by 0.1 at epochs 100 and 150.

A.1.2 DATASETS

Tiny Imagenet (Fei-Fei et al., 2015) consists of 500 training images and 50 validation images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each seed. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny Imagenet. All experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses. CIFAR10C consists of a dataset in which 19 different naturalistic corruptions have been applied to the CIFAR10 test set at 5 different levels of severity. Tiny ImageNetC also has 5 levels of corruption severity, but consists of 15 corruptions. We would like to note that Tiny ImageNetC does not use the Tiny ImageNet test data. While the two datasets were created using the same data generation procedure—cropping and scaling images from the same 200 ImageNet classes—they differ in the specific ImageNet images they use.
It is possible that the images used to create Tiny ImageNetC are out-of-distribution with regards to the Tiny ImageNet training data, in which case our results from testing on Tiny ImageNetC actually underestimate the corruption robustness of our networks. The creators of Tiny ImageNetC kindly provided the clean (uncorrupted) Tiny ImageNetC data necessary for the dimensionality analysis, which relies on matched corrupted and clean data samples.

A.1.3 SOFTWARE

Experiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017).

A.1.4 QUANTIFYING DIMENSIONALITY

We quantified the dimensionality of a layer's representations by applying PCA to the layer's activation matrix for the clean test data and counting the number of dimensions necessary to explain 95% of the variance, then dividing by the total number of dimensions (i.e. the fraction of total dimensionality; we also replicated our results using the fraction of total dimensionality necessary to explain 90% and 99% of the variance). The same procedure was applied to compute the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to applying PCA. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.

Figure A1: Example naturalistic corruptions from the Tiny ImageNetC dataset. (a) Clean (no corruption). (b) Brightness. (c) Contrast. (d) Elastic transform. (e) Shot noise. All corruptions are shown at severity level 5/5.

Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can fail to capture the "intrinsic" dimensionality of hidden layer representations. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations using the method of Facco et al. (2017). The method, based on that of Levina and Bickel (2005), estimates ID by computing the ratio between the distances to the second and first nearest neighbors of each data point. We used the implementation of Ansuini et al. (2019). Our procedure was otherwise identical to when computing the linear dimensionality: we computed the dimensionality across all test data for each layer, then divided by the number of units per layer. We then computed the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to computing ID. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.

A.2 EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY

Figure A2: Effects of class selectivity regularization on test accuracy. Replicated as in Leavitt and Morcos (2020).
(a) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for ResNet18 trained on Tiny ImageNet. α denotes the sign and intensity of class selectivity regularization. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Each data point represents the mean class selectivity across all units in a single trained model. (b) Same as (a), but for ResNet20 trained on CIFAR10.

A.3 ADDITIONAL RESULTS FOR AVERAGE-CASE PERTURBATION ROBUSTNESS

Because modifying class selectivity can affect performance on clean (unperturbed) inputs (Leavitt and Morcos (2020); Figure A2), it is possible that the effects we observe of class selectivity on perturbed test accuracy are not caused by changes in perturbation robustness per se, but simply by changes in baseline model accuracy. We controlled for this by normalizing each model's perturbed test accuracy by its clean (unperturbed) test accuracy. The results are generally consistent even after controlling for clean test accuracy, although increasing class selectivity does not cause the same deficits as it does when measured using non-normalized perturbed test accuracy in ResNet18 trained on Tiny ImageNet (Figure A3a). Interestingly, in ResNet20 trained on CIFAR10, normalizing perturbed test accuracy reveals a more dramatic improvement in perturbation robustness caused by reducing class selectivity (Figure A6c). The results for ResNet50 trained on Tiny ImageNet are entirely consistent between raw vs. normalized measures (Figure A9b vs. Figure A9c).

Figure A3: Controlling for clean test accuracy, and effect of corruption severity across corruptions. (a) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with high class selectivity (large α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (b) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A4: Mean test accuracy across corruption intensities for each corruption type for ResNet18 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against all 15/15 corruption types.
Figure A5: Trade-off between clean and perturbed test accuracy in ResNet18 tested on Tiny ImageNetC. Clean test accuracy (x-axis) vs. perturbed test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Error bars = 95% confidence intervals of the mean.

Figure A6: Reducing class selectivity confers robustness to average-case perturbations in ResNet20 tested on CIFAR10C. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Figure A2b and Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with higher class selectivity (larger α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A7: Mean test accuracy across corruption intensities for each corruption type for ResNet20 tested on CIFAR10C. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/19 corruption types. Error bars = 95% confidence intervals of the mean.
Figure A8: Trade-off between clean and corrupted test accuracy in ResNet20 tested on CIFAR10C. Clean test accuracy (x-axis) vs. corrupted test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types.

Figure A9: Reducing class selectivity confers robustness to average-case perturbations in ResNet50 tested on Tiny ImageNetC. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20.

Figure A10: Mean test accuracy across corruption intensities for each corruption type for ResNet50 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/15 corruption types. Error bars = 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size—only 5 replicates per α instead of 20.

A.4 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND AVERAGE-CASE ROBUSTNESS IS BIDIRECTIONAL

We found that regularizing to decrease class selectivity causes robustness to average-case perturbations. But is the converse also true? Does increasing robustness to average-case perturbations also cause class selectivity to increase?
We investigated this question by training with AugMix, a technique known to improve average-case robustness (Hendrycks et al., 2020a). Briefly, AugMix stochastically applies a diverse set of image augmentations and uses a Jensen-Shannon Divergence consistency loss. Our AugMix parameters were as follows: mixture width: 3; mixture depth: stochastic; augmentation probability: 1; augmentation severity: 2. We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity cause average-case perturbation robustness to increase, but increasing average-case perturbation robustness also causes class selectivity to decrease.

A.5 WORST-CASE PERTURBATION ROBUSTNESS We also confirmed that the worst-case robustness of high-selectivity ResNet18 and ResNet20 networks was not simply due to gradient-masking (Athalye et al., 2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14), indicating that gradient-masking does not fully account for the worst-case robustness of high-selectivity models.

A.6 CLASS SELECTIVITY REGULARIZATION DOES NOT AFFECT ROBUSTNESS TO NATURAL ADVERSARIAL EXAMPLES We also examined whether class selectivity regularization affects robustness to "natural" adversarial examples, images that are "natural, unmodified, real-world examples...selected to cause a fixed model to make a mistake" (Hendrycks et al., 2020b). We tested robustness to natural adversarial examples using ImageNet-A, a dataset of natural adversarial examples that belong to ImageNet classes but consistently cause misclassification errors with high confidence (Hendrycks et al., 2020b). We adapted ImageNet-A to our models trained on Tiny ImageNet (ResNet18 and ResNet50) by only testing on the 74 image classes that overlap between ImageNet-A and Tiny ImageNet (yielding a total of 2957 samples), and downsampling the images to 64 x 64. Test accuracy was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving robustness, many of which also fail to yield significant robustness against ImageNet-A (Hendrycks et al., 2020b).

A.7 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND WORST-CASE ROBUSTNESS IS BIDIRECTIONAL We observed that regularizing to increase class selectivity increases robustness to worst-case perturbations. But is the converse also true? Does increasing robustness to worst-case perturbations cause class selectivity to increase? We investigated this question using PGD training, a common technique for improving worst-case robustness. PGD training applies the PGD method of sample perturbation (see Approach 3) to samples during training.
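For readers who want a concrete picture of the procedure just described, the sketch below shows one way a PGD training step could be implemented in PyTorch. The toy model, the random data, and the specific values of eps, step size, and number of steps are illustrative assumptions of this sketch only; they are not the implementation used for the experiments reported here.

```python
# Illustrative sketch of PGD adversarial training (not the authors' code).
# The model, data, and hyperparameters below are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=16 / 255, step_size=0.0001, n_steps=40):
    """Iteratively perturb x to increase the loss, keeping the perturbation
    inside an l-infinity ball of radius eps around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()            # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)           # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                      # keep a valid pixel range
    return x_adv.detach()

# One PGD-training step on toy data (placeholder model and batch).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = pgd_attack(model, x, y)
optimizer.zero_grad()
F.cross_entropy(model(x_adv), y).backward()
optimizer.step()
```

The inner loop ascends the loss on the input and projects back into the l-infinity ball after every step; the outer step then trains on the perturbed batch exactly as it would on clean data.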
We used the same parameters for PGD sample generation when training our models as when testing (Approach 3). The number of PGD iterations controls the intensity of the perturbation, and the degree of perturbation robustness in the trained model (Madry et al., 2018). We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Figure A16). Interestingly, PGD training also appears to cause units to die (Lu et al., 2019), and the number of dead units is proportional to the intensity of PGD training (Figures A16b and A16e). Removing dead units, which have a class selectivity index of 0, from the calculation of mean class selectivity results in a clear, monotonic effect of PGD training intensity on class selectivity in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f). These results indicate that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional: increasing class selectivity not only causes increased worst-case perturbation robustness, but increasing worst-case perturbation robustness also causes increased class selectivity.

A.8 STABILITY TO INPUT PERTURBATIONS IN UNITS AND LAYERS

A.9 REPRESENTATIONAL DIMENSIONALITY

Figure A20: Dimensionality in early layers predicts worst-case vulnerability in ResNet18 trained on Tiny ImageNet. Identical to Figure 4, but dimensionality is computed as the number of principal components needed to explain 90% of variance in (a) - (c), and 99% of variance in (d) - (f). (a) Fraction of dimensionality (y-axis; see Appendix A.1.4) as a function of layer (x-axis). (b) Dimensionality of difference between clean and average-case perturbation activations (y-axis) as a function of layer (x-axis). (c) Dimensionality of difference between clean and worst-case perturbation activations (y-axis) as a function of layer (x-axis). (d) - (f), identical to (a) - (c), but for 99% explained variance threshold.
1. What is the main contribution of the paper regarding robustness and class selectivity?
2. What are the strengths and weaknesses of the paper's empirical results?
3. How does the reviewer assess the significance of the results in Figure 2?
4. What is the reviewer's concern regarding the interpretation of Figure 4?
5. Does the reviewer have any questions about the paper's use of the term "causal"?
6. How does the reviewer evaluate the method of dimensionality estimation used in the paper?
7. Are there any additional concerns or suggestions for improvement mentioned by the reviewer?
Review
Review
##########################################################################
Summary:
This work empirically studies the relationship between robustness and class selectivity, a measure of neuron variability between classes. Robustness to both adversarial ("worst-case") perturbations and corruptions ("average-case") is considered. This work builds off the recent work of Leavitt and Morcos (2020) (currently in review at ICLR 2021), who claim empirical evidence that class selectivity may be harmful for generalization. The experiments in this paper examine the robustness (in both senses) of networks explicitly regularized for class selectivity. The main empirical claims are that (1) class selectivity is negatively correlated with robustness to corruptions, and (2) class selectivity is positively correlated with robustness to adversarial perturbations.
##########################################################################
Reasons for score:
Overall I vote for rejection. The authors frame the results in connection to sparsity and dimensionality. But at present the evidence for this connection appears preliminary. For example, the differences in the class selectivity curves in Figure 2 appear marginal. On the other hand, Figure 1 does seem convincing for the claim that reducing class selectivity improves robustness to corruptions.
##########################################################################
Pros:
Clear results in Figure 1a
Important topic
Potentially relevant results
##########################################################################
Cons:
Marginal results in Figure 2
Unclear results in Figure 4
Measurements of dimensionality limited to PCA
Not easy to read. The paper appears hastily written.
##########################################################################
Questions during rebuttal period:
Please argue why the differences between class-selective curves in Figure 2 are significant.
I am quite unclear on how to interpret Figure 4. Please clarify.
The authors use the word "causal" several times in the paper, which appears to me dubious. I can find no justification for a claim of causality here, since the results are correlative. Can the authors clarify this?
The authors only examine one dimensionality estimation method: the number of PCA components required to capture 95% of the data variance. Dimensionality estimation with PCA on data with non-linear (i.e. manifold) structure is problematic. Thus I have doubts about the measurements of dimensionality used here. The authors may consider adding the method of Levina and Bickel [0].
[0] Maximum Likelihood Estimation of Intrinsic Dimension - Levina and Bickel (NeurIPS 2004) https://papers.nips.cc/paper/2577-maximum-likelihood-estimation-of-intrinsic-dimension
#########################################################################
Additional Feedback:
It is the prerogative of the authors to choose the words they believe best express their message. However, I lament the authors' choice to use "worst-case perturbation" and "average-case perturbation" to refer to "adversarial attack" and "corruption". The literature on adversarial attacks is quite large at this point, and "adversarial" is the de facto terminology. The authors claim their terminology is more general; however, I cannot see the justification given the widespread existing usage.
#########################################################################
POST-REBUTTAL RESPONSE:
I read the authors' rebuttal but have decided to not increase my score.
I still have doubts over the claims in this paper.
ICLR
Title Linking average- and worst-case perturbation robustness via class selectivity and dimensionality Abstract Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity—the variability of a unit’s responses across data classes or dimensions—is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks’ representations: we found that the dimensionality of early-layer representations is inversely proportional to a network’s class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks compared to low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes compared to high-selectivity regimes, effectively creating a larger attack surface and hence vulnerability to worst-case perturbations. 1 INTRODUCTION Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network’s decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron’s activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). 
Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier's performance on low-quality or naturalistically-perturbed inputs—and thus is an "average-case" measure—and adversarial robustness, which measures a classifier's performance on small, additive perturbations that are tailored to the classifier—and thus is a "worst-case" measure. [1] Research on robustness has been predominantly focused on worst-case perturbations, a form of robustness that is affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and its common factors with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019); thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can also be thought of as a measure of the sparsity with which semantic information is represented. [2] And because class selectivity regularization provides a method for controlling selectivity, and has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it. In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs' robustness to worst-case and average-case perturbations. Our findings are as follows:
• Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions.
• In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks.
• The variability of the input-unit gradient across samples and units is proportional to a network's overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness.
• The dimensionality of activation changes caused by perturbations markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.
Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously.
They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off.

[1] We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input.
[2] Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, thus the class information must be distributed across units; the semantic representation in this case is not sparse, it is distributed.

2 RELATED WORK 2.1 PERTURBATION ROBUSTNESS The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network's output while attempting to minimize or maintain below some threshold the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017). Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks is a "worst-case" measure of robustness. Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality. Multiple studies have linked activation and weight sparsity with robustness to worst-case perturbations. Adversarial training improves worst-case robustness (Goodfellow et al., 2015; Huang et al., 2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect. Researchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations. Comparatively less research has been conducted to understand the factors underlying average-case robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019).
Examining the frequency domain has elucidated one mechanism: worst-case perturbations for "baseline" models tend to be in the high frequency domain, and improvements in average-case robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness. 2.2 CLASS SELECTIVITY One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single unit ablation, and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However, Leavitt and Morcos (2020) examined the role of single unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on—and can even improve—test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy on unperturbed (clean) inputs. Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as or elucidate a link between worst-case and average-case robustness. 3 APPROACH A detailed description of our approach is provided in Appendix A.1. Models and training protocols Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). We focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text for space, though results were qualitatively similar for ResNet50, and ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation set accuracy over the training epochs, and 20 replicate models (ResNet18 and ResNet20) or 5 replicate models (ResNet50) with different random seeds were run for each hyperparameter set. Class selectivity index Following Leavitt and Morcos (2020), a unit's class selectivity index is calculated as follows: At every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a "unit"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as follows: SI = (µmax − µ−max) / (µmax + µ−max)   (1) where µmax is the largest class-conditional mean activation and µ−max is the mean response to the remaining (i.e. non-µmax) classes. The selectivity index ranges from 0 to 1.
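As an illustration only, the following sketch shows one way the selectivity index of Eq. 1 could be computed from unit activations; the tensor shapes, the toy data, and the small eps added to the denominator (to avoid division by zero for silent units) are assumptions of this sketch rather than details taken from the paper.

```python
# Illustrative sketch of the class selectivity index (Eq. 1); not the authors' code.
# Assumes activations have already been averaged over each unit's filter map.
import torch

def class_selectivity_index(unit_activations, labels, n_classes, eps=1e-7):
    """unit_activations: (n_samples, n_units) tensor of per-sample mean activations.
    Returns a (n_units,) tensor of selectivity indices in [0, 1]."""
    class_means = torch.stack([unit_activations[labels == c].mean(dim=0)
                               for c in range(n_classes)])         # (n_classes, n_units)
    mu_max = class_means.max(dim=0).values                         # largest class-conditional mean
    mu_rest = (class_means.sum(dim=0) - mu_max) / (n_classes - 1)  # mean of the remaining classes
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)           # Eq. 1; eps guards against 0/0

# Toy usage: 1000 samples, 64 units (post-ReLU, so non-negative), 10 classes.
activations = torch.rand(1000, 64)
labels = torch.randint(0, 10, (1000,))
si = class_selectivity_index(activations, labels, n_classes=10)
print(si.shape, float(si.min()), float(si.max()))
```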
A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1. As Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a little bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters. Class selectivity regularization We used the class selectivity regularizer of Leavitt and Morcos (2020) to control the levels of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training: loss = − Σc yc · log(ŷc) − αµSI   (2) The left-hand term in the loss function is the standard classification cross-entropy, where c is the class index, C is the number of classes, yc is the true class label, and ŷc is the predicted class probability. The right-hand component of the loss function, −αµSI, is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term, µSI = (1/L) Σl (1/U) Σu SIu,l   (3) where l is a convolutional layer, L is the number of layers, u is a unit, U is the number of units in a given layer, and SIu,l is the class selectivity index of unit u in layer l. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is α, the regularization scale, which determines whether class selectivity is promoted or discouraged. Negative values of α discourage class selectivity in individual units and positive values encourage it. The magnitude of α controls the contribution of the selectivity term to the overall loss. During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization or included in our analyses because by definition, the logit layer must be class selective in a classification task. Measuring average-case robustness To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 or Tiny ImageNet data, respectively, to which a set of naturalistic corruptions have been applied (Hendrycks and Dietterich, 2019, examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy. Measuring worst-case robustness We tested our models' worst-case (i.e. adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image's pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM.
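As a rough sketch of the single-step attack just described, the code below implements FGSM on a placeholder model; the model, the random data, and the eps value are illustrative assumptions, and PGD can be obtained by repeating a smaller step of this form and projecting back into the eps-ball after each iteration.

```python
# Illustrative sketch of FGSM (Goodfellow et al., 2015); not the authors' code.
# Model, data, and the epsilon value are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step attack: move each pixel by +/- eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage with a placeholder linear model on 64x64 inputs and 200 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 200))
x, y = torch.rand(4, 3, 64, 64), torch.randint(0, 200, (4,))
x_adv = fgsm(model, x, y, eps=16 / 255)
print((x_adv - x).abs().max().item())  # perturbation magnitude is bounded by eps
```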
We used a step size of 0.0001 and an l∞ norm perturbation budget (ε) of 16/255. Computing the stability of units and layers To quantify variation in networks' perturbability, we first computed the l2 norm of the input-unit gradient for each unit u in a network. We then computed the mean (µu) and standard deviation (σu) of the norm across samples for each unit. σu/µu yields the coefficient of variation (Everitt, 2002) for a unit (CVu), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of µu across units in a layer l, σ(µu) = σl, and dividing this by the corresponding mean across units µ(µu) = µl, to yield the CV across units σl/µl = CVl.

Figure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results.

4 RESULTS 4.1 AVERAGE-CASE ROBUSTNESS IS INVERSELY PROPORTIONAL TO CLASS SELECTIVITY Certain kinds of sparsity—including reliance on single directions (Morcos et al., 2018), and the semantic sparsity measured by class selectivity (Leavitt and Morcos, 2020)—have been shown to impair network performance. We sought to extend this question to robustness: how does the sparsity of semantic representations affect robustness to average-case perturbations of the input data? We used a recently-introduced method (Leavitt and Morcos (2020); Approach 3) to modulate the amount of class selectivity learned by DNNs (Figure A2 demonstrates effects of selectivity regularization). We then examined how this affected performance on Tiny ImageNetC and CIFAR10C, two benchmark datasets for average-case corruptions (Approach 3; example images in Figure A1). Changing the level of class selectivity across neurons in a network could have one of the following effects on corruption robustness: If concentrating semantic representations into fewer neurons (i.e. promoting semantic sparsity) provides fewer potent dimensions on which perturbed inputs can act, then increasing class selectivity should confer robustness to average-case perturbations, while reducing class selectivity should render networks more vulnerable. Alternatively, if distributing semantic representations across more units (i.e.
reducing sparsity) dilutes the changes induced by perturbed inputs, then reducing class selectivity should increase a network's robustness to average-case perturbations, while increasing class selectivity should reduce robustness. We found that decreasing class selectivity leads to increased robustness to average-case perturbations for both ResNet18 tested on Tiny ImageNetC (Figure 1) and ResNet20 tested on CIFAR10C (Figure A6). In ResNet18, we found that mean test accuracy on corrupted inputs increases as class selectivity decreases (Figure 1), with test accuracy reaching a maximum at regularization scale α = −2.0 (mean test accuracy across corruptions and severities of 17% at α = −2.0), representing a 3.5 percentage point (pp) increase relative to no selectivity regularization (i.e. α = 0; test accuracy at α = 0 is 13.5%). In contrast, regularizing to increase class selectivity has either no effect or a negative impact on corruption robustness. Corrupted test accuracy remains relatively stable until α = 1.0, after which point it declines. The results are qualitatively similar for ResNet50 tested on Tiny ImageNetC (Figure A9), and for ResNet20 tested on CIFAR10C (Figure A6), except the vulnerability to corruption caused by increasing selectivity is even more dramatic in ResNet20. We also found similar results when controlling for the difference in clean accuracy for models with different α (Appendix A.3). We observed that regularizing to decrease class selectivity increases robustness to average-case perturbations. But it's possible that the causality is unidirectional, leading to the question of whether the converse is also true: does increasing robustness to average-case perturbations cause class selectivity to decrease? We investigated this question by training with AugMix, a technique known to improve average-case robustness (Hendrycks et al., 2020a). We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Appendix A.4; Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity improve average-case perturbation robustness, but improving average-case perturbation robustness also causes class selectivity to decrease. We also found that the effect of class selectivity on corruption robustness is consistent across corruption types. Regularizing against selectivity improves perturbation robustness in all 15 Tiny ImageNetC corruption types for ResNet18 (Figure A4) and 14 of 15 Tiny ImageNetC corruption types in ResNet50 (Figure A10), and 14 of 19 corruption types in CIFAR10C for ResNet20 (Figure A7). Together these results demonstrate that reduced class selectivity confers robustness to average-case perturbations, implying that distributing semantic representations across neurons—i.e. low sparsity—may dilute the changes induced by average-case perturbations. 4.2 CLASS SELECTIVITY IMPARTS WORST-CASE PERTURBATION ROBUSTNESS We showed that the sparsity of a network's semantic representations, as measured with class selectivity, is causally related to a network's robustness to average-case perturbations.
But how does the sparsity of semantic representations affect worst-case robustness? We addressed this question by testing our class selectivity-regularized networks on inputs that had been perturbed using one of two gradient-based methods (see Approach 3). If distributing semantic representations across units provides more dimensions upon which a worst-case perturbation is potent, then worst-case perturbation robustness should be proportional to class selectivity. However, if increasing the sparsity of semantic representations creates more responsive individual neurons, then worst-case robustness should be inversely proportional to class selectivity. Unlike average-case perturbations, decreasing class selectivity decreases robustness to worst-case perturbations for ResNet18 (Figure 2) and ResNet50 (Figure A13) trained on Tiny ImageNet, and ResNet20 trained on CIFAR10 (Figure A12). For small perturbations (i.e. close to x=0), the effects of class selectivity regularization on test accuracy (class selectivity is inversely correlated with unperturbed test accuracy) appear to overwhelm the effects of perturbations. But as the magnitude of perturbation increases, a stark ordering emerges: test accuracy monotonically decreases as a function of class selectivity in ResNet18 and ResNet50 for both FGSM and PGD attacks (ResNet18: Figures 2a and 2b; ResNet50: Figures A13a and A13b). The ordering is also present for ResNet20, though less consistent for the two networks with the highest class selectivity (α = 0.7 and α = 1.0). However, increasing class selectivity is much more damaging to test accuracy in ResNet20 trained on CIFAR10 compared to ResNet18 trained on Tiny ImageNet (Leavitt and Morcos, 2020, Figure A2), so the substantial performance deficits of extreme selectivity in ResNet20 likely mask the perturbation robustness. This result demonstrates that networks with sparse semantic representations are less vulnerable to worst-case perturbations than networks with distributed semantic representations. We also verified that the worst-case robustness of high-selectivity networks is not fully explained by gradient-masking (Athalye et al., 2018, Appendix A.5). Interestingly, class selectivity regularization does not appear to affect robustness to "natural" adversarial examples (Appendix A.6), which are "unmodified, real-world examples...selected to cause a model to make a mistake" (Hendrycks et al., 2020b). Performance on ImageNet-A, a benchmark of natural adversarial examples (Hendrycks et al., 2020b), was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving both worst-case and average-case robustness, many of which also fail to yield significant robustness improvements against ImageNet-A (Hendrycks et al., 2020b). We found that regularizing to increase class selectivity increases robustness to worst-case perturbations. But is the converse true? Does increasing robustness to worst-case perturbations also cause class selectivity to increase? We investigated this by training networks with a commonly-used technique to improve worst-case perturbation robustness, PGD training.
We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Appendix A.7). This effect was present in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f), indicating that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional. Networks whose outputs are more stable to small input perturbations are known to have improved generalization performance and worst-case perturbation robustness (Drucker and Le Cun, 1992; Novak et al., 2018; Sokolic et al., 2017; Rifai et al., 2011; Hoffman et al., 2019). To examine whether increasing class selectivity improves worst-case perturbation robustness by increasing network stability, we analyzed each network's input-output Jacobian, which is inversely related to its stability—a large-magnitude Jacobian means that a small change to the network's input will cause a large change to its output. If class selectivity induces worst-case robustness by increasing network stability, then networks with higher class selectivity should have smaller Jacobians. But if increased class selectivity induces adversarial robustness through alternative mechanisms, then class selectivity should have no effect on the Jacobian. We found that the l2 norm of the input-output Jacobian is inversely proportional to class selectivity for ResNet18 (Figure 2c), ResNet50 (Figure A13c), and ResNet20 (Figure A12c), indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations.

4.3 VARIABILITY OF THE INPUT-UNIT GRADIENT ACROSS SAMPLES AND UNITS We observed that the input-output Jacobian is proportional to worst-case vulnerability and inversely proportional to class selectivity, but focusing on input-output stability potentially overlooks phenomena present in hidden layers and units. If class selectivity imparts worst-case robustness by making individual units less reliably perturbable—because each unit is highly tuned to a particular subset of images—then we should expect to see more variation across input-unit gradients for units in high-selectivity networks compared to units in low-selectivity networks. Alternatively, worst-case robustness in high-selectivity networks could be achieved by reducing both the magnitude and variation of units' perturbability, in which case we would expect to observe lower variation across input-unit gradients for units in high-selectivity networks compared to low-selectivity networks. We quantified variation in unit perturbability using the coefficient of variation of the input-unit gradient across samples for each unit (CVu; Approach 3). The CV is a measure of variability that normalizes the standard deviation of a quantity by the mean: a large CV indicates high variability, a small CV indicates low variability. To quantify variation in perturbability across units, we computed the CV across units in each layer (CVl; Approach 3). We found that units in high-selectivity networks exhibited greater variation in their perturbability than units in low-selectivity networks, both within individual units and across units in each layer.
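The coefficient-of-variation quantities used above can be illustrated with a few lines of code; the matrix of gradient norms below is random placeholder data standing in for the per-sample l2 norms of the input-unit gradient, so this is a sketch of the measure rather than the authors' analysis code.

```python
# Illustrative sketch of the CV_u and CV_l computations (not the authors' analysis code).
# grad_norms holds the l2 norm of the input-unit gradient for every (sample, unit) pair.
import torch

grad_norms = torch.rand(1000, 64)     # toy data: 1000 samples x 64 units in one layer

# Per-unit CV across samples (CV_u): std / mean of each unit's gradient norm.
mu_u = grad_norms.mean(dim=0)         # (64,) mean norm per unit
sigma_u = grad_norms.std(dim=0)       # (64,) std of the norm per unit
cv_u = sigma_u / mu_u

# Per-layer CV across units (CV_l): spread of the per-unit means within the layer.
cv_l = mu_u.std() / mu_u.mean()

print(cv_u.shape, cv_l.item())
```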
This greater variation in high-selectivity networks was present in both ResNet18 trained on Tiny ImageNet (Figure 3) and ResNet20 trained on CIFAR10 (Figure A18), although the effect was less consistent for across-unit variability in later layers in ResNet18 (Figure 3b). Interestingly, class selectivity affects both the numerator (σ) and denominator (µ) of the CV calculation for both the CV across samples and CV across units (Appendix A.8). These results indicate that high class selectivity imparts worst-case robustness by increasing the variation in perturbability within and across units, while the worst-case vulnerability associated with low class selectivity results from more consistently perturbable units. It is worth noting that the inverse can be stated with regards to average-case robustness: low variation in perturbability both within and across units in low-selectivity networks is associated with robustness to average-case perturbations, despite these units (and networks) being more perturbable on average.

[Figure 3 (panels a and b): input-unit gradient variability for individual units (CVu) and across units within a layer (CVl) as a function of layer and class selectivity regularization scale (α).]

4.4 DIMENSIONALITY IN EARLY LAYERS PREDICTS PERTURBATION VULNERABILITY [...] dimensionality would be unaffected by class selectivity. We found that the sparsity of a DNN's semantic representations corresponds directly to the dimensionality of those representations. Dimensionality is inversely proportional to class selectivity in early ResNet18 layers (≤ layer 9; Figure 4a), and across all of ResNet20 (Figure A21d). Networks with higher class selectivity tend to have lower dimensionality, and networks with lower class selectivity tend to have higher dimensionality. These results show that the sparsity of a network's semantic representations is indeed reflected in those representations' dimensionality. We next examined the dimensionality of perturbation-induced changes in representations by subtracting the perturbed activation matrix from the clean activation matrix and computing the dimensionality of this "difference matrix" (see Appendix A.1.4). Intuitively, this metric quantifies the dimensionality of the change in the representation caused by perturbing the input. If it is small, the perturbation impacts fewer units, while if it is large, more units are impacted. Interestingly, we found that the dimensionality of the changes in activations induced by both average-case (Figure 4b) and worst-case perturbations (Figure 4c) was notably higher for networks with reduced class selectivity, suggesting that decreasing class selectivity causes changes in input to become more distributed. We found that the activation changes caused by average-case perturbations are higher-dimensional than the representations of the clean data in both ResNet18 (compare Figures 4b and 4a) and ResNet20 (Figures A21e and A21d), and that this effect is inversely proportional to class selectivity (Figures 4b and A21e); the increase in dimensionality from average-case perturbations was more pronounced in low-selectivity networks than in high-selectivity networks. These results indicate that class selectivity not only predicts the dimensionality of a representation, but also the change in dimensionality induced by an average-case perturbation.
Notably, however, the increase in early-layer dimensionality was much larger for worst-case perturbations than average-case perturbations (Figure 4c; Figure A21f). These results indicate that, while the changes in dimensionality induced by both naturalistic and adversarial perturbations are proportional to the dimensionality of the network's representations, these changes do not consistently project onto coding-relevant dimensions of the representations. Indeed, the larger change in early-layer dimensionality caused by worst-case perturbations likely reflects targeted projection onto coding-relevant dimensions and provides intuition as to why low-selectivity networks are more susceptible to worst-case perturbations. Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can provide misleading estimates of hidden layer dimensionality. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations (see Appendix A.1.4). Interestingly, the results were qualitatively similar to what we observed when examining linear dimensionality (Figure A22) in both ResNet18 trained on Tiny ImageNet (Figure A22a-A22c) and ResNet20 trained on CIFAR10 (Figure A22d-A22f). Thus both linear and non-linear measures of dimensionality imply that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness. 5 DISCUSSION Our results demonstrate that changes in the sparsity of semantic representations, as measured with class selectivity, induce a trade-off between robustness to average-case vs. worst-case perturbations: highly-distributed semantic representations confer robustness to average-case perturbations, but their increased dimensionality and consistent perturbability result in vulnerability to worst-case perturbations. In contrast, sparse semantic representations yield low-dimensional representations and inconsistently-perturbable units, imparting worst-case robustness. Furthermore, the dimensionality of the difference in early-layer activations between clean and perturbed samples is larger for worst-case perturbations than for average-case perturbations. More generally, our results link average-case and worst-case perturbation robustness through class selectivity and representational dimensionality. We hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks. Scaling class selectivity regularization to datasets with large numbers of classes also remains an open problem (Leavitt and Morcos, 2020). Our findings could be utilized for practical ends and to clarify findings in prior work. Relevant to both of these issues is the task of adversarial example detection. There is conflicting evidence that intrinsic dimensionality can be used to characterize or detect adversarial (worst-case) samples (Ma et al., 2018; Lu et al., 2018). The finding that worst-case perturbations cause a marked increase in both intrinsic and linear dimensionality indicates that there may be merit in continuing to study these quantities for use in worst-case perturbation detection.
And the observation that the causal relationship between class selectivity and worst- and average-case robustness is bidirectional helps clarify the known benefits of sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017) for worst-case robustness. It furthermore raises the question of whether enforcing low-dimensional representations also causes class selectivity to increase. Our work may also hold practical relevance to developing robust models: class selectivity could be used as both a metric for measuring model robustness and a method for achieving robustness (via regularization). We hope future work will more comprehensively assess the utility of class selectivity as part of the deep learning toolkit for these purposes. A APPENDIX A.1 DETAILED APPROACH Unless otherwise noted: all experimental results were derived from the corrupted or adversarial test set with the parameters from the epoch that achieved the highest clean validation set accuracy over the training epochs; 20 replicates with different random seeds were run for each hyperparameter set; error bars and shaded regions denote bootstrapped 95% confidence intervals; selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses. A.1.1 MODELS All models were trained using stochastic gradient descent (SGD) with momentum = 0.9 and weight decay = 0.0001. The maxpool layer after the first batchnorm layer in ResNet18 (see He et al. (2016)) was removed because of the smaller size of Tiny ImageNet images compared to standard ImageNet images (64x64 vs. 256x256, respectively). ResNet18 and ResNet50 were trained for 90 epochs with a minibatch size of 4096 (ResNet18) or 1400 (ResNet50) samples with a learning rate of 0.1, multiplied (annealed) by 0.1 at epochs 35, 50, 65, and 80. ResNet20 (code modified from Idelbayev (2020)) was trained for 200 epochs using a minibatch size of 256 samples and a learning rate of 0.1, annealed by 0.1 at epochs 100 and 150. A.1.2 DATASETS Tiny ImageNet (Fei-Fei et al., 2015) consists of 500 training images and 50 validation images for each of its 200 classes. We used the validation set for testing and created a new validation set by taking 50 images per class from the training set, selected randomly for each seed. We split the 50k CIFAR10 training samples into a 45k sample training set and a 5k validation set, similar to our approach with Tiny ImageNet. All experimental results were derived from the test set with the parameters from the epoch that achieved the highest validation set accuracy over the training epochs. 20 replicates with different random seeds were run for each hyperparameter set. Selectivity regularization was not applied to the final (output) layer, nor was the final layer included in any of our analyses. CIFAR10C consists of a dataset in which 19 different naturalistic corruptions have been applied to the CIFAR10 test set at 5 different levels of severity. Tiny ImageNetC also has 5 levels of corruption severity, but consists of 15 corruptions. We would like to note that Tiny ImageNetC does not use the Tiny ImageNet test data. While the two datasets were created using the same data generation procedure—cropping and scaling images from the same 200 ImageNet classes—they differ in the specific ImageNet images they use.
It is possible that the images used to create Tiny ImageNetC are out-of-distribution with regards to the Tiny ImageNet training data, in which case our results from testing on Tiny ImageNetC actually underestimate the corruption robustness of our networks. The creators of Tiny ImageNetC kindly provided the clean (uncorrupted) Tiny ImageNetC data necessary for the dimensionality analysis, which relies on matched corrupted and clean data samples. A.1.3 SOFTWARE Experiments were conducted using PyTorch (Paszke et al., 2019), analyzed using the SciPy ecosystem (Virtanen et al., 2019), and visualized using Seaborn (Waskom et al., 2017). A.1.4 QUANTIFYING DIMENSIONALITY We quantified the dimensionality of a layer's representations by applying PCA to the layer's activation matrix for the clean test data and counting the number of dimensions necessary to explain 95% of the variance, then dividing by the total number of dimensions (i.e. the fraction of total dimensionality; we also replicated our results using the fraction of total dimensionality necessary to explain 90% and 99% of the variance). The same procedure was applied to compute the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to applying PCA. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps.

Figure A1: Example naturalistic corruptions from the Tiny ImageNetC dataset. (a) Clean (no corruption). (b) Brightness. (c) Contrast. (d) Elastic transform. (e) Shot noise. All corruptions are shown at severity level 5/5.

Hidden layer representations in DNNs are known to lie on non-linear manifolds that are of lower dimensionality than the space in which they're embedded (Goodfellow et al., 2016; Ansuini et al., 2019). Consequently, linear methods such as PCA can fail to capture the "intrinsic" dimensionality of hidden layer representations. Thus we also quantified the intrinsic dimensionality (ID) of each layer's representations using the method of Facco et al. (2017). The method, based on that of Levina and Bickel (2005), estimates ID by computing the ratio between the distances to the second and first nearest neighbors of each data point. We used the implementation of Ansuini et al. (2019). Our procedure was otherwise identical to that used for the linear dimensionality: we computed the dimensionality across all test data for each layer, then divided by the number of units per layer. We then computed the dimensionality of perturbation-induced changes in representations, except the activations for a perturbed data set were subtracted from the corresponding clean activations prior to computing ID. For average-case perturbations, we performed this analysis for every corruption type and severity, and for the worst-case perturbations we used PGD with 40 steps. A.2 EFFECTS OF CLASS SELECTIVITY REGULARIZATION ON TEST ACCURACY

Figure A2: Effects of class selectivity regularization on test accuracy. Replicated as in Leavitt and Morcos (2020).
(a) Test accuracy (y-axis) as a function of mean class selectivity (x-axis) for ResNet18 trained on Tiny ImageNet. α denotes the sign and intensity of class selectivity regularization. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Each data point represents the mean class selectivity across all units in a single trained model. (b) Same as (a), but for ResNet20 trained on CIFAR10.

A.3 ADDITIONAL RESULTS FOR AVERAGE-CASE PERTURBATION ROBUSTNESS Because modifying class selectivity can affect performance on clean (unperturbed) inputs (Leavitt and Morcos (2020); Figure A2), it is possible that the effects we observe of class selectivity on perturbed test accuracy are not caused by changes in perturbation robustness per se, but simply by changes in baseline model accuracy. We controlled for this by normalizing each model's perturbed test accuracy by its clean (unperturbed) test accuracy. The results are generally consistent even after controlling for clean test accuracy, although increasing class selectivity does not cause the same deficits as measured using non-normalized perturbed test accuracy in ResNet18 trained on Tiny ImageNet (Figure A3a). Interestingly, in ResNet20 trained on CIFAR10, normalizing perturbed test accuracy reveals a more dramatic improvement in perturbation robustness caused by reducing class selectivity (Figure A6c). The results for ResNet50 trained on Tiny ImageNet are entirely consistent between raw vs. normalized measures (Figure A9b vs. Figure A9c).

Figure A3: Controlling for clean test accuracy, and effect of corruption severity across corruptions. (a) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with high class selectivity (large α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound—chance—in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (b) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A4: Mean test accuracy across corruption intensities for each corruption type for ResNet18 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against all 15/15 corruption types.
Figure A5: Trade-off between clean and perturbed test accuracy in ResNet18 tested on Tiny ImageNetC. Clean test accuracy (x-axis) vs. perturbed test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types. Error bars = 95% confidence intervals of the mean.

Figure A6: Reducing class selectivity confers robustness to average-case perturbations in ResNet20 tested on CIFAR10C. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Figure A2b and Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. Normalized perturbed test accuracy appears higher in networks with higher class selectivity (larger α), but this is likely due to a floor effect: clean test accuracy is already much closer to the lower bound (chance) in networks with very high class selectivity, which may reflect a different performance regime, making direct comparison difficult. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean.

Figure A7: Mean test accuracy across corruption intensities for each corruption type for ResNet20 tested on CIFAR10C. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/19 corruption types. Error bars = 95% confidence intervals of the mean.
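For reference when interpreting the regularization scale α in these figures, the following sketch shows one way to compute a per-unit class selectivity index from class-conditional mean activations, assuming the index has the form used by Leavitt and Morcos (2020); the exact averaging scheme and epsilon in the published implementation may differ.

```python
import torch

def class_selectivity_index(activations, labels, num_classes, eps=1e-7):
    # activations: (num_samples, num_units) non-negative (e.g. post-ReLU, spatially
    # averaged) activations of one layer; labels: (num_samples,) class indices.
    # Assumes every class appears at least once among the samples.
    class_means = torch.stack(
        [activations[labels == c].mean(dim=0) for c in range(num_classes)]
    )  # (num_classes, num_units): class-conditional mean activity per unit
    u_max, _ = class_means.max(dim=0)  # activity for each unit's most-driving class
    u_rest = (class_means.sum(dim=0) - u_max) / (num_classes - 1)  # mean over the rest
    return (u_max - u_rest) / (u_max + u_rest + eps)  # one index per unit, in [0, 1]
```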
Figure A8: Trade-off between clean and corrupted test accuracy in ResNet20 tested on CIFAR10C. Clean test accuracy (x-axis) vs. corrupted test accuracy (y-axis) for different corruption severities (border color) and regularization scales (α, fill color). Mean is computed across all corruption types.

Figure A9: Reducing class selectivity confers robustness to average-case perturbations in ResNet50 tested on Tiny ImageNetC. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color), and corruption severity (ordering along y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis, with corruption severity 1 (least severe) at the top and corruption severity 5 (most severe) at the bottom. Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect (see Approach 3). (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). (c) Corrupted test accuracy normalized by clean test accuracy (y-axis) as a function of class selectivity regularization scale (α; x-axis). Negative α lowers selectivity, positive α increases selectivity, and the magnitude of α changes the strength of the effect. (d) Mean test accuracy across all corruptions (y-axis) as a function of α (x-axis) for different corruption severities (ordering along y-axis; shade of connecting line). Error bars indicate 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size: only 5 replicates per α instead of 20.

Figure A10: Mean test accuracy across corruption intensities for each corruption type for ResNet50 tested on Tiny ImageNetC. Test accuracy (y-axis) as a function of corruption type (x-axis) and class selectivity regularization scale (α, color). Reducing class selectivity improves robustness against 14/15 corruption types. Error bars = 95% confidence intervals of the mean. Note that confidence intervals are larger in part due to a smaller sample size: only 5 replicates per α instead of 20.

A.4 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND AVERAGE-CASE ROBUSTNESS IS BIDIRECTIONAL

We found that regularizing to decrease class selectivity causes robustness to average-case perturbations. But is the converse also true? Does increasing robustness to average-case perturbations also cause class selectivity to decrease?
We investigated this question by training with AugMix, a data augmentation technique known to improve average-case (corruption) robustness (Hendrycks et al., 2020a). Briefly, AugMix stochastically applies a diverse set of image augmentations and uses a Jensen-Shannon Divergence consistency loss. Our AugMix parameters were as follows: mixture width: 3; mixture depth: stochastic; augmentation probability: 1; augmentation severity: 2. We found that AugMix does indeed decrease the mean level of class selectivity across neurons in a network (Figure A11). AugMix decreases overall levels of selectivity similarly to training with a class selectivity regularization scale of approximately α = −0.1 or α = −0.2 in both ResNet18 trained on Tiny ImageNet (Figures A11a and A11b) and ResNet20 trained on CIFAR10 (Figures A11c and A11d). These results indicate that the causal relationship between average-case perturbation robustness and class selectivity is bidirectional: not only does decreasing class selectivity cause average-case perturbation robustness to increase, but increasing average-case perturbation robustness also causes class selectivity to decrease.

A.5 WORST-CASE PERTURBATION ROBUSTNESS

We also confirmed that the worst-case robustness of high-selectivity ResNet18 and ResNet20 networks was not simply due to gradient-masking (Athalye et al., 2018) by generating worst-case perturbations using each of the replicate models trained with no selectivity regularization (α = 0), then testing selectivity-regularized models on these samples. We found that high-selectivity models were less vulnerable to the α = 0 samples than low-selectivity models for high-intensity perturbations (Appendix A14), indicating that gradient-masking does not fully account for the worst-case robustness of high-selectivity models.

A.6 CLASS SELECTIVITY REGULARIZATION DOES NOT AFFECT ROBUSTNESS TO NATURAL ADVERSARIAL EXAMPLES

We also examined whether class selectivity regularization affects robustness to "natural" adversarial examples, images that are "natural, unmodified, real-world examples...selected to cause a fixed model to make a mistake" (Hendrycks et al., 2020b). We tested robustness to natural adversarial examples using ImageNet-A, a dataset of natural adversarial examples that belong to ImageNet classes but consistently cause misclassification errors with high confidence (Hendrycks et al., 2020b). We adapted ImageNet-A to our models trained on Tiny ImageNet (ResNet18 and ResNet50) by only testing on the 74 image classes that overlap between ImageNet-A and Tiny ImageNet (yielding a total of 2957 samples), and downsampling the images to 64 x 64. Test accuracy was similar across all tested values of α for both ResNet18 (Figure A15a) and ResNet50 (Figure A15b), indicating that class selectivity regularization may share some limitations with other methods for improving robustness, many of which also fail to yield significant robustness against ImageNet-A (Hendrycks et al., 2020b).

A.7 THE CAUSAL RELATIONSHIP BETWEEN CLASS SELECTIVITY AND WORST-CASE ROBUSTNESS IS BIDIRECTIONAL

We observed that regularizing to increase class selectivity causes robustness to worst-case perturbations. But is the converse also true? Does increasing robustness to worst-case perturbations cause class selectivity to increase? We investigated this question using PGD training, a common technique for improving worst-case robustness. PGD training applies the PGD method of sample perturbation (see Approach 3) to samples during training.
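As a rough illustration of this training scheme, below is a minimal sketch of an L-infinity PGD perturbation applied to a training batch. The epsilon, step size, and iteration count are placeholders rather than the values used in our experiments (those are specified in Approach 3), and inputs are assumed to lie in [0, 1].

```python
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, step_size=2/255, n_steps=40):
    # Projected gradient descent under an L-infinity constraint: iteratively
    # ascend the loss w.r.t. the input and project back onto the eps-ball around x.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image range
    return x_adv.detach()

# PGD training then computes the task loss on pgd_perturb(model, x, y)
# instead of (or in addition to) the clean batch x.
```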
We used the same parameters for PGD sample generation when training our models as when testing (Approach 3). The number of PGD iterations controls the intensity of the perturbation, and the degree of perturbation-robustness in the trained model (Madry et al., 2018). We found that PGD training does indeed increase the mean level of class selectivity across neurons in a network, and this effect is proportional to the strength of PGD training: networks trained with more strongly-perturbed samples have higher class selectivity (Figure A16). Interestingly, PGD training also appears to cause units to die (Lu et al., 2019), and the number of dead units is proportional to the intensity of PGD training (Figures A16b and A16e). Removing dead units, which have a class selectivity index of 0, from the calculation of mean class selectivity results in a clear, monotonic effect of PGD training intensity on class selectivity in both ResNet18 trained on Tiny ImageNet (Figure A16c) and ResNet20 trained on CIFAR10 (Figure A16f). These results indicate that the causal relationship between worst-case perturbation robustness and class selectivity is bidirectional: increasing class selectivity not only causes increased worst-case perturbation robustness, but increasing worst-case perturbation robustness also causes increased class selectivity.

A.8 STABILITY TO INPUT PERTURBATIONS IN UNITS AND LAYERS

A.9 REPRESENTATIONAL DIMENSIONALITY

Figure A20: Dimensionality in early layers predicts worst-case vulnerability in ResNet18 trained on Tiny ImageNet. Identical to Figure 4, but dimensionality is computed as the number of principal components needed to explain 90% of variance in (a) - (c), and 99% of variance in (d) - (f). (a) Fraction of dimensionality (y-axis; see Appendix A.1.4) as a function of layer (x-axis). (b) Dimensionality of difference between clean and average-case perturbation activations (y-axis) as a function of layer (x-axis). (c) Dimensionality of difference between clean and worst-case perturbation activations (y-axis) as a function of layer (x-axis). (d) - (f), identical to (a) - (c), but for 99% explained variance threshold.
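For concreteness, the following is a minimal sketch of the linear dimensionality measure described in Appendix A.1.4 and used in Figure A20: PCA on a layer's activation matrix, counting the components needed to reach a given explained-variance threshold, divided by the number of units. The function names and the use of scikit-learn are illustrative choices, not the exact implementation used for the reported results.

```python
import numpy as np
from sklearn.decomposition import PCA

def fraction_of_dimensionality(activations, threshold=0.95):
    # activations: (num_samples, num_units) activation matrix of one layer,
    # e.g. for the clean test set.
    pca = PCA().fit(activations)
    cumvar = np.cumsum(pca.explained_variance_ratio_)
    n_components = np.searchsorted(cumvar, threshold) + 1  # dims to reach threshold
    return n_components / activations.shape[1]

def perturbation_change_dimensionality(clean_acts, perturbed_acts, threshold=0.95):
    # Dimensionality of perturbation-induced changes: subtract perturbed activations
    # from the matched clean activations before applying PCA.
    return fraction_of_dimensionality(clean_acts - perturbed_acts, threshold)
```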
1. What is the main contribution of the paper regarding class selectivity and robustness? 2. What are the strengths and weaknesses of the paper's empirical observations and experimental design? 3. How does the reviewer assess the paper's limitations in terms of task, model, and dataset scope? 4. What is the reviewer's opinion on the paper's theoretical connections and practical impact? 5. Are there any questions or suggestions from the reviewer regarding the paper's content or future research directions?
Review
Review Summary: This paper studies the relationship between class selectivity and robustness. In particular, it finds that higher class selectivity tends to lead to worse average-case robustness (i.e., performance on naturalistically perturbed inputs) but better worst-case robustness (i.e., performance on adversarial inputs). It further finds that the variability of the input-unit gradient across samples and units is proportional to a network's overall class selectivity. In addition, the increase in activation dimensionality caused by corruption is larger for worst-case perturbations and for low-selectivity networks.

################################################

Reasons for score: Overall, this paper makes a series of interesting observations on the relationship between class selectivity and robustness. I feel that the paper's contribution is a bit limited to some interesting empirical observations on a particular task (image classification) and type of model (ResNet) on two datasets (CIFAR-10 and Tiny ImageNet). Also, there is no attempt to make stronger theoretical connections, and the practical impact of the paper is not very clear.

################################################

Pros:
+ mostly well-written and easy to follow
+ interesting empirical findings and exploration of sensitivity for worst-case robustness and average-case robustness
+ experiments support the claims

Cons:
- Evaluation looks a bit limited in terms of tasks/models/datasets. As even the authors state in the paper, "We hesitate to generalize too broadly about our findings, as they are limited to CNNs trained on image classification tasks. It is possible that the results we report here are specific to our models and/or datasets, and also may not extend to other tasks." The results are indeed a bit limited in scope. The evaluation is limited to only ResNets (ResNet18 and ResNet20, which are similar to each other in terms of properties) for the image classification task only. I wonder whether the findings hold for deeper models, since Fig 3(b) seems to show that for deeper layers the variability of different SI tends to become increasingly similar.
- Purely empirical results and not enough theoretical justification. I understand that this is the first paper trying to connect average-case robustness and worst-case robustness with class selectivity, but I still would like to see a bit more discussion on the theoretical side.
- The impact of the observations needs more discussion. I understand that this paper focuses on empirically linking average-case robustness and worst-case robustness. However, I think it needs a bit more discussion on how other researchers/practitioners can benefit from these findings. For example, how should one leverage such observations to detect adversarial or naturally perturbed error-prone inputs, improve robustness, or find the optimal trade-off between different measures (e.g., natural accuracy, average-case robustness, worst-case robustness)?

################################################

Typo: In the Figure A6 caption, "Figure ??"

################################################

Questions: At the end of Sec 4.2, "indicating that distributed semantic representations are more vulnerable to worst-case perturbation because they are less stable than sparse semantic representations." What does "distributed semantic representations" mean here? What is the difference between "distributed semantic representations" and "sparse semantic representations" in this sentence?
################################################

Suggestions: The paper can be improved by addressing at least one of the three cons I mentioned.

################################################

Post-Rebuttal: Thanks to the authors for their detailed responses! The authors' responses addressed most of my concerns. In particular, 1. the authors showed that their results generalize to Tiny ImageNet and ResNet50 with additional experiments, and 2. the authors added some discussion on the theoretical side. Besides, I also appreciate the addition of experiments on AugMix and PGD, which imply the bidirectional causality between class selectivity and perturbation robustness. Overall, I think this work will potentially be a good addition to the community's existing understanding of the trade-off between worst-case robustness and average-case robustness. Therefore, I decided to increase my score to 6.
ICLR
Title Proximal Curriculum for Reinforcement Learning Agents Abstract We consider the problem of curriculum design for reinforcement learning (RL) agents in contextual multi-task settings. Existing techniques on automatic curriculum design typically have limited theoretical underpinnings or require domainspecific hyperparameter tuning. To tackle these limitations, we design our curriculum strategy, PROCURL, inspired by the pedagogical concept of Zone of Proximal Development (ZPD). We mathematically derive PROCURL by formalizing the ZPD concept, which suggests that learning progress is maximized when picking tasks that are neither too hard nor too easy for the learner. We also present a practical variant of PROCURL that can be directly integrated with deep RL frameworks with minimal hyperparameter tuning. Experimental results on a variety of domains demonstrate the effectiveness of our curriculum strategy over state-ofthe-art baselines in accelerating the training process of deep RL agents. 1 INTRODUCTION Recent advances in deep reinforcement learning (RL) have demonstrated impressive performance in games, continuous control, and robotics (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2017; Levine et al., 2016). Despite these remarkable successes, a broader application of RL in real-world domains is often very limited. For example, training RL agents in contextual multi-task settings and goal-based tasks with sparse rewards still remains challenging (Hallak et al., 2015; Kirk et al., 2021; Andrychowicz et al., 2017; Florensa et al., 2017; Riedmiller et al., 2018). Inspired by the importance of curricula in pedagogical domains, there is a growing interest in leveraging curriculum strategies when training machine learning models in challenging domains. In the supervised learning setting, such as image classification, the impact of the order of presented training examples has been studied both theoretically and empirically (Weinshall et al., 2018; Weinshall & Amir, 2018; Zhou & Bilmes, 2018; Zhou et al., 2021; Elman, 1993; Bengio et al., 2009; Zaremba & Sutskever, 2014). Recent works have also studied curriculum strategies for learners in sequentialdecision-making settings, such as imitation learning (where the agent learns from demonstrations) and RL (where the agent learns from rewards). In the imitation learning setting, recent works have proposed greedy curriculum strategies for picking the next training demonstration according to the agent’s learning progress (Kamalaruban et al., 2019; Yengera et al., 2021). In the RL setting, several curriculum strategies have been proposed to improve sample efficiency, e.g., by choosing an appropriate next starting state or goal state for the task to train on (Wöhlke et al., 2020; Florensa et al., 2017; 2018; Racanière et al., 2020; Riedmiller et al., 2018; Klink et al., 2020a;b; Eimer et al., 2021). Despite extensive research on curriculum design for the RL setting, existing techniques typically have limited theoretical underpinnings or require domain-specific hyperparameter tuning. In this paper, we are interested in developing a principled curriculum strategy for the RL setting that is broadly applicable to many domains with minimal tuning of hyperparameters. To this end, we rely on the Zone of Proximal Development (ZPD) concept from the educational psychology literature (Vygotsky & Cole, 1978; Chaiklin, 2003). 
The ZPD concept, when applied in terms of learning progress, suggests that progress is maximized when the learner is presented with tasks that lie in the proximal zone, i.e., tasks that are neither too hard nor too easy. To formally capture this idea of proximal zone, we use a notion of probability of success score PoSπt(s) w.r.t. the learner’s current policy πt for any given task s. We mathematically derive an intuitive curriculum strategy based on a learner update rule that captures the ZPD concept in terms of the learning progress and reflects characteristics of the policy gradient style update. Our main results and contributions are as follows: I. We propose a curriculum strategy, PROCURL, inspired by the ZPD concept. PROCURL formalizes the idea of picking tasks that are neither too hard nor too easy for the learner in the form of selection strategy argmaxs PoSπt(s) · ( PoS∗(s) − PoSπt(s) ) , where PoS∗(s) corresponds to the probability of success score w.r.t. an optimal policy (Section 3.1). II. We derive PROCURL under two specific learning settings where we analyze the effect of picking a task on the agent’s learning progress (Section 3.2). III. We present a practical variant of PROCURL, namely PROCURL-VAL, that can be easily integrated with deep RL frameworks with minimal hyperparameter tuning (Section 3.3). IV. We empirically demonstrate the effectiveness of PROCURL-VAL over state-of-the-art baselines in accelerating the training process of deep RL agents in a variety of environments (Section 4). 1.1 RELATED WORK Curriculum strategies based on domain knowledge. Early works on curriculum design for supervised learning setting typically order the training examples in increasing difficulty (Elman, 1993; Bengio et al., 2009; Schmidhuber, 2013; Zaremba & Sutskever, 2014). This easy-to-hard design principle has been utilized in the hand-crafted curriculum approaches for the RL setting (Asada et al., 1996; Wu & Tian, 2016). Moreover, there has been recent works on designing greedy curriculum strategies for the imitation learning setting based on the iterative machine teaching framework (Liu et al., 2017; Yang et al., 2018; Zhu et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021). However, these approaches require domain-specific expert knowledge for designing difficulty measures. Curriculum strategies based on ZPD concept. In the pedagogical setting, it has been realized that effective teaching provides tasks that are neither too hard nor too easy for the human learner. This intuition of providing tasks from a particular range of difficulties is conceptualized in the ZPD concept (Vygotsky & Cole, 1978; Chaiklin, 2003; Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Zou et al., 2019). In the RL setting, several curriculum strategies that have been proposed are inherently based on the ZPD concept (Florensa et al., 2017; 2018; Wöhlke et al., 2020). A common underlying theme in both (Florensa et al., 2017) and (Florensa et al., 2018) is that they choose the next task (starting or goal state) for the learner uniformly at random from the set {s : rmin ≤ PoSπt(s) ≤ rmax}. Here, the threshold values rmin and rmax require tuning according to the learner’s progress and specific to the domain. The authors in (Wöhlke et al., 2020) propose a unified framework for the learner’s performance-based starting state curricula in RL. 
In particular, the starting state selection policy of (Wöhlke et al., 2020), P [ s (0) t = s ] ∝ G(PoSπt(s)) for some function G, accommodates existing curriculum generation methods like (Florensa et al., 2017; Graves et al., 2017). Despite promising empirical results, a conceptual formalism or theoretical underpinnings relating an RL agent’s learning progress to the ZPD concept is still missing in the aforementioned works. We address this conceptual gap in the literature by designing and analyzing a learner update rule that captures the ZPD concept in terms of the learning progress and also reflects characteristics of the policy gradient style update. Curriculum strategies based on self-paced learning (SPL). In the supervised learning setting, the curriculum strategies using the SPL concept optimize the trade-off between exposing the learner to all available training examples and selecting examples in which it currently performs well (Kumar et al., 2010; Jiang et al., 2015). In SPDL (Klink et al., 2020b;a; 2021; 2022) and SPACE (Eimer et al., 2021), the authors have adapted the concept of SPL to the RL setting by controlling the intermediate task distribution with respect to the learner’s current training progress. However, SPDL and SPACE differ in their mode of operation and the objective. SPDL considers the procedural task generation framework where tasks of appropriate difficult levels can be synthesized, as also considered in (Florensa et al., 2017; 2018)). In contrast, SPACE considers a pool-based curriculum framework for picking suitable tasks, as popular in supervised learning setting. Further, SPDL considers the objective of a targeted performance w.r.t. a target distribution (e.g., concentrated distribution on hard tasks); in contrast, SPACE considers the objective of uniform performance across a given pool of tasks. Similar to SPACE, in our work, we consider the pool-based setting with uniform performance objective. Both SPDL and SPACE serve as state-of-the-art baselines in our experimental evaluation. In terms of curriculum strategy, SPDL operates by solving an optimization problem at each step to pick a task (Klink et al., 2021); SPaCE uses a ranking induced by magnitude of differences in current/previous critic values at each step to pick a task (Eimer et al., 2021). In the appendix, we have also provided some additional information on hyperparameters for SPDL and SPaCE. Other automatic curriculum strategies. There are other approaches for automatic curriculum generation, including: (i) by formulating the curriculum design problem with the use of a meta-level Markov Decision Process (Narvekar et al., 2017; Narvekar & Stone, 2019); (ii) by learning how to generate training tasks similar to a teacher (Dendorfer et al., 2020; Such et al., 2020; Matiisen et al., 2019; Turchetta et al., 2020); (iii) by leveraging self-play as a form of curriculum generation (Sukhbaatar et al., 2018); (iv) by using the disagreement between different agents trained on the same tasks (Zhang et al., 2020); (v) by picking the starting states based on a single demonstration (Salimans & Chen, 2018; Resnick et al., 2018); and (vi) by providing agents with environment variations that are at the frontier of an agent’s capabilities, e.g., Unsupervised Environment Design methods (Dennis et al., 2020; Jiang et al., 2021; Parker-Holder et al., 2022). We refer the reader to recent surveys on curriculum design for the RL setting (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). 
2 FORMAL SETUP In this section, we formalize our problem setting based on prior work on teacher-student curriculum learning (Matiisen et al., 2019). MDP environment. We consider a learning environment defined as a Markov Decision Process (MDP)M := (S,A, T , H,R,Sinit). Here, S andA denote the state and action spaces, T : S ×S × A → [0, 1] is the transition dynamics, H is the maximum length of the episode, and R : S×A → R is the reward function. The set of initial states Sinit ⊆ S specifies a fixed pool of tasks, i.e., each starting state s ∈ Sinit corresponds to a unique task. Note that the above environment formalism is quite general enough to cover many practical settings, including the contextual multi-task MDP setting (Hallak et al., 2015).1 RL agent and training process. We consider an RL agent acting in this environment via a policy π : S × A → [0, 1] that is a mapping from a state to a probability distribution over actions. Given a task with the corresponding starting state s ∈ Sinit, the agent attempts the task via a trajectory rollout obtained by executing its policy π from s in the MDP M. The trajectory rollout is denoted as ξ = {(s(τ), a(τ), R(s(τ), a(τ)))}τ=0,1,...,h with s(0) = s and for some h ≤ H . The agent’s performance on task s is measured via the value function V π(s) := E [∑h τ=0 R(s (τ), a(τ)) ∣∣π,M, s(0) = s]. Then, the uniform performance of the agent over the pool of tasks Sinit is given by V π := Es∼Uniform(Sinit) [V π(s)]. The training process of the agent involves an interaction between two components: a student component that is responsible for policy update and a teacher component that is responsible for task selection. The interaction happens in discrete steps, indexed by t = 1, 2, . . ., and is formally described in Algorithm 1. Let πend denote the agent’s final policy at the end of training. The training objective is to ensure that the uniform performance of the policy πend is ϵ-near-optimal, i.e., (maxπ V π − V πend) ≤ ϵ. In the following two paragraphs, we discuss the student and teacher components in detail. Student component. We consider a parametric representation for the RL agent, whose current knowledge is parameterized by θ ∈ Θ ⊆ Rd and each parameter θ is mapped to a policy πθ : S×A → [0, 1]. At step t, the student component updates the knowledge parameter based on the following quantities: the current knowledge parameter θt, the task picked by the teacher component, and the rollout ξt = {(s(τ)t , a(τ)t , R(s(τ)t , a(τ)t ))}τ . Then, the updated knowledge parameter θt+1 is mapped to the agent’s policy given by πt+1 := πθt+1 . As a concrete example, the knowledge parameter of the REINFORCE agent (Sutton et al., 1999) is updated as θt+1 ← θt + ηt · ∑h−1 τ=0 G (τ) t · g(τ)t , where ηt is the learning rate, G (τ) t = ∑h τ ′=τ R(s (τ ′) t , a (τ ′) t ), and g (τ) t = [ ∇θ log πθ(a(τ)t |s(τ)t ) ] θ=θt . Teacher component. At step t, the teacher component picks a task with the corresponding starting state s(0)t for the student component to attempt via a trajectory rollout (see line 3 in Algorithm 1). The sequence of tasks (curriculum) picked by the teacher component affects the performance improvement of the policy πt. The main focus of this work is to develop a teacher component to achieve the training objective in both computational and sample efficient manner. 1In this setting, for a given set of contexts C, the pool of tasks is given by {Mc = (S,A, Tc, H,Rc,S init) : c ∈ C}. 
Our environment formalism (MDP M) covers this setting as follows: S = S × C; Sinit = S init × C; T ((s̄′, c)|(s̄, c), a) = Tc(s̄′|s̄, a) and R((s̄, c), a) = Rc(s̄, a), ∀s̄, s̄′ ∈ S, a ∈ A, c ∈ C. Algorithm 1 RL Agent Training as Interaction between Teacher-Student Components 1: Input: RL agent’s initial policy π1 2: for t = 1, 2, . . . do 3: Teacher component picks a task with the corresponding starting state s(0)t . 4: Student component attempts the task via a trajectory rollout ξt using the policy πt from s (0) t . 5: Student component updates the policy to πt+1. 6: Output: RL agent’s final policy πend ← πt+1. 3 PROXIMAL CURRICULUM STRATEGY In Section 3.1, we propose a curriculum strategy for the goal-based setting. In Section 3.2, we show that the proposed curriculum strategy can be derived from basic principles by formalizing the ZPD concept. In Section 3.3, we present our final curriculum strategy that is applicable in general settings. 3.1 CURRICULUM STRATEGY FOR THE GOAL-BASED SETTING Here, we introduce our curriculum strategy for the goal-based setting using the notion of probability of success scores. Goal-based setting. In this setting, the reward function R is goal-based, i.e., the agent gets a reward of 1 only at the goal states and 0 at other states; moreover, any action from a goal state also leads to termination. For any task with the corresponding starting state s ∈ Sinit, we say that the attempted rollout ξ succeeds in the task if the final state of ξ is a goal state. Formally, succ(ξ; s) is an indicator function whose value is 1 when the rollout ξ succeeds in the task s, and 0 otherwise. Furthermore, for an agent with policy π, we have that V π(s) := E [ succ(ξ; s) ∣∣π,M] is equal to the total probability of reaching a goal state by executing the policy π starting from s ∈ Sinit. Probability of success. We begin by assigning a probability of success score for any task with the corresponding starting state s ∈ Sinit w.r.t. any parameterized policy πθ in the MDPM. Definition 1. For any given knowledge parameter θ ∈ Θ and any starting state s ∈ Sinit, we define the probability of success score PoSθ(s) as the probability of successfully solving the task s by executing the policy πθ in the MDPM. For the goal-based settings, we have PoSθ(s) = V πθ (s). With the above definition, the probability of success score for any task s ∈ Sinit w.r.t. the agent’s current policy πt is given by PoSt(s) := PoSθt(s). Further, we define PoS ∗(s) := maxθ∈Θ PoSθ(s). Curriculum strategy. Based on the notion of probability of success scores that we defined above, we propose the following curriculum strategy: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (1) i.e., at step t, the teacher component picks a task associated with the starting state s(0)t according to Eq. 1. In the following subsection, we show that our curriculum strategy can be derived by considering simple learning settings, such as contextual bandit problems with REINFORCE agent; these derivations provide insights about the design of the curriculum strategy. In Section 3.3, we provide a detailed step-by-step discussion on how our curriculum can be applied in practice to increasingly complex settings. 3.2 THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY To derive our curriculum strategy for the goal-based setting, we additionally consider independent tasks where any task s(0)t picked from the pool Sinit at step t only affects the agent’s knowledge component corresponding to that task. 
Further, we assume that there exists a knowledge parameter θ∗ ∈ Θ such that πθ∗ ∈ argmaxπ V π , and πθ∗ is referred to as the target policy. Then, based on the work of (Weinshall et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021), we investigate the effect of picking a task s(0)t at step t on the convergence of the agent’s parameter θt towards the target parameter θ∗. Under a smoothness condition on the value function of the form |V πθ − V πθ′ | ≤ L · ∥θ − θ′∥1 ,∀θ, θ′ ∈ Θ for some L > 0, we can translate the parameter convergence (θt → θ∗) into the performance convergence (V πθt → V πθ∗ ). Thus, we define the improvement in the training objective at step t as ∆t(θt+1 ∣∣θt, s(0)t , ξt) := [∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1]. (2) In this objective, we use the ℓ1-norm because our theoretical analysis considers the independent task setting mentioned above. Given two success values p∗, p ∈ R, we define a set of feasible tasks at step t as Dt(p∗, p) := {s ∈ Sinit : PoSθ∗(s) = p∗,PoSθt(s) = p}. The set Dt(p∗, p) contains all the tasks for which the probability of success score w.r.t. the target policy is equal to the value p∗ and the probability of success score w.r.t. the agent’s current policy is equal to the value p. Further, we define the expected improvement in the training objective at step t given success values p∗ and p as follows: Ct(p ∗, p) := E s (0) t ∼Uniform(Dt(p∗,p)) E ξt|s(0)t [ ∆t(θt+1|θt, s(0)t , ξt) ] , where the outer expectation is w.r.t. the uniform distribution over the setDt(p∗, p). In the following, we analyze the above quantity for specific agent models under the independent task setting. More concretely, Theorems 1 and 2 characterize the impact of picking a task at step t on the objective in Eq. 2 with the following values: (i) task’s PoS w.r.t. the target policy πθ∗ having value p∗ and (ii) task’s PoS w.r.t. the agent’s current policy having value p. For the specific settings considered in Sections 3.2.1 and 3.2.2, Theorems 1 and 2 imply that picking tasks based on the curriculum strategy given in Eq. 1 maximizes the expected value of the objective in Eq. 2. 3.2.1 ABSTRACT AGENT WITH DIRECT PERFORMANCE PARAMETERIZATION We consider an abstract agent model with the following direct performance parameterization: for any θ ∈ Θ = [0, 1]|Sinit|, we have PoSθ(s) = θ[s],∀s ∈ Sinit.2 Under this model, the agent’s current knowledge θt at step t is encoded directly by its probability of success scores {PoSθt(s) | s ∈ Sinit}. The target knowledge parameter θ∗ is given by {PoSθ∗(s) | s ∈ Sinit}. Under the independent task setting, we design an update rule for the agent to capture the ZPD concept in terms of the learning progress (Vygotsky & Cole, 1978; Chaiklin, 2003), and to also reflect characteristics of the policy gradient style update. In particular, for s = s(0)t ∈ Sinit, we update θt+1[s] ← θt[s] + α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]), where α, β ∈ [0, 1] and α > β. For s ∈ Sinit and s ̸= s(0)t , we maintain θt+1[s]← θt[s]. Importantly, α > β implies that the agent’s current knowledge for the picked task is updated more when the agent succeeds in that task compared to the failure case. The update rule captures the following idea: when picking a task that is “too easy”, the progress in θt towards θ∗ is minimal since (θ∗[s] − θt[s]) is low; similarly, when picking a task that is “too hard”, the progress in θt towards θ∗ is minimal since β · (θ∗[s]− θt[s]) is low for β ≪ 1. 
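To make the abstract update rule of this subsection concrete, here is a small simulation sketch under the independent-task setting; the task pool, parameter values, and success model are hypothetical, and the script simply compares the greedy selection of Eq. 1 (with PoS* given by the target success probabilities) against uniformly random task selection.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tasks, alpha, beta, steps = 50, 0.3, 0.05, 2000
theta_star = rng.uniform(0.5, 1.0, size=num_tasks)   # target success probabilities

def run(select_by_curriculum):
    theta = rng.uniform(0.0, 0.1, size=num_tasks)     # current success probabilities
    for _ in range(steps):
        if select_by_curriculum:
            s = np.argmax(theta * (theta_star - theta))  # Eq. 1 with PoS* = theta_star
        else:
            s = rng.integers(num_tasks)                  # uniform (IID-style) selection
        succ = rng.random() < theta[s]                   # attempt the picked task
        gain = alpha if succ else beta                   # larger update on success
        theta[s] += gain * (theta_star[s] - theta[s])    # update rule of Sec. 3.2.1
    return np.abs(theta_star - theta).sum()              # remaining l1 distance to theta*

print("curriculum:", run(True), " uniform:", run(False))  # lower is better
```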
The following theorem shows the differential effect of the probability of success scores p∗ and p on the expected improvement in the training objective Ct(p∗, p). Theorem 1. Consider the abstract agent with direct performance parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < αp∗−βp∗−β 2(α−β) , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > αp ∗−βp∗−β 2(α−β) , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = αp∗−βp∗−β 2(α−β) , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with α = 1 and β = 0, maxp∗,p Ct(p∗, p) is equivalent to maxp∗,p p · (p∗−p). This, in turn, implies that the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.2.2 REINFORCE AGENT WITH SOFTMAX POLICY PARAMETERIZATION We consider the REINFORCE agent model with the following softmax policy parameterization: for any θ ∈ R|S|·|A|, we parameterize the policy as πθ(a|s) ∝ exp(θ[s, a]),∀s ∈ S, a ∈ A. For 2In this setting, we abstract out the policy πθ and directly map the “parameter” θ to a vector of “performance on tasks” PoSθ . Then, we choose the parameter space as Θ = [0, 1]Sinit (where d = Sinit) and define PoSθ = θ. Thus, an update in the “parameter” θ is equivalent to an update in the “performance on tasks” PoSθ . this policy parameterization, the smoothness condition on the reward function provided in (Kamalaruban et al., 2019) can be translated to the smoothness condition on the value function. In the following, we consider a problem instance involving a pool of contextual bandit tasks (a special case of independent task setting). Consider an MDP M with g ∈ S as the goal state for all tasks, Sinit = S \ {g}, A = {a1, a2}, and H = 1. We define the reward function as follows: R(s, a) = 0,∀s ∈ S \ {g}, a ∈ A and R(g, a) = 1,∀a ∈ A. For a given probability mapping prand : S → [0, 1], we define the transition dynamics as follows: T (g|s, a1) = prand(s),∀s ∈ S; T (s|s, a1) = 1 − prand(s),∀s ∈ S; and T (s|s, a2) = 1,∀s ∈ S. Then, for the REINFORCE agent under the above setting, the following theorem shows the differential effect of p∗ and p on Ct(p∗, p): Theorem 2. Consider the REINFORCE agent with softmax policy parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < p∗ 2 , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > p∗ 2 , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = p∗ 2 , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with prand(s) = 1,∀s ∈ S , maxp Ct(1, p) is equivalent to maxp p · (1 − p). This means that for the case of PoS∗(s) = 1,∀s ∈ Sinit, the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.3 CURRICULUM STRATEGY FOR GENERAL SETTINGS Next, we discuss various practical issues in directly applying the curriculum strategy in Eq. 1 for general settings, and introduce several design choices to address these issues. Softmax selection. When training deep RL agents, it is typically useful to allow some stochasticity in the selected batch of tasks. Moreoever, the argmax selection in Eq. 1 is brittle in the presence of any approximation errors in computing PoS(·) values. To tackle this issue, we replace argmax selection in Eq. 
1 with softmax selection and sample according to the following distribution: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (3) where β is a hyperparameter. Here, PoSt(s) values are computed for each s ∈ Sinit using rollouts obtained via executing the policy πt inM; PoS∗(s) values are assumed to be provided as input. PoS∗(·) is not known. Since the target policy πθ∗ is unknown, it is not possible to compute the PoS∗(s) values without additional domain knowledge. In our experiments, we resort to simply setting PoS∗(s) = 1,∀s ∈ Sinit in Eq. 3 – the rationale behind this choice is that we expect the ideal πθ∗ to succeed in all the tasks in the pool. However, the above choice could lead to suboptimal strategy for specific scenarios, e.g., all PoS∗(s) are below 0.5. As an alternative, one could estimate PoS∗(s) during the training process, e.g., using top K% rollouts obtained by executing the current policy πt starting from s. This brings us to the following curriculum strategy referred to as PROCURL-ENV in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( 1− PoSt(s) )) . (4) Computing PoSt(·) is expensive. It is expensive (sample inefficient) to estimate PoSt(s) over the space Sinit using rollouts of the policy πt. To tackle this issue, we replace PoSt(s) with values Vt(s) obtained from the critic network of the RL agent. This brings us to the following curriculum strategy referred to as PROCURL-VAL in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · Vt(s) · ( 1− Vt(s) )) . (5) Extension to non-binary or dense reward settings. The current forms of PROCURL-VAL in Eq. 5 and PROCURL-ENV in Eq. 4 are not directly applicable for settings where the reward is non-binary or dense. To deal with this issue in PROCURL-VAL, we replace Vt(s) values from the critic in Eq. 5 with normalized values given by V t(s) = Vt(s)−Vmin Vmax−Vmin clipped to the range [0, 1]. Here, Vmin and Vmax could be provided as input based on the environment’s reward function; alternatively we can dynamically set Vmin and Vmax during the training process by taking min-max values of the critic for states Sinit at step t. To deal with this issue in PROCURL-ENV, we replace PoSt(s) values from the rollouts in Eq. 4 with normalized values V t(s) as above. Algorithm 2 in the appendix provides a complete pseudo-code for the RL agent training with PROCURL-VAL in this general setting. 4 EXPERIMENTAL EVALUATION In this section, we evaluate the effectiveness of our curriculum strategies on a variety of domains w.r.t. the uniform performance of the trained RL agent over the training pool of tasks. Additionally, we consider the following two metrics in our evaluation: (i) total number of environment steps incurred jointly by the teacher and the student components at the end of the training process; (ii) total clock time required for the training process. Throughout all the experiments, we use PPO method from Stable-Baselines3 library for policy optimization (Schulman et al., 2017; Raffin et al., 2021). 4.1 ENVIRONMENTS We consider 5 different environments in our evaluation as described in the following paragraphs. Figure 1 provides a summary and illustration of these environments. POINTMASS-S and POINTMASS-D. Based on the work of (Klink et al., 2020b), we consider a contextual POINTMASS environment where an agent navigates a point mass through a gate of a given size towards a goal in a two-dimensional space. 
More concretely, we consider two settings: (i) POINTMASS-S environment corresponds to a goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully moves the point mass to the goal position; (ii) POINTMASS-D environment corresponds to a dense reward setting as used by (Klink et al., 2020b) where the reward values decay in a squared exponential manner with increasing distance to the goal. Here, the contextual variable c ∈ R3 controls the position of the gate (C-GatePosition), the width of the gate (C-GateWidth), and the friction coefficient of the ground (C-Friction). We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks (here, each task corresponds to a different contextual variable). BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In our BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration; the environment is episodic with goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully transforms the task’s initial grid into the task’s final grid. Here, the contextual variable is discrete where each task can be considered as a discrete context. We construct the training pool of tasks by sampling 24000 tasks; additional details are provided in the appendix. BALLCATCHING. This environment is same as used in the work of (Klink et al., 2020b); here, an agent needs to direct a robot to catch a ball thrown towards it. The reward function is sparse and non-binary, only rewarding the robot when it catches the ball and penalizing it for excessive movements. The contextual vector c ∈ R3 captures the distance to the robot from which the ball is thrown and its goal position in a plane that intersects the base of the robot. We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks. ANTGOAL. This environment is adapted from the original MuJoCo ANT environment (Todorov et al., 2012). In our adaptation, we additionally have a goal on a flat 2D surface and an agent is rewarded for moving an ant robot towards the goal location. This goal-based reward term replaces the original reward term of making the ant move forward; also, this reward term increases exponentially when the ant moves closer to the goal location. We keep the other reward terms such as control and contact costs similar to the original MuJoCo ANT environment. The environment is episodic with a length of 200 steps. The goal location essentially serves as a contextual variable in R2. We construct the training pool of tasks by uniformly sampling 50 goal locations from a circle around the ant. 4.2 CURRICULUM STRATEGIES EVALUATED Variants of our curriculum strategy. We consider the curriculum strategies PROCURL-VAL and PROCURL-ENV from Section 3.3. Since PROCURL-ENV uses policy rollouts to estimate PoSt(s) in Eq. 4, it requires environment steps for selecting tasks in addition to environment steps for training. 
To compare PROCURL-VAL and PROCURL-ENV in terms of trade-off between performance and sample efficiency, we introduce a variant PROCURL-ENVX where x controls the budget of the total number of steps used for estimation and training. In Figure 3, variants with x ∈ {2, 4} refer to a total budget of about x million environment steps when training comprises of 1 million steps. State-of-the-art baselines. SPDL (Klink et al., 2020b) and SPACE (Eimer et al., 2021) are state-ofthe-art curriculum strategies for contextual RL. We adapt the implementation of an improved version of SPDL, presented in (Klink et al., 2021), to work with a discrete pool of tasks. We also introduce a variant of SPACE, namely SPACE-ALT, by adapting the implementation of (Eimer et al., 2021) to sample the next training task as P [ s (0) t = s ] ∝ exp ( β · ( Vt(s)− Vt−1(s) )) . Prototypical baselines. IID strategy randomly samples the next task from the pool; note that IID serves as a competitive baseline since we consider the uniform performance objective. We introduce two additional variants of PROCURL-ENV, namely EASY and HARD, to understand the importance of the two terms PoSt(s) and ( 1 − PoSt(s) ) in Eq. 4. EASY samples tasks as P [ s (0) t = s ] ∝ exp ( β·PoSt(s) ) , and HARD samples tasks as P [ s (0) t = s ] ∝ exp ( β· ( 1−PoSt(s) )) . 4.3 RESULTS Convergence behavior and curriculum plots. As shown in Figure 2, the RL agents trained using the variants of our curriculum strategy, PROCURL-ENV and PROCURL-VAL, either match or outperform the agents trained with state-of-the-art and prototypical baselines in all the environments. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. We provide further details in the appendix. Metrics comparison. In Figure 3, we compare curriculum strategies considered in our experiments w.r.t. different metrics. PROCURL-VAL has similar sample complexity as state-of-the-art baselines since it does not require additional environment steps for the teacher component. PROCURL-VAL performs better compared to SPDL and SPACE in terms of computational complexity. The effect of that is more evident as the pool size increases. The reason is that PROCURL-VAL only requires forward-pass operation on the critic-model to obtain value estimates for each task in the pool. SPDL and SPACE not only require the same forward-pass operations, but SPDL does an additional optimization step, and SPACE requires a task ordering step. In terms of agent’s performance, our curriculum strategies exceed or match these baselines at different training segments. Even though PROCURL-ENV consistently surpasses all the other variants in terms of performance, its teacher component requires a lot of additional environment steps. Regarding the prototypical baselines in Figure 3, we make the following observations: (a) IID is a strong baseline in terms of sample and computational efficiency, however, its performance tends to be unstable in POINTMASS-S environment because of high randomness; (b) EASY performs well in POINTMASS-S because of the presence of easy tasks in the task space of this environment, but, performs quite poorly in BASICKAREL; (c) HARD consistently fails in both the environments. 5 CONCLUDING DISCUSSIONS We proposed a novel curriculum strategy for deep RL agents inspired by the ZPD concept. 
We mathematically derived our strategy from basic principles and empirically demonstrated its effectiveness in a variety of complex domains. Here, we discuss a few limitations of our work and outline a plan on how to address them in future work. First, we provided theoretical justifications of our proposed curriculum using simple learner models; it would be interesting also to provide a rigorous analysis of how our curriculum strategy improves the convergence rates of (deep) RL agents. Second, our experimental results show that different variants of our proposed curriculum provide an inherent trade-off between runtime and performance; it would be interesting to systematically study these variants to obtain a more effective curriculum strategy across different metrics. Third, it would be interesting to extend our curriculum strategy to high-dimensional sparse reward environments; in particular, our curriculum strategy requires estimating the probability of success of all tasks in the pool when sampling a new task which becomes challenging in high dimensional context space. A TABLE OF CONTENTS In this section, we give a brief description of the content provided in the appendices of the paper. • Appendix B provides proofs for Theorems 1 and 2. (Section 3.2) • Appendix C provides additional details and results for experimental evaluation. (Section 4) B THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY – PROOFS (SECTION 3.2) B.1 PROOF OF THEOREM 1 Proof. Let s(0)t = s ∈ Sinit, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = θt+1[s]− θt[s] = α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]). For the abstract learner model defined in Section 3.2.1, we have PoSθ(s) = V πθ (s) = θ[s], for any s ∈ Sinit. Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : θ∗[s] = p∗, θt[s] = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s])] = Es∼Unif(Dt(p∗,p)) [α · θt[s] · (θ∗[s]− θt[s]) + β · (1− θt[s]) · (θ∗[s]− θt[s])] = α · p · (p∗ − p) + β · (1− p) · (p∗ − p) = α · p · p∗ − α · p2 + β · p∗ − β · p− β · p · p∗ + β · p2. We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = α · p∗ − 2 · α · p− β − β · p∗ + 2 · β · p ∂Ct(p ∗, p) ∂p∗ = α− β ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 2 · (α+ β) ≤ 0. Noting that ∂Ct∂pL = 0 when p = αp∗−βp∗−β 2(α−β) completes the proof. B.2 PROOF OF THEOREM 2 Proof. For the contextual bandit setting described in Section 3.2.2, the REINFORCE learner’s update rule reduces to the following: θt+1 ← θt+ηt·1{s(1)t = g}· [ ∇θ log πθ(a(0)t |s(0)t ) ] θ=θt . In particular, for s(0)t = s and a (0) t = a1, we update: θt+1[s, a1] ← θt[s, a1] + ηt · 1{s(1)t = g} · (1− πθt(a1|s)) θt+1[s, a2] ← θt[s, a2]− ηt · 1{s(1)t = g} · (1− πθt(a1|s)) and we set θt+1[s, ·]← θt[s, ·] when s(0)t ̸= s or a(0)t ̸= a1. Let s(0)t = s, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = ∥θ∗[s, ·]− θt[s, ·]∥1 − ∥θ∗[s, ·]− θt+1[s, ·]∥1 = {θ∗[s, a1]− θt[s, a1] + θt[s, a2]− θ∗[s, a2]} − {θ∗[s, a1]− θt+1[s, a1] + θt+1[s, a2]− θ∗[s, a2]} = θt+1[s, a1]− θt[s, a1] + θt[s, a2]− θt+1[s, a2] = 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)). For the contextual bandit setting, the probability of success is given by PoSθ(s) = V πθ (s) = prand(s) · πθ(a1|s),∀s ∈ S. We assume that ∃ θ∗ such that πθ∗(a1|s) → 1; here, πθ∗ is the target policy. 
With the above definition, the probability of success scores for any task associated with the starting state s ∈ S w.r.t. the target and agent’s current policies (at any step t) are respectively given by PoS∗(s) = PoSθ∗(s) = prand(s) and PoSt(s) = PoSθt(s) = prand(s) · πθt(a1|s). Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : prand(s) = p∗, prand(s) · πθt(a1|s) = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [ 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)) ] = Es∼Unif(Dt(p∗,p)) [2 · ηt · prand(s) · πθt(a1|s) · (1− πθt(a1|s))] = 2 · ηt · p · ( 1− p p∗ ) . We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = 2 · ηt · ( 1− 2p p∗ ) ∂Ct(p ∗, p) ∂p∗ = 2 · ηt · ( p p∗ )2 ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 4 · ηt p∗ ≤ 0. Noting that ∂Ct∂pL = 0 when p = p∗ 2 completes the proof. C EXPERIMENTAL EVALUATION – ADDITIONAL DETAILS (SECTION 4) C.1 ENVIRONMENTS BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In the BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by the action space A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration. It consists of an avatar, walls, markers, and empty grid cells, and each element has a specific location in the grid. The avatar is characterized by its current location and orientation. Its orientation can be any direction {North,East,South,West}, and its location can be any grid cell, except from grid cells where a wall is located. The state space S of BASICKAREL is any possible configuration of the avatar, walls, and markers in a pair of grids. The avatar can move around the grid and is directed via the basic Karel commands, i.e., the action space A. While the avatar moves, if it hits a wall or the grid boundary, it “crashes” and the episode terminates. If pickMarker is selected when no marker is present, the avatar “crashes” and the program ends. Likewise, if the putMarker action is taken and a marker is already present, the avatar “crashes” and the program terminates. The finish action indicates the end of the sequence of actions, i.e., the episode ends after encountering this action. To successfully solve a BASICKAREL task, the sequence of actions must end with a finish, and there should be no termination via “crashes”. Based on this environment, we created a multi-task dataset that consists of 24000 training tasks and 2400 test tasks. All the generated tasks have a grid size of 4× 4. C.2 EVALUATION SETUP Hyperparameters of PPO method. We use the PPO method from Stable-Baselines3 library with a basic MLP policy for all the conducted experiments (Schulman et al., 2017; Raffin et al., 2021). For the POINTMASS-S, POINTMASS-D, and BALLCATCHING environments, the MLP policy has a shared layer with 64 units and a second layer with separate 64 units for the policy and 64 units for the value function. For the BASICKAREL environment, we use two separate layers of size [512, 256] for the policy network and two layers of size [256, 128] for the value function network. 
For the ANTGOAL environment, we use two separate layers of size [512, 512] for the policy network and two layers of size [512, 512] for the value function network. For all the experiments, ReLU is the chosen activation function. In Figure 6, we report the PPO hyperparameters used in the experiments. For each environment, all the hyperparameters are consistent across all the different curriculum strategies. Compute resources. All the experiments were conducted on a cluster of machines with CPUs of model Intel Xeon Gold 6134M CPU @ 3.20GHz. C.3 CURRICULUM STRATEGIES EVALUATED Variants of the curriculum strategy. Algorithm 2 provides a complete pseudo-code for the RL agent using PPO method when trained with PROCURL-VAL in the general setting of non-binary or dense rewards (see Section 3.3). In Eq. 1 and Algorithm 1, we defined t at an episodic level; however, in Algorithm 2, t denotes an environment step (in the context of the PPO method). For PROCURL-ENV, in line 24 of Algorithm 2, we estimate the probability of success for all the tasks using the additional rollouts obtained by executing the current policy inM. Hyperparameters of curriculum strategies. In Figure 7, we report the hyperparameters of each curriculum strategy used in the experiments (for each environment). Below, we provide a short description of these hyperparameters: 1. β parameter controls the stochasticity of the softmax selection. 2. Npos parameter controls the frequency at which V t is updated. For PROCURL-ENV, we set Npos higher than Nsteps since obtaining rollouts to update V t(s) is expensive. For all the other curriculum strategies, we set Npos = Nsteps. For SPACE, Npos controls how frequently the current task dataset is updated based on their curriculum. For SPDL, Npos controls how often we perform the optimization step to update the distribution for selecting tasks. 3. crollouts determines the number of additional rollouts required to compute the probability of success score for each task (only for PROCURL-ENV). 4. {Vmin, Vmax} are used in the environments with non-binary or dense rewards to obtain the normalized values V (s) (see Section 3.3). In Figure 7, {Vmin,t, Vmax,t} denote the min-max values of the critic for states Sinit at step t. 5. η and κ parameters as used in SPACE (Eimer et al., 2021). 6. VLB performance threshold as used in SPDL (Klink et al., 2021). Algorithm 2 RL agent using PPO method when trained with PROCURL-VAL in the general setting 1: Input: RL algorithm PPO, rollout buffer D 2: Hyperparameters: policy update frequency Nsteps, number of epochs Nepochs, number of mini- batches Nbatch, parameter β, Vmin, and Vmax 3: Initialization: randomly initialize policy π1 and critic V1; set normalized probability of success scores V 1(s) = 0 and PoS∗(s) = 1, ∀s ∈ Sinit 4: for t = 1, . . . , T do 5: // add an environment step to the buffer 6: observe the state st, and select the action at ∼ πt(st) 7: execute the action at in the environment 8: observe reward rt, next state st+1, and done signal dt+1 to indicate whether st+1 is terminal 9: store (st, at, rt, st+1, dt+1) in the rollout buffer D 10: // choose new task when the current task/episode ends 11: if dt+1 = true then 12: reset the environment state 13: sample next task st+1 from P [ st+1 = s ] ∝ exp ( β · V t(s) · (1− V t(s)) ) 14: // policy and V t(s) update 15: if t%Nsteps = 0 then 16: set π′ ← πt and V ′ ← Vt 17: for e = 1, . . . , Nepochs do 18: for b = 1, . . . 
, Nbatch do 19: sample b-th minibatch of Nsteps/Nbatch transitions B = {(s, a, r, s′, d)} from D 20: update policy and critic using PPO algorithm π′, V ′ ← PPO(π′, V ′, B) 21: set πt+1 ← π′ and Vt+1 ← V ′ 22: empty the rollout buffer D 23: // normalization for the environments with non-binary or dense rewards 24: update V t+1(s)← Vt+1(s)−VminVmax−Vmin , ∀s ∈ Sinit using forward passes on critic 25: else 26: maintain the previous values πt+1 ← πt, Vt+1 ← Vt, and V t+1 ← V t 27: Output: policy πT C.4 RESULTS Convergence behavior. In Figure 8, we report the performance of the trained models in the training set and a test set for comparison purposes. For POINTMASS-S, we constructed a separate test set of 100 tasks by uniformly picking tasks from the task space. For BASICKAREL, we have a train and test dataset of 24000 and 2400 tasks, respectively. Curriculum plots. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. The increasing trend in Figure 4a corresponds to a preference shift towards tasks with the gate positioned closer to the edges; the decreasing trend in Figure 4b corresponds to a preference shift towards tasks with narrower gates. For BASICKAREL, the increasing trends in Figures 5a and 5b correspond to a preference towards tasks with longer solution trajectories and tasks requiring a marker to be picked or put, respectively. In Figures 5c and 5d, tasks with a distractor marker (C-DistractorMarker) and tasks with more walls (C-Walls) are increasingly selected while training. In Figure 9, we show illustrative tasks of BASICKAREL used during the training process at different steps for PROCURL-VAL. Ablation and robustness experiments. We conduct additional experiments to evaluate the robustness of PROCURL-VAL w.r.t. different values of β and different ϵ-level noise in Vt(s) values. The results are reported in Figure 10. From the reported results, we note that picking a value for β somewhere between 10 to 30 leads to competitive performance, and PROCURL-VAL is robust even for noise levels up to ϵ = 0.2. C.5 ADDITIONAL RESULTS AND DISCUSSION PROCURL-ENVX vs. PROCURL-VAL. To achieve the constrained budget of evaluation steps in PROCURL-ENVx (with x ∈ {2, 4}), we reduce the frequency of updating PoSt since this is the most expensive operation for PROCURL-ENV requiring additional rollouts for each task. On the other hand, PROCURL-VAL updates PoSt by using the values obtained from forward-pass on the critic model – this update happens whenever the critic model is updated (every 2048 training steps for BASICKAREL). This higher frequency of updating PoSt in PROCURL-VAL is why it is slower than PROCURL-ENVx (with x ∈ {2, 4}) for BASICKAREL. Note that the relative frequency of updates for POINTMASS is different in comparison to BASICKAREL because of very different pool sizes. Hence, the behavior in total clock times is different. γ2/γ1 ablation. We conduct an additional ablation study on the form of our curriculum objective presented in Eq. 1. More specifically, we consider the following generalized variant of Eq. 1 with parameters γ1 and γ2: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( γ1 · PoS∗(s)− γ2 · PoSt(s) )) (6) In our experiments, we consider the following range of γ2/γ1 ∈ {0.6, 0.8, 1.0, 1.2, 1.4}. Our default curriculum strategy in Eq. 1 essentially corresponds to γ2/γ1 = 1.0. 
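As an illustration of how this generalized score could be turned into the softmax sampling distribution used in our experiments, the following NumPy sketch (not the code used to produce the reported results; the PoS_t values are randomly generated stand-ins, PoS∗(s) is set to 1 as in Section 3.3, and the β value is an arbitrary choice) scans the γ2/γ1 ratios considered above and reports which difficulty level each ratio favors.

```python
# Illustrative sketch of the generalized selection rule in Eq. 6 and its softmax variant.
import numpy as np

rng = np.random.default_rng(0)
pos_t = rng.random(100)          # stand-in for PoS_t(s) (or normalized critic values)
pos_star = np.ones_like(pos_t)   # PoS*(s) = 1 assumption from Section 3.3
beta = 20.0                      # softmax temperature hyperparameter

def task_distribution(pos_t, pos_star, gamma1=1.0, gamma2=1.0, beta=beta):
    """Softmax task-sampling distribution induced by the Eq. 6 scores."""
    scores = pos_t * (gamma1 * pos_star - gamma2 * pos_t)
    logits = beta * scores
    probs = np.exp(logits - logits.max())   # subtract max for numerical stability
    return probs / probs.sum()

for ratio in [0.6, 0.8, 1.0, 1.2, 1.4]:
    probs = task_distribution(pos_t, pos_star, gamma1=1.0, gamma2=ratio)
    preferred = pos_t[probs.argmax()]
    print(f"gamma2/gamma1 = {ratio:.1f}: most likely task has PoS_t ~ {preferred:.2f}")
```

Smaller γ2/γ1 ratios shift the peak of the score p · (γ1 − γ2 · p) toward easier tasks (higher PoS_t), while larger ratios favor harder tasks; the default γ2/γ1 = 1.0 recovers the Eq. 1 preference for tasks of intermediate success probability.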
The following table presents the results for the environments POINTMASS-S and BASICKAREL.
1. What is the focus and contribution of the paper regarding reinforcement learning? 2. What are the strengths of the proposed approach, particularly in its experimental results? 3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works? 4. Do you have any questions or concerns about the paper's content? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a curriculum strategy for reinforcement learning that selects initial states for the agent such that they are in the "Zone of Proximal Development" (ZPD), i.e. selecting states that are not too easy and not too hard. They formalize this concept in the context of reinforcement learning, and provide different implementations of the strategy. The paper provides experimental results on several different environments (PointMass, BasicKarel, BallCatching, and Antgoal), and show that their method performs as well or better than the baselines on most environments. Strengths And Weaknesses Strengths: I think the paper is well motivated with the idea of ZPD. The idea of selecting a curriculum based on that concept makes sense, and the paper also does a decent enough job at formalizing this concept (although certain steps need more explanation). The paper runs its experiments on a fairly wide array of environments, and shows that their method is competitive or outperforms other baselines on all of those environments, even when constraining to a similar number of resources. The paper proposes two variants of their method, one based on policy rollouts to estimate of the "probability of success" metric, and one where the metric was computed with a function approximator value function. The paper also shows the effect of using fewer policy rollouts and getting closer to the value function based variant. Weaknesses: The authors mention a result that as the pool size of available initial states increases, their method is better able to create a curriculum compared to other methods. To make this claim, they looked at the effect of pool size across different environments, but this claim would be more convincing if within the same environment, they sample different number of tasks for the pool, and show that their method does better as the pool size increases. The theoretical justification could be made clearer. Specifically the link between theorems 1/2 and the curriculum objectives. Question: Why do the procurl-env^{2,4} methods take less time than the procurl-val method for BasicKarel? Clarity, Quality, Novelty And Reproducibility The ideas in the paper seem to be novel as far as I know. The theoretical justification could be made clearer.
ICLR
Title Proximal Curriculum for Reinforcement Learning Agents Abstract We consider the problem of curriculum design for reinforcement learning (RL) agents in contextual multi-task settings. Existing techniques on automatic curriculum design typically have limited theoretical underpinnings or require domainspecific hyperparameter tuning. To tackle these limitations, we design our curriculum strategy, PROCURL, inspired by the pedagogical concept of Zone of Proximal Development (ZPD). We mathematically derive PROCURL by formalizing the ZPD concept, which suggests that learning progress is maximized when picking tasks that are neither too hard nor too easy for the learner. We also present a practical variant of PROCURL that can be directly integrated with deep RL frameworks with minimal hyperparameter tuning. Experimental results on a variety of domains demonstrate the effectiveness of our curriculum strategy over state-ofthe-art baselines in accelerating the training process of deep RL agents. 1 INTRODUCTION Recent advances in deep reinforcement learning (RL) have demonstrated impressive performance in games, continuous control, and robotics (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2017; Levine et al., 2016). Despite these remarkable successes, a broader application of RL in real-world domains is often very limited. For example, training RL agents in contextual multi-task settings and goal-based tasks with sparse rewards still remains challenging (Hallak et al., 2015; Kirk et al., 2021; Andrychowicz et al., 2017; Florensa et al., 2017; Riedmiller et al., 2018). Inspired by the importance of curricula in pedagogical domains, there is a growing interest in leveraging curriculum strategies when training machine learning models in challenging domains. In the supervised learning setting, such as image classification, the impact of the order of presented training examples has been studied both theoretically and empirically (Weinshall et al., 2018; Weinshall & Amir, 2018; Zhou & Bilmes, 2018; Zhou et al., 2021; Elman, 1993; Bengio et al., 2009; Zaremba & Sutskever, 2014). Recent works have also studied curriculum strategies for learners in sequentialdecision-making settings, such as imitation learning (where the agent learns from demonstrations) and RL (where the agent learns from rewards). In the imitation learning setting, recent works have proposed greedy curriculum strategies for picking the next training demonstration according to the agent’s learning progress (Kamalaruban et al., 2019; Yengera et al., 2021). In the RL setting, several curriculum strategies have been proposed to improve sample efficiency, e.g., by choosing an appropriate next starting state or goal state for the task to train on (Wöhlke et al., 2020; Florensa et al., 2017; 2018; Racanière et al., 2020; Riedmiller et al., 2018; Klink et al., 2020a;b; Eimer et al., 2021). Despite extensive research on curriculum design for the RL setting, existing techniques typically have limited theoretical underpinnings or require domain-specific hyperparameter tuning. In this paper, we are interested in developing a principled curriculum strategy for the RL setting that is broadly applicable to many domains with minimal tuning of hyperparameters. To this end, we rely on the Zone of Proximal Development (ZPD) concept from the educational psychology literature (Vygotsky & Cole, 1978; Chaiklin, 2003). 
The ZPD concept, when applied in terms of learning progress, suggests that progress is maximized when the learner is presented with tasks that lie in the proximal zone, i.e., tasks that are neither too hard nor too easy. To formally capture this idea of proximal zone, we use a notion of probability of success score PoSπt(s) w.r.t. the learner’s current policy πt for any given task s. We mathematically derive an intuitive curriculum strategy based on a learner update rule that captures the ZPD concept in terms of the learning progress and reflects characteristics of the policy gradient style update. Our main results and contributions are as follows: I. We propose a curriculum strategy, PROCURL, inspired by the ZPD concept. PROCURL formalizes the idea of picking tasks that are neither too hard nor too easy for the learner in the form of selection strategy argmaxs PoSπt(s) · ( PoS∗(s) − PoSπt(s) ) , where PoS∗(s) corresponds to the probability of success score w.r.t. an optimal policy (Section 3.1). II. We derive PROCURL under two specific learning settings where we analyze the effect of picking a task on the agent’s learning progress (Section 3.2). III. We present a practical variant of PROCURL, namely PROCURL-VAL, that can be easily integrated with deep RL frameworks with minimal hyperparameter tuning (Section 3.3). IV. We empirically demonstrate the effectiveness of PROCURL-VAL over state-of-the-art baselines in accelerating the training process of deep RL agents in a variety of environments (Section 4). 1.1 RELATED WORK Curriculum strategies based on domain knowledge. Early works on curriculum design for supervised learning setting typically order the training examples in increasing difficulty (Elman, 1993; Bengio et al., 2009; Schmidhuber, 2013; Zaremba & Sutskever, 2014). This easy-to-hard design principle has been utilized in the hand-crafted curriculum approaches for the RL setting (Asada et al., 1996; Wu & Tian, 2016). Moreover, there has been recent works on designing greedy curriculum strategies for the imitation learning setting based on the iterative machine teaching framework (Liu et al., 2017; Yang et al., 2018; Zhu et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021). However, these approaches require domain-specific expert knowledge for designing difficulty measures. Curriculum strategies based on ZPD concept. In the pedagogical setting, it has been realized that effective teaching provides tasks that are neither too hard nor too easy for the human learner. This intuition of providing tasks from a particular range of difficulties is conceptualized in the ZPD concept (Vygotsky & Cole, 1978; Chaiklin, 2003; Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Zou et al., 2019). In the RL setting, several curriculum strategies that have been proposed are inherently based on the ZPD concept (Florensa et al., 2017; 2018; Wöhlke et al., 2020). A common underlying theme in both (Florensa et al., 2017) and (Florensa et al., 2018) is that they choose the next task (starting or goal state) for the learner uniformly at random from the set {s : rmin ≤ PoSπt(s) ≤ rmax}. Here, the threshold values rmin and rmax require tuning according to the learner’s progress and specific to the domain. The authors in (Wöhlke et al., 2020) propose a unified framework for the learner’s performance-based starting state curricula in RL. 
In particular, the starting state selection policy of (Wöhlke et al., 2020), P [ s (0) t = s ] ∝ G(PoSπt(s)) for some function G, accommodates existing curriculum generation methods like (Florensa et al., 2017; Graves et al., 2017). Despite promising empirical results, a conceptual formalism or theoretical underpinnings relating an RL agent’s learning progress to the ZPD concept is still missing in the aforementioned works. We address this conceptual gap in the literature by designing and analyzing a learner update rule that captures the ZPD concept in terms of the learning progress and also reflects characteristics of the policy gradient style update. Curriculum strategies based on self-paced learning (SPL). In the supervised learning setting, the curriculum strategies using the SPL concept optimize the trade-off between exposing the learner to all available training examples and selecting examples in which it currently performs well (Kumar et al., 2010; Jiang et al., 2015). In SPDL (Klink et al., 2020b;a; 2021; 2022) and SPACE (Eimer et al., 2021), the authors have adapted the concept of SPL to the RL setting by controlling the intermediate task distribution with respect to the learner’s current training progress. However, SPDL and SPACE differ in their mode of operation and the objective. SPDL considers the procedural task generation framework where tasks of appropriate difficult levels can be synthesized, as also considered in (Florensa et al., 2017; 2018)). In contrast, SPACE considers a pool-based curriculum framework for picking suitable tasks, as popular in supervised learning setting. Further, SPDL considers the objective of a targeted performance w.r.t. a target distribution (e.g., concentrated distribution on hard tasks); in contrast, SPACE considers the objective of uniform performance across a given pool of tasks. Similar to SPACE, in our work, we consider the pool-based setting with uniform performance objective. Both SPDL and SPACE serve as state-of-the-art baselines in our experimental evaluation. In terms of curriculum strategy, SPDL operates by solving an optimization problem at each step to pick a task (Klink et al., 2021); SPaCE uses a ranking induced by magnitude of differences in current/previous critic values at each step to pick a task (Eimer et al., 2021). In the appendix, we have also provided some additional information on hyperparameters for SPDL and SPaCE. Other automatic curriculum strategies. There are other approaches for automatic curriculum generation, including: (i) by formulating the curriculum design problem with the use of a meta-level Markov Decision Process (Narvekar et al., 2017; Narvekar & Stone, 2019); (ii) by learning how to generate training tasks similar to a teacher (Dendorfer et al., 2020; Such et al., 2020; Matiisen et al., 2019; Turchetta et al., 2020); (iii) by leveraging self-play as a form of curriculum generation (Sukhbaatar et al., 2018); (iv) by using the disagreement between different agents trained on the same tasks (Zhang et al., 2020); (v) by picking the starting states based on a single demonstration (Salimans & Chen, 2018; Resnick et al., 2018); and (vi) by providing agents with environment variations that are at the frontier of an agent’s capabilities, e.g., Unsupervised Environment Design methods (Dennis et al., 2020; Jiang et al., 2021; Parker-Holder et al., 2022). We refer the reader to recent surveys on curriculum design for the RL setting (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). 
2 FORMAL SETUP In this section, we formalize our problem setting based on prior work on teacher-student curriculum learning (Matiisen et al., 2019). MDP environment. We consider a learning environment defined as a Markov Decision Process (MDP)M := (S,A, T , H,R,Sinit). Here, S andA denote the state and action spaces, T : S ×S × A → [0, 1] is the transition dynamics, H is the maximum length of the episode, and R : S×A → R is the reward function. The set of initial states Sinit ⊆ S specifies a fixed pool of tasks, i.e., each starting state s ∈ Sinit corresponds to a unique task. Note that the above environment formalism is quite general enough to cover many practical settings, including the contextual multi-task MDP setting (Hallak et al., 2015).1 RL agent and training process. We consider an RL agent acting in this environment via a policy π : S × A → [0, 1] that is a mapping from a state to a probability distribution over actions. Given a task with the corresponding starting state s ∈ Sinit, the agent attempts the task via a trajectory rollout obtained by executing its policy π from s in the MDP M. The trajectory rollout is denoted as ξ = {(s(τ), a(τ), R(s(τ), a(τ)))}τ=0,1,...,h with s(0) = s and for some h ≤ H . The agent’s performance on task s is measured via the value function V π(s) := E [∑h τ=0 R(s (τ), a(τ)) ∣∣π,M, s(0) = s]. Then, the uniform performance of the agent over the pool of tasks Sinit is given by V π := Es∼Uniform(Sinit) [V π(s)]. The training process of the agent involves an interaction between two components: a student component that is responsible for policy update and a teacher component that is responsible for task selection. The interaction happens in discrete steps, indexed by t = 1, 2, . . ., and is formally described in Algorithm 1. Let πend denote the agent’s final policy at the end of training. The training objective is to ensure that the uniform performance of the policy πend is ϵ-near-optimal, i.e., (maxπ V π − V πend) ≤ ϵ. In the following two paragraphs, we discuss the student and teacher components in detail. Student component. We consider a parametric representation for the RL agent, whose current knowledge is parameterized by θ ∈ Θ ⊆ Rd and each parameter θ is mapped to a policy πθ : S×A → [0, 1]. At step t, the student component updates the knowledge parameter based on the following quantities: the current knowledge parameter θt, the task picked by the teacher component, and the rollout ξt = {(s(τ)t , a(τ)t , R(s(τ)t , a(τ)t ))}τ . Then, the updated knowledge parameter θt+1 is mapped to the agent’s policy given by πt+1 := πθt+1 . As a concrete example, the knowledge parameter of the REINFORCE agent (Sutton et al., 1999) is updated as θt+1 ← θt + ηt · ∑h−1 τ=0 G (τ) t · g(τ)t , where ηt is the learning rate, G (τ) t = ∑h τ ′=τ R(s (τ ′) t , a (τ ′) t ), and g (τ) t = [ ∇θ log πθ(a(τ)t |s(τ)t ) ] θ=θt . Teacher component. At step t, the teacher component picks a task with the corresponding starting state s(0)t for the student component to attempt via a trajectory rollout (see line 3 in Algorithm 1). The sequence of tasks (curriculum) picked by the teacher component affects the performance improvement of the policy πt. The main focus of this work is to develop a teacher component to achieve the training objective in both computational and sample efficient manner. 1In this setting, for a given set of contexts C, the pool of tasks is given by {Mc = (S,A, Tc, H,Rc,S init) : c ∈ C}. 
Our environment formalism (MDP M) covers this setting as follows: S = S × C; Sinit = S init × C; T ((s̄′, c)|(s̄, c), a) = Tc(s̄′|s̄, a) and R((s̄, c), a) = Rc(s̄, a), ∀s̄, s̄′ ∈ S, a ∈ A, c ∈ C. Algorithm 1 RL Agent Training as Interaction between Teacher-Student Components 1: Input: RL agent’s initial policy π1 2: for t = 1, 2, . . . do 3: Teacher component picks a task with the corresponding starting state s(0)t . 4: Student component attempts the task via a trajectory rollout ξt using the policy πt from s (0) t . 5: Student component updates the policy to πt+1. 6: Output: RL agent’s final policy πend ← πt+1. 3 PROXIMAL CURRICULUM STRATEGY In Section 3.1, we propose a curriculum strategy for the goal-based setting. In Section 3.2, we show that the proposed curriculum strategy can be derived from basic principles by formalizing the ZPD concept. In Section 3.3, we present our final curriculum strategy that is applicable in general settings. 3.1 CURRICULUM STRATEGY FOR THE GOAL-BASED SETTING Here, we introduce our curriculum strategy for the goal-based setting using the notion of probability of success scores. Goal-based setting. In this setting, the reward function R is goal-based, i.e., the agent gets a reward of 1 only at the goal states and 0 at other states; moreover, any action from a goal state also leads to termination. For any task with the corresponding starting state s ∈ Sinit, we say that the attempted rollout ξ succeeds in the task if the final state of ξ is a goal state. Formally, succ(ξ; s) is an indicator function whose value is 1 when the rollout ξ succeeds in the task s, and 0 otherwise. Furthermore, for an agent with policy π, we have that V π(s) := E [ succ(ξ; s) ∣∣π,M] is equal to the total probability of reaching a goal state by executing the policy π starting from s ∈ Sinit. Probability of success. We begin by assigning a probability of success score for any task with the corresponding starting state s ∈ Sinit w.r.t. any parameterized policy πθ in the MDPM. Definition 1. For any given knowledge parameter θ ∈ Θ and any starting state s ∈ Sinit, we define the probability of success score PoSθ(s) as the probability of successfully solving the task s by executing the policy πθ in the MDPM. For the goal-based settings, we have PoSθ(s) = V πθ (s). With the above definition, the probability of success score for any task s ∈ Sinit w.r.t. the agent’s current policy πt is given by PoSt(s) := PoSθt(s). Further, we define PoS ∗(s) := maxθ∈Θ PoSθ(s). Curriculum strategy. Based on the notion of probability of success scores that we defined above, we propose the following curriculum strategy: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (1) i.e., at step t, the teacher component picks a task associated with the starting state s(0)t according to Eq. 1. In the following subsection, we show that our curriculum strategy can be derived by considering simple learning settings, such as contextual bandit problems with REINFORCE agent; these derivations provide insights about the design of the curriculum strategy. In Section 3.3, we provide a detailed step-by-step discussion on how our curriculum can be applied in practice to increasingly complex settings. 3.2 THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY To derive our curriculum strategy for the goal-based setting, we additionally consider independent tasks where any task s(0)t picked from the pool Sinit at step t only affects the agent’s knowledge component corresponding to that task. 
Further, we assume that there exists a knowledge parameter θ∗ ∈ Θ such that πθ∗ ∈ argmaxπ V π , and πθ∗ is referred to as the target policy. Then, based on the work of (Weinshall et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021), we investigate the effect of picking a task s(0)t at step t on the convergence of the agent’s parameter θt towards the target parameter θ∗. Under a smoothness condition on the value function of the form |V πθ − V πθ′ | ≤ L · ∥θ − θ′∥1 ,∀θ, θ′ ∈ Θ for some L > 0, we can translate the parameter convergence (θt → θ∗) into the performance convergence (V πθt → V πθ∗ ). Thus, we define the improvement in the training objective at step t as ∆t(θt+1 ∣∣θt, s(0)t , ξt) := [∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1]. (2) In this objective, we use the ℓ1-norm because our theoretical analysis considers the independent task setting mentioned above. Given two success values p∗, p ∈ R, we define a set of feasible tasks at step t as Dt(p∗, p) := {s ∈ Sinit : PoSθ∗(s) = p∗,PoSθt(s) = p}. The set Dt(p∗, p) contains all the tasks for which the probability of success score w.r.t. the target policy is equal to the value p∗ and the probability of success score w.r.t. the agent’s current policy is equal to the value p. Further, we define the expected improvement in the training objective at step t given success values p∗ and p as follows: Ct(p ∗, p) := E s (0) t ∼Uniform(Dt(p∗,p)) E ξt|s(0)t [ ∆t(θt+1|θt, s(0)t , ξt) ] , where the outer expectation is w.r.t. the uniform distribution over the setDt(p∗, p). In the following, we analyze the above quantity for specific agent models under the independent task setting. More concretely, Theorems 1 and 2 characterize the impact of picking a task at step t on the objective in Eq. 2 with the following values: (i) task’s PoS w.r.t. the target policy πθ∗ having value p∗ and (ii) task’s PoS w.r.t. the agent’s current policy having value p. For the specific settings considered in Sections 3.2.1 and 3.2.2, Theorems 1 and 2 imply that picking tasks based on the curriculum strategy given in Eq. 1 maximizes the expected value of the objective in Eq. 2. 3.2.1 ABSTRACT AGENT WITH DIRECT PERFORMANCE PARAMETERIZATION We consider an abstract agent model with the following direct performance parameterization: for any θ ∈ Θ = [0, 1]|Sinit|, we have PoSθ(s) = θ[s],∀s ∈ Sinit.2 Under this model, the agent’s current knowledge θt at step t is encoded directly by its probability of success scores {PoSθt(s) | s ∈ Sinit}. The target knowledge parameter θ∗ is given by {PoSθ∗(s) | s ∈ Sinit}. Under the independent task setting, we design an update rule for the agent to capture the ZPD concept in terms of the learning progress (Vygotsky & Cole, 1978; Chaiklin, 2003), and to also reflect characteristics of the policy gradient style update. In particular, for s = s(0)t ∈ Sinit, we update θt+1[s] ← θt[s] + α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]), where α, β ∈ [0, 1] and α > β. For s ∈ Sinit and s ̸= s(0)t , we maintain θt+1[s]← θt[s]. Importantly, α > β implies that the agent’s current knowledge for the picked task is updated more when the agent succeeds in that task compared to the failure case. The update rule captures the following idea: when picking a task that is “too easy”, the progress in θt towards θ∗ is minimal since (θ∗[s] − θt[s]) is low; similarly, when picking a task that is “too hard”, the progress in θt towards θ∗ is minimal since β · (θ∗[s]− θt[s]) is low for β ≪ 1. 
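To make the differential effect of this update rule concrete before stating the theorem, the following minimal NumPy sketch (added here for illustration; it is not code from the paper, and the constants α = 0.5, β = 0.1, PoS∗(s) = 1, and the grid of PoS_t values are arbitrary choices) simulates the abstract learner's one-step progress for tasks of varying difficulty and compares the empirically best PoS_t against the optimum given by Theorem 1 and against the task preferred by Eq. 1 in the α = 1, β = 0 case.

```python
# Monte Carlo check of the abstract learner from Section 3.2.1:
# expected one-step progress ||theta* - theta_t||_1 - ||theta* - theta_{t+1}||_1
# when the picked task has PoS_t(s) = p and PoS*(s) = p_star.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.5, 0.1                 # update constants with alpha > beta
p_star = 1.0                           # PoS of every task under the target policy
p_grid = np.linspace(0.05, 0.95, 19)   # candidate PoS_t(s) values

def expected_progress(p, n_rollouts=20000):
    """Monte Carlo estimate of E[Delta_t] for a task with PoS_t(s) = p."""
    succ = rng.random(n_rollouts) < p                 # succ(xi_t; s) ~ Bernoulli(p)
    step = np.where(succ, alpha, beta) * (p_star - p) # one-step change in theta[s]
    return step.mean()

progress = np.array([expected_progress(p) for p in p_grid])
best_p = p_grid[progress.argmax()]
closed_form = (alpha * p_star - beta * p_star - beta) / (2 * (alpha - beta))
print(f"empirical argmax over PoS_t ~ {best_p:.2f}, Theorem 1 optimum = {closed_form:.2f}")

# Eq. 1 with alpha = 1, beta = 0 reduces to argmax_s PoS_t(s) * (PoS*(s) - PoS_t(s)).
eq1_scores = p_grid * (p_star - p_grid)
print(f"task preferred by Eq. 1: PoS_t ~ {p_grid[eq1_scores.argmax()]:.2f}")
```

Up to Monte Carlo noise and grid resolution, the empirical maximizer matches the closed-form value (αp∗ − βp∗ − β)/(2(α − β)); in both cases, tasks of intermediate success probability, neither too easy nor too hard, yield the largest expected progress.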
The following theorem shows the differential effect of the probability of success scores p∗ and p on the expected improvement in the training objective Ct(p∗, p). Theorem 1. Consider the abstract agent with direct performance parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < αp∗−βp∗−β 2(α−β) , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > αp ∗−βp∗−β 2(α−β) , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = αp∗−βp∗−β 2(α−β) , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with α = 1 and β = 0, maxp∗,p Ct(p∗, p) is equivalent to maxp∗,p p · (p∗−p). This, in turn, implies that the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.2.2 REINFORCE AGENT WITH SOFTMAX POLICY PARAMETERIZATION We consider the REINFORCE agent model with the following softmax policy parameterization: for any θ ∈ R|S|·|A|, we parameterize the policy as πθ(a|s) ∝ exp(θ[s, a]),∀s ∈ S, a ∈ A. For 2In this setting, we abstract out the policy πθ and directly map the “parameter” θ to a vector of “performance on tasks” PoSθ . Then, we choose the parameter space as Θ = [0, 1]Sinit (where d = Sinit) and define PoSθ = θ. Thus, an update in the “parameter” θ is equivalent to an update in the “performance on tasks” PoSθ . this policy parameterization, the smoothness condition on the reward function provided in (Kamalaruban et al., 2019) can be translated to the smoothness condition on the value function. In the following, we consider a problem instance involving a pool of contextual bandit tasks (a special case of independent task setting). Consider an MDP M with g ∈ S as the goal state for all tasks, Sinit = S \ {g}, A = {a1, a2}, and H = 1. We define the reward function as follows: R(s, a) = 0,∀s ∈ S \ {g}, a ∈ A and R(g, a) = 1,∀a ∈ A. For a given probability mapping prand : S → [0, 1], we define the transition dynamics as follows: T (g|s, a1) = prand(s),∀s ∈ S; T (s|s, a1) = 1 − prand(s),∀s ∈ S; and T (s|s, a2) = 1,∀s ∈ S. Then, for the REINFORCE agent under the above setting, the following theorem shows the differential effect of p∗ and p on Ct(p∗, p): Theorem 2. Consider the REINFORCE agent with softmax policy parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < p∗ 2 , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > p∗ 2 , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = p∗ 2 , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with prand(s) = 1,∀s ∈ S , maxp Ct(1, p) is equivalent to maxp p · (1 − p). This means that for the case of PoS∗(s) = 1,∀s ∈ Sinit, the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.3 CURRICULUM STRATEGY FOR GENERAL SETTINGS Next, we discuss various practical issues in directly applying the curriculum strategy in Eq. 1 for general settings, and introduce several design choices to address these issues. Softmax selection. When training deep RL agents, it is typically useful to allow some stochasticity in the selected batch of tasks. Moreoever, the argmax selection in Eq. 1 is brittle in the presence of any approximation errors in computing PoS(·) values. To tackle this issue, we replace argmax selection in Eq. 
1 with softmax selection and sample according to the following distribution: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (3) where β is a hyperparameter. Here, PoSt(s) values are computed for each s ∈ Sinit using rollouts obtained via executing the policy πt inM; PoS∗(s) values are assumed to be provided as input. PoS∗(·) is not known. Since the target policy πθ∗ is unknown, it is not possible to compute the PoS∗(s) values without additional domain knowledge. In our experiments, we resort to simply setting PoS∗(s) = 1,∀s ∈ Sinit in Eq. 3 – the rationale behind this choice is that we expect the ideal πθ∗ to succeed in all the tasks in the pool. However, the above choice could lead to suboptimal strategy for specific scenarios, e.g., all PoS∗(s) are below 0.5. As an alternative, one could estimate PoS∗(s) during the training process, e.g., using top K% rollouts obtained by executing the current policy πt starting from s. This brings us to the following curriculum strategy referred to as PROCURL-ENV in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( 1− PoSt(s) )) . (4) Computing PoSt(·) is expensive. It is expensive (sample inefficient) to estimate PoSt(s) over the space Sinit using rollouts of the policy πt. To tackle this issue, we replace PoSt(s) with values Vt(s) obtained from the critic network of the RL agent. This brings us to the following curriculum strategy referred to as PROCURL-VAL in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · Vt(s) · ( 1− Vt(s) )) . (5) Extension to non-binary or dense reward settings. The current forms of PROCURL-VAL in Eq. 5 and PROCURL-ENV in Eq. 4 are not directly applicable for settings where the reward is non-binary or dense. To deal with this issue in PROCURL-VAL, we replace Vt(s) values from the critic in Eq. 5 with normalized values given by V t(s) = Vt(s)−Vmin Vmax−Vmin clipped to the range [0, 1]. Here, Vmin and Vmax could be provided as input based on the environment’s reward function; alternatively we can dynamically set Vmin and Vmax during the training process by taking min-max values of the critic for states Sinit at step t. To deal with this issue in PROCURL-ENV, we replace PoSt(s) values from the rollouts in Eq. 4 with normalized values V t(s) as above. Algorithm 2 in the appendix provides a complete pseudo-code for the RL agent training with PROCURL-VAL in this general setting. 4 EXPERIMENTAL EVALUATION In this section, we evaluate the effectiveness of our curriculum strategies on a variety of domains w.r.t. the uniform performance of the trained RL agent over the training pool of tasks. Additionally, we consider the following two metrics in our evaluation: (i) total number of environment steps incurred jointly by the teacher and the student components at the end of the training process; (ii) total clock time required for the training process. Throughout all the experiments, we use PPO method from Stable-Baselines3 library for policy optimization (Schulman et al., 2017; Raffin et al., 2021). 4.1 ENVIRONMENTS We consider 5 different environments in our evaluation as described in the following paragraphs. Figure 1 provides a summary and illustration of these environments. POINTMASS-S and POINTMASS-D. Based on the work of (Klink et al., 2020b), we consider a contextual POINTMASS environment where an agent navigates a point mass through a gate of a given size towards a goal in a two-dimensional space. 
More concretely, we consider two settings: (i) POINTMASS-S environment corresponds to a goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully moves the point mass to the goal position; (ii) POINTMASS-D environment corresponds to a dense reward setting as used by (Klink et al., 2020b) where the reward values decay in a squared exponential manner with increasing distance to the goal. Here, the contextual variable c ∈ R3 controls the position of the gate (C-GatePosition), the width of the gate (C-GateWidth), and the friction coefficient of the ground (C-Friction). We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks (here, each task corresponds to a different contextual variable). BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In our BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration; the environment is episodic with goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully transforms the task’s initial grid into the task’s final grid. Here, the contextual variable is discrete where each task can be considered as a discrete context. We construct the training pool of tasks by sampling 24000 tasks; additional details are provided in the appendix. BALLCATCHING. This environment is same as used in the work of (Klink et al., 2020b); here, an agent needs to direct a robot to catch a ball thrown towards it. The reward function is sparse and non-binary, only rewarding the robot when it catches the ball and penalizing it for excessive movements. The contextual vector c ∈ R3 captures the distance to the robot from which the ball is thrown and its goal position in a plane that intersects the base of the robot. We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks. ANTGOAL. This environment is adapted from the original MuJoCo ANT environment (Todorov et al., 2012). In our adaptation, we additionally have a goal on a flat 2D surface and an agent is rewarded for moving an ant robot towards the goal location. This goal-based reward term replaces the original reward term of making the ant move forward; also, this reward term increases exponentially when the ant moves closer to the goal location. We keep the other reward terms such as control and contact costs similar to the original MuJoCo ANT environment. The environment is episodic with a length of 200 steps. The goal location essentially serves as a contextual variable in R2. We construct the training pool of tasks by uniformly sampling 50 goal locations from a circle around the ant. 4.2 CURRICULUM STRATEGIES EVALUATED Variants of our curriculum strategy. We consider the curriculum strategies PROCURL-VAL and PROCURL-ENV from Section 3.3. Since PROCURL-ENV uses policy rollouts to estimate PoSt(s) in Eq. 4, it requires environment steps for selecting tasks in addition to environment steps for training. 
To compare PROCURL-VAL and PROCURL-ENV in terms of trade-off between performance and sample efficiency, we introduce a variant PROCURL-ENVX where x controls the budget of the total number of steps used for estimation and training. In Figure 3, variants with x ∈ {2, 4} refer to a total budget of about x million environment steps when training comprises of 1 million steps. State-of-the-art baselines. SPDL (Klink et al., 2020b) and SPACE (Eimer et al., 2021) are state-ofthe-art curriculum strategies for contextual RL. We adapt the implementation of an improved version of SPDL, presented in (Klink et al., 2021), to work with a discrete pool of tasks. We also introduce a variant of SPACE, namely SPACE-ALT, by adapting the implementation of (Eimer et al., 2021) to sample the next training task as P [ s (0) t = s ] ∝ exp ( β · ( Vt(s)− Vt−1(s) )) . Prototypical baselines. IID strategy randomly samples the next task from the pool; note that IID serves as a competitive baseline since we consider the uniform performance objective. We introduce two additional variants of PROCURL-ENV, namely EASY and HARD, to understand the importance of the two terms PoSt(s) and ( 1 − PoSt(s) ) in Eq. 4. EASY samples tasks as P [ s (0) t = s ] ∝ exp ( β·PoSt(s) ) , and HARD samples tasks as P [ s (0) t = s ] ∝ exp ( β· ( 1−PoSt(s) )) . 4.3 RESULTS Convergence behavior and curriculum plots. As shown in Figure 2, the RL agents trained using the variants of our curriculum strategy, PROCURL-ENV and PROCURL-VAL, either match or outperform the agents trained with state-of-the-art and prototypical baselines in all the environments. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. We provide further details in the appendix. Metrics comparison. In Figure 3, we compare curriculum strategies considered in our experiments w.r.t. different metrics. PROCURL-VAL has similar sample complexity as state-of-the-art baselines since it does not require additional environment steps for the teacher component. PROCURL-VAL performs better compared to SPDL and SPACE in terms of computational complexity. The effect of that is more evident as the pool size increases. The reason is that PROCURL-VAL only requires forward-pass operation on the critic-model to obtain value estimates for each task in the pool. SPDL and SPACE not only require the same forward-pass operations, but SPDL does an additional optimization step, and SPACE requires a task ordering step. In terms of agent’s performance, our curriculum strategies exceed or match these baselines at different training segments. Even though PROCURL-ENV consistently surpasses all the other variants in terms of performance, its teacher component requires a lot of additional environment steps. Regarding the prototypical baselines in Figure 3, we make the following observations: (a) IID is a strong baseline in terms of sample and computational efficiency, however, its performance tends to be unstable in POINTMASS-S environment because of high randomness; (b) EASY performs well in POINTMASS-S because of the presence of easy tasks in the task space of this environment, but, performs quite poorly in BASICKAREL; (c) HARD consistently fails in both the environments. 5 CONCLUDING DISCUSSIONS We proposed a novel curriculum strategy for deep RL agents inspired by the ZPD concept. 
We mathematically derived our strategy from basic principles and empirically demonstrated its effectiveness in a variety of complex domains. Here, we discuss a few limitations of our work and outline a plan on how to address them in future work. First, we provided theoretical justifications of our proposed curriculum using simple learner models; it would be interesting also to provide a rigorous analysis of how our curriculum strategy improves the convergence rates of (deep) RL agents. Second, our experimental results show that different variants of our proposed curriculum provide an inherent trade-off between runtime and performance; it would be interesting to systematically study these variants to obtain a more effective curriculum strategy across different metrics. Third, it would be interesting to extend our curriculum strategy to high-dimensional sparse reward environments; in particular, our curriculum strategy requires estimating the probability of success of all tasks in the pool when sampling a new task which becomes challenging in high dimensional context space. A TABLE OF CONTENTS In this section, we give a brief description of the content provided in the appendices of the paper. • Appendix B provides proofs for Theorems 1 and 2. (Section 3.2) • Appendix C provides additional details and results for experimental evaluation. (Section 4) B THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY – PROOFS (SECTION 3.2) B.1 PROOF OF THEOREM 1 Proof. Let s(0)t = s ∈ Sinit, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = θt+1[s]− θt[s] = α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]). For the abstract learner model defined in Section 3.2.1, we have PoSθ(s) = V πθ (s) = θ[s], for any s ∈ Sinit. Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : θ∗[s] = p∗, θt[s] = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s])] = Es∼Unif(Dt(p∗,p)) [α · θt[s] · (θ∗[s]− θt[s]) + β · (1− θt[s]) · (θ∗[s]− θt[s])] = α · p · (p∗ − p) + β · (1− p) · (p∗ − p) = α · p · p∗ − α · p2 + β · p∗ − β · p− β · p · p∗ + β · p2. We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = α · p∗ − 2 · α · p− β − β · p∗ + 2 · β · p ∂Ct(p ∗, p) ∂p∗ = α− β ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 2 · (α+ β) ≤ 0. Noting that ∂Ct∂pL = 0 when p = αp∗−βp∗−β 2(α−β) completes the proof. B.2 PROOF OF THEOREM 2 Proof. For the contextual bandit setting described in Section 3.2.2, the REINFORCE learner’s update rule reduces to the following: θt+1 ← θt+ηt·1{s(1)t = g}· [ ∇θ log πθ(a(0)t |s(0)t ) ] θ=θt . In particular, for s(0)t = s and a (0) t = a1, we update: θt+1[s, a1] ← θt[s, a1] + ηt · 1{s(1)t = g} · (1− πθt(a1|s)) θt+1[s, a2] ← θt[s, a2]− ηt · 1{s(1)t = g} · (1− πθt(a1|s)) and we set θt+1[s, ·]← θt[s, ·] when s(0)t ̸= s or a(0)t ̸= a1. Let s(0)t = s, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = ∥θ∗[s, ·]− θt[s, ·]∥1 − ∥θ∗[s, ·]− θt+1[s, ·]∥1 = {θ∗[s, a1]− θt[s, a1] + θt[s, a2]− θ∗[s, a2]} − {θ∗[s, a1]− θt+1[s, a1] + θt+1[s, a2]− θ∗[s, a2]} = θt+1[s, a1]− θt[s, a1] + θt[s, a2]− θt+1[s, a2] = 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)). For the contextual bandit setting, the probability of success is given by PoSθ(s) = V πθ (s) = prand(s) · πθ(a1|s),∀s ∈ S. We assume that ∃ θ∗ such that πθ∗(a1|s) → 1; here, πθ∗ is the target policy. 
With the above definition, the probability of success scores for any task associated with the starting state s ∈ S w.r.t. the target and agent’s current policies (at any step t) are respectively given by PoS∗(s) = PoSθ∗(s) = prand(s) and PoSt(s) = PoSθt(s) = prand(s) · πθt(a1|s). Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : prand(s) = p∗, prand(s) · πθt(a1|s) = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [ 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)) ] = Es∼Unif(Dt(p∗,p)) [2 · ηt · prand(s) · πθt(a1|s) · (1− πθt(a1|s))] = 2 · ηt · p · ( 1− p p∗ ) . We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = 2 · ηt · ( 1− 2p p∗ ) ∂Ct(p ∗, p) ∂p∗ = 2 · ηt · ( p p∗ )2 ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 4 · ηt p∗ ≤ 0. Noting that ∂Ct∂pL = 0 when p = p∗ 2 completes the proof. C EXPERIMENTAL EVALUATION – ADDITIONAL DETAILS (SECTION 4) C.1 ENVIRONMENTS BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In the BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by the action space A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration. It consists of an avatar, walls, markers, and empty grid cells, and each element has a specific location in the grid. The avatar is characterized by its current location and orientation. Its orientation can be any direction {North,East,South,West}, and its location can be any grid cell, except from grid cells where a wall is located. The state space S of BASICKAREL is any possible configuration of the avatar, walls, and markers in a pair of grids. The avatar can move around the grid and is directed via the basic Karel commands, i.e., the action space A. While the avatar moves, if it hits a wall or the grid boundary, it “crashes” and the episode terminates. If pickMarker is selected when no marker is present, the avatar “crashes” and the program ends. Likewise, if the putMarker action is taken and a marker is already present, the avatar “crashes” and the program terminates. The finish action indicates the end of the sequence of actions, i.e., the episode ends after encountering this action. To successfully solve a BASICKAREL task, the sequence of actions must end with a finish, and there should be no termination via “crashes”. Based on this environment, we created a multi-task dataset that consists of 24000 training tasks and 2400 test tasks. All the generated tasks have a grid size of 4× 4. C.2 EVALUATION SETUP Hyperparameters of PPO method. We use the PPO method from Stable-Baselines3 library with a basic MLP policy for all the conducted experiments (Schulman et al., 2017; Raffin et al., 2021). For the POINTMASS-S, POINTMASS-D, and BALLCATCHING environments, the MLP policy has a shared layer with 64 units and a second layer with separate 64 units for the policy and 64 units for the value function. For the BASICKAREL environment, we use two separate layers of size [512, 256] for the policy network and two layers of size [256, 128] for the value function network. 
For the ANTGOAL environment, we use two separate layers of size [512, 512] for the policy network and two layers of size [512, 512] for the value function network. For all the experiments, ReLU is the chosen activation function. In Figure 6, we report the PPO hyperparameters used in the experiments. For each environment, all the hyperparameters are consistent across all the different curriculum strategies. Compute resources. All the experiments were conducted on a cluster of machines with CPUs of model Intel Xeon Gold 6134M CPU @ 3.20GHz. C.3 CURRICULUM STRATEGIES EVALUATED Variants of the curriculum strategy. Algorithm 2 provides a complete pseudo-code for the RL agent using PPO method when trained with PROCURL-VAL in the general setting of non-binary or dense rewards (see Section 3.3). In Eq. 1 and Algorithm 1, we defined t at an episodic level; however, in Algorithm 2, t denotes an environment step (in the context of the PPO method). For PROCURL-ENV, in line 24 of Algorithm 2, we estimate the probability of success for all the tasks using the additional rollouts obtained by executing the current policy inM. Hyperparameters of curriculum strategies. In Figure 7, we report the hyperparameters of each curriculum strategy used in the experiments (for each environment). Below, we provide a short description of these hyperparameters: 1. β parameter controls the stochasticity of the softmax selection. 2. Npos parameter controls the frequency at which V t is updated. For PROCURL-ENV, we set Npos higher than Nsteps since obtaining rollouts to update V t(s) is expensive. For all the other curriculum strategies, we set Npos = Nsteps. For SPACE, Npos controls how frequently the current task dataset is updated based on their curriculum. For SPDL, Npos controls how often we perform the optimization step to update the distribution for selecting tasks. 3. crollouts determines the number of additional rollouts required to compute the probability of success score for each task (only for PROCURL-ENV). 4. {Vmin, Vmax} are used in the environments with non-binary or dense rewards to obtain the normalized values V (s) (see Section 3.3). In Figure 7, {Vmin,t, Vmax,t} denote the min-max values of the critic for states Sinit at step t. 5. η and κ parameters as used in SPACE (Eimer et al., 2021). 6. VLB performance threshold as used in SPDL (Klink et al., 2021). Algorithm 2 RL agent using PPO method when trained with PROCURL-VAL in the general setting 1: Input: RL algorithm PPO, rollout buffer D 2: Hyperparameters: policy update frequency Nsteps, number of epochs Nepochs, number of mini- batches Nbatch, parameter β, Vmin, and Vmax 3: Initialization: randomly initialize policy π1 and critic V1; set normalized probability of success scores V 1(s) = 0 and PoS∗(s) = 1, ∀s ∈ Sinit 4: for t = 1, . . . , T do 5: // add an environment step to the buffer 6: observe the state st, and select the action at ∼ πt(st) 7: execute the action at in the environment 8: observe reward rt, next state st+1, and done signal dt+1 to indicate whether st+1 is terminal 9: store (st, at, rt, st+1, dt+1) in the rollout buffer D 10: // choose new task when the current task/episode ends 11: if dt+1 = true then 12: reset the environment state 13: sample next task st+1 from P [ st+1 = s ] ∝ exp ( β · V t(s) · (1− V t(s)) ) 14: // policy and V t(s) update 15: if t%Nsteps = 0 then 16: set π′ ← πt and V ′ ← Vt 17: for e = 1, . . . , Nepochs do 18: for b = 1, . . . 
, Nbatch do 19: sample b-th minibatch of Nsteps/Nbatch transitions B = {(s, a, r, s′, d)} from D 20: update policy and critic using PPO algorithm π′, V ′ ← PPO(π′, V ′, B) 21: set πt+1 ← π′ and Vt+1 ← V ′ 22: empty the rollout buffer D 23: // normalization for the environments with non-binary or dense rewards 24: update V t+1(s)← Vt+1(s)−VminVmax−Vmin , ∀s ∈ Sinit using forward passes on critic 25: else 26: maintain the previous values πt+1 ← πt, Vt+1 ← Vt, and V t+1 ← V t 27: Output: policy πT C.4 RESULTS Convergence behavior. In Figure 8, we report the performance of the trained models in the training set and a test set for comparison purposes. For POINTMASS-S, we constructed a separate test set of 100 tasks by uniformly picking tasks from the task space. For BASICKAREL, we have a train and test dataset of 24000 and 2400 tasks, respectively. Curriculum plots. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. The increasing trend in Figure 4a corresponds to a preference shift towards tasks with the gate positioned closer to the edges; the decreasing trend in Figure 4b corresponds to a preference shift towards tasks with narrower gates. For BASICKAREL, the increasing trends in Figures 5a and 5b correspond to a preference towards tasks with longer solution trajectories and tasks requiring a marker to be picked or put, respectively. In Figures 5c and 5d, tasks with a distractor marker (C-DistractorMarker) and tasks with more walls (C-Walls) are increasingly selected while training. In Figure 9, we show illustrative tasks of BASICKAREL used during the training process at different steps for PROCURL-VAL. Ablation and robustness experiments. We conduct additional experiments to evaluate the robustness of PROCURL-VAL w.r.t. different values of β and different ϵ-level noise in Vt(s) values. The results are reported in Figure 10. From the reported results, we note that picking a value for β somewhere between 10 to 30 leads to competitive performance, and PROCURL-VAL is robust even for noise levels up to ϵ = 0.2. C.5 ADDITIONAL RESULTS AND DISCUSSION PROCURL-ENVX vs. PROCURL-VAL. To achieve the constrained budget of evaluation steps in PROCURL-ENVx (with x ∈ {2, 4}), we reduce the frequency of updating PoSt since this is the most expensive operation for PROCURL-ENV requiring additional rollouts for each task. On the other hand, PROCURL-VAL updates PoSt by using the values obtained from forward-pass on the critic model – this update happens whenever the critic model is updated (every 2048 training steps for BASICKAREL). This higher frequency of updating PoSt in PROCURL-VAL is why it is slower than PROCURL-ENVx (with x ∈ {2, 4}) for BASICKAREL. Note that the relative frequency of updates for POINTMASS is different in comparison to BASICKAREL because of very different pool sizes. Hence, the behavior in total clock times is different. γ2/γ1 ablation. We conduct an additional ablation study on the form of our curriculum objective presented in Eq. 1. More specifically, we consider the following generalized variant of Eq. 1 with parameters γ1 and γ2: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( γ1 · PoS∗(s)− γ2 · PoSt(s) )) (6) In our experiments, we consider the following range of γ2/γ1 ∈ {0.6, 0.8, 1.0, 1.2, 1.4}. Our default curriculum strategy in Eq. 1 essentially corresponds to γ2/γ1 = 1.0. 
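For completeness, the snippet below sketches the sampling step that these γ1/γ2 variants modify: min-max normalization of the critic values (line 24 of Algorithm 2) followed by the softmax draw of the next starting state (line 13). This is an illustrative NumPy sketch only; a random vector stands in for the critic, and the helper names, β value, and small tolerance constant are our own choices rather than part of the released implementation.

```python
# Illustrative sketch of the PROCURL-VAL task-sampling step from Algorithm 2.
import numpy as np

rng = np.random.default_rng(1)
beta = 20.0
critic_values = rng.normal(loc=2.0, scale=1.5, size=50)   # stand-in for V_t over S_init

def normalize(values, v_min=None, v_max=None):
    """Min-max normalize critic values to [0, 1]; bounds default to dynamic min/max (line 24)."""
    v_min = values.min() if v_min is None else v_min
    v_max = values.max() if v_max is None else v_max
    return np.clip((values - v_min) / (v_max - v_min + 1e-8), 0.0, 1.0)

def sample_task(values, gamma1=1.0, gamma2=1.0, beta=beta):
    """Draw a starting-state index with P proportional to exp(beta * Vbar * (gamma1 - gamma2 * Vbar))."""
    v_bar = normalize(values)
    logits = beta * v_bar * (gamma1 - gamma2 * v_bar)      # gamma1 = gamma2 = 1 recovers line 13
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(values), p=probs)

next_task = sample_task(critic_values)
print("sampled starting-state index:", next_task)
```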
The following table presents the results for the environments POINTMASS-S and BASICKAREL.
1. What is the main contribution of the paper regarding curriculum learning in reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty, empirical evaluation, writing, literature, methodology, theoretical justification, and reproducibility? 3. Do you have any concerns or questions regarding the paper's content, such as the origin of Equation 1, the link between Equations 1 and 2, the smoothness condition in [1], the meaning of θ and λ, and the estimation error of Vt?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper mainly considers curriculum learning in goal-based reinforcement learning. A fixed set of curriculum tasks is defined by a set of initial states, and the algorithm can sample tasks from this set at every step. The final goal is to maximize performance with respect to a uniform distribution over tasks (which is known). Specifically, motivated by the Zone of Proximal Development (ZPD) in educational psychology, the authors propose a curriculum strategy that chooses tasks of medium difficulty. The difficulty is measured by a negative quadratic term of the value function. The authors then justify a training objective and an update rule w.r.t. the agent parameters by analyzing two specific settings. Besides, the authors provide solutions to some practical issues. The experiments are based on maze environments and robotic control environments and show leading performance against state-of-the-art curriculum algorithms. Strengths And Weaknesses Pros: Novelty: the authors provide a new self-paced curriculum learning objective. Empirical evaluation: this paper has a nice experimental part, with four representative environments, multiple baselines, and ablation studies in terms of β and noise. Writing: this paper has a good flow in presenting the motivation and the solution. However, I have several concerns: Literature: the authors should discuss and highlight the difference between ProCuRL and other related algorithms, like SPCL and SPACE, since the main part of the related work only introduces other works. Methodology: It is unclear to me where Eq. 1 comes from. To my understanding, PoS∗ is a constant once Sinit is given. Then, the maximum is attained when PoSt = 0.5 · PoS∗. So Eq. 1 implicitly assumes that achieving half of the value function corresponds to medium difficulty, which is not obvious in the experimental environments. In this case, what if we maximize PoSt · (γ1 · PoS∗ − γ2 · PoSt) with different γ1 and γ2? The link between Eq. 1 and Eq. 2 is also unclear to me. Does it mean that when following the curriculum strategy in Eq. 1, the objective in Eq. 2 is maximized? It is not obvious how the smoothness condition in [1] can be used. Eq. 7 in [1] states the smoothness condition; however, the λ in [1] does not have the same meaning as θ in this paper, since intuitively, θ approaching θ∗ does not mean that V πθ approaches V πθ∗. The s(0)t on the left-hand side of Eq. 1, Eq. 3, Eq. 4, and Eq. 5 does not match line 13 of Algorithm 2. It may be a typo, but it causes some confusion. θt is a learned parameter in PPO, so Vt might introduce an estimation error during training. For example, if Vt has high variance, the curriculum strategy will fail. Theoretical justification: Just as stated in the conclusion, the justification is far from the implementation, which makes the paper less attractive. [1] Interactive Teaching Algorithms for Inverse Reinforcement Learning. Clarity, Quality, Novelty And Reproducibility Clarity: the presentation of this paper is clear; however, some points are confusing (see Strengths And Weaknesses). Quality: this paper is complete. I can see the effort of the authors in presenting this work. Novelty: if the curriculum strategy is just the one shown in Eq. 1, the novelty is inadequate, since the objective simply maximizes a negative quadratic term of the value function. Reproducibility: the authors provide the source code.
ICLR
Title Proximal Curriculum for Reinforcement Learning Agents Abstract We consider the problem of curriculum design for reinforcement learning (RL) agents in contextual multi-task settings. Existing techniques on automatic curriculum design typically have limited theoretical underpinnings or require domainspecific hyperparameter tuning. To tackle these limitations, we design our curriculum strategy, PROCURL, inspired by the pedagogical concept of Zone of Proximal Development (ZPD). We mathematically derive PROCURL by formalizing the ZPD concept, which suggests that learning progress is maximized when picking tasks that are neither too hard nor too easy for the learner. We also present a practical variant of PROCURL that can be directly integrated with deep RL frameworks with minimal hyperparameter tuning. Experimental results on a variety of domains demonstrate the effectiveness of our curriculum strategy over state-ofthe-art baselines in accelerating the training process of deep RL agents. 1 INTRODUCTION Recent advances in deep reinforcement learning (RL) have demonstrated impressive performance in games, continuous control, and robotics (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2017; Levine et al., 2016). Despite these remarkable successes, a broader application of RL in real-world domains is often very limited. For example, training RL agents in contextual multi-task settings and goal-based tasks with sparse rewards still remains challenging (Hallak et al., 2015; Kirk et al., 2021; Andrychowicz et al., 2017; Florensa et al., 2017; Riedmiller et al., 2018). Inspired by the importance of curricula in pedagogical domains, there is a growing interest in leveraging curriculum strategies when training machine learning models in challenging domains. In the supervised learning setting, such as image classification, the impact of the order of presented training examples has been studied both theoretically and empirically (Weinshall et al., 2018; Weinshall & Amir, 2018; Zhou & Bilmes, 2018; Zhou et al., 2021; Elman, 1993; Bengio et al., 2009; Zaremba & Sutskever, 2014). Recent works have also studied curriculum strategies for learners in sequentialdecision-making settings, such as imitation learning (where the agent learns from demonstrations) and RL (where the agent learns from rewards). In the imitation learning setting, recent works have proposed greedy curriculum strategies for picking the next training demonstration according to the agent’s learning progress (Kamalaruban et al., 2019; Yengera et al., 2021). In the RL setting, several curriculum strategies have been proposed to improve sample efficiency, e.g., by choosing an appropriate next starting state or goal state for the task to train on (Wöhlke et al., 2020; Florensa et al., 2017; 2018; Racanière et al., 2020; Riedmiller et al., 2018; Klink et al., 2020a;b; Eimer et al., 2021). Despite extensive research on curriculum design for the RL setting, existing techniques typically have limited theoretical underpinnings or require domain-specific hyperparameter tuning. In this paper, we are interested in developing a principled curriculum strategy for the RL setting that is broadly applicable to many domains with minimal tuning of hyperparameters. To this end, we rely on the Zone of Proximal Development (ZPD) concept from the educational psychology literature (Vygotsky & Cole, 1978; Chaiklin, 2003). 
The ZPD concept, when applied in terms of learning progress, suggests that progress is maximized when the learner is presented with tasks that lie in the proximal zone, i.e., tasks that are neither too hard nor too easy. To formally capture this idea of proximal zone, we use a notion of probability of success score PoSπt(s) w.r.t. the learner’s current policy πt for any given task s. We mathematically derive an intuitive curriculum strategy based on a learner update rule that captures the ZPD concept in terms of the learning progress and reflects characteristics of the policy gradient style update. Our main results and contributions are as follows: I. We propose a curriculum strategy, PROCURL, inspired by the ZPD concept. PROCURL formalizes the idea of picking tasks that are neither too hard nor too easy for the learner in the form of selection strategy argmaxs PoSπt(s) · ( PoS∗(s) − PoSπt(s) ) , where PoS∗(s) corresponds to the probability of success score w.r.t. an optimal policy (Section 3.1). II. We derive PROCURL under two specific learning settings where we analyze the effect of picking a task on the agent’s learning progress (Section 3.2). III. We present a practical variant of PROCURL, namely PROCURL-VAL, that can be easily integrated with deep RL frameworks with minimal hyperparameter tuning (Section 3.3). IV. We empirically demonstrate the effectiveness of PROCURL-VAL over state-of-the-art baselines in accelerating the training process of deep RL agents in a variety of environments (Section 4). 1.1 RELATED WORK Curriculum strategies based on domain knowledge. Early works on curriculum design for supervised learning setting typically order the training examples in increasing difficulty (Elman, 1993; Bengio et al., 2009; Schmidhuber, 2013; Zaremba & Sutskever, 2014). This easy-to-hard design principle has been utilized in the hand-crafted curriculum approaches for the RL setting (Asada et al., 1996; Wu & Tian, 2016). Moreover, there has been recent works on designing greedy curriculum strategies for the imitation learning setting based on the iterative machine teaching framework (Liu et al., 2017; Yang et al., 2018; Zhu et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021). However, these approaches require domain-specific expert knowledge for designing difficulty measures. Curriculum strategies based on ZPD concept. In the pedagogical setting, it has been realized that effective teaching provides tasks that are neither too hard nor too easy for the human learner. This intuition of providing tasks from a particular range of difficulties is conceptualized in the ZPD concept (Vygotsky & Cole, 1978; Chaiklin, 2003; Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Zou et al., 2019). In the RL setting, several curriculum strategies that have been proposed are inherently based on the ZPD concept (Florensa et al., 2017; 2018; Wöhlke et al., 2020). A common underlying theme in both (Florensa et al., 2017) and (Florensa et al., 2018) is that they choose the next task (starting or goal state) for the learner uniformly at random from the set {s : rmin ≤ PoSπt(s) ≤ rmax}. Here, the threshold values rmin and rmax require tuning according to the learner’s progress and specific to the domain. The authors in (Wöhlke et al., 2020) propose a unified framework for the learner’s performance-based starting state curricula in RL. 
In particular, the starting state selection policy of (Wöhlke et al., 2020), P [ s (0) t = s ] ∝ G(PoSπt(s)) for some function G, accommodates existing curriculum generation methods like (Florensa et al., 2017; Graves et al., 2017). Despite promising empirical results, a conceptual formalism or theoretical underpinnings relating an RL agent’s learning progress to the ZPD concept is still missing in the aforementioned works. We address this conceptual gap in the literature by designing and analyzing a learner update rule that captures the ZPD concept in terms of the learning progress and also reflects characteristics of the policy gradient style update. Curriculum strategies based on self-paced learning (SPL). In the supervised learning setting, the curriculum strategies using the SPL concept optimize the trade-off between exposing the learner to all available training examples and selecting examples in which it currently performs well (Kumar et al., 2010; Jiang et al., 2015). In SPDL (Klink et al., 2020b;a; 2021; 2022) and SPACE (Eimer et al., 2021), the authors have adapted the concept of SPL to the RL setting by controlling the intermediate task distribution with respect to the learner’s current training progress. However, SPDL and SPACE differ in their mode of operation and the objective. SPDL considers the procedural task generation framework where tasks of appropriate difficult levels can be synthesized, as also considered in (Florensa et al., 2017; 2018)). In contrast, SPACE considers a pool-based curriculum framework for picking suitable tasks, as popular in supervised learning setting. Further, SPDL considers the objective of a targeted performance w.r.t. a target distribution (e.g., concentrated distribution on hard tasks); in contrast, SPACE considers the objective of uniform performance across a given pool of tasks. Similar to SPACE, in our work, we consider the pool-based setting with uniform performance objective. Both SPDL and SPACE serve as state-of-the-art baselines in our experimental evaluation. In terms of curriculum strategy, SPDL operates by solving an optimization problem at each step to pick a task (Klink et al., 2021); SPaCE uses a ranking induced by magnitude of differences in current/previous critic values at each step to pick a task (Eimer et al., 2021). In the appendix, we have also provided some additional information on hyperparameters for SPDL and SPaCE. Other automatic curriculum strategies. There are other approaches for automatic curriculum generation, including: (i) by formulating the curriculum design problem with the use of a meta-level Markov Decision Process (Narvekar et al., 2017; Narvekar & Stone, 2019); (ii) by learning how to generate training tasks similar to a teacher (Dendorfer et al., 2020; Such et al., 2020; Matiisen et al., 2019; Turchetta et al., 2020); (iii) by leveraging self-play as a form of curriculum generation (Sukhbaatar et al., 2018); (iv) by using the disagreement between different agents trained on the same tasks (Zhang et al., 2020); (v) by picking the starting states based on a single demonstration (Salimans & Chen, 2018; Resnick et al., 2018); and (vi) by providing agents with environment variations that are at the frontier of an agent’s capabilities, e.g., Unsupervised Environment Design methods (Dennis et al., 2020; Jiang et al., 2021; Parker-Holder et al., 2022). We refer the reader to recent surveys on curriculum design for the RL setting (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). 
2 FORMAL SETUP In this section, we formalize our problem setting based on prior work on teacher-student curriculum learning (Matiisen et al., 2019). MDP environment. We consider a learning environment defined as a Markov Decision Process (MDP)M := (S,A, T , H,R,Sinit). Here, S andA denote the state and action spaces, T : S ×S × A → [0, 1] is the transition dynamics, H is the maximum length of the episode, and R : S×A → R is the reward function. The set of initial states Sinit ⊆ S specifies a fixed pool of tasks, i.e., each starting state s ∈ Sinit corresponds to a unique task. Note that the above environment formalism is quite general enough to cover many practical settings, including the contextual multi-task MDP setting (Hallak et al., 2015).1 RL agent and training process. We consider an RL agent acting in this environment via a policy π : S × A → [0, 1] that is a mapping from a state to a probability distribution over actions. Given a task with the corresponding starting state s ∈ Sinit, the agent attempts the task via a trajectory rollout obtained by executing its policy π from s in the MDP M. The trajectory rollout is denoted as ξ = {(s(τ), a(τ), R(s(τ), a(τ)))}τ=0,1,...,h with s(0) = s and for some h ≤ H . The agent’s performance on task s is measured via the value function V π(s) := E [∑h τ=0 R(s (τ), a(τ)) ∣∣π,M, s(0) = s]. Then, the uniform performance of the agent over the pool of tasks Sinit is given by V π := Es∼Uniform(Sinit) [V π(s)]. The training process of the agent involves an interaction between two components: a student component that is responsible for policy update and a teacher component that is responsible for task selection. The interaction happens in discrete steps, indexed by t = 1, 2, . . ., and is formally described in Algorithm 1. Let πend denote the agent’s final policy at the end of training. The training objective is to ensure that the uniform performance of the policy πend is ϵ-near-optimal, i.e., (maxπ V π − V πend) ≤ ϵ. In the following two paragraphs, we discuss the student and teacher components in detail. Student component. We consider a parametric representation for the RL agent, whose current knowledge is parameterized by θ ∈ Θ ⊆ Rd and each parameter θ is mapped to a policy πθ : S×A → [0, 1]. At step t, the student component updates the knowledge parameter based on the following quantities: the current knowledge parameter θt, the task picked by the teacher component, and the rollout ξt = {(s(τ)t , a(τ)t , R(s(τ)t , a(τ)t ))}τ . Then, the updated knowledge parameter θt+1 is mapped to the agent’s policy given by πt+1 := πθt+1 . As a concrete example, the knowledge parameter of the REINFORCE agent (Sutton et al., 1999) is updated as θt+1 ← θt + ηt · ∑h−1 τ=0 G (τ) t · g(τ)t , where ηt is the learning rate, G (τ) t = ∑h τ ′=τ R(s (τ ′) t , a (τ ′) t ), and g (τ) t = [ ∇θ log πθ(a(τ)t |s(τ)t ) ] θ=θt . Teacher component. At step t, the teacher component picks a task with the corresponding starting state s(0)t for the student component to attempt via a trajectory rollout (see line 3 in Algorithm 1). The sequence of tasks (curriculum) picked by the teacher component affects the performance improvement of the policy πt. The main focus of this work is to develop a teacher component to achieve the training objective in both computational and sample efficient manner. 1In this setting, for a given set of contexts C, the pool of tasks is given by {Mc = (S,A, Tc, H,Rc,S init) : c ∈ C}. 
Our environment formalism (MDP M) covers this setting as follows: S = S × C; Sinit = S init × C; T ((s̄′, c)|(s̄, c), a) = Tc(s̄′|s̄, a) and R((s̄, c), a) = Rc(s̄, a), ∀s̄, s̄′ ∈ S, a ∈ A, c ∈ C. Algorithm 1 RL Agent Training as Interaction between Teacher-Student Components 1: Input: RL agent’s initial policy π1 2: for t = 1, 2, . . . do 3: Teacher component picks a task with the corresponding starting state s(0)t . 4: Student component attempts the task via a trajectory rollout ξt using the policy πt from s (0) t . 5: Student component updates the policy to πt+1. 6: Output: RL agent’s final policy πend ← πt+1. 3 PROXIMAL CURRICULUM STRATEGY In Section 3.1, we propose a curriculum strategy for the goal-based setting. In Section 3.2, we show that the proposed curriculum strategy can be derived from basic principles by formalizing the ZPD concept. In Section 3.3, we present our final curriculum strategy that is applicable in general settings. 3.1 CURRICULUM STRATEGY FOR THE GOAL-BASED SETTING Here, we introduce our curriculum strategy for the goal-based setting using the notion of probability of success scores. Goal-based setting. In this setting, the reward function R is goal-based, i.e., the agent gets a reward of 1 only at the goal states and 0 at other states; moreover, any action from a goal state also leads to termination. For any task with the corresponding starting state s ∈ Sinit, we say that the attempted rollout ξ succeeds in the task if the final state of ξ is a goal state. Formally, succ(ξ; s) is an indicator function whose value is 1 when the rollout ξ succeeds in the task s, and 0 otherwise. Furthermore, for an agent with policy π, we have that V π(s) := E [ succ(ξ; s) ∣∣π,M] is equal to the total probability of reaching a goal state by executing the policy π starting from s ∈ Sinit. Probability of success. We begin by assigning a probability of success score for any task with the corresponding starting state s ∈ Sinit w.r.t. any parameterized policy πθ in the MDPM. Definition 1. For any given knowledge parameter θ ∈ Θ and any starting state s ∈ Sinit, we define the probability of success score PoSθ(s) as the probability of successfully solving the task s by executing the policy πθ in the MDPM. For the goal-based settings, we have PoSθ(s) = V πθ (s). With the above definition, the probability of success score for any task s ∈ Sinit w.r.t. the agent’s current policy πt is given by PoSt(s) := PoSθt(s). Further, we define PoS ∗(s) := maxθ∈Θ PoSθ(s). Curriculum strategy. Based on the notion of probability of success scores that we defined above, we propose the following curriculum strategy: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (1) i.e., at step t, the teacher component picks a task associated with the starting state s(0)t according to Eq. 1. In the following subsection, we show that our curriculum strategy can be derived by considering simple learning settings, such as contextual bandit problems with REINFORCE agent; these derivations provide insights about the design of the curriculum strategy. In Section 3.3, we provide a detailed step-by-step discussion on how our curriculum can be applied in practice to increasingly complex settings. 3.2 THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY To derive our curriculum strategy for the goal-based setting, we additionally consider independent tasks where any task s(0)t picked from the pool Sinit at step t only affects the agent’s knowledge component corresponding to that task. 
Further, we assume that there exists a knowledge parameter θ∗ ∈ Θ such that πθ∗ ∈ argmaxπ V π , and πθ∗ is referred to as the target policy. Then, based on the work of (Weinshall et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021), we investigate the effect of picking a task s(0)t at step t on the convergence of the agent’s parameter θt towards the target parameter θ∗. Under a smoothness condition on the value function of the form |V πθ − V πθ′ | ≤ L · ∥θ − θ′∥1 ,∀θ, θ′ ∈ Θ for some L > 0, we can translate the parameter convergence (θt → θ∗) into the performance convergence (V πθt → V πθ∗ ). Thus, we define the improvement in the training objective at step t as ∆t(θt+1 ∣∣θt, s(0)t , ξt) := [∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1]. (2) In this objective, we use the ℓ1-norm because our theoretical analysis considers the independent task setting mentioned above. Given two success values p∗, p ∈ R, we define a set of feasible tasks at step t as Dt(p∗, p) := {s ∈ Sinit : PoSθ∗(s) = p∗,PoSθt(s) = p}. The set Dt(p∗, p) contains all the tasks for which the probability of success score w.r.t. the target policy is equal to the value p∗ and the probability of success score w.r.t. the agent’s current policy is equal to the value p. Further, we define the expected improvement in the training objective at step t given success values p∗ and p as follows: Ct(p ∗, p) := E s (0) t ∼Uniform(Dt(p∗,p)) E ξt|s(0)t [ ∆t(θt+1|θt, s(0)t , ξt) ] , where the outer expectation is w.r.t. the uniform distribution over the setDt(p∗, p). In the following, we analyze the above quantity for specific agent models under the independent task setting. More concretely, Theorems 1 and 2 characterize the impact of picking a task at step t on the objective in Eq. 2 with the following values: (i) task’s PoS w.r.t. the target policy πθ∗ having value p∗ and (ii) task’s PoS w.r.t. the agent’s current policy having value p. For the specific settings considered in Sections 3.2.1 and 3.2.2, Theorems 1 and 2 imply that picking tasks based on the curriculum strategy given in Eq. 1 maximizes the expected value of the objective in Eq. 2. 3.2.1 ABSTRACT AGENT WITH DIRECT PERFORMANCE PARAMETERIZATION We consider an abstract agent model with the following direct performance parameterization: for any θ ∈ Θ = [0, 1]|Sinit|, we have PoSθ(s) = θ[s],∀s ∈ Sinit.2 Under this model, the agent’s current knowledge θt at step t is encoded directly by its probability of success scores {PoSθt(s) | s ∈ Sinit}. The target knowledge parameter θ∗ is given by {PoSθ∗(s) | s ∈ Sinit}. Under the independent task setting, we design an update rule for the agent to capture the ZPD concept in terms of the learning progress (Vygotsky & Cole, 1978; Chaiklin, 2003), and to also reflect characteristics of the policy gradient style update. In particular, for s = s(0)t ∈ Sinit, we update θt+1[s] ← θt[s] + α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]), where α, β ∈ [0, 1] and α > β. For s ∈ Sinit and s ̸= s(0)t , we maintain θt+1[s]← θt[s]. Importantly, α > β implies that the agent’s current knowledge for the picked task is updated more when the agent succeeds in that task compared to the failure case. The update rule captures the following idea: when picking a task that is “too easy”, the progress in θt towards θ∗ is minimal since (θ∗[s] − θt[s]) is low; similarly, when picking a task that is “too hard”, the progress in θt towards θ∗ is minimal since β · (θ∗[s]− θt[s]) is low for β ≪ 1. 
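To illustrate this update rule before stating the formal result, the following minimal Python sketch estimates the expected one-step improvement Ct(p∗, p) for a single independent task by Monte-Carlo simulation; the simulation setup (grid of p values, sample size, seed) is our own illustrative construction and not part of the paper.

import numpy as np

def expected_improvement(p, p_star, alpha=1.0, beta=0.0, n_rollouts=100_000, rng=None):
    # Monte-Carlo estimate of C_t(p*, p): the expected one-step progress
    # ||theta* - theta_t||_1 - ||theta* - theta_{t+1}||_1 for a single
    # independent task, under the update rule above with succ ~ Bernoulli(p).
    rng = rng or np.random.default_rng(0)
    succ = rng.random(n_rollouts) < p
    step = np.where(succ, alpha, beta) * (p_star - p)
    return step.mean()

# With alpha = 1, beta = 0 and PoS*(s) = p* = 1, the expected improvement is
# approximately p * (p* - p), which peaks at p = p*/2 = 0.5.
grid = np.linspace(0.0, 1.0, 101)
estimates = [expected_improvement(p, p_star=1.0) for p in grid]
print(grid[int(np.argmax(estimates))])              # close to 0.5

With α = 1 and β = 0, the estimate recovers p · (p∗ − p), whose maximizer p = p∗/2 matches the balance point identified in the theorem below.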
The following theorem shows the differential effect of the probability of success scores p∗ and p on the expected improvement in the training objective Ct(p∗, p). Theorem 1. Consider the abstract agent with direct performance parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < αp∗−βp∗−β 2(α−β) , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > αp ∗−βp∗−β 2(α−β) , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = αp∗−βp∗−β 2(α−β) , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with α = 1 and β = 0, maxp∗,p Ct(p∗, p) is equivalent to maxp∗,p p · (p∗−p). This, in turn, implies that the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.2.2 REINFORCE AGENT WITH SOFTMAX POLICY PARAMETERIZATION We consider the REINFORCE agent model with the following softmax policy parameterization: for any θ ∈ R|S|·|A|, we parameterize the policy as πθ(a|s) ∝ exp(θ[s, a]),∀s ∈ S, a ∈ A. For 2In this setting, we abstract out the policy πθ and directly map the “parameter” θ to a vector of “performance on tasks” PoSθ . Then, we choose the parameter space as Θ = [0, 1]Sinit (where d = Sinit) and define PoSθ = θ. Thus, an update in the “parameter” θ is equivalent to an update in the “performance on tasks” PoSθ . this policy parameterization, the smoothness condition on the reward function provided in (Kamalaruban et al., 2019) can be translated to the smoothness condition on the value function. In the following, we consider a problem instance involving a pool of contextual bandit tasks (a special case of independent task setting). Consider an MDP M with g ∈ S as the goal state for all tasks, Sinit = S \ {g}, A = {a1, a2}, and H = 1. We define the reward function as follows: R(s, a) = 0,∀s ∈ S \ {g}, a ∈ A and R(g, a) = 1,∀a ∈ A. For a given probability mapping prand : S → [0, 1], we define the transition dynamics as follows: T (g|s, a1) = prand(s),∀s ∈ S; T (s|s, a1) = 1 − prand(s),∀s ∈ S; and T (s|s, a2) = 1,∀s ∈ S. Then, for the REINFORCE agent under the above setting, the following theorem shows the differential effect of p∗ and p on Ct(p∗, p): Theorem 2. Consider the REINFORCE agent with softmax policy parameterization under the independent task setting as described above. Let s(0)t be the task picked at step t with PoSθt(s (0) t ) = p and PoSθ∗(s (0) t ) = p ∗. Then, we have: (i) ∂Ct(p ∗,p) ∂p > 0, for p < p∗ 2 , (ii) ∂Ct(p ∗,p) ∂p < 0, for p > p∗ 2 , (iii) ∂Ct(p ∗,p) ∂p = 0, for p = p∗ 2 , and (iv) ∂Ct(p ∗,p) ∂p∗ > 0, ∀p∗ ∈ [0, 1]. For the above setting with prand(s) = 1,∀s ∈ S , maxp Ct(1, p) is equivalent to maxp p · (1 − p). This means that for the case of PoS∗(s) = 1,∀s ∈ Sinit, the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.3 CURRICULUM STRATEGY FOR GENERAL SETTINGS Next, we discuss various practical issues in directly applying the curriculum strategy in Eq. 1 for general settings, and introduce several design choices to address these issues. Softmax selection. When training deep RL agents, it is typically useful to allow some stochasticity in the selected batch of tasks. Moreoever, the argmax selection in Eq. 1 is brittle in the presence of any approximation errors in computing PoS(·) values. To tackle this issue, we replace argmax selection in Eq. 
1 with softmax selection and sample according to the following distribution: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (3) where β is a hyperparameter. Here, PoSt(s) values are computed for each s ∈ Sinit using rollouts obtained via executing the policy πt inM; PoS∗(s) values are assumed to be provided as input. PoS∗(·) is not known. Since the target policy πθ∗ is unknown, it is not possible to compute the PoS∗(s) values without additional domain knowledge. In our experiments, we resort to simply setting PoS∗(s) = 1,∀s ∈ Sinit in Eq. 3 – the rationale behind this choice is that we expect the ideal πθ∗ to succeed in all the tasks in the pool. However, the above choice could lead to suboptimal strategy for specific scenarios, e.g., all PoS∗(s) are below 0.5. As an alternative, one could estimate PoS∗(s) during the training process, e.g., using top K% rollouts obtained by executing the current policy πt starting from s. This brings us to the following curriculum strategy referred to as PROCURL-ENV in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( 1− PoSt(s) )) . (4) Computing PoSt(·) is expensive. It is expensive (sample inefficient) to estimate PoSt(s) over the space Sinit using rollouts of the policy πt. To tackle this issue, we replace PoSt(s) with values Vt(s) obtained from the critic network of the RL agent. This brings us to the following curriculum strategy referred to as PROCURL-VAL in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · Vt(s) · ( 1− Vt(s) )) . (5) Extension to non-binary or dense reward settings. The current forms of PROCURL-VAL in Eq. 5 and PROCURL-ENV in Eq. 4 are not directly applicable for settings where the reward is non-binary or dense. To deal with this issue in PROCURL-VAL, we replace Vt(s) values from the critic in Eq. 5 with normalized values given by V t(s) = Vt(s)−Vmin Vmax−Vmin clipped to the range [0, 1]. Here, Vmin and Vmax could be provided as input based on the environment’s reward function; alternatively we can dynamically set Vmin and Vmax during the training process by taking min-max values of the critic for states Sinit at step t. To deal with this issue in PROCURL-ENV, we replace PoSt(s) values from the rollouts in Eq. 4 with normalized values V t(s) as above. Algorithm 2 in the appendix provides a complete pseudo-code for the RL agent training with PROCURL-VAL in this general setting. 4 EXPERIMENTAL EVALUATION In this section, we evaluate the effectiveness of our curriculum strategies on a variety of domains w.r.t. the uniform performance of the trained RL agent over the training pool of tasks. Additionally, we consider the following two metrics in our evaluation: (i) total number of environment steps incurred jointly by the teacher and the student components at the end of the training process; (ii) total clock time required for the training process. Throughout all the experiments, we use PPO method from Stable-Baselines3 library for policy optimization (Schulman et al., 2017; Raffin et al., 2021). 4.1 ENVIRONMENTS We consider 5 different environments in our evaluation as described in the following paragraphs. Figure 1 provides a summary and illustration of these environments. POINTMASS-S and POINTMASS-D. Based on the work of (Klink et al., 2020b), we consider a contextual POINTMASS environment where an agent navigates a point mass through a gate of a given size towards a goal in a two-dimensional space. 
More concretely, we consider two settings: (i) POINTMASS-S environment corresponds to a goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully moves the point mass to the goal position; (ii) POINTMASS-D environment corresponds to a dense reward setting as used by (Klink et al., 2020b) where the reward values decay in a squared exponential manner with increasing distance to the goal. Here, the contextual variable c ∈ R3 controls the position of the gate (C-GatePosition), the width of the gate (C-GateWidth), and the friction coefficient of the ground (C-Friction). We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks (here, each task corresponds to a different contextual variable). BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In our BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration; the environment is episodic with goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully transforms the task’s initial grid into the task’s final grid. Here, the contextual variable is discrete where each task can be considered as a discrete context. We construct the training pool of tasks by sampling 24000 tasks; additional details are provided in the appendix. BALLCATCHING. This environment is same as used in the work of (Klink et al., 2020b); here, an agent needs to direct a robot to catch a ball thrown towards it. The reward function is sparse and non-binary, only rewarding the robot when it catches the ball and penalizing it for excessive movements. The contextual vector c ∈ R3 captures the distance to the robot from which the ball is thrown and its goal position in a plane that intersects the base of the robot. We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks. ANTGOAL. This environment is adapted from the original MuJoCo ANT environment (Todorov et al., 2012). In our adaptation, we additionally have a goal on a flat 2D surface and an agent is rewarded for moving an ant robot towards the goal location. This goal-based reward term replaces the original reward term of making the ant move forward; also, this reward term increases exponentially when the ant moves closer to the goal location. We keep the other reward terms such as control and contact costs similar to the original MuJoCo ANT environment. The environment is episodic with a length of 200 steps. The goal location essentially serves as a contextual variable in R2. We construct the training pool of tasks by uniformly sampling 50 goal locations from a circle around the ant. 4.2 CURRICULUM STRATEGIES EVALUATED Variants of our curriculum strategy. We consider the curriculum strategies PROCURL-VAL and PROCURL-ENV from Section 3.3. Since PROCURL-ENV uses policy rollouts to estimate PoSt(s) in Eq. 4, it requires environment steps for selecting tasks in addition to environment steps for training. 
To compare PROCURL-VAL and PROCURL-ENV in terms of trade-off between performance and sample efficiency, we introduce a variant PROCURL-ENVX where x controls the budget of the total number of steps used for estimation and training. In Figure 3, variants with x ∈ {2, 4} refer to a total budget of about x million environment steps when training comprises of 1 million steps. State-of-the-art baselines. SPDL (Klink et al., 2020b) and SPACE (Eimer et al., 2021) are state-ofthe-art curriculum strategies for contextual RL. We adapt the implementation of an improved version of SPDL, presented in (Klink et al., 2021), to work with a discrete pool of tasks. We also introduce a variant of SPACE, namely SPACE-ALT, by adapting the implementation of (Eimer et al., 2021) to sample the next training task as P [ s (0) t = s ] ∝ exp ( β · ( Vt(s)− Vt−1(s) )) . Prototypical baselines. IID strategy randomly samples the next task from the pool; note that IID serves as a competitive baseline since we consider the uniform performance objective. We introduce two additional variants of PROCURL-ENV, namely EASY and HARD, to understand the importance of the two terms PoSt(s) and ( 1 − PoSt(s) ) in Eq. 4. EASY samples tasks as P [ s (0) t = s ] ∝ exp ( β·PoSt(s) ) , and HARD samples tasks as P [ s (0) t = s ] ∝ exp ( β· ( 1−PoSt(s) )) . 4.3 RESULTS Convergence behavior and curriculum plots. As shown in Figure 2, the RL agents trained using the variants of our curriculum strategy, PROCURL-ENV and PROCURL-VAL, either match or outperform the agents trained with state-of-the-art and prototypical baselines in all the environments. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. We provide further details in the appendix. Metrics comparison. In Figure 3, we compare curriculum strategies considered in our experiments w.r.t. different metrics. PROCURL-VAL has similar sample complexity as state-of-the-art baselines since it does not require additional environment steps for the teacher component. PROCURL-VAL performs better compared to SPDL and SPACE in terms of computational complexity. The effect of that is more evident as the pool size increases. The reason is that PROCURL-VAL only requires forward-pass operation on the critic-model to obtain value estimates for each task in the pool. SPDL and SPACE not only require the same forward-pass operations, but SPDL does an additional optimization step, and SPACE requires a task ordering step. In terms of agent’s performance, our curriculum strategies exceed or match these baselines at different training segments. Even though PROCURL-ENV consistently surpasses all the other variants in terms of performance, its teacher component requires a lot of additional environment steps. Regarding the prototypical baselines in Figure 3, we make the following observations: (a) IID is a strong baseline in terms of sample and computational efficiency, however, its performance tends to be unstable in POINTMASS-S environment because of high randomness; (b) EASY performs well in POINTMASS-S because of the presence of easy tasks in the task space of this environment, but, performs quite poorly in BASICKAREL; (c) HARD consistently fails in both the environments. 5 CONCLUDING DISCUSSIONS We proposed a novel curriculum strategy for deep RL agents inspired by the ZPD concept. 
We mathematically derived our strategy from basic principles and empirically demonstrated its effectiveness in a variety of complex domains. Here, we discuss a few limitations of our work and outline a plan on how to address them in future work. First, we provided theoretical justifications of our proposed curriculum using simple learner models; it would be interesting also to provide a rigorous analysis of how our curriculum strategy improves the convergence rates of (deep) RL agents. Second, our experimental results show that different variants of our proposed curriculum provide an inherent trade-off between runtime and performance; it would be interesting to systematically study these variants to obtain a more effective curriculum strategy across different metrics. Third, it would be interesting to extend our curriculum strategy to high-dimensional sparse reward environments; in particular, our curriculum strategy requires estimating the probability of success of all tasks in the pool when sampling a new task which becomes challenging in high dimensional context space. A TABLE OF CONTENTS In this section, we give a brief description of the content provided in the appendices of the paper. • Appendix B provides proofs for Theorems 1 and 2. (Section 3.2) • Appendix C provides additional details and results for experimental evaluation. (Section 4) B THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY – PROOFS (SECTION 3.2) B.1 PROOF OF THEOREM 1 Proof. Let s(0)t = s ∈ Sinit, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = θt+1[s]− θt[s] = α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]). For the abstract learner model defined in Section 3.2.1, we have PoSθ(s) = V πθ (s) = θ[s], for any s ∈ Sinit. Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : θ∗[s] = p∗, θt[s] = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s])] = Es∼Unif(Dt(p∗,p)) [α · θt[s] · (θ∗[s]− θt[s]) + β · (1− θt[s]) · (θ∗[s]− θt[s])] = α · p · (p∗ − p) + β · (1− p) · (p∗ − p) = α · p · p∗ − α · p2 + β · p∗ − β · p− β · p · p∗ + β · p2. We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = α · p∗ − 2 · α · p− β − β · p∗ + 2 · β · p ∂Ct(p ∗, p) ∂p∗ = α− β ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 2 · (α+ β) ≤ 0. Noting that ∂Ct∂pL = 0 when p = αp∗−βp∗−β 2(α−β) completes the proof. B.2 PROOF OF THEOREM 2 Proof. For the contextual bandit setting described in Section 3.2.2, the REINFORCE learner’s update rule reduces to the following: θt+1 ← θt+ηt·1{s(1)t = g}· [ ∇θ log πθ(a(0)t |s(0)t ) ] θ=θt . In particular, for s(0)t = s and a (0) t = a1, we update: θt+1[s, a1] ← θt[s, a1] + ηt · 1{s(1)t = g} · (1− πθt(a1|s)) θt+1[s, a2] ← θt[s, a2]− ηt · 1{s(1)t = g} · (1− πθt(a1|s)) and we set θt+1[s, ·]← θt[s, ·] when s(0)t ̸= s or a(0)t ̸= a1. Let s(0)t = s, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = ∥θ∗[s, ·]− θt[s, ·]∥1 − ∥θ∗[s, ·]− θt+1[s, ·]∥1 = {θ∗[s, a1]− θt[s, a1] + θt[s, a2]− θ∗[s, a2]} − {θ∗[s, a1]− θt+1[s, a1] + θt+1[s, a2]− θ∗[s, a2]} = θt+1[s, a1]− θt[s, a1] + θt[s, a2]− θt+1[s, a2] = 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)). For the contextual bandit setting, the probability of success is given by PoSθ(s) = V πθ (s) = prand(s) · πθ(a1|s),∀s ∈ S. We assume that ∃ θ∗ such that πθ∗(a1|s) → 1; here, πθ∗ is the target policy. 
With the above definition, the probability of success scores for any task associated with the starting state s ∈ S w.r.t. the target and agent’s current policies (at any step t) are respectively given by PoS∗(s) = PoSθ∗(s) = prand(s) and PoSt(s) = PoSθt(s) = prand(s) · πθt(a1|s). Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : prand(s) = p∗, prand(s) · πθt(a1|s) = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [ 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)) ] = Es∼Unif(Dt(p∗,p)) [2 · ηt · prand(s) · πθt(a1|s) · (1− πθt(a1|s))] = 2 · ηt · p · ( 1− p p∗ ) . We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = 2 · ηt · ( 1− 2p p∗ ) ∂Ct(p ∗, p) ∂p∗ = 2 · ηt · ( p p∗ )2 ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 4 · ηt p∗ ≤ 0. Noting that ∂Ct∂pL = 0 when p = p∗ 2 completes the proof. C EXPERIMENTAL EVALUATION – ADDITIONAL DETAILS (SECTION 4) C.1 ENVIRONMENTS BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In the BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by the action space A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration. It consists of an avatar, walls, markers, and empty grid cells, and each element has a specific location in the grid. The avatar is characterized by its current location and orientation. Its orientation can be any direction {North,East,South,West}, and its location can be any grid cell, except from grid cells where a wall is located. The state space S of BASICKAREL is any possible configuration of the avatar, walls, and markers in a pair of grids. The avatar can move around the grid and is directed via the basic Karel commands, i.e., the action space A. While the avatar moves, if it hits a wall or the grid boundary, it “crashes” and the episode terminates. If pickMarker is selected when no marker is present, the avatar “crashes” and the program ends. Likewise, if the putMarker action is taken and a marker is already present, the avatar “crashes” and the program terminates. The finish action indicates the end of the sequence of actions, i.e., the episode ends after encountering this action. To successfully solve a BASICKAREL task, the sequence of actions must end with a finish, and there should be no termination via “crashes”. Based on this environment, we created a multi-task dataset that consists of 24000 training tasks and 2400 test tasks. All the generated tasks have a grid size of 4× 4. C.2 EVALUATION SETUP Hyperparameters of PPO method. We use the PPO method from Stable-Baselines3 library with a basic MLP policy for all the conducted experiments (Schulman et al., 2017; Raffin et al., 2021). For the POINTMASS-S, POINTMASS-D, and BALLCATCHING environments, the MLP policy has a shared layer with 64 units and a second layer with separate 64 units for the policy and 64 units for the value function. For the BASICKAREL environment, we use two separate layers of size [512, 256] for the policy network and two layers of size [256, 128] for the value function network. 
For the ANTGOAL environment, we use two separate layers of size [512, 512] for the policy network and two layers of size [512, 512] for the value function network. For all the experiments, ReLU is the chosen activation function. In Figure 6, we report the PPO hyperparameters used in the experiments. For each environment, all the hyperparameters are consistent across all the different curriculum strategies. Compute resources. All the experiments were conducted on a cluster of machines with CPUs of model Intel Xeon Gold 6134M CPU @ 3.20GHz. C.3 CURRICULUM STRATEGIES EVALUATED Variants of the curriculum strategy. Algorithm 2 provides a complete pseudo-code for the RL agent using PPO method when trained with PROCURL-VAL in the general setting of non-binary or dense rewards (see Section 3.3). In Eq. 1 and Algorithm 1, we defined t at an episodic level; however, in Algorithm 2, t denotes an environment step (in the context of the PPO method). For PROCURL-ENV, in line 24 of Algorithm 2, we estimate the probability of success for all the tasks using the additional rollouts obtained by executing the current policy inM. Hyperparameters of curriculum strategies. In Figure 7, we report the hyperparameters of each curriculum strategy used in the experiments (for each environment). Below, we provide a short description of these hyperparameters: 1. β parameter controls the stochasticity of the softmax selection. 2. Npos parameter controls the frequency at which V t is updated. For PROCURL-ENV, we set Npos higher than Nsteps since obtaining rollouts to update V t(s) is expensive. For all the other curriculum strategies, we set Npos = Nsteps. For SPACE, Npos controls how frequently the current task dataset is updated based on their curriculum. For SPDL, Npos controls how often we perform the optimization step to update the distribution for selecting tasks. 3. crollouts determines the number of additional rollouts required to compute the probability of success score for each task (only for PROCURL-ENV). 4. {Vmin, Vmax} are used in the environments with non-binary or dense rewards to obtain the normalized values V (s) (see Section 3.3). In Figure 7, {Vmin,t, Vmax,t} denote the min-max values of the critic for states Sinit at step t. 5. η and κ parameters as used in SPACE (Eimer et al., 2021). 6. VLB performance threshold as used in SPDL (Klink et al., 2021). Algorithm 2 RL agent using PPO method when trained with PROCURL-VAL in the general setting 1: Input: RL algorithm PPO, rollout buffer D 2: Hyperparameters: policy update frequency Nsteps, number of epochs Nepochs, number of mini- batches Nbatch, parameter β, Vmin, and Vmax 3: Initialization: randomly initialize policy π1 and critic V1; set normalized probability of success scores V 1(s) = 0 and PoS∗(s) = 1, ∀s ∈ Sinit 4: for t = 1, . . . , T do 5: // add an environment step to the buffer 6: observe the state st, and select the action at ∼ πt(st) 7: execute the action at in the environment 8: observe reward rt, next state st+1, and done signal dt+1 to indicate whether st+1 is terminal 9: store (st, at, rt, st+1, dt+1) in the rollout buffer D 10: // choose new task when the current task/episode ends 11: if dt+1 = true then 12: reset the environment state 13: sample next task st+1 from P [ st+1 = s ] ∝ exp ( β · V t(s) · (1− V t(s)) ) 14: // policy and V t(s) update 15: if t%Nsteps = 0 then 16: set π′ ← πt and V ′ ← Vt 17: for e = 1, . . . , Nepochs do 18: for b = 1, . . . 
, Nbatch do 19: sample b-th minibatch of Nsteps/Nbatch transitions B = {(s, a, r, s′, d)} from D 20: update policy and critic using PPO algorithm π′, V ′ ← PPO(π′, V ′, B) 21: set πt+1 ← π′ and Vt+1 ← V ′ 22: empty the rollout buffer D 23: // normalization for the environments with non-binary or dense rewards 24: update V t+1(s)← Vt+1(s)−VminVmax−Vmin , ∀s ∈ Sinit using forward passes on critic 25: else 26: maintain the previous values πt+1 ← πt, Vt+1 ← Vt, and V t+1 ← V t 27: Output: policy πT C.4 RESULTS Convergence behavior. In Figure 8, we report the performance of the trained models in the training set and a test set for comparison purposes. For POINTMASS-S, we constructed a separate test set of 100 tasks by uniformly picking tasks from the task space. For BASICKAREL, we have a train and test dataset of 24000 and 2400 tasks, respectively. Curriculum plots. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. The increasing trend in Figure 4a corresponds to a preference shift towards tasks with the gate positioned closer to the edges; the decreasing trend in Figure 4b corresponds to a preference shift towards tasks with narrower gates. For BASICKAREL, the increasing trends in Figures 5a and 5b correspond to a preference towards tasks with longer solution trajectories and tasks requiring a marker to be picked or put, respectively. In Figures 5c and 5d, tasks with a distractor marker (C-DistractorMarker) and tasks with more walls (C-Walls) are increasingly selected while training. In Figure 9, we show illustrative tasks of BASICKAREL used during the training process at different steps for PROCURL-VAL. Ablation and robustness experiments. We conduct additional experiments to evaluate the robustness of PROCURL-VAL w.r.t. different values of β and different ϵ-level noise in Vt(s) values. The results are reported in Figure 10. From the reported results, we note that picking a value for β somewhere between 10 to 30 leads to competitive performance, and PROCURL-VAL is robust even for noise levels up to ϵ = 0.2. C.5 ADDITIONAL RESULTS AND DISCUSSION PROCURL-ENVX vs. PROCURL-VAL. To achieve the constrained budget of evaluation steps in PROCURL-ENVx (with x ∈ {2, 4}), we reduce the frequency of updating PoSt since this is the most expensive operation for PROCURL-ENV requiring additional rollouts for each task. On the other hand, PROCURL-VAL updates PoSt by using the values obtained from forward-pass on the critic model – this update happens whenever the critic model is updated (every 2048 training steps for BASICKAREL). This higher frequency of updating PoSt in PROCURL-VAL is why it is slower than PROCURL-ENVx (with x ∈ {2, 4}) for BASICKAREL. Note that the relative frequency of updates for POINTMASS is different in comparison to BASICKAREL because of very different pool sizes. Hence, the behavior in total clock times is different. γ2/γ1 ablation. We conduct an additional ablation study on the form of our curriculum objective presented in Eq. 1. More specifically, we consider the following generalized variant of Eq. 1 with parameters γ1 and γ2: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( γ1 · PoS∗(s)− γ2 · PoSt(s) )) (6) In our experiments, we consider the following range of γ2/γ1 ∈ {0.6, 0.8, 1.0, 1.2, 1.4}. Our default curriculum strategy in Eq. 1 essentially corresponds to γ2/γ1 = 1.0. 
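As a complement to Algorithm 2 above, the following small Python sketch shows one way the value-normalization step of line 24 could be implemented, including the dynamic min-max choice for {Vmin, Vmax} described in Section 3.3; the function name and the example values are our own illustrative assumptions.

import numpy as np

def normalized_pos_scores(critic_values, v_min=None, v_max=None):
    # Min-max normalization of critic estimates V_t(s) over the task pool
    # S_init, clipped to [0, 1] (line 24 of Algorithm 2). If v_min / v_max
    # are not supplied, they are set dynamically from the current critic values.
    v = np.asarray(critic_values, dtype=float)
    v_min = v.min() if v_min is None else v_min
    v_max = v.max() if v_max is None else v_max
    if v_max <= v_min:                                # degenerate pool: all values equal
        return np.zeros_like(v)
    return np.clip((v - v_min) / (v_max - v_min), 0.0, 1.0)

# The normalized scores feed the sampling rule of line 13:
# P[s_{t+1} = s] proportional to exp(beta * Vbar_t(s) * (1 - Vbar_t(s))).
v_bar = normalized_pos_scores([-3.2, 0.1, 4.7, 9.8])
print(np.round(v_bar, 2))                             # [0.   0.25 0.61 1.  ]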
The following table presents the results for the environments POINTMASS-S and BASICKAREL.
1. What is the main contribution of the paper regarding automated curriculum learning? 2. What are the strengths and weaknesses of the proposed method, particularly in its mathematical formulation and notation? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any suggestions or recommendations for improving the proposed method or evaluating its performance more fairly?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes an automated curriculum learning strategy based on the idea of Zone of Proximal Development (ZPD), where the tasks are meant to be neither too hard nor too easy. The strategy, named ProCuRL, consists of choosing tasks maximizing c u r r e n t P e r f o r m a n c e ∗ ( b e s t P e r f o r m a n c e − c u r r e n t P e r f o r m a n c e ) Authors provide a mathematical argumentation behind using such a rule under the assumptions of: Independent tasks: learning on one task doesn't affect policy parameters used in other tasks Task being described by the initial state, with {0, 1} reward A gradient-based like rule with a single parameter for a given task Due to practical limitations, the actual method that is tested involves sampling from a softmax distribution, assuming the best possible score on any task is 1, and (optionally) estimating the current performance using (normalised) value functions. The method is evaluated on a set of 5 tasks, and the performance is compared to SPaCE, SPDL, and uniform task-sampling strategy, with authors claiming performance improvements over the other methods. Strengths And Weaknesses Strengths The empirical results are convincing and pointing to improvement over the baselines. Weaknesses The notation of the paper is very confusing, to the point I am not sure I understand (the mathematical motivation part of) the paper. The same letter θ is used for "parameters" (from R d ) and "performance on tasks" (from [ 0 , 1 ] | t a s k s | ). At some point one is implicitly interpreted as the other, despite different meanings and incompatible types. This is further confusing as derivatives over scores are calculated, but given that the same object is interpreted as the parameters, which the method controls, so to optimize the "score=parameter" surely just setting the parameters to 1 (corresponding to always solving the task) would solve the posed optimization problem. Even when considered alone, the gradient-based rule seems unintuitive: authors assume to have access to the final/target parameter θ ∗ and move the current parameter towards final with various strengths α , β , depending on whether the current trajectory succeeded in solving the tasks. The setting chosen by authors for motivating the work seems arbitrary and unrealistic: all currently published methods rely on the network sharing parameters between tasks. The mathematical methods used to prove the case for ProCuRL are not interesting on their own. The claim about the proposed method being in any way closer to the ZPD concept than the previous literature is an overstatement. It's not clear how authors interpret that statement, given that the manuscript makes no reference to ZPD when defining their method. In its final formulation, the method effectively amounts to (softly) prefer tasks on which agent's performance (modeled by the agent) is close to 0.5, which doesn't, actually, follow ZPD: at the beginning of training, tasks with 0.5 success rate may be too hard for the agent, and at the end, they may be too easy. The evaluation of ProCuRL-Env, which doesn't take into account the environment frames used for estimating the current performance of the agent and choose better tasks is not fair. Suggestions It would be good to include a baseline based on goal relabeling, e.g VDS [1], or HER [2]. [1]: Zhang et al. Automatic Curriculum Learning through Value Disagreement [2]: Andrychowicz et al. 
Hindsight Experience Replay Clarity, Quality, Novelty And Reproducibility As mentioned above, my biggest concern about the work is the clarity of the writing, which makes it hard for me to understand the motivation part of the paper. While I doubt that the exact formulation the authors use for choosing the tasks was used in previous literature, I don't consider prioritizing tasks according to V(s) · (1 − V(s)) to be particularly novel or efficient for general task distributions, despite the encouraging results presented by the authors in the experimental section. I consider the work reproducible (in the same task distributions).
ICLR
Title Proximal Curriculum for Reinforcement Learning Agents Abstract We consider the problem of curriculum design for reinforcement learning (RL) agents in contextual multi-task settings. Existing techniques on automatic curriculum design typically have limited theoretical underpinnings or require domainspecific hyperparameter tuning. To tackle these limitations, we design our curriculum strategy, PROCURL, inspired by the pedagogical concept of Zone of Proximal Development (ZPD). We mathematically derive PROCURL by formalizing the ZPD concept, which suggests that learning progress is maximized when picking tasks that are neither too hard nor too easy for the learner. We also present a practical variant of PROCURL that can be directly integrated with deep RL frameworks with minimal hyperparameter tuning. Experimental results on a variety of domains demonstrate the effectiveness of our curriculum strategy over state-ofthe-art baselines in accelerating the training process of deep RL agents. 1 INTRODUCTION Recent advances in deep reinforcement learning (RL) have demonstrated impressive performance in games, continuous control, and robotics (Mnih et al., 2015; Lillicrap et al., 2015; Silver et al., 2017; Levine et al., 2016). Despite these remarkable successes, a broader application of RL in real-world domains is often very limited. For example, training RL agents in contextual multi-task settings and goal-based tasks with sparse rewards still remains challenging (Hallak et al., 2015; Kirk et al., 2021; Andrychowicz et al., 2017; Florensa et al., 2017; Riedmiller et al., 2018). Inspired by the importance of curricula in pedagogical domains, there is a growing interest in leveraging curriculum strategies when training machine learning models in challenging domains. In the supervised learning setting, such as image classification, the impact of the order of presented training examples has been studied both theoretically and empirically (Weinshall et al., 2018; Weinshall & Amir, 2018; Zhou & Bilmes, 2018; Zhou et al., 2021; Elman, 1993; Bengio et al., 2009; Zaremba & Sutskever, 2014). Recent works have also studied curriculum strategies for learners in sequentialdecision-making settings, such as imitation learning (where the agent learns from demonstrations) and RL (where the agent learns from rewards). In the imitation learning setting, recent works have proposed greedy curriculum strategies for picking the next training demonstration according to the agent’s learning progress (Kamalaruban et al., 2019; Yengera et al., 2021). In the RL setting, several curriculum strategies have been proposed to improve sample efficiency, e.g., by choosing an appropriate next starting state or goal state for the task to train on (Wöhlke et al., 2020; Florensa et al., 2017; 2018; Racanière et al., 2020; Riedmiller et al., 2018; Klink et al., 2020a;b; Eimer et al., 2021). Despite extensive research on curriculum design for the RL setting, existing techniques typically have limited theoretical underpinnings or require domain-specific hyperparameter tuning. In this paper, we are interested in developing a principled curriculum strategy for the RL setting that is broadly applicable to many domains with minimal tuning of hyperparameters. To this end, we rely on the Zone of Proximal Development (ZPD) concept from the educational psychology literature (Vygotsky & Cole, 1978; Chaiklin, 2003). 
The ZPD concept, when applied in terms of learning progress, suggests that progress is maximized when the learner is presented with tasks that lie in the proximal zone, i.e., tasks that are neither too hard nor too easy. To formally capture this idea of proximal zone, we use a notion of probability of success score PoSπt(s) w.r.t. the learner’s current policy πt for any given task s. We mathematically derive an intuitive curriculum strategy based on a learner update rule that captures the ZPD concept in terms of the learning progress and reflects characteristics of the policy gradient style update. Our main results and contributions are as follows: I. We propose a curriculum strategy, PROCURL, inspired by the ZPD concept. PROCURL formalizes the idea of picking tasks that are neither too hard nor too easy for the learner in the form of selection strategy argmaxs PoSπt(s) · ( PoS∗(s) − PoSπt(s) ) , where PoS∗(s) corresponds to the probability of success score w.r.t. an optimal policy (Section 3.1). II. We derive PROCURL under two specific learning settings where we analyze the effect of picking a task on the agent’s learning progress (Section 3.2). III. We present a practical variant of PROCURL, namely PROCURL-VAL, that can be easily integrated with deep RL frameworks with minimal hyperparameter tuning (Section 3.3). IV. We empirically demonstrate the effectiveness of PROCURL-VAL over state-of-the-art baselines in accelerating the training process of deep RL agents in a variety of environments (Section 4). 1.1 RELATED WORK Curriculum strategies based on domain knowledge. Early works on curriculum design for supervised learning setting typically order the training examples in increasing difficulty (Elman, 1993; Bengio et al., 2009; Schmidhuber, 2013; Zaremba & Sutskever, 2014). This easy-to-hard design principle has been utilized in the hand-crafted curriculum approaches for the RL setting (Asada et al., 1996; Wu & Tian, 2016). Moreover, there has been recent works on designing greedy curriculum strategies for the imitation learning setting based on the iterative machine teaching framework (Liu et al., 2017; Yang et al., 2018; Zhu et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021). However, these approaches require domain-specific expert knowledge for designing difficulty measures. Curriculum strategies based on ZPD concept. In the pedagogical setting, it has been realized that effective teaching provides tasks that are neither too hard nor too easy for the human learner. This intuition of providing tasks from a particular range of difficulties is conceptualized in the ZPD concept (Vygotsky & Cole, 1978; Chaiklin, 2003; Oudeyer et al., 2007; Baranes & Oudeyer, 2013; Zou et al., 2019). In the RL setting, several curriculum strategies that have been proposed are inherently based on the ZPD concept (Florensa et al., 2017; 2018; Wöhlke et al., 2020). A common underlying theme in both (Florensa et al., 2017) and (Florensa et al., 2018) is that they choose the next task (starting or goal state) for the learner uniformly at random from the set {s : rmin ≤ PoSπt(s) ≤ rmax}. Here, the threshold values rmin and rmax require tuning according to the learner’s progress and specific to the domain. The authors in (Wöhlke et al., 2020) propose a unified framework for the learner’s performance-based starting state curricula in RL. 
In particular, the starting state selection policy of (Wöhlke et al., 2020), P [ s (0) t = s ] ∝ G(PoSπt(s)) for some function G, accommodates existing curriculum generation methods like (Florensa et al., 2017; Graves et al., 2017). Despite promising empirical results, a conceptual formalism or theoretical underpinnings relating an RL agent’s learning progress to the ZPD concept is still missing in the aforementioned works. We address this conceptual gap in the literature by designing and analyzing a learner update rule that captures the ZPD concept in terms of the learning progress and also reflects characteristics of the policy gradient style update. Curriculum strategies based on self-paced learning (SPL). In the supervised learning setting, the curriculum strategies using the SPL concept optimize the trade-off between exposing the learner to all available training examples and selecting examples in which it currently performs well (Kumar et al., 2010; Jiang et al., 2015). In SPDL (Klink et al., 2020b;a; 2021; 2022) and SPACE (Eimer et al., 2021), the authors have adapted the concept of SPL to the RL setting by controlling the intermediate task distribution with respect to the learner’s current training progress. However, SPDL and SPACE differ in their mode of operation and the objective. SPDL considers the procedural task generation framework where tasks of appropriate difficult levels can be synthesized, as also considered in (Florensa et al., 2017; 2018)). In contrast, SPACE considers a pool-based curriculum framework for picking suitable tasks, as popular in supervised learning setting. Further, SPDL considers the objective of a targeted performance w.r.t. a target distribution (e.g., concentrated distribution on hard tasks); in contrast, SPACE considers the objective of uniform performance across a given pool of tasks. Similar to SPACE, in our work, we consider the pool-based setting with uniform performance objective. Both SPDL and SPACE serve as state-of-the-art baselines in our experimental evaluation. In terms of curriculum strategy, SPDL operates by solving an optimization problem at each step to pick a task (Klink et al., 2021); SPaCE uses a ranking induced by magnitude of differences in current/previous critic values at each step to pick a task (Eimer et al., 2021). In the appendix, we have also provided some additional information on hyperparameters for SPDL and SPaCE. Other automatic curriculum strategies. There are other approaches for automatic curriculum generation, including: (i) by formulating the curriculum design problem with the use of a meta-level Markov Decision Process (Narvekar et al., 2017; Narvekar & Stone, 2019); (ii) by learning how to generate training tasks similar to a teacher (Dendorfer et al., 2020; Such et al., 2020; Matiisen et al., 2019; Turchetta et al., 2020); (iii) by leveraging self-play as a form of curriculum generation (Sukhbaatar et al., 2018); (iv) by using the disagreement between different agents trained on the same tasks (Zhang et al., 2020); (v) by picking the starting states based on a single demonstration (Salimans & Chen, 2018; Resnick et al., 2018); and (vi) by providing agents with environment variations that are at the frontier of an agent’s capabilities, e.g., Unsupervised Environment Design methods (Dennis et al., 2020; Jiang et al., 2021; Parker-Holder et al., 2022). We refer the reader to recent surveys on curriculum design for the RL setting (Narvekar et al., 2020; Portelas et al., 2021; Weng, 2020). 
2 FORMAL SETUP In this section, we formalize our problem setting based on prior work on teacher-student curriculum learning (Matiisen et al., 2019). MDP environment. We consider a learning environment defined as a Markov Decision Process (MDP)M := (S,A, T , H,R,Sinit). Here, S andA denote the state and action spaces, T : S ×S × A → [0, 1] is the transition dynamics, H is the maximum length of the episode, and R : S×A → R is the reward function. The set of initial states Sinit ⊆ S specifies a fixed pool of tasks, i.e., each starting state s ∈ Sinit corresponds to a unique task. Note that the above environment formalism is quite general enough to cover many practical settings, including the contextual multi-task MDP setting (Hallak et al., 2015).1 RL agent and training process. We consider an RL agent acting in this environment via a policy π : S × A → [0, 1] that is a mapping from a state to a probability distribution over actions. Given a task with the corresponding starting state s ∈ Sinit, the agent attempts the task via a trajectory rollout obtained by executing its policy π from s in the MDP M. The trajectory rollout is denoted as ξ = {(s(τ), a(τ), R(s(τ), a(τ)))}τ=0,1,...,h with s(0) = s and for some h ≤ H . The agent’s performance on task s is measured via the value function V π(s) := E [∑h τ=0 R(s (τ), a(τ)) ∣∣π,M, s(0) = s]. Then, the uniform performance of the agent over the pool of tasks Sinit is given by V π := Es∼Uniform(Sinit) [V π(s)]. The training process of the agent involves an interaction between two components: a student component that is responsible for policy update and a teacher component that is responsible for task selection. The interaction happens in discrete steps, indexed by t = 1, 2, . . ., and is formally described in Algorithm 1. Let πend denote the agent’s final policy at the end of training. The training objective is to ensure that the uniform performance of the policy πend is ϵ-near-optimal, i.e., (maxπ V π − V πend) ≤ ϵ. In the following two paragraphs, we discuss the student and teacher components in detail. Student component. We consider a parametric representation for the RL agent, whose current knowledge is parameterized by θ ∈ Θ ⊆ Rd and each parameter θ is mapped to a policy πθ : S×A → [0, 1]. At step t, the student component updates the knowledge parameter based on the following quantities: the current knowledge parameter θt, the task picked by the teacher component, and the rollout ξt = {(s(τ)t , a(τ)t , R(s(τ)t , a(τ)t ))}τ . Then, the updated knowledge parameter θt+1 is mapped to the agent’s policy given by πt+1 := πθt+1 . As a concrete example, the knowledge parameter of the REINFORCE agent (Sutton et al., 1999) is updated as θt+1 ← θt + ηt · ∑h−1 τ=0 G (τ) t · g(τ)t , where ηt is the learning rate, G (τ) t = ∑h τ ′=τ R(s (τ ′) t , a (τ ′) t ), and g (τ) t = [ ∇θ log πθ(a(τ)t |s(τ)t ) ] θ=θt . Teacher component. At step t, the teacher component picks a task with the corresponding starting state s(0)t for the student component to attempt via a trajectory rollout (see line 3 in Algorithm 1). The sequence of tasks (curriculum) picked by the teacher component affects the performance improvement of the policy πt. The main focus of this work is to develop a teacher component to achieve the training objective in both computational and sample efficient manner. 1In this setting, for a given set of contexts C, the pool of tasks is given by {Mc = (S,A, Tc, H,Rc,S init) : c ∈ C}. 
Our environment formalism (MDP M) covers this setting as follows: S = S × C; Sinit = S init × C; T ((s̄′, c)|(s̄, c), a) = Tc(s̄′|s̄, a) and R((s̄, c), a) = Rc(s̄, a), ∀s̄, s̄′ ∈ S, a ∈ A, c ∈ C. Algorithm 1 RL Agent Training as Interaction between Teacher-Student Components 1: Input: RL agent’s initial policy π1 2: for t = 1, 2, . . . do 3: Teacher component picks a task with the corresponding starting state s(0)t . 4: Student component attempts the task via a trajectory rollout ξt using the policy πt from s (0) t . 5: Student component updates the policy to πt+1. 6: Output: RL agent’s final policy πend ← πt+1. 3 PROXIMAL CURRICULUM STRATEGY In Section 3.1, we propose a curriculum strategy for the goal-based setting. In Section 3.2, we show that the proposed curriculum strategy can be derived from basic principles by formalizing the ZPD concept. In Section 3.3, we present our final curriculum strategy that is applicable in general settings. 3.1 CURRICULUM STRATEGY FOR THE GOAL-BASED SETTING Here, we introduce our curriculum strategy for the goal-based setting using the notion of probability of success scores. Goal-based setting. In this setting, the reward function R is goal-based, i.e., the agent gets a reward of 1 only at the goal states and 0 at other states; moreover, any action from a goal state also leads to termination. For any task with the corresponding starting state s ∈ Sinit, we say that the attempted rollout ξ succeeds in the task if the final state of ξ is a goal state. Formally, succ(ξ; s) is an indicator function whose value is 1 when the rollout ξ succeeds in the task s, and 0 otherwise. Furthermore, for an agent with policy π, we have that V π(s) := E [ succ(ξ; s) ∣∣π,M] is equal to the total probability of reaching a goal state by executing the policy π starting from s ∈ Sinit. Probability of success. We begin by assigning a probability of success score for any task with the corresponding starting state s ∈ Sinit w.r.t. any parameterized policy πθ in the MDPM. Definition 1. For any given knowledge parameter θ ∈ Θ and any starting state s ∈ Sinit, we define the probability of success score PoSθ(s) as the probability of successfully solving the task s by executing the policy πθ in the MDPM. For the goal-based settings, we have PoSθ(s) = V πθ (s). With the above definition, the probability of success score for any task s ∈ Sinit w.r.t. the agent’s current policy πt is given by PoSt(s) := PoSθt(s). Further, we define PoS ∗(s) := maxθ∈Θ PoSθ(s). Curriculum strategy. Based on the notion of probability of success scores that we defined above, we propose the following curriculum strategy: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (1) i.e., at step t, the teacher component picks a task associated with the starting state s(0)t according to Eq. 1. In the following subsection, we show that our curriculum strategy can be derived by considering simple learning settings, such as contextual bandit problems with REINFORCE agent; these derivations provide insights about the design of the curriculum strategy. In Section 3.3, we provide a detailed step-by-step discussion on how our curriculum can be applied in practice to increasingly complex settings. 3.2 THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY To derive our curriculum strategy for the goal-based setting, we additionally consider independent tasks where any task s(0)t picked from the pool Sinit at step t only affects the agent’s knowledge component corresponding to that task. 
Further, we assume that there exists a knowledge parameter θ* ∈ Θ such that π_{θ*} ∈ argmax_π V^π, and π_{θ*} is referred to as the target policy. Then, based on the work of (Weinshall et al., 2018; Kamalaruban et al., 2019; Yengera et al., 2021), we investigate the effect of picking a task s_t^{(0)} at step t on the convergence of the agent's parameter θ_t towards the target parameter θ*. Under a smoothness condition on the value function of the form |V^{π_θ} − V^{π_{θ'}}| ≤ L · ‖θ − θ'‖_1 for all θ, θ' ∈ Θ and some L > 0, we can translate the parameter convergence (θ_t → θ*) into the performance convergence (V^{π_{θ_t}} → V^{π_{θ*}}). Thus, we define the improvement in the training objective at step t as Δ_t(θ_{t+1} | θ_t, s_t^{(0)}, ξ_t) := ‖θ* − θ_t‖_1 − ‖θ* − θ_{t+1}‖_1 (Eq. 2). In this objective, we use the ℓ1-norm because our theoretical analysis considers the independent task setting mentioned above. Given two success values p*, p ∈ R, we define a set of feasible tasks at step t as D_t(p*, p) := {s ∈ S_init : PoS_{θ*}(s) = p*, PoS_{θ_t}(s) = p}. The set D_t(p*, p) contains all the tasks for which the probability of success score w.r.t. the target policy is equal to the value p* and the probability of success score w.r.t. the agent's current policy is equal to the value p. Further, we define the expected improvement in the training objective at step t given success values p* and p as follows: C_t(p*, p) := E_{s_t^{(0)} ∼ Uniform(D_t(p*, p))} E_{ξ_t | s_t^{(0)}} [ Δ_t(θ_{t+1} | θ_t, s_t^{(0)}, ξ_t) ], where the outer expectation is w.r.t. the uniform distribution over the set D_t(p*, p). In the following, we analyze the above quantity for specific agent models under the independent task setting. More concretely, Theorems 1 and 2 characterize the impact of picking a task at step t on the objective in Eq. 2 with the following values: (i) the task's PoS w.r.t. the target policy π_{θ*} having value p*, and (ii) the task's PoS w.r.t. the agent's current policy having value p. For the specific settings considered in Sections 3.2.1 and 3.2.2, Theorems 1 and 2 imply that picking tasks based on the curriculum strategy given in Eq. 1 maximizes the expected value of the objective in Eq. 2. 3.2.1 ABSTRACT AGENT WITH DIRECT PERFORMANCE PARAMETERIZATION We consider an abstract agent model with the following direct performance parameterization: for any θ ∈ Θ = [0, 1]^{|S_init|}, we have PoS_θ(s) = θ[s], ∀s ∈ S_init (see Footnote 2). Under this model, the agent's current knowledge θ_t at step t is encoded directly by its probability of success scores {PoS_{θ_t}(s) | s ∈ S_init}. The target knowledge parameter θ* is given by {PoS_{θ*}(s) | s ∈ S_init}. Under the independent task setting, we design an update rule for the agent to capture the ZPD concept in terms of the learning progress (Vygotsky & Cole, 1978; Chaiklin, 2003), and to also reflect characteristics of the policy gradient style update. In particular, for s = s_t^{(0)} ∈ S_init, we update θ_{t+1}[s] ← θ_t[s] + α · succ(ξ_t; s) · (θ*[s] − θ_t[s]) + β · (1 − succ(ξ_t; s)) · (θ*[s] − θ_t[s]), where α, β ∈ [0, 1] and α > β. For s ∈ S_init with s ≠ s_t^{(0)}, we maintain θ_{t+1}[s] ← θ_t[s]. Importantly, α > β implies that the agent's current knowledge for the picked task is updated more when the agent succeeds in that task compared to the failure case. The update rule captures the following idea: when picking a task that is "too easy", the progress in θ_t towards θ* is minimal since (θ*[s] − θ_t[s]) is low; similarly, when picking a task that is "too hard", the progress in θ_t towards θ* is minimal since β · (θ*[s] − θ_t[s]) is low for β ≪ 1. 
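To make the abstract learner model concrete, the following minimal Python/NumPy sketch (ours, not from the paper; the function and grid names are illustrative) computes the expected one-step improvement implied by the update rule, E_{ξ_t}[Δ_t] = (α·p + β·(1−p))·(p* − p), and shows that for α = 1, β = 0 it is largest at p = p*/2, i.e., for tasks of intermediate success probability.

```python
import numpy as np

def expected_improvement(p, p_star, alpha=1.0, beta=0.0):
    # E[Delta_t] for the abstract learner: the rollout succeeds with probability p
    # (update strength alpha), otherwise fails (update strength beta).
    return (alpha * p + beta * (1.0 - p)) * (p_star - p)

p_grid = np.linspace(0.0, 1.0, 101)
gains = expected_improvement(p_grid, p_star=1.0)
print("p maximizing expected progress:", p_grid[np.argmax(gains)])  # prints 0.5
```

The maximizer found numerically matches the threshold (α·p* − β·p* − β) / (2(α − β)) that appears in Theorem 1 below.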
The following theorem shows the differential effect of the probability of success scores p* and p on the expected improvement in the training objective C_t(p*, p). Theorem 1. Consider the abstract agent with direct performance parameterization under the independent task setting as described above. Let s_t^{(0)} be the task picked at step t with PoS_{θ_t}(s_t^{(0)}) = p and PoS_{θ*}(s_t^{(0)}) = p*. Then, we have: (i) ∂C_t(p*, p)/∂p > 0 for p < (α·p* − β·p* − β) / (2(α − β)); (ii) ∂C_t(p*, p)/∂p < 0 for p > (α·p* − β·p* − β) / (2(α − β)); (iii) ∂C_t(p*, p)/∂p = 0 for p = (α·p* − β·p* − β) / (2(α − β)); and (iv) ∂C_t(p*, p)/∂p* > 0 for all p* ∈ [0, 1]. For the above setting with α = 1 and β = 0, max_{p*,p} C_t(p*, p) is equivalent to max_{p*,p} p · (p* − p). This, in turn, implies that the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.2.2 REINFORCE AGENT WITH SOFTMAX POLICY PARAMETERIZATION We consider the REINFORCE agent model with the following softmax policy parameterization: for any θ ∈ R^{|S|·|A|}, we parameterize the policy as π_θ(a|s) ∝ exp(θ[s, a]), ∀s ∈ S, a ∈ A. For this policy parameterization, the smoothness condition on the reward function provided in (Kamalaruban et al., 2019) can be translated to the smoothness condition on the value function. (Footnote 2: In this setting, we abstract out the policy π_θ and directly map the "parameter" θ to a vector of "performance on tasks" PoS_θ. Then, we choose the parameter space as Θ = [0, 1]^{|S_init|} (where d = |S_init|) and define PoS_θ = θ. Thus, an update in the "parameter" θ is equivalent to an update in the "performance on tasks" PoS_θ.) In the following, we consider a problem instance involving a pool of contextual bandit tasks (a special case of the independent task setting). Consider an MDP M with g ∈ S as the goal state for all tasks, S_init = S \ {g}, A = {a1, a2}, and H = 1. We define the reward function as follows: R(s, a) = 0, ∀s ∈ S \ {g}, a ∈ A, and R(g, a) = 1, ∀a ∈ A. For a given probability mapping p_rand : S → [0, 1], we define the transition dynamics as follows: T(g|s, a1) = p_rand(s), ∀s ∈ S; T(s|s, a1) = 1 − p_rand(s), ∀s ∈ S; and T(s|s, a2) = 1, ∀s ∈ S. Then, for the REINFORCE agent under the above setting, the following theorem shows the differential effect of p* and p on C_t(p*, p): Theorem 2. Consider the REINFORCE agent with softmax policy parameterization under the independent task setting as described above. Let s_t^{(0)} be the task picked at step t with PoS_{θ_t}(s_t^{(0)}) = p and PoS_{θ*}(s_t^{(0)}) = p*. Then, we have: (i) ∂C_t(p*, p)/∂p > 0 for p < p*/2; (ii) ∂C_t(p*, p)/∂p < 0 for p > p*/2; (iii) ∂C_t(p*, p)/∂p = 0 for p = p*/2; and (iv) ∂C_t(p*, p)/∂p* > 0 for all p* ∈ [0, 1]. For the above setting with p_rand(s) = 1, ∀s ∈ S, max_p C_t(1, p) is equivalent to max_p p · (1 − p). This means that for the case of PoS*(s) = 1, ∀s ∈ S_init, the curriculum strategy given in Eq. 1 can be seen as greedily optimizing the expected improvement in the training objective at step t. 3.3 CURRICULUM STRATEGY FOR GENERAL SETTINGS Next, we discuss various practical issues in directly applying the curriculum strategy in Eq. 1 for general settings, and introduce several design choices to address these issues. Softmax selection. When training deep RL agents, it is typically useful to allow some stochasticity in the selected batch of tasks. Moreover, the argmax selection in Eq. 1 is brittle in the presence of any approximation errors in computing PoS(·) values. To tackle this issue, we replace argmax selection in Eq. 
1 with softmax selection and sample according to the following distribution: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( PoS∗(s)− PoSt(s) )) , (3) where β is a hyperparameter. Here, PoSt(s) values are computed for each s ∈ Sinit using rollouts obtained via executing the policy πt inM; PoS∗(s) values are assumed to be provided as input. PoS∗(·) is not known. Since the target policy πθ∗ is unknown, it is not possible to compute the PoS∗(s) values without additional domain knowledge. In our experiments, we resort to simply setting PoS∗(s) = 1,∀s ∈ Sinit in Eq. 3 – the rationale behind this choice is that we expect the ideal πθ∗ to succeed in all the tasks in the pool. However, the above choice could lead to suboptimal strategy for specific scenarios, e.g., all PoS∗(s) are below 0.5. As an alternative, one could estimate PoS∗(s) during the training process, e.g., using top K% rollouts obtained by executing the current policy πt starting from s. This brings us to the following curriculum strategy referred to as PROCURL-ENV in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · PoSt(s) · ( 1− PoSt(s) )) . (4) Computing PoSt(·) is expensive. It is expensive (sample inefficient) to estimate PoSt(s) over the space Sinit using rollouts of the policy πt. To tackle this issue, we replace PoSt(s) with values Vt(s) obtained from the critic network of the RL agent. This brings us to the following curriculum strategy referred to as PROCURL-VAL in our experimental evaluation: P [ s (0) t = s ] ∝ exp ( β · Vt(s) · ( 1− Vt(s) )) . (5) Extension to non-binary or dense reward settings. The current forms of PROCURL-VAL in Eq. 5 and PROCURL-ENV in Eq. 4 are not directly applicable for settings where the reward is non-binary or dense. To deal with this issue in PROCURL-VAL, we replace Vt(s) values from the critic in Eq. 5 with normalized values given by V t(s) = Vt(s)−Vmin Vmax−Vmin clipped to the range [0, 1]. Here, Vmin and Vmax could be provided as input based on the environment’s reward function; alternatively we can dynamically set Vmin and Vmax during the training process by taking min-max values of the critic for states Sinit at step t. To deal with this issue in PROCURL-ENV, we replace PoSt(s) values from the rollouts in Eq. 4 with normalized values V t(s) as above. Algorithm 2 in the appendix provides a complete pseudo-code for the RL agent training with PROCURL-VAL in this general setting. 4 EXPERIMENTAL EVALUATION In this section, we evaluate the effectiveness of our curriculum strategies on a variety of domains w.r.t. the uniform performance of the trained RL agent over the training pool of tasks. Additionally, we consider the following two metrics in our evaluation: (i) total number of environment steps incurred jointly by the teacher and the student components at the end of the training process; (ii) total clock time required for the training process. Throughout all the experiments, we use PPO method from Stable-Baselines3 library for policy optimization (Schulman et al., 2017; Raffin et al., 2021). 4.1 ENVIRONMENTS We consider 5 different environments in our evaluation as described in the following paragraphs. Figure 1 provides a summary and illustration of these environments. POINTMASS-S and POINTMASS-D. Based on the work of (Klink et al., 2020b), we consider a contextual POINTMASS environment where an agent navigates a point mass through a gate of a given size towards a goal in a two-dimensional space. 
More concretely, we consider two settings: (i) POINTMASS-S environment corresponds to a goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully moves the point mass to the goal position; (ii) POINTMASS-D environment corresponds to a dense reward setting as used by (Klink et al., 2020b) where the reward values decay in a squared exponential manner with increasing distance to the goal. Here, the contextual variable c ∈ R3 controls the position of the gate (C-GatePosition), the width of the gate (C-GateWidth), and the friction coefficient of the ground (C-Friction). We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks (here, each task corresponds to a different contextual variable). BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In our BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration; the environment is episodic with goal-based (i.e., binary and sparse) reward setting where the agent receives a reward of 1 only if it successfully transforms the task’s initial grid into the task’s final grid. Here, the contextual variable is discrete where each task can be considered as a discrete context. We construct the training pool of tasks by sampling 24000 tasks; additional details are provided in the appendix. BALLCATCHING. This environment is same as used in the work of (Klink et al., 2020b); here, an agent needs to direct a robot to catch a ball thrown towards it. The reward function is sparse and non-binary, only rewarding the robot when it catches the ball and penalizing it for excessive movements. The contextual vector c ∈ R3 captures the distance to the robot from which the ball is thrown and its goal position in a plane that intersects the base of the robot. We construct the training pool of tasks by uniformly sampling 100 tasks over the space of possible tasks. ANTGOAL. This environment is adapted from the original MuJoCo ANT environment (Todorov et al., 2012). In our adaptation, we additionally have a goal on a flat 2D surface and an agent is rewarded for moving an ant robot towards the goal location. This goal-based reward term replaces the original reward term of making the ant move forward; also, this reward term increases exponentially when the ant moves closer to the goal location. We keep the other reward terms such as control and contact costs similar to the original MuJoCo ANT environment. The environment is episodic with a length of 200 steps. The goal location essentially serves as a contextual variable in R2. We construct the training pool of tasks by uniformly sampling 50 goal locations from a circle around the ant. 4.2 CURRICULUM STRATEGIES EVALUATED Variants of our curriculum strategy. We consider the curriculum strategies PROCURL-VAL and PROCURL-ENV from Section 3.3. Since PROCURL-ENV uses policy rollouts to estimate PoSt(s) in Eq. 4, it requires environment steps for selecting tasks in addition to environment steps for training. 
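As a concrete illustration of how these two variants pick tasks in practice, the following Python/NumPy sketch (ours, not the authors' released code; the pool values are made up) implements the softmax selection of Eqs. 4–5: PROCURL-VAL scores each pooled task with normalized critic values V̄_t(s) · (1 − V̄_t(s)), while PROCURL-ENV scores it with rollout-based estimates PoS_t(s) · (1 − PoS_t(s)); the sampling rule is otherwise identical.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_task(scores, beta=10.0):
    # Softmax sampling: P[task s] ∝ exp(beta * score(s)), cf. Eqs. 4-5.
    logits = beta * scores
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# PROCURL-VAL: scores from normalized, clipped critic values (no extra rollouts).
values = np.array([0.05, 0.4, 0.55, 0.9])      # V_t(s) for each task in the pool (made up)
v_bar = np.clip((values - values.min()) / (values.max() - values.min() + 1e-8), 0.0, 1.0)
task_val = select_task(v_bar * (1.0 - v_bar))

# PROCURL-ENV: scores from Monte-Carlo success rates estimated with extra rollouts.
pos_hat = np.array([0.0, 0.3, 0.6, 1.0])       # empirical PoS_t(s) per task (made up)
task_env = select_task(pos_hat * (1.0 - pos_hat))
print(task_val, task_env)
```

The only difference between the two variants is where the success estimate comes from (critic forward passes vs. additional rollouts), which is exactly the sample-efficiency trade-off examined next.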
To compare PROCURL-VAL and PROCURL-ENV in terms of trade-off between performance and sample efficiency, we introduce a variant PROCURL-ENVX where x controls the budget of the total number of steps used for estimation and training. In Figure 3, variants with x ∈ {2, 4} refer to a total budget of about x million environment steps when training comprises of 1 million steps. State-of-the-art baselines. SPDL (Klink et al., 2020b) and SPACE (Eimer et al., 2021) are state-ofthe-art curriculum strategies for contextual RL. We adapt the implementation of an improved version of SPDL, presented in (Klink et al., 2021), to work with a discrete pool of tasks. We also introduce a variant of SPACE, namely SPACE-ALT, by adapting the implementation of (Eimer et al., 2021) to sample the next training task as P [ s (0) t = s ] ∝ exp ( β · ( Vt(s)− Vt−1(s) )) . Prototypical baselines. IID strategy randomly samples the next task from the pool; note that IID serves as a competitive baseline since we consider the uniform performance objective. We introduce two additional variants of PROCURL-ENV, namely EASY and HARD, to understand the importance of the two terms PoSt(s) and ( 1 − PoSt(s) ) in Eq. 4. EASY samples tasks as P [ s (0) t = s ] ∝ exp ( β·PoSt(s) ) , and HARD samples tasks as P [ s (0) t = s ] ∝ exp ( β· ( 1−PoSt(s) )) . 4.3 RESULTS Convergence behavior and curriculum plots. As shown in Figure 2, the RL agents trained using the variants of our curriculum strategy, PROCURL-ENV and PROCURL-VAL, either match or outperform the agents trained with state-of-the-art and prototypical baselines in all the environments. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. We provide further details in the appendix. Metrics comparison. In Figure 3, we compare curriculum strategies considered in our experiments w.r.t. different metrics. PROCURL-VAL has similar sample complexity as state-of-the-art baselines since it does not require additional environment steps for the teacher component. PROCURL-VAL performs better compared to SPDL and SPACE in terms of computational complexity. The effect of that is more evident as the pool size increases. The reason is that PROCURL-VAL only requires forward-pass operation on the critic-model to obtain value estimates for each task in the pool. SPDL and SPACE not only require the same forward-pass operations, but SPDL does an additional optimization step, and SPACE requires a task ordering step. In terms of agent’s performance, our curriculum strategies exceed or match these baselines at different training segments. Even though PROCURL-ENV consistently surpasses all the other variants in terms of performance, its teacher component requires a lot of additional environment steps. Regarding the prototypical baselines in Figure 3, we make the following observations: (a) IID is a strong baseline in terms of sample and computational efficiency, however, its performance tends to be unstable in POINTMASS-S environment because of high randomness; (b) EASY performs well in POINTMASS-S because of the presence of easy tasks in the task space of this environment, but, performs quite poorly in BASICKAREL; (c) HARD consistently fails in both the environments. 5 CONCLUDING DISCUSSIONS We proposed a novel curriculum strategy for deep RL agents inspired by the ZPD concept. 
We mathematically derived our strategy from basic principles and empirically demonstrated its effectiveness in a variety of complex domains. Here, we discuss a few limitations of our work and outline a plan on how to address them in future work. First, we provided theoretical justifications of our proposed curriculum using simple learner models; it would be interesting also to provide a rigorous analysis of how our curriculum strategy improves the convergence rates of (deep) RL agents. Second, our experimental results show that different variants of our proposed curriculum provide an inherent trade-off between runtime and performance; it would be interesting to systematically study these variants to obtain a more effective curriculum strategy across different metrics. Third, it would be interesting to extend our curriculum strategy to high-dimensional sparse reward environments; in particular, our curriculum strategy requires estimating the probability of success of all tasks in the pool when sampling a new task which becomes challenging in high dimensional context space. A TABLE OF CONTENTS In this section, we give a brief description of the content provided in the appendices of the paper. • Appendix B provides proofs for Theorems 1 and 2. (Section 3.2) • Appendix C provides additional details and results for experimental evaluation. (Section 4) B THEORETICAL JUSTIFICATIONS FOR THE CURRICULUM STRATEGY – PROOFS (SECTION 3.2) B.1 PROOF OF THEOREM 1 Proof. Let s(0)t = s ∈ Sinit, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = θt+1[s]− θt[s] = α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s]). For the abstract learner model defined in Section 3.2.1, we have PoSθ(s) = V πθ (s) = θ[s], for any s ∈ Sinit. Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : θ∗[s] = p∗, θt[s] = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [α · succ(ξt; s) · (θ∗[s]− θt[s]) + β · (1− succ(ξt; s)) · (θ∗[s]− θt[s])] = Es∼Unif(Dt(p∗,p)) [α · θt[s] · (θ∗[s]− θt[s]) + β · (1− θt[s]) · (θ∗[s]− θt[s])] = α · p · (p∗ − p) + β · (1− p) · (p∗ − p) = α · p · p∗ − α · p2 + β · p∗ − β · p− β · p · p∗ + β · p2. We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = α · p∗ − 2 · α · p− β − β · p∗ + 2 · β · p ∂Ct(p ∗, p) ∂p∗ = α− β ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 2 · (α+ β) ≤ 0. Noting that ∂Ct∂pL = 0 when p = αp∗−βp∗−β 2(α−β) completes the proof. B.2 PROOF OF THEOREM 2 Proof. For the contextual bandit setting described in Section 3.2.2, the REINFORCE learner’s update rule reduces to the following: θt+1 ← θt+ηt·1{s(1)t = g}· [ ∇θ log πθ(a(0)t |s(0)t ) ] θ=θt . In particular, for s(0)t = s and a (0) t = a1, we update: θt+1[s, a1] ← θt[s, a1] + ηt · 1{s(1)t = g} · (1− πθt(a1|s)) θt+1[s, a2] ← θt[s, a2]− ηt · 1{s(1)t = g} · (1− πθt(a1|s)) and we set θt+1[s, ·]← θt[s, ·] when s(0)t ̸= s or a(0)t ̸= a1. Let s(0)t = s, and consider the following: ∆t(θt+1 ∣∣θt, s, ξt) = ∥θ∗ − θt∥1 − ∥θ∗ − θt+1∥1 = ∥θ∗[s, ·]− θt[s, ·]∥1 − ∥θ∗[s, ·]− θt+1[s, ·]∥1 = {θ∗[s, a1]− θt[s, a1] + θt[s, a2]− θ∗[s, a2]} − {θ∗[s, a1]− θt+1[s, a1] + θt+1[s, a2]− θ∗[s, a2]} = θt+1[s, a1]− θt[s, a1] + θt[s, a2]− θt+1[s, a2] = 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)). For the contextual bandit setting, the probability of success is given by PoSθ(s) = V πθ (s) = prand(s) · πθ(a1|s),∀s ∈ S. We assume that ∃ θ∗ such that πθ∗(a1|s) → 1; here, πθ∗ is the target policy. 
With the above definition, the probability of success scores for any task associated with the starting state s ∈ S w.r.t. the target and agent’s current policies (at any step t) are respectively given by PoS∗(s) = PoSθ∗(s) = prand(s) and PoSt(s) = PoSθt(s) = prand(s) · πθt(a1|s). Given two success values p∗, p ∈ R, the set of feasible tasks at time t is given by Dt(p∗, p) := {s ∈ S : prand(s) = p∗, prand(s) · πθt(a1|s) = p}. Now, we consider the following: Ct(p ∗, p) = Es∼Unif(Dt(p∗,p))Eξt|s [ ∆t(θt+1 ∣∣θt, s, ξt)] = Es∼Unif(Dt(p∗,p))Eξt|s [ 2 · ηt · 1{a(0)t = a1, s(1)t = g} · (1− πθt(a1|s)) ] = Es∼Unif(Dt(p∗,p)) [2 · ηt · prand(s) · πθt(a1|s) · (1− πθt(a1|s))] = 2 · ηt · p · ( 1− p p∗ ) . We have the following partial derivatives: ∂Ct(p ∗, p) ∂p = 2 · ηt · ( 1− 2p p∗ ) ∂Ct(p ∗, p) ∂p∗ = 2 · ηt · ( p p∗ )2 ≥ 0 ∂C2t (p ∗, p) (∂p)2 = − 4 · ηt p∗ ≤ 0. Noting that ∂Ct∂pL = 0 when p = p∗ 2 completes the proof. C EXPERIMENTAL EVALUATION – ADDITIONAL DETAILS (SECTION 4) C.1 ENVIRONMENTS BASICKAREL. This environment is inspired by the Karel program synthesis domain Bunel et al. (2018), where the goal of an agent is to transform an initial grid into a final grid configuration by a sequence of commands. In the BASICKAREL environment, we do not allow any programming constructs such as conditionals or loops and limit the commands to the “basic” actions given by the action space A = {move,turnLeft,turnRight,pickMarker,putMarker,finish}. A task in this environment corresponds to a pair of initial grid and final grid configuration. It consists of an avatar, walls, markers, and empty grid cells, and each element has a specific location in the grid. The avatar is characterized by its current location and orientation. Its orientation can be any direction {North,East,South,West}, and its location can be any grid cell, except from grid cells where a wall is located. The state space S of BASICKAREL is any possible configuration of the avatar, walls, and markers in a pair of grids. The avatar can move around the grid and is directed via the basic Karel commands, i.e., the action space A. While the avatar moves, if it hits a wall or the grid boundary, it “crashes” and the episode terminates. If pickMarker is selected when no marker is present, the avatar “crashes” and the program ends. Likewise, if the putMarker action is taken and a marker is already present, the avatar “crashes” and the program terminates. The finish action indicates the end of the sequence of actions, i.e., the episode ends after encountering this action. To successfully solve a BASICKAREL task, the sequence of actions must end with a finish, and there should be no termination via “crashes”. Based on this environment, we created a multi-task dataset that consists of 24000 training tasks and 2400 test tasks. All the generated tasks have a grid size of 4× 4. C.2 EVALUATION SETUP Hyperparameters of PPO method. We use the PPO method from Stable-Baselines3 library with a basic MLP policy for all the conducted experiments (Schulman et al., 2017; Raffin et al., 2021). For the POINTMASS-S, POINTMASS-D, and BALLCATCHING environments, the MLP policy has a shared layer with 64 units and a second layer with separate 64 units for the policy and 64 units for the value function. For the BASICKAREL environment, we use two separate layers of size [512, 256] for the policy network and two layers of size [256, 128] for the value function network. 
For the ANTGOAL environment, we use two separate layers of size [512, 512] for the policy network and two layers of size [512, 512] for the value function network. For all the experiments, ReLU is the chosen activation function. In Figure 6, we report the PPO hyperparameters used in the experiments. For each environment, all the hyperparameters are consistent across all the different curriculum strategies. Compute resources. All the experiments were conducted on a cluster of machines with CPUs of model Intel Xeon Gold 6134M CPU @ 3.20GHz. C.3 CURRICULUM STRATEGIES EVALUATED Variants of the curriculum strategy. Algorithm 2 provides a complete pseudo-code for the RL agent using PPO method when trained with PROCURL-VAL in the general setting of non-binary or dense rewards (see Section 3.3). In Eq. 1 and Algorithm 1, we defined t at an episodic level; however, in Algorithm 2, t denotes an environment step (in the context of the PPO method). For PROCURL-ENV, in line 24 of Algorithm 2, we estimate the probability of success for all the tasks using the additional rollouts obtained by executing the current policy inM. Hyperparameters of curriculum strategies. In Figure 7, we report the hyperparameters of each curriculum strategy used in the experiments (for each environment). Below, we provide a short description of these hyperparameters: 1. β parameter controls the stochasticity of the softmax selection. 2. Npos parameter controls the frequency at which V t is updated. For PROCURL-ENV, we set Npos higher than Nsteps since obtaining rollouts to update V t(s) is expensive. For all the other curriculum strategies, we set Npos = Nsteps. For SPACE, Npos controls how frequently the current task dataset is updated based on their curriculum. For SPDL, Npos controls how often we perform the optimization step to update the distribution for selecting tasks. 3. crollouts determines the number of additional rollouts required to compute the probability of success score for each task (only for PROCURL-ENV). 4. {Vmin, Vmax} are used in the environments with non-binary or dense rewards to obtain the normalized values V (s) (see Section 3.3). In Figure 7, {Vmin,t, Vmax,t} denote the min-max values of the critic for states Sinit at step t. 5. η and κ parameters as used in SPACE (Eimer et al., 2021). 6. VLB performance threshold as used in SPDL (Klink et al., 2021). Algorithm 2 RL agent using PPO method when trained with PROCURL-VAL in the general setting 1: Input: RL algorithm PPO, rollout buffer D 2: Hyperparameters: policy update frequency Nsteps, number of epochs Nepochs, number of mini- batches Nbatch, parameter β, Vmin, and Vmax 3: Initialization: randomly initialize policy π1 and critic V1; set normalized probability of success scores V 1(s) = 0 and PoS∗(s) = 1, ∀s ∈ Sinit 4: for t = 1, . . . , T do 5: // add an environment step to the buffer 6: observe the state st, and select the action at ∼ πt(st) 7: execute the action at in the environment 8: observe reward rt, next state st+1, and done signal dt+1 to indicate whether st+1 is terminal 9: store (st, at, rt, st+1, dt+1) in the rollout buffer D 10: // choose new task when the current task/episode ends 11: if dt+1 = true then 12: reset the environment state 13: sample next task st+1 from P [ st+1 = s ] ∝ exp ( β · V t(s) · (1− V t(s)) ) 14: // policy and V t(s) update 15: if t%Nsteps = 0 then 16: set π′ ← πt and V ′ ← Vt 17: for e = 1, . . . , Nepochs do 18: for b = 1, . . . 
, Nbatch do 19: sample b-th minibatch of Nsteps/Nbatch transitions B = {(s, a, r, s′, d)} from D 20: update policy and critic using PPO algorithm π′, V ′ ← PPO(π′, V ′, B) 21: set πt+1 ← π′ and Vt+1 ← V ′ 22: empty the rollout buffer D 23: // normalization for the environments with non-binary or dense rewards 24: update V t+1(s)← Vt+1(s)−VminVmax−Vmin , ∀s ∈ Sinit using forward passes on critic 25: else 26: maintain the previous values πt+1 ← πt, Vt+1 ← Vt, and V t+1 ← V t 27: Output: policy πT C.4 RESULTS Convergence behavior. In Figure 8, we report the performance of the trained models in the training set and a test set for comparison purposes. For POINTMASS-S, we constructed a separate test set of 100 tasks by uniformly picking tasks from the task space. For BASICKAREL, we have a train and test dataset of 24000 and 2400 tasks, respectively. Curriculum plots. Figures 4 and 5 visualize the curriculums generated by PROCURL-ENV, PROCURL-VAL, and IID; the trends for PROCURL-VAL generally indicate a gradual shift towards harder tasks across different contexts. The increasing trend in Figure 4a corresponds to a preference shift towards tasks with the gate positioned closer to the edges; the decreasing trend in Figure 4b corresponds to a preference shift towards tasks with narrower gates. For BASICKAREL, the increasing trends in Figures 5a and 5b correspond to a preference towards tasks with longer solution trajectories and tasks requiring a marker to be picked or put, respectively. In Figures 5c and 5d, tasks with a distractor marker (C-DistractorMarker) and tasks with more walls (C-Walls) are increasingly selected while training. In Figure 9, we show illustrative tasks of BASICKAREL used during the training process at different steps for PROCURL-VAL. Ablation and robustness experiments. We conduct additional experiments to evaluate the robustness of PROCURL-VAL w.r.t. different values of β and different ϵ-level noise in Vt(s) values. The results are reported in Figure 10. From the reported results, we note that picking a value for β somewhere between 10 to 30 leads to competitive performance, and PROCURL-VAL is robust even for noise levels up to ϵ = 0.2. C.5 ADDITIONAL RESULTS AND DISCUSSION PROCURL-ENVX vs. PROCURL-VAL. To achieve the constrained budget of evaluation steps in PROCURL-ENVx (with x ∈ {2, 4}), we reduce the frequency of updating PoSt since this is the most expensive operation for PROCURL-ENV requiring additional rollouts for each task. On the other hand, PROCURL-VAL updates PoSt by using the values obtained from forward-pass on the critic model – this update happens whenever the critic model is updated (every 2048 training steps for BASICKAREL). This higher frequency of updating PoSt in PROCURL-VAL is why it is slower than PROCURL-ENVx (with x ∈ {2, 4}) for BASICKAREL. Note that the relative frequency of updates for POINTMASS is different in comparison to BASICKAREL because of very different pool sizes. Hence, the behavior in total clock times is different. γ2/γ1 ablation. We conduct an additional ablation study on the form of our curriculum objective presented in Eq. 1. More specifically, we consider the following generalized variant of Eq. 1 with parameters γ1 and γ2: s (0) t ← argmax s∈Sinit ( PoSt(s) · ( γ1 · PoS∗(s)− γ2 · PoSt(s) )) (6) In our experiments, we consider the following range of γ2/γ1 ∈ {0.6, 0.8, 1.0, 1.2, 1.4}. Our default curriculum strategy in Eq. 1 essentially corresponds to γ2/γ1 = 1.0. 
The following table presents the results for the environments POINTMASS-S and BASICKAREL.
1. What is the main contribution of the paper in deep reinforcement learning?
2. What are the strengths and weaknesses of the proposed ProCuRL method?
3. Do you have any questions or concerns about the distance between theory and practical algorithm used in empirical evaluations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any connections or similarities between ProCuRL and other related works in curriculum learning or unsupervised environment design that the reviewer thinks need to be acknowledged?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a novel curriculum learning strategy for deep RL called ProCuRL. It is inspired by the concept of the Zone of Proximal Development (ZPD) used in pedagogy. The authors mathematically derive how to apply the ZPD concept in RL, and also propose a practical variant that can be incorporated into different RL frameworks. The paper presents experimental results on a variety of domains and demonstrates its effectiveness against other baselines. Strengths And Weaknesses Strengths The paper takes on an important problem in deep RL: curriculum learning with tasks that are neither too hard nor too easy for the agent. This can provide a natural sequence of tasks for learning quickly on tasks that are always at the forefront of the agent's capabilities. This paper provides a comprehensive set of experiments on many domains to illustrate the strong empirical performance of the ProCuRL family of methods. Weaknesses One of my concerns about this work is the distance between the theory (including the ZPD concept) and the actual practical algorithm used in the empirical evaluations. For example, Section 3.2.1 assumes θ ∈ Θ = [0, 1]^{|S_init|}, which is a very big assumption that doesn't hold in practice. PoS*(s) = 1 is another big simplification that further distances the method from the theory and from ZPD. What if there are unsolvable tasks in the task space, i.e., the goals are unreachable from a certain initial state? In that case, setting PoS*(s) = 1 will provide an incorrect curriculum based on ZPD. Another weakness is that all environments seem to include only a limited |S_init| (on the order of hundreds, except BasicKarel if I understand correctly). It is therefore unclear how this approach would work if the number of tasks is much larger, perhaps infinite. Would the method scale to such settings? It is unclear whether the agents would generalise to new tasks that they have not seen during training. It would be interesting to see the zero-shot out-of-distribution generalization performance of the methods and baselines on the environments used for evaluation. I'd also be interested to see the curriculum visualisation of ProCuRL-ENV, which is closer to the theory. The method proposed in this work bears a great resemblance to some of the Unsupervised Environment Design (UED) methods, which also belong to the field of curriculum learning. Specifically, the regret-based UED methods also provide agents with environment variations that are at the frontier of an agent's capabilities [1, 2, 3]. While ProCuRL handles this using curriculum strategy (1), regret-based UED methods use regret (the difference between the optimal and the current policy) to provide the agent with environment variations to train on (note that regret is very similar to objective (1), though there are differences, of course). I thus think these connections need to be acknowledged and prior work properly cited. (Minor) shouldn't ξ_t also include the rewards in its notation? [1] Dennis et al., Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design, 2020. [2] Jiang et al., Replay-Guided Adversarial Environment Design, 2021. [3] Parker-Holder et al., Evolving Curricula with Regret-Based Environment Design, 2022. Clarity, Quality, Novelty And Reproducibility The paper is well-written, clear and easy to follow. The technical quality of the work is high, and the theoretical claims are well supported. 
The way curriculum learning and ZPD are connected is novel, but only in a limited way considering regret-based UED methods (see above). The authors provided their code alongside the submission; therefore, there are no concerns regarding the reproducibility of the results.
ICLR
Title A Generalized Training Approach for Multiagent Learning Abstract This paper investigates a population-based training regime based on game-theoretic principles called Policy-Spaced Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, manyplayer games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime wherein Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, α-Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several games classes, and identify links between Nash equilibria and α-Rank. We demonstrate the competitive performance of α-Rank-based PSRO against an exact Nash solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3to 5-player poker games, yielding instances where α-Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain. 1 INTRODUCTION Creating agents that learn to interact in large-scale systems is a key challenge in artificial intelligence. Impressive results have been recently achieved in restricted settings (e.g., zero-sum, two-player games) using game-theoretic principles such as iterative best response computation (Lanctot et al., 2017), self-play (Silver et al., 2018), and evolution-based training (Jaderberg et al., 2019; Liu et al., 2019). A key principle underlying these approaches is to iteratively train a growing population of player policies, with population evolution informed by heuristic skill ratings (e.g., Elo (Elo, 1978)) or game-theoretic solution concepts such as Nash equilibria. A general application of this principle is embodied by the Policy-Space Response Oracles (PSRO) algorithm and its related extensions (Lanctot et al., 2017; Balduzzi et al., 2019). Given a game (e.g., poker), PSRO constructs a higherlevel meta-game by simulating outcomes for all match-ups of a population of players’ policies. It then trains new policies for each player (via an oracle) against a distribution over the existing meta-game policies (typically an approximate Nash equilibrium, obtained via a meta-solver1), appends these new policies to the meta-game population, and iterates. In two-player zero sum games, fictitious play (Brown, 1951), double oracle (McMahan et al., 2003), and independent reinforcement learning can all be considered instances of PSRO, demonstrating its representative power (Lanctot et al., 2017). 
Prior applications of PSRO have used Nash equilibria as the policy-selection distribution (Lanctot et al., 2017; Balduzzi et al., 2019), which limits the scalability of PSRO to general games: Nash equilibria are intractable to compute in general (Daskalakis et al., 2009); computing approximate Nash equilibria is also intractable, even for some classes of two-player games (Daskalakis, 2013); finally, when they can be computed, Nash equilibria suffer from a selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). It is, thus, evident that the reliance of PSRO on the Nash equilibrium as the driver of population growth is a key limitation, preventing its application to general games. Recent work has proposed a scalable alternative to the Nash equilibrium, called α-Rank, which applies readily to general games (Omidshafiei et al., 2019), making it a promising candidate for population-based training. Given that the formal study of PSRO has only been conducted under the restricted settings determined by the limitations of Nash equilibria, establishing its theoretical and empirical behaviors under alternative meta-solvers remains an important and open research problem. We study several PSRO variants in the context of general-sum, many-player games, providing convergence guarantees in several classes of such games for PSRO instances that use α-Rank as a meta-solver. We also establish connections between Nash and α-Rank in specific classes of games, and identify links between α-Rank and the Projected Replicator Dynamics employed in prior PSRO instances (Lanctot et al., 2017). We develop a new notion of best response that guarantees convergence to the α-Rank distribution in several classes of games, verifying this empirically in randomly-generated general-sum games. We conduct empirical evaluations in Kuhn and Leduc Poker, first establishing our approach as a competitive alternative to Nash-based PSRO by focusing on two-player variants of these games that have been investigated in these prior works. We subsequently demonstrate empirical results extending beyond the reach of PSRO with Nash as a meta-solver by evaluating training in 3- to 5-player games. Finally, we conduct preliminary evaluations in MuJoCo soccer (Liu et al., 2019), another complex domain wherein we use reinforcement learning agents as oracles in our proposed PSRO variants, illustrating the feasibility of the approach. 2 PRELIMINARIES Games We consider K-player games, where each player k ∈ [K] has a finite set of pure strategies S^k. Let S = ∏_k S^k denote the space of pure strategy profiles. Denote by S^{-k} = ∏_{l≠k} S^l the set of pure strategy profiles excluding those of player k. Let M(s) = (M^1(s), . . . , M^K(s)) ∈ R^K denote the vector of expected player payoffs for each s ∈ S. A game is said to be zero-sum if ∑_k M^k(s) = 0 for all s ∈ S. A game is said to be symmetric if all players have identical strategy sets S^k, and for any permutation ρ, strategy profile (s_1, . . . , s_K) ∈ S, and index k ∈ [K], one has M^k(s_1, . . . , s_K) = M^{ρ(k)}(s_{ρ(1)}, . . . , s_{ρ(K)}). A mixed strategy profile is defined as π ∈ ∆_S, a tuple representing the probability distribution over pure strategy profiles s ∈ S. The expected payoff to player k under a mixed strategy profile π is given by M^k(π) = ∑_{s∈S} π(s) M^k(s). Nash Equilibrium (NE) Given a mixed profile π, the best response for a player k is defined as BR^k(π) = argmax_{ν ∈ ∆_{S^k}} M^k(ν, π^{-k}). A factorized mixed profile π(s) = ∏_k π^k(s^k) is a Nash equilibrium (NE) if π^k ∈ BR^k(π) for all k ∈ [K]. 
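As a small worked example of the best-response operator just defined, the following Python/NumPy sketch (ours; the payoff matrix is an arbitrary placeholder, not from the paper) computes a pure best response for player 1 against a fixed mixed strategy of player 2 in a two-player matrix game.

```python
import numpy as np

# M1[i, j]: player 1's payoff when player 1 plays pure strategy i and player 2 plays j.
M1 = np.array([[3.0, 0.0],
               [5.0, 1.0]])

def best_response_p1(pi2):
    # BR^1(pi): the pure strategy of player 1 maximizing expected payoff against pi^{-1}.
    expected = M1 @ pi2
    return int(np.argmax(expected))

pi2 = np.array([0.5, 0.5])    # player 2's mixed strategy
print(best_response_p1(pi2))  # prints 1 for this payoff matrix
```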
Define $\mathrm{NASHCONV}(\pi) = \sum_k M^k(\mathrm{BR}^k(\pi), \pi^{-k}) - M^k(\pi)$; roughly speaking, this measures “distance” from an NE (Lanctot et al., 2017). In prior PSRO instances (Lanctot et al., 2017), a variant of the replicator dynamics (Taylor and Jonker, 1978; Maynard Smith and Price, 1973), called the Projected Replicator Dynamics (PRD), has been used as an approximate Nash meta-solver (see Appendix E for details on PRD). α-Rank While NE exist in all finite games (Nash, 1950), their computation is intractable in general games, and their non-uniqueness leads to an equilibrium-selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). This limits their applicability as the underlying driver of training beyond the two-player, zero-sum regime. Recently, an alternate solution concept called α-Rank was proposed by Omidshafiei et al. (2019), the key associated benefits being its uniqueness and efficient computation in many-player and general-sum games, making it a promising means for directing multiagent training. ¹A meta-solver is a method that computes, or approximates, the solution concept that is being deployed. [Figure 1: Overview of PSRO(M, O) algorithm phases. (a) Complete: compute missing payoff tensor M entries via game simulations. (b) Solve: given the updated payoff tensor M, calculate meta-strategy π via meta-solver M. (c) Expand: append a new policy to each player’s policy space using the oracle O.] The α-Rank distribution is computed by constructing the response graph of the game: each strategy profile $s \in S$ of the game is a node of this graph; a directed edge points from any profile $s \in S$ to $\sigma \in S$ in the graph if (1) s and σ differ in only a single player k’s strategy and (2) $M^k(\sigma) > M^k(s)$. α-Rank constructs a random walk along this directed graph, perturbing the process by injecting a small probability of backwards-transitions from σ to s (dependent on a parameter, α, whose value is prescribed by the algorithm); this ensures irreducibility of the resulting Markov chain and the existence of a unique stationary distribution, $\pi \in \Delta_S$, called the α-Rank distribution. The masses of π are supported by the sink strongly-connected components (SSCCs) of the response graph (Omidshafiei et al., 2019). For more details on α-Rank, see Appendix D and Rowland et al. (2019). Oracles We define an oracle O as an abstract computational entity that, given a game, computes policies with precise associated properties. For instance, a best-response oracle $O^k(\pi) = \mathrm{BR}^k(\pi)$ computes the best-response policy for any player k, given a profile π. One may also consider approximate-best-response oracles that, e.g., use reinforcement learning to train a player k’s policy against a fixed distribution over the other players’ policies, $\pi^{-k}$. Oracles play a key role in population-based training, as they compute the policies that are incrementally added to players’ growing policy populations (McMahan et al., 2003; Lanctot et al., 2017; Balduzzi et al., 2019). The choice of oracle O also affects the training convergence rate and final equilibrium reached (e.g., Nash or α-Rank).
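As a concrete sketch of the response-graph machinery behind α-Rank described above, the snippet below builds the directed response graph of a normal-form game from its payoff tensors and extracts the sink strongly-connected components, which support the α-Rank distribution. It deliberately does not implement the full perturbed Markov chain; the function name, the use of SciPy for strongly-connected components, and the tensor layout are our assumptions, not the authors' code.

```python
import itertools
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sink_sccs(payoff_tensors):
    """Sink strongly-connected components of the multi-population response graph.

    payoff_tensors[k] has shape (|S^1|, ..., |S^K|) and holds player k's payoffs.
    """
    shapes = payoff_tensors[0].shape
    profiles = list(itertools.product(*[range(n) for n in shapes]))
    index = {p: i for i, p in enumerate(profiles)}
    rows, cols = [], []
    for s in profiles:
        for k, n_k in enumerate(shapes):
            for a in range(n_k):
                if a == s[k]:
                    continue
                sigma = s[:k] + (a,) + s[k + 1:]
                # Directed edge s -> sigma iff the single deviating player k strictly gains.
                if payoff_tensors[k][sigma] > payoff_tensors[k][s]:
                    rows.append(index[s])
                    cols.append(index[sigma])
    n = len(profiles)
    graph = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(graph, directed=True, connection='strong')
    # A component is a sink if no edge leaves it for another component.
    has_out_edge = {labels[r] for r, c in zip(rows, cols) if labels[r] != labels[c]}
    return [[profiles[i] for i in range(n) if labels[i] == comp]
            for comp in set(labels) if comp not in has_out_edge]
```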
Empirical Game-theoretic Analysis PSRO relies on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Wellman, 2006). Given a game (e.g., poker), EGTA operates via construction of a higher-level ‘meta-game’, where strategies s correspond to policies (e.g., ‘play defensively’ in poker) rather than atomic actions (e.g., ‘fold’). A meta-payoff table M is then constructed by simulating games for all joint policy combinations, with entries corresponding to the players’ expected utilities under these policies. Game-theoretic analysis can then be conducted on the meta-game in a manner analogous to the lower-level game, albeit in a much more scalable manner. As the theoretical discussion hereafter pertains to the meta-game, we use s, M, and π to respectively refer to policies, payoffs, and distributions at the meta-level, rather than the underlying low-level game. In our analysis, it will be important to distinguish between SSCCs of the underlying game, and of the meta-game constructed by PSRO; we refer to the latter as meta-SSCCs. 3 POLICY-SPACE RESPONSE ORACLES: NASH AND BEYOND We first overview Policy-Space Response Oracles (PSRO) prior to presenting our findings. Given an underlying game (e.g., Poker), PSRO first initializes the policy space S using randomly-generated policies, then expands the players’ policy populations in three iterated phases: complete, solve, and expand (see Algorithm 1 and Fig. 1).

Algorithm 1 PSRO(M, O)
1: Initialize the players’ policy set $S = \prod_k S^k$ via random policies
2: for iteration ∈ {1, 2, ...} do
3:     Update payoff tensor M for new policy profiles in S via game simulations (Fig. 1a)
4:     Compute the meta-strategy π using meta-solver M(M) (Fig. 1b)
5:     Expand the policy space for each player $k \in [K]$ via $S^k \leftarrow S^k \cup O^k(\pi)$ (Fig. 1c)

Table 1: Theory overview. SP and MP, resp., denote single- and multi-population games. BR and PBR, resp., denote best response and preference-based best response. †Defined in the noted propositions.
Game type | M               | O   | Converges to α-Rank?
SP        | α-Rank          | BR  | No (Example 1)
SP        | α-Rank          | PBR | Yes (Sub-SSCC,† Proposition 3)
MP        | α-Rank          | BR  | No (Example 2)
MP        | α-Rank          | PBR | Yes (With novelty-bound oracle,† Proposition 1)
SP / MP   | Uniform or Nash | BR  | No (Examples 4 and 5, Appendix A.2)

In the complete phase, a meta-game consisting of all match-ups of these joint policies is synthesized, with missing payoff entries in M completed through game simulations. Next, in the solve phase, a meta-solver M computes a profile π over the player policies (e.g., Nash, α-Rank, or uniform distributions). Finally, in the expand phase, an oracle O computes at least one new policy $s'^k$ for each player $k \in [K]$, given profile π. As other players’ policy spaces $S^{-k}$ and profile $\pi^{-k}$ are fixed, this phase involves solving a single-player optimization problem. The new policies are appended to the respective players’ policy sets, and the algorithm iterates. We use PSRO(M, O) to refer to the PSRO instance using meta-solver M and oracle O. Notably, PSRO-based training for two-player symmetric games can be conducted using a single population of policies that is shared by all players (i.e., $S^k$ is identical for all k). Thus, we henceforth refer to two-player symmetric games as ‘single-population games’, and more generally refer to games that require player-specific policy populations as ‘multi-population games’. Recent investigations of PSRO have solely focused on Nash-based meta-solvers and best-response-based oracles (Lanctot et al., 2017; Balduzzi et al., 2019), with theory focused around the two-player zero-sum case.
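To make the complete/solve/expand loop of Algorithm 1 concrete, the following is a minimal Python skeleton of the PSRO iteration with the meta-solver and oracle injected as callables. It is a sketch under our own interface assumptions (function names, the dictionary payoff table, and the callable signatures are ours), not the OpenSpiel implementation used in the experiments.

```python
import itertools

def psro(game_simulator, meta_solver, oracle, initial_policies, num_iterations):
    """Skeleton of PSRO(M, O).

    game_simulator(joint_policies) -> tuple of expected payoffs, one per player.
    meta_solver(payoff_table)      -> distribution over joint policy profiles.
    oracle(k, policies, meta_distribution, payoff_table) -> new policy (or None).
    """
    policies = [list(p) for p in initial_policies]   # one policy list per player
    payoff_table = {}                                 # profile index tuple -> payoff vector

    for _ in range(num_iterations):
        # (a) Complete: simulate any missing match-ups of current policies.
        for profile in itertools.product(*[range(len(p)) for p in policies]):
            if profile not in payoff_table:
                joint = [policies[k][i] for k, i in enumerate(profile)]
                payoff_table[profile] = game_simulator(joint)
        # (b) Solve: compute the meta-strategy (e.g., Nash, alpha-Rank, uniform).
        meta_distribution = meta_solver(payoff_table)
        # (c) Expand: each player appends at least one new policy from its oracle.
        for k in range(len(policies)):
            new_policy = oracle(k, policies, meta_distribution, payoff_table)
            if new_policy is not None:
                policies[k].append(new_policy)
    return policies, payoff_table
```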
Unfortunately, the guarantees established in that two-player zero-sum setting do not hold in games beyond this regime, making investigation of alternative meta-solvers and oracles critical for further establishing PSRO’s generalizability. 4 GENERALIZING PSRO THEORY This section establishes theoretical properties of PSRO for several useful classes of general games. We summarize our results in Table 1, giving a full exposition below. 4.1 ESTABLISHING CONVERGENCE TO α-RANK

Table 2: Symmetric zero-sum game used to analyze the behavior of PSRO in Example 1. Here, 0 < ε ≪ 1 and φ ≫ 1. Rows index Player 1's strategy, columns index Player 2's strategy, and entries are Player 1's payoffs.
        A      B      C      D      X
A       0     −φ      1      φ     −ε
B       φ      0     −φ²     1     −ε
C      −1      φ²     0     −φ     −ε
D      −φ     −1      φ      0     −ε
X       ε      ε      ε      ε      0

It is well-known that PSRO(Nash, BR) will eventually return an NE in two-player zero-sum games (McMahan et al., 2003). In more general games, where Nash faces the issues outlined earlier, α-Rank appears a promising meta-solver candidate as it applies to many-player, general-sum games and has no selection problem. However, open questions remain regarding convergence guarantees of PSRO when using α-Rank, and whether standard BR oracles suffice for ensuring these guarantees. We investigate these theoretical questions, namely, whether particular variants of PSRO can converge to the α-Rank distribution for the underlying game. A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), using α-Rank as the meta-solver, and a standard best response oracle. However, the following example shows that this will not work in general for the single-population case (see Fig. A.5 for a step-by-step illustration). Example 1. Consider the symmetric zero-sum game specified in Table 2. As X is the sole sink component of the game’s response graph (as illustrated in Fig. A.5a), the single-population α-Rank distribution for this game puts unit mass on X. We now show that a PSRO algorithm that computes best responses to the α-Rank distribution over the current strategy set need not recover strategy X, by computing directly the strategy sets of the algorithm initialized with the set {C}.
1. The initial strategy space consists only of the strategy C; the best response against C is D.
2. The α-Rank distribution over {C, D} puts all mass on D; the best response against D is A.
3. The α-Rank distribution over {C, D, A} puts all mass on A; the best response against A is B.
4. The α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C.
Thus, PSRO(α-Rank, BR) leads to the algorithm terminating with strategy set {A, B, C, D} and not discovering strategy X in the sink strongly-connected component.
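As a small numerical check of step 4 of Example 1, the snippet below instantiates the Table 2 payoff matrix and confirms that, against the stated α-Rank masses over {A, B, C, D}, the expected-payoff best response for large φ is C rather than X. The particular values of ε and φ are illustrative choices of ours, subject only to 0 < ε ≪ 1 and φ ≫ 1.

```python
import numpy as np

eps, phi = 0.01, 10.0      # illustrative values with 0 < eps << 1 and phi >> 1
strategies = ['A', 'B', 'C', 'D', 'X']
M = np.array([             # row player's payoff, as in Table 2
    [0,     -phi,     1,        phi,  -eps],
    [phi,    0,      -phi**2,   1,    -eps],
    [-1,     phi**2,  0,       -phi,  -eps],
    [-phi,  -1,       phi,      0,    -eps],
    [eps,    eps,     eps,      eps,   0  ],
])

# Step 4 of Example 1: alpha-Rank mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D).
support = ['A', 'B', 'C', 'D']
mass = np.array([1/3, 1/3, 1/6, 1/6])
cols = [strategies.index(s) for s in support]
expected = M[:, cols] @ mass            # value of each candidate vs. the mixture
best = strategies[int(np.argmax(expected))]
print(dict(zip(strategies, expected.round(2))), '-> BR:', best)   # BR is C, not X
```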
This conclusion also holds in the multi-population case, as the following counterexample shows. Example 2. Consider the game in Table 2, treating it now as a multi-population problem. It is readily verified that the multi-population α-Rank distributions obtained by PSRO with initial strategy sets consisting solely of C for each player are: (i) a Dirac delta at the joint strategy (C, C), leading to best responses of D for both players; (ii) a Dirac delta at (D, D), leading to best responses of A for both players; (iii) a Dirac delta at (A, A), leading to best responses of B for both players; and finally (iv) a distribution over joint strategies of the 4×4 subgame induced by strategies A, B, C, D that leads to a best response not equal to X; thus, the full α-Rank distribution is again not recovered. 4.2 A NEW RESPONSE ORACLE The previous examples indicate that the use of standard best responses in PSRO may be the root cause of the incompatibility with the α-Rank solution concept. Thus, we define the Preference-based Best Response (PBR) oracle, which is more closely aligned with the dynamics defining α-Rank, and which enables us to establish desired PSRO guarantees with respect to α-Rank. Consider first the single-population case. Given an N-strategy population $\{s_1, \ldots, s_N\}$ and corresponding meta-solver distribution $(\pi_i)_{i=1}^N \in \Delta_N$, a PBR oracle is defined as any function satisfying
$$\mathrm{PBR}\Big(\sum_i \pi_i s_i\Big) \subseteq \arg\max_{\sigma} \sum_i \pi_i \mathbb{1}\big[M^1(\sigma, s_i) > M^2(\sigma, s_i)\big], \qquad (1)$$
where the arg max returns the set of policies optimizing the objective, and the optimization is over pure strategies in the underlying game. The intuition for the definition of PBR is that we would like the oracle to return strategies that will receive high mass under α-Rank when added to the population; objective (1) essentially encodes the probability flux that the vertex corresponding to σ would receive in the random walk over the α-Rank response graph (see Section 2 or Appendix D for further details). We demonstrate below that the use of the PBR oracle resolves the issue highlighted in Example 1 (see Fig. A.6 in Appendix A for an accompanying visual). Example 3. Steps 1 to 3 correspond exactly to those of Example 1. In step 4, the α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. A beats C and D, thus its PBR score is 1/3. B beats A and D, thus its PBR score is 1/2. C beats B, its PBR score is thus 1/3. D beats C, its PBR score is thus 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. Thus, there is only one strategy maximizing PBR, X, which is then chosen, thereby recovering the SSCC of the game and the correct α-Rank distribution at the next timestep. In the multi-population case, consider a population of N strategy profiles $\{s_1, \ldots, s_N\}$ and corresponding meta-solver distribution $(\pi_i)_{i=1}^N$. Several meta-SSCCs may exist in the multi-population α-Rank response graph. In this case, we run the PBR oracle for each meta-SSCC separately, as follows. Suppose there are L meta-SSCCs, and denote by $\pi^{(\ell)}$ the distribution π restricted to the ℓ-th meta-SSCC, for all $1 \leq \ell \leq L$. The PBR for player k on the ℓ-th meta-SSCC is then defined by
$$\mathrm{PBR}^k\Big(\sum_i \pi^{(\ell)}_i s_i\Big) \subseteq \arg\max_{\sigma} \sum_i \pi^{(\ell)}_i \mathbb{1}\big[M^k(\sigma, s^{-k}_i) > M^k(s^k_i, s^{-k}_i)\big]. \qquad (2)$$
Thus, the PBR oracle generates one new strategy for each player for every meta-SSCC in the α-Rank response graph; we return this full set of strategies and append to the policy space accordingly, as in Line 5 of Algorithm 1. Intuitively, this leads to a diversification of strategies introduced by the oracle, as each new strategy need only perform well against a subset of prior strategies.
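A minimal sketch of the single-population PBR objective (1) for a symmetric normal-form game is given below; it assumes a payoff matrix M in which M[i, j] is the row player's payoff, and the function name and return convention are ours rather than the authors'.

```python
import numpy as np

def pbr_oracle_single_population(M, population, meta_distribution):
    """Single-population PBR (objective (1)) for a symmetric normal-form game.

    M: (n, n) matrix of row-player payoffs; the column player's payoff is M[j, i].
    population: list of pure-strategy indices currently in the PSRO population.
    meta_distribution: probabilities over `population` (e.g., from alpha-Rank).
    Returns the set of pure strategies maximizing the PBR objective, plus all scores.
    """
    n = M.shape[0]
    scores = np.zeros(n)
    for s_i, p_i in zip(population, meta_distribution):
        if p_i <= 0:
            continue
        # 1[ M^1(sigma, s_i) > M^2(sigma, s_i) ]: sigma strictly beats s_i.
        scores += p_i * (M[:, s_i] > M[s_i, :]).astype(float)
    best = np.max(scores)
    return [sigma for sigma in range(n) if scores[sigma] == best], scores
```

Applied to the Table 2 matrix with population {A, B, C, D} and masses (1/3, 1/3, 1/6, 1/6), this sketch returns X as the unique maximizer, consistent with Example 3.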
This diversification hints at interesting links with the recently-introduced concept of rectified-Nash BR (Balduzzi et al., 2019), which also attempts to improve diversity in PSRO, albeit only in two-player zero-sum games. We henceforth denote PSRO(α-Rank, PBR) as α-PSRO for brevity. We next define α-CONV, an approximate measure of convergence to α-Rank. We restrict discussion to the multi-population case here, describing the single-population case in Appendix A.4. With the notation introduced above, we define
$$\mathrm{PBR\text{-}SCORE}^k(\sigma; \pi, S) = \sum_i \sum_{\ell} \pi^{(\ell)}_i \mathbb{1}\big[M^k(\sigma, s^{-k}_i) > M^k(s^k_i, s^{-k}_i)\big],$$
and
$$\alpha\text{-CONV} = \sum_k \Big(\max_{\sigma} \mathrm{PBR\text{-}SCORE}^k(\sigma) - \max_{s \in S^k} \mathrm{PBR\text{-}SCORE}^k(s)\Big),$$
where $\max_{\sigma}$ is taken over the pure strategies of the underlying game. Unfortunately, in the multi-population case, a PBR-SCORE of 0 does not necessarily imply α-partial convergence. We thus introduce a further measure, PCS-SCORE, defined by
$$\mathrm{PCS\text{-}SCORE} = \frac{\#\ \text{of α-PSRO strategy profiles in the underlying game's SSCCs}}{\#\ \text{of α-PSRO strategy profiles in meta-SSCCs}},$$
which assesses the quality of the α-PSRO population. We refer readers to Appendix C.3 for pseudocode detailing how to implement these measures in practice. 4.3 α-PSRO: THEORY, PRACTICE, AND CONNECTIONS TO NASH We next study the theoretical properties of α-PSRO. We consider that α-PSRO has converged if no new strategy has been returned by PBR for any player at the end of an iteration. Proofs of all results are provided in Appendix B. Definition 1. A PSRO algorithm is said to converge α-fully (resp., α-partially) to an SSCC of the underlying game if its strategy population contains the full SSCC (resp., a sub-cycle of the SSCC, denoted a ‘sub-SSCC’) after convergence. Definition 2. We also adapt PBR to be what we call novelty-bound by restricting the arg max in Equation (1) to be over strategies not already included in the population with PBR-SCORE > 0. In particular, the novelty-bound version of the PBR oracle is given by restricting the arg max appearing in (2) to only be over strategies not already present in the population. These definitions enable the following results for α-PSRO in the single- and multi-population cases. Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC. Proposition 2. If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game. Stronger guarantees exist for two-player symmetric (i.e., single-population) games, though the multi-population case encounters more issues, as follows. Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC. Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC. Intuitively, the lack of convergence without a novelty-bound oracle can occur due to intransitivities in the game (i.e., cycles in the game can otherwise trap the oracle). An example demonstrating this issue is shown in Fig. B.7, with an accompanying step-by-step walkthrough in Appendix B.4. Specifically, SSCCs may be hidden by “intermediate” strategies that, while not receiving as high a payoff as current population-pool members, can actually lead to well-performing strategies outside the population. As these “intermediate” strategies are avoided, SSCCs are consequently not found.
Note also that this is related to the common problem of action/equilibrium shadowing, as detailed in Matignon et al. (2012). In Section 5, we further investigate convergence behavior beyond the conditions studied above. In practice, we demonstrate that despite the negative result of Proposition 4, α-PSRO does significantly increase the probability of converging to an SSCC, in contrast to PSRO(Nash, BR). Overall, we have shown that for general-sum multi-player games, it is possible to give theoretical guarantees for a version of PSRO driven by α-Rank in several circumstances. By contrast, using exact NE in PSRO is intractable in general. In prior work, this motivated the use of approximate Nash solvers generally based on the simulation of dynamical systems or regret minimization algorithms, both of which generally require specification of several hyperparameters (e.g., simulation iterations, window sizes for computing time-average policies, and entropy-injection rates), and a greater computational burden than α-Rank to carry out the simulation in the first place. Implementing the PBR Oracle Recall from Section 3 that the BR oracle inherently solves a singleplayer optimization problem, permitting use of a single-agent RL algorithm as a BR approximator, which is useful in practice. As noted in Section 4.1, however, there exist games where the BR and PBR objectives are seemingly incompatible, preventing the use of standard RL agents for PBR approximation. While exact PBR is computable in small-scale (e.g., normal-form) games, we next consider more general games classes where PBR can also be approximated using standard RL agents. Definition 3. Objective A is ‘compatible’ with objective B if any solution to A is a solution to B. Proposition 5. A constant-sum game is denoted as win-loss ifMk(s) ∈ {0, 1} for all k ∈ [K] and s ∈ S. BR is compatible with PBR in win-loss games in the two-player single-population case. Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function f : S → R and a non-decreasing function σ : R → R such that M1(s, ν) = σ(f(s) − f(ν)). BR is compatible with PBR in monotonic games in the single-population case. Finally, we next demonstrate that under certain conditions, there are strong connections between the PBR objective defined above and the broader field of preference-based RL (Wirth et al., 2017). Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies {s1, . . . , sN}) optimizes exactly the PBR objective (1). Given this insight, we believe an important subject of future work will involve the use of preferencebased RL algorithms in implementing the PBR oracle for more general classes of games. We conclude this section with some indicative results of the relationship between α-Rank and NE. Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution. Proposition 9. In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution. For more general games, the link between α-Rank and Nash equilibria will likely require a more complex description. We leave this for future work, providing additional discussion in Appendix A.3. 
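Before turning to the evaluation, where α-CONV and PCS-SCORE are reported empirically, the following is a minimal sketch of how the multi-population PBR-SCORE and α-CONV of Section 4.2 can be computed for a normal-form game. It treats the meta-SSCC-restricted distributions as given inputs; the function names, the representation of profiles as index tuples, and the argument layout are our assumptions (the paper's own pseudocode appears in Appendix C.3).

```python
import numpy as np

def pbr_score(payoff_tensors, player, sigma, profiles, restricted_distributions):
    """PBR-SCORE^k(sigma): mass of population profiles s_i that player k strictly
    improves on by deviating to `sigma`, summed over the meta-SSCC-restricted
    distributions (one per meta-SSCC, each aligned with `profiles`)."""
    score = 0.0
    for distribution in restricted_distributions:
        for s_i, p_i in zip(profiles, distribution):
            deviation = s_i[:player] + (sigma,) + s_i[player + 1:]
            if payoff_tensors[player][deviation] > payoff_tensors[player][s_i]:
                score += p_i
    return score

def alpha_conv(payoff_tensors, population, profiles, restricted_distributions):
    """alpha-CONV: per-player gap between the best PBR-SCORE over all pure
    strategies of the underlying game and the best over the current population,
    summed over players. `population[k]` lists player k's current pure strategies."""
    total = 0.0
    for k, payoffs in enumerate(payoff_tensors):
        all_scores = [pbr_score(payoff_tensors, k, a, profiles, restricted_distributions)
                      for a in range(payoffs.shape[k])]
        pop_scores = [all_scores[a] for a in population[k]]
        total += max(all_scores) - max(pop_scores)
    return total
```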
5 EVALUATION We conduct evaluations on games of increasing complexity, extending beyond prior PSRO applications that have focused on two-player zero-sum games. For experimental procedures, see Appendix C. Oracle comparisons We evaluate here the performance of the BR and PBR oracles in games where PBR can be exactly computed. We consider randomly generated, K-player, general-sum games with increasing strategy space sizes, |Sk|. Figure 2 reports these results for the 4- and 5-player instances (see Appendix C.4 for 2-3 player results). The asymmetric nature of these games, in combination with the number of players and strategies involved, makes them inherently, and perhaps surprisingly, large in scale. For example, the largest considered game in Fig. 2 involves 5 players with 30 strategies each, making for a total of more than 24 million strategy profiles in total. For each combination of K and |Sk|, we generate 1e6 random games. We conduct 10 trials per game, in each trial running the BR and PBR oracles starting from a random strategy in the corresponding response graph, then iteratively expanding the population space until convergence. Importantly, this implies that the starting strategy may not even be in an SSCC. As mentioned in Section 4.2, α-CONV and PCS-SCORE jointly characterize the oracle behaviors in these multi-population settings. Figure 2a plots α-CONV for both oracles, demonstrating that PBR outperforms BR in the sense that it captures more of the game SSCCs. Figures 2b and 2c, respectively, plot the PCS-SCORE for BR and PBR over all game instances. The PCS-SCORE here is typically either (a) greater than 95%, or (b) less than 5%, and otherwise rarely between 5% to 95%. For all values of |Sk|, PBR consistently discovers a larger proportion of the α-Rank support in contrast to BR, serving as useful validation of the theoretical results of Section 4.3. Meta-solver comparisons We consider next the standard benchmarks of Kuhn and Leduc poker (Kuhn, 1950; Southey et al., 2005; Lanctot et al., 2019). We detail these domains in Appendix C.2, noting here that both are K-player, although Leduc is significantly more complex than Kuhn. We first consider two-player instances of these poker domains, permitting use of an exact Nash meta-solver. Figure 3 compares the NASHCONV of PSRO(M, BR) for various meta-solverM choices. Note that the x axis of Figure 3 and Figure 4 is the Total Pool Length (The sum of the length of each player’s pool in PSRO) instead of the number of iterations of PSRO, since Rectified solvers can add more than one policy to the pool at each PSRO iteration (Possibly doubling pool size at every PSRO iteration). It is therefore more pertinent to compare exploitabilities at the same pool sizes rather than at the same number of PSRO iterations. In Kuhn poker (Fig. 3a), the α-Rank, Nash, and the Projected Replicator Dynamics (PRD) metasolvers converge essentially at the same rate towards zero NASHCONV, in contrast to the slower rate of the Uniform meta-solver, the very slow rate of the Rectified PRD solver, and the seemingly constant NASHCONV of the Rectified Nash solver. We provide in Appendix C.5 a walkthrough of the first steps of the Rectified Nash results to more precisely determine the cause of its plateauing NASHCONV. A high level explanation thereof is that it is caused by Rectified Nash cycling through the same policies, effectively not discovering new policies. 
We posit these characteristics, antipodal to the motivation behind Rectified Nash, come from the important fact that Rectified Nash was designed to work only in symmetric games, and is therefore not inherently well-suited for the Kuhn and Leduc poker domains investigated here, as they are both asymmetric games. We did not add the Rectified PRD results the other, greater-than-2 players experiments, as its performance remained non-competitive. As noted in Lanctot et al. (2017), PSRO(Uniform, BR) corresponds to Fictitious Play (Brown, 1951) and is thus guaranteed to find an NE in such instances of two-player zero-sum games. Its slower convergence rate is explained by the assignment of uniform mass across all policies s ∈ S, implying that PSRO essentially wastes resources on training the oracle to beat even poor-performing strategies. While α-Rank does not seek to find an approximation of Nash, it nonetheless reduces the NASHCONV yielding competitive results in comparison to an exact-Nash solver in these instances. Notably, the similar performance of α-Rank and Nash serves as empirical evidence that α-Rank can be applied competitively even in the two-player zero-sum setting, while also showing great promise to be deployed in broader settings where Nash is no longer tractable. We next consider significantly larger variants of Kuhn and Leduc Poker involving more than two players, extending beyond the reach of prior PSRO results (Lanctot et al., 2017). Figure 4 visualizes the NASHCONV of PSRO using the various meta-solvers (with the exception of an exact Nash solver, due to its intractability in these instances). In all instances of Kuhn Poker, α-Rank and PRD show competitive convergence rates. In 3-player Leduc poker, however, α-Rank shows fastest convergence, with Uniform following throughout most of training and PRD eventually reaching a similar NASHCONV. Several key insights can be made here. First, computation of an approximate Nash via PRD involves simulation of the associated replicator dynamics, which can be chaotic (Palaiopanos et al., 2017) even in two-player two-strategy games, making it challenging to determine when PRD has suitably converged. Second, the addition of the projection step in PRD severs its connection with NE; the theoretical properties of PRD were left open in Lanctot et al. (2017), leaving it without any guarantees. These limitations go beyond theoretical, manifesting in practice, e.g., in Fig. 4d, where PRD is outperformed by even the uniform meta-solver for many iterations. Given these issues, we take a first (and informal) step towards analyzing PRD in Appendix E. For α-Rank, by contrast, we both establish theoretical properties in Section 4, and face no simulation-related challenges as its computation involves solving of a linear system, even in the general-sum many-player case (Omidshafiei et al., 2019), thus establishing it as a favorable and general PSRO meta-solver. MuJoCo Soccer While the key objective of this paper is to take a first step in establishing a theoretically-grounded framework for PSRO-based training of agents in many-player settings, an exciting question regards the behaviors of the proposed α-Rank-based PSRO algorithm in complex domains where function-approximation-based policies need to be relied upon. In Appendix F, we take a first step towards conducting this investigation in the MuJoCo soccer domain introduced in Liu et al. (2019). 
We remark that these results, albeit interesting, are primarily intended to lay the foundation for use of α-Rank as a meta-solver in complex many agent domains where RL agents serve as useful oracles, warranting additional research and analysis to make conclusive insights. 6 RELATED WORK We discuss the most closely related work along two axes. We start with PSRO-based research and some multiagent deep RL work that focuses on training of networks in various multiagent settings. Then we continue with related work that uses evolutionary dynamics (α-Rank and replicator dynamics) as a solution concept to examine underlying behavior of multiagent interactions using meta-games. Policy-space response oracles (Lanctot et al., 2017) unify many existing approaches to multiagent learning. Notable examples include fictitious play (Brown, 1951; Robinson, 1951), independent reinforcement learning (Matignon et al., 2012) and the double oracle algorithm (McMahan et al., 2003). PSRO also relies, fundamentally, on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Tuyls et al., 2018; Wellman, 2006; Vorobeychik, 2010; Wiedenbeck and Wellman, 2012; Wiedenbeck et al., 2014). The related Parallel Nash Memory (PNM) algorithm (Oliehoek et al., 2006), which can also be seen as a generalization of the double oracle algorithm, incrementally grows the space of strategies, though using a search heuristic rather than exact best responses. PNMs have been successfully applied to games settings utilizing function approximation, notably to address exploitability issues when training Generative Adversarial Networks (GANs) (Oliehoek et al., 2019). PSRO allows the multiagent learning problem to be decomposed into a sequence of single-agent learning problems. A wide variety of other approaches that deal with the multiagent learning problem without this reduction are also available, such as Multiagent Deep Deterministic Policy Gradients (MADDPG) (Lowe et al., 2017), Counterfactual Multiagent Policy Gradients (COMA) (Foerster et al., 2018), Differentiable Inter-Agent Learning (DIAL) (Foerster et al., 2016), Hysteretic Deep Recurrent Q-learning (Omidshafiei et al., 2017), and lenient Multiagent Deep Reinforcement Learning (Palmer et al., 2018). Several notable contributions have also been made in addressing multiagent learning challenges in continuous-control settings, most recently including the approaches of Iqbal and Sha (2019); Gupta et al. (2017); Wei et al. (2018); Peng et al. (2017); Khadka et al. (2019). We refer interested readers to the following survey of recent deep multiagent RL approaches Hernandez-Leal et al. (2019). α-Rank was introduced by Omidshafiei et al. (2019) as a scalable dynamic alternative to Nash equilibria that can be applied in general-sum, many-player games and is capable of capturing the underlying multiagent evolutionary dynamics. Concepts from evolutionary dynamics have long been used in analysis of multiagent interactions from a meta-game standpoint (Walsh et al., 2002; Tuyls and Parsons, 2007; Hennes et al., 2013; Bloembergen et al., 2015; Tuyls et al., 2018). 7 DISCUSSION This paper studied variants of PSRO using α-Rank as a meta-solver, which were shown to be competitive with Nash-based PSRO in zero-sum games, and scale effortlessly to general-sum manyplayer games, in contrast to Nash-based PSRO. 
We believe there are many interesting directions for future work, including how uncertainty in the meta-solver distribution, informed by recent developments in dealing with incomplete information in games (Reeves and Wellman, 2004; Walsh et al., 2003; Rowland et al., 2019), can be used to inform the selection of new strategies to be added to populations. In summary, we strongly believe that the theoretical and empirical results established in this paper will play a key role in scaling up multiagent training in general settings. ACKNOWLEDGEMENTS The authors gratefully thank Bart De Vylder for providing helpful feedback on the paper draft. A EXAMPLES A.1 FURTHER EXPOSITION OF EXAMPLES 1 AND 2 [Figure A.5: Example 1 with oracle O = BR, using the payoff table of Table 2. In each step, the α-Rank support is highlighted by the light green box of the payoff table, and the BR strategy against it is shown in bold, dark green. (a) Overview: full payoff table on the left, full response graph on the right, with values over directed edges indicating the payoff gained by deviating from one strategy to another. (b) Consider an initial strategy space consisting only of the strategy C; the best response against C is D. (c) The α-Rank distribution over {C, D} puts all mass on D; the best response against D is A. (d) The α-Rank distribution over {C, D, A} puts all mass on A; the best response against A is B. (e) The α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C.] [Figure A.6: Example 1 with oracle O = PBR. Steps (a) to (d) are not shown as they are identical to their analogs in Fig. A.5. (e) The α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. A beats C and D, and therefore its PBR score is 1/3. B beats A and D, therefore its PBR score is 1/2. C beats B, its PBR score is therefore 1/3. D beats C, its PBR score is therefore 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. There is only one strategy maximizing PBR, X, which is then chosen, and the SSCC of the game recovered.] A.2 EXAMPLE BEHAVIOR OF PSRO(NASH, BR) A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), and then running α-Rank on the resulting meta-game. However, the following provides a counterexample to this approach when using either PSRO(Nash, BR) or PSRO(Uniform, BR). Example 4. Consider the two-player symmetric game specified in Table 3a.
The sink strongly-connected component of the single-population response graph (and hence the α-Rank distribution) contains all three strategies, but all NE are supported on {A, B} only, and the best response to a strategy supported on {A, B} is another strategy supported on {A, B}. Thus, the single-population variant of PSRO, using M ∈ {Nash, Uniform} with initial strategies contained in {A, B}, will terminate before discovering strategy X; the full α-Rank distribution will thus not be recovered. Example 5. Consider the two-player zero-sum game specified in Table 3b. All strategy profiles receive non-zero probability in the multi-population α-Rank distribution. However, the game restricted to actions A, B for each player has a unique Nash equilibrium of (1/2, 1/2). Player 1’s best response to this Nash is to play some mixture of A and B, and therefore strategy X is not recovered by PSRO(Nash, BR) in this case, and so the full α-Rank distribution will thus not be recovered. A.3 COUNTEREXAMPLES: α-RANK VS. NASH SUPPORT The Game of Chicken The Game of Chicken provides an example where the support of α-Rank – in the multi-population case – does not include the full support of Nash equilibria. This game has three Nash equilibria: two pure, (D, C) and (C, D), and one mixed, where the population plays Dare with probability 1/3. Nevertheless, α-Rank only puts weight on (C, D) and (D, C), effectively not putting weight on the full mixed-Nash support. Prisoner’s Dilemma The Prisoner’s Dilemma provides a counterexample showing that the support of α-Rank – in the multi-population case – does not include the full support of correlated equilibria. This game has correlated equilibria that include (C, D), (D, C) and (C, C) in their support; nevertheless, α-Rank only puts weight on (D, D), effectively being fully disjoint from the support of the correlated equilibria. A.4 SINGLE-POPULATION α-CONV In analogy with the multi-population definition in Section 4.2, we define a single-population version of α-CONV. We start by defining the single-population version of PBR-SCORE, given by
$$\mathrm{PBR\text{-}SCORE}(\sigma; \pi, S) = \sum_i \pi_i \mathbb{1}\big[M^1(\sigma, s_i) > M^2(\sigma, s_i)\big].$$
The single-population α-CONV is then defined as
$$\alpha\text{-CONV} = \max_{\sigma} \mathrm{PBR\text{-}SCORE}(\sigma) - \max_{s \in S} \mathrm{PBR\text{-}SCORE}(s),$$
where $\max_{\sigma}$ is taken over the pure strategies of the underlying game. B PROOFS B.1 PROOF OF PROPOSITION 1 Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC. Proof. Suppose that a member of one of the underlying game’s SSCCs appears in the α-PSRO population. This member will induce its own meta-SSCC in the meta-game’s response graph. At least one of the members of the underlying game’s corresponding SSCC will thus always have positive probability under the α-Rank distribution for the meta-game, and the PBR oracle for this meta-SSCC will always return a member of the underlying game’s SSCC. If the PBR oracle returns a member of the underlying SSCC already in the PSRO population, we claim that the corresponding meta-SSCC already contains a cycle of the underlying SSCC. To see this, note that if the meta-SSCC does not contain a cycle, it must be a singleton. Either this singleton is equal to the full SSCC of the underlying game (in which case we have α-fully converged), or it is not, in which case the PBR oracle must return a new strategy from the underlying SSCC, contradicting our assumption that it has terminated. B.2 PROOF OF PROPOSITION 2 Proposition 2.
If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game. Proof. Suppose that α-PSRO has converged, and consider a meta-SSCC. Since α-PSRO has converged, it follows that each strategy profile of the meta-SSCC is an element of an SSCC of the underlying game. Any strategy profile in this SSCC which is not in the meta-SSCC will obtain a positive value for the PBR objective, and since α-PSRO has converged, there can be no such strategy profile. Thus, the meta-SSCC contains every strategy profile contained within the corresponding SSCC of the underlying game, and therefore conclude that α-PSRO α-fully converges to an SSCC of the underlying game. B.3 PROOF OF PROPOSITION 3 Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC. Proof. The uniqueness of the SSCC follows from the fact that in the single-population case, the response graph is fully-connected. Suppose at termination of α-PSRO, the α-PSRO population contains no strategy within the SSCC, and let s be a strategy in the SSCC. We claim that s attains a higher value for the objective defining the PBR oracle than any strategy in the α-PSRO population, which contradicts the fact that α-PSRO has terminated. To complete this argument, we note that by virtue of s being in the SSCC, we haveM1(s, s′) >M1(s′, s) for all s′ outside the SSCC, and in particular for all s′ ∈ S, thus the PBR objective for s is 1. In contrast, for any si ∈ S, the PBR objective for si is upper-bounded by 1−πi. If πi > 0, then this shows si is not selected by the oracle, since the objective value is lower than that of s. If πi = 0, then the objective value for si is 0, and so an SSCC member will always have a maximal PBR score of 1 against a population not composed of any SSCC member, and all members of that population have < 1 PBR scores. Consequently, singlepopulation α-PSRO cannot terminate before it has encountered an SSCC member. By Proposition 1, the proposition is therefore proven. B.4 PROOF OF PROPOSITION 4 Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC. Proof. We exhibit a specific counterexample to the claim. Consider the three-player, three-strategy game with response graph illustrated in Fig. B.7a; note that we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. The sequence of updates undertaken by α-PSRO in this game is illustrated in Figs. B.7b to B.7f; whilst the singleton strategy profile (3, 2, 3) forms the unique SSCC for this game, α-PSRO terminates before reaching it, which concludes the proof. The steps taken by the algorithm are described below; again, we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. 1. Begin with strategies [[2], [1], [1]] in the α-PSRO population (Player 1 only has access to strategy 2, Players 2 and 3 only have access to strategy 1) 2. The PBR to (2,1,1) for player 2 is 2, and no other player has a PBR on this round. We add 2 to the strategy space of player 2, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1)]. 3. α-Rank puts all its mass on (2,2,1). The PBR to (2,2,1) for player 3 is 2, and no other player has a PBR on this round. We add strategy 2 to player 3’s strategy space, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1), (2, 2, 2)]. 4. α-Rank puts all its mass on (2,2,2). 
The PBR to (2,2,2) for player 1 is 1, and no other player has a PBR on this round. We add strategy 1 to player 1’s strategy space, which changes the space of available joint strategies to [(1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,2,1), (2,2,2)].
5. Define σ as the α-Rank probabilities of the meta-game. Player 1 playing strategy 2 has a PBR score of σ((1,1,1)) + σ((1,2,1)), and the same player playing strategy 3 has a PBR score of σ((1,2,1)), which is lower than the PBR score of playing strategy 2. No other player has a valid PBR for this round, and therefore α-PSRO terminates.
In the above example, pictured in Fig. B.7, a relatively weak joint strategy (strategy (3,2,1)) bars agents from finding the optimal joint strategy of the game (strategy (3,2,3)): reaching this joint strategy requires coordinated changes between agents, and the situation is therefore closely related to the common problem of action/equilibrium shadowing mentioned in Matignon et al. (2012). B.5 PROOF OF PROPOSITION 5 Proposition 5. A constant-sum game is denoted as win-loss if $M^k(s) \in \{0, 1\}$ for all $k \in [K]$ and $s \in S$. BR is compatible with PBR in win-loss games in the two-player single-population case. Proof. We manipulate the best-response objective as follows:
$$M^1(\nu, \pi) = \sum_{s \in S} \pi(s) M^1(\nu, s) = \sum_{s \in S} \pi(s)\, \mathbb{1}\big[M^1(\nu, s) > M^2(\nu, s)\big].$$
Noting that the final expression is the single-population PBR objective, we are done. B.6 PROOF OF PROPOSITION 6 Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function $f : S \to \mathbb{R}$ and a non-decreasing function $\sigma : \mathbb{R} \to \mathbb{R}$ such that $M^1(s, \nu) = \sigma(f(s) - f(\nu))$. BR is compatible with PBR in monotonic games in the single-population case. Proof. Rewriting the objectives given that the game is monotonic, the value-based objective becomes
$$\sum_{k=1}^{K} \pi_k M^1(s, s_k) = \sum_{k=1}^{K} \pi_k\, \sigma(f(s) - f(s_k)).$$
Given that the only condition we have on σ is its non-decreasing character, this objective does not reduce to maximizing f(s) in the general case. The objective for PBR is
$$\sum_{k=1}^{K} \pi_k \mathbb{1}\big[M^1(s, s_k) > M^2(s, s_k)\big] = \sum_{k=1}^{K} \pi_k \mathbb{1}\big[\sigma(f(s) - f(s_k)) > \sigma(f(s_k) - f(s))\big].$$
Since σ is non-decreasing,
$$\sigma(f(s) - f(s_k)) > \sigma(f(s_k) - f(s)) \;\Rightarrow\; f(s) > f(s_k),$$
and conversely,
$$f(s) > f(s_k) \;\Rightarrow\; \sigma(f(s) - f(s_k)) \geq \sigma(f(s_k) - f(s)).$$
Without loss of generality, we reorder the strategies such that if $i < k$, then $f(s_i) \leq f(s_k)$. Let $s_v$ maximize the value objective. Therefore, by monotonicity, $s_v$ maximizes $\sigma(f(s) - f(s_K))$. Three possibilities then ensue. If there exists $s$ such that $\sigma(f(s) - f(s_K)) > \sigma(f(s_K) - f(s))$, then $\sigma(f(s_v) - f(s_K)) > \sigma(f(s_K) - f(s_v))$, since $s_v$ maximizes $\sigma(f(s) - f(s_K))$ and σ is non-decreasing. Consequently $s_v$ maximizes the PBR objective. Indeed, note that for all $k \leq K$ we have $\sigma(f(s_v) - f(s_k)) > \sigma(f(s_k) - f(s_v))$, since
$$\sigma(f(s_v) - f(s_k)) \geq \sigma(f(s_v) - f(s_K)) > \sigma(f(s_K) - f(s_v)) \geq \sigma(f(s_k) - f(s_v)).$$
Else, if there does not exist any policy $s$ such that $\sigma(f(s) - f(s_K)) > \sigma(f(s_K) - f(s))$, that is, for all $s$, $\sigma(f(s) - f(s_K)) \leq \sigma(f(s_K) - f(s))$: since $s_K$ is a possible solution to the value objective, $\sigma(f(s_v) - f(s_K)) = \sigma(f(s_K) - f(s_v))$. Let $n$ be the integer such that
$$s_n = \arg\max\{f(s_k),\; s_k \in \text{Population} \mid \exists s \text{ s.t. } \sigma(f(s) - f(s_k)) > \sigma(f(s_k) - f(s))\}.$$
If $s_n$ exists, then for all $s_i$ such that $f(s_i) > f(s_n)$, we have $\sigma(f(s_v) - f(s_i)) = \sigma(f(s_i) - f(s_v))$. The PBR objective is
$$\sum_{k=1}^{K} \pi_k \mathbb{1}\big[\sigma(f(s) - f(s_k)) > \sigma(f(s_k) - f(s))\big],$$
which, according to our assumptions, is equivalent to
$$\sum_{k=1}^{n} \pi_k \mathbb{1}\big[\sigma(f(s) - f(s_k)) > \sigma(f(s_k) - f(s))\big].$$
We know that for all $i \leq n$, $\sigma(f(s_v) - f(s_i)) > \sigma(f(s_i) - f(s_v))$, and therefore $s_v$ maximizes the PBR objective.
Finally, if $s_n$ does not exist, then any policy is a solution to the PBR objective, and therefore $s_v$ is. A toy example showing the compatibility between Best Response and Preference-based Best Response is shown in Fig. B.8. The setting is that of a monotonic game where every strategy is assigned a number. Strategies are then dominated by all strategies with a higher number than theirs. We compute BR and PBR on an initial population composed of one strategy that we choose to be dominated by every other strategy. Any strategy dominating the current population is a valid solution for PBR, as represented in Fig. B.8c; whereas, if we consider that the game is monotonic with σ a strictly increasing function, only one strategy maximizes Best Response, strategy N – and it is thus the only solution of BR, as shown in Fig. B.8d. As we can see, the solution of BR is part of the possible solutions of PBR, demonstrating the result of Proposition 6: BR is compatible with PBR in monotonic games. B.7 PROOF OF PROPOSITION 7 Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies $\{s_1, \ldots, s_N\}$) optimizes exactly the PBR objective (1). Proof. Commencing with the above preference-based RL objective, we calculate as follows:
$$\arg\max_{\sigma} \mathbb{P}\Big(\sigma \text{ beats } \sum_{i=1}^{N} \pi_i s_i\Big) = \arg\max_{\sigma} \mathbb{E}_i\big[\mathbb{P}(\sigma \text{ beats } s_i \mid \text{index } i \text{ selected})\big] = \arg\max_{\sigma} \sum_{i=1}^{N} \pi_i\, \mathbb{P}(\sigma \text{ beats } s_i) = \arg\max_{\sigma} \sum_{i=1}^{N} \pi_i\, \mathbb{1}\big[\sigma \text{ receives a positive expected payoff against } s_i\big],$$
with the final equality holding whenever game outcomes between two deterministic strategies are deterministic. Note that this is precisely the PBR objective (1). B.8 PROOF OF PROPOSITION 8 Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution. Proof. In the single-population case, the support of the α-Rank distribution is simply the (unique) sink strongly-connected component of the response graph (uniqueness follows from the fact that the response graph, viewed as an undirected graph, is fully-connected). We will now argue that for a strategy s in the sink strongly-connected component and a strategy z outside the sink strongly-connected component, we have
$$\sum_{a \in S} \pi(a)\, M^1(s, a) > \sum_{a \in S} \pi(a)\, M^1(z, a). \qquad (3)$$
This inequality states that when an opponent plays according to π, the expected payoff to the row player is greater if they defect to s whenever they would have played z. This implies that if a supposed symmetric Nash equilibrium contains a strategy z outside the sink strongly-connected component in its support, then it could receive higher reward by playing s instead, which contradicts the fact that it is an NE. We show (3) by proving a stronger result — namely, that s dominates z as a strategy. Firstly, since s is in the sink strongly-connected component and z is not, s beats z, and so $M^1(s, z) > M^1(s, s) = M^1(z, z) > M^1(z, s)$. Next, if $a \notin \{s, z\}$ is in the sink strongly-connected component, then a beats z, and so $M^1(s, a) > M^1(z, a)$ if s beats a, and $M^1(s, a) = M^1(z, a)$ otherwise. Finally, if $a \notin \{s, z\}$ is not in the sink strongly-connected component, then $M^1(s, a) = M^1(z, a)$ if z beats a, and $M^1(s, a) > M^1(z, a)$ otherwise. Thus, (3) is proven, and the result follows. B.9 PROOF OF PROPOSITION 9 Proposition 9.
In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution. Proof. Consider the restriction of the game to the strategies contained in the sink strongly-connected component of the original game. Let π be an NE for this restricted game, and consider this as a distribution over all strategies in the original game (putting 0 mass on strategies outside the sink component). We argue that this is an NE for the full game, and the statement follows. To see this, note that since any strategy outside the sink strongly-connected component receives a non-positive payoff when playing against a strategy in the sink strongly-connected component, and that for at least one strategy in the sink strongly-connected component, this payoff is negative. Considering the payoffs available to the row player when the column player plays according to π, we observe that the expected payoff for any strategy outside the sink strongly-connected component is negative, since every strategy in the sink strongly-connected component beats the strategy outside the component. The payoff when defecting to a strategy in the sink strongly-connected component must be non-positive, since π is an NE for the restricted game. C ADDITIONAL DETAILS ON EXPERIMENTS C.1 EXPERIMENTAL PROCEDURES The code backend for the Poker experiments used OpenSpiel (Lanctot et al., 2019). Specifically, we used OpenSpiel’s Kuhn and Leduc poker implementations, and exact best responses were computed by traversing the game tree (see implementation details in https://github.com/deepmind/open_spiel/blob/master/open_spiel/ python/algorithms/best_response.py). 100 game simulations were used to estimate the payoff matrix for each possible strategy pair. Although the underlying Kuhn and Leduc poker games are stochastic (due to random initial card deals), the associated meta-games are essentially deterministic (as, given enough game simulations, the mean payoffs are fixed). The subsequent PSRO updates are, thus, also deterministic. Despite this, we report averages over 2 runs per PSROM, primarily to capture stochasticity due to differences in machine-specific rounding errors that occur due to the distributed computational platforms we run these experiments on. For experiments involving α-Rank, we conduct a full sweep over the ranking-intensity parameter, α, following each iteration of α-PSRO. We implemented a version of α-Rank (building on the OpenSpiel implementation https://github.com/deepmind/open_spiel/blob/master/ open_spiel/python/egt/alpharank.py) that used a sparse representation for the underlying transition matrix, enabling scaling-up to the large-scale NFG results presented in the experiments. For experiments involving the projected replicator dynamics (PRD), we used uniformly-initialized meta-distributions, running PRD for 5e4 iterations, using a step-size of dt = 1e− 3, and exploration parameter γ = 1e− 10. Time-averaged distributions were computed over the entire trajectory. C.2 DOMAIN DESCRIPTION AND GENERATION C.2.1 NORMAL FORM GAMES GENERATION Algorithms 2 to 4 provide an overview of the procedure we use to randomly-generate normal-form games for the oracle comparisons visualized in Fig. 2. 
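As a complement to the pseudocode below, the following is a hedged NumPy sketch of one possible realization of this generation procedure. The function names, the choice to read `var` as a variance (hence the square root when sampling), and the broadcasting of each player's transitive strength vector to a full payoff tensor are our reading of Algorithms 2 to 4, not the authors' released code.

```python
import numpy as np

def generate_transitive(actions, players, mean_value=(0.0, 1.0),
                        mean_probability=(0.5, 0.5), var=0.1, rng=None):
    """Transitive part: player k's entry is f_k[a_k] minus the mean of the other
    players' strengths f_i[a_i] (one strength vector per player, broadcasted)."""
    rng = rng or np.random.default_rng()
    f = [rng.normal(rng.choice(mean_value, size=actions, p=mean_probability),
                    np.sqrt(var))                     # `var` read as a variance
         for _ in range(players)]
    T = []
    for k in range(players):
        tensor = np.zeros((actions,) * players)
        for i in range(players):
            shape = [1] * players
            shape[i] = actions
            sign = 1.0 if i == k else -1.0 / (players - 1)
            tensor = tensor + sign * f[i].reshape(shape)
        T.append(tensor)
    return T

def generate_cyclic(actions, players, var=0.4, rng=None):
    """Cyclic part: Gaussian tensors re-centred to sum to zero over the axes of
    the other players, mirroring Algorithm 3."""
    rng = rng or np.random.default_rng()
    C = []
    for k in range(players):
        c = rng.normal(0.0, np.sqrt(var), size=(actions,) * players)
        other_axes = tuple(i for i in range(players) if i != k)
        C.append(c - c.sum(axis=other_axes, keepdims=True))
    return C

def generate_game(actions, players, rng=None):
    rng = rng or np.random.default_rng()
    T = generate_transitive(actions, players, rng=rng)
    C = generate_cyclic(actions, players, rng=rng)
    return [T[k] + C[k] for k in range(players)]   # as in Algorithm 4
```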
Algorithm 2 GenerateTransitive(Actions, Players, meanvalue = [0.0, 1.0], meanprobability = [0.5, 0.5], var = 0.1)
1: T = []
2: for Player k do
3:     Initialize f_k = [0] * Actions
4:     for Action a ≤ Actions do
5:         Randomly sample mean µ from meanvalue according to meanprobability
6:         f_k[a] ~ N(µ, var)
7: for Player k do
8:     T[k] = f_k − (1 / (|Players| − 1)) * Σ_{i ≠ k} f_i
9: Return T

Algorithm 3 GenerateCyclic(Actions, Players, var = 0.4)
1: C = []
2: for Player k do
3:     Initialize C[k] ~ N(0, var), with Shape(C[k]) = (Actions_{First Player}, ..., Actions_{Last Player})
4: for Player k do
5:     Sum = Σ over actions a_i of all players i ≠ k of C[k][a_1, ..., a_{k−1}, :, a_{k+1}, ...]
6:     Shape(Sum) = (1, ..., 1, Actions_{Player k}, 1, ..., 1)
7:     C[k] = C[k] − Sum
8: Return C

Algorithm 4 General Normal Form Games Generation(Actions, Players)
1: Generate matrix lists T = GenerateTransitive(Actions, Players), C = GenerateCyclic(Actions, Players)
2: Return [T[k] + C[k] for Player k]

C.2.2 KUHN AND LEDUC POKER K-player Kuhn poker is played with a deck of K + 1 cards. Each player starts with 2 chips and 1 face-down card, and antes 1 chip to play. Players either bet (raise/call) or fold iteratively, until each player is either in (has contributed equally to the pot) or has folded. Amongst the remaining players, the one with the highest-ranked card wins the pot. Leduc Poker, in comparison, has a significantly larger state-space. Players in Leduc have unlimited chips, receive 1 face-down card, ante 1 chip to play, with subsequent bets limited to 2 and 4 chips in rounds 1 and 2. A maximum of two raises are allowed in each round, and a public card is revealed before the second round. C.3 PBR COMPUTATION IN NORMAL FORM GAMES The algorithms used to compute PBR and PBR-SCORE in the games generated by the algorithm described in Section C.2.1 are shown in Algorithms 5 and 6. Note that they compute the multi-population version of PBR. PCS-SCORE is computed by pre-computing the full game’s SSCC, and computing the proportion of currently selected strategies in the empirical game that also belong to the full game’s SSCC. Note that the PBR-SCORE and PCS-SCORE are useful measures for assessing the quality of convergence in our examples, in a manner analogous to NASHCONV. The computation of these scores is, however, not tractable in general games. Notably, this is also the case for NASHCONV (as it requires computation of player-wise best responses, which can be problematic even in moderately-sized games). Despite this, these scores remain a useful way to empirically verify the convergence characteristics in small games where they can be tractably computed.
Algorithm 5 PBR Score(Strategy S, Payoff Tensor, Current Player Id, Joint Strategies, Joint Strategy Probability) 1: New strategy score = 0 2: for Joint strategy J, Joint probability P in Joint Strategies, Joint Strategy Probability do 3: New strategy = J 4: New strategy[Current Player Id] = S 5: New strategy payoff = Payoff Tensor[New Strategy] 6: Old strategy payoff = Payoff Tensor[J] 7: New strategy score += P * (New Strategy Payoff > Old Strategy Payoff) 8: Return New strategy score Algorithm 6 PBR(Payoff Tensor list LM, Joint Strategies per player PJ, Alpharank Probability per Joint Strategy PA, Current Player) 1: maxPBR = 0 2: maxstrat = None 3: for Strategy S available to Current Player among all possible strategies do 4: score = PBR Score(S, LM[Current Player Id], Current Player Id, PJ, PA) 5: if New Strategy Score > maxPBR then 6: maxPBR = New Strategy Score 7: maxstrat = S 8: Return maxPBR,maxstrat C.4 ADDITIONAL ORACLE COMPARISON RESULTS We present additional oracle comparisons in Fig. C.9, all of these in the multi-population case. C.5 NOTES ON RECTIFIED NASH PERFORMANCE This section provides additional insights into the Rectified Nash results detailed in Section 5. We begin with an important disclaimer that Rectified Nash was developed solely with symmetric games in mind. As Kuhn Poker and Leduc Poker are not symmetric games, they lie beyond the theoretical scope of Rectified Nash. Nevertheless, comparing the performance of rectified and non-rectified approaches from an empirical perspective yields insights, which may be useful for future investigations that seek to potentially extend and apply rectified training approaches to more general games. As noted in the main paper, the poor performance of PSRO using Rectified Nash (in Fig. 3) is initially surprising as it indicates premature convergence to a high-NASHCONV distribution over the players’ policy pools. Investigating this further led to a counterintuitive result for the domains evaluated: Rectified Nash was, roughly speaking, not increasing the overall diversity of behavioral policies added to each player’s population pool. In certain regards, it even prevented diversity from emerging. To more concretely pinpoint the issues, we detail below the first 3 iterations of PSRO(Rectified Nash, BR) in Kuhn Poker. Payoff matrices at each PSRO iteration are included in Tables 6a to 6c. For clarity, we also include the 5 best responses trained by Rectified Nash and the policies they were trained against, in their order of discovery: 2 policies for Player 1 (in Fig. C.11) and 3 policies for Player 2 (in Fig. C.12). 1. Iteration 0: both players start with uniform random policies. 2. Iteration 1: • Player 1 trains a best response against Player 2’s uniform random policy; its policy set is now the original uniform policy, and the newly-computed best response. • Player 2 trains a best response against Player 1’s uniform random policy; its policy set is now the original uniform policy, and the newly-computed best response. • Player 2’s best response beats both of Player 1’s policies. • Payoff values are represented in Table 6a. 3. Iteration 2: • By Rectified Nash rules, Player 1 only trains policies against policies it beats; i.e., only against Player 2’s random policy, and thus it adds the same policy as in iteration 1 to its pool. • Player 2 trains a best response against the Nash mixture of Player 1’s first best response and random policy. This policy also beats all policies of player 1. • Payoff values are represented in Table 6b. 4. 
Iteration 3: • Player 1 only trains best responses against Player 2’s random policy. • Player 2 only trains best responses against the Nash of Player 1’s two unique policies. This yields the same policies for player 2 as those previously added to its pool (i.e., a loop occurs). • Payoff values are represented in Table 6c 5. Rectified Nash has looped. As noted above, Rectified Nash loops at iteration 3, producing already-existing best responses against Player 1’s policies. Player 1 is, therefore, constrained to never being able to train best responses against any other policy than Player 2’s random policy. In turn, this prevents Player 2 from training additional novel policies, and puts the game in a deadlocked state. Noise in the payoff matrices may lead to different best responses against the Nash Mixture of policies, effectively increasing diversity. However, this effect did not seem to manifest in our experiments. To more clearly illustrate this, we introduce a means of evaluating the policy pool diversity, counting the number of unique policies in the pool. Specifically, given that Kuhn poker is a finite state game, comparing policies is straightforward, and only amounts to comparing each policy’s output on all states of the games. If two policies have exactly the same output on all the game’s states, they are equal; otherwise, they are distinct. We plot in Fig. C.10 the policy diversity of each meta-solver, where we observe that both Rectified Nash and Rectified PRD discover a total of 5 different policies. We have nevertheless noticed that in a few rare seeds, when using low number of simulations per payoff entry (Around 10), Rectified Nash was able to converge to low exploitability scores, suggesting a relationship between payoff noise, uncertainty and convergence of Rectified Nash whose investigation we leave for future work. We also leave the investigation of the relationship between Policy Diversity and Exploitability for future work, though note that there appears to be a clear correlation between both. Overall, these results demonstrate that the Rectified Nash solver fails to discover as many unique policies as the other solvers, thereby plateauing at a low NASHCONV. Finally, regarding Rectified PRD, which performs better in terms of NASHCONV when compared to Rectified Nash, we suspect that payoff noise in combination with the intrinsic noise of PRD, plays a key factor - but those two are not enough to deterministically make Rectified PRD converge to 0 exploitability, since in the seed that generated Fig. C.10, it actually doesn’t (Though it indeed converges in Fig. 3). We conjecture this noisier behavior may enable Rectified PRD to free itself from deadlocks more easily, and thus discover more policies on average. A more detailed analysis of Rectified PRD is left as future work. D α-RANK IN DETAIL In this section we give further details of α-Rank; for a full description, see Omidshafiei et al. (2019). Essentially α-Rank defines a directed response graph over the pure strategy profiles of the game under study, by indicating when a player has an incentive to make a unilateral deviation from their current strategy. An irreducible (noisy) random walk over this graph is then defined, and the strategy profile rankings are obtained by ordering the masses of this Markov chain’s unique invariant distribution π. The Markov transition matrix C that specifies this random walk is defined as follows for the multipopulation case; see Omidshafiei et al. (2019) for the single-population case. 
Consider a pure strategy profile s ∈ S, and let σ = (σ^k, s^{-k}) be the pure strategy profile which is equal to s, except for player k, who uses strategy σ^k ∈ S^k instead of s^k. Let C_{s,σ} denote the transition probability from s to σ, and C_{s,s} the self-transition probability of s, with each defined as
$$C_{s,\sigma} = \begin{cases} \eta\,\dfrac{1-\exp\!\left(-\alpha\left(M^k(\sigma)-M^k(s)\right)\right)}{1-\exp\!\left(-\alpha m\left(M^k(\sigma)-M^k(s)\right)\right)} & \text{if } M^k(\sigma) \neq M^k(s)\,, \\[1.5ex] \dfrac{\eta}{m} & \text{otherwise}\,, \end{cases} \qquad C_{s,s} = 1 - \sum_{\substack{k \in [K] \\ \sigma \,:\, \sigma^k \in S^k\setminus\{s^k\}}} C_{s,\sigma}\,,$$
where $\eta = \left(\sum_l (|S^l|-1)\right)^{-1}$. If two strategy profiles s and s′ differ in more than one player's strategy, then C_{s,s′} = 0. Here α ≥ 0 and m ∈ N are parameters to be specified; the form of this transition probability is derived from evolutionary dynamics models in evolutionary game theory and is explained in more detail in Omidshafiei et al. (2019). Large values of α correspond to higher selection pressure in the evolutionary model under consideration; the version of α-Rank used throughout this paper corresponds to the limiting invariant distribution as α → ∞, under which only strategy profiles appearing in the sink strongly-connected components of the response graph can have positive mass.

E TOWARDS THEORETICAL GUARANTEES FOR THE PROJECTED REPLICATOR DYNAMICS
Computing Nash equilibria is intractable for general games and can suffer from a selection problem (Daskalakis et al., 2009); therefore, it quickly becomes computationally intractable to employ an exact Nash meta-solver in the inner loop of a PSRO algorithm. To get around this, Lanctot et al. (2017) use regret minimization algorithms to attain an approximate correlated equilibrium (which is guaranteed to be an approximate Nash equilibrium under certain conditions on the underlying game, such as two-player zero-sum). A dynamical system from evolutionary game theory that also converges to equilibria under certain conditions is the replicator dynamics (Taylor and Jonker, 1978; Schuster and Sigmund, 1983; Cressman and Tao, 2014; Bloembergen et al., 2015), which defines a dynamical system over distributions of strategies $(\pi^k_s(t) \mid k \in [K],\, s \in S^k)$, given by
$$\dot{\pi}^k_s(t) = \pi^k_s(t)\left[M^k(s, \pi^{-k}(t)) - M^k(\pi^k(t), \pi^{-k}(t))\right], \quad \text{for all } k \in [K],\ s \in S^k\,, \tag{4}$$
with an arbitrary initial condition. Lanctot et al. (2017) introduced a variant of replicator dynamics, termed projected replicator dynamics (PRD), which projects the flow of the system so that each distribution π^k(t) lies in the set $\Delta^{\gamma}_{S^k} = \{\pi \in \Delta_{S^k} \mid \pi_s \geq \gamma/(|S^k|+1),\ \forall s \in S^k\}$; see, e.g., Nagurney and Zhang (2012) for properties of such projected dynamical systems. This heuristically enforces additional "exploration" relative to standard replicator dynamics, and was observed to provide strong empirical results when used as a meta-solver within PSRO. However, the introduction of projection potentially severs the connection between replicator dynamics and Nash equilibria, and the theoretical game-theoretic properties of PRD were left open in Lanctot et al. (2017). Here, we take a first step towards investigating theoretical guarantees for PRD. Specifically, we highlight a possible connection between α-Rank, the calculation of which requires no simulation, and a constrained
1. What is the focus of the paper in game theory? 2. What are the contributions of the paper in establishing connections between Nash and α-Rank? 3. What are the novel aspects of the construction of best response guaranteed to converge to the α-Rank? 4. How do the empirical results support or contrast with the claims made in the paper? 5. How has the paper been improved to address reviewer concerns?
Review
Review The paper studies α-Rank, a scalable alternative to the Nash equilibrium, across a number of areas. Specifically, the paper establishes connections between Nash and α-Rank in specific instances, presents a novel construction of best response that guarantees convergence to the α-Rank distribution in several games, and demonstrates empirical results in poker and soccer games. The paper is well-written and well-argued. Even without a deep understanding of the subject, I was able to follow along across the examples and empirical results. In particular, it was good to see the authors clearly lay out where their novel approach would work and where it would not, and to identify why in both cases. My only real concern stems from the empirical results compared to some of the claims made early in the paper. Given the strength of the claims comparing the authors' approach to prior approaches, the empirical results seem somewhat weak. The authors make sure to put these results into context, but given the clarity of the results in the toy domains, I would have expected clearer takeaways from the empirical results as well. Edit: The authors greatly improved the paper, addressing all major reviewer concerns.
ICLR
Title A Generalized Training Approach for Multiagent Learning
Abstract This paper investigates a population-based training regime based on game-theoretic principles called Policy-Space Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, many-player games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime wherein Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, α-Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several game classes, and identify links between Nash equilibria and α-Rank. We demonstrate the competitive performance of α-Rank-based PSRO against an exact Nash solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3- to 5-player poker games, yielding instances where α-Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain.
1 INTRODUCTION
Creating agents that learn to interact in large-scale systems is a key challenge in artificial intelligence. Impressive results have been recently achieved in restricted settings (e.g., zero-sum, two-player games) using game-theoretic principles such as iterative best response computation (Lanctot et al., 2017), self-play (Silver et al., 2018), and evolution-based training (Jaderberg et al., 2019; Liu et al., 2019). A key principle underlying these approaches is to iteratively train a growing population of player policies, with population evolution informed by heuristic skill ratings (e.g., Elo (Elo, 1978)) or game-theoretic solution concepts such as Nash equilibria. A general application of this principle is embodied by the Policy-Space Response Oracles (PSRO) algorithm and its related extensions (Lanctot et al., 2017; Balduzzi et al., 2019). Given a game (e.g., poker), PSRO constructs a higher-level meta-game by simulating outcomes for all match-ups of a population of players' policies. It then trains new policies for each player (via an oracle) against a distribution over the existing meta-game policies (typically an approximate Nash equilibrium, obtained via a meta-solver), appends these new policies to the meta-game population, and iterates. In two-player zero-sum games, fictitious play (Brown, 1951), double oracle (McMahan et al., 2003), and independent reinforcement learning can all be considered instances of PSRO, demonstrating its representative power (Lanctot et al., 2017).
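To make the iterative scheme just described concrete, the following is a minimal Python sketch of the complete/solve/expand loop. It is our own simplified rendering, not the reference implementation: simulate_payoffs, meta_solver, and oracle are placeholder callables standing in for domain-specific components, and for brevity all payoff entries are re-simulated each iteration rather than only the missing ones.

import numpy as np

def psro(initial_policies, simulate_payoffs, meta_solver, oracle, iterations=10):
    # policies[k] holds the growing population of policies for player k.
    policies = [list(p) for p in initial_policies]
    for _ in range(iterations):
        # Complete: estimate the meta-payoff tensors over all joint policies.
        shape = tuple(len(p) for p in policies)
        payoffs = [np.zeros(shape) for _ in policies]  # one tensor per player
        for idx in np.ndindex(*shape):
            joint = [policies[k][idx[k]] for k in range(len(policies))]
            returns = simulate_payoffs(joint)          # expected return per player
            for k, r in enumerate(returns):
                payoffs[k][idx] = r
        # Solve: compute a distribution over current policies (Nash, alpha-Rank, uniform, ...).
        meta_profile = meta_solver(payoffs)            # one distribution per player
        # Expand: each player appends an (approximate) best response to the others' profile.
        for k in range(len(policies)):
            policies[k].append(oracle(k, policies, meta_profile, payoffs))
    return policies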
Prior applications of PSRO have used Nash equilibria as the policy-selection distribution (Lanctot et al., 2017; Balduzzi et al., 2019), which limits the scalability of PSRO to general games: Nash equilibria are intractable to compute in general (Daskalakis et al., 2009); computing approximate Nash equilibria is also intractable, even for some classes of two-player games (Daskalakis, 2013); finally, when they can be computed, Nash equilibria suffer from a selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). It is, thus, evident that the reliance of PSRO on the Nash equilibrium as the driver of population growth is a key limitation, preventing its application to general games. Recent work has proposed a scalable alternative to the Nash equilibrium, called α-Rank, which applies readily to general games (Omidshafiei et al., 2019), making it a promising candidate for population-based training. Given that the formal study of PSRO has only been conducted under the restricted settings determined by the limitations of Nash equilibria, establishing its theoretical and empirical behaviors under alternative meta-solvers remains an important and open research problem. We study several PSRO variants in the context of general-sum, many-player games, providing convergence guarantees in several classes of such games for PSRO instances that use α-Rank as a meta-solver. We also establish connections between Nash and α-Rank in specific classes of games, and identify links between α-Rank and the Projected Replicator Dynamics employed in prior PSRO instances (Lanctot et al., 2017). We develop a new notion of best response that guarantees convergence to the α-Rank distribution in several classes of games, verifying this empirically in randomly-generated general-sum games. We conduct empirical evaluations in Kuhn and Leduc Poker, first establishing our approach as a competitive alternative to Nash-based PSRO by focusing on two-player variants of these games that have been investigated in these prior works. We subsequently demonstrate empirical results extending beyond the reach of PSRO with Nash as a meta-solver by evaluating training in 3- to 5-player games. Finally, we conduct preliminary evaluations in MuJoCo soccer (Liu et al., 2019), another complex domain wherein we use reinforcement learning agents as oracles in our proposed PSRO variants, illustrating the feasibility of the approach. 2 PRELIMINARIES Games We consider K-player games, where each player k ∈ [K] has a finite set of pure strategies Sk. Let S = ∏ k S k denote the space of pure strategy profiles. Denote by S−k = ∏ l 6=k S l the set of pure strategy profiles excluding those of player k. Let M(s) = (M1(s), . . . ,MK(s)) ∈ RK denote the vector of expected player payoffs for each s ∈ S. A game is said to be zero-sum if∑ kM k(s) = 0 for all s ∈ S. A game is said to be symmetric if all players have identical strategy sets Sk, and for any permutation ρ, strategy profile (s1, . . . , sK) ∈ S, and index k ∈ [K], one has Mk(s1, . . . , sK) = Mρ(k)(sρ(1), . . . , sρ(K)). A mixed strategy profile is defined as π ∈ ∆S , a tuple representing the probability distribution over pure strategy profiles s ∈ S. The expected payoff to player k under a mixed strategy profile π is given byMk(π) = ∑ s∈S π(s)M k(s). Nash Equilibrium (NE) Given a mixed profile π, the best response for a player k is defined BRk(π) = arg maxν∈∆ Sk [Mk(ν,π−k)]. A factorized mixed profile π(s) = ∏ k π k(sk) is a Nash equilibrium (NE) if πk ∈ BRk(π) for all k ∈ [K]. 
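As a small, self-contained illustration of these definitions (ours, not taken from the paper), the following NumPy snippet evaluates a player's expected payoff under a factorized mixed profile and computes a pure-strategy best response in a two-player normal-form game; the payoff matrices are arbitrary example values.

import numpy as np

# Example 2-player game: payoff_k[i, j] = M^k(s) for the pure profile s = (i, j).
payoff_1 = np.array([[1.0, -1.0],
                     [-1.0, 1.0]])
payoff_2 = -payoff_1  # zero-sum example: the players' payoffs sum to 0 for every profile

def expected_payoff(payoff_k, pi_1, pi_2):
    # M^k(pi) = sum_s pi(s) M^k(s) for the factorized profile pi = (pi_1, pi_2).
    return pi_1 @ payoff_k @ pi_2

def best_response_player1(payoff_1, pi_2):
    # BR^1(pi) = argmax over player 1's pure strategies of M^1(., pi_2).
    values = payoff_1 @ pi_2
    return int(np.argmax(values))

pi_1 = np.array([0.5, 0.5])
pi_2 = np.array([0.25, 0.75])
print(expected_payoff(payoff_1, pi_1, pi_2), best_response_player1(payoff_1, pi_2))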
Define NASHCONV(π) = Σ_k M^k(BR^k(π), π^{-k}) − M^k(π); roughly speaking, this measures "distance" from an NE (Lanctot et al., 2017). In prior PSRO instances (Lanctot et al., 2017), a variant of the replicator dynamics (Taylor and Jonker, 1978; Maynard Smith and Price, 1973), called the Projected Replicator Dynamics (PRD), has been used as an approximate Nash meta-solver (see Appendix E for details on PRD). (A meta-solver is a method that computes, or approximates, the solution concept that is being deployed.)
α-Rank While NE exist in all finite games (Nash, 1950), their computation is intractable in general games, and their non-uniqueness leads to an equilibrium-selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). This limits their applicability as the underlying driver of training beyond the two-player, zero-sum regime. Recently, an alternate solution concept called α-Rank was proposed by Omidshafiei et al. (2019), the key associated benefits being its uniqueness and efficient computation in many-player and general-sum games, making it a promising means for directing multiagent training. The α-Rank distribution is computed by constructing the response graph of the game: each strategy profile s ∈ S of the game is a node of this graph; a directed edge points from any profile s ∈ S to σ ∈ S in the graph if (1) s and σ differ in only a single player k's strategy and (2) M^k(σ) > M^k(s). α-Rank constructs a random walk along this directed graph, perturbing the process by injecting a small probability of backwards-transitions from σ to s (dependent on a parameter, α, whose value is prescribed by the algorithm); this ensures irreducibility of the resulting Markov chain and the existence of a unique stationary distribution, π ∈ ∆_S, called the α-Rank distribution. The masses of π are supported by the sink strongly-connected components (SSCCs) of the response graph (Omidshafiei et al., 2019). For more details on α-Rank, see Appendix D and Rowland et al. (2019).
Figure 1: Overview of PSRO(M, O) algorithm phases. (a) Complete: compute missing payoff tensor M entries via game simulations. (b) Solve: given the updated payoff tensor M, calculate the meta-strategy π via the meta-solver M. (c) Expand: append a new policy to each player's policy space using the oracle O.
Oracles We define an oracle O as an abstract computational entity that, given a game, computes policies with precise associated properties. For instance, a best-response oracle O^k(π) = BR^k(π) computes the best-response policy for any player k, given a profile π. One may also consider approximate-best-response oracles that, e.g., use reinforcement learning to train a player k's policy against a fixed distribution over the other players' policies, π^{-k}. Oracles play a key role in population-based training, as they compute the policies that are incrementally added to players' growing policy populations (McMahan et al., 2003; Lanctot et al., 2017; Balduzzi et al., 2019). The choice of oracle O also affects the training convergence rate and final equilibrium reached (e.g., Nash or α-Rank).
Empirical Game-theoretic Analysis PSRO relies on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Wellman, 2006).
Given a game (e.g., poker), EGTA operates via construction of a higher-level 'meta-game', where strategies s correspond to policies (e.g., 'play defensively' in poker) rather than atomic actions (e.g., 'fold'). A meta-payoff table M is then constructed by simulating games for all joint policy combinations, with entries corresponding to the players' expected utilities under these policies. Game-theoretic analysis can then be conducted on the meta-game in a manner analogous to the lower-level game, albeit in a much more scalable manner. As the theoretical discussion hereafter pertains to the meta-game, we use s, M, and π to respectively refer to policies, payoffs, and distributions at the meta-level, rather than the underlying low-level game. In our analysis, it will be important to distinguish between SSCCs of the underlying game and of the meta-game constructed by PSRO; we refer to the latter as meta-SSCCs.
3 POLICY-SPACE RESPONSE ORACLES: NASH AND BEYOND
We first overview Policy-Space Response Oracles (PSRO) prior to presenting our findings. Given an underlying game (e.g., poker), PSRO first initializes the policy space S using randomly-generated policies, then expands the players' policy populations in three iterated phases: complete, solve, and expand (see Algorithm 1 and Fig. 1).
Algorithm 1 PSRO(M, O)
1: Initialize the players' policy set S = ∏_k S^k via random policies
2: for iteration ∈ {1, 2, · · · } do
3:   Update payoff tensor M for new policy profiles in S via game simulations (Fig. 1a)
4:   Compute the meta-strategy π using meta-solver M(M) (Fig. 1b)
5:   Expand the policy space for each player k ∈ [K] via S^k ← S^k ∪ O^k(π) (Fig. 1c)
Game type | M | O | Converges to α-Rank?
SP | α-Rank | BR | ✗ (Example 1)
SP | α-Rank | PBR | ✓ (Sub-SSCC,† Proposition 3)
MP | α-Rank | BR | ✗ (Example 2)
MP | α-Rank | PBR | ✓ (With novelty-bound oracle,† Proposition 1)
SP / MP | Uniform or Nash | BR | ✗ (Examples 4 and 5, Appendix A.2)
Table 1: Theory overview. SP and MP, resp., denote single- and multi-population games. BR and PBR, resp., denote best response and preference-based best response. †Defined in the noted propositions.
In the complete phase, a meta-game consisting of all match-ups of these joint policies is synthesized, with missing payoff entries in M completed through game simulations. Next, in the solve phase, a meta-solver M computes a profile π over the player policies (e.g., Nash, α-Rank, or uniform distributions). Finally, in the expand phase, an oracle O computes at least one new policy s′^k for each player k ∈ [K], given profile π. As the other players' policy spaces S^{-k} and profile π^{-k} are fixed, this phase involves solving a single-player optimization problem. The new policies are appended to the respective players' policy sets, and the algorithm iterates. We use PSRO(M, O) to refer to the PSRO instance using meta-solver M and oracle O. Notably, PSRO-based training for two-player symmetric games can be conducted using a single population of policies that is shared by all players (i.e., S^k is identical for all k). Thus, we henceforth refer to two-player symmetric games as 'single-population games', and more generally refer to games that require player-specific policy populations as 'multi-population games'. Recent investigations of PSRO have solely focused on Nash-based meta-solvers and best-response-based oracles (Lanctot et al., 2017; Balduzzi et al., 2019), with theory focused around the two-player zero-sum case.
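For intuition on the solve phase when α-Rank is used as the meta-solver M, the following is a simplified, illustrative single-population sketch (symmetric two-player meta-game, finite α and m, following the transition model recalled in Appendix D). It is our own rendering rather than the implementation used in the experiments, which builds on OpenSpiel (see Appendix C.1).

import numpy as np

def alpharank_single_population(M, alpha=10.0, m=50):
    # M[i, j]: meta-payoff to strategy i when playing against strategy j (symmetric game).
    n = M.shape[0]
    eta = 1.0 / (n - 1)
    C = np.zeros((n, n))
    for s in range(n):
        for sigma in range(n):
            if sigma == s:
                continue
            # Payoff advantage of the deviating strategy sigma over the incumbent s.
            diff = M[sigma, s] - M[s, sigma]
            if diff == 0.0:
                C[s, sigma] = eta / m
            else:
                # For very large alpha this may emit overflow warnings; entries for
                # payoff-decreasing deviations then vanish, matching the alpha -> infinity regime.
                C[s, sigma] = eta * (1 - np.exp(-alpha * diff)) / (1 - np.exp(-alpha * m * diff))
        C[s, s] = 1.0 - C[s].sum()
    # Stationary distribution of the resulting Markov chain (power iteration for simplicity;
    # an eigen-decomposition or linear solve could be used instead).
    pi = np.ones(n) / n
    for _ in range(100000):
        pi = pi @ C
    return pi / pi.sum()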
Unfortunately, these guarantees do not hold in games beyond this regime, making investigation of alternative meta-solvers and oracles critical for further establishing PSRO’s generalizability. 4 GENERALIZING PSRO THEORY This section establishes theoretical properties of PSRO for several useful classes of general games. We summarize our results in Table 1, giving a full exposition below. 4.1 ESTABLISHING CONVERGENCE TO α-RANK Player 2 A B C D X Pl ay er 1 A 0 −φ 1 φ −ε B φ 0 −φ2 1 −ε C −1 φ2 0 −φ −ε D −φ −1 φ 0 −ε X ε ε ε ε 0 Table 2: Symmetric zero-sum game used to analyze the behavior of PSRO in Example 1. Here, 0 < ε 1 and φ 1. It is well-known that PSRO(Nash, BR) will eventually return an NE in two-player zero-sum games (McMahan et al., 2003). In more general games, where Nash faces the issues outlined earlier, α-Rank appears a promising meta-solver candidate as it applies to many-player, general-sum games and has no selection problem. However, open questions remain regarding convergence guarantees of PSRO when using α-Rank, and whether standard BR oracles suffice for ensuring these guarantees. We investigate these theoretical questions, namely, whether particular variants of PSRO can converge to the α-Rank distribution for the underlying game. A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), using α-Rank as the meta-solver, and a standard best response oracle. However, the following example shows that this will not work in general for the single-population case (see Fig. A.5 for a step-by-step illustration). Example 1. Consider the symmetric zero-sum game specified in Table 2. As X is the sole sink component of the game’s response graph (as illustrated in Fig. A.5a), the single-population α-Rank distribution for this game puts unit mass on X . We now show that a PSRO algorithm that computes best responses to the α-Rank distribution over the current strategy set need not recover strategy X , by computing directly the strategy sets of the algorithm initialized with the set {C}. 1. The initial strategy space consists only of the strategy C; the best response against C is D. 2. The α-Rank distribution over {C,D} puts all mass on D; the best response against D is A. 3. The α-Rank distribution over {C,D,A} puts all mass on A; the best response against A is B. 4. The α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respec- tively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C. Thus, PSRO(α-Rank, BR) leads to the algorithm terminating with strategy set {A,B,C,D} and not discovering strategy X in the sink strongly-connected component. This conclusion also holds in the multi-population case, as the following counterexample shows. Example 2. Consider the game in Table 2, treating it now as a multi-population problem. 
It is readily verified that the multi-population α-Rank distributions obtained by PSRO with initial strategy sets consisting solely of C for each player are: (i) a Dirac delta at the joint strategy (C,C), leading to best responses of D for both players; (ii) a Dirac delta at (D,D) leading to best responses of A for both players; (iii) a Dirac delta at (A,A), leading to best responses of B for both players; and finally (iv) a distribution over joint strategies of the 4×4 subgame induced by strategies A,B,C,D that leads to a best response not equal to X; thus, the full α-Rank distribution is again not recovered. 4.2 A NEW RESPONSE ORACLE The previous examples indicate that the use of standard best responses in PSRO may be the root cause of the incompatibility with the α-Rank solution concept. Thus, we define the Preference-based Best Response (PBR) oracle, which is more closely aligned with the dynamics defining α-Rank, and which enables us to establish desired PSRO guarantees with respect to α-Rank. Consider first the single-population case. Given an N -strategy population {s1, . . . , sN} and corresponding meta-solver distribution (πi)Ni=1∈∆N , a PBR oracle is defined as any function satisfying PBR (∑ i πisi ) ⊆ arg maxσ ∑ i πi1 [ M1(σ, si) >M 2(σ, si) ] , (1) where the arg max returns the set of policies optimizing the objective, and the optimization is over pure strategies in the underlying game. The intuition for the definition of PBR is that we would like the oracle to return strategies that will receive high mass under α-Rank when added to the population; objective (1) essentially encodes the probability flux that the vertex corresponding to σ would receive in the random walk over the α-Rank response graph (see Section 2 or Appendix D for further details). We demonstrate below that the use of the PBR resolves the issue highlighted in Example 1 (see Fig. A.6 in Appendix A for an accompanying visual). Example 3. Steps 1 to 3 of correspond exactly to those of Example 1. In step 4, the α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respectively. A beats C and D, thus its PBR score is 1/3. B beats A and D, thus its PBR score is 1/2. C beats B, its PBR score is thus 1/3. D beats C, its PBR score is thus 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. Thus, there is only one strategy maximizing PBR, X , which is then chosen, thereby recovering the SSCC of the game and the correct α-Rank distribution at the next timestep. In the multi-population case, consider a population of N strategy profiles {s1, . . . , sN} and corresponding meta-solver distribution (πi)Ni=1. Several meta-SSCCs may exist in the multi-population α-Rank response graph. In this case, we run the PBR oracle for each meta-SSCC separately, as follows. Suppose there are ` meta-SSCCs, and denote by π(`) the distribution π restricted to the `th meta-SSCC, for all 1 ≤ ` ≤ L. The PBR for player k on the `th meta-SSCC is then defined by PBRk (∑ i π (`) i si ) ⊆ arg maxσ ∑ i π (`) i 1 [ Mk(σ, s−ki ) >M k(ski , s −k i ) ] . (2) Thus, the PBR oracle generates one new strategy for each player for every meta-SSCC in the α-Rank response graph; we return this full set of strategies and append to the policy space accordingly, as in Line 5 of Algorithm 1. Intuitively, this leads to a diversification of strategies introduced by the oracle, as each new strategy need only perform well against a subset of prior strategies. 
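When the underlying game is small enough that its payoff tensors can be enumerated, the PBR objective (2) can be computed exactly. The sketch below is an illustrative Python rendering of this computation (in the spirit of Algorithms 5 and 6 in Appendix C.3); for brevity it scores candidates against a single distribution over incumbent joint strategies, omitting the per-meta-SSCC restriction and the novelty-bound variant.

def pbr_score(candidate, payoff_k, joint_strategies, joint_probs, player):
    # Mass-weighted count of incumbent joint strategies against which unilaterally
    # deviating to `candidate` strictly increases player `player`'s payoff.
    score = 0.0
    for joint, prob in zip(joint_strategies, joint_probs):
        deviated = list(joint)
        deviated[player] = candidate
        score += prob * float(payoff_k[tuple(deviated)] > payoff_k[tuple(joint)])
    return score

def pbr_oracle(payoff_k, joint_strategies, joint_probs, player, num_strategies):
    # Return a pure strategy maximizing the PBR score, or None if no strategy scores > 0.
    best_score, best_strategy = 0.0, None
    for candidate in range(num_strategies):
        s = pbr_score(candidate, payoff_k, joint_strategies, joint_probs, player)
        if s > best_score:
            best_score, best_strategy = s, candidate
    return best_strategy, best_score

Because each meta-SSCC contributes its own candidate in the full procedure, the oracle naturally diversifies the strategies added to the population.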
This hints at interesting links with the recently-introduced concept of rectified-Nash BR (Balduzzi et al., 2019), which also attempts to improve diversity in PSRO, albeit only in two-player zero-sum games. We henceforth denote PSRO(α-Rank, PBR) as α-PSRO for brevity. We next define α-CONV, an approximate measure of convergence to α-Rank. We restrict discussion to the multi-population case here, describing the single-population case in Appendix A.4. With the notation introduced above, we define PBR-SCOREk(σ;π, S) = ∑ i ∑ ` π (`) i 1 [ Mk(σ, s−ki ) >M k(ski , s −k i ) ] , and α-CONV = ∑ k maxσ PBR-SCORE k(σ)−maxs∈Sk PBR-SCOREk(s) , where maxσ is taken over the pure strategies of the underlying game. Unfortunately, in the multi-population case, a PBR-SCORE of 0 does not necessarily imply α-partial convergence. We thus introduce a further measure, PCS-SCORE, defined by PCS-SCORE = # of α-PSRO strategy profiles in the underlying game’s SSCCs # of α-PSRO strategy profiles in meta-SSCCs , which assesses the quality of the α-PSRO population. We refer readers to Appendix C.3 for pseudocode detailing how to implement these measures in practice. 4.3 α-PSRO: THEORY, PRACTICE, AND CONNECTIONS TO NASH We next study the theoretical properties of PSRO(α-Rank, PBR), which we henceforth refer to as α-PSRO for brevity. We consider that α-PSRO has converged if no new strategy has been returned by PBR for any player at the end of an iteration. Proofs of all results are provided in Appendix B. Definition 1. A PSRO algorithm is said to converge α-fully (resp., α-partially) to an SSCC of the underlying game if its strategy population contains the full SSCC (resp., a sub-cycle of the SSCC, denoted a ‘sub-SSCC’) after convergence. Definition 2. We also adapt PBR to be what we call novelty-bound by restricting the arg max in Equation (1) to be over strategies not already included in the population with PBR-SCORE > 0. In particular, the novelty-bound version of the PBR oracle is given by restricting the arg max appearing in (2) to only be over strategies not already present in the population. These definitions enable the following results for α-PSRO in the single- and multi-population cases. Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC. Proposition 2. If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game. Stronger guarantees exist for two-players symmetric (i.e., single-population) games, though the multi-population case encounters more issues, as follows. Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC. Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC. Intuitively, the lack of convergence without a novelty-bound oracle can occur due to intransitivities in the game (i.e., cycles in the game can otherwise trap the oracle). An example demonstrating this issue is shown in Fig. B.7, with an accompanying step-by-step walkthrough in Appendix B.4. Specifically, SSCCs may be hidden by “intermediate” strategies that, while not receiving as high a payoff as current population-pool members, can actually lead to well-performing strategies outside the population. As these “intermediate” strategies are avoided, SSCCs are consequently not found. 
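As a rough illustration of how these convergence measures can be evaluated for a finite game (again our own sketch, simplified relative to Appendix C.3), α-CONV compares the best PBR score attainable by any pure strategy of the underlying game with the best PBR score already attained inside each player's population; pbr_score is the helper from the previous snippet, and the per-meta-SSCC decomposition of π is collapsed into a single distribution for brevity.

def alpha_conv(payoff_tensors, populations, joint_strategies, joint_probs, num_strategies):
    # payoff_tensors[k]: underlying-game payoff tensor for player k.
    # populations[k]: pure strategies currently in player k's PSRO pool.
    # num_strategies[k]: number of pure strategies available to player k in the underlying game.
    total = 0.0
    for k, payoff_k in enumerate(payoff_tensors):
        best_anywhere = max(pbr_score(s, payoff_k, joint_strategies, joint_probs, k)
                            for s in range(num_strategies[k]))
        best_in_pool = max(pbr_score(s, payoff_k, joint_strategies, joint_probs, k)
                           for s in populations[k])
        total += best_anywhere - best_in_pool
    return total

A low α-CONV does not by itself rule out the SSCC-shadowing failure mode described above, which is one reason PCS-SCORE is reported alongside it.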
Note also that this is related to the common problem of action/equilibrium shadowing, as detailed in Matignon et al. (2012). In Section 5, we further investigate convergence behavior beyond the conditions studied above. In practice, we demonstrate that despite the negative result of Proposition 4, α-PSRO does significantly increase the probability of converging to an SSCC, in contrast to PSRO(Nash, BR). Overall, we have shown that for general-sum multi-player games, it is possible to give theoretical guarantees for a version of PSRO driven by α-Rank in several circumstances. By contrast, using exact NE in PSRO is intractable in general. In prior work, this motivated the use of approximate Nash solvers generally based on the simulation of dynamical systems or regret minimization algorithms, both of which generally require specification of several hyperparameters (e.g., simulation iterations, window sizes for computing time-average policies, and entropy-injection rates), and a greater computational burden than α-Rank to carry out the simulation in the first place. Implementing the PBR Oracle Recall from Section 3 that the BR oracle inherently solves a singleplayer optimization problem, permitting use of a single-agent RL algorithm as a BR approximator, which is useful in practice. As noted in Section 4.1, however, there exist games where the BR and PBR objectives are seemingly incompatible, preventing the use of standard RL agents for PBR approximation. While exact PBR is computable in small-scale (e.g., normal-form) games, we next consider more general games classes where PBR can also be approximated using standard RL agents. Definition 3. Objective A is ‘compatible’ with objective B if any solution to A is a solution to B. Proposition 5. A constant-sum game is denoted as win-loss ifMk(s) ∈ {0, 1} for all k ∈ [K] and s ∈ S. BR is compatible with PBR in win-loss games in the two-player single-population case. Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function f : S → R and a non-decreasing function σ : R → R such that M1(s, ν) = σ(f(s) − f(ν)). BR is compatible with PBR in monotonic games in the single-population case. Finally, we next demonstrate that under certain conditions, there are strong connections between the PBR objective defined above and the broader field of preference-based RL (Wirth et al., 2017). Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies {s1, . . . , sN}) optimizes exactly the PBR objective (1). Given this insight, we believe an important subject of future work will involve the use of preferencebased RL algorithms in implementing the PBR oracle for more general classes of games. We conclude this section with some indicative results of the relationship between α-Rank and NE. Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution. Proposition 9. In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution. For more general games, the link between α-Rank and Nash equilibria will likely require a more complex description. We leave this for future work, providing additional discussion in Appendix A.3. 
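Proposition 5 can also be checked numerically in a few lines. The snippet below is an illustrative sanity check of ours, not an experiment from the paper: it draws a random win-loss tournament over n strategies (treating the diagonal as 0 for simplicity) and verifies that a value-based best response to a mixed profile also maximizes the preference-based objective (1), since the two objectives coincide when payoffs lie in {0, 1}.

import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random win-loss game: wins[i, j] = 1 if strategy i beats strategy j, 0 otherwise.
wins = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        wins[i, j] = float(rng.random() < 0.5)
        wins[j, i] = 1.0 - wins[i, j]

pi = rng.dirichlet(np.ones(n))                  # mixed profile over the current population
br_values = wins @ pi                           # value-based (BR) objective per candidate
pbr_values = (wins > 0.5).astype(float) @ pi    # preference-based (PBR) objective per candidate
# With {0, 1} payoffs the two score vectors are identical, so a BR maximizer is a PBR maximizer.
assert np.argmax(br_values) in np.flatnonzero(np.isclose(pbr_values, pbr_values.max()))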
5 EVALUATION We conduct evaluations on games of increasing complexity, extending beyond prior PSRO applications that have focused on two-player zero-sum games. For experimental procedures, see Appendix C. Oracle comparisons We evaluate here the performance of the BR and PBR oracles in games where PBR can be exactly computed. We consider randomly generated, K-player, general-sum games with increasing strategy space sizes, |Sk|. Figure 2 reports these results for the 4- and 5-player instances (see Appendix C.4 for 2-3 player results). The asymmetric nature of these games, in combination with the number of players and strategies involved, makes them inherently, and perhaps surprisingly, large in scale. For example, the largest considered game in Fig. 2 involves 5 players with 30 strategies each, making for a total of more than 24 million strategy profiles in total. For each combination of K and |Sk|, we generate 1e6 random games. We conduct 10 trials per game, in each trial running the BR and PBR oracles starting from a random strategy in the corresponding response graph, then iteratively expanding the population space until convergence. Importantly, this implies that the starting strategy may not even be in an SSCC. As mentioned in Section 4.2, α-CONV and PCS-SCORE jointly characterize the oracle behaviors in these multi-population settings. Figure 2a plots α-CONV for both oracles, demonstrating that PBR outperforms BR in the sense that it captures more of the game SSCCs. Figures 2b and 2c, respectively, plot the PCS-SCORE for BR and PBR over all game instances. The PCS-SCORE here is typically either (a) greater than 95%, or (b) less than 5%, and otherwise rarely between 5% to 95%. For all values of |Sk|, PBR consistently discovers a larger proportion of the α-Rank support in contrast to BR, serving as useful validation of the theoretical results of Section 4.3. Meta-solver comparisons We consider next the standard benchmarks of Kuhn and Leduc poker (Kuhn, 1950; Southey et al., 2005; Lanctot et al., 2019). We detail these domains in Appendix C.2, noting here that both are K-player, although Leduc is significantly more complex than Kuhn. We first consider two-player instances of these poker domains, permitting use of an exact Nash meta-solver. Figure 3 compares the NASHCONV of PSRO(M, BR) for various meta-solverM choices. Note that the x axis of Figure 3 and Figure 4 is the Total Pool Length (The sum of the length of each player’s pool in PSRO) instead of the number of iterations of PSRO, since Rectified solvers can add more than one policy to the pool at each PSRO iteration (Possibly doubling pool size at every PSRO iteration). It is therefore more pertinent to compare exploitabilities at the same pool sizes rather than at the same number of PSRO iterations. In Kuhn poker (Fig. 3a), the α-Rank, Nash, and the Projected Replicator Dynamics (PRD) metasolvers converge essentially at the same rate towards zero NASHCONV, in contrast to the slower rate of the Uniform meta-solver, the very slow rate of the Rectified PRD solver, and the seemingly constant NASHCONV of the Rectified Nash solver. We provide in Appendix C.5 a walkthrough of the first steps of the Rectified Nash results to more precisely determine the cause of its plateauing NASHCONV. A high level explanation thereof is that it is caused by Rectified Nash cycling through the same policies, effectively not discovering new policies. 
We posit these characteristics, antipodal to the motivation behind Rectified Nash, come from the important fact that Rectified Nash was designed to work only in symmetric games, and is therefore not inherently well-suited for the Kuhn and Leduc poker domains investigated here, as they are both asymmetric games. We did not add the Rectified PRD results the other, greater-than-2 players experiments, as its performance remained non-competitive. As noted in Lanctot et al. (2017), PSRO(Uniform, BR) corresponds to Fictitious Play (Brown, 1951) and is thus guaranteed to find an NE in such instances of two-player zero-sum games. Its slower convergence rate is explained by the assignment of uniform mass across all policies s ∈ S, implying that PSRO essentially wastes resources on training the oracle to beat even poor-performing strategies. While α-Rank does not seek to find an approximation of Nash, it nonetheless reduces the NASHCONV yielding competitive results in comparison to an exact-Nash solver in these instances. Notably, the similar performance of α-Rank and Nash serves as empirical evidence that α-Rank can be applied competitively even in the two-player zero-sum setting, while also showing great promise to be deployed in broader settings where Nash is no longer tractable. We next consider significantly larger variants of Kuhn and Leduc Poker involving more than two players, extending beyond the reach of prior PSRO results (Lanctot et al., 2017). Figure 4 visualizes the NASHCONV of PSRO using the various meta-solvers (with the exception of an exact Nash solver, due to its intractability in these instances). In all instances of Kuhn Poker, α-Rank and PRD show competitive convergence rates. In 3-player Leduc poker, however, α-Rank shows fastest convergence, with Uniform following throughout most of training and PRD eventually reaching a similar NASHCONV. Several key insights can be made here. First, computation of an approximate Nash via PRD involves simulation of the associated replicator dynamics, which can be chaotic (Palaiopanos et al., 2017) even in two-player two-strategy games, making it challenging to determine when PRD has suitably converged. Second, the addition of the projection step in PRD severs its connection with NE; the theoretical properties of PRD were left open in Lanctot et al. (2017), leaving it without any guarantees. These limitations go beyond theoretical, manifesting in practice, e.g., in Fig. 4d, where PRD is outperformed by even the uniform meta-solver for many iterations. Given these issues, we take a first (and informal) step towards analyzing PRD in Appendix E. For α-Rank, by contrast, we both establish theoretical properties in Section 4, and face no simulation-related challenges as its computation involves solving of a linear system, even in the general-sum many-player case (Omidshafiei et al., 2019), thus establishing it as a favorable and general PSRO meta-solver. MuJoCo Soccer While the key objective of this paper is to take a first step in establishing a theoretically-grounded framework for PSRO-based training of agents in many-player settings, an exciting question regards the behaviors of the proposed α-Rank-based PSRO algorithm in complex domains where function-approximation-based policies need to be relied upon. In Appendix F, we take a first step towards conducting this investigation in the MuJoCo soccer domain introduced in Liu et al. (2019). 
We remark that these results, albeit interesting, are primarily intended to lay the foundation for use of α-Rank as a meta-solver in complex many agent domains where RL agents serve as useful oracles, warranting additional research and analysis to make conclusive insights. 6 RELATED WORK We discuss the most closely related work along two axes. We start with PSRO-based research and some multiagent deep RL work that focuses on training of networks in various multiagent settings. Then we continue with related work that uses evolutionary dynamics (α-Rank and replicator dynamics) as a solution concept to examine underlying behavior of multiagent interactions using meta-games. Policy-space response oracles (Lanctot et al., 2017) unify many existing approaches to multiagent learning. Notable examples include fictitious play (Brown, 1951; Robinson, 1951), independent reinforcement learning (Matignon et al., 2012) and the double oracle algorithm (McMahan et al., 2003). PSRO also relies, fundamentally, on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Tuyls et al., 2018; Wellman, 2006; Vorobeychik, 2010; Wiedenbeck and Wellman, 2012; Wiedenbeck et al., 2014). The related Parallel Nash Memory (PNM) algorithm (Oliehoek et al., 2006), which can also be seen as a generalization of the double oracle algorithm, incrementally grows the space of strategies, though using a search heuristic rather than exact best responses. PNMs have been successfully applied to games settings utilizing function approximation, notably to address exploitability issues when training Generative Adversarial Networks (GANs) (Oliehoek et al., 2019). PSRO allows the multiagent learning problem to be decomposed into a sequence of single-agent learning problems. A wide variety of other approaches that deal with the multiagent learning problem without this reduction are also available, such as Multiagent Deep Deterministic Policy Gradients (MADDPG) (Lowe et al., 2017), Counterfactual Multiagent Policy Gradients (COMA) (Foerster et al., 2018), Differentiable Inter-Agent Learning (DIAL) (Foerster et al., 2016), Hysteretic Deep Recurrent Q-learning (Omidshafiei et al., 2017), and lenient Multiagent Deep Reinforcement Learning (Palmer et al., 2018). Several notable contributions have also been made in addressing multiagent learning challenges in continuous-control settings, most recently including the approaches of Iqbal and Sha (2019); Gupta et al. (2017); Wei et al. (2018); Peng et al. (2017); Khadka et al. (2019). We refer interested readers to the following survey of recent deep multiagent RL approaches Hernandez-Leal et al. (2019). α-Rank was introduced by Omidshafiei et al. (2019) as a scalable dynamic alternative to Nash equilibria that can be applied in general-sum, many-player games and is capable of capturing the underlying multiagent evolutionary dynamics. Concepts from evolutionary dynamics have long been used in analysis of multiagent interactions from a meta-game standpoint (Walsh et al., 2002; Tuyls and Parsons, 2007; Hennes et al., 2013; Bloembergen et al., 2015; Tuyls et al., 2018). 7 DISCUSSION This paper studied variants of PSRO using α-Rank as a meta-solver, which were shown to be competitive with Nash-based PSRO in zero-sum games, and scale effortlessly to general-sum manyplayer games, in contrast to Nash-based PSRO. 
We believe there are many interesting directions for future work, including how uncertainty in the meta-solver distribution, informed by recent developments in dealing with incomplete information in games (Reeves and Wellman, 2004; Walsh et al., 2003; Rowland et al., 2019), can be used to inform the selection of new strategies to be added to populations. In summary, we strongly believe that the theoretical and empirical results established in this paper will play a key role in scaling up multiagent training in general settings.
ACKNOWLEDGEMENTS
The authors gratefully thank Bart De Vylder for providing helpful feedback on the paper draft.
A EXAMPLES
A.1 FURTHER EXPOSITION OF EXAMPLES 1 AND 2
(Each panel of Figs. A.5 and A.6 reproduces the payoff table of Table 2, with row and column strategies A, B, C, D, X; the tables are not repeated here.)
(a) Overview. Full payoff table on left, full response graph on right, with values over directed edges indicating the payoff gained by deviating from one strategy to another.
(b) Consider an initial strategy space consisting only of the strategy C; the best response against C is D.
(c) The α-Rank distribution over {C,D} puts all mass on D; the best response against D is A.
(d) The α-Rank distribution over {C,D,A} puts all mass on A; the best response against A is B.
(e) The α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respectively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C.
Figure A.5: Example 1 with oracle O = BR. In each step above, the α-Rank support is highlighted by the light green box of the payoff table, and the BR strategy against it in bold, dark green.
(e) The α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respectively. A beats C and D, and therefore its PBR score is 1/3. B beats A and D, therefore its PBR score is 1/2. C beats B, its PBR score is therefore 1/3. D beats C, its PBR score is therefore 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. There is only one strategy maximizing PBR, X, which is then chosen, and the SSCC of the game recovered.
Figure A.6: Example 1 with oracle O = PBR. Steps (a) to (d) are not shown as they are identical to their analogs in Fig. A.5.
A.2 EXAMPLE BEHAVIOR OF PSRO(NASH, BR)
A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), and then running α-Rank on the resulting meta-game. However, the following provides a counterexample to this approach when using either PSRO(Nash, BR) or PSRO(Uniform, BR). Example 4. Consider the two-player symmetric game specified in Table 3a.
The sink stronglyconnected component of the single-population response graph (and hence the α-Rank distribution) contains all three strategies, but all NE are supported on {A,B} only, and the best response to a strategy supported on {A,B} is another strategy supported on {A,B}. Thus, the single-population variant of PSRO, using M ∈ {Nash,Uniform} with initial strategies contained in {A,B} will terminate before discovering strategy X; the full α-Rank distribution will thus not be recovered. Example 5. Consider the two-player zero-sum game specified in Table 3b. All strategy profiles recieve non-zero probability in the multi-population α-Rank distribution. However, the Nash equilibrium over the game restricted to actions A,B for each player has a unique Nash equilibrium of (1/2, 1/2). Player 1’s best response to this Nash is to play some mixture of A and B, and therefore strategy X is not recovered by PSRO(Nash, BR) in this case, and so the full α-Rank distribution will thus not be recovered. A.3 COUNTEREXAMPLES: α-RANK VS. NASH SUPPORT The Game of Chicken The Game of Chicken provides an example where the support of α-Rankin the multipopulation case - does not include the full support of Nash Equilibria. This game has three Nash equilibria: Two pure, (D,C) and (C,D), and one mixed, where the population plays Dare with probability 13 . Nevertheless, α-rank only puts weight on (C,D) and (D,C), effectively not putting weight on the full mixed-nash support. Prisoner’s Dilemma The Prisoner’s Dilemma provides a counterexample that the support of αRank- in the multi-population case - does not include the full support of correlated equilibria. This game has correlated equilibria that include (C,D), (D,C) and (C,C) in their support; nevertheless, α-Rank only puts weight on (D,D), effectively being fully disjoint from the support of the correlated equilibria. A.4 SINGLE-POPULATION α-CONV In analogy with the multi-population definition in Section 4.2, we define a single-population version of α-CONV. We start by defining the single-population version of PBR-Score, given by PBR-SCORE(σ;π, S) = ∑ i πi1 [ M1(σ, si) >M 2(σi, si) ] . The single-population α-CONV is then defined as α-CONV = max σ PBR-SCORE(σ)−max s∈S PBR-SCORE(s) , where maxσ is taken over the pure strategies of the underlying game. B PROOFS B.1 PROOF OF PROPOSITION 1 Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC. Proof. Suppose that a member of one of the underlying game’s SSCCs appears in the α-PSRO population. This member will induce its own meta-SSCC in the meta-game’s response graph. At least one of the members of the underlying game’s corresponding SSCC will thus always have positive probability under the α-Rank distribution for the meta-game, and the PBR oracle for this meta-SSCC will always return a member of the underlying game’s SSCC. If the PBR oracle returns a member of the underlying SSCC already in the PSRO population, we claim that the corresponding meta-SSCC already contains a cycle of the underlying SSCC. To see this, note that if the meta-SSCC does not contain a cycle, it must be a singleton. Either this singleton is equal to the full SSCC of the underlying game (in which we have α-fully converged), or it is not, in which case the PBR oracle must return a new strategy from the underlying SSCC, contradicting our assumption that it has terminated. B.2 PROOF OF PROPOSITION 2 Proposition 2. 
If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game. Proof. Suppose that α-PSRO has converged, and consider a meta-SSCC. Since α-PSRO has converged, it follows that each strategy profile of the meta-SSCC is an element of an SSCC of the underlying game. Any strategy profile in this SSCC which is not in the meta-SSCC will obtain a positive value for the PBR objective, and since α-PSRO has converged, there can be no such strategy profile. Thus, the meta-SSCC contains every strategy profile contained within the corresponding SSCC of the underlying game, and therefore conclude that α-PSRO α-fully converges to an SSCC of the underlying game. B.3 PROOF OF PROPOSITION 3 Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC. Proof. The uniqueness of the SSCC follows from the fact that in the single-population case, the response graph is fully-connected. Suppose at termination of α-PSRO, the α-PSRO population contains no strategy within the SSCC, and let s be a strategy in the SSCC. We claim that s attains a higher value for the objective defining the PBR oracle than any strategy in the α-PSRO population, which contradicts the fact that α-PSRO has terminated. To complete this argument, we note that by virtue of s being in the SSCC, we haveM1(s, s′) >M1(s′, s) for all s′ outside the SSCC, and in particular for all s′ ∈ S, thus the PBR objective for s is 1. In contrast, for any si ∈ S, the PBR objective for si is upper-bounded by 1−πi. If πi > 0, then this shows si is not selected by the oracle, since the objective value is lower than that of s. If πi = 0, then the objective value for si is 0, and so an SSCC member will always have a maximal PBR score of 1 against a population not composed of any SSCC member, and all members of that population have < 1 PBR scores. Consequently, singlepopulation α-PSRO cannot terminate before it has encountered an SSCC member. By Proposition 1, the proposition is therefore proven. B.4 PROOF OF PROPOSITION 4 Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC. Proof. We exhibit a specific counterexample to the claim. Consider the three-player, three-strategy game with response graph illustrated in Fig. B.7a; note that we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. The sequence of updates undertaken by α-PSRO in this game is illustrated in Figs. B.7b to B.7f; whilst the singleton strategy profile (3, 2, 3) forms the unique SSCC for this game, α-PSRO terminates before reaching it, which concludes the proof. The steps taken by the algorithm are described below; again, we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. 1. Begin with strategies [[2], [1], [1]] in the α-PSRO population (Player 1 only has access to strategy 2, Players 2 and 3 only have access to strategy 1) 2. The PBR to (2,1,1) for player 2 is 2, and no other player has a PBR on this round. We add 2 to the strategy space of player 2, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1)]. 3. α-Rank puts all its mass on (2,2,1). The PBR to (2,2,1) for player 3 is 2, and no other player has a PBR on this round. We add strategy 2 to player 3’s strategy space, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1), (2, 2, 2)]. 4. α-Rank puts all its mass on (2,2,2). 
The PBR to (2,2,2) for player 1 is 1, and no other player has a PBR on this round. We add strategy 1 to player 1’s strategy space, which changes the space of available joint strategies to [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 2, 1), (2, 2, 2)]. 5. Define σ as the α-Rank probabilities of the meta-game. Player 1 playing strategy 2 has a PBR score of σ((1, 1, 1)) + σ((1, 2, 1)), and the same player playing strategy 3 has a PBR score of σ((1, 2, 1)), which is lower than the PBR Score of playing strategy 2. No other player has a valid PBR for this round, and therefore, α-PSRO terminates. In the above example, pictured in Fig. B.7, a relatively weak joint strategy (Strategy (3,2,1)) bars agents from finding the optimal joint strategy of the game (Strategy (3,2,3)) : getting to this joint strategy requires coordinated changes between agents, and is therefore closely related to the common problem of Action/Equilibrium Shadowing mentioned in (Matignon et al., 2012). B.5 PROOF OF PROPOSITION 5 Proposition 5. A constant-sum game is denoted as win-loss ifMk(s) ∈ {0, 1} for all k ∈ [K] and s ∈ S. BR is compatible with PBR in win-loss games in the two-player single-population case. Proof. We manipulate the best-response objective as follows: M1(ν,π) = ∑ s∈S π(s)M1(ν, s) = ∑ s∈S π(s)1[M1(ν, s) >M2(ν, s)] . Noting that the final line is the single-population PBR objective, we are done. B.6 PROOF OF PROPOSITION 6 Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function f : S → R and a non-decreasing function σ : R → R such that M1(s, ν) = σ(f(s) − f(ν)). BR is compatible with PBR in monotonic games in the single-population case. Proof. Rewriting the objectives given that the game is monotonic, we have that the value-based objective becomes K∑ k=1 πkM 1(s, sk) = K∑ k=1 πkσ(f(s)− f(sk)) . Given the fact that the only condition we have on σ is its non-decreasing character, this objective does not reduce to maximizing f(s) in the general case. The objective for PBR is K∑ k=1 πk1[M 1(s, sk) >M 2(s, sk)] = K∑ k=1 πk1[σ(f(s)− f(sk)) > σ(f(sk)− f(s))] Since σ is non-decreasing, σ(f(s)− f(sk)) > σ(f(sk)− f(s)) ⇒ f(s) > f(sk) and conversely, f(s) > f(sk) ⇒ σ(f(s)− f(sk)) ≥ σ(f(sk)− f(s)) Without loss of generality, we reorder the strategies such that if i < k, f(si) ≤ f(sk). Let sv maximize the value objective. Therefore, by monotonicity, sv maximizes σ(f(s)− f(sK)). Three possibilities then ensue. If there exists s such that σ(f(s)− f(sK)) > σ(f(sK)− f(s)) then σ(f(sv)− f(sK)) > σ(f(sK)− f(sv)) since sv maximizes σ(f(s)− f(sK)) and σ is non-decreasing. Consequently sv maximizes the PBR objective. Indeed, let us remark that for all k ≤ K, we have that σ(f(sv)− f(sk)) > σ(f(sk)− f(sv)) since σ(f(sv)− f(sk)) ≥ σ(f(sv)− f(sK)) > σ(f(sK)− f(sv)) ≥ σ(f(sk)− f(sv)) Else, if there does not exist any policy s such that σ(f(s)− f(sK)) > σ(f(sK)− f(s)), that is, for all s, σ(f(s)− f(sK)) ≤ σ(f(sK)− f(s)) Since sK is a possible solution to the value objective, σ(f(sv)− f(sK)) = σ(f(sK)− f(sv)) Let n be the integer such that sn = arg max{f(sk), sk ∈ Population | ∃s s.t. σ(f(s)− f(sk)) > σ(f(sk)− f(s))} If sn exists, then we have that for all si such that f(si) > f(sn), σ(f(sv)− f(si)) = σ(f(si)− f(sv)) The PBR objective is K∑ k=1 πk1[σ(f(s)− f(sk)) > σ(f(sk)− f(s))] which, according to our assumptions, is equivalent to n∑ k=1 πk1[σ(f(s)− f(sk)) > σ(f(sk)− f(s))] We know that for all i ≤ n, σ(f(sv)− f(si)) > σ(f(si)− f(sv)), and therefore, sv maximizes the PBR objective. 
Finally, if s_n does not exist, then any policy is a solution to the PBR objective, and therefore in particular s_v is.

A toy example showing the compatibility between best response and preference-based best response is shown in Fig. B.8. The setting is that of a monotonic game where every strategy is assigned a number, and strategies are dominated by all strategies with a higher number than theirs. We compute BR and PBR on an initial population composed of one strategy that we choose to be dominated by every other strategy. Any strategy dominating the current population is a valid solution for PBR, as represented in Fig. B.8c; whereas, if we consider that the game is monotonic with σ a strictly increasing function, only one strategy, strategy N, maximizes the best response, and it is thus the only solution of BR, as shown in Fig. B.8d. As we can see, the solution of BR is among the possible solutions of PBR, demonstrating the result of Proposition 6: BR is compatible with PBR in monotonic games.

B.7 PROOF OF PROPOSITION 7

Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies {s_1, . . . , s_N}) optimizes exactly the PBR objective (1).

Proof. Commencing with the above preference-based RL objective, we calculate as follows:
arg max_σ P(σ beats ∑_{i=1}^N π_i s_i)
= arg max_σ E_i[P(σ beats s_i | index i selected)]
= arg max_σ ∑_{i=1}^N π_i P(σ beats s_i)
= arg max_σ ∑_{i=1}^N π_i 1[σ receives a positive expected payoff against s_i],
with the final equality holding whenever game outcomes between two deterministic strategies are deterministic. Note that this is precisely the PBR objective (1).

B.8 PROOF OF PROPOSITION 8

Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution.

Proof. In the single-population case, the support of the α-Rank distribution is simply the (unique) sink strongly-connected component of the response graph (uniqueness follows from the fact that the response graph, viewed as an undirected graph, is fully-connected). We now argue that for a strategy s in the sink strongly-connected component, a strategy z outside the sink strongly-connected component, and a supposed symmetric NE π with z in its support,
∑_{a∈S} π(a) M^1(s, a) > ∑_{a∈S} π(a) M^1(z, a).   (3)
This inequality states that when an opponent plays according to π, the expected payoff to the row player is greater if they defect to s whenever they would have played z. This implies that if a supposed symmetric Nash equilibrium contains a strategy z outside the sink strongly-connected component in its support, then it could receive higher reward by playing s instead, which contradicts the fact that it is an NE. We show (3) by proving a stronger result, namely that s dominates z as a strategy. Firstly, since s is in the sink strongly-connected component and z is not, s beats z, and so M^1(s, z) > M^1(s, s) = M^1(z, z) > M^1(z, s). Next, if a ∉ {s, z} is in the sink strongly-connected component, then a beats z, and so M^1(s, a) > M^1(z, a) if s beats a, and M^1(s, a) = M^1(z, a) otherwise. Finally, if a ∉ {s, z} is not in the sink strongly-connected component, then M^1(s, a) = M^1(z, a) if z beats a, and M^1(s, a) > M^1(z, a) otherwise. Thus, (3) is proven, and the result follows.

B.9 PROOF OF PROPOSITION 9

Proposition 9.
In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution.

Proof. Consider the restriction of the game to the strategies contained in the sink strongly-connected component of the original game. Let π be an NE for this restricted game, and consider it as a distribution over all strategies in the original game (putting 0 mass on strategies outside the sink component). We argue that this is an NE for the full game, and the statement follows. To see this, first note that any strategy outside the sink strongly-connected component receives a non-positive payoff when playing against a strategy in the sink strongly-connected component, and for at least one strategy in the sink strongly-connected component this payoff is negative. Considering the payoffs available to the row player when the column player plays according to π, we observe that the expected payoff for any strategy outside the sink strongly-connected component is negative, since every strategy in the sink strongly-connected component beats the strategy outside the component. The payoff when defecting to a strategy in the sink strongly-connected component must be non-positive, since π is an NE for the restricted game.

C ADDITIONAL DETAILS ON EXPERIMENTS

C.1 EXPERIMENTAL PROCEDURES

The code backend for the poker experiments used OpenSpiel (Lanctot et al., 2019). Specifically, we used OpenSpiel's Kuhn and Leduc poker implementations, and exact best responses were computed by traversing the game tree (see implementation details in https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/algorithms/best_response.py). 100 game simulations were used to estimate the payoff matrix for each possible strategy pair. Although the underlying Kuhn and Leduc poker games are stochastic (due to random initial card deals), the associated meta-games are essentially deterministic (as, given enough game simulations, the mean payoffs are fixed). The subsequent PSRO updates are, thus, also deterministic. Despite this, we report averages over 2 runs per PSRO(M) configuration, primarily to capture stochasticity due to differences in machine-specific rounding errors that occur on the distributed computational platforms we run these experiments on. For experiments involving α-Rank, we conduct a full sweep over the ranking-intensity parameter, α, following each iteration of α-PSRO. We implemented a version of α-Rank (building on the OpenSpiel implementation https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/egt/alpharank.py) that uses a sparse representation for the underlying transition matrix, enabling scaling-up to the large-scale NFG results presented in the experiments. For experiments involving the projected replicator dynamics (PRD), we used uniformly-initialized meta-distributions, running PRD for 5e4 iterations, using a step-size of dt = 1e-3 and exploration parameter γ = 1e-10. Time-averaged distributions were computed over the entire trajectory.

C.2 DOMAIN DESCRIPTION AND GENERATION

C.2.1 NORMAL FORM GAMES GENERATION

Algorithms 2 to 4 provide an overview of the procedure we use to randomly generate normal-form games for the oracle comparisons visualized in Fig. 2.
Algorithm 2 GenerateTransitive(Actions, Players, mean_value = [0.0, 1.0], mean_probability = [0.5, 0.5], var = 0.1)
1: T = []
2: for Player k do
3:   Initialize f_k = [0] * Actions
4:   for Action a ≤ Actions do
5:     Randomly sample mean µ from mean_value according to mean_probability
6:     f_k[a] ∼ N(µ, var)
7: for Player k do
8:   T[k] = f_k − (1 / (|Players| − 1)) ∑_{i≠k} f_i
9: Return T

Algorithm 3 GenerateCyclic(Actions, Players, var = 0.4)
1: C = []
2: for Player k do
3:   Initialize C[k] ∼ N(0, var), with Shape(C[k]) = (Actions_{First Player}, . . . , Actions_{Last Player})
4: for Player k do
5:   Sum = ∑_{actions a_i of all players i≠k} C[k][a_1, . . . , a_{k−1}, :, a_{k+1}, . . .]
6:   Shape(Sum) = (1, . . . , 1, Actions_{Player k}, 1, . . . , 1)
7:   C[k] = C[k] − Sum
8: Return C

Algorithm 4 General Normal Form Games Generation(Actions, Players)
1: Generate matrix lists T = GenerateTransitive(Actions, Players), C = GenerateCyclic(Actions, Players)
2: Return [T[k] + C[k] for Player k]

C.2.2 KUHN AND LEDUC POKER

K-player Kuhn poker is played with a deck of K + 1 cards. Each player starts with 2 chips and 1 face-down card, and antes 1 chip to play. Players either bet (raise/call) or fold iteratively, until each player is either in (has contributed equally to the pot) or has folded. Amongst the remaining players, the one with the highest-ranked card wins the pot. Leduc poker, in comparison, has a significantly larger state space. Players in Leduc have unlimited chips, receive 1 face-down card, and ante 1 chip to play, with subsequent bets limited to 2 and 4 chips in rounds 1 and 2. A maximum of two raises are allowed in each round, and a public card is revealed before the second round.

C.3 PBR COMPUTATION IN NORMAL FORM GAMES

The algorithms used to compute PBR and PBR-SCORE in the games generated by the procedure described in Section C.2.1 are shown in Algorithms 5 and 6. Note that they compute the multi-population version of PBR. PCS-SCORE is computed by pre-computing the full game's SSCC, and computing the proportion of currently selected strategies in the empirical game that also belong to the full game's SSCC. Note that the PBR-SCORE and PCS-SCORE are useful measures for assessing the quality of convergence in our examples, in a manner analogous to NASHCONV. The computation of these scores is, however, not tractable in general games. Notably, this is also the case for NASHCONV (as it requires computation of player-wise best responses, which can be problematic even in moderately-sized games). Despite this, these scores remain a useful way to empirically verify the convergence characteristics in small games where they can be tractably computed.
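Before turning to the PBR listings, we also give a minimal NumPy sketch of the game-generation procedure of Algorithms 2 to 4 above. It is our reading of the listings rather than the code used for the experiments: the broadcasting of the per-player functions f_k over the joint action space, the use of the same number of actions for every player, and all function names are assumptions made for illustration (NumPy's normal() takes a standard deviation, so sqrt(var) is passed).

import numpy as np

def generate_transitive(actions, players, mean_value=(0.0, 1.0),
                        mean_probability=(0.5, 0.5), var=0.1, rng=None):
    # Transitive component (Algorithm 2): player k's payoff at joint action a is
    # f_k[a_k] - (1 / (players - 1)) * sum_{i != k} f_i[a_i].
    rng = np.random.default_rng() if rng is None else rng
    f = [rng.normal(rng.choice(mean_value, size=actions, p=mean_probability),
                    np.sqrt(var)) for _ in range(players)]
    tensors = []
    for k in range(players):
        t = np.zeros([actions] * players)
        for i in range(players):
            shape = [1] * players
            shape[i] = actions
            contrib = f[i].reshape(shape)
            t = t + (contrib if i == k else -contrib / (players - 1))
        tensors.append(t)
    return tensors

def generate_cyclic(actions, players, var=0.4, rng=None):
    # Cyclic component (Algorithm 3): Gaussian noise, re-centred over the opponents' actions.
    rng = np.random.default_rng() if rng is None else rng
    tensors = []
    for k in range(players):
        c = rng.normal(0.0, np.sqrt(var), size=[actions] * players)
        other_axes = tuple(i for i in range(players) if i != k)
        tensors.append(c - c.sum(axis=other_axes, keepdims=True))
    return tensors

def generate_game(actions, players, rng=None):
    # Algorithm 4: sum of the transitive and cyclic components.
    t = generate_transitive(actions, players, rng=rng)
    c = generate_cyclic(actions, players, rng=rng)
    return [tk + ck for tk, ck in zip(t, c)]

payoffs = generate_game(actions=4, players=3, rng=np.random.default_rng(0))
print(payoffs[0].shape)  # (4, 4, 4): payoff tensor of player 1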
Algorithm 5 PBR Score(Strategy S, Payoff Tensor, Current Player Id, Joint Strategies, Joint Strategy Probability)
1: New strategy score = 0
2: for Joint strategy J, Joint probability P in Joint Strategies, Joint Strategy Probability do
3:   New strategy = J
4:   New strategy[Current Player Id] = S
5:   New strategy payoff = Payoff Tensor[New strategy]
6:   Old strategy payoff = Payoff Tensor[J]
7:   New strategy score += P * (New strategy payoff > Old strategy payoff)
8: Return New strategy score

Algorithm 6 PBR(Payoff Tensor list LM, Joint Strategies per player PJ, Alpharank Probability per Joint Strategy PA, Current Player Id)
1: maxPBR = 0
2: maxstrat = None
3: for Strategy S available to Current Player among all possible strategies do
4:   score = PBR Score(S, LM[Current Player Id], Current Player Id, PJ, PA)
5:   if score > maxPBR then
6:     maxPBR = score
7:     maxstrat = S
8: Return maxPBR, maxstrat

C.4 ADDITIONAL ORACLE COMPARISON RESULTS

We present additional oracle comparisons in Fig. C.9, all of these in the multi-population case.

C.5 NOTES ON RECTIFIED NASH PERFORMANCE

This section provides additional insights into the Rectified Nash results detailed in Section 5. We begin with an important disclaimer that Rectified Nash was developed solely with symmetric games in mind. As Kuhn poker and Leduc poker are not symmetric games, they lie beyond the theoretical scope of Rectified Nash. Nevertheless, comparing the performance of rectified and non-rectified approaches from an empirical perspective yields insights which may be useful for future investigations that seek to extend and apply rectified training approaches to more general games.

As noted in the main paper, the poor performance of PSRO using Rectified Nash (in Fig. 3) is initially surprising, as it indicates premature convergence to a high-NASHCONV distribution over the players' policy pools. Investigating this further led to a counterintuitive result for the domains evaluated: Rectified Nash was, roughly speaking, not increasing the overall diversity of behavioral policies added to each player's population pool. In certain regards, it even prevented diversity from emerging. To more concretely pinpoint the issues, we detail below the first 3 iterations of PSRO(Rectified Nash, BR) in Kuhn poker. Payoff matrices at each PSRO iteration are included in Tables 6a to 6c. For clarity, we also include the 5 best responses trained by Rectified Nash and the policies they were trained against, in their order of discovery: 2 policies for Player 1 (in Fig. C.11) and 3 policies for Player 2 (in Fig. C.12).
1. Iteration 0: both players start with uniform random policies.
2. Iteration 1:
• Player 1 trains a best response against Player 2's uniform random policy; its policy set is now the original uniform policy and the newly-computed best response.
• Player 2 trains a best response against Player 1's uniform random policy; its policy set is now the original uniform policy and the newly-computed best response.
• Player 2's best response beats both of Player 1's policies.
• Payoff values are represented in Table 6a.
3. Iteration 2:
• By Rectified Nash rules, Player 1 only trains policies against policies it beats, i.e., only against Player 2's random policy, and thus it adds the same policy as in iteration 1 to its pool.
• Player 2 trains a best response against the Nash mixture of Player 1's first best response and random policy. This policy also beats all policies of Player 1.
• Payoff values are represented in Table 6b.
4. Iteration 3:
• Player 1 only trains best responses against Player 2's random policy.
• Player 2 only trains best responses against the Nash of Player 1's two unique policies. This yields the same policies for Player 2 as those previously added to its pool (i.e., a loop occurs).
• Payoff values are represented in Table 6c.
5. Rectified Nash has looped.

As noted above, Rectified Nash loops at iteration 3, producing already-existing best responses against Player 1's policies. Player 1 is, therefore, constrained to never being able to train best responses against any policy other than Player 2's random policy. In turn, this prevents Player 2 from training additional novel policies, and puts the game in a deadlocked state. Noise in the payoff matrices may lead to different best responses against the Nash mixture of policies, effectively increasing diversity; however, this effect did not seem to manifest in our experiments. To more clearly illustrate this, we introduce a means of evaluating policy pool diversity: counting the number of unique policies in the pool. Specifically, given that Kuhn poker is a finite-state game, comparing policies is straightforward, and only amounts to comparing each policy's output on all states of the game. If two policies have exactly the same output on all the game's states, they are equal; otherwise, they are distinct. We plot in Fig. C.10 the policy diversity of each meta-solver, where we observe that both Rectified Nash and Rectified PRD discover a total of 5 different policies. We have nevertheless noticed that in a few rare seeds, when using a low number of simulations per payoff entry (around 10), Rectified Nash was able to converge to low exploitability scores, suggesting a relationship between payoff noise, uncertainty, and the convergence of Rectified Nash, whose investigation we leave for future work. We also leave the investigation of the relationship between policy diversity and exploitability for future work, though we note that there appears to be a clear correlation between the two. Overall, these results demonstrate that the Rectified Nash solver fails to discover as many unique policies as the other solvers, thereby plateauing at a high NASHCONV. Finally, regarding Rectified PRD, which performs better in terms of NASHCONV than Rectified Nash, we suspect that payoff noise in combination with the intrinsic noise of PRD plays a key factor; but those two are not enough to deterministically make Rectified PRD converge to 0 exploitability, since in the seed that generated Fig. C.10 it actually does not (though it does converge in Fig. 3). We conjecture that this noisier behavior may enable Rectified PRD to free itself from deadlocks more easily, and thus discover more policies on average. A more detailed analysis of Rectified PRD is left as future work.

D α-RANK IN DETAIL

In this section we give further details of α-Rank; for a full description, see Omidshafiei et al. (2019). Essentially, α-Rank defines a directed response graph over the pure strategy profiles of the game under study, by indicating when a player has an incentive to make a unilateral deviation from their current strategy. An irreducible (noisy) random walk over this graph is then defined, and the strategy profile rankings are obtained by ordering the masses of this Markov chain's unique invariant distribution π. The Markov transition matrix C that specifies this random walk is defined as follows for the multi-population case; see Omidshafiei et al. (2019) for the single-population case.
Consider a pure strategy profile s ∈ S, and let σ = (σ^k, s^{-k}) be the pure strategy profile which is equal to s, except for player k, who uses strategy σ^k ∈ S^k instead of s^k. Let C_{s,σ} denote the transition probability from s to σ, and C_{s,s} the self-transition probability of s, with each defined as:

C_{s,σ} = η (1 − exp(−α(M^k(σ) − M^k(s)))) / (1 − exp(−αm(M^k(σ) − M^k(s))))   if M^k(σ) ≠ M^k(s),
C_{s,σ} = η / m   otherwise,
C_{s,s} = 1 − ∑_{k∈[K]} ∑_{σ : σ^k ∈ S^k \ {s^k}} C_{s,σ},

where η = (∑_l (|S^l| − 1))^{−1}. If two strategy profiles s and s′ differ in more than one player's strategy, then C_{s,s′} = 0. Here α ≥ 0 and m ∈ N are parameters to be specified; the form of this transition probability is described by evolutionary dynamics models from evolutionary game theory and is explained in more detail in Omidshafiei et al. (2019). Large values of α correspond to higher selection pressure in the evolutionary model under consideration; the version of α-Rank used throughout this paper corresponds to the limiting invariant distribution as α → ∞, under which only strategy profiles appearing in the sink strongly-connected components of the response graph can have positive mass.

E TOWARDS THEORETICAL GUARANTEES FOR THE PROJECTED REPLICATOR DYNAMICS

Computing Nash equilibria is intractable for general games and can suffer from a selection problem (Daskalakis et al., 2009); therefore, it quickly becomes computationally intractable to employ an exact Nash meta-solver in the inner loop of a PSRO algorithm. To get around this, Lanctot et al. (2017) use regret minimization algorithms to attain an approximate correlated equilibrium (which is guaranteed to be an approximate Nash equilibrium under certain conditions on the underlying game, such as two-player zero-sum). A dynamical system from evolutionary game theory that also converges to equilibria under certain conditions is the replicator dynamics (Taylor and Jonker, 1978; Schuster and Sigmund, 1983; Cressman and Tao, 2014; Bloembergen et al., 2015), which defines a dynamical system over distributions of strategies (π^k_s(t) | k ∈ [K], s ∈ S^k), given by

π̇^k_s(t) = π^k_s(t) [ M^k(s, π^{-k}(t)) − M^k(π^k(t), π^{-k}(t)) ],   for all k ∈ [K], s ∈ S^k,   (4)

with an arbitrary initial condition. Lanctot et al. (2017) introduced a variant of replicator dynamics, termed projected replicator dynamics (PRD), which projects the flow of the system so that each distribution π^k(t) lies in the set ∆^γ_{S^k} = {π ∈ ∆_{S^k} | π_s ≥ γ/(|S^k| + 1), ∀s ∈ S^k}; see, e.g., Nagurney and Zhang (2012) for properties of such projected dynamical systems. This heuristically enforces additional "exploration" relative to standard replicator dynamics, and was observed to provide strong empirical results when used as a meta-solver within PSRO. However, the introduction of projection potentially severs the connection between replicator dynamics and Nash equilibria, and the theoretical game-theoretic properties of PRD were left open in Lanctot et al. (2017). Here, we take a first step towards investigating theoretical guarantees for PRD. Specifically, we highlight a possible connection between α-Rank, the calculation of which requires no simulation, and a constrained
1. How does the paper contribute to the understanding of alpha-rank in multi-agent reinforcement learning?
2. What are the theoretical findings that show the shortcomings of using the typical best response oracle?
3. What is the preference-based best response proposed by the paper, and how does it address the limitations of the traditional best response oracle?
4. Why is it necessary to make direct comparisons to recent related literature, specifically Balduzzi et al.'s PSRO Rectified Nash approach and Liu et al.'s population-based training method?
5. What is the significance of the MuJoCo soccer environment in the context of the paper's contributions and existing work in the field?
6. How can the paper better position itself in relation to existing work, particularly for readers who may not be familiar with the area?
7. Are there any minor errors or inconsistencies in the paper's references, figures, or appendices that need to be addressed?
Review
Review

Review Update (18/11/2019): Thank you for the detailed replies and significant updates to the paper in response to all reviewers. You have comfortably addressed all of my concerns and so I have updated my score. I think the paper has improved significantly through the rebuttal stage, and therefore the update in my score is also significant, to match the far larger contribution to the community that the paper now represents.

--

This paper considers alpha-rank as a solution concept for multi-agent reinforcement learning, with a focus on its use as a meta-solver for PSRO. Based on theoretical findings showing shortcomings of using the typical best response oracle, the paper finds a necessity for a new response oracle and proposes preference-based best response. The theoretical contributions help further the community's understanding of alpha-rank, but the method remains somewhat disconnected from other recent related literature. Therefore, I think the paper's subsequent impact could be significantly improved by making more direct comparisons to recent results. Specifically:

1) In the 2-player games, comparisons are currently made to PRD based on its use in Lanctot et al. (NeurIPS, 2017) instead of the more recent PSRO Rectified Nash approach proposed by Balduzzi et al. (ICML, 2019). Please make this direct comparison or justify its exclusion.

2) The preliminary MuJoCo soccer results in Appendix G significantly increase the relevance of this work to the ICLR community, given the prior publication of this environment at ICLR 2019. However, the results are currently incomplete. In particular, to again strengthen the link to existing work, comparison of the method proposed in this paper to the agents trained by population-based training in Liu et al. (ICLR, 2019) would be a more informative comparison than the preliminary results presented in comparison to the naïve uniform meta-solver.

3) Appendix A includes a brief literature survey. This is important material to position the paper in relation to existing work, particularly for readers not familiar with the area who will rely on this to understand the paper as a self-contained reference. Please move this section into the main body of the paper and expand it to fully credit the work this paper builds upon.

Minor Comments: In Appendix C.4, should the reference to Figure C.7 be to Figure C.7a specifically, and the reference to Figure C.7a be to Figures C.7b-f inclusive? If so, I believe the available joint strategies in step 4 are missing (1,1,2), as shown in Figure C.7f.
ICLR
Title A Generalized Training Approach for Multiagent Learning

Abstract
This paper investigates a population-based training regime based on game-theoretic principles called Policy-Space Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, many-player games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime wherein Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, α-Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several game classes, and identify links between Nash equilibria and α-Rank. We demonstrate the competitive performance of α-Rank-based PSRO against an exact Nash-solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3- to 5-player poker games, yielding instances where α-Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general-games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain.

1 INTRODUCTION

Creating agents that learn to interact in large-scale systems is a key challenge in artificial intelligence. Impressive results have been recently achieved in restricted settings (e.g., zero-sum, two-player games) using game-theoretic principles such as iterative best response computation (Lanctot et al., 2017), self-play (Silver et al., 2018), and evolution-based training (Jaderberg et al., 2019; Liu et al., 2019). A key principle underlying these approaches is to iteratively train a growing population of player policies, with population evolution informed by heuristic skill ratings (e.g., Elo (Elo, 1978)) or game-theoretic solution concepts such as Nash equilibria. A general application of this principle is embodied by the Policy-Space Response Oracles (PSRO) algorithm and its related extensions (Lanctot et al., 2017; Balduzzi et al., 2019). Given a game (e.g., poker), PSRO constructs a higher-level meta-game by simulating outcomes for all match-ups of a population of players' policies. It then trains new policies for each player (via an oracle) against a distribution over the existing meta-game policies (typically an approximate Nash equilibrium, obtained via a meta-solver¹), appends these new policies to the meta-game population, and iterates. In two-player zero-sum games, fictitious play (Brown, 1951), double oracle (McMahan et al., 2003), and independent reinforcement learning can all be considered instances of PSRO, demonstrating its representative power (Lanctot et al., 2017).
Prior applications of PSRO have used Nash equilibria as the policy-selection distribution (Lanctot et al., 2017; Balduzzi et al., 2019), which limits the scalability of PSRO to general games: Nash equilibria are intractable to compute in general (Daskalakis et al., 2009); computing approximate Nash equilibria is also intractable, even for some classes of two-player games (Daskalakis, 2013); finally, when they can be computed, Nash equilibria suffer from a selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). It is, thus, evident that the reliance of PSRO on the Nash equilibrium as the driver of population growth is a key limitation, preventing its application to general games. Recent work has proposed a scalable alternative to the Nash equilibrium, called α-Rank, which applies readily to general games (Omidshafiei et al., 2019), making it a promising candidate for population-based training. Given that the formal study of PSRO has only been conducted under the restricted settings determined by the limitations of Nash equilibria, establishing its theoretical and empirical behaviors under alternative meta-solvers remains an important and open research problem.

We study several PSRO variants in the context of general-sum, many-player games, providing convergence guarantees in several classes of such games for PSRO instances that use α-Rank as a meta-solver. We also establish connections between Nash and α-Rank in specific classes of games, and identify links between α-Rank and the Projected Replicator Dynamics employed in prior PSRO instances (Lanctot et al., 2017). We develop a new notion of best response that guarantees convergence to the α-Rank distribution in several classes of games, verifying this empirically in randomly-generated general-sum games. We conduct empirical evaluations in Kuhn and Leduc Poker, first establishing our approach as a competitive alternative to Nash-based PSRO by focusing on two-player variants of these games that have been investigated in these prior works. We subsequently demonstrate empirical results extending beyond the reach of PSRO with Nash as a meta-solver by evaluating training in 3- to 5-player games. Finally, we conduct preliminary evaluations in MuJoCo soccer (Liu et al., 2019), another complex domain wherein we use reinforcement learning agents as oracles in our proposed PSRO variants, illustrating the feasibility of the approach.

2 PRELIMINARIES

Games We consider K-player games, where each player k ∈ [K] has a finite set of pure strategies S^k. Let S = ∏_k S^k denote the space of pure strategy profiles. Denote by S^{-k} = ∏_{l≠k} S^l the set of pure strategy profiles excluding those of player k. Let M(s) = (M^1(s), . . . , M^K(s)) ∈ R^K denote the vector of expected player payoffs for each s ∈ S. A game is said to be zero-sum if ∑_k M^k(s) = 0 for all s ∈ S. A game is said to be symmetric if all players have identical strategy sets S^k, and for any permutation ρ, strategy profile (s^1, . . . , s^K) ∈ S, and index k ∈ [K], one has M^k(s^1, . . . , s^K) = M^{ρ(k)}(s^{ρ(1)}, . . . , s^{ρ(K)}). A mixed strategy profile is defined as π ∈ ∆_S, a tuple representing the probability distribution over pure strategy profiles s ∈ S. The expected payoff to player k under a mixed strategy profile π is given by M^k(π) = ∑_{s∈S} π(s) M^k(s).

Nash Equilibrium (NE) Given a mixed profile π, the best response for a player k is defined as BR^k(π) = arg max_{ν∈∆_{S^k}} M^k(ν, π^{-k}). A factorized mixed profile π(s) = ∏_k π^k(s^k) is a Nash equilibrium (NE) if π^k ∈ BR^k(π) for all k ∈ [K].
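As a concrete illustration of the best-response and NE definitions above, here is a minimal NumPy sketch for a two-player matrix game; the payoff matrices and all names are illustrative assumptions, not code from the paper.

import numpy as np

# Rock-paper-scissors payoffs for the row player; the column player's payoffs are -A.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])
B = -A

def best_responses(M, opponent_mix):
    # Value and set of pure best responses of the row player of M to a mixed opponent.
    expected = M @ opponent_mix
    return expected.max(), np.flatnonzero(np.isclose(expected, expected.max()))

pi1 = np.array([1/3, 1/3, 1/3])
pi2 = np.array([1/3, 1/3, 1/3])

v1, br1 = best_responses(A, pi2)        # player 1's best responses to pi2
v2, br2 = best_responses(B.T, pi1)      # player 2's best responses to pi1

# (pi1, pi2) is an NE iff neither player gains by deviating to a best response.
is_ne = np.isclose(v1, pi1 @ A @ pi2) and np.isclose(v2, pi1 @ B @ pi2)
print(br1, br2, is_ne)                  # uniform play is an NE of this game: True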
Define NASHCONV(π) = ∑_k M^k(BR^k(π), π^{-k}) − M^k(π); roughly speaking, this measures "distance" from an NE (Lanctot et al., 2017). In prior PSRO instances (Lanctot et al., 2017), a variant of the replicator dynamics (Taylor and Jonker, 1978; Maynard Smith and Price, 1973), called the Projected Replicator Dynamics (PRD), has been used as an approximate Nash meta-solver (see Appendix E for details on PRD).

α-Rank While NE exist in all finite games (Nash, 1950), their computation is intractable in general games, and their non-uniqueness leads to an equilibrium-selection problem (Harsanyi et al., 1988; Goldberg et al., 2013). This limits their applicability as the underlying driver of training beyond the two-player, zero-sum regime. Recently, an alternate solution concept called α-Rank was proposed by Omidshafiei et al. (2019), the key associated benefits being its uniqueness and efficient computation in many-player and general-sum games, making it a promising means for directing multiagent training.

[Footnote 1: A meta-solver is a method that computes, or approximates, the solution concept that is being deployed.]

[Figure 1: Overview of PSRO(M, O) algorithm phases. (a) Complete: compute missing payoff tensor M entries via game simulations. (b) Solve: given the updated payoff tensor M, calculate meta-strategy π via meta-solver M. (c) Expand: append a new policy to each player's policy space using the oracle O.]

The α-Rank distribution is computed by constructing the response graph of the game: each strategy profile s ∈ S of the game is a node of this graph; a directed edge points from any profile s ∈ S to σ ∈ S in the graph if (1) s and σ differ in only a single player k's strategy and (2) M^k(σ) > M^k(s). α-Rank constructs a random walk along this directed graph, perturbing the process by injecting a small probability of backwards-transitions from σ to s (dependent on a parameter, α, whose value is prescribed by the algorithm); this ensures irreducibility of the resulting Markov chain and the existence of a unique stationary distribution, π ∈ ∆_S, called the α-Rank distribution. The masses of π are supported by the sink strongly-connected components (SSCCs) of the response graph (Omidshafiei et al., 2019). For more details on α-Rank, see Appendix D and Rowland et al. (2019).

Oracles We define an oracle O as an abstract computational entity that, given a game, computes policies with precise associated properties. For instance, a best-response oracle O^k(π) = BR^k(π) computes the best-response policy for any player k, given a profile π. One may also consider approximate-best-response oracles that, e.g., use reinforcement learning to train a player k's policy against a fixed distribution over the other players' policies, π^{-k}. Oracles play a key role in population-based training, as they compute the policies that are incrementally added to players' growing policy populations (McMahan et al., 2003; Lanctot et al., 2017; Balduzzi et al., 2019). The choice of oracle O also affects the training convergence rate and final equilibrium reached (e.g., Nash or α-Rank).

Empirical Game-theoretic Analysis PSRO relies on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Wellman, 2006).
Given a game (e.g., poker), EGTA operates via construction of a higher-level 'meta-game', where strategies s correspond to policies (e.g., 'play defensively' in poker) rather than atomic actions (e.g., 'fold'). A meta-payoff table M is then constructed by simulating games for all joint policy combinations, with entries corresponding to the players' expected utilities under these policies. Game-theoretic analysis can then be conducted on the meta-game in a manner analogous to the lower-level game, albeit in a much more scalable manner. As the theoretical discussion hereafter pertains to the meta-game, we use s, M, and π to respectively refer to policies, payoffs, and distributions at the meta-level, rather than the underlying low-level game. In our analysis, it will be important to distinguish between SSCCs of the underlying game, and of the meta-game constructed by PSRO; we refer to the latter as meta-SSCCs.

3 POLICY-SPACE RESPONSE ORACLES: NASH AND BEYOND

We first overview Policy-Space Response Oracles (PSRO) prior to presenting our findings. Given an underlying game (e.g., poker), PSRO first initializes the policy space S using randomly-generated policies, then expands the players' policy populations in three iterated phases: complete, solve, and expand (see Algorithm 1 and Fig. 1).

Algorithm 1 PSRO(M, O)
1: Initialize the players' policy set S = ∏_k S^k via random policies
2: for iteration ∈ {1, 2, · · · } do
3:   Update payoff tensor M for new policy profiles in S via game simulations ▷ (Fig. 1a)
4:   Compute the meta-strategy π using meta-solver M(M) ▷ (Fig. 1b)
5:   Expand the policy space for each player k ∈ [K] via S^k ← S^k ∪ O^k(π) ▷ (Fig. 1c)

Table 1: Theory overview. SP and MP, resp., denote single- and multi-population games. BR and PBR, resp., denote best response and preference-based best response. †Defined in the noted propositions.
Game type | M | O | Converges to α-Rank?
SP | α-Rank | BR | ✗ (Example 1)
SP | α-Rank | PBR | ✓ (Sub-SSCC,† Proposition 3)
MP | α-Rank | BR | ✗ (Example 2)
MP | α-Rank | PBR | ✓ (With novelty-bound oracle,† Proposition 1)
SP / MP | Uniform or Nash | BR | ✗ (Examples 4 and 5, Appendix A.2)

In the complete phase, a meta-game consisting of all match-ups of these joint policies is synthesized, with missing payoff entries in M completed through game simulations. Next, in the solve phase, a meta-solver M computes a profile π over the player policies (e.g., Nash, α-Rank, or uniform distributions). Finally, in the expand phase, an oracle O computes at least one new policy s′^k for each player k ∈ [K], given profile π. As other players' policy spaces S^{-k} and profile π^{-k} are fixed, this phase involves solving a single-player optimization problem. The new policies are appended to the respective players' policy sets, and the algorithm iterates. We use PSRO(M, O) to refer to the PSRO instance using meta-solver M and oracle O. Notably, PSRO-based training for two-player symmetric games can be conducted using a single population of policies that is shared by all players (i.e., S^k is identical for all k). Thus, we henceforth refer to two-player symmetric games as 'single-population games', and more generally refer to games that require player-specific policy populations as 'multi-population games'. Recent investigations of PSRO have solely focused on Nash-based meta-solvers and best-response-based oracles (Lanctot et al., 2017; Balduzzi et al., 2019), with theory focused around the two-player zero-sum case.
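To connect the pseudocode of Algorithm 1 to an implementation, the following is a schematic Python skeleton of the PSRO loop. The game, simulate, meta_solver, and oracle objects are placeholders that we assume for illustration; they are not APIs from the paper or from OpenSpiel.

import itertools

def psro(game, simulate, meta_solver, oracle, num_players, iterations):
    # Line 1: initialize each player's policy set with one random policy.
    policy_sets = [[game.random_policy(k)] for k in range(num_players)]
    meta_payoffs = {}  # maps a tuple of per-player policy indices to a payoff vector

    for _ in range(iterations):
        # Line 3 (complete): simulate any missing joint-policy match-ups.
        for joint in itertools.product(*[range(len(p)) for p in policy_sets]):
            if joint not in meta_payoffs:
                policies = [policy_sets[k][joint[k]] for k in range(num_players)]
                meta_payoffs[joint] = simulate(game, policies)

        # Line 4 (solve): compute the meta-distribution (e.g., Nash, uniform, or alpha-Rank).
        meta_distribution = meta_solver(meta_payoffs, policy_sets)

        # Line 5 (expand): each player appends the oracle's response(s) to its policy set.
        for k in range(num_players):
            policy_sets[k].extend(oracle(game, k, policy_sets, meta_distribution))

    return policy_sets, meta_payoffs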
Unfortunately, these two-player zero-sum guarantees do not hold in games beyond this regime, making investigation of alternative meta-solvers and oracles critical for further establishing PSRO's generalizability.

4 GENERALIZING PSRO THEORY

This section establishes theoretical properties of PSRO for several useful classes of general games. We summarize our results in Table 1, giving a full exposition below.

4.1 ESTABLISHING CONVERGENCE TO α-RANK

Table 2: Symmetric zero-sum game used to analyze the behavior of PSRO in Example 1. Here, 0 < ε ≪ 1 and φ ≫ 1. Rows index Player 1's strategy, columns index Player 2's strategy, and entries give Player 1's payoff.
    A    B    C    D    X
A   0   −φ    1    φ   −ε
B   φ    0   −φ²   1   −ε
C  −1    φ²   0   −φ   −ε
D  −φ   −1    φ    0   −ε
X   ε    ε    ε    ε    0

It is well-known that PSRO(Nash, BR) will eventually return an NE in two-player zero-sum games (McMahan et al., 2003). In more general games, where Nash faces the issues outlined earlier, α-Rank appears a promising meta-solver candidate as it applies to many-player, general-sum games and has no selection problem. However, open questions remain regarding convergence guarantees of PSRO when using α-Rank, and whether standard BR oracles suffice for ensuring these guarantees. We investigate these theoretical questions, namely, whether particular variants of PSRO can converge to the α-Rank distribution for the underlying game. A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), using α-Rank as the meta-solver, and a standard best response oracle. However, the following example shows that this will not work in general for the single-population case (see Fig. A.5 for a step-by-step illustration).

Example 1. Consider the symmetric zero-sum game specified in Table 2. As X is the sole sink component of the game's response graph (as illustrated in Fig. A.5a), the single-population α-Rank distribution for this game puts unit mass on X. We now show that a PSRO algorithm that computes best responses to the α-Rank distribution over the current strategy set need not recover strategy X, by computing directly the strategy sets of the algorithm initialized with the set {C}.
1. The initial strategy space consists only of the strategy C; the best response against C is D.
2. The α-Rank distribution over {C, D} puts all mass on D; the best response against D is A.
3. The α-Rank distribution over {C, D, A} puts all mass on A; the best response against A is B.
4. The α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C.
Thus, PSRO(α-Rank, BR) leads to the algorithm terminating with strategy set {A, B, C, D} and not discovering strategy X in the sink strongly-connected component. This conclusion also holds in the multi-population case, as the following counterexample shows.

Example 2. Consider the game in Table 2, treating it now as a multi-population problem.
It is readily verified that the multi-population α-Rank distributions obtained by PSRO with initial strategy sets consisting solely of C for each player are: (i) a Dirac delta at the joint strategy (C, C), leading to best responses of D for both players; (ii) a Dirac delta at (D, D), leading to best responses of A for both players; (iii) a Dirac delta at (A, A), leading to best responses of B for both players; and finally (iv) a distribution over joint strategies of the 4×4 subgame induced by strategies A, B, C, D that leads to a best response not equal to X; thus, the full α-Rank distribution is again not recovered.

4.2 A NEW RESPONSE ORACLE

The previous examples indicate that the use of standard best responses in PSRO may be the root cause of the incompatibility with the α-Rank solution concept. Thus, we define the Preference-based Best Response (PBR) oracle, which is more closely aligned with the dynamics defining α-Rank, and which enables us to establish desired PSRO guarantees with respect to α-Rank. Consider first the single-population case. Given an N-strategy population {s_1, . . . , s_N} and corresponding meta-solver distribution (π_i)_{i=1}^N ∈ ∆_N, a PBR oracle is defined as any function satisfying

PBR(∑_i π_i s_i) ⊆ arg max_σ ∑_i π_i 1[M^1(σ, s_i) > M^2(σ, s_i)],   (1)

where the arg max returns the set of policies optimizing the objective, and the optimization is over pure strategies in the underlying game. The intuition for the definition of PBR is that we would like the oracle to return strategies that will receive high mass under α-Rank when added to the population; objective (1) essentially encodes the probability flux that the vertex corresponding to σ would receive in the random walk over the α-Rank response graph (see Section 2 or Appendix D for further details). We demonstrate below that the use of the PBR oracle resolves the issue highlighted in Example 1 (see Fig. A.6 in Appendix A for an accompanying visual).

Example 3. Steps 1 to 3 correspond exactly to those of Example 1. In step 4, the α-Rank distribution over {C, D, A, B} puts mass (1/3, 1/3, 1/6, 1/6) on (A, B, C, D) respectively. A beats C and D, thus its PBR score is 1/3. B beats A and D, thus its PBR score is 1/2. C beats B, thus its PBR score is 1/3. D beats C, thus its PBR score is 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. Thus, there is only one strategy maximizing PBR, X, which is then chosen, thereby recovering the SSCC of the game and the correct α-Rank distribution at the next timestep.

In the multi-population case, consider a population of N strategy profiles {s_1, . . . , s_N} and corresponding meta-solver distribution (π_i)_{i=1}^N. Several meta-SSCCs may exist in the multi-population α-Rank response graph. In this case, we run the PBR oracle for each meta-SSCC separately, as follows. Suppose there are L meta-SSCCs, and denote by π^(ℓ) the distribution π restricted to the ℓ-th meta-SSCC, for all 1 ≤ ℓ ≤ L. The PBR for player k on the ℓ-th meta-SSCC is then defined by

PBR^k(∑_i π^(ℓ)_i s_i) ⊆ arg max_σ ∑_i π^(ℓ)_i 1[M^k(σ, s_i^{-k}) > M^k(s_i^k, s_i^{-k})].   (2)

Thus, the PBR oracle generates one new strategy for each player for every meta-SSCC in the α-Rank response graph; we return this full set of strategies and append it to the policy space accordingly, as in Line 5 of Algorithm 1. Intuitively, this leads to a diversification of strategies introduced by the oracle, as each new strategy need only perform well against a subset of prior strategies.
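For concreteness, the following NumPy sketch evaluates the single-population PBR objective (1) for a game given by an explicit payoff matrix, and reproduces the scores of Example 3 on the game of Table 2 (with illustrative values φ = 10 and ε = 0.1). All names are assumptions made for this sketch; it is not the implementation used in the paper.

import numpy as np

def pbr_oracle(M, population, meta_distribution):
    # Pure strategies maximizing the single-population PBR objective (1).
    # M[a, b] is the payoff of strategy a when playing against strategy b.
    scores = np.zeros(M.shape[0])
    for sigma in range(M.shape[0]):
        for s, prob in zip(population, meta_distribution):
            scores[sigma] += prob * float(M[sigma, s] > M[s, sigma])  # sigma beats s
    return np.flatnonzero(np.isclose(scores, scores.max())), scores

phi, eps = 10.0, 0.1   # stand-ins for phi >> 1 and 0 < eps << 1 in Table 2
M = np.array([[ 0.0,    -phi,     1.0,   phi, -eps],
              [ phi,     0.0, -phi**2,   1.0, -eps],
              [-1.0,  phi**2,     0.0,  -phi, -eps],
              [-phi,    -1.0,     phi,   0.0, -eps],
              [ eps,     eps,     eps,   eps,  0.0]])  # strategy order: A, B, C, D, X

# Population {A, B, C, D} with alpha-Rank mass (1/3, 1/3, 1/6, 1/6), as in Example 3.
best, scores = pbr_oracle(M, population=[0, 1, 2, 3],
                          meta_distribution=[1/3, 1/3, 1/6, 1/6])
print(scores)  # [1/3, 1/2, 1/3, 1/6, 1]; the unique maximizer is X
print(best)    # [4], i.e., strategy X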
This diversification hints at interesting links with the recently-introduced concept of rectified-Nash BR (Balduzzi et al., 2019), which also attempts to improve diversity in PSRO, albeit only in two-player zero-sum games. We henceforth denote PSRO(α-Rank, PBR) as α-PSRO for brevity. We next define α-CONV, an approximate measure of convergence to α-Rank. We restrict discussion to the multi-population case here, describing the single-population case in Appendix A.4. With the notation introduced above, we define

PBR-SCORE^k(σ; π, S) = ∑_i ∑_ℓ π^(ℓ)_i 1[M^k(σ, s_i^{-k}) > M^k(s_i^k, s_i^{-k})], and
α-CONV = ∑_k (max_σ PBR-SCORE^k(σ) − max_{s∈S^k} PBR-SCORE^k(s)),

where max_σ is taken over the pure strategies of the underlying game. Unfortunately, in the multi-population case, a PBR-SCORE of 0 does not necessarily imply α-partial convergence. We thus introduce a further measure, PCS-SCORE, defined by

PCS-SCORE = (# of α-PSRO strategy profiles in the underlying game's SSCCs) / (# of α-PSRO strategy profiles in meta-SSCCs),

which assesses the quality of the α-PSRO population. We refer readers to Appendix C.3 for pseudocode detailing how to implement these measures in practice.

4.3 α-PSRO: THEORY, PRACTICE, AND CONNECTIONS TO NASH

We next study the theoretical properties of α-PSRO. We consider that α-PSRO has converged if no new strategy has been returned by PBR for any player at the end of an iteration. Proofs of all results are provided in Appendix B.

Definition 1. A PSRO algorithm is said to converge α-fully (resp., α-partially) to an SSCC of the underlying game if its strategy population contains the full SSCC (resp., a sub-cycle of the SSCC, denoted a 'sub-SSCC') after convergence.

Definition 2. We also adapt PBR to be what we call novelty-bound by restricting the arg max in Equation (1) to be over strategies not already included in the population with PBR-SCORE > 0. In particular, the novelty-bound version of the PBR oracle is given by restricting the arg max appearing in (2) to only be over strategies not already present in the population.

These definitions enable the following results for α-PSRO in the single- and multi-population cases.

Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC.

Proposition 2. If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game.

Stronger guarantees exist for two-player symmetric (i.e., single-population) games, though the multi-population case encounters more issues, as follows.

Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC.

Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC.

Intuitively, the lack of convergence without a novelty-bound oracle can occur due to intransitivities in the game (i.e., cycles in the game can otherwise trap the oracle). An example demonstrating this issue is shown in Fig. B.7, with an accompanying step-by-step walkthrough in Appendix B.4. Specifically, SSCCs may be hidden by "intermediate" strategies that, while not receiving as high a payoff as current population-pool members, can actually lead to well-performing strategies outside the population. As these "intermediate" strategies are avoided, SSCCs are consequently not found.
Note also that this is related to the common problem of action/equilibrium shadowing, as detailed in Matignon et al. (2012). In Section 5, we further investigate convergence behavior beyond the conditions studied above. In practice, we demonstrate that despite the negative result of Proposition 4, α-PSRO does significantly increase the probability of converging to an SSCC, in contrast to PSRO(Nash, BR). Overall, we have shown that for general-sum multi-player games, it is possible to give theoretical guarantees for a version of PSRO driven by α-Rank in several circumstances. By contrast, using exact NE in PSRO is intractable in general. In prior work, this motivated the use of approximate Nash solvers generally based on the simulation of dynamical systems or regret minimization algorithms, both of which generally require specification of several hyperparameters (e.g., simulation iterations, window sizes for computing time-average policies, and entropy-injection rates), and a greater computational burden than α-Rank to carry out the simulation in the first place.

Implementing the PBR Oracle Recall from Section 3 that the BR oracle inherently solves a single-player optimization problem, permitting use of a single-agent RL algorithm as a BR approximator, which is useful in practice. As noted in Section 4.1, however, there exist games where the BR and PBR objectives are seemingly incompatible, preventing the use of standard RL agents for PBR approximation. While exact PBR is computable in small-scale (e.g., normal-form) games, we next consider more general game classes where PBR can also be approximated using standard RL agents.

Definition 3. Objective A is 'compatible' with objective B if any solution to A is a solution to B.

Proposition 5. A constant-sum game is denoted as win-loss if M^k(s) ∈ {0, 1} for all k ∈ [K] and s ∈ S. BR is compatible with PBR in win-loss games in the two-player single-population case.

Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function f : S → R and a non-decreasing function σ : R → R such that M^1(s, ν) = σ(f(s) − f(ν)). BR is compatible with PBR in monotonic games in the single-population case.

Finally, we next demonstrate that under certain conditions, there are strong connections between the PBR objective defined above and the broader field of preference-based RL (Wirth et al., 2017).

Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies {s_1, . . . , s_N}) optimizes exactly the PBR objective (1).

Given this insight, we believe an important subject of future work will involve the use of preference-based RL algorithms in implementing the PBR oracle for more general classes of games. We conclude this section with some indicative results on the relationship between α-Rank and NE.

Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution.

Proposition 9. In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution.

For more general games, the link between α-Rank and Nash equilibria will likely require a more complex description. We leave this for future work, providing additional discussion in Appendix A.3.
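As a small numerical illustration of the compatibility claimed in Proposition 6, the following sketch builds a random monotonic game with σ = tanh, draws a random population and meta-distribution, and checks that every best response also maximizes the PBR objective. The construction, seed, and names are illustrative assumptions, not material from the paper.

import numpy as np

rng = np.random.default_rng(1)
num_strategies = 12
f = rng.normal(size=num_strategies)            # strategy "strength" function f
M = np.tanh(f[:, None] - f[None, :])           # monotonic payoff: M1(s, v) = tanh(f(s) - f(v))

population = rng.choice(num_strategies, size=4, replace=False)
pi = rng.dirichlet(np.ones(len(population)))   # meta-solver distribution over the population

# Best-response objective: expected payoff against the population mixture.
br_values = M[:, population] @ pi
br_set = np.flatnonzero(np.isclose(br_values, br_values.max()))

# PBR objective (1): probability-weighted count of population members that are beaten.
beats = M[:, population] > M[population, :].T  # beats[s, i] is True iff s beats population[i]
pbr_scores = beats @ pi
pbr_set = np.flatnonzero(np.isclose(pbr_scores, pbr_scores.max()))

# Proposition 6: every BR solution should also be a PBR solution.
print(set(br_set) <= set(pbr_set))             # expected: True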
5 EVALUATION

We conduct evaluations on games of increasing complexity, extending beyond prior PSRO applications that have focused on two-player zero-sum games. For experimental procedures, see Appendix C.

Oracle comparisons We evaluate here the performance of the BR and PBR oracles in games where PBR can be exactly computed. We consider randomly generated, K-player, general-sum games with increasing strategy space sizes, |S^k|. Figure 2 reports these results for the 4- and 5-player instances (see Appendix C.4 for 2-3 player results). The asymmetric nature of these games, in combination with the number of players and strategies involved, makes them inherently, and perhaps surprisingly, large in scale. For example, the largest considered game in Fig. 2 involves 5 players with 30 strategies each, making for a total of more than 24 million strategy profiles in total. For each combination of K and |S^k|, we generate 1e6 random games. We conduct 10 trials per game, in each trial running the BR and PBR oracles starting from a random strategy in the corresponding response graph, then iteratively expanding the population space until convergence. Importantly, this implies that the starting strategy may not even be in an SSCC. As mentioned in Section 4.2, α-CONV and PCS-SCORE jointly characterize the oracle behaviors in these multi-population settings. Figure 2a plots α-CONV for both oracles, demonstrating that PBR outperforms BR in the sense that it captures more of the game SSCCs. Figures 2b and 2c, respectively, plot the PCS-SCORE for BR and PBR over all game instances. The PCS-SCORE here is typically either (a) greater than 95%, or (b) less than 5%, and otherwise rarely between 5% and 95%. For all values of |S^k|, PBR consistently discovers a larger proportion of the α-Rank support in contrast to BR, serving as useful validation of the theoretical results of Section 4.3.

Meta-solver comparisons We consider next the standard benchmarks of Kuhn and Leduc poker (Kuhn, 1950; Southey et al., 2005; Lanctot et al., 2019). We detail these domains in Appendix C.2, noting here that both are K-player, although Leduc is significantly more complex than Kuhn. We first consider two-player instances of these poker domains, permitting use of an exact Nash meta-solver. Figure 3 compares the NASHCONV of PSRO(M, BR) for various meta-solver M choices. Note that the x-axis of Figure 3 and Figure 4 is the total pool length (the sum of the lengths of each player's pool in PSRO) instead of the number of iterations of PSRO, since rectified solvers can add more than one policy to the pool at each PSRO iteration (possibly doubling the pool size at every PSRO iteration). It is therefore more pertinent to compare exploitabilities at the same pool sizes rather than at the same number of PSRO iterations. In Kuhn poker (Fig. 3a), the α-Rank, Nash, and Projected Replicator Dynamics (PRD) meta-solvers converge essentially at the same rate towards zero NASHCONV, in contrast to the slower rate of the Uniform meta-solver, the very slow rate of the Rectified PRD solver, and the seemingly constant NASHCONV of the Rectified Nash solver. We provide in Appendix C.5 a walkthrough of the first steps of the Rectified Nash results to more precisely determine the cause of its plateauing NASHCONV. A high-level explanation thereof is that it is caused by Rectified Nash cycling through the same policies, effectively not discovering new policies.
We posit that these characteristics, antipodal to the motivation behind Rectified Nash, stem from the important fact that Rectified Nash was designed to work only in symmetric games, and is therefore not inherently well-suited for the Kuhn and Leduc poker domains investigated here, as they are both asymmetric games. We did not add the Rectified PRD results to the other, greater-than-two-player experiments, as its performance remained non-competitive. As noted in Lanctot et al. (2017), PSRO(Uniform, BR) corresponds to Fictitious Play (Brown, 1951) and is thus guaranteed to find an NE in such instances of two-player zero-sum games. Its slower convergence rate is explained by the assignment of uniform mass across all policies s ∈ S, implying that PSRO essentially wastes resources on training the oracle to beat even poor-performing strategies. While α-Rank does not seek to find an approximation of Nash, it nonetheless reduces the NASHCONV, yielding competitive results in comparison to an exact-Nash solver in these instances. Notably, the similar performance of α-Rank and Nash serves as empirical evidence that α-Rank can be applied competitively even in the two-player zero-sum setting, while also showing great promise to be deployed in broader settings where Nash is no longer tractable.

We next consider significantly larger variants of Kuhn and Leduc poker involving more than two players, extending beyond the reach of prior PSRO results (Lanctot et al., 2017). Figure 4 visualizes the NASHCONV of PSRO using the various meta-solvers (with the exception of an exact Nash solver, due to its intractability in these instances). In all instances of Kuhn poker, α-Rank and PRD show competitive convergence rates. In 3-player Leduc poker, however, α-Rank shows the fastest convergence, with Uniform following throughout most of training and PRD eventually reaching a similar NASHCONV. Several key insights can be made here. First, computation of an approximate Nash via PRD involves simulation of the associated replicator dynamics, which can be chaotic (Palaiopanos et al., 2017) even in two-player two-strategy games, making it challenging to determine when PRD has suitably converged. Second, the addition of the projection step in PRD severs its connection with NE; the theoretical properties of PRD were left open in Lanctot et al. (2017), leaving it without any guarantees. These limitations are not merely theoretical, manifesting in practice, e.g., in Fig. 4d, where PRD is outperformed by even the uniform meta-solver for many iterations. Given these issues, we take a first (and informal) step towards analyzing PRD in Appendix E. For α-Rank, by contrast, we both establish theoretical properties in Section 4 and face no simulation-related challenges, as its computation involves solving a linear system, even in the general-sum many-player case (Omidshafiei et al., 2019), thus establishing it as a favorable and general PSRO meta-solver.

MuJoCo Soccer While the key objective of this paper is to take a first step in establishing a theoretically-grounded framework for PSRO-based training of agents in many-player settings, an exciting question regards the behaviors of the proposed α-Rank-based PSRO algorithm in complex domains where function-approximation-based policies need to be relied upon. In Appendix F, we take a first step towards conducting this investigation in the MuJoCo soccer domain introduced in Liu et al. (2019).
We remark that these results, albeit interesting, are primarily intended to lay the foundation for use of α-Rank as a meta-solver in complex many agent domains where RL agents serve as useful oracles, warranting additional research and analysis to make conclusive insights. 6 RELATED WORK We discuss the most closely related work along two axes. We start with PSRO-based research and some multiagent deep RL work that focuses on training of networks in various multiagent settings. Then we continue with related work that uses evolutionary dynamics (α-Rank and replicator dynamics) as a solution concept to examine underlying behavior of multiagent interactions using meta-games. Policy-space response oracles (Lanctot et al., 2017) unify many existing approaches to multiagent learning. Notable examples include fictitious play (Brown, 1951; Robinson, 1951), independent reinforcement learning (Matignon et al., 2012) and the double oracle algorithm (McMahan et al., 2003). PSRO also relies, fundamentally, on principles from empirical game-theoretic analysis (EGTA) (Walsh et al., 2002; Phelps et al., 2004; Tuyls et al., 2018; Wellman, 2006; Vorobeychik, 2010; Wiedenbeck and Wellman, 2012; Wiedenbeck et al., 2014). The related Parallel Nash Memory (PNM) algorithm (Oliehoek et al., 2006), which can also be seen as a generalization of the double oracle algorithm, incrementally grows the space of strategies, though using a search heuristic rather than exact best responses. PNMs have been successfully applied to games settings utilizing function approximation, notably to address exploitability issues when training Generative Adversarial Networks (GANs) (Oliehoek et al., 2019). PSRO allows the multiagent learning problem to be decomposed into a sequence of single-agent learning problems. A wide variety of other approaches that deal with the multiagent learning problem without this reduction are also available, such as Multiagent Deep Deterministic Policy Gradients (MADDPG) (Lowe et al., 2017), Counterfactual Multiagent Policy Gradients (COMA) (Foerster et al., 2018), Differentiable Inter-Agent Learning (DIAL) (Foerster et al., 2016), Hysteretic Deep Recurrent Q-learning (Omidshafiei et al., 2017), and lenient Multiagent Deep Reinforcement Learning (Palmer et al., 2018). Several notable contributions have also been made in addressing multiagent learning challenges in continuous-control settings, most recently including the approaches of Iqbal and Sha (2019); Gupta et al. (2017); Wei et al. (2018); Peng et al. (2017); Khadka et al. (2019). We refer interested readers to the following survey of recent deep multiagent RL approaches Hernandez-Leal et al. (2019). α-Rank was introduced by Omidshafiei et al. (2019) as a scalable dynamic alternative to Nash equilibria that can be applied in general-sum, many-player games and is capable of capturing the underlying multiagent evolutionary dynamics. Concepts from evolutionary dynamics have long been used in analysis of multiagent interactions from a meta-game standpoint (Walsh et al., 2002; Tuyls and Parsons, 2007; Hennes et al., 2013; Bloembergen et al., 2015; Tuyls et al., 2018). 7 DISCUSSION This paper studied variants of PSRO using α-Rank as a meta-solver, which were shown to be competitive with Nash-based PSRO in zero-sum games, and scale effortlessly to general-sum manyplayer games, in contrast to Nash-based PSRO. 
We believe there are many interesting directions for future work, including how uncertainty in the meta-solver distribution, informed by recent developments in dealing with incomplete information in games (Reeves and Wellman, 2004; Walsh et al., 2003; Rowland et al., 2019), can be used to inform the selection of new strategies to be added to populations. In summary, we strongly believe that the theoretical and empirical results established in this paper will play a key role in scaling up multiagent training in general settings.

ACKNOWLEDGEMENTS
The authors gratefully thank Bart De Vylder for providing helpful feedback on the paper draft.

A EXAMPLES

A.1 FURTHER EXPOSITION OF EXAMPLES 1 AND 2

Payoff table for the game of Examples 1 and 2 (rows: Player 1's strategy, columns: Player 2's strategy); the same table accompanies each panel of the figure.

            A      B      C      D      X
    A       0     −φ      1      φ     −ε
    B       φ      0     −φ²     1     −ε
    C      −1     φ²      0     −φ     −ε
    D      −φ     −1      φ      0     −ε
    X       ε      ε      ε      ε      0

(a) Overview. Full payoff table on left, full response graph on right, with values over directed edges indicating the payoff gained by deviating from one strategy to another.
(b) Consider an initial strategy space consisting only of the strategy C; the best response against C is D.
(c) The α-Rank distribution over {C,D} puts all mass on D; the best response against D is A.
(d) The α-Rank distribution over {C,D,A} puts all mass on A; the best response against A is B.
(e) The α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respectively. For φ sufficiently large, the payoff that C receives against B dominates all others, and since B has higher mass than C in the α-Rank distribution, the best response is C.
Figure A.5: Example 1 with oracle O = BR. In each step above, the α-Rank support is highlighted by the light green box of the payoff table, and the BR strategy against it in bold, dark green.

(e) The α-Rank distribution over {C,D,A,B} puts mass (1/3, 1/3, 1/6, 1/6) on (A,B,C,D) respectively. A beats C and D, and therefore its PBR score is 1/3. B beats A and D, therefore its PBR score is 1/2. C beats B, its PBR score is therefore 1/3. D beats C, its PBR score is therefore 1/6. Finally, X beats every other strategy, and its PBR score is thus 1. There is only one strategy maximizing PBR, X, which is then chosen, and the SSCC of the game, recovered.
Figure A.6: Example 1 with oracle O = PBR. Steps (a) to (d) are not shown as they are identical to their analogs in Fig. A.5.

A.2 EXAMPLE BEHAVIOR OF PSRO(NASH, BR)

A first attempt to establish convergence to α-Rank might involve running PSRO to convergence (until the oracle returns a strategy already in the convex hull of the known strategies), and then running α-Rank on the resulting meta-game. However, the following provides a counterexample to this approach when using either PSRO(Nash, BR) or PSRO(Uniform, BR).
Example 4. Consider the two-player symmetric game specified in Table 3a.
The sink strongly-connected component of the single-population response graph (and hence the α-Rank distribution) contains all three strategies, but all NE are supported on {A,B} only, and the best response to a strategy supported on {A,B} is another strategy supported on {A,B}. Thus, the single-population variant of PSRO, using M ∈ {Nash, Uniform} with initial strategies contained in {A,B}, will terminate before discovering strategy X; the full α-Rank distribution will thus not be recovered.
Example 5. Consider the two-player zero-sum game specified in Table 3b. All strategy profiles receive non-zero probability in the multi-population α-Rank distribution. However, the game restricted to actions A,B for each player has a unique Nash equilibrium of (1/2, 1/2). Player 1's best response to this Nash is to play some mixture of A and B, and therefore strategy X is not recovered by PSRO(Nash, BR) in this case, and so the full α-Rank distribution will thus not be recovered.

A.3 COUNTEREXAMPLES: α-RANK VS. NASH SUPPORT

The Game of Chicken The Game of Chicken provides an example where the support of α-Rank, in the multi-population case, does not include the full support of Nash equilibria. This game has three Nash equilibria: two pure, (D,C) and (C,D), and one mixed, where the population plays Dare with probability 1/3. Nevertheless, α-Rank only puts weight on (C,D) and (D,C), effectively not putting weight on the full mixed-Nash support.
Prisoner's Dilemma The Prisoner's Dilemma provides a counterexample showing that the support of α-Rank, in the multi-population case, does not include the full support of correlated equilibria. This game has correlated equilibria that include (C,D), (D,C) and (C,C) in their support; nevertheless, α-Rank only puts weight on (D,D), effectively being fully disjoint from the support of the correlated equilibria.

A.4 SINGLE-POPULATION α-CONV

In analogy with the multi-population definition in Section 4.2, we define a single-population version of α-CONV. We start by defining the single-population version of PBR-Score, given by
PBR-SCORE(σ; π, S) = Σ_i π_i 1[M^1(σ, s_i) > M^2(σ, s_i)] .
The single-population α-CONV is then defined as
α-CONV = max_σ PBR-SCORE(σ) − max_{s∈S} PBR-SCORE(s) ,
where max_σ is taken over the pure strategies of the underlying game.

B PROOFS

B.1 PROOF OF PROPOSITION 1

Proposition 1. If at any point the population of α-PSRO contains a member of an SSCC of the game, then α-PSRO will α-partially converge to that SSCC.
Proof. Suppose that a member of one of the underlying game's SSCCs appears in the α-PSRO population. This member will induce its own meta-SSCC in the meta-game's response graph. At least one of the members of the underlying game's corresponding SSCC will thus always have positive probability under the α-Rank distribution for the meta-game, and the PBR oracle for this meta-SSCC will always return a member of the underlying game's SSCC. If the PBR oracle returns a member of the underlying SSCC already in the PSRO population, we claim that the corresponding meta-SSCC already contains a cycle of the underlying SSCC. To see this, note that if the meta-SSCC does not contain a cycle, it must be a singleton. Either this singleton is equal to the full SSCC of the underlying game (in which case we have α-fully converged), or it is not, in which case the PBR oracle must return a new strategy from the underlying SSCC, contradicting our assumption that it has terminated.

B.2 PROOF OF PROPOSITION 2

Proposition 2.
If we constrain the PBR oracle used in α-PSRO to be novelty-bound, then α-PSRO will α-fully converge to at least one SSCC of the game. Proof. Suppose that α-PSRO has converged, and consider a meta-SSCC. Since α-PSRO has converged, it follows that each strategy profile of the meta-SSCC is an element of an SSCC of the underlying game. Any strategy profile in this SSCC which is not in the meta-SSCC will obtain a positive value for the PBR objective, and since α-PSRO has converged, there can be no such strategy profile. Thus, the meta-SSCC contains every strategy profile contained within the corresponding SSCC of the underlying game, and therefore conclude that α-PSRO α-fully converges to an SSCC of the underlying game. B.3 PROOF OF PROPOSITION 3 Proposition 3. (Single-population) α-PSRO converges α-partially to the unique SSCC. Proof. The uniqueness of the SSCC follows from the fact that in the single-population case, the response graph is fully-connected. Suppose at termination of α-PSRO, the α-PSRO population contains no strategy within the SSCC, and let s be a strategy in the SSCC. We claim that s attains a higher value for the objective defining the PBR oracle than any strategy in the α-PSRO population, which contradicts the fact that α-PSRO has terminated. To complete this argument, we note that by virtue of s being in the SSCC, we haveM1(s, s′) >M1(s′, s) for all s′ outside the SSCC, and in particular for all s′ ∈ S, thus the PBR objective for s is 1. In contrast, for any si ∈ S, the PBR objective for si is upper-bounded by 1−πi. If πi > 0, then this shows si is not selected by the oracle, since the objective value is lower than that of s. If πi = 0, then the objective value for si is 0, and so an SSCC member will always have a maximal PBR score of 1 against a population not composed of any SSCC member, and all members of that population have < 1 PBR scores. Consequently, singlepopulation α-PSRO cannot terminate before it has encountered an SSCC member. By Proposition 1, the proposition is therefore proven. B.4 PROOF OF PROPOSITION 4 Proposition 4. (Multi-population) Without a novelty-bound oracle, there exist games for which α-PSRO does not converge α-partially to any SSCC. Proof. We exhibit a specific counterexample to the claim. Consider the three-player, three-strategy game with response graph illustrated in Fig. B.7a; note that we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. The sequence of updates undertaken by α-PSRO in this game is illustrated in Figs. B.7b to B.7f; whilst the singleton strategy profile (3, 2, 3) forms the unique SSCC for this game, α-PSRO terminates before reaching it, which concludes the proof. The steps taken by the algorithm are described below; again, we do not enumerate all strategy profiles not appearing in the SSCC for space and clarity reasons. 1. Begin with strategies [[2], [1], [1]] in the α-PSRO population (Player 1 only has access to strategy 2, Players 2 and 3 only have access to strategy 1) 2. The PBR to (2,1,1) for player 2 is 2, and no other player has a PBR on this round. We add 2 to the strategy space of player 2, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1)]. 3. α-Rank puts all its mass on (2,2,1). The PBR to (2,2,1) for player 3 is 2, and no other player has a PBR on this round. We add strategy 2 to player 3’s strategy space, which changes the space of available joint strategies to [(2, 1, 1), (2, 2, 1), (2, 2, 2)]. 4. α-Rank puts all its mass on (2,2,2). 
The PBR to (2,2,2) for player 1 is 1, and no other player has a PBR on this round. We add strategy 1 to player 1's strategy space, which changes the space of available joint strategies to [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 2, 1), (2, 2, 2)].
5. Define σ as the α-Rank probabilities of the meta-game. Player 1 playing strategy 2 has a PBR score of σ((1, 1, 1)) + σ((1, 2, 1)), and the same player playing strategy 3 has a PBR score of σ((1, 2, 1)), which is lower than the PBR score of playing strategy 2. No other player has a valid PBR for this round, and therefore, α-PSRO terminates.
In the above example, pictured in Fig. B.7, a relatively weak joint strategy (strategy (3,2,1)) bars agents from finding the optimal joint strategy of the game (strategy (3,2,3)): getting to this joint strategy requires coordinated changes between agents, and is therefore closely related to the common problem of Action/Equilibrium Shadowing mentioned in (Matignon et al., 2012).

B.5 PROOF OF PROPOSITION 5

Proposition 5. A constant-sum game is denoted as win-loss if M^k(s) ∈ {0, 1} for all k ∈ [K] and s ∈ S. BR is compatible with PBR in win-loss games in the two-player single-population case.
Proof. We manipulate the best-response objective as follows:
M^1(ν, π) = Σ_{s∈S} π(s) M^1(ν, s) = Σ_{s∈S} π(s) 1[M^1(ν, s) > M^2(ν, s)] .
Noting that the final line is the single-population PBR objective, we are done.

B.6 PROOF OF PROPOSITION 6

Proposition 6. A symmetric two-player game is denoted monotonic if there exists a function f : S → R and a non-decreasing function σ : R → R such that M^1(s, ν) = σ(f(s) − f(ν)). BR is compatible with PBR in monotonic games in the single-population case.
Proof. Rewriting the objectives given that the game is monotonic, the value-based objective becomes
Σ_{k=1}^K π_k M^1(s, s_k) = Σ_{k=1}^K π_k σ(f(s) − f(s_k)) .
Given that the only condition we have on σ is its non-decreasing character, this objective does not reduce to maximizing f(s) in the general case. The objective for PBR is
Σ_{k=1}^K π_k 1[M^1(s, s_k) > M^2(s, s_k)] = Σ_{k=1}^K π_k 1[σ(f(s) − f(s_k)) > σ(f(s_k) − f(s))] .
Since σ is non-decreasing,
σ(f(s) − f(s_k)) > σ(f(s_k) − f(s)) ⇒ f(s) > f(s_k) ,
and conversely,
f(s) > f(s_k) ⇒ σ(f(s) − f(s_k)) ≥ σ(f(s_k) − f(s)) .
Without loss of generality, we reorder the strategies such that if i < k, then f(s_i) ≤ f(s_k). Let s_v maximize the value objective. Therefore, by monotonicity, s_v maximizes σ(f(s) − f(s_K)). Three possibilities then ensue.
If there exists s such that σ(f(s) − f(s_K)) > σ(f(s_K) − f(s)), then σ(f(s_v) − f(s_K)) > σ(f(s_K) − f(s_v)), since s_v maximizes σ(f(s) − f(s_K)) and σ is non-decreasing. Consequently, s_v maximizes the PBR objective. Indeed, let us remark that for all k ≤ K, we have that σ(f(s_v) − f(s_k)) > σ(f(s_k) − f(s_v)), since
σ(f(s_v) − f(s_k)) ≥ σ(f(s_v) − f(s_K)) > σ(f(s_K) − f(s_v)) ≥ σ(f(s_k) − f(s_v)) .
Else, if there does not exist any policy s such that σ(f(s) − f(s_K)) > σ(f(s_K) − f(s)), that is, for all s, σ(f(s) − f(s_K)) ≤ σ(f(s_K) − f(s)): since s_K is a possible solution to the value objective, σ(f(s_v) − f(s_K)) = σ(f(s_K) − f(s_v)). Let n be the integer such that
s_n = argmax{ f(s_k) : s_k ∈ Population, ∃s s.t. σ(f(s) − f(s_k)) > σ(f(s_k) − f(s)) } .
If s_n exists, then we have that for all s_i such that f(s_i) > f(s_n), σ(f(s_v) − f(s_i)) = σ(f(s_i) − f(s_v)). The PBR objective is
Σ_{k=1}^K π_k 1[σ(f(s) − f(s_k)) > σ(f(s_k) − f(s))] ,
which, according to our assumptions, is equivalent to
Σ_{k=1}^n π_k 1[σ(f(s) − f(s_k)) > σ(f(s_k) − f(s))] .
We know that for all i ≤ n, σ(f(s_v) − f(s_i)) > σ(f(s_i) − f(s_v)), and therefore, s_v maximizes the PBR objective.
Finally, if s_n doesn't exist, then any policy is a solution to the PBR objective, and therefore s_v is.
A toy example showing the compatibility between Best Response and Preference-based Best Response is shown in Fig. B.8. The setting is that of a monotonic game where every strategy is assigned a number. Strategies are then dominated by all strategies with a higher number than theirs. We compute BR and PBR on an initial population composed of one strategy that we choose to be dominated by every other strategy. Any strategy dominating the current population is a valid solution for PBR, as represented in Fig. B.8c; whereas, if we consider that the game is monotonic with σ a strictly increasing function, only one strategy maximizes Best Response, strategy N, and it is thus the only solution of BR, as shown in Fig. B.8d. As we can see, the solution of BR is part of the possible solutions of PBR, demonstrating the result of Proposition 6: BR is compatible with PBR in monotonic games.

B.7 PROOF OF PROPOSITION 7

Proposition 7. Consider symmetric win-loss games where outcomes between deterministic strategies are deterministic. A preference-based RL agent (i.e., an agent aiming to maximize its probability of winning against a distribution π of strategies {s_1, . . . , s_N}) optimizes exactly the PBR objective (1).
Proof. Commencing with the above preference-based RL objective, we calculate as follows:
argmax_σ P(σ beats Σ_{i=1}^N π_i s_i)
= argmax_σ E_i[P(σ beats s_i | index i selected)]
= argmax_σ Σ_{i=1}^N π_i P(σ beats s_i)
= argmax_σ Σ_{i=1}^N π_i 1[σ receives a positive expected payoff against s_i] ,
with the final equality holding whenever game outcomes between two deterministic strategies are deterministic. Note that this is precisely the PBR objective (1).

B.8 PROOF OF PROPOSITION 8

Proposition 8. For symmetric two-player zero-sum games where off-diagonal payoffs have equal magnitude, all NE have support contained within that of the single-population α-Rank distribution.
Proof. In the single-population case, the support of the α-Rank distribution is simply the (unique) sink strongly-connected component of the response graph (uniqueness follows from the fact that the response graph, viewed as an undirected graph, is fully-connected). We will now argue that for a strategy s in the sink strongly-connected component and a strategy z outside the sink strongly-connected component, we have
Σ_{a∈S} π(a) M^1(s, a) > Σ_{a∈S} π(a) M^1(z, a) .   (3)
This inequality states that when an opponent plays according to π, the expected payoff to the row player is greater if they defect to s whenever they would have played z. This implies that if a supposed symmetric Nash equilibrium contains a strategy z outside the sink strongly-connected component in its support, then it could receive higher reward by playing s instead, which contradicts the fact that it is an NE.
We show (3) by proving a stronger result, namely that s dominates z as a strategy. Firstly, since s is in the sink strongly-connected component and z is not, s beats z, and so M^1(s, z) > M^1(s, s) = M^1(z, z) > M^1(z, s). Next, if a ∉ {s, z} is in the sink strongly-connected component, then a beats z, and so M^1(s, a) > M^1(z, a) if s beats a, and M^1(s, a) = M^1(z, a) otherwise. Finally, if a ≠ s, z is not in the sink strongly-connected component, then M^1(s, a) = M^1(z, a) if z beats a, and M^1(s, a) > M^1(z, a) otherwise. Thus, (3) is proven, and the result follows.

B.9 PROOF OF PROPOSITION 9

Proposition 9.
In a symmetric two-player zero-sum game, there exists an NE with support contained within that of the α-Rank distribution. Proof. Consider the restriction of the game to the strategies contained in the sink strongly-connected component of the original game. Let π be an NE for this restricted game, and consider this as a distribution over all strategies in the original game (putting 0 mass on strategies outside the sink component). We argue that this is an NE for the full game, and the statement follows. To see this, note that since any strategy outside the sink strongly-connected component receives a non-positive payoff when playing against a strategy in the sink strongly-connected component, and that for at least one strategy in the sink strongly-connected component, this payoff is negative. Considering the payoffs available to the row player when the column player plays according to π, we observe that the expected payoff for any strategy outside the sink strongly-connected component is negative, since every strategy in the sink strongly-connected component beats the strategy outside the component. The payoff when defecting to a strategy in the sink strongly-connected component must be non-positive, since π is an NE for the restricted game. C ADDITIONAL DETAILS ON EXPERIMENTS C.1 EXPERIMENTAL PROCEDURES The code backend for the Poker experiments used OpenSpiel (Lanctot et al., 2019). Specifically, we used OpenSpiel’s Kuhn and Leduc poker implementations, and exact best responses were computed by traversing the game tree (see implementation details in https://github.com/deepmind/open_spiel/blob/master/open_spiel/ python/algorithms/best_response.py). 100 game simulations were used to estimate the payoff matrix for each possible strategy pair. Although the underlying Kuhn and Leduc poker games are stochastic (due to random initial card deals), the associated meta-games are essentially deterministic (as, given enough game simulations, the mean payoffs are fixed). The subsequent PSRO updates are, thus, also deterministic. Despite this, we report averages over 2 runs per PSROM, primarily to capture stochasticity due to differences in machine-specific rounding errors that occur due to the distributed computational platforms we run these experiments on. For experiments involving α-Rank, we conduct a full sweep over the ranking-intensity parameter, α, following each iteration of α-PSRO. We implemented a version of α-Rank (building on the OpenSpiel implementation https://github.com/deepmind/open_spiel/blob/master/ open_spiel/python/egt/alpharank.py) that used a sparse representation for the underlying transition matrix, enabling scaling-up to the large-scale NFG results presented in the experiments. For experiments involving the projected replicator dynamics (PRD), we used uniformly-initialized meta-distributions, running PRD for 5e4 iterations, using a step-size of dt = 1e− 3, and exploration parameter γ = 1e− 10. Time-averaged distributions were computed over the entire trajectory. C.2 DOMAIN DESCRIPTION AND GENERATION C.2.1 NORMAL FORM GAMES GENERATION Algorithms 2 to 4 provide an overview of the procedure we use to randomly-generate normal-form games for the oracle comparisons visualized in Fig. 2. 
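As a complement to the pseudocode in Algorithms 2 to 4 below, the following is a minimal NumPy sketch of this generation procedure. Broadcasting each player's transitive "skill" vector into the joint-action payoff tensor is one natural reading of the listings, and all names are illustrative rather than taken from the authors' implementation.

import numpy as np

def generate_transitive(actions, players, mean_values=(0.0, 1.0),
                        mean_probs=(0.5, 0.5), var=0.1, rng=None):
    # One "skill" vector f_k per player; each entry is Gaussian around a randomly chosen mean.
    rng = np.random.default_rng() if rng is None else rng
    f = [rng.normal(rng.choice(mean_values, size=actions, p=mean_probs), np.sqrt(var))
         for _ in range(players)]
    tensors = []
    for k in range(players):
        t = np.zeros((actions,) * players)
        for i in range(players):
            shape = [1] * players
            shape[i] = actions
            contrib = f[i].reshape(shape)  # broadcast f_i along player i's axis
            t = t + (contrib if i == k else -contrib / (players - 1))
        tensors.append(t)
    return tensors

def generate_cyclic(actions, players, var=0.4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    tensors = []
    for k in range(players):
        c = rng.normal(0.0, np.sqrt(var), size=(actions,) * players)
        other_axes = tuple(i for i in range(players) if i != k)
        # Subtract the aggregate over all other players' axes, as in Algorithm 3.
        tensors.append(c - c.sum(axis=other_axes, keepdims=True))
    return tensors

def generate_game(actions, players, rng=None):
    t = generate_transitive(actions, players, rng=rng)
    c = generate_cyclic(actions, players, rng=rng)
    return [t[k] + c[k] for k in range(players)]  # one payoff tensor per player

For instance, generate_game(30, 5) matches in size the largest instances considered in Fig. 2.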
Algorithm 2 GenerateTransitive(Actions, Players, mean_value = [0.0, 1.0], mean_probability = [0.5, 0.5], var = 0.1)
1: T = []
2: for Player k do
3:   Initialize f_k = [0] * Actions
4:   for Action a ≤ Actions do
5:     Randomly sample mean µ from mean_value according to mean_probability
6:     f_k[a] ∼ N(µ, var)
7: for Player k do
8:   T[k] = f_k − (1 / (|Players| − 1)) Σ_{i≠k} f_i
9: Return T

Algorithm 3 GenerateCyclic(Actions, Players, var = 0.4)
1: C = []
2: for Player k do
3:   Initialize C[k] ∼ N(0, var), Shape(C[k]) = (Actions_{First Player}, . . . , Actions_{Last Player})
4: for Player k do
5:   Sum = Σ over actions a_i of all players i ≠ k of C[k][a_1, . . . , a_{k−1}, :, a_{k+1}, . . .]
6:   Shape(Sum) = (1, . . . , 1, Actions_{Player k}, 1, . . . , 1)
7:   C[k] = C[k] − Sum
8: Return C

Algorithm 4 General Normal Form Games Generation(Actions, Players)
1: Generate matrix lists T = GenerateTransitive(Actions, Players), C = GenerateCyclic(Actions, Players)
2: Return [T[k] + C[k] for Player k]

C.2.2 KUHN AND LEDUC POKER

K-player Kuhn poker is played with a deck of K + 1 cards. Each player starts with 2 chips and 1 face-down card, and antes 1 chip to play. Players either bet (raise/call) or fold iteratively, until each player is either in (has contributed equally to the pot) or has folded. Amongst the remaining players, the one with the highest-ranked card wins the pot.
Leduc Poker, in comparison, has a significantly larger state space. Players in Leduc have unlimited chips, receive 1 face-down card, ante 1 chip to play, with subsequent bets limited to 2 and 4 chips in rounds 1 and 2. A maximum of two raises are allowed in each round, and a public card is revealed before the second round.

C.3 PBR COMPUTATION IN NORMAL FORM GAMES

The algorithms used to compute PBR and PBR-SCORE in the games generated by the algorithm described in Section C.2.1 are shown in Algorithms 5 and 6. Note that they compute the multi-population version of PBR. PCS-SCORE is computed by pre-computing the full game's SSCC, and computing the proportion of currently selected strategies in the empirical game that also belong to the full game's SSCC. Note that the PBR-SCORE and PCS-SCORE are useful measures for assessing the quality of convergence in our examples, in a manner analogous to NASHCONV. The computation of these scores is, however, not tractable in general games. Notably, this is also the case for NASHCONV (as it requires computation of player-wise best responses, which can be problematic even in moderately-sized games). Despite this, these scores remain a useful way to empirically verify the convergence characteristics in small games where they can be tractably computed.
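Alongside the pseudocode in Algorithms 5 and 6 below, a minimal Python sketch of the multi-population PBR computation is given here. The data layout (one payoff tensor per player, joint profiles as tuples, and their α-Rank probabilities) is an assumption made for illustration and is not the interface of the released code.

def pbr_score(candidate, payoff_tensor, player, joint_profiles, profile_probs):
    # Probability mass of joint profiles against which deviating to `candidate`
    # strictly improves the current player's payoff (cf. Algorithm 5).
    score = 0.0
    for joint, prob in zip(joint_profiles, profile_probs):
        deviated = list(joint)
        deviated[player] = candidate
        score += prob * float(payoff_tensor[tuple(deviated)] > payoff_tensor[tuple(joint)])
    return score

def pbr_oracle(payoff_tensor, player, num_strategies, joint_profiles, profile_probs):
    # Return the strategy with maximal PBR score for the current player (cf. Algorithm 6).
    scores = [pbr_score(s, payoff_tensor, player, joint_profiles, profile_probs)
              for s in range(num_strategies)]
    best = max(range(num_strategies), key=lambda s: scores[s])
    return best, scores[best]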
Algorithm 5 PBR Score(Strategy S, Payoff Tensor, Current Player Id, Joint Strategies, Joint Strategy Probability)
1: New strategy score = 0
2: for Joint strategy J, Joint probability P in Joint Strategies, Joint Strategy Probability do
3:   New strategy = J
4:   New strategy[Current Player Id] = S
5:   New strategy payoff = Payoff Tensor[New strategy]
6:   Old strategy payoff = Payoff Tensor[J]
7:   New strategy score += P * (New strategy payoff > Old strategy payoff)
8: Return New strategy score

Algorithm 6 PBR(Payoff Tensor list LM, Joint Strategies per player PJ, Alpharank Probability per Joint Strategy PA, Current Player Id)
1: maxPBR = 0
2: maxstrat = None
3: for Strategy S available to Current Player among all possible strategies do
4:   score = PBR Score(S, LM[Current Player Id], Current Player Id, PJ, PA)
5:   if score > maxPBR then
6:     maxPBR = score
7:     maxstrat = S
8: Return maxPBR, maxstrat

C.4 ADDITIONAL ORACLE COMPARISON RESULTS

We present additional oracle comparisons in Fig. C.9, all of these in the multi-population case.

C.5 NOTES ON RECTIFIED NASH PERFORMANCE

This section provides additional insights into the Rectified Nash results detailed in Section 5. We begin with an important disclaimer that Rectified Nash was developed solely with symmetric games in mind. As Kuhn Poker and Leduc Poker are not symmetric games, they lie beyond the theoretical scope of Rectified Nash. Nevertheless, comparing the performance of rectified and non-rectified approaches from an empirical perspective yields insights, which may be useful for future investigations that seek to potentially extend and apply rectified training approaches to more general games.
As noted in the main paper, the poor performance of PSRO using Rectified Nash (in Fig. 3) is initially surprising as it indicates premature convergence to a high-NASHCONV distribution over the players' policy pools. Investigating this further led to a counterintuitive result for the domains evaluated: Rectified Nash was, roughly speaking, not increasing the overall diversity of behavioral policies added to each player's population pool. In certain regards, it even prevented diversity from emerging. To more concretely pinpoint the issues, we detail below the first 3 iterations of PSRO(Rectified Nash, BR) in Kuhn Poker. Payoff matrices at each PSRO iteration are included in Tables 6a to 6c. For clarity, we also include the 5 best responses trained by Rectified Nash and the policies they were trained against, in their order of discovery: 2 policies for Player 1 (in Fig. C.11) and 3 policies for Player 2 (in Fig. C.12).
1. Iteration 0: both players start with uniform random policies.
2. Iteration 1:
   • Player 1 trains a best response against Player 2's uniform random policy; its policy set is now the original uniform policy, and the newly-computed best response.
   • Player 2 trains a best response against Player 1's uniform random policy; its policy set is now the original uniform policy, and the newly-computed best response.
   • Player 2's best response beats both of Player 1's policies.
   • Payoff values are represented in Table 6a.
3. Iteration 2:
   • By Rectified Nash rules, Player 1 only trains policies against policies it beats; i.e., only against Player 2's random policy, and thus it adds the same policy as in iteration 1 to its pool.
   • Player 2 trains a best response against the Nash mixture of Player 1's first best response and random policy. This policy also beats all policies of Player 1.
   • Payoff values are represented in Table 6b.
4.
Iteration 3: • Player 1 only trains best responses against Player 2’s random policy. • Player 2 only trains best responses against the Nash of Player 1’s two unique policies. This yields the same policies for player 2 as those previously added to its pool (i.e., a loop occurs). • Payoff values are represented in Table 6c 5. Rectified Nash has looped. As noted above, Rectified Nash loops at iteration 3, producing already-existing best responses against Player 1’s policies. Player 1 is, therefore, constrained to never being able to train best responses against any other policy than Player 2’s random policy. In turn, this prevents Player 2 from training additional novel policies, and puts the game in a deadlocked state. Noise in the payoff matrices may lead to different best responses against the Nash Mixture of policies, effectively increasing diversity. However, this effect did not seem to manifest in our experiments. To more clearly illustrate this, we introduce a means of evaluating the policy pool diversity, counting the number of unique policies in the pool. Specifically, given that Kuhn poker is a finite state game, comparing policies is straightforward, and only amounts to comparing each policy’s output on all states of the games. If two policies have exactly the same output on all the game’s states, they are equal; otherwise, they are distinct. We plot in Fig. C.10 the policy diversity of each meta-solver, where we observe that both Rectified Nash and Rectified PRD discover a total of 5 different policies. We have nevertheless noticed that in a few rare seeds, when using low number of simulations per payoff entry (Around 10), Rectified Nash was able to converge to low exploitability scores, suggesting a relationship between payoff noise, uncertainty and convergence of Rectified Nash whose investigation we leave for future work. We also leave the investigation of the relationship between Policy Diversity and Exploitability for future work, though note that there appears to be a clear correlation between both. Overall, these results demonstrate that the Rectified Nash solver fails to discover as many unique policies as the other solvers, thereby plateauing at a low NASHCONV. Finally, regarding Rectified PRD, which performs better in terms of NASHCONV when compared to Rectified Nash, we suspect that payoff noise in combination with the intrinsic noise of PRD, plays a key factor - but those two are not enough to deterministically make Rectified PRD converge to 0 exploitability, since in the seed that generated Fig. C.10, it actually doesn’t (Though it indeed converges in Fig. 3). We conjecture this noisier behavior may enable Rectified PRD to free itself from deadlocks more easily, and thus discover more policies on average. A more detailed analysis of Rectified PRD is left as future work. D α-RANK IN DETAIL In this section we give further details of α-Rank; for a full description, see Omidshafiei et al. (2019). Essentially α-Rank defines a directed response graph over the pure strategy profiles of the game under study, by indicating when a player has an incentive to make a unilateral deviation from their current strategy. An irreducible (noisy) random walk over this graph is then defined, and the strategy profile rankings are obtained by ordering the masses of this Markov chain’s unique invariant distribution π. The Markov transition matrix C that specifies this random walk is defined as follows for the multipopulation case; see Omidshafiei et al. (2019) for the single-population case. 
Consider a pure strategy profile s ∈ S, and let σ = (σ^k, s^{−k}) be the pure strategy profile which is equal to s, except for player k, which uses strategy σ^k ∈ S^k instead of s^k. Let C_{s,σ} denote the transition probability from s to σ, and C_{s,s} the self-transition probability of s, with each defined as:
C_{s,σ} = η (1 − exp(−α(M^k(σ) − M^k(s)))) / (1 − exp(−αm(M^k(σ) − M^k(s))))   if M^k(σ) ≠ M^k(s),
C_{s,σ} = η / m   otherwise,
C_{s,s} = 1 − Σ_{k∈[K]} Σ_{σ : σ^k ∈ S^k \ {s^k}} C_{s,σ} ,
where η = (Σ_l (|S^l| − 1))^{−1}. If two strategy profiles s and s′ differ in more than one player's strategy, then C_{s,s′} = 0. Here α ≥ 0 and m ∈ N are parameters to be specified; the form of this transition probability is described by evolutionary dynamics models from evolutionary game theory and is explained in more detail in Omidshafiei et al. (2019). Large values of α correspond to higher selection pressure in the evolutionary model under consideration; the version of α-Rank used throughout this paper corresponds to the limiting invariant distribution as α → ∞, under which only strategy profiles appearing in the sink strongly-connected components of the response graph can have positive mass.

E TOWARDS THEORETICAL GUARANTEES FOR THE PROJECTED REPLICATOR DYNAMICS

Computing Nash equilibria is intractable for general games and can suffer from a selection problem (Daskalakis et al., 2009); therefore, it quickly becomes computationally intractable to employ an exact Nash meta-solver in the inner loop of a PSRO algorithm. To get around this, Lanctot et al. (2017) use regret minimization algorithms to attain an approximate correlated equilibrium (which is guaranteed to be an approximate Nash equilibrium under certain conditions on the underlying game, such as two-player zero-sum). A dynamical system from evolutionary game theory that also converges to equilibria under certain conditions is the replicator dynamics (Taylor and Jonker, 1978; Schuster and Sigmund, 1983; Cressman and Tao, 2014; Bloembergen et al., 2015), which defines a dynamical system over distributions of strategies (π^k_s(t) | k ∈ [K], s ∈ S^k), given by
π̇^k_s(t) = π^k_s(t) [ M^k(s, π^{−k}(t)) − M^k(π^k(t)) ] ,   for all k ∈ [K], s ∈ S^k ,   (4)
with an arbitrary initial condition. Lanctot et al. (2017) introduced a variant of replicator dynamics, termed projected replicator dynamics (PRD), which projects the flow of the system so that each distribution π^k(t) lies in the set ∆^γ_{S^k} = {π ∈ ∆_{S^k} | π_s ≥ γ/(|S^k| + 1), ∀s ∈ S^k}; see, e.g., Nagurney and Zhang (2012) for properties of such projected dynamical systems. This heuristically enforces additional "exploration" relative to standard replicator dynamics, and was observed to provide strong empirical results when used as a meta-solver within PSRO. However, the introduction of projection potentially severs the connection between replicator dynamics and Nash equilibria, and the theoretical game-theoretic properties of PRD were left open in Lanctot et al. (2017). Here, we take a first step towards investigating theoretical guarantees for PRD. Specifically, we highlight a possible connection between α-Rank, the calculation of which requires no simulation, and a constrained
1. What is the main contribution of the paper, and how does it extend the original PSRO paper? 2. What is the significance of using an $\alpha$-Rank based metasolver instead of projected replicator dynamics and Nash equilibria? 3. Can you explain the concept of preference-based Best-Response (PBR) oracle and its difference from the standard Best-Response (BR) oracle? 4. How does the paper perform empirical experiments on different versions of poker, and what are the findings? 5. What are the clarifications needed regarding tractability, convergence, dependence on alpha, oracle details, compatibility between BR and PBR, and experimental protocols?
Review
Review
This paper extends the original PSRO paper to use an $\alpha$-Rank based metasolver instead of the projected replicator dynamics and Nash equilibria based metasolvers in the original. To this end, the paper modifies the original Best-Response (BR) oracle, since it can ignore some strategies in the SSCC that defines $\alpha$-Rank, and introduces the idea of a _preference-based_ Best-Response (PBR) oracle. The need for a different oracle is well justified, especially with the visualization in the Appendix. The main contribution that the paper seems to be going for is a theoretical analysis of $\alpha$-Rank based PSRO compared to standard PSRO. From the PBR's description (especially in Sec 4.3) it seems the paper is interested in expanding the population with novel agents rather than finding the "best" single agent, which is not well defined for complex games with intransitivities. Nevertheless, it seems that BR is mostly compatible with PBR for symmetric zero-sum two-player games.
The paper performs empirical experiments on different versions of poker. The first set of experiments compares BR and PBR with an $\alpha$-Rank based metasolver on random games and finds that PBR does better than BR at population expansion as defined. The second set of experiments compares the metasolvers. $\alpha$-Rank performs similarly to Nash where applicable. Moreover, it's faster than Uniform (fictitious self-play) on Kuhn. Then the paper tacks on the MuJoCo soccer experiment as a teaser for the ICLR crowd.
Overall, the paper is quite interesting from the perspective of multiagent learning and I would lean towards accepting. However, the paper needs to clarify a lot of details to have any chance of being reproducible.
** Clarifications needed:
- Tractability of PBR-Score and PCS-Score: It's unclear how tractable these are. Moreover, these were only reported for random games. What did these scores look like for the Poker games? Could you clarify how exactly these were computed?
- It's somewhat unclear what the lack of convergence without a novelty-bound oracle implies. Does this have to do with intransitivities in the game?
- Dependence on $\alpha$? The original $\alpha$-Rank paper said a lot about the importance of choosing the right value for $\alpha$. How were these chosen? Do you do the sweep after every iteration of PSRO?
- Oracle in experiments? The paper fails to mention the details about the Oracles being used in the experiments. They weren't RL oracles, but more details would be useful.
- BR not compatible with PBR, albeit not the other way around, meaning one of the solutions you get from PBR might be BR, but can we say which one?
- For MuJoCo soccer, was it true PSRO or cognitive hierarchy? In general, the original PSRO paper was partly talking about the scalable approach via DCH. This paper doesn't mention that at all. So were the MuJoCo experiments with plain PSRO? What was the exact protocol there? From the appendix it's unclear how the team-vs-team meta game works with individual RL agents. Moreover, how are the meta-game evaluation matrices computed in general? How many samples were needed for the Poker games and MuJoCo soccer?
- The counterexamples in Appendix B3 are quite interesting. Do you have any hypotheses about the disjoint support from games' correlated equilibria?
ICLR
Title Encoder-Agnostic Adaptation for Conditional Language Generation Abstract Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient. 1 INTRODUCTION Large-scale language models have been shown to dramatically improve the performance of natural language understanding (NLU) systems on a broad range of tasks (Peters et al., 2018; Devlin et al., 2018; Radford & Salimans, 2018; McCann et al., 2017). The dominant paradigm is to pretrain a self attention-based language model on a large corpus of unlabeled text and then finetune the language model with an additional task-specific classification head on supervised data. Optimizing the effectiveness of this approach has been the focus of much study (Houlsby et al., 2019; Wang et al., 2019; Chronopoulou et al., 2019). Given the success of pretraining for NLU tasks, how can large language models best be adapted for conditional language generation? Ideally, one should only need to train a large language model once and then apply it as part of the decoder to a range of tasks with different source modalities (e.g., text, images, bits). In the encoder/decoder framework, a task-specific encoder can encode source information into a continuous vector. The central question is thus how to adapt a pretrained decoder to effectively utilize arbitrary source information from an encoder (encoder-agnostic adaptation). Considering the high quality of samples from large languae models (Radford et al., 2019), it is natural to expect encoder-agnostic adaptation to improve the coherence and grammaticality of conditional text generation over training a decoder from scratch, even when the source modality is not text (e.g. image captioning or class-conditional generation). Unfortunately, past results indicate otherwise. Edunov et al. (2019) show, for example, that a straightforward extension of contextual representations (Peters et al., 2018) to the conditional generation setting actually hurts performance compared to a model without any pretraining. Other pretraining approaches for language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019) have demonstrated strong performance on text-to-text tasks, but these methods are constrained to tasks where the source is natural language and do not address the encoder-agnostic setting. In this work, we consider several different approaches to the problem of encoder-agnostic adaptation. We first observe that standard adaptation approaches perform poorly on this task. 
We hypothesize that because these techniques require relearning significant parts of the network structure to inject contextual conditioning, they move the parameters too far from the pretrained values. In contrast, Radford et al. (2019) observe that even trivial conditioning with the original model produces reasonable zero-shot generations without finetuning. These results motivate an approach that learns the correct conditioning to control the model’s output, which we call pseudo self attention. The idea is to learn a task-specific encoder that injects pseudo history into a pretrained self attention model. Because self attention works with sets of any size, the model can immediately utilize or ignore this history. Finetuning adapts the model to this new input while training a task-specific encoder. Experiments utilize the GPT-2 (Radford et al., 2019) transformer as a pretrained model. We consider four diverse generation tasks spanning a range of source modalities: class-conditional generation, document summarization, story generation, and image paragraph captioning. Across all tasks, we find that pseudo self attention outperforms the other pretraining methods and is the most consistent. As a practical tool, pseudo self attention improves performance compared to a baseline without pretraining by large margins without sacrificing adherence to the source, even for tasks with large amounts of supervised data. We further demonstrate that the approach is data-efficient and produces qualitatively more coherent outputs. Code is available at https://github.com/anon37234/ encoder-agnostic-adaptation. 2 RELATED WORK Pretrained Decoder Transfer learning for NLG Natural language generation (NLG) tasks have a long history of incorporating unconditional language models with conditional input(Bahl et al., 1983; Koehn et al., 2003). These approaches traditionally use the noisy channel model (i.e., Bayes’ rule), and n-gram models as the language model. Recent adaptations of these ideas include the Neural Noisy Channel (Yu et al., 2017) as well as “fusion” methods (Koehn et al., 2003; Gulcehre et al., 2015; Sriram et al., 2018; Stahlberg et al., 2018) in which the output logits of a language model and a conditional model are combined to calculate the output probabilities. We consider this class of transfer learning as a baseline in a preliminary experiment (see Section 4.1), but focus on alternative “deep” approaches that incorporate the language model weights as an integral part of the model instead of an add-on at the end. Along these lines, Ramachandran et al. (2017) propose a finetuning-based method for machine translation with LSTMs, in which some of the layers of the LSTM are initialized with pretrained language model weights. As their method is specific to LSTMs, however, it is incompatible with modern transformer architectures. Pretraining-Based Transfer Learning for NLG Zhang et al. (2019) use BERT in the encoder and decoder of a summarization model via a unique cloze generative process. They demonstrate strong summarization performance, but the value of pretraining relative to other model components is not clear, and the cloze process significantly reduces the practicality of the model. More related, Edunov et al. (2019) experiment with a representation-based approach for applying ELMo (Peters et al., 2018) to the source and target sides of a standard seq2seq model separately. 
Their approach consistently improves performance when applied to the source, but hurts performance when applied to the decoder. We consider such a representation approach as a baseline in this work. Most recently, several studies experiment with BERT-like masking approaches that are compatible with natural language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019). While these works demonstrate impressive performance, they are constrained to text-to-text tasks because they do not have a way to handle arbitrary conditional information. Whereas these works study pretraining methods that optimize transfer for text-to-text tasks, our study considers the separate problem of adapting a fixed pretrained model to arbitrary source conditioning. Concurrent with this work, Golovanov et al. (2019) propose a similar approach to pseudo self attention and report initial experiments with dialogue generation. This study complements ours with positive results on dialogue generation, though we aim for experimental data over a wide range of language generation tasks and input modalities and comparison to strong encoder-agnostic baselines.

3 METHODS

We assume that we have a pretrained language model, p(y) = p(y_1, . . . , y_T; θ), that the model is an autoregressive neural network, and that it is based on self attention to implement conditioning, i.e.,
SA(Y) = softmax((YW_q)(YW_k)^⊤)(YW_v),
where the input is Y ∈ R^{T×D} for hidden dimension D, the parameters W_k, W_v, W_q ∈ R^{D×D′} represent the key, value, and query projections respectively, and the output is in R^{T×D′} (Footnote 1). We are interested in using this model to estimate the conditional probability p(y | x) for an arbitrary input x for which we have a small number of supervised (x, y) pairs. The goal is to learn a model on this new data that best makes use of the pretrained model p(y) with a method that is agnostic to the form of x.
All models are based on the encoder/decoder architecture, and for each we follow the same high-level procedure: First, some of the weights of the decoder are initialized with weight values from a pretrained language model. Next, a task-specific encoder and all non-pretrained decoder weights are randomly initialized. Finally, the entire model is trained/finetuned end-to-end using the supervised data for the given task. In all cases, the input and output embeddings are tied. Each approach uses all of the pretrained weights, differing only in where and how they use pretrained weights in the decoder. Further experimental details are included in appendix Section A.

Baseline 1: Repr-Transformer The first approach considered (Fig 1a) utilizes the pretrained LM to produce a general-purpose representation of the target text before introducing the source information. For this method, a standard transformer decoder is used with the target word embeddings replaced by the output representation of the pretrained language model. In preliminary experiments, we considered both fixing and updating these representations and found that a fixed weighted-averaging ("ELMo-Style") method performed better, consistent with Edunov et al. (2019). One possible downside to this approach is that the conditioning information from the encoder is injected after all of the pretrained weights.

Baseline 2: Context-Attn The second approach (Fig 1b) considers initializing a standard transformer decoder with the shared weights of a pretrained LM. The newly added context attention weights at each layer are randomly initialized.
Compared to Repr-Transformer, the conditioning information is injected alongside the pretrained weights. However, the randomly initialized context attention block may interfere with the carefully co-tuned pretrained weights of the rest of the model. This interference may introduce optimization challenges and lead to reduced performance.
Footnote 1: In practice, many of these units ("heads") are stacked together via concatenation across the hidden dimension, followed by a final linear projection W_f ∈ R^{D×D}.

Proposed Model: Pseudo-Self A more radical approach to incorporating conditional information is the "zero-shot" model proposed by Radford et al. (2019). Instead of learning a representation for x and passing it into a context attention block, they note that an auto-regressive model, p(y_t | y_{<t}), is already a conditional model. If x is the same modality as y (e.g., both language), one can condition on x by prepending the source to the target: p(y_t | x, y_{<t}) = p(y_t | x y_{<t}) (Footnote 2). While this does not produce competitive models and is limited in its applicability, it is surprising that it works at all. Taking inspiration from this approach, we propose learning this contextualization in an encoder-agnostic way. Our approach, pseudo self attention, simply injects learned encoder conditioning directly into the pretrained self attention of the model. Assume that we have a matrix X ∈ R^{S×D} representing a size-S encoding of x, and define pseudo self attention as
PSA(X, Y) = softmax((YW_q) [XU_k ; YW_k]^⊤) [XU_v ; YW_v],
where [· ; ·] denotes concatenation along the length dimension and U_k, U_v ∈ R^{D×D′} are new parameters which project encoder outputs into the decoder self attention space. Because attention is inherently variable length, these additional inputs can be injected without changing the module and only act additively on the attention output. The full model is shown in Figure 1c (a minimal code sketch is given below). Compared to Context-Attn, the proposed approach only introduces new parameters in the self attention block, which we expect leads to only minimal interference. As the pretrained LM weights encode for generation capability, deviating less from this initialization may lead to better generation performance. We explore this quantitatively in Section 5.

4 EXPERIMENTS AND RESULTS

Experiments consider four diverse tasks spanning input modalities, dataset sizes, and information about the target contained in the source. Tasks are chosen to emphasize long-form targets to probe the generation capabilities of the different models in a conditional setting. Perplexity is used to measure overall performance and diversity of output, combined with standard task-specific metrics. For all tasks, we use GPT-2 small (117M parameters) (Radford et al., 2019) as the pretrained language model. GPT-2 small has 12 layers, 12 heads per layer, and a model dimension of 768 units; the Context-Attn and Pseudo-Self models use the same architecture. For the Repr-Transformer model, to avoid overfitting we use 6/8/512 layers/heads/dim for the decoder (in addition to the GPT-2 contextual representation network). All experiments use the same 50k-type BPE GPT-2 vocabulary.

4.1 PRELIMINARY: CLASS-CONDITIONAL GENERATION

We first consider a control experiment with a minimal encoder model. We consider producing class-conditional samples, e.g., p(y | x = 0) and p(y | x = 1), from the IMDb sentiment classification dataset (Maas et al.), similar to previous works for sentiment transfer (Shen et al., 2017; Zhao et al., 2018). We set x to be a sentiment bit (positive/negative), and the movie review as the target y.
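To make the pseudo self attention operation above concrete, the following is a minimal single-head PyTorch sketch. The scaling factor and the causal mask over target positions are standard transformer details omitted from the equation, and the class and variable names are illustrative rather than taken from the released code.

import torch
import torch.nn as nn

class PseudoSelfAttention(nn.Module):
    def __init__(self, d_model, d_head):
        super().__init__()
        # W_q, W_k, W_v would be initialized from the pretrained LM's self attention weights.
        self.w_q = nn.Linear(d_model, d_head, bias=False)
        self.w_k = nn.Linear(d_model, d_head, bias=False)
        self.w_v = nn.Linear(d_model, d_head, bias=False)
        # U_k, U_v are the only newly introduced (randomly initialized) parameters.
        self.u_k = nn.Linear(d_model, d_head, bias=False)
        self.u_v = nn.Linear(d_model, d_head, bias=False)

    def forward(self, x_enc, y):
        # x_enc: (batch, S, d_model) encoder output; y: (batch, T, d_model) decoder states.
        q = self.w_q(y)                                        # (batch, T, d_head)
        k = torch.cat([self.u_k(x_enc), self.w_k(y)], dim=1)   # (batch, S + T, d_head)
        v = torch.cat([self.u_v(x_enc), self.w_v(y)], dim=1)   # (batch, S + T, d_head)
        att = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return att @ v                                         # (batch, T, d_head)

Consistent with the equation, when X is empty the layer reduces exactly to the pretrained self attention, since the encoder states only extend the key and value sets.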
We maintain the original IMDb 25k/25k train/test split, with 2.5k reviews of the original training split held out for validation, and truncate reviews to 400 BPE tokens during training. Model quality is evaluated by perplexity, and adherence to the source bit x is evaluated by the sentiment classification accuracy of an external classifier on generated reviews, as in Shen et al. (2017). Reviews are generated via random sampling with a temperature of 0.7. For our external classifier we use fastText (Joulin et al., 2016), which has an accuracy of 90.1% on the IMDb test set.
Table 1 shows results for the conditional models, GPT-2 without finetuning, and Simple Fusion (Stahlberg et al., 2018). The GPT-2 model itself already shows a greatly reduced PPL compared to a baseline transformer. All pretraining methods further improve perplexity. The pseudo self attention approach significantly outperforms the other approaches in terms of class adherence. Despite being initialized as a language model, the approach only sees a decrease of 0.4% in classification accuracy compared to the randomly initialized model. In contrast, Repr-Transformer and Context-Attn see a decrease of 20.0% and 3.9%, respectively. We additionally report the results of Simple Fusion in Table 1. Compared to Pseudo-Self, it gives a worse PPL and inferior classification accuracy. Given the weak results, we focus on comparisons between the deep models for the rest of the paper.
Footnote 2: This method is most successful when hand-selected, task-dependent buffer words, such as "tl;dr" for summarization, are inserted between x and y_{<t} as well.

Table 1: Class-Conditional Generation on IMDb movie reviews. Classification accuracy is measured by a sentiment classifier trained on the IMDb training set. Bold indicates statistically significant best results at p ≤ 0.05.
Model          PPL      Cls Acc
Test set       -        90.1
GPT-2          41.21    -
Simple Fusion  38.31    65.1
Transformer    105.43   92.7
Repr-Trans     39.69    72.7
Context-Attn   40.74    88.8
Pseudo-Self    34.80    92.3

Table 2: Abstractive summarization on CNN/DM. † indicates pretraining of the encoder side. PointerGen+BU from Gehrmann et al. (2018), ELMo+SHDEMB from Edunov et al. (2019), BERT+Two-Stage from Zhang et al. (2019), UniLM+ExtLoss from Dong et al. (2019). Bold indicates statistically significant best results among general models and encoder-agnostic models at p ≤ 0.05.
Model             R1 / R2 / RL            PPL
PointerGen+BU     41.22 / 18.68 / 38.34   -
ELMo+SHDEMB†      41.56 / 18.94 / 38.47   -
BERT+Two-Stage†   41.38 / 19.34 / 38.37   -
UniLM+ExtLoss†    43.47 / 20.30 / 40.63   -
Transformer+Copy  39.94 / 17.73 / 37.09   8.21
Repr-Trans        37.09 / 13.77 / 33.99   13.58
Context-Attn      40.59 / 18.17 / 37.24   6.68
Pseudo-Self       40.72 / 18.38 / 37.46   6.43
Pseudo-Self+BU    41.62 / 18.66 / 38.46   6.43

4.2 DOCUMENT SUMMARIZATION

Abstractive document summarization requires the model to produce a long-form summary given a full news article. For these experiments, we use the non-anonymized CNN-Daily Mail dataset (Hermann et al., 2015). The dataset is comprised of 280k training examples of document-scale source news articles and corresponding 2-4 sentence target summaries. Summarization is a mature testbed with state-of-the-art models that use task-specific architecture modifications, so transfer learning methods need to be able to mesh well with these changes. We use the transformer version of the copy mechanism from Gehrmann et al. (2018) and employ bottom-up (BU) summarization attention pruning (Gehrmann et al., 2018).
Generation is conducted via beam search with a beam size of 5 and tri-gram blocking, consistent with the literature models (Edunov et al., 2019). Table 2 shows the performance of the models tested, with recent state-of-the-art models for comparison. Compared to the baseline model without pretraining, Pseudo-Self improves ROUGE-1 by 0.78, ROUGE-2 by 0.65, ROUGE-L by 0.37, and reduces PPL by 20%. The Context-Attn approach nearly matches these results for this task, but the Repr-Transformer approach performs more poorly. We additionally experiment with the bottom-up summarization attention pruning approach applied at inference time as in Gehrmann et al. (2018). With this modification, Pseudo-Self outperforms all literature models in ROUGE-1 except the text-to-text UniLM+ExtLoss, which uses joint pretraining of the source and target and is trained with an additional extractive loss. The performance of all of our models can potentially be further improved through a pretrained encoder.

4.3 CONDITIONAL STORY GENERATION

Conditional story generation with the WritingPrompts dataset (Fan et al., 2018) requires the model to produce an on-topic story given a short textual prompt. While summarization relies heavily on the encoder, this task gives more flexibility to the decoder. The dataset is well supervised, containing 300k single-sentence writing prompts (the source) and stories (the target). Following the preprocessing of Fan et al. (2018), we truncate the stories to 1000 tokens. Due to the story lengths, the total number of training tokens is on the order of 100 million, resulting in a large in-domain data setting. To compare models, we compute two metrics: perplexity (PPL) and prompt ranking. Perplexity assesses approximate quality and diversity, whereas prompt ranking measures the relevance of the story to the prompt. To calculate the prompt ranking, we use the procedure from Fan et al. (2018): For each story in the test set, the likelihood is evaluated under the model for the "true" corresponding prompt and 9 other randomly selected "fake" prompts from the test set. Then, the rank accuracy is the percentage of stories for which the model gave the highest likelihood to the correct prompt.

Table 3: Story generation on the WritingPrompts dataset. Rank acc. refers to the top-1 prompt ranking accuracy metric described in Section 4.3. (Experiments use the GPT-2 BPE scheme, so PPL numbers are not directly comparable to those reported in Fan et al. (2018).) Bold indicates statistically significant best results at p ≤ 0.05.
    Model          PPL     Rank Acc.
    Transformer    30.58   80.6
    Repr-Trans     21.16   76.7
    Context-Attn   >5000   9.3
    Pseudo-Self    21.21   81.8

Table 4: Image paragraph captioning on Visual Genome, as measured by CIDEr and BLEU-4 (B4) scores. Bold indicates statistically significant best results at p ≤ 0.05.
    Model                         CIDEr   B4
    Krause et al. (2017)          13.5    8.7
    Chatterjee et al. (2018)      20.9    9.4
    Melas-Kyriazi et al. (2018)   22.7    8.7
    Transformer                   19.9    8.0
    Repr-Trans                    19.3    7.2
    Context-Attn                  22.6    7.6
    Pseudo-Self                   24.0    8.3

Table 3 shows the results. Despite the large dataset size, the Repr-Transformer and Pseudo-Self approaches still substantially reduce the PPL, suggesting that these models effectively make use of the GPT-2 LM. Pseudo-Self sees only a 0.3% decrease in prompt ranking accuracy, while the Repr-Transformer approach sees a larger decrease. The Context-Attn model runs into optimization challenges and fails to learn in this setting.
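(Aside: the prompt-ranking metric described in this subsection can be sketched as follows; `log_likelihood` stands in for any routine that scores a story under the model given a prompt, and is an assumed interface rather than the paper's code.)

```python
import random
from typing import Callable, List, Tuple


def prompt_ranking_accuracy(
    pairs: List[Tuple[str, str]],
    log_likelihood: Callable[[str, str], float],
    num_fake: int = 9,
    seed: int = 0,
) -> float:
    """Fraction of stories whose true prompt receives the highest conditional
    log-likelihood among the true prompt and `num_fake` randomly drawn prompts."""
    rng = random.Random(seed)
    prompts = [p for p, _ in pairs]
    correct = 0
    for true_prompt, story in pairs:
        fakes = rng.sample([p for p in prompts if p != true_prompt], num_fake)
        candidates = [true_prompt] + fakes  # index 0 is the true prompt
        scores = [log_likelihood(p, story) for p in candidates]
        best = max(range(len(scores)), key=scores.__getitem__)
        correct += int(best == 0)
    return correct / len(pairs)
```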
We hypothesize that this failure is a result of introducing a randomly initialized attention block, which makes Context-Attn susceptible to optimization challenges, but further work is needed to understand this more completely.

4.4 IMAGE PARAGRAPH CAPTIONING

Our final set of experiments considers image paragraph captioning using the Visual Genome dataset from Krause et al. (2017). Image captioning represents a strong real-world use case for encoder-agnostic pretraining. Visual Genome, in particular, represents a realistic setting with paragraph-sized captions (5-8 short sentences), which requires greater fluency than single-sentence captions. Due to the difficulty of producing labeled paragraph captions, Visual Genome contains fewer than 20,000 image-paragraph pairs. As a result, models trained from scratch on Visual Genome have been observed to have difficulty learning the structure of language. We use the same convolutional encoder as Krause et al. (2017), without the final pooling layer: for each image the output of the encoder is a tensor of size (36, 2048) extracted from a ResNet. Note that in this experiment the encoder and decoder are trained separately rather than end-to-end. Models are evaluated using the common CIDEr and BLEU-4 metrics.

Table 4 shows the results on the captioning task, comparing the transfer learning methods with a non-pretraining baseline and models from the literature which use the same loss function [3]. Of the three pretraining approaches, Pseudo-Self and Context-Attn give the statistically significant best performance, and Pseudo-Self is the only model to improve both CIDEr and BLEU compared to the Transformer baseline. Pseudo-Self additionally improves performance over the literature models in terms of CIDEr but gives a slightly worse BLEU-4.

[Footnote 3: Recent work shows it is possible to improve paragraph captioning models by incorporating sequence-level (Melas-Kyriazi et al., 2018) and adversarial (Chatterjee & Schwing, 2018) losses, but these loss-function improvements are orthogonal to improvements in the underlying model architecture.]

5 ANALYSIS AND DISCUSSION

Experimental trends Overall, Repr-Trans gives poor performance in three out of the four tasks, underperforming a transformer without pretraining on summarization and image captioning. Context-Attn gives stronger results than Repr-Trans, but shows significantly worse performance than Pseudo-Self on class-conditional generation and slightly worse performance on summarization and image captioning. Most critically, Context-Attn demonstrates a susceptibility to optimization challenges. On all tasks Pseudo-Self gives the best performance or is tied for best.

Interference by added parameters In Section 3, we hypothesized that pseudo self attention enables better use of the pretrained LM because the introduction of parameters in Pseudo-Self interferes less with the original parameters than in Context-Attn. To explore this quantitatively, we plot the root median squared deviation of parameters from their original values in the feed-forward layer for the class-conditional generation task (Figure 2a). While both models start with the same parameters, the Context-Attn parameters change significantly more than Pseudo-Self during training. The parameter values are only part of the story: parameters may change while the overall generative behavior stays the same.
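(Aside: the root median squared deviation statistic plotted in Figure 2a can be computed roughly as below. The `key_filter` used to select feed-forward parameters is an assumption about parameter naming, not the paper's exact procedure.)

```python
import torch


def root_median_squared_deviation(pretrained: dict, finetuned: dict, key_filter: str = "mlp") -> float:
    """Root of the median squared elementwise deviation between matching parameters,
    restricted to names containing `key_filter` (e.g., the feed-forward layers)."""
    squared_devs = []
    for name, p0 in pretrained.items():
        if key_filter in name and name in finetuned:
            squared_devs.append(((finetuned[name] - p0) ** 2).flatten())
    all_devs = torch.cat(squared_devs)
    return all_devs.median().sqrt().item()


# Usage sketch (state dicts of the pretrained LM and a finetuned conditional model):
# rmsd = root_median_squared_deviation(gpt2.state_dict(), pseudo_self_decoder.state_dict())
```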
We probe this for image captioning by first finetuning the pretrained LM on target-side data, giving an unconditional model with an identical structure to Pseudo-Self and Context-Attn except without any additional randomly initialized parameters. In Figure 2b, we plot the KL divergence between the self attention distributions of the unconditional model and those of the conditional models at each layer. Both models have similar attention distributions to the unconditional model at the first layer (in Context-Attn this precedes the introduction of new parameters). Beyond the first layer, the KL for Context-Attn becomes over an order of magnitude larger than that of Pseudo-Self, suggesting that the introduction of the context attention block significantly perturbs the overall behavior of the model. The additional perturbation is not associated with improved incorporation of the source information, as Table 4 shows that Pseudo-Self gives better performance. Effect of pretrained LM size There is a continuing trend to larger pretrained LMs. During the preparation of this manuscript, a larger version of GPT-2 was made available with 345M parameters, increasing the model dimension to 1028, the number of attention heads to 16, and the number of layers to 24. We retrained our model using this larger LM for class-conditional generation, using the same training hyperparameters and re-tuning the generation temperature (Table 5). The larger model improves PPL by 4.5 points while attaining similarly high classification accuracy. This datapoint suggests that transfer learning effectiveness can continue to improve along with the quality of the pretrained model used. Low-data supervision Many of our tasks showed improvements even with medium-to-large training sets. To study the effectiveness of the approach in low data regimes, we create small datasets by subsampling the IMDb dataset to sizes between 200 and 16k datapoints. We retrain our model using the same hyperparameters and use datasize-dependent early stopping to prevent overfitting. To reduce variance and measure uncertainty, we repeat the process 8 times for each dataset size, calculating the PPL and classification accuracy. Results are shown in Figure 3. Note that a non-pretrained model has a PPL of over 1000 when trained on 200 examples. The pretrained model starts with reasonable outputs (44.4 PPL after 200 examples) and increases task accuracy steadily with more data. See Section B in the appendix for representative samples. Human evaluation To assess the quality of generations, we conducted a human evaluation based on the story generation task. Generation uses a temperature of 0.9 and a top-k value of 100. We ask participants on Amazon Mechanical Turk a series of four yes/no questions mapped to desirable linguistic properties outlined in Dang (2006): grammaticality, non-redundancy, consistency, and typicality. 125 stories are evaluated for each model, and each story is evaluated by 5 unique workers. Scores are calculated for each property as the total percentage of positive responses. A combined score rates the model overall on a scale from 0-4 based on the equally-weighted combination of the four properties. The results are shown in Table 6. In all four categories, the Pseudo-Self and Repr-Transformer models show statistically significant performance gains compared to the baseline Transformer model. 
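(Aside, referring back to the Figure 2b analysis earlier in this section: a per-layer comparison of attention distributions can be sketched as below. How the paper aligns the conditional model's attention, which also spans the injected encoder positions, with the unconditional model's attention is not spelled out here; this sketch assumes distributions over the same support.)

```python
import torch


def mean_attention_kl(p_attn: torch.Tensor, q_attn: torch.Tensor, eps: float = 1e-9) -> float:
    """Mean KL(p || q) between two attention maps of shape (batch, heads, queries, keys),
    averaged over batch elements, heads, and query positions."""
    p = p_attn.clamp_min(eps)
    q = q_attn.clamp_min(eps)
    kl = (p * (p.log() - q.log())).sum(dim=-1)  # KL per query position
    return kl.mean().item()


# Usage sketch: collect per-layer attention maps for the unconditional and conditional
# models over the same target tokens, then compare layer by layer:
# layer_kls = [mean_attention_kl(p, q) for p, q in zip(uncond_attn_maps, cond_attn_maps)]
```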
The Pseudo-Self model achieves a grammaticality score of only 6.1% less than the test set, indicating strong grammaticality, likely a more localized property, is well learned by the pretrained LM and effectively transferred to the conditional models. In contrast, all models score significantly worse than the test data in terms of consistency and typicality. This suggests that these higher-level properties, while best transferred in the Pseudo-Self case, still represent a challenge for neural models. 6 CONCLUSION We study encoder-agnostic approaches for adapting a pretrained language model to general-purpose conditional language generation. Experiments spanning a range of diverse long-form conditional generation tasks demonstrate that pseudo self attention improves performance over strong encoderagnostic pretraining baselines, and is the only consistently performant model. From a practical perspective, the approach gives robust, sizable improvements over a non-pretraining baseline while maintaining adherence to the source context. Furthermore, we demonstrate the data efficiency and qualitative properties of the approach. Beyond empirical results, this study highlights the distinction between improving our ability to produce contextual representations of a source language and improving our capacity to generate text in a target language. While they appear to be similar problems, they exhibit substantially different phenomenology. For example, the representation-based approach, which works well for NLU, gives poor performance for NLG. Future work can study this distinction further. A EXPERIMENTAL DETAILS Approximate hyperparameter settings were taken from Gehrmann et al. (2018), followed by a coarse hyperparameter tuning for each dataset. Most parameters were held constant between the different models, though in some cases to enable as fair a comparison as possible individual parameters were separately optimized for each model. For the complete set of hyperparameters for each model, see the details at https://github.com/anon37234/encoder-agnostic-adaptation. For all models Dropout and early stopping are used for regularization. In addition, in initial experiments we found two optimization details to help generalization across all models: discriminative finetuning (Howard & Ruder, 2018) and using a lower learning rate for the decoder than the encoder. In discriminative finetuning, the learning rate of each layer decreases exponentially from the top transformer layer to the bottom transformer layer. Using a smaller learning rate for the decoder ensures that the decoder does not initially significantly change to accommodate the uninformative information from the randomly initialized encoder. B QUALITATIVE EXAMPLES Representative samples for the movie review dataset are shown in Table 7. The No-Pretraining model is the transformer from Table 1 and the number in the left column indicates the number of supervised examples in the training dataset. Samples are generated via random sampling with a temperature of 0.75. Without pretraining, the model makes many coherence mistakes. The Pseudo-Self 22K makes no grammatical mistakes and follows a single train of thought, although it is somewhat generic. The distinction between the models is further exaggerated when only 1.8k supervised examples are given. The baseline model trained on only 1.8k datapoints leads to very incoherent generated text. In contrast, the Pseudo-Attention model shows significantly improved grammar and sentence structure. 
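(Aside, returning to the optimization details of Appendix A above: discriminative finetuning with layer-wise learning-rate decay can be set up via per-layer parameter groups, roughly as below. The decay factor, the substring-based layer matching, and the treatment of non-layer parameters are illustrative assumptions, not the paper's exact settings.)

```python
import torch


def layerwise_param_groups(model: torch.nn.Module, num_layers: int, base_lr: float, decay: float = 0.8):
    """Build optimizer parameter groups whose learning rate decreases exponentially
    from the top transformer layer (index num_layers - 1) down to the bottom layer."""
    groups = []
    for layer_idx in range(num_layers):
        lr = base_lr * (decay ** (num_layers - 1 - layer_idx))
        # Assumes GPT-2-style names such as "transformer.h.3.attn.c_attn.weight".
        params = [p for name, p in model.named_parameters() if f".{layer_idx}." in name]
        if params:
            groups.append({"params": params, "lr": lr})
    # Non-layer parameters (embeddings, final layer norm) would need their own group.
    return groups


# optimizer = torch.optim.Adam(layerwise_param_groups(decoder, num_layers=12, base_lr=2e-4))
```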
Despite a handful of mistakes, the review follows a consistent description of a movie over multiple sentences. Given the poor performance of the baseline model, these properties must have been transferred from the original unconditional LM. These samples were selected to be representative of the broader set for the indicated models.
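(Aside: the sampling procedures referenced in the paper, random sampling with temperature for the qualitative samples above and temperature plus top-k for the human evaluation in Section 5, can be sketched as below. The `model` interface returning next-token logits for the last position is an assumption for illustration.)

```python
import torch


@torch.no_grad()
def sample(model, input_ids: torch.Tensor, max_new_tokens: int = 200,
           temperature: float = 0.75, top_k: int = 0) -> torch.Tensor:
    """Autoregressive random sampling with temperature; optional top-k truncation
    (top_k=0 disables it). `model(input_ids)` is assumed to return next-token
    logits of shape (batch, vocab) for the last position."""
    for _ in range(max_new_tokens):
        logits = model(input_ids) / temperature
        if top_k > 0:
            kth = torch.topk(logits, top_k, dim=-1).values[..., -1, None]
            logits = logits.masked_fill(logits < kth, float("-inf"))
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```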
1. What is the focus of the paper regarding adapting large-scale pre-trained language models to NLG tasks? 2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness across various tasks and modalities? 3. Do you have any concerns about the method's potential impact on adequacy, divergence from language model initializations, or interactions with encoder pre-training? 4. Are there any suggestions for additional experiments to explore the method's limitations and potential improvements, such as isolating the effect of pseudo self-attention on adequacy or examining end-to-end training?
Review
Review This paper proposes a simple yet effective method to adapt large-scale pre-trained language models, which have been shown to substantially improve performance on broadly classification-based NLU tasks, to NLG. The approach is explored in the encoder-agnostic {X}-to-text setup, where the source encoding {X} could represent arbitrary modalities, such as text or images. More concretely, the paper leverages a pre-trained, large-scale language model (in this case a GPT-2), and examines how to best cast such unconditional language model into a decoder that generates text conditional on the source information {X}. As self-attention inherently works with sequences of any length, the proposed pseudo self-attention approach simply injects the encoder representation as additional conditioning context (using some additional projection matrices that are learned from scratch) into the pre-trained self-attention layers of the decoder. Extensive experiments and analysis on four diverse tasks demonstrate that pseudo self-attention generally outperforms two other ways of pre-training the decoder, improves NLG data efficiency, and produces texts that are judged more favourably by human evaluators. Overall, this paper presents a simple, general, and effective method for adapting large-scale pre-trained language models to conditional text generation. Based on the pros and cons that I have listed below, I am giving a rating of "Accept". I hope that some of my concerns will be addressed in the authors' response. Pros: 1. The paper is well-written and the methodology is explained very clearly. Figure 1 is particularly helpful in illustrating the differences between pseudo self-attention and the baselines. 2. The paper addresses a very important problem, and helps make sure that the advances that have been made in language modelling (which can leverage large amounts of unlabelled data), would transfer well to conditional text generation tasks, which hold immediate practical value yet often require expensive annotations. 3. The approach is simple and easy-to-implement, but has been shown to be effective across a broad range of problems, multiple modalities, and various evaluation metric. 4. The paper features extensive reference to relevant prior work, and clearly highlights the key similarities and differences with prior approaches. Cons: 1. It is still unclear how using language model pre-training affects adequacy (as opposed to fluency). The paper shows that using pseudo self-attention results in a decoder that diverges less from its language model initialisation. One potential risk is that the decoder may prefer fluent, "safe" outputs (which is arguably what a language model would prefer since it is an unconditional model) that are nevertheless less faithful to the source information. Since none of the evaluation metric specifically assesses for adequacy on its own, it would be good to isolate the effect of pseudo self-attention on adequacy, and compare it with the baselines, in addition to a Transformer trained from scratch on each downstream task. How to measure adequacy is naturally still an open question, but there are a few things that can be done (e.g. recall of salient information, reverse perplexity to see how much of the source information can be "reconstructed" given the predicted target text, etc.). 2. It would be interesting to further examine the interaction between encoder pre-training and the decoder pre-training that is explored in this work. 
Another interesting experiment to run is whether end-to-end training (including fine-tuning the encoder) would help, since prior work has shown the benefits of end-to-end learning (at least when large amounts of data are available). Questions: 1. Why is the context-attn model performance not included in Table 6? Is it because of the optimisation issue associated with that model in Table 3? 2. In page 7, it is mentioned that "Both models have similar attention distributions ... at the first layer, which precedes the introduction of new parameters in both models". Does the first layer here refer to the token + position embedding layer?
ICLR
Title Encoder-Agnostic Adaptation for Conditional Language Generation Abstract Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient. 1 INTRODUCTION Large-scale language models have been shown to dramatically improve the performance of natural language understanding (NLU) systems on a broad range of tasks (Peters et al., 2018; Devlin et al., 2018; Radford & Salimans, 2018; McCann et al., 2017). The dominant paradigm is to pretrain a self attention-based language model on a large corpus of unlabeled text and then finetune the language model with an additional task-specific classification head on supervised data. Optimizing the effectiveness of this approach has been the focus of much study (Houlsby et al., 2019; Wang et al., 2019; Chronopoulou et al., 2019). Given the success of pretraining for NLU tasks, how can large language models best be adapted for conditional language generation? Ideally, one should only need to train a large language model once and then apply it as part of the decoder to a range of tasks with different source modalities (e.g., text, images, bits). In the encoder/decoder framework, a task-specific encoder can encode source information into a continuous vector. The central question is thus how to adapt a pretrained decoder to effectively utilize arbitrary source information from an encoder (encoder-agnostic adaptation). Considering the high quality of samples from large languae models (Radford et al., 2019), it is natural to expect encoder-agnostic adaptation to improve the coherence and grammaticality of conditional text generation over training a decoder from scratch, even when the source modality is not text (e.g. image captioning or class-conditional generation). Unfortunately, past results indicate otherwise. Edunov et al. (2019) show, for example, that a straightforward extension of contextual representations (Peters et al., 2018) to the conditional generation setting actually hurts performance compared to a model without any pretraining. Other pretraining approaches for language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019) have demonstrated strong performance on text-to-text tasks, but these methods are constrained to tasks where the source is natural language and do not address the encoder-agnostic setting. In this work, we consider several different approaches to the problem of encoder-agnostic adaptation. We first observe that standard adaptation approaches perform poorly on this task. 
We hypothesize that because these techniques require relearning significant parts of the network structure to inject contextual conditioning, they move the parameters too far from the pretrained values. In contrast, Radford et al. (2019) observe that even trivial conditioning with the original model produces reasonable zero-shot generations without finetuning. These results motivate an approach that learns the correct conditioning to control the model’s output, which we call pseudo self attention. The idea is to learn a task-specific encoder that injects pseudo history into a pretrained self attention model. Because self attention works with sets of any size, the model can immediately utilize or ignore this history. Finetuning adapts the model to this new input while training a task-specific encoder. Experiments utilize the GPT-2 (Radford et al., 2019) transformer as a pretrained model. We consider four diverse generation tasks spanning a range of source modalities: class-conditional generation, document summarization, story generation, and image paragraph captioning. Across all tasks, we find that pseudo self attention outperforms the other pretraining methods and is the most consistent. As a practical tool, pseudo self attention improves performance compared to a baseline without pretraining by large margins without sacrificing adherence to the source, even for tasks with large amounts of supervised data. We further demonstrate that the approach is data-efficient and produces qualitatively more coherent outputs. Code is available at https://github.com/anon37234/ encoder-agnostic-adaptation. 2 RELATED WORK Pretrained Decoder Transfer learning for NLG Natural language generation (NLG) tasks have a long history of incorporating unconditional language models with conditional input(Bahl et al., 1983; Koehn et al., 2003). These approaches traditionally use the noisy channel model (i.e., Bayes’ rule), and n-gram models as the language model. Recent adaptations of these ideas include the Neural Noisy Channel (Yu et al., 2017) as well as “fusion” methods (Koehn et al., 2003; Gulcehre et al., 2015; Sriram et al., 2018; Stahlberg et al., 2018) in which the output logits of a language model and a conditional model are combined to calculate the output probabilities. We consider this class of transfer learning as a baseline in a preliminary experiment (see Section 4.1), but focus on alternative “deep” approaches that incorporate the language model weights as an integral part of the model instead of an add-on at the end. Along these lines, Ramachandran et al. (2017) propose a finetuning-based method for machine translation with LSTMs, in which some of the layers of the LSTM are initialized with pretrained language model weights. As their method is specific to LSTMs, however, it is incompatible with modern transformer architectures. Pretraining-Based Transfer Learning for NLG Zhang et al. (2019) use BERT in the encoder and decoder of a summarization model via a unique cloze generative process. They demonstrate strong summarization performance, but the value of pretraining relative to other model components is not clear, and the cloze process significantly reduces the practicality of the model. More related, Edunov et al. (2019) experiment with a representation-based approach for applying ELMo (Peters et al., 2018) to the source and target sides of a standard seq2seq model separately. 
Their approach consistently improves performance when applied to the source, but hurts performance when applied to the decoder. We consider such a representation approach as a baseline in this work. Most recently, several studies experiment with BERT-like masking approaches that are compatible with natural language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019). While these works demonstrate impressive performance, they are constrained to text-to-text tasks because they do not have a way to handle arbitrary conditional information. Whereas these works study pretraining methods that optimize transfer for text-to-text tasks, our study considers the separate problem of adapting a fixed pretrained model to arbitrary source conditioning. Concurrent with this work, Golovanov et al. (2019) propose a similar approach to pseudo self attention and report initial experiments with dialogue generation. This study complements ours with positive results on dialogue generation, though we aim for experimental data over a wide range of language generation tasks and input modalities and comparison to strong encoder-agnostic baselines.

3 METHODS

We assume that we have a pretrained language model, $p(y) = p(y_1, \ldots, y_T; \theta)$, that the model is an autoregressive neural network, and that it is based on self attention to implement conditioning, i.e.,

$$\mathrm{SA}(Y) = \operatorname{softmax}\!\left( (Y W_q)(Y W_k)^{\top} \right)(Y W_v),$$

where input $Y \in \mathbb{R}^{T \times D}$ for hidden dimension $D$, $W_k, W_v, W_q \in \mathbb{R}^{D \times D'}$ are parameters, representing the key, value, and query projections respectively, and the output is $T \times D'$ [1]. We are interested in using this model to estimate the conditional probability p(y | x) for an arbitrary input x for which we have a small amount of supervised (x, y) pairs. The goal is to learn a model on this new data that best makes use of the pretrained model p(y) with a method that is agnostic to the form of x. All models are based on the encoder/decoder architecture, and for each we follow the same high-level procedure: First, some of the weights of the decoder are initialized with weight values from a pretrained language model. Next, a task-specific encoder and all non-pretrained decoder weights are randomly initialized. Finally, the entire model is trained/finetuned end-to-end using the supervised data for the given task. In all cases, the input and output embeddings are tied. Each approach uses all of the pretrained weights, differing only in where and how they use pretrained weights in the decoder. Further experimental details are included in appendix Section A.

Baseline 1: Repr-Transformer The first approach considered (Fig 1a) utilizes the pretrained LM to produce a general-purpose representation of the target text before introducing the source information. For this method, a standard transformer decoder is used with the target word embeddings replaced by the output representation of the pretrained language model. In preliminary experiments, we considered both fixing and updating these representations and found that a fixed weighted-averaging ("ELMo-Style") method performed better, consistent with Edunov et al. (2019). One possible downside to this approach is that the conditioning information from the encoder is injected after all of the pretrained weights.

Baseline 2: Context-Attn The second approach (Fig 1b) considers initializing a standard transformer decoder with the shared weights of a pretrained LM. The newly added context attention weights at each layer are randomly initialized.
Compared to Repr-Transformer, the conditioning information is injected alongside the pretrained weights. However, the randomly initialized context attention block may interfere with the carefully co-tuned pretrained weights of the rest of the model. This interference may introduce optimization challenges and lead to reduced performance. 1In practice many of these units (”heads”) are stacked together via concatenation across dimension followed by a final linear projection Wf ∈ D ×D. Proposed Model: Pseudo-Self A more radical approach to incorporating conditional information is the “zero-shot” model proposed by Radford et al. (2019). Instead of learning a representation for x and passing it into a context attention block they note that an auto-regressive model, p(yt | y<t), is already a conditional model. If x is the same modality as y (e.g., both language), one can condition on x by prepending the source to target: p(yt |x, y<t) = p(yt | x y<t).2 While this does not produce competitive models and is limited in its applicability, it is surprising that it works at all. Taking inspiration from this approach, we propose learning this contextualization in an encoderagnostic way. Our approach, pseudo self attention, simply injects learned encoder conditioning directly into the pretrained self attention of the model. Assume that we have a matrix X ∈ S ×D representing a size S encoding of x, define pseudo self attention as PSA(X,Y ) = softmax((YWq) [ XUk YWk ]> ) [ XUv YWv ] , where Uk, Uv ∈ D × D′ are new parameters which project encoder outputs into the decoder self attention space. Because attention is inherently variable length, these additional inputs can be injected without changing the module and only act additively on the attention output. The full model is shown in Figure 1c. Compared to Context-Attn, the proposed approach only introduces new parameters in the self attention block, which we expect leads to only minimal interference. As the pretrained LM weights encode for generation capability, deviating less from this initialization may lead to better generation performance. We explore this quantitatively in Section 5. 4 EXPERIMENTS AND RESULTS Experiments consider four diverse tasks spanning input modalities, dataset sizes, and information about the target contained in the source. Tasks are chosen to emphasize long-form targets to probe the generation capabilities of the different models in a conditional setting. Perplexity is used to measure overall performance and diversity of output, combined with standard task-specific metrics. For all tasks, we use GPT-2 small (117M parameters) (Radford et al., 2019) as the pretrained language model. GPT-2 small has 12 layers, 12 heads per layer, and a model dimension of 768 units; the Context-Attn and Pseudo-Self models use the same architecture. For the Repr-Transformer model to avoid overfitting, we use 6/8/512 layers/heads/dim for the decoder (in addition to the GPT-2 contextual representation network). All experiments use the same 50k type BPE GPT-2 vocabulary. 4.1 PRELIMINARY: CLASS-CONDITIONAL GENERATION We first consider a control experiment with a minimal encoder model. We consider producing classconditional samples, e.g., p(y | x = 0) and p(y | x = 1), from the IMDb sentiment classification dataset (Maas et al.), similar to previous works for sentiment transfer (Shen et al., 2017; Zhao et al., 2018). We set x to be a sentiment bit (positive/negative), and the movie review as the target y. 
We maintain the original IMDb 25k/25k train/test split, with 2.5k reviews of the original training split held out for validation, and truncate reviews to 400 BPE tokens during training. Model quality is evaluated by perplexity, and adherence to the source bit x is evaluated by the sentiment classification accuracy of an external classifier on generated reviews as in Shen et al. (2017). Reviews are generated via random sampling with a temperature of 0.7. For our external classifier we use fastText (Joulin et al., 2016), which has an accuracy of 90.1% on the IMDb test set. Table 1 shows results for the conditional models, GPT-2 without finetuning, and Simple Fusion (Stahlberg et al., 2018). The GPT-2 model itself already shows a greatly reduced PPL compared to a baseline transformer. All pretraining methods further improve perplexity. The pseudo self attention approach significantly outperforms the approaches in terms of class adherence. Despite being initialized as a language model, the approach only sees a decrease of 0.4% classification accuracy compared to the randomly initialized model. In contrast, Repr-Transformer and Context-Attn see a decrease of 20.0% and 3.9%, respectively. We additionally report the results of Simple Fusion in 2This method is most successful when hand-selected task-dependent buffer words are inserted between x and y<t as well such as ”tl;dr” for summarization. Model PPL Cls Acc Test set - 90.1 GPT-2 41.21 - Simple Fusion 38.31 65.1 Transformer 105.43 92.7 Repr-Trans 39.69 72.7 Context-Attn 40.74 88.8 Pseudo-Self 34.80 92.3 Table 1: Class-Conditional Generation on IMDb movie reviews. Classification accuracy is measured by a sentiment classifier trained on the IMDb training set. Bold indicates statistically significant best results at p ≤ 0.05. Model R1 / R2 / RL PPL PointerGen+BU 41.22 / 18.68 / 38.34 - ELMo+SHDEMB† 41.56 / 18.94 / 38.47 - BERT+Two-Stage† 41.38 / 19.34 / 38.37 - UniLM+ExtLoss† 43.47 / 20.30 / 40.63 Transformer+Copy 39.94 / 17.73 / 37.09 8.21 Repr-Trans 37.09 / 13.77 / 33.99 13.58 Context-Attn 40.59 / 18.17 / 37.24 6.68 Pseudo-Self 40.72 / 18.38 / 37.46 6.43 Pseudo-Self+BU 41.62 / 18.66 / 38.46 6.43 Table 2: Abstractive summarization on CNN/DM. † indicates pretraining of the encoder side. PointerGen+BU from (Gehrmann et al., 2018), ELMo+SHDEMB from (Edunov et al., 2019), BERT+Two-Stage from (Zhang et al., 2019), UniLM+ExtLoss from (Dong et al., 2019). Bold indicates statistically significant best results among general models and encoder-agnostic models at p ≤ 0.05. Table 1. Compared to Pseudo-Self, it gives a worse PPL and inferior classification accuracy. Given the weak results, we focus on comparisons between the deep models for the rest of the paper. 4.2 DOCUMENT SUMMARIZATION Abstractive document summarization requires the model to produce a long-form summary given a full news article. For these experiments, we use the non-anonymized CNN-Daily Mail dataset (Hermann et al., 2015). The dataset is comprised of 280k training examples of document-scale source news articles and corresponding 2-4 sentence target summaries. Summarization is a mature testbed with state-of-the-art models that use task-specific architecture modifications, so transfer learning methods need to be able to mesh well with these changes. We use the transformer version of the copy mechanism from Gehrmann et al. (2018) and employ bottom-up (BU) summarization attention pruning (Gehrmann et al., 2018). 
Generation is conducted via beam-search with a beam size of 5 with tri-gram blocking, consistent with the literature models (Edunov et al., 2019). Table 2 shows the performance of the models tested with recent state-of-the-art models for comparison. Compared to the baseline model without pretraining, Pseudo-Self improves ROUGE-1 by 0.78, ROUGE-2 by 0.65, ROUGE-L by 0.37, and reduced PPL by 20%. The Context-Attn approach nearly matches these results for this task, but the Repr-Transformer approach performs more poorly. We additionally experiment with the bottom-up summarization attention pruning approach applied at inference time as in Gehrmann et al. (2018). With this modification, Pseudo-Self outperforms all literature models in ROUGE-1 except the text-to-text UniLM+ExtractLoss, which uses joint pretraining of the source and target and is trained with an additional extractive loss. The performance of all of our models can potentially be further improved through a pretrained encoder. 4.3 CONDITIONAL STORY GENERATION Conditional story generation with the WritingPrompts dataset (Fan et al., 2018) requires the model to produce an on-topic story given a short textual prompt. While summarization relies heavily on the encoder, this task gives more flexibility to the decoder. The dataset is well supervised, containing 300k single sentence writing prompts (the source) and stories (the target). Following the preprocessing of Fan et al. (2018), we truncate the stories to 1000 tokens. Due to the story lengths, the total number of training tokens is on the order of 100 million, resulting in a large in-domain data setting. To compare models, we compute two metrics: perplexity (PPL) and prompt ranking. Perplexity assess approximate quality and diversity, whereas prompt ranking measures the relevance of the story to the prompt. To calculate the prompt ranking, we use the procedure from Fan et al. (2018): For each story in the test set, the likelihood is evaluated under the model for the “true” corresponding Model PPL Rank Acc. Transformer 30.58 80.6 Repr-Trans 21.16 76.7 Context-Attn >5000 9.3 Pseudo-Self 21.21 81.8 Table 3: Story generation on the WritingPrompts dataset. Rank acc. refers to the top1 prompt ranking accuracy metric described in Section 4.3. (Experiments use the GPT2 BPE scheme, so PPL numbers are not directly comparable to those reported in (Fan et al., 2018)). Bold indicates statistically significant best results at p ≤ 0.05. Model CIDEr B4 Krause et al. (2017) 13.5 8.7 Chatterjee et al. (2018) 20.9 9.4 Melas-Kyriazi et al. (2018) 22.7 8.7 Transformer 19.9 8.0 Repr-Trans 19.3 7.2 Context-Attn 22.6 7.6 Pseudo-Self 24.0 8.3 Table 4: Image paragraph captioning on Visual Genome, as measured by CIDEr and BLEU-4 (B4) scores. Bold indicates statistically significant best results at p ≤ 0.05. prompt and 9 other randomly selected “fake” prompts from the test set. Then, the rank accuracy is the percentage of stories for which the model gave the highest likelihood to the correct prompt. Table 3 shows the results. Despite the large dataset size, the Repr-Transfomer and Pseudo-Self approaches still substantially reduce the PPL, suggesting that these models effectively make use of the GPT-2 LM. Pseudo-Self sees only a 0.3% decrease in prompt ranking accuracy, while the Repr-Transformer approach sees a larger decrease. The Context-Attn model runs into optimization challenges and fails to learn in this setting. 
We hypothesize that this failure is a result of introducing a randomly initialized attention block, which makes Context-Attn susceptible to optimization challenges, but further work is needed to understand this more completely. 4.4 IMAGE PARAGRAPH CAPTIONING Our final set of experiments consider image paragraph captioning using the Visual Genome dataset from Krause et al. (2017). Image captioning represents a strong real-world use case for encoderagnostic pretraining. Visual Genome, in particular, represents a realistic setting with paragraphsized captions (5-8 short sentences), which requires greater fluency than single sentence captions. Due to the difficulty of producing labeled paragraph captions Visual Genome, contains fewer than 20,000 image-paragraph pairs. As a result, models trained from scratch on Visual Genome have been observed to have difficulty learning the structure of language. We use the same convolutional encoder as Krause et al. (2017), without the final pooling layer: for each image the output of the encoder is a tensor of size (36, 2048) extracted from a ResNet. Note that in this experiment the encoder and decoder are trained separately rather than end-to-end. Models are evaluated using the common CIDEr and BLEU-4 metrics. Table 4 shows the results on the captioning task, comparing the transfer learning methods with a non-pretraining baseline and models from the literature which use the same loss function3. Of the three pretraining approaches, Pseudo-Self and Context-Attn give the statistically significant best performance, and Pseudo-Self is the only model to improve both CIDEr and BLEU compared to the Transformer baseline. Pseudo-Self additionally improves performance over the literature models in terms of CIDEr but gives a slightly worse BLEU-4. 5 ANALYSIS AND DISCUSSION Experimental trends Overall, Repr-Trans gives poor performance in three out of the four tasks, underperforming a transformer without pretraining on summarization and image captioning. Context-Attn gives stronger results than Repr-Trans, but shows significantly worse performance than Pseudo-Self on class-conditional generation and slightly worse performance on summarization 3Recent work shows it is possible to improve paragraph captioning models by incorporating sequencelevel (Melas-Kyriazi et al., 2018) and adversarial (Chatterjee & Schwing, 2018) losses, but these loss function improvements are orthogonal to improvements in the underlying model architecture. and image captioning. Most critically, Context-Attn demonstrates a susceptibility to optimization challenges. On all tasks Pseudo-Self gives the best performance or is tied for best. Interference by added parameters In Section 3, we hypothesized that pseudo self attention enables better use of the pretrained LM because the introduction of parameters in Pseudo-Self interferes less with the original parameters than Context-Attn. To explore this quantitatively, we plot the root median squared deviation of parameters from their original values in the feed-forward layer for the class-conditional generation task (Figure 2a). While both models start with the same parameters, the Context-Attn parameters change significantly more than Pseudo-Self during training. The parameter values are only part of the story: parameters may change while the overall generative behavior stays the same. 
We probe this for image captioning by first finetuning the pretrained LM on target-side data, giving an unconditional model with an identical structure to Pseudo-Self and Context-Attn except without any additional randomly initialized parameters. In Figure 2b, we plot the KL divergence between the self attention distributions of the unconditional model and those of the conditional models at each layer. Both models have similar attention distributions to the unconditional model at the first layer (in Context-Attn this precedes the introduction of new parameters). Beyond the first layer, the KL for Context-Attn becomes over an order of magnitude larger than that of Pseudo-Self, suggesting that the introduction of the context attention block significantly perturbs the overall behavior of the model. The additional perturbation is not associated with improved incorporation of the source information, as Table 4 shows that Pseudo-Self gives better performance. Effect of pretrained LM size There is a continuing trend to larger pretrained LMs. During the preparation of this manuscript, a larger version of GPT-2 was made available with 345M parameters, increasing the model dimension to 1028, the number of attention heads to 16, and the number of layers to 24. We retrained our model using this larger LM for class-conditional generation, using the same training hyperparameters and re-tuning the generation temperature (Table 5). The larger model improves PPL by 4.5 points while attaining similarly high classification accuracy. This datapoint suggests that transfer learning effectiveness can continue to improve along with the quality of the pretrained model used. Low-data supervision Many of our tasks showed improvements even with medium-to-large training sets. To study the effectiveness of the approach in low data regimes, we create small datasets by subsampling the IMDb dataset to sizes between 200 and 16k datapoints. We retrain our model using the same hyperparameters and use datasize-dependent early stopping to prevent overfitting. To reduce variance and measure uncertainty, we repeat the process 8 times for each dataset size, calculating the PPL and classification accuracy. Results are shown in Figure 3. Note that a non-pretrained model has a PPL of over 1000 when trained on 200 examples. The pretrained model starts with reasonable outputs (44.4 PPL after 200 examples) and increases task accuracy steadily with more data. See Section B in the appendix for representative samples. Human evaluation To assess the quality of generations, we conducted a human evaluation based on the story generation task. Generation uses a temperature of 0.9 and a top-k value of 100. We ask participants on Amazon Mechanical Turk a series of four yes/no questions mapped to desirable linguistic properties outlined in Dang (2006): grammaticality, non-redundancy, consistency, and typicality. 125 stories are evaluated for each model, and each story is evaluated by 5 unique workers. Scores are calculated for each property as the total percentage of positive responses. A combined score rates the model overall on a scale from 0-4 based on the equally-weighted combination of the four properties. The results are shown in Table 6. In all four categories, the Pseudo-Self and Repr-Transformer models show statistically significant performance gains compared to the baseline Transformer model. 
The Pseudo-Self model achieves a grammaticality score of only 6.1% less than the test set, indicating strong grammaticality, likely a more localized property, is well learned by the pretrained LM and effectively transferred to the conditional models. In contrast, all models score significantly worse than the test data in terms of consistency and typicality. This suggests that these higher-level properties, while best transferred in the Pseudo-Self case, still represent a challenge for neural models. 6 CONCLUSION We study encoder-agnostic approaches for adapting a pretrained language model to general-purpose conditional language generation. Experiments spanning a range of diverse long-form conditional generation tasks demonstrate that pseudo self attention improves performance over strong encoderagnostic pretraining baselines, and is the only consistently performant model. From a practical perspective, the approach gives robust, sizable improvements over a non-pretraining baseline while maintaining adherence to the source context. Furthermore, we demonstrate the data efficiency and qualitative properties of the approach. Beyond empirical results, this study highlights the distinction between improving our ability to produce contextual representations of a source language and improving our capacity to generate text in a target language. While they appear to be similar problems, they exhibit substantially different phenomenology. For example, the representation-based approach, which works well for NLU, gives poor performance for NLG. Future work can study this distinction further. A EXPERIMENTAL DETAILS Approximate hyperparameter settings were taken from Gehrmann et al. (2018), followed by a coarse hyperparameter tuning for each dataset. Most parameters were held constant between the different models, though in some cases to enable as fair a comparison as possible individual parameters were separately optimized for each model. For the complete set of hyperparameters for each model, see the details at https://github.com/anon37234/encoder-agnostic-adaptation. For all models Dropout and early stopping are used for regularization. In addition, in initial experiments we found two optimization details to help generalization across all models: discriminative finetuning (Howard & Ruder, 2018) and using a lower learning rate for the decoder than the encoder. In discriminative finetuning, the learning rate of each layer decreases exponentially from the top transformer layer to the bottom transformer layer. Using a smaller learning rate for the decoder ensures that the decoder does not initially significantly change to accommodate the uninformative information from the randomly initialized encoder. B QUALITATIVE EXAMPLES Representative samples for the movie review dataset are shown in Table 7. The No-Pretraining model is the transformer from Table 1 and the number in the left column indicates the number of supervised examples in the training dataset. Samples are generated via random sampling with a temperature of 0.75. Without pretraining, the model makes many coherence mistakes. The Pseudo-Self 22K makes no grammatical mistakes and follows a single train of thought, although it is somewhat generic. The distinction between the models is further exaggerated when only 1.8k supervised examples are given. The baseline model trained on only 1.8k datapoints leads to very incoherent generated text. In contrast, the Pseudo-Attention model shows significantly improved grammar and sentence structure. 
Despite a handful of mistakes, the review follows a consistent description of a movie over multiple sentences. Given the poor performance of the baseline model, these properties must have been transferred from the original unconditional LM. These samples were selected to be representative of the broader set for the indicated models.
1. What is the main contribution of the paper in the field of language generation? 2. How does the proposed architecture differ from existing methods in terms of incorporating input for conditional generation? 3. What are the strengths of the paper, particularly in the experimental section? 4. Are there any limitations or potential drawbacks to the proposed approach? 5. How might the proposed method be applied to other generative tasks beyond language generation?
Review
Review This paper proposes a new architecture to train decoder models for language generation using a pre-trained encoder (such as BERT or GPT-2). They introduce a novel block called "pseudo self-attention" that allows injecting the input for conditional generation in the self-attention layer (i.e., softmax(YW_q [XU_k; YW_k]^T) [XU_v; YW_v] instead of softmax(YW_q (YW_k)^T) YW_v). They extensively evaluate their approach on a large set of tasks, showing improvements across all of them (which include class-conditional generation, summarization, story generation, and paragraph generation). They also provide interesting ablation studies. This paper proposes a simple architectural block to try and translate the success of large pre-trained encoders on discriminative tasks to the generative setting. The idea seems well-motivated and the paper is well-written and easy to follow. The experimental section is very thorough and shows large improvements on a variety of tasks; I particularly appreciate that they experimented with conditional inputs of different natures (class value, image, different languages, etc.) to show the effectiveness of their method. Overall, while the idea is quite simple, the experiments speak for themselves, and this could prove to be a useful "layer" to use on large pre-trained language models.
ICLR
Title Encoder-Agnostic Adaptation for Conditional Language Generation Abstract Large pretrained language models have changed the way researchers approach discriminative natural language understanding tasks, leading to the dominance of approaches that adapt a pretrained model for arbitrary downstream tasks. However, it is an open question how to use similar techniques for language generation. Early results in the encoder-agnostic setting have been mostly negative. In this work, we explore methods for adapting a pretrained language model to arbitrary conditional input. We observe that pretrained transformer models are sensitive to large parameter changes during tuning. Therefore, we propose an adaptation that directly injects arbitrary conditioning into self attention, an approach we call pseudo self attention. Through experiments on four diverse conditional text generation tasks, we show that this encoder-agnostic technique outperforms strong baselines, produces coherent generations, and is data-efficient. 1 INTRODUCTION Large-scale language models have been shown to dramatically improve the performance of natural language understanding (NLU) systems on a broad range of tasks (Peters et al., 2018; Devlin et al., 2018; Radford & Salimans, 2018; McCann et al., 2017). The dominant paradigm is to pretrain a self attention-based language model on a large corpus of unlabeled text and then finetune the language model with an additional task-specific classification head on supervised data. Optimizing the effectiveness of this approach has been the focus of much study (Houlsby et al., 2019; Wang et al., 2019; Chronopoulou et al., 2019). Given the success of pretraining for NLU tasks, how can large language models best be adapted for conditional language generation? Ideally, one should only need to train a large language model once and then apply it as part of the decoder to a range of tasks with different source modalities (e.g., text, images, bits). In the encoder/decoder framework, a task-specific encoder can encode source information into a continuous vector. The central question is thus how to adapt a pretrained decoder to effectively utilize arbitrary source information from an encoder (encoder-agnostic adaptation). Considering the high quality of samples from large languae models (Radford et al., 2019), it is natural to expect encoder-agnostic adaptation to improve the coherence and grammaticality of conditional text generation over training a decoder from scratch, even when the source modality is not text (e.g. image captioning or class-conditional generation). Unfortunately, past results indicate otherwise. Edunov et al. (2019) show, for example, that a straightforward extension of contextual representations (Peters et al., 2018) to the conditional generation setting actually hurts performance compared to a model without any pretraining. Other pretraining approaches for language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019) have demonstrated strong performance on text-to-text tasks, but these methods are constrained to tasks where the source is natural language and do not address the encoder-agnostic setting. In this work, we consider several different approaches to the problem of encoder-agnostic adaptation. We first observe that standard adaptation approaches perform poorly on this task. 
We hypothesize that because these techniques require relearning significant parts of the network structure to inject contextual conditioning, they move the parameters too far from the pretrained values. In contrast, Radford et al. (2019) observe that even trivial conditioning with the original model produces reasonable zero-shot generations without finetuning. These results motivate an approach that learns the correct conditioning to control the model’s output, which we call pseudo self attention. The idea is to learn a task-specific encoder that injects pseudo history into a pretrained self attention model. Because self attention works with sets of any size, the model can immediately utilize or ignore this history. Finetuning adapts the model to this new input while training a task-specific encoder. Experiments utilize the GPT-2 (Radford et al., 2019) transformer as a pretrained model. We consider four diverse generation tasks spanning a range of source modalities: class-conditional generation, document summarization, story generation, and image paragraph captioning. Across all tasks, we find that pseudo self attention outperforms the other pretraining methods and is the most consistent. As a practical tool, pseudo self attention improves performance compared to a baseline without pretraining by large margins without sacrificing adherence to the source, even for tasks with large amounts of supervised data. We further demonstrate that the approach is data-efficient and produces qualitatively more coherent outputs. Code is available at https://github.com/anon37234/ encoder-agnostic-adaptation. 2 RELATED WORK Pretrained Decoder Transfer learning for NLG Natural language generation (NLG) tasks have a long history of incorporating unconditional language models with conditional input(Bahl et al., 1983; Koehn et al., 2003). These approaches traditionally use the noisy channel model (i.e., Bayes’ rule), and n-gram models as the language model. Recent adaptations of these ideas include the Neural Noisy Channel (Yu et al., 2017) as well as “fusion” methods (Koehn et al., 2003; Gulcehre et al., 2015; Sriram et al., 2018; Stahlberg et al., 2018) in which the output logits of a language model and a conditional model are combined to calculate the output probabilities. We consider this class of transfer learning as a baseline in a preliminary experiment (see Section 4.1), but focus on alternative “deep” approaches that incorporate the language model weights as an integral part of the model instead of an add-on at the end. Along these lines, Ramachandran et al. (2017) propose a finetuning-based method for machine translation with LSTMs, in which some of the layers of the LSTM are initialized with pretrained language model weights. As their method is specific to LSTMs, however, it is incompatible with modern transformer architectures. Pretraining-Based Transfer Learning for NLG Zhang et al. (2019) use BERT in the encoder and decoder of a summarization model via a unique cloze generative process. They demonstrate strong summarization performance, but the value of pretraining relative to other model components is not clear, and the cloze process significantly reduces the practicality of the model. More related, Edunov et al. (2019) experiment with a representation-based approach for applying ELMo (Peters et al., 2018) to the source and target sides of a standard seq2seq model separately. 
Their approach consistently improves performance when applied to the source, but hurts performance when applied to the decoder. We consider such a representation approach as a baseline in this work. Most recently, several studies experiment with BERT-like masking approaches that are compatible with natural language generation (Song et al., 2019; Dong et al., 2019; Lample & Conneau, 2019). While these works demonstrate impressive performance, they are constrained to text-to-text tasks because they do not have a way to handle arbitrary conditional information. Whereas these works study pretraining methods that optimize transfer for text-to-text tasks, our study considers the separate problem of adapting a fixed pretrained model to arbitrary source conditioning. Concurrent with this work, Golovanov et al. (2019) propose a similar approach to pseudo self attention and report initial experiments with dialogue generation. This study complements ours with positive results on dialogue generation, though we aim for experimental data over a wide range of language generation tasks and input modalities and comparison to strong encoder-agnostic baselines. 3 METHODS We assume that we have a pretrained language model, p(y) = p(y_1, ..., y_T; θ), that the model is an autoregressive neural network, and that it is based on self attention to implement conditioning, i.e., SA(Y) = softmax((YW_q)(YW_k)^T)(YW_v), where the input is Y ∈ R^{T×D} for hidden dimension D, the parameters W_k, W_v, W_q ∈ R^{D×D'} represent the key, value, and query projections respectively, and the output is in R^{T×D'}.[1] We are interested in using this model to estimate the conditional probability p(y | x) for an arbitrary input x for which we have a small amount of supervised (x, y) pairs. The goal is to learn a model on this new data that best makes use of the pretrained model p(y) with a method that is agnostic to the form of x. All models are based on the encoder/decoder architecture, and for each we follow the same high-level procedure: First, some of the weights of the decoder are initialized with weight values from a pretrained language model. Next, a task-specific encoder and all non-pretrained decoder weights are randomly initialized. Finally, the entire model is trained/finetuned end-to-end using the supervised data for the given task. In all cases, the input and output embeddings are tied. Each approach uses all of the pretrained weights, differing only in where and how they use pretrained weights in the decoder. Further experimental details are included in appendix Section A. Baseline 1: Repr-Transformer The first approach considered (Fig 1a) utilizes the pretrained LM to produce a general-purpose representation of the target text before introducing the source information. For this method, a standard transformer decoder is used with the target word embeddings replaced by the output representation of the pretrained language model. In preliminary experiments, we considered both fixing and updating these representations and found that a fixed weighted-averaging ("ELMo-style") method performed better, consistent with Edunov et al. (2019). One possible downside to this approach is that the conditioning information from the encoder is injected after all of the pretrained weights. Baseline 2: Context-Attn The second approach (Fig 1b) considers initializing a standard transformer decoder with the shared weights of a pretrained LM. The newly added context attention weights at each layer are randomly initialized.
Compared to Repr-Transformer, the conditioning information is injected alongside the pretrained weights. However, the randomly initialized context attention block may interfere with the carefully co-tuned pretrained weights of the rest of the model. This interference may introduce optimization challenges and lead to reduced performance.
[1] In practice many of these units ("heads") are stacked together via concatenation across the feature dimension followed by a final linear projection W_f ∈ R^{D×D}.
Proposed Model: Pseudo-Self A more radical approach to incorporating conditional information is the "zero-shot" model proposed by Radford et al. (2019). Instead of learning a representation for x and passing it into a context attention block, they note that an auto-regressive model, p(y_t | y_<t), is already a conditional model. If x is the same modality as y (e.g., both language), one can condition on x by prepending the source to the target: p(y_t | x, y_<t) = p(y_t | [x; y_<t]).[2] While this does not produce competitive models and is limited in its applicability, it is surprising that it works at all. Taking inspiration from this approach, we propose learning this contextualization in an encoder-agnostic way. Our approach, pseudo self attention, simply injects learned encoder conditioning directly into the pretrained self attention of the model. Assume that we have a matrix X ∈ R^{S×D} representing a size-S encoding of x, and define pseudo self attention as PSA(X, Y) = softmax((YW_q) [XU_k; YW_k]^T) [XU_v; YW_v], where [·; ·] denotes concatenation along the sequence dimension and U_k, U_v ∈ R^{D×D'} are new parameters which project encoder outputs into the decoder self attention space. Because attention is inherently variable length, these additional inputs can be injected without changing the module and only act additively on the attention output. The full model is shown in Figure 1c (a minimal code sketch of the operation is given below). Compared to Context-Attn, the proposed approach only introduces new parameters in the self attention block, which we expect leads to only minimal interference. As the pretrained LM weights encode for generation capability, deviating less from this initialization may lead to better generation performance. We explore this quantitatively in Section 5. 4 EXPERIMENTS AND RESULTS Experiments consider four diverse tasks spanning input modalities, dataset sizes, and information about the target contained in the source. Tasks are chosen to emphasize long-form targets to probe the generation capabilities of the different models in a conditional setting. Perplexity is used to measure overall performance and diversity of output, combined with standard task-specific metrics. For all tasks, we use GPT-2 small (117M parameters) (Radford et al., 2019) as the pretrained language model. GPT-2 small has 12 layers, 12 heads per layer, and a model dimension of 768 units; the Context-Attn and Pseudo-Self models use the same architecture. For the Repr-Transformer model, to avoid overfitting, we use 6/8/512 layers/heads/dim for the decoder (in addition to the GPT-2 contextual representation network). All experiments use the same 50k type BPE GPT-2 vocabulary. 4.1 PRELIMINARY: CLASS-CONDITIONAL GENERATION We first consider a control experiment with a minimal encoder model. We consider producing class-conditional samples, e.g., p(y | x = 0) and p(y | x = 1), from the IMDb sentiment classification dataset (Maas et al.), similar to previous works for sentiment transfer (Shen et al., 2017; Zhao et al., 2018). We set x to be a sentiment bit (positive/negative), and the movie review as the target y.
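As a concrete reference for the pseudo self attention operation defined in Section 3, the following is a minimal single-head PyTorch-style sketch. It is an illustration only: the tensor names, the scaling factor, and the omission of the causal mask over decoder positions are simplifications rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PseudoSelfAttentionHead(nn.Module):
    """Single-head sketch of pseudo self attention.

    Encoder states X are projected with new parameters (U_k, U_v) into the
    key/value space of the pretrained self attention and simply prepended to
    the decoder's own keys and values.
    """
    def __init__(self, d_model, d_head):
        super().__init__()
        # Pretrained decoder projections (would be loaded from the LM).
        self.w_q = nn.Linear(d_model, d_head, bias=False)
        self.w_k = nn.Linear(d_model, d_head, bias=False)
        self.w_v = nn.Linear(d_model, d_head, bias=False)
        # New, randomly initialized projections for the encoder output.
        self.u_k = nn.Linear(d_model, d_head, bias=False)
        self.u_v = nn.Linear(d_model, d_head, bias=False)

    def forward(self, x_enc, y_dec):
        # x_enc: (B, S, d_model) encoder states; y_dec: (B, T, d_model) decoder states.
        q = self.w_q(y_dec)                                   # (B, T, d_head)
        k = torch.cat([self.u_k(x_enc), self.w_k(y_dec)], 1)  # (B, S+T, d_head)
        v = torch.cat([self.u_v(x_enc), self.w_v(y_dec)], 1)  # (B, S+T, d_head)
        # Scaled dot-product attention; the causal mask over y positions is omitted.
        att = F.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return att @ v                                        # (B, T, d_head)
```

In a full model, one such modified attention would replace each pretrained self attention head, with W_q, W_k, W_v loaded from the LM and only U_k, U_v (together with the task-specific encoder) initialized randomly before finetuning everything end-to-end.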
We maintain the original IMDb 25k/25k train/test split, with 2.5k reviews of the original training split held out for validation, and truncate reviews to 400 BPE tokens during training. Model quality is evaluated by perplexity, and adherence to the source bit x is evaluated by the sentiment classification accuracy of an external classifier on generated reviews as in Shen et al. (2017). Reviews are generated via random sampling with a temperature of 0.7. For our external classifier we use fastText (Joulin et al., 2016), which has an accuracy of 90.1% on the IMDb test set. Table 1 shows results for the conditional models, GPT-2 without finetuning, and Simple Fusion (Stahlberg et al., 2018). The GPT-2 model itself already shows a greatly reduced PPL compared to a baseline transformer. All pretraining methods further improve perplexity. The pseudo self attention approach significantly outperforms the other approaches in terms of class adherence. Despite being initialized as a language model, the approach only sees a decrease of 0.4% classification accuracy compared to the randomly initialized model. In contrast, Repr-Transformer and Context-Attn see a decrease of 20.0% and 3.9%, respectively. We additionally report the results of Simple Fusion in Table 1. Compared to Pseudo-Self, it gives a worse PPL and inferior classification accuracy. Given the weak results, we focus on comparisons between the deep models for the rest of the paper.
[2] This method is most successful when hand-selected, task-dependent buffer words (such as "tl;dr" for summarization) are inserted between x and y_<t as well.
Table 1: Class-Conditional Generation on IMDb movie reviews. Classification accuracy is measured by a sentiment classifier trained on the IMDb training set. Bold indicates statistically significant best results at p ≤ 0.05.
Model | PPL | Cls Acc
Test set | - | 90.1
GPT-2 | 41.21 | -
Simple Fusion | 38.31 | 65.1
Transformer | 105.43 | 92.7
Repr-Trans | 39.69 | 72.7
Context-Attn | 40.74 | 88.8
Pseudo-Self | 34.80 | 92.3
Table 2: Abstractive summarization on CNN/DM. † indicates pretraining of the encoder side. PointerGen+BU from (Gehrmann et al., 2018), ELMo+SHDEMB from (Edunov et al., 2019), BERT+Two-Stage from (Zhang et al., 2019), UniLM+ExtLoss from (Dong et al., 2019). Bold indicates statistically significant best results among general models and encoder-agnostic models at p ≤ 0.05.
Model | R1 / R2 / RL | PPL
PointerGen+BU | 41.22 / 18.68 / 38.34 | -
ELMo+SHDEMB† | 41.56 / 18.94 / 38.47 | -
BERT+Two-Stage† | 41.38 / 19.34 / 38.37 | -
UniLM+ExtLoss† | 43.47 / 20.30 / 40.63 | -
Transformer+Copy | 39.94 / 17.73 / 37.09 | 8.21
Repr-Trans | 37.09 / 13.77 / 33.99 | 13.58
Context-Attn | 40.59 / 18.17 / 37.24 | 6.68
Pseudo-Self | 40.72 / 18.38 / 37.46 | 6.43
Pseudo-Self+BU | 41.62 / 18.66 / 38.46 | 6.43
4.2 DOCUMENT SUMMARIZATION Abstractive document summarization requires the model to produce a long-form summary given a full news article. For these experiments, we use the non-anonymized CNN-Daily Mail dataset (Hermann et al., 2015). The dataset is comprised of 280k training examples of document-scale source news articles and corresponding 2-4 sentence target summaries. Summarization is a mature testbed with state-of-the-art models that use task-specific architecture modifications, so transfer learning methods need to be able to mesh well with these changes. We use the transformer version of the copy mechanism from Gehrmann et al. (2018) and employ bottom-up (BU) summarization attention pruning (Gehrmann et al., 2018).
Generation is conducted via beam-search with a beam size of 5 with tri-gram blocking, consistent with the literature models (Edunov et al., 2019). Table 2 shows the performance of the models tested with recent state-of-the-art models for comparison. Compared to the baseline model without pretraining, Pseudo-Self improves ROUGE-1 by 0.78, ROUGE-2 by 0.65, ROUGE-L by 0.37, and reduces PPL by 20%. The Context-Attn approach nearly matches these results for this task, but the Repr-Transformer approach performs more poorly. We additionally experiment with the bottom-up summarization attention pruning approach applied at inference time as in Gehrmann et al. (2018). With this modification, Pseudo-Self outperforms all literature models in ROUGE-1 except the text-to-text UniLM+ExtractLoss, which uses joint pretraining of the source and target and is trained with an additional extractive loss. The performance of all of our models can potentially be further improved through a pretrained encoder. 4.3 CONDITIONAL STORY GENERATION Conditional story generation with the WritingPrompts dataset (Fan et al., 2018) requires the model to produce an on-topic story given a short textual prompt. While summarization relies heavily on the encoder, this task gives more flexibility to the decoder. The dataset is well supervised, containing 300k single-sentence writing prompts (the source) and stories (the target). Following the preprocessing of Fan et al. (2018), we truncate the stories to 1000 tokens. Due to the story lengths, the total number of training tokens is on the order of 100 million, resulting in a large in-domain data setting. To compare models, we compute two metrics: perplexity (PPL) and prompt ranking. Perplexity assesses approximate quality and diversity, whereas prompt ranking measures the relevance of the story to the prompt. To calculate the prompt ranking, we use the procedure from Fan et al. (2018): For each story in the test set, the likelihood is evaluated under the model for the "true" corresponding prompt and 9 other randomly selected "fake" prompts from the test set. Then, the rank accuracy is the percentage of stories for which the model gave the highest likelihood to the correct prompt (a short code sketch of this computation is given below).
Table 3: Story generation on the WritingPrompts dataset. Rank acc. refers to the top-1 prompt ranking accuracy metric described in Section 4.3. (Experiments use the GPT-2 BPE scheme, so PPL numbers are not directly comparable to those reported in Fan et al. (2018).) Bold indicates statistically significant best results at p ≤ 0.05.
Model | PPL | Rank Acc.
Transformer | 30.58 | 80.6
Repr-Trans | 21.16 | 76.7
Context-Attn | >5000 | 9.3
Pseudo-Self | 21.21 | 81.8
Table 4: Image paragraph captioning on Visual Genome, as measured by CIDEr and BLEU-4 (B4) scores. Bold indicates statistically significant best results at p ≤ 0.05.
Model | CIDEr | B4
Krause et al. (2017) | 13.5 | 8.7
Chatterjee et al. (2018) | 20.9 | 9.4
Melas-Kyriazi et al. (2018) | 22.7 | 8.7
Transformer | 19.9 | 8.0
Repr-Trans | 19.3 | 7.2
Context-Attn | 22.6 | 7.6
Pseudo-Self | 24.0 | 8.3
Table 3 shows the results. Despite the large dataset size, the Repr-Transformer and Pseudo-Self approaches still substantially reduce the PPL, suggesting that these models effectively make use of the GPT-2 LM. Pseudo-Self sees only a 0.3% decrease in prompt ranking accuracy, while the Repr-Transformer approach sees a larger decrease. The Context-Attn model runs into optimization challenges and fails to learn in this setting.
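To make the prompt ranking metric described above concrete, a rough sketch is shown below. The helper model.log_likelihood is a hypothetical interface standing in for scoring log p(story | prompt) under a trained model; the distractor sampling details are assumptions.

```python
import torch

@torch.no_grad()
def prompt_ranking_accuracy(model, stories, prompts, n_distractors=9):
    """Top-1 prompt ranking accuracy (procedure of Fan et al., 2018).

    For each story, score its likelihood given the true prompt and
    `n_distractors` randomly chosen other prompts; count how often the
    true prompt receives the highest likelihood.
    """
    correct = 0
    for i, story in enumerate(stories):
        true_score = model.log_likelihood(prompts[i], story)  # assumed helper
        perm = torch.randperm(len(prompts)).tolist()
        fake_idx = [j for j in perm if j != i][:n_distractors]
        fake_scores = [model.log_likelihood(prompts[j], story) for j in fake_idx]
        correct += int(true_score > max(fake_scores))
    return correct / len(stories)
```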
We hypothesize that the Context-Attn failure is a result of introducing a randomly initialized attention block, which makes Context-Attn susceptible to optimization challenges, but further work is needed to understand this more completely. 4.4 IMAGE PARAGRAPH CAPTIONING Our final set of experiments considers image paragraph captioning using the Visual Genome dataset from Krause et al. (2017). Image captioning represents a strong real-world use case for encoder-agnostic pretraining. Visual Genome, in particular, represents a realistic setting with paragraph-sized captions (5-8 short sentences), which requires greater fluency than single-sentence captions. Due to the difficulty of producing labeled paragraph captions, Visual Genome contains fewer than 20,000 image-paragraph pairs. As a result, models trained from scratch on Visual Genome have been observed to have difficulty learning the structure of language. We use the same convolutional encoder as Krause et al. (2017), without the final pooling layer: for each image the output of the encoder is a tensor of size (36, 2048) extracted from a ResNet. Note that in this experiment the encoder and decoder are trained separately rather than end-to-end. Models are evaluated using the common CIDEr and BLEU-4 metrics. Table 4 shows the results on the captioning task, comparing the transfer learning methods with a non-pretraining baseline and models from the literature which use the same loss function.[3] Of the three pretraining approaches, Pseudo-Self and Context-Attn give the statistically significant best performance, and Pseudo-Self is the only model to improve both CIDEr and BLEU compared to the Transformer baseline. Pseudo-Self additionally improves performance over the literature models in terms of CIDEr but gives a slightly worse BLEU-4.
[3] Recent work shows it is possible to improve paragraph captioning models by incorporating sequence-level (Melas-Kyriazi et al., 2018) and adversarial (Chatterjee & Schwing, 2018) losses, but these loss function improvements are orthogonal to improvements in the underlying model architecture.
5 ANALYSIS AND DISCUSSION Experimental trends Overall, Repr-Trans gives poor performance in three out of the four tasks, underperforming a transformer without pretraining on summarization and image captioning. Context-Attn gives stronger results than Repr-Trans, but shows significantly worse performance than Pseudo-Self on class-conditional generation and slightly worse performance on summarization and image captioning. Most critically, Context-Attn demonstrates a susceptibility to optimization challenges. On all tasks Pseudo-Self gives the best performance or is tied for best. Interference by added parameters In Section 3, we hypothesized that pseudo self attention enables better use of the pretrained LM because the introduction of parameters in Pseudo-Self interferes less with the original parameters than Context-Attn. To explore this quantitatively, we plot the root median squared deviation of parameters from their original values in the feed-forward layer for the class-conditional generation task (Figure 2a; a short sketch of this statistic is given below). While both models start with the same parameters, the Context-Attn parameters change significantly more than Pseudo-Self during training. The parameter values are only part of the story: parameters may change while the overall generative behavior stays the same.
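A rough sketch of the parameter-deviation statistic from Figure 2a is given below; selecting the feed-forward weights by name and comparing full state dicts are illustrative assumptions.

```python
import torch

def root_median_squared_deviation(finetuned, pretrained, key_filter="mlp"):
    """Root of the median squared deviation between corresponding parameters.

    `finetuned` and `pretrained` are state dicts with identical keys;
    `key_filter` selects the feed-forward (MLP) weights, as in Figure 2a.
    The exact layer selection is an assumption for illustration.
    """
    sq_devs = []
    for name, p_new in finetuned.items():
        if key_filter in name:
            p_old = pretrained[name]
            sq_devs.append((p_new - p_old).pow(2).flatten())
    sq_devs = torch.cat(sq_devs)
    return sq_devs.median().sqrt().item()
```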
We probe this for image captioning by first finetuning the pretrained LM on target-side data, giving an unconditional model with an identical structure to Pseudo-Self and Context-Attn except without any additional randomly initialized parameters. In Figure 2b, we plot the KL divergence between the self attention distributions of the unconditional model and those of the conditional models at each layer. Both models have similar attention distributions to the unconditional model at the first layer (in Context-Attn this precedes the introduction of new parameters). Beyond the first layer, the KL for Context-Attn becomes over an order of magnitude larger than that of Pseudo-Self, suggesting that the introduction of the context attention block significantly perturbs the overall behavior of the model. The additional perturbation is not associated with improved incorporation of the source information, as Table 4 shows that Pseudo-Self gives better performance. Effect of pretrained LM size There is a continuing trend to larger pretrained LMs. During the preparation of this manuscript, a larger version of GPT-2 was made available with 345M parameters, increasing the model dimension to 1028, the number of attention heads to 16, and the number of layers to 24. We retrained our model using this larger LM for class-conditional generation, using the same training hyperparameters and re-tuning the generation temperature (Table 5). The larger model improves PPL by 4.5 points while attaining similarly high classification accuracy. This datapoint suggests that transfer learning effectiveness can continue to improve along with the quality of the pretrained model used. Low-data supervision Many of our tasks showed improvements even with medium-to-large training sets. To study the effectiveness of the approach in low data regimes, we create small datasets by subsampling the IMDb dataset to sizes between 200 and 16k datapoints. We retrain our model using the same hyperparameters and use datasize-dependent early stopping to prevent overfitting. To reduce variance and measure uncertainty, we repeat the process 8 times for each dataset size, calculating the PPL and classification accuracy. Results are shown in Figure 3. Note that a non-pretrained model has a PPL of over 1000 when trained on 200 examples. The pretrained model starts with reasonable outputs (44.4 PPL after 200 examples) and increases task accuracy steadily with more data. See Section B in the appendix for representative samples. Human evaluation To assess the quality of generations, we conducted a human evaluation based on the story generation task. Generation uses a temperature of 0.9 and a top-k value of 100. We ask participants on Amazon Mechanical Turk a series of four yes/no questions mapped to desirable linguistic properties outlined in Dang (2006): grammaticality, non-redundancy, consistency, and typicality. 125 stories are evaluated for each model, and each story is evaluated by 5 unique workers. Scores are calculated for each property as the total percentage of positive responses. A combined score rates the model overall on a scale from 0-4 based on the equally-weighted combination of the four properties. The results are shown in Table 6. In all four categories, the Pseudo-Self and Repr-Transformer models show statistically significant performance gains compared to the baseline Transformer model. 
The Pseudo-Self model achieves a grammaticality score of only 6.1% less than the test set, indicating strong grammaticality, likely a more localized property, is well learned by the pretrained LM and effectively transferred to the conditional models. In contrast, all models score significantly worse than the test data in terms of consistency and typicality. This suggests that these higher-level properties, while best transferred in the Pseudo-Self case, still represent a challenge for neural models. 6 CONCLUSION We study encoder-agnostic approaches for adapting a pretrained language model to general-purpose conditional language generation. Experiments spanning a range of diverse long-form conditional generation tasks demonstrate that pseudo self attention improves performance over strong encoderagnostic pretraining baselines, and is the only consistently performant model. From a practical perspective, the approach gives robust, sizable improvements over a non-pretraining baseline while maintaining adherence to the source context. Furthermore, we demonstrate the data efficiency and qualitative properties of the approach. Beyond empirical results, this study highlights the distinction between improving our ability to produce contextual representations of a source language and improving our capacity to generate text in a target language. While they appear to be similar problems, they exhibit substantially different phenomenology. For example, the representation-based approach, which works well for NLU, gives poor performance for NLG. Future work can study this distinction further. A EXPERIMENTAL DETAILS Approximate hyperparameter settings were taken from Gehrmann et al. (2018), followed by a coarse hyperparameter tuning for each dataset. Most parameters were held constant between the different models, though in some cases to enable as fair a comparison as possible individual parameters were separately optimized for each model. For the complete set of hyperparameters for each model, see the details at https://github.com/anon37234/encoder-agnostic-adaptation. For all models Dropout and early stopping are used for regularization. In addition, in initial experiments we found two optimization details to help generalization across all models: discriminative finetuning (Howard & Ruder, 2018) and using a lower learning rate for the decoder than the encoder. In discriminative finetuning, the learning rate of each layer decreases exponentially from the top transformer layer to the bottom transformer layer. Using a smaller learning rate for the decoder ensures that the decoder does not initially significantly change to accommodate the uninformative information from the randomly initialized encoder. B QUALITATIVE EXAMPLES Representative samples for the movie review dataset are shown in Table 7. The No-Pretraining model is the transformer from Table 1 and the number in the left column indicates the number of supervised examples in the training dataset. Samples are generated via random sampling with a temperature of 0.75. Without pretraining, the model makes many coherence mistakes. The Pseudo-Self 22K makes no grammatical mistakes and follows a single train of thought, although it is somewhat generic. The distinction between the models is further exaggerated when only 1.8k supervised examples are given. The baseline model trained on only 1.8k datapoints leads to very incoherent generated text. In contrast, the Pseudo-Attention model shows significantly improved grammar and sentence structure. 
Despite a handful of mistakes, the review follows a consistent description of a movie over multiple sentences. Given the poor performance of the baseline model, these properties must have been transferred from the original unconditional LM. These samples were selected to be representative of the broader set for the indicated models.
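As a supplement to the optimization notes in Appendix A, the following is a rough sketch of how the layer-wise (discriminative) finetuning rates could be set up; the decay factor and the optimizer call in the usage comment are illustrative assumptions rather than the paper's exact settings.

```python
def discriminative_param_groups(decoder_layers, base_lr, decay=0.8):
    """Layer-wise learning rates for discriminative finetuning (Howard & Ruder, 2018).

    The top transformer layer gets `base_lr`; each layer below it gets the
    rate multiplied by `decay`. The decay value here is an assumed example.
    """
    groups = []
    n = len(decoder_layers)
    for depth, layer in enumerate(decoder_layers):  # depth 0 = bottom layer
        lr = base_lr * (decay ** (n - 1 - depth))
        groups.append({"params": list(layer.parameters()), "lr": lr})
    return groups

# Hypothetical usage: torch.optim.Adam(discriminative_param_groups(model.layers, 2e-4))
```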
1. What is the focus of the paper being reviewed, and what are the research questions being addressed? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its performance compared to prior works? 3. Are there any concerns or limitations regarding the experimental design or results that require further investigation? 4. How does the reviewer assess the novelty and significance of the work in the context of encoder-agnostic methods for text generation? 5. What suggestions does the reviewer provide for improving the study, such as including additional experiments or comparisons with other works?
Review
Review This paper compares a few encoder-agnostic methods for using pretrained decoders in text generation tasks. The authors compare a few intuitive ways of doing this and present results showing that pseudo-self attention does the best. However, I think the results have some strange points that need further investigation. Going from repr-transformer to context-attention to pseudo-self, there is an increasing number of parameters initialized by pretraining. However, both of the first two methods often perform worse than the baseline transformer without pretraining. So should more things be initialized with pre-training or less? It would be good to verify that this is not due to under-training. Except for paragraph captioning, the results on other tasks are not better than prior results, which do not use pretraining. The baseline transformer is also usually worse than prior results. The human evaluation shows that the proposed method does better on story generation, but this one is essentially text-to-text. What is missing is how this compares with even more pretraining, say GPT-2, without any fine tuning. Transferring gains of pretraining to generation tasks is clearly a promising direction, and the bar for success in this area needs to be outperforming the best previous methods that do not use pretraining. There is no comparison with previous text-to-text methods that use pretraining. If the proposed methods are truly encoder agnostic, then they should perform reasonably on text-to-text as well. I think some MT experiments would be good since the evaluations are more competitive and reliable. Perhaps using some language pairs that do not have sufficient training data.
ICLR
Title Spectral Nonlocal Block for Neural Network Abstract The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although having shown excellent performances, it needs an elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, which is a generalized form of existing nonlocal blocks (e.g. nonlocal block, nonlocal stage). Moreover, we give the stable hypothesis and show that the steady-state of the deeper nonlocal structure should meet with it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks. 1 INTRODUCTION Capturing the long-range spatial-temporal dependencies is crucial for the Deep Convolutional Neural Networks (CNNs) to extract discriminate features in vision tasks such as image and video classification. However, the traditional convolution operator only focuses on processing local neighborhood at a time. This makes the CNNs need to go deeper with convolutional operations to enlarge the receptive fields, which lead to higher computation and memory. Moreover, going deeper cannot always increase the effective receptive fields due to the Gaussian distribution of the kernel weight (Luo et al. (2016)). To eliminate this limitation, some recent works focus on designing the network architecture with wider and well-designed modules to catch the long-range dependencies such as (Peng et al. (2017), Chen et al. (2017), Zhao et al. (2017)). Although having larger receptive fields, these modules still need to be applied recursively to catch the dependencies of the pairs in large distances. Inspired by the classical non-local means method in image denoising, Wang et al. (2018) proposes the nonlocal neural network which uses the nonlocal (NL) block to concern the “full-range” dependencies in only one module by exploring the correlations between each position and all other positions. In the NL block, the affinity matrix is first computed to represent the correlations between each position pair. Then the weight means of features are calculated based on the affinity matrix to refine the feature representation. Finally, the residual connection is added to the refined feature map. Due to its simplicity and effectiveness, the nonlocal block has been widely used in image and video classification (Wang et al. (2018); Yue et al. (2018); Tao et al. (2018); Chen et al. (2018)), image segmentation (Huang et al. (2018); Yue et al. (2018); Wang et al. (2018)) and person re-identification (Liao et al. (2018); Zhang et al. (2019)) recently. However, due to the complexity of the affinity matrix, the nonlocal block 1 needs much more computational effort and is sensitive to its number and position in the neural network (Tao et al. (2018)). Some works solve the first problem by simplifying the calculation of the affinity matrix such as Huang et al. (2018), He et al. (2019), Yue et al. (2018), Chen et al. (2018). Only a few works try to solve the second problem which limits the robustness of the nonlocal network 2. 
1 The nonlocal block is composed of a nonlocal operator and a residual connection. 2 The nonlocal network is composed of several nonlocal blocks.
Tao et al. (2018) proposes the nonlocal stage (NS) block, which concerns the diffusion nature and maintains the same affinity matrix for all the nonlocal units in the NS block. Compared with the NL block, the NS block is insensitive to the number of blocks and allows a deeper nonlocal structure. However, the deeper nonlocal structure of the NS block increases the complexity and does not yield a remarkable improvement. In this work, we focus on elaborating a robust nonlocal block which is more flexible when used in a neural network. We prove that the nonlocal operator in the nonlocal block is equivalent to the Chebyshev-approximated fully-connected graph filter with irrational constraints that limit its liberty for learning. To remove these irrational constraints, we propose the Spectral-based Nonlocal (SNL) block which is more robust and can degrade into the NL and NS with specific assumptions. We also prove that the deeper nonlocal structure satisfies the stable hypothesis with the help of steady-state analysis. Based on this hypothesis, we give the full-order approximated spectral nonlocal (gSNL) block which performs well in deeper nonlocal structures. Finally, we add our proposed nonlocal blocks into the deep network and evaluate them on image and video classification tasks. Experiments show that the networks with our proposed blocks are more robust and have a higher accuracy than those using other types of nonlocal blocks. To summarize, our contributions are threefold:
• We propose a spectral nonlocal (SNL) block as an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks, which is a generalization of the classical nonlocal blocks.
• We propose the stable hypothesis, which can enable the deeper nonlocal structure without an elaborate preparation for both the number and position of the building blocks. We further extend SNL into generalized SNL (gSNL), which can enable multiple nonlocal blocks to be plugged into the existing computer vision architectures with stable learning dynamics.
• Both SNL and gSNL have outperformed other nonlocal blocks across both image and video classification tasks with a clear-cut improvement.
2 PRELIMINARY Nonlocal block The NL block consists of an NL operator with a residual connection and is expressed as: Y = X + F(A, Z) with Z = X W_g, (1) where X ∈ R^{N×C1} is the input feature map, F(A, Z) is the NL operator, and Z ∈ R^{N×Cs} is the transferred feature map that compresses the channels of X by a linear transformation with kernel W_g ∈ R^{C1×Cs}. Here N is the number of positions. The affinity matrix A ∈ R^{N×N} is composed of pairwise correlations between pixels. In the NL block, the NL operator explores the "full-range" dependencies by concerning the relationships between all the position pairs: F(A, Z) = A Z W with A = (a_ij)_{N×N}, A_ij = f(X_i,:, X_j,:), (2) where W ∈ R^{Cs×C1} is the weight matrix of a linear transformation and f(·) is the affinity kernel, which can adopt the "Dot Product", "Traditional Gaussian", "Embedded Gaussian" or another kernel matrix with a finite Frobenius norm. Nonlocal stage To make the NL operator follow the diffusion nature that allows deeper nonlocal structure (Tao et al.
(2018)), the nonlocal stage (NS) operator uses the graph laplacian L = DA−A to replace the affinity matrix A in the NL operator: F̄(A,Z) = (A−DA)ZW with DA = diag(di), (3) where F̄(A,Z) is the NS operator. di = ∑ j aij is the degree of node i. Moreover, when adding multiple blocks with the same affinity matrix A and replacing the NL operator by the NS operator, these consecutively-connected blocks become the NS block. We called these nonlocal blocks in the NS block as the NS units. 3 METHOD The nonlocal operator can be divided into two steps: calculating the affinity matrix A to represent the correlations between each position pairs and refining the feature map by calculating the weighted means based on A. In this section, a fully-connected graph filter is utilized for explaining the nonlocal operator. With the Chebyshev approximation, we propose the SNL operator which is proved to be a generalized form of NL and NS operator and is more robust with higher performance in computer vision tasks. Furthermore, based on the stable hypothesis that deeper nonlocal structure tends to learn a stable affinity matrix, we extend our SNL operator into a full-order Chebyshev approximation version, i.e. the gSNL. 3.1 THE PROPOSED SPECTRAL NONLOCAL OPERATOR Nonlocal operator in the graph view The nonlocal operator F(A,Z) is a filter that computes a weighted mean of all the positions in the feature map Z based on the affinity matrix A and then conduct the feature transformation with the kernel W. This is the same as filtering the signal Z by a graph filter Ω in the graph domain defined by the affinity matrix A (Shuman et al. (2013)). Based on this perspective (Shuman et al. (2013)), we further define the nonlocal operator as: Theorem 1. Given an affinity matrix A ∈ RN×N and the signal Z ∈ RN×Cs , the nonlocal operator is the same as filtering the signal Z in the graph domain of a fully-connected weighted graph G: F(A,Z) = Z ∗ g = Ugθ(Λ)UTZ = UΩUTZ with L = DL −A = UTΛU, (4) where the graph filter Ω ∈ RN×N is a diagonal parameter matrix, i.e. Ω = diag(ω), ω = (ω1, ω2, ..., ωn). G = (V,A) is a fully-connected graph with the vertex set V and affinity matrix A. Λ = diag({λ1, λ2, ..., λi, ..., λN}) and U = {u1,u2, ...,ui, ...,uN} are the eigenvectors and eigenvalues of the graph laplacian L. This definition requires that the graph laplacian L has non-singular eigenvalue and eigenvector, so the affinity matrix A should be a symmetric, non-negative, row-normalized matrix. To meet this requirement, the affinity matrix A can be obtained by the following steps. First, the affinity kernel is used to calculate the matrix A (we use the dot product with embeded weight matrix Wφ ∈ RC1×Cs and Wϕ ∈ RC1×Cs as the affinity kernel, i.e. A = (XWφ)(XWϕ)). Then we make the matrix A symmetric: Ā = A T+A 2 . We normalize the row of Ā to make it satisfy di = 1 and having Ǎ = D−1A Ā. For the simplicity, in the following sections the symmetric, non-negative, row-normalized matrix Ǎ is denoted as A. The proposed spectral nonlocal operator The graph filter Ω in Eq. (4) contains N parameters. To simplify it, we use the Chebyshev polynomials which can reduce the N parameters into k (k N ). For simplicity, we firstly assume that the input Z, the output F(A,Z) and the output F(A,Z) have only one channel. Following the similar method as Defferrard et al. (2016), the kst-order Chebyshev polynomials is used to approximate the graph filter function gθ(Λ): F(A,Z) = K−1∑ k=0 θkTk(L ′ )Z with L ′ = 2L/λmax − In, s.t. 
T0(L ′ ) = In, T1(L ′ ) = L ′ , Tk(L ′ ) = 2L ′ Tk−1(L ′ )− Tk−2(L ′ ). (5) Due to L is a random walk laplacican, the maximum eiginvalue λmax satisfies λmax = 2 which makes L ′ = A (Shuman et al. (2013)). Then Eq. (5) becomes: F(A,Z) = K−1∑ k=0 θkTk(A)Z = θ0Z + θ1AZ + K−1∑ k=2 θkTk(A)Z, (6) If k = 1, the first-order Chebyshev approximation of Eq. (6) becomes: F(A,Z) = θ0Z + θ1AZ, (7) where θ0 and θ1 are the coefficients for the first and second term which are approximated by learning with SGD. Then, extending Eq. (7) into multi-channel conditions, we can get the formation of our SNL operator: Fs(A,Z) = ZW1 + AZW2, (8) where Fs(A,Z) is the SNL operator, W1 ∈ RCs×C1 , W2 ∈ RCs×C1 . Finally, a residual connection is added with the SNL operator to form the SNL block: Y = X + Fs(A,Z) = X + ZW1 + AZW2. (9) Relation with other nonlocal operators As shown in fig. 1, our SNL operator can degrade into the NL operator by setting W1 = 0, i.e. θ0 = 0. However, its analytic solution: θ0 = 2N ∑N j=0 ωj controls the total filtering intensity, which cannot be guaranteed to be 0. This setting will limit the search space when training the network and reduce the robustness of the NL block. The NL operator cannot magnify features of a large range and damp some discriminative features such as the beak of the waterfowl. Our SNL operator can also degrade into the NS operator by setting W1 = −W2, i.e. θ1 + θ0 = 0. However, the analytic solution of this equation is θ1 + θ0 = 2N ∑N j=0 ωj(λj + 1) = 0. When setting it to zero, the filter strength of the high-frequency signal (with high λ) such as the small part or twig is suppressed. Thus, it still cannot magnify the discriminative part such as the beak of the waterfowl as shown in fig. 1. Comparing with NL and NS, our SNL does not have these irrational constraints and give these two parameters a liberal learning space. Thus, θ0 can control the preserve strength of the discriminative features, while θ1 can pay more attention to the low-frequency signal to diminish the noise. 3.2 THE PROPOSED GENERALIZED SPECTRAL NONLOCAL OPERATOR To fully exploit the “full-range” dependencies, the nonlocal block should have the ability to be consecutively stacked into the network to form a deeper nonlocal structure. However, some types of nonlocal blocks such as the NL and CGNL block cannot achieve this purpose (Tao et al. (2018)). To show the robustness of our SNL block when used in the deeper nonlocal structure, we firstly study the steady-state of deeper nonlocal structure when consecutively adding our SNL block. We also prove the stable hypothesis that the deeper nonlocal structure tends to learn a stable affinity. Based on this hypothesis, we can extend our SNL block into a full-order Chebyshev approximation, i.e. the gSNL block which is more applicable for deeper nonlocal structure. The stable hypothesis The Steady-state analysis can be used to analyze the stable dynamics of the nonlocal block. Here we give the steady-state analysis of our SNL block when consecutively adds into the network structure and get the Stable Hypothesis: Lemma 1. The Stable Hypothesis: when adding more than two consecutively-connected SNL blocks with the same affinity matrix A into the network structure, these SNL blocks are stable when the variable affinity matrix A satisfies: Ak = A. Proof. The stability holds when the weight parameters in W1,W2 and W are small enough such that the CFL condition is satisfied (Tao et al. (2018)). So we ignore them for simplicity. 
The discrete nonlinear operator of our SNL have a similar formulation as the NS operator: LhZN := −LZ, where h is the discretization parameter. ZN is the input of the N th block in the deeper nonlocal structure with Z0 = X. The stable assumption demands that ZN+1 = ZN , so the steady-state equation of the last SNL block can be written as: ZN+1 − ZN = LhZN = −LZN = 0. The deeper nonlocal structure has more than one SNL blocks. So the ZN−1 and LhZN−1 can be used to express ZN : −LZN = −(I−A)ZN = −(I−A)(ZN−1 + LhZN−1) = −(I−A)ZN−1 + (I−A)(I−A)ZN−1 = 0. Finally, the steady-state equation becomes: (I−A)ZN−1 = (I−A)2ZN−1 ⇐⇒ A2 = A This equation can naturally extend to the k-hop affinity matrix Ak, i.e. Ak = A. To verify the stable hypothesis, we add five consecutively-connected SNL blocks (and NS blocks) into the PreResnet56 He et al. (2016) and train this model on the train set of the CIFAR100 dataset with the initial learning rate 0.1 which is subsequently divided by 10 at 150 and 250 epochs (total 300 epochs). A weight decay 1e − 4 and momentum 0.9 are also used. Then we test the trained model on the test set and output the affinity matrix of each image. Figure. 2 shows the statistics that reflects the strength of the affinity matrix, 2-hop, 3-hop, and 4-hop affinity matrix: A,A2,A3,A4. We can see that the number of elements in each histogram bin are nearly the same. This means that the A, A2, A3, A4 have similar distribution of all the elements in k-hop affinity matrixes, which also empirically verifies the stable-state equation: Ak = A. Full-order spectral nonlocal operator With the stable hypothesis, the Chebyshev polynomials can be simplified into a piece-wise function (details in Appendix B). Taking this piece-wise function into the Eq. 7, we can get the full-order approximation of the SNL operator: F∗s (A,Z) = ∑ k θkTk(A)Z = Zθ̃1 + AZθ̃2 + (2A− I)Zθ̃3, (10) where θ̃1 = ∑k%4=0 i1 θi1 , θ̃2 = ∑k%4=1||k%4=2 i2 θi1 , θ̃3 = ∑k%4=3 i1 θi1 , whose upper bound is less than 1. Then, extending it into multi-channel input and output with the residual connection, we can get our gSNL block: Y = X + F∗s (A,Z) = X + ZW1 + AZW2 + (2A− I)ZW3 (11) The gSNL block is well-performed when the stable affinity hypothesis is satisfied, i.e. adding more than two nonlocal blocks with the same affinity matrix as shown in Table. 4. 3.3 IMPLEMENTATION DETAILS The implementation details of the gSNL block is shown in fig. 3. The input feature map X ∈ RW×H×C1 is first fed into three 1x1 convolutions with the weight kernel: Wφ ∈ RC1×Cs , Wϕ ∈ RC1×Cs , Wg ∈ RC1×Cs to subtract the number of channel. One of the output Z ∈ RW×H×Cs is used as the transferred feature map to reduce the calculation complexity, while the other two output Φ ∈ RW×H×Cs , Ψ ∈ RW×H×Cs are used to get the affinity matrix A. The sub-channel Cs are usually two times less than the input channel C1. The affinity matrix is calculated by the affinity kernel function f(·) and then use the operation in Sec3.1 to make it non-negative, symmetric and normalized. Finally, with the affinity matrix A and the transferred feature map Z, the output of the nonlocal block can be obtained by the equation Eq. (11). Specifically, the three weight matrixes W1 ∈ RCs×C1 , W2 ∈ RCs×C1 , W3 ∈ RCs×C1 are implemented as three 1x1 convolutions. 4 EXPERIMENT 4.1 SETTING Datasets Our proposed SNL and gSNL blocks have been evaluated across several computer vision tasks, including image classification and video-based action recognition. 
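Before detailing the setup, a minimal PyTorch-style sketch of the SNL/gSNL block computation described in Section 3.3 (Eq. 9 and Eq. 11) is given for reference; the layer names, the non-negativity handling of the affinity matrix, and the 2D-only formulation are simplifying assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SNLBlock(nn.Module):
    """Minimal 2D sketch of the SNL block (Eq. 9) and its gSNL variant (Eq. 11).

    The transferred channel count follows the paper's default Cs = C1 / 2;
    set `full_order=True` to add the gSNL term (2A - I) Z W_3.
    """
    def __init__(self, c1, full_order=False):
        super().__init__()
        cs = c1 // 2
        self.phi = nn.Conv2d(c1, cs, 1)   # W_phi   (affinity, one side)
        self.psi = nn.Conv2d(c1, cs, 1)   # W_varphi (affinity, other side)
        self.g = nn.Conv2d(c1, cs, 1)     # W_g -> transferred feature map Z
        self.w1 = nn.Conv2d(cs, c1, 1)    # W_1 in Z W_1
        self.w2 = nn.Conv2d(cs, c1, 1)    # W_2 in A Z W_2
        self.w3 = nn.Conv2d(cs, c1, 1) if full_order else None

    def _to_map(self, conv, t, h, w):
        # (B, N, Cs) -> (B, Cs, H, W) -> 1x1 conv -> (B, C1, H, W)
        return conv(t.transpose(1, 2).reshape(t.size(0), -1, h, w))

    def forward(self, x):
        b, _, h, w = x.shape
        z = self.g(x).flatten(2).transpose(1, 2)      # (B, N, Cs)
        phi = self.phi(x).flatten(2).transpose(1, 2)  # (B, N, Cs)
        psi = self.psi(x).flatten(2).transpose(1, 2)  # (B, N, Cs)
        a = phi @ psi.transpose(1, 2)                 # dot-product affinity (B, N, N)
        a = a.clamp_min(0)                            # non-negativity (simplification)
        a = 0.5 * (a + a.transpose(1, 2))             # symmetrize
        a = a / a.sum(-1, keepdim=True).clamp_min(1e-6)  # row-normalize
        y = x + self._to_map(self.w1, z, h, w) + self._to_map(self.w2, a @ z, h, w)
        if self.w3 is not None:                       # (2A - I) Z = 2 A Z - Z
            y = y + self._to_map(self.w3, 2 * (a @ z) - z, h, w)
        return y
```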
For the image classification, both CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton (2009)) are tested. The CIFAR10 dataset contains 60, 000 images of 10 classes, and CIFAR-100 dataset contains 60, 000 images of 100 classes. For these two datasets, we use 50, 000 images as the train set and 10, 000 images as the test set. We also generate experiments for the fine-grained classification on the Birds-200-2011 (CUB-200) dataset (Welinder et al. (2010)) which contains 11, 788 images of 200 bird categories. For the action recognition, the experiments are conducted on the UCF-101 dataset (Soomro et al. (2012)), which contains 101 different actions. Backbones For the image classification, the ResNet-50 and the PreResNet variations (including both PreResNet-20 and PreResNet-56) are used as the backbone networks. For the video classification task, we follow the I3D structure (Hara et al. (2018)) which uses k × k × k kernels to replace the convolution operator in the residual block. Setting for the network In the main experiments, we setCs = C1/2. Without loss of the generality, we use the “Dot Product” as the affinity kernel in the experiments. We add one SNL (or gSNL) block into these backbone networks to construct the SNL (or gSNL) network. For the ResNet and the I3D (Hara et al. (2018)), following Wang et al. (2018) we add the SNL block right before the last residual block of res4. For the PreResNet series, we add the SNL block right after the second residual block in res1. For the other nonlocal-base block including the NL (Wang et al. (2018)), the NS (Tao et al. (2018)), the Compact Generalized Nonlocal Block (CGNL) (Yue et al. (2018)) and the Double Attention Block (A2), the settings are all the same as ours. The difference of these blocks are shown in Table. 1, in which the Approximated Condition shows the strategy for the Chebyshev approximation and Channel-wise reflect the consideration of the channel relations. Setting for the training For the image classification on CIFAR-10 dataset and CIFAR-100 dataset, we train the models end-to-end without using pretrained model. The initial learning rate 0.1 is used for these two datasets with the weight decay 1e− 4 and momentum 0.9. The learning rate is divided by 10 at 150 and 250 epochs. The models are trained for total 300 epochs. For the fine-grained classification on CUB-200 dataset, we use the models pretrained on ImageNet (Russakovsky et al. (2015)) to initialize the weights. We train the models for total 200 epochs with the initial learning rate 0.1 which is subsequently divided by 10 at 31, 61, 81 epochs. The weight decay and momentum are the same as the setting of CIFAR-10 and CIFAR-100. For the video classification on the UCF-101 dataset, the weights are initialized by the pretrained I3D model on Kinetics dataset (Kay et al. (2017)). We train the models with the initial learning rate 0.1 which is subsequently divided by 10 each 40 epochs. The training stops at the 100 epochs. The weight decay and momentum are the same as the setting of CIFAR-10 and CIFAR-100. 4.2 ABLATION EXPERIMENT The number of channels in transferred feature space The nonlocal-based block firstly reduces the channels of original feature mapC1 into the transferred feature spaceCs by the 1×1 convolution to reduce the computation complexity. When Cs is too large, the feature map will contain redundant information which introduces the noise when calculating the affinity matrix A. 
However, if Cs is too small, it is hard to reconstruct the output feature map due to inadequate features. To test the robustness for the number of the Cs, we generate three types of models with different number of the transferred channels with the setting: “Sub 1” (Cs = C1), “Sub 2” (Cs = C12 ), “Sub 4” (Cs = C1 4 ) as shown in Table. 2. Other parameters of the models and the training steps are the same as the setting in Sec.4.1. Table. 2 shows the experimental results of the three types of models with different nonlocal blocks. Our SNL and gSNL blocks outperforms other models profited by their flexible for learning. Moreover, from Table. 2, we can see that the performances of the CGNL steeply drops when the number of the transferred channels increases. This is because the CGNL block concerns the relationship between channels, when the number of the sub-channel increases, the relationship between the redundant channels seriously interferes its effects. Overall, our proposed nonlocal block is the most robust for the large number of transferred channels (our model rise 1.1% in Top1 while the best of others only rise 0.4% compared to the baseline). The stage for adding the nonlocal blocks The nonlocal-based blocks can be added into the different stages of the preResNet (or the ResNet) to form the Nonlocal Net. In Tao et al. (2018), the nonlocalbased blocks are added into the early stage of the preResNet to catch the long-range correlations. Here we experiment the performance of adding different types of nonlocal blocks into the three stages (the first, the second and the third stage of the preResNet) and train the models on CIFAR100 dataset with the same setting discussed in Sec.5.2. The experimental results are shown in Table. 3. We can see that the performances of the NL block is lower than the backbones when adding into the early stage. However, our proposed SNL block has 0.81% improvement compared with the backbone when respectively adding into all the three stages, which is much higher than the other type nonlocal blocks (only 0.42% for the best case). To intuitively show the stability and robustness of our SNL, we give the spectrum analysis for the estimated weight matrices (Tao et al. (2018)). We extract the self-attention weight matrix: Wg,W of the NL block and the NS block, Wg,W2 of our proposed SNL block. The dimension of the weight matrix satisfies: Wg ∈ RC1×Cs , W ∈ RCs×C1 W2 ∈ RCs×C1 . To make all the eigenvalues real, we let: W̃ = (WgW)+(WgW) T 2 . We do the same to the W2. Figure. 5 shows the top thirtytwo eigenvalues of the weight matrix of W̃ on the models in Table. 3. We can see that the density of the negative eigenvalues is higher than the positive eigenvalues of the NL block when adding into all three stages. This phenomenon makes the NL operator F(A,Z) in Eq. (1) less than zero. So the output feature map is less than the input feature map, i.e. Y < X (more detail of this phenomenon can be seen in Tao et al. (2018)). The NS block can avoid “the damping effect” to some extent by concerning the diffusion nature. However, when adding into the early stage, only six eigenvalues of the nonlocal stage are not equal to zero. This phenomenon makes the nonlocal stage cannot effectively magnify the discriminated feature. Comparing with these two models, our proposed SNL block has more positive eigenvalues which takes effect to enhance the discriminated features and also avoids the “damping effect”. 
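The spectrum analysis above can be reproduced roughly as follows; the symmetrization W_tilde = ((Wg W) + (Wg W)^T) / 2 follows the text, while the function interface and the use of torch.linalg are assumptions.

```python
import torch

def symmetrized_eigenvalues(w_g, w, top_k=32):
    """Eigenvalues of the symmetrized self-attention weight product.

    w_g: (C1, Cs) tensor, w: (Cs, C1) tensor. The symmetrized product has
    real eigenvalues; the top-k (largest first) are returned for inspection.
    """
    prod = w_g @ w                        # (C1, C1)
    w_tilde = 0.5 * (prod + prod.t())     # symmetric, hence real spectrum
    eigvals = torch.linalg.eigvalsh(w_tilde)  # ascending order
    return eigvals.flip(0)[:top_k]        # largest first
```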
The number of the nonlocal blocks We test the robustness for adding multiple nonlocal blocks into the backbone network which forms the three type network “Different Position 3 (DP 3)”, “Same Position 3 (SP 3)” “Same Position 5 (SP 5)” as shown in Table. 4. The result are shown in Table. 4. For the model “DP3”, three blocks are added into the stage 1, stage 2, and stage 3 (right after the second residual block). We can see that adding three proposed nonlocal operators into different stages of the backbone generate a larger improvement than the NS operator and NL operator (2.4% improvement). This is because when adding NS and NL into the early stage, these two models cannot better aggregate the low-level features and interfere the following blocks. For the model “SP 3” (“SP 5”), we add three (five) consecutively-connected nonlocal blocks into the stage 1. Note that different from the experiment in Tao et al. (2018) and Wang et al. (2018), these consecutivelyconnected nonlocal blocks have the same affinity matrix. From Table. 4, we can see that profited by concerning the stable hypothesis discussed in Sec 3.3, our gSNL outperform all other models when adding consecutively-connected nonlocal blocks (rises average 0.72% to the backbone and 0.41% higher than the best performance of other type nonlocal blocks) and has a relatively stable performance. However, one drawback is that our gSNL may interfere the learning when adding only one nonlocal block (the stable hypothesis is not satisfied). 4.3 MAIN RESULTS We test the networks with the Nonlocal Block (NL), the Nonlocal Stage (NS), the Compact Generalized Nonlocal block (CGNL), the Double Attention Block (A2) and our SNL (gSNL) blocks in the different visual learning tasks. The experiment settings are discussed in Sec.4.1. Our models outperform other types of the nonlocal blocks across several standard benchmarks. Table. 5 shows the experimental results on the CIFAR10 dataset, we can see that by adding one proposed block, the Top1 rises about 0.65%, which is higher than adding other type nonlocal blocks (0.3%). As the experiments on CIFAR100 dataset shown in Table. 7, using our proposed block brings improvement about 1.8% with ResNet50. While using a more simple backbone PreResnet56, our model can still generate 1.1% improvement as shown in Table. 6. Table. 9 shows the experimental results on the fine-grained image classification task on CUB-200 datasets. Our model outperforms other non-channel-concerning blocks and generate (0.42%) im- provement. Comparing with the channel-wise concerning CGNL block, our model is only a bit lower in Top1. Fig. 4 also shows the visualized feature map which is formed by adding the upsampled feature output with the source image. We can see that the feature maps of our proposed block can cover more critical area of the birds. For example, both the left and right wings (red square) of the birds can be focused profited by the better long-range concerning of our SNL. Moreover, benefited from the flexibility of the W1, our proposed SNL can also catch a relatively large range of the discriminative parts. Table. 8 shows the experimental results on the action recognition task. The network with our proposed block can generate 1.8% improvement than the I3D model and outperforms all other nonlocal models on the UCF-101 dataset. 
Table 8: The Results on UCF101 model top1 top5 I3D 81.57% 95.40% + NL 81.37% 95.76% + NS 82.50% 95.84% + A2 82.68% 95.85% + CGNL 83.16% 96.16 % + *SNL 82.30% 95.56% + *gSNL 83.21% 96.53% Table 9: The Results on CUB model top1 top5 R-50 85.43% 96.70% + NL 85.34% 96.77% + NS 85.54% 96.56% + A2 86.02% 96.56% + CGNL 86.14% 96.34% + *SNL 85.91% 96.65% + *gSNL 85.95% 96.79% 5 CONCLUSION In this paper, we explain the nonlocal block in the graph view and propose the spectral nonlocal (SNL) block which is more robust and well-behaved. Our SNL block is a generalized version of the NL and NS block and having more liberty for the parameter learning. We also give the stable hypothesis for deeper nonlocal structure and extend the SNL to gSNL that can be applied to the deeper nonlocal structures. The experiments on multiple computer vision tasks show the high robustness and performance of our proposed nonlocal block. Feature works will focus on using the SNL block into different vision task and its roubustness for the other type of neural network such as the Generative Adversarial Networks (GAN). A ANALYTIC SOLUTION OF THE CHEBYSHEV APPROXIMATE Here we give the analytic solution for the coefficients in Chebyshev polynomials (Phillips (2003)): Theorem 2. Giving a function f(x), x = {x1, x2, ..., xN}, it can be optimally approximated by Chebyshev polynomials: f(x) ≈ ∑K−1 k=0 akTk(x), only when ak satisfies: ak = 2 N ∑N j=0 f(xj)Tk(xj). We call the ak as the analytic solution of the Chebyshev coeffcients. Based on these theorem, we can get the analytic solution of the parameter θ for Eq. (7): Lemma 2. The spectral nonlocal operator can be best approximated when the function g(λ) = ω can be best approximated by the Chebyshev polynomials, i.e. the analytic solutions of the Chebyshev coeffcients satisfy: θk = ak = 2 N N∑ j=0 g(λj)Tk(λj) = 2 N N∑ j=0 ωjTk(λj) (12) B THE PIECEWISE CHEBYSHEV POLYNOMIALS Taking Ak = A into the Chebyshev polynomials of the affinity matrix A, the Chebyshev polynomials becomes: T0(A) = I T1(A) = A T2(A) = 2AT1(A)− T0(A) = 2AA− I = 2A− I T3(A) = 2AT2(A)− T1(A) = 2A(2A− I)−A = A T4(A) = 2AT3(A)− T2(A) = 2AA− 2A + I = I = T0(A) T5(A) = 2AT4(A)− T3(A) = 2AI−A = A = T1(A) T6(A) = 2AT5(A)− T4(A) = 2 ∗ T2(A)− T1(A) = T2(A) (13) This cyclic form of Chebshev polynomials Tk(A) can be reformulated as a piecewise function: Tk(A) = { I k%4 = 0 A k%4 = 1 || k%4 = 3 2A− I k%4 = 2 (14) C EXPERIMENT OF SEMANTIC SEGMENTATION ON VOC2012 DATASET For the semantic segmentation tasks, we generate experiment on the VOC2012 dataset with the model proposed by Chen et al. (2017).We add different types of nonlocal blocks on right before the last residual block in res4 of the ResNet50. The models are trained for 50 epochs with the SGD optimize algorithm. The learning rate is set 0.007 with the weight decay 5e− 4 and momentum 0.9. Experimental results show that the model with our proposed block can the best results. D THE EXAMPLE OF THE AFFINITY MATRIX ON CUB DATASETS Experiments to verify the stable hypothesis is also generated on the CUB datasets, we add three consecutively-connected SNL blocks (and NS blocks) into the ResNet50 (right before the last residual block of res4) and train this model on the train set of the CUB dataset with the initial learning rate 0.1 which is subsequently divided by 10 at 31, 61 and 81 epochs (total 200 epochs). A weight decay 1e− 4 and momentum 0.9 are also used. Figure. 6 shows the histogram of the strength statistics of the affinity matrix A. 
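As a reference for how such k-hop affinity statistics can be gathered, a rough sketch is given below; the bin range assumes the row-normalized affinity entries lie in [0, 1], which is an illustrative simplification.

```python
import torch

def khop_affinity_histograms(a, max_hop=4, bins=10):
    """Histograms of the entries of A, A^2, ..., A^max_hop.

    `a` is a row-normalized, symmetric, non-negative affinity matrix (N x N).
    Under the stable hypothesis A^k ≈ A, the histograms should look alike.
    """
    hists = {}
    a_k = torch.eye(a.size(0))
    for k in range(1, max_hop + 1):
        a_k = a_k @ a                                  # k-hop affinity A^k
        hists[k] = torch.histc(a_k, bins=bins, min=0.0, max=1.0)
    return hists
```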
C EXPERIMENT ON SEMANTIC SEGMENTATION ON THE VOC2012 DATASET

For the semantic segmentation task, we conduct experiments on the VOC2012 dataset with the model proposed by Chen et al. (2017). We add different types of nonlocal blocks right before the last residual block in res4 of the ResNet-50. The models are trained for 50 epochs with the SGD optimizer. The learning rate is set to 0.007 with weight decay 5e−4 and momentum 0.9. Experimental results show that the model with our proposed block achieves the best results.

D THE EXAMPLE OF THE AFFINITY MATRIX ON THE CUB DATASET

Experiments to verify the stable hypothesis are also conducted on the CUB dataset. We add three consecutively-connected SNL blocks (and NS blocks) into the ResNet-50 (right before the last residual block of res4) and train this model on the training set of the CUB dataset with an initial learning rate of 0.1, subsequently divided by 10 at epochs 31, 61, and 81 (200 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Figure 6 shows the histogram of the strength statistics of the affinity matrix A. We can see that, although a different backbone and dataset are used, the distributions of the k-hop affinity matrices are consistent with the experiments on CIFAR-100.

E EXPERIMENTS ON VIDEO-BASED PERSON RE-IDENTIFICATION

Experiments are also conducted on challenging video-based person re-identification datasets, including Mars, iLIDS-VID, and PRID2011. For the backbone, we follow the strategy of Gao & Nevatia (2018), which uses pooling (RTMtp) and attention (RTMta) to fuse the spatial-temporal features. Note that the models are trained entirely on iLIDS-VID and PRID2011 rather than fine-tuning the model pre-trained on the Mars dataset. The experimental results are shown in Tables 11, 12, and 13. We can see that on these datasets our proposed block still yields consistent improvements.

Table 11: Results on the Mars dataset

model     mAP       Rank1
RTMta     77.70%    79.10%
+ NL      72.90%    80.90%
+ *SNL    74.00%    81.98%
RTMtp     75.70%    82.30%
+ NL      75.54%    83.40%
+ *SNL    76.80%    99.92%

Table 12: Results on the iLIDS-VID dataset

model     mAP       Rank1
RTMta     69.70%    58.70%
+ NL      66.30%    56.00%
+ *SNL    79.40%    70.00%
RTMtp     81.60%    74.70%
+ NL      83.00%    75.30%
+ *SNL    84.80%    76.60%

Table 13: Results on the PRID2011 dataset

model     mAP       Rank1
RTMta     86.60%    79.80%
+ NL      90.70%    85.40%
+ *SNL    91.50%    86.50%
RTMtp     90.50%    86.50%
+ NL      89.70%    85.40%
+ *SNL    92.40%    88.80%

F ADDITIONAL EXPERIMENTS ON ACTION CLASSIFICATION

Our SNL can also improve the performance of other network structures such as the Pseudo 3D Convolutional Network (P3D) (Qiu et al. (2017)), the Motion-augmented RGB Stream (MARS) (Crasto et al. (2019)), the Slow-Fast Network (Slow-Fast) (Feichtenhofer et al. (2019)), and the Video Transformer Network (VTN) (Kozlov et al. (2019)). For P3D and MARS, our SNL block is inserted right before the last residual layer of res3. For Slow-Fast, we replace its original NL block with our SNL block. For VTN, we replace its multi-head self-attention blocks (parallel-connected NL blocks) with our SNL blocks. The Slow-Fast network is trained end-to-end on the UCF-101 dataset, while the others use models pretrained on the Kinetics-400 dataset and are fine-tuned on UCF-101. From Table 14, we can see that all performances are improved when our proposed SNL block is added. Experiments on the Kinetics-400 dataset are also given in Table 15. We can see that inserting the SNL block into the Slow-Fast network yields a 2.1% improvement.
1. What is the focus of the paper, and what are the proposed contributions?
2. How does the paper reinterpret nonlocal blocks, and what is the resulting formulation?
3. What is the purpose of using Chebyshev approximation, and how does it simplify the spectral nonlocal block?
4. Can you explain the steady-state analysis and how it builds a deeper nonlocal structure?
5. How does the gSNL differ from other nonlocal structures, and what are its advantages?
6. What are your concerns regarding the experimental section, and why do you think large-scale video classification is necessary?
Review
In this paper, the authors propose a spectral nonlocal block. First, they re-interpret the nonlocal blocks in a graph view and then use a Chebyshev approximation to obtain the spectral nonlocal block, which is quite simple: it adds a ZW_1 term. Furthermore, they analyze the steady state to build up a deeper nonlocal structure. The gSNL is likewise simple, adding a (2A-I)ZW_3 term. Overall, the paper is well written. I like the idea of interpreting the nonlocal operation in the graph view. More importantly, the resulting formulation is quite concise for implementation. However, my main concern is the experiments, which should be further strengthened by performing large-scale video classification, e.g., on Kinetics-400.
ICLR
Title
Spectral Nonlocal Block for Neural Network

Abstract
The nonlocal network is designed for capturing long-range spatial-temporal dependencies in several computer vision tasks. Although it has shown excellent performance, it needs elaborate preparation for both the number and position of the building blocks. In this paper, we propose a new formulation of the nonlocal block and interpret it from the general graph signal processing perspective, where we view it as a fully-connected graph filter approximated by Chebyshev polynomials. The proposed nonlocal block is more efficient and robust, and is a generalized form of existing nonlocal blocks (e.g., the nonlocal block and the nonlocal stage). Moreover, we give the stable hypothesis and show that the steady state of a deeper nonlocal structure should satisfy it. Based on the stable hypothesis, a full-order approximation of the nonlocal block is derived for consecutive connections. Experimental results illustrate the clear-cut improvement and practical applicability of the generalized nonlocal block on both image and video classification tasks.

1 INTRODUCTION

Capturing long-range spatial-temporal dependencies is crucial for deep convolutional neural networks (CNNs) to extract discriminative features in vision tasks such as image and video classification. However, the traditional convolution operator only processes a local neighborhood at a time. This forces CNNs to go deeper with convolutional operations to enlarge the receptive fields, which leads to higher computation and memory cost. Moreover, going deeper cannot always increase the effective receptive field, due to the Gaussian distribution of the kernel weights (Luo et al. (2016)). To eliminate this limitation, some recent works focus on designing network architectures with wider and well-designed modules to capture long-range dependencies, such as Peng et al. (2017), Chen et al. (2017), and Zhao et al. (2017). Although these modules have larger receptive fields, they still need to be applied recursively to capture the dependencies between pairs at large distances.

Inspired by the classical non-local means method in image denoising, Wang et al. (2018) proposes the nonlocal neural network, which uses the nonlocal (NL) block to capture the “full-range” dependencies in a single module by exploring the correlations between each position and all other positions. In the NL block, the affinity matrix is first computed to represent the correlations between each position pair. Then weighted means of the features are calculated based on the affinity matrix to refine the feature representation. Finally, a residual connection adds the refined features back to the feature map. Due to its simplicity and effectiveness, the nonlocal block has recently been widely used in image and video classification (Wang et al. (2018); Yue et al. (2018); Tao et al. (2018); Chen et al. (2018)), image segmentation (Huang et al. (2018); Yue et al. (2018); Wang et al. (2018)), and person re-identification (Liao et al. (2018); Zhang et al. (2019)).

However, due to the complexity of the affinity matrix, the nonlocal block [1] needs much more computational effort and is sensitive to its number and position in the neural network (Tao et al. (2018)). Some works solve the first problem by simplifying the calculation of the affinity matrix, such as Huang et al. (2018), He et al. (2019), Yue et al. (2018), and Chen et al. (2018). Only a few works try to solve the second problem, which limits the robustness of the nonlocal network [2].
[1] The nonlocal block is composed of a nonlocal operator and a residual connection.
[2] The nonlocal network is composed of several nonlocal blocks.

Tao et al. (2018) proposes the nonlocal stage (NS) block, which accounts for the diffusion nature and maintains the same affinity matrix for all the nonlocal units in the NS block. Compared with the NL block, the NS block is insensitive to the number of blocks and allows a deeper nonlocal structure. However, the deeper nonlocal structure of the NS block increases the complexity and does not bring a remarkable improvement.

In this work, we focus on elaborating a robust nonlocal block which is more flexible when used in a neural network. We prove that the nonlocal operator in the nonlocal block is equivalent to a Chebyshev-approximated fully-connected graph filter with irrational constraints that limit its freedom for learning. To remove these irrational constraints, we propose the Spectral-based Nonlocal (SNL) block, which is more robust and can degrade into the NL and NS blocks under specific assumptions. We also prove that the deeper nonlocal structure satisfies the stable hypothesis with the help of steady-state analysis. Based on this hypothesis, we give the full-order approximated spectral nonlocal (gSNL) block, which performs well in deeper nonlocal structures. Finally, we add our proposed nonlocal blocks into deep networks and evaluate them on image and video classification tasks. Experiments show that networks with our proposed blocks are more robust and achieve higher accuracy than those using other types of nonlocal blocks. To summarize, our contributions are threefold:

• We propose a spectral nonlocal (SNL) block as an efficient, simple, and generic component for capturing long-range spatial-temporal dependencies with deep neural networks, which is a generalization of the classical nonlocal blocks.
• We propose the stable hypothesis, which can enable a deeper nonlocal structure without elaborate preparation for both the number and position of the building blocks. We further extend SNL into generalized SNL (gSNL), which can enable multiple nonlocal blocks to be plugged into existing computer vision architectures with stable learning dynamics.
• Both SNL and gSNL have outperformed other nonlocal blocks across both image and video classification tasks with a clear-cut improvement.

2 PRELIMINARY

Nonlocal block. The NL block consists of an NL operator with a residual connection and is expressed as:

Y = X + F(A, Z)  with  Z = X W_g,    (1)

where X ∈ R^{N×C_1} is the input feature map, F(A, Z) is the NL operator, and Z ∈ R^{N×C_s} is the transferred feature map that compresses the channels of X by a linear transformation with kernel W_g ∈ R^{C_1×C_s}. Here N is the number of positions. The affinity matrix A ∈ R^{N×N} is composed of pairwise correlations between pixels. In the NL block, the NL operator explores the “full-range” dependencies by considering the relationships between all position pairs:

F(A, Z) = A Z W  with  A = (a_{ij})_{N×N},  a_{ij} = f(X_{i,:}, X_{j,:}),    (2)

where W ∈ R^{C_s×C_1} is the weight matrix of a linear transformation and f(·) is the affinity kernel, which can be the “Dot Product”, “Traditional Gaussian”, “Embedded Gaussian”, or any other kernel with a finite Frobenius norm.
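To make Eqs. (1)–(2) concrete, the following is a minimal PyTorch sketch of an NL block with a dot-product affinity. It is an illustrative reconstruction rather than the authors' released code; the layer names, the softmax normalization of the affinity, and the default channel reduction are our own assumptions.

```python
import torch
import torch.nn as nn

class NLBlock2D(nn.Module):
    """Minimal nonlocal block (Eqs. 1-2): Y = X + A Z W with a dot-product affinity."""

    def __init__(self, c_in, c_sub=None):
        super().__init__()
        c_sub = c_sub or c_in // 2              # transferred channels C_s (assumed C_1 / 2)
        self.phi = nn.Conv2d(c_in, c_sub, 1)    # embeddings used by the affinity kernel
        self.psi = nn.Conv2d(c_in, c_sub, 1)
        self.g = nn.Conv2d(c_in, c_sub, 1)      # W_g: produces the transferred map Z
        self.w = nn.Conv2d(c_sub, c_in, 1)      # W: maps back to C_1 channels

    def forward(self, x):
        b, _, h, w = x.shape
        n = h * w                                     # number of positions N
        phi = self.phi(x).view(b, -1, n)              # (B, C_s, N)
        psi = self.psi(x).view(b, -1, n)
        z = self.g(x).view(b, -1, n)
        a = torch.bmm(phi.transpose(1, 2), psi)       # (B, N, N) dot-product affinity
        a = a.softmax(dim=-1)                         # normalize rows (one common choice)
        y = torch.bmm(z, a.transpose(1, 2))           # A Z computed in (B, C_s, N) layout
        return x + self.w(y.view(b, -1, h, w))        # residual connection (Eq. 1)
```

A block like this is dropped into a backbone simply as `x = block(x)` wherever a stage needs long-range context, e.g., `NLBlock2D(256)(torch.randn(2, 256, 14, 14))`.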
Nonlocal stage. To make the NL operator follow the diffusion nature that allows a deeper nonlocal structure (Tao et al. (2018)), the nonlocal stage (NS) operator uses the graph Laplacian L = D_A − A to replace the affinity matrix A in the NL operator:

\bar{F}(A, Z) = (A − D_A) Z W  with  D_A = diag(d_i),    (3)

where \bar{F}(A, Z) is the NS operator and d_i = \sum_j a_{ij} is the degree of node i. Moreover, when multiple blocks with the same affinity matrix A are added and the NL operator is replaced by the NS operator, these consecutively-connected blocks become the NS block. We call the nonlocal blocks inside the NS block the NS units.

3 METHOD

The nonlocal operator can be divided into two steps: calculating the affinity matrix A to represent the correlations between all position pairs, and refining the feature map by calculating weighted means based on A. In this section, a fully-connected graph filter is utilized to explain the nonlocal operator. With the Chebyshev approximation, we propose the SNL operator, which is proved to be a generalized form of the NL and NS operators and is more robust, with higher performance on computer vision tasks. Furthermore, based on the stable hypothesis that a deeper nonlocal structure tends to learn a stable affinity matrix, we extend our SNL operator into a full-order Chebyshev approximation, i.e., the gSNL.

3.1 THE PROPOSED SPECTRAL NONLOCAL OPERATOR

Nonlocal operator in the graph view. The nonlocal operator F(A, Z) is a filter that computes a weighted mean of all the positions in the feature map Z based on the affinity matrix A and then conducts a feature transformation with the kernel W. This is the same as filtering the signal Z by a graph filter Ω in the graph domain defined by the affinity matrix A (Shuman et al. (2013)). Based on this perspective (Shuman et al. (2013)), we further define the nonlocal operator as:

Theorem 1. Given an affinity matrix A ∈ R^{N×N} and the signal Z ∈ R^{N×C_s}, the nonlocal operator is the same as filtering the signal Z in the graph domain of a fully-connected weighted graph G:

F(A, Z) = Z ∗ g = U g_θ(Λ) U^T Z = U Ω U^T Z  with  L = D_A − A = U Λ U^T,    (4)

where the graph filter Ω ∈ R^{N×N} is a diagonal parameter matrix, i.e., Ω = diag(ω), ω = (ω_1, ω_2, ..., ω_N). G = (V, A) is a fully-connected graph with vertex set V and affinity matrix A. Λ = diag(λ_1, λ_2, ..., λ_N) and U = (u_1, u_2, ..., u_N) are the eigenvalues and eigenvectors of the graph Laplacian L, respectively.

This definition requires that the graph Laplacian L has non-singular eigenvalues and eigenvectors, so the affinity matrix A should be a symmetric, non-negative, row-normalized matrix. To meet this requirement, the affinity matrix A can be obtained by the following steps. First, the affinity kernel is used to calculate the matrix A (we use the dot product with embedded weight matrices W_φ ∈ R^{C_1×C_s} and W_ϕ ∈ R^{C_1×C_s} as the affinity kernel, i.e., A = (X W_φ)(X W_ϕ)^T). Then we make the matrix A symmetric: Ā = (A^T + A)/2. Finally, we row-normalize Ā so that d_i = 1, giving Ǎ = D_Ā^{-1} Ā. For simplicity, in the following sections the symmetric, non-negative, row-normalized matrix Ǎ is denoted as A.
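As a concrete illustration of the symmetrization and row-normalization steps above, here is a small PyTorch sketch, our own reconstruction rather than the authors' code; the clamp used to keep the dot-product affinities non-negative and the epsilon guard are our assumptions.

```python
import torch

def build_affinity(phi, psi, eps=1e-6):
    """phi, psi: (B, N, C_s) embedded features X W_phi and X W_psi.
    Returns the affinity built by the recipe above: non-negative entries,
    symmetrized as (A^T + A) / 2, then row-normalized so that d_i = 1."""
    a = torch.bmm(phi, psi.transpose(1, 2))     # dot-product affinity kernel (B, N, N)
    a = a.clamp(min=0)                          # keep entries non-negative (our assumption)
    a = 0.5 * (a + a.transpose(1, 2))           # symmetrize
    d = a.sum(dim=-1, keepdim=True)             # node degrees d_i
    return a / (d + eps)                        # row-normalize
```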
The proposed spectral nonlocal operator. The graph filter Ω in Eq. (4) contains N parameters. To simplify it, we use Chebyshev polynomials, which reduce the N parameters to K (K ≪ N). For simplicity, we first assume that the input Z and the output F(A, Z) have only one channel. Following a similar method to Defferrard et al. (2016), the K-th order Chebyshev polynomials are used to approximate the graph filter function g_θ(Λ):

F(A, Z) = \sum_{k=0}^{K-1} θ_k T_k(L') Z  with  L' = 2L/λ_max − I_n,
s.t.  T_0(L') = I_n,  T_1(L') = L',  T_k(L') = 2 L' T_{k-1}(L') − T_{k-2}(L').    (5)

Since L is a random-walk Laplacian, the maximum eigenvalue λ_max satisfies λ_max = 2, which makes L' = A (Shuman et al. (2013)). Then Eq. (5) becomes:

F(A, Z) = \sum_{k=0}^{K-1} θ_k T_k(A) Z = θ_0 Z + θ_1 A Z + \sum_{k=2}^{K-1} θ_k T_k(A) Z.    (6)

Truncating at first order (k = 1), the Chebyshev approximation of Eq. (6) becomes:

F(A, Z) = θ_0 Z + θ_1 A Z,    (7)

where θ_0 and θ_1 are the coefficients of the first and second terms, which are learned with SGD. Then, extending Eq. (7) to the multi-channel case, we obtain the formulation of our SNL operator:

F_s(A, Z) = Z W_1 + A Z W_2,    (8)

where F_s(A, Z) is the SNL operator and W_1, W_2 ∈ R^{C_s×C_1}. Finally, a residual connection is added to the SNL operator to form the SNL block:

Y = X + F_s(A, Z) = X + Z W_1 + A Z W_2.    (9)

Relation with other nonlocal operators. As shown in Fig. 1, our SNL operator can degrade into the NL operator by setting W_1 = 0, i.e., θ_0 = 0. However, the analytic solution θ_0 = (2/N) \sum_{j=0}^{N} ω_j controls the total filtering intensity, which cannot be guaranteed to be 0. This setting limits the search space when training the network and reduces the robustness of the NL block: the NL operator cannot magnify features over a large range and damps some discriminative features such as the beak of the waterfowl. Our SNL operator can also degrade into the NS operator by setting W_1 = −W_2, i.e., θ_1 + θ_0 = 0. However, the analytic solution of this constraint is θ_1 + θ_0 = (2/N) \sum_{j=0}^{N} ω_j(λ_j + 1) = 0. When it is forced to zero, the filter strength for high-frequency signals (with large λ), such as a small part or a twig, is suppressed. Thus, the NS operator still cannot magnify discriminative parts such as the beak of the waterfowl, as shown in Fig. 1. Compared with NL and NS, our SNL does not have these irrational constraints and gives the two parameters a free learning space: θ_0 controls how strongly the discriminative features are preserved, while θ_1 pays more attention to the low-frequency signal to diminish noise.
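These two degenerations can be checked numerically. The short, self-contained Python snippet below (toy random tensors with arbitrary sizes, our own illustration) verifies that Eq. (8) reduces to the NL operator of Eq. (2) when W_1 = 0 and to the NS operator of Eq. (3) when W_1 = −W_2 (with d_i = 1 after row normalization):

```python
import torch

torch.manual_seed(0)
N, Cs, C1 = 6, 4, 8
Z = torch.randn(N, Cs)
W1, W2 = torch.randn(Cs, C1), torch.randn(Cs, C1)

A = torch.rand(N, N)                          # toy affinity, processed as in Sec. 3.1:
A = 0.5 * (A + A.T)                           # symmetrize,
A = A / A.sum(dim=1, keepdim=True)            # then row-normalize (so d_i = 1)

def snl(Z, A, W1, W2):
    return Z @ W1 + A @ Z @ W2                # SNL operator, Eq. (8)

# W_1 = 0 recovers the NL operator A Z W (Eq. 2).
assert torch.allclose(snl(Z, A, torch.zeros_like(W1), W2), A @ Z @ W2)

# W_1 = -W_2 recovers the NS operator (A - D_A) Z W = (A - I) Z W (Eq. 3 with d_i = 1).
assert torch.allclose(snl(Z, A, -W2, W2), (A - torch.eye(N)) @ Z @ W2, atol=1e-5)
print("SNL degrades to NL and NS under the stated parameter constraints.")
```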
3.2 THE PROPOSED GENERALIZED SPECTRAL NONLOCAL OPERATOR

To fully exploit the “full-range” dependencies, the nonlocal block should be able to be consecutively stacked into the network to form a deeper nonlocal structure. However, some types of nonlocal blocks, such as the NL and CGNL blocks, cannot achieve this (Tao et al. (2018)). To show the robustness of our SNL block when used in a deeper nonlocal structure, we first study the steady state of the deeper nonlocal structure when our SNL block is consecutively added. We also prove the stable hypothesis, namely that the deeper nonlocal structure tends to learn a stable affinity. Based on this hypothesis, we extend our SNL block into a full-order Chebyshev approximation, i.e., the gSNL block, which is better suited to deeper nonlocal structures.

The stable hypothesis. Steady-state analysis can be used to study the stable dynamics of the nonlocal block. Here we give the steady-state analysis of our SNL block when it is consecutively added into the network structure and obtain the stable hypothesis:

Lemma 1 (The Stable Hypothesis). When more than two consecutively-connected SNL blocks with the same affinity matrix A are added into the network structure, these SNL blocks are stable when the affinity matrix A satisfies A^k = A.

Proof. The stability holds when the weight parameters in W_1, W_2, and W are small enough such that the CFL condition is satisfied (Tao et al. (2018)), so we ignore them for simplicity. The discrete nonlinear operator of our SNL has a formulation similar to the NS operator: L_h Z^N := −L Z^N, where h is the discretization parameter and Z^N is the input of the N-th block in the deeper nonlocal structure, with Z^0 = X. The stability assumption demands that Z^{N+1} = Z^N, so the steady-state equation of the last SNL block can be written as:

Z^{N+1} − Z^N = L_h Z^N = −L Z^N = 0.

The deeper nonlocal structure has more than one SNL block, so Z^{N−1} and L_h Z^{N−1} can be used to express Z^N:

−L Z^N = −(I − A) Z^N = −(I − A)(Z^{N−1} + L_h Z^{N−1}) = −(I − A) Z^{N−1} + (I − A)(I − A) Z^{N−1} = 0.

Finally, the steady-state equation becomes:

(I − A) Z^{N−1} = (I − A)^2 Z^{N−1}  ⟺  A^2 = A.

This equation naturally extends to the k-hop affinity matrix A^k, i.e., A^k = A.

To verify the stable hypothesis, we add five consecutively-connected SNL blocks (and NS blocks) into PreResNet-56 (He et al. (2016)) and train this model on the training set of the CIFAR-100 dataset with an initial learning rate of 0.1, subsequently divided by 10 at epochs 150 and 250 (300 epochs in total). A weight decay of 1e−4 and momentum of 0.9 are also used. Then we test the trained model on the test set and output the affinity matrix of each image. Figure 2 shows statistics reflecting the strength of the affinity matrix and of the 2-hop, 3-hop, and 4-hop affinity matrices: A, A^2, A^3, A^4. We can see that the number of elements in each histogram bin is nearly the same. This means that A, A^2, A^3, A^4 have similar element distributions, which also empirically verifies the steady-state equation A^k = A.

Full-order spectral nonlocal operator. With the stable hypothesis, the Chebyshev polynomials can be simplified into a piecewise function (details in Appendix B). Substituting this piecewise function into Eq. (6), we can get the full-order approximation of the SNL operator:

F*_s(A, Z) = \sum_k θ_k T_k(A) Z = Z θ̃_1 + A Z θ̃_2 + (2A − I) Z θ̃_3,    (10)

where θ̃_1 = \sum_{k mod 4 = 0} θ_k, θ̃_2 = \sum_{k mod 4 ∈ {1, 3}} θ_k, and θ̃_3 = \sum_{k mod 4 = 2} θ_k, whose upper bounds are less than 1. Then, extending it to multi-channel input and output with the residual connection, we obtain our gSNL block:

Y = X + F*_s(A, Z) = X + Z W_1 + A Z W_2 + (2A − I) Z W_3.    (11)

The gSNL block performs well when the stable affinity hypothesis is satisfied, i.e., when more than two nonlocal blocks with the same affinity matrix are added, as shown in Table 4.

3.3 IMPLEMENTATION DETAILS

The implementation of the gSNL block is shown in Fig. 3. The input feature map X ∈ R^{W×H×C_1} is first fed into three 1×1 convolutions with weight kernels W_φ ∈ R^{C_1×C_s}, W_ϕ ∈ R^{C_1×C_s}, and W_g ∈ R^{C_1×C_s} to reduce the number of channels. One of the outputs, Z ∈ R^{W×H×C_s}, is used as the transferred feature map to reduce the computational complexity, while the other two outputs, Φ ∈ R^{W×H×C_s} and Ψ ∈ R^{W×H×C_s}, are used to compute the affinity matrix A. The reduced channel dimension C_s is usually half the input channel dimension C_1. The affinity matrix is calculated by the affinity kernel function f(·) and then processed with the operations in Sec. 3.1 to make it non-negative, symmetric, and row-normalized. Finally, with the affinity matrix A and the transferred feature map Z, the output of the nonlocal block is obtained by Eq. (11). Specifically, the three weight matrices W_1, W_2, W_3 ∈ R^{C_s×C_1} are implemented as three 1×1 convolutions.
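The following is a minimal PyTorch sketch of the gSNL block described above (Eq. (11) and Fig. 3). It is our own hedged reconstruction of the described design, not the authors' implementation; the layer names, the clamp used for non-negativity, and the epsilon guard are assumptions.

```python
import torch
import torch.nn as nn

class GSNLBlock2D(nn.Module):
    """Generalized spectral nonlocal block (Eq. 11):
    Y = X + Z W_1 + A Z W_2 + (2A - I) Z W_3."""

    def __init__(self, c_in, c_sub=None):
        super().__init__()
        c_sub = c_sub or c_in // 2               # C_s is typically C_1 / 2 (Sec. 3.3)
        self.phi = nn.Conv2d(c_in, c_sub, 1)     # W_phi
        self.psi = nn.Conv2d(c_in, c_sub, 1)     # W_psi
        self.g = nn.Conv2d(c_in, c_sub, 1)       # W_g -> transferred feature map Z
        self.w1 = nn.Conv2d(c_sub, c_in, 1)      # W_1
        self.w2 = nn.Conv2d(c_sub, c_in, 1)      # W_2
        self.w3 = nn.Conv2d(c_sub, c_in, 1)      # W_3

    def forward(self, x):
        b, _, h, w = x.shape
        n = h * w
        phi = self.phi(x).view(b, -1, n).transpose(1, 2)      # (B, N, C_s)
        psi = self.psi(x).view(b, -1, n).transpose(1, 2)
        a = torch.bmm(phi, psi.transpose(1, 2)).clamp(min=0)  # non-negative dot product
        a = 0.5 * (a + a.transpose(1, 2))                     # symmetrize
        a = a / (a.sum(dim=-1, keepdim=True) + 1e-6)          # row-normalize
        z = self.g(x).view(b, -1, n)                          # (B, C_s, N)
        az = torch.bmm(z, a.transpose(1, 2))                  # A Z
        t2 = 2.0 * az - z                                     # (2A - I) Z
        as_map = lambda t: t.view(b, -1, h, w)
        return (x + self.w1(as_map(z)) + self.w2(as_map(az))
                + self.w3(as_map(t2)))                        # Eq. (11)
```

Setting W_3 to zero recovers the SNL block of Eq. (9); such a block would typically be inserted right before the last residual block of res4, as in the settings of Sec. 4.1.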
4 EXPERIMENT

4.1 SETTING

Datasets. Our proposed SNL and gSNL blocks have been evaluated across several computer vision tasks, including image classification and video-based action recognition. For image classification, both the CIFAR-10 and CIFAR-100 datasets (Krizhevsky & Hinton (2009)) are tested. The CIFAR-10 dataset contains 60,000 images of 10 classes, and the CIFAR-100 dataset contains 60,000 images of 100 classes. For these two datasets, we use 50,000 images as the training set and 10,000 images as the test set. We also conduct experiments on fine-grained classification on the Birds-200-2011 (CUB-200) dataset (Welinder et al. (2010)), which contains 11,788 images of 200 bird categories. For action recognition, the experiments are conducted on the UCF-101 dataset (Soomro et al. (2012)), which contains 101 different actions.

Backbones. For image classification, ResNet-50 and the PreResNet variants (both PreResNet-20 and PreResNet-56) are used as the backbone networks. For the video classification task, we follow the I3D structure (Hara et al. (2018)), which uses k × k × k kernels to replace the convolution operators in the residual blocks.

Setting for the network. In the main experiments, we set C_s = C_1/2. Without loss of generality, we use the “Dot Product” as the affinity kernel in the experiments. We add one SNL (or gSNL) block into these backbone networks to construct the SNL (or gSNL) network. For ResNet and I3D (Hara et al. (2018)), following Wang et al. (2018), we add the SNL block right before the last residual block of res4. For the PreResNet series, we add the SNL block right after the second residual block in res1. For the other nonlocal-based blocks, including NL (Wang et al. (2018)), NS (Tao et al. (2018)), the Compact Generalized Nonlocal block (CGNL) (Yue et al. (2018)), and the Double Attention Block (A2), the settings are all the same as ours. The differences between these blocks are shown in Table 1, in which “Approximated Condition” shows the strategy for the Chebyshev approximation and “Channel-wise” reflects whether channel relations are considered.

Setting for the training. For image classification on the CIFAR-10 and CIFAR-100 datasets, we train the models end-to-end without using a pretrained model. An initial learning rate of 0.1 is used for these two datasets with weight decay 1e−4 and momentum 0.9. The learning rate is divided by 10 at epochs 150 and 250. The models are trained for 300 epochs in total. For fine-grained classification on the CUB-200 dataset, we use models pretrained on ImageNet (Russakovsky et al. (2015)) to initialize the weights. We train the models for 200 epochs in total with an initial learning rate of 0.1, which is subsequently divided by 10 at epochs 31, 61, and 81. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 settings. For video classification on the UCF-101 dataset, the weights are initialized by the I3D model pretrained on the Kinetics dataset (Kay et al. (2017)). We train the models with an initial learning rate of 0.1, which is subsequently divided by 10 every 40 epochs. Training stops at 100 epochs. The weight decay and momentum are the same as in the CIFAR-10 and CIFAR-100 settings.
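For readers who want to reproduce the optimization schedule, the following is a minimal PyTorch sketch of the CIFAR setting above; the model is a placeholder and the data-loading and loss computation are omitted, so treat it as a skeleton under those assumptions rather than the exact training script.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder model: any of the SNL/gSNL networks described above would go here.
model = torch.nn.Linear(32, 10)

# CIFAR-10 / CIFAR-100 schedule from Sec. 4.1: SGD, lr 0.1, weight decay 1e-4,
# momentum 0.9, learning rate divided by 10 at epochs 150 and 250, 300 epochs total.
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = MultiStepLR(optimizer, milestones=[150, 250], gamma=0.1)

for epoch in range(300):
    # ... forward pass, loss, and backward pass over the CIFAR training set go here ...
    optimizer.step()       # placeholder step so the skeleton runs as-is
    scheduler.step()
```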
4.2 ABLATION EXPERIMENTS

The number of channels in the transferred feature space. The nonlocal-based block first reduces the channels of the original feature map, C_1, to the transferred feature space, C_s, by 1×1 convolutions to lower the computational complexity. When C_s is too large, the feature map contains redundant information, which introduces noise when calculating the affinity matrix A. However, if C_s is too small, it is hard to reconstruct the output feature map due to inadequate features. To test the robustness with respect to C_s, we build three types of models with different numbers of transferred channels: “Sub 1” (C_s = C_1), “Sub 2” (C_s = C_1/2), and “Sub 4” (C_s = C_1/4), as shown in Table 2. The other parameters of the models and the training steps are the same as the setting in Sec. 4.1. Table 2 shows the experimental results of the three types of models with different nonlocal blocks. Our SNL and gSNL blocks outperform the other models, owing to their flexibility in learning. Moreover, from Table 2, we can see that the performance of CGNL drops steeply when the number of transferred channels increases. This is because the CGNL block models the relationships between channels; when the number of sub-channels increases, the relationships between redundant channels seriously interfere with its effect. Overall, our proposed nonlocal block is the most robust to a large number of transferred channels (our model rises 1.1% in Top-1, while the best of the others rises only 0.4% compared to the baseline).

The stage for adding the nonlocal blocks. The nonlocal-based blocks can be added into different stages of the PreResNet (or ResNet) to form the nonlocal network. In Tao et al. (2018), the nonlocal-based blocks are added into the early stage of the PreResNet to capture long-range correlations. Here we evaluate the performance of adding different types of nonlocal blocks into the three stages (the first, second, and third stage of the PreResNet) and train the models on the CIFAR-100 dataset with the same setting as in Sec. 4.1. The experimental results are shown in Table 3. We can see that the performance of the NL block is lower than the backbone when it is added into the early stage. However, our proposed SNL block achieves a 0.81% improvement over the backbone when added into each of the three stages, which is much higher than the other types of nonlocal blocks (only 0.42% in the best case).

To intuitively show the stability and robustness of our SNL, we give a spectrum analysis of the estimated weight matrices (Tao et al. (2018)). We extract the self-attention weight matrices W_g, W of the NL block and the NS block, and W_g, W_2 of our proposed SNL block. The dimensions satisfy W_g ∈ R^{C_1×C_s}, W ∈ R^{C_s×C_1}, and W_2 ∈ R^{C_s×C_1}. To make all the eigenvalues real, we let W̃ = ((W_g W) + (W_g W)^T)/2, and we do the same for W_2. Figure 5 shows the top thirty-two eigenvalues of W̃ for the models in Table 3. We can see that, for the NL block added into any of the three stages, the density of negative eigenvalues is higher than that of positive eigenvalues. This makes the NL operator F(A, Z) in Eq. (1) negative, so the output feature map is smaller than the input feature map, i.e., Y < X (more detail on this phenomenon can be found in Tao et al. (2018)). The NS block can avoid this “damping effect” to some extent by accounting for the diffusion nature. However, when it is added into the early stage, only six eigenvalues of the nonlocal stage are non-zero, so the nonlocal stage cannot effectively magnify the discriminative features. Compared with these two models, our proposed SNL block has more positive eigenvalues, which helps enhance the discriminative features while also avoiding the “damping effect”.
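A minimal sketch of this eigenvalue check follows; the tensors here are random stand-ins for trained weights (our own illustration), and only the symmetrization and eigenvalue extraction mirror the procedure described above.

```python
import torch

torch.manual_seed(0)
C1, Cs = 64, 32
Wg = torch.randn(C1, Cs) * 0.05        # stand-ins for the learned W_g and W_2
W2 = torch.randn(Cs, C1) * 0.05

M = Wg @ W2                             # composed self-attention weight (C_1 x C_1)
W_tilde = 0.5 * (M + M.T)               # symmetrize so the spectrum is real
eigvals = torch.linalg.eigvalsh(W_tilde)         # eigenvalues in ascending order
top = eigvals.flip(0)[:32]                       # top thirty-two eigenvalues
print("positive:", (top > 0).sum().item(),
      "negative:", (top < 0).sum().item())
```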
1. What is the key contribution of the paper, particularly the proposed spectral non-local block?
2. How does the reviewer assess the effectiveness of the method based on the presented experiments?
3. What are some limitations or potential improvements regarding the choice of datasets and applications?
4. Are there any connections or similarities between the proposed approach and other related works, such as Self-Attention GAN?
Review
SUMMARY:
- Propose spectral non-local block
- Improvement on image and video classification tasks

Apologies, I am not at all familiar with the theory and math behind this proposal, so I do not think I am in a position to review this paper. The experiments seem convincing enough that the authors made enough effort to show their method might work.
- The feature maps shown to demonstrate the robustness of the method are a good point.
- CIFAR-10 and CIFAR-100 are certainly a good start, but might not be the best datasets to test image classification on, compared with ImageNet and others.
- Classification itself is a good start; it might be interesting to think about using this in a generative model such as a GAN. The content reminds me of Self-Attention GAN, which uses a similar non-local block (self-attention).